A Deep Learning Beginner: Nvidia's End to End Learning on Steering for Self-Driving Cars


KANPilotNet: Advanced PilotNet with KAN

An implementation of KAN (Kolmogorov–Arnold Networks) layers in PilotNet, to test their effectiveness for autonomous driving models. We used tfkan, a TensorFlow-based KAN library.

We trained the [original model from PilotNet](./src/nets/pilotNet_original.py), [CNN+DenseKAN](./src/nets/pilotNet_KAN1.py), and [ConvKAN+DenseKAN](./src/nets/pilotNet_KAN2.py) on a small dataset with a batch size of 32.

                              Original     CNN+DenseKAN   ConvKAN+DenseKAN
    Total Parameters          1,595,513    12,753,559     414,691
    Model Size (.ckpt)        18.2 MB      145 MB         4.61 MB
    Training Time (per epoch) 1m 10s       2m 30s         22m
    Inference Time            0.01 s       0.02 s         0.09 s

Although the CNN+DenseKAN model has fewer layers, its parameter count grew substantially, because every input–output edge in a KAN layer carries a set of spline coefficients rather than a single weight. In the ConvKAN+DenseKAN model we reduced not only the number of layers but also other hyperparameters, achieving a smaller model size than the original. However, regardless of parameter count, KAN showed clear limitations in both training time and inference time.
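To see why DenseKAN inflates the parameter count, compare a plain dense layer with a KAN layer: each edge in a KAN holds a set of B-spline coefficients instead of one weight. A back-of-the-envelope sketch (the layer sizes, grid size, and spline order below are illustrative assumptions, not the exact values used in this repo):

```python
def dense_params(n_in, n_out):
    # classic fully connected layer: one weight per edge plus one bias per unit
    return n_in * n_out + n_out

def dense_kan_params(n_in, n_out, grid_size=5, spline_order=3):
    # KAN layer: each edge holds grid_size + spline_order spline coefficients
    # plus a base weight, so the count scales with the number of basis
    # functions per edge (assumed formula, for illustration only)
    coeffs_per_edge = grid_size + spline_order + 1
    return n_in * n_out * coeffs_per_edge

# a PilotNet-sized fully connected layer, 1152 -> 1164
print(dense_params(1152, 1164))      # 1342092
print(dense_kan_params(1152, 1164))  # 12068352, roughly 9x more
```

This multiplicative blow-up per edge is why the ConvKAN+DenseKAN variant had to shrink the other hyperparameters to get back under the original model's size.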



Replacing the traditional convolution layers with KAN convolution layers showed a meaningful improvement in loss.

How to Use

The instructions were tested on Ubuntu 18.04 with Python 3.8 and TensorFlow 2.10.0 (CUDA 11.2 and cuDNN 8.1).

Installation

  • Clone the KANPilotNet repository:

    $ git clone https://github.com/Lchaerin/KANPilotNet.git
  • Create a conda environment

    $ conda create -n [env_name] python=3.8
    $ conda activate [env_name]

Dataset

Option 1 (Small)

 If you want to run the demo on the dataset or try some training, download driving_dataset.zip and extract it into the dataset folder ./data/datasets/.

$ cd $ROOT/data/datasets/
$ # download driving_dataset.zip manually from Google Drive (wget cannot fetch it directly):
$ # https://drive.google.com/file/d/0B-KJCaaF7elleG1RbzVPZWV4Tlk/view?usp=sharing
$ unzip driving_dataset.zip -d .

 This driving_dataset.zip consists of images of the road ahead (*.jpg) and the recorded steering wheel angles (%.6f); data.txt should be in the following format:

    ...
98.jpg 2.120000
99.jpg 2.120000
100.jpg 2.120000
101.jpg 2.120000
    ...
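Since each line of data.txt is just a filename and a float, loading the labels is a one-pass parse. A minimal sketch (parse_labels is our name for a hypothetical helper, not a function in this repo):

```python
def parse_labels(lines):
    """Parse data.txt lines into (filename, steering_angle) pairs."""
    samples = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        name, angle = line.split(maxsplit=1)
        samples.append((name, float(angle)))
    return samples

# usage: with open("data/datasets/driving_dataset/data.txt") as f:
#            samples = parse_labels(f)
print(parse_labels(["98.jpg 2.120000", "99.jpg 2.120000"]))
```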

 PS: The official download link for driving_dataset.zip is on Google Drive; a backup is also shared on Baidu Net Disk: download link (extract code: gprm).

Option 2 (Big)

※ This dataset is not fully checked. Option 1 is recommended.

Download a chunk (or chunks) from the comma2k19 dataset.

Then run export_frames_with_angles.py:

$ python export_frames_with_angles.py ~/[root]/KANPilotNet/Chunk_1 ~/[root]/KANPilotNet/data/datasets/driving_dataset --width 455 --height 256 --jpeg-quality 70
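comma2k19 logs video frames and steering angles on separate timelines, so an export script like this presumably matches each frame to the nearest recorded angle by timestamp. A self-contained sketch of that matching step (this is an assumption about what export_frames_with_angles.py does internally, not its actual code):

```python
import bisect

def nearest_angle(frame_ts, angle_ts, angles):
    """Return the angle whose timestamp is closest to frame_ts.

    angle_ts must be sorted in ascending order.
    """
    i = bisect.bisect_left(angle_ts, frame_ts)
    if i == 0:
        return angles[0]
    if i == len(angle_ts):
        return angles[-1]
    # pick whichever neighbor is closer in time
    before, after = angle_ts[i - 1], angle_ts[i]
    return angles[i] if after - frame_ts < frame_ts - before else angles[i - 1]

angle_ts = [0.0, 0.1, 0.2, 0.3]
angles = [1.5, 2.0, 2.5, 3.0]
print(nearest_angle(0.19, angle_ts, angles))  # 2.5 (timestamp 0.2 is closest)
```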

Demo

 You can run this demo either online, on a live webcam feed in an actual driving scenario, or offline, on input images of the road ahead.

  • Either rename the file you want to use among pilotNet_original.py (the original [PilotNet](https://github.com/AutoDeep/PilotNet)), pilotNet_KAN1.py (CNN+DenseKAN), and pilotNet_KAN2.py (ConvKAN+DenseKAN), located in ./src/nets, to pilotNet.py;

    or edit the import statements in run_capture.py, run_dataset.py, and train.py in ./src from

    from nets.pilotNet import PilotNet

    to

    from nets.[desired model architecture] import PilotNet
  • Run the model on the dataset.

    $ ./scripts/demo.sh
  • Run the model on a live webcam feed

    $ ./scripts/demo.sh -online
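An alternative to renaming files is to pick the model module at runtime with importlib. A sketch of that pattern (the mapping and helper below are our suggestion, not something the repo's scripts provide):

```python
import importlib

# short names for the three architectures in ./src/nets
MODEL_MODULES = {
    "original": "nets.pilotNet_original",  # original PilotNet
    "kan1": "nets.pilotNet_KAN1",          # CNN + DenseKAN
    "kan2": "nets.pilotNet_KAN2",          # ConvKAN + DenseKAN
}

def load_pilotnet(model_name):
    """Import the chosen module and return its PilotNet class."""
    module = importlib.import_module(MODEL_MODULES[model_name])
    return module.PilotNet

# e.g. in train.py:  PilotNet = load_pilotnet("kan2")
```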

Training/Validation

  • After downloading the dataset, you can train your own model as follows:
    $ ./scripts/train.sh
    • Training logs and model checkpoints will be stored in ./logs and ./logs/checkpoint respectively.
    • Use -dataset_dir to specify another available dataset.
    • Use -log_dir to set another log directory; use -f to overwrite existing log files, which fixes WARNING:tensorflow:Found more than one metagraph event per run. Overwriting the metagraph with the newest event.
    • Use -num_epochs and -batch_size to control the number of epochs and the batch size.
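-num_epochs and -batch_size together fix how many gradient steps a run takes: roughly num_epochs × ceil(dataset_size / batch_size). A quick sketch of a shuffled mini-batch loop, independent of the repo's actual input pipeline:

```python
import math
import random

def batches(samples, batch_size, seed=0):
    """Yield shuffled mini-batches; call once per epoch with a fresh seed."""
    order = list(range(len(samples)))
    random.Random(seed).shuffle(order)
    for start in range(0, len(order), batch_size):
        yield [samples[i] for i in order[start:start + batch_size]]

samples = [("%d.jpg" % i, 0.1 * i) for i in range(100)]
print(math.ceil(len(samples) / 32))               # 4 steps per epoch
print(sum(len(b) for b in batches(samples, 32)))  # 100, every sample used once
```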
