Implementation of KAN (Kolmogorov–Arnold Networks) in PilotNet to verify the effectiveness of KANs for autonomous driving models. We used tfkan, a TensorFlow-based KAN library.

We trained the [original model from PilotNet](./src/nets/pilotNet_original.py), [CNN+DenseKAN](./src/nets/pilotNet_KAN1.py), and [ConvKAN+DenseKAN](./src/nets/pilotNet_KAN2.py) on a small dataset with a batch size of 32.

| | Original | CNN+DenseKAN | ConvKAN+DenseKAN |
|---|---|---|---|
| Total Parameters | 1,595,513 | 12,753,559 | 414,691 |
| Model Size (.ckpt) | 18.2 MB | 145 MB | 4.61 MB |
| Training Time (per epoch) | 1m 10s | 2m 30s | 22m |
| Inference Time | 0.01s | 0.02s | 0.09s |
Although the CNN+DenseKAN model has fewer layers than the original, its parameter count grew roughly eightfold. For the ConvKAN+DenseKAN model, we reduced not only the number of layers but also other hyperparameters, achieving a smaller model size than the original. However, regardless of parameter count, KAN showed clear limitations in both training time and inference time.
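For reference, here is a minimal sketch of how tfkan layers can stand in for PilotNet's convolutional and dense layers. It assumes tfkan exposes `DenseKAN` and `Conv2DKAN` as documented in its repository; the layer widths and input shape below are illustrative, not the exact hyperparameters of the models compared above.

```python
import tensorflow as tf
from tfkan.layers import DenseKAN, Conv2DKAN  # assumes tfkan is installed

def build_pilotnet_kan(input_shape=(66, 200, 3)):
    """Illustrative ConvKAN+DenseKAN variant; see ./src/nets for the real models."""
    inputs = tf.keras.Input(shape=input_shape)
    x = Conv2DKAN(24, kernel_size=5, strides=2)(inputs)  # KAN-based convolution
    x = Conv2DKAN(36, kernel_size=5, strides=2)(x)
    x = tf.keras.layers.Flatten()(x)
    x = DenseKAN(100)(x)      # learnable spline activations replace Dense+ReLU
    x = DenseKAN(10)(x)
    outputs = DenseKAN(1)(x)  # regress a single steering angle
    return tf.keras.Model(inputs, outputs)
```

Swapping only the dense head for `DenseKAN` corresponds to the CNN+DenseKAN variant; replacing the convolutions as well gives ConvKAN+DenseKAN.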
The instructions were tested on Ubuntu 18.04 with Python 3.8 and TensorFlow 2.10.0 (CUDA 11.2 and cuDNN 8.1).
- Clone the PilotNet repository:

  ```
  $ git clone https://github.com/Lchaerin/KANPilotNet.git
  ```
- Create a conda environment:

  ```
  $ conda create -n [env_name] python=3.8
  $ conda activate [env_name]
  ```
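- Install the dependencies. No requirements file is referenced here, so the packages below are an assumed minimal set; if tfkan is not available on PyPI, install it following the instructions in the tfkan repository:

  ```
  $ pip install tensorflow==2.10.0 opencv-python numpy
  $ pip install tfkan
  ```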
Option 1: If you want to run the demo on the dataset or try training, download driving_dataset.zip and extract it into the dataset folder `./data/datasets/`:

```
$ cd $ROOT/data/datasets/
$ wget https://drive.google.com/file/d/0B-KJCaaF7elleG1RbzVPZWV4Tlk/view?usp=sharing
$ unzip driving_dataset.zip -d .
```

driving_dataset.zip consists of images of the road ahead (`*.jpg`) and the recorded steering wheel angles; `data.txt` pairs each image with its angle (`%.6f`) in the following format:
```
...
98.jpg 2.120000
99.jpg 2.120000
100.jpg 2.120000
101.jpg 2.120000
...
```

PS: The official download link for driving_dataset.zip is on Google Drive; a backup link on Baidu Netdisk is also shared: download link (extract code: gprm).
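For clarity, here is a minimal sketch of how this `data.txt` format can be parsed into (image path, steering angle) pairs; the actual loader in `./src` may differ:

```python
import os

def load_driving_dataset(dataset_dir):
    """Parse data.txt lines of the form '<name>.jpg <angle>' into (path, angle) pairs."""
    samples = []
    with open(os.path.join(dataset_dir, "data.txt")) as f:
        for line in f:
            if not line.strip():
                continue
            name, angle = line.split()  # e.g. "98.jpg 2.120000"
            samples.append((os.path.join(dataset_dir, name), float(angle)))
    return samples
```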
Option 2 (※ this dataset is not fully checked; Option 1 is recommended): Download a chunk (or chunks) from the comma2k19 dataset, then run export_frames_with_angles.py:

```
$ python export_frames_with_angles.py ~/[root]/KANPilotNet/Chunk_1 ~/[root]/KANPilotNet/data/datasets/driving_dataset --width 455 --height 256 --jpeg-quality 70
```
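At its core, this export step resizes each frame, saves it as a JPEG at the requested quality, and appends the matching steering angle to `data.txt`. The sketch below illustrates that logic; the `frames_and_angles` iterable is a hypothetical stand-in for the script's actual comma2k19 log parsing:

```python
import os
import cv2  # opencv-python

def export_frames(frames_and_angles, out_dir, width=455, height=256, jpeg_quality=70):
    """frames_and_angles: iterable of (frame ndarray, steering angle) pairs."""
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, "data.txt"), "w") as index:
        for i, (frame, angle) in enumerate(frames_and_angles):
            name = f"{i}.jpg"
            resized = cv2.resize(frame, (width, height))
            cv2.imwrite(os.path.join(out_dir, name), resized,
                        [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
            index.write(f"{name} {angle:.6f}\n")  # same format as Option 1
```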
You can run the demo directly on a live webcam feed in an actual driving scenario (online), or offline on recorded input images of the road ahead.

- Select a model architecture. Either rename the file you want to use among `pilotNet_original.py` (the original [PilotNet](https://github.com/AutoDeep/PilotNet)), `pilotNet_KAN1.py` (CNN+DenseKAN), and `pilotNet_KAN2.py` (ConvKAN+DenseKAN) in `./src/nets` to `pilotNet.py`, or edit the import statements in `run_capture.py`, `run_dataset.py`, and `train.py` in `./src` from

  ```
  from nets.pilotNet import PilotNet
  ```

  to

  ```
  from nets.[desired model architecture] import PilotNet
  ```
- Run the model on the dataset:

  ```
  $ ./scripts/demo.sh
  ```
- Run the model on a live webcam feed:

  ```
  $ ./scripts/demo.sh -online
  ```
- After downloading the dataset, you can train your own model parameters as follows:

  ```
  $ ./scripts/train.sh
  ```
- Running `./scripts/train.sh` trains your model on the downloaded dataset following the tips above. Training logs and model checkpoints are stored in `./logs` and `./logs/checkpoint`, respectively. Use `-dataset_dir` to specify another available dataset.
- Use `-log_dir` to set another log directory, and use `-f` to synchronize (overwrite) old log files; this fixes `WARNING:tensorflow:Found more than one metagraph event per run. Overwriting the metagraph with the newest event.`
- Use `-num_epochs` and `-batch_size` to control the training schedule.
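To show how these flags fit together, here is a hypothetical sketch of the flag definitions in `train.py` (names mirror the flags above; the defaults are illustrative assumptions, and the real script may use a different flag mechanism):

```python
import argparse

# Sketch of the training flags described above; defaults are illustrative.
parser = argparse.ArgumentParser(description="Train PilotNet / KAN variants")
parser.add_argument("-dataset_dir", default="./data/datasets/driving_dataset",
                    help="directory containing the images and data.txt")
parser.add_argument("-log_dir", default="./logs",
                    help="where training logs and checkpoints are written")
parser.add_argument("-f", action="store_true",
                    help="overwrite stale logs to avoid the duplicate-metagraph warning")
parser.add_argument("-num_epochs", type=int, default=30)   # assumed default
parser.add_argument("-batch_size", type=int, default=32)   # matches the experiments above
args = parser.parse_args()
```

For example (hypothetical invocation): `$ ./scripts/train.sh -dataset_dir ./data/datasets/driving_dataset -num_epochs 30 -batch_size 32`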