# Light-Condition-Style-Transfer

**Repository Path**: captainscott/Light-Condition-Style-Transfer

## Basic Information

- **Project Name**: Light-Condition-Style-Transfer
- **Description**: Lane Detection in Low-light Conditions Using an Efficient Data Enhancement : Light Conditions Style Transfer (IV 2020)
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-04-20
- **Last Updated**: 2021-04-20

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Light Conditions Style Transfer

## Paper

[Lane Detection in Low-light Conditions Using an Efficient Data Enhancement : Light Conditions Style Transfer](https://arxiv.org/abs/2002.01177)

Accepted by the 2020 IEEE Intelligent Vehicles Symposium (IV 2020). The main framework is as follows:

![Our framework](https://github.com/Chenzhaowei13/Light-Condition-Style-Transfer/blob/master/data/framework.png)

Empirically, a lane detection model trained with our method shows good adaptability in low-light conditions and robustness in complex scenarios. It achieves a **73.9** F1-measure on the CULane test set.

## Datasets

#### CULane

The whole dataset is available at [CULane](https://xingangpan.github.io/projects/CULane.html).

```
CULane
├── driver_23_30frame    # training & validation
├── driver_161_90frame   # training & validation
├── driver_182_30frame   # training & validation
├── driver_193_90frame   # testing
├── driver_100_30frame   # testing
├── driver_37_30frame    # testing
├── laneseg_label_w16    # labels
└── list                 # list files
```

#### Generated Images

The images in low-light conditions are generated by the proposed SIM-CycleGAN.
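The data enhancement then amounts to adding these generated low-light images to the CULane training list alongside their normal-light originals; since the style transfer preserves the lane geometry, the original segmentation labels and lane-existence flags can be reused. Below is a minimal sketch of how such a merged list could be built. The list name `train_gt.txt`, the `gen_lowlight` directory (assumed to mirror the original frame paths), and the output name are illustrative assumptions, not part of the released code.

```
# merge_train_list.py -- illustrative sketch only; file names and the layout of
# the generated images are assumptions, not part of the released code.
import os

CULANE_ROOT = "/path/to/CULane"
ORIG_LIST = os.path.join(CULANE_ROOT, "list", "train_gt.txt")      # original training list (assumed name)
OUT_LIST = os.path.join(CULANE_ROOT, "list", "train_gt_aug.txt")   # augmented list to be written
GEN_DIR = "gen_lowlight"  # assumed folder of SIM-CycleGAN outputs, mirroring the original frame paths

with open(ORIG_LIST) as src, open(OUT_LIST, "w") as dst:
    for line in src:
        dst.write(line)                          # keep the original normal-light sample
        parts = line.split()                     # [image path, seg label path, 4 lane-existence flags]
        gen_img = "/" + GEN_DIR + parts[0]       # generated low-light counterpart of the same frame
        if os.path.exists(CULANE_ROOT + gen_img):
            # the translation keeps lane positions unchanged, so label and flags are reused as-is
            dst.write(" ".join([gen_img] + parts[1:]) + "\n")
```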
## Requirements

- [PyTorch 1.3.0](https://pytorch.org/get-started/previous-versions/)
- Matlab (for tools/prob2lines), version R2017a or later
- [OpenCV](https://opencv.org/releases/) (for tools/lane_evaluation)

## Before start

```
conda create -n your_env_name python=3.6
conda activate your_env_name
conda install pytorch==1.3.0 torchvision==0.4.1 cudatoolkit=10.0 -c pytorch
pip install -r requirements.txt
```

## SIM-CycleGAN

The source code for SIM-CycleGAN has been released. (11/03)

### train

Train your own SIM-CycleGAN model as follows.

```
python train.py --name repo_name \
                --dataset_loadtxt_A /path/to/domain_A_txt \
                --dataset_loadtxt_B /path/to/domain_B_txt \
                --gpu_ids 6
```

### test

Use your trained model to generate images.

```
python test.py --name repo_name \
               --model simcycle_gan \
               --dataset_loadtxt_A /path/to/domain_A_txt \
               --dataset_loadtxt_B /path/to/domain_B_txt \
               --gpu_ids 6
```

## Lane Detection

The source code used for lane detection is made publicly available by [HOU Yuenan](https://github.com/cardwing/Codes-for-Lane-Detection/tree/master/ERFNet-CULane-PyTorch).

#### Test for Demo

We provide a demo for testing a single image or a video.

```
sh ./demo.sh
```

You will get results like the following.

Result for the probability map:

![images](https://github.com/Chenzhaowei13/Light-Condition-Style-Transfer/blob/master/data/result_pb.jpg)

Result for the lane points:

![images](https://github.com/Chenzhaowei13/Light-Condition-Style-Transfer/blob/master/data/result_points.jpg)

To test the model on a video, set `mode=0` in `demo.sh`.

#### Evaluate the Model

The trained model used in this paper is available in `./trained`.

1. Run the test script

   ```
   sh ./test_erfnet.sh
   ```

2. Get lines from the probability maps

   ```
   cd tools/prob2lines
   matlab -nodisplay -r "main;exit"
   ```

   Please check the file paths in the Matlab code before running it.

3. Evaluation

   ```
   cd tools/lane_evaluation
   make
   # You may also use cmake instead of make, via:
   # mkdir build && cd build && cmake ..

   sh eval_all.sh     # evaluate the whole test set
   sh eval_split.sh   # evaluate each scenario separately
   ```

The evaluation results are saved in `tools/lane_evaluation/output`.

## Performance

#### Light Conditions Style Transfer

Some examples of real images in normal light conditions and their corresponding translated images in low-light conditions:

![images](https://github.com/Chenzhaowei13/Light-Condition-Style-Transfer/blob/master/data/transfer_result.png)

#### Lane Detection

Performance (F1-measure) of different methods on the CULane test set. For Crossroad, only FP is shown.

| Category | ERFNet | CycleGAN + ERFNet | SIM-CycleGAN + ERFNet (ours) | SCNN | ENet-SAD | ResNet-101-SAD |
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| Normal | 91.5 | 91.7 | **91.8** | 90.6 | 90.1 | 90.7 |
| Crowded | 71.6 | 71.5 | **71.8** | 69.7 | 68.8 | 70.0 |
| Night | 67.1 | 68.9 | **69.4** | 66.1 | 66.0 | 66.3 |
| No Line | 45.1 | 45.2 | **46.1** | 43.4 | 41.6 | 43.5 |
| Shadow | 71.3 | 73.1 | **76.2** | 66.9 | 65.9 | 67.0 |
| Arrow | 87.2 | 87.2 | **87.8** | 66.9 | 65.9 | 67.0 |
| Dazzle Light | 66.0 | **67.5** | 66.4 | 58.5 | 60.2 | 59.9 |
| Curve | 66.3 | **69.0** | 67.1 | 64.4 | 65.7 | 65.7 |
| Crossroad | 2199 | 2402 | 2346 | **1990** | 1998 | 2052 |
| Total | 73.1 | 73.6 | **73.9** | 71.6 | 70.8 | 71.8 |

The probability maps output by the three methods above are shown below:

![images](https://github.com/Chenzhaowei13/Light-Condition-Style-Transfer/blob/master/data/lane_detection_results.png)

## To do

- [ ] Add attention to ERFNet
- [x] Release the source code for SIM-CycleGAN
- [x] Upgrade PyTorch (from 0.3.0 to 1.3.0)
- [x] Upload the test demo

## Citation

Please cite our work in your publication if it helps your research.

```
@inproceedings{Liu2020Lane,
  title={Lane Detection in Low-light Conditions Using an Efficient Data Enhancement : Light Conditions Style Transfer},
  author={Liu, Tong and Chen, Zhaowei and Yang, Yi and Wu, Zehao and Li, Haowei},
  booktitle={2020 IEEE Intelligent Vehicles Symposium (IV)},
  year={2020},
}
```

## Acknowledgement

This project refers to the following projects:

- [Codes-for-Lane-Detection](https://github.com/cardwing/Codes-for-Lane-Detection)
- [SCNN](https://github.com/XingangPan/SCNN)
- [CycleGAN](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix)