Stereo low-light enhancement based on feature fusion and consistency loss
Jia-wen LIAO1, Yan-wei PANG1,2,*, Jing NIE1,3, Han-qing SUN1, Jia-le CAO1
1. School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
2. Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China
3. School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 401331, China
Abstract SLL10K, a large-scale stereo low-light image dataset of real scenes, was constructed. The dataset contains 12 658 pairs of no-reference stereo low-light images and 205 pairs of referenced stereo images, covering a wide range of illumination conditions, capture times, and scenes. FCNet, a stereo low-light image enhancement network based on feature fusion and consistency loss, was proposed: the feature fusion module fully integrates intra-view (monocular) and cross-view (stereo) features, and the consistency loss function keeps the images consistent before and after enhancement. Experiments on the SLL10K dataset and a darkened KITTI dataset show that images enhanced by FCNet achieve better low-light enhancement and object detection performance than those produced by monocular enhancement methods.
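The abstract describes the consistency loss only at a high level. As a rough, hypothetical sketch (not the paper's actual formulation), such a loss can be written as an L1 penalty between the normalized gradient fields of an image before and after enhancement, so that scene structure is preserved while brightness is allowed to change; all function names and the normalization scheme below are illustrative assumptions:

```python
import numpy as np

def _grads(img):
    # Horizontal and vertical finite differences of a 2-D image.
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    return gx, gy

def consistency_loss(low, enhanced, eps=1e-6):
    """L1 distance between the normalized gradient fields of the
    low-light input and its enhanced output: structure should be
    preserved while global brightness changes."""
    lx, ly = _grads(low / (low.mean() + eps))
    ex, ey = _grads(enhanced / (enhanced.mean() + eps))
    return float(np.abs(lx - ex).mean() + np.abs(ly - ey).mean())

img = np.random.default_rng(0).random((64, 64))
print(consistency_loss(img, img * 3.0))  # near zero: a pure brightness rescaling cancels after normalization
```

Because both images are divided by their mean intensity before the gradients are compared, a uniform brightness rescaling incurs almost no penalty, while structural distortions do.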
Received: 27 February 2023
Published: 27 December 2023
Fund: National Science and Technology Innovation 2030 Major Project of New Generation Artificial Intelligence (2022ZD0160400); National Natural Science Foundation of China (62271346)
Corresponding Authors:
Yan-wei PANG
E-mail: gavin971209@tju.edu.cn;pyw@tju.edu.cn
Keywords:
image enhancement,
low light,
stereo dataset,
no-reference image enhancement,
feature fusion