Coherent Event Guided Low-Light Video Enhancement

Peking University, South China University of Technology

EvLowLight reconstructs high-quality videos from hybrid inputs of low-light frames and events.

Abstract

With frame-based cameras, capturing fast-moving scenes without suffering from blur often comes at the cost of low SNR and low contrast. Worse still, the photometric constancy that enhancement techniques heavily rely on is fragile for frames with short exposure. Event cameras can record brightness changes at extremely high temporal resolution. For low-light videos, event data not only help capture temporal correspondences but also provide alternative observations, in the form of intensity ratios between consecutive frames, and exposure-invariant information. Motivated by this, we propose a low-light video enhancement method with hybrid inputs of events and frames. Specifically, a neural network is trained to establish spatiotemporal coherence between visual signals of different modalities and resolutions by constructing correlation volumes across space and time. Experimental results on synthetic and real data demonstrate the superiority of the proposed method over state-of-the-art methods.
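As background for the intensity-ratio observation mentioned above: under the standard event generation model, each event of polarity p at a pixel marks a log-brightness change of roughly p times the contrast threshold c, so summing polarities between two frames gives the log of their intensity ratio. The Python sketch below is a simplified illustration of this idea only; the function name, variable names, and threshold value are assumptions, not part of the released method.

import numpy as np

def log_intensity_ratio(x, y, p, H, W, c=0.2):
    """x, y: pixel coordinates of events fired between frame t and t+1;
    p: polarities in {-1, +1}; c: contrast threshold (assumed).
    Returns the per-pixel log intensity ratio log(I_{t+1} / I_t)."""
    ratio = np.zeros((H, W), dtype=np.float32)
    # Each event contributes a signed threshold crossing of magnitude c.
    np.add.at(ratio, (y, x), (c * p).astype(np.float32))
    return ratio

# Toy usage: two positive events at pixel (7, 5), one negative event at (3, 10).
x = np.array([5, 5, 10]); y = np.array([7, 7, 3]); p = np.array([1, 1, -1])
r = log_intensity_ratio(x, y, p, H=16, W=16)
print(np.exp(r[7, 5]), np.exp(r[3, 10]))  # intensity ratios of about 1.49 and 0.82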

Video

Method

An overview of the proposed method. First, the multimodal coherence modeling module computes all-pair correlation volumes between event and frame features, which aligns the event features and jointly estimates optical flow. Then, the temporal coherence propagation module samples observations corresponding to the same scene point and propagates them across time to estimate the underlying clean frame. In parallel, exposure parameters are extracted from both events and frames to produce the final high-quality frame. A simplified sketch of the correlation step follows below.
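As a rough illustration of the all-pair correlation step in the multimodal coherence modeling module, the PyTorch sketch below correlates an event feature map with a frame feature map by dot products over the channel dimension, yielding a 4D volume with one similarity score for every pair of pixels. Tensor shapes, names, and the scaling factor are illustrative assumptions rather than the authors' released implementation.

import torch

def all_pair_correlation(feat_event, feat_frame):
    """feat_event, feat_frame: (B, C, H, W) feature maps extracted from
    events and frames at the same spatial resolution.
    Returns a correlation volume of shape (B, H, W, H, W), where entry
    [b, i, j, k, l] scores the match between event pixel (i, j) and
    frame pixel (k, l)."""
    B, C, H, W = feat_event.shape
    fe = feat_event.reshape(B, C, H * W)              # (B, C, HW)
    ff = feat_frame.reshape(B, C, H * W)              # (B, C, HW)
    corr = torch.einsum('bci,bcj->bij', fe, ff)       # dot product per pixel pair
    corr = corr / (C ** 0.5)                          # normalize by feature dimension
    return corr.view(B, H, W, H, W)

# Toy usage: correlate 64-channel features at 32x48 resolution.
fe = torch.randn(1, 64, 32, 48)
ff = torch.randn(1, 64, 32, 48)
vol = all_pair_correlation(fe, ff)
print(vol.shape)  # torch.Size([1, 32, 48, 32, 48])

In RAFT-style designs, such a volume is looked up at flow-displaced locations; that is one common way event features can be aligned to the frame while the flow is jointly refined.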

Results on synthetic data

Results on real data

BibTeX

@inproceedings{liang2023evlowlight,
  author    = {Liang, Jinxiu and Yang, Yixin and Li, Boyu and Duan, Peiqi and Xu, Yong and Shi, Boxin},
  title     = {Coherent Event Guided Low-Light Video Enhancement},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2023},
}