A Unified Framework for High-Frame-Rate High-Dynamic-Range Video Synthesis
Abstract
Producing high-dynamic-range (HDR) video at high frame rates is a technically demanding and application-critical problem, particularly in domains such as cinematography and autonomous perception. The difficulty stems from the limitations of conventional image sensors, which cannot simultaneously deliver high temporal and radiometric fidelity. This work introduces a unified framework that jointly addresses HDR reconstruction and temporal interpolation from sequences captured with alternating exposures. In contrast to prior methods that are restricted to middle-frame interpolation or rely on computationally intensive pipelines, our approach employs a lightweight, end-to-end network capable of generating HDR frames at arbitrary timesteps in real time on mid-range GPUs. To avoid the need for ground-truth HDR video, we propose a self-supervised training paradigm built on reconstruction objectives that preserve both photometric accuracy and temporal coherence. Experimental results demonstrate that our framework maintains competitive visual fidelity while significantly reducing computational overhead compared to state-of-the-art baselines.
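To make the self-supervised idea concrete, the sketch below illustrates one plausible form of such an objective: the network's prediction at each input timestep is re-exposed with that frame's exposure time and compared against the captured LDR frame (a photometric term), while predictions at nearby timesteps are encouraged to agree (a temporal-coherence term). This is a minimal illustration under stated assumptions, not the paper's actual losses; the model interface, the exposure model, the epsilon offset, and the loss weighting are all hypothetical.

import torch
import torch.nn.functional as F

def self_supervised_loss(model, ldr_prev, ldr_next, exposures, t):
    """Illustrative self-supervised objective for alternating-exposure input.

    model:     hypothetical network mapping two LDR frames, their exposures,
               and a timestep t in [0, 1] to an HDR frame (B, 3, H, W)
    ldr_prev:  LDR frame at t = 0, shape (B, 3, H, W), values in [0, 1]
    ldr_next:  LDR frame at t = 1, same shape, captured at a different exposure
    exposures: per-frame exposure times, shape (B, 2)
    t:         target timestep, shape (B,)
    """
    # Photometric term: predict HDR at the two input timesteps, re-expose
    # each prediction with the corresponding capture's exposure time, and
    # compare against the frames the sensor actually recorded.
    hdr_0 = model(ldr_prev, ldr_next, exposures, torch.zeros_like(t))
    hdr_1 = model(ldr_prev, ldr_next, exposures, torch.ones_like(t))
    reexp_0 = torch.clamp(hdr_0 * exposures[:, 0].view(-1, 1, 1, 1), 0.0, 1.0)
    reexp_1 = torch.clamp(hdr_1 * exposures[:, 1].view(-1, 1, 1, 1), 0.0, 1.0)
    photometric = F.l1_loss(reexp_0, ldr_prev) + F.l1_loss(reexp_1, ldr_next)

    # Temporal-coherence term: a simple smoothness proxy that penalizes
    # large differences between predictions at nearby timesteps.
    hdr_t = model(ldr_prev, ldr_next, exposures, t)
    hdr_t_eps = model(ldr_prev, ldr_next, exposures, t + 0.05)
    temporal = F.l1_loss(hdr_t, hdr_t_eps)

    # The 0.1 weight is an arbitrary placeholder, not a value from the paper.
    return photometric + 0.1 * temporal

Because both terms are computed from the captured LDR frames themselves, a loss of this shape requires no ground-truth HDR video, which is the property the abstract's training paradigm relies on.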