Title: Image White Balance for Multi-Illuminant Scenes
Author: Aditya Arora
Supervisor: Konstantinos G. Derpanis
Date issued: 2024-11-07
URI: https://hdl.handle.net/10315/42495
Type: Electronic Thesis or Dissertation
Subject: Computer science
Keywords: Image White-Balance; Multi-Illuminant Scenes; Computational Photography; Computer Vision

Abstract:
Performing white-balance (WB) correction for scenes with multiple illuminants remains a challenging task in computer vision. Most previous methods estimate per-pixel scene illumination directly in the RAW sensor image space. Recent work explored an alternative fusion strategy, where a neural network fuses multiple white-balanced versions of the input image processed to sRGB using pre-defined white-balance settings. Inspired by this line of work, we present two contributions targeting fusion-based multi-illuminant WB correction. First, we introduce a large-scale multi-illumination dataset rendered from RAW images to support the training and evaluation of fusion models. The dataset comprises over 16,000 sRGB images with ground-truth sRGB white-balance-corrected images. Next, we introduce an attention-based architecture to fuse five white-balance settings. This architecture yields an improvement of up to 25% over prior work.

Rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
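The abstract describes fusing several pre-rendered white-balance versions of an sRGB image with per-pixel weights produced by an attention network. The sketch below is only an illustration of that fusion step, not the thesis architecture: the function name `fuse_wb_renderings`, the softmax weighting, and the random placeholder inputs are assumptions standing in for the learned attention output.

```python
# Minimal sketch (assumed, not the thesis model): per-pixel weighted fusion of
# K pre-rendered white-balance versions of an sRGB image. In a fusion-based
# approach the per-pixel scores would come from a learned attention network;
# here they are random placeholders.
import numpy as np

def fuse_wb_renderings(renderings: np.ndarray, logits: np.ndarray) -> np.ndarray:
    """Blend K white-balanced renderings with per-pixel softmax weights.

    renderings: (K, H, W, 3) sRGB images rendered with K fixed WB settings.
    logits:     (K, H, W) unnormalized per-pixel scores (e.g., network output).
    Returns:    (H, W, 3) fused sRGB image.
    """
    # Softmax over the K candidate renderings at every pixel.
    logits = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=0, keepdims=True)        # shape (K, H, W)
    # Weighted sum over the K renderings, broadcast across RGB channels.
    return (weights[..., None] * renderings).sum(axis=0)

# Example with dummy data: five WB settings, as in the abstract.
K, H, W = 5, 64, 64
renderings = np.random.rand(K, H, W, 3).astype(np.float32)
logits = np.random.randn(K, H, W).astype(np.float32)
fused = fuse_wb_renderings(renderings, logits)
assert fused.shape == (H, W, 3)
```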