Color Constancy Algorithm Using Ambient Light Sensor
Color constancy is a fundamental characteristic of human vision that refers to the ability to correct color deviations caused by differences in illumination. Digital cameras, however, cannot automatically remove the color cast of the illumination; the color bias is corrected by adjusting the image according to an illuminant estimate, generally produced by a color constancy algorithm. As an essential part of image signal processing, color constancy algorithms are critical for improving image quality and the accuracy of computer vision tasks. Substantial efforts have been made to develop illuminant estimation methods, resulting in a proliferation of statistics- and learning-based algorithms. Existing color constancy algorithms usually yield accurate and stable illuminant estimates on conventional scenes. However, unacceptable errors often arise in low-color-complexity scenes with monotonous content and large uniformly colored surfaces, owing to the lack of cues about the illuminant color. To address this problem, this study proposes a color constancy algorithm with an ambient light sensor (ALS) to improve the accuracy of illuminant estimation in scenes with low color complexity. The approach leverages the fact that most intelligent terminals are equipped with an ALS, and enhances illuminant estimation accuracy by using ALS measurements alongside the image content.
The color constancy algorithm proposed in this study comprises two steps. The first step evaluates the reliability of the ALS measurement with a confidence assessment model, on the basis of which illuminant estimation is performed using the appropriate method. The reliability of the ALS is affected by the relative position of the ALS and the light source. Therefore, a bagging tree classifier is trained to serve as the confidence assessment model, with the posture of the camera, the color complexity of the image, and the Duv (distance from the blackbody locus) of the estimated illuminant chromaticity as input parameters. Two illuminant estimation methods are designed for the different confidence levels. When the confidence of the ALS measurement is high, illuminant estimation is performed by a color space transformation from the ALS response to camera RGB via a second-order root polynomial model, trained by minimizing the mean angular error of the training samples. If the ALS measurement has low confidence but the base algorithm has high confidence, illuminant estimation is performed by extracting neutral pixels with a mask determined by the ALS measurement and the illuminant distribution characteristics, starting from the results of an existing neutral color extraction method (Fig. 2). Finally, if both the ALS measurement and the base algorithm have low confidence, the illuminant color is obtained by averaging the results of the two methods above. To evaluate the proposed ALS-based color constancy algorithm (ALS-based CC), a dataset was collected using a Nikon D3X camera with a TCS3440 ALS mounted beside its lens. The dataset includes both conventional and low-color-complexity scenes, indoors and outdoors (Fig. 5), illuminated by light sources spanning a wide range of chromaticities (Fig. 4). In each image, a classic color checker was placed to provide the ground-truth label and was masked out during evaluation.
The confidence assessment model of the ALS is trained and tested using 50 and 20 samples, respectively, collected with the aforementioned setup. The model correctly identifies all of the low-confidence testing samples but misjudges some of the high-confidence ones (Table 2). ALS-based CC, whose parameters were determined from the performance evaluated by angular error statistics, is executed with Grey Pixels (GP) as the base algorithm for neutral pixel extraction. Its performance is compared with statistics-based counterparts on the established dataset. The results show that the proposed algorithm outperforms its counterparts in the mean, tri-mean, and median of the angular errors over the testing samples, indicating high overall accuracy. Moreover, ALS-based CC keeps the mean of the worst 25% of angular errors below 5°, demonstrating excellent stability even in challenging scenes (Table 3). In visualizations of typical scenes, ALS-based CC estimates the illuminant accurately most of the time, producing processed images largely consistent with the ground truth, whereas all the counterparts perform poorly on some scenes with large pure-color surfaces, degrading their corrected images with significant color bias (Fig. 6). Furthermore, the runtime of ALS-based CC is 66% of that of GP on MATLAB R2021b, suggesting its potential for real-time illuminant estimation applications.
This study proposes a color constancy algorithm that integrates an ALS with the camera to improve illuminant estimation accuracy in scenes with low color complexity. The algorithm consists of a confidence assessment model for the ALS and two illuminant estimation methods, based on color space transformation and on neutral pixel extraction, designed for different confidence levels. A dataset with ALS measurements was established to evaluate the algorithm; the results show that the mean, median, and worst-25% mean of the angular errors of the proposed method decrease by 32%, 21%, and 41%, respectively, compared with the most accurate existing method. The proposed algorithm also shows potential for real-time illuminant estimation in both conventional and low-color-complexity scenes.
1 Introduction
Color constancy algorithms underpin computer vision tasks such as image enhancement [1], object recognition, and target tracking. Most existing color constancy algorithms operate on a single image: they estimate the illuminant color on the basis of assumptions or learning, compute per-channel gains, and perform white balance correction with the diagonal model.
Assumption-based algorithms derive illuminant estimation methods from hypotheses about the color distribution of an image. For example, the Gray World [2], Gray Edge [3], and White Patch [4] algorithms assume that the mean of all pixels, the mean of edge pixels, and the maximum response of each channel, respectively, reflect the illuminant color. Building on these, researchers have continually refined the assumptions, proposing Shades of Gray [5], weighted Gray Edge [6], a PCA-based algorithm [7], Gray Pixels [8], Gray Index [9], and a confidence-based algorithm [10], among others. These algorithms are mostly computationally simple and applicable across devices, but their accuracy degrades on images that violate the underlying assumptions.
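As an illustration of this assumption-based family, the following is a minimal sketch of the Minkowski-norm estimator (Shades of Gray [5]), which reduces to Gray World [2] at p = 1 and approaches White Patch [4] as p grows, together with the diagonal-model white balance correction; the function names are ours, and this is not the papers' reference implementation:

```python
import numpy as np

def shades_of_gray(img, p=6):
    """Minkowski-norm illuminant estimate: p=1 gives Gray World,
    large p approaches White Patch (max response per channel).
    img: (H, W, 3) linear RGB; returns a unit-norm illuminant RGB."""
    e = np.power(np.power(img.reshape(-1, 3), p).mean(axis=0), 1.0 / p)
    return e / np.linalg.norm(e)

def white_balance(img, e):
    """Diagonal (von Kries) correction: scale each channel so the
    estimated illuminant maps to neutral gray."""
    gains = e.mean() / e
    return img * gains
```

Once the illuminant `e` is estimated, the correction is a per-pixel multiplication, which is why illuminant estimation is the critical step.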
Learning-based algorithms train illuminant estimation models on datasets. In recent years, neural networks have brought substantial performance gains; for example, FC4 [11], IGTN [12], CLCC [13], Reweighted-CC [14], BoCF [15], One-net [16], a multi-channel confidence-weighted network [17], and a progressive multi-scale feature cascade fusion network [18] all outperform traditional algorithms. However, these methods depend strongly on the training set and struggle to produce accurate, stable illuminant estimates in low-color-complexity scenes with monotonous colors and content.
As the imaging hardware of intelligent terminals has been upgraded, the information available for illuminant estimation has grown. Most smartphones and similar terminal devices are now equipped with sensors that probe the light environment, enabling the screen to adapt its brightness, color temperature, and other attributes to the ambient light. To improve the performance of color constancy algorithms in difficult scenes such as those with low color complexity, this paper exploits the potential of the ambient light sensor for illuminant estimation, proposes a color constancy algorithm using the ambient light sensor, designs illuminant estimation methods built on confidence assessment, and establishes a dataset to validate the algorithm's performance.
2 Algorithm Principle
2.1 Sensor Confidence Assessment
Starting from the physical model [19] and neglecting noise, the raw response of a sensor channel can be expressed as the weighted sum of a specular reflection component, which reflects the illuminant color, and a diffuse reflection component, which reflects the object color, i.e.
where:
The composition of the incident light depends mainly on the geometric relationship among the light source, the scene, and the ambient light sensor. This paper considers three aspects: 1) since most everyday scenes are immersively lit, with the light source located above the scene, a sensor tilted upward is more likely to obtain a high-confidence measurement; 2) illumination sources are mostly distributed near the blackbody locus, so a measurement that deviates markedly from the blackbody locus is more likely to differ substantially from the actual illuminant color and is thus of low confidence; 3) in pure-color scenes, the mixture of light reflected from objects is prone to color cast, so the sensor measurement cannot accurately reflect the illuminant characteristics, and low-confidence measurements are more likely in such scenes.
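The weighted-sum relation described above is the standard dichromatic reflection model; in a notation we assume here (the paper's own symbols are not reproduced in this excerpt), the raw response of channel $k$ can be written as

```latex
\rho_k = w_s\, L_k + w_d\, D_k,
\qquad
L_k = \int_{\lambda} e(\lambda)\, c_k(\lambda)\, \mathrm{d}\lambda,
\qquad
D_k = \int_{\lambda} e(\lambda)\, s(\lambda)\, c_k(\lambda)\, \mathrm{d}\lambda,
```

where $e(\lambda)$ is the illuminant spectral power distribution, $s(\lambda)$ the surface reflectance, $c_k(\lambda)$ the spectral sensitivity of channel $k$, and $w_s$, $w_d$ the specular and diffuse weights: the specular term $L_k$ carries the illuminant color, while the diffuse term $D_k$ mixes in the object color.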
This paper selects the sensor posture, the deviation distance (Duv) of the chromaticity point from the blackbody locus, and the scene color complexity as the input parameters of the confidence assessment model. The sensor posture is defined as shown in
Taking the measured illuminant chromaticity
and with a threshold
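A compact sketch of such a confidence classifier, using scikit-learn's bagging ensemble of decision trees [21]; the three features (tilt angle, |Duv|, color complexity), the tree depth, and all names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def train_confidence_model(features, labels, n_trees=20, seed=0):
    """Bagging ensemble of shallow decision trees that labels an ALS
    measurement as high (1) or low (0) confidence.
    features: (N, 3) array of [tilt_deg, abs_duv, color_complexity]."""
    model = BaggingClassifier(
        DecisionTreeClassifier(max_depth=3),  # weak learner
        n_estimators=n_trees,                 # bootstrap replicates
        random_state=seed,
    )
    model.fit(features, labels)
    return model
```

A measurement would then be routed to the high- or low-confidence estimation branch according to `model.predict`.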
2.2 Illuminant Estimation Based on Color Space Transformation
In recent years, the ambient light sensors (ALS) used in smartphones have evolved from single-channel luminance sensors through three-channel color sensors to multi-channel spectral sensors. This paper adopts the TCS3440 multi-channel ambient light sensor for the experiments. The sensor homogenizes the incident light with a cosine diffuser, provides eight narrow-band visible channels, one infrared channel, and one clear (all-pass) channel [22], and incorporates two built-in linear mapping models for tasks such as tristimulus value calculation and illuminant spectrum reconstruction.
Inspired by the color correction stage of the image signal processing (ISP) pipeline, this paper builds a color space mapping model that maps the tristimulus values computed by the sensor from XYZ space to camera RGB space and takes the result as the illuminant estimate. Balancing accuracy against generalization, a second-order root polynomial model is adopted, with the mean angular error [19] of the training samples as the loss function, optimized by the interior-point method, i.e.
where:
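A sketch of this mapping under assumed notation: the six second-order root-polynomial terms of XYZ are mapped to camera RGB by a 3×6 matrix, fitted by minimizing the mean angular error. We substitute a generic SciPy minimizer, warm-started from the least-squares solution, for the interior-point solver used in the paper; all names are ours:

```python
import numpy as np
from scipy.optimize import minimize

def root_poly_features(xyz):
    """Second-order root-polynomial expansion of (N, 3) XYZ values:
    X, Y, Z, sqrt(XY), sqrt(XZ), sqrt(YZ) -> (N, 6)."""
    X, Y, Z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    return np.stack([X, Y, Z,
                     np.sqrt(X * Y), np.sqrt(X * Z), np.sqrt(Y * Z)], axis=1)

def mean_angular_error_deg(M_flat, feats, rgb):
    """Mean angle (degrees) between predicted and measured camera RGB."""
    pred = feats @ M_flat.reshape(3, 6).T
    cos = (pred * rgb).sum(axis=1) / (np.linalg.norm(pred, axis=1)
                                      * np.linalg.norm(rgb, axis=1) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()

def fit_color_space_mapping(xyz, rgb):
    """Fit the 3x6 mapping matrix on angular-error loss."""
    feats = root_poly_features(xyz)
    M0, *_ = np.linalg.lstsq(feats, rgb, rcond=None)  # (6, 3) warm start
    res = minimize(mean_angular_error_deg, M0.T.ravel(),
                   args=(feats, rgb), method="Powell")
    return res.x.reshape(3, 6)
```

The root-polynomial form is exposure-invariant up to scale, which is why angular error, a scale-free metric, is the natural loss here.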
2.3 Illuminant Estimation Based on Sensor-Assisted Neutral Pixel Extraction
Since the ambient light sensor measurement reflects the illuminant characteristics to some extent, it can be used to assist single-image color constancy algorithms in illuminant estimation and thereby improve fault tolerance. A color constancy algorithm based on neutral pixel extraction is chosen as the base algorithm; starting from the neutral pixels it extracts, and taking the sensor's illuminant estimate as a reference, the neutral pixels are re-extracted. To facilitate characterization of the neutral color region, the neutral pixels preliminarily extracted by the base algorithm and the sensor's illuminant estimate are mapped onto an orthogonal chromaticity plane (
where:
where:
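The re-extraction step can be sketched as follows; the simple (r, b) chromaticity projection and the fixed radius are stand-ins for the paper's orthogonal chromaticity plane and its mask derived from the illuminant distribution, and all names are ours:

```python
import numpy as np

def chromaticity(rgb):
    """Project RGB to a 2-D chromaticity plane (r, b); a simple stand-in
    for the orthogonal chromaticity plane used in the paper."""
    s = rgb.sum(axis=-1, keepdims=True) + 1e-12
    rb = rgb / s
    return rb[..., [0, 2]]

def refine_neutral_pixels(candidates, als_rgb, radius=0.05):
    """Keep base-algorithm neutral candidates whose chromaticity lies
    within `radius` of the ALS illuminant estimate; return the mean RGB
    of the retained pixels as the illuminant estimate (None if empty).
    candidates: (N, 3) RGB of preliminarily extracted neutral pixels."""
    c = chromaticity(candidates)
    ref = chromaticity(als_rgb[None, :])[0]
    keep = np.linalg.norm(c - ref, axis=1) <= radius
    if not keep.any():
        return None
    return candidates[keep].mean(axis=0)
```

Returning `None` when no candidate survives corresponds to the low-confidence case of the base algorithm, handled by the strategy in Sec. 2.4.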
2.4 Illuminant Estimation Strategy
According to the confidence assessment result of the ambient light sensor, illuminant estimation strategies are designed for the different cases. When the sensor confidence is high, the sensor measurement is considered sufficiently accurate and is taken as the final illuminant estimate, reducing the uncertainty of estimating the illuminant from image content alone. When the sensor confidence is low, the confidence of the base algorithm is evaluated from the preliminarily extracted neutral pixels: if any of them fall within the predefined neutral color region, the base algorithm is considered high-confidence; otherwise it is considered low-confidence. When the base algorithm confidence is high, the sensor-assisted neutral pixel extraction result is taken as the final illuminant estimate; when it is low, following established algorithm combination schemes [24], the mean of the sensor's estimate and the sensor-assisted neutral pixel extraction estimate is taken as the final illuminant estimate.
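The branching described above amounts to the following dispatch, a sketch with illustrative names that assumes the two estimates and the two confidence flags have already been computed:

```python
import numpy as np

def final_illuminant(als_high_conf, base_high_conf, est_als, est_neutral):
    """Estimation strategy: trust the ALS color-space-transform estimate
    when the sensor is confident; fall back to the sensor-assisted
    neutral-pixel estimate when only the base algorithm is confident;
    average the two estimates when both confidences are low."""
    if als_high_conf:
        return np.asarray(est_als, dtype=float)
    if base_high_conf:
        return np.asarray(est_neutral, dtype=float)
    return (np.asarray(est_als, dtype=float)
            + np.asarray(est_neutral, dtype=float)) / 2.0
```

Averaging in the doubly low-confidence case hedges between the two sources rather than committing to either unreliable estimate.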
3 Experiments and Analysis of Results
3.1 Dataset Establishment
For the three tasks of training and testing the sensor confidence assessment model, training and testing the color space mapping model, and testing the proposed color constancy algorithm using the ambient light sensor, data were collected under a variety of indoor and outdoor light sources and combined with simulated data to build the datasets.
For the sensor confidence assessment model, the ambient light sensor was fixed beside the camera lens, and 70 images were captured at random in natural indoor and outdoor scenes while the sensor measurements and device posture were recorded; the tristimulus values of a white balance card in each scene were measured with a CS-2000 tele-spectroradiometer to construct the training set. In addition, illuminant tristimulus values were supplementally collected for the first 20 groups of the color constancy test set to form the test set.
For the color space mapping model, spectral power distributions of light sources were obtained from open-source datasets [25-26], and the camera RGB values and tristimulus values of the white balance card were computed to form a training set of 91 simulated samples. Under light sources including a standard light booth, multi-channel LEDs, fluorescent lamps, and daylight, the camera RGB values and tristimulus values of the white balance card were obtained with a Nikon D3X camera and the CS-2000 spectroradiometer, respectively, forming a test set of 27 samples. The illuminant color distributions of the training and test sets are shown in Fig. 3.
Fig. 3. Distribution of the light sources in the dataset for color space transformation
For testing the color constancy algorithm, the positional relationship between the ambient light sensor and the lens on terminal devices such as smartphones was emulated by fixing the ambient light sensor beside the Nikon D3X lens, and the sensor measurements and device posture were recorded during capture. In this way, 140 groups of indoor and outdoor scene data were collected to form the test image dataset, whose illuminant color distribution is shown in Fig. 4.
Fig. 4. Distribution of the light sources in the dataset for color constancy algorithm testing
To test the performance of the proposed algorithm on low-color-complexity images, the test dataset includes 100 scenes containing highly saturated, large, uniformly colored objects; typical examples are shown in Fig. 5.
Fig. 5. Typical scenes with low color complexity in the test dataset
3.2 Evaluation of the Sensor Confidence Assessment Model
The undetermined parameters of the confidence assessment model are set according to the algorithm's performance on the training set, as listed in
Five-fold cross-validation is adopted, and the resulting training and testing performance is shown in Table 2.
Table 2. Evaluation of the confidence assessment model
3.3 Evaluation of the Color Space Mapping Model
When training the color space mapping model, the tristimulus values are normalized by the Y channel, and the camera RGB values are normalized by the G channel. Taking the mean, tri-mean, median, best-25% mean, and worst-25% mean of the angular error as evaluation metrics, the performance on the training and test sets is shown in Table 3.
Table 3. Evaluation of the color space transformation model (angular error)
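The angular-error summary statistics used throughout the evaluation are standard in color constancy work; a sketch (function and key names are ours):

```python
import numpy as np

def error_statistics(errors):
    """Angular-error summary: mean, median, tri-mean (Q1 + 2*Q2 + Q3)/4,
    and the means of the best and worst 25% of the errors."""
    e = np.sort(np.asarray(errors, dtype=float))
    q1, q2, q3 = np.percentile(e, [25, 50, 75])
    k = max(1, len(e) // 4)  # size of the best/worst quartile
    return {
        "mean": e.mean(),
        "median": q2,
        "trimean": (q1 + 2 * q2 + q3) / 4.0,
        "best25": e[:k].mean(),
        "worst25": e[-k:].mean(),
    }
```

The worst-25% mean is the stability indicator emphasized in the abstract: it is dominated by the hardest (typically low-color-complexity) scenes.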
3.4 Evaluation of the Color Constancy Algorithm Using the Ambient Light Sensor
The details of the ambient-light-sensor-assisted neutral pixel screening method are determined according to the algorithm's performance on the test set. The Gray Pixels algorithm [8] is selected as the base algorithm, and the parameter values of the mapping between the camera RGB space and the orthogonal chromaticity plane are given by
where:
Table 4. Values of parameters in the transformation from camera RGB space to the orthogonal chromaticity plane
Since the proposed color constancy algorithm is assumption-based, it is compared mainly with algorithms of the same type; the results are shown in Table 5.
Table 5. Performance of color constancy algorithms on the test set (angular error)
Overall, the proposed algorithm is more accurate and stable: while preserving illuminant estimation accuracy in conventional scenes, it improves the performance of color constancy in difficult scenes such as those with low color complexity. On the MATLAB platform, the average runtime of the proposed algorithm on the test set is only 0.66 times that of the base algorithm GP, so the time cost is not noticeably increased.
4 Conclusion
Using an ambient light sensor mounted beside the camera lens to probe the illuminant, this paper proposes a color constancy algorithm that exploits the ambient light sensor. A sensor confidence assessment model is established, illuminant estimation methods based on color space mapping and on sensor-assisted neutral pixel extraction are designed, and appropriate strategies are adopted for the different confidence levels, balancing performance between conventional and difficult scenes. Compared with algorithms of the same type, the proposed algorithm achieves higher accuracy and stability. Future work will build a more precise confidence assessment model based on the physical model and optimize the neutral pixel screening method to further improve the performance of the color constancy algorithm, and will explore its application on smartphones and other intelligent terminals.
[1] Wang X Q, Zhao X Z, Liu Z L. Underwater optical image enhancement based on color constancy and multi-scale wavelet[J]. Laser & Optoelectronics Progress, 2022, 59(16): 1601002.
[2] Buchsbaum G. A spatial processor model for object colour perception[J]. Journal of the Franklin Institute, 1980, 310(1): 337-350.
[3] Gijsenij A, Gevers T, van de Weijer J. Physics-based edge evaluation for improved color constancy[C]//2009 IEEE Conference on Computer Vision and Pattern Recognition, June 20-25, 2009, Miami, FL, USA. New York: IEEE Press, 2009: 581-588.
[4] Land E H, McCann J J. Lightness and retinex theory[J]. Journal of the Optical Society of America, 1971, 61(1): 1-11.
[5] Finlayson G D, Trezzi E. Shades of gray and colour constancy[C]//Proceedings of the 12th Color Imaging Conference, November 9-12, 2004, Scottsdale, AZ. Springfield: Society for Imaging Science and Technology, 2004: 37-41.
[6] Gijsenij A, Gevers T, van de Weijer J. Improving color constancy by photometric edge weighting[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(5): 918-929.
[7] Cheng D L, Prasad D, Brown M S. Illuminant estimation for color constancy: why spatial-domain methods work and the role of the color distribution[J]. Journal of the Optical Society of America A, 2014, 31(5): 1049-1058.
[8] Yang K F, Gao S B, Li Y J. Efficient illuminant estimation for color constancy using gray pixels[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 7-12, 2015, Boston, MA, USA. New York: IEEE Press, 2015: 2254-2263.
[9] Qian Y L, Kämäräinen J K, Nikkanen J, et al. On finding gray pixels[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 15-20, 2019, Long Beach, CA, USA. New York: IEEE Press, 2020: 8054-8062.
[10] Laakom F, Raitoharju J, Iosifidis A, et al. Probabilistic color constancy[C]//Proceedings of the IEEE International Conference on Image Processing (ICIP), October 25-28, 2020, Abu Dhabi, United Arab Emirates. New York: IEEE Press, 2020: 978-982.
[11] Hu Y M, Wang B Y, Lin S. FC4: fully convolutional color constancy with confidence-weighted pooling[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 21-26, 2017, Honolulu, HI, USA. New York: IEEE Press, 2017: 330-339.
[12] Xu B L, Liu J X, Hou X, et al. End-to-end illuminant estimation based on deep metric learning[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 13-19, 2020, Seattle, WA, USA. New York: IEEE Press, 2020: 3613-3622.
[13] Lo Y C, Chang C C, Chiu H C, et al. CLCC: contrastive learning for color constancy[C]//Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 19-25, 2021, virtual. New York: IEEE Press, 2021: 8049-8059.
[14] Qiu J Q, Xu H S, Ye Z N. Color constancy by reweighting image feature maps[J]. IEEE Transactions on Image Processing, 2020, 29: 5711-5721.
[15] Laakom F, Passalis N, Raitoharju J, et al. A bag of color features for color constancy[J]. IEEE Transactions on Image Processing, 2020, 29: 7722-7734.
[16] Domislović I, Vršnak D, Subašić M, et al. One-net: convolutional color constancy simplified[J]. Pattern Recognition Letters, 2022, 159: 31-37.
[17] Yang Z P, Xie K, Li T, et al. Multi-channel confidence-weighted color constancy algorithm[J]. Acta Optica Sinica, 2021, 41(11): 1133002.
[18] Yang Z P, Xie K, Li T. Progressive multi-scale feature cascade fusion color constancy algorithm[J]. Acta Optica Sinica, 2022, 42(5): 0533002.
[19] Gijsenij A, Gevers T, van de Weijer J. Computational color constancy: survey and experiments[J]. IEEE Transactions on Image Processing, 2011, 20(9): 2475-2489.
[20] Ohno Y. Practical use and calculation of CCT and Duv[J]. Leukos, 2014, 10(1): 47-55.
[21] Breiman L. Bagging predictors[J]. Machine Learning, 1996, 24(2): 123-140.
[23] Qiu J Q. Study on the methodology and technology of digital camera image signal processing based on the raw response prediction model[D]. Hangzhou: Zhejiang University, 2020: 88-90.
[24] Li B, Xiong W H, Hu W M, et al. Evaluating combinational illumination estimation methods for real-world images[J]. IEEE Transactions on Image Processing, 2014, 23(3): 1194-1209.
[27] Gao S B, Yang K F, Li C Y, et al. Color constancy using double-opponency[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(10): 1973-1985.
Yuemin Li, Haisong Xu, Yiming Huang, Minhang Yang, Bing Hu, Yuntao Zhang. Color Constancy Algorithm Using Ambient Light Sensor[J]. Acta Optica Sinica, 2023, 43(14): 1433001.