Chinese Optics Letters, 2023, 21 (10): 101102, Published Online: Oct. 11, 2023  

Enhanced imaging through turbid water based on quadrature lock-in discrimination and retinex aided by adaptive gamma function for illumination correction

Author Affiliations
1 Optical Communication Laboratory, Ocean College, Zhejiang University, Zhoushan 316021, China
2 Hainan Institute of Zhejiang University, Sanya 572000, China
3 Key Laboratory of Ocean Observation-Imaging Testbed of Zhejiang Province, Ocean College, Zhejiang University, Zhoushan 316021, China
Abstract
This paper presents an improved method for imaging in turbid water that combines the individual strengths of the quadrature lock-in discrimination (QLD) method and the retinex method. First, high-speed QLD is performed on the captured image sequence to extract the ballistic photons. Then, retinex image enhancement is applied to the QLD-processed images to improve their contrast. Next, the effect of uneven illumination is suppressed by using a bilateral gamma function for adaptive illumination correction. The experimental results show that the proposed approach achieves better enhancement than existing approaches, even in a high-turbidity environment.

1. Introduction

Underwater image restoration plays a crucial role in object detection, object recognition, and video tracking[1]. The visibility of underwater images is degraded by the scattering and absorption of the incident light field. The imaging quality deteriorates with increasing distance between the target and the sensor as well as with increasing turbidity.

In recent years, many de-scattering techniques have been put forth to cope with image degradation. These methods typically fall into two categories: image restoration methods based on a physical model and image recovery methods based on image enhancement[2,3]. Image restoration methods use the atmospheric scattering model or prior knowledge to reverse the degradation caused by light scattering; examples include the dark channel prior (DCP) method[4], polarization imaging[5], and intensity modulation of an active light source[6]. The second category builds on image enhancement algorithms, such as histogram equalization (HE)[7], contrast limited adaptive histogram equalization (CLAHE)[8], and the retinex algorithms[9]. These algorithms can improve image contrast, but they are ineffective at restoring the visibility range.

A simpler and more competitive approach is to use an intensity-modulated continuous-wave light source[6,10,11]. The theory builds on the hypothesis that the modulation frequency and phase of the captured ballistic photons, in contrast to those of the multiply scattered photons, remain the same as those of the incident modulated light source. This method requires demodulation of the received signal at the modulation frequency. Typical ballistic filtering requires modulation at high frequencies[12]. However, low modulation frequencies can be chosen, at the expense of capturing fewer ballistic or snake-like photons, to meet the requirements of available imaging systems[13]. Sudarsanam et al. used low frequencies to demonstrate imaging through spherical polydisperse scatterers, with the demodulation performed by quadrature lock-in discrimination (QLD)[13]. An instantaneous all-optical single-shot technique demonstrated demodulation at higher frequencies, from 5 kHz up to the radio-frequency range[14]. However, that technique has a few shortcomings, such as a smaller field of view, the increased cost of optical elements, and system complexity.

Imaging through real fog has been realized over hectometric distances to validate the performance of the QLD technique[15]. In our previous work, we developed a tracking method for active light beacons to realize underwater docking in highly turbid water. The QLD technique was employed to lock onto the blinking frequency of the light beacons located at the docking station and to successfully suppress the effect of unwanted light and stray noise at other frequencies[16]. Recently, imaging through flame and smoke was demonstrated by employing a blue light-emitting diode (LED) and the QLD algorithm[17]. Although the QLD algorithm is well studied for imaging through scattering media such as polydisperse scatterers, real fog, smoke, and flame, it has not yet been thoroughly investigated for underwater image restoration in which an LED is used to illuminate the target object.

In this Letter, we present a novel underwater image recovery method based on a cascaded approach. It benefits from the strengths of an image restoration method, the traditional QLD technique, to improve visibility and mitigate noise. A high-speed QLD method is proposed to make the cascaded approach practical for real-world systems. A well-known image enhancement method, the multiscale retinex (MSR) technique, is then employed to recover the contrast of the output image. In the MSR, a multiscale guided filter is used instead of the multiscale Gaussian filter to avoid halo artifacts at boundaries, information loss, and blurring of the output image. Additionally, an adaptive illumination correction algorithm is optimized and incorporated to overcome the non-uniform illumination in the output image. A weighted fusion is then applied to obtain the final enhanced output image.

2. Proposed Method

The proposed approach consists of three main steps, described in the following subsections.

2.1. High-speed quadrature lock-in discrimination algorithm

The quadrature lock-in discrimination technique works on the principle of a lock-in amplifier. Consider the captured light modulated at a frequency of fm (Hz) with modulation index M; the intensity at the receiver is written as Ir(t) = Iavg[1 + M sin(2π fm t)] for the average received intensity Iavg. When the signal is multiplied by a sine wave of the known modulation frequency and relative phase Δϕ, followed by time averaging over a few cycles, one obtains the in-phase component I = A cos Δϕ. Meanwhile, multiplication of the signal by a cosine wave of the known modulation frequency, followed by the same time averaging, gives rise to the quadrature component Q = A sin Δϕ. The two components can be squared and added to retrieve the amplitude A = √(I² + Q²), and the relative phase difference Δϕ = arctan(Q/I) between the source and the detector can also be obtained. In our experiments, we used a scientific complementary metal-oxide-semiconductor (sCMOS) camera to capture the signal as a sequence of 2D images of the scene over a certain length of time. The images are then processed offline by the QLD algorithm, using MATLAB as the programming tool, to reconstruct an output image by computing the amplitude of the received signal at each pixel.
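To illustrate this per-pixel demodulation, the following Python/NumPy sketch computes the amplitude and phase images from a stack of frames. The array layout, function name, and normalization are our own assumptions for illustration, not the paper's MATLAB implementation.

```python
import numpy as np

def qld_amplitude(frames, fs, fm):
    """Per-pixel quadrature lock-in demodulation of an image stack.

    frames : ndarray, shape (L, H, W), captured image sequence
    fs     : frame (sampling) rate in Hz
    fm     : known modulation frequency in Hz
    Returns the amplitude image A and the relative phase image dphi.
    """
    L = frames.shape[0]
    t = np.arange(L) / fs                        # time stamp of each frame
    ref_sin = np.sin(2 * np.pi * fm * t)         # in-phase reference
    ref_cos = np.cos(2 * np.pi * fm * t)         # quadrature reference

    # Multiply every pixel's time series by the references and time-average
    # over the whole record (an integer number of modulation cycles).
    I = np.tensordot(ref_sin, frames, axes=([0], [0])) / L   # proportional to A*cos(dphi)
    Q = np.tensordot(ref_cos, frames, axes=([0], [0])) / L   # proportional to A*sin(dphi)

    A = np.sqrt(I ** 2 + Q ** 2)      # amplitude of the modulated component
    dphi = np.arctan2(Q, I)           # relative phase between source and detector
    return A, dphi
```

For instance, with frames recorded at four samples per modulation cycle (as in the experiments below), `qld_amplitude(frames, fs=4 * 37, fm=37)` would return the amplitude image that serves as the input to the retinex stage.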

In the traditional QLD method, the frame rate (or sampling frequency fs) of the camera is N times the modulation frequency. Moreover, N should be greater than or equal to 2, i.e., fs = N × fm with N ≥ 2, in accordance with the Nyquist sampling criterion. When the multiple is four (i.e., N = 4), one period of the sine and cosine references reduces to the sequences S = [0, 1, 0, −1] and C = [1, 0, −1, 0], respectively, and the I and Q components can be written in the form of Eqs. (1) and (2)[18],

$$I = \mathrm{Im}_{M\times L} \times [\,0\;\;1\;\;0\;\;{-1}\;\;\cdots\;\;0\;\;1\;\;0\;\;{-1}\,]^{T}_{1\times L}, \tag{1}$$

$$Q = \mathrm{Im}_{M\times L} \times [\,1\;\;0\;\;{-1}\;\;0\;\;\cdots\;\;1\;\;0\;\;{-1}\;\;0\,]^{T}_{1\times L}, \tag{2}$$

where $\mathrm{Im}_{M\times L}$ is the image matrix, the subscript M indicates the total number of pixels in an image, and L is the number of captured images. The sine (S) and cosine (C) sequences are concatenated L/4 times in Eqs. (1) and (2), respectively, to calculate the I and Q components. The number of multiplications is reduced to half of that of the traditional QLD method for the same multiple of the modulation frequency (i.e., N = 4) by this orthogonal vector arithmetic. Additionally, the central processing unit (CPU) does not need to allocate memory to store the sine and cosine reference signals at the known frequency, which reduces the processor's burden and results in faster calculation. Higher values of N are found to have a negligible effect on the quality of the QLD-processed image.
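A minimal sketch of this high-speed variant for fs = 4 fm is given below; it assumes the frame stack is phase-aligned to the start of a modulation cycle, and the function name is illustrative.

```python
import numpy as np

def qld_amplitude_fast(frames):
    """High-speed QLD for fs = 4*fm (N = 4), following Eqs. (1) and (2).

    frames : ndarray, shape (L, H, W); L should be a multiple of 4 so that
             the record spans a whole number of modulation cycles.
    Returns the amplitude image sqrt(I^2 + Q^2).
    """
    L = frames.shape[0] - frames.shape[0] % 4        # keep whole cycles only
    cycles = frames[:L].astype(float).reshape(L // 4, 4, *frames.shape[1:])

    # S = [0, 1, 0, -1]: only the 2nd and 4th frame of each cycle contribute to I.
    I = (cycles[:, 1] - cycles[:, 3]).sum(axis=0)
    # C = [1, 0, -1, 0]: only the 1st and 3rd frame of each cycle contribute to Q.
    Q = (cycles[:, 0] - cycles[:, 2]).sum(axis=0)

    return np.sqrt(I ** 2 + Q ** 2)
```

Because the reference sequences contain only 0 and ±1, the matrix products of Eqs. (1) and (2) reduce to additions and subtractions and no reference waveform has to be stored, which is what makes this variant fast and memory-light.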

2.2. Multiscale retinex method

Since the underwater image suffers from low contrast and uneven illumination, the retinex method is used to overcome these problems[19]. Retinex theory states that the perceived image can be decomposed into illumination and reflection images, as shown in Eq. (3),

$$I_{QLD}(x,y) = I_L(x,y) \times R(x,y), \tag{3}$$

where $I_{QLD}(x,y)$ is the input image to the retinex method, and $I_L(x,y)$ and $R(x,y)$ denote the illumination and reflection images, respectively. The conventional retinex algorithm obtains the illumination image by Gaussian filtering of the perceived image. Later versions proposed the multiscale retinex (MSR) method, which employs a multiscale Gaussian filter with different weights to recover the local dynamics and contrast of the image more efficiently[20],

$$I_{L_i}(x,y) = I_{QLD}(x,y) * G_i(x,y),$$

where $*$ denotes the convolution operator and the illumination image at the ith scale, $I_{L_i}(x,y)$, is approximated from $I_{QLD}(x,y)$ by convolving it with a Gaussian filter of the ith scale. $G_i(x,y)$ is a Gaussian filter with standard deviation $\sigma_i$,

$$G_i(x,y) = \frac{1}{2\pi\sigma_i^2}\exp\!\left(-\frac{x^2 + y^2}{2\sigma_i^2}\right),$$

and the MSR output is

$$r_{MSR}(x,y) = \sum_{i=1}^{n} w_i \left\{ \log[I_{QLD}(x,y)] - \log[I_{L_i}(x,y)] \right\},$$

where $r_{MSR}(x,y)$ is the logarithm of $R_{MSR}(x,y)$, $w_i$ are weighting factors that add up to 1, and n is the number of scales. We use the MSR together with a guided filter to avoid the relatively low contrast and the halo artifacts at image edges produced by the traditional MSR method[21]. Three window sizes of the guided filter, 15 × 15, 25 × 25, and 40 × 40, corresponding to n = 3, are used in our experiments.
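The sketch below shows a single-channel MSR in which each illumination estimate comes from a guided filter (the QLD image acts as its own guide) instead of a Gaussian blur. The window sizes match the 15/25/40 settings above; the equal weights, the regularization constant, and the function names are our assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, win, eps=1e-3):
    """Guided image filter (He et al. [21]) with a box window of side `win`."""
    mean_I = uniform_filter(guide, win)
    mean_p = uniform_filter(src, win)
    corr_Ip = uniform_filter(guide * src, win)
    corr_II = uniform_filter(guide * guide, win)
    var_I = corr_II - mean_I ** 2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)                 # local linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, win) * guide + uniform_filter(b, win)

def msr_guided(img_qld, wins=(15, 25, 40), weights=(1/3, 1/3, 1/3)):
    """Multiscale retinex where each illumination estimate I_L,i is obtained
    with a (self-guided) guided filter rather than a Gaussian blur."""
    img = img_qld.astype(float) + 1.0          # offset to avoid log(0)
    r = np.zeros_like(img)
    for w, win in zip(weights, wins):
        illum = guided_filter(img, img, win)   # illumination at this scale
        r += w * (np.log(img) - np.log(np.maximum(illum, 1e-6)))
    return r                                   # log-domain reflectance r_MSR
```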

Gaussian filtering of the resulting illumination component is performed at the largest scale to remove noise. We used the simplest color balance algorithm[22] as a post-processing step; it clips a certain proportion of pixels on either side of the image histogram and stretches the remaining values to the widest possible range, [0, 255].
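A minimal sketch of this post-processing step follows; the clipping proportion (1% in total here) is an assumed value, as the paper does not state the percentage used.

```python
import numpy as np

def simplest_color_balance(img, clip_percent=1.0):
    """Simplest color balance [22]: clip clip_percent/2 % of pixels at each
    end of the histogram and stretch the rest to the full [0, 255] range."""
    lo = np.percentile(img, clip_percent / 2)
    hi = np.percentile(img, 100 - clip_percent / 2)
    stretched = np.clip(img.astype(float), lo, hi)
    return 255.0 * (stretched - lo) / (hi - lo + 1e-9)
```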

2.3. Improved bilateral gamma function for adaptive intensity correction

The image resulting from the previous step is still affected by uneven illumination, especially at high turbidity levels. To reduce this effect, we employ an improved bilateral gamma function in the adaptive intensity correction algorithm[23] to adaptively update the illumination component from the previous step. The improved bilateral gamma function is given by

$$O_h(x,y) = 255\cdot\left(\frac{R_{MSR}(x,y)}{255}\right)^{\gamma},$$

$$O_l(x,y) = 255\cdot\left[1 - \left(\frac{255 - R_{MSR}(x,y)}{255}\right)^{\gamma}\right],$$

$$\gamma = \gamma_0^{\,|\mu - I_L(x,y)|/\mu},$$

$$I_{adpt}(x,y) = \alpha\cdot O_h(x,y) + (1-\alpha)\cdot O_l(x,y), \qquad \alpha = \begin{cases} 1, & I_L(x,y) \leq \mu, \\ 0, & I_L(x,y) > \mu, \end{cases}$$

where μ is the mean of the illumination image and α is a binary piecewise correction parameter taking the value 0 or 1. When a pixel (x, y) of the illumination image is less than or equal to μ, the output of the improved bilateral gamma function $I_{adpt}(x,y)$ is the gamma-corrected value $O_h(x,y)$, which means that the intensity of low-illumination pixels is increased. If the pixel value of the illumination image is greater than μ, the output is $O_l(x,y)$, which reduces the intensity of high-illumination pixels. The parameter γ varies dynamically and is controlled by the distribution of the illumination image, which enables adaptive correction of the non-uniformly illuminated underwater image. The base $\gamma_0$ is chosen to be 0.8, since this value gives the best illumination distribution at every turbidity level and for the different target objects in our experiments.
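A minimal sketch of this correction is given below, assuming both inputs have already been rescaled to [0, 255] (the scaling step is not spelled out in the paper) and using the equations above.

```python
import numpy as np

def adaptive_gamma_correction(R_msr, I_L, gamma0=0.8):
    """Improved bilateral gamma correction steered by the illumination image.

    R_msr : reflectance (MSR output) scaled to [0, 255]
    I_L   : illumination image on the same scale
    """
    mu = I_L.mean()                                    # mean illumination
    gamma = gamma0 ** (np.abs(mu - I_L) / mu)          # per-pixel exponent

    O_h = 255.0 * (R_msr / 255.0) ** gamma                        # lifts dark regions
    O_l = 255.0 * (1.0 - ((255.0 - R_msr) / 255.0) ** gamma)      # suppresses bright regions

    alpha = (I_L <= mu).astype(float)                  # binary selection mask
    return alpha * O_h + (1.0 - alpha) * O_l           # I_adpt
```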

We propose an adaptive illumination correction algorithm for underwater images based on the bilateral gamma function, using both the reflection and illumination images. In parallel, the gamma-corrected illumination image $I_{L_c}(x,y)$ is computed and recombined with the reflection image to restore the naturalness of the image[24]. The corrected reflection image $I_c(x,y)$ is expressed as

$$I_c(x,y) = R_{MSR}(x,y) \times I_{L_c}(x,y).$$

The final output image is the weighted fusion of the two illumination-corrected images,

$$I_{out}(x,y) = \beta\cdot I_c(x,y) + (1-\beta)\cdot I_{adpt}(x,y).$$
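As an illustrative sketch (not the authors' code), the two correction paths can be combined as follows; normalizing the reflectance to [0, 1] before recombining it with the illumination is our assumption, since the paper does not state the scaling.

```python
import numpy as np

def fuse_outputs(R_msr, I_Lc, I_adpt, beta=0.5):
    """Weighted fusion of the two illumination-corrected results.

    R_msr : MSR reflectance, assumed normalized to [0, 1]
    I_Lc  : gamma-corrected illumination image in [0, 255]
    I_adpt: output of the bilateral gamma correction in [0, 255]
    """
    I_c = R_msr * I_Lc                            # corrected reflection image
    out = beta * I_c + (1.0 - beta) * I_adpt      # weighted fusion
    return np.clip(out, 0, 255)                   # added safeguard for display range
```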

We chose β=0.5 in our experiments. The flowchart of the proposed method is shown in Fig. 1.

Fig. 1. Flowchart of the proposed method.


3. Experiments and Results

The experimental setup is shown in Fig. 2. We used a 625-nm red LED (M625L4) as the light source, and the current through the LED is modulated (modulation index M = 1.43) using the internal sinusoidal modulation function. Two objects, a Rubik's cube and a rubber toy, are used as underwater targets, with the corresponding LED modulation frequencies adjusted to 37 Hz and 38 Hz, respectively. The modulated LED illuminates the target, and the image formed by the light reflected from the target is captured using a camera (16-bit Dhyana 400BSI sCMOS). The volume of the transparent water tank is 38 cm × 25 cm × 26 cm. We added up to 21 mL of milk to the water tank to simulate a high-turbidity environment. The distance between the target object and the camera is 90 cm. The frame rate of the camera is set to four times the modulation frequency of the LED, and images are captured for a duration of 2 s. We did not observe any improvement in the final results for a longer time series.
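As a worked example of these settings, for the Rubik's cube target (fm = 37 Hz) the camera runs at fs = 4 × 37 = 148 frames per second, so the 2-s record contains 148 × 2 = 296 frames, i.e., 74 complete modulation cycles.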

Fig. 2. Experimental setup.


To demonstrate that our approach can realize image restoration, we used the rubber toy, which presents a multi-level gray image and is more prone to degradation caused by noise and turbidity. The performance comparison of our approach with other traditional image restoration and image enhancement methods is shown in Figs. 3 and 5. It is important to mention that time averaging over 100 images is performed to minimize the effect of noise before applying the traditional methods. It can be seen in Fig. 3 that the grayscale span of the CLAHE output image is more widely distributed and the overall contrast is enhanced. Despite this gain in visibility, the problems of uneven illumination and poor performance on high-turbidity images remain. The guided filter used in our MSR stage contributes high contrast and eliminates halo artifacts along the boundaries. The adaptive gamma correction adjusts the illumination adaptively by increasing the intensity in low-illumination areas and decreasing it in high-illumination areas. Thus, the shadow caused by uneven illumination is largely eliminated, and the results show that our method is more effective in a high-turbidity environment. We select a zoomed-in region of the high-turbidity image, and the intensity profiles along the colored dashed lines in the zoomed-in view of Fig. 3 are plotted in Fig. 4. For a fair comparison, the minimum grayscale value of each curve is subtracted so that its lowest point lies on the horizontal axis. The intensity profile of a clear image (captured in clear water) is also plotted. It can be seen that the trend of the curve for our method is similar to that of the clear image. Furthermore, the proposed approach has the highest contrast and signal-to-noise ratio (SNR) compared with the other methods. The MSR method suffers from loss of detail, bleaching of image information, and lower contrast compared to our method, and the DCP and DehazeNet[25] suffer from low contrast and uneven illumination.

Fig. 3. Comparison results for the images captured in low-turbidity (first row) and high-turbidity (second row) and their zoomed-in views (of high-turbidity in the third row) of different methods using the rubber toy as a target object.


Fig. 4. Intensity profiles at colored dashed lines in the zoomed-in view of Fig. 3.


Fig. 5. Comparison results for the images captured in low-turbidity (first row) and high-turbidity (second row) and their zoomed-in views (of high-turbidity in the third row) of different methods using the Rubik’s cube as a target object.


The universality of the proposed method is verified by imaging the Rubik's cube, with handwritten words on it, as the target. The results of different methods at different turbidities and their zoomed-in views are shown in Fig. 5. We compute various image quality metrics on the zoomed-in views of Fig. 5 to quantify and compare the different methods in the absence of reference images. The metrics include the standard deviation (STD), the peak signal-to-noise ratio (PSNR), the average gradient (AG)[26], the entropy[27], the measure of enhancement (EME)[28], the blind/referenceless image spatial quality evaluator (BRISQUE)[29], and the natural image quality evaluator (NIQE)[30]. The values of these metrics are reported in Table 1. Our method achieves the best scores for most of the metrics, which further supports its effectiveness over the other traditional methods.
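For reference, simple implementations of the two most elementary of these metrics are sketched below; the normalization conventions (e.g., the factor of 2 inside the AG square root) follow common definitions and may differ from the implementation used to produce Table 1.

```python
import numpy as np

def entropy(img):
    """Shannon entropy of the gray-level histogram of an 8-bit image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Average gradient (AG): mean magnitude of the local intensity gradient."""
    gx, gy = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```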

Table 1. Quantitative Comparison of Zoomed-in View of Fig. 5

Method            Entropy   AG            STD     PSNR (dB)   BRISQUE   NIQE    EME
Intensity image   6.69      3.35 × 10⁻²   0.105   11.97       40.12     19.49   3.86
QLD               6.82      1.7 × 10⁻³    0.124   11.03       43.21     12.34   0.47
CLAHE             5.997     1.6 × 10⁻²    0.072   12.84       35.182    11.7    2.79
DCP               7.11      1.5 × 10⁻²    0.173   8.128       35.956    11.87   4.877
MSR               7.35      1.05 × 10⁻¹   0.194   14.49       42.301    14.39   18.95
DehazeNet         7.51      7.5 × 10⁻³    0.199   10.316      23.594    9.617   1.952
Ours              7.17      6.51 × 10⁻²   0.26    16.93       33.28     4.798   6.49


The performance of the proposed method is also evaluated for the modulated light source with different modulation indexes. The zoomed-in parts of the high-turbidity images of the Rubik's cube from Fig. 5 are used for this evaluation, with the PSNR as the metric. Two modulation indexes are compared: M = 1.43 and M = 0.66. The PSNR values of the recovered images are shown at the top of the corresponding images in Fig. 6. It is clear that the smaller modulation index leads to poorer recovery of high-turbidity images; the PSNR decreases by 1.42 dB. We conclude that a higher modulation index helps obtain a better-quality recovered image in our experiments.

Fig. 6. Comparison results for different modulation indexes in a high-turbidity environment.


4. Conclusion

We presented a three-stage processing method for recovering underwater images. First, we preprocess the series of images of the scene, illuminated with a modulated light source, using the high-speed QLD technique at the known modulation frequency. The QLD technique helps reduce the noise caused by turbidity and increases the visibility of the scene by extracting the small number of ballistic photons. Next, we perform retinex enhancement with a guided filter to separate the illumination component, which restores the contrast of the image and reduces uneven illumination at the cost of increased processing noise. The proposed approach makes better use of the retinex method to improve contrast by operating on the QLD-processed image instead of the original underwater image. Finally, the bilateral gamma function for adaptive illumination correction improves visual quality by reducing over-exposure effects and thereby preserving details in the image. The results show that our method has distinct benefits in contrast enhancement, detail recovery, uneven illumination correction, and noise reduction in both low- and high-turbidity environments.

References

[1] F. M. Caimi, D. M. Kocak, F. Dalgleish, J. Watson, "Underwater imaging and optics: recent advances," in OCEANS (2008), p. 1.

[2] Y. Xu, J. Wen, L. Fei, Z. Zhang. Review of video and image defogging algorithms and related studies on image restoration and enhancement. IEEE Access, 2016, 4: 165.

[3] K. Hu, C. Weng, Y. Zhang, J. Jin, Q. Xia. An overview of underwater vision enhancement: from traditional methods to recent deep learning. J. Mar. Sci. Eng., 2022, 10: 241.

[4] K. He, J. Sun, X. Tang, "Single image haze removal using dark channel prior," in IEEE Conference on Computer Vision and Pattern Recognition (2009), p. 1956.

[5] Y. Wei, P. Han, F. Liu, J. Liu, X. Shao. Polarization descattering imaging: a solution for nonuniform polarization characteristics of a target surface. Chin. Opt. Lett., 2021, 19: 111101.

[6] H. Ramachandran, A. Narayanan. Two-dimensional imaging through turbid media using a continuous wave light source. Opt. Commun., 1998, 154: 255.

[7] A. K. Jain, Fundamentals of Digital Image Processing (Prentice Hall, 1989).

[8] K. J. Zuiderveld, "Contrast limited adaptive histogram equalization," in Graphics Gems (1994), p. 474.

[9] D. J. Jobson, Z. Rahman, G. A. Woodell. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process., 1997, 6: 965.

[10] L. Mullen, A. Laux, B. Concannon, E. P. Zege, I. L. Katsev, A. S. Prikhach. Amplitude-modulated laser imager. Appl. Opt., 2004, 43: 3874.

[11] L. Mullen, A. Laux, B. Cochenour, E. P. Zege, I. L. Katsev, A. S. Prikhach. Demodulation techniques for the amplitude modulated laser imager. Appl. Opt., 2007, 46: 7374.

[12] S. Panigrahi, J. Fade, H. Ramachandran, M. Alouini. Theoretical optimal modulation frequencies for scattering parameter estimation and ballistic photon filtering in diffusing media. Opt. Express, 2016, 24: 16066.

[13] S. Sudarsanam, J. Mathew, S. Panigrahi, J. Fade, M. Alouini, H. Ramachandran. Real-time imaging through strongly scattering media: seeing through turbid media, instantly. Sci. Rep., 2016, 6: 25033.

[14] S. Panigrahi, J. Fade, R. Agaisse, H. Ramachandran, M. Alouini. An all-optical technique enables instantaneous single-shot demodulation of images at high frequency. Nat. Commun., 2020, 11: 549.

[15] S. Kumar, B. Debnath, M. S. Meena, J. Fade, S. Dhar, M. Alouini, F. Bretenaker, H. Ramachandran. Imaging through fog using quadrature lock-in discrimination. OSA Contin., 2021, 4: 1649.

[16] R. T. Amjad, M. Mane, A. Ali Amjad, W. Ge, Z. Zhang, J. Xu. Tracking of light beacons in highly turbid water and application to underwater docking. Proc. SPIE, 2022, 12118: 121180A.

[17] B. Debnath, J. A. Dharmadhikari, M. S. Meena, H. Ramachandran, A. K. Dharmadhikari. Improved imaging through flame and smoke using blue LED and quadrature lock-in discrimination algorithm. Opt. Lasers Eng., 2022, 154: 107045.

[18] G. Li, M. Zhou, F. He, L. Lin. A novel algorithm combining oversampling and digital lock-in amplifier of high speed and precision. Rev. Sci. Instrum., 2011, 82: 095106.

[19] A. B. Petro, C. Sbert, J.-M. Morel. Multiscale retinex. Image Process. Line, 2014, 4: 71.

[20] S. Zhang, T. Wang, J. Dong, H. Yu. Underwater image enhancement via extended multi-scale retinex. Neurocomputing, 2017, 245: 1.

[21] K. He, J. Sun, X. Tang. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell., 2013, 35: 1397.

[22] N. Limare, J.-L. Lisani, J.-M. Morel, A. B. Petro, C. Sbert. Simplest color balance. Image Process. Line, 2011, 1: 297.

[23] D. Wang, W. Yan, T. Zhu, Y. Xie, H. Song, X. Hu, "An adaptive correction algorithm for non-uniform illumination panoramic images based on the improved bilateral gamma function," in International Conference on Digital Image Computing: Techniques and Applications (DICTA) (2017), p. 1.

[24] J. Zhou, D. Zhang, P. Zou, W. Zhang, W. Zhang. Retinex-based Laplacian pyramid method for image defogging. IEEE Access, 2019, 7: 122459.

[25] B. Cai, X. Xu, K. Jia, C. Qing, D. Tao. DehazeNet: an end-to-end system for single image haze removal. IEEE Trans. Image Process., 2016, 25: 5187.

[26] N. He, J.-B. Wang, L.-L. Zhang, K. Lu. An improved fractional-order differentiation model for image denoising. Signal Process., 2015, 112: 180.

[27] R. R. Coifman, M. V. Wickerhauser. Entropy-based algorithms for best basis selection. IEEE Trans. Inf. Theory, 1992, 38: 713.

[28] S. S. Agaian, K. Panetta, A. M. Grigoryan. Transform-based image enhancement algorithms with performance measure. IEEE Trans. Image Process., 2001, 10: 367.

[29] A. Mittal, A. K. Moorthy, A. C. Bovik. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process., 2012, 21: 4695.

[30] A. Mittal, R. Soundararajan, A. C. Bovik. Making a “completely blind” image quality analyzer. IEEE Signal Process Lett., 2013, 20: 209.

Riffat Tehseen, Amjad Ali, Mithilesh Mane, Wenmin Ge, Yanlong Li, Zejun Zhang, Jing Xu. Enhanced imaging through turbid water based on quadrature lock-in discrimination and retinex aided by adaptive gamma function for illumination correction[J]. Chinese Optics Letters, 2023, 21(10): 101102.
