Chinese Optics Letters, 2016, 14 (7): 070901, Published Online: Aug. 3, 2018   

Fast reconstruction of digital holograms for extended depths of field

Author Affiliations
1 Department of Electronic Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong SAR, China
2 Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA 24061, USA
3 Department of Optical Engineering, Sejong University, 209, Neungdong-ro, Gwangjin-gu, Seoul, South Korea
Abstract
Past research has demonstrated that a static, three-dimensional (3D) object scene can be directly recorded as a complex digital hologram. However, numerical reconstruction of the object scene, which may comprise multiple sections located at unknown distances from the hologram, is a complicated and computation-intensive process. To the best of our knowledge, we propose, for the first time, a low-complexity method that is capable of reconstructing a complex hologram such that sections at different depths in the 3D object scene are automatically reconstructed at the correct focal distances and merged into a single image with an extended depth of field. We demonstrate an order-of-magnitude increase in the depth of field for binary objects. With the use of a graphics processing unit, the reconstruction of a 512×512 complex hologram can be accomplished in about 100 ms, equivalent to around 10 frames per second.

Digital holography[1] is a powerful technique that enables a three-dimensional (3D) object scene to be recorded as a complex digital hologram. Phase-shifting holography (PSH) and optical-scanning holography (OSH) are the common methods used to obtain complex holograms[2,3]. PSH is a coherent technique, whereas OSH can be configured to operate in either a coherent or an incoherent mode. OSH has been applied in many disciplines, such as remote sensing[4], fluorescence microscopy[5,6], and 3D image recognition[7]. In either case, a complex digital hologram obtained by PSH or OSH can be reconstructed with a filter matched to a particular depth, providing an in-focus image of the depth section located at that distance from the hologram. However, apart from the capture range (which is determined by the settings of the digital holographic system), there is in general no prior information on the location of the object points in 3D space. Hence, methods that assume a certain geometrical distribution of the object scene, such as the tilted plane approach[8], are not applicable. The rest of the object scene that does not reside in the reconstructed section will appear as a de-focused haze.

An effective method to overcome this problem, referred to as “blind sectional image reconstruction,” has been reported in Refs. [9,10]. In this approach, filters matched to a range of depth distances are applied to reconstruct discrete image sections from the hologram at regularly spaced focal distances. Each section corresponds to a physical plane of the object scene that is either empty or houses at least one object point. Next, edge detection and analysis are performed to associate each non-empty object plane with one of the reconstructed sections, a process known as “focus estimation.” Finally, an in-focus image of the object points represented in each section is reconstructed through an iterative optimization process. Integrating the reconstructed images in all the sections results in a view of the 3D object scene with an extended depth of field. Recently, the optimization process has been simplified with an iterative shrinkage-thresholding algorithm[11]. Despite the success achieved in Refs. [9–11], the iterative optimization process is complicated and computationally intensive. A faster, non-iterative method is reported in Ref. [12]; instead of employing the iterative optimization process, an edge analysis technique is applied to extract the in-focus image in each reconstructed non-empty section. The downside of the methods in Refs. [9,10,12] is that edge analysis is employed in focus estimation, based on the assumption that the edge count in a local region is weakest when an image area is in focus. However, edge detection and analysis are sensitive to the image content, and their effectiveness depends on the correct choice of parameters, such as the threshold value for deciding the existence of an edge point. The problem is even more severe in Ref. [12], as a similar strategy is employed to select the in-focus image contents from the correct non-empty sections.

In this Letter, we report a novel technique to reconstruct a 3D image of the object scene with an extended depth of field from a complex hologram generated by OSH. Our method can be divided into two stages. First, a sequence of uniformly spaced discrete image sections, each corresponding to a vertical plane of the object scene located at a unique distance from the hologram, is reconstructed. Second, a decision rule is applied to select, for each pixel in the reconstructed image, a corresponding in-focus pixel from one of the image sections.

Our proposed method is described as follows. To begin with, the following terminology is adopted. Let $I_R(x,y)$ and $S_i(x,y)|_{0\le i<N}$ denote the reconstructed image (to be determined) and the $i$th reconstructed section, respectively. There are a total of $N$ evenly spaced sections, each corresponding to a vertical plane in the object scene. A pixel in $S_i(x,y)$ is represented by a complex number $a_i(x,y)+jb_i(x,y)$.

In the first stage of our method, each section $S_i(x,y)$ is derived with a filter matched to a particular depth, which is done by convolving the complex hologram $H(x,y)$ with the conjugate of a Fresnel zone plate $F(x,y;z_i)$ at distance $z_i$ as

$$S_i(x,y) = H(x,y) \otimes F^*(x,y;z_i), \tag{1}$$

where $\otimes$ denotes convolution over the $x$ and $y$ coordinates. The convolution operation can be realized in the frequency domain as

$$S_i(x,y) = \mathrm{FT}^{-1}\big[\tilde{S}_i(\omega_x,\omega_y)\big] = \mathrm{FT}^{-1}\big\{\mathrm{FT}[H(x,y)] \times \mathrm{FT}[F^*(x,y;z_i)]\big\}, \tag{2}$$

where $\tilde{S}_i(\omega_x,\omega_y)$ is the Fourier transform of $S_i(x,y)$, with $\omega_x$ and $\omega_y$ denoting spatial frequencies. $\mathrm{FT}[\cdot]$ and $\mathrm{FT}^{-1}[\cdot]$ denote the forward and the inverse Fourier transform operations, respectively.
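To make the first stage concrete, the sketch below reconstructs the stack of sections $S_i(x,y)$ from a complex hologram with NumPy, following Eq. (2). The explicit sampled form of the Fresnel zone plate $F(x,y;z)$ is not reproduced in this Letter, so the standard paraxial free-space kernel is assumed here; the pixel pitch, wavelength, and depth range mirror values quoted later in the text, and the function names are ours.

```python
import numpy as np

def fresnel_zone_plate(shape, pitch, wavelength, z):
    """Assumed paraxial free-space kernel F(x, y; z) sampled on the hologram grid."""
    rows, cols = shape
    x = (np.arange(cols) - cols // 2) * pitch
    y = (np.arange(rows) - rows // 2) * pitch
    X, Y = np.meshgrid(x, y)
    k = 2.0 * np.pi / wavelength
    return np.exp(1j * k * (X**2 + Y**2) / (2.0 * z)) / (1j * wavelength * z)

def reconstruct_sections(hologram, depths, pitch, wavelength):
    """Eq. (2): S_i = FT^-1{ FT[H] x FT[F*(x, y; z_i)] } for every depth z_i."""
    H_ft = np.fft.fft2(hologram)          # forward transform of H(x, y), computed once
    sections = []
    for z in depths:
        F_conj_ft = np.fft.fft2(np.conj(fresnel_zone_plate(hologram.shape, pitch, wavelength, z)))
        sections.append(np.fft.ifft2(H_ft * F_conj_ft))   # circular convolution via the FFT
    return np.stack(sections)             # complex stack of shape (N, rows, cols)

# Illustrative settings taken from the text: 512 x 512 pixels of 10.583 um,
# 633 nm wavelength, and a 0.08-0.13 m capture range sampled every 0.001 m (N = 51).
depths = 0.08 + 0.001 * np.arange(51)
```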

In the second stage, a decision rule is applied to select pixels from the reconstructed sections to form $I_R(x,y)$. The decision is based on the assumption that the intensity of each object point in the scene is positive real. As such, when an object point is reconstructed at the correct focal distance, the real part of its intensity should be positive, and the imaginary part should be close to zero. Although the above criteria can be used to decide whether a pixel in a section should be accepted into the reconstructed image, they are ambiguous, as sections that are close to the focal distance are also likely to satisfy the acceptance criteria. In other words, for a given position $(x,y)$, there is in general more than one section in which the imaginary part of the pixel is small and can be selected for $I_R(x,y)$. To overcome this problem, we propose to derive the intensity of each pixel in $I_R(x,y)$ as the average value of the corresponding pixels from all the sections that satisfy the acceptance criteria. Mathematically, we have

$$I_R(x,y) = \frac{K}{M(x,y)} \sum_{i=0}^{N-1} a_i(x,y)\, d_i(x,y), \tag{3}$$

where

$$d_i(x,y) = \begin{cases} 1 & \big(|b_i(x,y)| < thres\big) \;\text{AND}\; \big(a_i(x,y) > 0\big) \\ 0 & \text{otherwise} \end{cases}$$

is the acceptance criterion, and $M(x,y) = \sum_{i=0}^{N-1} d_i(x,y)$.

In Eq. (3), $K$ is a constant that normalizes the pixel values of the reconstructed image $I_R(x,y)$ to the range [0, 255] so that it can be displayed on a typical computer monitor. The term $thres$ is a threshold set to 0.2 of the peak value of $S_i(x,y)$; if each pixel $b_i(x,y)$ in $S_i(x,y)$ is normalized to the range $[-1,+1]$, $thres$ is simply set to 0.2. In general, the reconstructed image becomes progressively blurrier with increasing values of $thres$, as more pixels from reconstruction planes that are farther from the focused plane pass the acceptance criteria and are averaged in Eq. (3). On the other hand, if $thres$ is too small, some pixels in the focused plane may be discarded. From our evaluation, a threshold value between 0.2 and 0.3 is applicable to all the hologram samples.
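The decision rule of Eq. (3) can likewise be sketched in a few lines of NumPy. The snippet below assumes the complex section stack produced by the previous sketch, with each section normalized so that its real and imaginary parts lie within $[-1,+1]$, and uses the threshold of 0.2 discussed above; the constant $K$ is absorbed into a final rescaling to [0, 255].

```python
import numpy as np

def fuse_sections(sections, thres=0.2):
    """Eq. (3): average the accepted (in-focus) pixels over all N sections."""
    a = sections.real                              # a_i(x, y)
    b = sections.imag                              # b_i(x, y)
    d = (np.abs(b) < thres) & (a > 0)              # acceptance criterion d_i(x, y)
    M = d.sum(axis=0)                              # number of accepted sections per pixel
    total = (a * d).sum(axis=0)
    I = np.divide(total, M, out=np.zeros_like(total), where=M > 0)
    peak = I.max()                                 # role of the normalization constant K
    return 255.0 * I / peak if peak > 0 else I

# Usage with the parameters quoted in the text (values are illustrative):
# sections = reconstruct_sections(H, depths, pitch=10.583e-6, wavelength=633e-9)
# sections /= np.abs(sections).max(axis=(1, 2), keepdims=True)   # normalize each section
# I_R = fuse_sections(sections, thres=0.2)
```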

In the simulations, we have used the standard “Lenna” image, shown in Fig. 1(a), placed 0.1 m from the holographic recording plane. The cosine and sine holograms, defined as the real and imaginary parts of the complex hologram, i.e., Re[H(x,y)] and Im[H(x,y)], are shown in Figs. 1(b) and 1(c), respectively. The result of reconstructing the hologram with a filter matched to 0.1 m, based on Eq. (2), is shown in Fig. 2(a). It can be seen that an in-focus image is reconstructed. We then apply our proposed algorithm. A capture range between 0.08 and 0.13 m is employed, partitioned into 51 sections separated by 0.001 m. The proposed algorithm is applied to reconstruct the complex hologram without prior knowledge of the exact focal distance, and the result is shown in Fig. 2(b). We observe that, apart from some slight blurriness, the reconstructed image is similar to the one obtained in Fig. 2(a).

Fig. 1. (a) Image “Lenna.” (b) Cosine hologram of the “Lenna” image, and (c) sine hologram of the “Lenna” image.


Fig. 2. (a) Reconstructed image, at the focal distance of 0.1 m, based on Eq. (2). (b) Reconstructed image, based on our proposed algorithm.


The blurriness is caused by the averaging of corresponding pixels from multiple planes, as expressed in Eq. (3). This amounts to low-pass filtering of the reconstructed image, which attenuates its high-frequency components. On the bright side, the blurriness also reduces the high-frequency noise introduced in the hologram capture process, which makes the reconstructed image in Fig. 2(b) appear less obscure than the one in Fig. 2(a).

Subsequently, we captured a second complex hologram by OSH. Two holograms, the cosine and sine holograms, are generated in a single two-dimensional active laser scan of a 3D object. The 3D object consists of two planar transparencies, representing a “Star” and a “Heart” symbol, located at about 0.1 and 0.12 m from the holographic recording system, respectively. The holograms are acquired with a He–Ne laser of wavelength 633 nm. The holographic recording system has a numerical aperture of NA ≈ 0.025. Each hologram is composed of 512 rows and 512 columns with a square pixel size of 10.583 μm × 10.583 μm. A complex hologram is then constructed digitally from the sine and cosine holograms. The physical size of the sample is around 0.5 cm × 0.5 cm. In our calculations, only the capture range of the OSH system is known, and the depths of the object points are not provided. The cosine and sine holograms are shown in Figs. 3(a) and 3(b). The results of applying filters matched to the two depths based on Eq. (2) are shown in Figs. 4(a) and 4(b). It can be seen that each symbol is reconstructed as an in-focus image in its corresponding section, with the other symbol appearing as a de-focused image. Next, we apply our proposed algorithm to reconstruct the complex hologram. A capture range between 0.08 and 0.13 m is employed and is partitioned into 51 sections separated by 0.001 m. The result is shown in Fig. 4(c). It can be seen that both symbols are clearly reconstructed in focus simultaneously.

Fig. 3. (a) Cosine hologram of the “Star-Heart” pattern and (b) sine hologram of the “Star-Heart” pattern.


Fig. 4. (a) Reconstructed image, at the focal distance 0.099 m (focusing on the Star symbol), based on Eq. (2). (b) Reconstructed image, at the focal distance 0.124 m (focusing on the Heart symbol), based on Eq. (2). (c) Image reconstructed from the “Star-Heart” complex hologram with our proposed method.


To further demonstrate that our proposed method can reconstruct all images falling within the capture range, we superimpose the “Lenna” and the “Star-Heart” complex holograms. The reconstructed image obtained with our proposed algorithm is shown in Fig. 5. We observe that the “Lenna” image and both symbols in the “Star-Heart” pattern are successfully recovered. As explained previously, due to the low-pass filtering effect imposed by Eq. (3), the reconstructed image in Fig. 5 is less noisy and appears less obscure than the one in Fig. 2(a).

Fig. 5. Image reconstructed from the combined “Lenna” and “Star-Heart” complex holograms with our proposed method.


Referring to Eqs. (2) and (3), reconstructing a complex hologram with our proposed algorithm mainly involves the Fourier transform of the complex hologram $H(x,y)$ and the calculation of $S_i(x,y)$ for each section. These two stages are realized with the compute unified device architecture (CUDA) language and executed on a PC (Intel Q6600 CPU at 2.4 GHz) equipped with an Nvidia GeForce GTX 260+ graphics processing unit. A breakdown of the time taken by each process, for a hologram size of 512×512 pixels, is listed in Table 1.

Table 1. Computation Time Involved in Each Stage of the Proposed Algorithm, N=51

Description                                                                          Computation time
Fourier transform of $H(x,y)$, 512 × 512 pixels                                      2 ms
$\tilde{S}_i(\omega_x,\omega_y) = \mathrm{FT}[H(x,y)] \times \mathrm{FT}[F^*(x,y;z_i)]$, $0 \le i < N$     Negligible^a
Inverse Fourier transform of $\tilde{S}_i(\omega_x,\omega_y)$, $0 \le i < N$         $2N$ ms
Selection of pixels from sections, Eq. (3)                                           Negligible^a
Total                                                                                $2(N+1)$ ms


According to the breakdown in Table 1, reconstructing a complex hologram of 512×512 pixels with 51 sections will only require about 104 ms, equivalent to around 10 frames per second.

We have proposed a fast algorithm for reconstructing a complex hologram with an extended depth of field in 3D imaging. To verify the effectiveness of the proposed algorithm, we reconstruct a complex hologram obtained by OSH. The numerical aperture of the holographic recording system is NA ≈ 0.025, giving a depth of field of $\Delta z \approx 2\lambda/\mathrm{NA}^2 = 2.02\,\mathrm{mm}$ at 633 nm[3]. For the two planar transparencies (the Star and the Heart patterns) separated by 2.5 cm and imaged in focus simultaneously, the increase in the depth of field is about $2.5\,\mathrm{cm}/2.02\,\mathrm{mm} \approx 12$ times, which is impressive for simple binary objects. The proposed algorithm should be tested against more realistic, diffusely reflecting objects in the future.
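As a quick numerical check of the figures above, the following snippet simply evaluates the conventional depth-of-field estimate $\Delta z \approx 2\lambda/\mathrm{NA}^2$ and the resulting extension factor for the wavelength, numerical aperture, and plane separation quoted in this Letter.

```python
wavelength = 633e-9     # He-Ne wavelength (m)
NA = 0.025              # numerical aperture of the OSH recording system
separation = 0.025      # axial separation of the Star and Heart planes (m)

dof = 2.0 * wavelength / NA**2        # ~2e-3 m, i.e. roughly 2 mm
extension = separation / dof          # roughly a 12-fold increase in depth of field
print(f"depth of field = {dof * 1e3:.2f} mm, extension = {extension:.1f}x")
```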

P. W. M. Tsang, T.-C. Poon, T. Kim, Y. S. Kim. Fast reconstruction of digital holograms for extended depths of field[J]. Chinese Optics Letters, 2016, 14(7): 070901.
