Photonics Research, 2015, 3 (1): 01000019, Published Online: Apr. 15, 2015   

Incoherent Fourier ptychographic photography using structured light

Siyuan Dong, Pariksheet Nanda, Kaikai Guo, Jun Liao, and Guoan Zheng

Author Affiliations
1 Biomedical Engineering, University of Connecticut, Storrs, Connecticut 06269, USA
2 Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut 06269, USA
Abstract
Controlling photographic illumination in a structured fashion is a common practice in computational photography and image-based rendering. Here we introduce an incoherent photographic imaging approach, termed Fourier ptychographic photography, that uses nonuniform structured light for super-resolution imaging. In this approach, frequency mixing between the object and the structured light shifts the high-frequency object information to the passband of the photographic lens. Therefore, the recorded intensity images contain object information that is beyond the cutoff frequency of the collection optics. Based on multiple images acquired under different structured light patterns, we used the Fourier ptychographic algorithm to recover the super-resolution object image and the unknown illumination pattern. We demonstrated the reported approach by imaging various objects, including a resolution target, a quick response code, a dollar bill, an insect, and a color leaf. The reported approach may find applications in photographic imaging settings, remote sensing, and imaging radar. It may also provide new insights for high-resolution imaging by shifting the focus from the collection optics to the generation of structured light.

1. INTRODUCTION

Structured light illumination is important in computational photography and image-based rendering. It has been widely used in object recognition [1,2], 3D shape recovery [3], 4D light field rendering [4], synthetic aperture confocal imaging [5], and dual photography [6]. For example, Microsoft Kinect uses an IR projector to cast a dense structured light pattern on the sample and a postprocessing algorithm to recover the 3D image of the object. A 3D laser scanner projects a pattern of parallel stripes on the object and recovers the 3D information based on the geometrical deformation of the stripes. In dual photography [6], a video projector is used to generate multiplexed patterns for sample illumination. The acquired information is then used to recover the transport matrix describing how light from each pixel of the projector arrives at each pixel of the camera. This matrix can be used to generate a new photorealistic image from the point of view of the projector.

The above-mentioned research directions and applications share a similar strategy on the system design: projecting multiple light patterns onto the sample and recovering new information based on the acquired images. However, in these approaches, the frequency mixing between the object and the illumination pattern has not been considered in the reconstruction process, and the image resolution is determined by the cutoff frequency of the employed photographic lens. In this article, we propose an incoherent photographic imaging approach, termed Fourier ptychographic photography (FPP), that uses the frequency mixing effect for super-resolution imaging. The concept of the reported approach is similar to the structured illumination microscopy (SIM) approach, where a sinusoidal pattern is used to modulate the high-frequency sample information into the low-frequency passband [7]. In our implementation, we project a number of high-frequency random patterns onto the sample and acquire the corresponding images of the object. Similar to SIM, the frequency mixing between the sample and the random illumination pattern shifts the high-frequency component to the passband of the collection optics. Therefore, each raw image contains information that is beyond the cutoff frequency of the photographic lens. Based on multiple images acquired under different structured light patterns, we used the Fourier ptychographic algorithm to recover both the high-resolution object image and the unknown illumination pattern. Similar to SIM, the resolution of the recovered image using the reported approach is better than the resolution limit of the lens’s aperture.
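
To make the frequency-mixing argument concrete, the incoherent image formation under patterned illumination can be summarized with the convolution theorem (stated here with generic notation; the symbols used in our reconstruction are defined in Section 2):

$$\hat{I}(k)=\mathrm{OTF}(k)\,F\{O(x)\,P(x)\}=\mathrm{OTF}(k)\,\bigl[\hat{O}(k)\otimes\hat{P}(k)\bigr],$$

where $O$ is the object, $P$ the illumination pattern, $I$ the recorded image, and $\otimes$ denotes convolution. Because $\hat{P}$ contains high spatial frequencies, the convolution $\hat{O}\otimes\hat{P}$ relocates object components that lie beyond the OTF cutoff into the region where the OTF is nonzero, which is why each raw frame carries information beyond the passband of the lens.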

We note that the use of the frequency mixing effect for super-resolution imaging is not a new idea. This concept is well known in the microscopy community [7]. It has also been used in telescope settings [8,9]. In this paper, we will use such a frequency mixing effect for super-resolution photographic imaging. In particular, we will demonstrate, for the first time to our knowledge, the use of the Fourier ptychographic recovery scheme for the photographic imaging setting. We will show that the reported approach is able to recover both the super-resolution image of the object and the unknown illumination pattern at the same time. Since the system design of the reported approach is compatible with many illumination-based imaging platforms, it may provide new insight for the development of computational photography and find applications in remote sensing, active-illumination night vision systems, and imaging radar.

In the following, we will first introduce the principle and the setup of the reported approach. We will then demonstrate the imaging performance using various targets. Finally, we will discuss the advantages and limitations of the reported approach.

2. FOURIER PTYCHOGRAPHIC PHOTOGRAPHY

Figure 1 shows the schematic of our FPP setup. In the illumination path, we projected the image of a semitransparent diffuser (white paint sprayed on a glass slide) onto the object. The speckle size of the projection pattern is in the range of 200–500 μm. In the detection path, we used a CCD and a photographic lens (Nikon, 50 mm) to capture the image of the object. The f-number of the lens was set to 22, so that the optical resolution matched the speckle size of the projection pattern. We then used a mechanical scanner to move the semitransparent diffuser to different positions, so that the corresponding projection pattern shifted across the object (Media 1). For each projection pattern Pn (n = 1, 2, 3, …), we acquired one corresponding image of the object, In (n = 1, 2, 3, …). Based on all acquired images, we recovered the high-resolution sample image Iobj and the unknown projection pattern Pn. Similar to SIM, the final resolution of Iobj bypasses the limit of the collection optics.
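
As a concrete illustration of this acquisition scheme, the short sketch below simulates the forward model described above: each raw frame is the object multiplied by the shifted illumination pattern and then low-pass filtered by the lens OTF. The object, speckle statistics, band limit, and shifts in the sketch are placeholder values chosen for illustration, not the experimental parameters of our platform.

```python
import numpy as np

def band_limit_mask(shape, cutoff_frac):
    """Binary band-limit mask on the FFT grid; a crude stand-in for the
    lens OTF (cutoff_frac is the cutoff radius as a fraction of Nyquist)."""
    ny, nx = shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    return (np.hypot(fx, fy) <= cutoff_frac * 0.5).astype(float)

def capture(obj, pattern, shift, otf):
    """One raw frame: shift the illumination pattern, multiply it with the
    object (frequency mixing), and low-pass filter with the OTF."""
    p_shift = np.roll(pattern, shift, axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(obj * p_shift) * otf))

rng = np.random.default_rng(0)
obj = rng.random((256, 256))                     # stand-in for the object
pattern = 1.0 + rng.random((256, 256))           # stand-in speckle pattern
otf = band_limit_mask(obj.shape, cutoff_frac=0.2)

shifts = [(0, 0), (3, 5), (-4, 2)]               # known translations x_n
raws = [capture(obj, pattern, s, otf) for s in shifts]
```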

Fig. 1. Schematic of the FPP setup. In the illumination path, a diffused LED was used for incoherent illumination (red arrow on the left). A semitransparent diffuser was placed in front of the diffused LED, and its image was projected onto the object. In the detection path, a photographic lens (Nikon, 50 mm) was used to collect the reflected light from the object.


In our implementation, we used the iterative Fourier ptychographic algorithm to recover both Iobj and Pn. The Fourier ptychography (FP) technique [10–16] is a recently developed approach that facilitates high-resolution imaging beyond the diffraction limit of the employed optics. The FP approach shares its roots with phase retrieval techniques such as the Gerchberg–Saxton algorithm [17], ptychography [18], and other error-reduction phase retrieval methods [19,20]. The recovery process of FP iteratively switches between two working domains: the spatial and Fourier domains. In the spatial domain, the acquired image is used as the object constraint for the solution. In the Fourier domain, the optical transfer function (OTF) of the employed lens is used as the support constraint for the solution. The recovery process of the FP algorithms is summarized in Fig. 2. For the case of incoherent FP [16], it starts with an initial guess of Iobj and Pn. In each iteration step, this guess is updated by the acquired intensity images In, in both the spatial and Fourier domains. The iteration is repeated until the solution converges (it stops when the difference between solutions from successive iterations is less than a predefined value).
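
For reference, the Fourier-domain support constraint can be modeled, for an aberration-free circular pupil, as the normalized autocorrelation of the pupil function. The sketch below builds such an idealized incoherent OTF on an FFT grid; it is an assumption made for illustration, not the measured OTF of the photographic lens used in our experiments.

```python
import numpy as np

def incoherent_otf(shape, pupil_radius_frac):
    """Aberration-free incoherent OTF: the normalized autocorrelation of a
    circular pupil. Its cutoff sits at twice the pupil radius. A real lens
    OTF would be measured or derived from the lens prescription."""
    ny, nx = shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    pupil = (np.hypot(fx, fy) <= pupil_radius_frac * 0.5).astype(float)
    # Autocorrelation via the Wiener-Khinchin theorem; real and nonnegative.
    acf = np.real(np.fft.ifft2(np.abs(np.fft.fft2(pupil)) ** 2))
    return acf / acf.max()

otf = incoherent_otf((256, 256), pupil_radius_frac=0.1)  # cutoff at twice the pupil radius
```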

Fig. 2. Recovery procedures of the coherent and incoherent FP approaches. For the case of incoherent FP, the updating processes in steps 3 and 4 can be expressed as Eqs. (1)–(3).


The updating process in the Fourier domain (step 3 in Fig. 2) can be expressed as follows:

$$\hat{I}_{tn}^{\,updated}(k_x,k_y)=\hat{I}_{tn}(k_x,k_y)+\frac{\mathrm{OTF}^{*}(k_x,k_y)}{\left|\mathrm{OTF}\right|^{2}_{\max}}\left[\hat{I}_{n}(k_x,k_y)-\mathrm{OTF}(k_x,k_y)\,\hat{I}_{tn}(k_x,k_y)\right],\tag{1}$$

where $\hat{I}_{tn}=F(I_{obj}\cdot P_{n})$ is the spectrum of the pattern-modulated object, $\hat{I}_{n}=F(I_{n})$ is the spectrum of the acquired image, $F(\,)$ denotes the Fourier transform, and OTF denotes the optical transfer function of the employed lens. Unlike the conventional deconvolution approach, Eq. (1) does not amplify the noise of the image, because the OTF is not a denominator. The updating process in the spatial domain (step 4) can be expressed as follows:

$$I_{obj}^{\,updated}(x)=I_{obj}(x)+\frac{P_{n}(x)}{\left[P_{n}\right]^{2}_{\max}}\left[I_{tn}^{\,updated}(x)-I_{tn}(x)\right],\tag{2}$$

where $I_{tn}^{\,updated}=F^{-1}(\hat{I}_{tn}^{\,updated})$. The super-resolution information comes from the multiplication between the illumination pattern and the object in Eq. (2). In our implementation, we translated one unknown illumination pattern to different spatial positions $x_{n}$ and captured the corresponding images of the object. Following the same logic as Eq. (2), we can recover the unknown illumination pattern $P_{unknown}$ as follows (the initial guess of $P_{unknown}$ can be a constant number):

$$P_{unknown}^{\,updated}(x-x_{n})=P_{unknown}(x-x_{n})+\frac{I_{obj}(x)}{\left[I_{obj}\right]^{2}_{\max}}\left[I_{tn}^{\,updated}(x)-I_{tn}(x)\right],\tag{3}$$

where $P_{n}(x)=P_{unknown}(x-x_{n})$ and $x_{n}$ ($n=1,2,3,\ldots$) represents the translational positions of the illumination pattern. These positions were controlled by the motion stage in our platform and were therefore known. If the shifted positions of the illumination pattern are unknown, it is possible to use the cross correlation of the acquired images to recover them [21].
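
A minimal NumPy sketch of this recovery loop, following Eqs. (1)–(3), is given below. The initialization, update ordering, and array conventions are illustrative assumptions rather than a transcription of our MATLAB implementation; raws holds the measured frames In, shifts the known translations xn, and otf the lens OTF sampled on the same FFT grid.

```python
import numpy as np

def fpp_recover(raws, shifts, otf, n_loops=15):
    """Iterative incoherent FP recovery following Eqs. (1)-(3).
    raws:   measured frames I_n (2D arrays of equal shape)
    shifts: known integer pattern translations x_n, e.g. (dy, dx)
    otf:    lens OTF on the FFT grid (zero frequency at index [0, 0])
    Note: initialization and step ordering are illustrative assumptions."""
    obj = np.mean(raws, axis=0)          # initial object guess
    pattern = np.ones_like(obj)          # initial pattern guess (constant)
    otf_conj = np.conj(otf)
    otf_max2 = np.max(np.abs(otf)) ** 2

    for _ in range(n_loops):
        for I_n, x_n in zip(raws, shifts):
            p_n = np.roll(pattern, x_n, axis=(0, 1))   # P_n(x) = P(x - x_n)
            target = obj * p_n                          # I_tn = I_obj * P_n
            spec = np.fft.fft2(target)
            # Eq. (1): Fourier-domain update driven by the measured frame
            spec_upd = spec + otf_conj / otf_max2 * (np.fft.fft2(I_n) - otf * spec)
            diff = np.real(np.fft.ifft2(spec_upd)) - target
            # Eq. (2): spatial-domain object update
            obj_new = obj + p_n / np.max(p_n ** 2) * diff
            # Eq. (3): pattern update, shifted back into the pattern frame
            corr = obj * diff / np.max(obj ** 2)
            pattern = pattern + np.roll(corr, tuple(-s for s in x_n), axis=(0, 1))
            obj = obj_new
    return obj, pattern
```

With the simulated frames from the earlier sketch, a call such as fpp_recover(raws, shifts, otf) returns the object and pattern estimates after the chosen number of loops.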

The iterative updating process using Eqs. (1)–(3) is inspired by the extended ptychographic iterative engine developed by Maiden and Rodenburg [22]. It is the same as the pupil recovery scheme of the coherent FP approach [15]; the only difference is that incoherent intensity patterns are used here in place of angle-varied coherent plane waves.
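
As noted above, when the pattern translations are not recorded, the relative shift between acquired frames can be estimated before running the recovery. The sketch below is a generic integer-pixel phase-correlation estimator (a normalized form of cross correlation); it is not the position-determination method of Ref. [21], its robustness on raw frames depends on the object content, and subpixel refinement would be needed in practice.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer displacement d such that b(x) ~ a(x - d),
    i.e. b ~ np.roll(a, d). Generic phase correlation; see Ref. [21]
    for a more complete position-refinement scheme."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices in the upper half of the range to negative shifts.
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))
```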

3. IMAGING PERFORMANCE OF THE FPP

To demonstrate the imaging performance of the reported FPP approach, we first used a resolution target as the sample and followed the FPP procedures to recover the super-resolution image.

Figure 3 shows the result using the reported FPP platform. Figure 3(a) shows the reference image under uniform illumination (i.e., with the semitransparent diffuser in Fig. 1 removed). The resolution of this image is determined by the aperture of the employed photographic lens. From this image, we can resolve the line pair in group 22. Figure 3(b1) shows the raw image under pattern illumination (also refer to Media 1). Figures 3(b2) and 3(b3) show the recovered object image and the recovered illumination pattern using the incoherent FP algorithm (15 loops). In this implementation, we translated the semitransparent diffuser to 100 different spatial positions and captured the corresponding 100 raw images for the reconstruction. The line traces of Figs. 3(a) and 3(b2) are shown in Fig. 3(b4). From the recovered image, we can clearly resolve the line pair in group 40, and thus the resolution enhancement factor is about 1.8 (i.e., 40/22). The computational time for Fig. 3(b2) is 5 s using an Intel i7 quad-core CPU and MATLAB. In Fig. 4, we used different numbers of raw images for the reconstruction; we can see that the solution converges quickly with 16 raw images.

Fig. 3. Imaging performance of the reported FPP platform. (a) The reference image captured under uniform illumination, (b1) the captured raw image under pattern illumination, (b2) the recovered image using 100 raw images, (b3) the recovered illumination pattern, (b4) line traces of (a) and (b2). Also refer to Media 1.


Fig. 4. Image reconstruction using different numbers of raw images. The solution converges with 16 raw images. We used 15–20 loops in this experiment.


We also used the reported platform to image different objects, and the results are shown in Fig. 5. Figures 5(a1)–5(c1) show the captured images under uniform illumination (also refer to Media 2). Figures 5(a2)–5(c2) show the recovered images using 100 raw images and 15 loops for the reconstruction process. In particular, we note that we are not solving a pixel-aliasing problem via subpixel shifts of the raw images; the super-resolution effect reported here bypasses the diffraction limit of the photographic lens, and the raw images we captured are oversampled. The Fourier spectrum of the raw image is shown in the inset of Fig. 5(c1), where the yellow circle denotes the cutoff frequency of the OTF of the photographic lens.
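
One way to verify the oversampling claim numerically is to measure how much spectral energy of a raw frame falls outside a nominal cutoff circle; a value close to zero indicates that the lens, not the sensor sampling, sets the band limit. The sketch below assumes the cutoff radius is supplied as a fraction of the Nyquist frequency (a value that would be read off the lens OTF, not one reported here).

```python
import numpy as np

def energy_outside_cutoff(raw, cutoff_frac):
    """Fraction of spectral energy outside a nominal OTF cutoff circle.
    cutoff_frac is the assumed cutoff radius as a fraction of Nyquist."""
    ny, nx = raw.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    outside = np.hypot(fx, fy) > cutoff_frac * 0.5
    spec = np.abs(np.fft.fft2(raw - raw.mean())) ** 2
    return spec[outside].sum() / spec.sum()
```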

Fig. 5. Demonstration of the reported platform for different objects: a dollar bill, a quick response code, and an insect. (a1)–(c1) The reference images under uniform illumination, (a2)–(c2) the recovered images using the reported platform. We used 100 raw images and 15 loops for the reconstruction. Also refer to Media 2.


In Fig. 6, we also demonstrate the reported platform for color imaging. We used R/G/B LEDs to capture raw images in the red, green, and blue channels and combined the three reconstructions to form a high-resolution color image. The resolution improvement is evident when we compare Fig. 6(d) with Fig. 6(b).
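
The per-channel reconstructions can be combined into a display image along the lines of the sketch below. The simple stacking and rescaling shown here are illustrative assumptions; white balance and color calibration of the actual data are not modeled.

```python
import numpy as np

def combine_rgb(red, green, blue):
    """Stack R/G/B reconstructions into one color image and rescale to
    [0, 1] for display. Channel weighting/white balance is not modeled."""
    rgb = np.stack([red, green, blue], axis=-1).astype(float)
    rgb -= rgb.min()
    return rgb / max(rgb.max(), 1e-12)
```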

Fig. 6. Imaging a color object using the reported platform. (a1)–(a3) Reference images using uniform R/G/B illumination, (b) combined reference color image, (c1)–(c3) recovered super-resolution images using the reported platform, (d) combined super-resolution color image.


4. CONCLUSION AND DISCUSSION

In conclusion, we have demonstrated a photographic imaging approach that uses computational illumination for super-resolution imaging. Our setup is similar to many existing computational illumination platforms, in which structured light patterns are projected onto the sample and the corresponding images are used to recover hidden information about the object. In the reported approach, we used the structured light pattern to modulate the high-frequency components of the object into the low-frequency passband. We then used the Fourier ptychographic algorithm to recover the object image. We showed that the reported approach improves the resolution beyond the limit set by the photographic lens.

The design focus of conventional photographic imaging platforms is the collection optics. More and more lens elements are used in the optical design to correct for aberrations. This effort is required mainly because of the use of a flat 2D image sensor: if the sensor surface itself were curved instead of the lens elements, large-aperture image capture could be realized with a simple lens system containing fewer optical elements. However, an image sensor with a curved surface is not compatible with the existing workflow of the semiconductor industry. In the reported platform, on the other hand, we can fabricate the semitransparent diffuser on a curved surface. Therefore, we can project high-frequency speckle patterns onto the object with a simple large-aperture lens system, capture the corresponding images, and use the iterative algorithm to recover the high-resolution image of the object.

There are also three limitations associated with the reported platform. (1) It is computationally intensive; real-time reconstruction would need to be implemented on a graphics processing unit. (2) Multiple frames are needed for high-resolution reconstruction. One future direction is to design an optimal projection pattern to minimize the number of acquisitions. (3) We assume the object is static in our implementation. If the object moves during acquisition, the motion of the object may provide a means of mechanical scanning (for example, products on a conveyor belt). In this case, we may need to track the motion of the object instead of projecting translated patterns.

Finally, we reiterate that the use of the frequency mixing effect for super-resolution imaging is well known in the microscopy community. In this paper, we have demonstrated, for the first time to our knowledge, the use of FP for incoherent photographic imaging settings. The results of this paper may provide some insights for the computational photography and computer vision communities. It may also find applications in remote sensing, active-illumination night vision systems, and imaging radar.

References

[1] K.-C. Lee, J. Ho, D. Kriegman. Nine points of light: acquiring subspaces for face recognition under variable lighting. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recogn., 2001, 1: 519-526.

[2] A. S. Georghiades, P. N. Belhumeur, D. Kriegman. From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach. Intell., 2001, 23: 643-660.

[3] B. Horn. Robot Vision. MIT Press, 1986.

[4] M. Levoy, Z. Zhang, I. McDowall. Recording and controlling the 4D light field in a microscope using microlens arrays. J. Microsc., 2009, 235: 144-162.

[5] M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, M. Bolas. Synthetic aperture confocal imaging. ACM Trans. Graph., 2004, 23: 825-834.

[6] P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, H. Lensch. Dual photography. ACM Trans. Graph., 2005, 24: 745-755.

[7] M. G. Gustafsson. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microsc., 2000, 198: 82-87.

[8] D. R. Gerwe, M. A. Plonus. Superresolved image reconstruction of images taken through the turbulent atmosphere. J. Opt. Soc. Am. A, 1998, 15: 2620-2628.

[9] A. J. Lambert, D. Fraser. Superresolution in imagery arising from observation through anisoplanatic distortion. Proc. SPIE, 2004, 5562: 65-75.

[10] X. Ou, R. Horstmeyer, C. Yang, G. Zheng. Quantitative phase imaging via Fourier ptychographic microscopy. Opt. Lett., 2013, 38: 4845-4848.

[11] G. Zheng, R. Horstmeyer, C. Yang. Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photonics, 2013, 7: 739-745.

[12] S. Dong, Z. Bian, R. Shiradkar, G. Zheng. Sparsely sampled Fourier ptychography. Opt. Express, 2014, 22: 5455-5464.

[13] S. Dong, R. Horstmeyer, R. Shiradkar, K. Guo, X. Ou, Z. Bian, H. Xin, G. Zheng. Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging. Opt. Express, 2014, 22: 13586-13599.

[14] S. Dong, R. Shiradkar, P. Nanda, G. Zheng. Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging. Biomed. Opt. Express, 2014, 5: 1757-1767.

[15] X. Ou, G. Zheng, C. Yang. Embedded pupil function recovery for Fourier ptychographic microscopy. Opt. Express, 2014, 22: 4960-4972.

[16] S. Dong, P. Nanda, R. Shiradkar, K. Guo, G. Zheng. High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography. Opt. Express, 2014, 22: 20856-20870.

[17] R. W. Gerchberg, W. O. Saxton. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik, 1972, 35: 237-250.

[18] H. Faulkner, J. Rodenburg. Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm. Phys. Rev. Lett., 2004, 93: 023903.

[19] J. R. Fienup. Reconstruction of an object from the modulus of its Fourier transform. Opt. Lett., 1978, 3: 27-29.

[20] J. R. Fienup. Phase retrieval algorithms: a comparison. Appl. Opt., 1982, 21: 2758-2769.

[21] F. Zhang, I. Peterson, J. Vila-Comamala, A. Diaz, F. Berenguer, R. Bean, B. Chen, A. Menzel, I. K. Robinson, J. M. Rodenburg. Translation position determination in ptychographic coherent diffraction imaging. Opt. Express, 2013, 21: 13592-13606.

[22] A. M. Maiden, J. M. Rodenburg. An improved ptychographical phase retrieval algorithm for diffractive imaging. Ultramicroscopy, 2009, 109: 1256-1262.

Siyuan Dong, Pariksheet Nanda, Kaikai Guo, Jun Liao, Guoan Zheng. Incoherent Fourier ptychographic photography using structured light[J]. Photonics Research, 2015, 3(1): 01000019.
