Chinese Optics Letters, 2016, 14 (1): 010007, Published Online: Aug. 6, 2018  

Demonstration of full-parallax three-dimensional holographic display on commercial 4 K flat-panel displayer

Frank C. Fan, Sam Choi, and C. C. Jiang
Author Affiliations
AFC Technology Co., Ltd., Shenzhen 518104, China
Abstract
A novel method for a full-parallax three-dimensional (3D) holographic display by means of a lens array and a holographic functional screen is proposed. The processes of acquisition, coding, restoration, and display are described in detail. The method provides an efficient way to transfer two-dimensional information that is redundant for human vision into an identifiable 3D display for human eyes. A holo-video system based on a commercial 4 K flat-panel displayer is demonstrated.

Based on the principle of the holographic stereogram, we have published Letters realizing a full-color, real-time holographic display by means of a holographic functional screen (HFS) combined with a camera-projector array system[1–5]. In practice, it is difficult to integrate the whole system because each individual camera-projector must be calibrated; meanwhile, the high cost of so many camera-projectors makes such a system unacceptable for public consumption.

Integral photography[6] theoretically seems like an ideal approach for both the acquisition and the restoration of three-dimensional (3D) light fields; however, it is difficult to overcome the inherent trade-off, imposed by the microlenses, between the sub-image quality and the resolution of the final 3D display, owing to diffraction at the lens aperture. Therefore, a satisfactory 3D display remains a challenge that has yet to be overcome.

In this Letter, we propose a novel approach to realize a perfect holographic display as perceived by human eyes. It is equivalent to the setup in Refs. [1–5], except that the optical axes of the individual camera-projectors are parallel to each other, i.e., anchored at an infinitely far point. It can be thought of as a further technical innovation derived from our proposed physical concepts, the hoxel and the spatial spectrum, which are properly defined by the four-dimensional Fourier transform of the wave function of this nature. The purpose is the most compact design, so that the application can be carried out at the lowest cost.

There are four steps in our innovation:

1. Parallel acquisition of the spatial spectrum.

Figure 1 is the sketched map for the parallel acquisition of the spatial spectrum. L1 is a lens-array plate composed of M*N small lenses with the same imaging parameters, denoted by a1 for the aperture of each lens, d1 for the concentric distance, and f1 for the focal length. The viewing angle of each lens can be expressed as tan(Ω/2)=a1/(2f1). As the optical axes of the individual lenses are parallel to each other, the spatial spectrum Imn(j,k) (m=1 to M, n=1 to N) of a 3D object O acquired by each lens inside its viewing angle Ω corresponds to what we have described before in Refs. [1–5]. The sampling angle of acquisition can be denoted as ωmn=d1/l1, where l1 is the distance between the lens plate L1 and the object O. S is a light-sensitive component (such as film, CCD, or CMOS) placed near the focal plane of L1 to record the spatial spectra Imn(j,k). J*K is the resolution of the digital light-sensitive component corresponding to each imaging unit of the lens plate, with j=1 to J and k=1 to K. The corresponding hoxel is denoted as Hjk, i.e., the acquired object O is constructed by J*K hoxels Hjk(m,n). The distance between the object O and the reference surface PR is l3, and the reference point R is located at the center of PR. A field aperture M1 is placed between S and L1 to prevent crosstalk between the Imn. Compared with traditional integral photography, the lens array here is not a microlens array; the aperture a1 of each lens is large, so as to acquire a sufficiently distinct image of each spatial spectrum, but it is never larger than d1. The focal length f1 determines the viewing angle Ω of each individual lens. The bigger Ω is, the larger the scope of the 3D object the lens can acquire. Here, we suppose that Ω is big enough that at least one lens near the center (M/2, N/2) of the lens array acquires the whole object O(j,k), as shown in Fig. 1.
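As a numerical illustration of these relations (a minimal sketch; the parameter values below are assumptions for illustration, not the ones used in this Letter), the viewing angle Ω and the sampling angle ωmn follow directly from a1, f1, d1, and l1:

```python
import math

# Hypothetical acquisition-side parameters (illustrative assumptions only).
a1 = 8e-3    # lens aperture a1 [m]
d1 = 10e-3   # concentric distance d1 between neighbouring lens centres [m]
f1 = 15e-3   # focal length f1 [m]
l1 = 1.0     # distance l1 from the lens plate L1 to the object O [m]

# Viewing angle of each lens: tan(Omega/2) = a1 / (2 * f1).
Omega = 2.0 * math.atan(a1 / (2.0 * f1))

# Sampling angle between neighbouring spatial spectra: omega_mn = d1 / l1.
omega_mn = d1 / l1

print(f"viewing angle Omega = {math.degrees(Omega):.1f} deg")
print(f"sampling angle omega_mn = {math.degrees(omega_mn):.3f} deg")
```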

Fig. 1. Sketched map for parallel acquisition of spatial spectrum: J*K hoxels Hjk(m,n) are imaged by M*N small lenses to form M*N images Imn(j,k) of the spatial spectrum.


Compared with the work we described in Refs. [1–5], where anchoring acquisition was adopted, only the spatial spectrum image I(M/2)(N/2)(j,k) at the center of the lens array remains exactly the same; every other sub-image is shifted by a phase factor δmn on the spectrum surface with respect to the original spatial spectrum Imn(j,k) acquired by anchoring acquisition. The sub-images are then trimmed by the field aperture M1 so that the reference point Rmn on each sub-image of the original object O overlaps at the same position R after imaging back into the original space. Figures 2 and 3 compare the corresponding coordinates of the reference point R and of its sub-image Rmn inside each spatial spectrum. The phase factor δmn is the inherent characteristic of the parallel acquisition described in this Letter; it accounts for the shift of each spatial spectrum when 3D data acquired by anchoring acquisition are played back in the parallel configuration, or vice versa.
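The shift δmn can be pictured with simple thin-lens geometry. The following sketch (our illustration under assumed distances, not a formula given in this Letter) estimates how far the image of the reference point R lands from the sub-image center of an off-axis lens under parallel acquisition; under anchoring acquisition R sits at the center of every sub-image, so this offset is precisely the shift δmn.

```python
# Thin-lens sketch of the parallel-acquisition shift delta_mn (illustrative assumptions).
f1  = 15e-3   # focal length of each lens [m]
d1  = 10e-3   # concentric distance of the lens array [m]
l_R = 1.2     # assumed distance from the lens plate to the reference point R [m]

# Image distance v for an object at l_R: 1/l_R + 1/v = 1/f1.
v = 1.0 / (1.0 / f1 - 1.0 / l_R)

def delta_mn(m, n, M, N):
    """Offset (x, y) of R's image from the sub-image center of lens (m, n)."""
    cx = (m - (M + 1) / 2.0) * d1   # lateral position of lens (m, n) w.r.t. the array center
    cy = (n - (N + 1) / 2.0) * d1
    return (v / l_R) * cx, (v / l_R) * cy

print(delta_mn(m=1, n=1, M=36, N=36))    # corner lens: largest shift
print(delta_mn(m=18, n=18, M=36, N=36))  # near-central lens: almost no shift
```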

Fig. 2. Sketched map for anchoring acquisition of the spatial spectrum in Refs. [1–5]. The reference point R is at the same position inside each individual sub-image.


Fig. 3. Sketched map for parallel acquisition of the spatial spectrum in this Letter. The sub-image Rmn is shifted by a phase factor δmn compared with Fig. 2.


2. Holographic coding of the spatial spectrum.

Here it is necessary to create the holographic coding by making use of the J*K pixels of each of the M*N spatial spectra acquired in Fig. 1 to generate the J*K holographically coded spatial spectra Sjk(m,n). The details are shown in Fig. 4. A computer picks the (jth, kth) pixel Pmnjk of each image Imn(j,k) and fills it into the corresponding hoxel Hjk of the object space shown in Fig. 1 to obtain the coded spatial spectrum Sjk(m,n) of this hoxel. The significance of such holographic coding is as follows: (1) it efficiently realizes the coordinate transformation between “image and spectrum” and eradicates the fatal drawback of “pseudo-scope imaging”; (2) the coding method is versatile and can be used in any kind of 3D display system; the holographically coded image Sjk(m,n) can be directly broadcast by the lens array, or treated as a “hogel” to print a 3D hologram dot by dot[7]; (3) by simply magnifying or reducing the pattern size of Sjk, the size of the hoxel Hjk can be changed arbitrarily to obtain a magnified or reduced display of a 3D object; and (4) according to the requirements of the acquired or displayed 3D space (such as resolution, depth, and viewing angle), the maximum sampling angle ωmn can be designed so that a perfect 3D display is achieved with the minimum number of spatial spectra (M,N), for the highest efficiency.
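Computationally, the holographic coding is a pure re-indexing of the acquired 4D data set: the (j,k) pixel of every sub-image Imn is collected into the coded pattern Sjk. The NumPy sketch below illustrates this rearrangement; the array names and sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Acquired spatial spectra: I[m, n, j, k] is the (j, k) pixel of sub-image I_mn.
M, N, J, K = 36, 36, 64, 48          # illustrative sizes
I = np.random.rand(M, N, J, K, 3)    # RGB sub-images recorded by the sensor S

# Holographic coding: S_jk(m, n) = I_mn(j, k), i.e. one M x N coded pattern per hoxel H_jk.
S = np.transpose(I, (2, 3, 0, 1, 4))  # shape (J, K, M, N, 3)

# The coded spatial spectrum of hoxel H_(10, 20) is an M x N image whose
# (m, n) entry came from pixel (10, 20) of sub-image I_mn.
coded_hoxel = S[10, 20]
assert np.array_equal(coded_hoxel[5, 7], I[5, 7, 10, 20])
```

Because the coding only permutes indices, it is lossless and can be inverted to recover the original sub-images; magnifying or reducing Sjk, as mentioned in point (3), is then an ordinary image-resampling step.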

Fig. 4. Sketched map for holographic coded spatial spectrum image Sjk of a hoxel Hjk.


3. Recovery of discrete spatial spectrum.

After a simple magnifying or reducing treatment, the J*K frames of the holographically coded images Sjk are displayed at the corresponding positions on a flat-panel displayer D whose resolution is larger than M*N*J*K. Figure 5 is the sketched map for the restoration of the integral discrete spatial spectrum, where the lens plate L2 is located in front of D at a distance l2; l2 is equivalent to l3 in Fig. 1 when the hoxels Hjk are correspondingly reduced or magnified. L2 is again composed of J*K small lenses with the same imaging parameters, denoted by a2 for the aperture of each lens and d2 for the concentric distance (which is just the hoxel size of the preset Hjk in Fig. 1). A field aperture M2 is also placed between D and L2 to prevent crosstalk between the Sjk. Each lens on L2 has the same viewing angle Ω as in the acquisition, to avoid deformation of the final image. As shown in Fig. 5, each coded spatial spectrum Sjk(m,n) displayed on the monitor D is projected backwards as the discrete spatial spectrum Imn(j,k) of the original object, forming a 3D image O′; the number of preset hoxels Hjk is thereby changed from J*K to J′*K′, the number of finally displayed hoxels H′jk. J′*K′ is obtained as follows: (1) suppose that the pixel size of the displayer D is ΔD; (2) when Sjk is imaged by a lens of L2 and magnified M times, the corresponding hoxel size is MΔD; (3) suppose that the length and width of the displayer D are a and b, respectively; (4) then J′=a/(MΔD) and K′=b/(MΔD). It can be seen that J′*K′ has no direct relation to J*K; J′*K′ is the eventual number of hoxels H′jk formed by the M*N directional projections from the original hoxels Hjk, i.e., the final hoxel resolution of the holographic display inside the display area a*b. Compared with traditional integral imaging techniques, the lenses here are not microlenses; otherwise, the white-light speckle noise would be unacceptable. The aperture a2 of each lens is large, so as to resolve the features of Sjk(m,n), but it is never larger than d2.
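The relation J′=a/(MΔD), K′=b/(MΔD) is easy to evaluate; the short sketch below uses assumed panel dimensions and an assumed magnification (not the exact values of this Letter) to show how the displayed hoxel count follows from the panel geometry.

```python
# Illustrative restoration-side numbers (assumptions, not the Letter's exact values).
a, b    = 0.864, 0.485   # active width and height of the displayer D [m]
delta_D = 0.225e-3       # pixel size of D [m]
mag     = 11.0           # magnification M applied to S_jk by a lens of L2

hoxel_size = mag * delta_D          # displayed hoxel size M * delta_D
J_prime    = int(a / hoxel_size)    # J' = a / (M * delta_D)
K_prime    = int(b / hoxel_size)    # K' = b / (M * delta_D)

print(f"hoxel size = {hoxel_size * 1e3:.2f} mm")
print(f"J' x K' = {J_prime} x {K_prime} displayed hoxels")
```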

Fig. 5. Sketched map for 3D reconstruction decoded by the HFS.


4. Integral reconstruction decoded by HFS.

As shown in Fig. 5, we place a corresponding HFS, described in our previous work[1–5], at the position of O′ to make the expanding angle of each discrete spatial spectrum input Sjk the same as the sampling angle ωmn shown in Fig. 1, i.e., to combine the coded spatial spectra Sjk without severe overlap (the appearance here is a uniform bright background, because the edge features of the individual lenses are smeared together by the HFS). This forms an integrally continuous output of the spatial spectrum, and human eyes can then observe a real holographic 3D image O′ floating on the HFS. It should be noted that the HFS must be located at the above-mentioned position; this is the most efficient way to display a given sampling angle ωmn. The HFS can be regarded as the standard plane straddled by the displayed 3D space, whose depth is determined by ωmn. When the HFS is not correctly located, so that the broadcasting angle is much bigger or smaller than the sampling angle, the displayed space lacks part of the original 3D data, which results in severe crosstalk or a nonlinear appearance.
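The decoding condition therefore reduces to matching the expanding (diffusion) angle of the HFS to the sampling angle ωmn=d2/l2. The following fragment is only a hedged sketch with assumed numbers, flagging the two failure modes described above:

```python
# Assumed restoration parameters (illustrative only).
d2 = 10e-3    # concentric distance of lens plate L2 [m]
l2 = 0.40     # distance l2 (see Fig. 5) [m]
hfs_expanding_angle = 0.026   # assumed diffusion angle of the HFS [rad]

omega_mn = d2 / l2            # required expanding angle = sampling angle

ratio = hfs_expanding_angle / omega_mn
if ratio > 1.1:
    print("HFS diffuses too widely: neighbouring spectra overlap (crosstalk).")
elif ratio < 0.9:
    print("HFS diffuses too little: gaps appear between spectra (discontinuous view).")
else:
    print("Expanding angle matches the sampling angle: continuous spatial spectrum output.")
```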

In order to make our innovation more comprehensible, the following analysis of the imaging quality is given:

1. Spatial spectrum description of 3D information:

Suppose Δjk is the size of a preset hoxel Hjk in a 3D space and ΔZ is the depth of that space; then the corresponding sampling angle can be expressed as ωmn=Δjk/ΔZ. That is to say, a 3D object O constructed from J*K*(ΔZ/Δjk) individual small cubic irradiators of volume (Δjk)^3 can be completely expressed by M*N*J*K individual light tapers, in which the apex of each light taper is located inside the plane of the HFS, while the divergence angle is ωmn. The viewing angle of this 3D object is Ω=ΣΣωmn.

Here, we have ΔZ/Δjk = (Δjk/ωmn)/Δjk = M*N, because M*N spatial spectra are included inside the hoxel Hjk.
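The counting argument of this point can be checked numerically; in the sketch below the hoxel size and the lateral hoxel counts are assumed values, and the depth follows from the condition that the cube count equals the light-taper count.

```python
# Illustrative bookkeeping for the spatial-spectrum description (assumed values).
delta_jk = 2.5e-3     # hoxel size [m]
M, N     = 36, 36     # spatial spectra per hoxel
J, K     = 337, 188   # lateral hoxel counts

# Covering J*K*(delta_Z/delta_jk) cubes with M*N*J*K light tapers
# requires delta_Z / delta_jk = M * N.
delta_Z  = M * N * delta_jk      # displayable depth
omega_mn = delta_jk / delta_Z    # corresponding sampling angle

print(f"delta_Z = {delta_Z:.2f} m, omega_mn = {omega_mn * 1e3:.3f} mrad")
print(f"cubes = {J * K * delta_Z / delta_jk:.3e}, tapers = {M * N * J * K:.3e}")
```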

2. Spatial spectrum description of human vision:

Some basic parameters of human eyes are as follows: (1) pupil distance (the average distance between the two eyes) dE ≈ 6.5 cm; (2) pupil diameter (2–8 mm, depending on the brightness), on average aE ≈ 5 mm; (3) angular resolution limit ωE ≈ 1.5×10^-4 rad; and (4) viewing angle in the stationary state ΩE ≈ 90°. When the eyes are fixed on a certain position, human vision is able to express J*K = (ΩE/ωE)^2 ≈ [(π/2)/(1.5×10^-4)]^2 ≈ 10^8 hoxels and needs only two spatial spectra (M*N=2) to form the binocular stereoscopic image. There are thus 10^8 spatial spectra identified by the eyes, contained in the two hoxels HR and HL, which form the objective 3D knowledge acquired by human eyes immersed in such an ocean of hoxels.
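A minimal check of the 10^8 figure using the eye parameters quoted above:

```python
import math

# Eye parameters quoted in the text.
omega_E = 1.5e-4          # angular resolution limit [rad]
Omega_E = math.pi / 2.0   # viewing angle in the stationary state, 90 deg [rad]

JK_vision = (Omega_E / omega_E) ** 2   # hoxels resolvable per fixation
print(f"J*K for human vision ~ {JK_vision:.2e}")   # about 1.1e8, i.e. ~10^8
```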

3. Effective acquisition and restoration:

Considering the spatial spectrum descriptions in points 1 and 2, the visible 3D space information can be fully acquired by the lens array plate L1 shown in Fig. 1 and fully restored by the lens array plate L2 shown in Fig. 5. The detailed requirements are as follows: a1=2λl1/Δjk and a2=2λl2/Δjk, where λ ≈ 550 nm is the average wavelength of visible light; ωmn=d1/l1=d2/l2; and tan(Ω/2)=a1/(2f1)=a2/(2f2). Here, the lens apertures (a1 and a2) determine the size of the hoxels Δjk, or of the cubic voxels (Δjk)^3, that are acquired or restored; the concentric distances (d1 and d2) determine the sampling angle ωmn of the acquired or restored space, and therefore the depth of this space, ΔZ=Δjk/ωmn. The focal lengths (f1 and f2) determine the viewing angle Ω of this space, which characterizes the spatial spectrum processing capability of a lens unit, i.e., Ω=ΣΣωmn. Because we adopt the HFS to compensate for the nonlinear features of the lens array, the microlens paradox of integral photography can be completely avoided. The key is to achieve a high enough resolution of the corresponding sensor (S in Fig. 1) and displayer (D in Fig. 5) to record and display the spatial spectrum information composed of the above-mentioned J*K*M*N individual pixels.
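Taken together, these requirements fix the lens parameters once the target hoxel size, sampling angle, and viewing angle are chosen. The sketch below evaluates them for assumed targets (not the actual design values of this Letter); the aperture relation a=2λl/Δjk is read here as the minimum aperture for which diffraction does not blur a hoxel of size Δjk.

```python
import math

# Assumed design targets (illustrative only).
lam      = 550e-9               # average visible wavelength [m]
delta_jk = 2.5e-3               # desired hoxel size [m]
omega_mn = 2.0e-3               # desired sampling angle [rad]
Omega    = math.radians(30.0)   # desired viewing angle
l1, l2   = 1.0, 0.4             # distances l1 and l2 [m]

# Minimum apertures from a = 2 * lambda * l / delta_jk.
a1_min = 2.0 * lam * l1 / delta_jk
a2_min = 2.0 * lam * l2 / delta_jk

# Concentric distances from omega_mn = d1/l1 = d2/l2.
d1 = omega_mn * l1
d2 = omega_mn * l2

# Focal length from tan(Omega/2) = a1/(2*f1), choosing the aperture equal to its
# upper bound d1 as an example.
a1 = d1
f1 = a1 / (2.0 * math.tan(Omega / 2.0))

print(f"a1_min = {a1_min * 1e3:.2f} mm, a2_min = {a2_min * 1e3:.2f} mm")
print(f"d1 = {d1 * 1e3:.1f} mm, d2 = {d2 * 1e3:.1f} mm, f1 = {f1 * 1e3:.2f} mm")
```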

Using a commercially available 4 K flat-panel displayer (KKTV LED39K60U, resolution 3840*2160) and following the above-mentioned principles, we have achieved a digital holographic display with full color and full parallax. The parameters are as follows: (1) the hoxel size is 2.5 mm * 2.5 mm, (2) the number of hoxels is J*K=337*188, (3) the number of spatial spectra is M*N=36*36, and (4) the viewing angle is Ω=30°.
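These figures are consistent with the panel geometry: assuming an active area of roughly 864 mm * 485 mm for a 39-inch 16:9 panel (an assumption, not a quoted specification), dividing by the 2.5 mm hoxel size gives hoxel counts of the same order as J*K=337*188.

```python
# Approximate active area of a 39-inch 16:9 panel (assumed, not a datasheet value).
width, height = 0.864, 0.485   # [m]
hoxel = 2.5e-3                 # hoxel size reported above [m]

print(f"hoxels across ~ {int(width / hoxel)}")   # ~345 (text: 337)
print(f"hoxels down   ~ {int(height / hoxel)}")  # ~194 (text: 188)
```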

Figure 6 is the sketched map of the holographically coded pattern of the spatial spectrum inside each small lens; here, the acquisition process is replaced by directly rendering computer-simulated 3D models. In order to make full use of the limited pixels of the 4 K displayer, we aligned 3818 small lenses with an aperture of a2=10 mm in a honeycomb array. Figure 7 is a picture taken from one direction before the HFS is applied; no detailed features can be identified, only discrete light rays from the hoxels H′jk. Figure 8 is a picture taken from one direction after the HFS is applied; all features are properly decoded by the HFS into the final displayed hoxels H′jk. Figure 9 shows pictures taken from multiple directions of the holographically displayed digital 3D models formed by the coded spatial spectrum shown in Fig. 6; the smooth color restoration and the full-parallax relationship of the displayed space can be seen distinctly. Figure 10 shows another result, a holographically displayed “skull,” in which each profile is clearly expressed.

Fig. 6. Sketched map of holographic coded pattern of the spatial spectrum inside each small lens.


Fig. 7. Sketched map for restoration before the HFS is applied.


Fig. 8. Sketched map for restoration after the HFS is applied.


Fig. 9. Pictures taken from multiple directions of the holographically displayed digital 3D models. Q1: Can you find any differences among the nine pictures? Q2: Can you imagine the 3D relations of each object using only the clues of such differences? Viewed on site, the real 3D display is seen directly by eye.


Fig. 10. Pictures taken from multiple directions of the holographically displayed digital 3D “skull.”


In conclusion, we have demonstrated the design and experimental results of an identifiable holographic display for human vision. The key is to transform visually redundant pixels into an identifiable hoxel display. Although the available 4 K flat-panel displayer can only provide a 2.5 mm hoxel size, the developing 8 K or even 16 K flat-panel displayers will eventually improve the final hoxel resolution to an eye-catching level, provided the lens aperture is larger than the human pupil. We expect this novel device to find its first application in medical imaging, with the obvious advantage of seeing 36*36 pictures simultaneously in real 3D form.

References

[1] F. C. Fan, S. Choi, and C. C. Jiang, in Proceedings of the Eighth International Symposium on Display Holography, 424 (2009).

[2] X. Shang, F. C. Fan, C. C. Jiang, S. Choi, W. Dou, C. Yu, and D. Xu, Opt. Lett. 34, 3803 (2009).

[3] F. C. Fan, S. Choi, and C. C. Jiang, Appl. Opt. 49, 2676 (2010).

[4] C. Yu, J. Yuan, F. C. Fan, C. C. Jiang, S. Choi, X. Sang, C. Lin, and D. Xu, Opt. Express 18, 27820 (2010).

[5] F. C. Fan, S. Choi, and C. C. Jiang, in Techniques and Principles in Three-Dimensional Imaging: An Introductory Approach (IGI Global, 2014), Chap. 8.

[6] H. I. Bjelkhagen, in Techniques and Principles in Three-Dimensional Imaging: An Introductory Approach (IGI Global, 2014), Chap. 6.

[7] M. A. Klug, A. Klein, W. J. Plesniak, A. B. Kropp, and B. Chen, Proc. SPIE 3011, 78 (1997).

Frank C. Fan, Sam Choi, and C. C. Jiang, "Demonstration of full-parallax three-dimensional holographic display on commercial 4 K flat-panel displayer," Chinese Optics Letters 14(1), 010007 (2016).
