Photonics Research, 2020, 8 (7): 07001072, Published Online: Jun. 3, 2020   

Computational 4D imaging of light-in-flight with relativistic effects

Yue Zheng, Ming-Jie Sun, Zhi-Guang Wang, Daniele Faccio

Author Affiliations
1 School of Instrumentation Science and Opto-electronic Engineering, Beihang University, Beijing 100191, China
2 School of Physics, Beihang University, Beijing 100191, China
3 School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
Abstract
Light-in-flight imaging enables the visualization and characterization of light propagation, providing essential information for the study of the fundamental phenomena of light. A camera images an object by sensing the light emitted or reflected from it; interestingly, when a light pulse itself is to be imaged, relativistic effects must be accounted for to acquire accurate space–time information of the light pulse, because the distance a pulse travels between consecutive frames is of the same scale as the distance the scattered photons travel from the pulse to the camera. Here, we propose a computational light-in-flight imaging scheme that records the projection of light-in-flight on a transverse x–y plane using a single-photon avalanche diode camera, calculates the z and t information of light-in-flight via an optical model, and thereby reconstructs its accurate (x, y, z, t) four-dimensional information. The proposed scheme compensates for the temporal distortion in the recorded arrival time to retrieve the accurate time of a light pulse, with respect to its corresponding spatial location, without performing any extra measurements. Experimental light-in-flight imaging in a three-dimensional space of 375 mm × 75 mm × 50 mm is performed, showing a position error of 1.75 mm and a time error of 3.84 ps, despite the camera's 55 ps time resolution, demonstrating the feasibility of the proposed scheme. This work provides a method to expand the recording and measurement of repeatable transient events with extremely weak scattering to four dimensions, and can be applied to the observation of optical phenomena with picosecond temporal resolution.

1. INTRODUCTION

Optical imaging of ultra-fast phenomena [1,2] provides critical information for understanding fundamental aspects of the world we live in [3–5]. The recording of light-in-flight, which enables the visualization and characterization of the propagation of light, is one such example. The capture of light-in-flight, sometimes referred to as transient imaging, was first performed with intensity gating [6], and later with holographic gating [7], which select photons at a specific time via a shutter or an interference scheme, respectively. Outstanding works with various applications have been developed based on the time-gating principle [8–11]. More recently, advances in optical devices have enabled the capture of light-in-flight by recording the arrival of photons in a continuous manner with their corresponding arrival times; such devices include streak cameras [12,13], photonic mixer devices [14,15], and single-photon avalanche diode (SPAD) arrays [16,17].

Compared to everyday photography, imaging light-in-flight is particularly interesting because light is both the medium carrying information to the camera, in the form of scattered photons, and the object to be imaged itself. In such a scenario, the speed of light cannot be treated as infinite, as it otherwise is for everyday photography. The recording of such events is significantly observer-dependent and exhibits spatiotemporal distortions [18–21]. In the context of this work, “relativistic effects” refer to such spatiotemporal distortions.

To explain this point further, consider the two examples in Fig. 1. As shown in Fig. 1(a), two consecutive frames are taken by a camera, recording a car moving from point A at time 0 to point B within a time interval Δt. It is worth noting that the actual moments at which the camera records these two frames are t1 and Δt + t2, so the time interval between the two frames is Δt + t2 − t1 rather than Δt. Nevertheless, because the speed of light can be treated as infinite compared to the speed of the car, the camera-measured Δt + t2 − t1 can be treated as Δt, which leads to observer-independent results. On the contrary, when a light pulse travels from A to B, as shown in Fig. 1(b), the camera-measured interval Δt + t2 − t1 can no longer be treated as Δt, because t2 − t1 is of the same scale as Δt. The recorded information of such events is significantly observer-dependent and contains spatiotemporal distortions.

Fig. 1. Schematics of difference between imaging (a) a moving car and (b) a flying light pulse. Δt stands for the time during which the object moves from position A to position B, and t1 and t2 denote the time of flight for the scattered photons to propagate to the camera from positions A and B, respectively.


In order to retrieve observer-independent information of light-in-flight, this relativistic effect (arising from the finite speed of light c) needs to be compensated for to determine the accurate time t at which the event actually happens, rather than the arrival time ta at which it is detected by the camera. In holographic light-in-flight imaging, this compensation can be performed using a graphical method based on the ellipsoids of the holodiagram [22]. A straightforward approach is to simply remove the time of flight, i.e., t1 and t2 in Fig. 1, from each measurement according to its spatial location, which in turn is measured point by point in three dimensions by a distance meter before performing the actual imaging of light-in-flight [23].

Interestingly, we note that the observer-dependent data of light-in-flight contain more information than the aforementioned works have exploited. Here, we demonstrate that the relativistic effects can be compensated during the imaging of light-in-flight by further exploiting the (x, y, ta) data recorded by a SPAD camera, via a strictly built optical model and a computational layer, to obtain the non-distorted time t of a flying light pulse without any additional measurements or auxiliary ranging equipment. Simultaneously, the information of an extra dimension, the z dimension, can be retrieved, leading to an observer-independent space–time (x, y, z, t) four-dimensional (4D) reconstruction of light-in-flight. The proposed scheme enables the accurate visualization of transient optical phenomena such as light scattering or interaction with materials.

2. EXPERIMENTAL SETUP

Our experimental system is illustrated in Fig. 2. A 637 nm pulsed laser (PicoQuant LDH-P-635; wavelength 636–638 nm, repetition rate 20 MHz, 1.2 mW, pulse width 68 ps) emits pulses that propagate across the field of view (FOV) of a SPAD camera, which consists of a SPAD array (Photon Force PF32; time resolution 55 ps, pixel resolution 32 × 32, pixel pitch 50 μm, fill factor 1.5%, operating at 5000 frames per second) and a camera lens (Thorlabs MVL4WA; effective focal length 3.5 mm, F/1.4, CS mount). The camera is synchronized to the pulsed laser and contains a 32 × 32 array of SPAD detectors, each of which operates individually in time-correlated single-photon counting (TCSPC) mode and records the temporal information of a laser pulse by sensing one of the photons scattered from it. By accumulating data over multiple detection frames, each pixel obtains a temporal histogram of scattered photons, whose total count represents the scattering intensity of the laser pulse at the corresponding spatial location and whose shape indicates the arrival time of the scattered photons. Combining the histogram data recorded by the pixels of the SPAD camera, the projection of the light-in-flight onto the x–y plane inside the FOV of the SPAD camera can be reconstructed [24].
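As a minimal illustration of the TCSPC acquisition just described, the sketch below accumulates per-pixel histograms from a stream of detection frames. The array shapes, the number of time bins, the NO_PHOTON sentinel, and the frame format are illustrative assumptions, not the PF32 driver interface.

```python
import numpy as np

N_PIX = 32        # SPAD array is 32 x 32 pixels
N_BINS = 1024     # number of TCSPC time bins per histogram (assumed)
NO_PHOTON = -1    # sentinel: pixel detected nothing in this frame (assumed)

def accumulate_histograms(frames):
    """frames: iterable of (N_PIX, N_PIX) integer arrays holding, for each
    pixel, the time-bin index of its detected photon, or NO_PHOTON.
    Returns a (N_PIX, N_PIX, N_BINS) array of photon counts per bin."""
    hist = np.zeros((N_PIX, N_PIX, N_BINS), dtype=np.int64)
    for frame in frames:
        ys, xs = np.nonzero(frame != NO_PHOTON)   # pixels that fired
        hist[ys, xs, frame[ys, xs]] += 1          # one count in that time bin
    return hist
```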

Fig. 2. Experimental system for light-in-flight measurement and data processing. (a) In the experiment, the pulsed laser and the SPAD camera are synchronized via a trigger generator. Placed at z = 0 mm, the pulsed laser emits pulses across the field of view of the SPAD camera. The SPAD camera, with a lens of 3.5 mm focal length, is located at z = 535 mm. The object focal plane of the camera is the x–y plane at z = 0 mm, giving a field of view of 245 mm × 245 mm. The SPAD camera collects the scattered photons from the propagating laser pulses and records a histogram at each pixel in TCSPC mode. (b) The raw histogram data are fitted with a Gaussian distribution. Histograms with widths too large or too small are discarded (pixels 1 and 2). Malfunctioning pixels with abnormally large counts are also discarded (pixel 4), leaving only effective pixels (pixel 3). (c) The arrival time ta of the scattered photons at each effective pixel is determined as the peak position of the fitted Gaussian distribution, yielding an arrival time for each pixel. Consequently, the projection of the light path on the x–y plane, as well as the arrival times along the path, is obtained, forming the (x, y, ta) three-dimensional data of light-in-flight.


Accurate estimation of the arrival time ta of the scattered photons at each pixel of the SPAD camera is important for the light-in-flight reconstruction. Gaussian fitting is performed on the raw histogram data at each pixel to suppress the statistical random noise of photon counting. Under the assumption that the background light and the dark counts of the SPAD camera, whose temporal distribution is quasi-flat, add a bias to the histogram, a constant term is added to the Gaussian function during the fitting in order to improve the estimation accuracy of the arrival time. During the data processing, if the width of a fitted Gaussian curve is much larger or smaller than the expected width, the corresponding pixel is assumed to be malfunctioning or extremely noisy and is therefore discarded. Furthermore, the systematic overall delay, which is mainly caused by the electronic jitter of the related devices and differs from pixel to pixel, is compensated by a temporal offset for each effective Gaussian curve. The offset for each pixel is determined as the temporal difference between the measured peak position of that pixel and its theoretical value, measured while the camera is uniformly illuminated with a collimated and expanded pulsed laser.
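The following sketch illustrates this per-pixel processing: a Gaussian-plus-constant fit, rejection of pixels with implausible fitted widths, and subtraction of a per-pixel calibration offset. The width thresholds, initial guesses, and the offset argument are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_const(t, a, mu, sigma, bias):
    # Gaussian peak on a constant background (dark counts / ambient light)
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2) + bias

def arrival_time(hist, offset_ps=0.0, bin_ps=55.0, w_min=30.0, w_max=300.0):
    """Return the compensated arrival time (ps) for one pixel's histogram,
    or None if the pixel is judged malfunctioning or extremely noisy."""
    t = np.arange(hist.size) * bin_ps
    p0 = [hist.max(), t[np.argmax(hist)], 70.0, np.median(hist)]
    try:
        (a, mu, sigma, bias), _ = curve_fit(gauss_const, t, hist, p0=p0)
    except RuntimeError:
        return None                      # fit did not converge
    fwhm = 2.355 * abs(sigma)            # FWHM = 2*sqrt(2 ln 2) * sigma
    if not (w_min < fwhm < w_max):
        return None                      # width far from the expected value
    return mu - offset_ps                # peak position minus pixel delay
```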

Once the raw histogram data are Gaussian fitted and the temporal delay is compensated at each pixel, the peak of the fitted histogram is determined, which represents the arrival time ta of the light recorded at that pixel. As shown in Fig. 2, the path of a laser pulse propagating through the FOV of the camera is reconstructed as its projection on the x–y plane, and the arrival time ta along that path is estimated, forming the (x, y, ta) three-dimensional (3D) data of light-in-flight.

3. OPTICAL MODEL AND COMPUTATIONAL LAYER

In order to convert the arrival time ta, distorted by the relativistic effects, into the accurate time t at which the light pulse actually occupies a given position, an optical model is built as shown in Fig. 3. For simplicity, the computation assumes that light propagates in air, though it works for any uniform, homogeneous medium in which light propagates in a straight line at a fixed velocity. The moment at which a light pulse enters the FOV of the camera is defined as t0 = 0 s, which makes t the propagation time from the entry position to the pulse's current position; t is referred to as the propagation time hereafter. The x–y plane containing the entry point is defined as the reference plane. If a light pulse propagates from B towards C and its arrival time at an arbitrary point D is recorded, this arrival time ta corresponds to the timespan of light traveling from B to D and then scattering to A (while the propagation time t of the light pulse corresponds to the time during which light travels from B to D). These times, ta and t, satisfy

$$ t_a = t + \frac{\sqrt{s^2 + (ct)^2 - 2sct\,\sin(\theta + \alpha)}}{c}, \tag{1} $$

where s is the distance from B to the camera A and can be calculated using the recorded arrival time of the pixel corresponding to B, and θ is the angle between AB and AF, which can be calculated from s and the known camera FOV. The second term of Eq. (1) represents the time interval during which light propagates from D to A, in which the propagation angle α is defined as the angle between the light path BD and its projection BE on the reference plane. t and α are related via

$$ t = \frac{l}{c\,\cos\alpha}, \tag{2} $$

where l is the length of BE and can be calculated from s and the known camera FOV. By substituting Eq. (2) into Eq. (1), the arrival time ta and the propagation angle α form a one-to-one relationship. BG, the projection of BC on the RP, is recorded by the SPAD camera, forming 32 photon-counting histograms. The arrival time tai of the ith histogram can be used to yield a propagation angle αi. However, due to the noise contained in the recorded data, the 32 tai yield 32 different αi, which should theoretically be identical. The optimal estimate of the propagation angle α is then calculated as the value having the minimum root-mean-square error (RMSE) with respect to the 32 resulting αi.
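A minimal numerical sketch of this minimum-RMSE estimation follows, assuming s, θ, and the per-pixel projected distances l have already been derived from the camera geometry. For a candidate α, Eqs. (1) and (2) predict the arrival time at each pixel, and the estimate is the α minimizing the RMSE against the 32 measured values; the grid search, units, and function names (predicted_ta, estimate_alpha) are illustrative choices, not the paper's implementation.

```python
import numpy as np

C = 0.299792458  # speed of light in mm/ps

def predicted_ta(alpha, l, s, theta):
    """Arrival time (ps) predicted by Eqs. (1) and (2) for in-plane
    distances l (mm) from the entry point B; alpha, theta in radians."""
    t = l / (C * np.cos(alpha))                      # Eq. (2)
    d = np.sqrt(s**2 + (C * t)**2 - 2 * s * C * t * np.sin(theta + alpha))
    return t + d / C                                 # Eq. (1)

def estimate_alpha(l, ta_meas, s, theta):
    """Grid search for the propagation angle (rad) minimizing the RMSE
    between predicted and measured arrival times (t_a = s/c at l = 0)."""
    alphas = np.deg2rad(np.arange(-89.0, 89.0, 0.05))
    rmse = [np.sqrt(np.mean((predicted_ta(a, l, s, theta) - ta_meas) ** 2))
            for a in alphas]
    return alphas[int(np.argmin(rmse))]
```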

Fig. 3. Optical model for the computation of propagation time t. α and θ are the angles of CBG and BAF, respectively. s and l are the lengths of BA and BE, respectively. BE is the projection of BD on the reference plane (RP).


Using the calculated propagation angle α, the propagation time t can be determined for each recorded (x, y). Furthermore, the z information of each corresponding (x, y) can also be retrieved from the knowledge of α. Therefore, the observer-independent 4D (x, y, z, t) information of light-in-flight is reconstructed.
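With α estimated, z and t follow directly from the geometry of Fig. 3. A short sketch under the same conventions and assumptions as above, where l is the in-plane distance of the point's projection from the entry point B:

```python
import numpy as np

C = 0.299792458  # speed of light in mm/ps

def z_and_t(l, alpha):
    """Height above the reference plane (mm) and propagation time (ps)
    for a point whose projection lies a distance l (mm) from B."""
    t = l / (C * np.cos(alpha))   # Eq. (2): time to reach the point
    z = l * np.tan(alpha)         # c*t*sin(alpha) = l*tan(alpha)
    return z, t
```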

The procedure for reconstructing multiple light paths is illustrated in Fig. 4. The light path from the laser emitting point to Mirror 1 is denoted LP1, and the consecutive paths from Mirror 1 to Mirror 2 and from Mirror 2 to the exit are denoted LP2 and LP3, respectively. The light paths are reconstructed sequentially, each with its corresponding propagation angle and reference plane. As shown in Fig. 4(a), the x–y plane at z = 0, containing the laser emitting point, is defined as reference plane 1 (RP1). The projection of LP1 on RP1, denoted PP1, is recorded with its spatiotemporal information by the SPAD camera. Using the propagation angle estimation procedure just described, α1 can be estimated, and the observer-independent 4D (x, y, z, t) information of LP1 is reconstructed, with the starting point of LP1 (S1) and the ending point of LP1 (E1) determined. As shown in Fig. 4(b), the starting point S2 of LP2 is set to E1, and reference plane 2 (RP2) is defined as the x–y plane containing E1. In the same manner, α2 and the observer-independent 4D information of LP2 are calculated. Using the ending point E2 of LP2 as the starting point of LP3 (S3), reference plane 3 (RP3) of LP3 is defined as shown in Fig. 4(c). Similarly, α3 and the 4D information of LP3 are retrieved. The full evolution of light-in-flight in the FOV of the camera is then reconstructed in (x, y, z, t).
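A sketch of this chaining, reusing estimate_alpha and z_and_t from the snippets above. The per-segment data layout (projected distances, in-plane displacement, measured arrival times) and the use of a single s and θ per segment are simplifying assumptions for illustration.

```python
import numpy as np

def reconstruct_paths(segments, s, theta):
    """segments: list of dicts, each with 'l' (in-plane distances from the
    segment's start, mm), 'ta' (measured arrival times, ps), and 'dx', 'dy'
    (total in-plane displacement of the projection, mm)."""
    start = np.zeros(3)                  # S1: the laser emitting point
    paths = []
    for seg in segments:
        alpha = estimate_alpha(seg["l"], seg["ta"], s, theta)
        z_end, t_end = z_and_t(seg["l"][-1], alpha)
        end = start + np.array([seg["dx"], seg["dy"], z_end])
        paths.append({"start": start, "end": end,
                      "alpha": alpha, "duration_ps": t_end})
        start = end                      # E_k becomes S_(k+1); the reference
                                         # plane shifts to the new start's z
    return paths
```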

Fig. 4. Reconstruction procedure for consecutive light paths. (a) For light path 1 (LP1), the reference plane (RP1) is the x–y plane containing the starting point 1 (S1). The spatial location of the projection (PP1), the propagation angle, and the ending point (E1) of LP1 are determined using the proposed geometric model. (b) E1 is used as S2 for the reconstruction of LP2, and RP2 is the x–y plane containing S2. The equation of LP2 and the position of E2 can be obtained. (c) In the same manner, LP3 and E3 are determined with RP3.


4. RESULTS

4.1 Propagation Angle Estimation

An experiment is performed using the setup in Fig. 2 to evaluate the estimation accuracy of α before performing the 4D light-in-flight reconstruction. In this experiment, laser pulses propagate horizontally, i.e., parallel to the x direction, through the center of the camera FOV, with the propagation angle α adjusted gradually from −10° to 10° in 0.5° steps (positive is towards the camera and negative is away from the camera). For each angle, measurements are acquired for 200 s with an exposure time of 200 μs for each detection frame. The average photon count of the SPAD camera is 10⁻⁵ photons per pulse per pixel, which satisfies the photon-starved condition required for TCSPC mode. A total of 20 measurements are performed, and the resulting α is the average of these 20 measurements.

During the experiment, the x–y plane containing the laser-emitting point is selected as the reference plane. The histogram of any malfunctioning pixel is discarded, and its arrival time ta is instead determined by linear interpolation of the arrival times of the neighboring pixels. It is worth mentioning that, theoretically, the propagation angle α can be estimated using one or a few arrival times ta [25,26]. However, due to the discrete nature of the temporal measurement with the SPAD camera and the noise recorded during a practical experiment, the more tai are involved in the calculation, the more accurate the estimated α will be. Figure 5(a) shows the angular errors of the estimated propagation angle α with respect to the ground truth when different numbers of tai are used in the estimation. As one would expect, 2 tai yield the largest mean error, 3.03°, and 32 tai give the smallest, 0.15°.

Fig. 5. Experimental results of the propagation angle estimation. (a) Angle error resulting from using different numbers of tai for the estimation of α. (b) Calculated propagation time t with respect to arrival time ta at different propagation angles. (c) The variation of measured full width at half maximum for a laser pulse with respect to its propagation angle α, caused by the relativistic effects.


Figure 5(b) shows the relationship between the measured arrival time ta and the actual propagation time t for different propagation angles. Figure 5(c) demonstrates how the relativistic effects distort the measured pulse width of a laser pulse. The variation of the pulse width is indistinguishable when the propagation angle is between −6° and 6°, due to the temporal discretization of the SPAD camera, whose time bin is 55 ps. Nevertheless, the experimental results are in good agreement with the theoretical curve. Furthermore, the results yield a measured full width at half-maximum (FWHM) pulse width of 65 ps after deconvolving the systematic impulse response function from the Gaussian-fitted data, which is close to the 68 ps pulse width given in the laser manual.
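A note on the deconvolution step: if both the recorded histogram and the systematic impulse response function (IRF) are modeled as Gaussians, deconvolution reduces to subtracting FWHMs in quadrature. The sketch below assumes that approximation, with illustrative numbers, rather than reproducing the paper's exact procedure.

```python
import math

def deconvolved_fwhm(fwhm_measured_ps, fwhm_irf_ps):
    """Pulse FWHM after removing a Gaussian IRF in quadrature."""
    return math.sqrt(fwhm_measured_ps**2 - fwhm_irf_ps**2)

# e.g., a measured FWHM of 85 ps with a 55 ps IRF gives ~65 ps
# (hypothetical measured value chosen for illustration)
```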

4.2 Light-in-Flight Reconstruction

A second experiment is then performed to reconstruct light-in-flight in a 3D space of 375 mm × 75 mm × 50 mm, where the pulses are emitted from a laser and reflected by two mirrors to generate three consecutive light paths across the FOV of the SPAD camera. In the experiment, a 40 mm distance is kept between the FOV and any optical elements (e.g., the laser source and the mirrors) in order to avoid spurious scattering of light into the measurement. The emitting point of the pulsed laser is selected as the origin (0, 0, 0) of the x–y–z coordinate system for the calculation. The object focal plane of the SPAD camera is set to the x–y plane at z = 0. Using the same configuration as before, the SPAD camera records the observer-dependent (x, y, ta) information of the three light paths inside the FOV. The reconstruction of light-in-flight is performed by sequentially determining the light paths from the laser emitting point to Mirror 1, to Mirror 2, and then to the exit point, following the procedure given in the previous section.

Figure 6(a) shows the reconstructed propagation of the laser pulse in x–y–z space, overlaid onto a photograph of the experimental setup. The instantaneous positions (x, y, z) of the laser pulse along the path are reconstructed with an accuracy of 1.75 mm RMSE with respect to the ground truth in a 3D space of 375 mm × 75 mm × 50 mm. The propagation times t of the light pulse are estimated with an accuracy of 3.84 ps, determined as the difference between the ground truth and the propagation times t estimated using Eq. (2). This 3.84 ps accuracy is far smaller than the 55 ps time resolution of the SPAD camera. The reason for this improvement lies in the fact that the inaccuracy caused by the discrete temporal measurements and the experimental noise is suppressed during the estimation of each propagation angle α, which involves 32 measured arrival times ta rather than one. The full evolution of the laser pulse propagation can be found in Visualization 1. The FWHM of the propagating laser pulse, obtained by deconvolving the systematic impulse response function from the Gaussian-fitted data, is approximately 70 ps, consistent with the specification of the laser.

Fig. 6. Experimental 4D reconstruction of light-in-flight. (a) A reconstruction of a laser pulse reflected by two mirrors is demonstrated. The RMSEs of the reconstruction (red line) to the ground truth (dashed line) in position and time are 1.75 mm and 3.84 ps, respectively. (b) The difference between the calculated propagation time t (red line) and measured arrival time ta (blue line) at each recorded frame. The propagation time is in good agreement with the ground truth (dashed line), demonstrating a feasible compensation for the relativistic effects via the proposed scheme.


Figure 6(b) shows the difference between the calculated propagation time t (red line) and the measured arrival time ta (blue line) at each recorded frame (55 ps time interval) of the SPAD camera, where the arrival time ta has been offset so that it starts at 0 ps in the first frame. The measured arrival time ta is successfully compensated into the observer-independent propagation time t, which is in good agreement with the ground truth (dashed line). The temporal RMSE with respect to the ground truth is improved from 174.80 ps to 3.84 ps.

5. DISCUSSION AND CONCLUSION

The estimation of the propagation angle is crucial to the light-in-flight reconstruction in this work, and we have demonstrated accurate estimation of propagation angles from −10° to 10°. Theoretically, the proposed approach can be used to estimate any angle strictly between −90° and 90°. Practically, there are two major aspects to consider when measuring a steep angle: noise and diffusion. Regarding noise, the reconstruction becomes more accurate at steeper angles, because the relativistic effect is more pronounced and the difference between the measured data of two adjacent pixels is easier to record above the noise; an accurate reconstruction at a shallower angle is more difficult to achieve because the milder distortion can easily be drowned in the systematic noise. Regarding diffusion, a steep forward angle increases the signal-to-noise ratio of the measured data, while a steep backward angle decreases it.

The proposed method assumes that the light-in-flight to be reconstructed occurs in a uniform, homogeneous medium, in which light propagates in a straight line at a fixed velocity. However, it is also possible to reconstruct self-bending light beams, such as Airy beams [27], in a differential manner. That is, the self-bending light path of the Airy beam can be viewed as a combination of many tiny straight paths, each of which can then be reconstructed individually by the proposed method.

The position error of a reconstructed light path is mainly determined by the estimation accuracy of the propagation angle and by the light path projection recorded on the SPAD camera. The estimation accuracy of the propagation angle can be further improved by taking more measurements or by using a camera with lower noise. The accuracy of the recorded light path projection is limited by the pixel resolution (32 × 32) and fill factor (1.5%) of the SPAD camera. In particular, when a light pulse propagates in a quasi-horizontal x direction, small variations in the y direction cannot be spatially resolved by the SPAD camera; this error accumulates as the light pulse propagates and degrades the resulting accuracy of the reconstruction. A newly developed SPAD camera with 256 × 256 pixel resolution and 61% fill factor [28] would improve the reconstruction accuracy of the proposed light-in-flight imaging system. A backside-illuminated multi-collection-gate silicon sensor can also be used for light-in-flight imaging [29], providing a higher fill factor, a larger photoreceptive area, and higher spatial resolution, with a temporal resolution of currently 10 ns, although its sensitivity is not as good as that of a SPAD camera. However, the ultimate temporal-resolution limit of these cameras implies that sub-ns temporal resolution could be achievable in the future, thus allowing precise light-in-flight measurements with just a single laser shot, as shown in the proof-of-concept work by Etoh et al. [29].

In summary, we have proposed a computational imaging scheme that reconstructs light-in-flight in observer-independent 4D (x, y, z, t) by recording the scattered photons of the propagating light with a SPAD camera and compensating the relativistic effects via an optical-model-based computational layer. The relativistic effects in this context refer to the spatiotemporal distortion caused by the fact that the speed of light must be treated as finite in certain scenarios, such as transient imaging. The estimation of the light propagation angle α, which is crucial to the 4D light-in-flight reconstruction, has a mean error of 0.15° over the range from −10° to 10°. In the reconstruction of light-in-flight in a 3D space of 375 mm × 75 mm × 50 mm, the temporal accuracy is improved from the 174.80 ps of the distorted arrival time to the 3.84 ps of the compensated propagation time. The spatial accuracy of the reconstruction is 1.75 mm, which is better than both the 8 mm transverse spatial resolution determined by the optical setup of the system and the 16.5 mm longitudinal spatial resolution determined by the 55 ps time resolution of the SPAD camera. The improvement is mainly achieved by the accurate estimation of the propagation angle α, in which the random noise and the inaccuracy of the discrete measurements are suppressed by an estimation involving multiple measurements. The accurately estimated propagation angle can be further exploited to correct other distorted measurements.
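As a quick consistency check of the quoted resolution figures, assuming the 245 mm field of view across the 32-pixel array stated in Section 2:

```latex
\[
  \Delta_{\text{transverse}} \approx \frac{245~\text{mm}}{32} \approx 7.7~\text{mm} \;(\approx 8~\text{mm}),
  \qquad
  \Delta_{\text{longitudinal}} = c\,\Delta t \approx 0.3~\text{mm/ps} \times 55~\text{ps} \approx 16.5~\text{mm}.
\]
```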

The proposed 4D imaging scheme is applicable to the reconstruction of light-in-flight in other circumstances, such as light traveling inside a cavity or interacting with other materials. This work provides the ability to expand the recording and measurement of repeatable ultra-fast events with extremely weak scattering from 3D to 4D. It can also be applied to observe optical phenomena that pose difficulties for other imaging schemes, e.g., the behavior of light in micro- or nanostructures and the interaction between light and matter.

References

[1] L. Gao, J. Liang, C. Li, L. V. Wang. Single-shot compressed ultrafast photography at one hundred billion frames per second. Nature, 2014, 516: 74-77.

[2] T. Gorkhover, S. Schorb, R. Coffee, M. Adolph, L. Foucar, D. Rupp, A. Aquila, J. D. Bozek, S. W. Epp, B. Erk, L. Gumprecht, L. Holmegaard, A. Hartmann, R. Hartmann, G. Hauser, P. Holl, A. Hömke, P. Johnsson, N. Kimmel, K.-U. Kühnel, M. Messerschmidt, C. Reich, A. Rouzée, B. Rudek, C. Schmidt, J. Schulz, H. Soltau, S. Stern, G. Weidenspointner, B. White, J. Küpper, L. Strüder, I. Schlichting, J. Ullrich, D. Rolles, A. Rudenko, T. Möller, C. Bostedt. Femtosecond and nanometre visualization of structural dynamics in superheated nanoparticles. Nat. Photonics, 2016, 10: 93-97.

[3] M. B. Bouchard, B. R. Chen, S. A. Burgess, E. M. C. Hillman. Ultra-fast multispectral optical imaging of cortical oxygenation, blood flow, and intracellular calcium dynamics. Opt. Express, 2009, 17: 15670-15678.

[4] C.-M. Liu, T. Wong, E. Wu, R. Luo, S.-M. Yiu, Y. Li, B. Wang, C. Yu, X. Chu, K. Zhao, R. Li, T.-W. Lam. SOAP3: ultra-fast GPU-based parallel alignment tool for short reads. Bioinformatics, 2012, 28: 878-879.

[5] J. Liang, L. V. Wang. Single-shot ultrafast optical imaging. Optica, 2018, 5: 1113-1127.

[6] J. A. Giordmaine, P. M. Rentzepis, S. L. Shapiro, K. W. Wecht. Two-photon excitation of fluorescence by picosecond light pulses. Appl. Phys. Lett., 1967, 11: 216-218.

[7] N. Abramson. Light-in-flight recording by holography. Opt. Lett., 1978, 3: 121-123.

[8] T. L. Cocker, D. Peller, P. Yu, J. Repp, R. Huber. Tracking the ultrafast motion of a single molecule by femtosecond orbital imaging. Nature, 2016, 539: 263-267.

[9] K. Goda, K. K. Tsia, B. Jalali. Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena. Nature, 2009, 458: 1145-1149.

[10] K. Nakagawa, A. Iwasaki, Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, I. Sakuma. Sequentially timed all-optical mapping photography (STAMP). Nat. Photonics, 2014, 8: 695-700.

[11] T. Kakue, K. Tosa, J. Yuasa, T. Tahara, Y. Awatsuji, K. Nishio, S. Ura, T. Kubota. Digital light-in-flight recording by holography by use of a femtosecond pulsed laser. IEEE J. Sel. Top. Quantum Electron., 2012, 18: 479-485.

[12] A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, R. Raskar. Femto-photography: capturing and visualizing the propagation of light. ACM Trans. Graph., 2013, 32: 44.

[13] L. Zhu, Y. Chen, J. Liang, Q. Xu, L. Gao, C. Ma, L. V. Wang. Space- and intensity-constrained reconstruction for compressed ultrafast photography. Optica, 2016, 3: 694-697.

[14] F. Heide, M. B. Hullin, J. Gregson, W. Heidrich. Low-budget transient imaging using photonic mixer devices. ACM Trans. Graph., 2013, 32: 45.

[15] A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, R. Raskar. Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles. ACM Trans. Graph., 2013, 32: 167.

[16] C. Niclass, M. Gersbach, R. Henderson, L. Grant, E. Charbon. A single photon avalanche diode implemented in 130-nm CMOS technology. IEEE J. Sel. Top. Quantum Electron., 2007, 13: 863-869.

[17] D. Bronzi, F. Villa, S. Tisa, A. Tosi, F. Zappa, D. Durini, S. Weyers, W. Brockherde. 100000 frames/s 64 × 32 single-photon detector array for 2-D imaging and 3-D ranging. IEEE J. Sel. Top. Quantum Electron., 2014, 20: 354-363.

[18] J. M. Hill, B. J. Cox. Einstein’s special relativity beyond the speed of light. Proc. R. Soc. A, 2012, 468: 4174-4192.

[19] A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, E. Lawson, C. Joshi, D. Gutierrez, M. G. Bawendi, R. Raskar. Relativistic ultrafast rendering using time-of-flight imaging. ACM SIGGRAPH 2012, 2012: paper 41.

[20] M. Laurenzis, J. Klein, E. Bacher, N. Metzger. Multiple-return single-photon counting of light in flight and sensing of non-line-of-sight objects at shortwave infrared wavelengths. Opt. Lett., 2015, 40: 4815-4818.

[21] M. Clerici, G. C. Spalding, R. Warburton, A. Lyons, C. Aniculaesei, J. M. Richards, J. Leach, R. Henderson, D. Faccio. Observation of image pair creation and annihilation from superluminal scattering sources. Sci. Adv., 2016, 2: e1501691.

[22] N. Abramson. Light-in-flight recording 3: compensation for optical relativistic effects. Appl. Opt., 1984, 23: 4007-4014.

[23] A. Jarabo, B. Masia, A. Velten, C. Barsi, R. Raskar, D. Gutierrez. Relativistic effects for time-resolved light transport. Comput. Graph. Forum., 2015, 34: 1-12.

[24] G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, D. Faccio. Single-photon sensitive light-in-fight imaging. Nat. Commun., 2015, 6: 6021.

[25] M. Laurenzis, J. Klein, E. Bacher. Relativistic effects in imaging of light in flight with arbitrary paths. Opt. Lett., 2016, 41: 2001-2004.

[26] M. Laurenzis, J. Klein, E. Bacher, N. Metzger, F. Christnacher. Sensing and reconstruction of arbitrary light-in-flight paths by a relativistic imaging approach. Proc. SPIE, 2016, 9988: 998804.

[27] G. A. Siviloglou, J. Broky, A. Dogariu, D. N. Christodoulides. Observation of accelerating Airy beams. Phys. Rev. Lett., 2007, 99: 213901.

[28] I. Gyongy, N. Calder, A. Davies, N. A. W. Dutton, R. R. Duncan, C. Rickman, P. Dalgarno, R. K. Henderson. A 256 × 256, 100-kfps, 61% fill-factor SPAD image sensor for time-resolved microscopy applications. IEEE Trans. Electron. Devices, 2018, 65: 547-554.

[29] T. G. Etoh, T. Okinaka, Y. Takano, K. Takehara, H. Nakano, K. Shimonomura, T. Ando, N. Ngo, Y. Kamakura, V. T. S. Dao, A. Q. Nguyen, E. Charbon, C. Zhang, P. De Moor, P. Goetschalckx, L. Haspeslagh. Light-in-flight imaging by a silicon image sensor: toward the theoretical highest frame rate. Sensors, 2019, 19: 2247.

Yue Zheng, Ming-Jie Sun, Zhi-Guang Wang, Daniele Faccio. Computational 4D imaging of light-in-flight with relativistic effects. Photonics Research, 2020, 8(7): 07001072.
