Chinese Optics Letters, 2016, 14 (12): 121501, Published Online: Aug. 2, 2018  

Deep-sky image live stacking via star descriptor

Haiyang Zhou, Yunzhi Yu

Author Affiliations
1 State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
2 Computer Science Department, University of Southern California, 941 Bloom Walk, Los Angeles, CA 90089, USA
Abstract
Image registration is an old topic, but it has found a new application in the deep-sky imaging field: live stacking. In this Letter, we propose a live stacking algorithm based on star detection, description, and matching. A thresholding method based on Otsu and centralization is proposed for star detection. Then, a translation and rotation invariant descriptor is proposed to provide accurate feature matching. Extensive experiments demonstrate that our proposed method is effective for deep-sky image live stacking.

Consumer astronomy photography involves deep-sky and planetary imaging. For deep-sky imaging, long exposure times are necessary because of the dim light from the scene. However, an increased exposure time brings more hot-pixel noise and distortion. The alternative is image stacking, an old topic with a novel application in the deep-sky imaging field. There are two methods of live stacking: average and additive stacking. Average stacking aims to improve the signal-to-noise ratio, while additive stacking is designed to produce, from many relatively short exposures, a preview that would otherwise require one long exposure. Offline deep-sky stacking software already exists, such as DeepSkyStacker[1], RegiStax[2], and PixInsight[3]. Since these are offline tools, the user cannot preview the stacked result while photographing. Recently, SharpCap[4] introduced a new function named live stacking, which stacks multiple frames and lets users preview the result immediately. However, it still depends heavily on user-set parameters. No matter which stacking method is implemented in the above-mentioned software, accurate registration between images is always a prerequisite of stacking.

Image registration is the process of overlaying two or more images of the same scene taken at different times[5]. It is required in many fields, including remote sensing, laser radar, and aerial optical imagery[6,7], multi-focus, microscopic, and fluoroscopic image registration[8–11], optical stabilization[12], and other 2D and 3D applications[13–16]. Image registration in deep-sky image live stacking mainly handles the translation and rotation deformations caused by the vibrations of imaging devices and the Earth's rotation.

Among image registration methods, feature-based registration is the current focus of research; we refer the reader to several surveys for an overview[5,17,18]. A standard feature-based image registration method consists of feature detection, feature description, feature matching, transformation estimation, and image resampling. Among these, feature detection, description, and matching are the critical steps whose accuracy determines the final registration result. Compared with edges, corners, endpoints, intersections, and other features, stars serve as relatively ideal features for deep-sky images.

The most commonly used star detection method is thresholding, where every pixel that fulfills a certain intensity, area, or distance criterion is considered a star pixel. Several thresholding-related star detection algorithms have been developed in recent years. Cristo et al.[19] proposed a novel thresholding method for detecting stars automatically; however, it introduces a set of image pre-processing and post-processing models, so it is not simple enough to apply in live situations. Wang et al.[20] proposed a fast onboard star extraction algorithm, but it is essentially a bright-star extraction algorithm and cannot be applied to our dim deep-sky image stacking application. Arbabmir et al.[21] proposed a thresholding method that performs image binarization properly even for images with uneven illumination; apart from its iterative nature, its performance depends on the size of the local window, which in turn depends on the optical design of the camera, and this limits its applicability. Xu et al.[22] described a weighted threshold algorithm based on estimating the optimal threshold for minimal centroid error; as an iterative method, its computational efficiency is hard to guarantee without a proper initial threshold. Meanwhile, consumer software such as RegiStax, SharpCap, and PixInsight still requires the user to input the threshold manually. Although PixInsight implements a wavelet-based automatic star detection algorithm, its detection criteria are still controlled through process parameters. In short, existing methods suffer from either computational cost or parameter dependence. In this Letter, an automatic and efficient thresholding technique based on Otsu and centralization is proposed. It can process deep-sky images even in the presence of illumination differences. In the proposed method, the threshold is first calculated by Otsu's method and then centralized to make it feasible for uneven intensities.

In order to match features between different images, a feature descriptor is necessary. In computer vision, several feature descriptors, such as SIFT, SURF, and KAZE, have been proposed for complicated applications. The rotation invariance of these descriptors comes from local statistics of either gradients or Haar wavelet responses. However, for the star features used in deep-sky images, this rotation invariance may fail because of a star's special shape, where the intensity decreases similarly in all directions. To handle this problem and make the star feature rotation invariant, we propose a translation and rotation invariant descriptor. This descriptor relies on a star's own characteristics and its spatial relationship with nearby stars. The best candidate star in the deformed image for each star in the reference image can then be obtained efficiently by calculating the Euclidean distance.

Figure 1 shows the flow diagram of our proposed algorithm. Besides the pre-processing and post-processing steps, our approach involves three core components: star detection, star description, and star matching. Since our algorithm is designed for live stacking, after pre-processing, such as subtracting the dark, bias, and flat-field frames, each frame is sent to the stack thread, which works asynchronously with the grab thread. The stack thread then registers the input frames to the reference frame one by one, and the latest stacked result is used as the new reference frame.

Fig. 1. Flow diagram of our proposed algorithm.


First, stars should be extracted as accurately as possible and in sufficient numbers; otherwise, the following steps will produce errors and may even cause registration failure. In this Letter, we propose a thresholding method based on Otsu and centralization. The well-known Otsu[23] method thresholds a given image by maximizing the inter-class variance. Given an image with $L$ gray levels and $N$ total pixels, the inter-class variance for a candidate threshold $t$ is

$$\sigma(t) = w_A(\mu - \mu_A)^2 + w_B(\mu - \mu_B)^2.$$

The means $\mu$, $\mu_A$, and $\mu_B$ are defined as

$$\mu = \sum_{i=1}^{L} i \cdot P_i, \qquad \mu_A = \frac{\sum_{i=1}^{t} i \cdot P_i}{w_A}, \qquad \mu_B = \frac{\sum_{i=t+1}^{L} i \cdot P_i}{w_B},$$

and the probabilities $P_i$, $w_A$, and $w_B$ are defined as

$$P_i = \frac{N_i}{N}, \qquad w_A = \sum_{i=1}^{t} P_i, \qquad w_B = \sum_{i=t+1}^{L} P_i.$$
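For illustration, a minimal NumPy sketch of this computation follows; it scans all candidate thresholds with cumulative sums (gray levels are indexed from 0 here rather than from 1 as above). It reproduces the standard Otsu threshold, which is equivalent to what cv2.threshold with the THRESH_OTSU flag returns; it is not the authors' implementation.

```python
import numpy as np

def otsu_threshold(image: np.ndarray, levels: int = 256) -> int:
    """Return the gray level t maximizing the inter-class variance
    sigma(t) = w_A * (mu - mu_A)^2 + w_B * (mu - mu_B)^2."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()                  # P_i = N_i / N
    i = np.arange(levels)
    mu = (p * i).sum()                     # global mean
    w_a = np.cumsum(p)                     # w_A for every candidate t
    w_b = 1.0 - w_a                        # w_B = 1 - w_A
    s_a = np.cumsum(p * i)                 # running sum of i * P_i
    with np.errstate(divide="ignore", invalid="ignore"):
        mu_a = s_a / w_a                   # class-A mean
        mu_b = (mu - s_a) / w_b            # class-B mean
        sigma = w_a * (mu - mu_a) ** 2 + w_b * (mu - mu_b) ** 2
    return int(np.nanargmax(sigma))        # empty classes yield NaN
```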

The main drawback of Otsu is that it may fail for deep-sky images with uneven illumination; this commonly occurs when the imaging device is exposed to undesirable light sources, and it is a problem shared by all global thresholding methods. We modify the Otsu threshold by introducing centralization, which makes it applicable to uneven background intensities and noise. This improvement is achieved by

$$T_c = M_f - t_{\mathrm{mean}} + t_{\mathrm{otsu}},$$

where $M_f$ represents the mean-filtered image matrix, $t_{\mathrm{mean}}$ denotes the mean value of the filtered image, and $t_{\mathrm{otsu}}$ is the global threshold identified by Otsu's method. $T_c$ is the centralized pixel-wise threshold matrix, which contains a threshold for every pixel of the image. The binary image is then obtained simply by applying the threshold matrix to the input frame. Figure 2 shows a thresholding example using an M42 image. As shown in Fig. 2, compared with Otsu, our result achieves higher detection accuracy, even in the nebula regions.
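The centralized threshold can be sketched in a few lines of Python/OpenCV, assuming an 8-bit grayscale frame; the mean-filter window size below is an illustrative choice, not a value specified in this Letter.

```python
import cv2
import numpy as np

def centralized_threshold(frame: np.ndarray, window: int = 31) -> np.ndarray:
    """Binarize an 8-bit grayscale frame with the per-pixel threshold
    T_c = M_f - t_mean + t_otsu."""
    m_f = cv2.blur(frame, (window, window))         # mean-filtered image M_f
    t_mean = float(m_f.mean())                      # t_mean
    t_otsu, _ = cv2.threshold(frame, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    t_c = m_f.astype(np.float32) - t_mean + t_otsu  # threshold matrix T_c
    return (frame.astype(np.float32) > t_c).astype(np.uint8) * 255
```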

Fig. 2. Comparison of thresholding results: (a) Original M42 image. (b) Otsu result. (c) Our result.


For stars, the feature is a speckle rather than a point. Therefore, we need to extract each star's region before proceeding further. To extract the star regions, we apply the Moore-Neighbor tracing algorithm modified with Jacob's stopping criterion.
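In an OpenCV-based sketch, cv2.findContours can stand in for this step; note that it implements Suzuki-Abe border following rather than Moore-Neighbor tracing, and the minimum-area filter below is an illustrative noise rejection, not a value from this Letter.

```python
import cv2
import numpy as np

def extract_star_contours(binary: np.ndarray, min_area: float = 4.0):
    """Trace the boundary of every bright region in the binary image
    (OpenCV >= 4 return convention) and drop tiny noise regions."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]
```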

The state-of-the-art algorithms used in deep-sky image stacking describe a star only by its position and brightness, which is not sufficient to achieve rotation invariance. To keep the registration rotation invariant, they rely on triangle similarity during star matching, so they are not only heavily reliant on memory to store the formed triangles but also computationally expensive. We therefore propose a descriptor that not only includes a star's own characteristics, such as position and brightness, but also takes its spatial relationship with nearby stars into consideration. The idea of using geometrical relationships has been proven effective by Shi et al.[24], who proposed a topology-based affine invariant descriptor. Their topology is constructed among regions, and the number of relative neighboring regions is not fixed; its computational cost may therefore grow when processing deep-sky images, because a star may have a large number of relative neighboring pairs. Thus, a new descriptor tailored to deep-sky images is needed. Because of imaging system artifacts and atmospheric turbulence, a star mostly looks like an ellipse rather than an ideal circle. In our implementation, since we have already extracted the contours of stars, we simply fit the best ellipse to each contour using the direct ellipse fit method[25]. The ellipse's semi-major axis $a$, semi-minor axis $b$, center coordinates $(x_c, y_c)$, and rotation angle $\theta$ are then easily obtained from the coefficients of the standard ellipse function.
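As an illustrative sketch, cv2.fitEllipseDirect (OpenCV >= 3.4) implements the direct least-squares ellipse fit of Fitzgibbon et al.[25] and can supply these parameters:

```python
import cv2

def fit_star_ellipses(contours):
    """Fit each star contour with an ellipse via the direct least-squares
    method of Fitzgibbon et al. [25]; at least five points are required."""
    ellipses = []
    for c in contours:
        if len(c) >= 5:
            (xc, yc), (d1, d2), theta = cv2.fitEllipseDirect(c)
            a, b = max(d1, d2) / 2.0, min(d1, d2) / 2.0  # semi-axes
            ellipses.append((xc, yc, a, b, theta))       # theta in degrees
    return ellipses
```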

Then, we introduce two orientations for each star to make it rotation invariant. As shown in Fig. 3, for the current star A, B and C are the two nearest neighboring stars. Connecting A with its two neighbors forms two line segments, AB and AC, whose lengths are $L_1$ and $L_2$, respectively. We denote the angles from the major-axis direction $x$ to the line segments AB and AC as $\theta_1$ and $\theta_2$. Finally, we add $L_1$, $\theta_1$, $L_2$, and $\theta_2$ to the current star's descriptor, which is thus formed as $\{x_c, y_c, a, b, \theta, L_1, \theta_1, L_2, \theta_2\}$.
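A sketch of this construction follows, building on the fit_star_ellipses output above; the exact angle convention is our reading of Fig. 3 rather than a specification from this Letter.

```python
import numpy as np

def build_descriptors(ellipses):
    """Form {x_c, y_c, a, b, theta, L1, theta1, L2, theta2} for every star
    from its two nearest neighbors; needs at least three stars."""
    centers = np.array([(e[0], e[1]) for e in ellipses])
    descriptors = []
    for k, (xc, yc, a, b, theta) in enumerate(ellipses):
        d = np.hypot(centers[:, 0] - xc, centers[:, 1] - yc)
        d[k] = np.inf                         # exclude the star itself
        neighbors = np.argsort(d)[:2]         # two nearest stars B and C
        feats = []
        for j in neighbors:
            # angle of the segment A->neighbor, measured against the
            # major-axis direction theta (both in degrees)
            seg = np.degrees(np.arctan2(centers[j, 1] - yc,
                                        centers[j, 0] - xc))
            feats += [d[j], (seg - theta) % 360.0]   # L_i, theta_i
        descriptors.append([xc, yc, a, b, theta] + feats)
    return np.array(descriptors)
```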

Fig. 3. Schematic diagram of the rotation invariant descriptor.


Figure 4 shows our description results for two deep-sky images with a minor rotation between them. The white arrows denote the orientations, and their lengths are the distances between neighboring stars. With the dominant orientation taken into consideration, the relationships between the stars are preserved perfectly between the two images. Compared with SIFT and SURF, our descriptor is only an 8-vector descriptor, so it is efficient to compute and can be matched by the simplest Euclidean distance. The proposed descriptor also makes our algorithm perform much more efficiently than methods based on triangle similarity.
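Matching then reduces to a nearest-neighbor search. In the sketch below, the distance is computed over the rotation-invariant entries of the descriptor (the semi-axes, segment lengths, and relative angles); which entries enter the distance, and any rescaling between length and angle entries, is our reading rather than a specification from this Letter.

```python
import numpy as np

def match_stars(desc_ref: np.ndarray, desc_cur: np.ndarray) -> np.ndarray:
    """For each reference star, return the index of the best candidate
    star in the deformed frame by Euclidean distance."""
    inv = [2, 3, 5, 6, 7, 8]          # a, b, L1, theta1, L2, theta2
    a = desc_ref[:, inv]
    b = desc_cur[:, inv]
    # pairwise distances, shape (n_ref, n_cur)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```

In practice the length and angle entries may need rescaling so that no single entry dominates the distance.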

Fig. 4. Description results using our proposed star descriptor.


In the final step of star matching, we apply random sample consensus (RANSAC)[26] to reject mismatches and to calculate the transformation matrix used in the post-processing procedures.
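In OpenCV, this step can be sketched with cv2.estimateAffinePartial2D, which combines RANSAC mismatch rejection with estimation of a rotation-plus-translation (plus uniform scale) transform; the reprojection threshold below is an illustrative choice, not the authors' setting.

```python
import cv2
import numpy as np

def estimate_transform(ref_pts, cur_pts):
    """RANSAC-filtered estimate of the 2x3 transform mapping the current
    frame's matched star centroids onto the reference frame's centroids."""
    M, inliers = cv2.estimateAffinePartial2D(
        np.float32(cur_pts), np.float32(ref_pts),
        method=cv2.RANSAC, ransacReprojThreshold=1.0)
    return M, inliers

# The recovered matrix then resamples the frame onto the reference grid:
#   aligned = cv2.warpAffine(frame, M, (width, height))
```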

As mentioned earlier, the mainstream SIFT and SURF descriptors cannot be directly applied in our technique, because the dominant orientation they compute is not robust enough to keep the descriptor rotation invariant. This is mainly due to two region-size choices: the region used for assigning the dominant orientation and the region used for describing the feature. For star features, different region sizes produce different matching results. Ruiz-del-Solar et al.[27] applied the SIFT descriptor to stellar image matching and introduced an additional verification procedure to discard false detections; with its help, they achieved acceptable results. In other words, setting the additional procedure aside, the defect of the SIFT descriptor itself remains.

To illustrate the superiority of our descriptor, we compare it with SIFT and SURF on two real deep-sky images captured by a Canon EOS 1100D with a 50 mm lens. For a fair comparison, the features used by all three descriptors are the star centroids detected by our star detection algorithm, and the region size for SIFT and SURF is the size of the largest star contour. Figure 5 shows the results. Because their dominant orientation assignment is inaccurate for star features, both SIFT and SURF generate many mismatches, while our descriptor achieves a sufficient number of correctly matched pairs.

Fig. 5. Matching results using SIFT, SURF, and our descriptor. (a) Star detection results. (b) SIFT result. (c) SURF result. (d) Our result.


To further illustrate the uncertainty that different region sizes bring into feature matching, we plot the percentage of correct matches against the region size in Fig. 6. The horizontal axis denotes the region size, while the vertical axis denotes the percentage of correct matches. As shown in Fig. 6, both the SIFT and SURF results vary dramatically as the region size increases. Among the three descriptors, ours obtains the best matching result. Although SURF eventually reaches a relatively good and stable result that approaches ours, it still requires expert experience to choose a region size that achieves it.

Fig. 6. Matching ratio results among SIFT, SURF, and our descriptor.


Evaluating registration quality and accuracy is a necessary part of image registration because, without quantitative evaluation, no registration method can be accepted for practical use[5]. Therefore, we evaluate our descriptor against SIFT and SURF on ten image pairs, randomly rotated (1°–90°) and translated (1–10 pixels), using the root-mean-squared error (RMSE) and joint entropy[28]. The results are shown in Table 1. For both measures, the smaller the calculated value, the better the registration. This quantitative evaluation also facilitates the following analysis of the influence of registration accuracy on live stacking. As shown in Table 1, our descriptor obtains smaller RMSE and joint entropy values than SIFT and SURF overall, which indicates a higher registration accuracy. For offline registration, or when only a small number of images must be registered, SIFT and SURF can achieve acceptable results. However, in deep-sky image live stacking, every intermediate averaged or stacked image is assigned as the new reference frame for the next registration. The error produced by inaccurate registration accumulates as images are averaged or stacked continuously and finally leads to registration failure, so a higher registration accuracy is required.

Table 1. Quantitative Registration Evaluation Against SIFT and SURF (Ten Image Pairs)

RMSE
  SIFT  0.8880  0.4051  0.0161  0.1150  0.7101  0.8650  0.4309  0.9072  0.4314  0.9771
  SURF  0.1475  0.2306  0.0152  0.0704  0.4595  0.7986  0.0199  0.5854  0.3452  0.7318
  Ours  0.0106  0.0180  0.0143  0.0455  0.0224  0.0281  0.0127  0.0380  0.0307  0.0230

Joint Entropy
  SIFT  9.3339  9.3715  8.3369  9.3600  9.4503  9.3734  7.0749  9.3636  9.3717  9.8534
  SURF  9.3354  9.3711  7.3365  9.3588  9.3185  9.3651  6.2628  9.3414  9.3663  7.2840
  Ours  6.0073  6.2474  6.6522  8.0960  6.8046  7.0414  6.1535  7.0749  7.4858  6.8305
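For reference, both measures in Table 1 can be computed with a short NumPy sketch: the RMSE of the intensity difference, and a standard joint entropy estimate over a 2D intensity histogram, after Goshtasby[28]. This is illustrative, not the authors' code.

```python
import numpy as np

def rmse(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Root-mean-squared intensity error between a registered pair."""
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def joint_entropy(img_a: np.ndarray, img_b: np.ndarray,
                  bins: int = 256) -> float:
    """Joint entropy (bits) of a registered pair; lower values indicate
    better alignment."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty histogram cells
    return float(-np.sum(p * np.log2(p)))
```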

In our final experiment, where the preview resolution is 1920×1200, the computation cost for one frame is kept below 0.5 s, which is acceptable for deep-sky live stacking. Although this efficiency is already high enough to ensure a live process, we still introduce a buffering mechanism in the implementation to avoid the calculation falling behind the frame rate at very short exposures. Every grabbed frame is saved in the buffering pool, while the stack thread picks frames from the pool one by one; the two threads work asynchronously. If the frame rate is high and the image registration cannot keep up with it, the oldest frames in the buffering pool are dropped and the newly arrived frames are added. This guarantees that the latest frame is always stacked into the preview result.
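A minimal sketch of such a drop-oldest buffer between the two threads follows; the pool size is an illustrative choice, and register_and_stack is a hypothetical callback standing in for the registration and stacking steps.

```python
import queue

pool: queue.Queue = queue.Queue(maxsize=8)   # buffering pool (size assumed)

def on_frame_grabbed(frame):
    """Runs in the grab thread: keep the newest frames, dropping the
    oldest when the pool is full."""
    if pool.full():
        try:
            pool.get_nowait()                # drop the oldest buffered frame
        except queue.Empty:                  # stack thread emptied it first
            pass
    pool.put(frame)

def stack_loop(register_and_stack):
    """Runs in the stack thread: consume buffered frames one by one."""
    while True:
        frame = pool.get()                   # blocks until a frame arrives
        register_and_stack(frame)            # hypothetical: register + stack
```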

Figure 7 shows the live stacking results for M51. The M51 frames are captured with a 15-s exposure and a gain setting of 10. Figure 7(a) is the original frame without any stacking; due to the dim light, it shows nothing except several bright points. As the number of stacked frames increases, the details of M51 emerge gradually, unaffected by the translation and rotation between frames. The black edges in Fig. 7(d) show that translation and rotation between frames do exist: pixels of the reference frame that have no counterpart in the current frame are set to zero.

Fig. 7. Live stacking results while photographing M51. (a) No stack. (b) Three frames stacked. (c) Six frames stacked. (d) Nine frames stacked.


Figure 8 shows two ten-frame-stacked M92 images, with and without alignment enabled, where every frame is captured with a 5-s exposure and a gain setting of 10. Both stacked results in Fig. 8 reveal more stars and achieve a long-exposure effect. However, Fig. 8(b) shows clear star trails introduced by the equatorial mount's inaccurate compensation for the Earth's rotation, while the stars remain sharp in Fig. 8(a). This means that image registration is needed even at short exposure times, and our live stacking method accurately registers every input frame and successfully eliminates the effect of this inaccurate compensation.

Fig. 8. Comparison results using ten-frames-stacked M92 images and their zoomed views. (a) Result with alignment enabled. (b) Result with alignment disabled.


In conclusion, we propose a live stacking algorithm based on star detection, description, and matching. Our star detection overcomes, through centralization, the uneven-illumination problem faced by traditional global thresholding methods. The major contribution of our work is the proposed descriptor, which makes up for the defect of traditional descriptors that cannot provide reliable registration for deep-sky images. The new star descriptor has lower memory requirements and is better suited to deep-sky image live stacking. The experiments indicate that our proposed live stacking algorithm achieves a long-exposure effect with translation and rotation invariant features.

References

[1] L. Coiffier, "DeepSkyStacker," http://deepskystacker.free.fr/english/ (2016).

[2] C. Berrevoets, "RegiStax," http://www.astronomie.be/registax/ (2016).

[3] J. Conejero, "PixInsight," https://pixinsight.com/ (2016).

[4] R. Glover, "SharpCap," http://www.sharpcap.co.uk/ (2016).

[5] B. Zitová and J. Flusser, Image Vision Comput. 21, 977 (2003).

[6] J. Chen, H. Feng, K. Pan, Z. Xu, and Q. Li, Optik - Int. J. Light Electron Opt. 125, 697 (2014).

[7] A. Mastin, J. Kepner, and J. Fisher, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR, 2009), p. 2639.

[8] C.-W. Wang, S.-M. Ka, and A. Chen, Sci. Rep. 4, 6050 (2014).

[9] Y. Liu and F. Yu, Opt. Commun. 341, 101 (2015).

[10] W. Bingjian, L. Quan, L. Yapeng, L. Fan, B. Liping, L. Gang, and L. Rui, Appl. Opt. 50, 1861 (2011).

[11] S. Liu and H. Hua, Opt. Express 19, 353 (2011).

[12] Q. Yang, J. Zhang, K. Nozato, K. Saito, D. R. Williams, A. Roorda, and E. A. Rossi, Biomed. Opt. Express 5, 3174 (2014).

[13] M. Pohit and J. Sharma, Appl. Opt. 54, 4514 (2015).

[14] X. Dong, Y. Zheng, S. Bai, W. Xu, and X. Huang, Chin. Opt. Lett. 12, 121002 (2014).

[15] P. Cao, Y. Yang, C. Li, H. Chai, Y. Li, S. Xie, and D. Liu, Chin. Opt. Lett. 13, 041102 (2015).

[16] X. Fan, C. Zhou, S. Wang, C. Li, and B. Yang, Chin. Opt. Lett. 14, 081101 (2016).

[17] T. Tuytelaars and K. Mikolajczyk, Found. Trends Comput. Graph. Vision 3, 177 (2008).

[18] R. Szeliski, Found. Trends Comput. Graph. Vision 2, 1 (2006).

[19] A. Cristo, A. Plaza, and D. Valencia, in IEEE International Symposium on Signal Processing and Information Technology (ISSPIT, 2008), p. 180.

[20] T. Wang, Y. Qiu, H. Cai, and J. Deng, Sci. China Phys. Mech. Astron. 53, 51 (2010).

[21] M. V. Arbabmir, S. M. Mohammadi, S. Salahshour, and F. Somayehee, J. Opt. Soc. Am. A 31, 794 (2014).

[22] W. Xu, Q. Li, H.-J. Feng, Z.-H. Xu, and Y.-T. Chen, Optik - Int. J. Light Electron Opt. 124, 4673 (2013).

[23] N. Otsu, Automatica 11, 23 (1975).

[24] C. Shi, G. Wang, X. Lin, Y. Wang, C. Liao, and Q. Miao, in 2010 IEEE International Conference on Image Processing (2010), p. 133.

[25] A. Fitzgibbon, M. Pilu, and R. B. Fisher, IEEE Trans. Pattern Anal. Mach. Intell. 21, 476 (1999).

[26] M. A. Fischler and R. C. Bolles, Commun. ACM 24, 381 (1981).

[27] J. Ruiz-del-Solar, P. Loncomilla, and P. Zorzi, in Progress in Pattern Recognition, Image Analysis and Applications: 13th Iberoamerican Congress on Pattern Recognition (CIARP 2008), J. Ruiz-Shulcloper and W. G. Kropatsch, eds. (Springer, 2008), p. 618.

[28] A. A. Goshtasby, Image Registration: Principles, Tools and Methods (Springer Science & Business Media, 2012).
