State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China
Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Optoelectronic Engineering, Shenzhen University, Shenzhen, Guangdong 518060, China
Abstract
Three-dimensional (3D) scanning based on optical principles captures the spatial shape of an object with an optical system and thereby acquires its 3D information. The technology offers non-contact operation, high precision, and high resolution. To our knowledge, structured-light 3D scanning achieves an accuracy of up to 0.01 mm, producing point clouds containing millions of points at working distances below 1 m. Texture reconstruction further conveys the color, material, and other appearance information of the scanned objects and improves the realism of the reconstructed models. Owing to camera errors and variations in the illumination environment, texture mapping readily produces seams, blurring, and ghosting in the texture images. By introducing the camera model, we derive the relationship between a 3D space point and its two-dimensional (2D) image, and then analyze the causes of texture artifacts. Methods for eliminating artifacts in texture reconstruction are reviewed, and their advantages and limitations are summarized. Finally, in view of the shortcomings of current texture reconstruction, the development trends of texture reconstruction methods for colored 3D models are discussed.
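The relationship between a 3D space point and its 2D image mentioned in the abstract is the standard pinhole camera model: a rigid transform into the camera frame followed by perspective division and intrinsic scaling. A minimal sketch, in which the focal lengths, principal point, and test point are illustrative assumptions rather than values from any system discussed here:

```python
import math

def project(X, f=(800.0, 800.0), c=(320.0, 240.0), R=None, t=(0.0, 0.0, 0.0)):
    """Project 3D point X to 2D pixel coordinates with the pinhole model:
    rigid transform into the camera frame, then perspective division and
    intrinsic scaling."""
    if R is None:  # identity rotation by default
        R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    # Rigid transform: Xc = R @ X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective division and intrinsics: u = fx*Xc/Zc + cx, v = fy*Yc/Zc + cy
    u = f[0] * Xc[0] / Xc[2] + c[0]
    v = f[1] * Xc[1] / Xc[2] + c[1]
    return (u, v)

# A point at (0.125, 0.25, 1.0) in the camera frame lands at pixel (420, 440)
print(project((0.125, 0.25, 1.0)))  # -> (420.0, 440.0)
```

Texture mapping evaluates exactly this projection for every mesh vertex, so small errors in R, t, or the intrinsics displace the projected pixel and are one source of the seams and ghosting analyzed in this review.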
Key words three-dimensional sensing; texture reconstruction; Markov random field; composite-weight texture blending; color correction
Causes of artifacts in texture images. (a) Camera parameter errors; (b) low model accuracy; (c) inconsistent illumination environment; (d) texture aliasing
Fig. 4. Composite-weight schematic. From left to right: angle weight, depth weight, border weight, and composite weight
Fig. 5. Bidirectional similarity (BDS) function diagram. Left: source image; right: target image. s1, t1 and s2, t2 are the two pairs of patches between the source and target images with the minimum Euclidean distance, respectively
Fig. 7. Influence of illumination disagreement on the method in Ref. [73]. Each row gives the source texture image and the target texture image in one experiment; the illumination disagreement increases gradually from top to bottom
Fig. 9. Seam levelling on a circumference, with function values shown as heights above the circumference. (a) Original function; (b) levelling function; (c) sum of the original function and the levelling function (minus a constant)
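The composite weight of Fig. 4 combines an angle (direction) weight, a depth weight, and a border weight per pixel. A minimal sketch, assuming a simple multiplicative combination; the exact weighting functions, exponents, and ramp widths differ between the published methods and are illustrative here:

```python
import math

def angle_weight(cos_theta, alpha=2.0):
    # Favor views that look at the surface head-on (cos_theta = n . v)
    return max(0.0, cos_theta) ** alpha

def depth_weight(depth, d_min=0.4, d_max=1.0):
    # Favor pixels imaged near the close end of the working range
    if depth <= d_min:
        return 1.0
    if depth >= d_max:
        return 0.0
    return (d_max - depth) / (d_max - d_min)

def border_weight(dist_to_border, ramp=20.0):
    # Fade out near the image border so view transitions stay smooth
    return min(1.0, dist_to_border / ramp)

def composite_weight(cos_theta, depth, dist_to_border):
    return angle_weight(cos_theta) * depth_weight(depth) * border_weight(dist_to_border)

# Weight of one candidate view for a pixel seen at 0.3 rad, depth 0.6 m,
# 10 px from the image border
w = composite_weight(math.cos(0.3), 0.6, 10.0)
```

When a texel is seen by several views, the blended color is the weight-normalized average of the per-view colors, which is what suppresses visible transitions between cameras.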
Sutherland I E, Sproull R F, Schumacker R A. A characterization of ten hidden-surface algorithms [J]. 1974, 6(1): 1-55.
The paper asserts that the hidden-surface problem is mainly one of sorting. The various surfaces of an object to be shown in hidden-surface or hidden-line form must be sorted to find out which ones are visible at various places on the screen. Surfaces may be sorted by lateral position in the picture (XY), by depth (Z), or by other criteria. The paper shows that the order of sorting and the types of sorting used form differences among the existing hidden-surface algorithms. (Modified author abstract)
3. Levoy M, Pulli K, Curless B, et al. 2000.
4. An Y T, Zhang S. Three-dimensional absolute shape measurement by combining binary statistical pattern matching with phase-shifting methods [J]. 2017, 56(19): 5418-5426.
This paper presents a novel method that leverages the stereo geometric relationship between projector and camera for absolute phase unwrapping on a standard one-projector and one-camera structured light system. Specifically, we use only one additional binary random image and the epipolar geometric constraint to generate a coarse correspondence map between projector and camera images. The coarse correspondence map is further refined by using the wrapped phase as a constraint. We then use the refined correspondence map to determine a fringe order for absolute phase unwrapping. Experimental results demonstrated the success of our proposed method.
5. Li B W, Zhang S. Superfast high-resolution absolute 3D recovery of a stabilized flapping flight process [J]. 2017, 25(22): 27270-27282.
Scientific research of a stabilized flapping flight process (e.g. hovering) has been of great interest to a variety of fields including biology, aerodynamics, and bio-inspired robotics. Different from the current passive photogrammetry based methods, the digital fringe projection (DFP) technique has the capability of performing dense superfast (e.g. kHz) 3D topological reconstructions with the projection of defocused binary patterns, yet it is still a challenge to measure a flapping flight process with the presence of rapid flapping wings. This paper presents a novel absolute 3D reconstruction method for a stabilized flapping flight process. Essentially, the slow motion parts (e.g. body) and the fast-motion parts (e.g. wings) are segmented and separately reconstructed with phase shifting techniques and the Fourier transform, respectively. The topological relations between the wings and the body are utilized to ensure absolute 3D reconstruction. Experiments demonstrate the success of our computational framework by testing a flapping wing robot at different flapping speeds.
6. Hyun J S, Chiu G T C, Zhang S. High-speed and high-accuracy 3D surface measurement using a mechanical projector [J]. 2018, 26(2): 1474-1487.
This paper presents a method to achieve high-speed and high-accuracy 3D surface measurement using a custom-designed mechanical projector and two high-speed cameras. We developed a computational framework that can achieve absolute shape measurement in sub-pixel accuracy through: 1) capturing precisely phase-shifted fringe patterns by synchronizing the cameras with the projector; 2) generating a rough disparity map between two cameras by employing a standard stereo-vision method using texture images with encoded statistical patterns; and 3) utilizing the wrapped phase as a constraint to refine the disparity map. The projector can project binary patterns at a speed of up to 10,000 Hz, and the camera can capture the required number of phase-shifted fringe patterns within 1/10,000 second, and thus 3D shape measurement can be realized at rates as high as 10,000 Hz regardless of the number of phase-shifted fringe patterns required for one 3D reconstruction. Experimental results demonstrated the success of our proposed method.
7. Izadi S, Kim D, Hilliges O, et al. 2011.
8. Chen D G. [P]. 2018-03-27 [2018-04-10].
9. Cui H N, Shen S H, Hu Z Y. Global fusion of generalized camera model for efficient large-scale structure from motion [J]. 2017, 60.
10. Dong Q L, Shu M, Cui H N, et al. Learning stratified 3D reconstruction [J]. 2018, 61.
11. Wang J L, Lu Y H, Liu J B, et al. A robust three-stage approach to large-scale urban scene recognition [J]. 2017, 60.
12. Zhou L, Zhu S Y, Shen T W, et al. Progressive large scale-invariant image matching in scale space [J]. 2017.
13. Zhang R Z, Zhu S Y, Fang T, et al. Distributed very large scale bundle adjustment by global camera consensus [J]. 2017.
14. Su X Y, Zhang Q C, Chen W J. Three-dimensional imaging based on structured illumination [J]. 2014, 41(2).
15. Lu M T, Su X Y, Cao Y P, et al. 3D shape reconstruction algorithms for modulation measuring profilometry with synchronous scanning [J]. 2016, 43(3).
Structured-light 3D measurement based on triangulation achieves high accuracy, but the angle between the projection axis and the viewing axis can cause occlusion and shadowing during measurement, which must be resolved by measuring and stitching from two or more directions. Unlike triangulation, 3D shape measurement based on modulation analysis adopts a vertical measurement principle in which the projection and viewing axes coincide, thereby avoiding the shadowing and occlusion limitations of triangulation-based optical 3D sensing. A 3D shape reconstruction algorithm for modulation measuring profilometry with continuous phase shifting and vertical scanning is studied, and the characteristics of this type of scanned structured-light fringe are analyzed. On this basis, several synchronous-scanning algorithms for extracting the modulation and reconstructing the 3D shape are introduced and their characteristics compared. Experiments show that with an appropriate 3D shape reconstruction algorithm, a depth measurement range of 115 mm can be achieved in the vertical measurement mode; for a test plane of 120 mm × 120 mm, the standard deviation reaches 0.19 mm.
16. Jing H L, Su X Y, You Z S. Uniaxial three-dimensional shape measurement with multioperation modes for different modulation algorithms [J]. 2017, 56(3).
A uniaxial three-dimensional shape measurement system with multioperation modes for different modulation algorithms is proposed. To provide a general measurement platform that satisfies the specific measurement requirements in different application scenarios, a measuring system with multioperation modes based on modulation measuring profilometry (MMP) is presented. Unlike the previous solutions, vertical scanning by focusing control of an electronic focus (EF) lens is implemented. The projection of a grating pattern is based on a digital micromirror device, which means fast phase-shifting with high precision. A field programmable gate array-based master control center board acts as the coordinator of the MMP system; it harmonizes the workflows, such as grating projection, focusing control of the EF lens, and fringe pattern capture. Fourier transform, phase-shifting technique, and temporal Fourier transform are used for modulation analysis in different operation modes. The proposed system features focusing control, speed, programmability, compactness, and availability. This paper details the principle of MMP for multioperation modes and the design of the proposed system. The performances of different operation modes are analyzed and compared, and a work piece with steep holes is measured to verify this multimode MMP system.
17. Zhou P, Zhu J P, Su X Y, et al. Three-dimensional shape measurement using color random binary encoding pattern projection [J]. 2017, 56(10).
Acquiring the three-dimensional (3-D) surface geometry of objects with a full-frame resolution is of great concern in many applications. This paper reports a 3-D measurement scheme based on single-frame pattern projection in the combination of random binary encoding and color encoding. Three random binary encoding patterns generated by a computer embedded in three channels of a color pattern lead to a color binary encoding pattern. Two color cameras with a stereo-vision arrangement simultaneously capture the measured scene under the proposed encoding structured illumination. From captured images, three encoding images are extracted and analyzed using the extended spatial-temporal correlation algorithm for 3-D reconstruction. Theoretical explanation and analysis concerning the encoding principle and reconstruction algorithm, followed by experiments for reconstructing 3-D geometry of stationary and dynamic scenes, show the feasibility and practicality of the proposed method.
18. Liu S Q, Zhong J G, Ma X, et al. Embossed imaging technology based on phase-shifting structured light illumination [J]. 2017, 38(3): 392-399.
19. Zhao M L, Ma X, Zhang Z B, et al. Three-dimensional shape absolute measurement based on laser speckles [J]. 2016, 43(2).
20. Zhang Z B, Zhong J G. Three-dimensional single-pixel imaging with far fewer measurements than effective image pixels [J]. 2016, 41(11): 2497-2500.
Typical single-pixel imaging techniques inherently consume a large number of measurements to reconstruct a high-quality and high-resolution image. Three-dimensional (3-D) single-pixel imaging with both high sampling efficiency and high depth accuracy remains a challenge. We implement fringe projection virtually by exploiting Helmholtz reciprocity. Depth information is modulated into a deformed fringe pattern whose Fourier spectrum is sampled by using sinusoidal intensity pattern illumination and single-pixel detection. The fringe pattern has a highly focused first-order component in its Fourier spectrum, which allows us to efficiently acquire the depth information from measurements far fewer than illumination pattern pixels. The 3-D information is retrieved through Fourier analysis. We experimentally obtained a 3-D reconstruction of a complex object with 599×599 effective pixels, achieving a measurement-to-pixel ratio of 5.78%. The depth accuracy is evaluated at sub-millimetric level by using a test object.
21. He J Y, Liu X L, Peng X, et al. Integer pixel correlation searching for three-dimensional digital speckle based on gray constraint [J]. 2017, 44(4).
22. Cai Z W, Liu X L, Li A M, et al. Phase-3D mapping method developed from back-projection stereovision model for fringe projection profilometry [J]. 2017, 25(2): 1262-1277.
Two major methods for 3D reconstruction in fringe projection profilometry, phase-height mapping and stereovision, have their respective problems: the former has low-flexibility in practical application due to system restrictions and the latter requires time-consuming homogenous points searching. Given these limitations, we propose a phase-3D mapping method developed from back-projection stereovision model to achieve flexible and high-efficient 3D reconstruction for fringe projection profilometry. We showed that all dimensional coordinates (X, Y, and Z), but not just the height coordinate (Z), of a measured point can be mapped from phase through corresponding rational functions directly and independently. To determine the phase-3D mapping coefficients, we designed a flexible two-step calibration strategy. The first step, ray reprojection calibration, is to determine the stereovision system parameters; the second step, sampling-mapping calibration, is to fit the mapping coefficients using the calibrated stereovision system parameters. Experimental results demonstrated that the proposed method was suitable for flexible and high-efficient 3D reconstruction that eliminates practical restrictions and dispenses with the time-consuming homogenous point searching.
23. Cai Z W, Liu X L, Peng X, et al. Universal phase-depth mapping in a structured light field [J]. 2018, 57(1): A26-A32.
Technologies and devices for light field imaging have recently been developed for both industrial applications and scientific research to achieve excellent imaging properties. In our previous work, we combined light field imaging with structured illumination to propose a structured light field method in which multidirectional depth estimation can be performed for high-quality 3D imaging. However, the projection axis was implicitly assumed to be perpendicular to the reference plane, which is hard to meet in practice. In this paper, we derive a universal phase-depth mapping in a structured light field by relaxing this implicit condition. Both nonlinear and linear models were proposed based on this universal relationship. To test the model's practical performance, we simulated experiments by adding errors to the real measured values to evaluate the deviation in depth estimation. By comparing the root-mean-square distributions of the depth deviations with respect to the depth positions, we demonstrated that the nonlinear model was precise and consistent in a wide range of depth, and we employed this model to realize high-quality multidirectional scene reconstruction.
24. Cai Z W, Liu X L, Peng X, et al. Ray calibration and phase mapping for structured-light-field 3D reconstruction [J]. 2018, 26(6): 7598-7613.
In previous work, we presented a structured light field (SLF) method combining light field imaging with structured illumination to perform multi-view depth measurement. However, the previous work just accomplishes depth rather than 3D reconstruction. In this paper, we propose a novel active method involving ray calibration and phase mapping, to achieve SLF 3D reconstruction. We performed the ray calibration for the first time to determine each light field ray with metric spatio-angular parameters, making the SLF realize multi-view 3D reconstruction. Based on the ray parametric equation, we further derived the phase mapping in the SLF that spatial coordinates can be directly mapped from phase. A flexible calibration strategy was correspondingly designed to determine mapping coefficients for each light field ray, achieving high-efficiency SLF 3D reconstruction. Experimental results demonstrated that the proposed method was suitable for high-efficiency multi-view 3D reconstruction in the SLF.
25. Catmull E E. A subdivision algorithm for computer display of curved surfaces [D]. 1974.
26. Blinn J F, Newell M E. Texture and reflection in computer generated images [J]. 1976, 19(10): 542-547.
27. Bier E, Sloan K R. Two-part texture mappings [J]. 1986, 6(9): 40-53.
28. Debevec P E, Taylor C J, Malik J. 1996.
29. Debevec P, Yu Y Z, Borshukov G. Efficient view-dependent image-based rendering with projective texture-mapping [J]. 1998.
30. Pulli K, Abi-Rached H, Duchamp T, et al. Acquisition and visualization of colored 3D objects [J]. 1998.
31. Hartley R, Zisserman A. Multiple view geometry in computer vision [J]. 2004, 30(9/10): 1865-1872.
32. Brown D C. Close-range camera calibration [J]. 1971, 37(8): 855-866.
33. Franken T, Dellepiane M, Ganovelli F, et al. Minimizing user intervention in registering 2D images to 3D models [J]. 2005, 21(8/9/10): 619-628.
This paper proposes a novel technique to speed up the registration of 2D images to 3D models. This problem often arises in the process of digitalization of real objects, because pictures are often taken independently from the 3D geometry. Although there are a number of methods for solving the problem of registration automatically, they all need some further assumptions, so in the most general case the process still requires the user to provide some information about how the image corresponds to geometry, for example providing point-to-point correspondences. We propose a method based on a graph representation where the nodes represent the 2D photos and the 3D object, and arcs encode correspondences, which are either image–to–geometry or image–to–image point pairs. This graph is used to infer new correspondences from the ones specified by the user and from successful alignment of single images and to factually encode the state of the registration process. After each action performed by the user, our system explores the states space to find the shortest path from the current state to a state where all the images are aligned, i.e. a final state and, therefore, guides the user in the selection of further alignment actions for a faster completion of the job. Experiments on empirical data are reported to show the effectiveness of the system in reducing the user workload considerably.
34. Liu L, Stamos I. Automatic 3D to 2D registration for the photorealistic rendering of urban scenes [J]. 2005.
35. Neugebauer P J, Klein K. Texturing 3D models of real world objects from multiple unregistered photographic views [J]. 1999, 18(3): 245-256.
As the efficiency of computer graphic rendering methods is increasing, generating realistic models is now becoming a limiting factor. In this paper we present a new technique to enhance already existing geometry models of real world objects with textures reconstructed from a sparse set of unregistered still photographs. The aim of the proposed technique is the generation of nearly photo-realistic models of arbitrarily shaped objects with minimal effort. In our approach, we require neither a prior calibration of the camera nor a high precision of the user's interaction. Two main problems have to be addressed of which the first is the recovery of the unknown positions and parameters of the camera. An initial estimate of the orientation is calculated from interactively selected point correspondences. Subsequently, the unknown parameters are accurately calculated by minimising a blend of objective functions in a 3D-2D projective registration approach. The key point of the proposed method of registration is a novel filtering approach which utilises the spatial information provided by the geometry model. Second, the individual images have to be combined yielding a set of consistent texture maps. We present a robust method to recover the texture from the photographs thereby preserving high spatial frequencies and eliminating artifacts, particularly specular highlights. Parts of the object not seen in any of the photographs are interpolated in the textured model. Results are shown for three complex example objects with different materials and numerous self-occlusions.
36. Ikeuchi K, Nakazawa A, Hasegawa K, et al. The great Buddha project: modeling cultural heritage for VR systems through observation [J]. 2003.
37. Yang G, Becker J, Stewart C V. Estimating the location of a camera with respect to a 3D model [J]. 2007.
38. Wu C, Clipp B, Li X, et al. 3D model matching with viewpoint-invariant patches (VIP) [J]. 2008.
39. Besl P J, McKay N D. Method for registration of 3-D shapes [J]. 1992, 14(2): 239-256.
The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces.
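The ICP loop summarized above alternates closest-point search with a closed-form rigid alignment until the mean-square distance stops improving. A minimal 2D sketch (the point sets, iteration count, and brute-force matching are illustrative; production implementations use k-d trees and an SVD-based 3D solve):

```python
import math

def icp_2d(src, dst, iters=10):
    """Align 2D point set src to dst: nearest-neighbor correspondences,
    then the closed-form 2D rigid (Kabsch) update, repeated."""
    pts = list(src)
    for _ in range(iters):
        # 1) Closest-point correspondences (brute force)
        pairs = [(p, min(dst, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2))
                 for p in pts]
        # 2) Closed-form rigid transform for the matched pairs
        n = len(pairs)
        mx = sum(p[0] for p, _ in pairs) / n
        my = sum(p[1] for p, _ in pairs) / n
        nx = sum(q[0] for _, q in pairs) / n
        ny = sum(q[1] for _, q in pairs) / n
        a_cos = sum((p[0] - mx) * (q[0] - nx) + (p[1] - my) * (q[1] - ny) for p, q in pairs)
        a_sin = sum((p[0] - mx) * (q[1] - ny) - (p[1] - my) * (q[0] - nx) for p, q in pairs)
        ang = math.atan2(a_sin, a_cos)
        ca, sa = math.cos(ang), math.sin(ang)
        # 3) Rotate about the source centroid, translate onto the target centroid
        pts = [(ca * (x - mx) - sa * (y - my) + nx,
                sa * (x - mx) + ca * (y - my) + ny) for x, y in pts]
    return pts
```

With a small initial misalignment the nearest neighbors are already the true correspondences, so a single update recovers the rigid transform; larger offsets are what make ICP converge only to the nearest local minimum, as the abstract notes.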
40. Xiong F G, Huo W, Han X, et al. Removal method of mismatching keypoints in 3D point cloud [J]. 2018, 38(2).
41. Steinbrücker F, Sturm J, Cremers D. Real-time visual odometry from dense RGB-D images [J]. 2011.
42. Matsushita K, Kaneko T. Efficient and handy texture mapping on 3D surfaces [J]. 1999, 18(3): 349-358.
There has been a rapid technical progress in three-dimensional (3D) computer graphics. But gathering surface and texture data is yet a laborious task. This paper addresses the problem of mapping photographic images on the surface of a 3D object whose geometric data are already known. We propose an efficient and handy method for acquiring textures and mapping them precisely on the surface, employing a digital camera alone. We describe an algorithm for selecting a minimal number of camera positions that can cover the entire surface of a given object and also an algorithm to determine camera's position and direction for each photograph taken so as to paste it to the corresponding surfaces precisely. We obtained a matching accuracy within a pixel on a surface through three experimental examples, by which the practicability of our method is demonstrated.
43. Dellepiane M, Scopigno R. Global refinement of image-to-geometry registration for color projection [J]. 2013.
44. Zhou Q Y, Koltun V. Color map optimization for 3D reconstruction with consumer depth cameras [J]. 2014, 33(4).
We present a global optimization approach for mapping color images onto geometric reconstructions. Range and color videos produced by consumer-grade RGB-D cameras suffer from noise and optical distortions, which impede accurate mapping of the acquired color data to the reconstructed geometry. Our approach addresses these sources of error by optimizing camera poses in tandem with non-rigid correction functions for all images. All parameters are optimized jointly to maximize the photometric consistency of the reconstructed mapping. We show that this optimization can be performed efficiently by an alternating optimization algorithm that interleaves analytical updates of the color map with decoupled parameter updates for all images. Experimental results demonstrate that our approach substantially improves color mapping fidelity.
45. Zhang F, Huang H, Zhang Z, et al. High precision texture reconstruction for 3D sculpture model [J]. 2012, XXXIX-B5: 139-143.
46. Walkowski F, Johnston R A, Price N B. Texture mapping for the fastSCAN™ hand-held laser scanner [J]. 2009.
47. Pagés R, Berjón D, Morán F, et al. Seamless, static multi-texturing of 3D meshes [J]. 2015, 34(1): 228-238.
In the context of 3D reconstruction, we present a static multi-texturing system yielding a seamless texture atlas calculated by combining the colour information from several photos from the same subject covering most of its surface. These pictures can be provided by shooting just one camera several times when reconstructing a static object, or a set of synchronized cameras, when dealing with a human or any other moving object. We suppress the colour seams due to image misalignments and irregular lighting conditions that multi-texturing approaches typically suffer from, while minimizing the blurring effect introduced by colour blending techniques. Our system is robust enough to compensate for the almost inevitable inaccuracies of 3D meshes obtained with visual hull-based techniques: errors in silhouette segmentation, inherently bad handling of concavities, etc.
48. Niem W, Broszio H. Mapping texture from multiple camera views onto 3D-object models for computer animation [J]. 1995.
49. Callieri M, Cignoni P, Scopigno R. Reconstructing textured meshes from multiple range RGB maps [J]. 2002.
50. Rocchini C, Cignoni P, Montani C, et al. Acquiring, stitching and blending diffuse appearance attributes on 3D models [J]. 2002, 18(3): 186-204.
51. Lempitsky V, Ivanov D. Seamless mosaicing of image-based texture maps [J]. 2007.
52. Allene C, Pons J P, Keriven R. Seamless image-based texture atlases using multi-band blending [J]. 2008.
53. Gal R, Wexler Y, Ofek E, et al. Seamless montage for texturing models [J]. 2010, 29(2): 479-486.
We present an automatic method to recover high-resolution texture over an object by mapping detailed photographs onto its surface. Such high-resolution detail often reveals inaccuracies in geometry and registration, as well as lighting variations and surface reflections. Simple image projection results in visible seams on the surface. We minimize such seams using a global optimization that assigns compatible texture to adjacent triangles. The key idea is to search not only combinatorially over the source images, but also over a set of local image transformations that compensate for geometric misalignment. This broad search space is traversed using a discrete labeling algorithm, aided by a coarse-to-fine strategy. Our approach significantly improves resilience to acquisition errors, thereby allowing simple and easy creation of textured models for use in computer graphics.
54. Waechter M, Moehrle N, Goesele M. Let there be color! Large-scale texturing of 3D reconstructions [J]. 2014, 8693: 836-850.
55. Jiang H Q, Wang B S, Zhang G F, et al. High-quality texture mapping for complex 3D scenes [J]. 2015, 38(12): 2349-2360.
56. Shu J, Liu Y G, Li J, et al. Rich and seamless texture mapping to 3D mesh models [J]. 2016, 634: 69-76.
57. Li M, Zhang W L, Fan D Y. Automatic texture optimization for 3D urban reconstruction [J]. 2017, 46(3): 338-345.
Eisemann M, Decker B D, Magnor M, et al. Floating textures [J]. 2008, 27(2): 409-418.
62. Dellepiane M, Marroquim R, Callieri M, et al. Flow-based local optimization for image-to-geometry projection [J]. 2012, 18(3): 463-474.
The projection of a photographic data set on a 3D model is a robust and widely applicable way to acquire appearance information of an object. The first step of this procedure is the alignment of the images on the 3D model. While any reconstruction pipeline aims at avoiding misregistration by improving camera calibrations and geometry, in practice a perfect alignment cannot always be reached. Depending on the way multiple camera images are fused on the object surface, remaining misregistrations show up either as ghosting or as discontinuities at transitions from one camera view to another. In this paper we propose a method, based on the computation of Optical Flow between overlapping images, to correct the local misalignment by determining the necessary displacement. The goal is to correct the symptoms of misregistration, instead of searching for a globally consistent mapping, which might not exist. The method scales up well with the size of the data set (both photographic and geometric) and is quite independent of the characteristics of the 3D model (topology cleanliness, parametrization, density). The method is robust and can handle real world cases that have different characteristics: low level geometric details and images that lack enough features for global optimization or manual methods. It can be applied to different mapping strategies, such as texture or per-vertex attribute encoding.
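The flow-based correction above estimates a displacement field between overlapping images and uses it to warp away local ghosting. As a heavily simplified stand-in for optical flow, a single integer displacement between two overlapping patches can be recovered by exhaustive block matching (the image size, search radius, and synthetic test data are illustrative assumptions):

```python
import random

def best_shift(src, dst, max_d=3):
    """Exhaustively find the integer shift (dy, dx) that minimizes the
    mean squared difference over the overlapping region of two images."""
    h, w = len(src), len(src[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-max_d, max_d + 1):
        for dx in range(-max_d, max_d + 1):
            cost, n = 0.0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        cost += (src[y][x] - dst[yy][xx]) ** 2
                        n += 1
            if n and cost / n < best_cost:
                best, best_cost = (dy, dx), cost / n
    return best

# A synthetic overlap: dst is src shifted down 1 row and right 2 columns
random.seed(0)
src = [[random.random() for _ in range(8)] for _ in range(8)]
dst = [[src[y - 1][x - 2] if y >= 1 and x >= 2 else 0.0 for x in range(8)]
       for y in range(8)]
print(best_shift(src, dst))  # -> (1, 2)
```

Optical flow generalizes this to a dense, sub-pixel displacement per pixel, which is what lets the cited method correct the symptoms of misregistration locally without requiring a globally consistent mapping.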
63. Horn B K P, Schunck B G. Determining optical flow [J]. 1981, 17(1/2/3): 185-203.
64. Lensch H P A, Heidrich W, Seidel H P. Automated texture registration and stitching for real world models [J]. 2000.
65. Rocchini C, Cignoni P, Montani C, et al. Multiple textures stitching and blending on 3D objects [J]. 1999.
66. Baumberg A. 2002.
67. Wang L, Kang S B, Szeliski R, et al. Optimal texture map reconstruction from multiple views [J]. 2001.
68. Bernardini F, Martin I M, Rushmeier H. High-quality texture reconstruction from multiple scans [J]. 2001, 7(4): 318-332.
The creation of three-dimensional digital content by scanning real objects has become common practice in graphics applications for which visual quality is paramount, such as animation, e-commerce, and virtual museums. While a lot of attention has been devoted recently to the problem of accurately capturing the geometry of scanned objects, the acquisition of high-quality textures is equally important, but not as widely studied. In this paper, we focus on methods to construct accurate digital models of scanned objects by integrating high-quality texture and normal maps with geometric data. These methods are designed for use with inexpensive, electronic camera-based systems in which low-resolution range images and high-resolution intensity images are acquired. The resulting models are well-suited for interactive rendering on the latest-generation graphics hardware with support for bump mapping. Our contributions include new techniques for processing range, reflectance, and surface normal data, for image-based registration of scans, and for reconstructing high-quality textures for the output digital object.
69. Callieri M, Cignoni P, Corsini M, et al. Masked photo blending: mapping dense photographic data set on high-resolution sampled 3D models [J]. 2008, 32(4): 464-473.
The technological advance of sensors is producing an exponential size growth of the data coming from 3D scanning and digital photography. The production of digital 3D models consisting of tens or even hundreds of millions of triangles is quite easy nowadays; at the same time, using high-resolution digital cameras it is also straightforward to produce a set of pictures of the same real object totalling more than 50M pixel. The problem is how to manage all this data to produce 3D models that could fit the interactive rendering constraints. A common approach is to go for mesh parametrization and texture synthesis, but finding a parametrization for such large meshes and managing such large textures can be prohibitive. Moreover, digital photo sampling produces highly redundant data; this redundancy should be eliminated while mapping to the 3D model but, at the same time, should also be efficiently used to improve the sampled data coherence and the appearance representation accuracy. In this paper we present an approach where a multivariate blending function weights all the available pixel data with respect to geometric, topological and colorimetric criteria. The blending approach proposed is efficient, since it mostly works independently on each image, and can be easily extended to include other image quality estimators. The resulting weighted pixels are then selectively mapped on the geometry, preferably by adopting a multiresolution per-vertex encoding to make profitable use of all the data available and to avoid the texture size bottleneck. Some practical examples on complex data sets are presented.
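To make the weighting idea concrete, here is a minimal sketch of blending the color samples that one surface point receives from several photos. The function name, inputs, and weight values are illustrative only; the paper's masks are per-pixel maps combining geometric, topological and colorimetric criteria rather than a single scalar per view.

```python
import numpy as np

def blend_samples(colors, weights):
    """Blend the color samples one surface point receives from several
    photos, each weighted by its estimated quality (e.g. viewing angle,
    focus, distance). A simplified sketch, not the paper's API."""
    w = np.asarray(weights, dtype=float)
    c = np.asarray(colors, dtype=float)
    # Weighted average over views: sum_i w_i * c_i / sum_i w_i
    return (w[:, None] * c).sum(axis=0) / w.sum()

# Three views see the same point; the frontal, in-focus view dominates.
samples = [[200, 80, 60], [190, 90, 70], [120, 40, 30]]
quality = [0.9, 0.7, 0.1]
blended = blend_samples(samples, quality)
```

Because each pixel's weight is computed independently per image, the blending parallelizes trivially, which is what lets the approach scale to very large photo sets.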
70
Liu X M, Liu X L, Yin Y K, et al. Texture blending of 3D photo-realistic model [J]. 2012, 24(11): 1440-1446.
71
Jiang C S, Christie D, Paudel D P, et al. High quality reconstruction of dynamic objects using 2D-3D camera fusion [J]. 2017, 17597138.
72
Ma L, Do L, Bondarev E, et al. 3D colored model generation based on multiview textures and triangular mesh [J]. 2013, 14197187.
73
Bi S, Kalantari N K, Ramamoorthi R. Patch-based optimization for image-based texture mapping [J]. 2017, 36(4): 1-11.
Image-based texture mapping is a common way of producing texture maps for geometric models of real-world objects. Although a high-quality texture map can be easily computed for accurate geometry and calibrated cameras, the quality of the texture map degrades significantly in the presence of inaccuracies. In this paper, we address this problem by proposing a novel global patch-based optimization system to synthesize the aligned images. Specifically, we use patch-based synthesis to reconstruct a set of photometrically-consistent aligned images by drawing information from the source images. Our optimization system is simple, flexible, and more suitable for correcting large misalignments than other techniques such as local warping. To solve the optimization, we propose a two-step approach which involves patch search and vote, and reconstruction. Experimental results show that our approach can produce high-quality texture maps, better than existing techniques, for objects scanned by consumer depth cameras such as Intel RealSense. Moreover, we demonstrate that our system can be used for texture editing tasks such as hole-filling and reshuffling as well as multiview camouflage.
74
Simakov D, Caspi Y, Shechtman E, et al. Summarizing visual data using bidirectional similarity [J]. 2008, 10140146.
75
Wang Z, Bovik A C, Sheikh H R, et al. Image quality assessment: from error visibility to structural similarity [J]. 2004, 13(4): 600-612.
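Since SSIM is often used to score candidate texture patches against a reference, a simplified sketch may help. This computes one global SSIM value per image pair (the paper applies the index in a local sliding window and averages the scores), using the paper's default constants K1 = 0.01 and K2 = 0.03:

```python
import numpy as np

def ssim(x, y, data_range=255.0):
    """Simplified global SSIM (Wang et al. 2004), computed over the
    whole image instead of a sliding Gaussian window."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast/structure term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1, and the score decreases as structural agreement drops, which is why SSIM correlates with perceived quality better than plain per-pixel error.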
76
Xu S C, Ye X Z, Wu Y, et al. Highlight detection and removal based on chromaticity [J]. 2005, 3626: 199-206.
77
Hoiem D. Single-image shadow detection and removal using paired regions [J]. 2011, 12218867.
78
Birsak M, Musialski P, Arikan M, et al. Seamless texturing of archaeological data [J]. 2013, 14143729.
79
Heindl C, Akkaladevi S C, Bauer H. Photorealistic texturing of human busts reconstructions [J]. 2016.
80
Velho L, Júnior J S. Projective texture atlas construction for 3D photography [J]. 2007, 23(9-11): 621-629.
The use of attribute maps for 3D surfaces is an important issue in geometric modeling, visualization and simulation. Attribute maps describe various properties of a surface that are necessary in applications. In the case of visual properties, such as color, they are also called texture maps.
Usually, the attribute representation exploits a parametrization g: U ⊂ ℝ² → ℝ³ of a surface in order to establish a two-dimensional domain where attributes are defined. However, it is not possible, in general, to find a global parametrization without introducing distortions into the mapping. For this reason, an atlas structure is often employed. The atlas is a set of charts defined by a piecewise parametrization of a surface, which allows local mappings with small distortion.
81
Chuang M, Luo L, Brown B J, et al. Estimating the Laplace-Beltrami operator by restricting 3D functions [J]. 2010, 28(5): 1475-1484.
82
Dessein A, Smith W A P, Wilson R C, et al. Seamless texture stitching on a 3D mesh by Poisson blending in patches [J]. 2015, 14884131.
83
Pan R J, Taubin G. Color adjustment in image-based texture maps [J]. 2015, 79: 39-48.
We propose a color adjustment technique to eliminate the visible seams in image-based texture map of a 3D object. The process is carried out in three steps. First, texture coordinates are locally displaced to minimize the misalignment of adjacent texture patches. Second, color discontinuities between different texture patches at each corner of the mesh faces are resolved. We minimize a global energy function over the mesh to ensure continuous color transitions and fit the color gradient at each corner of the mesh faces. Finally, the color adjustment at the corners is propagated over the texture patch for each face by solving a Poisson equation with mixed boundary conditions. By means of the proposed processing techniques, the visibility of seams is minimized while fine details are preserved in image-based texture maps. This can be used as a last refinement stage in image-based 3D reconstruction pipelines. The proposed color adjustment algorithm is tested on a variety of real-world datasets and compares very favorably with known methods.
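The final step above propagates the corner color adjustments into the patch interior by solving a Poisson equation with mixed boundary conditions. A one-dimensional toy analogue (Dirichlet boundaries only, zero source term) shows the idea; this is a sketch under simplifying assumptions, not the paper's solver:

```python
import numpy as np

def propagate_adjustment(boundary_left, boundary_right, n):
    """Solve a 1D discrete Poisson equation (zero source term, fixed
    values just outside both ends) to spread a color correction smoothly
    across a row of n texels; a toy analogue of the patch-interior
    propagation step."""
    # Laplacian system: -u[i-1] + 2u[i] - u[i+1] = 0 for interior texels,
    # with virtual boundary texels pinned to the given corrections.
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.zeros(n)
    b[0] += boundary_left
    b[-1] += boundary_right
    return np.linalg.solve(A, b)
```

With a zero source term the solution interpolates the boundary corrections harmonically (linearly in 1D), so the adjustment fades smoothly into the patch without disturbing fine texture detail, which the gradient term preserves in the full 2D formulation.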
84
Troccoli A, Allen P. Building illumination coherent 3D models of large-scale outdoor scenes [J]. 2008, 78(2/3): 261-280.
Systems for the creation of photorealistic models using range scans and digital photographs are becoming increasingly popular in a wide range of fields, from reverse engineering to cultural heritage preservation. These systems employ a range finder to acquire the geometry information and a digital camera to measure color detail. But bringing together a set of range scans and color images to produce an accurate and usable model is still an area of research with many unsolved problems. In this paper we address the problem of how to build illumination coherent integrated texture maps from images that were taken under different illumination conditions. To achieve this we present two different solutions. The first one is to align all the images to the same illumination, for which we have developed a technique that computes a relighting operator over the area of overlap of a pair of images that we then use to relight the entire image. Our proposed method can handle images with shadows and can effectively remove the shadows from the image, if required. The second technique uses the ratio of two images to factor out the diffuse reflectance of an image from its illumination. We do this without any light measuring device. By computing the actual reflectance we remove from the images any effects of the illumination, allowing us to create new renderings under novel illumination conditions.
85
Laffont P Y, Bousseau A, Paris S, et al. Coherent intrinsic images from photo collections [J]. 2012, 31(6): 1-11.
An intrinsic image is a decomposition of a photo into an illumination layer and a reflectance layer, which enables powerful editing such as the alteration of an object's material independently of its illumination. However, decomposing a single photo is highly under-constrained and existing methods require user assistance or handle only simple scenes. In this paper, we compute intrinsic decompositions using several images of the same scene under different viewpoints and lighting conditions. We use multi-view stereo to automatically reconstruct 3D points and normals from which we derive relationships between reflectance values at different locations, across multiple views and consequently different lighting conditions. We use robust estimation to reliably identify reflectance ratios between pairs of points. From these, we infer constraints for our optimization and enforce a coherent solution across multiple views and illuminations. Our results demonstrate that this constrained optimization yields high-quality and coherent intrinsic decompositions of complex scenes. We illustrate how these decompositions can be used for image-based illumination transfer and transitions between views with consistent lighting.
86
Agathos A, Fisher R B. Colour texture fusion of multiple range images [J]. 2003, 8322497.
87
Bannai N, Agathos A, Fisher R B. Fusing multiple color images for texturing models [J]. 2004, 8224592.
88
Bannai N, Fisher R B, Agathos A. Multiple color texture map fusion for 3D models [J]. 2007, 28(6): 748-758.
A commonly encountered problem when creating 3D models of large real scenes is unnatural color texture fusion. Due to variations in lighting and camera settings (both manual and automatic), captured color texture maps of the same 3D structures can have very different appearances. When fusing multiple texture maps to create larger models, this color variation leads to poor appearance with patchwork color tilings on homogeneous surfaces. This paper extends previous research on pairwise global color correction to multiple overlapping texture map images. The central idea is to estimate a set of blending transformations that minimize the overall global color discrepancy between the texture maps, thus spreading residual color errors, rather than letting them accumulate.
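The idea of estimating blending transformations that minimize global color discrepancy can be sketched with the simplest possible model: one multiplicative gain per image, solved in least squares over pairwise overlap statistics. The data layout and function name here are hypothetical; Bannai et al. estimate richer per-map transformations.

```python
import numpy as np

def estimate_gains(overlaps, n_images):
    """Least-squares per-image gains that minimize color discrepancy
    across overlapping texture maps (illustrative single-gain model).
    `overlaps` is a list of (i, j, mean_i, mean_j): the mean color each
    image observes over the region it shares with the other."""
    rows, rhs = [], []
    for i, j, mi, mj in overlaps:
        r = np.zeros(n_images)
        r[i], r[j] = mi, -mj       # want g_i * mean_i == g_j * mean_j
        rows.append(r)
        rhs.append(0.0)
    anchor = np.zeros(n_images)
    anchor[0] = 1.0                # gauge constraint: fix g_0 = 1
    rows.append(anchor)
    rhs.append(1.0)
    g, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return g
```

Solving all pairwise constraints jointly spreads residual color error over the whole set instead of letting it accumulate along a chain of pairwise corrections, which is the central point of the paper.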
89
Xu L, Li E, Li J G, et al. A general texture mapping framework for image-based 3D modeling [J]. 2010, 11692796.
90
Park I K, Zhang H, Vezhnevets V. Image-based 3D face modeling system [J]. 2005, 2005(13): 1-19.
This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture, and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a set of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation, including all the optional manual corrections, takes only 2-3 minutes.
91
Lee W B, Man H L, Park I K. Photorealistic 3D face modeling on a smartphone [J]. 2011.
92
Ma Q, Ge B Z, Chen L. Correction technique for color difference of multi-sensor texture [J]. 2016, 36(4): 1075-1079.
93
Reinhard E, Ashikhmin M, Gooch B, et al. Color transfer between images [J]. 2002, 21(5): 34-41.
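Reinhard et al.'s transfer matches the per-channel mean and standard deviation of a source image to those of a target, working in the decorrelated lαβ color space. The sketch below applies the same statistics matching directly in RGB for brevity; that simplification is ours, not the paper's:

```python
import numpy as np

def color_transfer(source, target):
    """Match per-channel mean and standard deviation of `source` to
    those of `target` (Reinhard et al. 2002, simplified: statistics are
    matched in RGB rather than the paper's l-alpha-beta space)."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s_mu, s_sigma = src[..., c].mean(), src[..., c].std()
        t_mu, t_sigma = tgt[..., c].mean(), tgt[..., c].std()
        scale = t_sigma / s_sigma if s_sigma > 0 else 1.0
        # Standardize the source channel, then re-scale and re-center
        # onto the target channel's statistics.
        out[..., c] = (src[..., c] - s_mu) * scale + t_mu
    return np.clip(out, 0, 255)
```

This kind of statistics matching is the basis of several of the texture color-correction methods reviewed here: images projected onto the same model are pulled toward a common color distribution before blending.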
94
Pintus R, Gobbetti E. A fast and robust framework for semiautomatic and automatic registration of photographs to 3D geometry [J]. 2015, 7(4): 1-23.
We present a simple, fast, and robust complete framework for 2D/3D registration, capable of aligning, in a semiautomatic or completely automatic manner, a large set of unordered images to a massive point cloud. Our method converts the hard-to-solve image-to-geometry registration task into a Structure-from-Motion (SfM) plus 3D/3D alignment problem. We exploit an SfM framework that, starting just from an unordered image collection, computes an estimate of the camera parameters and a sparse 3D geometry derived from matched image features. We then coarsely register this model to the given 3D geometry by estimating a global scale and absolute orientation using two solutions: minimal user intervention or a stochastic global point set registration approach. A specialized sparse bundle adjustment (SBA) step, which exploits the correspondence between the sparse geometry and the fine input 3D model, is then used to refine the intrinsic and extrinsic parameters of each camera. The output data is suitable for photo blending frameworks to produce seamless colored models. The effectiveness of the method is demonstrated on a series of synthetic and real-world 2D/3D Cultural Heritage datasets.
95
Previtali M, Barazzetti L, Scaioni M. An automated and accurate procedure for texture mapping from images [J]. 2012, 13154673.
96
Ortin D, Remondino F. Occlusion-free image generation for realistic texture mapping [J]. 2005, 36(5/W17).
97
Li M, Guo B X, Zhang W L. An occlusion detection algorithm for 3D texture reconstruction of multi-view images [J]. 2017, 7(5): 152-155.
Li H B, Wu L L, Wu Y. Two-step light source detection algorithm importing the 3D shadow and specular reflection [J]. 2013, 34(4): 892-896.
Inverse detection algorithms based on the illumination model are widely used in augmented reality; however, their detection errors for the light source's position are relatively large, and they demand that the surface of the calibration object be purely diffuse. A two-step detection algorithm for the light source is proposed which imports 3D features of shadows and specular reflection. The algorithm is divided into two parts: position detection of the light source and intensity detection of the light. Under the condition that the position of the light source is known, the Cook-Torrance model with specular reflection is imported, and methods such as random sampling partition and error handling are adopted. Besides, the 3D features of the shadow are used instead of planar corner features, and the position of the light source is detected by ray tracing. Experimental results show that the position detected by this algorithm is more accurate; it therefore resolves the inaccuracy in detecting the position of a point light source and the limitations on calibration objects.
101
Sun S J, Zhai A P, Cao Y P. A fast algorithm for obtaining 3D shape and texture information of objects [J]. 2016, 36(3).