WO2022021680A1 - Three-dimensional object reconstruction method and terminal device integrating structured light and photometry - Google Patents

Three-dimensional object reconstruction method and terminal device integrating structured light and photometry

Info

Publication number
WO2022021680A1
WO2022021680A1 (PCT/CN2020/129563, CN2020129563W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
structured light
dimensional object
photometric
images
Prior art date
Application number
PCT/CN2020/129563
Other languages
English (en)
French (fr)
Inventor
宋展
宋钊
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院
Priority to US17/927,692 (published as US20230298189A1)
Publication of WO2022021680A1

Classifications

    • G06T7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G01B11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/2527 - Projection of a pattern by scanning of the object, with phase change by in-plane movement of the pattern
    • G01B11/2536 - Projection of a pattern using several gratings with variable grating pitch, projected on the object with the same angle of incidence
    • G01B11/254 - Projection of a pattern, viewing through a pattern, e.g. moiré
    • G06T15/50 - 3D [Three Dimensional] image rendering: lighting effects
    • G06T7/586 - Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G06T2200/08 - Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation
    • G06T2207/10016 - Image acquisition modality: video; image sequence
    • G06T2207/10152 - Image acquisition modality: special mode during image acquisition, varying illumination

Definitions

  • the present application belongs to the technical field of computer vision, and in particular relates to a three-dimensional object reconstruction method and terminal device integrating structured light and photometry.
  • Among existing three-dimensional scanning techniques, laser 3D scanning and projected structured light 3D scanning are the main technologies.
  • A laser 3D scanning system projects laser lines or dot arrays, captures the projected laser features with a camera, and recovers the three-dimensional depth information of the object through triangulation.
  • The main disadvantage of this point-by-point and line-by-line scanning approach is its slow speed.
  • In a projector-based structured light 3D scanning system, structured light coding is used to measure the entire surface in a single pass, which has clear advantages in speed and precision; projection-based structured light 3D scanning has therefore become the current mainstream technique.
  • At present, projector-based structured light 3D scanning systems can produce good reconstruction results for texture-free Lambertian surfaces of objects.
  • 3D object reconstruction refers to constructing the 3D model corresponding to the object.
  • However, for textured Lambertian surfaces and non-Lambertian surfaces, the surface texture affects fringe localization (e.g., through reflectivity, internal occlusion, etc.), and the resulting errors caused by the surface reflectivity and texture lower the reconstruction accuracy.
  • In view of this, embodiments of the present application provide a three-dimensional object reconstruction method and terminal device integrating structured light and photometry, so as to solve the problem that the accuracy of three-dimensional reconstruction results is low for objects with complex surfaces (textured Lambertian surfaces and non-Lambertian surfaces).
  • A first aspect of the embodiments of the present application provides a three-dimensional object reconstruction method integrating structured light and photometry, including: acquiring N first images, each first image being obtained by projecting a coding pattern having a coded fringe sequence onto a three-dimensional object and photographing it, where N is a positive integer; determining structured light depth information of the three-dimensional object based on the N first images; acquiring M second images, the M second images being obtained by projecting P light sources onto the three-dimensional object from different directions and photographing it, where M and P are both positive integers; determining photometric information of the three-dimensional object based on the M second images; and reconstructing the three-dimensional object based on the structured light depth information and the photometric information.
  • A second aspect of the embodiments of the present application provides a three-dimensional object reconstruction device integrating structured light and photometry, including: a structured light image acquisition unit, configured to acquire N first images, each first image being obtained by projecting a coding pattern having a coded fringe sequence onto a three-dimensional object and photographing it, where N is a positive integer; a structured light depth information determination unit, configured to determine structured light depth information of the three-dimensional object based on the N first images; a photometric image acquisition unit, configured to acquire M second images, the M second images being obtained by projecting P light sources onto the three-dimensional object from different directions, where M and P are both positive integers; a photometric information determination unit, configured to determine photometric information of the three-dimensional object based on the M second images; and a three-dimensional object reconstruction unit, configured to reconstruct the three-dimensional object based on the structured light depth information and the photometric information.
  • A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the method described above.
  • A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the method described above.
  • A fifth aspect of the embodiments of the present application provides a computer program product which, when run on a terminal device, enables the terminal device to implement the steps of the method described above.
  • While the structured light system is used to collect structured light depth images (or 3D point cloud information) of the 3D object, the photometric system is also used to collect images of the 3D object illuminated from different directions, so that the structured light depth information of the 3D object and the photometric information of the 3D object can be obtained at the same time, and the 3D object can then be reconstructed by combining the structured light depth information and the photometric information.
  • In this way, a structured light system and a photometric system are integrated to locate and reconstruct 3D objects, and the photometric information can be used to correct the structured light fringe localization error caused by the surface texture (e.g., texture reflectivity) of the 3D object, thereby improving the accuracy of 3D reconstruction results for 3D objects with complex surfaces.
  • FIG. 1 shows a schematic structural diagram of an example of a structured light system suitable for applying the three-dimensional object reconstruction method integrating structured light and photometry according to an embodiment of the present application;
  • FIG. 2 shows a schematic structural diagram of an example of a photometric system suitable for applying the three-dimensional object reconstruction method fused with structured light and photometry according to an embodiment of the present application;
  • FIG. 3 shows a flowchart of an example of a method for reconstructing a three-dimensional object by integrating structured light and photometry according to an embodiment of the present application
  • FIG. 4 shows a schematic diagram of an example of a 4-bit Gray code structured light encoding pattern
  • FIG. 5A shows a schematic diagram of an example of a 4-bit Gray code plus 4-bit line-shift structured light coding pattern;
  • FIG. 5B shows a schematic diagram of an example of an 8-bit Gray code plus 4-bit line-shift binary Gray code structured light encoding pattern;
  • FIG. 6 shows a flowchart of an example of calibrating a light source in a photometric system according to an embodiment of the present application
  • FIG. 7 shows a flowchart of an example of determining photometric information of a three-dimensional object according to an embodiment of the present application
  • FIG. 8 shows a flowchart of an example of reconstructing a three-dimensional object based on structured light depth information and photometric information according to an embodiment of the present application
  • FIG. 10 shows a flowchart of an example of reconstructing a three-dimensional object by a fusion system based on photometry and structured light according to an embodiment of the present application
  • FIG. 11A shows a schematic diagram of an example of a richly textured paper to be reconstructed;
  • FIG. 11B shows a schematic diagram of an example of the reconstruction result of the paper in FIG. 11A based on the first structured light system
  • FIG. 11C shows a schematic diagram of an example of the reconstruction result of the paper in FIG. 11A based on the second structured light system
  • FIG. 11D shows a schematic diagram of an example of the reconstruction result of the paper in FIG. 11A based on the fusion system according to the embodiment of the present application;
  • FIG. 12A shows a schematic diagram of an example of a circuit board having a surface with various reflective properties to be reconstructed
  • FIG. 12B shows a schematic diagram of an example of a reconstruction result of the circuit board in FIG. 12A based on a single structured light system
  • FIG. 12C shows a schematic diagram of an example of the reconstruction result of the circuit board in FIG. 12A based on the fusion system according to the embodiment of the present application;
  • Figure 13A shows a schematic diagram of an example of a bowl to be rebuilt
  • Figure 13B shows a schematic diagram of an example of the reconstruction result of the bowl in Figure 13A based on a single structured light system
  • Figure 13C shows a schematic diagram of an example of reconstruction results for the bowl in Figure 13A based on a single photometric system
  • FIG. 13D is a schematic diagram showing an example of the reconstruction result of the bowl in FIG. 13A based on the fusion system according to the embodiment of the present application;
  • FIG. 14 shows a structural block diagram of an example of a three-dimensional object reconstruction apparatus integrating structured light and photometry according to an embodiment of the present application
  • FIG. 15 is a schematic diagram of an example of a terminal device according to an embodiment of the present application.
  • The core of a structured light 3D scanning system is its encoding and decoding algorithms.
  • Existing structured light 3D scanning techniques can be divided into three categories: temporal coding, spatial coding and hybrid coding.
  • Temporal coding structured light technology has been widely studied and used for its advantages of large coding capacity and high reconstruction resolution.
  • The commonly used temporal coding schemes are Gray code structured light coding (for example, Gray code sequences, Gray code sequences plus line shift, and Gray code sequences plus phase shift) and binary structured light coding (with "0" (pure black) and "255" (pure white) as coding primitives).
  • In these schemes, the fringe sequence (i.e., the coding pattern) is projected onto the object, and the structured light technique based on fringe positioning and decoding is called fringe structured light technology; when binary coding primitives are used, it can be called binary fringe structured light technology.
  • The positioning and decoding accuracy of the fringes is an important factor affecting the 3D reconstruction results.
  • When the modulation of the fringe profile derives only from the surface shape, fringe structured light techniques can achieve micron-level measurement accuracy.
  • For complex surfaces, however, the modulation of the fringe profile derives not only from the surface shape but is also related to changes in the surface texture and surface reflectivity; fringe structured light techniques often ignore the modulation of the fringe boundary profile by surface reflectivity and texture, and therefore cannot localize the fringes accurately on such surfaces.
  • The photometric 3D scanning system takes images of the 3D object under different lighting directions as input, and then establishes equations based on an assumed surface reflection characteristic model (or reflection model) of the object to solve for the surface normals and reflectivity of the object, so as to reconstruct a model of the 3D object.
  • Common reflection characteristic models include the Lambertian reflection model (Lambertian surfaces), the Phong reflection model (highly reflective surfaces), and the BRDF (general surfaces).
  • However, the surface of a 3D object may be complex, for example composed of sub-regions with different reflection characteristics; assuming a single optical model then leads to large errors in the computed normal field, and it is difficult for a single photometric method to obtain the absolute depth information of the 3D object, so the reconstructed 3D object is less accurate.
  • The mobile terminals described in the embodiments of the present application include, but are not limited to, mobile phones, laptop computers, tablet computers and other portable devices with touch-sensitive surfaces (e.g., touch-screen displays and/or touch pads).
  • It should also be understood that, in some embodiments, the device may not be a portable communication device, but rather a desktop computer with a touch-sensitive surface (e.g., a touch-screen display and/or a touch pad).
  • In the following discussion, a mobile terminal including a display and a touch-sensitive surface is described. It should be understood, however, that the mobile terminal may include one or more other physical user interface devices such as a physical keyboard, a mouse and/or a joystick.
  • Various applications that may be executed on the mobile terminal may use at least one common physical user interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal may be adjusted and/or changed between applications and/or within a given application, so that the common physical architecture of the terminal (e.g., the touch-sensitive surface) can support various applications.
  • FIG. 1 shows a schematic structural diagram of an example of a structured light system suitable for applying the three-dimensional object reconstruction method integrating structured light and photometry according to an embodiment of the present application.
  • The structured light system 100 is provided with a computer 10, a camera 20, a projector 30 and an object (or an object to be reconstructed) 40.
  • The coding pattern of the projector 30 can be set by the computer 10, and when three-dimensional reconstruction is required, the projector 30 projects the coding pattern to a designated area (for example, an area for placing the object 40).
  • The camera 20 can capture the image of the object 40 in the designated area, and the computer 10 can decode the encoded information in the image of the object captured by the camera 20 to reconstruct the three-dimensional information of the object.
  • It should be understood that the device types described above in conjunction with FIG. 1 are only examples; for example, the camera 20 can be replaced by other devices with image acquisition functions, and the computer 10 can be another mobile terminal with processing functions.
  • FIG. 2 shows a schematic structural diagram of an example of a photometric system suitable for applying the three-dimensional object reconstruction method integrating structured light and photometry according to an embodiment of the present application.
  • the photometric system 200 is provided with an object (or an object to be reconstructed) 210 , a plurality of (eg, P, where P is a positive integer) surface light sources 220 , a control board 230 and a camera 240 .
  • the control board 230 can control the plurality of surface light sources 220 to illuminate in turn, and the camera 240 collects grayscale images corresponding to different light source directions.
  • The control board 230 can obtain the reflection equation and normal vectors of the object surface based on each grayscale image, and integrate the normal vectors to restore the relative height of each point on the object surface, so as to reconstruct the three-dimensional information of the object.
  • It should be understood that the control board 230 can be replaced with another mobile terminal with processing functions, and the surface light sources in multiple orientations can also be replaced with point light sources, etc.
  • The structured light system in FIG. 1 can also be fused with the photometric system in FIG. 2.
  • The fusion system can realize the functions of both the structured light system and the photometric system while simplifying the system hardware; for example, the camera, computer and other components can be shared between the two systems.
  • FIG. 3 shows a flowchart of an example of a three-dimensional object reconstruction method integrating structured light and photometry according to an embodiment of the present application.
  • The three-dimensional object reconstruction method in the embodiment of the present application may be executed by a mobile terminal (e.g., the computer 10), and aims to reconstruct a high-precision three-dimensional object by performing control or processing operations.
  • In step 310, N first images are acquired, where N is a positive integer.
  • Each first image is captured by projecting a coding pattern with a coded fringe sequence onto a three-dimensional object.
  • a structured light system (as shown in FIG. 1 ) can be used to project a preset number of coding patterns, and the coding patterns on the three-dimensional object are collected to obtain corresponding images.
  • The number N of first images may match the number of projected coding patterns; for example, when eight coding patterns are projected, the number of object images correspondingly collected may also be eight. It should be noted that the number of coding patterns may be related to the number of coding bits; for example, when an 8-bit coding sequence is used, the number of corresponding coding patterns is 8.
  • each encoding pattern has a unique encoding fringe sequence consisting of parallel multiple encoding fringes and fringe boundaries between adjacent encoding fringes.
  • Various fringe positioning schemes (for example, pixel-center decoding and positioning schemes) may be adopted in the embodiments of the present application.
  • In addition, a decoding and positioning scheme based on fringe boundaries can also be adopted, which can achieve high-precision (e.g., sub-pixel level) positioning; more details are described below.
  • The coding patterns in the embodiments of the present application may adopt coding patterns of various structured light fringe coding techniques, such as binary structured light coding patterns, Gray code structured light coding patterns, or binary Gray code patterns, which is not limited here.
  • FIG. 4 shows a schematic diagram of an example of a 4-bit Gray code structured light coding pattern.
  • Each fringe boundary has a corresponding permutation number and coded value.
  • The fringe boundaries in different intensity images do not overlap with each other, so the Gray code value or phase encoding value is not easily misjudged, and a mapping relationship between the Gray code value and the fringe boundary can be established.
  • When the Gray code structured light encoding pattern is composed of black fringes and white fringes only, it is a binary Gray code structured light encoding pattern.
  • Gray code boundary decoding can obtain the same image sampling point density as pixel-center decoding. Therefore, on the basis of projecting the Gray code coding patterns, a line-shift pattern can additionally be projected to increase the sampling density of image points beyond that of Gray code fringe boundary decoding, thereby ensuring sub-pixel accuracy of the image sampling points.
  • FIG. 5A is a schematic diagram illustrating an example of a 4-bit Gray code plus a 4-bit line-shift structured light coding pattern.
  • The Gray code fringe boundaries and the line-shift fringe centers do not coincide, and are separated by 0.5 fringe widths. Therefore, by combining the two decodings, the center positioning lines (solid lines) and the fringe boundary positioning lines (dotted lines) are merged into the final positioning lines (solid and dotted lines appearing alternately), which increases the image sampling point density from one sample per fringe width to roughly half that spacing, achieving high-precision localization at the sub-pixel level.
  • FIG. 5B is a schematic diagram showing an example of an 8-bit Gray code and a 4-bit line-shifted binary Gray code structured light encoding pattern.
  • an 8-bit Gray code plus a 4-bit line-shifted binary Gray code structured light encoding pattern or other encoding patterns may also be used in the embodiments of the present application.
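  • For illustration only, the following sketch (not part of the patent disclosure; the pattern size, bit count and stripe orientation are arbitrary assumptions) generates binary Gray code fringe patterns of the kind described above:

```python
import numpy as np

def gray_code_patterns(num_bits=4, width=512, height=256):
    """Generate `num_bits` binary Gray code fringe patterns with vertical stripes.

    Each column is assigned a stripe index, the index is converted to its
    binary-reflected Gray code, and pattern k displays bit k of that code
    as a black (0) or white (255) stripe."""
    cols = np.arange(width)
    idx = cols * (2 ** num_bits) // width      # stripe index 0 .. 2^num_bits - 1
    gray = idx ^ (idx >> 1)                    # binary-reflected Gray code
    patterns = []
    for bit in range(num_bits - 1, -1, -1):    # most significant bit first (coarsest stripes)
        stripe = ((gray >> bit) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(stripe, (height, 1)))
    return patterns

patterns = gray_code_patterns()
print(len(patterns), patterns[0].shape)        # 4 patterns of size (256, 512)
```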
  • In some schemes, the coding information of the fringe boundaries is carried directly in the coding pattern projected by the projector, which is convenient for decoding but adds a burden to the design of the coding pattern; for example, it may be necessary to mark the boundary between adjacent fringes of the same gray value (for example, two adjacent white fringes).
  • Alternatively, each coding pattern may be projected twice, forward and inverted (that is, with the gray values of the fringes reversed).
  • For example, if the forward-projected coding pattern is 0-0-0-0-255-255-255-255, the inverted coding pattern is 255-255-255-255-0-0-0-0; the zero-crossing points of the positive and negative fringe profiles can then be used as the fringe boundaries of the coding pattern, and a straight line can be fitted to the intersections of the positive and negative fringes to obtain a sub-pixel-accurate fringe localization result, as sketched below.
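  • A minimal sketch (illustrative function and variable names, not from the patent) of locating such a boundary at sub-pixel accuracy from the zero crossing of the difference between the forward and inverted fringe profiles along one image row:

```python
import numpy as np

def zero_crossing_subpixel(row_pos, row_neg):
    """Sub-pixel x positions where the forward and inverted fringe profiles
    intersect (their difference changes sign) along one image row."""
    diff = np.asarray(row_pos, dtype=np.float64) - np.asarray(row_neg, dtype=np.float64)
    crossings = []
    for x in range(len(diff) - 1):
        if diff[x] == 0.0:
            crossings.append(float(x))
        elif diff[x] * diff[x + 1] < 0.0:
            # linear interpolation between the two samples bracketing the zero
            t = diff[x] / (diff[x] - diff[x + 1])
            crossings.append(x + t)
    return crossings

# toy example: a blurred forward stripe 0-0-0-0-255-255-255-255 and its inverse
pos = [0, 0, 0, 40, 215, 255, 255, 255]
neg = [255 - v for v in pos]
print(zero_crossing_subpixel(pos, neg))   # boundary near x = 3.5
```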
  • In step 320, the structured light depth information of the three-dimensional object is determined based on the N first images. Specifically, the fringe positioning information of each pixel in the first images can be decoded, and the absolute depth information of the three-dimensional object can be solved based on the principle of triangulation; for details, refer to the description in the related art.
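  • As a hedged illustration of the triangulation principle (a simplified rectified camera-projector geometry; the baseline and focal length values are arbitrary assumptions, not the patent's calibration):

```python
def depth_from_disparity(cam_x, proj_x, baseline_mm=120.0, focal_px=1600.0):
    """Depth of a surface point by triangulation in a rectified camera/projector pair.

    cam_x : sub-pixel column of a decoded fringe boundary in the camera image
    proj_x: column of the same boundary in the projected coding pattern
    """
    disparity = cam_x - proj_x
    if abs(disparity) < 1e-6:
        raise ValueError("degenerate configuration: zero disparity")
    return baseline_mm * focal_px / disparity   # depth along the optical axis, in mm

print(depth_from_disparity(cam_x=812.4, proj_x=640.0))   # roughly 1114 mm
```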
  • In step 330, M second images are acquired.
  • The M second images are obtained by projecting the P light sources onto the three-dimensional object from different directions, where M and P are both positive integers.
  • Specifically, a photometric system (as shown in FIG. 2) can be used to control the P surface light sources to illuminate the three-dimensional object in turn from different directions, while the camera collects images of the three-dimensional object under the different lighting directions.
  • In step 340, photometric information of the three-dimensional object is determined.
  • the photometric information includes normal information and reflectivity.
  • the reflection equation and normal vector of the surface of the object can be obtained according to the M second images.
  • In some embodiments, the photometric information may also include other parameter information, such as the diffuse, specular and refractive components decomposed from the reflectance.
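  • For illustration, a minimal Lambertian photometric stereo sketch (a least-squares solution for the per-pixel normal and albedo from M >= 3 images with known light directions; the array shapes and the Lambertian assumption are simplifications, not the patent's reflection models):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (M, H, W) grayscale images; light_dirs: (M, 3) unit light directions.
    Assuming I = albedo * (n . l), solve L @ G = I in the least-squares sense,
    where G = albedo * n, then split G into albedo and unit normal per pixel."""
    M, H, W = images.shape
    L = np.asarray(light_dirs, dtype=np.float64)          # (M, 3)
    I = images.reshape(M, -1).astype(np.float64)          # (M, H*W)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)             # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = np.divide(G, albedo, out=np.zeros_like(G), where=albedo > 1e-8)
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)
```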
  • In step 350, the three-dimensional object is reconstructed based on the structured light depth information and the photometric information.
  • It should be noted that the execution order of the above steps is not limited; for example, steps 330 and 340 may be performed first, and then steps 310 and 320 may be executed.
  • In this way, the error in the structured light positioning result caused by information such as the reflectivity and normal vectors of the object's surface texture can be reduced or eliminated, so that a certain reconstruction accuracy can be guaranteed even for three-dimensional objects with complex surfaces.
  • FIG. 6 shows a flowchart of an example of calibrating a light source in a photometric system according to an embodiment of the present application.
  • In step 610, based on the structured light depth information, it is detected whether the light projected by the P light sources can cover the surface of the three-dimensional object.
  • Specifically, the structured light system can be used to determine a preliminary depth map of the three-dimensional object, and the light sources in the photometric system can then be calibrated using this preliminary depth map.
  • In step 620, when the light projected by the P light sources cannot cover the surface of the three-dimensional object, the positions and projection directions of the P light sources are determined according to the structured light depth information.
  • In other words, the light sources in the photometric system are calibrated by the initial depth image; for example, a suitable position and projection direction is selected for each light source within a set position range and direction interval, so that the light sources of the photometric system can completely cover the surface of the three-dimensional object without occluded areas.
  • Here, the structured light depth information in the preliminary depth map can be used to determine a preliminary three-dimensional point cloud structure of the three-dimensional object; the projected light of each light source in the current photometric system may not be able to cover this preliminary three-dimensional point cloud structure, and occlusion detection is completed based on the preliminary three-dimensional point cloud structure and the light sources of the current photometric system, so as to eliminate the influence of the occluded parts on the photometric system.
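  • Under simplifying assumptions (point light sources, a preliminary point cloud with per-point normals, and no ray-cast self-shadowing), such a coverage check could be sketched as follows; the names and threshold are illustrative only:

```python
import numpy as np

def uncovered_points(points, normals, light_positions, cos_thresh=0.05):
    """Indices of surface points that no light source illuminates at a usable angle,
    i.e. candidates for occluded or shadowed regions in the photometric system."""
    uncovered = []
    for i, (p, n) in enumerate(zip(points, normals)):
        lit = False
        for lp in light_positions:
            l = lp - p
            l = l / np.linalg.norm(l)
            if np.dot(n, l) > cos_thresh:   # the light reaches the point at a usable angle
                lit = True
                break
        if not lit:
            uncovered.append(i)
    return uncovered
```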
  • the M second images may be input to a reflection model matching the target surface type of the three-dimensional object, so as to output the photometric information of the three-dimensional object from the reflection model.
  • FIG. 7 shows a flowchart of an example of determining the photometric information of a three-dimensional object (ie, the above-mentioned step 340 ) according to an embodiment of the present application.
  • First, the target surface type of the three-dimensional object is obtained.
  • the types of surfaces can be diverse, such as metallic reflective surfaces, ceramic semi-transparent reflective surfaces, and the like.
  • the target surface type of the three-dimensional object to be reconstructed may be detected by various potential or known surface type detection techniques.
  • the target surface type of the three-dimensional object to be reconstructed may be specified by receiving a user operation.
  • Next, a reflection model matching the target surface type is determined from a preset reflection model set.
  • Each reflection model in the reflection model set is configured with a corresponding surface type.
  • That is, for each surface type, a corresponding reflection model can be preconfigured to characterize its surface reflection properties.
  • In step 730, the M second images are input to the determined reflection model, so that the photometric information of the three-dimensional object is output by the reflection model.
  • For example, for a metallic reflective surface, the Phong model can be used to model the reflection characteristics of the object surface, while for a ceramic semi-translucent reflective surface, the Hanrahan–Krueger (HK) model with a layered reflective structure can be used.
  • In this way, the corresponding reflection model is invoked for each surface type to solve the photometric information of the three-dimensional object, which ensures high accuracy of the determined photometric information; one way to organize this selection is sketched below.
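  • A minimal sketch of such a per-surface-type model selection (the type names and model classes are placeholders, not the patent's actual taxonomy or parameterization):

```python
class LambertianModel:
    """Diffuse-only reflection: intensity proportional to max(0, n . l)."""
    def shade(self, n_dot_l):
        return max(0.0, n_dot_l)

class PhongModel:
    """Adds a specular lobe, suitable for metallic or highly reflective surfaces."""
    def __init__(self, shininess=32):
        self.shininess = shininess
    def shade(self, n_dot_l, r_dot_v=0.0):
        return max(0.0, n_dot_l) + max(0.0, r_dot_v) ** self.shininess

REFLECTION_MODELS = {
    "lambertian": LambertianModel,
    "metallic": PhongModel,
    # "ceramic_translucent": HanrahanKruegerModel,  # layered HK model, omitted here
}

def select_reflection_model(surface_type):
    try:
        return REFLECTION_MODELS[surface_type]()
    except KeyError:
        raise ValueError(f"no reflection model configured for surface type '{surface_type}'")
```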
  • FIG. 8 shows a flowchart of an example of reconstructing a three-dimensional object based on structured light depth information and photometric information according to an embodiment of the present application.
  • In step 810, the structured light depth information is iteratively calibrated based on the photometric information to obtain calibration deviation information relative to the structured light depth information.
  • Here, changes in the reflectivity of the object surface may cause errors in the positioning of the structured light fringes.
  • Specifically, according to a preset rule describing the influence of reflectivity on fringe positioning (which can, for example, be determined from prior knowledge or a trained model), the fringe positioning information corresponding to each pixel of the three-dimensional object (or the structured light depth information determined from the N first images) is calibrated, so as to obtain updated structured light depth information, and the deviation of the structured light depth information before and after the update is calculated.
  • In addition, from the calibrated depth information, the normal vector information of the corresponding three-dimensional object can be obtained through a normal vector calculation criterion.
  • For details, refer to the description in the related art, which is not repeated here.
  • In step 820, it is determined whether the calibration deviation information satisfies a preset iteration termination condition.
  • Here, the iteration termination condition indicates the condition for terminating the iterative calibration operation in step 810 described above.
  • If the judgment result in step 820 indicates that the preset iteration termination condition is satisfied, the flow jumps to step 830; if it indicates that the condition is not satisfied, the flow jumps back to step 810.
  • In step 830, the three-dimensional object is reconstructed based on the calibrated structured light depth information that satisfies the iteration termination condition.
  • In this way, the structured light depth information is iteratively calibrated based on the photometric information, and the error of the structured light depth information caused by photometric information such as the reflectivity of the surface texture is compensated, so as to ensure a high-precision 3D reconstruction result.
  • the photometric information may include first photometric information and second photometric information.
  • FIG. 9 shows a flowchart of an example of determining whether calibration deviation information satisfies an iteration termination condition according to an embodiment of the present application.
  • First, the structured light depth information is iteratively calibrated based on the first photometric information to obtain first calibration deviation information relative to the structured light depth information.
  • Then, the second photometric information is calibrated using the calibrated structured light depth information to obtain second calibration deviation information relative to the second photometric information.
  • Specifically, new second photometric information may be derived from the calibrated structured light depth information and compared with the original second photometric information, thereby obtaining the corresponding second calibration deviation information.
  • In step 930, it is determined whether the first calibration deviation information and the second calibration deviation information satisfy a preset deviation condition, so as to correspondingly determine whether the calibration deviation information satisfies the preset iteration termination condition.
  • the first photometric information may be reflectivity
  • the second photometric information may be normal information.
  • In this way, the depth information determined by the structured light system can be calibrated based on the reflectivity determined by the photometric system and the corresponding first deviation value determined; the normal information of the three-dimensional object can then be derived in reverse from the calibrated depth information, and the second deviation value between this normal information and the normal information determined by the photometric system can be computed. Whether the depth information, normal information and reflectivity have been adjusted successfully is judged from the first deviation value and the second deviation value, which effectively avoids insufficient or excessive calibration. Therefore, the final positioning result can satisfy the requirements of the photometric system and the structured light system at the same time, ensuring the accuracy of the reconstructed three-dimensional object.
  • In some examples, target deviation information corresponding to a preset deviation weight configuration may be determined from the first calibration deviation information and the second calibration deviation information, and it is determined accordingly whether the preset deviation condition is met.
  • For example, the deviation information may be weighted and summed, and whether the iteration termination condition is satisfied is determined according to whether the weighted sum is smaller than a preset error threshold.
  • In the joint objective function described below, the symbols are as follows (reconstructed from the definitions given in this description): p ij denotes the phase value of the image pixel at coordinates (i, j); x pij denotes the depth information determined by structured light decoding (for example, the three-dimensional point cloud determined by the principle of triangulation); ps denotes the photometric system; sls denotes the structured light system; n ps denotes the normal information determined by the photometric system; ρ ij denotes the reflectivity of the object surface at pixel (i, j); and λ denotes the weight value.
  • E sls (x pij , n ps ) denotes the difference between the normal information of the target scene solved from the structured-light depth information and the normal information of the target scene determined by the photometric system; E ps (x pij , ρ ij ) denotes the difference (or grayscale difference) between the depth information determined by the structured light system and the depth information updated by the photometric system whose light sources were calibrated based on that depth information; and E(x pij , n ps , ρ ij ) denotes the joint objective function of the combined structured light system and photometric system, which combines the two terms with the weight λ (for example, E = λ·E sls + (1−λ)·E ps ).
  • The 3D reconstruction problem of the fused structured light and photometric systems can then be optimized as follows: E(x pij , n ps , ρ ij ) is compared with a set threshold; when E(x pij , n ps , ρ ij ) is greater than or equal to the set threshold, the corresponding variables are iteratively updated until E(x pij , n ps , ρ ij ) is less than the set threshold, at which point the iteration stops, E(x pij , n ps , ρ ij ) reaches its minimum, and high-precision 3D reconstruction of complex objects is realized.
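  • A schematic sketch of the alternating optimization just described (the functions compute_E_sls, compute_E_ps, update_depth and update_normals are placeholders standing in for the structured-light and photometric computations, and the weighted form of E follows the reconstruction above, so treat the whole block as an assumption rather than the patent's exact algorithm):

```python
def fuse_structured_light_and_photometric(depth, normals_ps, reflectivity,
                                          compute_E_sls, compute_E_ps,
                                          update_depth, update_normals,
                                          weight=0.5, threshold=1e-3, max_iters=50):
    """Iteratively refine depth and normals until the joint objective E drops below a threshold."""
    for _ in range(max_iters):
        E = (weight * compute_E_sls(depth, normals_ps)
             + (1.0 - weight) * compute_E_ps(depth, reflectivity))
        if E < threshold:
            break
        depth = update_depth(depth, reflectivity)   # calibrate depth using the photometric reflectivity
        normals_ps = update_normals(depth)          # re-derive normals from the calibrated depth
    return depth, normals_ps, reflectivity
```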
  • the commonly used decoding methods are the contour fitting method and the weighted gray-scale centroid method, which are used to realize the sub-pixel localization of the fringe boundary.
  • In the contour fitting method, sub-pixel positioning accuracy is obtained by fitting an edge contour function (e.g., a linear function, a Sigma function, etc.).
  • In the weighted gray-scale centroid method, the centroid within an M*N template is obtained by gray-value-weighted averaging, and accuracy similar to Gaussian-function-based center-point fitting can be obtained.
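  • The weighted gray-scale centroid idea can be sketched in one dimension as follows (window size and variable names are illustrative; the template described in the patent is two-dimensional):

```python
import numpy as np

def weighted_gray_centroid(window_gray, window_x):
    """Sub-pixel fringe position as the gray-value-weighted mean of the pixel
    coordinates inside a local window along the coding direction."""
    w = np.asarray(window_gray, dtype=np.float64)
    x = np.asarray(window_x, dtype=np.float64)
    if w.sum() <= 0:
        raise ValueError("window contains no intensity")
    return float((w * x).sum() / w.sum())

print(weighted_gray_centroid([10, 80, 200, 80, 10], [100, 101, 102, 103, 104]))  # 102.0
```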
  • stripe profiles can represent different stripe boundary types, for example in binary stripes, stripe profiles include "black-to-white” stripe boundaries and "white-to-black” stripe boundaries in a certain encoding direction.
  • the reflection characteristics of the surface of the object may be introduced as prior knowledge, and the posterior probability estimation value of the corresponding stripe boundary function parameter is obtained on the basis of the existing stripe sub-pixel positioning. Therefore, the influence of the reflection characteristics of the object surface on the fringe localization result can be effectively eliminated, so as to obtain a more accurate fringe localization result and ensure the high accuracy of the reconstructed three-dimensional object.
  • In some examples, the grayscale variable y of the fringe localization information is assumed to satisfy a Gaussian distribution, p(y) = N(u 0 , σ 0 ), where u 0 denotes the mean of the Gaussian distribution of the fringe localization information, σ 0 denotes its standard deviation, y denotes the grayscale variable of the fringe localization information, p(y) denotes the probability distribution of y, and N(·) denotes a Gaussian distribution.
  • Correspondingly, p(y|ρ) = N(u ρ , σ ρ ) denotes the probability distribution of the grayscale variable y given the reflectivity ρ of the object surface at the fringe location, where u ρ denotes the mean of the Gaussian distribution of the fringe localization information under reflectivity ρ, and σ ρ denotes the corresponding standard deviation.
  • From these two distributions, the maximum a posteriori probability estimate of the fringe localization information can be obtained, where σ N denotes the standard deviation of the resulting Gaussian distribution N(·). In this way, based on the prior data distributions of reflectivity and fringe localization information, the influence of the reflectivity on the fringe localization information is obtained using standard statistical formulas, and the fringe localization information under the corresponding reflectivity is then calibrated, which eliminates the influence of the surface reflectivity of the object on the fringe localization results.
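  • For two Gaussians as assumed above (a prior N(u 0 , σ 0 ) on the fringe localization and a reflectivity-conditioned term N(u ρ , σ ρ )), the maximum a posteriori estimate has the standard closed form sketched below; this is a textbook Gaussian-fusion illustration, not necessarily the exact formula used in the patent:

```python
def map_fringe_location(u0, sigma0, u_rho, sigma_rho):
    """MAP estimate (and its standard deviation) when combining a Gaussian prior
    N(u0, sigma0^2) with a Gaussian reflectivity-conditioned term N(u_rho, sigma_rho^2)."""
    w0, w1 = 1.0 / sigma0 ** 2, 1.0 / sigma_rho ** 2     # precisions of the two Gaussians
    u_map = (w0 * u0 + w1 * u_rho) / (w0 + w1)
    sigma_map = (1.0 / (w0 + w1)) ** 0.5
    return u_map, sigma_map

print(map_fringe_location(u0=102.0, sigma0=0.6, u_rho=102.4, sigma_rho=0.3))
```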
  • In this way, the depth information determined by the structured light system is iteratively updated using the reflectivity determined by the photometric system, and the normal information in the photometric system is in turn updated using the updated structured light depth information, until the optimal solution of the reflectivity, normals and depth values under the constraints of the two systems is obtained (i.e., the convergence conditions are met), so as to achieve high-precision reconstruction of the 3D object.
  • each training sample in the training sample set of the depth information calibration model includes structured light depth information corresponding to Q pieces of photometric information, where Q is a positive integer.
  • In this way, the depth information calibration model can automatically learn from samples how photometric information influences structured light depth information; based on the input photometric information and the initial depth information determined by the structured light system, the machine learning model outputs the structured light depth information corresponding to that photometric information, which can reduce or compensate the error introduced into the positioning result of the structured light system by the photometric information (for example, reflectivity) of the object surface.
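  • As an illustration only, such a depth information calibration model could be a small regressor trained on (photometric information, initial structured-light depth) pairs with the calibrated depth as the target; the scikit-learn usage, feature layout and toy data below are assumptions, not the patent's training procedure:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# toy training set: per-pixel features [reflectivity, nx, ny, nz, initial_depth]
# with a calibrated depth as the regression target
rng = np.random.default_rng(0)
X = rng.random((1000, 5))
y = X[:, 4] + 0.05 * X[:, 0]   # toy target: depth shifted by a reflectivity-dependent bias

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))    # calibrated depth estimates for the first three pixels
```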
  • FIG. 10 shows a flowchart of an example of reconstructing a three-dimensional object by a fusion system based on photometry and structured light according to an embodiment of the present application.
  • First, a structured light system is used to determine the fringe positioning information of the three-dimensional object.
  • In step 1003, preliminary structured light depth information of the three-dimensional object is determined based on the principle of triangulation and the fringe positioning information of the three-dimensional object.
  • In step 1005, the light sources in the photometric system are calibrated based on the preliminary structured light depth information, so as to perform occlusion detection and prevent the measurement results of the photometric system from being affected by occluded areas.
  • In step 1007, the image set collected using the photometric system is provided to the reflection model, so that the corresponding preliminary normal information and preliminary reflectivity are determined by the reflection model.
  • Next, the joint objective function value E is calculated based on the preliminary normal information, the preliminary reflectivity and the preliminary structured light depth information.
  • In step 1011, it is determined whether the joint objective function value E is greater than a preset threshold T.
  • If the judgment result in step 1011 is E > T, the flow jumps to step 1013; if the judgment result in step 1011 is E ≤ T, the flow jumps to step 1015.
  • In step 1013, the normal information and the structured light depth information are iteratively updated.
  • For the update operations on the normal information and the structured light depth information, reference may be made to the descriptions in the related embodiments above, which are not repeated here.
  • In step 1015, the normal information, reflectivity and structured light depth information at the end of the iteration may be determined as the optimal normal information, reflectivity and structured light depth information.
  • Then, the three-dimensional object is reconstructed using one or more of the determined optimal normal information, reflectivity and structured light depth information; for example, the three-dimensional object can be reconstructed using the optimal structured light depth information, or using the optimal normal information and reflectivity.
  • In this way, the photometric system and the structured light system are fused: the surface reflectivity is introduced into the fringe positioning method of the structured light system, and the optimal normal information, reflectivity and structured light depth information are finally solved iteratively under the joint objective function constraint of the fusion system. By introducing the reflection characteristics of the target scene (i.e., the surface of the three-dimensional object) into the fringe positioning, the influence of surface reflectivity on fringe positioning is eliminated and the reconstruction accuracy of structured light is improved; and by solving for the optimal 3D reconstruction information through a two-step iteration, the positioning optimization problem under the fusion system is solved and the reconstruction accuracy for non-Lambertian surfaces is effectively improved.
  • FIG. 11A shows a schematic diagram of an example of a richly textured paper to be reconstructed.
  • FIG. 11B shows a schematic diagram of an example of the reconstruction result of the paper in FIG. 11A based on the first structured light system.
  • FIG. 11C shows a schematic diagram of an example of the reconstruction result of the paper in FIG. 11A based on the second structured light system.
  • FIG. 11D is a schematic diagram showing an example of the reconstruction result of the paper in FIG. 11A based on the fusion system according to the embodiment of the present application. It is not difficult to see that the influence of the surface texture on the reconstruction results can be effectively reduced by the fusion system, and a smoother and more realistic 3D reconstruction result can be obtained.
  • FIG. 12A shows a schematic diagram of an example of a circuit board having a surface with various reflective properties to be reconstructed.
  • FIG. 12B shows a schematic diagram of an example of reconstruction results for the circuit board in FIG. 12A based on a single structured light system.
  • FIG. 12C shows a schematic diagram of an example of the reconstruction result of the circuit board in FIG. 12A based on the fusion system according to the embodiment of the present application.
  • Figure 13A shows a schematic diagram of an example of a bowl to be rebuilt.
  • FIG. 13B shows a schematic diagram of an example of reconstruction results for the bowl in FIG. 13A based on a single structured light system.
  • Figure 13C shows a schematic diagram of an example of reconstruction results for the bowl in Figure 13A based on a single photometric system.
  • FIG. 13D is a schematic diagram showing an example of the reconstruction result of the bowl in FIG. 13A based on the fusion system according to the embodiment of the present application.
  • It can be seen that applying the fusion system and the corresponding decoding algorithm of the embodiments of the present application yields smoother and more accurate reconstruction results, and reduces or eliminates the influence of non-Lambertian surface reflectance differences and rich surface texture on the structured light reconstruction results.
  • FIG. 14 shows a structural block diagram of an example of a three-dimensional object reconstruction apparatus integrating structured light and photometry according to an embodiment of the present application.
  • The three-dimensional object reconstruction device 1400 integrating structured light and photometry includes a structured light image acquisition unit 1410, a structured light depth information determination unit 1420, a photometric image acquisition unit 1430, a photometric information determination unit 1440, and a three-dimensional object reconstruction unit 1450.
  • the structured light image acquisition unit 1410 is configured to acquire N first images.
  • each first image is obtained by projecting an encoded pattern with an encoded fringe sequence onto a three-dimensional object, and N is a positive integer.
  • the structured light depth information determining unit 1420 is configured to determine the structured light depth information of the three-dimensional object based on the N first images.
  • the photometric image acquisition unit 1430 is configured to acquire M second images obtained by projecting P light sources to the three-dimensional object from different directions, where M and P are both positive integers.
  • the photometric information determining unit 1440 is configured to determine photometric information of the three-dimensional object based on the M second images.
  • the three-dimensional object reconstruction unit 1450 is configured to reconstruct the three-dimensional object based on the structured light depth information and the photometric information.
  • FIG. 15 is a schematic diagram of an example of a terminal device according to an embodiment of the present application.
  • the terminal device 1500 of this embodiment includes: a processor 1510 , a memory 1520 , and a computer program 1530 stored in the memory 1520 and executable on the processor 1510 .
  • When the processor 1510 executes the computer program 1530, the steps in the above embodiments of the three-dimensional object reconstruction method integrating structured light and photometry are implemented, for example, steps 310 to 350 shown in FIG. 3.
  • Alternatively, when the processor 1510 executes the computer program 1530, the functions of the modules/units in the foregoing device embodiments, such as the functions of the units 1410 to 1450 shown in FIG. 14, are implemented.
  • Exemplarily, the computer program 1530 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 1520 and executed by the processor 1510 to implement the present application.
  • the one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 1530 in the terminal device 1500 .
  • For example, the computer program 1530 can be divided into a structured light image acquisition module, a structured light depth information determination module, a photometric image acquisition module, a photometric information determination module, and a three-dimensional object reconstruction module, which can be used to perform the operations of steps 310 to 350, respectively.
  • the terminal device 1500 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, the processor 1510 and the memory 1520 .
  • FIG. 15 is only an example of the terminal device 1500, and does not constitute a limitation on the terminal device 1500.
  • the terminal device may further include an input and output device, a network access device, a bus, and the like.
  • the so-called processor 1510 may be a central processing unit (Central Processing Unit, CPU), or other general-purpose processors, digital signal processors (Digital Signal Processors, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 1520 may be an internal storage unit of the terminal device 1500 , such as a hard disk or a memory of the terminal device 1500 .
  • The memory 1520 may also be an external storage device of the terminal device 1500, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, etc., equipped on the terminal device 1500.
  • the memory 1520 may also include both an internal storage unit of the terminal device 1500 and an external storage device.
  • the memory 1520 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 1520 may also be used to temporarily store data that has been output or will be output.
  • the disclosed apparatus/terminal device and method may be implemented in other manners.
  • the apparatus/terminal device embodiments described above are only illustrative.
  • For example, the division of the modules or units is only a logical functional division; in actual implementation there may be other division manners, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned units can be implemented in the form of hardware, or can be implemented in the form of software.
  • the integrated modules/units if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • The present application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program.
  • The computer program can be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form, and the like.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory) , Random Access Memory (RAM, Random Access Memory), electric carrier signal, telecommunication signal and software distribution medium, etc.
  • the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Optics & Photonics (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A method for reconstructing a three-dimensional object combining structured light and photometry, and a terminal device. The method includes: acquiring N first images, each first image being captured after a coding pattern with a coded fringe sequence is projected onto a three-dimensional object (310), N being a positive integer; determining structured-light depth information of the three-dimensional object on the basis of the N first images (320); acquiring M second images, the M second images being captured after P light sources are projected onto the three-dimensional object from different directions (330), M and P both being positive integers; determining photometric information of the three-dimensional object on the basis of the M second images (340); and reconstructing the three-dimensional object on the basis of the structured-light depth information and the photometric information (350). A structured light system and a photometric system are thereby fused to reconstruct the three-dimensional object, improving the accuracy of three-dimensional reconstruction results for three-dimensional objects with complex surfaces.

Description

Method for reconstructing a three-dimensional object combining structured light and photometry, and terminal device
Technical Field
This application belongs to the field of computer vision technology, and in particular relates to a method for reconstructing a three-dimensional object combining structured light and photometry, and to a terminal device.
Background
Three-dimensional scanning technology has developed rapidly and is now applied in many fields and industries, including industrial inspection, design, animation and film special effects, 3D display, virtual surgery, and reverse engineering.
Among existing three-dimensional scanning techniques, laser 3D scanning and projected structured light 3D scanning dominate. A laser 3D scanning system projects laser lines or dot arrays, captures the projected laser features with a camera, and recovers the object's three-dimensional depth information by triangulation; the main drawback of this point-by-point and line-by-line scanning is its slow speed. A projector-based structured light 3D scanning system uses structured light coding to measure an entire surface in a single shot, with clear advantages in speed and accuracy, so projector-based structured light 3D scanning has become the mainstream technique.
At present, projector-based structured light 3D scanning systems produce good reconstruction results for textureless Lambertian surfaces of an object; three-dimensional object reconstruction (or 3D reconstruction) refers to building the three-dimensional model corresponding to the object. However, for textured Lambertian surfaces and non-Lambertian surfaces, the surface texture affects fringe localization (for example, through reflectance variation and internal occlusion), so errors caused by surface reflectance and texture arise and the reconstruction accuracy of the object is low.
Summary
In view of this, embodiments of the present application provide a method for reconstructing a three-dimensional object combining structured light and photometry, and a terminal device, so as to solve the problem of low accuracy of 3D reconstruction results for objects with complex surfaces (textured Lambertian surfaces and non-Lambertian surfaces).
A first aspect of the embodiments of the present application provides a method for reconstructing a three-dimensional object combining structured light and photometry, including: acquiring N first images, each first image being captured after a coding pattern with a coded fringe sequence is projected onto a three-dimensional object, N being a positive integer; determining structured-light depth information of the three-dimensional object based on the N first images; acquiring M second images, the M second images being captured after P light sources are projected onto the three-dimensional object from different directions, M and P both being positive integers; determining photometric information of the three-dimensional object based on the M second images; and reconstructing the three-dimensional object based on the structured-light depth information and the photometric information.
A second aspect of the embodiments of the present application provides an apparatus for reconstructing a three-dimensional object combining structured light and photometry, including: a structured light image acquisition unit configured to acquire N first images, each first image being captured after a coding pattern with a coded fringe sequence is projected onto a three-dimensional object, N being a positive integer; a structured-light depth information determination unit configured to determine structured-light depth information of the three-dimensional object based on the N first images; a photometric image acquisition unit configured to acquire M second images, the M second images being captured after P light sources are projected onto the three-dimensional object from different directions, M and P both being positive integers; a photometric information determination unit configured to determine photometric information of the three-dimensional object based on the M second images; and a three-dimensional object reconstruction unit configured to reconstruct the three-dimensional object based on the structured-light depth information and the photometric information.
A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
A fifth aspect of the embodiments of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to implement the steps of the above method.
Compared with the prior art, the embodiments of the present application have the following beneficial effects:
While a structured light system collects structured-light depth images (or 3D point cloud information) of the three-dimensional object, a photometric system also collects images of the three-dimensional object under illumination from different directions, so that photometric information of the object is obtained together with its structured-light depth information, and the object is then reconstructed by combining the structured-light depth information and the photometric information. By fusing the structured light system and the photometric system to localize and reconstruct the three-dimensional object, the photometric information can be used to correct structured light fringe localization errors caused by the object's surface texture (for example, texture reflectance), thereby improving the accuracy of 3D reconstruction results for three-dimensional objects with complex surfaces.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from them without creative effort.
FIG. 1 is a schematic architectural diagram of an example of a structured light system suitable for applying the method for reconstructing a three-dimensional object combining structured light and photometry according to an embodiment of the present application;
FIG. 2 is a schematic architectural diagram of an example of a photometric system suitable for applying the method for reconstructing a three-dimensional object combining structured light and photometry according to an embodiment of the present application;
FIG. 3 is a flowchart of an example of the method for reconstructing a three-dimensional object combining structured light and photometry according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an example of a 4-bit Gray code structured light coding pattern;
FIG. 5A is a schematic diagram of an example of a 4-bit Gray code plus 4-step line-shift structured light coding pattern;
FIG. 5B is a schematic diagram of an example of an 8-bit Gray code plus 4-step line-shift binary Gray code structured light coding pattern;
FIG. 6 is a flowchart of an example of calibrating the light sources in the photometric system according to an embodiment of the present application;
FIG. 7 is a flowchart of an example of determining the photometric information of the three-dimensional object according to an embodiment of the present application;
FIG. 8 is a flowchart of an example of reconstructing the three-dimensional object based on the structured-light depth information and the photometric information according to an embodiment of the present application;
FIG. 9 is a flowchart of an example of determining whether the calibration deviation information satisfies the iteration termination condition according to an embodiment of the present application;
FIG. 10 is a flowchart of an example of reconstructing the three-dimensional object based on the fused photometric and structured light system according to an embodiment of the present application;
FIG. 11A is a schematic diagram of an example of richly textured paper to be reconstructed;
FIG. 11B is a schematic diagram of an example of the reconstruction result of the paper in FIG. 11A by a first structured light system;
FIG. 11C is a schematic diagram of an example of the reconstruction result of the paper in FIG. 11A by a second structured light system;
FIG. 11D is a schematic diagram of an example of the reconstruction result of the paper in FIG. 11A by the fusion system of an embodiment of the present application;
FIG. 12A is a schematic diagram of an example of a circuit board to be reconstructed whose surface has various reflective properties;
FIG. 12B is a schematic diagram of an example of the reconstruction result of the circuit board in FIG. 12A by a single structured light system;
FIG. 12C is a schematic diagram of an example of the reconstruction result of the circuit board in FIG. 12A by the fusion system of an embodiment of the present application;
FIG. 13A is a schematic diagram of an example of a bowl to be reconstructed;
FIG. 13B is a schematic diagram of an example of the reconstruction result of the bowl in FIG. 13A by a single structured light system;
FIG. 13C is a schematic diagram of an example of the reconstruction result of the bowl in FIG. 13A by a single photometric system;
FIG. 13D is a schematic diagram of an example of the reconstruction result of the bowl in FIG. 13A by the fusion system of an embodiment of the present application;
FIG. 14 is a structural block diagram of an example of an apparatus for reconstructing a three-dimensional object combining structured light and photometry according to an embodiment of the present application;
FIG. 15 is a schematic diagram of an example of a terminal device according to an embodiment of the present application.
Detailed Description
The core of a structured light 3D scanning system lies in its encoding and decoding algorithms. Classified by encoding scheme, existing structured light 3D scanning techniques fall into three classes: temporal coding, spatial coding, and hybrid coding.
Temporal coding structured light has been widely studied and used for its large coding capacity and high reconstruction resolution. Commonly used temporal coding schemes include Gray code structured light coding (for example, Gray code sequences, Gray code sequences plus line shift, and Gray code sequences plus phase shift) and binary structured light coding (with "0" (pure black) and "255" (pure white) as coding primitives). In addition, structured light techniques that project fringe sequences (i.e., coding patterns) encoded with different gray values and decode by fringe localization are called fringe structured light techniques; for example, when binary coding patterns are used, the corresponding technique may be called binary fringe structured light.
In fringe structured light techniques, the accuracy of fringe localization and decoding is an important factor affecting the 3D reconstruction result. For a textureless Lambertian surface, the fringe profile is modulated only by the surface shape, and fringe structured light can achieve micrometer-level measurement accuracy. For a non-Lambertian surface, the fringe profile is modulated not only by the surface shape but also by the surface texture and reflectance variation, which fringe structured light techniques typically ignore; textured Lambertian surfaces and non-Lambertian surfaces therefore cannot be reconstructed accurately. When reconstructing complex scenes, occlusion and inter-reflection caused by the object shape contaminate the coded information, so the reconstruction result is also prone to noise and error.
A photometric 3D scanning system takes target images of the three-dimensional object under different lighting directions as input, and then, based on an assumed surface reflectance model (or reflection model), sets up and solves equations for the surface normal and reflectance of the object, thereby reconstructing the model of the three-dimensional object. Commonly used reflectance models include the Lambertian reflection model (Lambertian surfaces), the Phong reflection model (highly specular surfaces), and BRDF models (general surfaces).
However, the surface of a three-dimensional object may be complex, for example composed of sub-regions with different reflective properties. Assuming a single optical model then leads to large errors in the computed normal field, and a purely photometric method also struggles to recover the absolute depth of the object, so the accuracy of the reconstructed three-dimensional object is low.
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to those skilled in the art that the present application may also be practiced in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present application.
The technical solutions of the present application are described below by way of specific embodiments.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or combinations thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In specific implementations, the mobile terminals described in the embodiments of the present application include, but are not limited to, mobile phones, laptop computers, tablet computers, and other portable devices having touch-sensitive surfaces (for example, touch screen displays and/or touch pads). It should also be understood that in some embodiments the device is not a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch screen display and/or a touch pad).
In the following discussion, a mobile terminal including a display and a touch-sensitive surface is described. It should be understood, however, that the mobile terminal may include one or more other physical user interface devices such as a physical keyboard, a mouse, and/or a joystick.
Various applications executable on the mobile terminal may use at least one common physical user interface device such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal may be adjusted and/or changed between applications and/or within an application. In this way, the common physical architecture of the terminal (for example, the touch-sensitive surface) can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, the terms "first", "second", "third", and so on are used only to distinguish descriptions and are not to be understood as indicating or implying relative importance.
FIG. 1 is a schematic architectural diagram of an example of a structured light system suitable for applying the method for reconstructing a three-dimensional object combining structured light and photometry according to an embodiment of the present application.
As shown in FIG. 1, the structured light system 100 includes a computer 10, a camera 20, a projector 30, and an object (or object to be reconstructed) 40. The coding patterns of the projector 30 can be set by the computer 10; when 3D reconstruction is required, the projector 30 projects onto a designated region (for example, the region where the object 40 is placed). The camera 20 then captures images of the object 40 in the designated region, and the computer 10 decodes the coded information in the object images captured by the camera 20 to reconstruct the three-dimensional information of the object.
It should be understood that the device types described above with reference to FIG. 1 are merely examples; for example, the camera 20 may be replaced by another device with an image acquisition function, the computer 10 may be another mobile terminal with processing capability, and so on.
FIG. 2 is a schematic architectural diagram of an example of a photometric system suitable for applying the method for reconstructing a three-dimensional object combining structured light and photometry according to an embodiment of the present application.
As shown in FIG. 2, the photometric system 200 includes an object (or object to be reconstructed) 210, a plurality of (for example, P, P being a positive integer) area light sources 220, a control board 230, and a camera 240. When a 3D reconstruction operation is required, the control board 230 controls the area light sources 220 to illuminate in turn, and the camera 240 captures grayscale images corresponding to the different lighting directions. The control board 230 can then solve the reflectance equation and the normal vectors of the object surface from the grayscale images; integrating the normal vectors recovers the relative height of each surface point, so that the three-dimensional information of the object can be reconstructed.
It should be understood that the device types described above with reference to FIG. 2 are merely examples; for example, the control board 230 may be replaced by another mobile terminal with processing capability, the area light sources at multiple positions may be replaced by point light sources, and so on. In addition, the structured light system of FIG. 1 and the photometric system of FIG. 2 may be fused; the fused system realizes the functions of both systems while simplifying the system hardware, for example by sharing the camera, the computer, and so on between the two systems.
FIG. 3 is a flowchart of an example of the method for reconstructing a three-dimensional object combining structured light and photometry according to an embodiment of the present application. The method may be performed by a mobile terminal (for example, the computer 10), which performs control or processing operations so as to reconstruct a high-precision three-dimensional object.
As shown in FIG. 3, in step 310, N first images are acquired, N being a positive integer. Each first image is captured after a coding pattern with a coded fringe sequence is projected onto the three-dimensional object. For example, a structured light system (as shown in FIG. 1) may be used to project a preset number of coding patterns and capture the coding patterns on the three-dimensional object, obtaining the corresponding images.
The number N of first images may match the number of projected coding patterns; for example, when 8 coding patterns are projected, the number of captured object images may also be 8. Note that the number of coding patterns may be related to the number of code bits; for example, when an 8-bit code sequence is used, the number of corresponding coding patterns is 8.
In addition, each coding pattern has a unique coded fringe sequence composed of multiple parallel coded fringes, with fringe boundaries between adjacent fringes. Various fringe localization schemes (for example, pixel-center decoding) may be used in the embodiments of the present application. In some implementations, a decoding and localization scheme based on fringe boundaries may be adopted, which achieves high-precision (for example, sub-pixel) localization; more details are given below.
Note that the coding patterns in the embodiments of the present application may adopt coding patterns from various structured light fringe coding techniques, such as binary structured light coding patterns, Gray code structured light coding patterns, or binary Gray code patterns, without limitation.
FIG. 4 is a schematic diagram of an example of a 4-bit Gray code structured light coding pattern.
As shown in FIG. 4, when 4-bit Gray code structured light coding patterns are used for 3D reconstruction, 4 coded images need to be projected at different times; the Gray code patterns then contain a total of 2^4 - 1 = 15 fringe boundaries, each with a corresponding index and code value. In the 4 Gray code patterns projected at different times, the fringe boundaries in different intensity images do not coincide, so the Gray code values or phase code values are not easily misjudged, and a mapping between Gray code values and fringe boundaries can be established. In addition, when a Gray code structured light coding pattern is composed of black and white fringes, it is a binary Gray code structured light coding pattern.
In theory, in a fringe-boundary-based Gray code scheme, Gray code boundary decoding achieves the same image sampling density as pixel-center decoding only when the finest fringe width in the coding pattern is 1 pixel. Therefore, on top of projecting the Gray code patterns, line-shift patterns may also be projected to increase the image sampling density beyond Gray code boundary decoding, thereby ensuring sub-pixel localization accuracy of the image sampling points.
FIG. 5A is a schematic diagram of an example of a 4-bit Gray code plus 4-step line-shift structured light coding pattern.
As shown in FIG. 5A, after projecting 4 Gray code patterns, 4 periodic line-shift fringe patterns may be projected in sequence. The Gray code fringe boundaries and the line-shift fringe centers never coincide and are 0.5 fringe width apart. Decoding the two together, that is, combining the center localization lines (solid) and the fringe boundary localization lines (dashed) into final localization lines (alternating solid and dashed), raises the image sampling density from one fringe width to 0.5 fringe width; for example, when a single fringe is 1 pixel wide in the intensity image, the sampling density is about 0.5 pixel, achieving high-precision sub-pixel localization.
FIG. 5B is a schematic diagram of an example of an 8-bit Gray code plus 4-step line-shift binary Gray code structured light coding pattern. As introduced above, the embodiments of the present application may also adopt 8-bit Gray code plus 4-step line-shift binary Gray code patterns or other coding patterns (for example, 10-bit Gray code).
In some examples of the embodiments of the present application, the coding information of the fringe boundaries is carried directly in the coding patterns projected by the projector, which simplifies decoding but adds to the pattern design effort; for example, boundary information may need to be specified between adjacent fringes of the same gray value (for example, two adjacent white fringes). In other examples, each coding pattern may be projected twice, once in positive and once in inverted form (that is, with the fringe gray values inverted). With positive/negative fringe projection, for example a positive pattern 0-0-0-0-255-255-255-255 and an inverted pattern 255-255-255-255-0-0-0-0, the zero crossings of the positive and negative fringes serve as the fringe boundaries of the coding pattern, and the intersection of the lines fitted to the positive and negative fringes yields sub-pixel fringe localization.
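As an illustrative sketch only (not part of the original application), the following Python code generates the binary Gray code fringe patterns described above together with their inverted counterparts for positive/negative projection; the projector resolution, bit count, and helper name are assumptions introduced here for illustration.

```python
import numpy as np

def gray_code_patterns(num_bits: int, width: int, height: int):
    """Generate binary Gray code fringe patterns (0/255) plus their inverses.

    Each of the num_bits patterns encodes one bit of the Gray code of the
    projector column index, producing vertical fringes whose boundaries do
    not coincide across patterns.
    """
    cols = np.arange(width)
    codes = (cols * (1 << num_bits)) // width   # column index -> code index
    gray = codes ^ (codes >> 1)                 # binary code -> Gray code
    patterns, inverses = [], []
    for bit in range(num_bits - 1, -1, -1):     # most significant bit first
        stripe = ((gray >> bit) & 1).astype(np.uint8) * 255
        img = np.tile(stripe, (height, 1))      # same value down every column
        patterns.append(img)
        inverses.append(255 - img)              # inverted pattern for +/- projection
    return patterns, inverses

# Example: 4-bit Gray code patterns for a 1024x768 projector, as in FIG. 4.
pats, inv_pats = gray_code_patterns(4, 1024, 768)
print(len(pats), pats[0].shape)                 # -> 4 (768, 1024)
```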
In step 320, structured-light depth information of the three-dimensional object is determined based on the N first images. Specifically, the fringe localization information of each pixel in the first images may be decoded, and the absolute depth information of the three-dimensional object recovered based on the triangulation principle; for details, reference may be made to descriptions in the related art.
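As one way to picture the triangulation in step 320, the sketch below (a hypothetical helper, not the application's own implementation) intersects a camera pixel ray with the projector fringe plane identified by the decoded fringe index; the projection matrices are assumed to come from a prior projector-camera calibration.

```python
import numpy as np

def triangulate_point(P_cam, P_proj, cam_pixel, proj_column):
    """Recover a 3D point from one camera pixel and its decoded projector column.

    P_cam, P_proj : 3x4 projection matrices from a prior stereo calibration.
    cam_pixel     : (u, v) pixel coordinates in the camera image.
    proj_column   : fringe (column) index decoded from the coding patterns.
    """
    u, v = cam_pixel
    A = np.vstack([
        u * P_cam[2] - P_cam[0],                   # camera ray constraint (x)
        v * P_cam[2] - P_cam[1],                   # camera ray constraint (y)
        proj_column * P_proj[2] - P_proj[0],       # projector fringe-plane constraint
    ])
    _, _, vt = np.linalg.svd(A)                    # homogeneous least-squares solution
    X = vt[-1]
    return X[:3] / X[3]                            # homogeneous -> Euclidean point
```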
In step 330, M second images are acquired. The M second images are captured after P light sources are projected onto the three-dimensional object from different directions, M and P both being positive integers. For example, a photometric system (as shown in FIG. 2) may be used to control the P area light sources to illuminate the three-dimensional object from different directions in turn, while the camera captures the corresponding images of the three-dimensional object.
In step 340, photometric information of the three-dimensional object is determined based on the M second images. Here, the photometric information includes normal information and reflectance; for example, the reflectance equation and the normal vectors of the object surface may be solved from the M second images. The photometric information may also include other parameter information, such as key intensity values decomposed from the reflectance, for example the diffuse component, the specular component, and the refraction component.
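To make the normal/reflectance solve concrete, the following sketch implements classic Lambertian photometric stereo by per-pixel least squares; it is an illustrative baseline under a Lambertian assumption, not the reflection models named later for metallic or ceramic surfaces, and the function name and shapes are assumptions.

```python
import numpy as np

def lambertian_photometric_stereo(images, light_dirs):
    """Per-pixel normals and albedo from M images under known lighting.

    images     : array of shape (M, H, W), grayscale intensities.
    light_dirs : array of shape (M, 3), unit lighting directions.
    Solves I = albedo * (L @ n) in the least-squares sense for every pixel.
    """
    M, H, W = images.shape
    I = images.reshape(M, -1)                                 # (M, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)        # (3, H*W) = albedo * n
    albedo = np.linalg.norm(G, axis=0)
    normals = np.where(albedo > 1e-8, G / np.maximum(albedo, 1e-8), 0.0)
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)
```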
In step 350, the three-dimensional object is reconstructed based on the structured-light depth information and the photometric information.
Note that although the steps in the flow of FIG. 3 are performed in sequence, this is only an example and is not limiting; for instance, in some examples, steps 330 and 340 may be performed before steps 310 and 320.
In the embodiments of the present application, by fusing the structured light system and the photometric system, errors in the structured light localization result caused by the reflectance, normal vectors, and other properties of the surface texture can be reduced or eliminated; the method is applicable to three-dimensional objects with various complex surfaces while guaranteeing a certain reconstruction accuracy.
Note that before the photometric system is used, the positions of its light sources may first be calibrated to ensure a good photometric stereo reconstruction result. FIG. 6 is a flowchart of an example of calibrating the light sources in the photometric system according to an embodiment of the present application.
As shown in FIG. 6, in step 610, based on the structured-light depth information, it is detected whether there are regions of the surface of the three-dimensional object that the light projected by the P light sources cannot cover. Here, the structured light system may first be used to determine a preliminary depth map of the three-dimensional object, which is then used to calibrate the light sources of the photometric system.
In step 620, when the light projected by the P light sources cannot cover the surface of the three-dimensional object, the positions and projection directions of the P light sources are determined according to the structured-light depth information. Specifically, the light sources of the photometric system are calibrated using the initial depth image, for example by selecting suitable positions and projection directions for each light source within preset position and direction ranges, so that the light sources of the photometric system fully cover the surface of the three-dimensional object without occluded regions.
In the embodiments of the present application, the structured-light depth information in the preliminary depth map can be used to determine a preliminary 3D point cloud structure of the three-dimensional object; the light projected by the light sources in the current photometric system may not cover this preliminary point cloud structure, and occlusion detection is completed by calibrating the light sources of the current photometric system against the preliminary 3D point cloud structure, thereby removing the influence of the occluded parts on the photometric system.
Regarding some implementations of step 340 above, the M second images may be input to a reflection model matching the target surface type of the three-dimensional object, and the reflection model outputs the photometric information of the three-dimensional object. FIG. 7 is a flowchart of an example of determining the photometric information of the three-dimensional object (i.e., step 340 above) according to an embodiment of the present application.
As shown in FIG. 7, in step 710, the target surface type of the three-dimensional object is obtained. Surface types may be diverse, such as metallic reflective surfaces, translucent ceramic reflective surfaces, and so on. In some examples of the embodiments, the target surface type of the object to be reconstructed may be detected by various potential or known surface type detection techniques. In other examples, the target surface type of the object to be reconstructed may be specified by user input.
In step 720, a reflection model matching the target surface type is determined from a preset set of reflection models. Each reflection model in the set is configured with a corresponding surface type; for example, for typical non-Lambertian surfaces (metal, ceramic, etc.), corresponding reflection models may be preconfigured to characterize their surface reflectance.
In step 730, the M second images are input to the determined reflection model, which outputs the photometric information of the three-dimensional object.
In some examples of the embodiments, for metallic reflective surfaces the Phong model may be used to model the surface reflection properties, and for translucent ceramic reflective surfaces the Hanrahan–Krueger (HK) model with a layered reflection structure may be used. By invoking the corresponding reflection model for each surface type to solve for the photometric information of the three-dimensional object, high accuracy of the determined photometric information can be guaranteed.
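The dispatch from surface type to reflection model can be pictured with the minimal sketch below; the Lambertian and Phong terms are standard textbook forms, the dictionary keys and parameter values are illustrative assumptions, and the layered Hanrahan–Krueger model is only indicated as a placeholder.

```python
import numpy as np

def lambertian(n, l, v, albedo):
    # Diffuse reflection only: intensity proportional to cos(theta).
    return albedo * max(float(n @ l), 0.0)

def phong(n, l, v, albedo, ks=0.5, shininess=32):
    # Diffuse term plus a specular lobe around the mirror direction.
    diffuse = albedo * max(float(n @ l), 0.0)
    r = 2.0 * float(n @ l) * n - l              # mirror reflection of l about n
    specular = ks * max(float(r @ v), 0.0) ** shininess
    return diffuse + specular

REFLECTION_MODELS = {
    "lambertian": lambertian,   # matte surfaces
    "metal": phong,             # highly specular surfaces
    # "ceramic": hanrahan_krueger,  # layered translucent model, not shown here
}

def predicted_intensity(surface_type, n, l, v, albedo):
    """Evaluate the reflection model configured for the given surface type."""
    return REFLECTION_MODELS[surface_type](n, l, v, albedo)
```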
FIG. 8 is a flowchart of an example of reconstructing the three-dimensional object based on the structured-light depth information and the photometric information according to an embodiment of the present application.
As shown in FIG. 8, in step 810, the structured-light depth information is iteratively calibrated based on the photometric information, yielding calibration deviation information relative to the structured-light depth information.
Note that changes in the reflectance of the object surface (for example, of the surface texture) cause errors in structured light fringe localization. In some examples of the embodiments, the fringe localization information corresponding to each pixel of the three-dimensional object (or the structured-light depth information determined from the N first images) may be calibrated according to a preset law describing how reflectance affects fringe localization (for example, determined from prior knowledge or a trained model), yielding correspondingly updated structured-light depth information, and the deviation in the structured-light depth information before and after the depth update is computed.
Specifically, on the basis of the updated depth map (or 3D point cloud structure), the normal vector information of the corresponding three-dimensional object may be obtained through a normal vector computation rule; for details of this rule, reference may be made to descriptions in the related art, which are not repeated here.
In step 820, it is determined whether the calibration deviation information satisfies a preset iteration termination condition. The iteration termination condition may be a termination condition for the iterative calibration operation of step 810 above.
If the result of step 820 indicates that the preset iteration termination condition is satisfied, the method jumps to step 830. If the result of step 820 indicates that the preset iteration termination condition is not satisfied, the method returns to step 810.
In step 830, the three-dimensional object is reconstructed based on the calibrated structured-light depth information that satisfies the iteration termination condition.
In the embodiments of the present application, iteratively calibrating the structured-light depth information based on the photometric information compensates for the errors in the structured-light depth information caused by photometric information such as surface texture reflectance, ensuring a high-precision 3D reconstruction result.
In some examples of the embodiments of the present application, the photometric information may include first photometric information and second photometric information. FIG. 9 is a flowchart of an example of determining whether the calibration deviation information satisfies the iteration termination condition according to an embodiment of the present application.
As shown in FIG. 9, in step 910, the structured-light depth information is iteratively calibrated based on the first photometric information, yielding first calibration deviation information relative to the structured-light depth information.
In step 920, the second photometric information is calibrated using the calibrated structured-light depth information, yielding second calibration deviation information relative to the second photometric information. For example, new second photometric information may be solved from the calibrated structured-light depth information and compared with the original second photometric information, giving the corresponding second calibration deviation information.
In step 930, it is determined whether the first calibration deviation information and the second calibration deviation information satisfy a preset deviation condition, so as to determine accordingly whether the calibration deviation information satisfies the preset iteration termination condition.
In some implementations, the first photometric information may be reflectance and the second photometric information may be normal information. In this way, the depth information determined by the structured light system can be calibrated based on the reflectance determined by the photometric system, giving a corresponding first deviation value; the calibrated depth information is then used to back out the normal information of the three-dimensional object, whose deviation from the normal information determined by the photometric system gives a second deviation value; and the first and second deviation values together determine whether the depth information, normal information, and reflectance have been adjusted successfully. This effectively avoids under-calibration or over-calibration, so that the final localization result satisfies the requirements of both the photometric system and the structured light system, guaranteeing the accuracy of the reconstructed three-dimensional object.
Regarding step 930 above, in some implementations, target deviation information of the first calibration deviation information and the second calibration deviation information under a preset deviation weight configuration may be determined, and whether the first and second calibration deviation information satisfy the preset deviation condition is determined accordingly. For example, the deviation information may be weighted and summed, and whether the iteration termination condition is satisfied is determined from the comparison of the weighted sum against a preset error threshold.
Specifically, based on the above reflection properties and fringe localization method, a joint objective function that simultaneously satisfies the constraints of the photometric system and the structured light system is proposed:
E(x_pij, n_ps, ρ_ps) = E_sls(x_pij, n_ps) + λ·E_ps(x_pij, ρ_ps)
where pij denotes the image pixel at coordinates (i, j) and its phase value, x_pij denotes the depth information determined by structured light decoding (for example, the 3D point cloud determined by the triangulation principle), ps denotes the photometric system, sls denotes the structured light system, n_ps denotes the normal information determined by the photometric system, ρ_ps denotes the reflectance information obtained by the photometric method, and λ denotes a weight value.
Here, E_sls(x_pij, n_ps) denotes the difference between the target scene normal information solved from the depth information of the structured light decoding and the target scene normal information determined by the photometric system; E_ps(x_pij, ρ_ps) denotes the difference (or gray difference) between the depth information determined by the structured light system and the depth information updated by the photometric system whose light sources are calibrated from that depth information; and E(x_pij, n_ps, ρ_ps) denotes the joint objective function of the structured light system and the photometric system.
Through the above optimization model, the 3D reconstruction problem fusing the structured light system and the photometric system can be optimized as follows:
(x_pij*, n_ps*, ρ_ps*) = argmin E(x_pij, n_ps, ρ_ps)
where x_pij^(0) denotes the depth information of the target object (or object to be reconstructed) determined by the structured light system and the triangulation principle, and n_ps^(0) and ρ_ps^(0) respectively denote the normal and reflectance of the target object determined by the photometric system.
That is, x_pij^(0), n_ps^(0), and ρ_ps^(0) can be taken as the starting point of a stepwise (or iterative) optimization, and the depth, normal, and reflectance information corresponding to the minimum of E(x_pij, n_ps, ρ_ps) are taken as the corresponding optimal depth, normal, and reflectance information. In some implementations, E(x_pij, n_psij) may be compared with a set threshold: when E(x_pij, n_psij) is greater than or equal to the threshold, the corresponding depth, reflectance, or normal information is iteratively updated until E(x_pij, n_psij) is less than the threshold, at which point the iteration stops, E(x_pij, n_psij) is then the minimum, and high-precision 3D reconstruction of complex objects is achieved.
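The exact residual terms of E are carried as image equations in the original filing; the sketch below therefore only illustrates a plausible form of the joint objective under the definitions given above (a normal-consistency term plus a weighted depth-consistency term), with all array shapes and the function name introduced here as assumptions.

```python
import numpy as np

def joint_energy(depth_normals, ps_normals, sl_depth, ps_updated_depth, lam=1.0):
    """Sketch of E = E_sls + lam * E_ps as described in the text.

    E_sls: mismatch between the normal field derived from the structured-light
           depth and the normal field given by the photometric system.
    E_ps : mismatch between the structured-light depth and the depth updated
           through the photometrically calibrated light sources.
    """
    e_sls = np.mean(np.sum((depth_normals - ps_normals) ** 2, axis=-1))  # per-pixel normal error
    e_ps = np.mean((sl_depth - ps_updated_depth) ** 2)                   # per-pixel depth error
    return e_sls + lam * e_ps
```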
Note that in structured light fringe localization techniques, the commonly used decoding approaches are contour fitting and weighted gray-centroid methods, which are used to achieve sub-pixel localization of fringe boundaries. In contour fitting, sub-pixel localization accuracy is obtained by fitting an edge profile function (for example, a linear function or a Sigma function). In the weighted gray-centroid method, the centroid within an M*N template is obtained by gray-weighted averaging, achieving accuracy similar to fitting the center of a Gaussian function. In probabilistic terms, both approaches are essentially gray-value-based maximum likelihood parameter estimation and do not consider the influence of the object's surface reflection properties on the fringe localization result (for example, fringe boundaries or fringe profiles). Here, a fringe profile can represent different types of fringe boundaries; for example, in binary fringes, the fringe profile includes "black-to-white" boundaries and "white-to-black" boundaries along a given coding direction.
In one example of the embodiments of the present application, the reflection properties of the object surface may be introduced as prior knowledge, and a posterior probability estimate of the fringe boundary function parameters is obtained on top of existing sub-pixel fringe localization. This effectively eliminates the influence of the surface reflection properties on the fringe localization result, giving a more accurate fringe localization result and guaranteeing high accuracy of the reconstructed three-dimensional object.
Specifically, assume that the gray variable y of the fringe localization information follows the Gaussian distribution:
p(y) = N(y; u_0, σ_0^2)        Equation (1)
where u_0 denotes the mean of the Gaussian distribution of the fringe localization information, σ_0 denotes its standard deviation, y denotes the gray variable of the fringe localization information, p(y) denotes the probability distribution of the gray variable y, and N denotes the Gaussian distribution.
The likelihood function of the fringe localization variable y and the object surface reflectance ρ is:
p(ρ|y) = N(ρ; y, u_ρ, σ_ρ^2)        Equation (2)
where p(ρ|y) denotes the probability distribution of the gray variable y under the object surface reflectance ρ at the fringe localization, u_ρ denotes the mean of the Gaussian distribution of the fringe localization information under the surface reflectance ρ, σ_ρ denotes the corresponding standard deviation, and N() denotes the Gaussian distribution.
Based on the Bayesian formula, p(y|ρ) ∝ p(ρ|y)p(y), and the posterior probability distribution of y for the fringe localization information is:
p(y|ρ) = N(y | u_N, σ_N^2)        Equation (3)
where u_N denotes the maximum a posteriori estimate of the fringe localization information and σ_N denotes the standard deviation of the Gaussian distribution N(). Thus, based on the data distributions of the reflectance and the fringe localization information taken as prior knowledge, statistical formulas yield the law by which reflectance affects the fringe localization information, which is then used to calibrate the fringe localization information under the corresponding reflectance, eliminating the influence of the object's surface reflectance on the fringe localization result.
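The closed-form expression for u_N appears only as an image in the original filing and is not reproduced above; the sketch below therefore uses the standard conjugate-Gaussian posterior update as an illustrative stand-in, with a simplified likelihood p(ρ|y) = N(ρ; y, σ_ρ^2) assumed for clarity.

```python
def reflectance_aware_localization(y_prior_mean, y_prior_var, rho_obs, rho_var):
    """Posterior (MAP) estimate of the fringe-localization gray value y.

    Prior      : p(y)       = N(y; u0, sigma0^2)
    Likelihood : p(rho | y) = N(rho; y, sigma_rho^2)   (illustrative assumption)
    Posterior  : p(y | rho) = N(y; uN, sigmaN^2) via the usual conjugate update.
    """
    post_var = 1.0 / (1.0 / y_prior_var + 1.0 / rho_var)          # precisions add
    post_mean = post_var * (y_prior_mean / y_prior_var + rho_obs / rho_var)
    return post_mean, post_var

# Example: prior gray estimate 120 (variance 25) corrected by a
# reflectance-driven observation 128 (variance 100).
uN, varN = reflectance_aware_localization(120.0, 25.0, 128.0, 100.0)
```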
In the embodiments of the present application, through stepwise iteration (that is, iteratively updating the depth information determined by the structured light system with the reflectance determined by the photometric system, and updating the normal information in the photometric system with the updated structured-light depth information), the optimal reflectance, normal, and depth values under the constraints of the above two systems are obtained (that is, the joint objective function E(x_pij, n_ps, ρ_ps) satisfies the convergence condition), thereby achieving high-precision reconstruction of the three-dimensional object.
In another example of the embodiments of the present application, when the reflectance is used to update the structured-light depth information, the photometric information (for example, reflectance) and the structured-light depth information determined from the N first images may also be input to a depth information calibration model, which correspondingly outputs the calibrated structured-light depth information. Here, each training sample in the training sample set of the depth information calibration model includes structured-light depth information corresponding to Q pieces of photometric information, Q being a positive integer. In this way, the depth information calibration model automatically learns from the samples how photometric information affects structured-light depth information; using a machine learning model to output the structured-light depth information corresponding to the photometric information, based on the input photometric information and the initial depth information determined by the structured light system, can reduce or compensate for the errors in the structured light localization result caused by the surface photometric information of the object (for example, reflectance).
FIG. 10 is a flowchart of an example of reconstructing the three-dimensional object based on the fused photometric and structured light system according to an embodiment of the present application.
As shown in FIG. 10, in step 1001, the structured light system is used to determine the fringe localization information of the three-dimensional object.
In step 1003, preliminary structured-light depth information of the three-dimensional object is determined based on the triangulation principle and the fringe localization information of the three-dimensional object.
In step 1005, the light sources of the photometric system are calibrated based on the preliminary structured-light depth information so as to perform occlusion detection, avoiding occluded regions affecting the measurement results of the photometric system.
In step 1007, the image set collected with the photometric system is provided to the reflection model, which determines the corresponding preliminary normal information and preliminary reflectance.
In step 1009, the joint objective function value E is computed based on the preliminary normal information, the preliminary reflectance, and the preliminary structured-light depth information. For details of this computation, reference may be made to the description in the related embodiments above.
In step 1011, it is determined whether the joint objective function value E is greater than a preset threshold T_threshold.
If the result of step 1011 is E > T_threshold, the method jumps to step 1013; if the result is E ≤ T_threshold, the method jumps to step 1015.
In step 1013, the normal information and the structured-light depth information are iteratively updated. For details of these update operations, reference may be made to the description in the related embodiments above.
In step 1015, the normal information, reflectance, and structured-light depth information at the end of the iteration may be determined as the optimal normal information, reflectance, and structured-light depth information.
In step 1017, the three-dimensional object is reconstructed using one or more of the determined optimal normal information, reflectance, and structured-light depth information; for example, the three-dimensional object may be reconstructed from the optimal structured-light depth information, or from the optimal normal information and reflectance.
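The control flow of FIG. 10 can be summarized with the skeleton below, reusing the joint_energy sketch above; the sl_system and ps_system objects and all of their method names are hypothetical wrappers introduced only to keep the loop readable, not interfaces defined by the application.

```python
def reconstruct(sl_system, ps_system, threshold, max_iter=50):
    """Skeleton of the FIG. 10 loop over assumed system wrappers."""
    fringe = sl_system.localize_fringes()                      # step 1001
    depth = sl_system.triangulate(fringe)                      # step 1003
    ps_system.calibrate_lights(depth)                          # step 1005, occlusion check
    normals, albedo = ps_system.solve_reflection_model()       # step 1007

    for _ in range(max_iter):
        e = joint_energy(sl_system.normals_from_depth(depth), normals,
                         depth, ps_system.update_depth(depth, albedo))   # step 1009
        if e <= threshold:                                      # step 1011
            break
        depth = sl_system.recalibrate_depth(depth, albedo)      # step 1013: reflectance-aware update
        normals = sl_system.normals_from_depth(depth)           # refresh normals from new depth
    return depth, normals, albedo                                # steps 1015 / 1017
```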
In the embodiments of the present application, the photometric system and the structured light system are fused, the surface reflectance is introduced into the structured-light-based fringe localization method, and finally the optimal normal information, reflectance, and structured-light depth information are iteratively solved under the constraint of the joint objective function of the fused system. Thus, based on the fusion of photometry and structured light, the reflection properties of the target scene (or of the surface of the three-dimensional object) are introduced into fringe localization to eliminate the influence of surface reflectance on fringe localization, improving the reconstruction accuracy of structured light; the localization optimization problem under the fused system is solved, and the optimal 3D reconstruction information is obtained through a two-step iteration, which can effectively improve the reconstruction accuracy for non-Lambertian surfaces.
FIG. 11A shows an example of richly textured paper to be reconstructed. FIG. 11B shows an example of the reconstruction result of the paper in FIG. 11A by a first structured light system. FIG. 11C shows an example of the reconstruction result by a second structured light system. FIG. 11D shows an example of the reconstruction result by the fusion system of an embodiment of the present application. It can be seen that the fusion system effectively reduces the influence of surface texture on the reconstruction result, producing a smoother and more faithful 3D reconstruction.
FIG. 12A shows an example of a circuit board to be reconstructed whose surface has various reflective properties. FIG. 12B shows an example of the reconstruction result of the circuit board in FIG. 12A by a single structured light system. FIG. 12C shows an example of the reconstruction result of the circuit board in FIG. 12A by the fusion system of an embodiment of the present application. The comparison shows that when the fusion system of the embodiments is applied to reconstruct a complex scene composed of multiple reflective properties, a reconstruction result preserving more detail can be obtained.
FIG. 13A shows an example of a bowl to be reconstructed. FIG. 13B shows an example of the reconstruction result of the bowl in FIG. 13A by a single structured light system. FIG. 13C shows an example of the reconstruction result by a single photometric system. FIG. 13D shows an example of the reconstruction result by the fusion system of an embodiment of the present application.
The above comparisons show that, compared with a single structured light system, applying the fusion system of the embodiments and the corresponding decoding algorithm yields smoother and more accurate reconstruction results, and can reduce or eliminate the influence of non-Lambertian reflectance differences and rich surface texture on the structured light reconstruction result.
FIG. 14 is a structural block diagram of an example of an apparatus for reconstructing a three-dimensional object combining structured light and photometry according to an embodiment of the present application.
As shown in FIG. 14, the apparatus 1400 for reconstructing a three-dimensional object combining structured light and photometry includes a structured light image acquisition unit 1410, a structured-light depth information determination unit 1420, a photometric image acquisition unit 1430, a photometric information determination unit 1440, and a three-dimensional object reconstruction unit 1450.
The structured light image acquisition unit 1410 is configured to acquire N first images. Each first image is captured after a coding pattern with a coded fringe sequence is projected onto the three-dimensional object, N being a positive integer.
The structured-light depth information determination unit 1420 is configured to determine the structured-light depth information of the three-dimensional object based on the N first images.
The photometric image acquisition unit 1430 is configured to acquire M second images, the M second images being captured after P light sources are projected onto the three-dimensional object from different directions, M and P both being positive integers.
The photometric information determination unit 1440 is configured to determine the photometric information of the three-dimensional object based on the M second images.
The three-dimensional object reconstruction unit 1450 is configured to reconstruct the three-dimensional object based on the structured-light depth information and the photometric information.
Note that, because the information exchange and execution processes between the above apparatus/units are based on the same concept as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiment section and are not repeated here.
FIG. 15 is a schematic diagram of an example of a terminal device according to an embodiment of the present application. As shown in FIG. 15, the terminal device 1500 of this embodiment includes a processor 1510, a memory 1520, and a computer program 1530 stored in the memory 1520 and executable on the processor 1510. When executing the computer program 1530, the processor 1510 implements the steps of the above embodiments of the method for reconstructing a three-dimensional object combining structured light and photometry, for example steps 310 to 350 shown in FIG. 3; or, when executing the computer program 1530, the processor 1510 implements the functions of the modules/units in the above apparatus embodiments, for example the functions of units 1410 to 1450 shown in FIG. 14.
Exemplarily, the computer program 1530 may be divided into one or more modules/units, which are stored in the memory 1520 and executed by the processor 1510 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, which are used to describe the execution of the computer program 1530 in the terminal device 1500. For example, the computer program 1530 may be divided into a structured light image acquisition module, a structured-light depth information determination module, a photometric image acquisition module, a photometric information determination module, and a three-dimensional object reconstruction module, which may respectively perform the operations of steps 310 to 350.
The terminal device 1500 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 1510 and the memory 1520. Those skilled in the art will understand that FIG. 15 is only an example of the terminal device 1500 and does not constitute a limitation on the terminal device 1500; it may include more or fewer components than shown, combine certain components, or use different components; for example, the terminal device may also include input and output devices, network access devices, a bus, and so on.
The processor 1510 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 1520 may be an internal storage unit of the terminal device 1500, such as a hard disk or memory of the terminal device 1500. The memory 1520 may also be an external storage device of the terminal device 1500, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the terminal device 1500. Further, the memory 1520 may include both an internal storage unit of the terminal device 1500 and an external storage device. The memory 1520 is used to store the computer program and other programs and data required by the terminal device. The memory 1520 may also be used to temporarily store data that has been output or will be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example; in practical applications, the above functions may be assigned to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of mutual distinction and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed or recorded in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative; for example, the division of the modules or units is only a logical function division, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above units may be implemented in the form of hardware or in the form of software.
If implemented in the form of a software functional unit and sold or used as an independent product, the integrated module/unit may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may also be completed by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on. Note that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some of the technical features therein; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and should all be included within the protection scope of the present application.

Claims (10)

  1. A method for reconstructing a three-dimensional object combining structured light and photometry, wherein the method comprises:
    acquiring N first images, each first image being captured after a coding pattern with a coded fringe sequence is projected onto a three-dimensional object, N being a positive integer;
    determining structured-light depth information of the three-dimensional object based on the N first images;
    acquiring M second images, the M second images being captured after P light sources are projected onto the three-dimensional object from different directions, M and P both being positive integers;
    determining photometric information of the three-dimensional object based on the M second images; and
    reconstructing the three-dimensional object based on the structured-light depth information and the photometric information.
  2. The method for reconstructing a three-dimensional object combining structured light and photometry according to claim 1, wherein before acquiring the M second images, the method further comprises:
    determining positions and projection directions of the P light sources according to the structured-light depth information.
  3. The method for reconstructing a three-dimensional object combining structured light and photometry according to claim 1, wherein determining the photometric information of the three-dimensional object based on the M second images specifically comprises:
    inputting the M second images to a reflection model matching a target surface type of the three-dimensional object, so that the reflection model outputs the photometric information of the three-dimensional object.
  4. The method for reconstructing a three-dimensional object combining structured light and photometry according to claim 1, wherein reconstructing the three-dimensional object based on the structured-light depth information and the photometric information comprises:
    iteratively calibrating the structured-light depth information based on the photometric information to obtain calibration deviation information relative to the structured-light depth information;
    determining whether the calibration deviation information satisfies a preset iteration termination condition; and
    when the condition is satisfied, reconstructing the three-dimensional object based on the calibrated structured-light depth information that satisfies the iteration termination condition.
  5. The method for reconstructing a three-dimensional object combining structured light and photometry according to claim 4, wherein the photometric information comprises first photometric information and second photometric information,
    wherein iteratively calibrating the structured-light depth information based on the photometric information to obtain first calibration deviation information relative to the structured-light depth information comprises:
    iteratively calibrating the structured-light depth information based on the first photometric information to obtain first calibration deviation information relative to the structured-light depth information;
    and, correspondingly, determining whether the calibration deviation information satisfies the preset iteration termination condition comprises:
    calibrating the second photometric information using the calibrated structured-light depth information to obtain second calibration deviation information relative to the second photometric information;
    determining whether the first calibration deviation information and the second calibration deviation information satisfy a preset deviation condition, so as to correspondingly determine whether the calibration deviation information satisfies the preset iteration termination condition.
  6. The method for reconstructing a three-dimensional object combining structured light and photometry according to claim 5, wherein the first photometric information is reflectance, and the second photometric information is normal information.
  7. The method for reconstructing a three-dimensional object combining structured light and photometry according to claim 5, wherein determining whether the first calibration deviation information and the second calibration deviation information satisfy a preset deviation condition comprises:
    determining target deviation information of the first calibration deviation information and the second calibration deviation information under a preset deviation weight configuration;
    determining, according to a comparison result of whether the target deviation information is less than preset deviation threshold information, whether the first calibration deviation information and the second calibration deviation information correspondingly satisfy the preset deviation condition.
  8. The method for reconstructing a three-dimensional object combining structured light and photometry according to claim 4, wherein iteratively calibrating the structured-light depth information based on the photometric information comprises:
    inputting the photometric information and the structured-light depth information to a depth information calibration model so as to correspondingly output calibrated structured-light depth information, wherein each training sample in the training sample set of the depth information calibration model comprises structured-light depth information corresponding to Q pieces of photometric information, Q being a positive integer.
  9. An apparatus for reconstructing a three-dimensional object combining structured light and photometry, wherein the apparatus comprises:
    a structured light image acquisition unit configured to acquire N first images, each first image being captured after a coding pattern with a coded fringe sequence is projected onto a three-dimensional object, N being a positive integer;
    a structured-light depth information determination unit configured to determine structured-light depth information of the three-dimensional object based on the N first images;
    a photometric image acquisition unit configured to acquire M second images, the M second images being captured after P light sources are projected onto the three-dimensional object from different directions, M and P both being positive integers;
    a photometric information determination unit configured to determine photometric information of the three-dimensional object based on the M second images;
    a three-dimensional object reconstruction unit configured to reconstruct the three-dimensional object based on the structured-light depth information and the photometric information.
  10. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 8 when executing the computer program.
PCT/CN2020/129563 2020-07-28 2020-11-17 融合结构光和光度学的三维对象重建方法及终端设备 WO2022021680A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/927,692 US20230298189A1 (en) 2020-07-28 2020-11-17 Method for reconstructing three-dimensional object combining structured light and photometry and terminal device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010738438.5A CN111951376B (zh) 2020-07-28 2020-07-28 融合结构光和光度学的三维对象重建方法及终端设备
CN202010738438.5 2020-07-28

Publications (1)

Publication Number Publication Date
WO2022021680A1 true WO2022021680A1 (zh) 2022-02-03

Family

ID=73339725

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/129563 WO2022021680A1 (zh) 2020-07-28 2020-11-17 融合结构光和光度学的三维对象重建方法及终端设备

Country Status (3)

Country Link
US (1) US20230298189A1 (zh)
CN (1) CN111951376B (zh)
WO (1) WO2022021680A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115252992A (zh) * 2022-07-28 2022-11-01 北京大学第三医院(北京大学第三临床医学院) 基于结构光立体视觉的气管插管导航***
WO2024093282A1 (zh) * 2022-10-31 2024-05-10 华为技术有限公司 一种图像处理方法、相关设备及结构光***

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11223808B1 (en) * 2020-06-30 2022-01-11 Christie Digital Systems Usa, Inc. Device, system and method for generating a mapping of projector pixels to camera pixels and/or object positions using alternating patterns
CN112959322A (zh) * 2021-03-02 2021-06-15 中国科学院深圳先进技术研究院 控制方法、控制装置及终端设备
CN113052898B (zh) * 2021-04-08 2022-07-12 四川大学华西医院 基于主动式双目相机的点云和强反光目标实时定位方法
CN115775303B (zh) * 2023-02-13 2023-05-05 南京航空航天大学 一种基于深度学习与光照模型的高反光物体三维重建方法
CN116447978B (zh) * 2023-06-16 2023-10-31 先临三维科技股份有限公司 孔位信息检测方法、装置、设备及存储介质
CN117333560B (zh) * 2023-12-01 2024-02-20 北京航空航天大学杭州创新研究院 场景自适应的条纹结构光解码方法、装置、设备和介质

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107228625A (zh) * 2017-06-01 2017-10-03 深度创新科技(深圳)有限公司 三维重建方法、装置及设备
CN110879947A (zh) * 2018-09-06 2020-03-13 山东理工大学 一种基于一次投影结构光平行条纹图案的实时人脸三维测量方法

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100581460C (zh) * 2008-05-29 2010-01-20 上海交通大学 测量颜面缺损患者面部三维形貌的方法
US9635345B2 (en) * 2011-01-24 2017-04-25 Intel Corporation Method and system for acquisition, representation, compression, and transmission of three-dimensional data
EP2869263A1 (en) * 2013-10-29 2015-05-06 Thomson Licensing Method and apparatus for generating depth map of a scene
CN106875468B (zh) * 2015-12-14 2020-05-22 深圳先进技术研究院 三维重建装置及方法
US9947099B2 (en) * 2016-07-27 2018-04-17 Microsoft Technology Licensing, Llc Reflectivity map estimate from dot based structured light systems
CN106504284B (zh) * 2016-10-24 2019-04-12 成都通甲优博科技有限责任公司 一种基于立体匹配与结构光相结合的深度图获取方法
CN106780726A (zh) * 2016-12-23 2017-05-31 陕西科技大学 融合rgb‑d相机和彩色光度立体法的动态非刚体三维数字化方法
CN107677216B (zh) * 2017-09-06 2019-10-29 西安交通大学 一种基于光度立体视觉的多个磨粒三维形貌同步获取方法
US10529086B2 (en) * 2017-11-22 2020-01-07 Futurewei Technologies, Inc. Three-dimensional (3D) reconstructions of dynamic scenes using a reconfigurable hybrid imaging system
CN109978984A (zh) * 2017-12-27 2019-07-05 Tcl集团股份有限公司 人脸三维重建方法及终端设备
CN108520537B (zh) * 2018-03-29 2020-02-18 电子科技大学 一种基于光度视差的双目深度获取方法
CN109682814B (zh) * 2019-01-02 2021-03-23 华中农业大学 一种用tof深度相机修正空间频域成像中组织体表面光照度的方法
CN110264573B (zh) * 2019-05-31 2022-02-18 中国科学院深圳先进技术研究院 基于结构光的三维重建方法、装置、终端设备及存储介质
CN110686599B (zh) * 2019-10-31 2020-07-03 中国科学院自动化研究所 基于彩色格雷码结构光的三维测量方法、***、装置
CN111009007B (zh) * 2019-11-20 2023-07-14 广州光达创新科技有限公司 一种指部多特征全面三维重建方法

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107228625A (zh) * 2017-06-01 2017-10-03 深度创新科技(深圳)有限公司 三维重建方法、装置及设备
CN110879947A (zh) * 2018-09-06 2020-03-13 山东理工大学 一种基于一次投影结构光平行条纹图案的实时人脸三维测量方法

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LI MEIYING, LIU JIN, YANG HAIMA, SONG WANQING, YU ZIHAO: "Structured Light 3D Reconstruction System Based on a Stereo Calibration Plate", SYMMETRY, vol. 12, no. 5, pages 772, XP055890359, DOI: 10.3390/sym12050772 *
LIU WENJING: "Underwater 3D Reconstruction of Fusing Photometric Stereo and Structured Light", MASTER THESIS, TIANJIN POLYTECHNIC UNIVERSITY, CN, 15 May 2019 (2019-05-15), CN , XP055890531, ISSN: 1674-0246 *
YANG JIE: "Fusing Depth and Multispectral Photometric Stereo for 3D Reconstruction and Its Applications in Underwater Environment", MASTER THESIS, TIANJIN POLYTECHNIC UNIVERSITY, CN, 15 July 2016 (2016-07-15), CN , XP055890529, ISSN: 1674-0246 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115252992A (zh) * 2022-07-28 2022-11-01 北京大学第三医院(北京大学第三临床医学院) 基于结构光立体视觉的气管插管导航***
CN115252992B (zh) * 2022-07-28 2023-04-07 北京大学第三医院(北京大学第三临床医学院) 基于结构光立体视觉的气管插管导航***
WO2024093282A1 (zh) * 2022-10-31 2024-05-10 华为技术有限公司 一种图像处理方法、相关设备及结构光***

Also Published As

Publication number Publication date
US20230298189A1 (en) 2023-09-21
CN111951376A (zh) 2020-11-17
CN111951376B (zh) 2023-04-07

Similar Documents

Publication Publication Date Title
WO2022021680A1 (zh) 融合结构光和光度学的三维对象重建方法及终端设备
US10008005B2 (en) Measurement system and method for measuring multi-dimensions
CN101697233B (zh) 一种基于结构光的三维物体表面重建方法
Zhu et al. Reliability fusion of time-of-flight depth and stereo geometry for high quality depth maps
WO2022179259A1 (zh) 一种偏振相位偏折测量方法和装置
US20120176478A1 (en) Forming range maps using periodic illumination patterns
US20120176380A1 (en) Forming 3d models using periodic illumination patterns
US11042973B2 (en) Method and device for three-dimensional reconstruction
CN206583415U (zh) 确定反射表面的均匀度的***、表面分析设备和***
WO2022021678A1 (zh) 三维对象重建方法及终端设备
CN112053432A (zh) 一种基于结构光与偏振的双目视觉三维重建方法
CN115345822A (zh) 一种面向航空复杂零件的面结构光自动化三维检测方法
WO2023000595A1 (zh) 一种基于曲面屏的相位偏折测量方法、***及终端
Weinmann et al. Multi-view normal field integration for 3d reconstruction of mirroring objects
US9147279B1 (en) Systems and methods for merging textures
Garrido-Jurado et al. Simultaneous reconstruction and calibration for multi-view structured light scanning
US20180165821A1 (en) Devices, systems, and methods for reconstructing the three-dimensional shapes of objects
CN110675440A (zh) 三维深度数据的置信度评估方法、装置和计算机设备
Tran et al. A Structured Light RGB‐D Camera System for Accurate Depth Measurement
CN113793387A (zh) 单目散斑结构光***的标定方法、装置及终端
CN116295113A (zh) 一种融合条纹投影的偏振三维成像方法
WO2023010565A1 (zh) 单目散斑结构光***的标定方法、装置及终端
Angelopoulou et al. Evaluating the effect of diffuse light on photometric stereo reconstruction
US8948498B1 (en) Systems and methods to transform a colored point cloud to a 3D textured mesh
Grochulla et al. Combining photometric normals and multi-view stereo for 3d reconstruction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20946993

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20946993

Country of ref document: EP

Kind code of ref document: A1