WO2023207756A1 - Image reconstruction method, apparatus and device - Google Patents

Image reconstruction method, apparatus and device

Info

Publication number
WO2023207756A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
pixel
point
pixel point
target
Prior art date
Application number
PCT/CN2023/089562
Other languages
English (en)
French (fr)
Inventor
张华林
龙学雄
常旭
盛鸿
李耿磊
习嘉豪
孙元栋
Original Assignee
杭州海康机器人股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康机器人股份有限公司
Publication of WO2023207756A1 publication Critical patent/WO2023207756A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Definitions

  • the present application relates to the field of image processing technology, and in particular to an image reconstruction method, apparatus and equipment.
  • the three-dimensional imaging equipment can include a laser and a camera.
  • the laser is used to project line structured light onto the surface of the measured object (i.e., the measured target), and the camera is used to photograph the measured object to obtain an image with line structured light, that is, a line structured light image.
  • the light strip center line of the line structured light image can be obtained, and the light strip center line can be converted according to the pre-calibrated sensor parameters to obtain the spatial coordinates (i.e., three-dimensional coordinates) of the measured object at the current position, thereby achieving three-dimensional reconstruction of the measured object.
  • An embodiment of the present application provides an image reconstruction method, which is applied to a three-dimensional imaging device.
  • the three-dimensional imaging device includes a first camera, a second camera, and a multi-line laser.
  • the method includes:
  • when the multi-line laser projects N lines of structured light onto the measured object, the first original image of the measured object captured by the first camera is acquired, and the second original image of the measured object captured by the second camera is acquired, where N is a positive integer greater than 1;
  • a first target image corresponding to the first original image and a second target image corresponding to the second original image are determined, wherein the first target image includes N first light strip areas corresponding to the N lines of structured light, and the second target image includes N second light strip areas corresponding to the N lines of structured light; the first light strip center line corresponding to each first light strip area and the second light strip center line corresponding to each second light strip area are determined;
  • a plurality of key point pairs are determined based on all first light strip center lines and all second light strip center lines, wherein each key point pair includes a first pixel point in a first light strip center line and a second pixel point in a second light strip center line, the first pixel point and the second pixel point being pixel points corresponding to the same position point on the measured object; the three-dimensional point corresponding to each key point pair is determined based on the key point pair and the camera calibration parameters;
  • a three-dimensional reconstructed image of the measured object is generated based on the three-dimensional points corresponding to the plurality of key point pairs.
  • Embodiments of the present application provide an image reconstruction device for use in three-dimensional imaging equipment.
  • the three-dimensional imaging equipment includes a first camera, a second camera and a multi-line laser.
  • the device includes:
  • an acquisition module configured to acquire the first original image of the measured object collected by the first camera when the multi-line laser projects N lines of structured light onto the measured object, and to acquire the second original image of the measured object collected by the second camera, wherein N is a positive integer greater than 1;
  • a determining module configured to: determine a first target image corresponding to the first original image and a second target image corresponding to the second original image, wherein the first target image includes N first light strip areas corresponding to the N lines of structured light, and the second target image includes N second light strip areas corresponding to the N lines of structured light; determine the first light strip center line corresponding to each first light strip area in the first target image and the second light strip center line corresponding to each second light strip area in the second target image; determine a plurality of key point pairs based on all first light strip center lines and all second light strip center lines, wherein each key point pair includes a first pixel point in a first light strip center line and a second pixel point in a second light strip center line, the first pixel point and the second pixel point being pixel points corresponding to the same position point on the measured object; and determine the three-dimensional point corresponding to each key point pair based on the key point pair and the camera calibration parameters;
  • a generation module configured to generate a three-dimensional reconstructed image of the measured object based on the three-dimensional points corresponding to the plurality of key point pairs.
  • An embodiment of the present application provides a three-dimensional imaging device, including: a processor and a machine-readable storage medium.
  • the machine-readable storage medium stores machine-executable instructions that can be executed by the processor; the processor is configured to execute the machine-executable instructions to implement the image reconstruction method disclosed in the above embodiments of this application.
  • the multi-line laser projects N lines of structured light onto the measured object each time, where N is a positive integer greater than 1, such as 7, 11 or 15, so that the line structured light image collected by the camera each time includes N light strip center lines; such an image is equivalent to line structured light images of N positions of the measured object, thereby reducing the number of image acquisitions and shortening the three-dimensional reconstruction time.
  • when the multi-line laser scans the surface of the measured object, the entire contour data of the measured object can be obtained quickly and its three-dimensional image information output, improving both detection accuracy and detection speed.
  • the three-dimensional information of the measured object, that is, its depth information, can be obtained using the triangulation method, so that depth information for the multi-line laser is obtained from a single acquired image; this can increase single-scan efficiency by a factor of N and quickly achieve a full-width scan of the entire contour of the measured object.
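The triangulation relation behind this efficiency claim can be sketched as follows for a rectified binocular pair; the focal length, baseline and disparity values are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of binocular triangulation for a rectified stereo pair:
# depth Z = f * B / d, where f is the focal length in pixels, B the baseline
# between the two cameras, and d the disparity (column difference) between
# the matched light-strip pixels. All numbers below are illustrative.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by both cameras of a rectified pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return f_px * baseline_m / disparity_px

# Example: f = 1400 px, B = 0.1 m, d = 70 px gives Z = 2.0 m.
z = depth_from_disparity(1400.0, 0.1, 70.0)
```

Each of the N laser lines yields such depths along its whole strip in a single exposure, which is where the N-fold single-scan speed-up comes from.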
  • FIGS. 1A and 1B are schematic structural diagrams of a three-dimensional imaging device in an embodiment of the present application.
  • FIGS. 1C and 1D are schematic diagrams of multi-line laser light in an embodiment of the present application.
  • Figure 2 is a schematic flowchart of an image reconstruction method in an embodiment of the present application.
  • Figure 3 is a schematic flowchart of an image reconstruction method in an embodiment of the present application.
  • Figure 4 is a schematic diagram of the principle of the triangulation method in an embodiment of the present application.
  • Figure 5 is a schematic flowchart of an image reconstruction method in an embodiment of the present application.
  • Figure 6 is a schematic structural diagram of an image reconstruction device in an embodiment of the present application.
  • although the terms first, second, third, etc. may be used to describe various information in the embodiments of this application, the information should not be limited to these terms; these terms are only used to distinguish information of the same type from each other.
  • for example, first information may also be called second information and, similarly, second information may also be called first information.
  • the word "if" as used herein may be interpreted as "when", "while" or "in response to determining", depending on the context.
  • in order to obtain a three-dimensional reconstructed image, related technologies may use the surface structured light projection method, the binocular speckle method, the TOF (Time of Flight) method, or the single-line laser profile scanning method.
  • DLP: Digital Light Processing
  • LCD: Liquid Crystal Display
  • LED: Light-Emitting Diode
  • for the surface structured light projection method, the projection volume is large and the energy is divergent; in large field-of-view, long-distance scenarios, the large volume and high power consumption are not conducive to three-dimensional positioning applications.
  • for the binocular speckle method, the detection accuracy is low and the edge profile is poor, which is not conducive to contour scanning and three-dimensional positioning applications.
  • for the TOF method, the detection accuracy is limited by camera resolution and is at the cm level, which does not meet the requirements of automated high-precision positioning applications.
  • for the single-line laser profile scanning method, which uses a single-line laser to scan the depth information of an object, the scanning speed is slow and the stability is poor, which does not meet the positioning requirements for three-dimensional reconstruction.
  • a laser can be used to project line structured light onto the surface of the measured object, and a camera can be used to photograph the measured object to obtain an image with line structured light, that is, a line structured light image.
  • the spatial coordinates (i.e., three-dimensional coordinates) of the measured object at its current position can be obtained based on the line structured light image, thereby achieving three-dimensional reconstruction of the measured object.
  • the laser projects line structured light to different positions of the measured object, and each position corresponds to one line structured light image. Since the camera only collects the line structured light image of one position at a time, the camera needs to collect images many times to complete the three-dimensional reconstruction; that is, the reconstruction time is long, the scanning speed is slow, and the stability is poor, which does not meet the positioning requirements of three-dimensional reconstruction.
  • the embodiment of the present application proposes a three-dimensional imaging method of multi-line laser scanning, which can use triangulation method to obtain the depth information of the measured object.
  • the multi-line laser is used to form an optical scan on the surface of the measured object, which can quickly obtain the entire contour data of the measured object and output its three-dimensional image information.
  • the three-dimensional imaging method of multi-line laser scanning can be applied to the fields of machine vision and industrial automation, and can be used to achieve three-dimensional measurement and robot positioning. There are no restrictions on this application scenario.
  • the depth information of the measured object can be obtained using the triangulation method.
  • the single scanning efficiency can be increased by 10-20 times.
  • a full-width scan of the entire contour of the object being measured is achieved.
  • the multi-line laser scanning three-dimensional imaging method in this embodiment is a high-precision, low-cost, small-volume, and low-power consumption three-dimensional scanning imaging method with faster detection speed and higher detection accuracy.
  • a three-dimensional imaging method of multi-line laser scanning is proposed, which can be applied to three-dimensional imaging equipment.
  • the three-dimensional imaging device can be any device with a three-dimensional imaging function, for example, any device in the field of machine vision or industrial automation, and the type of the three-dimensional imaging device is not limited.
  • the three-dimensional imaging device may include but is not limited to: a left camera, a right camera, an auxiliary camera, a processor, a laser, a galvanometer motor, and a galvanometer drive.
  • the three-dimensional imaging device may include but is not limited to: a left camera, a right camera, a processor, a laser, a galvanometer motor and a galvanometer drive, that is, there is no auxiliary camera.
  • the three-dimensional imaging device may include but is not limited to: an image acquisition device 100, an optical-mechanical scanning device 200, and a multi-line laser emission device.
  • the image acquisition device 100 may include a left camera 101 and a right camera 102, and the image acquisition device 100 may also include a left camera 101, a right camera 102 and an auxiliary camera 103.
  • the left camera 101 and the right camera 102 binocularly match the multi-line laser lines and obtain depth information based on triangulation.
  • the auxiliary camera 103 may or may not participate in reconstruction.
  • the left camera 101 and the right camera 102 can use black and white cameras.
  • a narrow-band filter matched to the laser wavelength is added to the front end of the camera so that only light within the laser wavelength range passes through, that is, only the laser light reflected by the surface of the measured object is received; acquiring the reflected image of the laser lines in this way improves contrast and reduces ambient light interference.
  • the left camera 101 and the right camera 102 can be respectively distributed on both sides of the mechanical galvanometer and installed symmetrically on the left and right.
  • the auxiliary camera 103 can be a black and white camera or an RGB camera.
  • the installation position of the auxiliary camera 103 is as close as possible to the exit optical axis of the mechanical galvanometer, thereby ensuring a short baseline and making its field of view basically coincide with the laser scanning field of view; in this way, the auxiliary camera 103 can capture all the laser lines completely.
  • when the auxiliary camera 103 is an RGB camera, the laser can be turned off to capture a color image of the surface of the measured object, thereby realizing the output of RGBD images.
  • the optical-mechanical scanning device 200 may include a mechanical galvanometer, that is, a mechanical galvanometer may be used to implement the scanning function; the mechanical galvanometer may include three parts: a galvanometer motor, a galvanometer driver, and a reflector, of which only the galvanometer motor and galvanometer driver are shown in FIG. 1A.
  • the mechanical galvanometer can be a mechanical galvanometer with high repeatability.
  • the reflector has a visible light reflective film, which can reflect the laser line and change the laser line emission angle.
  • the multi-line laser emitting device 300 may include a laser, and the laser may be a multi-line laser, that is, a laser that emits multiple laser lines simultaneously.
  • the laser can use a multi-line laser module, which is mainly composed of a laser diode, a collimating lens, and a multi-line DOE (Diffractive Optical Element, diffractive optical element).
  • the laser diode uses a high-power red laser diode, and the wavelength can be 635nm, or 660nm, or other wavelengths.
  • the multi-line DOE can simultaneously emit 10, 11, 25, or another number of laser lines; the number of lines is not restricted.
  • the image processing device 400 may include a processor, such as a CPU or a GPU.
  • the image processing device 400 is connected to the image acquisition device 100, the optical machine scanning device 200, and the multi-line laser emitting device 300 respectively.
  • the image processing device 400 can activate the multi-line laser emitting device 300, and the multi-line laser emitting device 300 emits multiple line lasers (i.e., line structured light).
  • the multiple line lasers are reflected by the mechanical galvanometer and then illuminated on the surface of the object to be measured.
  • the image processing device 400 controls the mechanical galvanometer to start scanning. Each time an angle is scanned, the image processing device 400 can receive angle feedback information from the mechanical galvanometer, and trigger the image acquisition device 100 to collect a multi-line laser image of the surface of the measured object based on the angle feedback information.
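The control flow just described can be sketched as a simple loop; the device interfaces below are hypothetical stand-ins for the galvanometer and camera drivers, which the patent does not specify.

```python
# Hypothetical sketch of the scanning loop: for each galvanometer angle, the
# processor receives angle feedback and triggers all cameras to capture one
# multi-line laser image. Camera drivers are modelled as plain callables.

def scan(galvo_angles, cameras, on_frames):
    """Step through the angles, capturing one frame per camera at each angle."""
    for angle in galvo_angles:
        # angle feedback received -> trigger a synchronized capture
        frames = {name: capture(angle) for name, capture in cameras.items()}
        on_frames(angle, frames)

captured = []
cameras = {"left": lambda a: f"L@{a}",
           "right": lambda a: f"R@{a}",
           "aux": lambda a: f"X@{a}"}
scan([10.0, 12.0], cameras, lambda angle, frames: captured.append((angle, frames)))
```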
  • as an example in the following description, the image processing device 400 is a processor, the image acquisition device 100 includes a left camera, a right camera and an auxiliary camera, the optical-mechanical scanning device 200 is a mechanical galvanometer, and the multi-line laser emitting device 300 is a laser.
  • the laser can emit multiple lines of laser light (7 lines of laser light are used as an example in Figure 1C). These line laser lights are reflected by the mechanical galvanometer and then illuminated on the surface of the object being measured.
  • the initial angle of the mechanical galvanometer is angle A.
  • the left camera collects the line structured light image A1 of the measured object
  • the right camera collects the line structured light image A2 of the measured object
  • the auxiliary camera collects the line structured light image A3 of the measured object.
  • the angle of the mechanical galvanometer is angle B.
  • the left camera collects the line structured light image B1 of the measured object
  • the right camera collects the line structured light image B2 of the measured object
  • the auxiliary camera collects the line structured light image B3 of the measured object.
  • based on the line structured light image A1, the line structured light image A2 and the line structured light image A3, the processor can determine the three-dimensional points at angle A (i.e., a three-dimensional point cloud); based on the line structured light image B1, the line structured light image B2 and the line structured light image B3, the processor can determine the three-dimensional points at angle B; and by analogy, the three-dimensional points at all angles can be obtained. On this basis, the three-dimensional points at all angles can be spliced to obtain the complete three-dimensional points of the surface of the measured object, that is, a complete three-dimensional reconstructed image of the surface of the measured object.
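Splicing the per-angle point clouds amounts to concatenating them in a common coordinate frame; a minimal numpy sketch follows, with illustrative point arrays, assuming each per-angle cloud has already been expressed in the same frame.

```python
import numpy as np

# Minimal sketch: merge ("splice") the 3-D points recovered at each
# galvanometer angle into one cloud. Assumes all clouds share one coordinate
# frame; the point values below are illustrative.

def splice_point_clouds(clouds):
    """Concatenate per-angle (M_i, 3) point arrays into one (sum M_i, 3) cloud."""
    return np.vstack([np.asarray(c, dtype=float).reshape(-1, 3) for c in clouds])

cloud_a = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.1]])  # points found at angle A
cloud_b = np.array([[0.0, 0.1, 1.2]])                   # points found at angle B
full = splice_point_clouds([cloud_a, cloud_b])          # complete surface cloud
```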
  • the processor can also turn off the laser, control the auxiliary camera to collect the RGB image, and then align the three-dimensional reconstructed image and the RGB image to output an RGBD image.
  • the manner of obtaining the RGBD image is not restricted in this embodiment.
  • the fixing bracket 500 provides fixing and heat dissipation, and is made of aluminum or other materials.
  • An integrated fixed design is adopted for the camera and mechanical galvanometer to ensure that the relative positions of the camera and mechanical galvanometer remain unchanged.
  • FIG. 2 is a schematic flowchart of an image reconstruction method in an embodiment of the present application.
  • the method may include steps 201-206.
  • Step 201 When the multi-line laser projects N lines of structured light (i.e., laser lines) onto the measured object (i.e., the measured target), obtain the first original image of the measured object collected by the first camera, and obtain the second original image of the measured object collected by the second camera.
  • N can be a positive integer greater than 1.
  • the image reconstruction method can be applied to a three-dimensional imaging device.
  • the three-dimensional imaging device can include a first camera, a second camera, and a multi-line laser.
  • the three-dimensional imaging device can also include a processor and a mechanical galvanometer.
  • the first camera may be a left camera and the second camera may be a right camera, or the first camera may be a right camera and the second camera may be a left camera.
  • the multi-line laser is the laser in the above embodiment.
  • the angle of the mechanical galvanometer is angle A
  • N lines of structured light are projected onto the measured object through the multi-line laser.
  • the left camera collects the line structured light image A1 of the measured object
  • the right camera collects the line structured light image A2 of the measured object
  • the processor can obtain the first original image (such as the line structured light image A1) and the second original image (such as the line structured light image A2), and perform subsequent processing based on the first original image and the second original image.
  • the angle of the mechanical galvanometer is angle B
  • N lines of structured light are projected onto the measured object through the multi-line laser.
  • the left camera collects the line structured light image B1 of the measured object
  • the right camera collects the line structured light image B2 of the measured object
  • the processor acquires the first original image (line structured light image B1) and the second original image (line structured light image B2), and performs subsequent processing based on the first original image and the second original image.
  • the processor can obtain the first original image of the measured object captured by the first camera and the second original image of the measured object captured by the second camera.
  • the first original image may include N first light strip areas corresponding to the N lines of structured light, the N first light strip areas corresponding one-to-one to the N lines of structured light.
  • the second original image may include N second light strip areas corresponding to the N lines of structured light, the N second light strip areas corresponding one-to-one to the N lines of structured light.
  • Step 202 Determine the first target image corresponding to the first original image, and determine the second target image corresponding to the second original image, where the first target image includes N first light strip areas corresponding to N lines of structured light, The second target image includes N second light strip areas corresponding to N lines of structured light.
  • the first original image can be directly determined as the first target image
  • the second original image can be determined as the second target image.
  • perform binocular correction on the first original image and the second original image to obtain a first target image corresponding to the first original image and a second target image corresponding to the second original image.
  • the binocular correction is used to make the same position point on the measured object have the same pixel height in the first target image and the second target image, and there is no limit to this binocular correction process.
  • Step 203 Determine the first light strip center line corresponding to each first light strip area in the first target image, and determine the second light strip center line corresponding to each second light strip area in the second target image.
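The patent does not fix a particular centre-line algorithm; one common choice is the grey-centroid method, which finds a sub-pixel centre per image row. A sketch with a synthetic stripe (the image data and threshold are illustrative):

```python
# Grey-centroid sketch for light-strip centre-line extraction (a common
# method; the patent does not specify the algorithm). For each image row
# whose peak intensity exceeds a threshold, the intensity-weighted mean
# column is taken as the sub-pixel centre of the light strip in that row.

def strip_centerline(strip, threshold=10.0):
    """Return (row, sub-pixel column) centres for rows containing a strip."""
    centers = []
    for r, row in enumerate(strip):
        if not row or max(row) < threshold:
            continue  # no light strip in this row
        total = float(sum(row))
        centers.append((r, sum(c * v for c, v in enumerate(row)) / total))
    return centers

# Synthetic 3-row image: the stripe intensity is symmetric about column 2.
img = [[0, 50, 100, 50, 0],
       [0, 50, 100, 50, 0],
       [0,  0,   0,  0, 0]]
line = strip_centerline(img)  # [(0, 2.0), (1, 2.0)]
```

In practice each of the N strips would first be segmented into its own area, and the centroid run per area.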
  • Step 204 Determine multiple key point pairs based on all first light strip center lines and all second light strip center lines.
  • each key point pair includes a first pixel point in a first light strip center line and a second pixel point in a second light strip center line, and the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object.
  • for each line of structured light, the target first light strip center line and the target second light strip center line corresponding to that line of structured light can be determined from all first light strip center lines and all second light strip center lines.
  • for each first pixel point in the target first light strip center line, project the first pixel point to the second target image to obtain the projected pixel point corresponding to the first pixel point, and select the second pixel point corresponding to the projected pixel point from the target second light strip center line.
  • a key point pair is generated based on the first pixel point and the second pixel point, that is, the key point pair includes the first pixel point and the second pixel point.
  • selecting the second pixel point corresponding to the projected pixel point from the target second light strip center line may include but is not limited to: determining, from the target second light strip center line, the pixel points with the same pixel height as the projected pixel point; if one such pixel point is determined, selecting that pixel point as the second pixel point; if at least two such pixel points are determined, determining the reprojection error between each of them and the projected pixel point, and selecting the pixel point with the minimum reprojection error as the second pixel point.
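The selection rule above can be sketched directly; the candidate pixels and the projected point are illustrative, and the reprojection error is taken here as the column distance, an assumption consistent with row-aligned rectified images.

```python
# Sketch of the second-pixel selection rule from the text: among centre-line
# pixels of the second image at the same pixel height (row) as the projected
# pixel, keep the single candidate if there is one, otherwise keep the
# candidate with the minimum reprojection error (modelled here as column
# distance, an assumption for rectified, row-aligned images).

def select_second_pixel(projected, candidates):
    """projected: (row, col); candidates: list of (row, col) centre-line pixels."""
    same_row = [c for c in candidates if c[0] == projected[0]]
    if not same_row:
        return None                # no centre-line pixel at this height
    if len(same_row) == 1:
        return same_row[0]         # unique match
    return min(same_row, key=lambda c: abs(c[1] - projected[1]))

candidates = [(10, 40.0), (10, 55.0), (11, 41.0)]
best = select_second_pixel((10, 42.0), candidates)  # (10, 40.0): error 2 < 13
```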
  • projecting the first pixel point to the second target image to obtain the projected pixel point corresponding to the first pixel point may include but is not limited to: obtaining the first calibration equation and the second calibration equation corresponding to the line structured light; wherein the first calibration equation represents the functional relationship between pixel points in the first target image and three-dimensional reconstruction points, and the second calibration equation represents the functional relationship between pixel points in the second target image and three-dimensional reconstruction points.
  • the first pixel point is converted into the target three-dimensional reconstruction point based on the first calibration equation; the target three-dimensional reconstruction point is converted into the projection pixel point based on the second calibration equation.
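One way to realise the two calibration equations is to intersect the first pixel's viewing ray with the calibrated laser light plane (first equation) and then project the resulting 3-D point into the second camera (second equation). The intrinsics, extrinsics and plane below are illustrative assumptions, not calibration values from the patent.

```python
import numpy as np

# Illustrative sketch of the two calibration equations. First equation:
# pixel in camera 1 -> 3-D point, by intersecting the viewing ray with the
# laser light plane n.X + d = 0. Second equation: 3-D point -> pixel in
# camera 2 via the pinhole model. All numeric values are assumptions.

K1 = np.array([[1000.0, 0.0, 320.0],
               [0.0, 1000.0, 240.0],
               [0.0,    0.0,   1.0]])
K2 = K1.copy()
R = np.eye(3)                        # camera-2 rotation w.r.t. camera 1
t = np.array([-0.1, 0.0, 0.0])       # 10 cm baseline along x
plane_n = np.array([0.0, 0.0, 1.0])  # laser plane z = 2 in the camera-1 frame
plane_d = -2.0

def pixel_to_plane_point(u, v):
    """First calibration equation: pixel (u, v) -> 3-D point on the plane."""
    ray = np.linalg.inv(K1) @ np.array([u, v, 1.0])
    s = -plane_d / (plane_n @ ray)   # solve n.(s*ray) + d = 0 for scale s
    return s * ray

def plane_point_to_pixel2(X):
    """Second calibration equation: 3-D point -> pixel in camera 2."""
    x = K2 @ (R @ X + t)
    return x[:2] / x[2]

X = pixel_to_plane_point(320.0, 240.0)  # principal ray meets plane at (0, 0, 2)
uv2 = plane_point_to_pixel2(X)          # projected pixel in the second image
```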
  • Step 205 Determine the three-dimensional point corresponding to the key point pair based on the key point pair and the camera calibration parameters.
  • the camera calibration parameters may include camera intrinsic parameters of the first camera, camera intrinsic parameters of the second camera, and camera extrinsic parameters between the first camera and the second camera.
  • the distortion of the first pixel point can be corrected through the camera intrinsic parameters of the first camera, and the distortion-corrected pixel converted into first homogeneous coordinates; the distortion of the second pixel point can be corrected through the camera intrinsic parameters of the second camera, and the distortion-corrected pixel converted into second homogeneous coordinates.
  • based on these homogeneous coordinates, triangulation is used to determine the three-dimensional point corresponding to the key point pair; the triangulation method is not restricted.
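Since the patent leaves the triangulation method open, one standard choice is linear (DLT) triangulation from the two projection matrices. The matrices and the pixel pair below are illustrative values, consistent with a hypothetical 10 cm-baseline rectified pair.

```python
import numpy as np

# Linear (DLT) triangulation sketch, one standard method (the patent does not
# mandate a particular one). Each pixel observation contributes two rows to a
# homogeneous system A X = 0, which is solved by SVD. Values are illustrative.

def triangulate(P1, P2, uv1, uv2):
    """Recover the 3-D point observed at uv1 (camera 1) and uv2 (camera 2)."""
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]      # null vector of A (last right singular vector)
    return X[:3] / X[3]

K = np.array([[1000.0, 0.0, 0.0], [0.0, 1000.0, 0.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # 10 cm baseline
point = triangulate(P1, P2, (0.0, 0.0), (-50.0, 0.0))  # expect (0, 0, 2)
```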
  • Step 206 Generate a three-dimensional reconstructed image of the measured object based on the three-dimensional points corresponding to the multiple key point pairs.
  • the three-dimensional points corresponding to multiple key point pairs can be determined based on the first original image and the second original image corresponding to angle A.
  • the angle of the mechanical galvanometer is angle B
  • the three-dimensional points corresponding to multiple key point pairs can be determined based on the first original image and the second original image corresponding to angle B.
  • after the three-dimensional points at all angles are obtained, a three-dimensional reconstructed image can be generated, that is, a complete three-dimensional reconstructed image of the surface of the measured object.
  • the three-dimensional imaging device may also include a third camera, that is, an auxiliary camera.
  • the third original image of the measured object collected by the third camera can also be obtained (such as line structured light image A3, line structured light image B3, etc.), and determine the third target image corresponding to the third original image.
  • the third target image includes N third light strip areas corresponding to N lines of structured light. Determine the third light strip center line corresponding to each third light strip area in the third target image. For each line of structured light, the target third light strip center line corresponding to the line structured light is determined from all third light strip center lines.
  • for each first pixel point in the target first light strip center line, projecting the first pixel point to the second target image to obtain the projected pixel point corresponding to the first pixel point may also include: determining, from the target third light strip center line, a third pixel point with the same pixel height as the first pixel point; determining the target three-dimensional reconstruction point based on the first pixel point, the third pixel point and the camera calibration parameters; and converting the target three-dimensional reconstruction point into the projected pixel point based on the third calibration equation, wherein the third calibration equation represents the functional relationship between pixel points in the second target image and three-dimensional reconstruction points.
  • the multi-line laser projects N lines of structured light onto the measured object each time, where N is a positive integer greater than 1, such as 7, 11 or 15, so that the line structured light image collected by the camera each time includes N light strip center lines; such an image is equivalent to line structured light images of N positions of the measured object, thereby reducing the number of image acquisitions and shortening the three-dimensional reconstruction time.
  • when the multi-line laser scans the surface of the measured object, the entire contour data of the measured object can be obtained quickly and its three-dimensional image information output, improving both detection accuracy and detection speed.
  • the three-dimensional information of the measured object, that is, its depth information, can be obtained using the triangulation method, so that depth information for the multi-line laser is obtained from a single acquired image; this can increase single-scan efficiency by a factor of N and quickly achieve a full-width scan of the entire contour of the measured object.
  • the three-dimensional imaging device may include a first camera, a second camera, a processor, a multi-line laser and a mechanical galvanometer.
  • the first camera may be the left camera and the second camera the right camera; or the first camera may be the right camera and the second camera the left camera.
• the camera calibration parameters, first calibration equation, and second calibration equation corresponding to the three-dimensional imaging device can be obtained in advance, and the camera calibration parameters, the first calibration equation, and the second calibration equation are stored for the three-dimensional imaging device.
  • the camera calibration parameters may include camera intrinsic parameters of the first camera, camera intrinsic parameters of the second camera, and camera extrinsic parameters between the first camera and the second camera.
  • the camera intrinsic parameters of the first camera are parameters related to the characteristics of the first camera itself, such as focal length, pixel size, distortion coefficient, etc.
  • the camera intrinsic parameters of the second camera are parameters related to the second camera's own characteristics, such as focal length, pixel size, distortion coefficient, etc.
• the camera extrinsic parameters between the first camera and the second camera are parameters in the world coordinate system, such as the position and rotation direction of the first camera, the position and rotation direction of the second camera, and the positional relationship between the first camera and the second camera, such as the rotation matrix and translation matrix.
  • the camera intrinsic parameters of the first camera are inherent parameters of the first camera.
  • the camera intrinsic parameters of the first camera are already given when the first camera leaves the factory.
  • the camera intrinsic parameters of the second camera are inherent parameters of the second camera.
  • the camera intrinsic parameters of the second camera are already provided when the second camera leaves the factory.
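As a hedged illustration of how the intrinsic parameters named above are commonly organized (the disclosure does not prescribe this layout), the focal length and pixel size can be packed into an intrinsic matrix and the distortion coefficients into a vector; all numeric values below are assumptions, not calibration results:

```python
import numpy as np

# Assumed values for illustration only.
focal_length_mm = 8.0
pixel_size_mm = 0.01                     # 10 um square pixels
cx, cy = 320.0, 240.0                    # principal point, in pixels

f_px = focal_length_mm / pixel_size_mm   # focal length expressed in pixels
K = np.array([[f_px, 0.0, cx],
              [0.0, f_px, cy],
              [0.0, 0.0, 1.0]])          # camera intrinsic matrix
dist = np.array([-0.1, 0.01, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3
print(K)
```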
• To obtain the camera extrinsic parameters between the first camera and the second camera (such as the rotation matrix and translation matrix), multiple calibration points can be deployed in the target scene, a first calibration image of the target scene is collected through the first camera, where the first calibration image includes the plurality of calibration points, and a second calibration image of the target scene is collected through the second camera, where the second calibration image includes the plurality of calibration points. Based on the calibration points in the first calibration image and the second calibration image, the camera extrinsic parameters between the first camera and the second camera can be determined; the determination process of these camera extrinsic parameters is not restricted.
• The first calibration equation represents the functional relationship between pixel points in the image collected by the first camera (denoted as image s1) and three-dimensional reconstruction points, and the second calibration equation represents the functional relationship between pixel points in the image collected by the second camera (denoted as image s2) and three-dimensional reconstruction points. Assuming that the multi-line laser projects N lines of structured light onto the measured object and the mechanical galvanometer takes M angles in total, N*M first calibration equations and N*M second calibration equations need to be obtained. Both the first calibration equation and the second calibration equation may be light plane equations. The method of obtaining the first calibration equation and the second calibration equation may include steps S11-S15.
  • Step S11 For each angle of the mechanical galvanometer, when the multi-line laser projects N lines of structured light onto the white background plate, the image s1 collected by the first camera is acquired, and the image s2 collected by the second camera is acquired.
• the image s1 includes N first light strip areas corresponding one-to-one to the N lines of structured light, and the image s2 includes N second light strip areas corresponding one-to-one to the N lines of structured light.
• Step S12 Determine the first light strip center line corresponding to each first light strip area in the image s1, and determine the second light strip center line corresponding to each second light strip area in the image s2, that is, obtain the N first light strip center lines corresponding to the N lines of structured light and the N second light strip center lines corresponding to the N lines of structured light.
• Step S13 Determine multiple key point pairs based on all first light strip center lines and all second light strip center lines. Each key point pair includes a first center point in a first light strip center line and a second center point in a second light strip center line, and the first center point and the second center point are pixel points corresponding to the same position point on the white background plate.
  • image s1 includes a first light strip area 1 corresponding to line structured light 1 and a first light strip area 2 corresponding to line structured light 2.
  • the image s2 includes a second light strip area 1 corresponding to the line structured light 1 and a second light strip area 2 corresponding to the line structured light 2 .
• the first light strip area 1 corresponds to first light strip center line 1, the first light strip area 2 corresponds to first light strip center line 2, the second light strip area 1 corresponds to second light strip center line 1, and the second light strip area 2 corresponds to second light strip center line 2.
• Since the object to be measured is a white background plate, the light strip areas are relatively clear and no noise is generated. Therefore, when first light strip center line 1 is determined based on first light strip area 1, each row of first light strip center line 1 has only one center point; similarly, each row of second light strip center line 1 has only one center point. On this basis, the first-row center point of first light strip center line 1 and the first-row center point of second light strip center line 1 form key point pair 11, the second-row center point of first light strip center line 1 and the second-row center point of second light strip center line 1 form key point pair 12, and so on.
• Similarly, the first-row center point of first light strip center line 2 and the first-row center point of second light strip center line 2 form key point pair 21, the second-row center point of first light strip center line 2 and the second-row center point of second light strip center line 2 form key point pair 22, and so on.
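The row-wise pairing described above can be sketched as follows; the function name and the dictionary representation of a center line (each image row mapped to the column of its single center point) are illustrative assumptions, not part of the disclosure:

```python
# A center line is represented here as {image_row: center_column}.

def form_key_point_pairs(center_line_s1, center_line_s2):
    """Pair same-row center points from image s1 and image s2."""
    pairs = []
    for row, col1 in sorted(center_line_s1.items()):
        if row in center_line_s2:          # same pixel height in both images
            pairs.append(((col1, row), (center_line_s2[row], row)))
    return pairs

# First light strip center line 1 and second light strip center line 1
line1 = {10: 100.2, 11: 100.6, 12: 101.1}
line2 = {10: 80.4, 11: 80.9, 13: 81.5}
print(form_key_point_pairs(line1, line2))
```

Rows present in only one image (here rows 12 and 13) yield no key point pair.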
• Step S14 For each key point pair, determine the three-dimensional point corresponding to the key point pair based on the key point pair and the camera calibration parameters. For example, a triangulation method can be used to determine the three-dimensional point corresponding to the key point pair; for this triangulation method, please refer to the subsequent embodiments, and it is not described in detail here.
  • Step S15 Based on the key point pair and the three-dimensional point corresponding to the key point pair, determine the first calibration equation and the second calibration equation corresponding to the angle of the mechanical galvanometer and the line structured light.
• For example, for angle A of the mechanical galvanometer, the first calibration equation and the second calibration equation corresponding to angle A and line structured light 1 are determined based on the multiple key point pairs between first light strip center line 1 and second light strip center line 1 (such as key point pair 11, key point pair 12, etc.) and the three-dimensional points corresponding to these key point pairs. Based on the first center points of the key point pairs and the corresponding three-dimensional points, the first calibration equation can be determined; the first calibration equation is used to represent the functional relationship between pixel points in image s1 (i.e., the center points of first light strip center line 1) and three-dimensional reconstruction points (i.e., the three-dimensional points corresponding to those center points). Based on the second center points of the key point pairs and the corresponding three-dimensional points, the second calibration equation can be determined; the second calibration equation is used to represent the functional relationship between pixel points in image s2 (i.e., the center points of second light strip center line 1) and three-dimensional reconstruction points (i.e., the three-dimensional points corresponding to those center points).
• Similarly, the first calibration equation and the second calibration equation corresponding to angle A and line structured light 2 can be obtained, the first calibration equation and the second calibration equation corresponding to angle B and line structured light 1 can be obtained, and so on. In this way, for each angle of the mechanical galvanometer, the first calibration equation and the second calibration equation corresponding to each line of structured light can be obtained, that is, N*M first calibration equations and N*M second calibration equations.
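Since the first and second calibration equations may be light plane equations, one hedged way to obtain such an equation from the three-dimensional points of the key point pairs is a least-squares plane fit; the function below is a sketch under that assumption, with illustrative data:

```python
import numpy as np

def fit_light_plane(points):
    """Fit a plane n.x + d = 0 to an Nx3 point set; returns (n, d), |n| = 1."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular
    # value of the centered point set.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    d = -float(n @ centroid)
    return n, d

# Illustrative points lying exactly on the plane 2x - z + 3 = 0
pts = [(0, 0, 3), (1, 0, 5), (0, 1, 3), (1, 1, 5), (2, 2, 7)]
n, d = fit_light_plane(pts)
print([round(float(n @ np.asarray(p, dtype=float)) + d, 6) for p in pts])
```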
• In addition, each line of structured light can be serially numbered according to its actual order. For example, in order from left to right (or from right to left), the N lines of structured light (i.e., laser lines) are numbered 1, 2, 3, ..., N, to facilitate matching and indexing of each line of structured light.
  • the image reconstruction method in this embodiment may include steps 301-309.
• Step 301 When the multi-line laser projects N lines of structured light onto the measured object, obtain the first original image of the measured object captured by the first camera, and obtain the second original image of the measured object captured by the second camera; the acquisition time of the first original image and the acquisition time of the second original image may be the same.
  • the first original image includes N first light strip areas corresponding to N line structured lights, such as the first light strip area 1 corresponding to line structured light 1, and the first light strip area 2 corresponding to line structured light 2. ,..., and so on.
  • the second original image includes N second light strip areas corresponding to N line structured lights, such as the second light strip area 1 corresponding to line structured light 1, the second light strip area 2 corresponding to line structured light 2,... , and so on.
  • Step 302 Perform binocular correction on the first original image and the second original image to obtain a first target image corresponding to the first original image and a second target image corresponding to the second original image.
• binocular correction is used to make the same position point on the measured object have the same pixel height in the first target image and the second target image. That is, for the same position point on the measured object, the first original image and the second original image are corrected to the same pixel height through binocular correction, so that matching is performed directly along a single row, which makes matching easier.
• Generally, matching corresponding points in two-dimensional space is very time-consuming. Epipolar constraints can be used to reduce the matching of corresponding points from a two-dimensional search to a one-dimensional search; that is, the function of binocular correction is to reduce the matching search range. The first original image and the second original image are row-aligned to obtain the first target image and the second target image, so that the epipolar lines of the first target image and the second target image lie exactly on the same horizontal line. Any point on the first target image and its corresponding point on the second target image then have the same row number, and only a one-dimensional search on that row is needed.
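A minimal numeric sketch of the rectified-stereo property established by binocular correction, assuming identical intrinsics and a purely horizontal baseline (all values are hypothetical): a 3D point projects to the same pixel row in both target images, so the corresponding-point search is one-dimensional along that row.

```python
import numpy as np

# Assumed intrinsics and baseline; not values from the disclosure.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
baseline = 0.1   # metres, first camera to second camera along x

def project(K, X, tx=0.0):
    """Project 3D point X with the camera translated tx along the x axis."""
    x = K @ (X - np.array([tx, 0.0, 0.0]))
    return x[:2] / x[2]            # (column, row)

X = np.array([0.05, -0.02, 1.5])   # a point on the measured object
u1 = project(K, X)                 # first target image
u2 = project(K, X, tx=baseline)    # second target image
print(u1[1], u2[1])                # identical rows after rectification
print(u1[0] - u2[0])               # disparity, measured along the row
```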
  • the first target image includes N first light strip areas corresponding to N lines of structured light, such as the first light strip area 1 corresponding to line structured light 1, and the first light strip area 2 corresponding to line structured light 2. ,..., and so on.
  • the second target image includes N second light strip areas corresponding to N line structured lights, such as the second light strip area 1 corresponding to line structured light 1, the second light strip area 2 corresponding to line structured light 2,... , and so on.
  • Step 303 Determine the first light strip center line corresponding to each first light strip area in the first target image, and determine the second light strip center line corresponding to each second light strip area in the second target image.
• each row of the first light strip area may include multiple pixel points, and the center point of the row can be selected from these pixel points; the center points of all rows of the first light strip area constitute the first light strip center line. Therefore, the first light strip center line 1 corresponding to the first light strip area 1, the first light strip center line 2 corresponding to the first light strip area 2, ..., and so on, are obtained.
  • the second light strip center line 1 corresponding to the second light strip area 1, the second light strip center line 2 corresponding to the second light strip area 2, ..., and so on are obtained.
• a light strip center line extraction algorithm can be used to determine the light strip center line corresponding to a light strip area. For example, Gaussian fitting, COG (Center of Gravity) or STEGER can be used to extract the center point of each row of the light strip area, thereby obtaining the light strip center line; this process is not limited in this embodiment.
• Assuming that the target image includes H rows, each first light strip center line includes H row center points, and each second light strip center line includes H row center points.
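As one hedged example of the COG approach mentioned above, the intensity-weighted centroid of each image row can serve as that row's center point; the threshold, function name, and sample row below are assumptions for illustration:

```python
import numpy as np

def cog_center_line(strip, threshold=10):
    """For each row of a light strip image, return the weighted centroid column."""
    centers = {}
    for row, intensities in enumerate(strip):
        w = np.asarray(intensities, dtype=float)
        w[w < threshold] = 0.0             # suppress background pixels
        if w.sum() > 0:
            cols = np.arange(len(w))
            centers[row] = float((cols * w).sum() / w.sum())
    return centers

# One row with a symmetric light strip peaked at column 3
strip = [[0, 20, 60, 100, 60, 20, 0]]
print(cog_center_line(strip))
```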
  • Step 304 For each line structured light, determine the target first light strip center line and the target second light strip center line corresponding to the line structured light from all first light strip center lines and all second light strip center lines.
• Step 305 For each line structured light, based on the first calibration equation corresponding to the line structured light and the target first light strip center line corresponding to the line structured light, each first pixel point in the target first light strip center line is converted into a target three-dimensional reconstruction point based on the first calibration equation.
• For each line of structured light, the angle of the mechanical galvanometer can be determined, that is, the angle at which the first original image and the second original image were collected. Based on this angle and this line of structured light, the first calibration equation corresponding to this angle and this line of structured light can be selected from the N*M first calibration equations. Since the first calibration equation represents the functional relationship between the pixel points in the first target image and the three-dimensional reconstruction points, each first pixel point in the target first light strip center line can be converted into a target three-dimensional reconstruction point.
• For example, each first pixel point in first light strip center line 1 is converted into a target three-dimensional reconstruction point based on the first calibration equation corresponding to line structured light 1; each first pixel point in first light strip center line 2 is converted into a target three-dimensional reconstruction point based on the first calibration equation corresponding to line structured light 2; and so on.
• Step 306 For the target three-dimensional reconstruction point corresponding to each first pixel point, the target three-dimensional reconstruction point can be converted into a projection pixel point in the second target image based on the second calibration equation corresponding to the line structured light. This projection pixel point is the projected pixel point corresponding to the first pixel point.
• Regarding the second calibration equation: for each line of structured light, the second calibration equation corresponding to that line of structured light is selected from the N*M second calibration equations. Since the second calibration equation represents the functional relationship between pixel points in the second target image and three-dimensional reconstruction points, after the first pixel point in the target first light strip center line has been converted into the target three-dimensional reconstruction point, the target three-dimensional reconstruction point can be converted into the projected pixel point based on the second calibration equation.
• For example, for each target three-dimensional reconstruction point corresponding to first light strip center line 1, the target three-dimensional reconstruction point is converted into a projection pixel point based on the second calibration equation corresponding to line structured light 1, so that the projection pixel point corresponding to each first pixel point in first light strip center line 1 is obtained. For each target three-dimensional reconstruction point corresponding to first light strip center line 2, the target three-dimensional reconstruction point is converted into a projection pixel point based on the second calibration equation corresponding to line structured light 2, so that the projection pixel point corresponding to each first pixel point in first light strip center line 2 is obtained, and so on.
  • the first pixel point can be projected to the second target image to obtain the projection pixel point corresponding to the first pixel point.
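Steps 305 and 306 can be sketched as follows, assuming the first calibration equation is a light plane intersected by the pixel's viewing ray, and modelling the second calibration equation simply as the second camera's projection; all matrices, the plane, and the pixel value are illustrative assumptions:

```python
import numpy as np

# Assumed intrinsics, baseline, and light plane (z = 1.5, i.e. n.X + d = 0).
K1 = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])
K2 = K1.copy()
t = np.array([0.1, 0.0, 0.0])                      # second camera offset
plane_n, plane_d = np.array([0.0, 0.0, 1.0]), -1.5

def lift_to_plane(pixel, K, n, d):
    """Intersect the camera ray through `pixel` with the plane n.X + d = 0."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    s = -d / (n @ ray)
    return s * ray

def project(X, K, t):
    x = K @ (X - t)
    return x[:2] / x[2]

first_pixel = (346.7, 229.3)
X = lift_to_plane(first_pixel, K1, plane_n, plane_d)   # target 3D point
proj = project(X, K2, t)                               # projected pixel
print(X, proj)
```

With rectified cameras the projected pixel keeps the first pixel's row, which is why step 307 only needs to search candidates at the same pixel height.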
  • Step 307 For each first pixel point, after obtaining the projection pixel point corresponding to the first pixel point, select the second pixel point corresponding to the projection pixel point from the target second light strip center line.
• For example, for line structured light 1, the target second light strip center line corresponding to line structured light 1 can be determined, that is, second light strip center line 1. For the projection pixel point corresponding to each first pixel point in first light strip center line 1, the second pixel point corresponding to the projection pixel point can be selected from second light strip center line 1.
• The first pixel point and the second pixel point can form a key point pair, that is, the key point pair includes the first pixel point in first light strip center line 1 and the second pixel point in second light strip center line 1, and the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object. The first pixel point is a pixel point in the first target image, and the second pixel point is a pixel point in the second target image.
• For example, selecting the second pixel point corresponding to the projection pixel point from the target second light strip center line may include: determining, from the target second light strip center line, the pixel points with the same pixel height as the projection pixel point; if one pixel point is determined, selecting that pixel point as the second pixel point; if at least two pixel points are determined, determining the reprojection error between each of the at least two pixel points and the projection pixel point, and selecting the pixel point corresponding to the minimum reprojection error as the second pixel point.
• Normally, one row of the second light strip center line includes one pixel point, and in that case this pixel point is selected as the second pixel point. However, a row of the second light strip center line may also include at least two pixel points: if there are noise points in the light strip area, at least two pixel points may exist in one row. In this case, if there are at least two pixel points with the same pixel height as the projected pixel point, the reprojection error between the projected pixel point and each such pixel point is determined (the determination method is not limited), and the pixel point corresponding to the minimum reprojection error is selected as the second pixel point.
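The selection rule above can be sketched as follows; using horizontal distance along the row as the reprojection error is an assumption for illustration, since the disclosure does not fix the error metric:

```python
def select_second_pixel(projected_col, candidate_cols):
    """Pick the same-row candidate closest to the projected pixel's column."""
    if len(candidate_cols) == 1:
        return candidate_cols[0]            # the usual noise-free case
    # Noisy row: keep the candidate with the minimum reprojection error.
    return min(candidate_cols, key=lambda c: abs(c - projected_col))

print(select_second_pixel(80.7, [80.4]))              # single candidate
print(select_second_pixel(80.7, [75.0, 80.4, 91.2]))  # row containing noise
```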
  • Step 308 Determine the three-dimensional point corresponding to the key point pair based on the key point pair and the camera calibration parameters.
• the key point pair includes a first pixel point in the first target image and a second pixel point in the second target image, and the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object.
• the triangulation method can be used to determine the three-dimensional point corresponding to the key point pair. The following describes this process in specific steps.
  • Step 3081 Perform distortion correction on the first pixel using the camera internal parameters of the first camera, and convert the distortion-corrected pixel into the first homogeneous coordinates; perform distortion correction on the second pixel using the camera internal parameters of the second camera , and convert the distortion-corrected pixel points into second homogeneous coordinates.
• Generally, the image collected by the first camera may be distorted, that is, there may be distortion, such as radial distortion and tangential distortion, and the camera intrinsic parameters of the first camera include distortion parameters, such as radial distortion parameters k1, k2, k3 and tangential distortion parameters p1, p2. Therefore, in this embodiment, the camera intrinsic parameters of the first camera can be used to perform distortion correction on the first pixel point to obtain the de-distorted pixel coordinates, and the de-distorted pixel coordinates can then be converted into the first homogeneous coordinates. Similarly, the camera intrinsic parameters of the second camera can be used to perform distortion correction on the second pixel point, and the distortion-corrected pixel coordinates can be converted into the second homogeneous coordinates.
• In this way, the homogeneous coordinates of the key point pair are obtained, which may include the first homogeneous coordinates of the first pixel point and the second homogeneous coordinates of the second pixel point.
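A hedged sketch of the de-distortion in step 3081, keeping only the radial terms k1, k2, k3 and inverting the distortion model by fixed-point iteration (both simplifications, and all numeric values, are assumptions):

```python
import numpy as np

def undistort_to_homogeneous(pixel, K, k1, k2, k3, iters=10):
    """Remove radial distortion from a pixel; return its homogeneous coords."""
    # Normalised (still distorted) image coordinates
    xd = (pixel[0] - K[0, 2]) / K[0, 0]
    yd = (pixel[1] - K[1, 2]) / K[1, 1]
    x, y = xd, yd
    for _ in range(iters):                 # invert distorted = undist * scale
        r2 = x * x + y * y
        scale = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
        x, y = xd / scale, yd / scale
    return np.array([x, y, 1.0])           # e.g. the first homogeneous coords

K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])
h = undistort_to_homogeneous((400.0, 300.0), K, k1=-0.1, k2=0.01, k3=0.0)
print(h)
```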
  • Step 3082 Based on the first homogeneous coordinates, the second homogeneous coordinates, the camera intrinsic parameters of the first camera, the camera intrinsic parameters of the second camera, and the camera extrinsic parameters (such as position relationship, etc.) between the first camera and the second camera, Use triangulation to determine the three-dimensional point corresponding to the key point pair.
• FIG. 4 is a schematic diagram of the principle of the triangulation method. O L is the position of the first camera, and O R is the position of the second camera. For the same position point P on the measured object, its imaging position on the image plane of the first camera is p l, and its imaging position on the image plane of the second camera is p r. With p l as the first pixel point and p r as the second pixel point, the first pixel point and the second pixel point form a key point pair, and the three-dimensional point P is the three-dimensional point corresponding to the key point pair.
• O L, O R, p l and p r are converted into the same coordinate system. Under the same coordinate system, there is a straight line a1 between O L and p l, and a straight line a2 between O R and p r, and the intersection of straight line a1 and straight line a2 is the three-dimensional point P. Therefore, triangulation can be used to obtain the three-dimensional spatial coordinates of the three-dimensional point P, thereby obtaining the three-dimensional point corresponding to the key point pair.
• The above implementation is only an example of the triangulation method; the implementation of the triangulation method is not limited.
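One common concrete instance of the triangulation step, offered only as a sketch (the disclosure does not fix the method), is midpoint triangulation: since lines a1 and a2 rarely intersect exactly in practice, the point closest to both rays is returned. The geometry below is an assumed setup, not the disclosure's calibration:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Closest point between rays o1 + s*d1 and o2 + t*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a * c - b * b                  # > 0 for non-parallel rays
    s = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

o_l = np.array([0.0, 0.0, 0.0])            # first camera centre
o_r = np.array([0.1, 0.0, 0.0])            # second camera centre
P_true = np.array([0.05, -0.02, 1.5])      # point on the measured object
P = triangulate_midpoint(o_l, P_true - o_l, o_r, P_true - o_r)
print(P)
```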
• For each angle of the mechanical galvanometer, the first target image includes N first light strip center lines, and each first light strip center line includes H first pixel points; therefore, N*H key point pairs can be obtained, and the N*H key point pairs correspond to N*H three-dimensional points.
• Step 309 Generate a three-dimensional reconstructed image based on the three-dimensional points corresponding to the multiple key point pairs.
• For each angle of the mechanical galvanometer, steps 301 to 308 can be used to determine the N*H three-dimensional points at that angle, that is, N*H three-dimensional points can be obtained at each angle, so M*N*H three-dimensional points are obtained at the M angles. On this basis, a three-dimensional reconstructed image can be generated based on the M*N*H three-dimensional points.
  • the three-dimensional reconstructed image is point cloud data, and the three-dimensional reconstructed image is output.
  • the three-dimensional reconstructed image can be projected onto a camera to obtain a depth image, and the depth image can be output.
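The depth-image output mentioned above can be sketched as follows, assuming a pinhole camera with hypothetical intrinsics K: each reconstructed three-dimensional point's depth (its z value) is written into the pixel it projects to, keeping the nearest depth when two points land on the same pixel.

```python
import numpy as np

def point_cloud_to_depth(points, K, shape):
    """Project a point cloud into a depth image of the given (rows, cols)."""
    depth = np.full(shape, np.inf)
    for X in points:
        if X[2] <= 0:                      # behind the camera: skip
            continue
        u = K @ X
        col, row = int(round(u[0] / u[2])), int(round(u[1] / u[2]))
        if 0 <= row < shape[0] and 0 <= col < shape[1]:
            depth[row, col] = min(depth[row, col], X[2])
    depth[np.isinf(depth)] = 0.0           # 0 marks "no measurement"
    return depth

K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])
cloud = np.array([[0.0, 0.0, 1.5], [0.05, -0.02, 1.2]])
depth = point_cloud_to_depth(cloud, K, (480, 640))
print(depth[240, 320])   # depth of the on-axis point
```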
  • the three-dimensional imaging device may include a first camera, a second camera, a third camera, a processor, a multi-line laser and a mechanical galvanometer.
• For example, the first camera is the left camera, the second camera is the right camera, and the third camera is an auxiliary camera; or, the first camera is the right camera, the second camera is the left camera, and the third camera is the auxiliary camera.
  • the camera calibration parameters and the third calibration equation corresponding to the three-dimensional imaging device can be obtained in advance, and the camera calibration parameters and the third calibration equation are stored for the three-dimensional imaging device.
  • the camera calibration parameters include camera intrinsic parameters of the first camera, camera intrinsic parameters of the second camera, camera intrinsic parameters of the third camera, camera extrinsic parameters (such as position relationship, such as rotation matrix) between the first camera and the second camera. and translation matrix, etc.), camera extrinsic parameters (such as positional relationship) between the first camera and the third camera, and camera extrinsic parameters (such as positional relationship) between the second camera and the third camera.
  • the third calibration equation represents the functional relationship between the pixel points and the three-dimensional reconstruction points in the image collected by the second camera (denoted as image s2). Assuming that the multi-line laser projects N lines of structured light to the measured object, and the angles of the mechanical galvanometer are M angles, a total of N*M third calibration equations need to be obtained.
• The third calibration equations may all be light plane equations. For how to obtain the third calibration equation, please refer to the method of obtaining the second calibration equation in Application Scenario 1; it is not described again here.
• In addition, each line of structured light can be serially numbered according to its actual order. For example, in order from left to right (or from right to left), the N lines of structured light (i.e., laser lines) are numbered 1, 2, 3, ..., N, to facilitate matching and indexing of each line of structured light.
  • the image reconstruction method of this embodiment may include steps 501-509.
• Step 501 When the multi-line laser projects N lines of structured light onto the measured object, obtain the first original image of the measured object captured by the first camera, the second original image of the measured object captured by the second camera, and the third original image of the measured object collected by the third camera; the collection time of the first original image, the collection time of the second original image, and the collection time of the third original image may be the same.
• The first original image may include N first light strip areas corresponding to the N lines of structured light, the second original image may include N second light strip areas corresponding to the N lines of structured light, and the third original image may include N third light strip areas corresponding to the N lines of structured light.
• Step 502 Perform trinocular correction on the first original image, the second original image, and the third original image to obtain a first target image corresponding to the first original image, a second target image corresponding to the second original image, and a third target image corresponding to the third original image. Trinocular correction is used to make the same position point on the measured object have the same pixel height in the first target image, the second target image and the third target image. That is to say, for the same position point on the measured object, the first original image, the second original image and the third original image can be corrected to the same pixel height through trinocular correction.
• The first target image may include N first light strip areas corresponding to the N lines of structured light, the second target image may include N second light strip areas corresponding to the N lines of structured light, and the third target image may include N third light strip areas corresponding to the N lines of structured light.
  • Step 503 Determine the first light strip center line corresponding to each first light strip area in the first target image, determine the second light strip center line corresponding to each second light strip area in the second target image, and The third light strip center line corresponding to each third light strip area in the third target image is determined.
• a light strip center line extraction algorithm can be used to determine the light strip center line corresponding to a light strip area. For example, Gaussian fitting, COG or STEGER can be used to extract the center point of each row of the light strip area, thereby obtaining the light strip center line; this process is not limited in this embodiment.
• Step 504 For each line structured light, determine the target first light strip center line corresponding to the line structured light from all first light strip center lines, determine the target second light strip center line corresponding to the line structured light from all second light strip center lines, and determine the target third light strip center line corresponding to the line structured light from all third light strip center lines.
  • Step 505 Based on the target first light strip center line and the target third light strip center line corresponding to the line structured light, for each first pixel point in the target first light strip center line, from the target third light strip center line A third pixel point having the same pixel height as the first pixel point is determined, and a target three-dimensional reconstruction point corresponding to the first pixel point is determined based on the first pixel point, the third pixel point and the camera calibration parameter.
• For example, pixel points having the same pixel height as the first pixel point are determined from the target third light strip center line. If one pixel point is determined, that pixel point is selected as the third pixel point. If at least two pixel points are determined, the reprojection error between each of the at least two pixel points and the first pixel point is determined (the method of determining the reprojection error is not limited), and the pixel point corresponding to the minimum reprojection error is selected as the third pixel point.
• The first pixel point and the third pixel point may form a key point pair, that is, the key point pair includes the first pixel point in the first target image and the third pixel point in the third target image, and the first pixel point and the third pixel point are pixel points corresponding to the same position point on the measured object.
  • A triangulation method can be used to determine the three-dimensional point corresponding to the key point pair; this three-dimensional point is the target three-dimensional reconstruction point corresponding to the first pixel point.
  • For example, distortion of the first pixel point is corrected using the camera intrinsic parameters of the first camera, and the distortion-corrected pixel point is converted into first homogeneous coordinates; distortion of the third pixel point is corrected using the camera intrinsic parameters of the third camera, and the distortion-corrected pixel point is converted into third homogeneous coordinates.
  • Based on the first homogeneous coordinates, the third homogeneous coordinates, the camera intrinsic parameters of the first camera, the camera intrinsic parameters of the third camera, and the camera extrinsic parameters between the first camera and the third camera (such as their positional relationship), the three-dimensional point corresponding to the key point pair is determined using triangulation; this process will not be described again.
  • the target three-dimensional reconstruction point corresponding to the first pixel point is determined, that is, the corresponding relationship between the first pixel point and the target three-dimensional reconstruction point is obtained.
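As an illustration of the triangulation step, the sketch below uses the standard linear (DLT) method; the 3x4 projection matrices and the assumption that pixel coordinates are already distortion-corrected are simplifications for illustration, not the method prescribed by the embodiment:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched point pair.

    P1, P2 : 3x4 projection matrices of the two cameras
             (intrinsics @ [R|t]).
    x1, x2 : (u, v) coordinates of the matched pair in each image,
             already distortion-corrected.
    Returns the 3-D point in the common world frame.
    """
    # Each image point contributes two homogeneous linear constraints.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null-space vector = homogeneous point
    return X[:3] / X[3]
```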
  • Step 506: For the target three-dimensional reconstruction point corresponding to each first pixel point, the target three-dimensional reconstruction point can be converted into a projection pixel point in the second target image based on the third calibration equation corresponding to the line structured light; this projection pixel point is the projection pixel point corresponding to the first pixel point.
  • For example, for each line of structured light, the third calibration equation corresponding to that line of structured light is selected from the N*M third calibration equations. Since the third calibration equation represents the functional relationship between pixel points in the second target image and three-dimensional reconstruction points, after the first pixel point in the target first light strip center line has been converted into the target three-dimensional reconstruction point, the target three-dimensional reconstruction point can be converted into the projection pixel point based on the third calibration equation.
  • In summary, the first pixel point can be projected to the second target image to obtain the projection pixel point corresponding to the first pixel point.
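One way to realize such a calibration-equation round trip, assuming the calibration equation is a light plane equation in the camera frame (the embodiments mention fitting plane or quadratic models; the helper names here are illustrative), is to intersect the back-projected pixel ray with the plane and then reproject:

```python
import numpy as np

def pixel_to_point(ray_dir, plane):
    """Intersect a camera ray with a calibrated laser light plane.

    ray_dir : direction (x, y, 1) of the back-projected pixel in the
              camera frame (camera centre at the origin).
    plane   : (a, b, c, d) with a*X + b*Y + c*Z + d = 0.
    Returns the 3-D reconstruction point on the plane.
    """
    a, b, c, d = plane
    normal = np.array([a, b, c], float)
    ray = np.asarray(ray_dir, float)
    t = -d / normal.dot(ray)       # scale so the ray hits the plane
    return t * ray

def point_to_pixel(X, K):
    """Project a 3-D point into an image with intrinsic matrix K."""
    x = K @ (np.asarray(X, float) / X[2])   # perspective divide
    return x[:2]
```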
  • Step 507: For each first pixel point, after the projection pixel point corresponding to the first pixel point is obtained, select the second pixel point corresponding to the projection pixel point from the target second light strip center line.
  • For example, pixel points with the same pixel height as the projection pixel point can be determined from the target second light strip center line; if one pixel point is determined, that pixel point is selected as the second pixel point; if at least two pixel points are determined, the reprojection error between each of them and the projection pixel point is determined, and the pixel point corresponding to the minimum reprojection error is selected as the second pixel point.
  • In this way, the first pixel point and the second pixel point can form a key point pair, and the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object; the first pixel point is a pixel point in the first target image, and the second pixel point is a pixel point in the second target image.
  • Step 508: Determine the three-dimensional point corresponding to the key point pair based on the key point pair and the camera calibration parameters. For example, distortion of the first pixel point is corrected using the camera intrinsic parameters of the first camera, and the distortion-corrected pixel point is converted into first homogeneous coordinates; distortion of the second pixel point is corrected using the camera intrinsic parameters of the second camera, and the distortion-corrected pixel point is converted into second homogeneous coordinates. Based on the first homogeneous coordinates, the second homogeneous coordinates, the camera intrinsic parameters of the first camera, the camera intrinsic parameters of the second camera, and the camera extrinsic parameters between the first camera and the second camera, the three-dimensional point corresponding to the key point pair is determined using triangulation.
  • In this way, the three-dimensional point corresponding to the key point pair can be obtained.
  • Step 509: Generate a three-dimensional reconstructed image based on the three-dimensional points corresponding to the multiple key point pairs.
  • In the embodiment of the present application, the multi-line laser projects N lines of structured light onto the measured object each time, so that the line structured light image collected by the camera each time includes N light strip center lines; such an image is equivalent to line structured light images of N positions on the measured object, which reduces the number of image acquisitions and the time required for three-dimensional reconstruction.
  • When the multi-line laser scans the surface of the measured object, it can quickly obtain the entire contour data of the measured object, output the three-dimensional image information of the measured object, and improve the detection accuracy and detection speed.
  • By using the first camera and the second camera to collect line structured light images simultaneously, the three-dimensional information of the measured object, i.e., its depth information, can be obtained by triangulation based on the images collected by the two cameras. Obtaining the depth information of the multiple laser lines from a single acquired image increases the single-scan efficiency by a factor of N and enables a fast full-width scan of the entire contour of the measured object.
  • the multi-line laser triangulation method is used to obtain the depth information of multiple laser lines at one time, and a mechanical galvanometer is used to achieve small-angle scanning of the laser line spacing, thereby completing high-precision scanning of the entire contour of the surface of the measured object.
  • the embodiment of the present application proposes an image reconstruction device, which is applied to a three-dimensional imaging device.
  • the three-dimensional imaging device includes a first camera, a second camera and a multi-line laser, as shown in Figure 6.
  • the device may include an acquisition module 61 , a determination module 62 and a generation module 63 .
  • The acquisition module 61 is configured to, when the multi-line laser projects N lines of structured light onto the measured object, acquire the first original image of the measured object collected by the first camera and acquire the second original image of the measured object collected by the second camera, where N is a positive integer greater than 1.
  • The determining module 62 is configured to determine a first target image corresponding to the first original image and a second target image corresponding to the second original image, wherein the first target image includes N first light strip areas corresponding to the N lines of structured light and the second target image includes N second light strip areas corresponding to the N lines of structured light; determine the first light strip center line corresponding to each first light strip area in the first target image and the second light strip center line corresponding to each second light strip area in the second target image; determine multiple key point pairs based on all first light strip center lines and all second light strip center lines, wherein each key point pair includes a first pixel point in a first light strip center line and a second pixel point in a second light strip center line, and the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object; and determine the three-dimensional point corresponding to each key point pair based on the key point pair and the camera calibration parameters.
  • the generation module 63 is configured to generate a three-dimensional reconstructed image of the measured object based on the three-dimensional points corresponding to the plurality of key point pairs.
  • When the determining module 62 determines the first target image corresponding to the first original image and the second target image corresponding to the second original image, it is specifically configured to: determine the first original image as the first target image and the second original image as the second target image; or perform binocular rectification on the first original image and the second original image to obtain the first target image corresponding to the first original image and the second target image corresponding to the second original image, wherein the binocular rectification is used to make the same position point on the measured object have the same pixel height in the first target image and the second target image.
  • When the determining module 62 determines multiple key point pairs based on all first light strip center lines and all second light strip center lines, it is specifically configured to: for each line of structured light, determine the target first light strip center line and the target second light strip center line corresponding to that line of structured light from all first light strip center lines and all second light strip center lines; for each first pixel point in the target first light strip center line, project the first pixel point to the second target image to obtain the projection pixel point corresponding to the first pixel point, and select the second pixel point corresponding to the projection pixel point from the target second light strip center line; a key point pair can then be generated based on the first pixel point and the second pixel point.
  • When the determining module 62 selects the second pixel point corresponding to the projection pixel point from the target second light strip center line, it is specifically configured to: determine, from the target second light strip center line, pixel points with the same pixel height as the projection pixel point; if one pixel point is determined, select that pixel point as the second pixel point; if at least two pixel points are determined, determine the reprojection error between each of them and the projection pixel point, and select the pixel point corresponding to the minimum reprojection error as the second pixel point.
  • When the determining module 62 projects the first pixel point to the second target image to obtain the projection pixel point corresponding to the first pixel point, it is specifically configured to: obtain the first calibration equation and the second calibration equation corresponding to the line structured light, wherein the first calibration equation represents the functional relationship between pixel points in the first target image and three-dimensional reconstruction points, and the second calibration equation represents the functional relationship between pixel points in the second target image and three-dimensional reconstruction points; convert the first pixel point into the target three-dimensional reconstruction point based on the first calibration equation; and convert the target three-dimensional reconstruction point into the projection pixel point based on the second calibration equation.
  • In one embodiment, the three-dimensional imaging device further includes a third camera. The acquisition module 61 is further configured to acquire the third original image of the measured object collected by the third camera when the multi-line laser projects N lines of structured light onto the measured object.
  • The determining module 62 is further configured to determine a third target image corresponding to the third original image, wherein the third target image includes N third light strip areas corresponding to the N lines of structured light; determine the third light strip center line corresponding to each third light strip area in the third target image; and, for each line of structured light, determine the target third light strip center line corresponding to that line of structured light from all third light strip center lines.
  • When the determining module 62 projects, for each first pixel point in the target first light strip center line, the first pixel point to the second target image to obtain the projection pixel point corresponding to the first pixel point, it is specifically configured to: determine a third pixel point with the same pixel height as the first pixel point from the target third light strip center line; determine the target three-dimensional reconstruction point based on the first pixel point, the third pixel point and the camera calibration parameters; and convert the target three-dimensional reconstruction point into the projection pixel point based on a third calibration equation, wherein the third calibration equation represents the functional relationship between pixel points in the second target image and three-dimensional reconstruction points.
  • In one embodiment, the camera calibration parameters include the camera intrinsic parameters of the first camera, the camera intrinsic parameters of the second camera, and the camera extrinsic parameters between the first camera and the second camera. When the determining module 62 determines the three-dimensional point corresponding to a key point pair based on the key point pair and the camera calibration parameters, it is specifically configured to: perform distortion correction on the first pixel point using the camera intrinsic parameters of the first camera, and convert the distortion-corrected pixel point into first homogeneous coordinates; perform distortion correction on the second pixel point using the camera intrinsic parameters of the second camera, and convert the distortion-corrected pixel point into second homogeneous coordinates; and, based on the first homogeneous coordinates, the second homogeneous coordinates, the camera intrinsic parameters of the first camera, the camera intrinsic parameters of the second camera and the camera extrinsic parameters, determine the three-dimensional point corresponding to the key point pair using triangulation.
  • An embodiment of the present application provides a three-dimensional imaging device. The three-dimensional imaging device may include a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor, and the processor is configured to execute the machine-executable instructions to implement the image reconstruction method disclosed in the above embodiments of the present application.
  • An embodiment of the present application also provides a machine-readable storage medium on which several computer instructions are stored; when executed by a processor, the computer instructions cause the processor to implement the image reconstruction method disclosed in the above embodiments of the present application.
  • machine-readable storage medium can be any electronic, magnetic, optical or other physical storage device, which can contain or store information, such as executable instructions, data, etc.
  • For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (such as a hard disk drive), a solid state drive, any type of storage disc (such as an optical disc or DVD), or a similar storage medium, or a combination thereof.
  • A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, e-mail transceiver, game console, tablet computer, wearable device, or a combination of any of these devices.
  • Embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.


Abstract

An image reconstruction method, apparatus and device. The image reconstruction method includes: when a multi-line laser projects N lines of structured light onto a measured object, acquiring a first original image collected by a first camera and a second original image collected by a second camera (201); determining a first target image corresponding to the first original image and a second target image corresponding to the second original image (202); determining the first light strip center line corresponding to each first light strip area in the first target image and the second light strip center line corresponding to each second light strip area in the second target image (203); determining multiple key point pairs based on the first light strip center lines and the second light strip center lines, each key point pair including a first pixel point in a first light strip center line and a second pixel point in a second light strip center line (204); determining three-dimensional points based on the key point pairs and camera calibration parameters (205); and generating a three-dimensional reconstructed image based on the three-dimensional points corresponding to the multiple key point pairs (206).

Description

Image reconstruction method, apparatus and device — Technical Field
The present application relates to the field of image processing technology, and in particular to an image reconstruction method, apparatus and device.
Background Art
A three-dimensional imaging device may include a laser and one camera: the laser projects line structured light onto the surface of the measured object (i.e., the measured target), and the camera photographs the measured object to obtain an image carrying the line structured light, i.e., a line structured light image. After the line structured light image is obtained, the light strip center line of the image can be extracted and converted according to pre-calibrated sensor parameters to obtain the spatial coordinates (i.e., three-dimensional coordinates) of the measured object at the current position. Based on the spatial coordinates of the measured object at the current position, three-dimensional reconstruction of the measured object can be achieved.
Summary of the Invention
An embodiment of the present application provides an image reconstruction method applied to a three-dimensional imaging device, where the three-dimensional imaging device includes a first camera, a second camera and a multi-line laser, and the method includes:
when the multi-line laser projects N lines of structured light onto a measured object, acquiring a first original image of the measured object collected by the first camera and a second original image of the measured object collected by the second camera, where N is a positive integer greater than 1; determining a first target image corresponding to the first original image and a second target image corresponding to the second original image, where the first target image includes N first light strip areas corresponding to the N lines of structured light and the second target image includes N second light strip areas corresponding to the N lines of structured light;
determining the first light strip center line corresponding to each first light strip area in the first target image, and determining the second light strip center line corresponding to each second light strip area in the second target image;
determining multiple key point pairs based on all first light strip center lines and all second light strip center lines, where each key point pair includes a first pixel point in a first light strip center line and a second pixel point in a second light strip center line, and the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object;
determining, based on each key point pair and camera calibration parameters, the three-dimensional point corresponding to that key point pair; and
generating a three-dimensional reconstructed image of the measured object based on the three-dimensional points corresponding to the multiple key point pairs.
An embodiment of the present application provides an image reconstruction apparatus applied to a three-dimensional imaging device, where the three-dimensional imaging device includes a first camera, a second camera and a multi-line laser, and the apparatus includes:
an acquisition module, configured to, when the multi-line laser projects N lines of structured light onto a measured object, acquire a first original image of the measured object collected by the first camera and a second original image of the measured object collected by the second camera, where N is a positive integer greater than 1;
a determining module, configured to determine a first target image corresponding to the first original image and a second target image corresponding to the second original image, where the first target image includes N first light strip areas corresponding to the N lines of structured light and the second target image includes N second light strip areas corresponding to the N lines of structured light; determine the first light strip center line corresponding to each first light strip area in the first target image and the second light strip center line corresponding to each second light strip area in the second target image; determine multiple key point pairs based on all first light strip center lines and all second light strip center lines, where each key point pair includes a first pixel point in a first light strip center line and a second pixel point in a second light strip center line, and the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object; and determine, based on each key point pair and camera calibration parameters, the three-dimensional point corresponding to that key point pair; and
a generating module, configured to generate a three-dimensional reconstructed image of the measured object based on the three-dimensional points corresponding to the multiple key point pairs.
An embodiment of the present application provides a three-dimensional imaging device, including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor, and the processor is configured to execute the machine-executable instructions to implement the image reconstruction method disclosed in the above embodiments of the present application.
In the embodiments of the present application, the multi-line laser projects N lines of structured light onto the measured object each time, N being a positive integer greater than 1 (e.g., 7, 11 or 15), so that each line structured light image collected by a camera includes N light strip center lines; such an image is equivalent to line structured light images of N positions on the measured object, which reduces the number of image acquisitions and the time required for three-dimensional reconstruction. When the multi-line laser scans the surface of the measured object, the entire contour data of the measured object can be obtained quickly, the three-dimensional image information of the measured object can be output, and the detection accuracy and detection speed are improved. By using the first camera and the second camera to collect line structured light images simultaneously, the three-dimensional information of the measured object, i.e., its depth information, can be obtained by triangulation based on the images collected by the two cameras; the depth information of the multiple laser lines is thus obtained from a single acquired image, which increases the single-scan efficiency by a factor of N and enables a fast full-width scan of the entire contour of the measured object.
Brief Description of the Drawings
FIG. 1A and FIG. 1B are schematic structural diagrams of a three-dimensional imaging device in an embodiment of the present application.
FIG. 1C and FIG. 1D are schematic diagrams of multiple laser lines in an embodiment of the present application.
FIG. 2 is a schematic flowchart of an image reconstruction method in an embodiment of the present application.
FIG. 3 is a schematic flowchart of an image reconstruction method in an embodiment of the present application.
FIG. 4 is a schematic diagram of the principle of triangulation in an embodiment of the present application.
FIG. 5 is a schematic flowchart of an image reconstruction method in an embodiment of the present application.
FIG. 6 is a schematic structural diagram of an image reconstruction apparatus in an embodiment of the present application.
Detailed Description of the Embodiments
The terms used in the embodiments of the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. The singular forms "a", "said" and "the" used in the present application and the claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used herein refers to any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various kinds of information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. In addition, depending on the context, the word "if" as used may be interpreted as "when", "while" or "in response to determining".
To obtain a three-dimensional reconstructed image, related techniques include area structured light projection, binocular speckle, TOF (Time of Flight) and single-line laser profile scanning. Area structured light projection uses DLP (Digital Light Processing) or LCD (Liquid Crystal Display) projection with an LED (Light-Emitting Diode) light source; the projection volume is large and the energy diverges, so at long distances with a large field of view the equipment is bulky and power-hungry, which is unfavorable for three-dimensional positioning applications. The binocular speckle method combines binocular disparity with laser speckle stereo matching; its detection accuracy is low and its edge contours are poor, which is unfavorable for contour scanning and three-dimensional positioning. The TOF method is limited by camera resolution, with detection accuracy at the centimeter level, which does not meet the requirements of automated high-precision positioning. The single-line laser profile scanning method uses a single laser line to scan the depth information of an object; the scanning speed is slow and the stability is poor, which does not meet the positioning requirements of three-dimensional reconstruction.
Taking single-line laser profile scanning as an example, a laser projects line structured light onto the surface of the measured object, and a camera photographs the measured object to obtain a line structured light image. From the line structured light image, the spatial coordinates (i.e., three-dimensional coordinates) of the measured object at the current position can be obtained, realizing three-dimensional reconstruction of the measured object. However, to reconstruct the whole object, line structured light images of different positions of the measured object must be collected; that is, the laser projects the line structured light onto different positions of the measured object, each position corresponding to one line structured light image. Since the camera collects the image of only one position at a time, the camera needs many acquisitions to complete the reconstruction; the reconstruction time is long, the scanning speed is slow and the stability is poor, which does not meet the positioning requirements of three-dimensional reconstruction.
In view of this, an embodiment of the present application proposes a three-dimensional imaging method based on multi-line laser scanning, which can obtain the depth information of the measured object by triangulation and optically scan the surface of the measured object with a multi-line laser, so that the entire contour data of the measured object can be obtained quickly and the three-dimensional image information of the measured object can be output. The multi-line laser scanning three-dimensional imaging method can be applied in the fields of machine vision and industrial automation, for example for three-dimensional measurement and robot positioning; the application scenario is not limited.
In this embodiment, the depth information of the measured object can be obtained by triangulation. By obtaining the depth information of multiple laser lines (e.g., 10 or 20 lines) in a single acquisition, the single-scan efficiency can be increased by a factor of 10 to 20; a full-width scan of the entire contour of the measured object is then achieved by driving the multi-line laser to scan.
This embodiment solves the problems of large volume and high power consumption of area structured light projection, the low detection accuracy and poor edge contours of the binocular speckle method, the low detection accuracy of the TOF method, and the slow scanning speed and poor stability of single-line laser profile scanning. In summary, the multi-line laser scanning three-dimensional imaging method of this embodiment is a high-precision, low-cost, small-volume, low-power three-dimensional scanning imaging method with faster detection speed and higher detection accuracy.
The embodiment of the present application proposes a multi-line laser scanning three-dimensional imaging method, which can be applied to a three-dimensional imaging device. The three-dimensional imaging device may be any device with a three-dimensional imaging function, for example, any device in the field of machine vision or industrial automation; the type of the three-dimensional imaging device is not limited.
Referring to FIG. 1A, which is a schematic diagram of the actual structure of the three-dimensional imaging device, the device may include, but is not limited to: a left camera, a right camera, an auxiliary camera, a processor, a laser, a galvanometer motor and a galvanometer driver. In another structure, the three-dimensional imaging device may include, but is not limited to: a left camera, a right camera, a processor, a laser, a galvanometer motor and a galvanometer driver, i.e., without an auxiliary camera.
Referring to FIG. 1B, which is a structural block diagram of the three-dimensional imaging device in another form, the device may include, but is not limited to: an image acquisition apparatus 100, an opto-mechanical scanning apparatus 200, a multi-line laser emitting apparatus 300, an image processing apparatus 400 and a fixed bracket 500.
Exemplarily, the image acquisition apparatus 100 may include a left camera 101 and a right camera 102, or it may include a left camera 101, a right camera 102 and an auxiliary camera 103. The left camera 101 and the right camera 102 perform binocular matching of the multi-line laser and obtain depth information by triangulation; the auxiliary camera 103 may or may not participate in the reconstruction. The left camera 101 and the right camera 102 may be monochrome cameras, with a filter whose bandwidth matches the laser wavelength added to the front of each camera, so that only light within the laser wavelength range passes; that is, only the laser wavelength reflected from the surface of the measured object is received and the reflection image of the laser lines is captured, which improves contrast and reduces interference from ambient light. The left camera 101 and the right camera 102 may be distributed on the two sides of the mechanical galvanometer, mounted symmetrically left and right. The auxiliary camera 103 may be a monochrome camera or an RGB camera; its mounting position is as close as possible to the exit optical axis of the mechanical galvanometer to guarantee a short baseline, so that its field of view largely coincides with the laser scanning field of view; in this way, the auxiliary camera 103 can capture all laser lines completely in each shot. When the auxiliary camera 103 is an RGB camera, the laser can be turned off to take a color image of the surface of the measured object, thereby realizing an RGBD image output function.
Exemplarily, the opto-mechanical scanning apparatus 200 may include a mechanical galvanometer, i.e., the scanning function is realized by a mechanical galvanometer, and the mechanical galvanometer may include three parts: a galvanometer motor, a galvanometer driver and a mirror; only the galvanometer motor and galvanometer driver are shown in FIG. 1A. The mechanical galvanometer may be one with high repeatability, and the mirror has a visible-light reflective coating that reflects the laser lines and changes their emission angle.
Exemplarily, the multi-line laser emitting apparatus 300 may include a laser, which may be a multi-line laser, i.e., a laser that emits multiple laser lines simultaneously. For example, the laser may be a multi-line laser module mainly composed of a laser diode, a collimating lens and a multi-line DOE (Diffractive Optical Element). The laser diode may be a high-power red laser diode with a wavelength of 635 nm, 660 nm or another wavelength. The multi-line DOE may emit 10, 11 or 25 laser lines simultaneously, without limitation.
Exemplarily, the image processing apparatus 400 may include a processor, such as a CPU or GPU, and is connected to the image acquisition apparatus 100, the opto-mechanical scanning apparatus 200 and the multi-line laser emitting apparatus 300 respectively. The image processing apparatus 400 may start the multi-line laser emitting apparatus 300, which emits multiple laser lines (i.e., line structured light); the laser lines are reflected by the mechanical galvanometer onto the surface of the measured object. The image processing apparatus 400 controls the mechanical galvanometer to start scanning; at each scanned angle, the image processing apparatus 400 receives angle feedback from the mechanical galvanometer and, according to the feedback, triggers the image acquisition apparatus 100 to collect a multi-line laser image of the surface of the measured object.
For convenience of description, in subsequent embodiments, the image processing apparatus 400 is the processor, the image acquisition apparatus 100 is the left camera, right camera and auxiliary camera, the opto-mechanical scanning apparatus 200 is the mechanical galvanometer, and the multi-line laser emitting apparatus 300 is the laser. Referring to FIG. 1C, the laser may emit multiple laser lines (seven in FIG. 1C), which are reflected by the mechanical galvanometer onto the surface of the measured object. The initial angle of the mechanical galvanometer is angle A; at angle A, the left camera collects line structured light image A1 of the measured object, the right camera collects line structured light image A2, and the auxiliary camera collects line structured light image A3. Then, referring to FIG. 1D, the angle of the mechanical galvanometer is angle B; the left camera collects line structured light image B1, the right camera collects line structured light image B2, and the auxiliary camera collects line structured light image B3. This continues until the final angle of the mechanical galvanometer, which indicates that a complete scan of the surface of the measured object has been finished.
Based on line structured light images A1, A2 and A3, the processor can determine the three-dimensional points (i.e., the three-dimensional point cloud) at angle A. Based on line structured light images B1, B2 and B3, the processor can determine the three-dimensional points at angle B, and so on, to obtain the three-dimensional points at all angles. On this basis, the three-dimensional points at all angles can be stitched together to obtain complete three-dimensional points of the surface of the measured object, i.e., a complete three-dimensional reconstructed image of the surface of the measured object.
When the auxiliary camera is an RGB camera, after scanning is completed the processor may also turn off the laser, control the auxiliary camera to collect an RGB image, and then register the three-dimensional reconstructed image with the RGB image to output an RGBD image; the processing of the RGBD image is not limited in this embodiment.
Exemplarily, the fixed bracket 500 serves for fixing and heat dissipation and is made of aluminum or another material. The cameras and the mechanical galvanometer adopt an integrated fixed design to keep their relative positions unchanged.
The image reconstruction method of the embodiment of the present application is described below with reference to specific embodiments. Referring to FIG. 2, which is a schematic flowchart of the image reconstruction method in an embodiment of the present application, the method may include steps 201-206.
Step 201: when the multi-line laser projects N lines of structured light (i.e., laser lines) onto the measured object (i.e., the measured target), acquire the first original image of the measured object collected by the first camera and the second original image of the measured object collected by the second camera, where N may be a positive integer greater than 1.
Exemplarily, the image reconstruction method may be applied to a three-dimensional imaging device, which may include a first camera, a second camera and a multi-line laser, and may further include a processor and a mechanical galvanometer. The first camera may be the left camera and the second camera the right camera, or the first camera may be the right camera and the second camera the left camera. The multi-line laser is the laser in the above embodiments.
When the angle of the mechanical galvanometer is angle A, the multi-line laser projects N lines of structured light onto the measured object; the left camera collects line structured light image A1 of the measured object and the right camera collects line structured light image A2. On this basis, assuming the first camera is the left camera and the second camera is the right camera, the processor can acquire the first original image (e.g., line structured light image A1) and the second original image (e.g., line structured light image A2) and perform subsequent processing based on them.
When the angle of the mechanical galvanometer is angle B, the multi-line laser projects N lines of structured light onto the measured object; the left camera collects line structured light image B1 and the right camera collects line structured light image B2. On this basis, the processor acquires the first original image (line structured light image B1) and the second original image (line structured light image B2) and performs subsequent processing based on them. By analogy, at each angle of the mechanical galvanometer, the processor can acquire the first original image of the measured object collected by the first camera and the second original image collected by the second camera.
Exemplarily, since the multi-line laser projects N lines of structured light onto the measured object, the first original image may include N first light strip areas corresponding one-to-one to the N lines of structured light, and the second original image may include N second light strip areas corresponding one-to-one to the N lines of structured light.
Step 202: determine the first target image corresponding to the first original image and the second target image corresponding to the second original image, where the first target image includes N first light strip areas corresponding to the N lines of structured light and the second target image includes N second light strip areas corresponding to the N lines of structured light.
In a possible implementation, the first original image may be directly determined as the first target image and the second original image as the second target image. Alternatively, binocular rectification is performed on the first original image and the second original image to obtain the first target image corresponding to the first original image and the second target image corresponding to the second original image. The binocular rectification is used to make the same position point on the measured object have the same pixel height in the first target image and the second target image; the rectification process is not limited.
Step 203: determine the first light strip center line corresponding to each first light strip area in the first target image, and determine the second light strip center line corresponding to each second light strip area in the second target image.
Step 204: determine multiple key point pairs based on all first light strip center lines and all second light strip center lines, where each key point pair includes a first pixel point in a first light strip center line and a second pixel point in a second light strip center line, and the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object.
In a possible implementation, for each line of structured light, the target first light strip center line and the target second light strip center line corresponding to that line of structured light may be determined from all first light strip center lines and all second light strip center lines; for each first pixel point in the target first light strip center line, the first pixel point is projected to the second target image to obtain the projection pixel point corresponding to the first pixel point, and the second pixel point corresponding to the projection pixel point is selected from the target second light strip center line. A key point pair is generated based on the first pixel point and the second pixel point, i.e., the key point pair includes the first pixel point and the second pixel point.
Exemplarily, selecting the second pixel point corresponding to the projection pixel point from the target second light strip center line may include, but is not limited to: determining, from the target second light strip center line, pixel points with the same pixel height as the projection pixel point; if one pixel point is determined, selecting that pixel point as the second pixel point; if at least two pixel points are determined, determining the reprojection error between each of them and the projection pixel point, and selecting the pixel point corresponding to the minimum reprojection error as the second pixel point.
Exemplarily, projecting the first pixel point to the second target image to obtain the projection pixel point corresponding to the first pixel point may include, but is not limited to: obtaining the first calibration equation and the second calibration equation corresponding to the line structured light, where the first calibration equation represents the functional relationship between pixel points in the first target image and three-dimensional reconstruction points and the second calibration equation represents the functional relationship between pixel points in the second target image and three-dimensional reconstruction points; converting the first pixel point into the target three-dimensional reconstruction point based on the first calibration equation; and converting the target three-dimensional reconstruction point into the projection pixel point based on the second calibration equation.
Step 205: determine, based on each key point pair and the camera calibration parameters, the three-dimensional point corresponding to that key point pair.
In a possible implementation, the camera calibration parameters may include the camera intrinsic parameters of the first camera, the camera intrinsic parameters of the second camera, and the camera extrinsic parameters between the first camera and the second camera. Distortion correction may be performed on the first pixel point using the camera intrinsic parameters of the first camera, and the distortion-corrected pixel point converted into first homogeneous coordinates; distortion correction is performed on the second pixel point using the camera intrinsic parameters of the second camera, and the distortion-corrected pixel point is converted into second homogeneous coordinates. Then, based on the first homogeneous coordinates, the second homogeneous coordinates, the camera intrinsic parameters of the first camera, the camera intrinsic parameters of the second camera and the camera extrinsic parameters, the three-dimensional point corresponding to the key point pair is determined by triangulation; the triangulation method is not limited.
Step 206: generate a three-dimensional reconstructed image of the measured object based on the three-dimensional points corresponding to the multiple key point pairs.
For example, when the angle of the mechanical galvanometer is angle A, the three-dimensional points corresponding to multiple key point pairs can be determined based on the first and second original images corresponding to angle A; when the angle is angle B, the three-dimensional points corresponding to multiple key point pairs can be determined based on the first and second original images corresponding to angle B; and so on. Based on the three-dimensional points corresponding to all angles, the three-dimensional reconstructed image can be generated, i.e., a complete three-dimensional reconstructed image of the surface of the measured object is obtained.
In a possible implementation, the three-dimensional imaging device may further include a third camera, i.e., the auxiliary camera. On this basis, when the multi-line laser projects N lines of structured light onto the measured object, a third original image of the measured object collected by the third camera (e.g., line structured light image A3 or B3) may also be acquired, and a third target image corresponding to the third original image determined, the third target image including N third light strip areas corresponding to the N lines of structured light. The third light strip center line corresponding to each third light strip area in the third target image is determined. For each line of structured light, the target third light strip center line corresponding to that line of structured light is determined from all third light strip center lines. For each first pixel point in the target first light strip center line, projecting the first pixel point to the second target image to obtain the projection pixel point corresponding to the first pixel point may further include: determining, from the target third light strip center line, a third pixel point with the same pixel height as the first pixel point; determining the target three-dimensional reconstruction point based on the first pixel point, the third pixel point and the camera calibration parameters; and converting the target three-dimensional reconstruction point into the projection pixel point based on a third calibration equation, where the third calibration equation represents the functional relationship between pixel points in the second target image and three-dimensional reconstruction points.
In the embodiments of the present application, the multi-line laser projects N lines of structured light onto the measured object each time, N being a positive integer greater than 1 (e.g., 7, 11 or 15), so that each line structured light image collected by a camera includes N light strip center lines; such an image is equivalent to line structured light images of N positions on the measured object, which reduces the number of image acquisitions and the time required for three-dimensional reconstruction. When the multi-line laser scans the surface of the measured object, the entire contour data of the measured object can be obtained quickly, the three-dimensional image information of the measured object can be output, and the detection accuracy and detection speed are improved. By using the first camera and the second camera to collect line structured light images simultaneously, the three-dimensional information of the measured object, i.e., its depth information, can be obtained by triangulation based on the images collected by the two cameras; the depth information of the multiple laser lines is thus obtained from a single acquired image, which increases the single-scan efficiency by a factor of N and enables a fast full-width scan of the entire contour of the measured object.
The above technical solution of the embodiment of the present application is described below with reference to specific application scenarios.
Application scenario 1: the three-dimensional imaging device may include a first camera, a second camera, a processor, a multi-line laser and a mechanical galvanometer, where the first camera is the left camera and the second camera is the right camera, or the first camera is the right camera and the second camera is the left camera. In application scenario 1, the camera calibration parameters, the first calibration equations and the second calibration equations corresponding to the three-dimensional imaging device may be obtained in advance and stored for the device.
Exemplarily, the camera calibration parameters may include the camera intrinsic parameters of the first camera, the camera intrinsic parameters of the second camera, and the camera extrinsic parameters between the first camera and the second camera. The camera intrinsic parameters of the first camera are parameters related to the characteristics of the first camera itself, such as focal length, pixel size and distortion coefficients; likewise, the camera intrinsic parameters of the second camera are parameters related to the characteristics of the second camera itself. The camera extrinsic parameters between the first camera and the second camera are parameters in the world coordinate system, such as the position and orientation of the first camera, the position and orientation of the second camera, and the positional relationship between the two cameras, e.g., a rotation matrix and a translation matrix.
The camera intrinsic parameters of the first camera are inherent parameters of the first camera, provided when the first camera leaves the factory; the camera intrinsic parameters of the second camera are likewise inherent parameters provided at the factory.
As for the camera extrinsic parameters between the first camera and the second camera, such as the rotation matrix and translation matrix, multiple calibration points may be deployed in a target scene; a first calibration image of the scene, including the calibration points, is collected by the first camera, and a second calibration image, also including the calibration points, is collected by the second camera. Based on the pixel coordinates of the calibration points in the first calibration image and in the second calibration image, the camera extrinsic parameters between the two cameras can be determined; the determination process is not limited.
Exemplarily, the first calibration equation represents the functional relationship between pixel points in an image collected by the first camera (denoted image s1) and three-dimensional reconstruction points, and the second calibration equation represents the functional relationship between pixel points in an image collected by the second camera (denoted image s2) and three-dimensional reconstruction points. Assuming the multi-line laser projects N lines of structured light onto the measured object and the mechanical galvanometer has M angles in total, N*M first calibration equations and N*M second calibration equations need to be obtained; both the first and second calibration equations may be light plane equations. The first and second calibration equations may be obtained through steps S11-S15.
Step S11: for each angle of the mechanical galvanometer, when the multi-line laser projects N lines of structured light onto a white background board, acquire image s1 collected by the first camera and image s2 collected by the second camera. Image s1 includes N first light strip areas corresponding one-to-one to the N lines of structured light, and image s2 includes N second light strip areas corresponding one-to-one to the N lines of structured light.
Step S12: determine the first light strip center line corresponding to each first light strip area in image s1 and the second light strip center line corresponding to each second light strip area in image s2, i.e., obtain N first light strip center lines and N second light strip center lines corresponding to the N lines of structured light.
Step S13: determine multiple key point pairs based on all first light strip center lines and all second light strip center lines, where each key point pair includes a first center point in a first light strip center line and a second center point in a second light strip center line, and the first center point and the second center point are pixel points corresponding to the same position point on the white background board.
For example, suppose the N lines of structured light are line structured light 1 and line structured light 2; image s1 includes first light strip area 1 corresponding to line structured light 1 and first light strip area 2 corresponding to line structured light 2, and image s2 includes second light strip area 1 corresponding to line structured light 1 and second light strip area 2 corresponding to line structured light 2. First light strip area 1 corresponds to first light strip center line 1, first light strip area 2 to first light strip center line 2, second light strip area 1 to second light strip center line 1, and second light strip area 2 to second light strip center line 2.
Exemplarily, since the measured object is a white background board, the light strip areas are clear and free of stray points when the two lines of structured light are projected onto it; therefore, when first light strip center line 1 is determined based on first light strip area 1, each row of first light strip center line 1 has only one center point, and likewise each row of second light strip center line 1 has only one center point. On this basis, the first-row center point of first light strip center line 1 and the first-row center point of second light strip center line 1 form key point pair 11, the second-row center points form key point pair 12, and so on. Similarly, the first-row center points of first light strip center line 2 and second light strip center line 2 form key point pair 21, the second-row center points form key point pair 22, and so on.
Step S14: for each key point pair, determine the three-dimensional point corresponding to that key point pair based on the key point pair and the camera calibration parameters. For example, triangulation may be used to determine the three-dimensional point corresponding to the key point pair; see the subsequent embodiments for the triangulation method, which is not detailed here.
Step S15: based on the key point pairs and the three-dimensional points corresponding to them, determine the first calibration equation and the second calibration equation corresponding to the angle of the mechanical galvanometer and the line of structured light.
For example, for angle A of the mechanical galvanometer, based on the multiple key point pairs between first light strip center line 1 and second light strip center line 1 (e.g., key point pairs 11 and 12) and the three-dimensional point corresponding to each key point pair, the first calibration equation and the second calibration equation corresponding to angle A and line structured light 1 are determined.
For example, based on a large number of center points of first light strip center line 1 and the three-dimensional point corresponding to each center point, the first calibration equation can be determined; the first calibration equation represents the functional relationship between pixel points in image s1 (i.e., the center points of first light strip center line 1) and three-dimensional reconstruction points (i.e., the three-dimensional points corresponding to the center points). For example, a plane model or a quadratic model may be fitted to obtain the first calibration equation.
Based on a large number of center points of second light strip center line 1 and the three-dimensional point corresponding to each center point, the second calibration equation can be determined; the second calibration equation represents the functional relationship between pixel points in image s2 (i.e., the center points of second light strip center line 1) and three-dimensional reconstruction points (i.e., the three-dimensional points corresponding to the center points).
Similarly, the first and second calibration equations corresponding to angle A and line structured light 2 can be obtained, the first and second calibration equations corresponding to angle B and line structured light 1 can be obtained, and so on.
In summary, for each angle of the mechanical galvanometer, the first and second calibration equations corresponding to each line of structured light can be obtained, i.e., N*M first calibration equations and N*M second calibration equations.
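Where a plane model is used for a calibration equation, the fit can be sketched as a least-squares plane through the three-dimensional points of the center points; `fit_plane` is an illustrative helper, not part of the embodiment, and a quadratic model would need a different parameterization:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane a*X + b*Y + c*Z + d = 0 through 3-D points.

    SVD of the centred points gives the plane normal as the singular
    vector with the smallest singular value. Returns (a, b, c, d)
    with a unit normal.
    """
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    normal = Vt[-1]                 # direction of least variance
    d = -normal.dot(centroid)
    return (*normal, d)
```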
Exemplarily, for the N lines of structured light projected by the multi-line laser, each line of structured light may be numbered in its actual order; for example, from left to right (or right to left) the N lines of structured light (i.e., laser lines) are numbered 1, 2, 3, ..., N in sequence, which facilitates matching and indexing of the lines of structured light.
In application scenario 1 above, referring to FIG. 3, the image reconstruction method of this embodiment may include steps 301-309.
Step 301: when the multi-line laser projects N lines of structured light onto the measured object, acquire the first original image of the measured object collected by the first camera and the second original image of the measured object collected by the second camera; the acquisition moments of the first original image and the second original image may be the same.
Exemplarily, the first original image includes N first light strip areas corresponding to the N lines of structured light, e.g., first light strip area 1 corresponding to line structured light 1, first light strip area 2 corresponding to line structured light 2, and so on. The second original image includes N second light strip areas corresponding to the N lines of structured light, e.g., second light strip area 1 corresponding to line structured light 1, second light strip area 2 corresponding to line structured light 2, and so on.
Step 302: perform binocular rectification on the first original image and the second original image to obtain the first target image corresponding to the first original image and the second target image corresponding to the second original image.
Exemplarily, the binocular rectification is used to make the same position point on the measured object have the same pixel height in the first target image and the second target image; that is, for the same position point on the measured object, the first original image and the second original image are rectified to the same pixel height, so that matching can be done directly within a single row, which is more convenient. For example, matching corresponding points in two-dimensional space is very time-consuming; to reduce the matching search range, the epipolar constraint can be used to reduce the matching of corresponding points from a two-dimensional search to a one-dimensional search. The role of binocular rectification is to bring the rows of the first original image and the second original image into correspondence, yielding the first target image and the second target image whose epipolar lines lie exactly on the same horizontal line, so that any point in the first target image and its corresponding point in the second target image necessarily have the same row number, and only a one-dimensional search within that row is needed.
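As an illustration of why rectification reduces matching to one row, the following Fusiello-style sketch computes a rectifying rotation from the baseline alone; a full rectification pipeline (intrinsics, distortion, remapping) involves more steps, and `rectify_rotation` is an illustrative name, not the embodiment's prescribed procedure:

```python
import numpy as np

def rectify_rotation(t):
    """Rotation that aligns both camera frames with the baseline.

    t : translation from the first to the second camera (the baseline).
    Returns R_rect whose rows are the new x axis (along the baseline),
    y axis and z axis; applying it to both cameras makes epipolar lines
    horizontal, so a surface point gets the same row in both images.
    """
    t = np.asarray(t, float)
    e1 = t / np.linalg.norm(t)                 # x: along the baseline
    e2 = np.cross([0.0, 0.0, 1.0], e1)
    e2 /= np.linalg.norm(e2)                   # y: orthogonal to x and old z
    e3 = np.cross(e1, e2)                      # z: completes right-handed frame
    return np.stack([e1, e2, e3])
```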
Exemplarily, the first target image includes N first light strip areas corresponding to the N lines of structured light, e.g., first light strip area 1 corresponding to line structured light 1, first light strip area 2 corresponding to line structured light 2, and so on; the second target image includes N second light strip areas corresponding to the N lines of structured light, e.g., second light strip area 1 corresponding to line structured light 1, second light strip area 2 corresponding to line structured light 2, and so on.
Step 303: determine the first light strip center line corresponding to each first light strip area in the first target image, and determine the second light strip center line corresponding to each second light strip area in the second target image.
Exemplarily, each row of a first light strip area may include multiple pixel points, and the center point of that row can be selected from those pixel points; the center points of all rows of the first light strip area form the first light strip center line. Thus, first light strip center line 1 corresponding to first light strip area 1, first light strip center line 2 corresponding to first light strip area 2, and so on, are obtained. Similarly, second light strip center line 1 corresponding to second light strip area 1, second light strip center line 2 corresponding to second light strip area 2, and so on, are obtained.
Exemplarily, a light strip center line extraction algorithm may be used to determine the light strip center line corresponding to a light strip area; for example, Gaussian fitting, COG (Center of Gravity) or the Steger method may be used to extract the center point of each row of the light strip area, thereby obtaining the light strip center line. This embodiment does not limit the extraction process.
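A minimal sketch of the COG variant mentioned above; the threshold value and the NaN convention for empty rows are illustrative choices, not part of the embodiment, and Gaussian fitting or the Steger method could be substituted for better sub-pixel accuracy:

```python
import numpy as np

def strip_centers_cog(img, threshold=50):
    """Centre-of-gravity (COG) centre point of a light strip, per row.

    For each row, pixels brighter than `threshold` are treated as the
    strip, and their intensity-weighted mean column is the centre; rows
    without strip pixels yield NaN.
    """
    img = np.asarray(img, float)
    centers = np.full(img.shape[0], np.nan)
    cols = np.arange(img.shape[1])
    for r, row in enumerate(img):
        mask = row > threshold
        if mask.any():
            weights = row[mask]
            centers[r] = (cols[mask] * weights).sum() / weights.sum()
    return centers
```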
Exemplarily, assuming the height of the first target image and the second target image is H, each first light strip center line includes the center points of H rows, and each second light strip center line includes the center points of H rows.
Step 304: for each line of structured light, determine the target first light strip center line and the target second light strip center line corresponding to that line of structured light from all first light strip center lines and all second light strip center lines.
For example, first light strip center line 1 and second light strip center line 1 corresponding to line structured light 1 are determined, first light strip center line 2 and second light strip center line 2 corresponding to line structured light 2 are determined, and so on.
Step 305: for each line of structured light, based on the first calibration equation corresponding to that line of structured light and the target first light strip center line corresponding to it, convert each first pixel point in the target first light strip center line into a target three-dimensional reconstruction point based on the first calibration equation.
Exemplarily, the angle of the mechanical galvanometer can be determined, i.e., the angle at which the first original image and the second original image were collected; for each line of structured light, the first calibration equation corresponding to that angle and that line of structured light can be selected from the N*M first calibration equations. Since the first calibration equation represents the functional relationship between pixel points in the first target image and three-dimensional reconstruction points, each first pixel point in the target first light strip center line can be converted into a target three-dimensional reconstruction point based on that first calibration equation.
For example, for first light strip center line 1 corresponding to line structured light 1, each first pixel point in first light strip center line 1 is converted into a target three-dimensional reconstruction point based on the first calibration equation corresponding to line structured light 1; for first light strip center line 2 corresponding to line structured light 2, each first pixel point in first light strip center line 2 is converted into a target three-dimensional reconstruction point based on the first calibration equation corresponding to line structured light 2; and so on.
Step 306: for the target three-dimensional reconstruction point corresponding to each first pixel point, the target three-dimensional reconstruction point can be converted into a projection pixel point in the second target image based on the second calibration equation corresponding to that line of structured light; this projection pixel point is the projection pixel point corresponding to the first pixel point.
For example, for each line of structured light, the second calibration equation corresponding to that line of structured light is selected from the N*M second calibration equations. Since the second calibration equation represents the functional relationship between pixel points in the second target image and three-dimensional reconstruction points, after a first pixel point in the target first light strip center line has been converted into a target three-dimensional reconstruction point, the target three-dimensional reconstruction point can be converted into a projection pixel point based on the second calibration equation.
For example, when target three-dimensional reconstruction points are converted into projection pixel points based on the second calibration equation corresponding to line structured light 1, the projection pixel point corresponding to each first pixel point in first light strip center line 1 is obtained; when they are converted based on the second calibration equation corresponding to line structured light 2, the projection pixel point corresponding to each first pixel point in first light strip center line 2 is obtained; and so on.
In summary, for each first pixel point in the target first light strip center line, the first pixel point can be projected to the second target image to obtain the projection pixel point corresponding to the first pixel point.
步骤307、针对每个第一像素点,在得到第一像素点对应的投影像素点之后,从目标第二光条中心线中选取与该投影像素点对应的第二像素点。
比如说,针对每条线结构光,以线结构光1为例,可以确定线结构光1对应的目标第二光条中心线,即第二光条中心线1,针对线结构光1对应的第一光条中心线1中的每个第一像素点,在得到第一像素点对应的投影像素点之后,可以从第二光条中心线1中选取与该投影像素点对应的第二像素点。
显然,该第一像素点和该第二像素点可以组成一个关键点对,即该关键点对包括第一光条中心线1中的第一像素点和第二光条中心线1中的第二像素点,且该第一像素点和该第二像素点是被测物体上同一位置点对应的像素点,第一像素点是第一目标图像中的像素点,第二像素点是第二目标图像中的像素点。
在一种可能的实施方式中,从目标第二光条中心线中选取与该投影像素点对应的第二像素点,可以包括:从目标第二光条中心线中确定与该投影像素点具有相同像素高度的像素点;若确定的像素点为一个,则将该像素点选取为第二像素点;若确定的像素点为至少两个,则确定至少两个像素点与该投影像素点之间的重投影误差,将最小重投影误差对应的像素点选取为第二像素点。
示例性的,第二光条中心线的一行可能包括一个像素点,在该情况下,若与投影像素点具有相同像素高度的像素点为一个,则将该像素点选取为第二像素点。第二光条中心线的一行也可能包括至少两个像素点,如光条区域存在杂点时,会导致一行存在至少两个像素点,在该情况下,若与投影像素点具有相同像素高度的像素点为至少两个,则确定该投影像素点与每个像素点之间的重投影误差,对此确定方式不做限制,在得到该投影像素点与每个像素点之间的重投影误差之后,可以将最小重投影误差对应的像素点选取为第二像素点。
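上述"同行候选点取最小重投影误差"的选取逻辑可以直接写成如下 Python 草图（候选点与投影点坐标为假设数据，误差这里以欧氏距离示意，本申请对重投影误差的确定方式不做限制）：

```python
import numpy as np

def pick_second_pixel(candidates, proj_uv):
    """在与投影像素点同行的候选中心点中，选取重投影误差
    （此处以到投影像素点的距离示意）最小的作为第二像素点。"""
    if len(candidates) == 1:
        return candidates[0]                      # 该行只有一个中心点
    errors = [np.hypot(u - proj_uv[0], v - proj_uv[1]) for u, v in candidates]
    return candidates[int(np.argmin(errors))]     # 杂点的误差较大，被排除

# 同一行存在两个候选点（其中一个为杂点），投影点落在第一个候选点附近
chosen = pick_second_pixel([(295.2, 240.0), (310.0, 240.0)], (295.0, 240.0))
```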
步骤308、基于关键点对和相机标定参数确定该关键点对对应的三维点。
示例性的,针对每个关键点对,该关键点对包括第一目标图像中的第一像素点和第二目标图像中的第二像素点,且该第一像素点和该第二像素点是被测物体上同一位置点对应的像素点,在此基础上,可以采用三角化方式确定该关键点对对应的三维点,以下结合具体步骤,对该过程进行说明。
步骤3081、通过第一摄像机的相机内参对第一像素点进行畸变校正,并将畸变校正后的像素点转换为第一齐次坐标;通过第二摄像机的相机内参对第二像素点进行畸变校正,并将畸变校正后的像素点转换为第二齐次坐标。
例如，由于透镜制造精度以及组装工艺偏差等原因，会导致第一摄像机采集的图像存在失真，即存在畸变，如径向畸变和切向畸变等。为了解决畸变问题，第一摄像机的相机内参包括畸变参数，如径向畸变参数k1,k2,k3，切向畸变参数p1,p2等。基于此，本实施例中，可以利用第一摄像机的相机内参对第一像素点进行畸变校正，得到去畸变处理后的像素坐标。在得到去畸变处理后的像素坐标之后，可以将去畸变处理后的像素坐标转换为第一齐次坐标。同理，可以利用第二摄像机的相机内参对第二像素点进行畸变校正，并将畸变校正后的像素坐标转换为第二齐次坐标。综上所述，关键点对的齐次坐标可以包括第一像素点的第一齐次坐标和第二像素点的第二齐次坐标。
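步骤3081中"先去畸变、再转齐次坐标"可以用如下 Python 草图示意。这里采用常见的径向/切向畸变模型并用定点迭代求其逆映射（内参与畸变参数均为假设值，迭代求逆只是去畸变的实现方式之一）：

```python
import numpy as np

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])   # 假设的相机内参
dist = dict(k1=-0.1, k2=0.0, k3=0.0, p1=0.0, p2=0.0)          # 假设的畸变参数

def undistort_to_homogeneous(u, v, K, dist, iters=10):
    """用定点迭代求畸变模型的逆（去畸变），
    再返回归一化的齐次坐标 [x, y, 1]。"""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    xd, yd = (u - cx) / fx, (v - cy) / fy       # 带畸变的归一化坐标
    x, y = xd, yd
    for _ in range(iters):                       # 迭代逼近无畸变坐标
        r2 = x * x + y * y
        radial = 1 + dist['k1'] * r2 + dist['k2'] * r2**2 + dist['k3'] * r2**3
        dx = 2 * dist['p1'] * x * y + dist['p2'] * (r2 + 2 * x * x)
        dy = dist['p1'] * (r2 + 2 * y * y) + 2 * dist['p2'] * x * y
        x, y = (xd - dx) / radial, (yd - dy) / radial
    return np.array([x, y, 1.0])

h = undistort_to_homogeneous(420.0, 240.0, K, dist)   # 第一像素点的第一齐次坐标
```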
步骤3082、基于第一齐次坐标、第二齐次坐标、第一摄像机的相机内参、第二摄像机的相机内参和第一摄像机与第二摄像机之间的相机外参(如位置关系等),利用三角化方式确定该关键点对对应的三维点。
比如说,参见图4所示,为三角化方式的原理示意图,OL为第一摄像机的位置,OR为第二摄像机的位置,基于第一摄像机与第二摄像机之间的相机外参,就可以获知OL和OR之间的位置关系。针对三维空间中的三维点P,在第一摄像机的像平面的成像位置为pl,在第二摄像机的像平面的成像位置为pr。pl作为第一像素点,pr作为第二像素点,该第一像素点和该第二像素点组成一个关键点对,而三维点P就是该关键点对对应的三维点。将OL,OR,pl和pr转换到同一坐标系下,针对同一坐标系下的OL,OR,pl和pr,OL和pl之间存在一条直线a1,OR和pr之间存在一条直线a2,若直线a1与直线a2存在交点,则直线a1与直线a2的交点就是三维点P。若直线a1与直线a2不存在交点,则三维点P是与直线a1和直线a2最近的点。基于上述应用场景,可以采用三角化方式获得三维点P的三维空间坐标,从而得到该关键点对对应的三维点。当然,上述实现方式只是三角化方式的示例,对此三角化方式的实现方式不做限制。
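图4所述"两条直线若不相交则取与两条直线最近的点"的三角化，可以用两射线最近点的中点法示意如下（两光心位置与射线方向为假设数据，仅作三角化方式的一种示例）：

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """求两条（可能异面的）射线 o1+s*d1 与 o2+t*d2 上相互最近的
    两个点，并取其中点作为三角化得到的三维点P。"""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = o2 - o1
    a, c, e = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * e - c * c                 # 两射线平行时 denom 为0，此处不处理
    s = (e * (d1 @ b) - c * (d2 @ b)) / denom
    t = (c * (d1 @ b) - a * (d2 @ b)) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# 两光心OL、OR相距0.1，两条射线都指向空间点[0, 0, 2]
P_mid = triangulate_midpoint(np.array([0.0, 0, 0]), np.array([0.0, 0, 2]),
                             np.array([0.1, 0, 0]), np.array([-0.1, 0, 2]))
```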
综上所述,针对每个关键点对,可以得到该关键点对对应的三维点,第一目标图像包括N个第一光条中心线,每个第一光条中心线包括H个第一像素点,因此,可以得到N*H个关键点对,且N*H个关键点对对应N*H个三维点。
步骤309、基于多个关键点对对应的三维点生成三维重建图像。
比如说,针对机械振镜的每个角度,可以采用步骤301-步骤308,确定该角度下的N*H个三维点,在机械振镜的扫描过程中,每个角度下都可以获取到一组原始图像进行上述操作,假设机械振镜一共存在M个角度,则可以得到M个角度下的M*N*H个三维点。在此基础上,可以基于M*N*H个三维点生成三维重建图像,三维重建图像即点云数据,并输出三维重建图像。或者,也可以将该三维重建图像投影到某个摄像机上得到深度图像,并输出深度图像。
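步骤309中"将三维重建图像投影到某个摄像机上得到深度图像"的过程，可以用如下 Python 草图示意（内参为假设值，渲染时每个像素保留最近的深度值，空像素置0仅作示意）：

```python
import numpy as np

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # 假设的相机内参

def points_to_depth_image(points, K, shape=(480, 640)):
    """把重建得到的三维点投影到某个摄像机，逐像素保留最小的
    深度值z，得到一幅简单的深度图像。"""
    depth = np.full(shape, np.inf)
    for P in points:
        uvw = K @ P
        u, v = int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))
        if 0 <= v < shape[0] and 0 <= u < shape[1]:
            depth[v, u] = min(depth[v, u], P[2])   # 多点投到同一像素时取最近者
    depth[np.isinf(depth)] = 0.0                   # 没有点投到的像素记为0
    return depth

cloud = [np.array([0.0, 0.0, 2.0]), np.array([0.1, 0.0, 2.0])]
depth = points_to_depth_image(cloud, K)
```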
应用场景2:三维成像设备可以包括第一摄像机、第二摄像机、第三摄像机、处理器、多线激光器和机械振镜,第一摄像机为左相机、第二摄像机为右相机、第三摄像机为辅助相机,或者,第一摄像机为右相机,第二摄像机为左相机、第三摄像机为辅助相机。可以预先获取三维成像设备对应的相机标定参数和第三标定方程,并为三维成像设备存储该相机标定参数和该第三标定方程。
示例性的,相机标定参数包括第一摄像机的相机内参、第二摄像机的相机内参、第三摄像机的相机内参、第一摄像机与第二摄像机之间的相机外参(如位置关系,如旋转矩阵和平移矩阵等)、第一摄像机与第三摄像机之间的相机外参(如位置关系)、第二摄像机与第三摄像机之间的相机外参(如位置关系)。
关于相机标定参数的获取方式,可以参见应用场景1,在此不再赘述。
示例性的，第三标定方程表示第二摄像机采集图像（记为图像s2）中像素点与三维重建点之间的函数关系。假设多线激光器将N条线结构光投射到被测物体，且机械振镜的角度一共是M种角度，则需要一共获取N*M个第三标定方程，第三标定方程均可以是光平面方程。关于第三标定方程的获取方式，可以参见应用场景1中第二标定方程的获取方式，在此不再赘述。
示例性的,针对多线激光器投射的N条线结构光,可以按照实际顺序对每条线结构光进行序号标记,如按照从左到右(或从右到左)的顺序,N条线结构光(即激光线)依次标记为1、2、3、…、N,便于对各线结构光进行匹配和索引。
在上述应用场景2下,参见图5所示,本实施例的图像重建方法可以包括步骤501-509。
步骤501、在多线激光器将N条线结构光投射到被测物体时,获取第一摄像机采集的被测物体的第一原始图像,获取第二摄像机采集的被测物体的第二原始图像,获取第三摄像机采集的被测物体的第三原始图像。其中,第一原始图像的采集时刻、第二原始图像的采集时刻和第三原始图像的采集时刻可以相同。
示例性的,该第一原始图像可以包括N条线结构光对应的N个第一光条区域,该第二原始图像可以包括N条线结构光对应的N个第二光条区域,该第三原始图像可以包括N条线结构光对应的N个第三光条区域。
步骤502、对第一原始图像、第二原始图像和第三原始图像进行三目校正,得到第一原始图像对应的第一目标图像、第二原始图像对应的第二目标图像和第三原始图像对应的第三目标图像。其中,三目校正用于使被测物体上的同一个位置点,在第一目标图像、第二目标图像和第三目标图像中具有相同像素高度,也就是说,针对被测物体上的同一个位置点,通过三目校正,可以将第一原始图像、第二原始图像和第三原始图像校正到同样像素高度。
示例性的,该第一目标图像可以包括N条线结构光对应的N个第一光条区域,该第二目标图像可以包括N条线结构光对应的N个第二光条区域,该第三目标图像可以包括N条线结构光对应的N个第三光条区域。
步骤503、确定该第一目标图像中每个第一光条区域对应的第一光条中心线,确定该第二目标图像中每个第二光条区域对应的第二光条中心线,并确定该第三目标图像中每个第三光条区域对应的第三光条中心线。
示例性的,可以采用光条中心线提取算法确定光条区域对应的光条中心线,例如,可以使用高斯拟合、COG或者STEGER等方式,提取光条区域的每一行的中心点,从而得到光条中心线,本实施例中对此过程不做限制。
步骤504、针对每条线结构光,从所有第一光条中心线确定线结构光对应的目标第一光条中心线,从所有第二光条中心线确定线结构光对应的目标第二光条中心线,从所有第三光条中心线确定线结构光对应的目标第三光条中心线。
步骤505、基于该线结构光对应的目标第一光条中心线和目标第三光条中心线,针对目标第一光条中心线中每个第一像素点,从目标第三光条中心线中确定与该第一像素点具有相同像素高度的第三像素点,并基于该第一像素点、该第三像素点和相机标定参数确定该第一像素点对应的目标三维重建点。
示例性的,针对目标第一光条中心线中每个第一像素点,从目标第三光条中心线中确定与该第一像素点具有相同像素高度的像素点。若确定的像素点为一个,则将该像素点选取为第三像素点。若确定的像素点为至少两个,则确定至少两个像素点与该第一像素点之间的重投影误差,对此重投影误差的确定方式不做限制,并将最小重投影误差对应的像素点选取为第三像素点。
示例性的，该第一像素点和该第三像素点可以组成一个关键点对，即该关键点对包括第一目标图像中的第一像素点和第三目标图像中的第三像素点，且该第一像素点和该第三像素点是被测物体上同一位置点对应的像素点。在此基础上，可以采用三角化方式确定该关键点对对应的三维点，这个三维点就是该第一像素点对应的目标三维重建点。比如说，通过第一摄像机的相机内参对第一像素点进行畸变校正，将畸变校正后的像素点转换为第一齐次坐标；通过第三摄像机的相机内参对第三像素点进行畸变校正，将畸变校正后的像素点转换为第三齐次坐标。基于第一齐次坐标、第三齐次坐标、第一摄像机的相机内参、第三摄像机的相机内参、第一摄像机与第三摄像机之间的相机外参（如位置关系等），利用三角化方式确定该关键点对对应的三维点，对此过程不再赘述。
综上所述,针对目标第一光条中心线中每个第一像素点,确定该第一像素点对应的目标三维重建点,即得到第一像素点和目标三维重建点的对应关系。
步骤506、针对每个第一像素点对应的目标三维重建点,可以基于该线结构光对应的第三标定方程将该目标三维重建点转换为第二目标图像中的投影像素点,这个投影像素点也就是该第一像素点对应的投影像素点。
比如说,针对每条线结构光,从N*M个第三标定方程中选取与该线结构光对应的第三标定方程。由于第三标定方程表示第二目标图像中像素点与三维重建点之间的函数关系,因此,将目标第一光条中心线中的第一像素点转换为目标三维重建点之后,可以基于第三标定方程将目标三维重建点转换为投影像素点。
综上所述,针对目标第一光条中心线中的每个第一像素点,可以将该第一像素点投影到第二目标图像,得到该第一像素点对应的投影像素点。
步骤507、针对每个第一像素点,在得到第一像素点对应的投影像素点之后,从目标第二光条中心线中选取与该投影像素点对应的第二像素点。
比如说,可以从目标第二光条中心线中确定与该投影像素点具有相同像素高度的像素点;若确定的像素点为一个,则将该像素点选取为第二像素点;若确定的像素点为至少两个,则确定至少两个像素点与该投影像素点之间的重投影误差,将最小重投影误差对应的像素点选取为第二像素点。
显然,该第一像素点和该第二像素点可以组成一个关键点对,且该第一像素点和该第二像素点是被测物体上同一位置点对应的像素点,第一像素点是第一目标图像中的像素点,第二像素点是第二目标图像中的像素点。
步骤508、基于关键点对和相机标定参数确定该关键点对对应的三维点。比如说,通过第一摄像机的相机内参对第一像素点进行畸变校正,将畸变校正后的像素点转换为第一齐次坐标;通过第二摄像机的相机内参对第二像素点进行畸变校正,将畸变校正后的像素点转换为第二齐次坐标。基于第一齐次坐标、第二齐次坐标、第一摄像机的相机内参、第二摄像机的相机内参、第一摄像机与第二摄像机之间的相机外参,利用三角化方式确定该关键点对对应的三维点。
综上所述,针对每个关键点对,可以得到该关键点对对应的三维点。
步骤509、基于多个关键点对对应的三维点生成三维重建图像。
本申请实施例中，多线激光器每次将N条线结构光投射到被测物体，使得摄像机每次采集的线结构光图像包括N个光条中心线，该线结构光图像等价于被测物体的N个位置的线结构光图像，从而能够减少线结构光图像的采集次数，减少三维重建的时间。多线激光器对被测物体表面进行扫描时，能够快速获取被测物体的整个轮廓数据，输出被测物体的三维图像信息，提高检测精度和检测速度。通过使用第一摄像机和第二摄像机同时采集线结构光图像，基于两个摄像机采集的线结构光图像，就能够利用三角测量法获取被测物体的三维信息，即可以得到被测物体的深度信息，从而利用单次采集图像获取多线激光上的深度信息，能够将单次扫描效率提升N倍，能够快速实现对被测物体整个轮廓的全幅扫描。采用多线激光三角测量法，一次获取多条激光线上的深度信息，采用机械振镜实现激光线间距的小角度扫描，即可完成被测物体表面整个轮廓的高精度扫描。由于激光对比度高，准直性好，景深比较大，三维检测应用时对材质适应性更好，检测精度更高，因此，可以适合于机器视觉领域三维测量应用和工业自动化领域三维抓取和定位应用。
基于与上述方法同样的构思,本申请实施例中提出一种图像重建装置,应用于三维成像设备,所述三维成像设备包括第一摄像机、第二摄像机和多线激光器,参见图6所示,为所述装置的结构示意图,所述装置可以包括获取模块61、确定模块62和生成模块63。
获取模块61用于在所述多线激光器将N条线结构光投射到被测物体时,获取所述第一摄像机采集的所述被测物体的第一原始图像,并获取所述第二摄像机采集的所述被测物体的第二原始图像;其中,N为大于1的正整数。
确定模块62用于确定第一原始图像对应的第一目标图像和第二原始图像对应的第二目标图像,其中,所述第一目标图像包括所述N条线结构光对应的N个第一光条区域,所述第二目标图像包括所述N条线结构光对应的N个第二光条区域;确定所述第一目标图像中每个第一光条区域对应的第一光条中心线,并确定所述第二目标图像中每个第二光条区域对应的第二光条中心线;基于所有第一光条中心线和所有第二光条中心线确定多个关键点对,其中,针对每个关键点对,该关键点对包括第一光条中心线中第一像素点和第二光条中心线中第二像素点,所述第一像素点和所述第二像素点是被测物体上同一位置点对应的像素点;基于该关键点对和相机标定参数确定该关键点对对应的三维点。
生成模块63用于基于所述多个关键点对对应的三维点生成被测物体的三维重建图像。
示例性的,所述确定模块62确定第一原始图像对应的第一目标图像和第二原始图像对应的第二目标图像时具体用于:将第一原始图像确定为第一目标图像,并将第二原始图像确定为第二目标图像;或者,对第一原始图像和第二原始图像进行双目校正,得到所述第一原始图像对应的第一目标图像和所述第二原始图像对应的第二目标图像,其中,所述双目校正用于使所述被测物体上的同一个位置点,在所述第一目标图像和所述第二目标图像中具有相同像素高度。
示例性的,所述确定模块62基于所有第一光条中心线和所有第二光条中心线确定多个关键点对时具体用于:针对每条线结构光,从所有第一光条中心线和所有第二光条中心线中确定该线结构光对应的目标第一光条中心线和目标第二光条中心线;针对所述目标第一光条中心线中每个第一像素点,将所述第一像素点投影到所述第二目标图像,得到所述第一像素点对应的投影像素点,并从所述目标第二光条中心线中选取与该投影像素点对应的第二像素点;然后,可以基于所述第一像素点和所述第二像素点生成关键点对。
示例性的,所述确定模块62从所述目标第二光条中心线中选取与该投影像素点对应的第二像素点时具体用于:从目标第二光条中心线中确定与该投影像素点具有相同像素高度的像素点;若确定的像素点为一个,将该像素点选取为第二像素点;若确定的像素点为至少两个,确定至少两个像素点与该投影像素点之间的重投影误差,将最小重投影误差对应的像素点选取为第二像素点。
示例性的，所述确定模块62将所述第一像素点投影到所述第二目标图像，得到所述第一像素点对应的投影像素点时具体用于：获取与该线结构光对应的第一标定方程和第二标定方程，第一标定方程表示第一目标图像中像素点与三维重建点之间的函数关系，第二标定方程表示第二目标图像中像素点与三维重建点之间的函数关系；基于所述第一标定方程将所述第一像素点转换为目标三维重建点；基于所述第二标定方程将所述目标三维重建点转换为所述投影像素点。
示例性的,所述三维成像设备还包括第三摄像机,所述获取模块61,还用于在多线激光器将N条线结构光投射到被测物体时,获取第三摄像机采集的所述被测物体的第三原始图像。所述确定模块62,还用于确定第三原始图像对应的第三目标图像,第三目标图像包括所述N条线结构光对应的N个第三光条区域;确定第三目标图像中每个第三光条区域对应的第三光条中心线;针对每条线结构光,从所有第三光条中心线中确定该线结构光对应的目标第三光条中心线。针对所述目标第一光条中心线中每个第一像素点,所述确定模块将所述第一像素点投影到第二目标图像,得到所述第一像素点对应的投影像素点时具体用于:从目标第三光条中心线中确定与所述第一像素点具有相同像素高度的第三像素点;基于所述第一像素点、所述第三像素点和相机标定参数确定目标三维重建点;基于第三标定方程将所述目标三维重建点转换为所述投影像素点,其中,所述第三标定方程表示第二目标图像中像素点与三维重建点之间的函数关系。
示例性的,所述相机标定参数包括第一摄像机的相机内参、第二摄像机的相机内参、所述第一摄像机与所述第二摄像机之间的相机外参;所述确定模块62基于关键点对和相机标定参数确定该关键点对对应的三维点时具体用于:通过第一摄像机的相机内参对所述第一像素点进行畸变校正,并将畸变校正后的像素点转换为第一齐次坐标;通过第二摄像机的相机内参对所述第二像素点进行畸变校正,并将畸变校正后的像素点转换为第二齐次坐标;基于第一齐次坐标、第二齐次坐标、第一摄像机的相机内参、第二摄像机的相机内参和所述相机外参,利用三角化方式确定该关键点对对应的三维点。
基于与上述方法同样的构思,本申请实施例中提出一种三维成像设备,所述三维成像设备可以包括处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;所述处理器用于执行机器可执行指令,以实现本申请上述实施例公开的图像重建方法。
基于与上述方法同样的构思,本申请实施例还提供一种机器可读存储介质,所述机器可读存储介质上存储有若干计算机指令,所述计算机指令被处理器执行时,能够使所述处理器实现本申请上述实施例公开的图像重建方法。
其中，上述机器可读存储介质可以是任何电子、磁性、光学或其它物理存储装置，可以包含或存储信息，如可执行指令、数据，等等。例如，机器可读存储介质可以是：RAM(Random Access Memory，随机存取存储器)、易失存储器、非易失性存储器、闪存、存储驱动器（如硬盘驱动器）、固态硬盘、任何类型的存储盘（如光盘、DVD等），或者类似的存储介质，或者它们的组合。
上述实施例阐明的系统、装置、模块或单元，具体可以由实体实现，或者由具有某种功能的产品来实现。一种典型的实现设备为计算机，计算机的具体形式可以是个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件收发设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任意几种设备的组合。
为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本申请时可以把各单元的功能在同一个或多个软件和/或硬件中实现。
本领域内的技术人员应明白，本申请的实施例可提供为方法、系统、或计算机程序产品。因此，本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且，本申请实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质（包括但不限于磁盘存储器、CD-ROM、光学存储器等）上实施的计算机程序产品的形式。
本申请是参照根据本申请实施例的方法、设备（系统）、和计算机程序产品的流程图和/或方框图来描述的。应理解可以由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其它可编程数据处理设备的处理器以产生一个机器，使得通过计算机或其它可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
而且,这些计算机程序指令也可以存储在能引导计算机或其它可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或者多个流程和/或方框图一个方框或者多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其它可编程数据处理设备上,使得在计算机或者其它可编程数据处理设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其它可编程数据处理设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述仅为本申请的实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。

Claims (15)

  1. 一种图像重建方法,应用于三维成像设备,所述三维成像设备包括第一摄像机、第二摄像机和多线激光器,所述方法包括:
    在所述多线激光器将N条线结构光投射到被测物体时,获取所述第一摄像机采集的所述被测物体的第一原始图像,并获取所述第二摄像机采集的所述被测物体的第二原始图像,其中,N为大于1的正整数;确定所述第一原始图像对应的第一目标图像和所述第二原始图像对应的第二目标图像,其中,所述第一目标图像包括所述N条线结构光对应的N个第一光条区域,所述第二目标图像包括所述N条线结构光对应的N个第二光条区域;
    确定所述第一目标图像中每个第一光条区域对应的第一光条中心线,并确定所述第二目标图像中每个第二光条区域对应的第二光条中心线;
    基于所有第一光条中心线和所有第二光条中心线确定多个关键点对,其中,针对每个关键点对,该关键点对包括第一光条中心线中第一像素点和第二光条中心线中第二像素点,所述第一像素点和所述第二像素点是所述被测物体上同一位置点对应的像素点;
    基于关键点对和相机标定参数确定该关键点对对应的三维点;
    基于所述多个关键点对对应的三维点生成所述被测物体的三维重建图像。
  2. 根据权利要求1所述的方法,其中,所述确定所述第一原始图像对应的第一目标图像和所述第二原始图像对应的第二目标图像,包括:
    将所述第一原始图像确定为所述第一目标图像,并将所述第二原始图像确定为所述第二目标图像;或者,
    对所述第一原始图像和所述第二原始图像进行双目校正,得到所述第一原始图像对应的第一目标图像和所述第二原始图像对应的第二目标图像,其中,所述双目校正用于使所述被测物体上的同一个位置点,在所述第一目标图像和所述第二目标图像中具有相同像素高度。
  3. 根据权利要求1或2所述的方法,其中,所述基于所有第一光条中心线和所有第二光条中心线确定多个关键点对,包括:
    针对每条线结构光,从所有第一光条中心线和所有第二光条中心线中确定该线结构光对应的目标第一光条中心线和目标第二光条中心线;
    针对所述目标第一光条中心线中每个第一像素点,将所述第一像素点投影到所述第二目标图像,得到所述第一像素点对应的投影像素点,并从所述目标第二光条中心线中选取与该投影像素点对应的第二像素点;
    基于所述第一像素点和所述第二像素点生成关键点对。
  4. 根据权利要求3所述的方法,其中,所述从所述目标第二光条中心线中选取与该投影像素点对应的第二像素点,包括:
    从所述目标第二光条中心线中确定与该投影像素点具有相同像素高度的像素点;
    若确定的像素点为一个,则将该像素点选取为所述第二像素点;
    若确定的像素点为至少两个,则确定至少两个像素点与该投影像素点之间的重投影误差,将所述至少两个像素点中最小重投影误差对应的像素点选取为所述第二像素点。
  5. 根据权利要求3所述的方法,其中,所述将所述第一像素点投影到所述第二目标图像,得到所述第一像素点对应的投影像素点,包括:
    获取与该线结构光对应的第一标定方程和第二标定方程,其中,所述第一标定方程表示所述第一目标图像中像素点与三维重建点之间的函数关系,所述第二标定方程表示所述第二目标图像中像素点与三维重建点之间的函数关系;
    基于所述第一标定方程将所述第一像素点转换为目标三维重建点;
    基于所述第二标定方程将所述目标三维重建点转换为所述投影像素点。
  6. 根据权利要求3所述的方法，其中，所述三维成像设备还包括第三摄像机，所述方法还包括：在所述多线激光器将所述N条线结构光投射到所述被测物体时，获取所述第三摄像机采集的所述被测物体的第三原始图像，确定所述第三原始图像对应的第三目标图像，其中，所述第三目标图像包括所述N条线结构光对应的N个第三光条区域；确定所述第三目标图像中每个第三光条区域对应的第三光条中心线；针对每条线结构光，从所有第三光条中心线中确定该线结构光对应的目标第三光条中心线；
    其中,针对所述目标第一光条中心线中每个第一像素点,所述将所述第一像素点投影到所述第二目标图像,得到所述第一像素点对应的投影像素点,包括:从所述目标第三光条中心线中确定与所述第一像素点具有相同像素高度的第三像素点;基于所述第一像素点、所述第三像素点和相机标定参数确定目标三维重建点;基于第三标定方程将所述目标三维重建点转换为所述投影像素点,其中,所述第三标定方程表示所述第二目标图像中像素点与三维重建点之间的函数关系。
  7. 根据权利要求1所述的方法,其中,
    所述相机标定参数包括所述第一摄像机的相机内参、所述第二摄像机的相机内参、所述第一摄像机与所述第二摄像机之间的相机外参;
    所述基于关键点对和相机标定参数确定该关键点对对应的三维点,包括:
    通过所述第一摄像机的相机内参对所述第一像素点进行畸变校正,并将畸变校正后的像素点转换为第一齐次坐标;通过所述第二摄像机的相机内参对所述第二像素点进行畸变校正,并将畸变校正后的像素点转换为第二齐次坐标;
    基于所述第一齐次坐标、所述第二齐次坐标、所述第一摄像机的相机内参、所述第二摄像机的相机内参和所述相机外参,利用三角化方式确定该关键点对对应的三维点。
  8. 一种图像重建装置,应用于三维成像设备,所述三维成像设备包括第一摄像机、第二摄像机和多线激光器,所述装置包括:
    获取模块,用于在所述多线激光器将N条线结构光投射到被测物体时,获取所述第一摄像机采集的所述被测物体的第一原始图像,并获取所述第二摄像机采集的所述被测物体的第二原始图像,其中,N为大于1的正整数;
    确定模块,用于确定所述第一原始图像对应的第一目标图像和所述第二原始图像对应的第二目标图像,其中,所述第一目标图像包括所述N条线结构光对应的N个第一光条区域,所述第二目标图像包括所述N条线结构光对应的N个第二光条区域;确定所述第一目标图像中每个第一光条区域对应的第一光条中心线,并确定所述第二目标图像中每个第二光条区域对应的第二光条中心线;基于所有第一光条中心线和所有第二光条中心线确定多个关键点对,其中,针对每个关键点对,该关键点对包括第一光条中心线中第一像素点和第二光条中心线中第二像素点,所述第一像素点和所述第二像素点是所述被测物体上同一位置点对应的像素点;基于该关键点对和相机标定参数确定该关键点对对应的三维点;
    生成模块,用于基于所述多个关键点对对应的三维点生成所述被测物体的三维重建图像。
  9. 根据权利要求8所述的装置,其中,所述确定模块确定所述第一原始图像对应的第一目标图像和所述第二原始图像对应的第二目标图像时具体用于:
    将所述第一原始图像确定为所述第一目标图像,并将所述第二原始图像确定为所述第二目标图像;或者,
    对所述第一原始图像和所述第二原始图像进行双目校正,得到所述第一原始图像对应的第一目标图像和所述第二原始图像对应的第二目标图像,其中,所述双目校正用于使所述被测物体上的同一个位置点,在所述第一目标图像和所述第二目标图像中具有相同像素高度。
  10. 根据权利要求8或9所述的装置,其中,所述确定模块基于所有第一光条中心线和所有第二光条中心线确定多个关键点对时具体用于:
    针对每条线结构光,从所有第一光条中心线和所有第二光条中心线中确定该线结构光对应的目标第一光条中心线和目标第二光条中心线;
    针对所述目标第一光条中心线中每个第一像素点,将所述第一像素点投影到所述第二目标图像,得到所述第一像素点对应的投影像素点,并从所述目标第二光条中心线中选取与该投影像素点对应的第二像素点;
    基于所述第一像素点和所述第二像素点生成关键点对。
  11. 根据权利要求10所述的装置,其中,所述确定模块从所述目标第二光条中心线中选取与该投影像素点对应的第二像素点时具体用于:
    从所述目标第二光条中心线中确定与该投影像素点具有相同像素高度的像素点;
    若确定的像素点为一个,则将该像素点选取为所述第二像素点;
    若确定的像素点为至少两个,则确定至少两个像素点与该投影像素点之间的重投影误差,将所述至少两个像素点中最小重投影误差对应的像素点选取为所述第二像素点。
  12. 根据权利要求10所述的装置,其中,所述确定模块将所述第一像素点投影到所述第二目标图像,得到所述第一像素点对应的投影像素点时具体用于:
    获取与该线结构光对应的第一标定方程和第二标定方程,其中,所述第一标定方程表示所述第一目标图像中像素点与三维重建点之间的函数关系,所述第二标定方程表示所述第二目标图像中像素点与三维重建点之间的函数关系;
    基于所述第一标定方程将所述第一像素点转换为目标三维重建点;
    基于所述第二标定方程将所述目标三维重建点转换为所述投影像素点。
  13. 根据权利要求10所述的装置,其中,所述三维成像设备还包括第三摄像机,
    所述获取模块,还用于在所述多线激光器将所述N条线结构光投射到所述被测物体时,获取所述第三摄像机采集的所述被测物体的第三原始图像;
    所述确定模块,还用于确定所述第三原始图像对应的第三目标图像,其中,所述第三目标图像包括所述N条线结构光对应的N个第三光条区域;确定所述第三目标图像中每个第三光条区域对应的第三光条中心线;针对每条线结构光,从所有第三光条中心线中确定该线结构光对应的目标第三光条中心线;
    针对所述目标第一光条中心线中每个第一像素点,所述确定模块将所述第一像素点投影到所述第二目标图像,得到所述第一像素点对应的投影像素点时具体用于:从所述目标第三光条中心线中确定与所述第一像素点具有相同像素高度的第三像素点;基于所述第一像素点、所述第三像素点和相机标定参数确定目标三维重建点;基于第三标定方程将所述目标三维重建点转换为所述投影像素点,其中,所述第三标定方程表示所述第二目标图像中像素点与三维重建点之间的函数关系。
  14. 根据权利要求8所述的装置,其中,
    所述相机标定参数包括所述第一摄像机的相机内参、所述第二摄像机的相机内参、所述第一摄像机与所述第二摄像机之间的相机外参;
    所述确定模块基于关键点对和相机标定参数确定该关键点对对应的三维点时具体用于:
    通过所述第一摄像机的相机内参对所述第一像素点进行畸变校正,并将畸变校正后的像素点转换为第一齐次坐标;通过所述第二摄像机的相机内参对所述第二像素点进行畸变校正,并将畸变校正后的像素点转换为第二齐次坐标;
    基于所述第一齐次坐标、所述第二齐次坐标、所述第一摄像机的相机内参、所述第二摄像机的相机内参和所述相机外参,利用三角化方式确定该关键点对对应的三维点。
  15. 一种三维成像设备,包括:处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;所述处理器用于执行所述机器可执行指令,以实现权利要求1-7任一所述的方法。
PCT/CN2023/089562 2022-04-28 2023-04-20 图像重建方法和装置及设备 WO2023207756A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210470232.8 2022-04-28
CN202210470232.8A CN114898038A (zh) 2022-04-28 2022-04-28 一种图像重建方法、装置及设备

Publications (1)

Publication Number Publication Date
WO2023207756A1 true WO2023207756A1 (zh) 2023-11-02

Family

ID=82718895

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/089562 WO2023207756A1 (zh) 2022-04-28 2023-04-20 图像重建方法和装置及设备

Country Status (2)

Country Link
CN (1) CN114898038A (zh)
WO (1) WO2023207756A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114898038A (zh) * 2022-04-28 2022-08-12 杭州海康机器人技术有限公司 一种图像重建方法、装置及设备
CN117541730B (zh) * 2024-01-08 2024-03-29 清华四川能源互联网研究院 一种水下目标的三维图像重建方法及***

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103900494A (zh) * 2014-03-31 2014-07-02 中国科学院上海光学精密机械研究所 用于双目视觉三维测量的同源点快速匹配方法
WO2018103152A1 (zh) * 2016-12-05 2018-06-14 杭州先临三维科技股份有限公司 一种三维数字成像传感器、三维扫描***及其扫描方法
CN108267097A (zh) * 2017-07-17 2018-07-10 杭州先临三维科技股份有限公司 基于双目三维扫描***的三维重构方法和装置
US20180356213A1 (en) * 2016-09-14 2018-12-13 Hangzhou Scantech Co., Ltd Three-dimensional sensor system and three-dimensional data acquisition method
CN110009687A (zh) * 2019-03-14 2019-07-12 深圳市易尚展示股份有限公司 基于三相机的彩色三维成像***及其标定方法
CN114782632A (zh) * 2022-04-28 2022-07-22 杭州海康机器人技术有限公司 一种图像重建方法、装置及设备
CN114820939A (zh) * 2022-04-28 2022-07-29 杭州海康机器人技术有限公司 一种图像重建方法、装置及设备
CN114898038A (zh) * 2022-04-28 2022-08-12 杭州海康机器人技术有限公司 一种图像重建方法、装置及设备


Also Published As

Publication number Publication date
CN114898038A (zh) 2022-08-12


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23795189

Country of ref document: EP

Kind code of ref document: A1