CN113052889B - Depth calculation method and system - Google Patents

Depth calculation method and system

Info

Publication number
CN113052889B
CN113052889B (application CN202110314157.1A)
Authority
CN
China
Prior art keywords
spot
image
camera unit
spot image
speckle
Prior art date
Legal status
Active
Application number
CN202110314157.1A
Other languages
Chinese (zh)
Other versions
CN113052889A (en)
Inventor
兰富洋
李秋平
王兆民
杨鹏
黄源浩
肖振中
Current Assignee
Orbbec Inc
Original Assignee
Orbbec Inc
Priority date
Filing date
Publication date
Application filed by Orbbec Inc filed Critical Orbbec Inc
Priority to CN202110314157.1A priority Critical patent/CN113052889B/en
Publication of CN113052889A publication Critical patent/CN113052889A/en
Application granted granted Critical
Publication of CN113052889B publication Critical patent/CN113052889B/en
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/564 Depth or shape recovery from multiple images from contours
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/514 Depth or shape recovery from specularities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application is applicable to the field of image processing and relates to a depth calculation method comprising the following steps: acquiring a first spot image and a second spot image; acquiring first position information and second position information of the spot in each corresponding sub-region of the first spot image and the second spot image; calculating, based on the first position information and the second position information, the parallax between the spot in each region of the first spot image and the spot in the corresponding region of the second spot image; and calculating the depth of the target region from the parallax. Because the parallax is calculated from the per-region position information of the spots in the first spot image and the second spot image, the method is simple to compute and fast.

Description

Depth calculation method and system
Technical Field
The application belongs to the field of image processing, and particularly relates to a depth calculation method and a depth calculation system.
Background
In the prior art, the main algorithm for obtaining a parallax map from a binocular system is stereo matching, but the accuracy and speed of such algorithms constrain each other. The computation speed of the parallax map determines how well the binocular vision system can process the acquired information in real time. High-precision parallax-map algorithms mostly use global methods such as graph segmentation or belief propagation, which run slowly and cannot meet real-time requirements. In addition, for weakly textured targets, accurate depth information is often difficult to obtain because the binocular images lack feature points for matching; moreover, the projection module in existing binocular systems projects dense spots, which brings drawbacks such as high power consumption and low single-point optical power.
Disclosure of Invention
The embodiments of the present application provide a depth calculation method and a depth calculation system, which can solve the technical problems that depth calculation methods in the prior art are computationally complex and slow, are prone to errors when matching spot blocks, and cannot meet real-time requirements.
In a first aspect, an embodiment of the present application provides a depth calculation method, including:
Acquiring a first spot image and a second spot image, wherein the first spot image and the second spot image are images formed on the imaging areas of a first camera unit and a second camera unit, respectively, by regular spots that are projected by a projection module onto a target area and reflected back from it;
acquiring first position information and second position information of the spot in each corresponding region of the first spot image and the second spot image;
calculating, based on the first position information and the second position information, the parallax between the spot in each region of the first spot image and the spot in the corresponding region of the second spot image;
calculating the depth of the target area according to the parallax;
Wherein the imaging region has been previously divided into a plurality of regions, each of the regions including only one spot.
In a possible implementation of the first aspect, processing the first spot image to obtain the first position information of the spot in each region of the first spot image and processing the second spot image to obtain the second position information of the spot in each region of the second spot image includes:
Calculating coordinate information of the spot contour edge points of the spot in each region in the first spot image and the second spot image;
And calculating the center coordinates of the spots of each area based on the coordinate information of the spot contour edge points.
Wherein said calculating coordinate information of the spot contour edge points of the spot of each of the areas in the first spot image and the second spot image includes:
Performing Gaussian filtering on the first spot image and the second spot image using a Gaussian kernel of a given variance to obtain a filtered first spot image and a filtered second spot image;
performing Laplacian transformation on the filtered first spot image and the second spot image to obtain a Laplacian image of the filtered first spot image and a Laplacian image of the filtered second spot image;
Combining the filtered first speckle image and the Laplacian image of the filtered first speckle image to obtain a contour edge point solving equation of the first speckle image;
solving a contour edge point solving equation of the first spot image to obtain coordinate information of the contour edge point of the first spot image;
combining the filtered second speckle image and the Laplacian image of the filtered second speckle image to obtain a contour edge point solution equation of the second speckle image;
And solving a contour edge point solving equation of the second spot image to obtain coordinate information of the contour edge point of the second spot image.
Wherein the calculating the center coordinates of the spot of each region based on the coordinate information of the spot contour edge points includes:
Calculating center coordinates of spots of each of the areas of the first spot image based on the coordinate information of the first spot contour edge points;
And calculating the central coordinates of the spots of each area of the second spot image based on the coordinate information of the edge points of the outline of the second spot.
In a possible implementation manner of the first aspect, the calculating, based on the first position information and the second position information, the parallax between the spot in each region of the first spot image and the spot in the corresponding region of the second spot image includes:
calculating the parallax between the spot in each region of the first spot image and the spot in the corresponding region of the second spot image based on the center coordinates of the spot in each region of the first spot image and the center coordinates of the spot in each region of the second spot image.
In a second aspect, an embodiment of the present application provides a depth computing system, including:
a projection module for projecting a regular speckle pattern onto a target area;
A camera module including a first camera unit and a second camera unit, the imaging regions of the first camera unit and the second camera unit having been divided into a plurality of regions, the camera module being configured to acquire the spots reflected back from the target area and to generate a first spot image and a second spot image;
a control and processing module for controlling the projection module and the camera module and for calculating a parallax in each corresponding region from the first and second speckle images to further obtain a depth;
wherein each of the regions of the imaging region comprises only one spot.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
When calculating the depth, the imaging area is divided into regions so that the position information of the spot in each region of the first spot image and of the spot in each region of the second spot image can be determined directly, from which the parallax and thus the depth of the target area are calculated.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1a is a schematic diagram of a depth computing system according to an embodiment of the present application;
FIG. 1b is a schematic illustration of an imaging region divided into a plurality of regions according to an embodiment of the present application;
FIG. 2a is a flowchart illustrating steps of a depth calculation method according to an embodiment of the present application;
FIG. 2b is a schematic diagram of a first speckle image provided by an embodiment of the application;
FIG. 3 is a flowchart of method steps for processing a first speckle image and a second speckle image, according to one embodiment of the application;
FIG. 4 is a schematic diagram of calculating coordinate information of edge points of a contour of a spot according to an embodiment of the present application;
FIG. 5 is a schematic diagram of acquiring center coordinates of spots in each area according to an embodiment of the present application;
Fig. 6 is a schematic diagram of the principle of binocular structured light triangulation according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Reference in the specification to "an embodiment of the application" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "one embodiment of the application," "other embodiments of the application," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more, but not all, embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Furthermore, in the description of the present specification and the appended claims, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
In order to illustrate the technical scheme of the application, the following description is made by specific examples.
FIG. 1a is a schematic diagram of a depth computing system according to an embodiment of the present application. The system includes a projection module 110, a camera module, and a control and processing module 130. The projection module 110 is configured to project a regular spot pattern onto a target area. The camera module includes a first camera unit 121 and a second camera unit 122 whose imaging regions have been divided into a plurality of regions; the camera module captures the spot pattern reflected back from the target area and generates a first spot image and a second spot image. The control and processing module 130 controls the projection module 110 and the camera module on the one hand, and on the other hand calculates the parallax in each sub-region from the spot images and the depth of the target area from the parallax. The spot pattern projected by the projection module 110 is designed according to the regions into which the imaging area of the camera module is divided, so that each region contains only one spot.
In some embodiments, the projection module 110 includes a light source 111 and an optical assembly 112. The light source 111 may be an edge-emitting laser, a vertical-cavity surface-emitting laser, or the like, or may be a light source array composed of multiple light sources, and the emitted beam may be laser light, visible light, infrared light, ultraviolet light, etc. The embodiment of the application is illustrated with a vertical-cavity surface-emitting laser: the laser it emits has characteristics that ordinary light does not have, such as good monochromaticity, good coherence, good directivity and high brightness. Because of these characteristics, a speckle pattern is generated when the laser light illuminates a rough surface or passes through a projection body with uneven refraction. It should be noted that the light source may be a single-point laser or a regular array laser, which is not limited here.
In one embodiment, the optical assembly 112 includes an optical diffraction element and a lens element. The lens element receives the beam emitted by the light source and converges it onto the optical diffraction element, which receives the converged beam and projects a regular spot pattern toward the target area. It should be noted that the spot pattern formed in the embodiment of the present application is a sparse-lattice spot pattern; the number of lens elements can be designed according to the specific situation; and the optical diffraction element and the lens element may be separate elements or a single integrated element, without limitation.
It should be appreciated that, because the projection module 110 actively projects a sparse-lattice spot pattern onto the target area, it either reduces the overall power consumption of the projection module or, at the same power consumption, provides higher single-point optical power, and it obtains spot images with a higher signal-to-noise ratio, a longer detection distance and stronger noise immunity than the existing dense-lattice spot patterns.
In yet another embodiment, the optical assembly 112 comprises a microlens array composed of a plurality of microlens cells. When the size of a microlens cell is much smaller than the size of a single spot emitted by the light source, the light source may be a single-point laser; when the microlens cell size is similar or equal to the size of a single spot, the light source is a regular or irregular array laser. The microlens array receives the multiple beams emitted by the light source, shapes them into uniform spots and projects them onto the target area. It should be appreciated that the optical assembly 112 may also include a lens element that receives the shaped uniform spots from the microlens array, collimates them and projects them onto the target area, without limitation.
In another embodiment, when the optical assembly 112 includes only lens elements, the light source 111 is a regularly arranged array laser. The lens element receives the plurality of beams emitted by the array laser and collimates the beams into parallel beams for projection onto a target area to form a regular spot in the target area. It should be noted that the number of lens elements may be designed according to the specific circumstances.
In some embodiments, the camera module includes a first camera unit 121 and a second camera unit 122 located on the left and right sides of the projection module 110 (as shown in fig. 1a), where the first camera unit 121 may also be referred to as the left camera and the second camera unit 122 as the right camera. It should be noted that the camera module may also be a trinocular or multi-camera arrangement, and the positions of the cameras are not restricted here; they may be chosen according to the actual situation.
In one embodiment, the first camera unit 121 and the second camera unit 122 each comprise an image sensor whose imaging area is pre-divided into a plurality of regions. The image sensors receive at least part of the spots reflected back by objects in the target area and form the first spot image and the second spot image, from which the depth of the target area is further obtained. The image sensor may be a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, an avalanche diode (AD), a single-photon avalanche diode (SPAD), or the like; the embodiment of the present application does not limit the composition of the image sensor.
In one embodiment, the imaging region of the first camera unit 121 and the imaging region of the second camera unit 122 are divided based on the principle of binocular structured light triangulation, and the specific division of the regions is as follows:
First, the minimum working distance of the first camera unit 121 and the second camera unit 122 is determined. The working distance refers to the range of object distances in front of and behind the target area over which the first camera unit 121 and the second camera unit 122 can acquire sharp images.
In the embodiment of the application, the minimum working distance of the first camera unit 121 and the second camera unit 122 is determined to be z_min. Based on the principle of binocular structured-light triangulation, the maximum parallax width formed by the first camera unit 121 and the second camera unit 122 within the working distance is:

d_max = f · b / z_min

where d_max is the maximum parallax width formed by the first camera unit and the second camera unit within the working distance, f is the camera focal length, b is the baseline length, and z_min is the minimum working distance.
The imaging areas of the first camera unit 121 and the second camera unit 122 are then divided into a plurality of regions according to the maximum parallax width, as shown in fig. 1b. The region width is preferably d_max, and the region height is greater than the spot diameter D, to ensure that each region contains only one spot when the first camera unit 121 and the second camera unit 122 operate between the maximum and minimum working distances.
It should be noted that d_max and D are thresholds: the width and height of the regions are set to be greater than or equal to these thresholds so that each region contains only one spot, and no further limitation is imposed here.
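As an illustration of the region-partition rule just described, the short sketch below computes d_max from example camera parameters and derives a grid of regions. The function and parameter names (region_grid, f_pixels, baseline_m, z_min_m, spot_diameter_px) and the numeric values are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

def region_grid(image_width, image_height, f_pixels, baseline_m, z_min_m, spot_diameter_px):
    """Divide the imaging area into cells of width >= d_max and height > spot diameter D."""
    d_max = f_pixels * baseline_m / z_min_m      # maximum parallax (pixels) at the minimum working distance
    cell_w = int(np.ceil(d_max))                 # region width >= d_max
    cell_h = int(np.ceil(spot_diameter_px)) + 1  # region height > spot diameter D
    cols = image_width // cell_w
    rows = image_height // cell_h
    return rows, cols, cell_w, cell_h

# Example: 1280x800 sensor, f = 800 px, b = 0.05 m, z_min = 0.3 m, spot D = 9 px
print(region_grid(1280, 800, 800, 0.05, 0.3, 9))
```

The projected sparse spot pattern would then be laid out so that exactly one spot falls in each such cell over the whole working-distance range.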
In one embodiment, the control and processing module 130 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
It should be noted that, in other embodiments of the present application, the camera module may itself have computing capability, and the functions of the control and processing module 130 may be integrated into the camera module. The camera module may include, but is not limited to, a processing unit, a storage unit, and a computer program stored in the storage unit and executable on it; when the processing unit executes the computer program, it implements the steps of the depth calculation method embodiments described below. Those skilled in the art will appreciate that a camera module comprising an image sensor, a lens unit, a processing unit, a storage unit and a computer program is merely an example and does not constitute a limitation of the camera module; it may include more or fewer components, combine certain components, or use different components, for example an input-output device, a network access device, and the like. The processing unit in the camera may then calculate the depth of the target area.
The present document does not limit the specific composition of the depth computing system, which may include more or fewer components than shown in fig. 1a, combine certain components, or use different components. Fig. 1a is merely an exemplary depiction and should not be construed as a specific limitation of the present application.
In summary, in the depth computing system provided by the embodiment of the application, the projection module actively projects a regular, sparse spot pattern. Compared with existing systems that actively project a dense spot pattern, this reduces the total power consumption of the projection module, or provides higher single-point optical power at the same power consumption, yields spot images with a higher signal-to-noise ratio, a longer detection distance and stronger noise immunity, and, compared with existing passive triangulation techniques, guarantees that the measured object carries sufficient feature points. The first and second camera units capture at least part of the spots reflected by objects in the target area and form the first and second spot images on their imaging regions. Because the imaging regions are divided into a plurality of regions, the parallax can be calculated directly from the per-region positions of the spots in the first and second spot images; the computation is simple and fast, no matching between the spots of the two images is required, and the real-time requirement is met.
Fig. 2a shows a flowchart of the steps of a depth calculation method according to an embodiment of the present application. In this embodiment, the imaging areas of the first camera unit and the second camera unit are each divided into a plurality of regions, so that the parallax can be calculated from the corresponding spots in those regions and the depth obtained. As one implementation, the method in fig. 2a may be performed by the control and processing module 130 in fig. 1a; as another, it may be performed by the camera module. More specifically, the method includes S201 to S204:
s201: a first speckle image and a second speckle image are acquired.
In the embodiment of the present application, the first spot image and the second spot image are obtained by projecting coded laser light, i.e. a sparse-lattice spot pattern, onto objects in the target area through the projection module. Preferably, the radius of each spot in the projected pattern is D/2. The laser spots are projected onto the target area and reflected back to the first camera unit and the second camera unit, respectively, forming the first spot image in the regions of the imaging area of the first camera unit (as shown in fig. 2b) and the second spot image in the regions of the imaging area of the second camera unit.
S202: first and second position information of the spot in each of the areas in the first and second spot images are acquired. The blobs of each of the regions of the first blob image correspond to the blobs of each of the regions of the second blob image.
In one embodiment, after the first spot image and the second spot image are obtained, they are processed separately: the first spot image is processed to obtain the first position information, and the second spot image is processed to obtain the second position information. For the specific processing steps refer to fig. 3, which is a flowchart of the method for processing the first and second spot images according to an embodiment of the present application, comprising S301 to S302.
S301, calculating coordinate information of the spot outline edge points of the spots of each area in the first spot image and the second spot image.
The embodiment of the application may detect the position information of the spot in each region using a blob detection algorithm such as the Laplacian of Gaussian (LoG) algorithm, the SURF algorithm or the SIFT algorithm, and the application is not limited in this respect.
Fig. 4 shows an embodiment of S301 in fig. 3. In one embodiment, the position information of the spots in the respective regions is detected using the Laplacian of Gaussian algorithm, which more specifically includes S401 to S406:
S401, gaussian filtering is conducted on the first spot image and the second spot image by means of Gaussian verification of variance, and filtered first spot image and filtered second spot image are obtained.
Specifically, a Gaussian kernel G_σ(x, y) with variance σ is used to filter the spot image I(x, y); the image may be the first spot image or the second spot image, and the filtering method is the same for both, so the first spot image is taken as the example. Gaussian filtering the first and second spot images suppresses Gaussian noise. The filtered first spot image is therefore:

L_σ = I(x, y) * G_σ(x, y)

where I denotes the gray values of the first spot image and * denotes convolution. It should be noted that the variance σ may be chosen according to the spot radius D/2 and is not limited here.
S402, a Laplacian transformation is performed on the filtered first spot image and the filtered second spot image to obtain a Laplacian image of the filtered first spot image and a Laplacian image of the filtered second spot image.
Specifically, the Laplacian image of the Gaussian-filtered first spot image is:

∇²L_σ = ∂²L_σ/∂x² + ∂²L_σ/∂y²

It should be noted that the Laplacian transformation of the filtered second spot image is performed in the same way as that of the filtered first spot image and is not repeated here.
S403, combining the filtered first spot image and the Laplacian image of the filtered first spot image to obtain a contour edge point solving equation of the first spot image.
Specifically, combining the filtered first spot image with its Laplacian image gives the contour edge point solving equation of the first spot image:

∇²L_σ(x, y) = ∇²[G_σ(x, y) * I(x, y)] = [∇²G_σ(x, y)] * I(x, y)
S404, solving a contour edge point solving equation of the first spot image to obtain coordinate information of the contour edge point of the first spot image.
Specifically, the extreme points of the formula in S403 are taken as the spot contour edge points of each region in the first spot image; substituting a contour edge point back into the formula in S403 gives the gray value of the spot image at that point.
S405, combining the filtered second speckle image and the Laplacian image of the filtered second speckle image to obtain a contour edge point solution equation of the second speckle image.
The specific method of S405 and S403 is the same, and will not be described here again.
And S406, solving a contour edge point solving equation of the second spot image to obtain coordinate information of the contour edge point of the second spot image.
The specific method of S406 and S404 is the same, and will not be described here again.
It should be noted that, when the specific methods of S401 to S406 are used, there is no restriction on the order in which the contour edge point coordinates of the spots in the regions of the first spot image and of the corresponding regions of the second spot image are calculated: the first spot image may be processed before the second, the second before the first, or both may be processed simultaneously.
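A compact sketch of S401-S406 for one spot image is given below. The patent takes extreme points of the combined LoG expression as contour edge points; the sketch uses the closely related zero-crossing test on the Laplacian-of-Gaussian response, which is a common practical reading, so treat it as an illustrative variant rather than the authors' exact rule. The link between the Gaussian variance and the spot radius D/2 (sigma = radius / sqrt(2)) and all function and parameter names are likewise assumptions.

```python
import numpy as np
from scipy import ndimage

def log_contour_points(spot_image, spot_radius_px):
    sigma = spot_radius_px / np.sqrt(2.0)                               # assumed link between sigma and D/2
    smoothed = ndimage.gaussian_filter(spot_image.astype(float), sigma)  # L_sigma = G_sigma * I
    log_resp = ndimage.laplace(smoothed)                                 # Laplacian of the filtered image
    # zero-crossings of the LoG response approximate the spot contour
    sign = np.sign(log_resp)
    zc = np.zeros_like(sign, dtype=bool)
    zc[:-1, :] |= (sign[:-1, :] * sign[1:, :]) < 0
    zc[:, :-1] |= (sign[:, :-1] * sign[:, 1:]) < 0
    ys, xs = np.nonzero(zc)
    return np.stack([xs, ys], axis=1)                                    # (x, y) coordinates of contour edge points
```

In practice this would be run per region (or the returned points grouped by region) for both the first and the second spot image.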
S302, calculating the center coordinates of the spot of each area based on the coordinate information of the spot contour edge points.
The embodiment of the application can acquire the center coordinates of spots in each region by using a least square ellipse center quadratic fitting method or a least square Gaussian distribution fitting method and the like. The present application is not limited to the method of acquiring the center coordinates of the spots of the respective areas.
FIG. 5 is a schematic diagram of acquiring the center coordinates of the spot in each region according to an embodiment of the present application. In one embodiment, the center coordinates of the spot in each region are obtained using least-squares quadratic fitting of the ellipse center, which more specifically includes S501 to S502:
s501, calculating center coordinates of spots of each area of the first spot image based on coordinate information of edge points of the outline of the first spot.
Specifically, based on S301, the coordinate information of the contour edge points of the first spot and the gray values of the first spot image are obtained. In the embodiment of the present application, the center coordinate of the spot in the first spot image is assumed to be (0, 0) and the pixel size of the spot in the first spot image is m×n; a least-squares fit over these contour edge points and gray values is then solved to obtain the center coordinates of the spot in each region of the first spot image.
S502, calculating center coordinates of the spots of each area of the second spot image based on the coordinate information of the edge points of the outline of the second spot.
The method of calculating the center coordinates of the spots of each region of the second spot image is the same as the method of calculating the center coordinates of the spots of each region of the first spot image, and will not be described again here.
Using the specific methods of steps S501 to S502, the embodiment of the application can calculate the center coordinates of the spot in each region of the first spot image and the center coordinates of the spot in the corresponding region of the second spot image.
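For illustration, the sketch below fits a general conic to the contour edge points from the previous sketch and reads off its center. The patent names "least-squares ellipse center quadratic fitting" without spelling out the formulation, so this is one standard least-squares version and not necessarily the authors' exact implementation.

```python
import numpy as np

def spot_center_from_contour(points_xy):
    x, y = points_xy[:, 0].astype(float), points_xy[:, 1].astype(float)
    # Fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 in the least-squares sense
    A = np.stack([x * x, x * y, y * y, x, y, np.ones_like(x)], axis=1)
    _, _, vt = np.linalg.svd(A)
    a, b, c, d, e, _ = vt[-1]
    # The center is where the conic gradient vanishes: [2a b; b 2c] [xc, yc]^T = [-d, -e]^T
    M = np.array([[2 * a, b], [b, 2 * c]])
    xc, yc = np.linalg.solve(M, np.array([-d, -e]))
    return xc, yc
```

Running this once per region of each spot image yields the first and second position information (one spot center per region) used in the parallax step.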
In S202, there is no restriction on the order in which the first spot image and the second spot image are processed to obtain the first position information and the second position information: the first spot image may be processed first and then the second, the second first and then the first, or both may be processed simultaneously.
S203, calculating the parallax of the spots of each region of the first spot image and the spots of each region of the corresponding second spot image based on the first position information and the second position information.
Specifically, the center coordinates of the spot in each region of the first spot image and the center coordinates of the spot in the corresponding region of the second spot image are obtained from S202, and the parallax between these two center coordinates is calculated for each region.
S204, calculating the depth of the target area according to the parallax.
In one embodiment, the depth image is calculated from the first spot image and the second spot image based on the principle of binocular structured-light triangulation; please refer to fig. 6. Point P is the measured object in the target area. The intersection of the line between point P and the optical center C_L of the first camera unit 121 with the image plane of the first camera unit 121 is P_L, which is the projection of P on the first camera unit. Likewise, the intersection of the line between point P and the optical center C_R of the second camera unit 122 with the image plane of the second camera unit 122 is P_R, the projection of P on the second camera unit 122. The difference X_L − X_R in fig. 6 is the parallax d.
Let the distance from point P_L to point P_R be dis:

dis = b − (X_L − X_R)
As can be seen from the figure, triangle PC_LC_R is similar to triangle PP_LP_R, so that:

[b − (X_L − X_R)] / b = (z − f) / z
where X_L − X_R is the parallax d, b is the baseline length, i.e. the distance between the optical centers of the first camera unit 121 and the second camera unit 122, f is the focal length of the first camera unit 121 and the second camera unit 122, and z is the depth of the measured object in the target area.
Rearranging gives:

z = f · b / (X_L − X_R) = f · b / d
It can be seen that, for the control and processing module 130 in the depth computing system of fig. 1a to calculate the depth value of the measured object in the target area, the focal length f of the first camera unit 121 and the second camera unit 122, their baseline b, and the parallax d must be known. The focal length f and the baseline b are readily available, so only the parallax between the first spot image and the second spot image needs to be calculated to obtain the depth value. However, the above depth formula is an ideal model derived under ideal conditions; in practice it is affected by lens distortion, by whether the optical axes of the first camera unit and the second camera unit are parallel, and by other factors. Therefore, the cameras need to be calibrated in advance; camera calibration addresses the lens distortion and the non-parallelism of the optical axes of the first camera unit and the second camera unit.
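As a hedged illustration of the calibration and rectification step mentioned above, the sketch below rectifies an image pair with OpenCV before disparities are measured. The intrinsic matrices, distortion coefficients and extrinsics (K1, D1, K2, D2, R, T) are assumed to come from a separate offline calibration and are placeholders; this is a conventional approach, not necessarily the specific procedure of the patent.

```python
import cv2

def rectify_pair(img_left, img_right, K1, D1, K2, D2, R, T):
    size = (img_left.shape[1], img_left.shape[0])
    # Compute rectification transforms so that epipolar lines become horizontal
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map1l, map2l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    map1r, map2r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    # Undistort and rectify both spot images
    left_rect = cv2.remap(img_left, map1l, map2l, cv2.INTER_LINEAR)
    right_rect = cv2.remap(img_right, map1r, map2r, cv2.INTER_LINEAR)
    return left_rect, right_rect
```

After rectification, the ideal relation z = f·b/d can be applied to the spot centers measured in the rectified images.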
In one embodiment, the parallax between the center coordinate of the spot in a region of the first spot image and the center coordinate of the spot in the corresponding region of the second spot image is denoted d.
Substituting the parallax d into the formula derived from fig. 6:

z = f · b / d
where f is the focal length of the cameras, b is the baseline length between the centers of the first camera unit and the second camera unit, and d is the parallax calculated in S203.
The depth of the spot in that region is thus obtained. Taking either the first spot image or the second spot image as the reference image and traversing the spots in all regions yields the depth information of the whole target area.
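Putting S203 and S204 together, the sketch below computes the per-region parallax as the difference of the spot-center x-coordinates and converts it to depth with z = f·b/d. The container layout (dictionaries keyed by region index) and the names f_pixels and baseline_m are illustrative assumptions rather than details from the patent.

```python
def depths_from_centers(centers_left, centers_right, f_pixels, baseline_m):
    """centers_left / centers_right: dicts {region_index: (x, y)} of spot centers for matching regions."""
    depths = {}
    for region, (xl, _) in centers_left.items():
        xr, _ = centers_right[region]
        d = xl - xr                                      # per-region parallax (pixels)
        if d > 0:
            depths[region] = f_pixels * baseline_m / d   # z = f*b/d, in metres
    return depths
```

Because each region holds exactly one spot, no search or block matching between the two images is needed; the region index itself establishes the correspondence.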
In summary, the embodiment of the present application provides a depth calculation method: based on the imaging areas divided into regions, the center coordinates of the spot in each region of the first spot image and of the spot in the corresponding region of the second spot image are determined, and the depth of the target area is calculated from these center coordinates. Because the parallax is calculated from the per-region spot centers, the computation is simple and fast, no matching between the spots of the two spot images is required, and the real-time requirement is met.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the steps in the embodiment of the depth calculation method when being executed by a processor.
The present application provides a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that enable the implementation of the depth calculation method embodiments described above.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor it implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/terminal, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, according to legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not detailed or illustrated in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A depth calculation method, comprising:
Acquiring a first spot image and a second spot image, wherein the first spot image and the second spot image are images formed by respectively reflecting regular spots projected by a projection module to imaging areas of a first camera unit and a second camera unit through a target area; the spot pattern projected by the projection module is designed according to the area divided by the imaging area of the camera module, so that each area only comprises one spot;
acquiring first position information and second position information of spots in the area corresponding to each of the first spot image and the second spot image;
calculating a parallax of a spot of each of the first and second spot images corresponding to the region based on the first and second position information;
Calculating the depth of the target area according to the parallax; the depth image is calculated based on the principle of binocular structured light triangulation, the first spot image and the second spot image;
Wherein the imaging region has been previously divided into a plurality of regions, the dividing of the imaging region into the plurality of regions includes: determining minimum working distances of the first camera unit and the second camera unit, acquiring a maximum parallax width by using the minimum working distances, and dividing imaging areas of the first camera unit and the second camera unit into a plurality of areas according to the maximum parallax width so as to ensure that each area only comprises one spot when the first camera unit and the second camera unit work at the maximum working distances and the minimum working distances.
2. The depth calculation method of claim 1, wherein the acquiring first position information and second position information of the spot in each corresponding region of the first spot image and the second spot image comprises:
Calculating coordinate information of the spot contour edge points of each spot in the area in the first spot image and the second spot image;
And calculating the center coordinates of the spots of each area based on the coordinate information of the spot contour edge points.
3. The depth calculation method according to claim 2, wherein the calculating coordinate information of the spot contour edge points of the spot in each of the regions in the first spot image and the second spot image includes:
Performing Gaussian filtering on the first spot image and the second spot image using a Gaussian kernel of a given variance to obtain a filtered first spot image and a filtered second spot image;
performing Laplacian transformation on the filtered first spot image and the second spot image to obtain a Laplacian image of the filtered first spot image and a Laplacian image of the filtered second spot image;
Combining the filtered first speckle image and the Laplacian image of the filtered first speckle image to obtain a contour edge point solving equation of the first speckle image;
solving a contour edge point solving equation of the first spot image to obtain coordinate information of the contour edge point of the first spot image;
combining the filtered second speckle image and the Laplacian image of the filtered second speckle image to obtain a contour edge point solution equation of the second speckle image;
And solving a contour edge point solving equation of the second spot image to obtain coordinate information of the contour edge point of the second spot image.
4. The depth calculation method according to claim 2, wherein the calculating the center coordinates of the spot of each of the regions based on the coordinate information of the spot profile edge points includes:
Calculating center coordinates of spots of each of the areas of the first spot image based on the coordinate information of the first spot contour edge points;
And calculating the central coordinates of the spots of each area of the second spot image based on the coordinate information of the edge points of the outline of the second spot.
5. The depth calculation method of claim 4, wherein the calculating a parallax of the spot of each of the regions of the first spot image and the spot of each of the regions of the corresponding second spot image based on the first position information and the second position information comprises:
A parallax of the spot of each of the regions of the first speckle image and the spot of each of the regions of the corresponding second speckle image is calculated based on the center coordinates of the spot of each of the regions of the first speckle image and the center coordinates of the spot of each of the regions of the second speckle image.
6. A depth computing system, comprising:
a projection module for projecting a regular speckle pattern onto a target area; the spot pattern projected by the projection module is designed according to the area divided by the imaging area of the camera module, so that each area only comprises one spot;
A camera module including a first camera unit and a second camera unit, an imaging region of the first camera unit and the second camera unit having been divided into a plurality of regions for acquiring a spot image reflected back through a target region and generating a first spot image and a second spot image;
a control and processing module for controlling the projection module and the camera module and for calculating a parallax in each corresponding region from the first and second speckle images to further obtain a depth;
Wherein the imaging region has been previously divided into a plurality of regions, the dividing of the imaging region into the plurality of regions includes: determining minimum working distances of the first camera unit and the second camera unit, acquiring a maximum parallax width by using the minimum working distances, and dividing imaging areas of the first camera unit and the second camera unit into a plurality of areas according to the maximum parallax width so as to ensure that each area in the imaging areas only comprises one spot when the first camera unit and the second camera unit work at the maximum working distances and the minimum working distances;
the depth image is calculated based on the principle of binocular structured light triangulation, the first speckle image and the second speckle image.
7. The depth computing system of claim 6, wherein the projection module comprises a light source and an optical assembly, wherein the optical assembly comprises at least one of a lens element, an optical diffraction element, or a microlens array.
8. The depth computing system of claim 6, wherein the imaging region is divided into a width of the region that is greater than or equal to a maximum parallax value of the first camera unit and the second camera unit; the imaging area is divided into areas with the height larger than or equal to the diameter size of the spots projected by the projection module.
9. The depth computing system of claim 6, wherein the computing the disparity in the corresponding region from the first speckle image and the second speckle image comprises:
acquiring first position information and second position information of spots in the area corresponding to each of the first spot image and the second spot image;
based on the first position information and the second position information, a parallax of a spot of each of the first spot image and the second spot image corresponding to the region is calculated.
10. The depth computing system of claim 9, wherein the acquiring first and second position information for each of the first and second blob images corresponding to a blob in the region comprises:
Calculating coordinate information of the spot contour edge points of each spot in the area in the first spot image and the second spot image;
And calculating the center coordinates of the spots of each area based on the coordinate information of the spot contour edge points.
CN202110314157.1A 2021-03-24 2021-03-24 Depth calculation method and system Active CN113052889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110314157.1A CN113052889B (en) 2021-03-24 2021-03-24 Depth calculation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110314157.1A CN113052889B (en) 2021-03-24 2021-03-24 Depth calculation method and system

Publications (2)

Publication Number Publication Date
CN113052889A CN113052889A (en) 2021-06-29
CN113052889B true CN113052889B (en) 2024-05-31

Family

ID=76514911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110314157.1A Active CN113052889B (en) 2021-03-24 2021-03-24 Depth calculation method and system

Country Status (1)

Country Link
CN (1) CN113052889B (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496161A (en) * 2011-12-13 2012-06-13 浙江欧威科技有限公司 Method for extracting contour of image of printed circuit board (PCB)
CN104634276A (en) * 2015-02-12 2015-05-20 北京唯创视界科技有限公司 Three-dimensional measuring system, photographing device, photographing method, depth calculation method and depth calculation device
CN105160680A (en) * 2015-09-08 2015-12-16 北京航空航天大学 Design method of camera with no interference depth based on structured light
WO2017138210A1 (en) * 2016-02-12 2017-08-17 ソニー株式会社 Image pickup apparatus, image pickup method, and image pickup system
CN106875443A (en) * 2017-01-20 2017-06-20 深圳大学 The whole pixel search method and device of the 3-dimensional digital speckle based on grayscale restraint
CN107564091A (en) * 2017-07-26 2018-01-09 深圳大学 A kind of three-dimensional rebuilding method and device based on quick corresponding point search
CN109405765A (en) * 2018-10-23 2019-03-01 北京的卢深视科技有限公司 A kind of high accuracy depth calculation method and system based on pattern light
CN110657785A (en) * 2019-09-02 2020-01-07 清华大学 Efficient scene depth information acquisition method and system
CN111079772A (en) * 2019-12-18 2020-04-28 深圳科瑞技术股份有限公司 Image edge extraction processing method, device and storage medium
CN111145342A (en) * 2019-12-27 2020-05-12 山东中科先进技术研究院有限公司 Binocular speckle structured light three-dimensional reconstruction method and system
CN111561872A (en) * 2020-05-25 2020-08-21 中科微至智能制造科技江苏股份有限公司 Method, device and system for measuring package volume based on speckle coding structured light
CN112233063A (en) * 2020-09-14 2021-01-15 东南大学 Circle center positioning method for large-size round object
CN112487893A (en) * 2020-11-17 2021-03-12 北京的卢深视科技有限公司 Three-dimensional target identification method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Shanwen et al. (eds.), 图像模式识别 [Image Pattern Recognition], Xidian University Press, 2020, pp. 79-80. *

Also Published As

Publication number Publication date
CN113052889A (en) 2021-06-29

Similar Documents

Publication Publication Date Title
CN110689581B (en) Structured light module calibration method, electronic device and computer readable storage medium
EP2568253B1 (en) Structured-light measuring method and system
CN110230998B (en) Rapid and precise three-dimensional measurement method and device based on line laser and binocular camera
CN106548489B (en) A kind of method for registering, the three-dimensional image acquisition apparatus of depth image and color image
US20210358157A1 (en) Three-dimensional measurement system and three-dimensional measurement method
CN102203551B (en) Method and system for providing three-dimensional and range inter-planar estimation
CN110689577B (en) Active rigid body pose positioning method in single-camera environment and related equipment
US10482615B2 (en) Image processing device and image processing method
CN113111513B (en) Sensor configuration scheme determining method and device, computer equipment and storage medium
CN108924408B (en) Depth imaging method and system
CN108881717B (en) Depth imaging method and system
KR20180061803A (en) Apparatus and method for inpainting occlusion of road surface
CN113052889B (en) Depth calculation method and system
JP2023522755A (en) Irradiation pattern for object depth measurement
CN113052887A (en) Depth calculation method and system
CN108924407B (en) Depth imaging method and system
WO2023094530A1 (en) One shot calibration
JP2024520598A (en) Automatic calibration from epipolar distance in projected patterns
CN113513988B (en) Laser radar target detection method and device, vehicle and storage medium
Botterill et al. Design and calibration of a hybrid computer vision and structured light 3D imaging system
JP7064400B2 (en) Object detection device
Agarwal et al. Three dimensional image reconstruction using interpolation of distance and image registration
CN113014899A (en) Binocular image parallax determination method, device and system
CN118397201B (en) Method and device for reconstructing original light field data image of focusing light field camera
Costineanu et al. Triangulation-based 3D image processing method and system with compensating shadowing errors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant