CN113888614A - Depth recovery method, electronic device, and computer-readable storage medium - Google Patents


Info

Publication number
CN113888614A
Authority
CN
China
Prior art keywords
value
point
speckle pattern
valued
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111117448.8A
Other languages
Chinese (zh)
Other versions
CN113888614B (en)
Inventor
化雪诚
户磊
王海彬
刘祺昌
李东洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Dilusense Technology Co Ltd
Original Assignee
Beijing Dilusense Technology Co Ltd
Hefei Dilusense Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dilusense Technology Co Ltd, Hefei Dilusense Technology Co Ltd filed Critical Beijing Dilusense Technology Co Ltd
Priority to CN202111117448.8A priority Critical patent/CN113888614B/en
Publication of CN113888614A publication Critical patent/CN113888614A/en
Application granted granted Critical
Publication of CN113888614B publication Critical patent/CN113888614B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application relate to the technical field of image processing, and disclose a depth recovery method, an electronic device, and a computer-readable storage medium. The depth recovery method comprises the following steps: segmenting the filtered infrared image according to a preset image segmentation algorithm, determining a foreground region of the infrared image, and generating a mask of the foreground region; performing multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern; determining the disparity value of each point in the foreground region according to the multi-valued speckle pattern, a preset multi-valued reference speckle pattern, and the mask; and determining the depth value of each point in the foreground region according to the disparity values. The depth recovery method provided by the embodiments of the present application can greatly reduce the amount of computation while maintaining high-precision depth recovery, avoid increasing chip cost, and effectively improve the efficiency of depth recovery.

Description

Depth recovery method, electronic device, and computer-readable storage medium
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a depth recovery method, electronic equipment and a computer-readable storage medium.
Background
With the rapid development of 3D technology, the 3D structured light technique, which projects light into three-dimensional space and obtains three-dimensional images based on structured light, has matured. Unlike purely passive three-dimensional measurement techniques such as binocular stereo vision, 3D structured light mainly uses a near-infrared laser to emit light with certain structural characteristics, projects it onto the photographed object, and then collects it with a dedicated infrared camera. Because light with a given structure deforms differently in regions of the object at different depths, the structure of the image produced by the infrared camera changes relative to the original projected light; an arithmetic unit converts this structural change into depth information, from which the three-dimensional structure of the photographed object can be determined. Speckle structured light is one such kind of light. At present, 3D structured light technology is mainly applied to smart-device unlocking, human body measurement, object volume measurement, face modeling, and the like.
The depth recovery methods adopted by depth cameras are mostly extensions of stereoscopic binocular matching algorithms, such as the semi-global matching (SGM) algorithm for dense matching, region-growing algorithms, and global search optimization algorithms; scene depth calculation and depth recovery are achieved through these methods.
However, as the resolution demanded of a depth camera's output continues to increase, the computing resources it requires multiply accordingly. Depth recovery methods based on stereoscopic binocular matching algorithms cannot meet the actual demands of depth cameras: the accuracy of depth recovery is low, and optimizing and upgrading the depth camera's computing unit greatly increases chip cost.
Disclosure of Invention
An object of the embodiments of the present application is to provide a depth recovery method, an electronic device, and a computer-readable storage medium, which can greatly reduce the amount of computation while ensuring high accuracy of depth recovery, avoid increasing the cost of a chip, and effectively improve the efficiency of depth recovery.
In order to solve the above technical problem, an embodiment of the present application provides a depth recovery method comprising the following steps: segmenting the filtered infrared image according to a preset image segmentation algorithm, determining a foreground region of the infrared image, and generating a mask of the foreground region; performing multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern; determining the disparity value of each point in the foreground region according to the multi-valued speckle pattern, a preset multi-valued reference speckle pattern, and the mask; and determining the depth value of each point in the foreground region according to the disparity values.
An embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above-described depth recovery method.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program, which when executed by a processor implements the above-described depth recovery method.
In the depth recovery method, electronic device, and computer-readable storage medium provided by the embodiments of the present application, the server segments the filtered infrared image according to a preset image segmentation algorithm, determines the foreground region of the infrared image, and generates a mask of the foreground region; performs multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern; determines the disparity value of each point in the foreground region according to the multi-valued speckle pattern, a preset multi-valued reference speckle pattern, and the mask; and determines the depth value of each point in the foreground region according to the disparity values. Because only the foreground region participates in the calculation and no depth recovery is performed on the background region, the amount of computation for depth recovery can be greatly reduced, the cost of the depth camera chip is lowered, and the efficiency of depth recovery is improved. At the same time, because the infrared image is filtered before segmentation, the determined foreground region is more accurate and clear. In addition, the speckle pattern used in depth recovery has undergone multi-valued processing; compared with a binarized speckle pattern, using a multi-valued speckle pattern improves the robustness of depth recovery and ensures its high precision.
In addition, performing multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern includes: traversing each point in the speckle pattern corresponding to the infrared image and taking each point in turn as the point to be processed; calculating the SAD value of the gray value of the point to be processed; calculating, for a first window of preset size corresponding to the point to be processed, the mean of the gray values of the points in the window and the mean of the SAD values of their gray values; and performing multi-valued assignment on the point to be processed according to its gray value, the mean gray value, and the mean SAD value, so as to obtain the multi-valued speckle pattern. Performing multi-valued processing with the SAD value of the point to be processed together with the window means makes the multi-valued assignment more accurate, further improving the robustness and precision of depth recovery.
In addition, the preset image segmentation algorithm is a watershed segmentation algorithm, and segmenting the filtered infrared image according to the preset image segmentation algorithm, determining the foreground region of the infrared image, and generating the mask of the foreground region comprises: performing graying and binarization on the filtered infrared image to obtain a binarized infrared image; performing image dilation and distance transformation on the binarized infrared image to obtain a determined foreground area map, a determined background area map, and an uncertain area map; marking the determined foreground area map according to a connected-component labeling algorithm to obtain a marker map; and, according to the marker map, the uncertain area map, the binarized infrared image, and the watershed segmentation algorithm, performing distance judgment on the uncertain area map, determining the foreground region of the infrared image, and generating the mask of the foreground region. Segmenting the filtered infrared image with the watershed segmentation algorithm makes the determined foreground region more accurate, better meets the actual needs of depth recovery, and improves the user experience.
In addition, before segmenting the filtered infrared image according to the preset image segmentation algorithm and determining the foreground region of the infrared image, the method includes: performing selective mask smoothing filtering on the acquired infrared image to obtain the filtered infrared image. Filtering methods such as mean filtering and weighted mean filtering remove noise but inevitably average across sharp changes, blurring edges and lines; selective mask smoothing filtering eliminates the noise of the infrared image while keeping such edges and lines sharp.
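The selective mask smoothing idea can be sketched as follows. This is a simplified stand-in, not the patent's exact filter: instead of the nine template windows of FIG. 4, it evaluates five square sub-windows containing the pixel (centered and four corner-anchored), picks the one with the smallest gray-value variance, and outputs its mean, so windows straddling an edge are rejected and edges stay sharp.

```python
import numpy as np

def selective_mask_smooth(img, win=3):
    """Simplified selective mask smoothing: for each pixel, evaluate
    several windows that contain it and replace the pixel by the mean
    of the window with the smallest gray-value variance.  A window
    straddling an edge has high variance and is therefore skipped,
    which preserves sharp edges that a plain mean filter would blur."""
    h, w = img.shape
    out = img.astype(float).copy()
    r = win // 2
    # top-left offsets of candidate windows relative to the pixel:
    # centered, and the four windows having the pixel at a corner
    anchors = [(-r, -r), (0, 0), (-2 * r, -2 * r), (0, -2 * r), (-2 * r, 0)]
    for y in range(h):
        for x in range(w):
            best_var, best_mean = None, float(img[y, x])
            for dy, dx in anchors:
                y0, x0 = y + dy, x + dx
                if y0 < 0 or x0 < 0 or y0 + win > h or x0 + win > w:
                    continue  # window falls outside the image
                patch = img[y0:y0 + win, x0:x0 + win].astype(float)
                v = patch.var()
                if best_var is None or v < best_var:
                    best_var, best_mean = v, patch.mean()
            out[y, x] = best_mean
    return out
```

On a noise-free step edge the filter leaves the image unchanged, since each pixel always finds a zero-variance window entirely on its own side of the edge.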
In addition, determining the disparity value of each point in the foreground region according to the multi-valued speckle pattern, the preset multi-valued reference speckle pattern, and the mask includes: generating a disparity cost matrix according to the width and height of the speckle pattern, the SAD value of the gray value of each point of the speckle pattern, and a preset disparity search range; determining the points to be matched in the multi-valued speckle pattern according to the multi-valued speckle pattern and the mask, where a point to be matched corresponds to a point with pixel value 1 on the mask, each point of the mask has pixel value 0 or 1, and points with pixel value 1 lie in the foreground region; determining the matching cost values between the point to be matched and the target points according to the disparity cost matrix, and determining the minimum of these matching cost values, where the target points are the points of the multi-valued reference speckle pattern inside the second window corresponding to the point to be matched, and the second window is the disparity search range; and determining the disparity value of the point to be matched according to the disparity between the target point corresponding to the minimum value and the point to be matched. After the disparity cost matrix is generated, only a small number of matching cost values need to be computed for each point to be matched in order to determine its disparity value, which further reduces the amount of computation and improves the efficiency of depth recovery.
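The masked disparity search described above can be sketched as follows. This is a simplified stand-in under stated assumptions: it recomputes block SAD costs directly rather than reading a pre-generated disparity cost matrix, restricts the search window to a horizontal range along the same row, and uses illustrative window and range sizes.

```python
import numpy as np

def disparity_map(mv_img, mv_ref, mask, win=3, max_disp=8):
    """For every foreground point (mask == 1), slide a win x win block
    along the same row of the multi-valued reference pattern within the
    disparity search range, and keep the offset with the lowest SAD
    matching cost.  Background points are skipped entirely, which is
    what saves the bulk of the computation."""
    h, w = mv_img.shape
    disp = np.full((h, w), -1, dtype=int)  # -1 marks "not computed"
    r = win // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            if mask[y, x] != 1:
                continue  # background: no depth recovery
            block = mv_img[y - r:y + r + 1, x - r:x + r + 1].astype(int)
            best_cost, best_d = None, 0
            for d in range(0, max_disp + 1):
                xr = x - d                    # candidate column in reference
                if xr - r < 0:
                    break                     # left edge of reference reached
                cand = mv_ref[y - r:y + r + 1, xr - r:xr + r + 1].astype(int)
                cost = int(np.abs(block - cand).sum())
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

A pattern shifted horizontally by two pixels relative to the reference yields a disparity of 2 at interior foreground points, while masked-out points keep the sentinel value.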
In addition, determining the disparity value of the point to be matched according to the disparity between the target point corresponding to the minimum value and the point to be matched includes: performing sub-pixel interpolation on the disparity between the target point corresponding to the minimum value and the point to be matched, and taking the interpolated disparity as the disparity value of the point to be matched, which makes the obtained disparity value more rigorous and accurate.
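The patent does not specify which sub-pixel interpolation is used; a common choice, shown here purely as an illustration, is to fit a parabola through the matching costs at the best integer disparity and its two neighbors and take the parabola's minimum.

```python
def subpixel_disparity(d, c_minus, c0, c_plus):
    """Refine an integer disparity d by fitting a parabola through the
    matching costs at d-1, d, d+1 and returning the abscissa of the
    parabola's minimum.  Assumes c0 is the minimum integer-grid cost,
    so the denominator (the curvature) is non-negative."""
    denom = c_minus - 2.0 * c0 + c_plus
    if denom == 0:
        return float(d)  # flat cost curve: keep the integer value
    return d + 0.5 * (c_minus - c_plus) / denom
```

When the three costs are sampled from a true parabola, the refinement recovers its minimum exactly, so the sub-pixel disparity is correct up to the quality of the parabolic model of the cost curve.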
Drawings
One or more embodiments are illustrated by the corresponding figures in the drawings, which are not meant to be limiting.
FIG. 1 is a first flowchart of a depth recovery method according to an embodiment of the present application;
FIG. 2 is a flowchart of performing multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern, according to an embodiment of the present application;
FIG. 3 is a flowchart of segmenting the filtered infrared image according to a preset image segmentation algorithm, determining the foreground region of the infrared image, and generating the mask of the foreground region, according to an embodiment of the present application;
FIG. 4 is a schematic diagram of 9 template windows for selective mask smoothing filtering provided in an embodiment of the present application;
FIG. 5 is a flow chart for determining disparity values for points in a foreground region based on a multi-valued speckle pattern, a pre-set multi-valued reference speckle pattern, and a mask, according to an embodiment of the present application;
FIG. 6 is a flowchart of taking the interpolated disparity between the target point corresponding to the minimum value and the point to be matched as the disparity value of the point to be matched, according to an embodiment of the present application;
FIG. 7 is a second flowchart of a depth recovery method according to another embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments in order to provide a better understanding of the present application; however, the technical solutions claimed in the present application can be implemented without these technical details, and with various changes and modifications based on the following embodiments. The division into embodiments below is for convenience of description only and should not constitute any limitation on the specific implementation of the present application; the embodiments may be combined and cross-referenced with one another where there is no contradiction.
One embodiment of the present application relates to a depth recovery method applied to an electronic device; the electronic device may be a terminal or a server, and the electronic device in this embodiment and the following embodiments is described by taking the server as an example. The implementation details of the depth recovery method of the present embodiment are specifically described below, and the following description is only provided for the convenience of understanding, and is not necessary for implementing the present embodiment.
A specific flow of the depth recovery method of this embodiment may be as shown in fig. 1, and includes:
Step 101, segmenting the filtered infrared image according to a preset image segmentation algorithm, determining a foreground region of the infrared image, and generating a mask of the foreground region.
Specifically, after the server acquires the infrared image captured by the depth camera, it may filter the acquired infrared image according to a preset filtering method to obtain the filtered infrared image. The server then segments the filtered infrared image according to a preset image segmentation algorithm to determine the foreground region and background region of the infrared image, and generates a mask of the foreground region based on the foreground region of the infrared image. The preset filtering method and the preset image segmentation algorithm may be set by those skilled in the art according to actual needs.
In a specific implementation, when the server segments the filtered infrared image according to the preset image segmentation algorithm, it partitions the image into regions according to gray features, texture features, shape features, and the like, so that different regions of the filtered infrared image show differences while each region shows internal similarity. This determines the foreground region and the background region of the infrared image: the foreground region contains the information the user needs, while the information in the background region is irrelevant to the user's needs. The server generates a mask of the foreground region based on the foreground region of the infrared image, so that only the foreground region is subsequently processed according to the mask.
In one example, the preset image segmentation algorithm may include, but is not limited to: a threshold segmentation algorithm, an edge segmentation algorithm, a region segmentation algorithm, a graph theory segmentation algorithm, an energy flooding segmentation algorithm, a watershed segmentation algorithm, and the like.
And 102, performing multivalued processing on the speckle pattern corresponding to the infrared pattern to obtain a multivalued speckle pattern.
Specifically, after the foreground area of the infrared image is determined by the server and the mask of the foreground area is generated, multi-valued processing can be performed on the speckle image corresponding to the infrared image to obtain the multi-valued speckle image.
In a specific implementation, the server can determine a foreground region of the infrared image and generate a mask of the foreground region, and then perform multivalued processing on a speckle pattern corresponding to the infrared image; or multi-valued processing can be performed on the speckle pattern corresponding to the obtained infrared image, then the filtered infrared image is segmented according to a preset image segmentation algorithm, the foreground area of the infrared image is determined, and a mask of the foreground area is generated; the segmentation of the filtered infrared image and the multivalued processing of the speckle pattern can also be carried out at the same time.
In an example, the server performs multivalued processing on the speckle pattern corresponding to the infrared pattern, and may perform multivalued assignment on each point of the speckle pattern according to a relationship between a pixel value (i.e., a gray value) of each point of the speckle pattern and a preset threshold, where the preset threshold may be set by a person skilled in the art according to actual needs and experience, and the embodiment of the present application is not particularly limited thereto.
And 103, determining the parallax value of each point in the foreground area according to the multi-valued speckle pattern, the preset multi-valued reference speckle pattern and the mask.
Specifically, after obtaining the multi-valued speckle pattern and generating the mask of the foreground region, the server may determine the disparity values of the points in the foreground region according to the multi-valued speckle pattern, the preset multi-valued reference speckle pattern, and the mask of the foreground region, where the preset multi-valued reference speckle pattern may be set by a person skilled in the art according to actual needs.
In one example, after the server obtains the infrared map and the speckle pattern corresponding to the infrared map, a preset reference speckle pattern corresponding to the speckle pattern may be obtained, and when the server performs multi-valued processing on the speckle pattern, the server simultaneously performs the same multi-valued processing on the reference speckle pattern to obtain a multi-valued speckle pattern and a multi-valued reference speckle pattern.
In one example, the server may combine the mask with the multi-valued speckle pattern to obtain the multi-valued speckle pattern only including the foreground region, traverse each point in the multi-valued speckle pattern only including the foreground region, sequentially compare each point in the multi-valued speckle pattern only including the foreground region with a corresponding point in the multi-valued reference speckle pattern, and thereby calculate the disparity value of each point in the foreground region.
Step 104, determining the depth value of each point in the foreground region according to the disparity values.
Specifically, after the server determines the disparity values of the points in the foreground region, the server may determine the depth values of the points in the foreground region according to the disparity values of the points in the foreground region.
In a specific implementation, the server may determine the depth values of the points in the foreground region according to the disparity values of the points in the foreground region, the distance between the points and the reference plane, the camera calibration focal length, and the camera baseline distance based on the triangulation principle by the following formula:
Z = (f × L × z0) / (f × L + z0 × d), i.e., 1/Z = 1/z0 + d/(f × L)

where z0 is the distance from the reference plane, d is the disparity value, f is the camera calibration focal length, L is the camera baseline distance, and Z is the depth value; the unit of the depth value is millimeters.
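The triangulation relation above (disparity d, reference-plane distance z0, calibration focal length f, baseline L) can be sketched as a one-line function. The patent's formula is rendered as an image, so the exact form and sign convention here are assumptions: the standard reference-plane relation 1/Z = 1/z0 + d/(f·L), under which zero disparity maps to the reference plane and positive disparity to nearer depths.

```python
def depth_from_disparity(d, z0, f, L):
    """Depth from disparity by triangulation against a reference plane:
    1/Z = 1/z0 + d/(f*L).  d and f are in pixels; z0, L, and the
    returned Z are in millimeters.  Sign convention is an assumption:
    positive d means the point is nearer than the reference plane."""
    return (f * L * z0) / (f * L + z0 * d)
```

As a sanity check, a zero disparity returns the reference-plane distance itself, and increasing disparity monotonically decreases the recovered depth.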
In this embodiment, compared with technical solutions that perform depth recovery based on a stereoscopic binocular matching algorithm, the server segments the filtered infrared image according to a preset image segmentation algorithm, determines the foreground region of the infrared image, and generates a mask of the foreground region; performs multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern; determines the disparity value of each point in the foreground region according to the multi-valued speckle pattern, a preset multi-valued reference speckle pattern, and the mask; and determines the depth value of each point in the foreground region according to those disparity values. Because the foreground region of the infrared image is determined first and its mask is generated, only the depth values of points in the foreground region are determined; that is, depth recovery is performed only on the foreground region of the infrared image, while the background region neither participates in the calculation nor undergoes depth recovery. This greatly reduces the amount of computation for depth recovery, lowers the cost of the depth camera chip, and improves the efficiency of depth recovery. At the same time, segmenting the filtered infrared image makes the determined foreground region more accurate and clear. In addition, the speckle pattern used in depth recovery has undergone multi-valued processing; compared with a binarized speckle pattern, a multi-valued speckle pattern improves the robustness of depth recovery and ensures its high precision.
In an embodiment, the server performs multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern, which can be implemented by the steps shown in FIG. 2 and specifically includes:
Step 201, traversing each point in the speckle pattern corresponding to the infrared image, and taking each point in the speckle pattern in turn as the point to be processed.
Step 202, calculating the SAD value of the gray value of the point to be processed.
In a specific implementation, when the server performs multi-valued processing on the speckle pattern corresponding to the infrared image, the server may traverse each point in the speckle pattern corresponding to the infrared image, sequentially take each point in the speckle pattern as a point to be processed, and calculate an SAD value of a gray value of each point to be processed, where the SAD value of the gray value is a sum of absolute values of differences between the gray value of the point to be processed and gray values of points in a neighborhood of the point to be processed.
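The per-point SAD value defined above (sum of absolute differences between a pixel's gray value and the gray values in its neighborhood) can be sketched as follows; the window radius and the clipping at the image border are illustrative choices not specified in the text.

```python
import numpy as np

def sad_value(img, y, x, radius=1):
    """SAD value of the gray value at (y, x): the sum of absolute
    differences between this pixel's gray value and the gray values of
    the pixels in its neighborhood (a (2*radius+1)^2 window, clipped
    at the image border)."""
    h, w = img.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    patch = img[y0:y1, x0:x1].astype(int)
    return int(np.abs(patch - int(img[y, x])).sum())
```

For the center of a 3×3 gradient patch the SAD value is simply the summed absolute deviations of the eight neighbors from the center gray value.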
Step 203, calculating the mean value of the gray values of the points and the mean value of the SAD values of the gray values of the points in the first window corresponding to the point to be processed according to the size of the preset first window.
In a specific implementation, after calculating the SAD value of the gray value of each point to be processed, the server may calculate an average value of the gray value of each point in the first window corresponding to the point to be processed in the speckle pattern according to a preset size of the first window, and calculate an average value of the SAD value of the gray value of each point in the first window corresponding to the point to be processed in the speckle pattern, where the preset size of the first window may be set by a person skilled in the art according to actual needs.
In one example, the preset first window size may be 15px × 15 px.
Step 204, performing multi-valued assignment on the point to be processed according to its gray value, the mean of the gray values, and the mean of the SAD values of the gray values, to obtain a multi-valued speckle pattern.
In a specific implementation, after calculating the SAD value of the gray value of each point to be processed, the mean value of the gray value of each point in the first window corresponding to the point to be processed, and the mean value of the SAD value of the gray value of each point, the server may perform multi-valued assignment on the point to be processed according to the gray value of the point to be processed, the mean value of the gray value, and the mean value of the SAD value of the gray value, so as to obtain a multi-valued speckle pattern.
In one example, the server may perform octalization processing on the speckle pattern corresponding to the infrared pattern, and perform octalization assignment on the to-be-processed points according to the gray value of the to-be-processed points, the mean value of the gray values, and the mean value of the SAD values of the gray values by using the following formulas:
B(i, j) = 0, if X(i, j) < X̄(i, j) + β1·S̄(i, j)
B(i, j) = k, if X̄(i, j) + βk·S̄(i, j) ≤ X(i, j) < X̄(i, j) + βk+1·S̄(i, j), k = 1, …, 6
B(i, j) = 7, if X(i, j) ≥ X̄(i, j) + β7·S̄(i, j)
β1 < β2 < β3 < β4 < β5 < β6 < β7
where X(i, j) is the gray value of the point to be processed, X̄(i, j) is the mean of the gray values, S̄(i, j) is the mean of the SAD values of the gray values, β1, β2, β3, β4, β5, β6, β7 are the preset parameters, and B(i, j) is the octalized assignment of the point to be processed; β1 through β7 can be set by those skilled in the art according to actual needs and experimental experience.
In one example, the server sets β1 = -0.8, β2 = -0.3, β3 = 0.2, β4 = 0.6, β5 = 1, β6 = 1.4, β7 = 2.
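With those β parameters, the eight-level assignment can be sketched as counting how many thresholds the gray value reaches. The patent's own formula is rendered as an image, so the threshold form (gray mean + βk × SAD mean) used here is an assumption consistent with the listed symbols and parameters.

```python
BETAS = (-0.8, -0.3, 0.2, 0.6, 1.0, 1.4, 2.0)  # beta_1 .. beta_7 from the example

def octal_value(x, gray_mean, sad_mean, betas=BETAS):
    """Assign one of 8 levels to a pixel: count how many thresholds
    gray_mean + beta_k * sad_mean the gray value x reaches.  With 7
    increasing betas this yields B(i, j) in {0, ..., 7}; the threshold
    form is an assumption, since the patent's formula is an image."""
    return int(sum(x >= gray_mean + b * sad_mean for b in betas))
```

For example, with a window gray mean of 100 and a window SAD mean of 10, the thresholds are 92, 97, 102, 106, 110, 114, and 120, so a gray value of 105 falls into level 3.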
In this embodiment, performing multi-valued processing on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern includes: traversing each point in the speckle pattern corresponding to the infrared image and taking each point in turn as the point to be processed; calculating the SAD value of the gray value of the point to be processed; calculating, for a first window of preset size corresponding to the point to be processed, the mean of the gray values of the points in the window and the mean of the SAD values of their gray values; and performing multi-valued assignment on the point to be processed according to its gray value, the mean gray value, and the mean SAD value, so as to obtain the multi-valued speckle pattern. Performing multi-valued processing with the SAD value of the point to be processed, the mean gray value of the points in its corresponding window, and the mean SAD value of those points makes the multi-valued assignment more accurate, further improving the robustness and precision of depth recovery.
In an embodiment, the preset image segmentation algorithm is a watershed segmentation algorithm, and the server segments the filtered infrared image according to the preset image segmentation algorithm, determines a foreground region of the infrared image, and generates a mask of the foreground region, which may be implemented by the steps shown in fig. 3, and specifically includes:
Step 301, carrying out graying and binarization processing on the filtered infrared image to obtain a binarized infrared image.
Step 302, performing image expansion and distance conversion on the binarized infrared image to obtain a determined foreground area image, a determined background area image and an uncertain area image.
Specifically, the watershed segmentation algorithm is a segmentation method based on the mathematical morphology of topological theory. Its basic idea is to regard the image as a topographic surface in which the gray value of each point represents that point's altitude; each local minimum and its zone of influence is called a catchment basin, and the boundaries between catchment basins form the watersheds. The algorithm can be realized as a flooding process: the lowest points of the image are submerged first, and the flood then gradually fills each valley; when the water level reaches a certain height it would overflow, so a dam is built wherever water would spill over. This process is repeated until every point in the image is submerged, and the series of dams thus built become the watersheds separating the basins.
In a specific implementation, when the server segments the filtered infrared image based on the watershed segmentation algorithm, it first performs graying and binarization processing on the filtered infrared image to obtain a binarized infrared image. It then performs an image expansion operation on the binarized infrared image to obtain the determined background area and generate a determined background area image, and at the same time performs a distance conversion operation on the binarized infrared image to obtain the determined foreground area and generate a determined foreground area image. The part of the infrared image remaining after the determined foreground area and determined background area are removed is the uncertain area, from which the server generates an uncertain area image.
Step 303, marking the determined foreground region image according to a connected component labeling algorithm to obtain a marker image.
Step 304, according to the marker map, the uncertain region map, the binarized infrared map and the watershed segmentation algorithm, performing distance judgment on the uncertain region map, determining the foreground region of the infrared map, and generating a mask of the foreground region.
In a specific implementation, after the server obtains the determined foreground area, determined background area and uncertain area, it can further resolve the uncertain area. The server first marks the determined foreground area image according to a connected component labeling algorithm to obtain a marker image. It then performs distance judgment on the uncertain area image according to the marker image, the uncertain area image, the binarized infrared image and the watershed segmentation algorithm, so as to determine the foreground portion within the uncertain area, thereby obtaining the complete foreground region of the infrared image and generating a mask of the foreground region.
In this embodiment, the preset image segmentation algorithm is a watershed segmentation algorithm. Segmenting the filtered infrared image according to the preset image segmentation algorithm, determining a foreground region of the infrared image and generating a mask of the foreground region includes: carrying out graying and binarization processing on the filtered infrared image to obtain a binarized infrared image; performing image expansion and distance conversion on the binarized infrared image to obtain a determined foreground area image, a determined background area image and an uncertain area image; marking the determined foreground area image according to a connected component labeling algorithm to obtain a marker image; and, according to the marker map, the uncertain region map, the binarized infrared map and the watershed segmentation algorithm, performing distance judgment on the uncertain region map, determining the foreground region of the infrared image and generating the mask of the foreground region. Because the filtered infrared image is segmented with the watershed segmentation algorithm, the determined foreground region of the infrared image is more accurate, which better meets the actual requirements of depth recovery and improves the user experience.
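Steps 301 and 302 above can be sketched as follows. This is an illustrative approximation only: the binarization threshold, the number of dilation iterations, the chamfer distance transform, and the fraction used to pick the sure-foreground region are all assumptions, and the final watershed labeling of the uncertain region (steps 303 and 304) is omitted:

```python
import numpy as np

def foreground_regions(gray, thresh=128, fg_frac=0.7, dilate_iters=2):
    """Sketch of steps 301-302: binarize, dilate to get the region that is
    surely not background, run a coarse distance transform to get the sure
    foreground, and leave everything in between as the uncertain region."""
    binary = (gray > thresh).astype(np.uint8)
    h, w = binary.shape

    # Image expansion (dilation) with a 3x3 structuring element (max filter).
    dilated = binary.copy()
    for _ in range(dilate_iters):
        p = np.pad(dilated, 1)
        dilated = np.max([p[a:a + h, b:b + w]
                          for a in range(3) for b in range(3)], axis=0)

    # Two-pass chamfer approximation of the distance transform (city-block).
    dist = np.where(binary > 0, np.inf, 0.0)
    for i in range(h):
        for j in range(w):
            if dist[i, j] > 0:
                if i > 0:
                    dist[i, j] = min(dist[i, j], dist[i - 1, j] + 1)
                if j > 0:
                    dist[i, j] = min(dist[i, j], dist[i, j - 1] + 1)
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                dist[i, j] = min(dist[i, j], dist[i + 1, j] + 1)
            if j < w - 1:
                dist[i, j] = min(dist[i, j], dist[i, j + 1] + 1)

    if dist.max() > 0 and np.isfinite(dist.max()):
        sure_fg = (dist >= fg_frac * dist.max()).astype(np.uint8)
    else:
        sure_fg = binary
    uncertain = dilated - sure_fg  # left for the watershed pass to label
    return sure_fg, dilated, uncertain
```

In a production setting the dilation, distance transform, connected component labeling and watershed steps would normally come from an image-processing library rather than being hand-rolled like this.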
In an embodiment, before the server segments the filtered infrared image according to a preset image segmentation algorithm and determines a foreground region of the infrared image, the server may perform selective mask smoothing filtering on the acquired infrared image to obtain the filtered infrared image.
In a specific implementation, the selective mask smoothing filter takes a template window of size 5px × 5px and, with the central pixel as the reference point, forms nine mask windows: four pentagons, four hexagons, and one square with side length 3. The mean and variance are calculated within each window. Since a region containing a sharp edge has a larger variance than a flat region, the server averages using the mask window with the minimum variance, so the filtering operation can be completed without destroying the details at region boundaries. The nine template windows of the selective mask smoothing filter may be as shown in fig. 4.
In this embodiment, before segmenting the filtered infrared image according to a preset image segmentation algorithm and determining a foreground region of the infrared image, the method includes: performing selective mask smoothing filtering on the acquired infrared image to obtain the filtered infrared image. Filtering approaches such as mean filtering and weighted mean filtering eliminate noise but inevitably bring the drawback of averaging, making sharply changing edges or lines blurry; selective mask smoothing removes the noise of the infrared image while preserving those details.
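A minimal sketch of the selective mask smoothing described above: for each pixel, the mean and variance are computed over nine sub-windows of the 5×5 neighborhood (four pentagons, four hexagons, one central 3×3 square) and the output is the mean of the lowest-variance window. The exact pentagon and hexagon shapes used here are one common Nagao-style variant and may differ from the masks drawn in the patent's fig. 4:

```python
import numpy as np

def selective_mask_smooth(img):
    """Nagao-style selective mask smoothing over a 5x5 template window:
    average with the minimum-variance sub-window to preserve edges."""
    pent = np.zeros((5, 5), bool)
    pent[0:2, 1:4] = True; pent[2, 2] = True                 # edge pentagon
    hexa = np.zeros((5, 5), bool)
    hexa[0:2, 0:2] = True; hexa[1, 2] = True; hexa[2, 1:3] = True  # corner hexagon
    square = np.zeros((5, 5), bool)
    square[1:4, 1:4] = True                                  # central 3x3 square
    masks = [np.rot90(pent, k) for k in range(4)] + \
            [np.rot90(hexa, k) for k in range(4)] + [square]

    h, w = img.shape
    padded = np.pad(np.asarray(img, dtype=np.float64), 2, mode='edge')
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 5, j:j + 5]
            stats = [(window[m].var(), window[m].mean()) for m in masks]
            out[i, j] = min(stats)[1]  # mean of the minimum-variance mask
    return out
```

A flat region passes through unchanged, and a sharp step edge keeps its two plateaus because the winning window always lies on one side of the edge.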
In an embodiment, the server determines the disparity values of the points in the foreground region according to the multi-valued speckle pattern, the preset multi-valued reference speckle pattern, and the mask, and may be implemented by the steps shown in fig. 5, which specifically include:
Step 401, generating a disparity cost matrix according to the width and height of the speckle pattern, the SAD value of the gray value of each point of the speckle pattern, and a preset disparity search range.
In a specific implementation, the disparity cost matrix CostVolume generated by the server is a three-dimensional cube of size width × height × disparity range, wherein the width is the width of the speckle pattern, the height is the height of the speckle pattern, and the disparity range is the preset disparity search range. Each position of the disparity cost matrix stores an SAD value of the gray values within a window. The preset disparity search range can be set by a person skilled in the art according to actual needs.
In one example, the preset disparity search range may be 165.
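Step 401 can be sketched as follows, assuming the matching cost stored in each cell is the windowed SAD between the two multi-valued patterns; the window size and the small disparity range used here are illustrative (the text's example uses a range of 165), and the direct summation is written for clarity rather than speed:

```python
import numpy as np

def build_cost_volume(mv_speckle, mv_reference, disp_range=16, win=5):
    """Sketch of the disparity cost matrix (height x width x disparity range):
    each cell holds the SAD cost between a window of the multi-valued speckle
    pattern and the window shifted by d in the multi-valued reference pattern."""
    h, w = mv_speckle.shape
    pad = win // 2
    cost = np.full((h, w, disp_range), np.inf)  # unreachable shifts stay inf
    L = np.asarray(mv_reference, dtype=np.int32)
    R = np.asarray(mv_speckle, dtype=np.int32)
    for d in range(disp_range):
        if d >= w:
            break
        diff = np.abs(R[:, :w - d] - L[:, d:])  # per-pixel absolute difference
        p = np.pad(diff, pad, mode='edge')
        for i in range(h):
            for j in range(w - d):
                cost[i, j, d] = p[i:i + win, j:j + win].sum()  # window SAD
    return cost
```

A real implementation would typically compute the window sums with integral images or box filters so the whole volume is filled in O(width × height × range).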
Step 402, determining points to be matched in the multi-valued speckle pattern according to the multi-valued speckle pattern and the mask.
Specifically, the pixel values of the points in the mask include 0 and 1, the point with the pixel value of 1 is located in the foreground region, the point with the pixel value of 0 is located in the background region, and the pixel value of the point to be matched corresponding to the point on the mask is 1.
In a specific implementation, before calculating the matching cost value pixel by pixel, the server may perform screening according to the mask, that is, determine whether the pixel value of a point corresponding to each point in the multi-valued speckle pattern in the mask is 1, and if the pixel value of a point corresponding to a certain point in the multi-valued speckle pattern in the mask is 1, take the point as a point to be matched; if the pixel value of a corresponding point in the mask at a certain point in the multivalued speckle pattern is 0, the point is ignored and no calculation is performed.
Step 403, determining a matching cost value between the point to be matched and a target point according to the parallax cost matrix, and determining the minimum value of the matching cost values.
Step 404, determining the parallax value of the point to be matched according to the parallax value between the target point corresponding to the minimum value and the point to be matched.
Specifically, the target points are the points within the second window corresponding to the point to be matched in the multi-valued reference speckle pattern, where the second window covers the parallax search range.
In one example, the server may match each point to be matched against the reference speckle pattern, letting the parallax fluctuate by 16 pixels to the left and right, and calculate the matching cost value by the following formula:
C_SAD(p, d) = Σ_{q ∈ N_p} |binary_R(q) − binary_L(q + d)|

wherein p represents the point to be matched, d is the parallax value, binary_L(·) is the multi-valued reference speckle pattern, binary_R(·) is the multi-valued speckle pattern, N_p represents the neighborhood of p, and C_SAD(p, d) is the matching cost value.
In one example, the preset parallax search range is 165, that is, each point to be matched has 165 matching cost values, and the server selects a parallax value between a target point corresponding to the minimum value of the matching cost values and the point to be matched as the parallax value of the point to be matched.
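Steps 402 to 404 (mask screening followed by winner-take-all selection of the minimum matching cost, without the sub-pixel refinement of the next embodiment) might be sketched as follows; the function name and the convention of leaving background points at 0 are assumptions:

```python
import numpy as np

def match_disparities(cost, mask):
    """For every point whose mask value is 1 (foreground), pick the disparity
    with the minimum matching cost; masked-out background points are skipped."""
    h, w, _ = cost.shape
    disp = np.zeros((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            if mask[i, j] == 1:              # screen by the foreground mask
                disp[i, j] = np.argmin(cost[i, j])  # winner-take-all minimum
    return disp
```

Because background points are never visited, the number of cost comparisons scales with the foreground area rather than the full image, which is where the claimed reduction in computation comes from.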
In this embodiment, determining the parallax value of each point in the foreground region according to the multi-valued speckle pattern, the preset multi-valued reference speckle pattern and the mask includes: generating a parallax cost matrix according to the width and height of the speckle pattern, the SAD value of the gray value of each point of the speckle pattern and a preset parallax search range; determining the points to be matched in the multi-valued speckle pattern according to the multi-valued speckle pattern and the mask, wherein the pixel value of the point on the mask corresponding to a point to be matched is 1, the pixel values of the points in the mask include 0 and 1, and points with pixel value 1 are located in the foreground area; determining the matching cost values between each point to be matched and the target points according to the parallax cost matrix, and determining the minimum of the matching cost values, wherein the target points are the points in the multi-valued reference speckle pattern within the parallax search range corresponding to the point to be matched; and determining the parallax value of the point to be matched according to the parallax value between the target point corresponding to the minimum value and the point to be matched. After the parallax cost matrix is generated, only the matching cost values of the points to be matched need to be calculated to determine their parallax values, which further reduces the amount of calculation and thus improves the efficiency of depth recovery.
In an embodiment, the server determines the disparity value of the point to be matched according to the disparity value between the target point corresponding to the minimum value and the point to be matched, which may be implemented by the steps shown in fig. 6, and specifically includes:
Step 501, performing sub-pixel interpolation on the parallax value between the target point corresponding to the minimum value and the point to be matched.
Step 502, taking the interpolated parallax value between the target point corresponding to the minimum value and the point to be matched as the parallax value of the point to be matched.
In a specific implementation, in order to further improve the accuracy of depth recovery, the server may perform sub-pixel interpolation on a disparity value between a target point corresponding to the minimum value and a point to be matched, and use the disparity value between the target point corresponding to the minimum value and the point to be matched after interpolation as the disparity value of the point to be matched, so as to obtain a more accurate disparity value of the point to be matched.
In one example, the server may perform sub-pixel interpolation on the disparity value between the target point corresponding to the minimum value and the point to be matched by the following formula:
C_L = cost_{d−1} − cost_d, C_R = cost_{d+1} − cost_d

d′ = d + (C_L − C_R) / (2 × (C_L + C_R))

In the formulas, cost_d is the matching cost value at the current parallax candidate, cost_{d−1} is the matching cost value at the previous parallax candidate, cost_{d+1} is the matching cost value at the next parallax candidate, d is the parallax value between the target point corresponding to the minimum value and the point to be matched, and d′ is that parallax value after sub-pixel interpolation.
In this embodiment, determining the parallax value of the point to be matched according to the parallax value between the target point corresponding to the minimum value and the point to be matched includes: performing sub-pixel interpolation on the parallax value between the target point corresponding to the minimum value and the point to be matched, and taking the interpolated value as the parallax value of the point to be matched. Applying sub-pixel interpolation to the parallax makes the resulting parallax value more accurate.
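The sub-pixel interpolation described above can be sketched with the standard three-point parabola fit, d' = d + (C_L - C_R) / (2 (C_L + C_R)) with C_L = cost[d-1] - cost[d] and C_R = cost[d+1] - cost[d]; this fit is an assumption consistent with the C_L and C_R definitions, and the helper name is hypothetical:

```python
def subpixel_disparity(costs, d):
    """Parabolic sub-pixel refinement around the integer minimum d:
    fit a parabola through (d-1, d, d+1) costs and return its vertex."""
    if d <= 0 or d >= len(costs) - 1:
        return float(d)               # no neighbors to interpolate with
    c_l = costs[d - 1] - costs[d]
    c_r = costs[d + 1] - costs[d]
    denom = 2.0 * (c_l + c_r)
    return float(d) if denom == 0 else d + (c_l - c_r) / denom
```

When the two neighboring costs are symmetric the correction is zero; when the left neighbor's cost is higher, the true minimum is nudged toward the right neighbor, and vice versa.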
Another embodiment of the present application relates to a depth recovery method. Implementation details of the depth recovery method of this embodiment are described below; these details are provided only to facilitate understanding and are not necessary for implementing this embodiment. A specific flow of the depth recovery method of this embodiment may be as shown in fig. 7 and includes:
step 601, performing selective mask smoothing filtering on the obtained infrared image to obtain a filtered infrared image.
Step 602, segmenting the filtered infrared image according to a watershed segmentation algorithm, determining a foreground region of the infrared image, and generating a mask of the foreground region.
Step 603, traversing each point in the speckle pattern corresponding to the infrared image, and sequentially taking each point in the speckle pattern as the point to be processed.
Step 604, calculating the SAD value of the gray value of the point to be processed.
Step 605, calculating the mean value of the gray values of the points and the mean value of the SAD values of the gray values of the points in the first window corresponding to the point to be processed according to the preset size of the first window.
Step 606, performing multi-valued assignment on the point to be processed according to the gray value of the point to be processed, the mean of the gray values and the mean of the SAD values of the gray values, to obtain a multi-valued speckle pattern.
Step 607, determining the disparity values of the points in the foreground region according to the multi-valued speckle pattern, the preset multi-valued reference speckle pattern and the mask.
Step 608, determining the depth values of the points in the foreground region according to the disparity value.
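Step 608 converts the per-point parallax into depth. The disclosure does not give the conversion here, so the following assumes the standard structured-light triangulation model, depth = focal × baseline / disparity, with hypothetical camera parameters:

```python
def disparity_to_depth(disparity, focal_px, baseline_mm, min_disp=1e-6):
    """Assumed triangulation model for step 608: depth is inversely
    proportional to disparity, scaled by focal length and baseline.
    focal_px and baseline_mm are illustrative camera parameters."""
    if disparity < min_disp:
        return 0.0                    # invalid or background point
    return focal_px * baseline_mm / disparity
```

For example, with a 500 px focal length and a 40 mm baseline, a disparity of 2 px corresponds to a depth of 10 m, and halving the disparity doubles the depth, which is why sub-pixel disparity accuracy matters most for distant points.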
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into a single step, or a single step may be split into multiple steps, and as long as the same logical relationship is included, such variants are within the protection scope of this patent. Adding insignificant modifications to the algorithms or flows, or introducing insignificant design changes, without altering the core design of the algorithms and flows, is also within the protection scope of this patent.
Another embodiment of the present application relates to an electronic device, as shown in fig. 8, including: at least one processor 701; and a memory 702 communicatively coupled to the at least one processor 701; the memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701, so that the at least one processor 701 can execute the depth recovery method in the foregoing embodiments.
Where the memory and processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting together one or more of the various circuits of the processor and the memory. The bus may also connect various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory may be used to store data used by the processor in performing operations.
Another embodiment of the present application relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, as can be understood by those skilled in the art, all or part of the steps of the methods in the embodiments described above may be implemented by a program instructing related hardware, where the program is stored in a storage medium and includes several instructions to enable a device (which may be a microcontroller, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the present application, and that various changes in form and details may be made therein without departing from the spirit and scope of the present application in practice.

Claims (10)

1. A method of depth recovery, comprising:
segmenting the filtered infrared image according to a preset image segmentation algorithm, determining a foreground region of the infrared image, and generating a mask of the foreground region;
multi-valued processing is carried out on the speckle pattern corresponding to the infrared image to obtain a multi-valued speckle pattern;
determining the parallax value of each point in the foreground area according to the multi-valued speckle pattern, a preset multi-valued reference speckle pattern and the mask;
and determining the depth value of each point in the foreground area according to the parallax value.
2. The depth recovery method according to claim 1, wherein the multivalued processing of the speckle pattern corresponding to the infrared image to obtain a multivalued speckle pattern includes:
traversing each point in the speckle pattern corresponding to the infrared image, and sequentially taking each point in the speckle pattern as a point to be processed;
calculating the SAD value of the gray value of the point to be processed;
calculating the mean value of the gray values of all points in the first window corresponding to the point to be processed and the mean value of the SAD values of the gray values of all points according to the size of a preset first window;
and performing multi-valued assignment on the points to be processed according to the gray value of the points to be processed, the mean value of the gray value and the mean value of the SAD value of the gray value to obtain a multi-valued speckle pattern.
3. The depth restoration method according to claim 2, wherein the multi-valued process includes an octave process, and the multi-valued assignment is performed for the point to be processed according to the SAD value of the gray value, the mean value of the gray value, and the mean value of the SAD value of the gray value by the following formulas:
[The piecewise octalized-assignment formula for B (i, j) is rendered as an image in the original filing.]

β1 < β2 < β3 < β4 < β5 < β6 < β7

wherein X (i, j) is the gray value of the point to be processed, X̄ (i, j) is the mean of the gray values, S̄ (i, j) is the mean of the SAD values of the gray values (both symbols are rendered as formula images in the original filing), β1, β2, β3, β4, β5, β6, β7 are preset parameters, and B (i, j) is the octalized assignment of the point to be processed.
4. The depth restoration method according to any one of claims 1 to 3, wherein the preset image segmentation algorithm is a watershed segmentation algorithm;
the method for segmenting the filtered infrared image according to a preset image segmentation algorithm, determining a foreground region of the infrared image and generating a mask of the foreground region comprises the following steps:
carrying out graying and binarization processing on the filtered infrared image to obtain a binarized infrared image;
performing image expansion and distance conversion on the binarized infrared image to obtain a determined foreground area image, a determined background area image and an uncertain area image;
marking the determined foreground area image according to a connected component marking algorithm to obtain a marked image;
and according to the marker map, the uncertain region map, the binarized infrared map and a watershed segmentation algorithm, performing distance judgment on the uncertain region map, determining a foreground region of the infrared map, and generating a mask of the foreground region.
5. The depth restoration method according to any one of claims 1 to 3, wherein before the segmenting the filtered infrared image according to a preset image segmentation algorithm and determining the foreground region of the infrared image, the method comprises:
and carrying out selective mask smoothing filtering on the obtained infrared image to obtain the filtered infrared image.
6. The method according to claim 2 or 3, wherein the determining the disparity values of the points in the foreground region according to the multi-valued speckle pattern, a preset multi-valued reference speckle pattern and the mask comprises:
generating a parallax cost matrix according to the width and the height of the speckle pattern, the SAD value of the gray value of each point of the speckle pattern and a preset parallax search range;
determining points to be matched in the multi-valued speckle pattern according to the multi-valued speckle pattern and the mask; the pixel value of the point corresponding to the point to be matched on the mask is 1, the pixel value of each point in the mask comprises 0 and 1, and the point with the pixel value of 1 is located in the foreground area;
determining a matching cost value between the point to be matched and a target point according to the parallax cost matrix, and determining a minimum value of the matching cost value; the target point is a point of the point to be matched in a corresponding second window in the multi-valued reference speckle pattern, and the second window is the parallax search range;
and determining the parallax value of the point to be matched according to the parallax value between the target point corresponding to the minimum value and the point to be matched.
7. The depth recovery method according to claim 6, wherein the determining the disparity value of the point to be matched according to the disparity value between the target point corresponding to the minimum value and the point to be matched comprises:
performing sub-pixel interpolation on the parallax value between the target point corresponding to the minimum value and the point to be matched;
and taking the parallax value between the target point corresponding to the minimum value after interpolation and the point to be matched as the parallax value of the point to be matched.
8. The depth restoration method according to claim 1, wherein the preset image segmentation algorithm is any one of the following: a threshold segmentation algorithm, an edge segmentation algorithm, a region segmentation algorithm, a graph theory segmentation algorithm, an energy flooding segmentation algorithm, and a watershed segmentation algorithm.
9. An electronic device, comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the depth recovery method of any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the depth recovery method of any one of claims 1 to 8.
CN202111117448.8A 2021-09-23 2021-09-23 Depth recovery method, electronic device, and computer-readable storage medium Active CN113888614B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111117448.8A CN113888614B (en) 2021-09-23 2021-09-23 Depth recovery method, electronic device, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN113888614A true CN113888614A (en) 2022-01-04
CN113888614B CN113888614B (en) 2022-05-31

Family

ID=79010441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111117448.8A Active CN113888614B (en) 2021-09-23 2021-09-23 Depth recovery method, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN113888614B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393224A (en) * 2022-09-02 2022-11-25 点昀技术(南通)有限公司 Depth image filtering method and device

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103268608A (en) * 2013-05-17 2013-08-28 清华大学 Depth estimation method and device based on near-infrared laser speckles
CN103424083A (en) * 2012-05-24 2013-12-04 北京数码视讯科技股份有限公司 Object depth detection method, device and system
CN103581653A (en) * 2013-11-01 2014-02-12 北京航空航天大学 Method for non-interference depth extraction of optical coding depth camera system according to luminous intensity modulation
CN103810708A (en) * 2014-02-13 2014-05-21 西安交通大学 Method and device for perceiving depth of laser speckle image
CN104268871A (en) * 2014-09-23 2015-01-07 清华大学 Method and device for depth estimation based on near-infrared laser speckles
AU2013206597A1 (en) * 2013-06-28 2015-01-22 Canon Kabushiki Kaisha Depth constrained superpixel-based depth map refinement
WO2015119657A1 (en) * 2014-02-07 2015-08-13 Lsi Corporation Depth image generation utilizing depth information reconstructed from an amplitude image
CN105205786A (en) * 2014-06-19 2015-12-30 联想(北京)有限公司 Image depth recovery method and electronic device
CN108701361A (en) * 2017-11-30 2018-10-23 深圳市大疆创新科技有限公司 Depth value determines method and apparatus
CN109461181A (en) * 2018-10-17 2019-03-12 北京华捷艾米科技有限公司 Depth image acquisition method and system based on pattern light
CN109583304A (en) * 2018-10-23 2019-04-05 宁波盈芯信息科技有限公司 A kind of quick 3D face point cloud generation method and device based on structure optical mode group
CN109658443A (en) * 2018-11-01 2019-04-19 北京华捷艾米科技有限公司 Stereo vision matching method and system
CN110288564A (en) * 2019-05-22 2019-09-27 南京理工大学 Binaryzation speckle quality evaluating method based on power spectrumanalysis
CN110853133A (en) * 2019-10-25 2020-02-28 深圳奥比中光科技有限公司 Method, device, system and readable storage medium for reconstructing three-dimensional model of human body
CN111105452A (en) * 2019-11-26 2020-05-05 中山大学 High-low resolution fusion stereo matching method based on binocular vision
US20200213533A1 (en) * 2017-09-11 2020-07-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image Processing Method, Image Processing Apparatus and Computer Readable Storage Medium
CN111402313A (en) * 2020-03-13 2020-07-10 合肥的卢深视科技有限公司 Image depth recovery method and device
CN112330751A (en) * 2020-10-30 2021-02-05 合肥的卢深视科技有限公司 Line deviation detection method and device for structured light camera
CN112465723A (en) * 2020-12-04 2021-03-09 北京华捷艾米科技有限公司 Method and device for repairing depth image, electronic equipment and computer storage medium
CN112700484A (en) * 2020-12-31 2021-04-23 南京理工大学智能计算成像研究院有限公司 Depth map colorization method based on monocular depth camera
CN112771573A (en) * 2019-04-12 2021-05-07 深圳市汇顶科技股份有限公司 Depth estimation method and device based on speckle images and face recognition system
CN112927280A (en) * 2021-03-11 2021-06-08 北京的卢深视科技有限公司 Method and device for acquiring depth image and monocular speckle structured light system
CN113379816A (en) * 2021-06-29 2021-09-10 北京的卢深视科技有限公司 Structure change detection method, electronic device, and storage medium


Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
GUIJIN WANG et al.: "Depth estimation for speckle projection system using progressive reliable points growing matching", Applied Optics *
HU TIAN et al.: "Recovering depth of background and foreground from a monocular video with camera motion", VCIP *
XUANWU YIN et al.: "Efficient active depth sensing by laser speckle projection system", Optical Engineering *
GU Jiawei et al.: "Semi-dense depth map acquisition algorithm based on laser speckle", Chinese Journal of Lasers *
WU Qing et al.: "Speckle-based three-dimensional somatosensory interaction system", Journal of Computer-Aided Design & Computer Graphics *
LIANG Xiaosheng: "Research on depth image inpainting algorithms based on hybrid filtering", China Masters' Theses Full-text Database, Information Science & Technology series *
YUAN Hongxing et al.: "Object-guided depth extraction method for a single defocused image", Acta Electronica Sinica *
HAO Dingding: "Research on coal flow load monitoring methods for belt conveyors based on laser speckle", China Masters' Theses Full-text Database, Basic Sciences series *
ZHONG Jinxin et al.: "Deep learning-based speckle projection profilometry", Infrared and Laser Engineering *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393224A (en) * 2022-09-02 2022-11-25 点昀技术(南通)有限公司 Depth image filtering method and device

Also Published As

Publication number Publication date
CN113888614B (en) 2022-05-31

Similar Documents

Publication Publication Date Title
CN111209770B (en) Lane line identification method and device
CN103106651B (en) Method for obtaining disparity planes based on a three-dimensional Hough transform
CN101082988A (en) Automatic depth image registration method
CN110189339A (en) Depth-map-assisted active contour matting method and system
Zhang et al. Critical regularizations for neural surface reconstruction in the wild
CN107369204B (en) Method for recovering basic three-dimensional structure of scene from single photo
CN106815594A (en) Stereo matching method and device
CN111105451B (en) Driving scene binocular depth estimation method for overcoming occlusion effect
Rossi et al. Joint graph-based depth refinement and normal estimation
CN115239870A (en) Multi-view stereo network three-dimensional reconstruction method based on attention cost body pyramid
CN113920275B (en) Triangular mesh construction method and device, electronic equipment and readable storage medium
CN115222889A (en) 3D reconstruction method and device based on multi-view image and related equipment
CN111914913A (en) Novel stereo matching optimization method
CN113888614B (en) Depth recovery method, electronic device, and computer-readable storage medium
CN105225233B (en) Dense stereo matching method and system for stereoscopic images based on two-class expansion
CN111739071A (en) Rapid iterative registration method, medium, terminal and device based on initial value
CN103544732B (en) Three-dimensional stereo reconstruction method for a lunar rover
CN108805841B (en) Depth map recovery and viewpoint synthesis optimization method based on color map guide
Hung et al. Multipass hierarchical stereo matching for generation of digital terrain models from aerial images
CN113344941A (en) Depth estimation method based on focused image and image processing device
CN112270701A (en) Packet distance network-based parallax prediction method, system and storage medium
CN116704123A (en) Three-dimensional reconstruction method combined with image main body extraction technology
CN113920270B (en) Layout reconstruction method and system based on multi-view panorama
CN113808006B (en) Method and device for reconstructing three-dimensional grid model based on two-dimensional image
CN110490877B (en) Target segmentation method for binocular stereo image based on Graph Cuts

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220419

Address after: 230091 Room 611-217, R&D Center Building, China (Hefei) International Intelligent Voice Industrial Park, 3333 Xiyou Road, High-tech Zone, Hefei, Anhui Province

Applicant after: Hefei Dilusense Technology Co., Ltd.

Address before: 100083 Room 3032, North B, Bungalow, Building 2, A5 Xueyuan Road, Haidian District, Beijing

Applicant before: Beijing Dilusense Technology Co., Ltd.

Applicant before: Hefei Dilusense Technology Co., Ltd.

GR01 Patent grant
GR01 Patent grant