CN110866535B - Disparity map acquisition method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN110866535B
CN110866535B (application CN201910911950.2A)
Authority
CN
China
Prior art keywords
matching cost
input image
pixel point
value
matching
Prior art date
Legal status
Active
Application number
CN201910911950.2A
Other languages
Chinese (zh)
Other versions
CN110866535A (en)
Inventor
王鹏 (Wang Peng)
Current Assignee
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201910911950.2A
Publication of CN110866535A
Application granted
Publication of CN110866535B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a disparity map acquisition method and apparatus, a computer device, and a storage medium. The method calculates the matching cost value of each pixel point in an input image using a matching cost function that fuses multiple types of cost functions, yielding a first matching cost; performs matching cost aggregation on the first matching cost to obtain a second matching cost; optimizes, with an SGM or GSM algorithm, the matching cost values in the second matching cost that correspond to edge pixel points in the input image, obtaining a third matching cost; determines the target matching cost values of all pixel points from the third matching cost; and derives the disparity map of the input image from the disparity values corresponding to the target matching cost values. The method improves both the disparity consistency of non-edge pixel regions in the input image and the accuracy of the matching cost calculated for edge pixel points.

Description

Disparity map acquisition method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a method and an apparatus for acquiring a disparity map, a computer device, and a storage medium.
Background
With the development of stereo matching technology, dual-camera smartphones have become widespread, and users increasingly expect background-blurring (bokeh) effects when photographing or previewing. How to obtain a high-quality disparity map by optimizing the stereo matching algorithm has therefore become an urgent technical problem for current dual-camera smartphone applications.
Currently, stereo matching algorithms are mainly classified into local stereo matching (LSM), semi-global stereo matching (SGM), and global stereo matching (GSM). The LSM algorithm has the lowest complexity but produces disparity maps of poor quality; the GSM algorithm produces the best disparity maps but has the highest complexity; and although the SGM algorithm improves on the performance of the GSM algorithm, it sacrifices some output quality.
Therefore, the above stereo-matching-based methods for obtaining a disparity map cannot simultaneously improve disparity map quality and reduce algorithm complexity.
Disclosure of Invention
In view of the above, there is a need for a disparity map acquisition method, a disparity map acquisition apparatus, a computer device, and a storage medium that can improve the quality of disparity maps while reducing the complexity of the stereo matching algorithm.
In a first aspect, a method for obtaining a disparity map includes:
calculating the matching cost value of each pixel point in the input image by adopting a fusion matching cost function to obtain a first matching cost of the input image; the fusion matching cost function is a function obtained by fusing at least two types of matching cost functions;
performing matching cost aggregation on the first matching cost to obtain a second matching cost of the input image;
optimizing a matching cost value corresponding to the edge pixel point in the input image in the second matching cost by adopting an SGM algorithm or a GSM algorithm to obtain a third matching cost of the input image;
and determining the target matching cost values of all pixel points in the input image according to the third matching cost, and obtaining the disparity map of the input image according to the disparity value corresponding to the target matching cost values.
In one embodiment, the fusion matching cost function is a function obtained by fusing a gradient gray-scale correlation AD matching cost function and a left-right sampling BT matching cost function.
In one embodiment, optimizing the edge matching cost value in the second matching cost by using an SGM algorithm or a GSM algorithm to obtain a third matching cost of the input image, includes:
determining the coordinates of edge pixel points in the input image by adopting a preset positioning operator;
obtaining an edge matching cost value in the second matching cost according to the coordinates of the edge pixel points;
and optimizing the edge matching cost value by adopting an SGM algorithm or a GSM algorithm to obtain a third matching cost of the input image.
In one embodiment, determining coordinates of edge pixel points in an input image by using a preset positioning operator includes:
carrying out weighted summation on the Canny operator, the gradient operator in the horizontal direction and the gradient operator in the vertical direction to obtain a positioning operator;
and determining the coordinates of the edge pixel points in the input image by adopting a positioning operator.
In one embodiment, performing matching cost aggregation on the first matching cost to obtain a second matching cost of the input image includes:
determining a fixed window corresponding to each pixel point according to the coordinate of each pixel point of the input image and a preset radius;
and aggregating the corresponding matching cost values of the pixel points in the fixed window corresponding to each pixel point in the first matching cost to obtain a second matching cost of the input image.
In one embodiment, determining the target matching cost values of all pixel points in the input image according to the third matching cost includes:
obtaining all matching cost values corresponding to each pixel point in the input image in the third matching cost;
comparing all the matching cost values corresponding to each pixel point by adopting a WTA algorithm to obtain the minimum matching cost value of each pixel point;
and determining the minimum matching cost value of each pixel point as the target matching cost value of each pixel point.
In one embodiment, after obtaining the disparity map of the input image according to the disparity value corresponding to the target matching cost value, the method further includes:
detecting a parallax value corresponding to an unreliable pixel point in an input image;
and correcting the parallax value corresponding to the unreliable pixel point in the parallax map by adopting a preset correction method to obtain the corrected parallax map of the input image.
In one embodiment, the preset correction method includes at least one of weighted median filtering, mismatched pixel point rejection, and hole filling.
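As one hedged illustration of the correction step, the sketch below fills unreliable disparities along each scanline with the nearest valid neighbor. Only one of the listed methods is shown, and the function name and the `INVALID` marker are hypothetical, not part of the patent:

```python
import numpy as np

INVALID = -1  # hypothetical marker for unreliable/mismatched disparities

def fill_holes_scanline(disparity):
    """One simple instance of the hole-filling step: replace each unreliable
    pixel with the nearest valid disparity to its left on the same scanline,
    falling back to the nearest valid disparity on the right."""
    out = disparity.astype(np.int64).copy()
    H, W = out.shape
    for v in range(H):
        last = INVALID
        for u in range(W):  # left-to-right pass
            if out[v, u] == INVALID:
                out[v, u] = last
            else:
                last = out[v, u]
        last = INVALID
        for u in range(W - 1, -1, -1):  # right-to-left pass for leading holes
            if out[v, u] == INVALID:
                out[v, u] = last
            else:
                last = out[v, u]
    return out

d = np.array([[INVALID, 3, INVALID, INVALID, 7]])
print(fill_holes_scanline(d))  # [[3 3 3 3 7]]
```

A production refinement stage would typically combine this with the other listed methods (e.g. weighted median filtering) rather than use it alone.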
In a second aspect, an apparatus for acquiring a disparity map includes:
the cost calculation module is used for calculating the matching cost value of each pixel point in the input image by adopting a fusion matching cost function to obtain a first matching cost of the input image; the fusion matching cost function is a function obtained by fusing at least two types of matching cost functions;
the cost aggregation module is used for performing matching cost aggregation on the first matching cost to obtain a second matching cost of the input image;
the cost optimization module is used for optimizing a matching cost value corresponding to the edge pixel point in the input image in the second matching cost by adopting an SGM algorithm or a GSM algorithm to obtain a third matching cost of the input image;
and the parallax calculation module is used for determining the target matching cost values of all the pixel points in the input image according to the third matching cost, and obtaining the parallax image of the input image according to the parallax value corresponding to the target matching cost value.
In a third aspect, a computer device includes a memory and a processor, where the memory stores a computer program, and the processor implements the method for acquiring a disparity map according to any one of the embodiments of the first aspect when executing the computer program.
In a fourth aspect, a computer-readable storage medium has a computer program stored thereon, where the computer program is executed by a processor to implement the method for acquiring a disparity map according to any one of the embodiments of the first aspect.
According to the above method and apparatus for obtaining a disparity map, the matching cost value of each pixel point in the input image is calculated with a fusion of multiple types of matching cost functions to obtain a first matching cost of the input image; matching cost aggregation is performed on the first matching cost to obtain a second matching cost; an SGM or GSM algorithm is used to optimize the matching cost values in the second matching cost that correspond to edge pixel points, yielding a third matching cost; the target matching cost values of all pixel points are then determined from the third matching cost, and the disparity map of the input image is obtained from the disparity values corresponding to those target matching cost values. Because multiple types of matching cost functions are fused when calculating the matching cost value of each pixel point, the method avoids the difficulty of achieving high robustness with any single matching cost function and improves the quality of the first matching cost. Since the quality of the first matching cost strongly influences the accuracy of the resulting disparity map, this way of calculating the matching cost improves the accuracy of the disparity map obtained from it. In addition, the matching cost aggregation step improves the disparity consistency of non-edge pixel regions in the input image.
In the matching cost optimization step, the matching costs of edge pixel points in the input image are optimized with the SGM or GSM algorithm, whose calculation accuracy is high, which further improves the accuracy of the edge matching costs. Compared with optimizing all pixel points with the SGM or GSM algorithm, applying the higher-complexity algorithm only to edge pixel points greatly reduces the amount of computation. In summary, while preserving the disparity consistency of non-edge pixel regions in the input image, the method optimizes the edge pixel points with the high-accuracy SGM or GSM algorithm to obtain a third matching cost, and then derives the disparity map from that third matching cost.
Drawings
FIG. 1 is a diagram illustrating an application scenario, according to an embodiment;
fig. 2 is a flowchart of a method for obtaining a disparity map according to an embodiment;
FIG. 2A is a schematic diagram of a data cube, according to an embodiment;
FIG. 3 is a flowchart of another implementation of S103 in the embodiment of FIG. 2;
FIG. 4 is a flowchart of another implementation of S201 in the embodiment of FIG. 3;
FIG. 5 is a flow chart of another implementation of S102 in the embodiment of FIG. 2;
FIG. 6 is a flow chart of another implementation of S104 in the embodiment of FIG. 2;
fig. 7 is a flowchart of a method for obtaining a disparity map according to an embodiment;
fig. 8 is a flowchart of a method for obtaining a disparity map according to an embodiment;
fig. 9 is a schematic structural diagram of a disparity map acquisition apparatus according to an embodiment;
fig. 10 is a schematic structural diagram of a disparity map acquisition apparatus according to an embodiment;
fig. 11 is a schematic structural diagram of a disparity map acquisition apparatus according to an embodiment;
fig. 12 is a schematic structural diagram of a disparity map acquisition apparatus according to an embodiment;
fig. 13 is a schematic structural diagram of a disparity map acquisition apparatus according to an embodiment;
Fig. 14 is a schematic internal structural diagram of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The disparity map acquisition method provided by the present application can be applied to the application environment shown as a in fig. 1, in which at least two cameras are installed on terminal a so that a target object in the same scene can be photographed or filmed. Optionally, the method may also be applied to the application environment shown as B in fig. 1, where terminal b and terminal c each have at least one camera installed, and terminal b and terminal c simultaneously photograph or film a target object in the same scene. It should be noted that the terminals (a, b, and c) can be, but are not limited to, personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
The following describes in detail the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems by embodiments and with reference to the drawings. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a flowchart of a disparity map acquisition method according to an embodiment. The execution subject of the method is a terminal (a, b, or c) in fig. 1, and the method relates to the specific process by which the terminal processes a captured left view and right view to obtain the disparity map of the left view or right view. As shown in fig. 2, the method specifically includes the following steps:
s101, calculating a matching cost value of each pixel point in an input image by adopting a fusion matching cost function to obtain a first matching cost of the input image; the fusion matching cost function is a function obtained by fusing at least two types of matching cost functions.
The matching cost function is used to calculate, for each pixel point in the input image, a matching cost value within a preset disparity search range. The preset disparity search range may be determined in advance by the terminal according to actual application requirements; for example, it may be d ∈ {0, …, 16}. The first matching cost is a data set whose entries correspond to the matching cost values of all pixels in the input image, and it may be represented by a three-dimensional data cube. For example, the first matching cost of the input image may be represented by the data cube C shown in fig. 2A, where u is the abscissa of a pixel, v is its ordinate, and d is a disparity value in the search range; each entry C(u, v, d) of the data cube is the matching cost value of pixel (u, v) of the two-dimensional input image at disparity d within the preset disparity search range. The input image is a left view taken by the terminal (terminal a in fig. 1) using one camera mounted on it, or a right view taken by its other camera. Optionally, the input image may also be a left view (or right view) captured by terminal b in fig. 1 with its camera, while terminal c in fig. 1 captures the corresponding right view (or left view) with its camera.
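The data-cube representation described above can be sketched with NumPy as follows; the image dimensions and the stored cost value are hypothetical placeholders:

```python
import numpy as np

# Hypothetical sizes: a 4x5 input image, disparity search range d in {0..15}.
H, W, D = 4, 5, 16

# The first matching cost is a 3-D data cube: C[v, u, d] holds the matching
# cost value of pixel (u, v) at candidate disparity d.
C = np.full((H, W, D), np.inf)

# Store the cost of pixel (u=2, v=1) at disparity d=3.
C[1, 2, 3] = 0.42

# All D candidate costs of a single pixel form a length-D slice.
print(C[1, 2, :].shape)  # (16,)
```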
In this embodiment, when the terminal acquires an input image, the matching cost value of each pixel point on the input image may be further calculated, and the specific calculation method includes: the terminal obtains a fusion matching cost function by fusing multiple types of matching cost functions, and then substitutes each pixel point on the input image into the fusion matching cost function to calculate the matching cost value of each pixel point on the input image in a preset parallax searching range, so as to obtain a first matching cost of the input image. Optionally, the terminal may also substitute each pixel point on the input image into a plurality of types of matching cost functions, obtain matching cost values output by the matching cost functions of various types, and substitute the matching cost values output by the matching cost functions of various types into the fusion matching cost function, so as to obtain the first matching cost of the input image.
And S102, performing matching cost aggregation on the first matching cost to obtain a second matching cost of the input image.
The embodiment relates to a process of matching cost aggregation, and the purpose of matching cost aggregation is to improve the robustness of a first matching cost. Meanwhile, the parallax consistency of the non-edge pixel point region in the input image can be improved. In this embodiment, after the terminal obtains the first matching cost based on S101, the terminal may further adopt a corresponding aggregation algorithm to aggregate the matching cost values corresponding to the pixel points in the first matching cost, so as to obtain a second matching cost of the input image.
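A minimal sketch of the aggregation step, assuming the fixed-window summation described later in this document; the window radius and the random cost values are illustrative only:

```python
import numpy as np

def aggregate_costs(cost_volume: np.ndarray, radius: int) -> np.ndarray:
    """Sum matching costs over a fixed (2*radius+1)^2 window around each
    pixel, independently for every disparity slice (edges replicated)."""
    H, W, D = cost_volume.shape
    k = 2 * radius + 1
    padded = np.pad(cost_volume,
                    ((radius, radius), (radius, radius), (0, 0)),
                    mode="edge")
    out = np.zeros_like(cost_volume)
    for dy in range(k):           # accumulate the shifted copies that make
        for dx in range(k):       # up the box-window sum
            out += padded[dy:dy + H, dx:dx + W, :]
    return out

rng = np.random.default_rng(0)
first_cost = rng.random((6, 6, 4))           # hypothetical first matching cost
second_cost = aggregate_costs(first_cost, radius=1)
print(second_cost.shape)  # (6, 6, 4)
```

Summing over a neighborhood smooths per-pixel noise in the cost cube, which is how aggregation improves disparity consistency in non-edge regions.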
S103, optimizing a matching cost value corresponding to the edge pixel point in the input image in the second matching cost by adopting an SGM algorithm or a GSM algorithm to obtain a third matching cost of the input image.
The SGM algorithm or the GSM algorithm is a stereo matching algorithm and is used for calculating the matching cost value of the pixel points, and the SGM algorithm or the GSM algorithm is high in complexity and high in calculation accuracy. In this embodiment, after the terminal obtains the second matching cost of the input image based on the S102 method, the matching cost value corresponding to the edge pixel point in the input image in the second matching cost may be further optimized by using an SGM algorithm or a GSM algorithm, and the matching cost value corresponding to the non-edge pixel point in the input image in the second matching cost is retained, so as to obtain a third matching cost of the input image. It should be noted that, after the terminal obtains the third matching cost based on the methods in S102 and S103, the matching cost value corresponding to the edge pixel point in the third matching cost is the matching cost value obtained by optimizing the SGM algorithm or the GSM algorithm, and the matching cost value corresponding to the non-edge pixel point in the third matching cost is the matching cost value obtained by aggregating the matching costs, so that the method improves the parallax consistency of the non-edge pixel point region and also improves the accuracy of the matching cost value of the edge pixel point.
And S104, determining target matching cost values of all pixel points in the input image according to the third matching cost, and obtaining a disparity map of the input image according to a disparity value corresponding to the target matching cost values.
The target matching cost value represents the matching cost value corresponding to the optimal disparity value of each pixel point. In this embodiment, after the terminal obtains the third matching cost based on S103, it may further extract from the third matching cost all matching cost values of each pixel point within the preset disparity search range, determine a target matching cost value from among them according to a preset selection mode, and take the disparity value corresponding to the target matching cost value as the optimal disparity value of that pixel point; once the terminal obtains the optimal disparity values of all pixel points in the input image, the disparity map of the input image is obtained. It should be noted that the preset selection mode may be predetermined by the terminal according to actual application requirements; for example, it may be the Winner-Take-All (WTA) algorithm, or optionally another type of algorithm, which is not limited in this embodiment.
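The WTA selection described above reduces to an argmin along the disparity axis of the cost cube; a toy example with illustrative cost values:

```python
import numpy as np

def wta_disparity(cost_volume: np.ndarray) -> np.ndarray:
    """Winner-Take-All: for each pixel, pick the disparity whose matching
    cost is minimal; the result is the disparity map."""
    return np.argmin(cost_volume, axis=2)

# Shape (1, 2, 3): one row, two pixels, three candidate disparities each.
cost = np.array([[[0.9, 0.2, 0.5],
                  [0.1, 0.7, 0.3]]])
disparity = wta_disparity(cost)
print(disparity)  # [[1 0]]
```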
According to this method for obtaining a disparity map, the matching cost value of each pixel point in the input image is calculated with a fusion of multiple types of matching cost functions to obtain a first matching cost of the input image; matching cost aggregation is performed on the first matching cost to obtain a second matching cost; the matching cost values of the edge pixel points in the second matching cost are optimized with an SGM or GSM algorithm to obtain a third matching cost; the target matching cost values of all pixel points are determined from the third matching cost, and the disparity map of the input image is obtained from the corresponding disparity values. Fusing multiple types of matching cost functions avoids the difficulty of achieving high robustness with any single matching cost function and improves the quality of the first matching cost; since that quality strongly influences the accuracy of the resulting disparity map, the accuracy of the disparity map obtained from the first matching cost is improved.
In addition, the matching cost aggregation step improves the disparity consistency of the non-edge pixel regions in the input image. In the matching cost optimization step, the matching costs of the edge pixel points are optimized with the high-accuracy SGM or GSM algorithm, further improving their accuracy; and compared with optimizing all pixel points with the SGM or GSM algorithm, applying the higher-complexity algorithm only to the edge pixel points greatly reduces the amount of computation. In summary, on the basis of ensuring the disparity consistency of the non-edge pixel regions in the input image, the method optimizes the edge pixel points with the high-accuracy SGM or GSM algorithm to obtain a third matching cost, and then derives the disparity map from that third matching cost.
In practical application, the fusion matching cost function may be specifically a function obtained by fusing a gradient gray-level correlation AD matching cost function and a left-right sampling BT matching cost function.
The gradient gray-scale correlation (AD) matching cost function is a gray-scale AD matching cost function based on the horizontal and vertical gradients. The left-right sampling (BT) matching cost function is a commonly used function for calculating the matching cost. In this embodiment, the terminal may obtain the fusion matching cost function using the relational expression (1) or a variant thereof:
C1(u, v, d) = ρ(BT, λ_BT) · α + ρ(AD, λ_AD) · (1 − α)    (1);
In the above formula (1), C1(u, v, d) is the data cube of the first matching cost; BT is the output value of the left-right sampling BT matching cost function, and AD is the output value of the gradient gray-scale correlation AD matching cost function; λ_BT is the threshold of the left-right sampling BT matching cost, and λ_AD is the threshold of the gradient gray-scale correlation AD matching cost; α is a weight value; and ρ is a robustness function, which can be expressed by the following relation (2):
ρ(x, λ) = 1 − exp(−x / λ)    (2);
In formula (2), x is the output value of the left-right sampling BT matching cost function or of the gradient gray-scale correlation AD matching cost function, and λ is the corresponding threshold λ_BT or λ_AD.
In this embodiment, after the terminal obtains the fusion matching cost function based on the above relations (1) and (2), it may substitute each pixel point in the input image into the gradient gray-scale correlation AD matching cost function to obtain that function's output value, substitute each pixel point into the left-right sampling BT matching cost function to obtain that function's output value, and then use the two output values as the parameter inputs of the fusion matching cost function (AD and BT in relation (1)), thereby obtaining the first matching cost C1(u, v, d) of relation (1).
In the above embodiment, since the gradient gray-scale correlation AD matching cost function and the left-right sampling BT matching cost function are convenient for algorithm acceleration, the method for calculating the matching cost by fusing the gradient gray-scale correlation AD matching cost function and the left-right sampling BT matching cost function can facilitate terminal calculation acceleration to a certain extent, shorten the calculation time of the terminal, and further improve the calculation rate of the terminal.
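Relations (1) and (2) can be sketched as follows. The exponential form of the robustness function ρ is an assumption here (the patent supplies the formula only as an image), and the thresholds and weight are hypothetical values:

```python
import numpy as np

def rho(x, lam):
    # Assumed robustness form rho(x, lambda) = 1 - exp(-x / lambda); it maps
    # raw costs into [0, 1) so the two cost types can be fused on one scale.
    return 1.0 - np.exp(-x / lam)

def fused_cost(ad, bt, lam_ad=10.0, lam_bt=30.0, alpha=0.4):
    """Relation (1): C1 = rho(BT, lam_BT)*alpha + rho(AD, lam_AD)*(1-alpha)."""
    return rho(bt, lam_bt) * alpha + rho(ad, lam_ad) * (1.0 - alpha)

# Hypothetical raw AD and BT cost cubes for a 2x2 image with 3 disparities.
rng = np.random.default_rng(0)
ad_cost = rng.random((2, 2, 3)) * 20.0
bt_cost = rng.random((2, 2, 3)) * 60.0
first_cost = fused_cost(ad_cost, bt_cost)
print(first_cost.shape)  # (2, 2, 3)
```

Because ρ saturates toward 1, an outlier in either cost type cannot dominate the fused value, which is the robustness benefit the description attributes to fusing the two functions.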
Fig. 3 is a flowchart of another implementation manner of S103 in the embodiment of fig. 2, where as shown in fig. 3, the step S103 "optimizes a matching cost value corresponding to an edge pixel point in an input image in a second matching cost by using an SGM algorithm or a GSM algorithm to obtain a third matching cost of the input image" includes:
S201, determining coordinates of edge pixel points in the input image by adopting a preset positioning operator.
The positioning operator is used for extracting edge pixel points in the input image and determining the coordinates of the edge pixel points according to the extracted edge pixel points, so that the terminal can find the matching cost value corresponding to the edge pixel points in the second matching cost according to the coordinates of the edge pixel points. It should be noted that any type of positioning operator may be adopted for the positioning operator, and optionally, a positioning operator formed by combining multiple types of positioning operators may also be adopted. The specific type of the positioning operator may be predetermined by the terminal according to the actual application requirement, which is not limited in this embodiment.
S202, obtaining an edge matching cost value in the second matching cost according to the coordinates of the edge pixel points.
And the edge matching cost value is the matching cost value corresponding to the edge pixel point in the input image in the second matching cost. In this embodiment, after the terminal obtains the coordinates of the edge pixel point in the input image based on the method in S201, the matching cost value on each coordinate may be further found in the second matching cost, and the matching cost value on each coordinate is determined as the edge matching cost value.
And S203, optimizing the edge matching cost value by adopting an SGM algorithm or a GSM algorithm to obtain a third matching cost of the input image.
In practical application, specifically, the terminal may perform optimization processing on the edge matching cost value in the second matching cost by using the SGM algorithm or the GSM algorithm to obtain an optimized edge matching cost value, and meanwhile, retain a non-edge matching cost value in the second matching cost to obtain a third matching cost.
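A hedged sketch of this edge-selective optimization follows. A full SGM aggregates over several scanline directions; for brevity only the single left-to-right path of the standard SGM recurrence is shown, and the penalties P1 and P2, the cost values, and the edge mask are hypothetical:

```python
import numpy as np

P1, P2 = 0.1, 0.8  # hypothetical smoothness penalties

def sgm_path_left_to_right(cost):
    """One direction of the SGM recurrence:
    L(p, d) = C(p, d) + min(L(p-1, d),
                            L(p-1, d-1) + P1, L(p-1, d+1) + P1,
                            min_d' L(p-1, d') + P2) - min_d' L(p-1, d')."""
    H, W, D = cost.shape
    L = cost.astype(np.float64).copy()
    inf_col = np.full((H, 1), np.inf)
    for u in range(1, W):
        prev = L[:, u - 1, :]
        prev_minus = np.concatenate([inf_col, prev[:, :-1]], axis=1) + P1
        prev_plus = np.concatenate([prev[:, 1:], inf_col], axis=1) + P1
        best_prev = prev.min(axis=1, keepdims=True)
        L[:, u, :] = cost[:, u, :] + np.minimum(
            np.minimum(prev, best_prev + P2),
            np.minimum(prev_minus, prev_plus)) - best_prev
    return L

def third_matching_cost(second_cost, edge_mask):
    """Keep aggregated costs at non-edge pixels; take optimized costs at edges."""
    optimized = sgm_path_left_to_right(second_cost)
    return np.where(edge_mask[:, :, None], optimized, second_cost)

rng = np.random.default_rng(1)
second = rng.random((4, 5, 3))               # hypothetical second matching cost
edges = np.zeros((4, 5), dtype=bool)
edges[1, 2] = True                           # hypothetical edge pixel
third = third_matching_cost(second, edges)
```

Note that the path still propagates through non-edge pixels, but only the edge pixels' cost values are replaced, mirroring how the method retains the aggregated costs elsewhere.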
Optionally, in an embodiment, the present application further provides a method for the terminal to obtain a third matching cost, and specifically, the terminal may further perform optimization processing on all matching cost values in the second matching cost by using an SGM algorithm or a GSM algorithm to obtain the third matching cost. All the matching cost values comprise the matching cost values corresponding to the edge pixel points in the input image and also comprise the matching cost values corresponding to the non-edge pixel points. The quality of the third matching cost obtained by the method is high, the matching cost value corresponding to each pixel point is accurate, and the accuracy of the disparity map obtained by the disparity map obtaining method provided by the application is improved.
In the above embodiment, the terminal determines the coordinates of the edge pixel points in the input image by using the preset positioning operator, obtains the edge matching cost value in the second matching cost according to the coordinates of the edge pixel points, and then performs optimization processing on the edge matching cost value by using an SGM algorithm or a GSM algorithm to obtain the third matching cost of the input image. The SGM algorithm or the GSM algorithm is adopted to optimize the edge matching cost value in the second matching cost, so that the accuracy of the edge matching cost value is improved, and compared with a conventional method for optimizing all pixel points in the second matching cost, the method greatly reduces the calculation amount and the complexity of calculating the matching cost of the input image on the basis of ensuring the calculation accuracy.
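As a hedged illustration of this selective optimization, the sketch below runs a single left-to-right SGM aggregation path over a cost volume and writes the optimized values back only at the edge-pixel coordinates, keeping the original second-cost values elsewhere. The penalty values P1/P2, the (H, W, D) cost-volume layout, and the function names are assumptions of this sketch, not the patent's exact implementation; a full SGM would additionally aggregate over several path directions.

```python
import numpy as np

def sgm_path_lr(cost, P1=1.0, P2=8.0):
    """One left-to-right SGM aggregation pass over an (H, W, D) cost volume."""
    H, W, D = cost.shape
    L = np.empty((H, W, D), dtype=np.float64)
    L[:, 0] = cost[:, 0]                          # first column has no predecessor
    for x in range(1, W):
        prev = L[:, x - 1]                        # (H, D) aggregated costs to the left
        prev_min = prev.min(axis=1, keepdims=True)
        # candidate transitions: same disparity, d +/- 1 (penalty P1), any d (penalty P2)
        up = np.full_like(prev, np.inf)
        up[:, 1:] = prev[:, :-1] + P1
        down = np.full_like(prev, np.inf)
        down[:, :-1] = prev[:, 1:] + P1
        best = np.minimum(np.minimum(prev, up), np.minimum(down, prev_min + P2))
        L[:, x] = cost[:, x] + best - prev_min    # subtract min to keep values bounded
    return L

def optimize_edges(cost, edge_coords, **kw):
    """Third matching cost: SGM-optimized values at edge pixels, original elsewhere."""
    optimized = sgm_path_lr(cost, **kw)
    third = cost.astype(np.float64).copy()
    ys, xs = edge_coords                          # edge pixel coordinates (rows, cols)
    third[ys, xs] = optimized[ys, xs]
    return third
```

Non-edge entries of the returned volume are bit-identical to the second matching cost, which mirrors the "retain a non-edge matching cost value" behaviour described above.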
Fig. 4 is a flowchart of another implementation manner of S201 in the embodiment of fig. 3, and as shown in fig. 4, the step S201 "determining coordinates of edge pixel points in an input image by using a preset positioning operator" includes:
S301, carrying out weighted summation on the Canny operator, the gradient operator in the horizontal direction and the gradient operator in the vertical direction to obtain a positioning operator.
The present embodiment relates to a specific type of a positioning operator, and the positioning operator in the present embodiment is obtained by combining a Canny operator, a gradient operator in a horizontal direction, and a gradient operator in a vertical direction, and specifically can be obtained by using the following relational expression (3) or a variant thereof:
Edge = I_Canny · β + I_gradx · γ + I_grady · γ    (3);
In the above formula (3), Edge represents the positioning operator; I_Canny represents the Canny operator; I_gradx represents the gradient operator in the horizontal direction; I_grady represents the gradient operator in the vertical direction; β and γ represent weight factors. In this embodiment, the terminal may substitute the Canny operator, the gradient operator in the horizontal direction, the gradient operator in the vertical direction, and the related weight factors into the above weighted summation relation to obtain the positioning operator Edge(u, v).
And S302, determining the coordinates of the edge pixel points in the input image by adopting a positioning operator.
After the terminal obtains the positioning operator based on the method of S301, each pixel point in the input image may be further substituted into the positioning operator Edge, and the coordinates of the edge pixel points in the input image are obtained through calculation, so that the edge matching cost value can be determined in the second matching cost by using the coordinates of the edge pixel points.
In this implementation, a Canny operator, a gradient operator in the horizontal direction and a gradient operator in the vertical direction are weighted and summed to obtain a positioning operator, and the positioning operator is then used to determine the coordinates of the edge pixel points in the input image. Because the edge pixel points are extracted by combining multiple operators, more edge pixel points can be extracted than with a single type of operator, so the determined edge pixel points are more comprehensive and accurate, which in turn makes the subsequent optimization of the edge matching cost values more accurate.
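A minimal NumPy sketch of S301–S302, assuming the Canny response is supplied as a precomputed map (for example the normalized output of an external Canny implementation) and the horizontal/vertical gradients are taken with central differences; the weight values and the threshold below are illustrative, not values fixed by the patent.

```python
import numpy as np

def positioning_operator(image, canny_map, beta=1.0, gamma=0.5):
    """Formula (3): Edge = I_Canny * beta + I_gradx * gamma + I_grady * gamma.

    canny_map: precomputed Canny edge response in [0, 1] (an assumption here);
    the gradient operators are realized with central differences.
    """
    grad_y, grad_x = np.gradient(image.astype(np.float64))
    return canny_map * beta + np.abs(grad_x) * gamma + np.abs(grad_y) * gamma

def edge_pixel_coords(edge_response, threshold=0.2):
    """(rows, cols) of pixels whose combined edge response exceeds the threshold."""
    return np.nonzero(edge_response > threshold)
```

The returned coordinate arrays can be used directly to index the second matching cost, as described in S202.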
Fig. 5 is a flowchart of another implementation manner of S102 in the embodiment of fig. 2. This embodiment relates to a specific process in which the terminal performs matching cost aggregation on the first matching cost in a fixed window matching cost summation manner to obtain the second matching cost. As shown in fig. 5, S102 "performing matching cost aggregation on the first matching cost to obtain a second matching cost of the input image" includes:
S401, determining a fixed window corresponding to each pixel point according to the coordinate of each pixel point of the input image and a preset radius.
The preset radius may be determined by the terminal in advance according to the actual application requirement, and may be large or small, which is not limited in this embodiment. The fixed window may be a window in a rectangular area, a window in a circular area, or a window in other types of areas, which is not limited in this embodiment. In this embodiment, taking the fixed window as a rectangular area as an example, the terminal may determine the fixed window corresponding to each pixel point by taking each pixel point of the input image as a center and taking the preset radius as a half length and a half width of the fixed window.
S402, aggregating the corresponding matching cost values of the pixel points in the fixed window corresponding to each pixel point in the first matching cost to obtain a second matching cost of the input image.
After the terminal determines the fixed window corresponding to each pixel point in the input image, the matching cost values corresponding to all the pixel points in each fixed window in the first matching cost can be further aggregated, so that the aggregated matching cost value of each pixel point is obtained, and then the matching cost values of all the pixel points are combined together to form a second matching cost of the input image. Specifically, the second matching cost may be obtained by using the following relation (4):
C_2(u, v, d) = Σ_{(u_i, v_j) ∈ W} C_1(u_i, v_j, d)    (4);
In the above formula (4), (u_i, v_j) represents each pixel point within the fixed window area W.
In the embodiment, the matching cost aggregation is performed on the first matching cost in a fixed window matching cost summation mode, so that the parallax consistency of a non-edge pixel point region in an input image can be improved while the robustness of the first matching cost is improved.
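The fixed-window summation of S401–S402 can be sketched as a brute-force box sum over each disparity slice of the cost volume; the zero padding at the image border and the (H, W, D) volume layout are assumptions of this sketch.

```python
import numpy as np

def aggregate_fixed_window(cost, radius=1):
    """Formula (4): sum the first-cost values over a (2r+1) x (2r+1) fixed
    window centered on each pixel, independently for every disparity slice."""
    r = radius
    H, W, D = cost.shape
    # zero-pad so border pixels still have a full-size window to sum over
    padded = np.pad(cost, ((r, r), (r, r), (0, 0)), constant_values=0.0)
    out = np.zeros_like(cost, dtype=np.float64)
    for dy in range(2 * r + 1):          # shift-and-add over all window offsets
        for dx in range(2 * r + 1):
            out += padded[dy:dy + H, dx:dx + W]
    return out
```

For larger radii an integral-image (summed-area table) formulation gives the same result in constant time per pixel; the shift-and-add form is kept here for readability.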
Fig. 6 is a flowchart of another implementation manner of S104 in the embodiment of fig. 2, and as shown in fig. 6, the "determining target matching cost values of all pixel points in the input image according to the third matching cost" in S104 includes:
S501, obtaining all matching cost values corresponding to each pixel point in the input image in the third matching cost.
After the terminal obtains the third matching cost of the input image based on any of the above embodiments, all matching cost values of each pixel point in the preset parallax search range can be extracted from the third matching cost, so that a required matching cost value can be extracted from a plurality of matching cost values corresponding to each pixel point for use.
S502, comparing all the matching cost values corresponding to each pixel point by adopting a WTA algorithm to obtain the minimum matching cost value of each pixel point.
The WTA algorithm is a comparison algorithm used for determining the minimum matching cost value from a plurality of matching cost values corresponding to each pixel point. In this embodiment, after the terminal obtains all the matching cost values corresponding to each pixel point based on the method of S501, the WTA algorithm may be further adopted to compare all the matching cost values corresponding to each pixel point, so as to obtain the minimum matching cost value of each pixel point.
S503, determining the minimum matching cost value of each pixel point as the target matching cost value of each pixel point.
When the terminal acquires the minimum matching cost value of each pixel point in the input image based on the method of S502, the minimum matching cost value can be directly determined as the target matching cost value of each pixel point.
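Steps S501–S503 amount to a winner-take-all argmin over the disparity axis of the third matching cost. A minimal NumPy sketch (the d_min offset and the volume layout are assumptions):

```python
import numpy as np

def wta_disparity(cost, d_min=0):
    """WTA: for each pixel, pick the disparity whose matching cost is smallest.

    Returns (disparity map, target matching cost map): the minimum matching
    cost value of each pixel is its target matching cost value (S503).
    """
    best = cost.argmin(axis=2)          # index of the minimum along the disparity axis
    target_cost = cost.min(axis=2)      # the minimum (= target) matching cost value
    return best + d_min, target_cost
```

The disparity map of S105/S712 is then simply the first returned array.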
In practical applications, after the terminal acquires the disparity map of the input image based on any of the above embodiments, the terminal may further correct the disparity value corresponding to the unreliable pixel point on the disparity map, so as to obtain a disparity map with high accuracy, and therefore, in an embodiment, after S104, as shown in fig. 8, the method further includes:
S601, unreliable pixel points in the input image are detected.
The unreliable pixel points may include mismatching points, occlusion points, or points on weak texture. After the terminal acquires the input image of the left view and the input image of the right view, unreliable pixel points between the input image of the left view and the input image of the right view can be further detected through a left-right consistency constraint method, so that the parallax value corresponding to the unreliable pixel points can be corrected later.
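A hedged sketch of the left-right consistency constraint mentioned above: a left-view pixel (y, x) with disparity d should land on right-view pixel (y, x − d) carrying nearly the same disparity; pixels violating this (or mapping outside the image) are flagged as unreliable. The tolerance of one disparity level is an illustrative choice, not a value fixed by the patent.

```python
import numpy as np

def unreliable_pixels(disp_left, disp_right, tol=1):
    """Flag mismatched / occluded pixels via the left-right consistency check."""
    H, W = disp_left.shape
    bad = np.zeros((H, W), dtype=bool)
    for y in range(H):
        for x in range(W):
            d = int(disp_left[y, x])
            xr = x - d                      # corresponding column in the right view
            if xr < 0 or xr >= W or abs(d - disp_right[y, xr]) > tol:
                bad[y, x] = True
    return bad
```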
And S602, correcting the parallax value corresponding to the unreliable pixel point in the parallax map by adopting a preset correction method to obtain the corrected parallax map of the input image.
The preset correction method is a correction method for the parallax value, and any type of correction method may be adopted. Optionally, the preset correction method may include at least one of weighted median filtering, mismatching pixel point rejection, and hole filling. When the terminal acquires the unreliable pixel points in the input image based on S601, the disparity values corresponding to the unreliable pixel points can be further determined in the disparity map of the input image, and then the corresponding preset correction method is adopted to correct the disparity values, thereby avoiding inaccurate disparity values caused by factors such as mismatching or occlusion. Therefore, the disparity map acquisition method provided by the application further improves the accuracy of the acquired disparity map through the post-processing step described in this embodiment.
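As one possible reading of the correction step, an unreliable disparity can be replaced by the median of the reliable disparities in its neighbourhood. The plain median below is a simplified stand-in for the weighted median filtering and hole filling named above; the window radius is an illustrative parameter.

```python
import numpy as np

def correct_unreliable(disp, bad, radius=1):
    """Replace each unreliable disparity with the median of the reliable
    disparities inside its local window (simplified post-processing sketch)."""
    H, W = disp.shape
    out = disp.astype(np.float64).copy()
    for y, x in zip(*np.nonzero(bad)):
        y0, y1 = max(0, y - radius), min(H, y + radius + 1)
        x0, x1 = max(0, x - radius), min(W, x + radius + 1)
        window = disp[y0:y1, x0:x1][~bad[y0:y1, x0:x1]]  # reliable neighbours only
        if window.size:
            out[y, x] = np.median(window)
    return out
```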
In summary, the present application further provides a method for acquiring a disparity map, as shown in fig. 7, the method includes:
S701, acquiring a left view and a right view.
S702, respectively calculating the matching cost values of all the pixel points in the left view and the right view by adopting an AD matching cost function to obtain the AD matching cost of the left view and the AD matching cost of the right view, and respectively calculating the matching cost values of all the pixel points in the left view and the right view by adopting a BT matching cost function to obtain the BT matching cost of the left view and the BT matching cost of the right view.
S703, fusing the AD matching cost and the BT matching cost by adopting a preset fusion method to obtain first matching costs of the left view and the right view.
And S704, performing matching cost aggregation on the first matching cost by adopting a fixed window matching cost summation mode to obtain second matching costs of the left view and the right view.
S705, carrying out weighted summation on the Canny operator, the gradient operator in the horizontal direction and the gradient operator in the vertical direction to obtain a positioning operator.
And S706, determining the coordinates of the edge pixel points in the left view and the right view by adopting a positioning operator.
And S707, obtaining an edge matching cost value in the second matching cost according to the coordinates of the edge pixel points.
And S708, optimizing the edge matching cost value by adopting an SGM algorithm or a GSM algorithm to obtain a third matching cost of the left view and the right view.
S709, extracting all matching cost values corresponding to each pixel point in the left view and the right view from the third matching cost.
And S710, comparing all the matching cost values corresponding to each pixel point in the left view and the right view by adopting a WTA algorithm to obtain the minimum matching cost value of each pixel point.
And S711, determining the minimum matching cost value of each pixel point in the left view and the right view as the target matching cost value of each pixel point in the left view and the right view.
And S712, obtaining the disparity maps of the left view and the right view according to the disparity value corresponding to the target matching cost value of each pixel point in the left view and the right view.
And S713, detecting unreliable pixel points in the left view and the right view.
And S714, correcting the parallax values corresponding to the unreliable pixel points in the parallax maps of the left view and the right view by adopting a preset correction method to obtain the corrected parallax maps, wherein the preset correction method may include at least one of weighted median filtering, mismatching pixel point rejection and hole filling.
In this embodiment, the matching costs of the left view and the right view are calculated by fusing the AD matching cost function and the BT matching cost function, which facilitates parallel acceleration and avoids the inaccuracy caused by a single matching cost function. In addition, the edge pixel points of the left view and the right view are extracted by fusing the Canny operator with the gradient operators, which extracts more edge pixel points and thus improves the accuracy of edge extraction. Furthermore, the SGM algorithm or the GSM algorithm with high calculation accuracy is adopted to optimize the edge matching cost values, which improves the calculation accuracy at edge points while reducing the amount of calculation in non-edge regions. In summary, the disparity map acquisition method provided by the application achieves high accuracy of the acquired disparity map while saving a large amount of calculation, thereby improving the operation rate.
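The core of the pipeline above (S702 AD cost, S704 fixed-window aggregation, S710–S712 WTA disparity) can be condensed into one sketch. BT fusion, the SGM edge refinement and the post-processing steps are omitted here, and the large constant used for out-of-range disparities is an assumption of this sketch.

```python
import numpy as np

def disparity_pipeline(left, right, max_disp, radius=1):
    """Minimal AD cost -> fixed-window aggregation -> WTA disparity sketch."""
    H, W = left.shape
    BIG = 255.0                                    # cost for out-of-range disparities
    cost = np.full((H, W, max_disp + 1), BIG)
    for d in range(max_disp + 1):                  # S702: AD matching cost
        cost[:, d:, d] = np.abs(left[:, d:].astype(float) - right[:, :W - d])
    r = radius                                     # S704: fixed-window aggregation
    padded = np.pad(cost, ((r, r), (r, r), (0, 0)), constant_values=BIG)
    agg = np.zeros_like(cost)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            agg += padded[dy:dy + H, dx:dx + W]
    return agg.argmin(axis=2)                      # S710-S712: WTA disparity map
```

On a synthetic pair shifted by one pixel, interior pixels recover a disparity of 1.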
It should be understood that although the various steps in the flowcharts of figs. 2-7 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-7 may include multiple sub-steps or multiple stages, which are not necessarily performed at the same moment but may be performed at different moments, and the order of execution of these sub-steps or stages is not necessarily sequential.
In one embodiment, as shown in fig. 9, there is provided a disparity map acquiring apparatus, including: a cost calculation module 11, a cost aggregation module 12, a cost optimization module 13, and a disparity calculation module 14, wherein:
the cost calculation module 11 is configured to calculate a matching cost value of each pixel in the input image by using a fusion matching cost function, so as to obtain a first matching cost of the input image; the fusion matching cost function is a function obtained by fusing at least two types of matching cost functions;
the cost aggregation module 12 is configured to perform matching cost aggregation on the first matching cost to obtain a second matching cost of the input image;
the cost optimization module 13 is configured to optimize, by using an SGM algorithm or a GSM algorithm, a matching cost value corresponding to an edge pixel point in the input image in the second matching cost to obtain a third matching cost of the input image;
and the parallax calculation module 14 is configured to determine target matching cost values of all pixel points in the input image according to the third matching cost, and obtain a parallax map of the input image according to the parallax value corresponding to the target matching cost values.
In one embodiment, the fusion matching cost function is a function obtained by fusing a gradient gray-scale correlation AD matching cost function and a left-right sampling BT matching cost function.
In an embodiment, as shown in fig. 10, the cost optimization module 13 includes: a positioning unit 131, a first determining unit 132 and an optimizing unit 133, wherein:
the positioning unit 131 is configured to determine coordinates of edge pixel points in the input image by using a preset positioning operator;
the first determining unit 132 is configured to obtain an edge matching cost value in the second matching cost according to the coordinate of the edge pixel;
the optimizing unit 133 is configured to perform optimization processing on the edge matching cost value by using an SGM algorithm or a GSM algorithm, so as to obtain a third matching cost of the input image.
In an embodiment, the positioning unit 131 is specifically configured to perform weighted summation on the Canny operator, the gradient operator in the horizontal direction, and the gradient operator in the vertical direction to obtain a positioning operator, and determine the coordinates of the edge pixel point in the input image by using the positioning operator.
In one embodiment, as shown in fig. 11, the cost aggregation module 12 includes: a second determination unit 121 and an aggregation unit 122, wherein:
a second determining unit 121, configured to determine, according to the coordinates and the preset radius of each pixel point of the input image, a fixed window corresponding to each pixel point;
And the aggregation unit 122 is configured to aggregate matching cost values, corresponding to the first matching cost, of the pixels in the fixed window corresponding to each pixel, so as to obtain a second matching cost of the input image.
In one embodiment, as shown in fig. 12, the disparity calculating module 14 includes: a third determining unit 141, a comparing unit 142, and a fourth determining unit 143, wherein:
the third determining unit 141 is configured to obtain all matching cost values corresponding to each pixel point in the input image in the third matching cost;
a comparing unit 142, configured to compare all matching cost values corresponding to each pixel point by using a WTA algorithm, so as to obtain a minimum matching cost value of each pixel point;
the fourth determining unit 143 is configured to determine the minimum matching cost value of each pixel point as the target matching cost value of each pixel point.
In one embodiment, as shown in fig. 13, based on the disparity map obtaining apparatus shown in fig. 9, the disparity map obtaining apparatus further includes:
the detection module 15 is used for detecting unreliable pixel points in the input image;
and the correction module 16 is configured to correct the parallax value corresponding to the unreliable pixel point in the parallax map by using a preset correction method, so as to obtain a corrected parallax map of the input image.
In one embodiment, the preset correction method includes at least one of weighted median filtering, mismatching pixel point elimination, and hole filling.
For the specific definition of the disparity map acquisition device, reference may be made to the above definition of a disparity map acquisition method, and details are not repeated here. The modules in the disparity map obtaining device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 14. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a disparity map acquisition method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 14 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
calculating the matching cost value of each pixel point in the input image by adopting a fusion matching cost function to obtain a first matching cost of the input image; the fusion matching cost function is a function obtained by fusing at least two types of matching cost functions;
performing matching cost aggregation on the first matching cost to obtain a second matching cost of the input image;
optimizing a matching cost value corresponding to the edge pixel point in the input image in the second matching cost by adopting an SGM algorithm or a GSM algorithm to obtain a third matching cost of the input image;
and determining the target matching cost values of all pixel points in the input image according to the third matching cost, and obtaining a disparity map of the input image according to the disparity value corresponding to the target matching cost values.
The implementation principle and technical effect of the computer device provided by the above embodiment are similar to those of the above method embodiment, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, the computer program, when executed by a processor, further implementing the steps of:
calculating the matching cost value of each pixel point in the input image by adopting a fusion matching cost function to obtain a first matching cost of the input image; the fusion matching cost function is a function obtained by fusing at least two types of matching cost functions;
performing matching cost aggregation on the first matching cost to obtain a second matching cost of the input image;
optimizing a matching cost value corresponding to the edge pixel point in the input image in the second matching cost by adopting an SGM algorithm or a GSM algorithm to obtain a third matching cost of the input image;
and determining the target matching cost values of all pixel points in the input image according to the third matching cost, and obtaining a disparity map of the input image according to the disparity value corresponding to the target matching cost values.
The implementation principle and technical effect of the computer-readable storage medium provided by the above embodiments are similar to those of the above method embodiments, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (11)

1. A method for acquiring a disparity map, the method comprising:
calculating the matching cost value of each pixel point in the input image by adopting a fusion matching cost function to obtain a first matching cost of the input image; the fusion matching cost function is a function obtained by fusing at least two types of matching cost functions, the first matching cost is a data set, and the data set comprises pixel point coordinates and each parallax value in a parallax searching range;
Performing matching cost aggregation on the first matching cost to obtain a second matching cost of the input image;
optimizing the corresponding matching cost value of the edge pixel points in the input image in the second matching cost by adopting an SGM algorithm or a GSM algorithm to obtain a third matching cost of the input image;
and determining target matching cost values of all pixel points in the input image according to the third matching cost, and obtaining a disparity map of the input image according to a disparity value corresponding to the target matching cost values.
2. The method according to claim 1, wherein the fusion matching cost function is a function obtained by fusing a gradient gray-scale correlation AD matching cost function and a left-right sampling BT matching cost function.
3. The method according to claim 1 or 2, wherein the optimizing an edge matching cost value in the second matching cost by using an SGM algorithm or a GSM algorithm to obtain a third matching cost of the input image comprises:
determining the coordinates of edge pixel points in the input image by adopting a preset positioning operator;
obtaining an edge matching cost value in the second matching cost according to the coordinates of the edge pixel points;
And optimizing the edge matching cost value by adopting the SGM algorithm or the GSM algorithm to obtain a third matching cost of the input image.
4. The method of claim 3, wherein determining coordinates of edge pixels in the input image using a predetermined positioning operator comprises:
carrying out weighted summation on the Canny operator, the gradient operator in the horizontal direction and the gradient operator in the vertical direction to obtain the positioning operator;
and determining the coordinates of the edge pixel points in the input image by adopting the positioning operator.
5. The method according to claim 1, wherein the aggregating the first matching costs to obtain a second matching cost of the input image comprises:
determining a fixed window corresponding to each pixel point according to the coordinate of each pixel point of the input image and a preset radius;
and aggregating the corresponding matching cost values of the pixel points in the fixed window corresponding to each pixel point in the first matching cost to obtain a second matching cost of the input image.
6. The method of claim 1, wherein determining the target matching cost values of all pixel points in the input image according to the third matching cost comprises:
Obtaining all matching cost values corresponding to each pixel point in the input image in the third matching cost;
comparing all the matching cost values corresponding to each pixel point by adopting a WTA algorithm to obtain the minimum matching cost value of each pixel point;
and determining the minimum matching cost value of each pixel point as the target matching cost value of each pixel point.
7. The method according to claim 1, wherein after obtaining the disparity map of the input image according to the disparity value corresponding to the target matching cost value, the method further comprises:
detecting unreliable pixel points in the input image;
and correcting the parallax value corresponding to the unreliable pixel point in the parallax map by adopting a preset correction method to obtain the corrected parallax map of the input image.
8. The method of claim 7, wherein the predetermined correction method comprises at least one of weighted median filtering, mismatched pixel rejection, and hole filling.
9. An apparatus for obtaining a disparity map, the apparatus comprising:
the cost calculation module is used for calculating the matching cost value of each pixel point in the input image by adopting a fusion matching cost function to obtain a first matching cost of the input image; the fusion matching cost function is a function obtained by fusing at least two types of matching cost functions, the first matching cost is a data set, and the data set comprises pixel point coordinates and each parallax value in a parallax searching range;
The cost aggregation module is used for performing matching cost aggregation on the first matching cost to obtain a second matching cost of the input image;
the cost optimization module is used for optimizing the corresponding matching cost value of the edge pixel point in the input image in the second matching cost by adopting an SGM algorithm or a GSM algorithm to obtain a third matching cost of the input image;
and the parallax calculation module is used for determining the target matching cost values of all the pixel points in the input image according to the third matching cost, and obtaining the parallax image of the input image according to the parallax value corresponding to the target matching cost values.
10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN201910911950.2A 2019-09-25 2019-09-25 Disparity map acquisition method and device, computer equipment and storage medium Active CN110866535B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910911950.2A CN110866535B (en) 2019-09-25 2019-09-25 Disparity map acquisition method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910911950.2A CN110866535B (en) 2019-09-25 2019-09-25 Disparity map acquisition method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110866535A CN110866535A (en) 2020-03-06
CN110866535B true CN110866535B (en) 2022-07-29

Family

ID=69652435

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910911950.2A Active CN110866535B (en) 2019-09-25 2019-09-25 Disparity map acquisition method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110866535B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113345001A (en) * 2021-05-19 2021-09-03 智车优行科技(北京)有限公司 Disparity map determination method and device, computer-readable storage medium and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680510A (en) * 2013-12-18 2015-06-03 北京大学深圳研究生院 RADAR parallax image optimization method and stereo matching parallax image optimization method and system
WO2016180325A1 (en) * 2015-05-12 2016-11-17 努比亚技术有限公司 Image processing method and device
CN108520534A (en) * 2018-04-23 2018-09-11 河南理工大学 A kind of adaptive multimodality fusion Stereo Matching Algorithm
CN109919991A (en) * 2017-12-12 2019-06-21 杭州海康威视数字技术股份有限公司 A kind of depth information determines method, apparatus, electronic equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680510A (en) * 2013-12-18 2015-06-03 北京大学深圳研究生院 RADAR parallax image optimization method and stereo matching parallax image optimization method and system
WO2016180325A1 (en) * 2015-05-12 2016-11-17 努比亚技术有限公司 Image processing method and device
CN109919991A (en) * 2017-12-12 2019-06-21 杭州海康威视数字技术股份有限公司 A kind of depth information determines method, apparatus, electronic equipment and storage medium
CN108520534A (en) * 2018-04-23 2018-09-11 河南理工大学 A kind of adaptive multimodality fusion Stereo Matching Algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Binocular stereo matching based on adaptive-weight AD-Census transform; Wang Yunfeng et al.; Advanced Engineering Sciences; 2018-07-31; full text *
Stereo matching algorithm based on edge features and confidence; Li Zhijiang et al.; Packaging Engineering; 2014-12-31; full text *

Also Published As

Publication number Publication date
CN110866535A (en) 2020-03-06

Similar Documents

Publication Publication Date Title
US9350969B2 (en) Target region filling involving source regions, depth information, or occlusions
CN111639626A (en) Three-dimensional point cloud data processing method and device, computer equipment and storage medium
CN112967339B (en) Vehicle pose determining method, vehicle control method and device and vehicle
US9380286B2 (en) Stereoscopic target region filling
CN111402152A (en) Disparity map processing method and device, computer equipment and storage medium
CN111882655B (en) Method, device, system, computer equipment and storage medium for three-dimensional reconstruction
CN113132717A (en) Data processing method, terminal and server
CN111080571A (en) Camera shielding state detection method and device, terminal and storage medium
CN114359334A (en) Target tracking method and device, computer equipment and storage medium
CN110866535B (en) Disparity map acquisition method and device, computer equipment and storage medium
CN113177886B (en) Image processing method, device, computer equipment and readable storage medium
US9098746B2 (en) Building texture extracting apparatus and method thereof
CN111721283B (en) Precision detection method and device for positioning algorithm, computer equipment and storage medium
CN111914890A (en) Image block matching method between images, image registration method and product
CN115272470A (en) Camera positioning method and device, computer equipment and storage medium
CN115550558A (en) Automatic exposure method and device for shooting equipment, electronic equipment and storage medium
CN115294280A (en) Three-dimensional reconstruction method, apparatus, device, storage medium, and program product
WO2021238499A1 (en) Method and device for fast binocular image processing
CN112615993A (en) Depth information acquisition method, binocular camera module, storage medium and electronic equipment
CN112396117A (en) Image detection method and device and electronic equipment
CN110223257B (en) Method and device for acquiring disparity map, computer equipment and storage medium
CN110838138A (en) Repetitive texture detection method, device, computer equipment and storage medium
CN116518981B (en) Aircraft visual navigation method based on deep learning matching and Kalman filtering
CN117671007B (en) Displacement monitoring method and device, electronic equipment and storage medium
CN113766090B (en) Image processing method, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant