CN110570468A - Binocular vision depth estimation method and system based on deep learning
- Publication number
- CN110570468A (application number CN201910759790.4A)
- Authority
- CN
- China
- Prior art keywords
- image information
- module
- scale feature
- information
- disparity value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20228—Disparity calculation for image-based rendering
Abstract
The invention relates to a binocular vision depth estimation method and system based on deep learning. The method comprises the following steps: acquiring image information of a target object simultaneously through a first camera and a second camera, wherein the first camera acquires first image information of a plurality of target objects and the second camera acquires second image information of the plurality of target objects; reading the first image information and the second image information through a reading module; sending a target instruction to a calculation module when a judging module judges that the reading module has read the first image information and the second image information simultaneously; and calculating, by the calculation module, a disparity value between the first image information and the second image information using a neural network, and converting the disparity value into target image depth information. By using a deep neural network, the method omits the coplanar rectification of the cameras, alleviates the feature-matching problem in disparity-map calculation, and obtains a more accurate depth estimation result than traditional methods.
Description
Technical Field
The invention relates to the technical field of binocular vision, in particular to a binocular vision depth estimation method and system based on deep learning.
Background
Binocular camera depth estimation is widely used in autonomous driving: depth estimation yields a depth value for each pixel, which facilitates estimating obstacle positions. The traditional binocular depth estimation method first obtains the intrinsic and extrinsic camera parameters and the camera baseline distance through calibration, then computes the disparity map between the left and right cameras, and finally computes the depth of each corresponding point from the disparity map and the baseline distance; the accuracy of the disparity map therefore directly determines the accuracy of the depth calculation. Computing the disparity map requires undistorting and rectifying the images so that the two cameras are exactly coplanar, and then finding, via the epipolar constraint and feature matching, the imaging positions of the same point in real space in the two cameras. In practice it is very difficult to make the cameras exactly coplanar; moreover, image feature matching is strongly affected by illumination, and the farther an object is, the smaller the disparity between the two cameras becomes. All of these factors degrade the accuracy of the disparity map and, in turn, of the depth estimate.
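The disparity-to-depth conversion that both the traditional pipeline and the proposed method ultimately rely on is the stereo triangulation relation depth = f · B / d, which also explains why accuracy degrades with distance (a minimal sketch; variable names are ours, not from the patent):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic stereo triangulation: depth = f * B / d.

    disparity_px: disparity in pixels (must be > 0)
    focal_px:     focal length expressed in pixels
    baseline_m:   camera baseline in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# For a 720 px focal length and a 0.12 m baseline, a 24 px disparity
# corresponds to 3.6 m of depth. Halving the disparity doubles the depth,
# so distant objects (small disparity) suffer the largest depth error.
```

Because depth varies as 1/d, a fixed one-pixel matching error costs far more depth accuracy for far objects than for near ones, which is exactly the precision problem the background describes.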
Disclosure of Invention
Aiming at the above defects in the prior art, the invention aims to provide a binocular vision depth estimation method and system based on deep learning, so as to solve the problems existing in the prior art.
The purpose of the invention and the technical problem to be solved are realized by adopting the following technical scheme.
The binocular vision depth estimation method based on deep learning provided by the invention comprises the following steps: simultaneously acquiring image information of a target object through a first camera and a second camera, wherein the first camera acquires first image information of a plurality of target objects, and the second camera acquires second image information of the plurality of target objects; reading the first image information and the second image information through a reading module; judging whether the reading module simultaneously reads the first image information and the second image information or not through a judging module in a processing module, and sending a target instruction to a calculating module when the judging module judges that the reading module simultaneously reads the first image information and the second image information; calculating a disparity value between the first image information and the second image information by the calculation module through a neural network, and converting the disparity value into target image depth information; and sending the target image depth information to external equipment through a communication module.
In an embodiment of the present invention, the method further includes: when the judging module judges that the reading module does not read the first image information and the second image information simultaneously, further judging whether the first image information and the second image information are not read simultaneously for multiple times continuously; if yes, sending a first instruction to an exit module; if not, a second instruction is sent to the warning module.
In an embodiment of the invention, the binocular vision depth estimation system based on deep learning is exited through the exit module, wherein the first instruction is an exit instruction.
In an embodiment of the present invention, the warning module issues abnormal warning information, wherein the second instruction is a warning instruction.
In an embodiment of the present invention, after the warning module issues the abnormal warning information, the action of simultaneously acquiring the image information of the target object by the first camera and the second camera is executed again.
In an embodiment of the present invention, the step of calculating, by the calculation module, a disparity value between the first image information and the second image information by using a neural network, and converting the disparity value into target image depth information further includes: performing feature extraction on the image information of the target object through an extraction module to obtain three features of different scales, namely a first scale feature, a second scale feature and a third scale feature, wherein the first scale feature is 1/16 of the image size of the target object, the second scale feature is 1/8 of the image size of the target object, and the third scale feature is 1/4 of the image size of the target object; and calculating, by the calculation module, a disparity value of the first scale feature according to the first scale feature, performing information fusion on the disparity value information of the first scale feature and the second scale feature, calculating a disparity value of the second scale feature, performing information fusion on the disparity value information of the second scale feature and the third scale feature, calculating a disparity value of the third scale feature, performing upsampling and fusion on the disparity values of the three scale features, outputting a specific disparity value, and calculating the target image depth information according to the specific disparity value and a camera baseline.
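The coarse-to-fine scheme above — estimate disparity at the smallest scale, then repeatedly upsample and fuse with the next finer scale — can be sketched with plain numpy (a minimal illustration under our own naming; the patent's learned fusion is replaced here by simple averaging):

```python
import numpy as np

def upsample_disparity(disp, factor=2):
    # Nearest-neighbor upsampling. Disparity values must be rescaled too:
    # a 1-pixel shift at 1/16 scale is a 2-pixel shift at 1/8 scale.
    up = np.repeat(np.repeat(disp, factor, axis=0), factor, axis=1)
    return up * factor

def fuse(coarse_up, fine):
    # Stand-in for the learned information fusion: simple averaging.
    return 0.5 * (coarse_up + fine)

# Disparity maps at 1/16, 1/8 and 1/4 of a hypothetical 64x64 image.
d16 = np.full((4, 4), 1.0)                                   # coarsest estimate
d8  = fuse(upsample_disparity(d16), np.full((8, 8), 2.0))    # second scale
d4  = fuse(upsample_disparity(d8),  np.full((16, 16), 4.0))  # finest scale
```

In the patent the fusion is performed by the network itself; averaging here only shows how the resolutions and disparity magnitudes line up across the three scales.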
The purpose of the invention and the technical problem to be solved are also realized by adopting the following technical scheme.
The binocular vision depth estimation system based on deep learning according to the invention comprises: a binocular camera comprising a first camera and a second camera, wherein the first camera and the second camera are used for simultaneously acquiring image information of a target object, the first camera acquires first image information of a plurality of target objects, and the second camera acquires second image information of the plurality of target objects; a reading module, configured to read the first image information and the second image information respectively; a processing module comprising a judging module, wherein the judging module is used for judging whether the reading module reads the first image information and the second image information simultaneously; a calculation module, configured to calculate a disparity value between the first image information and the second image information using a neural network when the judging module judges that the reading module simultaneously reads the first image information and the second image information, and to convert the disparity value into target image depth information; and a communication module, configured to send the target image depth information to external equipment.
In an embodiment of the present invention, the determining module is further configured to further determine whether the first image information and the second image information are not read simultaneously for multiple times when it is determined that the first image information and the second image information are not read simultaneously by the reading module; if yes, sending a first instruction to an exit module; if not, sending a second instruction to the warning module; the exit module is used for exiting the binocular vision depth estimation system based on deep learning, wherein the first instruction is an exit instruction; the warning module is used for sending out abnormal warning information, wherein the second instruction is a warning instruction.
In an embodiment of the present invention, the system further includes an extraction module, where the extraction module is configured to perform feature extraction on the image information of the target object to obtain three features of different scales, namely a first scale feature, a second scale feature and a third scale feature, where the first scale feature is 1/16 of the image size of the target object, the second scale feature is 1/8 of the image size of the target object, and the third scale feature is 1/4 of the image size of the target object; the calculation module is further configured to calculate a disparity value of the first scale feature according to the first scale feature, perform information fusion on the disparity value information of the first scale feature and the second scale feature, calculate a disparity value of the second scale feature, perform information fusion on the disparity value information of the second scale feature and the third scale feature, calculate a disparity value of the third scale feature, perform upsampling and fusion on the disparity values of the three scale features, output a specific disparity value, and calculate the target image depth information according to the specific disparity value and a camera baseline.
In order to achieve the above object, the present invention further provides a computer-readable storage medium on which a binocular vision depth estimation program based on deep learning is stored; when executed by a processor, the program implements the steps of the binocular vision depth estimation method based on deep learning.
By the above technical scheme, the invention has the following beneficial effects: image information of a target object is acquired simultaneously through a first camera and a second camera, wherein the first camera acquires first image information of a plurality of target objects and the second camera acquires second image information of the plurality of target objects; the first image information and the second image information are read through a reading module; a judging module judges whether the reading module reads the first image information and the second image information simultaneously, and sends a target instruction to a calculation module when it judges that the reading module has done so; the calculation module calculates a disparity value between the first image information and the second image information using a neural network, and converts the disparity value into target image depth information; and the target image depth information is sent to external equipment through a communication module. Compared with traditional depth estimation methods, the method uses a deep neural network with an end-to-end design, omits the coplanar rectification of the cameras, alleviates the feature-matching problem in disparity-map calculation, and obtains a more accurate depth estimation result.
drawings
Fig. 1 is a schematic structural diagram of a binocular vision depth estimation system based on deep learning according to an embodiment of the present invention;
Fig. 2 is a flowchart illustrating steps of a binocular vision depth estimation method based on deep learning according to a second embodiment of the present invention;
Fig. 3 is a flowchart illustrating steps of a binocular vision depth estimation method based on deep learning according to a third embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects of the present invention for achieving the predetermined objects, the following detailed description of a binocular vision depth estimation method based on deep learning and the system thereof, together with their specific implementation, structure, features and effects, is given with reference to the accompanying drawings and preferred embodiments. It is to be understood that the embodiments described are only a few embodiments of the present invention, not all of them. All other embodiments that can be derived by a person skilled in the art from the embodiments of the present invention without any inventive step fall within the scope of the present invention.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Example one
As shown in fig. 1, a schematic structural diagram of a binocular vision depth estimation system based on deep learning according to an embodiment of the present invention includes: the binocular camera 110 comprises a first camera 111 and a second camera 112, wherein the first camera 111 and the second camera 112 are used for simultaneously acquiring image information of a target object, the first camera 111 acquires a plurality of pieces of first image information of the target object, and the second camera 112 acquires a plurality of pieces of second image information of the target object; a reading module 120, configured to read the first image information and the second image information respectively; a processing module 130, including a determining module 131, configured to determine whether the reading module 120 reads the first image information and the second image information at the same time; a calculating module 140, configured to calculate a disparity value between the first image information and the second image information by using a neural network when the determining module 131 determines that the reading module simultaneously reads the first image information and the second image information, and convert the disparity value into target image depth information; and a communication module 150, configured to send the target image depth information to an external device.
In an embodiment of the present invention, the determining module 131 is further configured to determine whether the first image information and the second image information are not read simultaneously for multiple times when it is determined that the reading module 120 does not read the first image information and the second image information simultaneously; if yes, sending a first instruction to an exit module; if not, sending a second instruction to the warning module; the exit module is used for exiting the binocular vision depth estimation system based on deep learning, wherein the first instruction is an exit instruction; the warning module is used for sending out abnormal warning information, wherein the second instruction is a warning instruction.
In an embodiment of the present invention, the system further includes an extraction module 160, where the extraction module 160 is configured to perform feature extraction on the image information of the target object to obtain three features of different scales, namely a first scale feature, a second scale feature and a third scale feature, where the first scale feature is 1/16 of the image size of the target object, the second scale feature is 1/8 of the image size of the target object, and the third scale feature is 1/4 of the image size of the target object; the calculation module 140 is further configured to calculate a disparity value of the first scale feature according to the first scale feature, perform information fusion on the disparity value information of the first scale feature and the second scale feature, calculate a disparity value of the second scale feature, perform information fusion on the disparity value information of the second scale feature and the third scale feature, calculate a disparity value of the third scale feature, perform upsampling and fusion on the disparity values of the three scale features, output a specific disparity value, and calculate the target image depth information according to the specific disparity value and the camera baseline.
Example two
As shown in Fig. 2, which is a flowchart of the steps of a binocular vision depth estimation method based on deep learning according to the second embodiment of the present invention, the method includes the following steps:
Step S210: simultaneously acquiring image information of a target object through a first camera and a second camera; the first camera collects first image information of a plurality of target objects, and the second camera collects second image information of a plurality of target objects;
Step S220: reading the first image information and the second image information through a reading module;
Step S230: judging whether the reading module simultaneously reads the first image information and the second image information or not through a judging module in a processing module, and sending a target instruction to a calculating module when the judging module judges that the reading module simultaneously reads the first image information and the second image information;
Step S240: calculating a disparity value between the first image information and the second image information by the calculation module through a neural network, and converting the disparity value into target image depth information;
Step S250: sending the target image depth information to external equipment through a communication module;
The calculation module 140 packages the converted image depth information into the format required by the communication module 150 and sends it to the communication module 150 for broadcasting, so that other modules can use the information.
In an embodiment of the present invention, the method further includes: when the determining module 131 determines that the reading module 120 does not read the first image information and the second image information at the same time, further determining whether the first image information and the second image information are not read at the same time for multiple consecutive times; if yes, sending a first instruction to an exit module; if not, a second instruction is sent to the warning module.
In an embodiment of the present invention, the method further includes: and exiting the binocular vision depth estimation system based on deep learning through the exiting module, wherein the first instruction is an exiting instruction.
In an embodiment of the present invention, the method further includes: and sending abnormal warning information through the warning module, wherein the second instruction is a warning instruction.
In an embodiment of the present invention, the method further includes: when the warning module sends out abnormal warning information, returning to execute the action of simultaneously acquiring the image information of the target object by the first camera and the second camera.
In an embodiment of the present invention, the step of calculating a disparity value between the first image information and the second image information by the calculation module 140 using a neural network, and converting the disparity value into the target image depth information, further includes: extracting features of the image information of the target object through an extraction module 160 to obtain three features of different scales, namely a first scale feature, a second scale feature and a third scale feature, wherein the first scale feature is 1/16 of the image size of the target object, the second scale feature is 1/8 of the image size of the target object, and the third scale feature is 1/4 of the image size of the target object; the calculation module 140 calculates the disparity value of the first scale feature according to the first scale feature, performs information fusion on the disparity value information of the first scale feature and the second scale feature, calculates the disparity value of the second scale feature, performs information fusion on the disparity value information of the second scale feature and the third scale feature, calculates the disparity value of the third scale feature, performs upsampling and fusion on the disparity values of the three scale features, outputs a specific disparity value, and calculates the target image depth information according to the specific disparity value and the camera baseline.
Specifically, the obtained first scale feature is fed into DisparityNet, and the calculation module 140 calculates the disparity value of the first scale feature. The DisparityNet structure is designed as follows. First, a feature similarity matrix is computed: the differences between the feature maps of the first image information and the second image information are calculated using an L1 loss within an M×M search range, where M is chosen according to the size of the feature map; the smaller the feature map, the higher its level of abstraction, which in theory works better for distant objects, since disparity shrinks as distance grows. Second, 3D convolution: 3D convolutional feature extraction is performed on the computed similarity matrix. Third, disparity map regression: the disparity map is regressed directly in an end-to-end manner.
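DisparityNet itself is only described at a high level, so the following numpy sketch shows the generic pattern the three steps suggest — an L1 feature-difference cost volume followed by disparity regression. It searches a 1-D horizontal disparity range instead of the M×M window of the patent, omits the 3D-convolution stage, and every name in it is illustrative rather than the patent's:

```python
import numpy as np

def cost_volume_l1(left_feat, right_feat, max_disp):
    """L1 feature-difference cost volume of shape (max_disp, H, W).

    left_feat, right_feat: (C, H, W) feature maps from the two views.
    vol[d, y, x] is the matching cost of left pixel (y, x) against
    right pixel (y, x - d).
    """
    C, H, W = left_feat.shape
    vol = np.zeros((max_disp, H, W))
    for d in range(max_disp):
        shifted = np.zeros_like(right_feat)
        shifted[:, :, d:] = right_feat[:, :, :W - d]  # shift right view by d
        vol[d] = np.abs(left_feat - shifted).sum(axis=0)
    return vol

def soft_argmin(vol):
    # Differentiable disparity regression over the cost dimension;
    # a trained network would apply 3D convolutions to vol before this.
    w = np.exp(-vol)
    w = w / w.sum(axis=0, keepdims=True)
    d = np.arange(vol.shape[0]).reshape(-1, 1, 1)
    return (w * d).sum(axis=0)          # (H, W) sub-pixel disparity map
```

The soft-argmin step is what makes end-to-end regression of the disparity map possible: the network can be trained directly against ground-truth disparities without a discrete matching step.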
Example three
As shown in fig. 3, a flowchart of steps of a binocular vision depth estimation method based on deep learning according to a third embodiment of the present invention includes the following steps:
Step S310: simultaneously acquiring image information of a target object through a first camera and a second camera; the first camera collects first image information of a plurality of target objects, and the second camera collects second image information of a plurality of target objects;
Step S320: reading the first image information and the second image information through a reading module;
Step S330: judging, through the judging module, whether the reading module reads the first image information and the second image information simultaneously; if so, executing step S340; if not, executing step S360;
Step S340: calculating a disparity value between the first image information and the second image information by the calculation module using a neural network, and converting the disparity value into target image depth information;
Step S350: sending the target image depth information to external equipment through a communication module;
Step S360: judging, through the judging module, whether the first image information and the second image information have not been read simultaneously for multiple consecutive times; if so, executing step S370; if not, executing step S380;
Step S370: exiting abnormally;
Step S380: sending out abnormal warning information and then returning to step S310; that is, if the first image information and the second image information fail to be read simultaneously only transiently, an abnormal warning message is issued and acquisition resumes from step S310.
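The decision flow of steps S330-S380 amounts to a small state machine; the following pure-Python sketch captures it (the consecutive-miss threshold is not specified in the patent and is chosen arbitrarily here):

```python
MAX_CONSECUTIVE_MISSES = 5  # illustrative; the patent does not fix a threshold

def handle_frame_pair(left_ok, right_ok, state):
    """One pass of the S330/S360 decision flow.

    Returns 'compute' (S340: both frames present, run the network),
    'warn' (S380: transient miss, warn and retry from S310), or
    'exit' (S370: persistent miss, abnormal exit).
    """
    if left_ok and right_ok:
        state["misses"] = 0          # a good pair resets the failure counter
        return "compute"
    state["misses"] += 1
    if state["misses"] >= MAX_CONSECUTIVE_MISSES:
        return "exit"
    return "warn"
```

The counter reset on a successful pair is the key design point: only failures that are strictly consecutive trigger the abnormal exit, while isolated dropped frames merely produce a warning.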
The present invention further provides a computer-readable storage medium having stored thereon a binocular vision depth estimation program based on deep learning which, when executed by a processor, implements the steps of the binocular vision depth estimation method based on deep learning according to the second and third embodiments.
The invention relates to a binocular vision depth estimation method and system based on deep learning, wherein image information of a target object is acquired simultaneously through a first camera and a second camera, the first camera acquiring first image information of a plurality of target objects and the second camera acquiring second image information of the plurality of target objects; the first image information and the second image information are read through a reading module; a judging module judges whether the reading module reads the first image information and the second image information simultaneously, and sends a target instruction to a calculation module when it judges that the reading module has done so; the calculation module calculates a disparity value between the first image information and the second image information using a neural network and converts the disparity value into target image depth information; and the target image depth information is sent to external equipment through a communication module. Compared with traditional depth estimation methods, the method uses a deep neural network with an end-to-end design, omits the coplanar rectification of the cameras, alleviates the feature-matching problem in disparity-map calculation, and obtains a more accurate depth estimation result; in addition, it simplifies the network model and controls the computational cost, so that the model can be deployed on a mobile computing platform.
Although the present invention has been described with reference to preferred embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A binocular vision depth estimation method based on deep learning, characterized by comprising the following steps:
Simultaneously acquiring image information of a target object through a first camera and a second camera, wherein the first camera acquires first image information of a plurality of target objects, and the second camera acquires second image information of the plurality of target objects;
Reading the first image information and the second image information through a reading module;
Judging whether the reading module simultaneously reads the first image information and the second image information or not through a judging module in a processing module, and sending a target instruction to a calculating module when the judging module judges that the reading module simultaneously reads the first image information and the second image information;
Calculating a disparity value between the first image information and the second image information by the calculation module through a neural network, and converting the disparity value into target image depth information; and
And sending the target image depth information to external equipment through a communication module.
2. The binocular vision depth estimation method based on deep learning of claim 1, further comprising: when the judging module judges that the reading module does not read the first image information and the second image information simultaneously, further judging whether the first image information and the second image information are not read simultaneously for multiple times continuously; if yes, sending a first instruction to an exit module; if not, a second instruction is sent to the warning module.
3. The binocular vision depth estimation method based on deep learning of claim 2, wherein the binocular vision depth estimation system based on deep learning is exited through the exit module, wherein the first instruction is an exit instruction.
4. The binocular vision depth estimation method based on deep learning of claim 2, wherein abnormal warning information is issued by the warning module, wherein the second instruction is a warning instruction.
5. The binocular vision depth estimation method based on deep learning of claim 4, wherein after the warning module issues abnormal warning information, an action of simultaneously acquiring image information of a target object by the first camera and the second camera is returned to be executed.
6. The binocular vision depth estimation method based on deep learning of claim 1, wherein the step of calculating a disparity value between the first image information and the second image information by the calculation module using a neural network and converting the disparity value into target image depth information further comprises:
Performing feature extraction on the image information of the target object through an extraction module to obtain three features of different scales, namely a first scale feature, a second scale feature and a third scale feature, wherein the first scale feature is 1/16 of the image size of the target object, the second scale feature is 1/8 of the image size of the target object, and the third scale feature is 1/4 of the image size of the target object;
Calculating, by the calculation module, a disparity value of the first scale feature from the first scale feature; fusing the disparity information of the first scale feature with the second scale feature and calculating a disparity value of the second scale feature; fusing the disparity information of the second scale feature with the third scale feature and calculating a disparity value of the third scale feature; upsampling and fusing the disparity values of the three scale features to output a specific disparity value; and calculating the target image depth information from the specific disparity value and the camera baseline.
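The coarse-to-fine computation in the step above can be sketched as pure data flow. This is not the patent's network: the per-scale disparity estimator is replaced by a caller-supplied placeholder, and the additive fusion plus final averaging are assumptions standing in for the learned fusion layers:

```python
import numpy as np

def upsample2x(d):
    """Nearest-neighbour 2x upsampling; disparity values double when the
    resolution doubles, because pixel offsets scale with image size."""
    return np.kron(d, np.ones((2, 2))) * 2.0

def coarse_to_fine_disparity(feat16, feat8, feat4, estimate):
    """Data-flow sketch of claim 6 with features at 1/16, 1/8 and 1/4 of
    the input image size; `estimate` stands in for the network head."""
    d16 = estimate(feat16)                    # disparity at the 1/16 scale
    d8 = estimate(feat8) + upsample2x(d16)    # fuse the coarse prior into 1/8
    d4 = estimate(feat4) + upsample2x(d8)     # fuse into the 1/4 scale
    # bring every scale to 1/4 resolution, then fuse (here: simple average)
    return (upsample2x(upsample2x(d16)) + upsample2x(d8) + d4) / 3.0

# Toy demo with an identity "estimator" on constant feature maps.
fused = coarse_to_fine_disparity(np.ones((1, 1)), np.ones((2, 2)),
                                 np.ones((4, 4)), estimate=lambda f: f)
print(fused.shape)  # (4, 4)
```

The output lives at 1/4 resolution, matching the finest feature scale; a final 4x upsampling (not shown) would recover full-resolution disparity before the depth conversion.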
7. A binocular vision depth estimation system based on deep learning, comprising:
The binocular camera comprises a first camera and a second camera, wherein the first camera and the second camera are used for simultaneously acquiring image information of a target object, the first camera acquiring first image information of the target object and the second camera acquiring second image information of the target object;
A reading module, configured to read the first image information and the second image information respectively;
The processing module comprises a judging module, wherein the judging module is used for judging whether the reading module reads the first image information and the second image information simultaneously;
The calculation module is used for calculating a disparity value between the first image information and the second image information using a neural network when the judging module determines that the reading module has read the first image information and the second image information simultaneously, and for converting the disparity value into target image depth information; and
The communication module is used for sending the target image depth information to an external device.
8. The binocular vision depth estimation system based on deep learning of claim 7, wherein the judging module is further configured to, when it determines that the reading module has not read the first image information and the second image information simultaneously, further determine whether the two have failed to be read simultaneously for multiple consecutive times; if yes, send a first instruction to an exit module; if not, send a second instruction to a warning module;
The exit module is used for exiting the binocular vision depth estimation system based on deep learning, wherein the first instruction is an exit instruction;
The warning module is used for issuing abnormal warning information, wherein the second instruction is a warning instruction.
9. The binocular vision depth estimation system based on deep learning of claim 7, further comprising an extraction module, wherein the extraction module is configured to perform feature extraction on the image information of the target object to obtain three features of different scales, namely a first scale feature, a second scale feature and a third scale feature, wherein the first scale feature is 1/16 of the image size of the target object, the second scale feature is 1/8 of the image size of the target object, and the third scale feature is 1/4 of the image size of the target object;
The calculation module is further configured to calculate a disparity value of the first scale feature from the first scale feature; fuse the disparity information of the first scale feature with the second scale feature and calculate a disparity value of the second scale feature; fuse the disparity information of the second scale feature with the third scale feature and calculate a disparity value of the third scale feature; upsample and fuse the disparity values of the three scale features to output a specific disparity value; and calculate the target image depth information from the specific disparity value and the camera baseline.
10. A computer-readable storage medium, characterized in that a binocular vision depth estimation program based on deep learning is stored thereon, and when executed by a processor, the program implements the steps of the binocular vision depth estimation method based on deep learning according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910759790.4A CN110570468A (en) | 2019-08-16 | 2019-08-16 | Binocular vision depth estimation method and system based on depth learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110570468A true CN110570468A (en) | 2019-12-13 |
Family
ID=68775570
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910759790.4A Pending CN110570468A (en) | 2019-08-16 | 2019-08-16 | Binocular vision depth estimation method and system based on depth learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110570468A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107767413A (en) * | 2017-09-20 | 2018-03-06 | 华南理工大学 | A kind of image depth estimation method based on convolutional neural networks |
CN108230384A (en) * | 2017-11-28 | 2018-06-29 | 深圳市商汤科技有限公司 | Picture depth computational methods, device, storage medium and electronic equipment |
2019-08-16: Application CN201910759790.4A filed in China; published as CN110570468A, status Pending.
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021217444A1 (en) * | 2020-04-28 | 2021-11-04 | 深圳市大疆创新科技有限公司 | Depth map generation method, electronic device, computer processing device and storage medium |
CN113643347A (en) * | 2020-07-20 | 2021-11-12 | 黑芝麻智能科技(上海)有限公司 | Stereoscopic vision with weakly aligned heterogeneous cameras |
CN113643347B (en) * | 2020-07-20 | 2024-02-09 | 黑芝麻智能科技(上海)有限公司 | Stereoscopic vision using weakly aligned heterogeneous cameras |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022083402A1 (en) | Obstacle detection method and apparatus, computer device, and storage medium | |
EP3506161A1 (en) | Method and apparatus for recovering point cloud data | |
US10909395B2 (en) | Object detection apparatus | |
CN105627932A (en) | Distance measurement method and device based on binocular vision | |
CN107886477A (en) | Unmanned neutral body vision merges antidote with low line beam laser radar | |
CN105335955A (en) | Object detection method and object detection apparatus | |
CN104677330A (en) | Small binocular stereoscopic vision ranging system | |
US20220051425A1 (en) | Scale-aware monocular localization and mapping | |
CN104574393A (en) | Three-dimensional pavement crack image generation system and method | |
CN113160327A (en) | Method and system for realizing point cloud completion | |
CN112802092B (en) | Obstacle sensing method and device and electronic equipment | |
AU2021103300A4 (en) | Unsupervised Monocular Depth Estimation Method Based On Multi- Scale Unification | |
CN111047634A (en) | Scene depth determination method, device, equipment and storage medium | |
CN110570468A (en) | Binocular vision depth estimation method and system based on depth learning | |
CN112241978A (en) | Data processing method and device | |
US11348271B2 (en) | Image processing device and three-dimensional measuring system | |
CN114919584A (en) | Motor vehicle fixed point target distance measuring method and device and computer readable storage medium | |
CN113240750A (en) | Three-dimensional space information measuring and calculating method and device | |
KR101459522B1 (en) | Location Correction Method Using Additional Information of Mobile Instrument | |
KR102410300B1 (en) | Apparatus for measuring position of camera using stereo camera and method using the same | |
CN115546216A (en) | Tray detection method, device, equipment and storage medium | |
CN115683046A (en) | Distance measuring method, distance measuring device, sensor and computer readable storage medium | |
CN104296690A (en) | Multi-line structure light three-dimensional measuring method based on image fusion | |
CN114359891A (en) | Three-dimensional vehicle detection method, system, device and medium | |
CN113724311B (en) | Depth map acquisition method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20191213 |