CN114488103A - Distance measuring system, distance measuring method, robot, device, and storage medium - Google Patents

Distance measuring system, distance measuring method, robot, device, and storage medium

Info

Publication number
CN114488103A
CN114488103A
Authority
CN
China
Prior art keywords
light
target object
texture
robot
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111682866.1A
Other languages
Chinese (zh)
Inventor
胡洪伟 (Hu Hongwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunjing Intelligence Technology Dongguan Co Ltd
Yunjing Intelligent Shenzhen Co Ltd
Original Assignee
Yunjing Intelligence Technology Dongguan Co Ltd
Yunjing Intelligent Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunjing Intelligence Technology Dongguan Co Ltd, Yunjing Intelligent Shenzhen Co Ltd filed Critical Yunjing Intelligence Technology Dongguan Co Ltd
Priority to CN202111682866.1A
Publication of CN114488103A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves

Landscapes

  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The invention provides a distance measuring system, a distance measuring method, a robot, a device, and a storage medium. The distance measuring system comprises a stereoscopic vision device, a projection device, and a processing device; the stereoscopic vision device includes at least two image pickup devices; the projection device comprises a light emitter, a pattern mask, and a lens group. The light emitter is used for emitting light; the light passes through the pattern mask to form a light beam with a pattern; the lens group is used for projecting the patterned light beam onto a target object to form texture on the target object; each image pickup device is used for acquiring an image of the target object to obtain a texture image; and the processing device is used for determining the depth of the target object according to the texture images acquired by the at least two image pickup devices. The invention reduces the manufacturing cost of existing distance measuring devices.

Description

Distance measuring system, distance measuring method, robot, device, and storage medium
Technical Field
The invention relates to the field of computer vision, and in particular to a distance measuring system, a distance measuring method, a robot, a device, and a storage medium.
Background
Binocular stereo vision is an important branch of computer vision. Based on the parallax principle, it acquires two images of the object to be measured from different positions with imaging equipment and obtains three-dimensional geometric information of the object by calculating the position deviation between corresponding points of the images. One condition for successfully calculating object distance with the existing binocular stereo vision technology is that the measured object must have a certain degree of texture; otherwise the object cannot be detected by the algorithm at the imaging positions of the two cameras, and matching and subsequent calculation cannot be performed.
To address the problem that non-textured objects cannot be matched by a binocular vision algorithm, the prior art mainly combines a VCSEL (Vertical-Cavity Surface-Emitting Laser) with a DOE (Diffractive Optical Element). The laser emitted by the VCSEL is collimated into parallel light, and dense light spots following a certain rule are projected through the DOE, covering the object surface with dense texture that meets the algorithm's requirements. However, in this scheme of creating texture by VCSEL + DOE speckle projection, the DOE is manufactured with a very-large-scale integrated circuit process, which is expensive. Moreover, to make the texture clearer and eliminate the influence of ambient light, an infrared VCSEL is generally used and visible light is filtered out on the camera lens side, so the observable distance of the stereoscopic vision algorithm is relatively short, and information such as the original color and texture of the object is lost.
As can be seen from the above, the prior art causes problems of high cost and large loss of visual information.
Disclosure of Invention
The invention mainly aims to provide a distance measuring system, a distance measuring method, a robot, a device, and a storage medium, aiming to solve the problems of high cost and large visual information loss in the existing distance measuring technology.
To achieve the above object, the present invention provides a ranging system, comprising: a stereoscopic vision device, a projection device and a processing device; the stereoscopic vision apparatus includes at least two image pickup devices; the projection device comprises a light emitter, a pattern mask and a lens group; wherein,
the light emitter is used for emitting light; the light passes through the pattern mask to form a light beam with a pattern; the lens group is used for projecting the light beam with the pattern onto a target object to form texture on the target object;
the image pickup device is used for acquiring an image of the target object to obtain a texture image;
and the processing device is used for determining the depth of the target object according to the texture images acquired by the at least two camera devices.
Optionally, the pattern mask includes a light-shielding region and a light-transmitting region; the light-transmitting region and the light-shielding region form a pattern, and light passes through the light-transmitting region to form a light beam with the pattern.
Optionally, the light-transmitting region is a part made of transparent material; or the light-transmitting region is provided with a plurality of light-transmitting holes; the light passes through the pattern mask to form a patterned beam.
Optionally, the projection apparatus further includes a light collimator disposed between the light emitter and the pattern mask, and configured to collimate light emitted by the light emitter into parallel light.
In addition, in order to achieve the above object, the present invention further provides a distance measuring method, which is applied to a distance measuring system, wherein the distance measuring system comprises a stereoscopic vision device and a projection device; the stereoscopic vision apparatus includes at least two image pickup devices; the projection device comprises a light emitter, a pattern mask and a lens group; the distance measuring method comprises the following steps:
emitting light through the light emitter; the light passes through the pattern mask to form a light beam with a pattern;
projecting the patterned light beam onto a target object through the lens group to form a texture on the target object;
acquiring an image of a target object by the camera equipment to obtain a texture image;
and determining the depth of the target object according to the texture images acquired by the at least two camera devices.
Optionally, the step of determining the depth of the target object according to the texture images acquired by the at least two image capturing devices includes:
acquiring calibration parameters of the at least two camera devices;
respectively extracting characteristic information of texture images acquired by the at least two camera devices under different scales;
calculating corresponding matching cost between texture images acquired by the at least two camera devices according to the characteristic information under different scales;
performing cost aggregation calculation on the calculated matching cost, and obtaining a target parallax value according to a calculated result;
and calculating the depth of the target object according to the calibration parameters and the target parallax value.
Optionally, the step of calculating, according to the feature information at different scales, a matching cost corresponding to each of texture images acquired by the at least two image capturing devices includes:
and according to the feature information under different scales, traversing the texture images acquired by the at least two camera devices pixel by pixel in a preset parallax searching range, and calculating the matching cost of each pixel under different parallaxes.
Optionally, the step of calculating the depth of the target object according to the calibration parameter and the target disparity value includes:
calculating an object depth value corresponding to each pixel according to the relative distance between the at least two camera devices, the focal length of the camera devices and the target parallax value;
and calculating the depth of the target object according to the object depth value corresponding to each pixel.
In addition, the invention also provides a robot, and the robot comprises the ranging system.
Furthermore, the present invention provides a ranging apparatus, the ranging apparatus comprising a robot, a memory, a processor, and a ranging program stored in the memory and executable on the processor, the ranging program, when executed by the processor, implementing the steps of the ranging method described above.
In addition, the present invention also provides a storage medium, wherein the storage medium stores a ranging program, and the ranging program realizes the steps of the ranging method when being executed by a processor.
The invention provides a distance measuring system, a distance measuring method, a robot, a device, and a storage medium. The distance measuring system comprises a stereoscopic vision device, a projection device, and a processing device; the stereoscopic vision device includes at least two image pickup devices; the projection device comprises a light emitter, a pattern mask, and a lens group. The light emitter is used for emitting light; the light passes through the pattern mask to form a light beam with a pattern; the lens group is used for projecting the patterned light beam onto a target object to form texture on the target object; each image pickup device is used for acquiring an image of the target object to obtain a texture image; and the processing device is used for determining the depth of the target object according to the texture images acquired by the at least two image pickup devices. The distance measuring method comprises the following steps: emitting light through the light emitter, the light passing through the pattern mask to form a patterned light beam; projecting the patterned light beam onto the target object through the lens group to form a texture on the target object; acquiring an image of the target object through each image pickup device to obtain a texture image; and determining the depth of the target object according to the texture images acquired by the at least two image pickup devices. With the above structure and method, distance measurement of non-textured objects can be realized: by projecting a coarse texture onto a non-textured object, the accurate distance between the robot and the object can be measured. This overcomes the defects that existing distance measuring methods require a precise projection device and that existing projection devices cause a large loss of visual information; meanwhile, the distance measuring system of the present application can reduce the manufacturing cost of the robot.
Drawings
FIG. 1 is a schematic diagram of an apparatus in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a ranging system according to the present invention;
FIG. 3 is a schematic structural diagram of a projection apparatus in the distance measuring system according to the present invention;
FIG. 4 is a flowchart illustrating a first embodiment of a distance measuring method according to the present invention;
FIG. 5 is a schematic flowchart illustrating the details of step S40 in the first embodiment of the ranging method according to the present invention;
FIG. 6 is a schematic flowchart illustrating the details of step S45 in the first embodiment of the ranging method according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The robot related to the present application may include a cleaning robot, a housekeeping robot, a service robot, a logistics robot, and the like. Specifically, the cleaning robot may be a sweeping robot, a mopping robot, a sweeping-and-mopping robot, and the like; the cleaning robot can be used to clean the floor automatically, with application scenarios such as household indoor cleaning and the cleaning of large venues. The following description takes a cleaning robot as an example:
The cleaning robot carries cleaning elements and a driving device, which may comprise a motor and driving wheels. Driven by the driving device, the cleaning robot moves by itself along a preset cleaning path and cleans the floor with the cleaning elements. For a sweeping robot, the cleaning element is a sweeping element, and a dust suction device is arranged on the robot; during cleaning, the sweeping element sweeps dust, garbage, and the like to the dust suction port of the dust suction device, so that the dust suction device collects and temporarily stores them. For a mopping robot, the cleaning element is a mopping element (e.g., a mop cloth) that contacts the floor and wipes it as the robot moves, thereby cleaning the floor. For a sweeping-and-mopping robot, the cleaning elements include both a sweeping element and a mopping element, which can work simultaneously to sweep and mop the floor, or work separately. The sweeping element further comprises an edge brush and a rolling brush (also called a middle brush): the edge brush sweeps dust and other garbage from the outer side toward the middle area, and the rolling brush continuously sweeps the garbage into the dust suction device.
To facilitate use, a base station is often used in cooperation with the cleaning robot. The base station can charge the cleaning robot: when the robot's battery level falls below a threshold during cleaning, the robot automatically moves to the base station to be charged. For a mopping robot, the base station can also clean the mopping element (e.g., a mop cloth), which often becomes dirty after the robot mops the floor and needs to be cleaned. To this end, the mopping robot can move to the base station so that the cleaning mechanism on the base station automatically cleans its mopping element. The base station can further manage the robot, so that the robot is controlled more intelligently while executing cleaning tasks, improving the intelligence of its operation.
As shown in fig. 1, which is a schematic structural diagram of a robot in a hardware operating environment according to an embodiment of the present invention.
The main solution of the embodiment of the invention is as follows: emitting light through the light emitter, the light passing through the pattern mask to form a patterned light beam; projecting the patterned light beam onto a target object through the lens group to form a texture on the target object; acquiring an image of the target object through each image pickup device to obtain a texture image; and determining the depth of the target object according to the texture images acquired by the at least two image pickup devices.
In the prior art, a cleaning robot often encounters obstacles (such as trash cans and walls) while moving. The robot often cannot recognize these obstacles and collides with them, which affects its normal operation.
The invention provides a solution aiming to acquire obstacle information, avoid collisions between the cleaning robot and obstacles, and ensure the normal operation of the robot.
The invention provides a cleaning robot, which may be an automatic device for cleaning an environment, such as a sweeping robot or a mopping robot. In other embodiments, the robot may be another type of robot, such as a service robot.
As shown in fig. 1, the robot may include: a processor 1001 (e.g., a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, a memory 1005, and a perception unit 1006. The communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface).
A memory 1005 is provided on the robot main body, and the memory 1005 stores a program that realizes corresponding operations when executed by the processor 1001. The memory 1005 is also used to store parameters for use by the robot. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
The robot may communicate with a user terminal through the network interface 1004. The robot may also communicate with a base station via short-range communication techniques, the base station being a cleaning device used in cooperation with the robot.
The sensing unit 1006 includes various types of sensors such as a laser radar, a collision sensor, a distance sensor, a fall sensor, a counter, a gyroscope, and the like.
The laser radar is arranged at the top of the robot main body. In operation, the laser radar rotates and emits laser signals through its transmitter; the signals are reflected by obstacles, and the receiver of the laser radar receives the reflected signals. The circuit unit of the laser radar analyzes the received signals to obtain surrounding environment information, such as the distance and angle of an obstacle relative to the laser radar. Alternatively, a camera can be used instead of the laser radar: by analyzing obstacles in the images captured by the camera, the distance, angle, and the like of an obstacle relative to the camera can also be obtained.
The collision sensor includes a collision housing and a trigger sensor. The collision housing surrounds the head of the robot main body; specifically, it may be disposed at the front of the head of the robot main body and on the left and right sides of the robot main body. The trigger sensor is arranged inside the robot main body, behind the collision housing, with an elastic buffer between the collision housing and the robot main body. When the robot collides with an obstacle through the collision housing, the housing moves toward the inside of the robot and compresses the elastic buffer. After the housing has moved a certain distance inward, it contacts the trigger sensor, which is triggered and generates a signal; the signal can be sent to a robot controller in the robot main body for processing. After the collision, the robot moves away from the obstacle, and the collision housing returns to its original position under the action of the elastic buffer. In this way, the collision sensor detects obstacles and also cushions the impact of collisions.
The distance sensor may specifically be an infrared detection sensor, which can detect the distance from an obstacle to the sensor. The distance sensor may be provided at a side of the robot main body so that the distance from an obstacle near that side of the robot can be measured. The distance sensor may also be an ultrasonic ranging sensor, a laser ranging sensor, a depth sensor, or the like, which is not limited herein.
The drop sensors may be disposed at the bottom edge of the robot main body, and there may be one or more of them. When the robot moves to an edge of the floor, a drop sensor can detect the risk of the robot falling from a height, so that a corresponding anti-fall response can be performed, such as stopping or moving away from the drop position.
A counter and a gyroscope are also arranged in the robot main body. The counter accumulates the total rotation angle of the driving wheels so as to calculate the distance the robot has been driven; the gyroscope detects the rotation angle of the robot so that its orientation can be determined.
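As a rough illustration only (no such code appears in the patent), the pose update from these two sensors can be sketched as simple dead reckoning in Python; the tick resolution and all parameter names are hypothetical:

```python
import math

def update_pose(x, y, heading_rad, wheel_ticks, ticks_per_meter, gyro_delta_rad):
    # Travelled distance from the wheel counter, heading change from the gyroscope.
    dist = wheel_ticks / ticks_per_meter
    heading_rad += gyro_delta_rad
    x += dist * math.cos(heading_rad)
    y += dist * math.sin(heading_rad)
    return x, y, heading_rad
```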
Those skilled in the art will appreciate that the robot configuration shown in fig. 1 does not constitute a limitation of the robot, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
The memory 1005, which is a kind of computer storage medium as shown in fig. 1, may include therein an operating system, a network communication module, a user interface module, and a ranging program of the robot.
In the robot shown in fig. 1, the network interface 1004 is mainly used for connecting to a base station, a charging stand, and other equipment used with the robot and for data communication with them; the base station can be used for charging the robot, cleaning the robot's mop, and so on. The user interface 1003 is mainly used for connecting to a client and for data communication with the client. The processor 1001 may be configured to invoke the ranging program of the robot stored in the memory 1005 and perform the following operations:
the distance measuring method comprises the following steps:
emitting light through the light emitter; the light passes through the pattern mask to form a light beam with a pattern;
projecting the patterned light beam onto a target object through the lens group to form a texture on the target object;
acquiring an image of the target object through each image pickup device to obtain a texture image;
and determining the depth of the target object according to the texture images acquired by the at least two camera devices.
Further, the processor 1001 may call the ranging program stored in the memory 1005 and further perform the following operations:
the step of determining the depth of the target object according to the texture images acquired by the at least two camera devices comprises:
acquiring calibration parameters of the at least two camera devices;
respectively extracting characteristic information of texture images acquired by the at least two camera devices under different scales;
calculating corresponding matching cost between texture images acquired by the at least two camera devices according to the characteristic information under different scales;
performing cost aggregation calculation on the calculated matching cost, and obtaining a target parallax value according to a calculated result;
and calculating the depth of the target object according to the calibration parameters and the target parallax value.
Further, the processor 1001 may call the ranging program stored in the memory 1005 and further perform the following operations:
the step of calculating the matching cost corresponding to the texture images acquired by the at least two camera devices according to the feature information under the different scales comprises:
and according to the feature information under different scales, traversing the texture images acquired by the at least two camera devices pixel by pixel in a preset parallax searching range, and calculating the matching cost of each pixel under different parallaxes.
Further, the processor 1001 may call the ranging program stored in the memory 1005 and further perform the following operations:
the step of calculating the depth of the target object according to the calibration parameters and the target disparity values comprises:
calculating an object depth value corresponding to each pixel according to the relative distance between the at least two camera devices, the focal length of the camera devices and the target parallax value;
and calculating the depth of the target object according to the object depth value corresponding to each pixel.
Based on the above hardware structure, various embodiments of the ranging system of the present invention are proposed.
Referring to fig. 2 and fig. 3, fig. 2 is a schematic structural diagram of the distance measuring system of the present invention, and fig. 3 is a schematic structural diagram of the projection device in the distance measuring system. The ranging system includes: a stereoscopic vision device 01, a projection device 02, and a processing device 03; the stereoscopic vision device 01 includes at least two image pickup devices; the projection device 02 includes a light emitter 021, a pattern mask 022, and a lens group 023. The light emitter 021 is used for emitting light; the light passes through the pattern mask 022 to form a patterned light beam; the lens group 023 is used for projecting the patterned light beam onto a target object to form texture on the target object; each image pickup device is used for acquiring an image of the target object to obtain a texture image; and the processing device 03 is configured to determine the depth of the target object according to the texture images acquired by the at least two image pickup devices.
In this embodiment, the projection device 02 may be applied to a robot, installed in a robot, or installed in any scene requiring projection-based distance measurement, such as facing a non-textured white wall or non-textured furniture; it can be installed flexibly and applied to various occasions requiring distance measurement. It should be noted that the stereoscopic vision device 01 and the projection device 02 must be arranged so that the projection area of the projection device 02 partially or completely coincides with the visual coverage area of the stereoscopic vision device 01, i.e. of its image pickup devices. As a specific arrangement, the stereoscopic vision device 01 and the projection device 02 may be assembled on the same plane, for example with the projection device 02 in the middle and the stereoscopic vision device 01 on both sides, i.e. one image pickup device on each side of the projection device 02; alternatively, the projection device 02 may be arranged beside the stereoscopic vision device 01. The present application is not limited to this: for example, the two image pickup devices of the stereoscopic vision device 01 and the projection device 02 may form a triangular positional relationship, or a plurality of image pickup devices and the projection device 02 may be arranged in a five-point (quincunx) pattern. The stereoscopic vision device 01 and the projection device 02 may also not be on the same plane; again, it is sufficient that the projection area of the projection device 02 partially or completely coincides with the visual coverage area of the image pickup devices. The light emitter 021 can be an LED or a VCSEL. The lens group 023 is configured to project the patterned light beam onto the target object at a preset angle. With the above structure, this embodiment can replace the higher-cost VCSEL + DOE scheme of the prior art and avoid the large loss of visual information in that scheme; moreover, this scheme has a longer ranging distance, which makes it convenient to combine the ranging algorithm with further vision applications such as semantic segmentation and target detection.
In one embodiment, the pattern mask 022 includes a light-shielding region and a light-transmitting region; the light-transmitting region and the light-shielding region form a pattern, and light passes through the light-transmitting region to form a patterned light beam.
In one embodiment, the light-transmitting region is a part made of transparent material, or the light-transmitting region is provided with a plurality of light-transmitting holes; the light passes through the pattern mask 022 to form a patterned beam.
In this embodiment, a pre-designed pattern may be applied to a transparent medium, such as glass, by laser printing or etching. The laser-printed or etched portions form light-shielding regions, and the unprinted portions form light-transmitting regions; when light passes through the pattern mask 022, it is blocked at the printed or etched portions, and a patterned light beam is formed through the light-transmitting regions. Alternatively, a hollowing-out method is used to form a patterned hollow portion on the pattern mask 022, and a patterned light beam is formed when light passes through the hollow portion. The patterned light beam formed in this way has a low manufacturing cost, and the resulting pattern is clear, which satisfies pattern projection in different situations and makes the ranging more accurate.
In an embodiment, the projection apparatus 02 further includes a light collimator disposed between the light emitter 021 and the pattern mask 022 for collimating the light emitted from the light emitter 021 into parallel light. In this embodiment, the light collimator can make the projection pattern of the projection device 02 more accurate and clear.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating a first embodiment of a distance measuring method according to the present invention, wherein the distance measuring method is applied to a distance measuring system, and the distance measuring system includes a stereoscopic vision device and a projection device; the stereoscopic vision apparatus includes at least two image pickup devices; the projection device comprises a light emitter, a pattern mask and a lens group; the distance measuring method provided by the embodiment comprises the following steps:
step S10, emitting light by the light emitter; the light passes through the pattern mask to form a light beam with a pattern;
step S20, projecting the light beam with pattern onto the target object through the lens group, and forming texture on the target object;
in this embodiment, the lens group projects a patterned light beam onto a target object through a predetermined angle, where the target object is an object to be measured.
Step S30, acquiring an image of the target object through each image pickup device to obtain a texture image;
in this embodiment, the number of the image capturing apparatuses is at least two, and certainly, a plurality of image capturing apparatuses may be provided to capture an image.
And step S40, determining the depth of the target object according to the texture images collected by the at least two camera devices.
In an embodiment, referring to fig. 5, the step S40 further includes:
step S41, acquiring calibration parameters of the at least two camera devices;
step S42, respectively extracting characteristic information of texture images acquired by the at least two camera devices under different scales;
in this embodiment, the calibration parameters include a focal length of the image capturing apparatus, an optical center, a distortion coefficient, and a relative position between two or more image capturing apparatuses. The different scales include a macro scale and a micro scale, specifically, a minimum scale and a maximum scale may be set, then the minimum scale and the maximum scale are equally divided, and then feature information in the texture image of each equally divided scale is obtained, or an image pyramid is adopted, and the resolution of the texture image is subjected to gradient down-sampling to obtain texture images of different resolutions, that is, different scales, wherein the higher the pyramid level is, the smaller the image scale is, and the lower the resolution is. The characteristic information may be obtained from a convolutional neural network. In addition, after the two texture images are acquired, the texture images also need to be subjected to image stereo correction, that is, two actually shot images which are not on the same plane and have the same spatial point are corrected into coplanar alignment, so that the matching pixel search is changed from two-dimensional search to one-dimensional search, and the matching search efficiency is improved.
Step S43, calculating the corresponding matching cost between the texture images collected by the at least two camera devices according to the characteristic information under different scales;
in this embodiment, the cost calculation is a process of finding a corresponding position relationship of the same texture on images with different textures.
In one embodiment, the step S43 further includes:
and step A431, according to the feature information under different scales, traversing the texture images acquired by the at least two camera devices pixel by pixel in a preset parallax search range, and calculating the matching cost of each pixel under different parallaxes.
In this embodiment, the disparity is the distance, in pixels, between the pixel coordinates of two matched pixels in the two texture images, and the purpose of the matching cost calculation is to measure the correlation between a pixel to be matched and a candidate pixel. Whether two pixels are homonymous points (i.e., projections of the same spatial point) can be evaluated through a matching cost function: the smaller the cost, the greater the correlation, and the higher the probability that they are homonymous points.
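A minimal sketch of this pixel-by-pixel traversal over a preset disparity search range follows. It uses a windowed sum of absolute differences as the matching cost, which is one common choice and only an assumption here; the patent does not fix a particular cost function.

```python
import cv2
import numpy as np

def matching_cost_volume(left, right, max_disp=64, window=5):
    # left, right: rectified single-channel texture images of equal size.
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf, dtype=np.float32)
    kernel = np.ones((window, window), dtype=np.float32)
    for d in range(max_disp):
        # Shift the right image by the candidate disparity d and compare.
        diff = np.abs(left[:, d:] - right[:, :w - d])
        # Windowed sum of absolute differences; a lower cost means a higher
        # correlation, i.e. the two pixels are more likely homonymous points.
        cost[d, :, d:] = cv2.filter2D(diff, -1, kernel)
    return cost  # cost[d, y, x]: matching cost of pixel (x, y) at disparity d
```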
Step S44, carrying out cost aggregation calculation on the calculated matching cost, and obtaining a target parallax value according to the calculated result;
in the embodiment in this market, after the costs of all pixels are calculated separately, the cost distribution of the whole image is further optimized integrally, so that the problem that the correct image cannot be found due to sparse texture can be avoided more effectively. For example, there may be an ith pixel point in the first texture image, and the cost corresponding to the position relationship between two or more different pixel points in the second texture image is the same or almost the same, and it is necessary to further determine which pixel point is the correct cost matching result, so that the cost of surrounding pixels can be used for determination, and specifically, the global or semi-global cost can be aggregated for the multi-scale cost, so as to obtain the optimized cost.
Step S45, calculating the depth of the target object according to the calibration parameter and the target disparity value.
In an embodiment, referring to fig. 6, the step S45 further includes:
step S451, calculating an object depth value corresponding to each pixel according to the relative distance between the at least two image capturing apparatuses, the focal length of the image capturing apparatus, and the target disparity value;
the object depth value corresponding to each pixel refers to a distance between each pixel in the texture image and the target object, and specifically, the object depth value corresponding to each pixel is calculated according to the following formula,
z=Fb/d;
wherein z is a depth distance, F is a focal length of the camera, b is a relative distance between the cameras corresponding to the texture images, and d is a target parallax value between pixel points of the same object on the texture images of different cameras.
Step S452, calculating the depth of the target object according to the object depth value corresponding to each pixel;
in this embodiment, according to the object depth value of each pixel, the overall depth value of the target object, i.e. the distance from the robot to the target object, can be directly calculated. The minimum depth value of all pixels can be taken as the depth of the target object, and the depth of the target object can be further selected according to the overall depth of the object.
The invention provides a distance measuring method, which comprises the following steps: emitting light through the light emitter, the light passing through the pattern mask to form a patterned light beam; projecting the patterned light beam onto a target object through the lens group to form a texture on the target object; acquiring an image of the target object through each image pickup device to obtain a texture image; and determining the depth of the target object according to the texture images acquired by the at least two image pickup devices. With the above structure and method, distance measurement of non-textured objects can be realized: by projecting a coarse texture onto a non-textured object, the accurate distance between the robot and the object can be measured. This overcomes the defects that existing distance measuring methods require a precise projection device and that existing projection devices cause a large loss of visual information; meanwhile, the distance measuring system of the present application can reduce the manufacturing cost of the robot. Moreover, the combination of the distance measuring system and the distance measuring method of the invention greatly reduces the requirement on the fineness of the object texture. On this premise, the distance measuring device used in the invention replaces VCSEL + DOE to add texture of sufficient quality to an originally non-textured object, which is both feasible and cheaper to manufacture.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where a ranging program is stored on the computer-readable storage medium, and when executed by a processor, the ranging program implements the following operations:
emitting light through the light emitter; the light passes through the pattern mask to form a light beam with a pattern;
projecting the patterned light beam onto a target object through the lens group to form a texture on the target object;
acquiring an image of the target object through each image pickup device to obtain a texture image;
and determining the depth of the target object according to the texture images acquired by the at least two camera devices.
Further, when executed by the processor, the ranging program also performs the following operations:
the step of determining the depth of the target object according to the texture images acquired by the at least two camera devices comprises:
acquiring calibration parameters of the at least two camera devices;
respectively extracting characteristic information of texture images acquired by the at least two camera devices under different scales;
calculating corresponding matching cost between texture images acquired by the at least two camera devices according to the characteristic information under different scales;
performing cost aggregation calculation on the calculated matching cost, and obtaining a target parallax value according to a calculated result;
and calculating the depth of the target object according to the calibration parameters and the target parallax value.
Further, when executed by the processor, the ranging program also performs the following operations:
the step of calculating the matching cost corresponding to the texture images acquired by the at least two camera devices according to the feature information under the different scales comprises:
and according to the feature information under different scales, traversing the texture images acquired by the at least two camera devices pixel by pixel in a preset parallax searching range, and calculating the matching cost of each pixel under different parallaxes.
Further, when executed by the processor, the ranging program also performs the following operations:
the step of calculating the depth of the target object according to the calibration parameters and the target disparity values comprises:
calculating an object depth value corresponding to each pixel according to the relative distance between the at least two camera devices, the focal length of the camera devices and the target parallax value;
and calculating the depth of the target object according to the object depth value corresponding to each pixel.
The specific embodiment of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the distance measuring method described above, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for causing a robot device to execute the methods according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (11)

1. A ranging system, comprising: a stereoscopic vision device, a projection device and a processing device; the stereoscopic vision apparatus includes at least two image pickup devices; the projection device comprises a light emitter, a pattern mask and a lens group; wherein,
the light emitter is used for emitting light; the light passes through the pattern mask to form a light beam with a pattern; the lens group is used for projecting the light beam with the pattern onto a target object to form texture on the target object;
the image pickup device is used for acquiring an image of the target object to obtain a texture image;
and the processing device is used for determining the depth of the target object according to the texture images acquired by the at least two camera devices.
2. The range finding system of claim 1 wherein the pattern mask includes a light-blocking area and a light-transmitting area, the light-transmitting area and the light-blocking area forming a pattern, and light passing through the light-transmitting area forming a patterned beam.
3. The range finding system of claim 2, wherein the light-transmitting region is a part made of transparent material, or the light-transmitting region is provided with a plurality of light-transmitting holes; the light passes through the pattern mask to form a patterned beam.
4. A ranging system according to any of claims 1-3, wherein the projection means further comprises a light collimator arranged between the light emitter and the pattern mask for collimating light emitted by the light emitter into parallel light rays.
5. A distance measurement method is characterized in that the distance measurement method is applied to a distance measurement system, and the distance measurement system comprises a stereoscopic vision device and a projection device; the stereoscopic vision apparatus includes at least two image pickup devices; the projection device comprises a light emitter, a pattern mask and a lens group; the distance measuring method comprises the following steps:
emitting light through the light emitter; the light passes through the pattern mask to form a light beam with a pattern;
projecting the patterned light beam onto a target object through the lens group to form a texture on the target object;
acquiring an image of the target object through each image pickup device to obtain a texture image;
and determining the depth of the target object according to the texture images acquired by the at least two camera devices.
6. The range finding method of claim 5, wherein the step of determining the depth of the target object from the texture images acquired by the at least two camera devices comprises:
acquiring calibration parameters of the at least two camera devices;
respectively extracting characteristic information of texture images acquired by the at least two camera devices under different scales;
calculating corresponding matching cost between texture images acquired by the at least two camera devices according to the characteristic information under different scales;
performing cost aggregation calculation on the calculated matching cost, and obtaining a target parallax value according to a calculated result;
and calculating the depth of the target object according to the calibration parameters and the target parallax value.
7. The distance measuring method according to claim 5, wherein the step of calculating the matching cost corresponding to the texture images acquired by the at least two image capturing devices according to the feature information at the different scales comprises:
and according to the feature information under different scales, traversing the texture images acquired by the at least two camera devices pixel by pixel in a preset parallax searching range, and calculating the matching cost of each pixel under different parallaxes.
8. The ranging method according to claim 6 or 7, wherein the step of calculating the depth of the target object based on the calibration parameters and the target disparity value comprises:
calculating an object depth value corresponding to each pixel according to the relative distance between the at least two camera devices, the focal length of the camera devices and the target parallax value;
and calculating the depth of the target object according to the object depth value corresponding to each pixel.
9. A robot, characterized in that the robot comprises a ranging system according to any one of claims 1 to 4.
10. A ranging apparatus comprising a robot, a memory, a processor, and a ranging program stored on the memory and executable on the processor, the ranging program when executed by the processor implementing the steps of the ranging method of any of claims 5 to 8.
11. A storage medium having stored thereon a ranging program which, when executed by a processor, performs the steps of the ranging method according to any one of claims 5 to 8.
CN202111682866.1A 2021-12-31 2021-12-31 Distance measuring system, distance measuring method, robot, device, and storage medium Pending CN114488103A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111682866.1A 2021-12-31 2021-12-31 Distance measuring system, distance measuring method, robot, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111682866.1A 2021-12-31 2021-12-31 Distance measuring system, distance measuring method, robot, device, and storage medium

Publications (1)

Publication Number Publication Date
CN114488103A 2022-05-13

Family

ID=81510245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111682866.1A Distance measuring system, distance measuring method, robot, device, and storage medium 2021-12-31 2021-12-31 (Pending)

Country Status (1)

Country Link
CN (1) CN114488103A (en)


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination