CN114509061A - Method and system for determining robot traveling path based on barrier attributes - Google Patents

Method and system for determining robot traveling path based on barrier attributes

Info

Publication number
CN114509061A
Authority
CN
China
Prior art keywords
obstacle
image
information
robot
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111669492.XA
Other languages
Chinese (zh)
Inventor
衡进
孙贇
姚郁巍
苏瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Weimo Zhuoran Technology Co ltd
Original Assignee
Chongqing Terminus Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Terminus Technology Co Ltd filed Critical Chongqing Terminus Technology Co Ltd
Priority to CN202111669492.XA priority Critical patent/CN114509061A/en
Publication of CN114509061A publication Critical patent/CN114509061A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and system for determining a robot traveling strategy based on obstacle attributes. A binocular camera independently describes the surrounding environment; the point clouds in the two independent coordinate systems are fused into the same coordinate system according to the spatial position characteristics of the overlapping area, and image fusion is performed after the overlapping area has been located in the two point clouds, yielding a primary fusion image. Contour information is obtained from the image collected by the 2D laser radar. The primary fusion image and the contour information are then fused a second time to obtain a secondary fusion image, from which obstacles are extracted and obstacle information is calculated. This avoids the long fusion times caused by complex registration algorithms; at the same time, the analysis area is partitioned according to the obstacle contour information, which reduces data-analysis and operation time and increases operation speed.

Description

Method and system for determining robot traveling path based on barrier attributes
The technical field is as follows:
The invention belongs to the field of robot control, and particularly relates to a robot traveling strategy determination method based on obstacle attributes.
Background art:
Robots that explore unknown environments with onboard external sensors are an important direction in the development of artificial intelligence. At present, robots play an increasingly important role in target-search tasks in unknown environments such as reconnaissance, deep-sea exploration, and disaster search and rescue. Ongoing research allows ever more new technologies to assist robots in completing difficult tasks, improving efficiency and advancing society. Mobile robots in particular have long been a focus of artificial-intelligence research: as more researchers study and develop new techniques for different application levels of robots, the range of tasks robots can perform keeps expanding, and great breakthroughs have been made in autonomous exploration; autonomous exploration of unknown environments in particular has become a research hotspot.
When the three-dimensional local point cloud images collected by a robot are fused, the amount of data to be processed after fusion is large. Fusing three-dimensional point cloud images requires registration, and a low degree of registration produces mismatches, which directly degrades the fusion result; a complex registration algorithm, on the other hand, leads to long fusion times, adversely affects obstacle modeling, and reduces modeling accuracy, which in turn compromises the safety and timeliness of the robot's travel. How to improve the accuracy of judging obstacle attributes has therefore become an urgent problem.
Disclosure of Invention
To address the problems that existing registration algorithms are complex, fusion takes a long time, obstacle modeling is adversely affected, and modeling accuracy is reduced, a binocular camera independently describes the surrounding environment; point clouds in two independent coordinate systems are fused into the same coordinate system according to the spatial position characteristics of the overlapping area, and image fusion after the overlapping area is located in the two point clouds yields a primary fusion image. Contour information is obtained from the image collected by the 2D laser radar. The primary fusion image and the contour information are fused a second time to obtain a secondary fusion image; obstacles are extracted from the secondary fusion image and obstacle information is calculated; obstacle attribute information is obtained from the obstacle information; a traveling strategy is determined from the obstacle attribute information and the robot's own attribute information; and the robot adjusts its traveling route according to the strategy, reducing the fusion time caused by complex registration algorithms.
The technical scheme adopted by the invention for solving the technical problems is as follows:
the robot traveling strategy determination method based on the obstacle attribute comprises the following steps:
S1, acquiring real-time images in front of the robot with a depth binocular camera and a 2D laser radar, wherein the two lenses of the depth binocular camera are arranged horizontally side by side and the 2D laser radar is arranged above the center of the binocular camera; the binocular camera is an RGB-D camera with an infrared sensor arranged at its center;
s2, independently describing the surrounding environment by the binocular camera, fusing point clouds under two independent coordinate systems into the same coordinate system according to the spatial position characteristics of an overlapping area, searching the overlapping area in the two point clouds, and then carrying out image fusion to obtain a primary fused image; obtaining contour information according to the image collected by the 2D laser radar; performing secondary fusion on the primary fusion image and the contour information to obtain a secondary fusion image;
S3, judging and extracting obstacles according to the contour information of the secondary fusion image;
S4, calculating the obstacle information and obtaining the obstacle attribute information from it;
S5, determining a traveling strategy according to the obstacle attribute information and the robot's own attribute information;
and S6, the robot adjusts its traveling route according to the traveling strategy.
Further, obstacle judgment is performed on the secondary fusion image acquired in real time using a convolutional neural network model to obtain a judgment result; the neural network model is trained on a number of secondary fusion images using a stochastic gradient descent algorithm.
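A minimal sketch of such a classifier and its training step, assuming PyTorch, is given below; the network architecture, the input size (3×64×64 crops of the secondary fusion image), and the learning rate are illustrative assumptions rather than values taken from the patent:

```python
import torch
import torch.nn as nn

class ObstacleNet(nn.Module):
    """Small CNN that labels a secondary-fusion-image crop as obstacle / free."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ObstacleNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # stochastic gradient descent
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One SGD update on a batch of (N, 3, 64, 64) crops with 0/1 labels."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```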
Further, the binocular camera independently describes the surrounding environment; point clouds in two independent coordinate systems are fused into the same coordinate system according to the spatial position characteristics of the overlapping area, and image fusion is performed after the overlapping area is found in the two point clouds.
Further, the image fusion comprises the following steps (a code sketch of the two matching stages follows the list):
S11, preprocessing the point cloud, converting the point cloud map with color information into a point cloud map without color information;
S12, down-sampling the point cloud map and performing a first matching with the sample consensus initial alignment (SAC-IA) algorithm;
S13, performing a second matching with the iterative closest point (ICP) algorithm to obtain the fused three-dimensional point cloud map;
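The following sketch shows the two matching stages with the Open3D library; its RANSAC-over-FPFH registration stands in here for the sample consensus initial alignment, and the voxel size, feature radii, and distance thresholds are illustrative assumptions:

```python
import open3d as o3d

def register_point_clouds(source, target, voxel=0.05):
    """Coarse-then-fine registration of two point clouds (S12 + S13)."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)  # S12: down-sample the cloud
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    src, src_fpfh = preprocess(source)
    tgt, tgt_fpfh = preprocess(target)

    # First matching: sample-consensus alignment over the overlap region.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3)

    # Second matching (S13): iterative closest point refinement.
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel * 0.4, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return fine.transformation
```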
Obtaining contour information from the image collected by the 2D laser radar includes a filtering operation on that image, specifically: the image is decomposed with the non-subsampled contourlet transform (NSCT) to obtain its high-frequency and low-frequency parts; the high-frequency part is filtered with a median filtering algorithm and the low-frequency part is processed with a non-local means filtering algorithm; the filtered image is then reconstructed with an NSCT-based threshold denoising algorithm, and the denoised, filtered image is obtained and output.
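A minimal sketch of this filtering stage follows. OpenCV ships no NSCT, so the high/low frequency split is approximated here with a Gaussian low-pass, and the final threshold-denoising reconstruction is reduced to recombining the two filtered bands; only the median and non-local-means steps follow the text directly:

```python
import cv2
import numpy as np

def denoise_scan_image(img: np.ndarray) -> np.ndarray:
    """img: 8-bit single-channel intensity image built from the 2D lidar scan."""
    low = cv2.GaussianBlur(img, (9, 9), 0)                  # low-frequency band
    high = img.astype(np.float32) - low.astype(np.float32)  # signed high-frequency band

    high_f = cv2.medianBlur(high, 3)                        # median filter on the high band
    low_f = cv2.fastNlMeansDenoising(low, None, h=10.0)     # non-local means on the low band

    out = low_f.astype(np.float32) + high_f                 # recombine the bands
    return np.clip(out, 0, 255).astype(np.uint8)
```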
Further, the extraction includes segmenting the position and size information of an obstacle from the secondary fusion image. The segmentation includes calculating the difference between the background color and the color inside the contour region; when the difference exceeds a first threshold, the region is an obstacle region.
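A sketch of this color-difference segmentation is shown below; the Euclidean RGB distance and the numeric first threshold are illustrative assumptions, since the patent specifies neither:

```python
import numpy as np

def obstacle_mask(image: np.ndarray, contour_region: np.ndarray,
                  first_threshold: float = 40.0) -> np.ndarray:
    """image: HxWx3 fusion image; contour_region: HxW boolean mask from the lidar contours."""
    # Estimate the background color from pixels outside the contour region.
    background = image[~contour_region].mean(axis=0)
    # Per-pixel color difference from the background.
    diff = np.linalg.norm(image.astype(np.float32) - background, axis=2)
    # Pixels inside the contour whose difference exceeds the first threshold.
    return contour_region & (diff > first_threshold)
```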
Further, calculating the obstacle information comprises extracting texture features and morphological features of the obstacle according to its position and size information in the secondary fusion image; obtaining temperature information of the obstacle from the infrared sensor; and building an obstacle feature vector from the obstacle's size information, texture features, morphological features, and temperature feature.
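The feature vector might be assembled as sketched below; the variance-based texture measure and the Hu-moment shape descriptor are illustrative stand-ins, since the patent does not name concrete texture or morphological features:

```python
import cv2
import numpy as np

def obstacle_feature_vector(gray_patch, mask, ir_patch, bbox):
    """gray_patch/ir_patch: grayscale and infrared crops; mask: boolean obstacle mask; bbox: (x, y, w, h)."""
    x, y, w, h = bbox                                     # position and size information
    texture = float(gray_patch[mask].var())               # texture feature (intensity variance)
    shape = cv2.HuMoments(cv2.moments(mask.astype(np.uint8))).flatten()  # morphological feature
    temperature = float(ir_patch[mask].mean())            # temperature from the infrared sensor
    return np.concatenate([[w, h], [texture], shape, [temperature]])
```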
Further, the current traveling action is determined according to the type of the obstacle and other attribute information as well as the attribute information of the robot.
Further, if the temperature information of the obstacle exceeds a second threshold, the obstacle is judged to be a living body, and the sound-and-light deterrent device is activated to drive it away.
Further, determining the current traveling action according to the type and other attribute information of the obstacle together with the robot's own attribute information specifically comprises (see the sketch after this list):
S71, after traveling forward and identifying an obstacle, the robot acquires the obstacle's height and attribute information;
S72, judging whether the obstacle's temperature and texture features match the characteristics of a living body: if so, the obstacle is driven away; otherwise the obstacle's height and width information is acquired;
S73, calculating the difference between the obstacle's height and the robot's chassis height: if the difference is smaller than a third threshold, the robot crosses over the obstacle; if the difference is larger than the third threshold and the gap at the obstacle's edge is wider than the robot, the robot detours; otherwise the robot retreats;
and S74, after retreating, the robot selects another path.
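Steps S71–S74 amount to the decision ladder sketched below; the threshold values and the obstacle/robot attribute fields are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    height: float        # m
    edge_gap: float      # free gap beside the obstacle, m
    temperature: float   # degrees C, from the infrared sensor
    looks_alive: bool    # texture matches living-body characteristics

@dataclass
class Robot:
    chassis_height: float  # ground clearance, m
    width: float           # m

def decide_action(ob: Obstacle, rb: Robot,
                  second_threshold: float = 30.0,
                  third_threshold: float = 0.02) -> str:
    # S72: a warm obstacle with living-body texture is driven away.
    if ob.temperature > second_threshold and ob.looks_alive:
        return "drive away (sound-and-light deterrent)"
    # S73: small height difference relative to the chassis -> cross over.
    if ob.height - rb.chassis_height < third_threshold:
        return "cross"
    # S73: otherwise detour if the edge gap admits the robot's width.
    if ob.edge_gap > rb.width:
        return "detour"
    # S73/S74: retreat and re-plan on another path.
    return "retreat and re-plan"
```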
Further, the relationship between a feature point in the primary fusion image and stereo space is:
$$ d\,\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = S\,k\,r^{-1}\,P_t(x, y, z) $$

$$ P_t(x, y, z) = \frac{d}{S}\,r\,k^{-1}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} $$

where I_t(u, v) = (u, v, 1)^T is a frame feature point at time t in homogeneous pixel coordinates, d is the depth value of feature point I_t(u, v), S is the scaling factor of the depth map, k is the internal parameter matrix of the camera, P_t(x, y, z) are the stereo-space coordinates, and r is the camera transformation attitude parameter.
Further, the parameter matrix k is:
$$ k = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} $$

where f_x and f_y are the focal lengths of the camera along the x-axis and y-axis, and c_x and c_y are the coordinates of the optical center of the camera lens.
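Under these definitions, recovering the stereo-space point from a feature point is the usual pinhole back-projection; the sketch below assumes r is a 3×3 rotation matrix and uses illustrative intrinsic values:

```python
import numpy as np

def back_project(u, v, d, k, S, r):
    """Map feature point I_t(u, v) with raw depth d to stereo-space P_t(x, y, z)."""
    z = d / S                                     # metric depth via the scaling factor
    ray = np.linalg.inv(k) @ np.array([u, v, 1.0])
    return r @ (ray * z)                          # apply the camera attitude r

fx = fy = 525.0               # focal lengths (illustrative)
cx, cy = 319.5, 239.5         # optical center (illustrative)
k = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
r = np.eye(3)                 # identity attitude for the example
print(back_project(320, 240, 1500, k, S=1000.0, r=r))
```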
The robot traveling strategy determination system based on the obstacle attribute includes:
the image acquisition module is used for acquiring real-time images in front of the robot with a depth binocular camera and a 2D laser radar, wherein the two lenses of the depth binocular camera are arranged horizontally side by side and the 2D laser radar is arranged above the center of the binocular camera; the binocular camera is an RGB-D camera with an infrared sensor arranged at its center;
the image processing module is used for independently describing the surrounding environment by the binocular camera, fusing point clouds under two independent coordinate systems into the same coordinate system according to the spatial position characteristics of the overlapping area, searching the overlapping area in the two point clouds, and then carrying out image fusion to obtain a primary fused image; obtaining contour information according to the image collected by the 2D laser radar; performing secondary fusion on the primary fusion image and the contour information to obtain a secondary fusion image;
the obstacle extraction module is used for judging and extracting obstacles according to the secondary fusion image contour information,
the obstacle analysis module is used for calculating obstacle information, acquiring obstacle attribute information according to the obstacle information,
a traveling strategy module for determining a traveling strategy according to the obstacle attribute information and the attribute information of the robot,
and the route planning module is used for adjusting the traveling route according to the traveling strategy.
Further, the image processing module includes an image fusion module, which includes:
the first preprocessing module is used for preprocessing the point cloud, converting the point cloud map with color information into a point cloud map without color information;
the primary matching module down-samples the point cloud map and performs a first matching with the sample consensus initial alignment (SAC-IA) algorithm;
the secondary matching module performs a second matching with the iterative closest point (ICP) algorithm to obtain the fused three-dimensional point cloud map;
the second preprocessing module is used for performing a filtering operation on the image acquired by the 2D laser radar, specifically: the image is decomposed with the non-subsampled contourlet transform (NSCT) to obtain its high-frequency and low-frequency parts; the high-frequency part is filtered with a median filtering algorithm and the low-frequency part is processed with a non-local means filtering algorithm; the filtered image is then reconstructed with an NSCT-based threshold denoising algorithm, and the denoised, filtered image is obtained and output.
Further, the segmentation in the obstacle analysis module includes computing the difference between the background color and the color inside the contour region; when the difference exceeds the first threshold, the region is an obstacle region.
Further, calculating the obstacle information comprises obtaining texture features and morphological features of the obstacle according to its position and size information in the image; obtaining temperature information of the obstacle from the infrared sensor; and building an obstacle feature vector from the obstacle's size information, texture features, morphological features, and temperature feature.
Further, the current traveling action is determined according to the type of the obstacle and other attribute information as well as the attribute information of the robot.
Further, if the temperature information of the obstacle exceeds a second threshold, the obstacle is judged to be a living body, and the sound-and-light deterrent device is activated to drive it away.
Further, the system comprises a policy decision module for performing the following steps:
S71, after traveling forward and identifying an obstacle, the robot acquires the obstacle's height and attribute information;
S72, judging whether the obstacle's temperature and texture features match the characteristics of a living body: if so, the obstacle is driven away; otherwise the obstacle's height and width information is acquired;
S73, calculating the difference between the obstacle's height and the robot's chassis height: if the difference is smaller than a third threshold, the robot crosses over the obstacle; if the difference is larger than the third threshold and the gap at the obstacle's edge is wider than the robot, the robot detours; otherwise the robot retreats;
and S74, after retreating, the robot selects another path.
Further, the relationship between a feature point in the primary fusion image and stereo space is:
$$ d\,\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = S\,k\,r^{-1}\,P_t(x, y, z) $$

$$ P_t(x, y, z) = \frac{d}{S}\,r\,k^{-1}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} $$

where I_t(u, v) = (u, v, 1)^T is a frame feature point at time t in homogeneous pixel coordinates, d is the depth value of feature point I_t(u, v), S is the scaling factor of the depth map, k is the internal parameter matrix of the camera, P_t(x, y, z) are the stereo-space coordinates, and r is the camera transformation attitude parameter.
Further, the parameter matrix k is:
$$ k = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} $$

where f_x and f_y are the focal lengths of the camera along the x-axis and y-axis, and c_x and c_y are the coordinates of the optical center of the camera lens.
The invention has the following beneficial effects:
A binocular camera independently describes the surrounding environment; point clouds in two independent coordinate systems are fused into the same coordinate system according to the spatial position characteristics of the overlapping area, and image fusion after the overlapping area is located in the two point clouds yields a primary fusion image. Contour information is obtained from the image collected by the 2D laser radar. The primary fusion image and the contour information are fused a second time to obtain a secondary fusion image; obstacles are extracted from the secondary fusion image, obstacle information is calculated, and obstacle attribute information is obtained from it. This reduces the fusion time caused by complex registration algorithms, effectively improves obstacle-modeling accuracy, and improves the safety and timeliness of the robot's travel. At the same time, the analysis area is partitioned according to the obstacle contour information, reducing data-analysis and operation time and increasing operation speed.
The foregoing is only an overview of the technical solutions of the present invention. To make the technical means of the invention clearer so that it can be implemented in accordance with the description, and to make the above and other objects, features, and advantages of the invention more readily understandable, preferred embodiments are described in detail below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart of a robot traveling strategy determination method based on obstacle attributes according to the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the description of the present invention, unless otherwise expressly specified or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly: a connection may, for example, be fixed, detachable, or integral; mechanical or electrical; direct or indirect through an intervening medium; or internal between two elements. Those skilled in the art can understand the specific meanings of the above terms in the present invention according to the specific situation.
Embodiment 1: a robot traveling strategy determination method based on obstacle attributes comprises the following steps:
S1, acquiring real-time images in front of the robot with a depth binocular camera and a 2D laser radar, wherein the two lenses of the depth binocular camera are arranged horizontally side by side and the 2D laser radar is arranged above the center of the binocular camera; the binocular camera is an RGB-D camera with an infrared sensor arranged at its center;
s2, independently describing the surrounding environment by the binocular camera, fusing point clouds under two independent coordinate systems into the same coordinate system according to the spatial position characteristics of an overlapping area, searching the overlapping area in the two point clouds, and then carrying out image fusion to obtain a primary fused image; obtaining contour information according to the image collected by the 2D laser radar; performing secondary fusion on the primary fusion image and the contour information to obtain a secondary fusion image;
S3, judging and extracting obstacles according to the contour information of the secondary fusion image;
S4, calculating the obstacle information and obtaining the obstacle attribute information from it;
S5, determining a traveling strategy according to the obstacle attribute information and the robot's own attribute information;
and S6, the robot adjusts its traveling route according to the traveling strategy.
Further, obstacle judgment is performed on the secondary fusion image acquired in real time using a convolutional neural network model to obtain a judgment result; the neural network model is trained on a number of secondary fusion images using a stochastic gradient descent algorithm.
Further, the binocular camera independently describes the surrounding environment; point clouds in two independent coordinate systems are fused into the same coordinate system according to the spatial position characteristics of the overlapping area, and image fusion is performed after the overlapping area is found in the two point clouds.
Further, the image fusion comprises:
S11, preprocessing the point cloud, converting the point cloud map with color information into a point cloud map without color information;
S12, down-sampling the point cloud map and performing a first matching with the sample consensus initial alignment (SAC-IA) algorithm;
S13, performing a second matching with the iterative closest point (ICP) algorithm to obtain the fused three-dimensional point cloud map;
Obtaining contour information from the image collected by the 2D laser radar includes a filtering operation on that image, specifically: the image is decomposed with the non-subsampled contourlet transform (NSCT) to obtain its high-frequency and low-frequency parts; the high-frequency part is filtered with a median filtering algorithm and the low-frequency part is processed with a non-local means filtering algorithm; the filtered image is then reconstructed with an NSCT-based threshold denoising algorithm, and the denoised, filtered image is obtained and output.
Further, the extraction includes segmenting the position and size information of an obstacle from the secondary fusion image. The segmentation includes calculating the difference between the background color and the color inside the contour region; when the difference exceeds a first threshold, the region is an obstacle region.
Further, calculating the obstacle information comprises extracting texture features and morphological features of the obstacle according to its position and size information in the secondary fusion image; obtaining temperature information of the obstacle from the infrared sensor; and building an obstacle feature vector from the obstacle's size information, texture features, morphological features, and temperature feature.
Further, the current traveling action is determined according to the type of the obstacle and other attribute information as well as the attribute information of the robot.
Further, if the temperature information of the obstacle exceeds a second threshold, the obstacle is judged to be a living body, and the sound-and-light deterrent device is activated to drive it away.
Further, determining the current traveling action according to the type and other attribute information of the obstacle together with the robot's own attribute information specifically comprises:
S71, after traveling forward and identifying an obstacle, the robot acquires the obstacle's height and attribute information;
S72, judging whether the obstacle's temperature and texture features match the characteristics of a living body: if so, the obstacle is driven away; otherwise the obstacle's height and width information is acquired;
S73, calculating the difference between the obstacle's height and the robot's chassis height: if the difference is smaller than a third threshold, the robot crosses over the obstacle; if the difference is larger than the third threshold and the gap at the obstacle's edge is wider than the robot, the robot detours; otherwise the robot retreats;
and S74, after retreating, the robot selects another path.
Further, the relationship between a feature point in the primary fusion image and stereo space is:
$$ d\,\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = S\,k\,r^{-1}\,P_t(x, y, z) $$

$$ P_t(x, y, z) = \frac{d}{S}\,r\,k^{-1}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} $$

where I_t(u, v) = (u, v, 1)^T is a frame feature point at time t in homogeneous pixel coordinates, d is the depth value of feature point I_t(u, v), S is the scaling factor of the depth map, k is the internal parameter matrix of the camera, P_t(x, y, z) are the stereo-space coordinates, and r is the camera transformation attitude parameter.
Further, the parameter matrix k is:
$$ k = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} $$

where f_x and f_y are the focal lengths of the camera along the x-axis and y-axis, and c_x and c_y are the coordinates of the optical center of the camera lens.
Embodiment 2: a robot traveling strategy determination system based on obstacle attributes comprises:
the image acquisition module is used for acquiring real-time images in front of the robot with a depth binocular camera and a 2D laser radar, wherein the two lenses of the depth binocular camera are arranged horizontally side by side and the 2D laser radar is arranged above the center of the binocular camera; the binocular camera is an RGB-D camera with an infrared sensor arranged at its center;
the image processing module is used for independently describing the surrounding environment by the binocular camera, fusing point clouds under two independent coordinate systems into the same coordinate system according to the spatial position characteristics of the overlapping area, searching the overlapping area in the two point clouds, and then carrying out image fusion to obtain a primary fused image; obtaining contour information according to the image collected by the 2D laser radar; performing secondary fusion on the primary fusion image and the contour information to obtain a secondary fusion image;
the obstacle extraction module is used for judging and extracting obstacles according to the secondary fusion image contour information,
the obstacle analysis module is used for calculating obstacle information, acquiring obstacle attribute information according to the obstacle information,
a traveling strategy module for determining a traveling strategy according to the obstacle attribute information and the attribute information of the robot,
and the route planning module is used for adjusting the traveling route according to the traveling strategy.
Further, the image processing module includes an image fusion module, which includes:
the first preprocessing module is used for preprocessing the point cloud, converting the point cloud map with color information into a point cloud map without color information;
the primary matching module down-samples the point cloud map and performs a first matching with the sample consensus initial alignment (SAC-IA) algorithm;
the secondary matching module performs a second matching with the iterative closest point (ICP) algorithm to obtain the fused three-dimensional point cloud map;
the second preprocessing module is used for performing a filtering operation on the image acquired by the 2D laser radar, specifically: the image is decomposed with the non-subsampled contourlet transform (NSCT) to obtain its high-frequency and low-frequency parts; the high-frequency part is filtered with a median filtering algorithm and the low-frequency part is processed with a non-local means filtering algorithm; the filtered image is then reconstructed with an NSCT-based threshold denoising algorithm, and the denoised, filtered image is obtained and output.
Further, the segmentation in the obstacle analysis module includes computing the difference between the background color and the color inside the contour region; when the difference exceeds the first threshold, the region is an obstacle region.
Further, calculating the obstacle information comprises obtaining texture features and morphological features of the obstacle according to its position and size information in the image; obtaining temperature information of the obstacle from the infrared sensor; and building an obstacle feature vector from the obstacle's size information, texture features, morphological features, and temperature feature.
Further, the current traveling action is determined according to the type of the obstacle and other attribute information as well as the attribute information of the robot.
Further, if the temperature information of the obstacle exceeds a second threshold, the obstacle is judged to be a living body, and the sound-and-light deterrent device is activated to drive it away.
Further, the system comprises a policy decision module for performing the following steps:
S71, after traveling forward and identifying an obstacle, the robot acquires the obstacle's height and attribute information;
S72, judging whether the obstacle's temperature and texture features match the characteristics of a living body: if so, the obstacle is driven away; otherwise the obstacle's height and width information is acquired;
S73, calculating the difference between the obstacle's height and the robot's chassis height: if the difference is smaller than a third threshold, the robot crosses over the obstacle; if the difference is larger than the third threshold and the gap at the obstacle's edge is wider than the robot, the robot detours; otherwise the robot retreats;
and S74, after retreating, the robot selects another path.
Further, the relationship between a feature point in the primary fusion image and stereo space is:
$$ d\,\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = S\,k\,r^{-1}\,P_t(x, y, z) $$

$$ P_t(x, y, z) = \frac{d}{S}\,r\,k^{-1}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} $$

where I_t(u, v) = (u, v, 1)^T is a frame feature point at time t in homogeneous pixel coordinates, d is the depth value of feature point I_t(u, v), S is the scaling factor of the depth map, k is the internal parameter matrix of the camera, P_t(x, y, z) are the stereo-space coordinates, and r is the camera transformation attitude parameter.
Further, the parameter matrix k is:
$$ k = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} $$

where f_x and f_y are the focal lengths of the camera along the x-axis and y-axis, and c_x and c_y are the coordinates of the optical center of the camera lens.

The invention has the following beneficial effects:
A binocular camera independently describes the surrounding environment; point clouds in two independent coordinate systems are fused into the same coordinate system according to the spatial position characteristics of the overlapping area, and image fusion after the overlapping area is located in the two point clouds yields a primary fusion image. Contour information is obtained from the image collected by the 2D laser radar. The primary fusion image and the contour information are fused a second time to obtain a secondary fusion image; obstacles are extracted from the secondary fusion image, obstacle information is calculated, and obstacle attribute information is obtained from it. This reduces the fusion time caused by complex registration algorithms, effectively improves obstacle-modeling accuracy, and improves the safety and timeliness of the robot's travel. At the same time, the analysis area is partitioned according to the obstacle contour information, reducing data-analysis and operation time and increasing operation speed.
The above description covers only preferred embodiments of the present invention, but the protection scope of the invention is not limited thereto; any changes or substitutions that those skilled in the art can easily conceive within the technical scope disclosed herein shall fall within that scope. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A robot traveling strategy determination method based on obstacle attributes is characterized by comprising the following steps:
S1, acquiring real-time images in front of the robot with a depth binocular camera and a 2D laser radar, wherein the two lenses of the depth binocular camera are arranged horizontally side by side and the 2D laser radar is arranged above the center of the binocular camera; the binocular camera is an RGB-D camera with an infrared sensor arranged at its center;
s2, independently describing the surrounding environment by the binocular camera, fusing point clouds under two independent coordinate systems into the same coordinate system according to the spatial position characteristics of an overlapping area, searching the overlapping area in the two point clouds, and then carrying out image fusion to obtain a primary fused image; obtaining contour information according to the image collected by the 2D laser radar; performing secondary fusion on the primary fusion image and the contour information to obtain a secondary fusion image;
S3, judging and extracting obstacles according to the contour information of the secondary fusion image;
S4, calculating the obstacle information and obtaining the obstacle attribute information from it;
S5, determining a traveling strategy according to the obstacle attribute information and the robot's own attribute information;
and S6, the robot adjusts its traveling route according to the traveling strategy.
2. The obstacle attribute-based robot travel strategy determination method according to claim 1, characterized in that: obstacle judgment is performed on the secondary fusion image acquired in real time using a convolutional neural network model to obtain a judgment result, wherein the neural network model is trained on a number of secondary fusion images using a stochastic gradient descent algorithm.
3. The obstacle attribute-based robot travel strategy determination method according to claim 1, characterized in that: the binocular camera independently describes the surrounding environment; point clouds in two independent coordinate systems are fused into the same coordinate system according to the spatial position characteristics of the overlapping area, and image fusion is performed after the overlapping area is found in the two point clouds.
4. The obstacle attribute-based robot travel strategy determination method according to claim 3, characterized in that: the image fusion comprises:
S11, preprocessing the point cloud, converting the point cloud map with color information into a point cloud map without color information;
S12, down-sampling the point cloud map and performing a first matching with the sample consensus initial alignment (SAC-IA) algorithm;
S13, performing a second matching with the iterative closest point (ICP) algorithm to obtain the fused three-dimensional point cloud map;
obtaining contour information from the image collected by the 2D laser radar includes a filtering operation on that image, specifically: the image is decomposed with the non-subsampled contourlet transform (NSCT) to obtain its high-frequency and low-frequency parts; the high-frequency part is filtered with a median filtering algorithm and the low-frequency part is processed with a non-local means filtering algorithm; the filtered image is then reconstructed with an NSCT-based threshold denoising algorithm, and the denoised, filtered image is obtained and output.
5. The obstacle attribute-based robot travel strategy determination method according to claim 1, characterized in that: the extraction includes segmenting the position and size information of an obstacle from the secondary fusion image; the segmentation includes calculating the difference between the background color and the color inside the contour region, and when the difference exceeds a first threshold, the region is an obstacle region.
6. The obstacle attribute-based robot travel strategy determination method according to claim 5, characterized in that: calculating the obstacle information comprises extracting texture features and morphological features of the obstacle according to its position and size information in the secondary fusion image; obtaining temperature information of the obstacle from the infrared sensor; and building an obstacle feature vector from the obstacle's size information, texture features, morphological features, and temperature feature.
7. The obstacle attribute-based robot travel strategy determination method according to claim 6, characterized in that: the current traveling action is determined according to the type and other attribute information of the obstacle together with the robot's own attribute information.
8. The obstacle attribute-based robot travel strategy determination method according to claim 5, characterized in that: if the temperature information of the obstacle exceeds a second threshold, the obstacle is judged to be a living body, and the sound-and-light deterrent device is activated to drive it away.
9. The obstacle attribute-based robot travel strategy determination method according to claim 7, characterized in that: determining the current traveling action according to the type and other attribute information of the obstacle together with the robot's own attribute information specifically comprises:
S71, after traveling forward and identifying an obstacle, the robot acquires the obstacle's height and attribute information;
S72, judging whether the obstacle's temperature and texture features match the characteristics of a living body: if so, the obstacle is driven away; otherwise the obstacle's height and width information is acquired;
S73, calculating the difference between the obstacle's height and the robot's chassis height: if the difference is smaller than a third threshold, the robot crosses over the obstacle; if the difference is larger than the third threshold and the gap at the obstacle's edge is wider than the robot, the robot detours; otherwise the robot retreats;
and S74, after retreating, the robot selects another path.
10. The obstacle attribute-based robot traveling strategy determination method according to claim 1, characterized in that: the relationship between a feature point in the primary fusion image and stereo space is:
$$ d\,\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = S\,k\,r^{-1}\,P_t(x, y, z) $$

$$ P_t(x, y, z) = \frac{d}{S}\,r\,k^{-1}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} $$

where I_t(u, v) = (u, v, 1)^T is a frame feature point at time t in homogeneous pixel coordinates, d is the depth value of feature point I_t(u, v), S is the scaling factor of the depth map, k is the internal parameter matrix of the camera, P_t(x, y, z) are the stereo-space coordinates, and r is the camera transformation attitude parameter.
Further, the parameter matrix k is:
$$ k = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} $$

where f_x and f_y are the focal lengths of the camera along the x-axis and y-axis, and c_x and c_y are the coordinates of the optical center of the camera lens.
CN202111669492.XA 2021-12-30 2021-12-30 Method and system for determining robot traveling path based on barrier attributes Pending CN114509061A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111669492.XA CN114509061A (en) 2021-12-30 2021-12-30 Method and system for determining robot traveling path based on barrier attributes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111669492.XA CN114509061A (en) 2021-12-30 2021-12-30 Method and system for determining robot traveling path based on barrier attributes

Publications (1)

Publication Number Publication Date
CN114509061A true CN114509061A (en) 2022-05-17

Family

ID=81547947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111669492.XA Pending CN114509061A (en) 2021-12-30 2021-12-30 Method and system for determining robot traveling path based on barrier attributes

Country Status (1)

Country Link
CN (1) CN114509061A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101482617A (en) * 2009-01-16 2009-07-15 西安电子科技大学 Synthetic aperture radar image denoising method based on non-down sampling profile wave
CN102222322A (en) * 2011-06-02 2011-10-19 西安电子科技大学 Multiscale non-local mean-based method for inhibiting infrared image backgrounds
CN103077508A (en) * 2013-01-25 2013-05-01 西安电子科技大学 Transform domain non local and minimum mean square error-based SAR (Synthetic Aperture Radar) image denoising method
CN109213137A (en) * 2017-07-05 2019-01-15 广东宝乐机器人股份有限公司 sweeping robot, sweeping robot system and its working method
CN111191600A (en) * 2019-12-30 2020-05-22 深圳元戎启行科技有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium
CN111784605A (en) * 2020-06-30 2020-10-16 珠海全志科技股份有限公司 Image denoising method based on region guidance, computer device and computer readable storage medium
CN111930127A (en) * 2020-09-02 2020-11-13 广州赛特智能科技有限公司 Robot obstacle identification and obstacle avoidance method
CN215114624U (en) * 2020-12-22 2021-12-10 安徽康能电气有限公司 On-line monitoring multi-sensor system
CN112764005A (en) * 2021-01-05 2021-05-07 哈尔滨工业大学 Low signal-to-noise ratio echo data reconstruction method for Gm-APD laser radar combined with morphological filtering
CN113096183A (en) * 2021-03-18 2021-07-09 武汉科技大学 Obstacle detection and measurement method based on laser radar and monocular camera
CN113487730A (en) * 2021-09-06 2021-10-08 中国电子科技集团公司第二十八研究所 Urban three-dimensional automatic modeling method based on laser radar point cloud data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
冯吉: "基于机器视觉和激光雷达的割草机前方障碍物检测", 中国优秀硕士学位论文全文数据库 农业科技辑, no. 1, 15 January 2021 (2021-01-15), pages 7 - 12 *
蔡军,等: "基于Kinect 的改进移动机器人视觉SLAM", 智能***学报, vol. 13, no. 5, 31 October 2018 (2018-10-31), pages 734 - 740 *
陈文: "基于深度相机的移动机器人SLAM 研究", 中国优秀硕士论文全文数据库 信息科技辑, no. 2, 15 February 2020 (2020-02-15), pages 11 - 16 *

Similar Documents

Publication Publication Date Title
Song et al. Lane detection and classification for forward collision warning system based on stereo vision
CN110188696B (en) Multi-source sensing method and system for unmanned surface equipment
CN113255520B (en) Vehicle obstacle avoidance method based on binocular vision and deep learning and electronic equipment
CN111598916A (en) Preparation method of indoor occupancy grid map based on RGB-D information
CN111161317A (en) Single-target tracking method based on multiple networks
EP3686775A1 (en) Method for detecting pseudo-3d bounding box based on cnn capable of converting modes according to poses of objects using instance segmentation and device using the same
Zou et al. Real-time full-stack traffic scene perception for autonomous driving with roadside cameras
CN109000655B (en) Bionic indoor positioning and navigation method for robot
CN112051853A (en) Intelligent obstacle avoidance system and method based on machine vision
CN113592891B (en) Unmanned vehicle passable domain analysis method and navigation grid map manufacturing method
Arain et al. Improving underwater obstacle detection using semantic image segmentation
Saleem et al. Steering angle prediction techniques for autonomous ground vehicles: a review
Ouyang et al. A cgans-based scene reconstruction model using lidar point cloud
US10445611B1 (en) Method for detecting pseudo-3D bounding box to be used for military purpose, smart phone or virtual driving based-on CNN capable of converting modes according to conditions of objects and device using the same
CN113111787A (en) Target detection method, device, equipment and storage medium
CN116189150B (en) Monocular 3D target detection method, device, equipment and medium based on fusion output
Vatavu et al. Modeling and tracking of dynamic obstacles for logistic plants using omnidirectional stereo vision
Huang et al. A coarse-to-fine LiDar-based SLAM with dynamic object removal in dense urban areas
CN114509061A (en) Method and system for determining robot traveling path based on barrier attributes
CN110889362A (en) Obstacle detection method using grid map height information
CN114442615A (en) Robot traveling strategy determination method and system based on barrier attributes
CN110110645B (en) Obstacle rapid identification method and system suitable for low signal-to-noise ratio image
Subramanian et al. Integrating computer vision and photogrammetry for autonomous aerial vehicle landing in static environment
CN113009501A (en) Image and laser data fused robot navigation three-dimensional semantic map generation method
CN111815667B (en) Method for detecting moving target with high precision under camera moving condition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231205

Address after: 100600 District C, Qiankun Building, Chaoyang District, Beijing

Applicant after: Beijing Weimo Zhuoran Technology Co.,Ltd.

Address before: 401329 no.2-1, building 6, No.39 Xinggu Road, Jiulongpo District, Chongqing

Applicant before: Chongqing Terminus Wisdom Science and Technology Inc., Co.,Ltd.