CN110764110A - Path navigation method, device and computer readable storage medium - Google Patents

Path navigation method, device and computer readable storage medium

Info

Publication number
CN110764110A
CN110764110A (application CN201911104806.4A)
Authority
CN
China
Prior art keywords
image
vehicle
obstacle
path
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911104806.4A
Other languages
Chinese (zh)
Other versions
CN110764110B (en)
Inventor
赵健章
黄子少
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Skyworth Digital Technology Co Ltd
Original Assignee
Shenzhen Skyworth Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Skyworth Digital Technology Co Ltd
Priority to CN201911104806.4A
Publication of CN110764110A
Application granted
Publication of CN110764110B
Legal status: Active (current)

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257: Control of position or course in two dimensions specially adapted to land vehicles using a radar

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a path navigation method comprising the following steps: acquiring a target planned path of a vehicle, and determining positioning information of the vehicle based on a radar layer corresponding to the vehicle and a laser radar; acquiring a first image corresponding to the vehicle through a depth camera; and controlling the vehicle to run based on the positioning information, the first image, an obstacle avoidance layer and the target planned path of the vehicle. The invention also discloses a path navigation device and a computer readable storage medium. According to the invention, the vehicle is positioned through the radar layer, and path planning and obstacle avoidance are carried out according to the positioning information and the obstacle avoidance layer; when an obstacle appears in the target planned path, the positioning precision of the vehicle is unaffected because the installation height of the laser radar is greater than the height of the obstacle, so that the accuracy and efficiency of vehicle navigation are improved.

Description

Path navigation method, device and computer readable storage medium
Technical Field
The invention relates to the field of intelligent driving, in particular to a path navigation method, a path navigation device and a computer readable storage medium.
Background
SLAM (Simultaneous Localization and Mapping) based on the natural environment comprises two major functions: localization and mapping. The main function of mapping is to understand the surrounding environment and establish the correspondence between the environment and space; the main function of localization is to judge the position of the vehicle body within the established map, thereby obtaining information about the environment. In addition, the laser radar is an active detection sensor: it does not depend on external illumination conditions and provides high-precision ranging information. Laser-radar-based SLAM therefore remains the most widely applied SLAM method in robotics, and SLAM applications in ROS (Robot Operating System) are also very widespread.
At present, most laser radar SLAM navigation is only suitable for static environments, that is, environments that do not change at all during the whole SLAM navigation process; in practical applications, however, most environments are dynamic environments with moving objects. In a dynamic environment, the scanning point data of moving objects detected by the laser radar degrades the positioning precision and causes large navigation errors, affecting navigation accuracy.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a path navigation method, a path navigation device and a computer readable storage medium, so as to solve the technical problem of low positioning accuracy when SLAM navigation is carried out in a dynamic environment.
In order to achieve the above object, the present invention provides a path navigation method applied to a vehicle, where the vehicle is provided with a laser radar and a depth camera, and an installation height of the laser radar is greater than an installation height of the depth camera, and the path navigation method includes the following steps:
acquiring a target planning path of the vehicle, and determining positioning information of the vehicle based on a radar layer corresponding to the vehicle and the laser radar;
acquiring a first image corresponding to the vehicle through the depth camera;
and controlling the vehicle to run based on the positioning information, the first image, the obstacle avoidance layer and the target planning path of the vehicle.
In an embodiment, the step of controlling the vehicle to travel based on the positioning information, the first image, the obstacle avoidance layer, and the target planned path of the vehicle includes:
determining whether an obstacle exists in the target planning path or not based on the first image and the obstacle avoidance layer;
and if no obstacle exists in the target planning path, controlling the vehicle to run based on the positioning information and the target planning path.
In an embodiment, after the step of determining whether an obstacle exists in the target planned path based on the first image and the obstacle avoidance layer, the method further includes:
if an obstacle exists in the target planning path, determining the obstacle information based on the first image and the obstacle avoidance layer;
determining a navigation path corresponding to the vehicle body based on the obstacle information, the positioning information, the target position information of the vehicle, the obstacle avoidance layer and the radar layer corresponding to the vehicle;
and taking the navigation path as the target planning path, controlling the vehicle to run based on the target planning path, continuously acquiring the target planning path of the vehicle, and determining the positioning information of the vehicle based on the laser radar.
In an embodiment, the step of determining whether an obstacle exists in the target planned path based on the first image and the obstacle avoidance layer includes:
acquiring a target background environment template image based on the positioning information and the obstacle avoidance layer;
performing image processing on the first image to determine first obstacle information corresponding to the first image;
and determining whether an obstacle exists in the target planning path or not based on the first obstacle information and the target background environment template image.
In an embodiment, after the step of controlling the vehicle to travel based on the positioning information, the first image, the obstacle avoidance layer, and the target planned path of the vehicle, the path navigation method further includes:
generating a first background environment template image corresponding to the first image;
determining contour data of a first obstacle according to the first background environment template image, the first image and camera shooting parameters of the depth camera;
and updating the obstacle avoidance layer based on the contour data of the first obstacle and the first background environment template image.
In an embodiment, the step of generating a first background environment template image corresponding to the first image includes:
filling the hole data in the first image to obtain a filled first image;
processing the filled first image by adopting a multi-frame averaging method to obtain a multi-frame processed first image;
smoothing the multi-frame processed first image to obtain a smoothed two-dimensional image template image;
and carrying out mean value filtering on the two-dimensional image template image to obtain a first background environment template image corresponding to the first image.
In an embodiment, the step of determining the contour data of the first obstacle according to the first background environment template image, the first image and the camera parameters of the depth camera includes:
acquiring first pixel coordinates of the first background environment template image and acquiring second pixel coordinates of the first image;
determining a distance difference between the second pixel coordinates and the first pixel coordinates, and retaining the second pixel coordinates whose distance difference is smaller than zero to obtain difference pixel coordinates;
performing polar coordinate conversion on the difference pixel coordinates according to the camera shooting parameters of the depth camera to obtain converted polar coordinates of the obstacle;
and determining the angle and the distance of the obstacle relative to the origin of coordinates according to the converted polar coordinates to obtain the contour data of the first obstacle, wherein the position of the depth camera is the origin of coordinates.
In an embodiment, before the step of obtaining the target planned path of the vehicle and determining the positioning information of the vehicle based on the radar layer corresponding to the vehicle and the laser radar, the path navigation method further includes:
in the process that the vehicle runs according to a preset mapping path, a second image corresponding to the vehicle is obtained through the depth camera, and a second background environment template image corresponding to the second image is generated;
determining contour data of a second obstacle according to the second background environment template image, the second image and the camera shooting parameters of the depth camera;
and determining the obstacle avoidance layer based on the contour data and the second background environment template image.
In order to achieve the above object, the present invention also provides a path navigation device, including: a memory, a processor and a path navigation program stored on the memory and executable on the processor, wherein the path navigation program, when executed by the processor, implements the steps of the path navigation method described above.
In addition, to achieve the above object, the present invention also provides a computer readable storage medium having a path navigation program stored thereon, which, when executed by a processor, implements the steps of the aforementioned path navigation method.
According to the method, the target planning path of the vehicle is obtained, and the positioning information of the vehicle is determined based on the radar layer corresponding to the vehicle and the laser radar; a first image corresponding to the vehicle is then acquired through the depth camera; and the vehicle is then controlled to run based on the positioning information, the first image, the obstacle avoidance layer and the target planning path. The vehicle is positioned through the radar layer, and path planning and obstacle avoidance are carried out according to the positioning information and the obstacle avoidance layer; when an obstacle appears in the target planning path, the positioning precision of the vehicle is unaffected because the installation height of the laser radar is greater than the height of the obstacle, so that the accuracy and efficiency of vehicle navigation are improved.
Drawings
FIG. 1 is a schematic structural diagram of a path navigation device in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a first embodiment of the path navigation method of the present invention;
FIG. 3 is a schematic view of a scenario in an embodiment of the present invention;
FIG. 4 is a flowchart of a second embodiment of the path navigation method of the present invention;
FIG. 5 is a flowchart of a third embodiment of the path navigation method of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic structural diagram of the path navigation device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the path navigation device may include: a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002, where the communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory); optionally, the memory 1005 may also be a storage device separate from the processor 1001.
Optionally, the path navigation device may further include a camera, a radio frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like.
Those skilled in the art will appreciate that the structure of the path navigation device shown in fig. 1 does not constitute a limitation of the device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and a path navigation program.
In the path navigation device shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and communicating data with it; the user interface 1003 is mainly used for connecting to a client (user side) and communicating data with the client; and the processor 1001 may be used to invoke the path navigation program stored in the memory 1005.
In this embodiment, the path navigation device includes: a memory 1005, a processor 1001 and a path navigation program stored in the memory 1005 and executable on the processor 1001, where the processor 1001, when calling the path navigation program stored in the memory 1005, performs the following operations:
acquiring a target planning path of the vehicle, and determining positioning information of the vehicle based on a radar layer corresponding to the vehicle and the laser radar;
acquiring a first image corresponding to the vehicle through the depth camera;
and controlling the vehicle to run based on the positioning information, the first image, the obstacle avoidance layer and the target planning path of the vehicle.
Further, the processor 1001 may call the path navigation program stored in the memory 1005, and also perform the following operations:
determining whether an obstacle exists in the target planning path or not based on the first image and the obstacle avoidance layer;
and if no obstacle exists in the target planning path, controlling the vehicle to run based on the positioning information and the target planning path.
Further, the processor 1001 may call the path navigation program stored in the memory 1005, and also perform the following operations:
if an obstacle exists in the target planning path, determining the obstacle information based on the first image and the obstacle avoidance layer;
determining a navigation path corresponding to the vehicle body based on the obstacle information, the positioning information, the target position information of the vehicle, the obstacle avoidance layer and the radar layer corresponding to the vehicle;
and taking the navigation path as the target planning path, controlling the vehicle to run based on the target planning path, continuously acquiring the target planning path of the vehicle, and determining the positioning information of the vehicle based on the laser radar.
Further, the processor 1001 may call the path navigation program stored in the memory 1005, and also perform the following operations:
acquiring a target background environment template image based on the positioning information and the obstacle avoidance layer;
performing image processing on the first image to determine first obstacle information corresponding to the first image;
and determining whether an obstacle exists in the target planning path or not based on the first obstacle information and the target background environment template image.
Further, the processor 1001 may call the path navigation program stored in the memory 1005, and also perform the following operations:
generating a first background environment template image corresponding to the first image;
determining contour data of a first obstacle according to the first background environment template image, the first image and camera shooting parameters of the depth camera;
and updating the obstacle avoidance layer based on the contour data of the first obstacle and the first background environment template image.
Further, the processor 1001 may call the path navigation program stored in the memory 1005, and also perform the following operations:
filling the hole data in the first image to obtain a filled first image;
processing the filled first image by adopting a multi-frame averaging method to obtain a multi-frame processed first image;
smoothing the multi-frame processed first image to obtain a smoothed two-dimensional image template image;
and carrying out mean value filtering on the two-dimensional image template image to obtain a first background environment template image corresponding to the first image.
Further, the processor 1001 may call the path navigation program stored in the memory 1005, and also perform the following operations:
acquiring first pixel coordinates of the first background environment template image and acquiring second pixel coordinates of the first image;
determining a distance difference between the second pixel coordinates and the first pixel coordinates, and retaining the second pixel coordinates whose distance difference is smaller than zero to obtain difference pixel coordinates;
performing polar coordinate conversion on the difference pixel coordinates according to the camera shooting parameters of the depth camera to obtain converted polar coordinates of the obstacle;
and determining the angle and the distance of the obstacle relative to the origin of coordinates according to the converted polar coordinates to obtain the contour data of the first obstacle, wherein the position of the depth camera is the origin of coordinates.
Further, the processor 1001 may call the path navigation program stored in the memory 1005, and also perform the following operations:
in the process that the vehicle runs according to a preset mapping path, a second image corresponding to the vehicle is obtained through the depth camera, and a second background environment template image corresponding to the second image is generated;
determining contour data of a second obstacle according to the second background environment template image, the second image and the camera shooting parameters of the depth camera;
and determining the obstacle avoidance layer based on the contour data and the second background environment template image.
The invention also provides a path navigation method, and referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the path navigation method of the invention.
In this embodiment, the path navigation method is applied to a vehicle, the vehicle is provided with a laser radar and a depth camera, and the installation height of the laser radar is greater than that of the depth camera.
The path navigation method of this embodiment can be applied to intelligent automatic driving, which is suitable for warehouse freight in a closed environment as well as road transportation in an open environment; this embodiment takes warehouse freight as an example. The vehicle used for warehouse freight may be a forklift, a carrier, or an AGV (Automated Guided Vehicle) cart, any of which can transport goods.
It should be noted that the vehicle adopts a high-mounted single-line laser radar so as to avoid low obstacles that may exist in the driving scene. Generally, low obstacles are mostly goods, staff or other vehicle bodies, and the vehicle can obtain parameters such as the position information of other vehicle bodies. The mounting height of the laser radar is therefore set to be greater than the height of the goods and the height of the staff; for example, if the height of the goods is 1.5 meters, the mounting height of the laser radar can be set to 1.8 or 1.9 meters, avoiding interference from the 1.5-meter goods obstacles and from the staff.
Meanwhile, the vehicle adopts low-mounted depth cameras, namely three-dimensional structured-light depth cameras; that is, the installation height of the laser radar is greater than that of the depth cameras, and each depth camera is installed at a preset viewing angle so that low obstacles above the ground can be identified through it. For example, 3 surrounding depth cameras may be installed on the vehicle at a mounting height of 1 meter, each with a vertical viewing angle of 45 degrees and a horizontal viewing angle of 50 degrees, so that low obstacles above the ground within preset ranges in front of the vehicle and on its left and right sides can be identified from the images they capture. As shown in fig. 3, 1.1-1.3 are the depth cameras, 2 is the vehicle, 3 is the height of the depth camera from the ground, 4 is the vertical viewing angle of the depth camera, 5 is the horizontal viewing angle of the depth camera, 6 is the laser radar, and 7 is the height of the laser radar from the ground.
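For concreteness, the sensor geometry described above can be expressed as a small configuration structure. The sketch below is illustrative only and is not part of the patent: the field names are hypothetical, the height and view-angle values are the example figures from this paragraph, and the mounting angle is an assumed placeholder, since the text does not specify one.

```python
from dataclasses import dataclass

@dataclass
class DepthCameraConfig:
    mount_height_m: float      # height of the camera above the ground
    mount_angle_deg: float     # downward tilt of the optical axis (assumed value)
    vertical_fov_deg: float    # vertical viewing angle
    horizontal_fov_deg: float  # horizontal viewing angle

@dataclass
class SensorLayout:
    lidar_height_m: float  # must exceed the tallest expected low obstacle
    cameras: list          # surrounding low-mounted depth cameras

# Example layout matching the description: laser radar at 1.8 m, three
# surrounding depth cameras at 1 m with 45-degree vertical and 50-degree
# horizontal viewing angles; the 35-degree mounting angle is a placeholder.
layout = SensorLayout(
    lidar_height_m=1.8,
    cameras=[DepthCameraConfig(1.0, 35.0, 45.0, 50.0) for _ in range(3)],
)
assert layout.lidar_height_m > 1.5  # taller than the 1.5 m goods in the example
```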
The path navigation method comprises the following steps:
step S100, acquiring a target planning path of the vehicle, and determining positioning information of the vehicle based on a radar layer corresponding to the vehicle and the laser radar;
in this embodiment, when a vehicle travels according to a target planned path, the target planned path is acquired, current detection data of a laser radar is acquired, and positioning information of the vehicle is determined based on the detection data and a radar layer, where the positioning information is determined by the laser radar by using an existing SLAM navigation algorithm in this embodiment, and is not described herein again.
It should be noted that the vehicle determines the target planned path through the radar layer and the obstacle avoidance layer according to the initial position information and the target position information, so that no currently known fixed obstacle exists in the target planned path.
Step S200, acquiring a first image corresponding to the vehicle through the depth camera;
in this embodiment, when the vehicle travels along the target planned path, the depth camera performs a shooting operation in real time to obtain a shot image, and when the positioning information of the vehicle is obtained, a first image currently shot by the depth camera is obtained.
Step S300, controlling the vehicle to run based on the positioning information, the first image, the obstacle avoidance layer and the target planning path of the vehicle.
In this embodiment, when the first image is acquired, the obstacle avoidance layer corresponding to the current driving scene is acquired, and the vehicle is controlled to drive based on the positioning information, the first image, the obstacle avoidance layer and the target planned path of the vehicle. Specifically, if it is determined from the first image that no obstacle exists in the target planned path, the vehicle is controlled to drive based on the positioning information and the target planned path; otherwise, the vehicle is controlled to drive based on the positioning information, the first image and the obstacle avoidance layer.
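The branching just described can be summarized in a short sketch. This is not code from the patent: the three helper callables are hypothetical placeholders for the operations named in the text, injected as parameters so the sketch stays self-contained.

```python
from typing import Callable

def control_step(positioning_info,
                 first_image,
                 avoidance_layer,
                 target_planned_path,
                 detect_obstacle: Callable,  # first image + layer -> obstacle or None
                 follow_path: Callable,      # issues a drive command along a path
                 replan: Callable):          # builds a new navigation path
    """One control cycle of step S300 (illustrative sketch only)."""
    obstacle = detect_obstacle(first_image, avoidance_layer, target_planned_path)
    if obstacle is None:
        # No obstacle in the target planned path: keep following it.
        return follow_path(positioning_info, target_planned_path)
    # Obstacle present: re-plan using the obstacle information and the map
    # layers, then treat the navigation path as the new target planned path.
    new_path = replan(obstacle, positioning_info, avoidance_layer)
    return follow_path(positioning_info, new_path)
```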
In this embodiment, the laser radar is mounted at a height greater than that of any obstacle likely to exist, so that no obstacle is present at the height at which the laser radar operates. The precision of positioning the vehicle through the laser radar is therefore not reduced, and large navigation errors are avoided.
In the path navigation method provided by this embodiment, the target planned path of the vehicle is obtained, and the positioning information of the vehicle is determined based on the radar layer corresponding to the vehicle and the laser radar; a first image corresponding to the vehicle is then acquired through the depth camera; and the vehicle is then controlled to run based on the positioning information, the first image, the obstacle avoidance layer and the target planned path. The vehicle is positioned through the radar layer, and path planning and obstacle avoidance are carried out according to the positioning information and the obstacle avoidance layer; when an obstacle appears in the target planned path, the positioning precision of the vehicle is unaffected because the installation height of the laser radar is greater than the height of the obstacle, so that the accuracy and efficiency of vehicle navigation are improved.
Based on the first embodiment, a second embodiment of the path navigation method of the present invention is proposed. Referring to fig. 4, in this embodiment, step S300 includes:
step S310, determining whether an obstacle exists in the target planned path based on the first image and the obstacle avoidance layer;
and step S320, if no obstacle exists in the target planned path, controlling the vehicle to run based on the positioning information and the target planned path.
In this embodiment, when the first image is acquired, the obstacle avoidance layer corresponding to the current driving scene is acquired, and whether an obstacle exists in the target planned path is determined based on the first image and the obstacle avoidance layer. If no obstacle exists, the vehicle is controlled to drive based on the positioning information and the target planned path, that is, to continue driving according to the target planned path, improving the accuracy and efficiency of vehicle navigation.
In an embodiment, a first background environment template image corresponding to the first image may be generated, and the contour data of the first obstacle is determined according to the first background environment template image, the first image and the camera parameters of the depth camera. The contour data of the first obstacle is then compared with the target background environment template image corresponding to the positioning information to obtain the contour data of the target first obstacle in the target background environment template image. If the contour data of the first obstacle contains no contour data other than that of the target first obstacle, it is determined that no obstacle exists in the target planned path; otherwise, it is determined that an obstacle exists in the target planned path.
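As a rough illustration of the comparison just described, suppose each contour is reduced to a hashable descriptor; a set difference between the contours detected in the first image and those recorded in the target background environment template image then reveals any new obstacle. All names and the descriptor format here are hypothetical.

```python
def has_unexpected_obstacle(detected_contours: set, template_contours: set) -> bool:
    """True if the first image contains contour data beyond what the target
    background environment template image already records."""
    return bool(detected_contours - template_contours)

# Example: the template records one fixed obstacle; the camera now sees two.
template = {("shelf", 12.0, 3.0)}
detected = {("shelf", 12.0, 3.0), ("box", 5.0, 1.0)}
print(has_unexpected_obstacle(detected, template))  # True -> obstacle in path
```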
In the path navigation method provided by this embodiment, whether an obstacle exists in the target planned path is determined based on the first image and the obstacle avoidance layer; if no obstacle exists, the vehicle is controlled to run based on the positioning information and the target planned path, that is, the vehicle continues to travel according to the target planned path, improving the accuracy and efficiency of vehicle navigation.
Based on the second embodiment, a third embodiment of the path navigation method of the present invention is proposed. Referring to fig. 5, in this embodiment, after step S310, the method further includes:
step S330, if an obstacle exists in the target planning path, determining the obstacle information based on the first image and the obstacle avoidance layer;
step S340, determining a navigation path corresponding to the vehicle body based on the obstacle information, the positioning information, the target position information of the vehicle, the obstacle avoidance layer and the radar layer corresponding to the vehicle;
and step S350, taking the navigation path as the target planning path, controlling the vehicle to run based on the target planning path, continuously executing the steps of obtaining the target planning path of the vehicle and determining the positioning information of the vehicle based on the laser radar.
In this embodiment, if an obstacle exists in the target planned path, the obstacle information is determined based on the first image and the obstacle avoidance layer. Specifically, a first background environment template image corresponding to the first image is generated, and the contour data of the first obstacle is determined according to the first background environment template image, the first image and the camera parameters of the depth camera. The contour data of the first obstacle is then compared with the target background environment template image corresponding to the positioning information to obtain the contour data of the target first obstacle in the target background environment template image. If the contour data of the first obstacle contains contour data of first obstacles other than the target first obstacle, the contour data of those other first obstacles is obtained, yielding the obstacle information.
And then, a navigation path corresponding to the vehicle body is determined based on the obstacle information, the positioning information, the target position information of the vehicle, the obstacle avoidance layer and the radar layer corresponding to the vehicle; if the vehicle can bypass the obstacle, the navigation path comprises the driving path of the vehicle while bypassing the obstacle and the driving path after bypassing it; if the vehicle cannot bypass the obstacle, the navigation path is taken as the target planned path, the vehicle is controlled to run based on the target planned path, and the steps of obtaining the target planned path of the vehicle and determining the positioning information of the vehicle based on the laser radar are executed again, thereby realizing real-time planning of the driving path of the vehicle.
In the path navigation method provided by this embodiment, if an obstacle exists in the target planned path, the obstacle information is determined based on the first image and the obstacle avoidance layer; a navigation path corresponding to the vehicle body is then determined based on the obstacle information, the positioning information, the target position information of the vehicle, the obstacle avoidance layer and the radar layer corresponding to the vehicle; the navigation path is then taken as the target planned path, the vehicle is controlled to run based on it, the target planned path of the vehicle is continuously acquired, and the positioning information of the vehicle is determined based on the laser radar. The driving path of the vehicle is thus re-planned so that the vehicle avoids the obstacle in the target planned path, improving the navigation efficiency of the vehicle.
Based on the second embodiment, a fourth embodiment of the path navigation method of the present invention is proposed. In this embodiment, step S310 includes:
step S311, acquiring a target background environment template image based on the positioning information and the obstacle avoidance layer;
step S312, performing image processing on the first image to determine first obstacle information corresponding to the first image;
and step S313, determining whether an obstacle exists in the target planned path based on the first obstacle information and the target background environment template image.
In this embodiment, the target background environment template image corresponding to the positioning information is obtained from the obstacle avoidance layer, and the first image is then subjected to image processing to determine the first obstacle information corresponding to it: for example, a first background environment template image corresponding to the first image may be generated, and the contour data of the first obstacle, that is, the first obstacle information, determined according to the first background environment template image, the first image and the camera parameters of the depth camera.
And then, whether an obstacle exists in the target planned path is determined according to the first obstacle information and the target background environment template image. Specifically, the contour data of the target first obstacle in the target background environment template image is obtained; if the contour data of the first obstacle contains no contour data other than that of the target first obstacle, it is determined that no obstacle exists in the target planned path; otherwise, it is determined that an obstacle exists in the target planned path.
According to the path navigation method provided by this embodiment, the target background environment template image is obtained based on the positioning information and the obstacle avoidance layer, the first image is then subjected to image processing to determine the first obstacle information corresponding to the first image, and whether an obstacle exists in the target planned path is then determined based on the first obstacle information and the target background environment template image. Whether an obstacle exists in the target planned path can thus be accurately determined from the first image, further improving the accuracy of obstacle detection and the efficiency of vehicle navigation.
Based on the first embodiment, a fifth embodiment of the path navigation method of the present invention is proposed. In this embodiment, after step S300, the method further includes:
step S400, generating a first background environment template image corresponding to the first image;
in this embodiment, after the vehicle is controlled to travel according to the updated target planned path, a first background environment template image corresponding to the first image may be generated. It should be noted that in the acquired first image, hole data may exist in the first image due to the characteristics of the depth camera, reflection and refraction of ambient light during shooting, and the hole data is located where the depth data cannot be acquired, that is, the depth value corresponding to the hole data is zero. Therefore, the first image needs to be processed to obtain a first background environment template image corresponding to the first image.
Further, the step of generating a first background environment template image corresponding to the first image comprises:
Step a, filling the hole data in the first image to obtain a filled first image.
Specifically, the process of generating the first background environment template image corresponding to the first image is as follows: the hole data in the first image are filled to obtain a filled first image. In this embodiment, a morphological closing operation is implemented with the standard functions of OpenCV: dilation followed by erosion eliminates the hole data in the first image, with the Size parameter of the standard OpenCV function controlling the operation.
Step b, processing the filled first image by adopting a multi-frame averaging method to obtain a multi-frame processed first image.
After the filled first image is obtained, it is processed by the multi-frame averaging method to obtain a multi-frame processed first image. The multi-frame averaging method is essentially a statistical filtering idea: the frames collected over a period of time are added together and averaged, and the average value serves as a reference background model. In this embodiment, multiple frames of images are collected at the same position as the first image; for each position of the depth data in the filled first image, the average of the depth values at that position over the collected frames is taken as the depth value of the corresponding position, thereby obtaining the multi-frame processed first image.
Step c, smoothing the multi-frame processed first image to obtain a smoothed two-dimensional image template image.
Step d, performing mean value filtering on the two-dimensional image template image to obtain a first background environment template image corresponding to the first image.
After the multi-frame processed first image is obtained, it is smoothed to obtain a smoothed two-dimensional image template image; the smoothing methods used in this embodiment include, but are not limited to, interpolation, linear smoothing and convolution. Mean value filtering is then performed on the smoothed two-dimensional image template image, and the mean-filtered image is the first background environment template image corresponding to the first image.
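Steps a to d can be sketched with standard OpenCV and NumPy calls. This is a minimal illustration under stated assumptions: the depth frames arrive as single-channel arrays in which zero marks hole data, the kernel size and frame count are arbitrary choices, and Gaussian blur stands in for the unspecified smoothing method.

```python
import cv2
import numpy as np

def build_background_template(frames, kernel_size=5):
    """Generate a first background environment template image from depth frames.

    frames -- depth images captured at the same position; zeros are hole data.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    # Step a: morphological closing (dilation then erosion) fills small holes.
    filled = [cv2.morphologyEx(f, cv2.MORPH_CLOSE, kernel) for f in frames]
    # Step b: multi-frame averaging gives a reference background model.
    averaged = np.mean(np.stack(filled).astype(np.float32), axis=0)
    # Step c: smoothing (Gaussian here; interpolation or convolution also work).
    smoothed = cv2.GaussianBlur(averaged, (kernel_size, kernel_size), 0)
    # Step d: mean filtering yields the background environment template image.
    return cv2.blur(smoothed, (kernel_size, kernel_size))

# Example: ten synthetic 480x640 depth frames with scattered zero-valued holes.
rng = np.random.default_rng(0)
frames = [np.full((480, 640), 2000, dtype=np.uint16) for _ in range(10)]
for f in frames:
    f[rng.integers(0, 480, 50), rng.integers(0, 640, 50)] = 0  # hole data
template = build_background_template(frames)
print(template.shape, float(template.mean()))
```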
Step S500, determining contour data of a first obstacle according to the first background environment template image, the first image and the camera shooting parameters of the depth camera;
step S600, updating the obstacle avoidance layer based on the contour data and the first background environment template image.
After the first background environment template image is obtained, the camera shooting parameters of the depth camera are acquired, and the contour data of the first obstacle is determined according to the first background environment template image, the first image and the camera shooting parameters, where the contour data of the first obstacle comprises the contour data of all fixed first obstacles corresponding to the first image. The camera shooting parameters include, but are not limited to, the installation height, the installation angle, the vertical field-of-view angle, the horizontal field-of-view angle, the effective pixel row number and the effective pixel column number of the depth camera; the contour data of the first obstacle consists of the distance between the obstacle and the vehicle and the position of the obstacle, the position being represented in this embodiment in the form of coordinates.
Then, the obstacle avoidance layer is updated based on the contour data and the first background environment template image: specifically, the contour data is labeled in the first background environment template image, and the obstacle avoidance layer is updated based on the labeled first background environment template image, realizing dynamic reconstruction of the obstacle avoidance layer and improving navigation efficiency.
It should be noted that, when the vehicle reaches the destination position corresponding to the target planned path, the above operation may be performed to update the obstacle avoidance layer.
According to the path navigation method provided by this embodiment, a first background environment template image corresponding to the first image is generated, the contour data of the first obstacle is determined according to the first background environment template image, the first image and the camera shooting parameters of the depth camera, and the obstacle avoidance layer is updated based on the contour data and the first background environment template image, realizing dynamic reconstruction of the obstacle avoidance layer according to the first image and further improving the navigation efficiency of the vehicle.
Based on the fifth embodiment, a sixth embodiment of the path navigation method of the present invention is proposed. In this embodiment, step S500 includes:
step S510, acquiring first pixel coordinates of the first background environment template image, and acquiring second pixel coordinates of the first image;
step S520, determining a distance difference between the second pixel coordinates and the first pixel coordinates, and retaining the second pixel coordinates whose distance difference is smaller than zero to obtain difference pixel coordinates;
step S530, performing polar coordinate conversion on the difference pixel coordinates according to the camera shooting parameters of the depth camera to obtain converted polar coordinates of the obstacle;
and step S540, determining the angle and the distance of the obstacle relative to the origin of coordinates according to the converted polar coordinates to obtain the contour data of the first obstacle, wherein the position of the depth camera is the origin of coordinates.
In this embodiment, the pixel coordinates of the first background environment template image are acquired, and the pixel coordinates of the first image are acquired, and for convenience of distinguishing, the pixel coordinates in the first background environment template image are recorded as the first pixel coordinates, and the pixel coordinates of the first image are recorded as the second pixel coordinates. It can be understood that each pixel point in the image is a coordinate point. After the first pixel coordinate and the second pixel coordinate are obtained, a distance difference value between the second pixel coordinate and the first pixel coordinate is determined, the second pixel coordinate corresponding to the distance difference value being smaller than zero is reserved, and the difference pixel coordinate is obtained.
In the process of determining the distance difference between the second pixel coordinates and the first pixel coordinates, the modulus values corresponding to the second pixel coordinates and the first pixel coordinates are calculated, and the distance difference is then determined from the correspondence between the modulus of a second pixel coordinate and the modulus of a first pixel coordinate. The distance difference is computed between pixel coordinates at the same position in the first image and in the first background environment template image: for example, the distance difference for position A1 is determined from the second pixel coordinate at position A1 of the first image and the first pixel coordinate at position A1 of the first background environment template image, and likewise for positions A2 and A3. Further, any distance difference greater than or equal to zero is recorded as zero.
After the difference pixel coordinates are obtained, polar coordinate conversion is performed on them according to the camera shooting parameters of the depth camera to obtain the converted polar coordinates of the obstacle, and the angle and the distance of the obstacle relative to the origin of coordinates are determined from the converted polar coordinates to obtain the contour data of the first obstacle. The origin of coordinates is the position of the depth camera, so the angle and distance are those of the obstacle relative to the vehicle. It should be noted that the coordinate system of the origin of coordinates differs from that of the first and second pixel coordinates; coordinates can be converted between the two coordinate systems.
Further, step S530 includes:
Step g1, reading the installation height, the installation angle, the vertical field-of-view angle, the horizontal field-of-view angle, the effective pixel row number and the effective pixel column number of the depth camera, reading the difference pixel coordinates as measuring points, and executing the following steps on the measuring points one by one.
Step g2, detecting the depth value between the measuring point and the depth camera, and the pixel row number and the pixel column number of the measuring point.
Step g3, determining the polar coordinate modulus of the measuring point according to the installation angle, the vertical field-of-view angle, the pixel row number, the effective pixel row number and the depth value.
Step g4, determining the polar coordinate angle of the measuring point according to the horizontal field-of-view angle, the installation height, the installation angle, the vertical field-of-view angle, the pixel column number, the effective pixel column number, the pixel row number and the effective pixel row number.
Step g5, determining the polar coordinate modulus and the polar coordinate angle as the polar coordinates of the measuring point, and, after polar coordinates have been generated for every measuring point, determining these polar coordinates as the converted polar coordinates of the obstacle.
In this embodiment, a three-dimensional space coordinate system is established in advance, which takes the position of the depth camera as the origin of coordinates, takes the plane in which the vehicle lies as the XY plane, and takes the space above and perpendicular to the XY plane as the positive Z-axis direction. Within the XY plane, the direction straight ahead of the vehicle is the Y-axis direction, and the direction perpendicular to the Y-axis direction is the X-axis direction. Taking the positive direction of the Y axis as a preset direction, the depth camera images each obstacle once it is detected, forms the projection of each obstacle in the preset direction, and detects the projection height of each obstacle.
Specifically, the camera shooting parameters of the depth camera are read, including the installation height H, the installation angle θ, the vertical field-of-view angle ω_z, the horizontal field-of-view angle ω_h, the effective pixel row number L and the effective pixel column number C. The effective pixel row number is the maximum imaging pixel value of the depth camera in the Y-axis direction, and the effective pixel column number is the maximum imaging pixel value in the X-axis direction. The coordinates of each pixel point contained in the difference pixel coordinates are read as measuring points, and the measuring points are processed one by one. During processing, the depth value D between the measuring point and the depth camera, the pixel row number n and the pixel column number m of the measuring point are first detected; the polar coordinate modulus of the measuring point is then determined from the installation angle, the vertical field-of-view angle, the pixel row number, the effective pixel row number and the depth value. Specifically, the installation angle θ, the vertical field-of-view angle ω_z, the pixel row number n and the effective pixel row number L are substituted into formula (1) to obtain the deflection angle α of the row where the pixel is located, where formula (1) is:
α = θ - (ω_z/2) + (ω_z*n/L)    (1).
After the deflection angle α of the row where the pixel is located is obtained from formula (1), the deflection angle α and the depth value D are substituted into formula (2) to obtain the polar coordinate modulus r of the measuring point, where formula (2) is:
r = D*Cos(α)    (2).
Further, the absolute-value coordinates (|Xmax|, |Ymax|) of the farthest projection point imaged by the depth camera and the absolute-value coordinates (|Xmin|, |Ymin|) of the nearest projection point are calculated. Specifically, the horizontal field-of-view angle ω_h, the installation height H, the installation angle θ and the vertical field-of-view angle ω_z are substituted into formula (3) to obtain the value of |Xmax|; H, θ and ω_z are substituted into formula (4) to obtain the value of |Ymax|; ω_h, H, θ and ω_z are substituted into formula (5) to obtain the value of |Xmin|; and H, θ and ω_z are substituted into formula (6) to obtain the value of |Ymin|. Formulas (3), (4), (5) and (6) are respectively:
|Xmax| = Tan(0.5*ω_h)*H/Cos(θ - 0.5*ω_z)    (3);
|Ymax| = H/Tan(θ - 0.5*ω_z)    (4);
|Xmin| = Tan(0.5*ω_h)*H/Cos(θ + 0.5*ω_z)    (5);
|Ymin| = H/Tan(θ + 0.5*ω_z)    (6).
Further, the absolute-value coordinates (|Xc|, |Yc|) of the measuring point are calculated: the pixel column number m, the effective pixel column number C, |Xmax| and |Xmin| are substituted into formula (7) to obtain the value of |Xc|, and the pixel row number n, the effective pixel row number L, |Ymax| and |Ymin| are substituted into formula (8) to obtain the value of |Yc|, where formulas (7) and (8) are respectively:
|Xc| = m/C*(|Xmax| - |Xmin|) + |Xmin|    (7);
|Yc| = n/L*(|Ymax| - |Ymin|) + |Ymin|    (8).
Thereafter, the absolute-value coordinates of the measuring point are substituted into formula (9) to obtain the polar coordinate angle φ of the measuring point, where formula (9) is:
φ = Tan⁻¹(|Yc|/|Xc|)    (9).
Understandably, the polar coordinate modulus and the polar coordinate angle of the measuring point calculated by formulas (1) to (9) are determined as the polar coordinates of the corresponding measuring point; after polar coordinates have been generated for every measuring point, these polar coordinates are determined as the converted polar coordinates of the obstacle, of which there is therefore at least one.
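The chain of formulas (1) to (9) can be checked with a direct transcription. The sketch below is a plain re-statement of those formulas, not code from the patent; all angles are taken in radians, and the example values (including the mounting angle) are hypothetical. The polar angle symbol, garbled in the source, is written `phi` here.

```python
import math

def pixel_to_polar(D, n, m, H, theta, omega_z, omega_h, L, C):
    """Convert one difference-pixel measuring point to polar coordinates.

    D: depth value; n, m: pixel row/column of the point; H: mounting height;
    theta: mounting angle; omega_z, omega_h: vertical/horizontal field-of-view
    angles (radians); L, C: effective pixel rows/columns.
    Returns (r, phi): polar modulus (formula 2) and polar angle (formula 9).
    """
    alpha = theta - omega_z / 2 + omega_z * n / L                          # (1)
    r = D * math.cos(alpha)                                                # (2)
    x_max = math.tan(0.5 * omega_h) * H / math.cos(theta - 0.5 * omega_z)  # (3)
    y_max = H / math.tan(theta - 0.5 * omega_z)                            # (4)
    x_min = math.tan(0.5 * omega_h) * H / math.cos(theta + 0.5 * omega_z)  # (5)
    y_min = H / math.tan(theta + 0.5 * omega_z)                            # (6)
    x_c = m / C * (x_max - x_min) + x_min                                  # (7)
    y_c = n / L * (y_max - y_min) + y_min                                  # (8)
    phi = math.atan(y_c / x_c)                                             # (9)
    return r, phi

# Example with the figures used earlier (1 m height, 45/50-degree field-of-view
# angles, hypothetical 35-degree mounting angle, 480x640 effective pixels):
r, phi = pixel_to_polar(D=2.0, n=240, m=320, H=1.0,
                        theta=math.radians(35), omega_z=math.radians(45),
                        omega_h=math.radians(50), L=480, C=640)
print(round(r, 3), round(math.degrees(phi), 1))
```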
Further, step S540 includes:
Step h1, selecting the converted polar coordinates within a preset angle range to generate a polar coordinate set, and sequentially performing median filtering and mean filtering on each element in the polar coordinate set to generate a processing result.
Step h2, merging the elements in the processing result to generate a target element, and calculating the angle and the distance between the target element and the depth camera, which correspond to the angle and the distance of the obstacle relative to the origin of coordinates, to obtain the contour data of the first obstacle.
After the converted polar coordinates of the obstacle are obtained, those within a preset angle range are selected and determined as a polar coordinate set, each polar coordinate point in the set being one of its elements; the size of the preset angle range is not specifically limited in this embodiment. Median filtering is performed on each element in the polar coordinate set to remove salt-and-pepper noise points; then, by setting a minimum distance from the origin of coordinates, elements whose distance from the origin is larger than this minimum value are removed; mean filtering is then performed on each remaining element to generate the processing result. The elements in the processing result are then merged: all the polar coordinate points serving as elements are combined into one polar coordinate point, which is the target element. The angle and distance of the target element relative to the depth camera are then calculated, determining the relative distance between the vehicle and the obstacle and yielding the angle and distance of the obstacle relative to the origin of coordinates, that is, the contour data of the first obstacle. It will be appreciated that the obstacle closest to the vehicle can be determined from the contour data.
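A sketch of steps h1 and h2 under stated assumptions: the polar set is an (N, 2) array of (r, phi) pairs, the median-filter window and the tolerance band above the minimum distance are hypothetical parameters (the text names only a minimum value), and merging is done by averaging the retained points.

```python
import numpy as np

def merge_obstacle_point(points, angle_lo, angle_hi, window=3, tol=0.05):
    """Reduce converted polar coordinates to one obstacle contour point.

    points -- (N, 2) array of (r, phi) pairs (converted polar coordinates).
    angle_lo, angle_hi -- preset angle range selecting the polar coordinate set.
    window -- odd median-filter window; tol -- keep-band above the minimum
    distance (hypothetical; the text names only a minimum value).
    """
    # h1: select the converted polar coordinates within the preset angle range.
    sel = points[(points[:, 1] >= angle_lo) & (points[:, 1] <= angle_hi)]
    if sel.size == 0:
        return None
    sel = sel[np.argsort(sel[:, 1])]  # order the elements by angle
    r, phi = sel[:, 0], sel[:, 1]
    # Median filtering removes salt-and-pepper noise points in the distances.
    pad = window // 2
    padded = np.pad(r, pad, mode="edge")
    r = np.array([np.median(padded[i:i + window]) for i in range(len(r))])
    # Drop elements farther from the origin than the minimum distance (+ tol).
    keep = r <= r.min() + tol
    # h2: mean filtering / merging into one target element, whose (distance,
    # angle) is the obstacle's distance and angle relative to the camera.
    return float(np.mean(r[keep])), float(np.mean(phi[keep]))

# Example: four points near 2 m with one salt-and-pepper outlier at 5 m.
pts = np.array([[2.0, 0.10], [1.9, 0.12], [5.0, 0.11], [2.05, 0.13]])
print(merge_obstacle_point(pts, 0.0, 0.2))  # approximately (2.03, 0.115)
```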
In the process of calculating the contour data, the difference pixel coordinates are converted into polar coordinates according to the camera shooting parameters of the depth camera and then filtered, denoised and calculated, which improves the accuracy of the calculated distance between the obstacle and the vehicle and, further, the accuracy of identifying the position and distance of the tray on the bin.
Further, the step of determining a distance difference between the second pixel coordinate and the first pixel coordinate comprises:
step f1, determining a threshold error corresponding to the first pixel coordinate, and calculating a product between a first modulus corresponding to the first pixel coordinate and the threshold error.
Step f2, adding the product to the first modulus to obtain a third modulus, and subtracting the third modulus from the second modulus corresponding to the second pixel coordinate to obtain the distance difference between the second pixel coordinate and the first pixel coordinate.
Further, the process of determining the distance difference between the second pixel coordinate and the first pixel coordinate is as follows: a threshold error corresponding to the first pixel coordinate is determined, and the product of the first modulus corresponding to the first pixel coordinate and the threshold error is calculated. The threshold error may be set according to specific needs, such as 2%, 2.5% or 3.2%, and indicates the accuracy of the calculated distance difference; when the threshold error is set to 2.5%, the calculated distance difference can reach an accuracy of 2.5 cm. After the product of the first modulus and the threshold error is obtained, the product is added to the first modulus to obtain a third modulus, and the third modulus is subtracted from the second modulus corresponding to the second pixel coordinate to obtain the distance difference between the second pixel coordinate and the first pixel coordinate. If the distance difference is denoted d1, the second modulus d, the first modulus d0 and the threshold error m%, then d1 = d - d0 * (1 + m%).
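Written out as a sketch, the test of this step follows directly from d1 = d - d0 * (1 + m%); the function name and the sample values are assumptions for illustration:

    def distance_difference(d, d0, m_percent):
        # d: second modulus, d0: first modulus,
        # m_percent: threshold error, e.g. 2.5 for 2.5%
        return d - d0 * (1 + m_percent / 100.0)

    # A pixel is retained as a difference pixel only when d1 < 0, i.e. the live
    # reading is closer than the template reading by more than the error margin:
    # distance_difference(1.90, 2.00, 2.5) -> about -0.15, so this pixel is retained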
In the path navigation method provided by this embodiment, a first pixel coordinate of the first background environment template image and a second pixel coordinate of the first image are obtained; a distance difference between the second pixel coordinate and the first pixel coordinate is then determined, and the second pixel coordinates whose distance difference is smaller than zero are retained to obtain the difference pixel coordinates; polar coordinate conversion is then performed on the difference pixel coordinates according to the camera shooting parameters of the depth camera to obtain the polar coordinates after obstacle conversion; and the angle and distance of the obstacle relative to the origin of coordinates are determined from the converted polar coordinates to obtain the contour data of the first obstacle. The contour data of the first obstacle is thus obtained accurately from the pixel coordinates, which improves the updating accuracy of the obstacle avoidance layer and, in turn, the navigation efficiency of the vehicle.
Based on the foregoing embodiments, a seventh embodiment of the route guidance method according to the present invention is provided, where in this embodiment, before step S100, the method further includes:
step S700, in the process that the vehicle runs according to a preset mapping path, a second image corresponding to the vehicle is obtained through the depth camera, and a second background environment template image corresponding to the second image is generated;
step S800, determining contour data of a second obstacle according to the second background environment template image, the second image and the camera shooting parameters of the depth camera;
step S900, determining the obstacle avoidance layer based on the contour data of the second obstacle and the second background environment template image.
In this embodiment, during mapping the vehicle travels according to a preset mapping path and, while establishing the radar layer with a mapping algorithm, simultaneously establishes the obstacle avoidance layer. Specifically, a second image corresponding to the vehicle is acquired through the depth camera, and a second background environment template image corresponding to the second image is generated; the contour data of a second obstacle is determined according to the second background environment template image, the second image and the camera shooting parameters of the depth camera; and the obstacle avoidance layer is determined based on the contour data of the second obstacle and the second background environment template image. Specifically, the contour data of the second obstacle may be labeled in the second background environment template image, and the obstacle avoidance layer generated based on the labeled second background environment template image. The generation of the second background environment template image is similar to that of the first background environment template image, and the acquisition of the contour data of the second obstacle is similar to that of the contour data of the first obstacle, so they are not described again here.
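By way of illustration, labeling the second-obstacle contour data in a grid aligned with the second background environment template image might look like the following sketch; the grid resolution, the origin placement and all names are assumptions, since the embodiment does not prescribe a particular rasterisation:

    import numpy as np

    def build_obstacle_layer(template_img, contour_points, resolution=0.05):
        # contour_points: iterable of (theta, rho) pairs, the second-obstacle contour data
        layer = np.zeros_like(template_img, dtype=np.uint8)
        h, w = layer.shape[:2]
        origin = (h // 2, w // 2)                   # depth camera position taken as the grid origin
        for theta, rho in contour_points:
            row = origin[0] - int(rho * np.sin(theta) / resolution)
            col = origin[1] + int(rho * np.cos(theta) / resolution)
            if 0 <= row < h and 0 <= col < w:
                layer[row, col] = 255               # mark the cell as occupied by the obstacle
        return layer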
In the path navigation method provided by this embodiment, while the vehicle travels according to a preset mapping path, a second image corresponding to the vehicle is acquired through the depth camera and a second background environment template image corresponding to the second image is generated; the contour data of a second obstacle is then determined according to the second background environment template image, the second image and the camera shooting parameters of the depth camera; and the obstacle avoidance layer is determined based on the contour data of the second obstacle and the second background environment template image. The obstacle avoidance layer is thereby generated accurately, so that subsequent path navigation can be carried out according to it, improving the navigation efficiency of the AGV.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where a path navigation program is stored on the computer-readable storage medium, and when executed by a processor, the path navigation program implements the following operations:
acquiring a target planning path of the vehicle, and determining positioning information of the vehicle based on a radar layer corresponding to the vehicle and the laser radar;
acquiring a first image corresponding to the vehicle through the depth camera;
and controlling the vehicle to run based on the positioning information, the first image, the obstacle avoidance layer and the target planning path of the vehicle.
Further, the path guidance program when executed by the processor further performs the following operations:
determining whether an obstacle exists in the target planning path or not based on the first image and the obstacle avoidance layer;
and if no obstacle exists in the target planning path, controlling the vehicle to run based on the positioning information and the target planning path.
Further, the path guidance program when executed by the processor further performs the following operations:
if an obstacle exists in the target planning path, determining the obstacle information based on the first image and the obstacle avoidance layer;
determining a navigation path corresponding to the vehicle based on the obstacle information, the positioning information, the target position information of the vehicle, the obstacle avoidance layer and the radar layer corresponding to the vehicle;
and taking the navigation path as the target planning path, controlling the vehicle to run based on the target planning path, continuously acquiring the target planning path of the vehicle, and determining the positioning information of the vehicle based on the laser radar.
Further, the path guidance program when executed by the processor further performs the following operations:
acquiring a target background environment template image based on the positioning information and the obstacle avoidance layer;
performing image processing on the first image to determine first obstacle information corresponding to the image;
and determining whether an obstacle exists in the target planning path or not based on the first obstacle information and the target background environment template image.
Further, the path guidance program when executed by the processor further performs the following operations:
generating a first background environment template image corresponding to the first image;
determining contour data of a first obstacle according to the first background environment template image, the first image and camera shooting parameters of the depth camera;
and updating the obstacle avoidance layer based on the contour data of the first obstacle and the first background environment template image.
Further, the path guidance program when executed by the processor further performs the following operations:
filling in the hole (cavity) data in the first image to obtain a filled first image;
processing the filled first image by a multi-frame averaging method to obtain a multi-frame-processed first image;
smoothing the multi-frame-processed first image to obtain a smoothed two-dimensional image template image;
and performing mean filtering on the two-dimensional image template image to obtain the first background environment template image corresponding to the first image.
Further, the path guidance program when executed by the processor further performs the following operations:
acquiring a first pixel coordinate of the first background environment template image and acquiring a second pixel coordinate of the first image;
determining a distance difference value between the second pixel coordinate and the first pixel coordinate, and reserving the second pixel coordinate corresponding to the distance difference value smaller than zero to obtain a difference pixel coordinate;
performing polar coordinate conversion on the difference pixel coordinates according to the camera shooting parameters of the depth camera to obtain the polar coordinates after obstacle conversion;
and determining the angle and the distance of the obstacle relative to the origin of coordinates according to the converted polar coordinates to obtain the contour data of the first obstacle, wherein the position of the depth camera is the origin of coordinates.
Further, the path guidance program when executed by the processor further performs the following operations:
in the process that the vehicle runs according to a preset mapping path, a second image corresponding to the vehicle is obtained through the depth camera, and a second background environment template image corresponding to the second image is generated;
determining contour data of a second obstacle according to the second background environment template image, the second image and the camera shooting parameters of the depth camera;
and determining the obstacle avoidance layer based on the contour data and the second background environment template image.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises that element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A path navigation method is applied to a vehicle, the vehicle is provided with a laser radar and a depth camera, the installation height of the laser radar is greater than that of the depth camera, and the path navigation method comprises the following steps:
acquiring a target planning path of the vehicle, and determining positioning information of the vehicle based on a radar layer corresponding to the vehicle and the laser radar;
acquiring a first image corresponding to the vehicle through the depth camera;
and controlling the vehicle to run based on the positioning information, the first image, the obstacle avoidance layer and the target planning path of the vehicle.
2. The route guidance method of claim 1, wherein the step of controlling the vehicle to travel based on the positioning information, the first image, an obstacle avoidance layer, and a target planned route of the vehicle comprises:
determining whether an obstacle exists in the target planning path or not based on the first image and the obstacle avoidance layer;
and if no obstacle exists in the target planning path, controlling the vehicle to run based on the positioning information and the target planning path.
3. The route guidance method of claim 2, wherein after the step of determining whether an obstacle exists in the target planned route based on the first image and the obstacle avoidance layer, further comprising:
if an obstacle exists in the target planning path, determining the obstacle information based on the first image and the obstacle avoidance layer;
determining a navigation path corresponding to the vehicle based on the obstacle information, the positioning information, the target position information of the vehicle, the obstacle avoidance layer and the radar layer corresponding to the vehicle;
and taking the navigation path as the target planning path, controlling the vehicle to run based on the target planning path, continuously acquiring the target planning path of the vehicle, and determining the positioning information of the vehicle based on the laser radar.
4. The route guidance method of claim 2, wherein the step of determining whether an obstacle exists in the target planned route based on the first image and an obstacle avoidance layer comprises:
acquiring a target background environment template image based on the positioning information and the obstacle avoidance layer;
performing image processing on the first image to determine first obstacle information corresponding to the image;
and determining whether an obstacle exists in the target planning path or not based on the first obstacle information and the target background environment template image.
5. The route guidance method of claim 1, wherein after the step of controlling the vehicle to travel based on the positioning information, the first image, an obstacle avoidance layer, and a target planned route of the vehicle, the route guidance method further comprises:
generating a first background environment template image corresponding to the first image;
determining contour data of a first obstacle according to the first background environment template image, the first image and camera shooting parameters of the depth camera;
and updating an obstacle avoidance layer based on the contour data of the first obstacle and the first background environment template image.
6. The path guidance method of claim 5, wherein the step of generating a first background environment template image corresponding to the first image comprises:
filling in the hole (cavity) data in the first image to obtain a filled first image;
processing the filled first image by a multi-frame averaging method to obtain a multi-frame-processed first image;
smoothing the multi-frame-processed first image to obtain a smoothed two-dimensional image template image;
and performing mean filtering on the two-dimensional image template image to obtain the first background environment template image corresponding to the first image.
7. The path guidance method of claim 5, wherein the step of determining contour data of a first obstacle from the first background environment template image, the first image, and camera parameters of the depth camera comprises:
acquiring a first pixel coordinate of the first background environment template image and acquiring a second pixel coordinate of the first image;
determining a distance difference value between the second pixel coordinate and the first pixel coordinate, and reserving the second pixel coordinate corresponding to the distance difference value smaller than zero to obtain a difference pixel coordinate;
performing polar coordinate conversion on the difference pixel coordinates according to the camera shooting parameters of the depth camera to obtain the polar coordinates after obstacle conversion;
and determining the angle and the distance of the obstacle relative to the origin of coordinates according to the converted polar coordinates to obtain the contour data of the first obstacle, wherein the position of the depth camera is the origin of coordinates.
8. The path guidance method according to any one of claims 1 to 7, wherein before the step of obtaining the target planned path of the vehicle and determining the positioning information of the vehicle based on the radar layer corresponding to the vehicle and the lidar, the path guidance method further comprises:
in the process that the vehicle runs according to a preset mapping path, a second image corresponding to the vehicle is obtained through the depth camera, and a second background environment template image corresponding to the second image is generated;
determining contour data of a second obstacle according to the second background environment template image, the second image and the camera shooting parameters of the depth camera;
and determining the obstacle avoidance layer based on the contour data and the second background environment template image.
9. A route guidance device, characterized in that the route guidance device comprises: memory, processor and a path guidance program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the path guidance method according to any one of claims 1 to 8.
10. A computer-readable storage medium, having stored thereon a path guidance program which, when executed by a processor, implements the steps of the path guidance method according to any one of claims 1 to 8.
CN201911104806.4A 2019-11-12 2019-11-12 Path navigation method, device and computer readable storage medium Active CN110764110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911104806.4A CN110764110B (en) 2019-11-12 2019-11-12 Path navigation method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110764110A true CN110764110A (en) 2020-02-07
CN110764110B CN110764110B (en) 2022-04-08

Family

ID=69337664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911104806.4A Active CN110764110B (en) 2019-11-12 2019-11-12 Path navigation method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110764110B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170300061A1 * 2005-10-21 2017-10-19 Irobot Corporation Methods and systems for obstacle detection using structured light
CN107305386A * 2016-04-22 2017-10-31 王锦海 An intelligent optical guidance system
CN107063276A * 2016-12-12 2017-08-18 成都育芽科技有限公司 A delay-free, high-precision on-vehicle navigation apparatus and method for unmanned vehicles
US20180292825A1 * 2017-04-07 2018-10-11 Nvidia Corporation Performing autonomous path navigation using deep neural networks
CN107172322A * 2017-06-16 2017-09-15 北京飞识科技有限公司 A video noise reduction method and apparatus
CN207373179U * 2017-10-26 2018-05-18 常熟理工学院 A robot control system for SLAM and navigation
CN108170145A * 2017-12-28 2018-06-15 浙江捷尚人工智能研究发展有限公司 Lidar-based robot obstacle avoidance system and its application method
CN108344414A * 2017-12-29 2018-07-31 中兴通讯股份有限公司 A map construction and navigation method, device and system
CN108958250A * 2018-07-13 2018-12-07 华南理工大学 Multi-sensor mobile platform and navigation and obstacle avoidance method based on a known map
CN109189885A * 2018-08-31 2019-01-11 广东小天才科技有限公司 A real-time control method based on a smart device camera, and smart device
CN109917786A * 2019-02-04 2019-06-21 浙江大学 A robot tracking control and system operation method for operation in complex environments
CN110065558A * 2019-04-22 2019-07-30 深圳创维-Rgb电子有限公司 A back-roll type AGV auxiliary positioning device and method
CN110245199A * 2019-04-28 2019-09-17 浙江省自然资源监测中心 A fusion method for high-inclination-angle video and 2D maps
CN110262495A * 2019-06-26 2019-09-20 山东大学 Control system and method enabling autonomous navigation and accurate positioning of a mobile robot

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PAULO V. K. BORGES: "Real-time autonomous ground vehicle navigation in heterogeneous environments using a 3D LiDAR", 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) *
赖秋玲 et al.: "Research on a SLAM *** based on data fusion and path planning implementation", Computer Knowledge and Technology *
陈泓屺: "A vehicle tracking method based on secondary frame-difference background extraction", Journal of Guangdong University of Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111202472A (en) * 2020-02-18 2020-05-29 深圳市愚公科技有限公司 Terminal map construction method of sweeping robot, terminal equipment and sweeping system
CN112130576A (en) * 2020-10-15 2020-12-25 广州富港万嘉智能科技有限公司 Intelligent vehicle traveling method, computer readable storage medium and AGV
CN114265412A (en) * 2021-12-29 2022-04-01 深圳创维数字技术有限公司 Vehicle control method, device, equipment and computer readable storage medium
CN114265412B (en) * 2021-12-29 2023-10-24 深圳创维数字技术有限公司 Vehicle control method, device, equipment and computer readable storage medium
CN114742490A (en) * 2022-02-24 2022-07-12 南京音飞储存设备(集团)股份有限公司 Vehicle scheduling system, method, computer device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN110764110B (en) 2022-04-08

Similar Documents

Publication Publication Date Title
CN110764110B (en) Path navigation method, device and computer readable storage medium
JP7398506B2 (en) Methods and systems for generating and using localization reference data
US11433880B2 (en) In-vehicle processing apparatus
CN110068836B (en) Laser radar road edge sensing system of intelligent driving electric sweeper
CN110859044B (en) Integrated sensor calibration in natural scenes
US9251587B2 (en) Motion estimation utilizing range detection-enhanced visual odometry
CN113870343B (en) Relative pose calibration method, device, computer equipment and storage medium
CN113657224B (en) Method, device and equipment for determining object state in vehicle-road coordination
CN110703268B (en) Air route planning method and device for autonomous positioning navigation
EP3032818B1 (en) Image processing device
US11908164B2 (en) Automatic extrinsic calibration using sensed data as a target
CN112258590B (en) Laser-based depth camera external parameter calibration method, device and storage medium thereof
CN113947639B (en) Self-adaptive online estimation calibration system and method based on multi-radar point cloud line characteristics
Nienaber et al. A comparison of low-cost monocular vision techniques for pothole distance estimation
US11677931B2 (en) Automated real-time calibration
KR102490521B1 (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
CN114494466B (en) External parameter calibration method, device and equipment and storage medium
CN110816522B (en) Vehicle attitude control method, apparatus, and computer-readable storage medium
CN110989619A (en) Method, apparatus, device and storage medium for locating object
JP6959032B2 (en) Position estimation device, moving device
JP7302966B2 (en) moving body
KR102618951B1 (en) Method for visual mapping, and computer program recorded on record-medium for executing method therefor
JP2018004435A (en) Movement amount calculation device and movement amount calculation method
CN118050736A (en) Joint detection method, device, terminal equipment and storage medium
JP2021173801A (en) Information processing device, control method, program, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant