CN112711263A - Storage automatic guided vehicle obstacle avoidance method and device, computer equipment and storage medium


Info

Publication number
CN112711263A
CN112711263A
Authority
CN
China
Prior art keywords
camera
real
guided vehicle
time
automatic guided
Prior art date
Legal status
Pending
Application number
CN202110069572.5A
Other languages
Chinese (zh)
Inventor
杨秉川
李陆洋
方牧
鲁豫杰
郑帆
Current Assignee
Visionnav Robotics Shenzhen Co Ltd
Original Assignee
Visionnav Robotics Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Visionnav Robotics Shenzhen Co Ltd filed Critical Visionnav Robotics Shenzhen Co Ltd
Priority to CN202110069572.5A
Publication of CN112711263A

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application relates to an obstacle avoidance method and apparatus for a storage automated guided vehicle, a computer device, and a storage medium. The method comprises the following steps: acquiring, based on a 5G communication network, real-time images collected by the cameras arranged on the storage automated guided vehicle; running a preset object detection model, extracting the features of each object in the real-time images, and classifying each object based on its features; selecting a positioning point for each object according to its type, and calculating the real-time coordinates of each object in the vehicle coordinate system of the storage automated guided vehicle according to preset camera calibration parameters and the plane equation of the ground in the camera coordinate system of each camera; and calculating the real-time distance between the storage automated guided vehicle and each object based on the real-time coordinates, and outputting obstacle avoidance information to the vehicle when the real-time distance is smaller than a set threshold. The storage automated guided vehicle receives the obstacle avoidance information and takes obstacle avoidance measures. By adopting the method, the efficiency with which the storage automated guided vehicle identifies obstacles and takes obstacle avoidance measures can be effectively improved.

Description

Storage automatic guided vehicle obstacle avoidance method and device, computer equipment and storage medium
Technical Field
The application relates to the technical field of industrial robots, in particular to an obstacle avoidance method and device for a storage automatic guided vehicle, computer equipment and a storage medium.
Background
With the rapid development of the e-commerce and logistics industries, Automated Guided Vehicles (AGVs) play an important role in the field of smart warehousing. An automated guided vehicle can travel autonomously along a preset route, has a high degree of automation and intelligence, and can be flexibly reconfigured according to the requirements of storage locations, production process flows, and the like. However, existing warehouse environments have complex backgrounds in which humans and machines operate side by side, and an automated guided vehicle cannot effectively identify obstacles on its route using only its on-board safety sensors. Moreover, the coverage of an ordinary 4G wireless network is limited: when the automated guided vehicle travels far from a wireless router, or into a boundary area between several wireless routers, the stability, latency, and bandwidth of the wireless network degrade rapidly and cannot meet the network transmission rate that the vehicle's high-definition cameras need to capture obstacle images during obstacle recognition, which reduces the efficiency with which the automated guided vehicle identifies obstacles and takes obstacle avoidance measures.
Disclosure of Invention
Therefore, in order to solve the above technical problems, it is necessary to provide an obstacle avoidance method and apparatus for a storage automated guided vehicle, a computer device, and a storage medium, which can effectively improve the efficiency of the automated guided vehicle in recognizing obstacles and taking obstacle avoidance measures.
An obstacle avoidance method for a storage automated guided vehicle, the method comprising:
acquiring real-time images acquired by all cameras arranged on the storage automatic guided vehicle based on a 5G communication network;
operating a preset object detection model, extracting the characteristics of each object in the real-time image, and classifying each object based on the characteristics of each object;
selecting a positioning point of each object according to the type of each object, and calculating real-time coordinates of each object in a vehicle coordinate system of the storage automatic guided vehicle according to preset camera calibration parameters and a plane equation of the ground in the camera coordinate system of each camera;
and calculating real-time distances between the storage automatic guided vehicle and each object based on the real-time coordinates, and outputting obstacle avoidance information to the storage automatic guided vehicle based on the 5G communication network when the real-time distances are smaller than a set threshold value.
In one embodiment, after acquiring the real-time images acquired by the cameras arranged on the warehousing automatic guided vehicle based on the 5G communication network, the method includes:
and synchronizing and calibrating the real-time images acquired by the cameras, wherein the synchronization is to synchronize the real-time images acquired by the cameras according to the acquisition time of the real-time images, and the calibration is to calibrate the real-time images acquired by the cameras according to preset camera calibration parameters.
In one embodiment, before the synchronizing and calibrating the real-time images acquired by each of the cameras, the method includes:
the method comprises the steps of obtaining camera calibration parameters of each camera, wherein the camera calibration parameters comprise internal parameters and external parameters of each camera, the internal parameters comprise an internal parameter matrix and distortion parameters of each camera, and the external parameters comprise a rotation matrix and a translation matrix of each camera.
In one embodiment, the obtaining the camera calibration parameters of each camera includes:
extracting each corner point in the real-time image by using a checkerboard method, calculating pixel coordinates of each corner point, and obtaining an internal reference matrix and distortion parameters of each camera;
the method comprises the steps of obtaining position data and posture data of each camera, and calculating a rotation matrix and a translation matrix of each camera, wherein the position data comprise the height of each camera and the actual distance between the ground position corresponding to the center of the real-time image and the projection position of each camera on the ground, and the posture data comprise the pitch angle, the yaw angle and the roll angle of each camera.
In one embodiment, after the synchronizing and calibrating the real-time images acquired by each of the cameras, the method includes:
and calculating a plane equation of the ground under the camera coordinate system of each camera.
In one embodiment, the calculating a plane equation of the ground in a camera coordinate system of each camera includes:
acquiring each feature point of the ground, acquiring an actual distance between every two feature points, and calculating the coordinate of each feature point under the camera coordinate system of each camera according to the actual distance;
and calculating a plane equation of the ground under the camera coordinate system of each camera according to the coordinates of each feature point under the camera coordinate system of each camera.
In one embodiment, before the acquiring, based on the 5G communication network, the real-time images acquired by each camera disposed on the warehousing automatic guided vehicle, the method includes:
testing the 5G communication network, wherein the testing comprises whether the 5G communication network is connected and whether the transmission rate of the 5G communication network is greater than a preset value;
and when the 5G communication network is connected and the transmission rate of the 5G communication network is greater than the preset value, the step of acquiring the real-time images acquired by the cameras arranged on the storage automated guided vehicle is carried out.
In one embodiment, after the calculating the plane equation of the ground in the camera coordinate system of each camera, the method includes:
and training and obtaining the object detection model.
In one embodiment, the training and obtaining the object detection model includes:
obtaining an image of an operator authorized to enter the warehouse;
acquiring a warehouse image and the type of each object in the warehouse image marked by a user;
and performing feature extraction on each object in the operator image and the warehouse image by using a full convolution layer of a convolution neural network, training and obtaining the object detection model.
In one embodiment, the selecting a positioning point of each object according to the type of each object, and calculating a real-time coordinate of each object in a vehicle coordinate system of the storage automated guided vehicle according to preset camera calibration parameters and a plane equation of the ground in the camera coordinate system of each camera includes:
selecting positioning points of the objects according to the types of the objects, taking the ankles of operators as the positioning points when the objects are operators, and taking the middle lower parts of vehicles or cargoes as the positioning points when the objects are vehicles or cargoes;
acquiring pixel coordinates of the positioning points of the objects under the image coordinate system of the real-time image based on the positioning points of the objects;
converting the coordinates of the pixels of the positioning points into coordinates of the cameras of the positioning points under the camera coordinate systems of the cameras according to preset internal parameters of the cameras and a plane equation of the ground under the camera coordinate systems of the cameras;
and converting the coordinates of the positioning point camera into the coordinates of the positioning point vehicle under the vehicle coordinate system of the warehousing automatic guided vehicle according to preset external parameters of each camera, and obtaining the real-time coordinates of each object under the vehicle coordinate system of the warehousing automatic guided vehicle.
An obstacle avoidance device for a storage automated guided vehicle, the device comprising:
the image acquisition module is used for acquiring real-time images acquired by all cameras arranged on the storage automatic guided vehicle based on a 5G communication network;
the image classification module is used for operating a preset object detection model, extracting the characteristics of each object in the real-time image and classifying each object based on the characteristics of each object;
the coordinate calculation module is used for selecting positioning points of the objects according to the types of the objects, and calculating real-time coordinates of the objects in a vehicle coordinate system of the warehousing automatic guided vehicle according to preset camera calibration parameters and a plane equation of the ground in the camera coordinate system of each camera;
and the information output module is used for calculating the real-time distance between the storage automatic guided vehicle and each object based on the real-time coordinates, and outputting obstacle avoidance information to the storage automatic guided vehicle based on the 5G communication network when the real-time distance is smaller than a set threshold value.
A storage automated guided vehicle, the storage automated guided vehicle comprising:
the camera module is arranged on the storage automatic guided vehicle and used for acquiring a real-time image of the environment where the storage automatic guided vehicle is located;
the 5G communication module is used for transmitting the real-time images collected by the camera module arranged on the warehousing automatic guided vehicle to an edge server through a 5G communication network; receiving obstacle avoidance information which is calculated by the edge server based on the real-time image and is obtained when the real-time distance between the warehousing automatic guided vehicle and each object in the real-time image is smaller than a set threshold value through a 5G communication network;
and the storage automatic guided vehicle module is used for making obstacle avoidance measures according to the obstacle avoidance information.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the storage automated guided vehicle obstacle avoidance method described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the warehouse automated guided vehicle obstacle avoidance method as described above.
According to the obstacle avoidance method and apparatus for the storage automated guided vehicle, the computer device, and the storage medium, the edge server acquires real-time images collected by the cameras arranged on the storage automated guided vehicle based on a 5G communication network; runs a preset object detection model, extracts the features of each object in the real-time image, and classifies each object based on those features; selects positioning points for each object according to its type, and calculates real-time coordinates of each object in the vehicle coordinate system of the storage automated guided vehicle according to preset camera calibration parameters and the plane equation of the ground in the camera coordinate system of each camera; and calculates the real-time distance between the storage automated guided vehicle and each object based on the real-time coordinates, outputting obstacle avoidance information to the vehicle over the 5G communication network when the real-time distance is smaller than a set threshold. After receiving the obstacle avoidance information, the storage automated guided vehicle takes obstacle avoidance measures accordingly. The method provided by the embodiments of the present application can effectively improve the efficiency with which the storage automated guided vehicle recognizes obstacles and takes obstacle avoidance measures, thereby improving the intelligence of industrial products.
Drawings
FIG. 1 is a diagram of an exemplary application environment of the obstacle avoidance method for a storage automated guided vehicle;
FIG. 2 is a schematic flow chart illustrating an exemplary method for avoiding obstacles of a storage automated guided vehicle;
FIG. 3 is a schematic diagram of a process for obtaining camera calibration parameters of each camera according to an embodiment;
FIG. 4 is a schematic flow chart of a process for calculating a plane equation of the ground in the camera coordinate system of each camera in one embodiment;
FIG. 5 is a flow diagram illustrating testing of a 5G communication network in one embodiment;
FIG. 6 is a schematic flow chart illustrating training and obtaining an object detection model according to one embodiment;
FIG. 7 is a schematic flow chart illustrating a process for calculating real-time coordinates of objects in a vehicle coordinate system of a storage automated guided vehicle according to an embodiment;
FIG. 8 is a schematic view of the storage automated guided vehicle and the mounting location of each camera in one embodiment;
FIG. 9 is a diagram of a 5G communication network in one embodiment;
FIG. 10 is a schematic diagram illustrating synchronization of real-time images acquired by cameras in an exemplary embodiment;
FIG. 11 is a diagram illustrating selecting anchor points for objects in an exemplary embodiment;
FIG. 12 is a schematic diagram of a system interface during operation of an edge server in accordance with an exemplary embodiment;
FIG. 13 is a diagram illustrating face detection performed by an operator in an exemplary embodiment;
FIG. 14 is a diagram illustrating a system interface in the form of a radar map displayed during operation of an edge server after scaling and merging the real-time images captured by the cameras in an exemplary embodiment;
fig. 15 is a block diagram of an obstacle avoidance apparatus of the storage automated guided vehicle in one embodiment;
FIG. 16 is a block diagram of a storage automated guided vehicle according to one embodiment;
FIG. 17 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The obstacle avoidance method for the storage automatic guided vehicle can be applied to the application environment shown in fig. 1. A plurality of cameras 102 are mounted on the body of the storage automated guided vehicle 110, a 5G communication module 104 is also present in the storage automated guided vehicle 110, and the storage automated guided vehicle 110 communicates with the edge server 120 via a 5G communication network.
Specifically, the edge server 120 acquires real-time images collected by the cameras 102 arranged on the warehousing automated guided vehicle 110 based on a 5G communication network; runs a preset object detection model, extracts the features of each object in the real-time image, and classifies each object based on those features; selects a positioning point for each object according to its type, and calculates real-time coordinates of each object in the vehicle coordinate system of the storage automated guided vehicle 110 according to preset camera calibration parameters and the plane equation of the ground in the camera coordinate system of each camera; and, based on the real-time coordinates, calculates the real-time distance between the storage automated guided vehicle 110 and each object. When the real-time distance is smaller than a set threshold, the edge server 120 outputs obstacle avoidance information to the storage automated guided vehicle 110 based on the 5G communication network. After receiving the obstacle avoidance information, the storage automated guided vehicle 110 takes obstacle avoidance measures according to the obstacle avoidance information. The camera 102 may be, but is not limited to, any of various image capturing devices, such as a high-definition camera, a high-definition vehicle-mounted camera, a Charge Coupled Device (CCD) camera, a scanner, or a mobile phone or tablet computer with a photographing function; the storage automated guided vehicle 110 may be, but is not limited to, any of various unmanned devices, such as an unmanned vehicle or an unmanned aerial vehicle; and the edge server 120 may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, an obstacle avoidance method for a storage automated guided vehicle is provided, which is described by taking the method as an example applied to the edge server 120 in fig. 1, and includes the following steps:
and step S202, acquiring real-time images acquired by all cameras arranged on the storage automatic guided vehicle based on a 5G communication network.
The fifth-generation mobile communication technology (5G) is the latest generation of cellular mobile communication technology, offering high data transmission rates, a stable network, low latency, connectivity for automated intelligent devices, and strong industrial applicability. Therefore, in the embodiments of the present application, all information transmission uses a 5G communication network. An Automated Guided Vehicle (AGV) is an unmanned device with a high degree of automation and intelligence that can automate the entire process of loading, unloading, and carrying goods and materials in a warehouse.
In one embodiment, a plurality of cameras are mounted on the storage automated guided vehicle to capture real-time images of the environment in which it operates. To cover essentially the entire surroundings of the vehicle and ensure safety in both the forward and backward directions, at least four cameras are used. The vehicle mainly travels forward, turns left, turns right, and reverses, so the cameras may be located directly in front of, at the left front of, at the right front of, and directly behind the vehicle. The left-front and right-front cameras are each angled 60° from the vehicle's forward direction, the fields of view of the front, left-front, and right-front cameras overlap pairwise by about 45°, and the horizontal field of view of each camera is 105°. This camera arrangement therefore covers a range of 330° around the vehicle, leaving only a small blind area.
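As a quick consistency check, a sketch of the coverage arithmetic implied by these figures (four 105° cameras, with the front camera's field of view overlapping each of the left-front and right-front cameras' by 45°):

```latex
\[
\underbrace{4 \times 105^{\circ}}_{\text{four cameras}}
\;-\;
\underbrace{2 \times 45^{\circ}}_{\text{front/left-front and front/right-front overlaps}}
\;=\; 330^{\circ}
\]
```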
Specifically, after the cameras arranged on the storage automatic guided vehicle collect real-time images of the environment where the vehicle is located, the real-time images collected by the cameras are acquired based on a 5G communication network.
Step S204, operating a preset object detection model, extracting the characteristics of each object in the real-time image, and classifying each object based on the characteristics of each object.
The object detection model is a detection model trained in advance using a machine learning algorithm, and can identify and classify each object according to its features. Deep learning, the machine learning approach adopted here, forms more abstract high-level attribute classes or features by combining low-level features into distributed feature representations of the data; its motivation is to build neural networks that simulate the human brain for analysis and learning, interpreting data such as images, sounds, and text by mimicking the brain's mechanisms.
In one embodiment, the object detection model includes an object detection identification module. Each object in the real-time image of the environment where the storage automated guided vehicle is located can be an operator, goods or a vehicle. Specifically, a preset object detection model is operated, and the object detection and identification module extracts the features of each object in the real-time image and identifies the type of each object in the real-time image according to the features of each object.
In one embodiment, when a preset object detection model is operated to classify objects in a real-time image, such as operators, goods, vehicles and the like, in order to remove noise of the object detection model and ensure the accuracy of object detection and avoid false detection and missed detection, objects with object identification scores larger than a preset score are screened according to a prediction empirical value of the object detection model, and the objects larger than the preset score are classified again. Wherein the predetermined score may be set to 80 points.
In one embodiment, the object detection model further comprises a human body recognition detection module, which comprises a skeleton detection module and a face detection module. Specifically, the skeleton detection module can perform skeleton recognition on an image containing an operator and extract the skeleton nodes of the human body, which may include 25 skeleton nodes such as the head, arms, and ankles. The face detection module can perform face segmentation and recognition on the image containing the operator to judge whether the operator is authorized to enter the warehouse, and the face detection module can be switched on and off independently.
And S206, selecting positioning points of the objects according to the types of the objects, and calculating real-time coordinates of the objects in a vehicle coordinate system of the storage automatic guided vehicle according to preset camera calibration parameters and a plane equation of the ground in the camera coordinate system of each camera.
In one embodiment, the anchor point of each object is selected according to the type of each object. The real-time image coordinate system takes the upper left corner as the origin of coordinates, the ankle of the operator is taken as a positioning point for the operator, and the middle lower part of the vehicle or the goods is taken as a positioning point for the vehicle or the goods. And obtaining the pixel coordinates of the positioning points in the image coordinate system of the real-time image.
In one embodiment, the camera calibration parameters include intrinsic parameters and extrinsic parameters of each camera, the intrinsic parameters include an intrinsic matrix and distortion parameters of each camera, and the extrinsic parameters include a rotation matrix and a translation matrix of each camera. The internal parameters of the camera are parameters related to the characteristics of the camera itself, such as the focal length, aperture and pixel size of the camera. Extrinsic parameters of a camera are parameters in a world coordinate system that are used to determine the position and orientation of the camera in three-dimensional space, such as the position, rotational direction, etc. of the camera. In order to determine the relationship between the geometric position of a point on the surface of an object in three-dimensional space and the corresponding point in the image, a geometric model imaged by a camera must be established through calculation and geometric model parameters are solved, which can also be called as calibration of the camera.
In one embodiment, since a plurality of cameras are arranged on the storage automatic guided vehicle, a plane equation of the ground under a camera coordinate system of each camera needs to be calculated and obtained. And converting the pixel coordinates of the positioning point of each object in the image coordinate system of the real-time image into the camera coordinates of the positioning point under the camera coordinate system of each camera by combining the camera calibration parameters, and further converting the camera coordinates of the positioning point into the real-time coordinates under the vehicle coordinate system of the storage automatic guided vehicle.
And S208, calculating the real-time distance between the storage automatic guided vehicle and each object based on the real-time coordinates, and outputting obstacle avoidance information to the storage automatic guided vehicle based on the 5G communication network when the real-time distance is smaller than a set threshold value.
In one embodiment, the accurate position of each object is calculated from its real-time coordinates in the vehicle coordinate system of the storage automated guided vehicle, giving the actual distances between the vehicle and the operators and goods. When a real-time distance is smaller than the set threshold, the type of the obstacle and its distance are recorded, and obstacle avoidance information is output to the storage automated guided vehicle. The threshold may be set, for example, to 20 m (meters).
In one embodiment, after receiving the obstacle avoidance information, the storage automated guided vehicle combines it with data from its on-board safety sensors to obtain comprehensive perception information and performs an obstacle avoidance action.
In one embodiment, the obstacle avoidance information further includes alarm information. Specifically, when the real-time distance between an obstacle and the vehicle is detected to be smaller than a first distance, alarm information is sent to the storage automated guided vehicle to raise an alarm. When the distance between an operator and the storage automated guided vehicle is smaller than a second distance, the human body recognition detection module in the object detection model performs face detection on the operator entering the warehouse; if the operator is not authorized to enter the warehouse, alarm information is sent to the vehicle and monitoring personnel are notified. The second distance is smaller than the first distance; for example, the first distance may be set to 7 m and the second distance to 3 m.
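A minimal sketch of this tiered decision logic (hedged: the function and field names are illustrative assumptions, and 20 m / 7 m / 3 m are just the example thresholds given in these embodiments):

```python
from dataclasses import dataclass

# Example thresholds from this embodiment (assumed configurable in practice).
OBSTACLE_THRESHOLD_M = 20.0   # below this: output obstacle avoidance information
ALARM_DISTANCE_M = 7.0        # first distance: send alarm information
FACE_CHECK_DISTANCE_M = 3.0   # second distance: face-check nearby operators

@dataclass
class DetectedObject:
    kind: str                # "operator", "vehicle" or "cargo"
    distance_m: float        # real-time distance to the vehicle
    authorized: bool = True  # filled in by the face detection module

def obstacle_messages(obj: DetectedObject) -> list[str]:
    """Messages the edge server would send to the vehicle for one detection."""
    msgs = []
    if obj.distance_m < OBSTACLE_THRESHOLD_M:
        msgs.append(f"avoid:{obj.kind}:{obj.distance_m:.1f}m")
    if obj.distance_m < ALARM_DISTANCE_M:
        msgs.append("alarm")
    if (obj.kind == "operator" and obj.distance_m < FACE_CHECK_DISTANCE_M
            and not obj.authorized):
        msgs.append("alarm:unauthorized-operator;notify-monitoring")
    return msgs

print(obstacle_messages(DetectedObject("operator", 2.5, authorized=False)))
```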
In the obstacle avoidance method for the storage automated guided vehicle, the edge server acquires real-time images collected by the cameras arranged on the vehicle based on a 5G communication network; runs a preset object detection model, extracts the features of each object in the real-time image, and classifies each object based on those features; selects positioning points for each object according to its type, and calculates real-time coordinates of each object in the vehicle coordinate system of the storage automated guided vehicle according to preset camera calibration parameters and the plane equation of the ground in the camera coordinate system of each camera; and calculates the real-time distance between the vehicle and each object based on the real-time coordinates, outputting obstacle avoidance information to the vehicle over the 5G communication network when the real-time distance is smaller than a set threshold. After receiving the obstacle avoidance information, the storage automated guided vehicle takes obstacle avoidance measures accordingly. The method provided by the embodiments of the present application can effectively improve the efficiency with which the storage automated guided vehicle recognizes obstacles and takes obstacle avoidance measures, thereby improving the intelligence of industrial products.
In one embodiment, after acquiring the real-time images acquired by the cameras disposed on the storage automated guided vehicle based on the 5G communication network in step S202, the method includes:
and synchronizing and calibrating the real-time images acquired by the cameras. The synchronization is to synchronize the real-time images acquired by the cameras according to the acquisition time of the real-time images, and the calibration is to calibrate the real-time images acquired by the cameras according to preset camera calibration parameters.
Due to factors such as lens distortion, manufacturing precision and assembly process of each camera, real-time images acquired by each camera may be deformed or the acquisition time of the real-time images acquired by each camera is not synchronous, so that the accuracy of identifying each object in the real-time images and calculating the real-time distance is affected, and therefore the real-time images acquired by each camera need to be synchronized and calibrated.
In one embodiment, the real-time images acquired by the cameras are synchronized and calibrated, and the synchronization process and the calibration process are not in sequence. The real-time images can be synchronized first and then calibrated, or the real-time images can be calibrated first and then synchronized.
In one embodiment, the real-time images acquired by the cameras are synchronized according to the acquisition time of the real-time images. In the real-time image acquisition stage, each camera acquires a real-time image and simultaneously stores the acquisition time of the real-time image. And searching the real-time images acquired by the cameras at the closest time according to the acquisition time of the real-time images, and synchronizing the real-time images acquired by the cameras.
In particular, the acquisition time of the real-time image may be recorded using a timestamp. And acquiring real-time images acquired by each camera, searching the real-time images acquired by other cameras at the closest time according to the timestamp of the real-time image acquired by one camera, and synchronizing the real-time images acquired by each camera.
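A minimal sketch of this nearest-timestamp matching, assuming each camera's frames arrive as (timestamp, image) pairs sorted by time (the data layout and function names are assumptions for illustration):

```python
from bisect import bisect_left

def nearest_frame(frames, t):
    """frames: list of (timestamp, image) sorted by timestamp.
    Return the frame whose timestamp is closest to t."""
    ts = [f[0] for f in frames]
    i = bisect_left(ts, t)
    candidates = frames[max(0, i - 1):i + 1]  # at most the two neighbours of t
    return min(candidates, key=lambda f: abs(f[0] - t))

def synchronize(reference_frames, other_cameras):
    """For each frame of one reference camera, pick the closest-in-time frame
    from every other camera, yielding one synchronized frame set per instant."""
    for t, img in reference_frames:
        yield t, img, [nearest_frame(frames, t) for frames in other_cameras]
```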
In one embodiment, the real-time images acquired by each camera are calibrated according to preset camera calibration parameters. Before calibrating the synchronized real-time image according to preset camera calibration parameters, the method comprises the following steps:
and obtaining camera calibration parameters of each camera, wherein the camera calibration parameters comprise internal parameters and external parameters of each camera. The internal parameters comprise an internal parameter matrix and distortion parameters of each camera, the internal parameter matrix is mainly used for calibrating camera distortion, calculating a ground equation and calculating the distance between each object and each camera, and the distortion parameters are mainly used for calibrating camera distortion. The extrinsic parameters include a rotation matrix and a translation matrix of each camera, which are mainly used to calculate the distance between each object and the vehicle.
In one embodiment, as shown in fig. 3, obtaining the camera calibration parameters of each camera includes:
step S302, extracting each angular point in the real-time image by using a chess checkerboard method, calculating pixel coordinates of each angular point, and obtaining an internal reference matrix and distortion parameters of each camera.
In one embodiment, the internal parameters of each camera are obtained, each black and white angular point in the real-time image is extracted by using a chess checkerboard method, and the black and white blocks of the chess checkerboard have the same and known size, so that the internal reference matrix and the distortion parameters of each camera can be obtained by calculating the pixel coordinates of each black and white angular point.
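A minimal OpenCV sketch of this checkerboard calibration (the board dimensions, square size, and image paths are assumptions; `cv2.calibrateCamera` returns the internal reference matrix and the distortion parameters described here):

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)      # inner corners per row and column (assumed board layout)
SQUARE_MM = 25.0    # identical, known size of the black and white squares

# 3D corner positions in the board's own plane (z = 0), in millimetres.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.png"):       # assumed calibration image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the internal reference matrix; dist holds the distortion parameters.
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("K =\n", K, "\ndistortion:", dist.ravel())
```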
Step S304, acquiring position data and attitude data of each camera, and calculating the rotation matrix and translation matrix of each camera, wherein the position data comprise the height of each camera and the actual distance between the ground position corresponding to the center of the real-time image and the projection position of each camera on the ground, and the attitude data comprise the pitch angle, yaw angle, and roll angle of each camera.
In one embodiment, extrinsic parameters of each camera are acquired, and a rotation matrix and a translation matrix of each camera are calculated by acquiring position data and pose data of each camera.
The position data comprise the height of each camera and the actual distance between the ground position corresponding to the center of the real-time image and the projection of each camera on the ground, i.e. the coordinates (X, Y, Z) of each camera in the vehicle coordinate system of the storage automated guided vehicle, where positive X is the forward direction of the vehicle body, positive Y points to the left of the vehicle body, and positive Z is the height direction; these can be obtained by direct measurement.
The attitude data comprise the pitch angle, yaw angle, and roll angle of each camera. The pitch angle can be calculated by measuring the distance from the ground point corresponding to the image center to the projection of each camera on the ground, together with the height of each camera above the ground. The yaw angle can be calculated by measuring the angle between the X axis of the vehicle coordinate system of the storage automated guided vehicle and the line from each camera's projection on the ground to the ground point corresponding to the image center. Since each camera is mounted on a plane parallel to the ground, the roll angle is negligible.
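A small sketch of the angle computations these measurements permit (variable names are illustrative; the roll angle is taken as zero, as stated):

```python
import math

def camera_pitch(height_m: float, ground_dist_m: float) -> float:
    """Pitch below horizontal for a camera at height h whose image centre
    hits the ground at horizontal distance d from its ground projection."""
    return math.atan2(height_m, ground_dist_m)

def camera_yaw(dx_m: float, dy_m: float) -> float:
    """Yaw: angle between the vehicle X axis (forward) and the ray from the
    camera's ground projection to the ground point of the image centre,
    in the vehicle frame (X forward, Y left)."""
    return math.atan2(dy_m, dx_m)

# Example: camera 1.2 m up, image centre hits the ground 2.0 m straight ahead.
print(math.degrees(camera_pitch(1.2, 2.0)))  # ~31.0 degrees downward pitch
print(math.degrees(camera_yaw(2.0, 0.0)))    # 0.0 degrees yaw
```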
In one embodiment, after synchronizing and calibrating the real-time images acquired by the cameras, the method includes: and calculating a plane equation of the ground under the camera coordinate system of each camera.
In one embodiment, as shown in FIG. 4, calculating the plane equation of the ground in the camera coordinate system of each camera includes:
step S402, obtaining each feature point on the ground, obtaining the actual distance between every two feature points, and calculating the coordinates of each feature point in the camera coordinate system of each camera according to the actual distance.
And S404, calculating a plane equation of the ground under the camera coordinate system of each camera according to the coordinates of each feature point under the camera coordinate system of each camera.
In one embodiment, a standard feature point P1 on the ground is acquired, and its coordinates (u, v) in the image coordinate system of the real-time image are converted, according to the internal reference matrix K of each camera, into the coordinate proportional relations x/z and y/z in the camera coordinate system of each camera:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} x/z \\ y/z \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix},$$

where $(u, v, 1)^T$ is the homogeneous coordinate representation of the coordinates (u, v) of the standard feature point P1 in the image coordinate system of the real-time image, K is the internal reference matrix of each camera, $f_x$ and $f_y$ denote the focal lengths, $u_0$ and $v_0$ represent the image principal point coordinates of the real-time image calibrated according to the camera calibration parameters, and $(x/z, y/z, 1)^T$ is the homogeneous coordinate representation of the coordinate proportional relations of P1 in the camera coordinate system of each camera, so that $(x/z, y/z, 1)^T = K^{-1}(u, v, 1)^T$.

The coordinates (x1, y1, z1) of the standard feature point P1 in the camera coordinate system of each camera can therefore be expressed as

$$x_1 = k_{11}\, z_1, \qquad y_1 = k_{12}\, z_1,$$

where $k_{11}$ denotes the abscissa proportion x/z of P1 in the camera coordinate system of each camera, $k_{12}$ denotes the ordinate proportion y/z, and $k_{11}$ and $k_{12}$ are constants.

Similarly, the coordinates (x2, y2, z2) of the second feature point P2 in the camera coordinate system of each camera satisfy

$$x_2 = k_{21}\, z_2, \qquad y_2 = k_{22}\, z_2,$$

where $k_{21}$ and $k_{22}$ are the constant abscissa and ordinate proportions of P2.

The actual distance between every two feature points is acquired; denoting the actual distance between the standard feature point P1 and the feature point P2 by $D_{12}$, and so on, the system of equations is

$$(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2 = D_{12}^2, \quad \ldots$$

from which the actual coordinates of the feature points P1, P2 ... Pn in the camera coordinate system of each camera can be calculated.

By multi-point fitting, the plane equation of the ground in the camera coordinate system of each camera can be calculated and obtained, expressed as:

$$Ax + By + Cz + D = 0,$$

where A, B, C, and D are constants.
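A minimal numpy sketch of the multi-point fitting step, assuming the feature points' camera-frame coordinates have already been recovered as above (the SVD least-squares fit is one standard way to do this, not necessarily the patent's exact procedure):

```python
import numpy as np

def fit_ground_plane(points_cam: np.ndarray) -> np.ndarray:
    """Fit Ax + By + Cz + D = 0 to an N x 3 array of points expressed in one
    camera's coordinate system; returns (A, B, C, D) with a unit normal."""
    centroid = points_cam.mean(axis=0)
    # The right singular vector of the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(points_cam - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return np.append(normal, d)

# Illustrative feature points lying on a plane 1.5 m below the camera.
pts = np.array([[0.0, 1.5, 2.0], [1.0, 1.5, 3.0],
                [-1.0, 1.5, 4.0], [0.5, 1.5, 5.0]])
print(fit_ground_plane(pts))  # ~ (0, 1, 0, -1.5), up to an overall sign
```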
In one embodiment, as shown in fig. 5, before acquiring real-time images acquired by each camera disposed on the warehousing automatic guided vehicle based on the 5G communication network in step S202, the method includes:
step S502, the 5G communication network is tested, and the test includes whether the 5G communication network is connected and whether the transmission rate of the 5G communication network is greater than a preset value.
And step S504, when the 5G communication network can be connected and the transmission rate of the 5G communication network is greater than a preset value, the step of acquiring real-time images acquired by all cameras arranged on the storage automatic guided vehicle is carried out.
In one embodiment, the warehousing automated guided vehicle and the edge server are connected via a 5G communication network. Specifically, a vehicle-mounted router is mounted on the storage automated guided vehicle. The vehicle-mounted router is a hardware device connected to the network and enables information transmission. Each camera is connected to the vehicle-mounted router, and the vehicle-mounted router is connected to the vehicle-mounted industrial personal computer of the storage automated guided vehicle. The vehicle-mounted industrial personal computer is an industrial control computer on the warehousing automated guided vehicle with the functions of monitoring and controlling production processes, electromechanical equipment, and process equipment. A 5G industrial module is installed on the vehicle-mounted industrial personal computer, and the 5G industrial module is connected to the antenna through a feeder line. The shorter the feeder line, the stronger the signal; but a feeder that is too short is difficult to install, one that is too long weakens the signal, and a feeder longer than 0.5 m is hardly usable. Therefore, in the embodiment of the present application, the feeder length ranges from 0.1 m to 0.3 m. Networking software is used to connect the 5G industrial module into an independently deployed 5G communication network. Under a Windows system, the networking software can be the 5G industrial module's own networking software, and for specific services to be networked, an AT command set can be sent to the 5G industrial module over a serial port. The AT (Attention) command set consists of commands sent from terminal equipment or data terminal equipment to a terminal adapter or data circuit-terminating equipment to implement network interaction.
In one embodiment, the 5G communication network is tested after the warehousing automated guided vehicle and the edge server are connected to it. Specifically, under a Windows system, a PING tool may be used to test whether the 5G communication network between the warehousing automated guided vehicle and the edge server is reachable. PING (Packet Internet Groper) is a program for testing network connectivity; it mainly sends request messages to a specific target host to test whether the host is reachable and to learn its status. The warehousing automated guided vehicle and the edge server communicate through standalone (SA) or non-standalone (NSA) networking; their IP addresses can be obtained directly during testing, and under a Windows system, PINGing the IP address directly in the command-line program cmd.exe verifies whether the 5G communication network is connected.
In one embodiment, the stability of the 5G communication network between the warehousing automated guided vehicle and the edge server is also tested. Specifically, a third-party network performance tool, such as the TamoSoft Throughput Test, may be used to test the stability of the 5G communication network.
In one embodiment, testing the 5G communication network between the warehousing automated guided vehicle and the edge server further comprises testing the transmission rate of the 5G communication network. The transmission rate comprises the uplink rate at which each camera transmits information to the edge server and the downlink rate at which the edge server transmits information to the storage automated guided vehicle; when the uplink and downlink rates are both greater than the preset transmission rate, the transmission rate of the 5G communication network is judged to meet the requirement.
Specifically, under a Windows system, the cameras, the vehicle-mounted router, and the vehicle-mounted industrial personal computer on the storage automated guided vehicle are placed in the same network segment; a port mapping tool, such as the port mapping software PortTunnel, can be used to map the cameras' ports, and a PING tool is used to detect whether the 5G communication network between the storage automated guided vehicle and the edge server is reachable and to test its uplink and downlink transmission rates. Here, "the same network segment" means that ANDing each device's IP address with the subnet mask yields the same network address. For example, the IP address of each camera lies in the set of addresses allocated by the vehicle-mounted router, which can be 192.168.1.2-192.168.1.254. The purpose of port mapping is to let the external network access the internal network so that the two networks can interact.
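A minimal sketch of such a pre-flight check (hedged: the address, rate thresholds, and the Windows-style `ping -n` invocation are illustrative assumptions; the embodiment uses PING and a dedicated throughput tester for the real measurements):

```python
import subprocess

EDGE_SERVER_IP = "192.168.1.100"  # assumed edge-server address
MIN_UPLINK_MBPS = 50.0            # assumed preset uplink rate
MIN_DOWNLINK_MBPS = 100.0         # assumed preset downlink rate

def network_reachable(ip: str) -> bool:
    """PING the edge server (Windows 'ping -n', as in this embodiment)."""
    result = subprocess.run(["ping", "-n", "4", ip], capture_output=True)
    return result.returncode == 0

def rates_ok(uplink_mbps: float, downlink_mbps: float) -> bool:
    """Both measured rates (e.g. from a throughput tool) must beat the presets."""
    return uplink_mbps >= MIN_UPLINK_MBPS and downlink_mbps >= MIN_DOWNLINK_MBPS

if network_reachable(EDGE_SERVER_IP) and rates_ok(80.0, 300.0):
    print("5G link OK - proceed to acquire camera images")
```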
In one embodiment, after calculating the plane equation of the ground in the camera coordinate system of each camera, the method comprises: and training and obtaining an object detection model.
In one embodiment, as shown in fig. 6, training and obtaining the object detection model includes:
step S602, an image of an operator authorized to enter the warehouse is obtained.
In one embodiment, the image of the operator authorized to enter the warehouse may include a face image and employee information, and after the image of the operator authorized to enter the warehouse is obtained, a database may be created, and the face image and the employee information of the authorized operator may be entered into the database.
Step S604, acquiring a warehouse image and the type of each object in the warehouse image as labeled by the user.
In one embodiment, a warehouse environment image is obtained, and the types of objects in the warehouse image labeled by a user are obtained, wherein the types can comprise operators, goods, vehicles and the like.
And step S606, performing feature extraction on each object in the operator image and the warehouse image by using the full convolution layer of the convolutional neural network, training and obtaining an object detection model.
In one embodiment, the acquired image data requires data cleaning, sample equalization, and data labeling before the object detection model is trained. Data cleaning screens out erroneous and duplicate data, which helps improve data quality. Because a model trained on unbalanced samples generalizes poorly and overfits easily, sample equalization is needed before training the object detection model, keeping the number of samples of different types consistent so that the trained model has high recognition accuracy and trains quickly. Data labeling means manually drawing a box around each object in the image and marking its features; the labeled objects serve as basic material for machine learning, so that the machine can learn repeatedly to train and improve the accuracy of the object detection model. The higher the quality of the data labeling, the higher the accuracy of the model's predictions; image annotation software such as LabelImg can be used to label each object in the real-time image.
In one embodiment, a convolutional neural network is used to perform feature extraction on each object in the operator images and warehouse images, and the object detection model is trained. A Convolutional Neural Network (CNN) is a machine learning model: a feedforward neural network with a deep structure that involves convolution operations. By extracting the features of each object in the images through multiple convolution layers and an iterative algorithm, the training set can be continuously optimized, finally yielding a trained object detection model.
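A minimal sketch of a detector of this kind built on torchvision's Faster R-CNN (an illustrative assumption: the patent specifies only a convolutional feature extractor trained on labeled operator and warehouse images, not this particular architecture or class list):

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

CLASSES = ["background", "operator", "cargo", "vehicle"]  # assumed label set

# Pretrained convolutional backbone; replace the detection head for our classes.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(CLASSES))

model.eval()
with torch.no_grad():
    # One dummy 3x480x640 frame; a real system feeds calibrated camera images.
    detections = model([torch.rand(3, 480, 640)])[0]

# Keep only confident detections, mirroring the score filtering described above.
keep = detections["scores"] > 0.8
print(detections["labels"][keep], detections["scores"][keep])
```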
In one embodiment, as shown in fig. 7, in step S206, selecting a positioning point of each object according to the type of each object, and calculating real-time coordinates of each object in a vehicle coordinate system of the storage automated guided vehicle according to preset camera calibration parameters and a plane equation of the ground in the camera coordinate system of each camera, including:
step S702, selecting positioning points of the objects according to the types of the objects, wherein when the objects are operators, ankles of the operators are used as the positioning points, and when the objects are vehicles or cargoes, middle-lower parts of the vehicles or the cargoes are used as the positioning points.
In one embodiment, the anchor point of each object is selected according to the type of each object. The real-time image coordinate system takes the upper left corner as the origin of coordinates, the ankle of the operator is taken as a positioning point for the operator, and the middle lower part of the vehicle or the goods is taken as a positioning point for the vehicle or the goods.
Step S704, based on the positioning point of each object, acquiring the pixel coordinates of the positioning point of each object in the image coordinate system of the real-time image.
In one embodiment, the pixel coordinates of the anchor point of each object under the image coordinate system of the real-time image are obtained and are represented as (u, v).
Step S706, converting the pixel coordinates of the positioning point into the camera coordinates of the positioning point in the camera coordinate system of each camera according to the preset internal parameters of each camera and the plane equation of the ground in the camera coordinate system of each camera.
In one embodiment, according to the internal reference matrix K of each camera, the pixel coordinates (u, v) of the positioning point of each object in the image coordinate system of the real-time image are converted into the coordinate proportional relations x/z and y/z in the camera coordinate system of each camera:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} x/z \\ y/z \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix},$$

where $(u, v, 1)^T$ is the homogeneous coordinate representation of the pixel coordinates (u, v) of the positioning point of each object in the image coordinate system of the real-time image, K is the internal reference matrix of each camera, $f_x$ and $f_y$ denote the focal lengths, $u_0$ and $v_0$ represent the image principal point coordinates of the real-time image calibrated according to the camera calibration parameters, and $(x/z, y/z, 1)^T$ is the homogeneous coordinate representation of the coordinate proportional relations of the positioning point of each object in the camera coordinate system of each camera.

Combining the plane equation of the ground in the camera coordinate system of each camera, $Ax + By + Cz + D = 0$, with the coordinate proportional relations x/z and y/z of the positioning point of each object in the camera coordinate system of each camera, the camera coordinates (Xc, Yc, Zc) of the positioning point in the camera coordinate system of each camera are obtained:

$$Z_c = \frac{-D}{A\,(x/z) + B\,(y/z) + C}, \qquad X_c = (x/z)\,Z_c, \qquad Y_c = (y/z)\,Z_c.$$
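A minimal numpy sketch of this back-projection onto the ground plane, following the formulas above (the intrinsic values and plane coefficients are illustrative):

```python
import numpy as np

def pixel_to_camera(u, v, K, plane):
    """Intersect the viewing ray of pixel (u, v) with the ground plane.
    K: 3x3 internal reference matrix; plane: (A, B, C, D) for Ax+By+Cz+D = 0.
    Returns the camera coordinates (Xc, Yc, Zc) of the positioning point."""
    kx, ky, _ = np.linalg.inv(K) @ np.array([u, v, 1.0])  # x/z and y/z ratios
    A, B, C, D = plane
    zc = -D / (A * kx + B * ky + C)  # substitute x = kx*z, y = ky*z into plane
    return np.array([kx * zc, ky * zc, zc])

K = np.array([[800.0, 0.0, 320.0],   # assumed intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
# Ground ~1.5 m below the camera (camera Y axis pointing down toward it).
print(pixel_to_camera(320, 400, K, plane=(0.0, 1.0, 0.0, -1.5)))
```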
Step S708, converting the camera coordinates of the positioning point into the vehicle coordinates of the positioning point in the vehicle coordinate system of the storage automated guided vehicle according to the preset external parameters of each camera, obtaining the real-time coordinates of each object in the vehicle coordinate system of the storage automated guided vehicle.
In one embodiment, the positioning-point camera coordinates (Xc, Yc, Zc) are converted into the positioning-point vehicle coordinates (Xo, Yo, Zo) in the vehicle coordinate system of the storage automated guided vehicle according to the rotation matrix and translation matrix of each camera, expressed as:

$$\begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ \mathbf{0}^{T} & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}$$

where $(X_c, Y_c, Z_c, 1)^T$ is the homogeneous representation of the positioning-point camera coordinates, $R$ is the rotation matrix of each camera, $T$ is the translation matrix of each camera, and $(X_o, Y_o, Z_o, 1)^T$ is the homogeneous representation of the positioning-point vehicle coordinates.
In one embodiment, the real-time distance D between the storage automated guided vehicle and each object is calculated from the real-time coordinates (Xo, Yo, Zo) of each object in the vehicle coordinate system of the storage automated guided vehicle, according to the distance formula:

$$D = \sqrt{X_o^{\,2} + Y_o^{\,2}}$$

where D is the real-time distance between the storage automated guided vehicle and each object, and Xo and Yo are the abscissa and ordinate of the real-time coordinates of each object in the vehicle coordinate system of the storage automated guided vehicle.
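Continuing the illustrative sketch above (function names remain hypothetical), the extrinsic conversion and the distance formula amount to:

```python
import numpy as np

def camera_to_vehicle(p_cam, R, T):
    """Map positioning-point camera coordinates (Xc, Yc, Zc) into the
    vehicle coordinate system, assuming R (3x3) and T (3,) are the
    calibrated rotation and translation of the observing camera, as in
    the homogeneous transform above."""
    return R @ p_cam + T

def real_time_distance(p_veh):
    """Real-time distance D = sqrt(Xo^2 + Yo^2), using the in-plane
    offsets of the object in the vehicle coordinate system."""
    return float(np.hypot(p_veh[0], p_veh[1]))

# Chaining the sketches: pixel -> camera -> vehicle -> distance.
# D = real_time_distance(camera_to_vehicle(pixel_to_camera(u, v, K, plane), R, T))
```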
In one embodiment, after the obstacle avoidance information is output to the storage automated guided vehicle, the method further includes: storing the calculated real-time coordinates of each object in the vehicle coordinate system of the storage automated guided vehicle, and combining them with the classification and positioning results of the real-time images to generate a radar map of the distance between each object and the storage automated guided vehicle.
In one embodiment, after generating the radar map of the distance between each object and the storage automated guided vehicle, the method further includes: storing the acquired real-time images and the radar map as video, so that monitoring personnel can review the changes in the vehicle's surroundings and its running condition over a given time period.
To make the purpose, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and one specific embodiment. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it.
In one specific embodiment, the storage automated guided vehicle obstacle avoidance system is constructed as a whole through the following steps:
Step 1, as shown in fig. 8(a), four high-definition vehicle-mounted cameras 810 are mounted on the body of the storage automated guided vehicle 820, distributed at the front, left front, right front and rear of the vehicle body, with the fields of view of the front, left-front and right-front cameras overlapping by about 45°. As shown in fig. 8(b-c), the high-definition cameras 810 at the left front and right front of the vehicle body are each angled at 60° relative to the advancing direction of the storage automated guided vehicle 820, and the horizontal viewing angle of each high-definition vehicle-mounted camera 810 is 105°. The cameras therefore cover roughly a 330° range around the vehicle body of the storage automated guided vehicle.
Step 2, as shown in fig. 9, each high-definition vehicle-mounted camera 810 is connected to a vehicle-mounted router, which in turn is connected to the vehicle-mounted industrial personal computer of the storage automated guided vehicle. A 5G industrial module is installed on the vehicle-mounted industrial personal computer and connected to an antenna through a 0.1 m feeder. The networking software of the 5G industrial module is used to connect the module to an independently deployed 5G communication network. After the edge server is connected to the network, a PING tool is used to check whether the 5G communication network between the storage automated guided vehicle and the edge server is reachable, and whether the downlink transmission rate from the edge server to the storage automated guided vehicle meets the requirement. When the 5G communication network is reachable and the downlink rate meets the requirement, the method proceeds to step 3.
Step 3, each high-definition vehicle-mounted camera, the vehicle-mounted router and the vehicle-mounted industrial personal computer are arranged in the same network segment. Port mapping is performed for each high-definition vehicle-mounted camera using port-mapping software; a PING tool is then used to check whether the 5G communication network between the edge server and each camera is reachable, and whether the uplink transmission rate from each camera to the edge server meets the requirement. When the 5G communication network is reachable and the uplink rate meets the requirement, the method proceeds to step 4.
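As a rough, non-authoritative illustration of the gating in steps 2 and 3, a helper along the following lines could combine the PING check with a rate threshold; the function name is hypothetical, and the transmission rate is assumed to come from a separate speed-test tool:

```python
import subprocess

def network_ready(host, min_rate_mbps, measured_rate_mbps):
    """Return True when a single PING to `host` succeeds (Linux ping
    syntax) and the separately measured transmission rate clears the
    required threshold."""
    reachable = subprocess.run(
        ["ping", "-c", "1", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0
    return reachable and measured_rate_mbps >= min_rate_mbps
```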
Step 4, the internal parameters of each high-definition vehicle-mounted camera are calibrated using the checkerboard method; the internal parameters include the internal reference matrix and the distortion parameters of each camera, and the calibration result is saved.
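Checkerboard calibration of this kind is commonly performed with OpenCV; the following minimal sketch (the board dimensions and square size are example values, not those of this embodiment) returns the internal reference matrix and distortion parameters:

```python
import cv2
import numpy as np

def calibrate_intrinsics(images, board=(9, 6), square_m=0.025):
    """Estimate K and distortion coefficients from checkerboard views.
    `board` is the count of inner corners (cols, rows); `square_m` is
    the edge length of one square in metres."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_m
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    return K, dist
```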
Step 5, the external parameters of each high-definition vehicle-mounted camera are calibrated and the calibration result is saved. Specifically, the coordinates (X, Y, Z) of each camera in the vehicle coordinate system of the storage automated guided vehicle can be measured directly, where the positive X direction is the advancing direction of the vehicle body, the positive Y direction is the left-turn direction of the vehicle body, and the positive Z direction is the height direction. The vertical field of view of each camera is 57°, the mounting height is about 1.85 m, and in this embodiment each camera is pitched downward by about 22°, ensuring a monitoring distance of 1.5-50 m. The pitch angle can be calculated from the distance between the ground position corresponding to the center of the real-time image and the camera's projection point on the ground, together with the camera's height above the ground. The yaw angle is calculated from the angle between the line joining that ground position to the camera's projection point and the X axis of the vehicle coordinate system of the storage automated guided vehicle. Since each camera is mounted on a plane parallel to the ground, the roll angle is negligible.
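The pitch-angle computation described in step 5 is plain trigonometry; a minimal sketch with a hypothetical function name:

```python
import math

def pitch_deg(cam_height_m, ground_dist_m):
    """Downward pitch of the camera, from its height above the ground and
    the distance between the ground point seen at the image center and
    the camera's vertical projection on the ground (tan(pitch) = h/d)."""
    return math.degrees(math.atan2(cam_height_m, ground_dist_m))
```

With the values of this embodiment (height about 1.85 m, pitch about 22°), the image center falls roughly 1.85/tan 22° ≈ 4.6 m ahead of the camera's ground projection.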
Step 6, the plane equation of the ground in the camera coordinate system of each high-definition vehicle-mounted camera is calculated, and the result is saved.
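For illustration, the ground-plane equation of step 6 can be obtained as a least-squares fit to the measured feature points; a minimal numpy sketch (the function name is hypothetical):

```python
import numpy as np

def fit_ground_plane(points):
    """Best-fit plane A*x + B*y + C*z + D = 0 through ground feature
    points given in camera coordinates (N x 3 array, N >= 3)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the singular vector associated with the
    # smallest singular value of the centred point cloud.
    _, _, vt = np.linalg.svd(pts - centroid)
    A, B, C = vt[-1]
    D = -vt[-1] @ centroid
    return A, B, C, D
```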
Step 7, face images and employee information of operators authorized to enter the warehouse are collected, and data acquisition, data cleaning, sample balancing, data labeling, model training and model testing are carried out on the warehouse scene to train and obtain the object detection model.
This completes the overall construction of the storage automated guided vehicle obstacle avoidance system. The trained object detection model is deployed on the edge server, the vehicle-mounted industrial personal computer is started and connected to the 5G communication network, and the storage automated guided vehicle can begin working once the 5G communication network is stable.
In one embodiment, the interface of the edge server executing the storage automated guided vehicle obstacle avoidance method is shown in fig. 10, where (a) shows the real-time images acquired by each high-definition vehicle-mounted camera on the storage automated guided vehicle, and (b) shows the obtained real-time distance between each object in the real-time images and the storage automated guided vehicle. The edge server executes the obstacle avoidance method through the following steps:
Step 8, the real-time images acquired by each high-definition vehicle-mounted camera on the storage automated guided vehicle are obtained over the 5G communication network, synchronized according to their acquisition time, and undistorted according to each camera's calibrated internal reference matrix and distortion parameters. As shown in fig. 11, (a) is the real-time image towards the front of the vehicle, (b) towards the rear, (c) towards the left front, and (d) towards the right front. To keep the cameras' processing synchronized, the real-time images (a-d) collected by the cameras are stitched into a 2×2 tiled composite image.
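A minimal sketch of the undistortion and 2×2 tiling in step 8, assuming OpenCV and four equally sized frames (the function name is hypothetical):

```python
import cv2
import numpy as np

def undistort_and_tile(frames, intrinsics):
    """Undistort four synchronized frames (front, rear, left-front,
    right-front) with each camera's calibrated K and distortion
    parameters, then stitch them into one 2x2 composite so a single
    detector pass covers all four views."""
    und = [cv2.undistort(f, K, dist) for f, (K, dist) in zip(frames, intrinsics)]
    return np.vstack([np.hstack(und[:2]), np.hstack(und[2:])])
```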
Step 9, the trained object detection model is run to classify the objects in the real-time images, such as operators, cargo and vehicles. To suppress model noise, ensure detection accuracy and avoid false and missed detections, only objects whose identification score exceeds 80 are kept, a threshold set from the model's prediction experience. When the object detection model judges that an operator is present in the warehouse, the skeleton detection module is started automatically to further confirm the presence of the operator. When the skeleton detection module detects a human body, it extracts 25 skeleton nodes, such as the head, arms and ankles.
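The score screening in step 9 amounts to a one-line filter; a sketch assuming detections are (label, score, box) tuples with scores on the 0-100 scale used here:

```python
def screen_detections(detections, min_score=80):
    """Keep only detections whose identification score exceeds the
    empirical threshold, suppressing model noise and false detections."""
    return [d for d in detections if d[1] > min_score]
```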
Step 10, as shown in fig. 12, the pixel size of the real-time image is 1920×1080, and the upper-left corner of the image coordinate system of the real-time image is taken as the origin of coordinates. The operator's ankle is used as the positioning point for a human body, and the middle-lower part of a vehicle or cargo as the positioning point for a vehicle or cargo, giving the pixel coordinates of each positioning point. The positioning-point pixel coordinates are converted into positioning-point camera coordinates in the camera coordinate system of each high-definition vehicle-mounted camera, according to the internal parameters of each camera and the plane equation of the ground in that coordinate system, and then into positioning-point vehicle coordinates in the vehicle coordinate system of the storage automated guided vehicle through the external parameters of each camera. The exact locations of operators, cargo and vehicles in the warehouse are thus obtained.
Step 11, when the distance between an obstacle and the storage automated guided vehicle is less than 20 m, the type and distance of the obstacle are recorded and displayed synchronously on the edge server's runtime interface, and obstacle avoidance information is sent to the storage automated guided vehicle, which takes avoidance measures on receiving it. When the distance between the obstacle and the storage automated guided vehicle is less than 7 m, the detected obstacle is highlighted on the edge server's runtime interface and alarm information is sent to the storage automated guided vehicle. When the distance between an operator and the storage automated guided vehicle is less than 3 m, the face detection module performs face detection on the operator; as shown in fig. 13, if an unauthorized operator is found to have entered the warehouse, alarm information is sent to the storage automated guided vehicle and the operator's position is displayed on the edge server's runtime interface.
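The tiered thresholds of step 11 map onto a small decision function; a sketch returning only the most urgent tier, with hypothetical action labels:

```python
def obstacle_action(obj_type, distance_m):
    """Tiered response of this embodiment: face verification inside 3 m
    for operators, highlight and alarm inside 7 m, record and send
    obstacle avoidance information inside 20 m."""
    if obj_type == "operator" and distance_m < 3.0:
        return "face_detection_and_alarm"
    if distance_m < 7.0:
        return "highlight_and_alarm"
    if distance_m < 20.0:
        return "record_and_send_avoidance_info"
    return "no_action"
```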
Step 12, the 1920×1080 real-time images from each high-definition vehicle-mounted camera are scaled to specified sizes and displayed on the edge server's runtime interface, as shown in fig. 14: the front view (a) is scaled to 1240×635 (h), the left-front view (b) and right-front view (c) to 620×300 (e-f), and the rear view (d) to 416×220 (g), all displayed synchronously, together with the radar map (i), on the 1920×1080 interface. Meanwhile, the edge server stores the acquired real-time images in video format for monitoring personnel to review.
It should be understood that although the various steps in the flow charts of figs. 2-7 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 2-7 may comprise multiple sub-steps or stages, which need not be performed at the same moment or completed in sequence, and may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 15, an obstacle avoidance apparatus for a storage automated guided vehicle is provided, the apparatus includes an image acquisition module 1510, an image classification module 1520, a coordinate calculation module 1530, and an information output module 1540, where:
An image acquisition module 1510, configured to acquire, based on a 5G communication network, the real-time images collected by the cameras disposed on the storage automated guided vehicle.
The image classification module 1520 is configured to run a preset object detection model, extract features of each object in the real-time image, and classify each object based on the features of each object.
The coordinate calculation module 1530 is configured to select a positioning point for each object according to its type, and to calculate the real-time coordinates of each object in the vehicle coordinate system of the storage automated guided vehicle according to preset camera calibration parameters and the plane equation of the ground in the camera coordinate system of each camera.
An information output module 1540, configured to calculate a real-time distance between the storage automated guided vehicle and each object based on the real-time coordinates, and output obstacle avoidance information to the storage automated guided vehicle based on the 5G communication network when the real-time distance is smaller than a set threshold.
In one embodiment, the storage automated guided vehicle obstacle avoidance device further includes:
the image synchronization and calibration unit is used for synchronizing and calibrating the real-time images acquired by the cameras, wherein the synchronization is to synchronize the real-time images acquired by the cameras according to the acquisition time of the real-time images, and the calibration is to calibrate the real-time images acquired by the cameras according to preset camera calibration parameters.
In one embodiment, the image synchronization and calibration unit comprises the following units:
the camera calibration parameter acquiring unit is used for acquiring camera calibration parameters of each camera, wherein the camera calibration parameters comprise internal parameters and external parameters of each camera, the internal parameters comprise an internal parameter matrix and distortion parameters of each camera, and the external parameters comprise a rotation matrix and a translation matrix of each camera.
In one embodiment, the camera calibration parameter acquiring unit includes the following units:
An internal parameter acquisition unit, used for extracting each corner point in the real-time image using the checkerboard method, calculating the pixel coordinates of each corner point, and obtaining the internal reference matrix and distortion parameters of each camera.
An external parameter acquisition unit, used for acquiring position data and attitude data of each camera and calculating the rotation matrix and translation matrix of each camera, wherein the position data comprises the height of each camera and the actual distance between the ground position corresponding to the center of the real-time image and the projection position of each camera on the ground, and the attitude data comprises the pitch angle, yaw angle and roll angle of each camera.
In one embodiment, the storage automated guided vehicle obstacle avoidance device further includes:
A ground equation calculation unit, used for calculating the plane equation of the ground in the camera coordinate system of each camera.
In one embodiment, the ground equation calculation unit includes the following units:
A feature point acquisition unit, used for acquiring each feature point on the ground, obtaining the actual distance between every two feature points, and calculating the coordinates of each feature point in the camera coordinate system of each camera from these distances.
An equation calculation unit, used for calculating the plane equation of the ground in the camera coordinate system of each camera from the coordinates of each feature point in that coordinate system.
In one embodiment, the storage automated guided vehicle obstacle avoidance device further comprises a communication network unit, and the communication network unit comprises the following units:
A network testing unit, used for testing the 5G communication network, the test covering whether the 5G communication network is reachable and whether its transmission rate is greater than a preset value.
A network communication unit, used for entering the step of acquiring the real-time images collected by the cameras arranged on the storage automated guided vehicle when the 5G communication network is reachable and its transmission rate is greater than the preset value.
In one embodiment, the storage automated guided vehicle obstacle avoidance device further includes:
A model training unit, used for training and obtaining the object detection model.
In one embodiment, the model training unit comprises the following units:
An operator image acquisition unit, used for acquiring images of operators authorized to enter the warehouse.
A warehouse image acquisition unit, used for acquiring warehouse images and the user-labeled type of each object in the warehouse images.
A feature extraction and model training unit, used for extracting features of each object in the operator images and warehouse images with the full convolution layers of a convolutional neural network, and for training and obtaining the object detection model.
In one embodiment, the coordinate calculation module 1530 includes the following elements:
A positioning point acquisition unit, used for selecting the positioning point of each object according to its type.
A positioning-point pixel coordinate acquisition unit, used for acquiring, based on the positioning point of each object, the pixel coordinates of that positioning point in the image coordinate system of the real-time image.
A positioning-point camera coordinate acquisition unit, used for converting the positioning-point pixel coordinates into positioning-point camera coordinates in the camera coordinate system of each camera, according to the preset internal parameters of each camera and the plane equation of the ground in that coordinate system.
A positioning-point vehicle coordinate acquisition unit, used for converting the positioning-point camera coordinates into positioning-point vehicle coordinates in the vehicle coordinate system of the storage automated guided vehicle according to the preset external parameters of each camera, obtaining the real-time coordinates of each object in that coordinate system.
In one embodiment, a storage automated guided vehicle is provided, as shown in fig. 16, comprising a camera module 1610, a 5G communication module 1620, and a storage automated guided vehicle module 1630, wherein:
the camera module 1610 is disposed on the storage automated guided vehicle and configured to acquire a real-time image of an environment where the storage automated guided vehicle is located.
The 5G communication module 1620 is configured to transmit the real-time images collected by the camera module arranged on the storage automated guided vehicle to an edge server through a 5G communication network, and to receive, through the 5G communication network, the obstacle avoidance information calculated by the edge server based on the real-time images when the real-time distance between the storage automated guided vehicle and an object in the real-time images is smaller than a set threshold.
The storage automated guided vehicle module 1630 is configured to take obstacle avoidance measures according to the obstacle avoidance information.
For specific limitations of the storage automated guided vehicle obstacle avoidance device, reference may be made to the limitations of the storage automated guided vehicle obstacle avoidance method above, which are not repeated here. Each module in the device may be implemented in whole or in part by software, hardware, or a combination of the two. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory in the computer device, so that the processor can invoke them and execute the corresponding operations.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 17. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer equipment is used for storing the obstacle avoidance data of the storage automatic guided vehicle. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to realize the obstacle avoidance method of the storage automatic guided vehicle.
Those skilled in the art will appreciate that the architecture shown in fig. 17 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program; when executing the computer program, the processor implements the steps of the storage automated guided vehicle obstacle avoidance method described above.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the steps of the storage automated guided vehicle obstacle avoidance method described above.
Those skilled in the art will understand that all or part of the processes in the above method embodiments can be implemented by a computer program instructing relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory or optical storage. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that several variations and modifications can be made by a person skilled in the art without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (14)

1. An obstacle avoidance method for a storage automated guided vehicle, the method comprising:
acquiring, based on a 5G communication network, real-time images collected by each camera arranged on the storage automated guided vehicle;
running a preset object detection model, extracting features of each object in the real-time images, and classifying each object based on those features;
selecting a positioning point for each object according to its type, and calculating the real-time coordinates of each object in the vehicle coordinate system of the storage automated guided vehicle according to preset camera calibration parameters and the plane equation of the ground in the camera coordinate system of each camera;
and calculating the real-time distance between the storage automated guided vehicle and each object based on the real-time coordinates, and outputting obstacle avoidance information to the storage automated guided vehicle based on the 5G communication network when the real-time distance is smaller than a set threshold.
2. The method according to claim 1, wherein after the acquiring of the real-time images collected by the cameras arranged on the storage automated guided vehicle based on the 5G communication network, the method comprises:
synchronizing and calibrating the real-time images collected by each of the cameras, wherein the synchronizing is to synchronize the real-time images collected by the cameras according to their acquisition time, and the calibrating is to calibrate the real-time images collected by the cameras according to preset camera calibration parameters.
3. The method of claim 2, wherein before the synchronizing and calibrating of the real-time images collected by each of the cameras, the method comprises:
the method comprises the steps of obtaining camera calibration parameters of each camera, wherein the camera calibration parameters comprise internal parameters and external parameters of each camera, the internal parameters comprise internal parameter matrixes and distortion parameters of each camera, and the external parameters comprise rotation matrixes and translation matrixes of each camera.
4. The method of claim 3, wherein the obtaining camera calibration parameters for each of the cameras comprises:
extracting each corner point in the real-time image using the checkerboard method, calculating the pixel coordinates of each corner point, and obtaining the internal reference matrix and distortion parameters of each camera;
and obtaining position data and attitude data of each camera and calculating the rotation matrix and translation matrix of each camera, wherein the position data comprises the height of each camera and the actual distance between the ground position corresponding to the center of the real-time image and the projection position of each camera on the ground, and the attitude data comprises the pitch angle, yaw angle and roll angle of each camera.
5. The method of claim 2, wherein after the synchronizing and calibrating of the real-time images collected by each of the cameras, the method comprises:
calculating the plane equation of the ground in the camera coordinate system of each camera.
6. The method of claim 5, wherein said calculating a plane equation of the ground in a camera coordinate system of each of the cameras comprises:
acquiring each feature point of the ground, acquiring an actual distance between every two feature points, and calculating the coordinate of each feature point under the camera coordinate system of each camera according to the actual distance;
and calculating a plane equation of the ground under the camera coordinate system of each camera according to the coordinates of each feature point under the camera coordinate system of each camera.
7. The method according to claim 1, wherein before the acquiring of the real-time images collected by the cameras arranged on the storage automated guided vehicle based on the 5G communication network, the method comprises:
testing the 5G communication network, wherein the testing comprises whether the 5G communication network is connected and whether the transmission rate of the 5G communication network is greater than a preset value;
and when the 5G communication network is reachable and its transmission rate is greater than the preset value, entering the step of acquiring the real-time images collected by the cameras arranged on the storage automated guided vehicle.
8. The method of claim 5, wherein after the calculating of the plane equation of the ground in the camera coordinate system of each of the cameras, the method comprises:
training and obtaining the object detection model.
9. The method of claim 8, wherein the training and obtaining the object detection model comprises:
obtaining an image of an operator authorized to enter the warehouse;
acquiring a warehouse image and the type of each object in the warehouse image marked by a user;
and performing feature extraction on each object in the operator image and the warehouse image using the full convolution layers of a convolutional neural network, and training and obtaining the object detection model.
10. The method according to claim 1, wherein the selecting of a positioning point for each object according to its type and the calculating of the real-time coordinates of each object in the vehicle coordinate system of the storage automated guided vehicle according to preset camera calibration parameters and the plane equation of the ground in the camera coordinate system of each camera comprise:
selecting a positioning point for each object according to its type, wherein when the object is an operator the operator's ankle is taken as the positioning point, and when the object is a vehicle or cargo the middle-lower part of the vehicle or cargo is taken as the positioning point;
acquiring, based on the positioning point of each object, the pixel coordinates of that positioning point in the image coordinate system of the real-time image;
converting the positioning-point pixel coordinates into positioning-point camera coordinates in the camera coordinate system of each camera according to preset internal parameters of each camera and the plane equation of the ground in that coordinate system;
and converting the positioning-point camera coordinates into positioning-point vehicle coordinates in the vehicle coordinate system of the storage automated guided vehicle according to preset external parameters of each camera, obtaining the real-time coordinates of each object in the vehicle coordinate system of the storage automated guided vehicle.
11. An obstacle avoidance device for a storage automated guided vehicle, the device comprising:
an image acquisition module, used for acquiring, based on a 5G communication network, real-time images collected by each camera arranged on the storage automated guided vehicle;
an image classification module, used for running a preset object detection model, extracting features of each object in the real-time images, and classifying each object based on those features;
a coordinate calculation module, used for selecting a positioning point for each object according to its type, and calculating the real-time coordinates of each object in the vehicle coordinate system of the storage automated guided vehicle according to preset camera calibration parameters and the plane equation of the ground in the camera coordinate system of each camera;
and an information output module, used for calculating the real-time distance between the storage automated guided vehicle and each object based on the real-time coordinates, and outputting obstacle avoidance information to the storage automated guided vehicle based on the 5G communication network when the real-time distance is smaller than a set threshold.
12. A storage automated guided vehicle, comprising:
a camera module, arranged on the storage automated guided vehicle and used for acquiring real-time images of the environment in which the storage automated guided vehicle is located;
a 5G communication module, used for transmitting the real-time images collected by the camera module to an edge server through a 5G communication network, and for receiving, through the 5G communication network, the obstacle avoidance information calculated by the edge server based on the real-time images when the real-time distance between the storage automated guided vehicle and an object in the real-time images is smaller than a set threshold;
and a storage automated guided vehicle module, used for taking obstacle avoidance measures according to the obstacle avoidance information.
13. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 10 when executing the computer program.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 10.
CN202110069572.5A 2021-01-19 2021-01-19 Storage automatic guided vehicle obstacle avoidance method and device, computer equipment and storage medium Pending CN112711263A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110069572.5A CN112711263A (en) 2021-01-19 2021-01-19 Storage automatic guided vehicle obstacle avoidance method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110069572.5A CN112711263A (en) 2021-01-19 2021-01-19 Storage automatic guided vehicle obstacle avoidance method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112711263A true CN112711263A (en) 2021-04-27

Family

ID=75549363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110069572.5A Pending CN112711263A (en) 2021-01-19 2021-01-19 Storage automatic guided vehicle obstacle avoidance method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112711263A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010060287A1 (en) * 2008-11-27 2010-06-03 东软集团股份有限公司 An obstacle detecting method based on monocular vision and the device thereof
US20160093052A1 (en) * 2014-09-26 2016-03-31 Neusoft Corporation Method and apparatus for detecting obstacle based on monocular camera
CN107314771A (en) * 2017-07-04 2017-11-03 合肥工业大学 Unmanned plane positioning and attitude angle measuring method based on coded target
CN108489454A (en) * 2018-03-22 2018-09-04 沈阳上博智像科技有限公司 Depth distance measurement method, device, computer readable storage medium and electronic equipment
CN109270534A (en) * 2018-05-07 2019-01-25 西安交通大学 A kind of intelligent vehicle laser sensor and camera online calibration method
CN109483516A (en) * 2018-10-16 2019-03-19 浙江大学 A kind of mechanical arm hand and eye calibrating method based on space length and epipolar-line constraint
CN109902637A (en) * 2019-03-05 2019-06-18 长沙智能驾驶研究院有限公司 Method for detecting lane lines, device, computer equipment and storage medium
CN110211190A (en) * 2019-05-31 2019-09-06 北京百度网讯科技有限公司 Training method, device and the storage medium of camera self moving parameter estimation model
US20200012869A1 (en) * 2018-07-06 2020-01-09 Cloudminds (Beijing) Technologies Co., Ltd. Obstacle avoidance reminding method, electronic device and computer-readable storage medium thereof
CN111666876A (en) * 2020-06-05 2020-09-15 北京百度网讯科技有限公司 Method and device for detecting obstacle, electronic equipment and road side equipment
CN112183476A (en) * 2020-10-28 2021-01-05 深圳市商汤科技有限公司 Obstacle detection method and device, electronic equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QIAN CHEN: "Measurement of Tree Barriers in Transmission Line Corridors Based on Binocular Stereo Vision", 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV) *
SUN ZENGPENG (孙增鹏): "Research on navigation and obstacle avoidance methods for indoor dynamic complex environments", China Master's Theses Full-text Database *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022233166A1 (en) * 2021-05-06 2022-11-10 珠海格力智能装备有限公司 Agv obstacle avoidance method and apparatus, computer-readable storage medium, and processor
CN115129070A (en) * 2022-08-31 2022-09-30 深圳市欧铠智能机器人股份有限公司 Intelligent obstacle avoidance system and method for storage robot under Internet of things
CN115129070B (en) * 2022-08-31 2022-12-30 深圳市欧铠智能机器人股份有限公司 Intelligent obstacle avoidance system and method for storage robot under Internet of things
CN115373407A (en) * 2022-10-26 2022-11-22 北京云迹科技股份有限公司 Method and device for robot to automatically avoid safety warning line

Similar Documents

Publication Publication Date Title
CN112711263A (en) Storage automatic guided vehicle obstacle avoidance method and device, computer equipment and storage medium
CN112149550B (en) Automatic driving vehicle 3D target detection method based on multi-sensor fusion
CN102737236B (en) Method for automatically acquiring vehicle training sample based on multi-modal sensor data
US11841434B2 (en) Annotation cross-labeling for autonomous control systems
WO2020097840A1 (en) Systems and methods for correcting a high-definition map based on detection of obstructing objects
AU2018286594A1 (en) Methods and systems for color point cloud generation
CN112419385B (en) 3D depth information estimation method and device and computer equipment
US20170221241A1 (en) System, method and apparatus for generating building maps
CN111753649B (en) Parking space detection method, device, computer equipment and storage medium
CN113160327A (en) Method and system for realizing point cloud completion
CN112614165B (en) Firework monitoring method, device, camera, electronic device and storage medium
CN110136186B (en) Detection target matching method for mobile robot target ranging
CN112508865A (en) Unmanned aerial vehicle inspection obstacle avoidance method and device, computer equipment and storage medium
CN114359334A (en) Target tracking method and device, computer equipment and storage medium
CN115376109A (en) Obstacle detection method, obstacle detection device, and storage medium
CN114494466B (en) External parameter calibration method, device and equipment and storage medium
CN114463303A (en) Road target detection method based on fusion of binocular camera and laser radar
CN112488022B (en) Method, device and system for monitoring panoramic view
CN112689234A (en) Indoor vehicle positioning method and device, computer equipment and storage medium
CN114037968A (en) Lane line detection method based on depth radar point cloud and image data fusion
CN114169355A (en) Information acquisition method and device, millimeter wave radar, equipment and storage medium
CN114167443A (en) Information completion method and device, computer equipment and storage medium
US20230386062A1 (en) Method for training depth estimation model, method for estimating depth, and electronic device
EP4345750A1 (en) Position estimation system, position estimation method, and program
CN113572946B (en) Image display method, device, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination