CN113537096B - AGV forklift warehouse position tray identification and auxiliary positioning method and system based on ROS - Google Patents

AGV forklift warehouse position tray identification and auxiliary positioning method and system based on ROS

Info

Publication number
CN113537096B
Authority
CN
China
Prior art keywords
tray
forklift
fork
agv
coordinate system
Prior art date
Legal status
Active
Application number
CN202110824995.3A
Other languages
Chinese (zh)
Other versions
CN113537096A (en)
Inventor
徐本连
李震
从金亮
鲁明丽
施健
吴迪
赵康
Current Assignee
Changshu Institute of Technology
Original Assignee
Changshu Institute of Technology
Priority date
Filing date
Publication date
Application filed by Changshu Institute of Technology
Priority to CN202110824995.3A
Publication of CN113537096A
Application granted
Publication of CN113537096B
Legal status: Active

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B66HOISTING; LIFTING; HAULING
    • B66FHOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
    • B66F9/00Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
    • B66F9/06Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
    • B66F9/075Constructional features or details
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B66HOISTING; LIFTING; HAULING
    • B66FHOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
    • B66F9/00Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
    • B66F9/06Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
    • B66F9/075Constructional features or details
    • B66F9/0755Position control; Position detectors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Structural Engineering (AREA)
  • Transportation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Civil Engineering (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Geology (AREA)
  • Mechanical Engineering (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Warehouses Or Storage Devices (AREA)

Abstract

The invention discloses an ROS-based method and system for warehouse-position tray recognition and auxiliary positioning for AGV forklifts. The method comprises: detecting and recognizing the tray in the warehouse position, predicting the tray support columns and their pixel positions in the image with a model trained by deep learning, and calculating the three-dimensional coordinates of each support-column center in the camera coordinate system with a depth camera; calculating the position and attitude of the tray relative to the AGV forklift body through the coordinate transformation relations among the depth-camera coordinate system, the AGV forklift fork coordinate system, and the AGV forklift body coordinate system; and controlling the AGV forklift and the fork to move to an insertable position, so that the fork of the AGV forklift directly faces the cavities of the tray. The invention has a high detection speed, can accurately detect and locate trays in warehouse positions in real time, improves working efficiency, and better assists the forklift in inserting and picking trays.

Description

AGV forklift warehouse position tray identification and auxiliary positioning method and system based on ROS
Technical Field
The invention relates to an ROS-based AGV forklift warehouse-position tray recognition and auxiliary insertion-positioning method and system.
Background
With the rapid development of artificial intelligence technology, the intelligentization of robots is an inevitable trend. As robot technology matures, the range of robot applications keeps expanding: industrial robots for manufacturing, household service robots serving people in daily life, medical robots assisting doctors and patients, military robots for national defense, and so on.
The AGV (Automated Guided Vehicle) forklift is one kind of industrial robot, with functions including movement, automatic navigation, multi-sensor control, network interaction, and loading, unloading, and shipping. As an artificially intelligent industrial truck, the AGV forklift mainly completes tasks such as loading and unloading palletized goods and short-distance transport. As warehouse turnover rates and order volumes grow, more and more users are replacing traditional manual forklifts with unmanned AGV forklifts to guarantee working efficiency, and unmanned AGV forklifts have broad application prospects. With vision and artificial intelligence techniques, future robots will provide flexible solutions for the unmanned upgrading of logistics in factories and warehouses. The deep application of computer vision gives the unmanned forklift higher positioning and sensing precision; combined with motion control, deep learning, and related technologies, it becomes more intelligent and can rapidly select and execute the optimal task-completion strategy.
In factories where AGV forklifts load, unload, and transport goods, the trays and goods in a warehouse position may already have been picked up and carried away manually. Before entering a station to pick goods, the AGV forklift therefore needs a vision camera to detect and recognize in real time whether a tray is present at the target warehouse position, and to inform the system when the goods at that position have already been carried away, so that the AGV forklift does not travel to that position to pick goods. On the other hand, goods are sometimes misplaced, for example put into a warehouse position intended for another kind of goods. If different kinds of goods are placed on different kinds of trays, the vision camera can distinguish the kinds of goods by recognizing the different trays and judge whether the goods to be picked are correct. Traditional machine-vision methods recognize target objects mainly through manually designed features, for example by matching feature points. In practical applications, such algorithms are often unsatisfactory because of the complex appearance of target objects and lighting changes in the environment. At present, detection technology based on deep neural networks has become the mainstream method, with excellent detection performance on open target detection data sets.
When trays are placed in warehouse positions, they sometimes cannot be placed strictly as specified and end up placed irregularly, so the tray position deviates at an angle from the original placement position. If the AGV forklift continues the insertion task along the originally planned path and program, the forks may be inserted inaccurately. In that case, the onboard sensor equipment is used, on top of the originally planned path, to assist the AGV forklift in positioning the tray, so that the forklift can insert and pick the tray accurately for loading and unloading.
ROS (Robot Operating System) provides a series of libraries and tools to help software developers create robot application software. It provides many functions such as hardware abstraction, device drivers, function libraries, visualization tools, message passing, and software package management. ROS supports several types of communication, including synchronous RPC (Remote Procedure Call) communication based on services, asynchronous data-stream communication based on topics, and data storage on a parameter server.
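As a minimal illustration of the topic mechanism just described (a sketch in rospy with a std_msgs String payload and a hypothetical topic name, not code from the patent):

```python
# Minimal ROS publish/subscribe sketch. The topic name "chatter" is
# hypothetical and only illustrates topic-based communication.
import rospy
from std_msgs.msg import String

def talker():
    rospy.init_node('talker')
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello'))
        rate.sleep()

def listener():
    rospy.init_node('listener')
    rospy.Subscriber('chatter', String, lambda msg: rospy.loginfo(msg.data))
    rospy.spin()  # block and process incoming messages
```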
YOLOv4 (You Only Look Once, version 4) is a deep-learning target detection method characterized by high accuracy together with fast detection: YOLO uses a single neural network to directly predict object bounding boxes and class probabilities, achieving end-to-end object detection. YOLOv4 is the fourth-generation target detection method of the YOLO series; it inherits the ideas and concepts of the series, can be trained and tested on a conventional GPU (Graphics Processing Unit), and delivers real-time, high-accuracy detection results.
Disclosure of Invention
1. Object of the invention
The invention aims to assist the AGV forklift, before it enters a warehouse position to insert and pick a tray, in detecting whether the tray is still in its original position and whether the tray type is correct, thereby avoiding repeated useless operations. Meanwhile, for misalignment caused by imprecise tray placement, the insertion auxiliary positioning system assists the AGV forklift in accurately positioning the tray, so that the AGV forklift's insertion, picking, and hauling operations can be completed better.
2. Technical solution adopted by the invention
The invention discloses an ROS-based AGV forklift warehouse-position tray recognition and auxiliary positioning method, comprising the following steps:
detecting and recognizing the tray in the warehouse position, predicting the tray support columns and their pixel positions in the image with a model trained by deep learning, and calculating the three-dimensional coordinates of each support-column center in the camera coordinate system with a depth camera;
calculating the position and attitude of the tray relative to the AGV forklift body through the coordinate transformation relations among the depth-camera coordinate system, the AGV forklift fork coordinate system, and the AGV forklift body coordinate system;
controlling the AGV forklift and the fork to move to an insertable position, so that the fork of the AGV forklift directly faces the cavities of the tray.
Further, the detection and recognition step S1 comprises:
S11, shooting and collecting tray sample data in the warehouse position;
S12, labeling the sample data with the image annotation software LabelImg, box-selecting the support columns of the different tray types in each image and assigning the labels of the different tray types, saving and exporting the annotation files, and then dividing the data set;
S13, training the object detection model and building the deep-learning YOLOv4-CSP target detection network; the network structure of YOLOv4 is divided into four parts: input, backbone, neck, and head; the input end of YOLOv4 enriches the detection data set with methods including mosaic data augmentation and the SAT (self-adversarial training) strategy; the BackBone of YOLOv4 uses the CSPDarknet53 network framework as the feature-extraction backbone; the neck part mainly adopts the SPP module together with FPN and PAN, the SPP module fusing feature maps of different scales to enlarge the receptive range of the backbone features, while the top-down FPN feature pyramid and the bottom-up PAN feature pyramid improve the feature-extraction ability of the network; the head of YOLOv4 is trained with the loss function L_CIOU, which accounts for the overlapping area, center-point distance, and aspect ratio of the prediction box A and the real box B when computing the bounding-box regression; L_CIOU is calculated as:
L_CIOU = 1 − IOU + Distance_2^2 / Distance_C^2 + v^2 / ((1 − IOU) + v)   (1)
where Distance_2 is the Euclidean distance between the center points of the prediction box and the real box, and Distance_C is the diagonal distance of the minimum enclosing rectangle of the prediction box and the real box; IOU (Intersection over Union) is a standard measure of how accurately the corresponding object is detected in a particular data set, calculated as:
IOU = |A ∩ B| / |A ∪ B|   (2)
where ∪ denotes the union of the two boxes and ∩ their intersection; in formula (1), v is a parameter measuring the consistency of the aspect ratios, calculated as:
v = (4 / π^2) * (arctan(w_gt / h_gt) − arctan(w_p / h_p))^2   (3)
where w_gt, h_gt are the width and height of the real box, w_p, h_p are the width and height of the prediction box, and arctan is the arctangent function;
the YOLOv4-CSP adds the cross-stage partial network structure CSPNet to the up-sampling and down-sampling stages of the FPN and PAN and to the SPP module of the neck part, splitting the feature map of the base layer into two parts and then merging them through a cross-stage hierarchical structure;
S14, training on a GPU computer to obtain a well-performing tray support-column detection and recognition model.
Further, the tray pose calculation step S2 comprises:
S21, turning on the depth camera on the forklift fork, running the YOLOv4-CSP detection model, and publishing the detections to the bounding_boxes topic via ROS, wherein bounding_boxes is a topic message containing the detection-box information sent through ROS; the topic comprises, for each identified tray support column, the tray type, the confidence of the support column, and the pixel coordinates of the top-left and bottom-right points of the detection box;
S22, creating two messages, YoloObject and YoloObjects, under ROS; YoloObject and YoloObjects are custom message types for storing the transformed coordinate information; a YoloObject message contains the label information of a single tray support column and the three-dimensional coordinates of its center point, and YoloObjects is used to store all the YoloObject messages;
S23, subscribing via ROS to the camera's color image, depth data, and camera-intrinsics topics together with bounding_boxes, and converting the two-dimensional pixel coordinates (u, v) of each support-column center into three-dimensional coordinates (x, y, z); the conversion is calculated as follows:
z = 0.001 * d
x = (u − c_x) * z / f_x
y = (v − c_y) * z / f_y   (4)
where d is the depth value at the pixel, f_x, f_y are the camera focal lengths, and c_x, c_y are the coordinates of the camera principal point;
S24, storing the three-dimensional coordinates (x, y, z) of the tray support-column center points and the tray type information in the YoloObject.
Further, in step S24, the type information is stored in the YoloObject as follows:
comparing the x-axis coordinates of the support columns and sorting them from smallest to largest gives the left support column a, the middle support column b, and the right support column c; each YoloObject is then added to YoloObjects in turn, and YoloObjects is published via ROS on the custom point_point topic; point_point is the ROS topic created for sending the YoloObjects message.
Further, in step S3, the AGV forklift receives the detection information and controls the AGV forklift and the fork to move to the insertable position; the specific steps are as follows:
S31, moving the AGV forklift to the tray warehouse position to be inserted, setting the fork control parameters, and controlling the fork to lift to the initial preparation position; the transformation matrix ^B_H T between the fork coordinate system {H} and the forklift body coordinate system {B} is determined from the lifting position;
S32, determining the transformation matrix ^H_C T from the camera coordinate system {C} to the fork coordinate system {H} through hand-eye calibration; the depth camera is turned on to perform the detection step of S2, and whether the tray at the warehouse position has been taken away is judged by detecting the tray support columns with the camera; if no tray support column is detected, the control system is notified that the tray has been taken away; if the detected tray type is incorrect, the control system is notified that the tray and goods are misplaced; if the support columns of the correct tray are detected, the result is sent out through the point_point topic;
S33, the AGV forklift subscribes to the point_point topic, and the coordinates ^C P_a, ^C P_b, ^C P_c of the center points of the three tray support columns a, b, c in the camera coordinate system {C} are converted into the forklift body coordinate system {B}:
^H P_i = ^H_C T * ^C P_i, i = a, b, c   (5)
^B P_i = ^B_H T * ^H P_i, i = a, b, c   (6)
where ^H P_a, ^H P_b, ^H P_c and ^B P_a, ^B P_b, ^B P_c are the coordinates of the three support-column center points in the fork coordinate system and in the AGV forklift body coordinate system, respectively;
S34, for the AGV forklift to be able to insert and pick the tray, the fork must directly face the tray, i.e., the center of the tray insertion face is aligned with the center point of the fork, and the tray insertion face is parallel to the gantry face of the forklift; the three-dimensional coordinates of tray support columns a, b, c in the AGV forklift body coordinate system are ^B P_a = [x_a, y_a, z_a]^T, ^B P_b = [x_b, y_b, z_b]^T, ^B P_c = [x_c, y_c, z_c]^T; the center position of the tray is the coordinate of support column b, ^B P_b = [x_b, y_b, z_b]^T; the deflection angle θ of the tray insertion face relative to the forklift gantry face is solved, where l is the length of the tray:
Δz = z_c − z_a   (7)
θ = arcsin(Δz / l)   (8)
where arcsin is the arcsine function;
S35, the AGV forklift obtains the tray pose information {x_b, y_b, z_b, θ}, comprising the coordinates [x_b, y_b, z_b]^T of the center of the tray insertion face along the x, y, z axes of the forklift body coordinate system and the deflection angle θ of the tray insertion face relative to the forklift gantry face; the positions of the truck body and the fork lifting height are adjusted according to this information so that the forks of the AGV forklift face the support holes of the tray, and the AGV forklift drives forward to insert and pick up the tray, completing the task.
The invention provides an ROS-based AGV forklift warehouse-position tray recognition and auxiliary positioning system, comprising a memory and a processor, wherein the memory stores a computer program and the processor implements the steps of the above method when executing the computer program.
3. Beneficial effects of the invention
1) The invention recognizes trays well by using a deep-learning detection model. The traditional method extracts the tray contour from point-cloud information and identifies the tray holes and support columns from the widths between contour edges, thereby judging whether the object is a tray. Compared with the traditional method, the deep-learning detection method provided by the invention has a better recognition effect, higher accuracy, and better robustness.
2) The invention trains the tray samples with the improved YOLOv4 deep-learning network structure YOLOv4-CSP; the trained network model has a high detection speed, so the AGV forklift can detect trays in warehouse positions in real time, improving working efficiency.
3) Compared with a traditional AGV following preset coordinate tracks, the auxiliary positioning system of the invention uses the depth camera and the deep-learning detection model to locate the tray when it deviates from its original position due to imprecise placement or similar causes, better assisting the forklift in inserting and picking the tray.
4) The invention runs under the ROS framework; results from the modules are published and subscribed as topics through the ROS system for communication, and the system runs smoothly in real time.
Drawings
FIG. 1 is a training flow chart of a deep learning detection recognition model.
FIG. 2 is a flow chart of an AGV forklift warehouse pallet identification and insertion auxiliary positioning system.
Fig. 3 shows the type of pallet that the AGV forklift needs to recognize and carry, and a, b, and c respectively show the support columns of the pallet.
FIG. 4 is a top view of the pallet in relation to the AGV fork positions.
Fig. 5 is a visual diagram of topic publication and subscription under ROS for the detection and identification module and the AGV fork truck control module.
Fig. 6 is a schematic block diagram of an AGV forklift.
Detailed Description
The following describes the embodiments of the present invention more clearly and completely with reference to the accompanying drawings, in which embodiments of the invention are shown. Evidently, the described embodiments are only some, not all, embodiments of the invention. All other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of the present invention.
Examples of the present invention will be described in further detail below with reference to the accompanying drawings.
Example 1
An ROS-based AGV forklift warehouse-position tray recognition and insertion auxiliary positioning system comprises two parts: detection and recognition of trays in warehouse positions, and calculation of the tray position and attitude. The warehouse-position tray detection part comprises a depth camera and a computer carrying a GPU; the computer predicts the tray support columns and their pixel positions in the image with a model trained by deep learning, and the three-dimensional coordinates of each support-column center in the camera coordinate system are calculated with the depth camera. The position-and-attitude calculation part computes the coordinate transformations among the depth-camera coordinate system, the fork coordinate system, and the forklift body coordinate system, solves the position and attitude of the tray relative to the AGV forklift body, and controls the AGV forklift to move so that its forks directly face the cavities of the tray. Through this ROS-based warehouse-position tray recognition and insertion auxiliary positioning system, the AGV forklift can insert, pick, and haul trays in warehouse positions more effectively, improving production efficiency.
The main implementation steps of the invention are as follows:
s1, training a detection and identification model of a tray based on an improved Yolov4 target detection network Yolov4-CSP, wherein the detection and identification model specifically comprises the following steps:
s11, shooting and collecting tray sample data in the library position.
And S12, labeling the sample data by using image labeling software LabelImg, and selecting support columns of different types of trays in each drawing by a frame to label the labels of the different types of trays. And storing and outputting the annotation file, and dividing the data set, wherein the ratio of the training set to the testing set is 9:1.
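A minimal sketch of the 9:1 split described above; the directory layout and file names are assumptions for illustration, not specified by the patent:

```python
# Split labeled samples into train/test lists at a 9:1 ratio.
# Assumed layout: images/*.jpg with LabelImg annotations alongside.
import glob
import random

random.seed(0)  # reproducible split
images = sorted(glob.glob('images/*.jpg'))
random.shuffle(images)
n_train = int(0.9 * len(images))
with open('train.txt', 'w') as f:
    f.write('\n'.join(images[:n_train]))
with open('test.txt', 'w') as f:
    f.write('\n'.join(images[n_train:]))
```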
S13, training the object detection model and building the deep-learning YOLOv4-CSP target detection network. The network structure of YOLOv4 can be divided into four parts: Input, BackBone, Neck, and Head. At the Input end, YOLOv4 uses mosaic data augmentation, SAT (self-adversarial training), and other strategies to enrich the detection data set. The BackBone of YOLOv4 uses the CSPDarknet53 network framework as the feature-extraction backbone. The Neck part mainly adopts the SPP (Spatial Pyramid Pooling) module together with FPN (Feature Pyramid Networks) and PAN (Path Aggregation Network); the SPP module fuses feature maps of different scales, which effectively enlarges the receptive range of the backbone features, and the top-down FPN feature pyramid combined with the bottom-up PAN feature pyramid improves the feature-extraction ability of the network. The Head of YOLOv4 is trained with the loss function L_CIOU, which accounts for the overlapping area, center-point distance, and aspect ratio of the prediction box A and the real box B when computing the bounding-box regression. L_CIOU is calculated as:
L_CIOU = 1 − IOU + Distance_2^2 / Distance_C^2 + v^2 / ((1 − IOU) + v)   (1)
where Distance_2 is the Euclidean distance between the center points of the prediction box and the real box, and Distance_C is the diagonal distance of the minimum enclosing rectangle of the prediction box and the real box. IOU (Intersection over Union) is a standard measure of how accurately the corresponding object is detected in a particular data set, calculated as:
IOU = |A ∩ B| / |A ∪ B|   (2)
where ∪ denotes the union of the two boxes and ∩ their intersection. In formula (1), v is a parameter measuring the consistency of the aspect ratios, calculated as:
v = (4 / π^2) * (arctan(w_gt / h_gt) − arctan(w_p / h_p))^2   (3)
where w_gt, h_gt are the width and height of the real box, w_p, h_p are the width and height of the prediction box, and arctan is the arctangent function.
Compared with YOLOv4, YOLOv4-CSP adds the cross-stage partial network structure CSPNet (Cross Stage Partial Network) to the up-sampling and down-sampling stages of the FPN and PAN and to the SPP module of the Neck part. CSPNet splits the feature map of the base layer into two parts and then merges them through a cross-stage hierarchical structure. The CSPNet structure enhances the learning ability of the CNN (Convolutional Neural Network) and lightens the model while maintaining its accuracy, reducing the computational bottleneck of the whole model as well as the memory cost of the algorithm. The main purpose of CSPNet is to achieve richer gradient combinations while reducing computation. On a common GPU processor, the YOLOv4-CSP network model achieves a better balance of speed and accuracy than YOLOv4.
S14, training on a GPU computer to obtain a well-performing tray support-column detection and recognition model.
S2, detecting the tray with the trained model, calculating the three-dimensional coordinates of the centers of the three tray support columns with the depth camera, and sending the detected data messages to the AGV forklift via ROS. The specific steps are as follows:
S21, turn on the depth camera on the forklift fork, run the YOLOv4-CSP detection model, and publish the detected data to the bounding_boxes topic via ROS. bounding_boxes is a topic message containing the detection-box information sent through ROS. The topic includes, for each identified tray support column, the tray type, the confidence of the support column, and the pixel coordinates of the top-left and bottom-right points of the detection box.
S22, create two messages, YoloObject and YoloObjects, under ROS. YoloObject and YoloObjects are custom message types, used mainly to store the transformed coordinate information. A YoloObject message contains the label information of a single tray support column and the three-dimensional coordinates of its center point; YoloObjects is used to store all the YoloObject messages.
S23, subscribe via ROS to the camera's color image, depth data, and camera-intrinsics topics together with bounding_boxes, and convert the two-dimensional pixel coordinates (u, v) of each support-column center into three-dimensional coordinates (x, y, z). The conversion is calculated as follows:
z = 0.001 * d
x = (u − c_x) * z / f_x
y = (v − c_y) * z / f_y   (4)
where d is the depth value at the pixel, f_x, f_y are the camera focal lengths, and c_x, c_y are the coordinates of the camera principal point.
S24, store the three-dimensional coordinates (x, y, z) of each tray support-column center point and the tray type information in a YoloObject. Compare the x-axis coordinates of the support columns and sort them from smallest to largest, giving the left support column a, the middle support column b, and the right support column c. Then add each YoloObject to YoloObjects in turn, and publish YoloObjects via ROS on the custom point_point topic. point_point is the ROS topic created for sending the YoloObjects message.
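A sketch of this sorting-and-publishing step; the YoloObject/YoloObjects field names and the tray_msgs package are assumptions inferred from the description, not the patent's actual message definitions:

```python
import rospy
from tray_msgs.msg import YoloObject, YoloObjects  # hypothetical package

def publish_columns(columns, pub):
    """columns: list of (label, (x, y, z)) tuples, one per detected
    support column. Sorting by camera-frame x gives left (a),
    middle (b), right (c)."""
    msg = YoloObjects()
    for label, (x, y, z) in sorted(columns, key=lambda c: c[1][0]):
        obj = YoloObject()
        obj.label = label              # tray type of this column
        obj.x, obj.y, obj.z = x, y, z  # center point in camera frame
        msg.objects.append(obj)        # assumed list field "objects"
    pub.publish(msg)

rospy.init_node('tray_detector')
pub = rospy.Publisher('point_point', YoloObjects, queue_size=1)
```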
S3, the AGV forklift receives the detection information and controls the AGV forklift and the fork to move to the insertable position. The specific steps are as follows:
S31, move the AGV forklift to the tray warehouse position to be inserted, set the fork control parameters, and control the fork to lift to the initial preparation position. The transformation matrix ^B_H T between the fork coordinate system {H} and the forklift body coordinate system {B} is determined from the lifting position.
S32, determine the transformation matrix ^H_C T from the camera coordinate system {C} to the fork coordinate system {H} through hand-eye calibration. The depth camera is turned on to perform the detection step of S2, and whether the tray at the warehouse position has been taken away is judged by detecting the tray support columns with the camera. If no tray support column is detected, the control system is notified that the tray has been taken away; if the detected tray type is incorrect, the control system is notified that the tray and goods are misplaced; if the support columns of the correct tray are detected, the result is sent out through the point_point topic.
S33, the AGV forklift subscribes to the point_point topic, and the coordinates ^C P_a, ^C P_b, ^C P_c of the center points of the three tray support columns a, b, c in the camera coordinate system {C} are converted into the forklift body coordinate system {B}:
^H P_i = ^H_C T * ^C P_i, i = a, b, c   (5)
^B P_i = ^B_H T * ^H P_i, i = a, b, c   (6)
where ^H P_a, ^H P_b, ^H P_c and ^B P_a, ^B P_b, ^B P_c are the coordinates of the three support-column center points in the fork coordinate system and in the AGV forklift body coordinate system, respectively.
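In homogeneous coordinates, the chain of equations (5)-(6) is two matrix products. A numpy sketch, where T_HC and T_BH stand for ^H_C T (from hand-eye calibration) and ^B_H T (from the fork lifting position):

```python
import numpy as np

def camera_to_body(p_c, T_HC, T_BH):
    """Transform a camera-frame point p_c = (x, y, z) into the forklift
    body frame via the fork frame, per eqs. (5) and (6).
    T_HC, T_BH: 4x4 homogeneous transformation matrices."""
    p = np.array([p_c[0], p_c[1], p_c[2], 1.0])  # homogeneous point
    p_h = T_HC @ p     # camera {C} -> fork {H}, eq. (5)
    p_b = T_BH @ p_h   # fork {H} -> body {B}, eq. (6)
    return p_b[:3]
```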
S34, for the AGV forklift to be able to insert and pick the tray, the fork must directly face the tray: the center of the tray insertion face aligned with the center point of the fork, and the tray insertion face parallel to the gantry face of the forklift. The three-dimensional coordinates of tray support columns a, b, c in the AGV forklift body coordinate system are ^B P_a = [x_a, y_a, z_a]^T, ^B P_b = [x_b, y_b, z_b]^T, ^B P_c = [x_c, y_c, z_c]^T. The center position of the tray is the coordinate of support column b, ^B P_b = [x_b, y_b, z_b]^T. The deflection angle θ of the tray insertion face relative to the forklift gantry face is solved, where l is the length of the tray:
Δz = z_c − z_a   (7)
θ = arcsin(Δz / l)   (8)
where arcsin is the arcsine function.
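Equations (7)-(8) reduce to two lines; a sketch, with the body-frame z coordinates of columns a and c and the tray length l in meters (the example values are illustrative only):

```python
import math

def tray_yaw(z_a, z_c, l):
    """Deflection angle of the tray insertion face relative to the
    gantry face, per eqs. (7)-(8): theta = arcsin((z_c - z_a) / l)."""
    return math.asin((z_c - z_a) / l)

# e.g. columns 4 cm apart in depth on a 1.2 m tray -> about 1.9 degrees
theta = tray_yaw(0.02, 0.06, 1.2)
```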
S35, the AGV forklift obtains the tray pose information {x_b, y_b, z_b, θ}, which comprises the coordinates [x_b, y_b, z_b]^T of the center of the tray insertion face along the x, y, z axes of the forklift body coordinate system and the deflection angle θ of the tray insertion face relative to the forklift gantry face. The positions of the forklift body and the fork lifting height are adjusted according to this information so that the forks of the AGV forklift face the support holes of the tray, and the AGV forklift drives forward to insert and pick up the tray, completing the task.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.

Claims (5)

1. An ROS-based AGV forklift warehouse-position tray recognition and auxiliary positioning method, characterized by comprising the following steps:
detecting and recognizing the tray in the warehouse position, predicting the tray support columns and their pixel positions in the image with a model trained by deep learning, and calculating the three-dimensional coordinates of each support-column center in the camera coordinate system with a depth camera;
calculating the position and attitude of the tray relative to the AGV forklift body through the coordinate transformation relations among the depth-camera coordinate system, the AGV forklift fork coordinate system, and the AGV forklift body coordinate system;
controlling the AGV forklift and the fork to move to an insertable position, so that the fork of the AGV forklift directly faces the cavities of the tray;
the tray pose calculating step comprises the following steps:
S21, turning on the depth camera on the forklift fork, running the YOLOv4-CSP detection model, and publishing the detections to the bounding_boxes topic via ROS, wherein bounding_boxes is a topic message containing the detection-box information sent through ROS; the topic comprises, for each identified tray support column, the tray type, the confidence of the support column, and the pixel coordinates of the top-left and bottom-right points of the detection box;
S22, creating two messages, YoloObject and YoloObjects, under ROS; YoloObject and YoloObjects are custom message types for storing the transformed coordinate information; a YoloObject message contains the label information of a single tray support column and the three-dimensional coordinates of its center point, and YoloObjects is used to store all the YoloObject messages;
S23, subscribing via ROS to the camera's color image, depth data, and camera-intrinsics topics together with bounding_boxes, and converting the two-dimensional pixel coordinates (u, v) of each support-column center into three-dimensional coordinates (x, y, z); the conversion is calculated as follows:
z = 0.001 * d
x = (u − c_x) * z / f_x
y = (v − c_y) * z / f_y   (4)
where d is the depth value at the pixel, f_x, f_y are the camera focal lengths, and c_x, c_y are the coordinates of the camera principal point;
S24, storing the three-dimensional coordinates (x, y, z) of the tray support-column center points and the tray type information in the YoloObject.
2. The ROS-based AGV forklift warehouse-position tray recognition and auxiliary positioning method of claim 1, wherein the detection and recognition step S1 comprises:
S11, shooting and collecting tray sample data in the warehouse position;
S12, labeling the sample data with the image annotation software LabelImg, box-selecting the support columns of the different tray types in each image and assigning the labels of the different tray types, saving and exporting the annotation files, and then dividing the data set;
S13, training the object detection model and building the deep-learning YOLOv4-CSP target detection network; the network structure of YOLOv4 is divided into four parts: input, backbone, neck, and head; the input end of YOLOv4 enriches the detection data set with methods including mosaic data augmentation and the SAT strategy; the BackBone of YOLOv4 uses the CSPDarknet53 network framework as the feature-extraction backbone; the neck part mainly adopts the SPP module together with FPN and PAN, the SPP module fusing feature maps of different scales to enlarge the receptive range of the backbone features, while the top-down FPN feature pyramid and the bottom-up PAN feature pyramid improve the feature-extraction ability of the network; the head of YOLOv4 is trained with the loss function L_CIOU, which accounts for the overlapping area, center-point distance, and aspect ratio of the prediction box A and the real box B when computing the bounding-box regression; L_CIOU is calculated as:
L_CIOU = 1 − IOU + Distance_2^2 / Distance_C^2 + v^2 / ((1 − IOU) + v)   (1)
where Distance_2 is the Euclidean distance between the center points of the prediction box and the real box, and Distance_C is the diagonal distance of the minimum enclosing rectangle of the prediction box and the real box; IOU (Intersection over Union) is a standard measure of how accurately the corresponding object is detected in a particular data set, calculated as:
IOU = |A ∩ B| / |A ∪ B|   (2)
where ∪ denotes the union of the two boxes and ∩ their intersection; in formula (1), v is a parameter measuring the consistency of the aspect ratios, calculated as:
v = (4 / π^2) * (arctan(w_gt / h_gt) − arctan(w_p / h_p))^2   (3)
where w_gt, h_gt are the width and height of the real box, w_p, h_p are the width and height of the prediction box, and arctan is the arctangent function;
the YOLOv4-CSP adds the cross-stage partial network structure CSPNet to the up-sampling and down-sampling stages of the FPN and PAN and to the SPP module of the neck part, splitting the feature map of the base layer into two parts and then merging them through a cross-stage hierarchical structure;
S14, training on a GPU computer to obtain a well-performing tray support-column detection and recognition model.
3. The ROS-based AGV forklift warehouse-position tray recognition and auxiliary positioning method of claim 1, wherein in step S24 the type information is stored in the YoloObject as follows:
comparing the x-axis coordinates of the support columns and sorting them from smallest to largest gives the left support column a, the middle support column b, and the right support column c; each YoloObject is then added to YoloObjects in turn, and YoloObjects is published via ROS on the custom point_point topic; point_point is the ROS topic created for sending the YoloObjects message.
4. The ROS-based AGV forklift warehouse-position tray recognition and auxiliary positioning method of claim 1, wherein the step of controlling the AGV forklift and the fork to move to the insertable position comprises:
S31, moving the AGV forklift to the tray warehouse position to be inserted, setting the fork control parameters, and controlling the fork to lift to the initial preparation position; the transformation matrix ^B_H T between the fork coordinate system {H} and the forklift body coordinate system {B} is determined from the lifting position;
S32, determining the transformation matrix ^H_C T from the camera coordinate system {C} to the fork coordinate system {H} through hand-eye calibration; the depth camera is turned on to perform the detection step of S2, and whether the tray at the warehouse position has been taken away is judged by detecting the tray support columns with the camera; if no tray support column is detected, the control system is notified that the tray has been taken away; if the detected tray type is incorrect, the control system is notified that the tray and goods are misplaced; if the support columns of the correct tray are detected, the result is sent out through the point_point topic;
S33, the AGV forklift subscribes to the point_point topic, and the coordinates ^C P_a, ^C P_b, ^C P_c of the center points of the three tray support columns a, b, c in the camera coordinate system {C} are converted into the forklift body coordinate system {B}:
^H P_i = ^H_C T * ^C P_i, i = a, b, c   (5)
^B P_i = ^B_H T * ^H P_i, i = a, b, c   (6)
where ^H P_a, ^H P_b, ^H P_c and ^B P_a, ^B P_b, ^B P_c are the coordinates of the three support-column center points in the fork coordinate system and in the AGV forklift body coordinate system, respectively;
S34, for the AGV forklift to be able to insert and pick the tray, the fork must directly face the tray, i.e., the center of the tray insertion face is aligned with the center point of the fork and the tray insertion face is parallel to the gantry face of the forklift; the three-dimensional coordinates of tray support columns a, b, c in the AGV forklift body coordinate system are ^B P_a = [x_a, y_a, z_a]^T, ^B P_b = [x_b, y_b, z_b]^T, ^B P_c = [x_c, y_c, z_c]^T; the center position of the tray is the coordinate of support column b, ^B P_b = [x_b, y_b, z_b]^T; the deflection angle θ of the tray insertion face relative to the forklift gantry face is solved, where l is the length of the tray:
Δz = z_c − z_a   (7)
θ = arcsin(Δz / l)   (8)
where arcsin is the arcsine function;
S35, the AGV forklift obtains the tray pose information {x_b, y_b, z_b, θ}, comprising the coordinates [x_b, y_b, z_b]^T of the center of the tray insertion face along the x, y, z axes of the forklift body coordinate system and the deflection angle θ of the tray insertion face relative to the forklift gantry face; the positions of the forklift body and the fork lifting height are adjusted according to this information so that the forks of the AGV forklift face the support holes of the tray, and the AGV forklift drives forward to insert and pick up the tray, completing the task.
5. An ROS-based AGV forklift warehouse-position tray recognition and auxiliary positioning system, comprising a memory and a processor, the memory storing a computer program, characterized in that: the processor, when executing the computer program, implements the method steps of any one of claims 1 to 4.
CN202110824995.3A 2021-07-21 2021-07-21 AGV forklift warehouse position tray identification and auxiliary positioning method and system based on ROS Active CN113537096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110824995.3A CN113537096B (en) 2021-07-21 2021-07-21 AGV forklift warehouse position tray identification and auxiliary positioning method and system based on ROS

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110824995.3A CN113537096B (en) 2021-07-21 2021-07-21 AGV forklift warehouse position tray identification and auxiliary positioning method and system based on ROS

Publications (2)

Publication Number Publication Date
CN113537096A CN113537096A (en) 2021-10-22
CN113537096B 2023-08-15

Family

ID=78100716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110824995.3A Active CN113537096B (en) 2021-07-21 2021-07-21 AGV forklift warehouse position tray identification and auxiliary positioning method and system based on ROS

Country Status (1)

Country Link
CN (1) CN113537096B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114195045B (en) * 2021-11-29 2023-11-07 宁波如意股份有限公司 Automatic forking method of unmanned forklift
CN114435434B (en) * 2021-12-28 2023-06-02 广州润易包装制品有限公司 Transfer trolley capable of assisting forklift loading
CN114170521B (en) * 2022-02-11 2022-06-17 杭州蓝芯科技有限公司 Forklift pallet butt joint identification positioning method
CN115676698B (en) * 2022-10-14 2023-05-09 哈尔滨科锐同创机模制造有限公司 Tray positioning method, system, device and medium based on mobile terminal equipment
CN115965855B (en) * 2023-02-14 2023-06-13 成都睿芯行科技有限公司 Method and device for improving tray identification precision
CN116443527B (en) * 2023-06-13 2023-09-08 上海木蚁机器人科技有限公司 Pallet fork method, device, equipment and medium based on laser radar
CN117068891B (en) * 2023-10-17 2024-01-26 中亿丰数字科技集团有限公司 Vertical transportation method and system for linkage elevator of AGV (automatic guided vehicle) carrying robot at construction site
CN117555308B (en) * 2024-01-12 2024-04-26 泉州装备制造研究所 Tray recycling method, system and storage medium based on unmanned forklift

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020040790A (en) * 2018-09-11 2020-03-19 三菱ロジスネクスト株式会社 Information processing device and information processing method
CN109829947A (en) * 2019-02-25 2019-05-31 北京旷视科技有限公司 Pose determines method, tray loading method, apparatus, medium and electronic equipment
JP2021024718A (en) * 2019-08-07 2021-02-22 株式会社豊田自動織機 Position and attitude estimation device
CN111080693A (en) * 2019-11-22 2020-04-28 天津大学 Robot autonomous classification grabbing method based on YOLOv3

Also Published As

Publication number Publication date
CN113537096A (en) 2021-10-22

Similar Documents

Publication Publication Date Title
CN113537096B (en) AGV forklift warehouse position tray identification and auxiliary positioning method and system based on ROS
EP3950539A1 (en) Intelligent warehousing system, processing terminal, warehousing robot, and intelligent warehousing method
CA3138243C (en) Tracking vehicles in a warehouse environment
CN110450153B (en) Mechanical arm object active picking method based on deep reinforcement learning
CN111275063B (en) Robot intelligent grabbing control method and system based on 3D vision
CN111243017B (en) Intelligent robot grabbing method based on 3D vision
CN110176078B (en) Method and device for labeling training set data
CN202924613U (en) Automatic control system for efficient loading and unloading work of container crane
Walter et al. A situationally aware voice‐commandable robotic forklift working alongside people in unstructured outdoor environments
DE102020114577A1 (en) CONTROL AND CONTROL PROCEDURES FOR ROBOTIC SYSTEM
CN109250380A (en) Storage access system and method
CN106647738A (en) Method and system for determining docking path of automated guided vehicle, and automated guided vehicle
Asadi et al. Automated object manipulation using vision-based mobile robotic system for construction applications
Shen et al. Parallel loading and unloading: smart technology towards intelligent logistics
Lee The study of mechanical arm and intelligent robot
CN112633590B (en) Intelligent warehousing method and system for four-way shuttle
CN117093009B (en) Logistics AGV trolley navigation control method and system based on machine vision
Tian et al. Object grasping of humanoid robot based on YOLO
CN113516322B (en) Factory obstacle risk assessment method and system based on artificial intelligence
US20230195134A1 (en) Path planning method
CN109830124A (en) A kind of fleet's obstacle avoidance system
WO2023092519A1 (en) Grabbing control method and apparatus, and electronic device and storage medium
Roennau et al. Grasping and retrieving unknown hazardous objects with a mobile manipulator
Zhang et al. Human-agv interaction: Real-time gesture detection using deep learning
CN113742546A (en) Visual intelligent container yard management system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant