CN113537096A - ROS-based AGV forklift storage tray identification and auxiliary positioning method and system - Google Patents

ROS-based AGV forklift storage tray identification and auxiliary positioning method and system Download PDF

Info

Publication number
CN113537096A
CN113537096A (application CN202110824995.3A)
Authority
CN
China
Prior art keywords
tray
fork
agv
coordinate system
pallet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110824995.3A
Other languages
Chinese (zh)
Other versions
CN113537096B (en)
Inventor
徐本连
李震
从金亮
鲁明丽
施健
吴迪
赵康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changshu Institute of Technology
Original Assignee
Changshu Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changshu Institute of Technology filed Critical Changshu Institute of Technology
Priority to CN202110824995.3A
Publication of CN113537096A
Application granted
Publication of CN113537096B
Active legal status
Anticipated expiration legal status

Links

Images

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66F HOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
    • B66F9/00 Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
    • B66F9/06 Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
    • B66F9/075 Constructional features or details
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66F HOISTING, LIFTING, HAULING OR PUSHING, NOT OTHERWISE PROVIDED FOR, e.g. DEVICES WHICH APPLY A LIFTING OR PUSHING FORCE DIRECTLY TO THE SURFACE OF A LOAD
    • B66F9/00 Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes
    • B66F9/06 Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
    • B66F9/075 Constructional features or details
    • B66F9/0755 Position control; Position detectors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Structural Engineering (AREA)
  • Transportation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Geology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Biomedical Technology (AREA)
  • Civil Engineering (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Warehouses Or Storage Devices (AREA)

Abstract

The invention discloses a ROS-based AGV forklift warehouse tray identification and auxiliary positioning method and system, comprising: a detection and identification step for trays in warehouse positions, in which a deep-learning-trained model predicts the tray support columns and their pixel positions in the image, and a depth camera is used to compute the three-dimensional coordinates of all support column centers in the camera coordinate system; a pallet pose calculation step, in which the coordinate transformations among the depth camera coordinate system, the AGV forklift fork coordinate system and the AGV forklift body coordinate system are computed to solve the pallet pose relative to the AGV forklift body; and a step of controlling the AGV forklift and its fork to move to an insertable position, so that the fork of the AGV forklift faces the cavities of the tray. The method detects quickly, can accurately detect and localize trays in warehouse positions in real time, improves working efficiency, and better assists the forklift in inserting and picking trays.

Description

ROS-based AGV forklift storage tray identification and auxiliary positioning method and system
Technical Field
The invention relates to a ROS-based AGV forklift warehouse-position tray identification and insertion-assist positioning method and system.
Background
With the rapid development of artificial intelligence technology, increasingly intelligent robots are an inevitable trend. As robot technology matures, the range of applications keeps expanding: industrial robots for industrial production and manufacturing, household service robots for people's daily life, medical robots that assist doctors and patients, military robots for national defense, and so on.
An AGV (Automated Guided Vehicle) forklift is a type of industrial robot with functions such as movement, automatic navigation, multi-sensor control, network interaction, and loading, unloading and consignment. As an artificially intelligent industrial handling vehicle, its main tasks include loading and unloading palletized goods and short-distance transport. With rising warehouse turnover and order volumes, more and more users are replacing traditional manual forklifts with unmanned AGV forklifts to guarantee working efficiency, and unmanned AGV forklifts have broad application prospects. Using vision and artificial intelligence technologies, they provide a flexible solution for the future unmanned upgrading of factory and warehouse logistics. The deep application of computer vision gives the unmanned forklift higher positioning and sensing precision; combined with motion control, deep learning and related technologies, the AGV forklift can quickly select and execute an optimal task-completion strategy.
In a factory where AGV forklifts load, unload and transport goods, a tray and its goods in a warehouse position may already have been removed manually. Before entering a station to pick goods, the AGV forklift therefore needs a vision camera to detect in real time whether the corresponding target warehouse position still holds a tray; if the goods have been carried away, the system is informed and the forklift does not continue to that warehouse position. On the other hand, goods may be misplaced, for example placed in a warehouse position reserved for another kind of goods. If different kinds of goods are placed on different types of trays, identifying the tray type with the vision camera distinguishes the goods and checks whether the goods to be picked are correct. Traditional machine vision methods mainly identify targets with hand-designed features and feature-point matching. In practice, such algorithms are often unsatisfactory because of complex target shapes and changing environmental illumination. Detection based on deep neural networks has now become the mainstream method, with excellent detection performance on public target detection datasets.
When a tray is placed in a warehouse position, it is sometimes not placed strictly as specified, so its position deviates in angle from the originally set placement. If the AGV forklift executes the insertion task along the originally planned path and program, the fork may be inserted inaccurately. In that case, the onboard sensor equipment is used to help the AGV forklift locate the tray on top of the originally planned path, so the forklift can insert, pick, load and unload the tray more accurately.
ROS (Robot Operating System) provides a series of libraries and tools to help software developers create robot application software, including hardware abstraction, device drivers, function libraries, visualization tools, message passing, and software package management. ROS supports several types of communication, including synchronous RPC (Remote Procedure Call) communication based on services, asynchronous data-stream communication based on topics, and data storage on the parameter server.
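For concreteness, topic-based communication of the kind used throughout this system can be sketched with rospy as below; this is a minimal illustration assuming ROS 1, and the node and topic names are placeholders rather than part of the invention.

```python
# Minimal ROS 1 topic publisher sketch (illustrative names, assuming rospy).
import rospy
from std_msgs.msg import String

rospy.init_node('talker')                                 # register this node with the ROS master
pub = rospy.Publisher('/chatter', String, queue_size=10)  # advertise a topic
rate = rospy.Rate(10)                                     # 10 Hz publishing loop
while not rospy.is_shutdown():
    pub.publish(String(data='hello'))                     # asynchronous data-stream communication
    rate.sleep()
```

A subscriber on the same topic registers a callback with rospy.Subscriber('/chatter', String, callback) and receives each message asynchronously.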
YOLO (You Only Look Once) is a deep learning target detection method characterized by high accuracy together with fast detection; YOLO uses a single neural network to directly predict object bounding boxes and class probabilities, achieving end-to-end object detection. YOLOv4 is the fourth-generation target detection method of the YOLO series; it inherits the ideas and concepts of the series, can be trained and tested on an ordinary GPU (Graphics Processing Unit), and obtains real-time, high-precision detection results.
Disclosure of Invention
1. Objects of the invention
The invention aims to help an AGV forklift, before entering a warehouse position to insert and pick a tray, detect whether the tray is still in its original position and whether its type is correct, avoiding repeated useless operations. At the same time, when the tray is misaligned because it was not placed strictly as specified, the insertion-assist positioning system helps the AGV forklift locate the tray accurately, so that the forklift completes the insertion and haulage operation better.
2. Technical solution adopted by the invention
The invention discloses a ROS-based AGV forklift warehouse tray identification and auxiliary positioning method, which comprises the following steps:
a tray detection and identification step, in which a deep-learning-trained model predicts the tray support columns and their pixel positions in the image, and a depth camera is used to compute the three-dimensional coordinates of all support column centers in the camera coordinate system;
a pallet pose calculation step, in which the coordinate transformations among the depth camera coordinate system, the AGV forklift fork coordinate system and the AGV forklift body coordinate system are computed to solve the pallet pose relative to the AGV forklift body;
and a step of controlling the AGV forklift and its fork to move to an insertable position, so that the fork of the AGV forklift faces the cavities of the tray.
Further, the detection and identification step S1 comprises:
S11, photographing and collecting tray sample data in the warehouse positions;
S12, labeling the sample data with the image annotation software LabelImg: box-selecting the support columns of the different tray types in each image, assigning labels for the different tray types, saving and exporting the annotation files, and dividing the dataset;
S13, training the object detection model: a target detection network is built based on the deep learning model YOLOv4-CSP; the network structure of YOLOv4 is divided into four parts: Input, BackBone, Neck and Head; at the Input end, YOLOv4 enriches the detection dataset with methods including mosaic data augmentation and the SAT strategy; the BackBone of YOLOv4 uses the CSPDarknet53 network as the feature-extraction backbone; the Neck part mainly adopts an SPP module together with FPN and PAN, where the SPP module fuses feature maps of different scales and enlarges the receptive field of the backbone features, while the top-down FPN feature pyramid and the bottom-up PAN feature pyramid improve the feature-extraction capability of the network; the loss function for training the Head part of YOLOv4 is $L_{CIOU}$, which, when computing the bounding-box regression, simultaneously considers the overlap area, the center-point distance and the aspect ratio of the prediction box A and the ground-truth box B; $L_{CIOU}$ is computed as follows:
$$L_{CIOU} = 1 - IOU + \frac{\mathrm{Distance\_2}^2}{\mathrm{Distance\_C}^2} + \frac{v^2}{(1 - IOU) + v} \tag{1}$$
where Distance_2 is the Euclidean distance between the center points of the prediction box and the ground-truth box, and Distance_C is the diagonal distance of the minimum enclosing rectangle of the prediction box and the ground-truth box; IOU (Intersection over Union) is a standard measure of the accuracy of detecting a corresponding object in a given dataset, computed as:
$$IOU = \frac{A \cap B}{A \cup B} \tag{2}$$
where ∪ denotes the union of the two boxes and ∩ denotes their intersection; v in formula (1) is a parameter measuring the consistency of the aspect ratio, computed as:
$$v = \frac{4}{\pi^2}\left(\arctan\frac{w_{gt}}{h_{gt}} - \arctan\frac{w_p}{h_p}\right)^2 \tag{3}$$
where $w_{gt}$ and $h_{gt}$ are the width and height of the ground-truth box, $w_p$ and $h_p$ are the width and height of the prediction box, and arctan is the arctangent function;
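To make the combination in equations (1)-(3) concrete, the following Python function computes $L_{CIOU}$ for two axis-aligned boxes given as (x1, y1, x2, y2); it is a sketch written directly from the formulas above, not code from the patent.

```python
# CIoU loss per equations (1)-(3); boxes are (x1, y1, x2, y2) tuples.
import math

def ciou_loss(pred, gt):
    # IOU, equation (2): intersection over union of the two boxes
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter)

    # Distance_2^2: squared Euclidean distance between the box centers
    d2 = ((pred[0] + pred[2]) / 2 - (gt[0] + gt[2]) / 2) ** 2 \
       + ((pred[1] + pred[3]) / 2 - (gt[1] + gt[3]) / 2) ** 2
    # Distance_C^2: squared diagonal of the minimum enclosing rectangle
    dc2 = (max(pred[2], gt[2]) - min(pred[0], gt[0])) ** 2 \
        + (max(pred[3], gt[3]) - min(pred[1], gt[1])) ** 2

    # v, equation (3): aspect-ratio consistency term
    wp, hp = pred[2] - pred[0], pred[3] - pred[1]
    wg, hg = gt[2] - gt[0], gt[3] - gt[1]
    v = (4 / math.pi ** 2) * (math.atan(wg / hg) - math.atan(wp / hp)) ** 2

    # Equation (1)
    return 1 - iou + d2 / dc2 + v ** 2 / ((1 - iou) + v)
```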
YOLOv4-CSP adds the cross-stage partial network structure CSPNet to the up-sampling and down-sampling stages and the SPP module of the FPN and PAN in the Neck part; CSPNet splits the feature map of the base layer into two parts and then merges them through a cross-stage hierarchical structure;
S14, a well-performing tray support column detection and identification model is obtained by training on a GPU computer.
Further, the tray pose calculation step S2 comprises:
S21, opening the depth camera on the forklift fork and running the YOLOv4-CSP detection model; the detected data are published through ROS on the bounding_boxes topic, whose messages contain the detection-box information; the topic includes, for each identified tray support column, the tray type, the confidence of the support column, and the pixel coordinates of the upper-left and lower-right points of the detection box;
S22, creating two message types, YoloObject and YoloObjects, under ROS; YoloObject and YoloObjects are self-defined message types used to store the converted coordinate information; a YoloObject message contains the label information and the center-point three-dimensional coordinates of a single tray support column, and YoloObjects is used to store all YoloObject messages;
S23, subscribing through ROS to the bounding_boxes topic and to the camera's color image, depth data and intrinsics topics, and converting the two-dimensional pixel coordinates (u, v) of a support column center point into three-dimensional coordinates (x, y, z) as follows:
$$z = 0.001 \cdot d \tag{4}$$

$$x = \frac{(u - c_x) \cdot z}{f_x} \tag{5}$$

$$y = \frac{(v - c_y) \cdot z}{f_y} \tag{6}$$
where d is the depth value of the pixel, $f_x$ and $f_y$ are the camera focal lengths, and $(c_x, c_y)$ is the camera principal point;
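A minimal sketch of this back-projection, assuming a depth image registered to the color image with depth in millimeters and intrinsics taken from the camera's intrinsics topic:

```python
# Pixel (u, v) with depth d -> camera-frame point (x, y, z), equations (4)-(6).
def pixel_to_point(u, v, d, fx, fy, cx, cy):
    z = 0.001 * d            # millimeters to meters, equation (4)
    x = (u - cx) * z / fx    # equation (5)
    y = (v - cy) * z / fy    # equation (6)
    return x, y, z
```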
S24, storing the three-dimensional coordinate (x, y, z) information of the tray support column center points and the tray type information in YoloObject messages.
Further, in step S24, the category information is stored in YoloObject as follows:
comparing the x-axis coordinates of the support columns and sorting them in ascending order, giving support column a on the left, support column b in the middle and support column c on the right; then adding each YoloObject in turn to YoloObjects, and publishing YoloObjects through the self-defined point_point topic under ROS; the point_point topic is the ROS topic created to send YoloObjects messages.
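A sketch of this ordering-and-publishing step is given below; the YoloObject/YoloObjects classes and their field names are assumptions standing in for the self-defined messages, not the patent's actual definitions.

```python
# Sketch of step S24 with assumed message classes and field names.
def publish_columns(columns, pub):
    # columns: list of (label, (x, y, z)) for the detected support columns.
    # Sorting by the x coordinate orders them: left a, middle b, right c.
    ordered = sorted(columns, key=lambda c: c[1][0])
    msg = YoloObjects()                  # hypothetical custom message type
    for label, (x, y, z) in ordered:
        msg.objects.append(YoloObject(label=label, x=x, y=y, z=z))
    pub.publish(msg)                     # pub advertises the point_point topic
```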
Further, in step S3, the AGV forklift receives the detection information and controls the forklift and the fork to move to the insertable position; the specific steps are as follows:
S31, moving the AGV forklift to the tray warehouse position to be picked, setting the fork control parameters, and controlling the fork to lift to the initial preparation position; determining from the lifting position the transformation matrix $^{B}_{H}T$ between the fork coordinate system {H} and the forklift body coordinate system {B};
S32, determining through hand-eye calibration the transformation matrix $^{H}_{C}T$ from the camera coordinate system {C} to the fork coordinate system {H};
opening the depth camera and performing detection step S2: by detecting the tray support columns with the camera, judging whether the tray in the warehouse position has been taken away and judging the tray type; if no tray support columns are detected, prompting the control system that the tray has been taken away; if an incorrect tray type is detected, prompting the control system that the tray is in a wrong goods placement position; if the support columns of the correct tray are detected, publishing the result through the point_point topic;
S33, subscribing to the point_point topic by the AGV forklift, and converting the three-dimensional coordinates $^{C}P_a$, $^{C}P_b$, $^{C}P_c$ of the center points of the three tray support columns a, b and c in the camera coordinate system {C} into the forklift body coordinate system {B}:

$${}^{H}P_i = {}^{H}_{C}T \, {}^{C}P_i \qquad (i = a, b, c)$$

$${}^{B}P_i = {}^{B}_{H}T \, {}^{H}P_i \qquad (i = a, b, c)$$
where $^{H}P_a$, $^{H}P_b$, $^{H}P_c$ and $^{B}P_a$, $^{B}P_b$, $^{B}P_c$ are the center-point coordinates of the three support columns in the fork coordinate system and in the AGV forklift body coordinate system, respectively;
S34, for the AGV forklift to insert and pick the tray, the fork must directly face the tray, i.e. the center of the tray insertion face is aligned with the fork center point and the tray insertion face is parallel to the forklift mast plane; the three-dimensional coordinates of tray support columns a, b and c in the AGV forklift body coordinate system are $^{B}P_a = [x_a, y_a, z_a]^T$, $^{B}P_b = [x_b, y_b, z_b]^T$ and $^{B}P_c = [x_c, y_c, z_c]^T$; the tray center position is the coordinate $^{B}P_b = [x_b, y_b, z_b]^T$ of support column b; the deflection angle θ of the tray insertion face relative to the forklift mast plane is solved, with l the length of the tray:
$$\Delta z = z_c - z_a \tag{7}$$

$$\theta = \arcsin\frac{\Delta z}{l} \tag{8}$$
where arcsin is the arcsine function;
S35, the AGV forklift obtains the tray pose information $\{x_b, y_b, z_b, \theta\}$, comprising the coordinates $[x_b, y_b, z_b]^T$ of the center of the tray insertion face along the x, y and z axes of the forklift body coordinate system and the deflection angle θ of the tray insertion face relative to the forklift mast plane; with this information, the forklift body position and the fork lifting height are adjusted so that the fork of the AGV forklift faces the fork holes of the tray, and the AGV forklift drives forward to insert and pick the tray, completing the field task.
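Steps S33-S35 can be condensed into the following sketch, assuming 4x4 homogeneous transform matrices for $^{B}_{H}T$ and $^{H}_{C}T$ obtained from the lift position and the hand-eye calibration; the function and variable names are illustrative.

```python
# Sketch of steps S33-S35: camera-frame column centers -> tray pose in the body frame.
import numpy as np

def tray_pose_in_body(T_BH, T_HC, P_a, P_b, P_c, tray_length):
    # Chain the transforms: camera {C} -> fork {H} -> body {B}
    T_BC = T_BH @ T_HC
    to_body = lambda p: (T_BC @ np.append(p, 1.0))[:3]  # apply homogeneous transform
    Pa, Pb, Pc = (to_body(p) for p in (P_a, P_b, P_c))

    dz = Pc[2] - Pa[2]                    # equation (7)
    theta = np.arcsin(dz / tray_length)   # equation (8)
    return Pb, theta                      # insertion-face center and deflection angle
```

The returned center and angle are exactly the pose information $\{x_b, y_b, z_b, \theta\}$ used to adjust the forklift body and fork height before driving forward.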
The invention further provides a ROS-based AGV forklift warehouse tray identification and auxiliary positioning system, comprising a memory and a processor, the memory storing a computer program; when the processor executes the computer program, the steps of the above method are realized.
3. Advantageous effects of the invention
1) The invention identifies the tray well with a deep learning detection model. The traditional method extracts the tray contour from point cloud information and decides whether an object is a tray by judging, from the widths between contour edges, whether a region is a hole or a support column. Compared with this traditional method, the deep learning detection method of the invention achieves a better recognition effect, higher accuracy and better robustness.
2) The invention trains the tray samples with YOLOv4-CSP, an improved YOLOv4 deep learning network structure; the trained network model detects faster, so the AGV forklift can detect trays in warehouse positions in real time and working efficiency is improved.
3) Compared with the traditional preset AGV coordinate trajectory, when the tray deviates from its original position because it was not placed strictly, the insertion-assist positioning system of the invention locates the tray with the depth camera and the deep learning detection model, better assisting the forklift in inserting and picking the tray.
4) The invention runs under the ROS framework; results between modules are published and subscribed as topics through the ROS system for communication, so the system runs more smoothly in real time.
Drawings
FIG. 1 is a training flow diagram of a deep learning detection recognition model.
FIG. 2 is a flow chart of AGV fork truck storage tray identification and insertion auxiliary positioning system.
Fig. 3 shows the types of trays handled by the AGV forklift; a, b and c denote the support columns of the tray.
FIG. 4 is a top view of the pallet's pose with respect to the AGV fork truck.
Fig. 5 is a visual diagram of topic publishing and subscribing of the detection identification module and the AGV forklift control module under the ROS.
FIG. 6 is a schematic diagram of an AGV fork truck configuration.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings; obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments of the present invention without inventive effort fall within the protection scope of the present invention.
The present invention will be described in further detail with reference to the accompanying drawings.
Example 1
A ROS-based AGV forklift warehouse-position tray identification and insertion-assist positioning system comprises two parts: detection and identification of the tray in the warehouse position, and calculation of the tray pose. The detection part comprises a depth camera and a computer carrying a GPU; the computer uses the deep-learning-trained model to predict the tray support columns and their pixel positions in the image, and uses the depth camera to compute the three-dimensional coordinates of all support column centers in the camera coordinate system. The tray pose calculation part computes the tray pose relative to the AGV forklift body through the coordinate transformations among the depth camera coordinate system, the fork coordinate system and the forklift body coordinate system, and the AGV forklift is then controlled to move so that its fork faces the tray cavities. With this ROS-based warehouse-position tray identification and insertion-assist positioning system, the AGV forklift can insert and haul trays in warehouse positions more effectively, improving production efficiency.
The main implementation steps of the invention comprise:
s1, training a detection recognition model of the tray based on an improved YOLOv4 target detection network YOLOv4-CSP, and specifically comprising the following steps:
S11, photograph and collect tray sample data in the warehouse positions.
And S12, labeling the sample data by using image labeling software LabelImg, selecting support columns of different types of trays in each image, and labeling labels of the different types of trays. And storing and outputting the marking file, and then dividing the data set, wherein the proportion of the training set to the testing set is 9: 1.
S13, train the object detection model: a target detection network is built based on the deep learning model YOLOv4-CSP. The network structure of YOLOv4 can be divided into four parts: Input, BackBone, Neck and Head. At the Input end, YOLOv4 uses strategies such as mosaic data augmentation and SAT (Self-Adversarial Training) to enrich the detection dataset. The BackBone of YOLOv4 uses the CSPDarknet53 network as the feature-extraction backbone. The Neck part mainly adopts an SPP (Spatial Pyramid Pooling) module together with an FPN (Feature Pyramid Network) and a PAN (Path Aggregation Network): the SPP module fuses feature maps of different scales and effectively enlarges the receptive field of the backbone features, while the top-down FPN feature pyramid and the bottom-up PAN feature pyramid improve the feature-extraction capability of the network. The loss function for training the Head part of YOLOv4 is $L_{CIOU}$, which, when computing the bounding-box regression, simultaneously considers the overlap area, the center-point distance and the aspect ratio of the prediction box A and the ground-truth box B. $L_{CIOU}$ is computed as follows:
$$L_{CIOU} = 1 - IOU + \frac{\mathrm{Distance\_2}^2}{\mathrm{Distance\_C}^2} + \frac{v^2}{(1 - IOU) + v} \tag{1}$$
where Distance_2 is the Euclidean distance between the center points of the prediction box and the ground-truth box, and Distance_C is the diagonal distance of the minimum enclosing rectangle of the prediction box and the ground-truth box. IOU (Intersection over Union) is a standard measure of the accuracy of detecting a corresponding object in a given dataset, computed as:
$$IOU = \frac{A \cap B}{A \cup B} \tag{2}$$
where ∪ denotes the union of the two boxes and ∩ denotes their intersection. v in formula (1) is a parameter measuring the consistency of the aspect ratio, computed as:
$$v = \frac{4}{\pi^2}\left(\arctan\frac{w_{gt}}{h_{gt}} - \arctan\frac{w_p}{h_p}\right)^2 \tag{3}$$
where $w_{gt}$ and $h_{gt}$ are the width and height of the ground-truth box, $w_p$ and $h_p$ are the width and height of the prediction box, and arctan is the arctangent function.
Compared with YOLOv4, YOLOv4-CSP adds the cross-stage partial network structure CSPNet (Cross Stage Partial Network) to the up-sampling and down-sampling stages and the SPP module of the FPN and PAN in the Neck part. CSPNet splits the feature map of the base layer into two parts, which are then merged through a cross-stage hierarchy structure. The CSPNet structure can enhance the learning capability of the CNN (Convolutional Neural Network), keep the model light while maintaining its accuracy, reduce the computational bottleneck of the whole model, and reduce the memory cost of the algorithm. The main purpose of CSPNet is to achieve richer gradient combinations while reducing the computational load. On an ordinary GPU processor, the YOLOv4-CSP network model achieves a better balance of speed and accuracy than YOLOv4.
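The split-and-merge idea of CSPNet can be illustrated with a short PyTorch sketch; the layer sizes are illustrative and do not reproduce the patent's actual network.

```python
# Minimal CSP-style block: split the base-layer feature map, transform one
# part, then merge across the stage (illustrative sizes, assuming PyTorch).
import torch
import torch.nn as nn

class CSPBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.path = nn.Sequential(            # the transformed half
            nn.Conv2d(half, half, 3, padding=1),
            nn.BatchNorm2d(half),
            nn.LeakyReLU(0.1),
        )
        self.merge = nn.Conv2d(channels, channels, 1)  # transition after the merge

    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)        # split the feature map into two parts
        b = self.path(b)                       # only one part goes through the conv path
        return self.merge(torch.cat([a, b], dim=1))  # cross-stage merge
```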
And S14, obtaining the tray support column detection and recognition model with good effect on the GPU computer through training.
S2, detect the tray with the trained model, compute the three-dimensional coordinates of the centers of the three tray support columns with the depth camera, and send the detected data messages to the AGV forklift through ROS; the specific steps are as follows:
S21, open the depth camera on the forklift fork and run the YOLOv4-CSP detection model; the detected data are published through ROS on the bounding_boxes topic. The bounding_boxes messages contain the detection-box information and are sent over ROS. The topic includes, for each identified tray support column, the tray type, the confidence of the support column, the pixel coordinates of the upper-left and lower-right points of the detection box, and so on.
S22, create two message types, YoloObject and YoloObjects, under ROS. YoloObject and YoloObjects are self-defined message types mainly used to store the converted coordinate information. A YoloObject message contains the label information and the center-point three-dimensional coordinates of a single tray support column, and YoloObjects is used to store all YoloObject messages.
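As an illustration, the two self-defined messages could be declared as ROS .msg files along the following lines; the field names here are assumptions, not the patent's actual definitions.

```
# YoloObject.msg (hypothetical field names)
string label      # tray type of this support column
float64 x         # center-point coordinates
float64 y
float64 z

# YoloObjects.msg
YoloObject[] objects
```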
S23, subscribe through ROS to the bounding_boxes topic and to the camera's color image, depth data and intrinsics topics, and convert the two-dimensional pixel coordinates (u, v) of a support column center point into three-dimensional coordinates (x, y, z) as follows:
$$z = 0.001 \cdot d \tag{4}$$

$$x = \frac{(u - c_x) \cdot z}{f_x} \tag{5}$$

$$y = \frac{(v - c_y) \cdot z}{f_y} \tag{6}$$
where d is the depth value of the pixel, $f_x$ and $f_y$ are the camera focal lengths, and $(c_x, c_y)$ is the camera principal point.
S24, store the three-dimensional coordinate (x, y, z) information of the tray support column center points and the tray type information in YoloObject messages. The x-axis coordinates of the support columns are compared and sorted in ascending order, giving support column a on the left, support column b in the middle and support column c on the right. Each YoloObject is then added in turn to YoloObjects, and YoloObjects is published through the self-defined point_point topic under ROS. The point_point topic is the ROS topic created to send YoloObjects messages.
And S3, the AGV fork truck receives the detection information and controls the AGV fork truck and the fork truck to move to the inserting position. The method comprises the following specific steps:
S31, the AGV forklift moves to the tray warehouse position to be picked, the fork control parameters are set, and the fork is controlled to lift to the initial preparation position. The transformation matrix $^{B}_{H}T$ between the fork coordinate system {H} and the forklift body coordinate system {B} is determined from the lifting position.
S32, the transformation matrix $^{H}_{C}T$ from the camera coordinate system {C} to the fork coordinate system {H} is determined through hand-eye calibration.
The depth camera is opened and detection step S2 is performed: by detecting the tray support columns with the camera, it is judged whether the tray in the warehouse position has been taken away and what the tray type is. If no tray support columns are detected, the control system is prompted that the tray has been taken away; if an incorrect tray type is detected, the control system is prompted that the tray is in a wrong goods placement position; if the support columns of the correct tray are detected, the result is published through the point_point topic.
S33, the AGV forklift subscribes to the point_point topic and converts the three-dimensional coordinates $^{C}P_a$, $^{C}P_b$, $^{C}P_c$ of the center points of the three tray support columns a, b and c in the camera coordinate system {C} into the forklift body coordinate system {B}:

$${}^{H}P_i = {}^{H}_{C}T \, {}^{C}P_i \qquad (i = a, b, c)$$

$${}^{B}P_i = {}^{B}_{H}T \, {}^{H}P_i \qquad (i = a, b, c)$$
where $^{H}P_a$, $^{H}P_b$, $^{H}P_c$ and $^{B}P_a$, $^{B}P_b$, $^{B}P_c$ are the center-point coordinates of the three support columns in the fork coordinate system and in the AGV forklift body coordinate system, respectively.
S34, for the AGV forklift to insert and pick the tray, the fork must directly face the tray, i.e. the center of the tray insertion face is aligned with the fork center point and the tray insertion face is parallel to the forklift mast plane. The three-dimensional coordinates of tray support columns a, b and c in the AGV forklift body coordinate system are $^{B}P_a = [x_a, y_a, z_a]^T$, $^{B}P_b = [x_b, y_b, z_b]^T$ and $^{B}P_c = [x_c, y_c, z_c]^T$. The tray center position is the coordinate $^{B}P_b = [x_b, y_b, z_b]^T$ of support column b. The deflection angle θ of the tray insertion face relative to the forklift mast plane is solved, with l the length of the tray:
$$\Delta z = z_c - z_a \tag{7}$$

$$\theta = \arcsin\frac{\Delta z}{l} \tag{8}$$
where arcsin is the arcsine function.
S35, the AGV forklift obtains the tray pose information $\{x_b, y_b, z_b, \theta\}$, which contains the coordinates $[x_b, y_b, z_b]^T$ of the center of the tray insertion face along the x, y and z axes of the forklift body coordinate system and the deflection angle θ of the tray insertion face relative to the forklift mast plane. With this information, the forklift body position and the fork lifting height are adjusted so that the fork of the AGV forklift faces the fork holes of the tray, and the AGV forklift drives forward to insert and pick the tray, completing the field task.
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any change or substitution that a person skilled in the art can easily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A ROS-based AGV forklift warehouse-position tray identification and auxiliary positioning method, characterized by comprising the following steps:
a tray detection and identification step, in which a deep-learning-trained model predicts the tray support columns and their pixel positions in the image, and a depth camera is used to compute the three-dimensional coordinates of all support column centers in the camera coordinate system;
a pallet pose calculation step, in which the coordinate transformations among the depth camera coordinate system, the AGV forklift fork coordinate system and the AGV forklift body coordinate system are computed to solve the pallet pose relative to the AGV forklift body;
and a step of controlling the AGV forklift and its fork to move to an insertable position, so that the fork of the AGV forklift faces the cavities of the tray.
2. The ROS-based AGV forklift warehouse-position tray identification and auxiliary positioning method according to claim 1, wherein the detection and identification step S1 comprises:
S11, photographing and collecting tray sample data in the warehouse positions;
S12, labeling the sample data with the image annotation software LabelImg: box-selecting the support columns of the different tray types in each image, assigning labels for the different tray types, saving and exporting the annotation files, and dividing the dataset;
S13, training the object detection model: a target detection network is built based on the deep learning model YOLOv4-CSP; the network structure of YOLOv4 is divided into four parts: Input, BackBone, Neck and Head; at the Input end, YOLOv4 enriches the detection dataset with methods including mosaic data augmentation and the SAT strategy; the BackBone of YOLOv4 uses the CSPDarknet53 network as the feature-extraction backbone; the Neck part mainly adopts an SPP module together with FPN and PAN, where the SPP module fuses feature maps of different scales and enlarges the receptive field of the backbone features, while the top-down FPN feature pyramid and the bottom-up PAN feature pyramid improve the feature-extraction capability of the network; the loss function for training the Head part of YOLOv4 is $L_{CIOU}$, which, when computing the bounding-box regression, simultaneously considers the overlap area, the center-point distance and the aspect ratio of the prediction box A and the ground-truth box B; $L_{CIOU}$ is computed as follows:
$$L_{CIOU} = 1 - IOU + \frac{\mathrm{Distance\_2}^2}{\mathrm{Distance\_C}^2} + \frac{v^2}{(1 - IOU) + v} \tag{1}$$
where Distance_2 is the Euclidean distance between the center points of the prediction box and the ground-truth box, and Distance_C is the diagonal distance of the minimum enclosing rectangle of the prediction box and the ground-truth box; IOU (Intersection over Union) is a standard measure of the accuracy of detecting a corresponding object in a given dataset, computed as:
$$IOU = \frac{A \cap B}{A \cup B} \tag{2}$$
where ∪ denotes the union of the two boxes and ∩ denotes their intersection; v in formula (1) is a parameter measuring the consistency of the aspect ratio, computed as:
$$v = \frac{4}{\pi^2}\left(\arctan\frac{w_{gt}}{h_{gt}} - \arctan\frac{w_p}{h_p}\right)^2 \tag{3}$$
where $w_{gt}$ and $h_{gt}$ are the width and height of the ground-truth box, $w_p$ and $h_p$ are the width and height of the prediction box, and arctan is the arctangent function;
YOLOv4-CSP adds the cross-stage partial network structure CSPNet to the up-sampling and down-sampling stages and the SPP module of the FPN and PAN in the Neck part; CSPNet splits the feature map of the base layer into two parts and then merges them through a cross-stage hierarchical structure;
S14, a well-performing tray support column detection and identification model is obtained by training on a GPU computer.
3. The ROS-based AGV forklift warehouse-position tray identification and auxiliary positioning method according to claim 2, wherein the tray pose calculation step S2 comprises:
S21, opening the depth camera on the forklift fork and running the YOLOv4-CSP detection model; the detected data are published through ROS on the bounding_boxes topic, whose messages contain the detection-box information; the topic includes, for each identified tray support column, the tray type, the confidence of the support column, and the pixel coordinates of the upper-left and lower-right points of the detection box;
S22, creating two message types, YoloObject and YoloObjects, under ROS; YoloObject and YoloObjects are self-defined message types used to store the converted coordinate information; a YoloObject message contains the label information and the center-point three-dimensional coordinates of a single tray support column, and YoloObjects is used to store all YoloObject messages;
S23, subscribing through ROS to the bounding_boxes topic and to the camera's color image, depth data and intrinsics topics, and converting the two-dimensional pixel coordinates (u, v) of a support column center point into three-dimensional coordinates (x, y, z) as follows:
$$z = 0.001 \cdot d \tag{4}$$

$$x = \frac{(u - c_x) \cdot z}{f_x} \tag{5}$$

$$y = \frac{(v - c_y) \cdot z}{f_y} \tag{6}$$
where d is the depth value of the pixel, $f_x$ and $f_y$ are the camera focal lengths, and $(c_x, c_y)$ is the camera principal point;
S24, storing the three-dimensional coordinate (x, y, z) information of the tray support column center points and the tray type information in YoloObject messages.
4. The ROS-based AGV forklift warehouse-position tray identification and auxiliary positioning method according to claim 2, wherein in step S24 the category information is stored in YoloObject as follows:
comparing the x-axis coordinates of the support columns and sorting them in ascending order, giving support column a on the left, support column b in the middle and support column c on the right; then adding each YoloObject in turn to YoloObjects, and publishing YoloObjects through the self-defined point_point topic under ROS; the point_point topic is the ROS topic created to send YoloObjects messages.
5. The ROS-based AGV forklift warehouse-position tray identification and auxiliary positioning method according to claim 3, wherein in step S3 the AGV forklift receives the detection information and controls the forklift and the fork to move to the insertable position; the specific steps are as follows:
S31, moving the AGV forklift to the tray warehouse position to be picked, setting the fork control parameters, and controlling the fork to lift to the initial preparation position; determining from the lifting position the transformation matrix $^{B}_{H}T$ between the fork coordinate system {H} and the forklift body coordinate system {B};
S32, determining through hand-eye calibration the transformation matrix $^{H}_{C}T$ from the camera coordinate system {C} to the fork coordinate system {H};
opening the depth camera and performing detection step S2: by detecting the tray support columns with the camera, judging whether the tray in the warehouse position has been taken away and judging the tray type; if no tray support columns are detected, prompting the control system that the tray has been taken away; if an incorrect tray type is detected, prompting the control system that the tray is in a wrong goods placement position; if the support columns of the correct tray are detected, publishing the result through the point_point topic;
S33, subscribing to the point_point topic by the AGV forklift, and converting the three-dimensional coordinates $^{C}P_a$, $^{C}P_b$, $^{C}P_c$ of the center points of the three tray support columns a, b and c in the camera coordinate system {C} into the forklift body coordinate system {B}:

$${}^{H}P_i = {}^{H}_{C}T \, {}^{C}P_i \qquad (i = a, b, c)$$

$${}^{B}P_i = {}^{B}_{H}T \, {}^{H}P_i \qquad (i = a, b, c)$$
where $^{H}P_a$, $^{H}P_b$, $^{H}P_c$ and $^{B}P_a$, $^{B}P_b$, $^{B}P_c$ are the center-point coordinates of the three support columns in the fork coordinate system and in the AGV forklift body coordinate system, respectively;
S34, for the AGV forklift to insert and pick the tray, the fork must directly face the tray, i.e. the center of the tray insertion face is aligned with the fork center point and the tray insertion face is parallel to the forklift mast plane; the three-dimensional coordinates of tray support columns a, b and c in the AGV forklift body coordinate system are $^{B}P_a = [x_a, y_a, z_a]^T$, $^{B}P_b = [x_b, y_b, z_b]^T$ and $^{B}P_c = [x_c, y_c, z_c]^T$; the tray center position is the coordinate $^{B}P_b = [x_b, y_b, z_b]^T$ of support column b; the deflection angle θ of the tray insertion face relative to the forklift mast plane is solved, with l the length of the tray:
$$\Delta z = z_c - z_a \tag{7}$$

$$\theta = \arcsin\frac{\Delta z}{l} \tag{8}$$
where arcsin is the arcsine function;
S35, the AGV forklift obtains the tray pose information $\{x_b, y_b, z_b, \theta\}$, comprising the coordinates $[x_b, y_b, z_b]^T$ of the center of the tray insertion face along the x, y and z axes of the forklift body coordinate system and the deflection angle θ of the tray insertion face relative to the forklift mast plane; with this information, the forklift body position and the fork lifting height are adjusted so that the fork of the AGV forklift faces the fork holes of the tray, and the AGV forklift drives forward to insert and pick the tray, completing the field task.
6. A ROS-based AGV forklift warehouse-position tray identification and auxiliary positioning system, comprising a memory and a processor, the memory storing a computer program, characterized in that: the processor, when executing the computer program, implements the method steps of any one of claims 1-5.
CN202110824995.3A 2021-07-21 2021-07-21 AGV forklift warehouse position tray identification and auxiliary positioning method and system based on ROS Active CN113537096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110824995.3A CN113537096B (en) 2021-07-21 2021-07-21 AGV forklift warehouse position tray identification and auxiliary positioning method and system based on ROS

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110824995.3A CN113537096B (en) 2021-07-21 2021-07-21 AGV forklift warehouse position tray identification and auxiliary positioning method and system based on ROS

Publications (2)

Publication Number Publication Date
CN113537096A true CN113537096A (en) 2021-10-22
CN113537096B CN113537096B (en) 2023-08-15

Family

ID=78100716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110824995.3A Active CN113537096B (en) 2021-07-21 2021-07-21 AGV forklift warehouse position tray identification and auxiliary positioning method and system based on ROS

Country Status (1)

Country Link
CN (1) CN113537096B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114170521A (en) * 2022-02-11 2022-03-11 杭州蓝芯科技有限公司 Forklift pallet butt joint identification positioning method
CN114195045A (en) * 2021-11-29 2022-03-18 宁波如意股份有限公司 Automatic forking method of unmanned forklift
CN114435434A (en) * 2021-12-28 2022-05-06 广州润易包装制品有限公司 Transfer trolley capable of assisting forklift in loading
CN115676698A (en) * 2022-10-14 2023-02-03 哈尔滨科锐同创机模制造有限公司 Tray positioning method, system, device and medium based on mobile terminal equipment
CN115965855A (en) * 2023-02-14 2023-04-14 成都睿芯行科技有限公司 Method and device for improving tray identification precision
CN116443527A (en) * 2023-06-13 2023-07-18 上海木蚁机器人科技有限公司 Pallet fork method, device, equipment and medium based on laser radar
CN117068891A (en) * 2023-10-17 2023-11-17 中亿丰数字科技集团有限公司 Vertical transportation method and system for linkage elevator of AGV (automatic guided vehicle) carrying robot at construction site
CN117555308A (en) * 2024-01-12 2024-02-13 泉州装备制造研究所 Tray recycling method, system and storage medium based on unmanned forklift

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829947A (en) * 2019-02-25 2019-05-31 北京旷视科技有限公司 Pose determines method, tray loading method, apparatus, medium and electronic equipment
JP2020040790A (en) * 2018-09-11 2020-03-19 三菱ロジスネクスト株式会社 Information processing device and information processing method
CN111080693A (en) * 2019-11-22 2020-04-28 天津大学 Robot autonomous classification grabbing method based on YOLOv3
JP2021024718A (en) * 2019-08-07 2021-02-22 株式会社豊田自動織機 Position and attitude estimation device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020040790A (en) * 2018-09-11 2020-03-19 三菱ロジスネクスト株式会社 Information processing device and information processing method
CN109829947A (en) * 2019-02-25 2019-05-31 北京旷视科技有限公司 Pose determines method, tray loading method, apparatus, medium and electronic equipment
JP2021024718A (en) * 2019-08-07 2021-02-22 株式会社豊田自動織機 Position and attitude estimation device
CN111080693A (en) * 2019-11-22 2020-04-28 天津大学 Robot autonomous classification grabbing method based on YOLOv3

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114195045B (en) * 2021-11-29 2023-11-07 宁波如意股份有限公司 Automatic forking method of unmanned forklift
CN114195045A (en) * 2021-11-29 2022-03-18 宁波如意股份有限公司 Automatic forking method of unmanned forklift
CN114435434A (en) * 2021-12-28 2022-05-06 广州润易包装制品有限公司 Transfer trolley capable of assisting forklift in loading
CN114170521A (en) * 2022-02-11 2022-03-11 杭州蓝芯科技有限公司 Forklift pallet butt joint identification positioning method
CN115676698A (en) * 2022-10-14 2023-02-03 哈尔滨科锐同创机模制造有限公司 Tray positioning method, system, device and medium based on mobile terminal equipment
CN115965855A (en) * 2023-02-14 2023-04-14 成都睿芯行科技有限公司 Method and device for improving tray identification precision
CN115965855B (en) * 2023-02-14 2023-06-13 成都睿芯行科技有限公司 Method and device for improving tray identification precision
CN116443527A (en) * 2023-06-13 2023-07-18 上海木蚁机器人科技有限公司 Pallet fork method, device, equipment and medium based on laser radar
CN116443527B (en) * 2023-06-13 2023-09-08 上海木蚁机器人科技有限公司 Pallet fork method, device, equipment and medium based on laser radar
CN117068891A (en) * 2023-10-17 2023-11-17 中亿丰数字科技集团有限公司 Vertical transportation method and system for linkage elevator of AGV (automatic guided vehicle) carrying robot at construction site
CN117068891B (en) * 2023-10-17 2024-01-26 中亿丰数字科技集团有限公司 Vertical transportation method and system for linkage elevator of AGV (automatic guided vehicle) carrying robot at construction site
CN117555308A (en) * 2024-01-12 2024-02-13 泉州装备制造研究所 Tray recycling method, system and storage medium based on unmanned forklift
CN117555308B (en) * 2024-01-12 2024-04-26 泉州装备制造研究所 Tray recycling method, system and storage medium based on unmanned forklift

Also Published As

Publication number Publication date
CN113537096B (en) 2023-08-15

Similar Documents

Publication Publication Date Title
CN113537096A (en) ROS-based AGV forklift storage tray identification and auxiliary positioning method and system
EP3950539A1 (en) Intelligent warehousing system, processing terminal, warehousing robot, and intelligent warehousing method
US11383380B2 (en) Object pickup strategies for a robotic device
US11772267B2 (en) Robotic system control method and controller
Schwarz et al. Fast object learning and dual-arm coordination for cluttered stowing, picking, and packing
CN111275063B (en) Robot intelligent grabbing control method and system based on 3D vision
KR20210020945A (en) Vehicle tracking in warehouse environments
US10762468B2 (en) Adaptive process for guiding human-performed inventory tasks
CN107218927B (en) A kind of cargo pallet detection system and method based on TOF camera
CN109250380A (en) Storage access system and method
Walter et al. A situationally aware voice‐commandable robotic forklift working alongside people in unstructured outdoor environments
US11741566B2 (en) Multicamera image processing
DE102020114577A1 (en) CONTROL AND CONTROL PROCEDURES FOR ROBOTIC SYSTEM
CN111077889A (en) Multi-mobile-robot formation cooperative positioning method for workshop tray transportation
WO2021039850A1 (en) Information processing device, configuration device, image recognition system, robot system, configuration method, learning device, and learned model generation method
CN113050636A (en) Control method, system and device for autonomous tray picking of forklift
Medjram et al. Markerless vision-based one cardboard box grasping using dual arm robot
CN110514210A (en) A kind of AGV and its high-precision locating method with multisensor
WO2023092519A1 (en) Grabbing control method and apparatus, and electronic device and storage medium
CN115631401A (en) Robot autonomous grabbing skill learning system and method based on visual perception
Roa-Garzón et al. Vision-based solutions for robotic manipulation and navigation applied to object picking and distribution
KR102452315B1 (en) Apparatus and method of robot control through vision recognition using deep learning and marker
TWI788253B (en) Adaptive mobile manipulation apparatus and method
Fu et al. Costmap construction and pseudo-lidar conversion method of mobile robot based on monocular camera
JP7241374B2 (en) Robotic object placement system and method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant