CN117532603B - Quick positioning method, system and device for feeding and discharging of mobile robot


Info

Publication number
CN117532603B
Authority
CN
China
Prior art keywords: target, positioning, target material, tray, mobile robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311453910.0A
Other languages
Chinese (zh)
Other versions
CN117532603A (en)
Inventor
彭广德
吴俊凯
王睿
李卫燊
李卫铳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Ligong Industrial Co ltd
Original Assignee
Guangzhou Ligong Industrial Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Ligong Industrial Co ltd
Priority to CN202311453910.0A
Publication of CN117532603A
Application granted
Publication of CN117532603B
Status: Active
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00 Manipulators mounted on wheels or on carriages
    • B25J5/007 Manipulators mounted on wheels or on carriages mounted on wheels
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/0009 Constructional details, e.g. manipulator supports, bases
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/02 Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian coordinate type
    • B25J9/04 Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian coordinate type by rotating at least one arm, excluding the head movement itself, e.g. cylindrical coordinate type or polar coordinate type
    • B25J9/046 Revolute coordinate type
    • B25J9/047 Revolute coordinate type the pivoting axis of the first arm being offset to the vertical axis
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a method, a system and a device for quickly positioning the loading and unloading of a mobile robot. The method comprises the following steps: performing a first positioning on the material rack device to obtain its spatial position information; moving the manipulator to the side of the material rack device according to that spatial position information; identifying the identification information on a material tray and determining from it whether the tray is the target tray; performing a second positioning on the target material to determine its relative position on the target tray; moving the manipulator to the side of the target material according to that relative position; performing a third positioning on the target material to determine its accurate position; and performing the clamping operation on the target material. The invention fuses a target detection network algorithm with a visual detection algorithm for positioning, which greatly improves the end positioning precision of loading and unloading work, requires no direct contact with the material, and suits different materials and different processing-technology application scenarios.

Description

Quick positioning method, system and device for feeding and discharging of mobile robot
Technical Field
The invention relates to the technical field of loading and unloading equipment, and in particular to a quick positioning method, system and device for the feeding and discharging of a mobile robot.
Background
In traditional industrial production, material feeding and discharging are performed by manual carrying, which occupies a large amount of human resources, yields low working efficiency, and is unsuited to large-scale production operations.
Existing industrial automatic production lines generally use robots to complete feeding and discharging operations automatically. To do so, a robot may need to perform a series of tasks such as target sensing, motion planning and grasp planning. However, existing robot loading and unloading typically includes only a single stage of target sensing, so inaccurate positioning, low precision and collisions between the manipulator and the material readily occur. In addition, existing robot loading and unloading relies on structural features of the material for positioning and grasping; if those features are occluded, deformed or missing, the loading and unloading operation may not be completed normally.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide a method, a system and a device for quickly positioning the loading and unloading of a mobile robot.
The first aspect of the invention provides a mobile robot feeding and discharging quick positioning method, which comprises the following steps:
Carrying out first positioning on a material rack device to obtain spatial position information of the material rack device; at least one material tray is arranged on the material rack device; the material tray is provided with a plurality of positioning holes;
Moving a manipulator to the side of the material rack device according to the spatial position information of the material rack device, wherein a binocular depth camera is arranged on the manipulator;
Identifying the identification information on the material tray, and determining whether the material tray is a target material tray according to the identification information on the material tray; the target material tray is filled with target materials needing to be clamped;
Performing second positioning on the target material, and determining the relative position of the target material on the target tray;
moving a manipulator to the side of the target material according to the relative position of the target material on the target material tray;
Performing third positioning on the target material, and determining the accurate position of the target material;
And carrying out clamping operation on the target material.
Further, the first positioning of the material rack device specifically includes the following steps:
Acquiring image data of the material rack device by using a binocular depth camera on the manipulator, wherein the image data is used as material rack image data;
Carrying out three-dimensional target detection fusion reasoning according to the material rack image data to obtain the spatial position information of the material rack device; the spatial position information specifically comprises the three-dimensional position of the material rack device in the world coordinate system and the size information of the material rack device.
Further, the three-dimensional target detection fusion reasoning specifically comprises the following steps:
Performing two-dimensional feature extraction on the material rack image data by using a preset convolutional neural network to obtain the plane information of the material rack device;
Calculating depth information of the material rack device by using a binocular vision algorithm;
and fusing the plane information and the depth information of the material rack device to obtain the three-dimensional position of the material rack device in the world coordinate system and the size information of the material rack device.
Further, the identification information on the tray comprises a two-dimensional code; the identification information on the tray is identified, and specifically comprises the following steps:
determining an identification information area on the tray according to the spatial position information of the material rack device;
Acquiring image data of the identification information area as first identification data;
Threshold segmentation is carried out on the first identification data to obtain binarized second identification data;
performing contour extraction on the second identification data to obtain the contour of the identification information;
performing straight line fitting correction on the outline of the identification information;
and decoding the identification information area according to the outline of the identification information to obtain the identification information on the material tray.
Further, the second positioning of the target material specifically includes the following steps:
determining the distribution of preset positioning holes on the target material tray according to the identification information on the target material tray;
scanning the target material tray to obtain the actual positioning hole distribution on the target material tray;
comparing the preset positioning hole distribution with the actual positioning hole distribution; and determining the difference between the two as the relative position of the target material on the target tray.
Further, the third positioning is performed on the target material, and the feeding and discharging clamping operation is performed on the target material by a manipulator, which specifically comprises the following steps:
acquiring image data of the target material by using a binocular depth camera on the manipulator, wherein the image data is used as material image data;
Performing two-dimensional target detection on the material image data by using the trained target detection model to obtain the shape information of the target material;
Planning a moving path for the manipulator according to the relative position of the target material on the target material tray and the shape information of the target material; the moving path takes a positioning hole on the material tray as a reference;
controlling the manipulator to move to the side of the target material, and detecting the pixel offset around the target material;
and fusing the pixel offset with the relative position of the target material on the target tray to calculate the accurate position of the target material.
Further, the target detection model is trained by the following steps:
collecting material images; selecting a preset proportion of the material images for random transformation operations;
carrying out data annotation on the material image;
Dividing the collected material images into a training set and a verification set;
training the target detection model by using a training set to obtain a model output result; the model output result comprises the relative positions of the materials and the material tray;
Evaluating the output of the target detection model using a validation set; and when the output of the target detection model reaches a preset evaluation standard, the target detection model is regarded as being trained.
The second aspect of the invention discloses a quick positioning system for feeding and discharging of a mobile robot, which comprises a material rack device and a mobile robot;
at least one tray for containing materials is arranged on the material rack device;
The mobile robot is provided with an edge computing platform and a manipulator; the manipulator is provided with a binocular depth camera; the edge computing platform executes the mobile robot loading and unloading quick positioning method.
Further, the material rack device is arranged on the mobile robot.
The third aspect of the invention discloses a mobile robot feeding and discharging quick positioning device, which comprises a processor and a memory;
the memory is used for storing programs;
The processor executes the program to realize the quick positioning method for loading and unloading of the mobile robot.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the foregoing method.
The embodiments of the invention have the following beneficial effects: the disclosed method, system and device for quickly positioning the loading and unloading of a mobile robot fuse a target detection network algorithm and a visual detection algorithm for positioning, greatly improving the end positioning precision of loading and unloading work and suiting different processing-technology application scenarios; the invention positions materials relative to the positioning holes of the material tray, so it is applicable to feeding and discharging materials of different shapes, sizes and colors and has a wide application range; and the invention needs no direct contact with the material during positioning, reducing the risk of damaging or disturbing the material.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flowchart of the quick positioning method for feeding and discharging of a mobile robot according to the present invention;
FIG. 2 is a front view of the mobile robot according to the present invention;
FIG. 3 is a top view of the mobile robot according to the present invention;
FIG. 4 is a rear view of the mobile robot according to the present invention;
FIG. 5 is a schematic diagram of the material tray according to the present invention;
FIG. 6 is a top view of the material tray according to the present invention;
FIG. 7 is a schematic diagram of identification information detection according to the present invention;
FIG. 8 is a training flowchart of the target detection model according to the present invention;
FIG. 9 is a flowchart of positioning the target material on the tray plane according to the present invention.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The following factors need to be considered for quick positioning of feeding and discharging of the robot:
Workplace: robot loading and unloading is commonly used in industrial automation production lines. It is therefore necessary to consider the design and arrangement of the production line and whether the robot is working in a dangerous environment or in a special protective environment.
Object shape and size: the robot loading and unloading may involve objects of different shapes, sizes and weights. It is therefore necessary to consider whether the robot can quickly and accurately identify and locate these objects. Meanwhile, the influence of the material of the object on the target sensing system needs to be considered.
Production volume: The larger the production batch, the greater the benefit of robot loading and unloading, so the balance between the degree of automation and the initial investment cost needs to be considered.
Precision requirements: Different industrial applications place different requirements on robot feeding and discharging precision. For example, high-precision electronic component assembly demands higher precision, so the precision requirement of the actual application scenario needs to be considered.
Cycle time: the feeding and discharging efficiency and the cycle time of the robot are directly related. It is therefore necessary to consider whether the speed and stability of the individual working steps are sufficiently high.
Monitoring and maintenance: The feeding and discharging process requires considerable monitoring and maintenance to ensure the robot's normal operation and long service life, so functions such as monitoring, data acquisition and analysis, together with corresponding training and technical support schemes, need to be considered in the design.
Safety: During feeding and discharging, the robot may injure operators or damage other equipment or objects. A series of safety measures such as safety guards, manual-operation protection and robot obstacle avoidance should therefore be considered.
These background factors all influence the effectiveness of quick positioning for robot feeding and discharging; each must be weighed comprehensively and the most suitable choice made for the actual conditions. Because the above requirements are so strongly coupled, a robot loading and unloading workflow tends to be designed for a single use scenario, and once that scenario disappears the workflow is abandoned.
Therefore, the embodiment provides a method, a system and a device for quickly positioning loading and unloading of a mobile robot so as to realize loading and unloading operation of materials applicable to multiple scenes.
As shown in fig. 1, the invention provides a quick positioning method for loading and unloading of a mobile robot, which comprises the following steps:
S1, carrying out first positioning on a material rack device to obtain spatial position information of the material rack device; at least one material tray is arranged on the material rack device; the tray is provided with a plurality of positioning holes;
S2, moving a manipulator to the side of the material rack device according to the spatial position information of the material rack device, wherein a binocular depth camera is arranged on the manipulator;
S3, identifying identification information on the material tray, and determining whether the material tray is a target material tray according to the identification information on the material tray; the target material tray is filled with target materials needing clamping operation;
S4, performing second positioning on the target material, and determining the relative position of the target material on the target tray;
S5, moving the manipulator to the side of the target material according to the relative position of the target material on the target tray;
S6, performing third positioning on the target material, and determining the accurate position of the target material;
S7, clamping the target material.
The invention provides a method, a system and a device for quickly positioning the feeding and discharging of a robot, which address the poor positioning accuracy, low speed and difficult implementation of robot loading and unloading in current edge-side industrial production scenarios. Compared with existing methods, this method is simple to implement, fast to position and highly precise. It addresses the slow real-time response of positioning devices in complex and changeable scenes, the long time consumed searching for positions, and the large computing resources occupied at the edge; a set of adaptively adjustable parameters completes the quick positioning task with extremely high end positioning accuracy and robustness. It can easily be extended to edge application scenarios across production industries.
The method for rapidly positioning the feeding and discharging of the mobile robot is mainly divided into three links.
Process machining center positioning link: FIGS. 2, 3 and 4 show schematic views of a material rack device mounted on a mobile robot. However, the material rack device in the embodiments of the invention is not necessarily carried on the mobile robot; the mobile robot may be separate from it. In that case the mobile robot must navigate, for example by map planning, to the area beside the material rack device to perform feeding and discharging operations; such navigation can be realized with a sensor such as a lidar.
S1, performing first positioning on a material rack device, which specifically comprises the following steps:
S1-1, acquiring image data of the material rack device by using the binocular depth camera on the manipulator, wherein the image data is used as material rack image data;
S1-2, carrying out three-dimensional target detection fusion reasoning according to the material rack image data to obtain the spatial position information of the material rack device; the spatial position information specifically includes the three-dimensional position of the material rack device in the world coordinate system and the size information of the material rack device.
In the step S1-2, three-dimensional target detection fusion reasoning specifically comprises the following steps:
Carrying out two-dimensional feature extraction on the material rack image data by using a preset convolutional neural network to obtain the plane information of the material rack device;
Calculating depth information of the material rack device by using a binocular vision algorithm;
and fusing the plane information and the depth information of the material rack device to obtain the three-dimensional position of the material rack device in the world coordinate system and the size information of the material rack device.
In the embodiment of the invention, the plane information of the material rack device is obtained through feature extraction by a convolutional neural network; the model outputs plane information comprising the x-coordinate, the y-coordinate, the width w and the height h. The depth information of the material rack device is acquired by the binocular depth camera carried on the manipulator: the binocular depth camera obtains scene depth from the parallax between its two lenses, and the depth information is output as the z-coordinate and the length l. Three-dimensional target detection fusion reasoning over the two yields the spatial position information of the material rack device.
Illustratively, the convolutional neural network may be a feature-extraction network such as YOLO or CenterNet, trained on input data frames of different types (RGB images). After the material rack device for loading and unloading is placed at its factory position on any mobile work carrier, images and point cloud data frames of the objects to be detected (material rack device, materials, etc.) are collected manually, and 2D coordinates are annotated in the images to obtain a data set. The convolutional neural network is trained on this data set until convergence.
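As a minimal sketch of this fusion step, the following assumes a 2D detector that returns a pixel-space box centered at (x, y) and a depth map aligned with the RGB image; the function name, the camera intrinsics fx, fy, cx, cy and the percentile-based length estimate are illustrative assumptions, not details given by the patent:

```python
import numpy as np

def fuse_2d_with_depth(box2d, depth_map, fx, fy, cx, cy):
    """Fuse a 2D detection (center x, y and size w, h in pixels) with
    binocular depth to get a 3D box (X, Y, Z, W, H, L) in camera
    coordinates. Sketch only; intrinsics come from calibration."""
    x, y, w, h = box2d
    region = depth_map[int(y - h / 2):int(y + h / 2),
                       int(x - w / 2):int(x + w / 2)]
    valid = region[region > 0]
    if valid.size == 0:
        return None                      # no usable depth in this box
    Z = float(np.median(valid))          # robust depth of the object
    X = (x - cx) * Z / fx                # back-project center to meters
    Y = (y - cy) * Z / fy
    W = w * Z / fx                       # pixel sizes scaled by depth
    H = h * Z / fy
    L = float(np.percentile(valid, 95) - np.percentile(valid, 5))  # depth spread
    return (X, Y, Z, W, H, L)
```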
S2, moving the manipulator to the side of the material rack device according to the spatial position information of the material rack device.
After the spatial position information of the material rack device is obtained, the mechanical arm is triggered to move, and the mechanical arm is moved to the side of the material rack device to perform the material tray positioning detection step.
Tray positioning and detection link: using the region position of the material rack device provided by the process machining center positioning link, the binocular depth camera on the mechanical arm scans the identification information on the material tray to acquire the target tray data. As shown in FIGS. 5 and 6, the identification information used in this embodiment is a two-dimensional code, and a plurality of positioning holes are formed in the tray for the robot to detect and position the target.
S3, identifying identification information on the material tray, and determining whether the material tray is a target material tray according to the identification information on the material tray; the target material tray is filled with target materials needing clamping operation;
The identification information identification in this embodiment is realized by a positioning algorithm, and specifically includes the following steps:
S3-1, determining an identification information area on the material tray according to the spatial position information of the material rack device;
S3-2, acquiring image data of the identification information area as first identification data;
S3-3, performing threshold segmentation on the first identification data to obtain binarized second identification data;
S3-4, extracting the contour of the second identification data to obtain the contour of the identification information;
S3-5, performing straight-line fitting correction on the contour of the identification information;
S3-6, decoding the identification information area according to the contour of the identification information to obtain the identification information on the material tray.
As shown in fig. 7, in this embodiment adaptive threshold segmentation, contour and connected-domain search, and straight-line fitting of the contour eliminate the pixel offset of the two-dimensional code caused by the shooting angle; the decoding operation is then performed by extracting a convex quadrilateral from the contour, identifying the identification information and thereby determining whether the tray is the target tray.
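A minimal OpenCV sketch of this identification pipeline; the block size, the polygon-approximation epsilon, the 200-pixel warp target and the use of cv2.QRCodeDetector for the final decode are illustrative assumptions (the corner ordering of the warp is also left naive here):

```python
import cv2
import numpy as np

def decode_tray_code(gray):
    """Adaptive threshold, contour search, convex-quadrilateral
    extraction, perspective correction, then decoding, as in steps
    S3-3 to S3-6. Sketch only."""
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 5)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        quad = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
        if len(quad) == 4 and cv2.isContourConvex(quad):
            # Warp the quadrilateral to a square to undo shooting-angle skew.
            src = quad.reshape(4, 2).astype(np.float32)
            dst = np.float32([[0, 0], [200, 0], [200, 200], [0, 200]])
            M = cv2.getPerspectiveTransform(src, dst)
            roi = cv2.warpPerspective(gray, M, (200, 200))
            text, _, _ = cv2.QRCodeDetector().detectAndDecode(roi)
            if text:
                return text
    return None
```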
S4, performing second positioning on the target material, and determining the relative position of the target material on the target tray;
In this embodiment, the second positioning of the target material specifically includes the following steps:
S4-1, determining preset positioning hole distribution on a target material tray according to the identification information on the target material tray;
S4-2, scanning the target tray to obtain the actual positioning hole distribution on the target tray;
S4-3, comparing the preset positioning hole distribution with the actual positioning hole distribution, and determining the difference between the two as the relative position of the target material on the target tray.
In this embodiment, the positioning holes occupied by material are identified, from which the accurate two-dimensional position of the material on the tray plane is calculated. Because material positioning is based on the tray rather than on the material itself, the embodiment suits different material products. In some embodiments, depending on the shape and structure of the tray's positioning holes, multiple positions can be determined quickly with a positioning detection algorithm, and tray information such as positioning positions and categories can be read directly from the two-dimensional code.
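A sketch of this second positioning, representing the preset and scanned hole distributions as boolean occupancy grids; the hole pitch and the grid layout are illustrative assumptions (the patent only specifies that the two distributions are compared and their difference taken):

```python
import numpy as np

def second_positioning(preset_holes, visible_holes, pitch_mm=25.0):
    """preset_holes / visible_holes: boolean grids (rows x cols), True
    where a positioning hole should be / is actually seen as open.
    Holes hidden by material mark where targets sit on the tray."""
    occupied = preset_holes & ~visible_holes       # holes covered by material
    rows, cols = np.nonzero(occupied)
    # Relative positions of the materials on the tray plane, in mm.
    return [(c * pitch_mm, r * pitch_mm) for r, c in zip(rows, cols)]

# Example: a 2x3 tray whose middle-top hole is covered by a part.
preset = np.ones((2, 3), dtype=bool)
seen = preset.copy()
seen[0, 1] = False
print(second_positioning(preset, seen))            # [(25.0, 0.0)]
```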
S5, moving the manipulator to the side of the target material according to the relative position of the target material on the target material tray;
After the relative position of the target material on the target tray is obtained, the mechanical arm is triggered to move to the side of the target material; its height is brought as close as possible to the plane of the target tray, and the material positioning detection step begins.
Material positioning detection link: after the manipulator has moved to the side of the target material and is close to it, the material positioning detection step starts.
S6, performing third positioning on the target material, and determining the accurate position of the target material;
As shown in fig. 9, the third positioning is performed on the target material, and the feeding and discharging clamping operation is performed on the target material by the manipulator, which specifically includes the following steps:
acquiring image data of a target material by using a binocular depth camera on the manipulator, wherein the image data is used as material image data;
performing two-dimensional target detection on the material image data by using the trained target detection model to obtain the shape information of the target material;
Planning a moving path for the manipulator according to the relative position of the target material on the target tray and the shape information of the target material; the moving path takes a positioning hole on the material tray as a reference;
controlling the manipulator to move to the side of the target material, and detecting the pixel offset around the target material;
and fusing the pixel offset with the relative position of the target material on the target tray to calculate the accurate position of the target material.
In the material positioning detection step of this embodiment, the binocular depth camera first measures the approximate height of the material; when the mechanical arm is close to a positioning hole of the target tray, high-resolution real-time imaging is used. A traditional two-dimensional visual detection algorithm (such as circle-center or rectangle-center finding) then measures the pixel offset with sufficient end precision (combined arm-end precision up to 0.1 mm), and this offset is fused with the relative position of the target material on the target tray to calculate its accurate position. During prediction, each data frame is preprocessed into several input specifications and fed to the trained target detection network. At the same time, traditional two-dimensional and binocular three-dimensional ranging algorithms process the image data frame; the detections are fused to obtain the corresponding 3D coordinate positions (positions from the detection model and from the traditional algorithm respectively), the target region of interest is divided by automatic reasoning, and the coordinate information of the detected target position is output.
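A sketch of the circle-center variant of this third positioning, assuming the camera looks straight down at a positioning hole and that a mm-per-pixel scale is known from calibration at the working height; the Hough parameters are illustrative:

```python
import cv2

def third_positioning(gray, coarse_xy_mm, mm_per_pixel):
    """Refine the coarse tray-relative position (from the second
    positioning) with the pixel offset of the nearest positioning
    hole's circle center. Sketch only."""
    circles = cv2.HoughCircles(cv2.medianBlur(gray, 5), cv2.HOUGH_GRADIENT,
                               dp=1, minDist=40, param1=100, param2=30,
                               minRadius=8, maxRadius=40)
    if circles is None:
        return coarse_xy_mm                        # fall back to coarse position
    cx, cy, _ = circles[0][0]
    h, w = gray.shape
    # Offset of the detected center from the image center, in mm.
    dx_mm = (cx - w / 2) * mm_per_pixel
    dy_mm = (cy - h / 2) * mm_per_pixel
    # Fuse: coarse position corrected by the measured pixel offset.
    return (coarse_xy_mm[0] + dx_mm, coarse_xy_mm[1] + dy_mm)
```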
As shown in fig. 8, in this embodiment background images and material image data from the industrial scene are collected and labeled: images of interest are collected, different classes of targets in them are annotated, target regions are randomly transformed and pasted onto background images, and their labels are computed; the target detection model is then trained with a convolutional neural network, specifically as follows:
1. Data set generation:
A) Collect picture data of the materials and of trays loaded with materials.
B) Annotation standard: target bounding-box error within ±3 pixels, category accuracy 100%.
C) Annotators label according to a standard workflow.
The images and annotations form data set I.
2. Select a certain number of samples from the various data sets, cut out the target regions containing the materials to be detected, randomly transform their characteristic shapes and so on, and paste them onto data sets of normal backgrounds (material trays) to synthesize a new data set; compute the annotations for the target positions in the new data, so that the data and annotations form data set II (see the sketch after this list).
3. Combine the various data sets to form data set III.
4. Train each convolutional neural network model (target detection) on a public data set until convergence, and then fine-tune on data set III to obtain the target detection model.
5. Calibrate the extrinsic and intrinsic parameters of the binocular vision camera to meet the requirements of binocular ranging and the like. Tune the hyper-parameters of traditional visual detection of two-dimensional codes or special codes until the model's generalization ability is optimal, obtaining the two-dimensional code detection model; use traditional vision for binarization, connected-domain segmentation, circle finding, polygon finding and so on, with hyper-parameter tuning, to obtain the precise polygon deterministic position detection model.
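A sketch of the copy-paste synthesis used to build data set II in step 2 above; the transform set (horizontal flip, 0.8-1.2x scale) and the assumption that the scaled patch fits inside the background are illustrative choices:

```python
import random
import cv2

def synthesize_sample(background, target_patch):
    """Paste a randomly transformed target crop onto a tray background
    image and compute its new bounding-box label. Sketch only; assumes
    the (scaled) patch is smaller than the background."""
    patch = target_patch.copy()
    if random.random() < 0.5:
        patch = cv2.flip(patch, 1)                 # random horizontal flip
    s = random.uniform(0.8, 1.2)                   # random scale
    patch = cv2.resize(patch, None, fx=s, fy=s)
    ph, pw = patch.shape[:2]
    bh, bw = background.shape[:2]
    x = random.randint(0, bw - pw)                 # random paste position
    y = random.randint(0, bh - ph)
    out = background.copy()
    out[y:y + ph, x:x + pw] = patch                # paste the target
    return out, (x, y, pw, ph)                     # image and its new label
```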
S7, clamping the target material.
In this embodiment, the moving trajectory of the mechanical arm can be corrected quickly from the high-precision two-dimensional tray positioning information and the binocular three-dimensional measurements, and the material can be clamped and placed.
The following is an overall execution flow of a mobile robot loading and unloading quick positioning method in the embodiment of the invention:
1. The mobile robot receives a work task, which triggers the stage-one quick positioning. The indoor-side camera shoots in real time to acquire picture data; the vehicle-mounted edge computing platform acquires the data stream in real time and performs three-dimensional target detection fusion reasoning: the convolutional neural network model yolov performs target detection and outputs (x, y, w, h, class), the binocular depth camera outputs depth point cloud spatial data, and fusing and matching the two sets of target coordinate information outputs the 3D detection position information (x, y, z, w, h, l, class), representing the target center coordinates xyz, the length, width and height whl, and the target class, so as to acquire the coordinate positions of the material rack, the material tray and the positioning device. The mechanical arm is triggered to move, and the arm's binocular camera acquires data in real time and performs 3D target detection (as above), feeding back and positioning the position and distance of the feeding and discharging device in real time.
2. The mechanical arm moves to the side of the material rack device, triggering the stage-two high-precision positioning task. Binocular 3D ranging positioning (the flow as in step 1) is carried out on the material tray and the material from the device positions acquired in stage one; the mechanical arm moves to the tray's two-dimensional code position, real-time detection of the tray's two-dimensional code is triggered, the three-dimensional (x, y, z) position of its center is output, and the movement is positioned in real time against the two-dimensional code position detected and stored in advance.
3. Through binocular vision target detection fusion positioning, the center coordinates of the target material in the tray are output in real time while moving. The mechanical arm moves to the side of the target material, triggering the stage-three ultra-precise positioning task at the end of the arm. The tray plane is detected in real time and its three-dimensional position output in real time; the robot's end arm approaches the tray plane until it is very close to the target tray, ultra-high-precision detection and identification starts, a positioning error of 0.01 is output over t consecutive seconds of data frames, and the end arm moves to the target material. A traditional vision algorithm measures the high-precision coordinate position of the material on the tray's two-dimensional plane (from the known relative positions of the two-dimensional code and the positioning holes or special shapes, the relative positions (x, y, z) of all positioning-point centers of the tray checkerboard are generated automatically); from the detected special shapes, all positioning-shape centers on the tray checkerboard can be obtained, and where a position cannot be identified because material occludes it, the spatial position of that material is thereby obtained. Converted through the end position to the trolley center, the material's position on the tray is output, after which the clamping operation can be performed.
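A sketch of the final frame conversion described above, assuming known homogeneous transforms from the tray frame to the camera and from the camera to the robot (trolley-center) base; the example matrices are placeholders, since in practice T_cam_tray would come from the detected 2D-code pose and T_base_cam from hand-eye calibration:

```python
import numpy as np

def tray_to_base(p_tray_mm, T_cam_tray, T_base_cam):
    """Convert a material position on the tray plane (x, y in mm, z = 0)
    into the robot-base frame via homogeneous transforms. Sketch only."""
    p = np.array([p_tray_mm[0], p_tray_mm[1], 0.0, 1.0])
    return (T_base_cam @ T_cam_tray @ p)[:3]

# Illustrative: tray 0.5 m in front of the camera, camera at the base origin.
T_cam_tray = np.eye(4)
T_cam_tray[2, 3] = 500.0                           # mm
T_base_cam = np.eye(4)
print(tray_to_base((25.0, 50.0), T_cam_tray, T_base_cam))   # [ 25.  50. 500.]
```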
The embodiment of the invention also discloses a mobile robot feeding and discharging quick positioning system, which comprises a material rack device and a mobile robot;
At least one tray for containing materials is arranged on the material rack device;
The mobile robot is provided with an edge computing platform and a manipulator; a binocular depth camera is arranged on the manipulator; the edge computing platform executes the aforementioned quick positioning method for feeding and discharging of the mobile robot.
The embodiment of the invention also discloses a mobile robot feeding and discharging quick positioning device, which comprises a processor and a memory;
The memory is used for storing programs;
The processor executes the program to realize the aforementioned quick positioning method for feeding and discharging of the mobile robot.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the method shown in fig. 1.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the invention is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the described functions and/or features may be integrated in a single physical device and/or software module or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, it may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the embodiments described above, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present application, and these equivalent modifications or substitutions are included in the scope of the present application as defined in the appended claims.

Claims (8)

1. The quick positioning method for loading and unloading of the mobile robot is characterized by comprising the following steps of:
Carrying out first positioning on a material rack device to obtain spatial position information of the material rack device; at least one material tray is arranged on the material rack device; the material tray is provided with a plurality of positioning holes;
Moving a manipulator to the side of the material rack device according to the spatial position information of the material rack device, wherein a binocular depth camera is arranged on the manipulator;
Identifying the identification information on the material tray, and determining whether the material tray is a target material tray according to the identification information on the material tray; the target material tray is filled with target materials needing to be clamped;
Performing second positioning on the target material, and determining the relative position of the target material on the target tray;
moving a manipulator to the side of the target material according to the relative position of the target material on the target material tray;
Performing third positioning on the target material, and determining the accurate position of the target material;
clamping the target material;
the second positioning of the target material specifically comprises the following steps:
determining the distribution of preset positioning holes on the target material tray according to the identification information on the target material tray;
scanning the target material tray to obtain the actual positioning hole distribution on the target material tray;
comparing the preset positioning hole distribution with the actual positioning hole distribution; and determining the difference between the two as the relative position of the target material on the target tray;
The third positioning is performed on the target material, and the feeding and discharging clamping operation is performed on the target material through a manipulator, and the method specifically comprises the following steps:
acquiring image data of the target material by using a binocular depth camera on the manipulator, wherein the image data is used as material image data;
Performing two-dimensional target detection on the material image data by using the trained target detection model to obtain the shape information of the target material;
Planning a moving path for the manipulator according to the relative position of the target material on the target material tray and the shape information of the target material; the moving path takes a positioning hole on the material tray as a reference;
controlling the manipulator to move to the side of the target material, and detecting the pixel offset around the target material; the detection of the pixel offset around the target material is realized through a two-dimensional visual detection algorithm;
and fusing the pixel offset with the relative position of the target material on the target tray to calculate the accurate position of the target material.
2. The method for quickly positioning loading and unloading of a mobile robot according to claim 1, wherein the first positioning of the material rack device specifically comprises the following steps:
Acquiring image data of the material rack device by using a binocular depth camera on the manipulator, wherein the image data is used as material rack image data;
Carrying out three-dimensional target detection fusion reasoning according to the material rack image data to obtain the spatial position information of the material rack device; the spatial position information specifically comprises the three-dimensional position of the material rack device in the world coordinate system and the size information of the material rack device.
3. The method for quickly positioning loading and unloading of the mobile robot according to claim 2, wherein the three-dimensional target detection fusion reasoning specifically comprises the following steps:
Performing two-dimensional feature extraction on the material rack image data by using a preset convolutional neural network to obtain the plane information of the material rack device;
Calculating depth information of the material rack device by using a binocular vision algorithm;
and fusing the plane information and the depth information of the material rack device to obtain the three-dimensional position of the material rack device in the world coordinate system and the size information of the material rack device.
4. The method for quickly positioning loading and unloading of the mobile robot according to claim 1, wherein the identification information on the tray comprises a two-dimensional code; the identification information on the tray is identified, and specifically comprises the following steps:
determining an identification information area on the tray according to the spatial position information of the material rack device;
Acquiring image data of the identification information area as first identification data;
Threshold segmentation is carried out on the first identification data to obtain binarized second identification data;
performing contour extraction on the second identification data to obtain the contour of the identification information;
performing straight line fitting correction on the outline of the identification information;
and decoding the identification information area according to the outline of the identification information to obtain the identification information on the material tray.
5. The method for quickly positioning loading and unloading of mobile robot according to claim 1, wherein the target detection model is trained by the following steps:
collecting material images; selecting a preset proportion of the material images for random transformation operations;
carrying out data annotation on the material image;
Dividing the collected material images into a training set and a verification set;
training the target detection model by using a training set to obtain a model output result; the model output result comprises the relative positions of the materials and the material tray;
Evaluating the output of the target detection model using a validation set; and when the output of the target detection model reaches a preset evaluation standard, the target detection model is regarded as being trained.
6. The quick positioning system for loading and unloading of the mobile robot is characterized by comprising a material rack device and the mobile robot;
at least one tray for containing materials is arranged on the material rack device;
The mobile robot is provided with an edge computing platform and a manipulator; the manipulator is provided with a binocular depth camera; the edge computing platform performs a mobile robot loading and unloading quick positioning method according to any one of claims 1-5.
7. The rapid positioning system for loading and unloading of a mobile robot of claim 6, wherein the material rest device is disposed on the mobile robot.
8. The quick loading and unloading positioning device of the mobile robot is characterized by comprising a processor and a memory;
the memory is used for storing programs;
The processor executing the program to implement the method of any one of claims 1-5.
CN202311453910.0A 2023-11-02 2023-11-02 Quick positioning method, system and device for feeding and discharging of mobile robot Active CN117532603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311453910.0A CN117532603B (en) 2023-11-02 2023-11-02 Quick positioning method, system and device for feeding and discharging of mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311453910.0A CN117532603B (en) 2023-11-02 2023-11-02 Quick positioning method, system and device for feeding and discharging of mobile robot

Publications (2)

Publication Number Publication Date
CN117532603A CN117532603A (en) 2024-02-09
CN117532603B true CN117532603B (en) 2024-06-25

Family

ID=89783275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311453910.0A Active CN117532603B (en) 2023-11-02 2023-11-02 Quick positioning method, system and device for feeding and discharging of mobile robot

Country Status (1)

Country Link
CN (1) CN117532603B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN208150248U (en) * 2018-04-26 2018-11-27 北京极智嘉科技有限公司 Haulage equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210147150A1 (en) * 2016-10-08 2021-05-20 Zhejiang Guozi Robot Technology Co., Ltd. Position and orientation deviation detection method and system for shelf based on graphic with feature information
CN109975172B (en) * 2019-04-01 2021-07-27 中国工程物理研究院化工材料研究所 Material tray identification system and method capable of realizing rapid identification of PBX booster charge
CN111604273A (en) * 2020-05-22 2020-09-01 深圳市周大福珠宝制造有限公司 Jewelry printing quality detection equipment, system, method and device
CN112959318A (en) * 2021-01-11 2021-06-15 浙江中烟工业有限责任公司 Full-automatic label paper gripping device and method based on machine vision
CN112873163A (en) * 2021-01-14 2021-06-01 电子科技大学 Automatic material carrying robot system and control method thereof
CN113104531A (en) * 2021-04-09 2021-07-13 深圳谦腾科技有限公司 Flexible feeding system and method
CN114559438A (en) * 2022-03-25 2022-05-31 卡奥斯工业智能研究院(青岛)有限公司 Recognition and placement device and recognition and placement method
CN115744094A (en) * 2022-10-19 2023-03-07 东莞市李群自动化技术有限公司 Material grabbing method and device based on material sorting equipment
CN116160458B (en) * 2023-04-26 2023-07-04 广州里工实业有限公司 Multi-sensor fusion rapid positioning method, equipment and system for mobile robot

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN208150248U (en) * 2018-04-26 2018-11-27 北京极智嘉科技有限公司 Haulage equipment

Also Published As

Publication number Publication date
CN117532603A (en) 2024-02-09

Similar Documents

Publication Publication Date Title
CN103678754B (en) Information processor and information processing method
KR102056664B1 (en) Method for work using the sensor and system for performing thereof
US9482754B2 (en) Detection apparatus, detection method and manipulator
CN102506830B (en) Vision-based positioning method and device
JP2015171745A (en) Robot simulation device for simulating workpiece unloading process
CN104217441A (en) Mechanical arm positioning fetching method based on machine vision
JP2012221456A (en) Object identification device and program
Bellandi et al. Roboscan: a combined 2D and 3D vision system for improved speed and flexibility in pick-and-place operation
CN110425996A (en) Workpiece size measurement method based on binocular stereo vision
CN114140439A (en) Laser welding seam feature point identification method and device based on deep learning
KR20210019014A (en) Method and plant for determining the location of a point on a complex surface of space
CN114581368B (en) Bar welding method and device based on binocular vision
CN113689509A (en) Binocular vision-based disordered grabbing method and system and storage medium
Zhang et al. Slat-calib: Extrinsic calibration between a sparse 3d lidar and a limited-fov low-resolution thermal camera
CN114473309A (en) Welding position identification method for automatic welding system and automatic welding system
US20230410362A1 (en) Target object detection method and apparatus, and electronic device, storage medium and program
Chen et al. Pallet recognition and localization method for vision guided forklift
CN111389750B (en) Vision measurement system and measurement method
CN117532603B (en) Quick positioning method, system and device for feeding and discharging of mobile robot
CN116160458B (en) Multi-sensor fusion rapid positioning method, equipment and system for mobile robot
CN111854616A (en) Tree breast height diameter vision measurement method and system under assistance of laser
CN106558070A (en) A kind of method and system of the visual tracking under the robot based on Delta
Frank et al. Stereo-vision for autonomous industrial inspection robots
CN116309882A (en) Tray detection and positioning method and system for unmanned forklift application
CN116243329A (en) High-precision multi-target non-contact ranging method based on laser radar and camera fusion

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant