CN111626117B - Garbage sorting system and method based on target detection - Google Patents

Garbage sorting system and method based on target detection

Info

Publication number
CN111626117B
CN111626117B (application CN202010321347.1A)
Authority
CN
China
Prior art keywords
mechanical arm
garbage
data
max
target detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010321347.1A
Other languages
Chinese (zh)
Other versions
CN111626117A (en)
Inventor
黄鸿飞
张桦
吴以凡
蒋世豪
姚王泽
谭云鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202010321347.1A priority Critical patent/CN111626117B/en
Publication of CN111626117A publication Critical patent/CN111626117A/en
Application granted granted Critical
Publication of CN111626117B publication Critical patent/CN111626117B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/36Sorting apparatus characterised by the means used for distribution
    • B07C5/361Processing or control devices therefor, e.g. escort memory
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/36Sorting apparatus characterised by the means used for distribution
    • B07C5/361Processing or control devices therefor, e.g. escort memory
    • B07C5/362Separating or distributor mechanisms
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1682Dual arm manipulator; Coordination of several manipulators
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C2501/00Sorting according to a characteristic or feature of the articles or material to be sorted
    • B07C2501/0063Using robots
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02WCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO WASTEWATER TREATMENT OR WASTE MANAGEMENT
    • Y02W30/00Technologies for solid waste management
    • Y02W30/10Waste collection, transportation, transfer or storage, e.g. segregated refuse collecting, electric or hybrid propulsion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Sorting Of Articles (AREA)

Abstract

The invention discloses a garbage sorting system and method based on target detection. The system comprises a garbage classification target detection model, mechanical arms, an industrial camera, a server and a conveyor belt. The detection model is obtained by training a YOLOv3 neural network on a labeled data set, continuously tuning the trained network on a cross-validation set, and testing the tuned final model on a test set; the model that reaches the target metrics is kept as the garbage classification target detection model. The labeled data set is built by using image acquisition equipment to collect pictures of real scenes at a garbage disposal site and annotating each piece of garbage in the pictures with its bounding box [x_min, y_min, x_max, y_max] and its category classes_id. The server connects to the mechanical arms and the industrial camera and creates a thread for each. The invention achieves the goal of an intelligent, unmanned garbage sorting production line.

Description

Garbage sorting system and method based on target detection
Technical Field
Using the target detection techniques of machine vision, the invention provides a garbage sorting system and method. Combined with industrial robots, the system automatically sorts solid waste in a real garbage disposal plant environment, improving disposal quality and working efficiency and enabling efficient recycling of solid waste.
Background
Environmental protection and improved resource utilization are long-standing policy guidelines in China. Garbage is an untapped deposit with huge resource potential: today's waste can become tomorrow's resources. Garbage sorting robots appeared earlier abroad, and several products have been released, such as the American MAX-AI; no domestic manufacturer has yet developed a comparable intelligent robot system for sorting household garbage.
Disclosure of Invention
The invention relates to a garbage sorting system and method based on target detection, mainly addressing target classification and recognition in complex environments, visual identification, and real-time cooperative control of industrial mechanical arms.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
the garbage sorting system based on target detection comprises a garbage classification target detection model, a mechanical arm, an industrial camera, a server and a conveyor belt;
the garbage classification target detection model is obtained by training a YOLOv3 neural network on the labeled data set, continuously tuning the trained network on a cross-validation set, and testing the tuned final model on a test set; the model that reaches the target metrics is kept as the garbage classification target detection model.
The labeled data set is built by using image acquisition equipment to collect garbage pictures of real scenes at a garbage disposal site and annotating the garbage categories in each picture; the annotated categories comprise bottles, plastics, metals, harmful articles, landfill/incineration waste and paper. The upper-left corner of a picture is the origin [0,0], with the X axis positive to the right and the Y axis positive downward. Each picture is annotated in a corresponding XML file that marks every piece of garbage with a rectangular box and records the coordinates of the box's upper-left and lower-right corners as [x_min, y_min, x_max, y_max] together with the garbage's classes_id.
The mechanical arm, the industrial camera, the server and the conveyor belt are all existing industrial products;
the server is connected with the mechanical arm and the industrial camera and then respectively creates threads; the server establishes a detection thread for each mechanical arm, namely establishes a thread 1 for the mechanical arm 1 and establishes a thread 2 for the mechanical arm 2;
as shown in fig. 1,2 or more mechanical arms are arranged and added according to actual requirements; the industrial camera is arranged on one side of the mechanical arm, the distance from the mechanical arm 1 is S1, and the distance from the mechanical arm 2 is S2; the server directly calls sdk of the industrial camera through threads to obtain a real-time image on the conveyor belt; the server takes the obtained image as an input picture of a garbage classification target detection model; the garbage classification target detection model outputs the position information and the category information of the input picture, and presses the position information and the category information into a data queue stored in a server in a format of [ x _ min, y _ min, x _ max, y _ max, t, classes _ id ]. Each mechanical arm continuously scans data of the data queue in the server through threads, namely the two threads of the mechanical arms 1 and 2 continuously scan the data of the data queue, the mechanical arms complete grabbing according to the read data, and the mechanical arms are placed into the specified recycling object frame according to category information in the data.
The mechanical arm model is LeArm (brand: Hiwonder; manufacturer: Shenzhen Hiwonder Technology Co., Ltd.; STM32 single-chip-microcomputer version);
the garbage sorting method based on target detection specifically comprises the following steps:
step 1: and acquiring and marking data to obtain a data set with marks.
Image acquisition equipment collects garbage pictures of real scenes at a garbage disposal site, and the garbage categories in the pictures are annotated. The annotated categories comprise bottles, plastics, metals, harmful articles, landfill/incineration waste and paper; the bottle category includes plastic bottles of various shapes, such as flattened or colour-contaminated bottles, the harmful category includes waste medicines, PCB boards and the like, and the paper category includes books, cartons and the like. The upper-left corner of a picture is the origin [0,0], with the X axis positive to the right and the Y axis positive downward. Each picture is annotated in a corresponding XML file that marks every piece of garbage with a rectangular box and records the coordinates of the box's upper-left and lower-right corners as [x_min, y_min, x_max, y_max] together with the garbage's classes_id.
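Concretely, an annotation file in this one-box-per-object style can be read as follows (a sketch assuming a Pascal-VOC-like XML layout with `object`/`bndbox` tags, since the patent does not show the exact schema; the English class names are placeholders):

```python
import xml.etree.ElementTree as ET

# Placeholder class list; the patent's categories are bottles, plastics,
# metals, harmful articles, landfill incineration and paper.
CLASSES = ["bottle", "plastic", "metal", "harmful", "landfill", "paper"]

def parse_annotation(xml_text):
    """Return one [x_min, y_min, x_max, y_max, classes_id] entry per box."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        classes_id = CLASSES.index(obj.findtext("name"))
        b = obj.find("bndbox")
        boxes.append([int(b.findtext("xmin")), int(b.findtext("ymin")),
                      int(b.findtext("xmax")), int(b.findtext("ymax")),
                      classes_id])
    return boxes
```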
The image acquisition equipment is the existing equipment and comprises a camera.
Step 2: training of models
Divide the labeled data set into a training set, a cross-validation set and a test set. Train the YOLOv3 neural network model on the training set and continuously tune it on the cross-validation set to obtain the final model; finally, test the final model on the test set and, once the metrics are reached, save it as the required garbage classification target detection model.
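The split itself might look like this (a sketch; the 70/15/15 ratios are an assumption, as the patent does not fix them):

```python
import random

def split_dataset(samples, train=0.7, val=0.15, seed=0):
    """Shuffle and split into training / cross-validation / test subsets."""
    items = list(samples)
    random.Random(seed).shuffle(items)   # fixed seed for a reproducible split
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```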
Step 3: image acquisition and target detection
As shown in fig. 1, the industrial camera captures an image of the garbage on the conveyor belt and transmits it to the garbage classification target detection model on the server. The model extracts features from the image and outputs the target's classes_id together with its position and time information [x_min, y_min, x_max, y_max, t]; the position and category information is then pushed into the queue in the format [x_min, y_min, x_max, y_max, t, classes_id].
The YOLOv3 neural network model performs the feature extraction on the images.
Step 4: multi-mechanical-arm cooperative grabbing
Multi-mechanical-arm cooperation is adopted. While idle, each mechanical arm's thread continuously checks whether the data queue contains information. When data is present, the thread reads one group of data and dequeues it to obtain the position information; the arm is then controlled to grab by combining the conveyor-belt speed and the distance between the arm and the camera, and the object is placed into the recycling bin specified by the category read from the data.
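The mutual exclusion this step relies on — only one arm thread dequeuing at any instant, so no object is grabbed twice — can be sketched like this (an illustrative fragment, not the patent's code):

```python
import threading
from collections import deque

pending = deque()                  # detections not yet claimed by any arm
pending_lock = threading.Lock()    # the mutual-exclusion lock from the description

def try_claim():
    """Atomically pop one detection, or return None if the queue is empty.

    Holding the lock across the check-and-pop guarantees that two arm
    threads can never dequeue the same group of data.
    """
    with pending_lock:
        if pending:
            return pending.popleft()
        return None
```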
The invention has the following beneficial effects:
the invention mainly applies a target classification identification technology and a visual identification technology and a control technology of real-time cooperation of the industrial mechanical arm.
Relying on an industrial camera and a target detection model, and building on machine-vision theory and neural-network techniques with currently outstanding recognition rates, the invention improves target detection against complex backgrounds. Garbage is classified and identified, realizing automatic sorting and recovery of household garbage.
In addition, for the multi-target sorting requirement of a moving production line, the invention applies a field bus and clock synchronization to cooperatively control several mechanical arms through threads: this multi-arm cooperative control lets a single detection unit drive several execution units, so that targets are sorted with maximum efficiency.
The invention not only aids classified garbage recycling and saves substantial manpower and funds, but also increases the fault-tolerance rate, makes garbage classification clearer and more efficient, and achieves the goal of an intelligent, unmanned garbage sorting production line.
Drawings
Fig. 1 is a diagram of a garbage sorting system.
FIG. 2 is a block diagram of the operation of the system.
Fig. 3 is a timing diagram of the operation of the system.
FIG. 4 is a labeling example.
FIG. 5 is a diagram of the process for obtaining an effective detection frame.
FIG. 6 is a diagram of the multi-mechanical-arm cooperative operation mode.
Detailed Description
The invention is further illustrated by the following figures and examples.
As shown in fig. 1, which depicts a garbage sorting scene: on the production line, objects pass continuously on the conveyor belt at speed v; the industrial camera is installed above the belt with a field of view of size W × L; two or more mechanical arms are installed on both sides of the belt, at distances S1 and S2 from the industrial camera.
As shown in fig. 2, the garbage sorting architecture works as follows: the industrial camera first acquires image data; after the garbage classification target detection model identifies it, the category and position information of the current object is obtained and inserted into the data queue, and a mechanical arm performs the grab by reading data from the queue.
As shown in fig. 3, the system timing is as follows: after the garbage classification target detection model identifies an object, its data is inserted into the data queue. The idle mechanical arm 1 reads the queue and performs a grab, during which it reads no further data; the data it read is dequeued. When new data is inserted, mechanical arm 2 reads it and operates. After finishing its grab, mechanical arm 1 returns to the idle state; whenever the queue holds data, the system judges whether it can be grabbed, and if so the data is read and the grab performed immediately.
Fig. 4 is an annotation example: the upper-left corner of the picture is the origin [0,0], with the X axis positive to the right and the Y axis positive downward. Each picture corresponds to one XML file; every object is marked with a rectangular box, and the XML file records the coordinates of the box's upper-left and lower-right corners as [x_min, y_min, x_max, y_max] together with the object's classes_id.
Fig. 5 shows how an effective detection frame is obtained. When an object enters the field of view while moving on the conveyor belt, its image is acquired and detection information is produced; but the object remains in view for a while, so a single effective, non-repeated detection must be chosen. The basic scene: the upper-left corner of the field of view is the origin [0,0], with the X axis to the right and the Y axis downward. The judgment process is as follows: ob1p1, ob1p2 and ob1p3 are the positions p1, p2 and p3 of object ob1 at different moments on the belt. When ob1 is detected at p1, its information is [x1_min, y1_min, x1_max, y1_max, t, classes_id]; the current detection frame is compared with the detection of the picture at the next moment, where the data for ob1 at p2 is [x2_min, y2_min, x2_max, y2_max, t, classes_id]. First [y1_min, y1_max] and [y2_min, y2_max] are compared, and when the differences are within the set range the detections are considered the same object; then the widths x1_max - x1_min and x2_max - x2_min are compared, and when that difference is also within the set range object ob1 is considered still inside the field of view. When the object reaches p3, the width x3_max - x3_min starts to change, indicating ob1 is about to leave the field of view; at that moment the detection information [x3_min, y3_min, x3_max, y3_max, t, classes_id] is inserted into the data queue, i.e. this detection frame is the effective detection frame.
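The frame-to-frame comparison described for Fig. 5 can be sketched as two small predicates (a simplified reading of the description; the tolerance value is an assumption, since the patent only speaks of a "set range"):

```python
def same_object(det_a, det_b, tol=10):
    """Same object across frames when the y-ranges and box widths of two
    detections [x_min, y_min, x_max, y_max, ...] differ by less than the
    set tolerance (the belt moves objects along x only)."""
    return (abs(det_a[1] - det_b[1]) <= tol            # y_min close
            and abs(det_a[3] - det_b[3]) <= tol        # y_max close
            and abs((det_a[2] - det_a[0]) - (det_b[2] - det_b[0])) <= tol)

def leaving_view(prev_det, cur_det, tol=10):
    """The box width starts changing once the object is clipped by the
    field-of-view edge, signalling it is about to leave and that the
    current detection should be enqueued as the effective one."""
    return abs((cur_det[2] - cur_det[0]) - (prev_det[2] - prev_det[0])) > tol
```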
FIG. 6 shows the multi-mechanical-arm cooperative mode. The conveyor-belt speed is v, and the mechanical-arm parameters are defined as follows: the grabbing range is [-z, z] with the positive direction towards the industrial camera, a complete grab takes time ΔT, and the downward grabbing motion takes about time ΔX. The threads of mechanical arms 1 and 2 continuously scan the data queue, and whether an object can be grabbed is judged from the current time, the time the object left the camera's field of view, and the belt speed v. Suppose two objects ob1 and ob2 leave the field of view at times t1 and t2, their position and category information having already entered the queue; as the belt moves, their distances to the arms shrink, so at time t = t3 the distances from ob1 and ob2 to mechanical arm 1 are S1 - v(t3 - t1) and S1 - v(t3 - t2), and to mechanical arm 2 are S2 - v(t3 - t1) and S2 - v(t3 - t2). At some time t4, when ob1 enters the grabbing window [-z + v·ΔX, z + ΔX], mechanical arm 1 reads ob1's data and dequeues it (a mutual-exclusion lock is taken while reading, ensuring only one thread operates on the data queue at a time); during the grab, arm 1 does not scan the queue, leaving only arm 2's thread scanning. For the grab itself: since ob1 left the field of view at t1, its distance from mechanical arm 1 at the current time t4 is S1 - v(t4 - t1); and because the downward grab takes ΔX, the drop-point coordinates of this grab are [S1 - v(t4 - t1 + ΔX) + (x1_max - x1_min)/2, (y1_max - y1_min)/2 + y1_min]. At some time t5, when ob2 is far from mechanical arm 1 and enters mechanical arm 2's window [-z + v·ΔX, z + ΔX]: if t5 > t4 + ΔT, mechanical arm 1 has finished its grab, both arms are then idle, and either arm's thread may lock the data queue; if t5 < ΔT + t4, mechanical arm 2 locks, reads and grabs, its coordinates obtained in the same way as for mechanical arm 1. When a further object ob3 reaches the grabbing range, an idle arm grabs it, and if the arms are all busy another arm can be added to grab — showing that the system's mechanical arms operate cooperatively.
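The timing arithmetic above reduces to two small functions (a sketch following the description's symbols, with `dx` standing for ΔX, `S` for the camera-to-arm distance, and `t_leave` for the time the object left the field of view):

```python
def in_grab_window(t_now, t_leave, v, S, z, dx):
    """True when the object's position relative to the arm lies inside
    the grabbing window [-z + v*dx, z + dx] used in the description."""
    pos = S - v * (t_now - t_leave)   # distance still ahead of the arm
    return -z + v * dx <= pos <= z + dx

def landing_point(det, t_now, t_leave, v, S, dx):
    """Drop-point coordinates per the description's formula:
    x = S - v*(t_now - t_leave + dx) + (x_max - x_min)/2
    y = (y_max - y_min)/2 + y_min  (the box centre's y-coordinate)."""
    x_min, y_min, x_max, y_max = det[:4]
    x = S - v * (t_now - t_leave + dx) + (x_max - x_min) / 2
    y = (y_max - y_min) / 2 + y_min
    return x, y
```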

Claims (3)

1. A method for realizing a garbage sorting system based on target detection, characterized by comprising the following steps:
step 1: acquiring and marking data to obtain a data set with marks;
collecting garbage pictures of real scenes at a garbage disposal site with image acquisition equipment, and annotating the garbage categories in the pictures, the annotated categories comprising bottles, plastics, metals, harmful articles, landfill incineration and paper; taking the upper-left corner of the garbage picture as the origin [0,0], with the X axis positive to the right and the Y axis positive downward; each picture is annotated in a corresponding XML file that marks every piece of garbage with a rectangular box and records the coordinates of the box's upper-left and lower-right corners as [x_min, y_min, x_max, y_max] together with the garbage's classes_id;
step 2: training of models
dividing the labeled data set into a training set, a cross-validation set and a test set; training the YOLOv3 neural network model on the training set and continuously tuning it on the cross-validation set to obtain the final model; finally, testing the final model on the test set and, once the metrics are reached, saving it as the required garbage classification target detection model;
step 3: image acquisition and target detection
capturing an image of the garbage on the conveyor belt with the industrial camera and transmitting it to the garbage classification target detection model on the server; after the model extracts features from the image, it gives the target's classes_id and position/time information [x_min, y_min, x_max, y_max, t], and the position and category information is then pushed into the queue in the format [x_min, y_min, x_max, y_max, t, classes_id];
step 4: multi-mechanical-arm cooperative grabbing
when mechanical arms 1 and 2 are idle, their threads continuously check whether data exists in the data queue; when data is present, a group of data is read and dequeued to obtain its position information, the arm is controlled to grab by combining the conveyor-belt speed and the distance between the arm and the camera, and the object is placed into the specified recycling bin according to the category information read;
the step 3 is specifically realized as follows:
the process of the industrial camera for acquiring a valid detection frame is as follows:
when an object moves on the conveyor belt, the industrial camera acquires an image of the object when the object just enters the visual field of the industrial camera, and position information and category information of the object are given; however, the object always exists in the visual field in a period of time, so that effective and non-repeated position information and category information need to be given, and the basic scene is as follows: let the upper left corner of the field of view be the origin [0,0], and the right side be the X-axis direction and the downward side be the Y-axis direction, the determination process is as follows: ob1p1, ob1p2, ob1p3 are respectively the positions p1, p2, p3 of the object ob1 at different moments of the conveyor belt, when the object ob1 is detected at the position p1, the information is [ x1_ min, y1_ min, x1_ max, y1_ max, t, classes _ id ], the current detection frame is combined with the detection data of the picture at the next moment, the data detected when the object ob1 is at the position p2 is [ x2_ min, y2_ min, x2_ max, y2_ max, t, classes _ id ], the [ y1_ min, y1_ max ] and [ y2_ min, y2_ max ] are compared, when the compared difference value is within the set range, the object is considered to be the same object, then x1_ max-x1_ min and x2_ max-x2_ min are compared, and when the compared difference value is within the set range, the object ob1 is considered to be in the view field; when the object moves to p3, x3_ max-x3_ min begins to change, at this moment, ob1 is considered to be about to leave the visual field, and the detection information [ x3_ min, y3_ min, x3_ max, y3_ max, t, classes _ id ] is inserted into the data queue, namely the detection frame is an effective detection frame;
the step 4 is specifically realized as follows:
setting the speed of the conveyor belt as v, defining the grabbing range of the mechanical arm as [ -z, z ], wherein the forward direction is the direction towards the industrial camera, the time for completing a complete grabbing action is delta T, and the time for the mechanical arm to grab downwards is delta X, so that the mechanical arm can grab the object when the object is at the position of [ -z + v delta X, z + delta X ];
assuming that each mechanical arm returns to its initial state after every grab, mechanical arm 1 and mechanical arm 2 continuously scan the data queue and judge whether a grab is possible according to the current time, the time at which the object left the camera view, and the conveyor belt speed v; setting the times at which two objects ob1 and ob2 just leave the camera view as T1 and T2, the position information and category information of the two objects have already entered the data queue; as the conveyor belt carries them forward, their distances to the mechanical arms shrink, so at time t = T3 the distances from ob1 and ob2 to mechanical arm 1 are S1 - v(T3 - T1) and S1 - v(T3 - T2) respectively, and their distances to mechanical arm 2 are S2 - v(T3 - T1) and S2 - v(T3 - T2); at a certain time T4, when object ob1 enters the grabbing range [-z + vΔX, z + vΔX] of mechanical arm 1, mechanical arm 1 reads the data of ob1 and dequeues it, and during the following ΔT mechanical arm 1 reads no further data, leaving only mechanical arm 2 scanning the data queue; when reading data, a mutual-exclusion lock is added to ensure that only one thread operates on the data queue at any moment; for the grab itself, ob1 left the field of view at time t1, so at the current time t4 its distance from mechanical arm 1 is S1 - v(t4 - t1), and since the downward grab takes ΔX, the landing coordinates of the mechanical arm for this grab are [S1 - v(t4 - t1 + ΔX) + (x1_max - x1_min)/2, (y1_max - y1_min)/2 + y1_min]; at a certain time T5, when object ob2 enters the grabbing range [-z + vΔX, z + vΔX] of mechanical arm 2: if T5 ≥ T4 + ΔT, mechanical arm 1 has finished grabbing and both mechanical arms are idle at that moment, so either mechanical-arm thread may lock the data queue for operation; if T5 < T4 + ΔT, mechanical arm 2 performs the locked read and grab, its coordinates being obtained in the same manner as for mechanical arm 1.
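The grab-window test and landing-point formula in claim 1 can be sketched in Python. This is an illustrative sketch, not the patent's implementation: the function names and numeric values are invented, and the grab window is read as shifted by vΔX at both ends to account for belt travel during the downward motion.

```python
def in_grab_range(s_arm, t_leave, t_now, v, z, dx):
    """True while the object can still be grabbed by this arm.

    s_arm   distance from camera to the arm (S1 or S2)
    t_leave time the object left the camera view
    v       conveyor belt speed; z half-width of the arm's reach
    dx      time the downward grab motion takes (delta-X)
    """
    d = s_arm - v * (t_now - t_leave)        # current distance to the arm
    return -z + v * dx <= d <= z + v * dx    # window shifted by travel during dx

def drop_point(s_arm, t_leave, t_now, v, dx, box):
    """Landing coordinates: belt position after the dx-second downward
    motion, offset by the detected bounding box as in the claim."""
    x_min, y_min, x_max, y_max = box
    x = s_arm - v * (t_now - t_leave + dx) + (x_max - x_min) / 2
    y = (y_max - y_min) / 2 + y_min
    return x, y
```

For example, with S1 = 100, v = 5, z = 10 and ΔX = 1, an object that left the camera view at t = 0 becomes grabbable once its distance to the arm falls into [-5, 15].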
2. The method of claim 1, wherein when an object ob3 reaches the grabbing range, if both mechanical arms 1 and 2 are idle, either mechanical-arm thread can lock the data queue for operation; if mechanical arms 1 and 2 are both in the grabbing state, further mechanical arms can be added to grab, realizing the cooperative operation of multiple mechanical arms.
3. The method of claim 1, wherein the object detection-based garbage sorting system comprises a garbage classification object detection model, a mechanical arm, an industrial camera, a server, and a conveyor belt;
the garbage classification target detection model is obtained by training a YOLOv3 neural network model on the training set, continuously tuning the trained YOLOv3 neural network model on the cross-validation set, and evaluating the tuned final model on the test set to obtain its metrics;
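The training / cross-validation / test workflow above presupposes a three-way split of the labelled data. A minimal sketch follows; the helper name and the 70/15/15 ratios are assumptions for illustration, not values stated in the patent:

```python
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=0):
    """Shuffle labelled samples and split them into training,
    cross-validation and test subsets."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_dataset(range(100))
```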
the training set, the cross validation set and the test set form a labeled data set;
the labelled data set is built by using image acquisition equipment to collect, on site at a garbage disposal station, garbage pictures of real scenes and labelling the categories of the garbage in the pictures, the labelled garbage categories comprising bottles, plastics, metals, harmful articles, landfill/incineration waste and paper; taking the upper-left corner of the garbage picture as the origin [0, 0], the rightward direction as the positive X axis and the downward direction as the positive Y axis, the data annotation format is one XML file per garbage picture; each piece of garbage is annotated with a rectangular box, and the XML file records the coordinates of the upper-left and lower-right corners of the rectangular box, denoted [x_min, y_min, x_max, y_max], together with the class_id of the garbage;
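The per-picture XML with corner coordinates and a class label matches the common Pascal VOC annotation layout, which the Python standard library can parse. The element names below (`object`, `name`, `bndbox`) follow the VOC convention and are an assumption about the exact file layout, which the patent does not spell out:

```python
import xml.etree.ElementTree as ET

def parse_annotation(xml_text):
    """Extract [x_min, y_min, x_max, y_max, class_name] for every
    rectangular box recorded in one annotation file."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.find("name").text             # garbage category label
        bb = obj.find("bndbox")                  # upper-left / lower-right corners
        boxes.append([int(bb.find("xmin").text), int(bb.find("ymin").text),
                      int(bb.find("xmax").text), int(bb.find("ymax").text), name])
    return boxes

sample = """<annotation>
  <object><name>bottle</name>
    <bndbox><xmin>34</xmin><ymin>50</ymin><xmax>120</xmax><ymax>200</ymax></bndbox>
  </object>
</annotation>"""
boxes = parse_annotation(sample)
```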
the server is connected with the mechanical arm and the industrial camera and then respectively creates threads; the server establishes a detection thread for each mechanical arm, namely establishes a thread 1 for the mechanical arm 1 and establishes a thread 2 for the mechanical arm 2;
the industrial camera is arranged on one side of the mechanical arms, at distance S1 from mechanical arm 1 and distance S2 from mechanical arm 2; the server directly calls the SDK of the industrial camera through a thread to obtain real-time images of the conveyor belt; the server feeds each acquired image to the garbage classification target detection model as an input picture; the garbage classification target detection model outputs the position information and category information of the input picture and pushes them into a data queue stored on the server in the format [x_min, y_min, x_max, y_max, t, class_id];
the two threads of mechanical arm 1 and mechanical arm 2 continuously scan the data queue on the server; according to the data read, each mechanical arm calls its SDK to obtain the grabbing parameter information, writes the parameter information to the mechanical arm mainboard, which controls the mechanical arm to complete the grab, and places the object into the designated recycling bin according to the category information in the data.
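The two arm threads sharing one locked data queue can be sketched as follows. The queue entries follow the claimed [x_min, y_min, x_max, y_max, t, class_id] format, while the worker loop, the timings, and the `picked` list standing in for the arm SDK calls are illustrative assumptions:

```python
import threading
import time
from collections import deque

detections = deque()            # entries: [x_min, y_min, x_max, y_max, t, class_id]
queue_lock = threading.Lock()   # the mutual-exclusion lock from the claims
picked = []                     # (arm_id, entry) pairs, stand-in for actual grabs
stop = threading.Event()

def arm_worker(arm_id):
    # Each mechanical-arm thread continuously scans the shared data queue;
    # the lock guarantees only one thread dequeues an entry at a time.
    while not stop.is_set() or detections:
        with queue_lock:
            entry = detections.popleft() if detections else None
        if entry is not None:
            picked.append((arm_id, entry))   # real system: call the arm SDK here
            time.sleep(0.01)                 # stand-in for the grab duration delta-T
        else:
            time.sleep(0.001)

threads = [threading.Thread(target=arm_worker, args=(i,)) for i in (1, 2)]
for th in threads:
    th.start()
for k in range(6):
    detections.append([10, 10, 50, 50, time.time(), k])
time.sleep(0.1)
stop.set()
for th in threads:
    th.join()
```

Because `popleft` happens only while holding the lock, each detection is dequeued by exactly one arm thread, which is the cooperative behaviour the claims describe.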
CN202010321347.1A 2020-04-22 2020-04-22 Garbage sorting system and method based on target detection Active CN111626117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010321347.1A CN111626117B (en) 2020-04-22 2020-04-22 Garbage sorting system and method based on target detection


Publications (2)

Publication Number Publication Date
CN111626117A CN111626117A (en) 2020-09-04
CN111626117B true CN111626117B (en) 2023-04-18

Family

ID=72260050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010321347.1A Active CN111626117B (en) 2020-04-22 2020-04-22 Garbage sorting system and method based on target detection

Country Status (1)

Country Link
CN (1) CN111626117B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110949991A (en) * 2020-01-03 2020-04-03 佛亚智能装备(苏州)有限公司 Multi-station detection material conveying and circuit control method
CN112248021B (en) * 2020-09-29 2022-10-21 北海职业学院 Robot-based pneumatic chuck clamping device
CN112215131A (en) * 2020-10-10 2021-01-12 李睿宸 Automatic garbage picking system and manual operation and automatic picking method thereof
CN112232246A (en) * 2020-10-22 2021-01-15 深兰人工智能(深圳)有限公司 Garbage detection and classification method and device based on deep learning
CN112329849A (en) * 2020-11-04 2021-02-05 中冶赛迪重庆信息技术有限公司 Scrap steel stock yard unloading state identification method based on machine vision, medium and terminal
CN113083703A (en) * 2021-03-10 2021-07-09 浙江博城机器人科技有限公司 Control method of garbage sorting robot based on unmanned navigation
CN113128363A (en) * 2021-03-31 2021-07-16 武汉理工大学 Machine vision-based household garbage sorting system and method
CN113688825A (en) * 2021-05-17 2021-11-23 海南师范大学 AI intelligent garbage recognition and classification system and method
CN114192447A (en) * 2021-12-08 2022-03-18 上海电机学院 Garbage sorting method based on image recognition
CN114429573A (en) * 2022-01-10 2022-05-03 华侨大学 Data enhancement-based household garbage data set generation method
CN115447924A (en) * 2022-09-05 2022-12-09 广东交通职业技术学院 Machine vision-based garbage classification and sorting method, system, device and medium
CN116342895B (en) * 2023-05-31 2023-08-11 浙江联运知慧科技有限公司 Method and system for improving sorting efficiency of renewable resources based on AI (advanced technology attachment) processing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0706838A1 (en) * 1994-10-12 1996-04-17 PELLENC (Société Anonyme) Machine and method for sorting varied objects using at least one robotic arm
US6124560A (en) * 1996-11-04 2000-09-26 National Recovery Technologies, Inc. Teleoperated robotic sorting system
CN110624857A (en) * 2019-10-21 2019-12-31 广东弓叶科技有限公司 Object type identification method and sorting equipment
CN110909660A (en) * 2019-11-19 2020-03-24 佛山市南海区广工大数控装备协同创新研究院 Plastic bottle detection and positioning method based on target detection
CN111003380A (en) * 2019-12-25 2020-04-14 深圳蓝胖子机器人有限公司 Method, system and equipment for intelligently recycling garbage



Similar Documents

Publication Publication Date Title
CN111626117B (en) Garbage sorting system and method based on target detection
CN113158956B (en) Garbage detection and identification method based on improved yolov network
CN111046948B (en) Point cloud simulation and deep learning workpiece pose identification and robot feeding method
CN108229665A (en) A kind of the System of Sorting Components based on the convolutional neural networks by depth
CN111015662B (en) Method, system and equipment for dynamically grabbing object and method, system and equipment for dynamically grabbing garbage
CN112845143A (en) Household garbage classification intelligent sorting system and method
CN101912847B (en) Fruit grading system and method based on DSP machine vision
CN111715559A (en) Garbage sorting system based on machine vision
CN206132657U (en) Gilt quality intelligent detecting system based on machine vision
CN102837406A (en) Mold monitoring method based on FAST-9 image characteristic rapid registration algorithm
CN104647388A (en) Machine vision-based intelligent control method and machine vision-based intelligent control system for industrial robot
CN112102368A (en) Robot garbage classification and sorting method based on deep learning
CN208092786U (en) A kind of the System of Sorting Components based on convolutional neural networks by depth
CN111590611A (en) Article classification and recovery method based on multi-mode active perception
CN103873779B (en) Method for controlling intelligent camera for parking lot
CN111144480A (en) Visual classification method, system and equipment for recyclable garbage
CN113971746B (en) Garbage classification method and device based on single hand teaching and intelligent sorting system
CN103148783B (en) A kind of automatic testing method of valve rocker installation site
CN114192447A (en) Garbage sorting method based on image recognition
CN112150507B (en) 3D model synchronous reproduction method and system for object posture and displacement
CN213005371U (en) Plastic bottle rubbish letter sorting manipulator
CN109409453A (en) A kind of cartoning sealing machine lacks a traceability system and its means of proof
JP2965886B2 (en) Apparatus for sorting objects
CN107671002A (en) The clothes method for sorting and its device of view-based access control model detection
CN211839079U (en) Chip detecting, sorting and counting system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant