CN111652214A - Garbage bottle sorting method based on deep learning - Google Patents

Garbage bottle sorting method based on deep learning

Info

Publication number
CN111652214A
CN111652214A
Authority
CN
China
Prior art keywords
deep learning
trash
layer
classification
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010456143.9A
Other languages
Chinese (zh)
Other versions
CN111652214B (en)
Inventor
杨海东
关洁
李淑芬
罗哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Foshan Guangdong University CNC Equipment Technology Development Co. Ltd
Original Assignee
Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Foshan Guangdong University CNC Equipment Technology Development Co. Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute, Foshan Guangdong University CNC Equipment Technology Development Co. Ltd filed Critical Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Priority to CN202010456143.9A priority Critical patent/CN111652214B/en
Publication of CN111652214A publication Critical patent/CN111652214A/en
Application granted granted Critical
Publication of CN111652214B publication Critical patent/CN111652214B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/24: Aligning, centring, orientation detection or correction of the image
    • G06V10/245: Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]


Abstract

The invention discloses a garbage bottle sorting method based on deep learning, which comprises the following steps: S1, presetting image acquisition and data classification standards, and labeling the pictures; S2, classifying the collected images using a deep-learning object detection algorithm to obtain a series of segmented pictures; S3, using a residual neural network to classify the colors of the garbage bottles in the segmented pictures; S4, transmitting the obtained position information and color attribute of each garbage bottle to the rear manipulator for sorting. The method achieves accurate positioning of the garbage bottles through deep-learning object detection, returns target positions with high precision, and greatly improves the garbage bottle sorting accuracy of industrial mechanical equipment.

Description

Garbage bottle sorting method based on deep learning
Technical Field
The invention relates to the technical field of object detection and image classification of computer vision and deep learning, in particular to a garbage bottle sorting method based on deep learning.
Background
With the acceleration of urbanization in China and the improvement of living standards, the variety and quantity of household garbage keep increasing. Plastic bottles are widely used in daily life as food packaging, so the quantity of waste plastic bottles also grows by the day. If waste plastic bottles are discarded, burned or buried carelessly, they damage the environment, can even harm people's health, and waste resources. In recent years the problem of environmental pollution has become more serious, and recycling waste plastic bottles has received growing attention internationally. Because plastic bottles of a single color greatly reduce the limitations on applying recycled plastic, recovered waste plastic bottles can be turned into plastic particles of different colors through color screening, crushing, granulating, modifying and other processes, and then reused as recycled raw material for producing products such as belts and shoes.
However, in the actual sorting work of garbage bottle recycling we find: first, a large number of pop cans, aluminum cans and other garbage bottles and sundries are mixed in with the plastic bottles, which greatly reduces recovery efficiency; second, plastic bottle recovery in China is still done by manual labor, which is very inefficient at classification, and the harsh factory environment is unsuitable for workers over long hours; third, the huge number of garbage bottles requires a large workforce for manual sorting, so labor costs are high.
Therefore, the industry is actively moving toward automation, introducing or developing high-efficiency mechanical equipment that uses machine vision to screen and recycle objects automatically. However, the accuracy of its recognition and classification still cannot meet the requirements of garbage identification in modern industrial applications, so it has not yet been industrialized.
Disclosure of Invention
Aiming at the problems, the invention provides a garbage bottle sorting method based on deep learning, which mainly solves the problems in the background technology.
The invention provides a garbage bottle sorting method based on deep learning, which comprises the following steps:
s1, presetting image acquisition and data classification standards, and labeling the pictures;
s2, classifying the collected images by using an object detection algorithm based on deep learning to obtain a series of segmentation pictures;
s3, utilizing a residual error neural network to classify colors of the garbage bottles in the segmented pictures;
and S4, transmitting the obtained position information and color attribute of each trash bottle to a rear manipulator for classification.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention integrates deep learning into the scene of a garbage sorting factory, freeing up manpower and improving the sorting efficiency of existing garbage bottles.
2. The invention achieves accurate positioning of the garbage bottles using deep-learning object detection; the returned target positions are highly precise, greatly improving the garbage bottle sorting accuracy of industrial mechanical equipment.
3. The systematic garbage bottle sorting method first segments the garbage bottles with the deep learning method and then color-classifies the plastic bottles, meeting the practical requirements of garbage bottle classification with strong market applicability.
Drawings
The drawings are for illustrative purposes only and are not to be construed as limiting the patent; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
FIG. 1 is a schematic overall flow chart of an embodiment of the present invention;
FIG. 2 is a schematic flow chart of garbage bottle classification based on object detection according to an embodiment of the present invention;
FIG. 3 is a schematic view of a process for color sorting plastic bottles according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating the segmentation results of garbage bottle classification according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating the color classification results of plastic bottles according to an embodiment of the present invention.
Detailed Description
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted" and "connected" are to be interpreted broadly: a connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediate medium, or an internal communication between two elements. The specific meaning of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis. The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
The invention discloses a garbage bottle sorting method based on deep learning. It brings deep learning into the factory to help people classify and screen garbage bottles, training a machine through neural networks to recognize and classify garbage as a human would. This reduces the time and economic cost of garbage bottle classification, saves manpower, and improves recognition accuracy.
Example 1:
The garbage bottle sorting method based on deep learning applies artificial intelligence, through a target detection algorithm, to the garbage bottle sorting scene, achieving efficient and high-precision sorting of garbage bottles. The implementation steps are as follows:
Step S1, image acquisition and definition of the data classification standard. First, images of the garbage bottles in the factory are collected with an imaging system composed of an industrial light source and an industrial camera. Then the sorting task is specified from the acquired images: the first stage classifies the large number of garbage bottles, which contain non-plastic bottles such as pop cans and aluminum cans, and fixes the classification standard for each category so that the subsequent picture labeling can be carried out accurately and effectively; the second stage further sorts the plastic bottles obtained in the first stage by color, since single-color plastic bottles reduce the limitations on reusing recycled plastic.
Step S2, picture labeling and data cleaning. Since supervised training requires a labeled data set, the acquired images must be labeled according to the classification standard of step S1; the labeled data set then serves as the ground truth for computing the model's prediction error during training.
Step S3, classifying the garbage bottles. The garbage bottles in the actually collected images are classified according to the standard using a deep-learning object detection algorithm, producing a series of segmented pictures; the segmentation result is shown in fig. 4.
Step S4, color-classifying the plastic bottles. The plastic bottles in the segmented pictures obtained in step S3 are further classified by color using a residual neural network; the classification result is shown in fig. 5.
Step S5, transmitting the position information and color attribute of each plastic bottle obtained in step S4 to the rear-end manipulator for sorting.
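Steps S3 to S5 above can be sketched as a small orchestration loop. This is an illustrative sketch only: the stub functions `detect_bottles` and `classify_colour`, and all values they return, are hypothetical stand-ins for the trained detector, the residual-network color classifier and the manipulator interface, not code from the patent.

```python
def detect_bottles(image):
    """Stand-in for the deep-learning detector of step S3.
    Returns (category, bounding_box) pairs; here a fixed dummy result."""
    return [("plastic_bottle", (40, 60, 120, 200)),
            ("aluminum_can", (300, 80, 90, 180))]

def classify_colour(image, box):
    """Stand-in for the residual-network colour classifier of step S4."""
    return "green"

def sort_bottles(image):
    """End-to-end pipeline: detect every bottle, colour-classify only the
    plastic ones, and collect (position, colour) records that would be sent
    to the rear-end manipulator in step S5."""
    records = []
    for category, box in detect_bottles(image):
        colour = classify_colour(image, box) if category == "plastic_bottle" else None
        records.append({"category": category, "box": box, "colour": colour})
    return records

result = sort_bottles(image=None)   # a real image would be passed here
```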
Preferably, in step S1, based on the large number of garbage bottle images collected on site and the actual situation, all garbage bottles are classified into four categories: aluminum cans, plastic bottles, pop cans and sundries.
Preferably, in step S3, the step of classifying the trash bottle using the deep learning-based object detection algorithm includes: s301, constructing a neural network model based on object detection, wherein a specific schematic diagram is shown in FIG. 2;
s302, training the network model of S301;
and S303, continuously adjusting the data set and the model parameters, repeating the step S302 to obtain a model meeting the requirements of precision and speed, and putting the model into a factory for use to obtain a segmentation result graph of the garbage bottle.
Preferably, in step S4, the segmented plastic bottle pictures obtained in step S3 are first sorted and labeled into the color categories evergreen, dark brown, white, pink, green, blue, transparent farmer A, transparent dirty, yellow, black and light blue. The picture data are then divided into a training set, a verification set and a test set in the ratio 8:1:1, and a residual neural network performs the color classification, as follows: first, pictures of size 224 × 224 are input to the convolution and pooling layers; second, the features of the plastic bottle color are extracted by the residual network; third, the resulting feature map passes through a global average pooling layer and a fully connected layer to produce the final color classification; fourth, the training parameters, including the learning rate, number of training runs and iterations, are updated to determine the final network model. The implementation flow is shown in fig. 3.
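The 8:1:1 train/validation/test split described above can be sketched in a few lines of Python; the file names and the fixed seed are made up for illustration.

```python
import random

def split_8_1_1(samples, seed=0):
    """Shuffle a list of labelled image paths and split it into
    training, verification and test sets in the ratio 8:1:1."""
    rng = random.Random(seed)          # fixed seed -> reproducible split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_8_1_1([f"bottle_{i:03d}.jpg" for i in range(100)])
```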
Preferably, for step S301, a neural network model is constructed comprising a feature extraction module, an RPN module, a RoiP (RoI pooling) layer and a classification regression module. The feature extraction module consists mainly of a convolutional neural network and extracts features from input pictures of any size. The RPN module recommends candidate regions. The RoiP layer makes each target candidate region generate a feature map of fixed size, which is sent to the fully connected layer of the convolutional network to judge the target category. The classification regression module contains a classification function that assigns the target category and a regression function that refines the position of the target detection frame; the classification probability and the frame regression are trained jointly, using Softmax Loss for the detection classification probability and Smooth L1 Loss for the detection frame regression.
(1) Feature extraction module. A convolutional neural network extracts features from input picture data of any size; each convolutional layer extracts more abstract features on the basis of the information from the previous layer, finally yielding a convolutional feature map that is much smaller than the original picture in the spatial dimensions but deeper in representation.
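The progressive spatial shrinking performed by the backbone can be illustrated with a toy pooling step. This is a minimal sketch of the downsampling idea only, not the actual convolutional network: each 2 × 2 max-pool halves both spatial dimensions, the way successive backbone stages turn the image into a smaller, deeper map.

```python
import numpy as np

def maxpool2x2(fmap):
    """2x2 max-pooling with stride 2: halves each spatial dimension
    of a single-channel feature map."""
    h, w = fmap.shape
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.arange(64.0).reshape(8, 8)   # toy single-channel "image"
f1 = maxpool2x2(image)                  # 8x8 -> 4x4
f2 = maxpool2x2(f1)                     # 4x4 -> 2x2
```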
(2) RPN module. On the feature map produced by the feature extraction module, a 3 × 3 sliding window generates a 512-dimensional fully connected feature, after which two branch fully connected layers are produced: a reg-layer, which predicts the position coordinates [x, y, w, h] of the candidate frame of the target proposal region; and a cls-layer, which judges whether the target proposal box is foreground or background (i.e., whether it is an object). Specifically, to realize the function of the RPN module, two-category labels (object or not) are first constructed: a candidate region-of-interest box whose IoU with the ground truth exceeds 0.7 is marked as a positive sample, and one whose IoU with the ground truth is below 0.3 is marked as a negative sample; positive and negative samples are then drawn in a 1:1 ratio to participate in the subsequent model training, while the remaining anchors are used neither as positive nor as negative samples. Next, the RPN module generates the target region candidate boxes through the following steps: ① generate anchors and regress the candidate boxes toward the ground truth (GT) using the four translation-and-scaling transformations

$$t_x = (x - x_a)/w_a,\quad t_y = (y - y_a)/h_a,\quad t_w = \log(w/w_a),\quad t_h = \log(h/h_a);$$

② sort the anchors from large to small by their predicted object probability scores and extract the first N; ③ map the selected anchors onto the feature map obtained by the original convolutional neural network, check whether they cross the image boundary by a large margin, and eliminate anchors that severely exceed the boundary; ④ apply non-maximum suppression to filter out a further portion of the anchors; ⑤ sort the remaining anchors from large to small by object probability score again and select the first N as the final target candidate boxes. The loss function used in training the RPN network is:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^*\,L_{reg}(t_i, t_i^*)$$

where $L_{cls}$ is the Softmax Loss used for the detection classification probability, $L_{reg}$ is the Smooth L1 Loss used for the detection frame regression, $i$ is the index of an anchor, $p_i$ is the predicted Softmax probability that the anchor is an object, $p_i^*$ is the corresponding ground-truth label (0 or 1), $t_i$ is the predicted target candidate box, and $t_i^*$ is the ground-truth bounding box corresponding to the object.
(3) RoiP layer. This module takes the feature maps of different sizes obtained from the convolutional neural network, together with the target-region candidate-frame feature maps obtained from the RPN network, passes them through the RoiP layer, and outputs them at a fixed size for the subsequent convolutional neural network to use.
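The fixed-size output of the RoiP layer can be illustrated with a minimal max-pooling sketch over a single-channel map. This is a simplification of the real layer, which operates per channel on batched tensors; the feature map and RoIs here are made up.

```python
import numpy as np

def roi_max_pool(fmap, roi, out=(2, 2)):
    """Crop the RoI from the feature map and max-pool it into a fixed
    out_h x out_w grid, regardless of the RoI's own size."""
    x1, y1, x2, y2 = roi
    region = fmap[y1:y2, x1:x2]
    h, w = region.shape
    oh, ow = out
    pooled = np.empty(out)
    for i in range(oh):
        for j in range(ow):
            ys = slice(i * h // oh, (i + 1) * h // oh)
            xs = slice(j * w // ow, (j + 1) * w // ow)
            pooled[i, j] = region[ys, xs].max()   # max over each bin
    return pooled

fmap = np.arange(36.0).reshape(6, 6)
p_small = roi_max_pool(fmap, (0, 0, 4, 4))   # 4x4 region -> 2x2 output
p_large = roi_max_pool(fmap, (0, 0, 6, 6))   # 6x6 region -> same 2x2 output
```

Regions of different sizes come out at the same fixed size, which is what lets the subsequent fully connected layers accept any candidate frame.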
(4) Classification regression module. This module outputs the final target classification result and the accurate position of the target detection frame. Specifically, the target candidate frames are classified with a fully connected layer and Softmax to determine the specific category of each candidate frame, and bounding-box regression refines each target candidate frame to output higher-precision position coordinates for the target detection frame.
In the drawings, positional relationships are described for illustrative purposes only and are not to be construed as limiting the present patent. The above-described embodiments of the present invention are merely examples for clearly illustrating the invention and do not limit its embodiments; other variations and modifications will be apparent to persons skilled in the art in light of the above description, and it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the claims of the present invention.

Claims (8)

1. A garbage bottle sorting method based on deep learning, characterized by comprising the following steps:
s1, presetting image acquisition and data classification standards, and labeling the pictures;
s2, classifying the collected images by using an object detection algorithm based on deep learning to obtain a series of segmentation pictures;
s3, utilizing a residual error neural network to classify colors of the garbage bottles in the segmented pictures;
and S4, transmitting the obtained position information and color attribute of each trash bottle to a rear manipulator for classification.
2. The garbage bottle sorting method based on deep learning of claim 1, wherein in the step S1, the garbage bottles are classified into four categories according to the images of the garbage bottles collected on site: aluminum cans, plastic bottles, pop cans and sundries.
3. The method for sorting garbage bottles based on deep learning as claimed in claim 1, wherein in the step S2, the classifying the garbage bottles by using the deep learning based object detection algorithm specifically comprises:
s201, constructing a neural network model based on object detection;
s202, training the neural network model;
and S203, continuously adjusting the data set and the model parameters, and repeating the step S202 to obtain the neural network model meeting the requirements on precision and speed.
4. The deep learning-based garbage bottle sorting method according to claim 3, wherein in the step S201, the neural network model comprises a feature extraction module, an RPN module, a RoiP layer and a classification regression module; the feature extraction module comprises a convolutional neural network for performing feature extraction on an input picture; the RPN module is used for recommending candidate regions; the RoiP layer enables each target candidate region to generate a feature map of fixed size to be sent to the fully connected layer of the convolutional neural network for determining the target class; the classification regression module comprises a classification function for classifying the target class and a regression function for determining the accurate position of the target detection frame, and Softmax Loss (detection classification probability) and Smooth L1 Loss (detection frame regression) are respectively selected to jointly train the classification probability and the frame regression.
5. The trash can sorting method based on deep learning of claim 4, wherein the feature extraction module performs feature extraction on input image data of any size by using a convolutional neural network, wherein each layer of convolutional network extracts more abstract features on the basis of information of a previous layer, and finally a convolutional feature map which is much smaller than an original image in one spatial dimension and is deeper in representation is obtained.
6. The deep learning-based garbage bottle sorting method according to claim 4, wherein the RPN module generates a full-connected feature with a length of 512D by using a 3 x 3 sliding window on the feature map obtained by the feature extraction module, and then generates a full-connected layer of two branches after the full-connected feature of 512D: one is reg-layer, which is used for predicting the position coordinates [ x, y, w, h ] corresponding to the candidate frame of the target suggested region; the other is a cls-layer for determining whether the target suggestion box is foreground or background.
7. The trash can sorting method based on deep learning of claim 4, wherein the RoiP layer enables feature maps obtained from convolutional neural networks with different sizes and feature maps of candidate frames of a target area obtained from an RPN network to pass through the RoiP layer and then output in a fixed size for subsequent convolutional neural networks to continue to use.
8. The garbage bottle sorting method based on deep learning of claim 1, wherein in step S3, the segmented pictures are classified and respectively labeled into the color categories evergreen, dark brown, white, pink, green, blue, transparent farmer A, transparent dirty, yellow, black and light blue; the segmented picture data are then divided into a training set, a verification set and a test set in the ratio 8:1:1, and color classification of the plastic bottles is realized with a residual neural network, specifically comprising the following steps:
s301, inputting pictures with the size of 224 × 224 into the convolution layer and the pooling layer;
s302, extracting the characteristics of the color of the plastic bottle through a residual error neural network;
s303, the obtained feature map realizes final color classification through a global average pooling layer and a full connection layer;
and S304, updating the parameters of the training model to determine the final network model, wherein the parameters comprise the learning rate, the number of training runs and the number of iterations.
CN202010456143.9A 2020-05-26 2020-05-26 Garbage bottle sorting method based on deep learning Active CN111652214B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010456143.9A CN111652214B (en) 2020-05-26 2020-05-26 Garbage bottle sorting method based on deep learning


Publications (2)

Publication Number Publication Date
CN111652214A true CN111652214A (en) 2020-09-11
CN111652214B CN111652214B (en) 2024-05-28

Family

ID=72343204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010456143.9A Active CN111652214B (en) 2020-05-26 2020-05-26 Garbage bottle sorting method based on deep learning

Country Status (1)

Country Link
CN (1) CN111652214B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107798335A (en) * 2017-08-28 2018-03-13 浙江工业大学 A kind of automobile logo identification method for merging sliding window and Faster R CNN convolutional neural networks
WO2018137357A1 (en) * 2017-01-24 2018-08-02 北京大学 Target detection performance optimization method
CN109190691A (en) * 2018-08-20 2019-01-11 小黄狗环保科技有限公司 The method of waste drinking bottles and pop can Classification and Identification based on deep neural network
CN109190581A (en) * 2018-09-17 2019-01-11 金陵科技学院 Image sequence target detection recognition methods
CN110116415A (en) * 2019-06-12 2019-08-13 中北大学 A kind of Bottle & Can class rubbish identification sorting machine people based on deep learning
CN110738131A (en) * 2019-09-20 2020-01-31 广州游艺云物联网技术有限公司 Garbage classification management method and device based on deep learning neural network
CN110866561A (en) * 2019-11-18 2020-03-06 佛山市南海区广工大数控装备协同创新研究院 Plastic bottle color sorting method based on image recognition
CN210161172U (en) * 2019-06-12 2020-03-20 中北大学 Bottle and can type garbage identification and classification robot based on deep learning
CN110909660A (en) * 2019-11-19 2020-03-24 佛山市南海区广工大数控装备协同创新研究院 Plastic bottle detection and positioning method based on target detection
CN111091059A (en) * 2019-11-19 2020-05-01 佛山市南海区广工大数控装备协同创新研究院 Data equalization method in household garbage plastic bottle classification


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN, Yuchao; BIAN, Xiaoxiao: "Medical waste classification *** based on machine vision and deep learning", Computer Programming Skills &amp; Maintenance, no. 05, 18 May 2019 (2019-05-18), pages 110 - 112 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505633A (en) * 2021-05-19 2021-10-15 海南师范大学 Household garbage sorting treatment method and system based on block chain
CN113731836A (en) * 2021-08-04 2021-12-03 华侨大学 Urban solid waste online sorting system based on deep learning
CN114937199A (en) * 2022-07-22 2022-08-23 山东省凯麟环保设备股份有限公司 Garbage classification method and system based on discriminant feature enhancement
CN114937199B (en) * 2022-07-22 2022-10-25 山东省凯麟环保设备股份有限公司 Garbage classification method and system based on discriminant feature enhancement
CN115205521A (en) * 2022-08-09 2022-10-18 湖南大学 Kitchen waste detection method based on neural network
CN115205521B (en) * 2022-08-09 2024-03-26 湖南大学 Kitchen waste detection method based on neural network

Also Published As

Publication number Publication date
CN111652214B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN111652214A (en) Garbage bottle sorting method based on deep learning
Wang et al. RENet: Rectangular convolution pyramid and edge enhancement network for salient object detection of pavement cracks
CN108985169B (en) Shop cross-door operation detection method based on deep learning target detection and dynamic background modeling
CN110781896B (en) Track garbage identification method, cleaning method, system and resource allocation method
CN111881730A (en) Wearing detection method for on-site safety helmet of thermal power plant
CN104992449A (en) Information identification and surface defect on-line detection method based on machine visual sense
CN111967313B (en) Unmanned aerial vehicle image annotation method assisted by deep learning target detection algorithm
Gyawali et al. Comparative analysis of multiple deep CNN models for waste classification
CN104574389A (en) Battery piece chromatism selection control method based on color machine vision
CN112330593A (en) Building surface crack detection method based on deep learning network
CN113469264A (en) Construction method of automatic garbage classification model, garbage sorting method and system
CN114596555B (en) Obstacle point cloud data screening method and device, electronic equipment and storage medium
CN114639115B (en) Human body key point and laser radar fused 3D pedestrian detection method
CN116665011A (en) Coal flow foreign matter identification method for coal mine belt conveyor based on machine vision
CN113065486A (en) Floater identification method, system, equipment and medium suitable for river channel
CN115035082B (en) Method for detecting defects of transparent parts of aircraft based on YOLOv4 improved algorithm
CN113888754A (en) Vehicle multi-attribute identification method based on radar vision fusion
CN113111875A (en) Seamless steel rail weld defect identification device and method based on deep learning
CN115620178A (en) Real-time detection method for abnormal and dangerous behaviors of power grid of unmanned aerial vehicle
Meria Development of automatic industrial waste detection system for leather products using artificial intelligence
CN113077438B (en) Cell nucleus region extraction method and imaging method for multi-cell nucleus color image
CN111414855B (en) Telegraph pole sign target detection and identification method based on end-to-end regression model
CN113762144A (en) Deep learning-based black smoke vehicle detection method
CN116206148A (en) Intelligent detection, identification and classification method for decoration waste
CN114241189B (en) Ship black smoke recognition method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant