CN116206155A - Waste steel classification and identification method based on YOLOv5 network - Google Patents

Waste steel classification and identification method based on YOLOv5 network

Info

Publication number
CN116206155A
CN116206155A (application CN202310170325.3A)
Authority
CN
China
Prior art keywords
yolov5
scrap steel
identification
classification
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310170325.3A
Other languages
Chinese (zh)
Inventor
余飞鹏 (Yu Feipeng)
杨祥 (Yang Xiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Technology
Original Assignee
Guilin University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Technology filed Critical Guilin University of Technology
Priority to CN202310170325.3A priority Critical patent/CN116206155A/en
Publication of CN116206155A publication Critical patent/CN116206155A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a scrap steel classification and identification method based on a YOLOv5 network, relating mainly to the classification and identification of scrap steel. The method comprises the following steps. Step 1: acquire scrap steel classification and identification data, and capture the image to be detected with a camera. Step 2: preprocess the data, label it, divide the data set, and send the image or video stream to be detected to the model for detection and identification. The beneficial effects of the invention are: the YOLOv5 model is selected for the scrap steel classification and identification task because it trains and detects quickly, has few model parameters, and can be deployed very rapidly; a semantic segmentation head is introduced for detecting steel slag and bulk scrap; an anchor and a detection layer are added for the small pig iron targets; to address missed overlapping targets, the non-maximum suppression (NMS) method in post-processing is changed to the soft non-maximum suppression (Soft-NMS) method; and alpha-IoU is introduced to avoid vanishing gradients when the real box and the predicted box overlap.

Description

Waste steel classification and identification method based on YOLOv5 network
Technical Field
The invention relates to the field of classification and identification of scrap steel, in particular to research and application of a scrap steel classification and identification model based on a YOLOv5 network.
Background
Methods for classifying and identifying scrap steel mainly comprise manual classification, classification by physical characteristics, and intelligent image classification. Intelligent image classification follows two main directions: detection methods based on sensors and spectroscopy, and target detection methods based on deep learning. The former suffers from poor recognition of scrap with irregular sizes and shapes and of non-ferrous metals, and from high computational cost. At present, deep-learning-based target detection has matured, and a large number of algorithms have been deployed in industry.
In current steel production, scrap steel is generally recycled into the furnace for remelting in order to reduce cost and improve smelting efficiency. Because scrap is used in large quantities and many materials are mixed together, mixed loads are common, and the scrap must be classified through high-precision grade judgment to guarantee product quality and improve steel yield. Traditional scrap grading is strongly affected by subjective human factors and places high demands on personnel: inspectors must generally be familiar with the standards and have rich experience. Judgments also differ between individuals, results may be affected by fatigue and mood, no quantitative evaluation conclusion is formed, sound data analysis cannot be built up, and suppliers find the results hard to trust. At the same time, the grading environment is harsh: a quality inspector must climb onto the roof of a large truck four to five meters high each time to observe the scrap in the truck at close range, with high labor intensity and high operational risk. How to solve the many problems of traditional scrap grading and realize intelligent scrap grading that meets the demands of the new industrial revolution is a focus of attention for iron and steel enterprises.
Against the background of the rapid development of deep learning, some intelligent deep-learning-based scrap grading algorithms have already appeared. However, small pieces of scrap are hard to detect intelligently because of their small size, limited usable features, and the ease with which they are occluded by other scrap, leading to low grading accuracy and large errors. In view of this, a scrap classification method based on small-target data enhancement and multi-view collaborative reasoning is proposed.
For the scrap classification and identification task, a YOLOv5-based scrap classification, identification and grading system is proposed to solve the problem of identifying scrap types under optical conditions; for specific scrap types whose material, edges and sizes are extremely irregular, a segmentation head is added to assist in identifying the target objects. Classification and identification of scrap types from optical images is thus realized.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides research on, and application of, a scrap steel classification and identification model based on a YOLOv5 network, solving problems such as the quality inspector having to climb onto the roof of a large truck four to five meters high each time to observe the scrap in the truck at close range, the high labor intensity, and the high operational risk.
The scrap steel classification and identification method based on the YOLOv5 network comprises the following steps:
step 1: acquiring scrap steel classification and identification data, and capturing the image to be detected with a camera;
step 2: preprocessing the data, labeling it and dividing the data set, and sending the image or video stream to be detected to the model for detection and identification;
step 3: constructing and training a scrap steel classification and identification network model based on YOLOv5, randomly dividing the data samples into training, validation and test set images, marking the residual material in the images, and calculating the scaling of the marked images and filling their edges;
step 4: establishing the YOLOv5-based network model for scrap steel classification and identification, improving the algorithm in a targeted way with theoretical analysis, and applying mosaic data enhancement to the images marked in step 3;
step 5: inputting the data-enhanced training set images from step 4 into the YOLOv5 network model, and tuning the training hyperparameters with the validation set; after the optimal hyperparameters are obtained, iterating the YOLOv5 network model until the optimal YOLOv5 network model is determined;
step 6: inputting the test set into the optimal YOLOv5 network model, and outputting the scrap steel classification and recognition results.
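The data-set division in step 3 (a 3:2:1 train/validation/test split, per claim 2) can be sketched as follows. The file names and the seed are illustrative assumptions, not taken from the patent:

```python
# Sketch of the random 3:2:1 train/validation/test split described in step 3.
import random

def split_dataset(samples, ratios=(3, 2, 1), seed=0):
    """Randomly split samples into train/val/test by the given integer ratios."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    total = sum(ratios)
    n_train = len(shuffled) * ratios[0] // total
    n_val = len(shuffled) * ratios[1] // total
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Hypothetical file names; any list of sample identifiers works the same way.
samples = [f"scrap_{i:04d}.jpg" for i in range(600)]
train, val, test = split_dataset(samples)
print(len(train), len(val), len(test))  # 300 200 100
```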
Drawings
FIG. 1 is a schematic diagram of the technical route of the invention.
FIG. 2 shows the general steps of scrap steel classification and identification according to the present invention.
FIG. 3 is a diagram of the model structure of YOLOv5 of the invention.
FIG. 4 is a photograph of four types of scrap steel according to the present invention.
Detailed Description
The invention will be further illustrated with reference to specific examples. It is to be understood that these examples are illustrative of the present invention and are not intended to limit the scope of the present invention. Further, it is understood that various changes and modifications may be made by those skilled in the art after reading the teachings of the present invention, and such equivalents are intended to fall within the scope of the claims appended hereto.
Scrap steel classification recognition model research and application based on YOLOv5 network
1. The technical route is shown in figure 1 of the accompanying drawings in the specification:
(1) Data acquisition
(2) Data preprocessing, labeling and data set partitioning
(3) Construction and training of scrap steel classification and identification network model based on YOLOv5
(4) Analysis of the training results and evaluation of the established model
2. The technical measures and the research method are shown in figure 2 of the attached drawing in the specification:
(1) General steps of scrap steel classification and identification
The general steps of scrap steel classification and identification are shown in figure 2 of the accompanying drawings. The camera captures the image to be detected; detection runs in two modes, single-image detection and video-stream detection; the image or video stream to be detected is sent to the model for detection and identification; and the detection results are output, stored and displayed.
(2) Scrap steel classification recognition algorithm based on improved YOLOv5
Deep-learning target detectors fall into two families: one-stage and two-stage methods. A two-stage detector uses a two-stage sampling structure to address class imbalance: an RPN (Region Proposal Network) makes positive and negative samples more balanced, and a two-stage cascade fits the bounding box (bbox), first by coarse regression and then by fine adjustment. Representative two-stage algorithms include R-CNN and Faster R-CNN. A one-stage algorithm applies the network directly to the input image and outputs categories and corresponding locations; representative one-stage algorithms include SSD and the YOLO series. YOLO, an acronym of "You Only Look Once", is an object detection algorithm that divides the image into a grid. Each cell in the grid is responsible for detecting objects inside itself: if the center of an object falls into a cell, that cell is responsible for detecting it. Detection is solved as a regression problem, so the positions, categories and corresponding confidence probabilities of all objects in the image are obtained in a single forward pass.
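The grid-responsibility rule described above can be illustrated with a short sketch; the 7 x 7 grid size is an assumption for illustration (YOLOv5 itself uses stride-based grids of varying sizes):

```python
# Minimal illustration of the YOLO grid assignment described above: the grid
# cell containing an object's center is responsible for detecting that object.
# The 7x7 grid size is illustrative, not taken from the patent.

def responsible_cell(cx, cy, grid_size=7):
    """Return (row, col) of the grid cell containing a normalized center (cx, cy)."""
    col = min(int(cx * grid_size), grid_size - 1)
    row = min(int(cy * grid_size), grid_size - 1)
    return row, col

# An object centered at (0.5, 0.5) falls in the middle cell of a 7x7 grid.
print(responsible_cell(0.5, 0.5))  # (3, 3)
```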
YOLOv5 is the most recent algorithm in the YOLO series. Its accuracy is slightly inferior to that of the YOLOv4 network model, but its flexibility and speed are much better. The YOLOv5 algorithm has four structures, s, m, l and x, selected through two hyperparameters in the model configuration file. The smallest YOLOv5 weight file is under 15 MB; this small size gives the network model a very strong advantage in deployment and makes it suitable for many platform environments, including mobile devices.
The YOLOv5 model structure is shown in figure 3 of the accompanying drawings; it is composed of a Backbone network, a Neck network and a detection Head. The Backbone is a convolutional neural network that aggregates fine-grained information from the input images and forms image features. The Neck is a series of network layers that mix and combine image features and pass them to the prediction layer. The Head is responsible for prediction: classification, localization and bounding-box generation from the image features.
After input, the image enters the Backbone network: first a Focus structure, then stacked convolution blocks of Conv and C3 structures, and then an SPP feature pyramid structure, so the Backbone completes the extraction of image features. The features then enter the Neck, which combines an FPN feature pyramid network with a PAN path aggregation network: a conventional top-down FPN layer is combined with a bottom-up feature pyramid, fusing the extracted semantic features with positional features and fusing backbone and detection layers so the model obtains richer feature information. The detection Head consists of three detection layers that use feature maps of different sizes to detect large, medium and small targets. Each detection layer outputs the corresponding vectors, and finally the predicted bounding boxes and categories of the targets are generated and marked on the original image.
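The Focus slicing step mentioned above can be sketched as follows; pure-Python nested lists stand in for tensors, and the 4 x 4 single-channel input is illustrative:

```python
# A sketch of the Focus slicing operation mentioned above: every second pixel
# is taken in four phase-shifted patterns, halving height and width while
# quadrupling the channel count, so no pixel information is lost.

def focus_slice(img):
    """img: H x W list of lists (one channel) -> four H/2 x W/2 slices."""
    return [
        [row[x0::2] for row in img[y0::2]]
        for (y0, x0) in ((0, 0), (1, 0), (0, 1), (1, 1))
    ]

img = [[y * 4 + x for x in range(4)] for y in range(4)]
slices = focus_slice(img)
print(len(slices), len(slices[0]), len(slices[0][0]))  # 4 2 2
```

Concatenating the four slices along the channel dimension gives the downsampled, information-preserving input that the first convolution then processes.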
Under actual working conditions the lighting is dim, and recognition must be completed while the electromagnetic chuck moves from pick-up to put-down; real-time detection must be guaranteed throughout, which places high demands on detection speed. After analyzing the mature mainstream target detection algorithms and considering the characteristics of the scrap classification task, the actual detection environment and the model deployment conditions, a YOLOv5 network model is chosen and built to complete the real-time scrap classification and identification task.
(3) PSPNet-based image segmentation head for drawing of specification
Image segmentation mainly divides an image into regions according to one or several characteristics, and then marks the divided regions specially, for example marking regions and background with color or separating different regions with special contours, thereby improving the efficiency and effect of subsequent image processing. Many years of research have summarized three levels of semantic information that generally exist in an image: low-level semantics such as color, texture and shape, middle-level semantics, and high-level semantics. Traditional image segmentation methods such as region-based methods mainly use the low-level semantics of the image and can only segment simple images. Segmenting complex images requires combining the middle- and high-level semantics of the image to improve the result. One idea is to take the pixels or superpixels of the image directly as the processing unit and extract image features for semantic segmentation, generally taking a large number of images with pixel-level labels as samples and training neural-network-style classifiers such as U-Net, fully convolutional networks and R-CNN.
This section presents the addition of an image segmentation head to the Head section of the YOLOv5 model to assist in identifying specific classes of objects.
Adding a segmentation head means attaching a semantic segmentation network to the Head part of the YOLOv5 model. The segmentation network is PSPNet; its pyramid pooling module (PPM) is extracted and used as one head structure of the model's Head part. The PPM is a hierarchical global prior structure containing information from sub-regions at different scales, and it builds global scene prior information on the final feature map of the deep neural network. The PPM fuses features at four different pyramid scales: the first level is the coarsest, global pooling producing a single bin output, and the last three levels are pooled features at different scales. To maintain the weight of the global features, a 1 x 1 convolution is used after each level to reduce the channels to 1/N of the original, where N is the number of pyramid levels. Bilinear interpolation then restores each level to the pre-pooling size, and finally all levels are concatenated.
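The multi-scale bin pooling at the heart of the PPM can be sketched as follows. This is a hedged illustration of the pooling step only (the 1 x 1 convolutions, interpolation and concatenation are omitted), with plain lists standing in for feature maps:

```python
# A sketch of the pyramid pooling idea in the PPM described above: the feature
# map is average-pooled into bins at several scales (1, 2, 3 and 6 in PSPNet);
# the 1x1 bin is the coarsest, global level.

def adaptive_avg_pool(fmap, bins):
    """Average-pool an H x W map into a bins x bins grid (H, W divisible by bins)."""
    h, w = len(fmap), len(fmap[0])
    bh, bw = h // bins, w // bins
    return [
        [
            sum(fmap[y][x] for y in range(by * bh, (by + 1) * bh)
                           for x in range(bx * bw, (bx + 1) * bw)) / (bh * bw)
            for bx in range(bins)
        ]
        for by in range(bins)
    ]

fmap = [[1.0] * 6 for _ in range(6)]
print(adaptive_avg_pool(fmap, 1))  # [[1.0]]  -- the single global bin
```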
(IV) Conditions required for the embodiment (technical conditions, experimental conditions)
1. Technical conditions
(1) Familiarity with the PyTorch deep learning framework is required;
(2) Languages such as Python 3 must be mastered;
(3) Familiarity with common machine learning and neural network algorithms is needed, so they can be integrated into one's own network model to improve its accuracy;
(4) Familiarity with several current mainstream object detection network models is required, so they can be properly optimized and improved.
2. Experimental conditions
(1) A self-built data set, acquired with a camera and annotated with the LabelImg software. The data set comprises four classes: pig iron, common packaged blocks, large steel slag and common bulk scrap, as shown in figure 4 of the accompanying drawings.
(2) Windows 10 system, Nvidia 2070 Max-Q graphics card, Python 3 environment and the PyTorch deep learning framework.
(1) Among the four types of scrap, pig iron and packaged blocks have clear outlines and bright colors; their image features can be learned well during model training and they are identified with high accuracy. Large steel slag and common bulk scrap, however, are extremely irregular in appearance and vary in size, shape and color, which makes them difficult to identify. The problem is how to learn the image characteristics of large steel slag and common bulk scrap and complete their identification;
(2) Pig iron detection among the four scrap types resembles small-target detection because of the targets' characteristics, and its identification accuracy is low;
(3) In post-processing, the NMS algorithm removes redundant target bounding boxes after detection. The method is too coarse: when two targets in the image overlap, the lower-confidence box is deleted because of the large overlap area, so the detection fails and the model's average detection rate drops;
(4) When the predicted box overlaps the real box, the IoU loss can produce a vanishing gradient, slowing convergence and lowering detection accuracy.
The technical key to the solution
(1) An image segmentation head is added and trained to segment the large steel slag and common bulk scrap, so these two types are identified by the two methods of detection and segmentation together;
(2) The structures of the anchors and the detection head are adjusted and improved to meet the detection requirements for pig iron images;
(3) The NMS algorithm in post-processing is changed to the Soft-NMS algorithm: a bounding box that would have been deleted is instead given a lower confidence rather than being removed outright;
(4) On the basis of the existing IoU loss, an alpha-IoU function is introduced; alpha-IoU helps improve the loss and regression accuracy for high-IoU targets.
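The Soft-NMS decay described in item (3) can be sketched as follows. The Gaussian decay with sigma = 0.5 is one common formulation from the Soft-NMS literature and an assumption here, not a value taken from the patent:

```python
# A minimal sketch of the Soft-NMS idea: instead of deleting a box that
# overlaps the current top-scoring box, its confidence is decayed by the
# amount of overlap (Gaussian decay shown here).
import math

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def soft_nms_decay(score, overlap, sigma=0.5):
    """Gaussian Soft-NMS: decay a score by its overlap with the kept box."""
    return score * math.exp(-overlap ** 2 / sigma)

# Two heavily overlapping targets: hard NMS would drop the second box
# outright; Soft-NMS merely lowers its confidence so it can still be kept.
overlap = iou((0, 0, 10, 10), (2, 0, 12, 10))
print(round(soft_nms_decay(0.9, overlap), 3))
```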
According to the existing literature and data, sensor- and spectroscopy-based methods cost more than optical target detection for scrap classification, the deep-learning target detection approach is more advantageous, and recognizing scrap targets from optical images is a problem worth researching. Given the image characteristics of the various scrap types and the deployment and detection-speed requirements, establishing a YOLOv5 model to complete scrap identification is feasible.
During the research period, a great deal of theoretical knowledge and literature related to the subject was read, deep learning algorithms such as target detection were studied in depth, rich Python programming experience was accumulated, and a series of experiments on the relevant techniques was carried out in the early stage, so the techniques needed for the research are mastered.
The YOLOv5 model is selected to complete the scrap classification and identification task: it trains and detects quickly, has few model parameters, and can be deployed very rapidly.
A semantic segmentation head is introduced for detecting steel slag and bulk scrap.
An anchor and a detection layer are added for the small pig iron targets.
To address missed overlapping targets, the non-maximum suppression (NMS) method in post-processing is changed to the soft non-maximum suppression (Soft-NMS) method.
Alpha-IoU is introduced to avoid vanishing gradients when the real box and the predicted box overlap.
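The alpha-IoU modification above generalizes the IoU loss 1 − IoU to 1 − IoU^α. A minimal sketch follows; α = 3 is the value recommended in the alpha-IoU paper and an assumption here, not taken from the patent:

```python
# A minimal sketch of the alpha-IoU idea: the standard IoU loss 1 - IoU is
# generalized to 1 - IoU**alpha. alpha = 1 recovers the plain IoU loss;
# alpha > 1 up-weights high-IoU targets during box regression.

def alpha_iou_loss(iou_value, alpha=3.0):
    """Power-generalized IoU loss; alpha = 1 recovers the plain IoU loss."""
    return 1.0 - iou_value ** alpha

# For a high-IoU box, the alpha-IoU loss exceeds the plain IoU loss.
print(alpha_iou_loss(0.9), alpha_iou_loss(0.9, alpha=1.0))
```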

Claims (7)

1. A scrap steel classification and identification method based on a YOLOv5 network, characterized by comprising the following steps:
step 1: acquiring scrap steel classification and identification data, and capturing the image to be detected with a camera;
step 2: preprocessing the data, labeling it and dividing the data set, and sending the image or video stream to be detected to the model for detection and identification;
step 3: constructing and training a scrap steel classification and identification network model based on YOLOv5, randomly dividing the data samples into training, validation and test set images, marking the residual material in the images, and calculating the scaling of the marked images and filling their edges;
step 4: establishing the YOLOv5-based network model for scrap steel classification and identification, improving the algorithm in a targeted way with theoretical analysis, and applying mosaic data enhancement to the images marked in step 3;
step 5: inputting the data-enhanced training set images from step 4 into the YOLOv5 network model, and tuning the training hyperparameters with the validation set; after the optimal hyperparameters are obtained, iterating the YOLOv5 network model until the optimal YOLOv5 network model is determined;
step 6: inputting the test set into the optimal YOLOv5 network model, and outputting the scrap steel classification and recognition results.
2. The method for classifying and identifying steel scraps based on a YOLOv5 network according to claim 1, characterized in that: the ratio of the training set, the validation set and the test set in step 3 is 3:2:1.
3. The method for classifying and identifying steel scraps based on a YOLOv5 network according to claim 2, characterized in that: the criterion for the optimal YOLOv5 network model in step 5 is that the recognition accuracy when the model finally converges is higher than 95%.
4. The method for classifying and identifying steel scraps based on a YOLOv5 network according to claim 1 or 3, characterized in that: the YOLOv5 network model comprises an input end, a Backbone part, a Neck part and a Prediction part.
5. The method for identifying the scrap steel classification based on the YOLOv5 network according to claim 1, characterized in that the YOLOv5 network model in step 5 works as follows:
the data-enhanced image is input at the input end into the Backbone part: it first passes through a Focus structure, which uses a slicing operation to obtain a downsampled feature map without information loss; features are then extracted from the feature map by the CBL (Conv-BN-LeakyReLU), CSP1-X and CSP2-X modules; the Backbone part downsamples five times, generating five feature layers of different sizes; four of these feature layers are respectively input into the Neck part and, after upsampling and downsampling, tensor-spliced with the four feature layers of corresponding sizes to obtain four feature layers; the Prediction part predicts the image features from the four feature layers and screens out the bounding boxes exceeding the threshold with the DIoU_NMS method to obtain the prediction boxes; the CIoU loss function is used to calculate the objective loss value, the loss value is back-propagated, the YOLOv5 weights are updated, and the positions of the prediction boxes are adjusted.
6. The scrap steel classification and identification method based on the YOLOv5 network according to claim 5, wherein the DIoU_NMS method is calculated according to the following formula:

$$
s_i =
\begin{cases}
s_i, & \mathrm{IoU}(M, B_i) - R_{\mathrm{DIoU}}(M, B_i) < \varepsilon \\
0, & \mathrm{IoU}(M, B_i) - R_{\mathrm{DIoU}}(M, B_i) \geq \varepsilon
\end{cases}
\qquad
R_{\mathrm{DIoU}}(M, B_i) = \frac{Route\_2^{\,2}}{Route\_C^{\,2}}
$$

where R_DIoU is the normalized center-point distance penalty between the two boxes; M is the prediction box with the current maximum score; B_i are the remaining prediction boxes; s_i is the classification score of the target inside prediction box B_i; Route_2 is the Euclidean distance between the center points of the two boxes; Route_C is the diagonal length of the smallest enclosing rectangle of the two boxes; ε is the set NMS threshold.
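A minimal Python sketch of DIoU-based NMS under the definitions above (Route_2 as the center distance, Route_C as the enclosing-box diagonal); the function names are illustrative, not from the patent:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def r_diou(a, b):
    """Center-distance penalty: Route_2^2 / Route_C^2."""
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    rho2 = (ax - bx) ** 2 + (ay - by) ** 2          # Route_2 squared
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2         # Route_C squared
    return rho2 / (c2 + 1e-9)

def diou_nms(boxes, scores, eps=0.5):
    """Keep the highest-scoring box M, then zero out (drop) every box
    whose IoU with M minus the DIoU penalty reaches the threshold eps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        m = order.pop(0)
        keep.append(m)
        order = [i for i in order
                 if iou(boxes[m], boxes[i]) - r_diou(boxes[m], boxes[i]) < eps]
    return keep
```

Compared with plain NMS, the center-distance penalty allows two overlapping but clearly separated boxes (e.g. two adjacent pieces of scrap) to both survive suppression.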
7. The scrap steel classification and identification method based on the YOLOv5 network according to claim 5, wherein the CIoU loss function is calculated according to the following formulas:

$$
L_{\mathrm{CIoU}} = 1 - \mathrm{IoU} + \frac{Route\_2^{\,2}}{Route\_C^{\,2}} + \alpha v
$$

$$
v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w_p}{h_p}\right)^2,
\qquad
\alpha = \frac{v}{(1 - \mathrm{IoU}) + v}
$$

where Route_2 is the Euclidean distance between the center points of the prediction box and the ground-truth box; Route_C is the diagonal length of the smallest enclosing rectangle of the prediction box and the ground-truth box; w^gt and h^gt are the width and height of the ground-truth box; w_p and h_p are the width and height of the prediction box; α and v are the aspect-ratio penalty terms, where α is a positive balancing parameter and v measures the consistency of the two boxes' aspect ratios.
CN202310170325.3A 2023-02-27 2023-02-27 Waste steel classification and identification method based on YOLOv5 network Pending CN116206155A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310170325.3A CN116206155A (en) 2023-02-27 2023-02-27 Waste steel classification and identification method based on YOLOv5 network

Publications (1)

Publication Number Publication Date
CN116206155A true CN116206155A (en) 2023-06-02

Family

ID=86515647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310170325.3A Pending CN116206155A (en) 2023-02-27 2023-02-27 Waste steel classification and identification method based on YOLOv5 network

Country Status (1)

Country Link
CN (1) CN116206155A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117437459A (en) * 2023-10-08 2024-01-23 昆山市第一人民医院 Method for realizing user knee joint patella softening state analysis based on decision network
CN117437459B (en) * 2023-10-08 2024-03-22 昆山市第一人民医院 Method for realizing user knee joint patella softening state analysis based on decision network

Similar Documents

Publication Publication Date Title
CN110390691A (en) A kind of ore scale measurement method and application system based on deep learning
CN105260749B (en) Real-time target detection method based on direction gradient binary pattern and soft cascade SVM
CN103886308B (en) A kind of pedestrian detection method of use converging channels feature and soft cascade grader
CN109255350B (en) New energy license plate detection method based on video monitoring
CN108509954A (en) A kind of more car plate dynamic identifying methods of real-time traffic scene
CN109636772A (en) The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning
CN109753949B (en) Multi-window traffic sign detection method based on deep learning
CN112560675B (en) Bird visual target detection method combining YOLO and rotation-fusion strategy
CN103049763A (en) Context-constraint-based target identification method
CN106096542A (en) Image/video scene recognition method based on range prediction information
CN113642474A (en) Hazardous area personnel monitoring method based on YOLOV5
CN112967255A (en) Shield segment defect type identification and positioning system and method based on deep learning
Xing et al. Traffic sign recognition using guided image filtering
CN104954741A (en) Tramcar on-load and no-load state detecting method and system based on deep-level self-learning network
CN111738336A (en) Image detection method based on multi-scale feature fusion
CN106778540A (en) Parking detection is accurately based on the parking event detecting method of background double layer
CN114359245A (en) Method for detecting surface defects of products in industrial scene
CN116206155A (en) Waste steel classification and identification method based on YOLOv5 network
CN113807347A (en) Kitchen waste impurity identification method based on target detection technology
CN111815616A (en) Method for detecting dangerous goods in X-ray security inspection image based on deep learning
CN114419006A (en) Method and system for removing watermark of gray level video characters changing along with background
CN110188607A (en) A kind of the traffic video object detection method and device of multithreads computing
CN117437647A (en) Oracle character detection method based on deep learning and computer vision
CN112001336A (en) Pedestrian boundary crossing alarm method, device, equipment and system
CN110889418A (en) Gas contour identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination