CN113850799B - YOLOv5-based trace DNA extraction workstation workpiece detection method


Info

Publication number
CN113850799B
Authority
CN
China
Prior art keywords
yolov
training
workpiece
workstation
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111195733.1A
Other languages
Chinese (zh)
Other versions
CN113850799A (en)
Inventor
姜长泓
刘茴香
梁超
王小瑀
王其铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Technology
Original Assignee
Changchun University of Technology
Filing date
Publication date
Application filed by Changchun University of Technology filed Critical Changchun University of Technology
Priority to CN202111195733.1A priority Critical patent/CN113850799B/en
Publication of CN113850799A publication Critical patent/CN113850799A/en
Application granted granted Critical
Publication of CN113850799B publication Critical patent/CN113850799B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a YOLOv5-based workpiece detection method for a trace DNA extraction workstation, and belongs to the technical field of target detection. The method collects pictures of workpieces in the workstation and labels them to form a dataset, then applies format processing and division to the dataset; the pictured workpiece environments cover different time periods with varying illumination, angles, distances, occlusion and the like. The training set is input to recalculate the initial anchor values, YOLOv5 is trained to obtain a weight model, the position information of overlapping targets is regressed and output using DIoU-NMS, the optimal model is selected using the feedback result of the verification set, and the trace DNA extraction workstation workpiece detection model is tested on the test set. The invention provides a method for detecting workpieces of a trace DNA extraction workstation in different environments, which can detect the consumables required by the workstation and solves the problems of low detection efficiency and poor robustness in the prior art.

Description

YOLOv5-based trace DNA extraction workstation workpiece detection method
Technical Field
The invention belongs to the field of target detection, relates to image processing and deep learning detection systems, and in particular relates to a YOLOv5-based trace DNA extraction workstation workpiece detection method.
Background
With the continuous progress of artificial intelligence, forensic work requires operations such as batch pipetting, sample adding and mixing on large numbers of biological samples in a short time, and micro-DNA automatic extraction workstations have emerged for this purpose. Biological detection materials are difficult to collect and samples are easily contaminated, so the quantity, category and position of the workpieces in the workstation must not be wrong during an experiment. When technology was less developed, detection work had to be completed manually, with low detection speed, low reliability and a low degree of automation. Therefore, how to perform automatic workpiece recognition and positioning for a micro-DNA extraction workstation has important research significance.
In recent years, machine vision has developed rapidly, and workpiece identification and positioning are widely applied in industrial automation scenes. Conventional workpiece detection algorithms identify objects of interest and locate targets in five steps: preprocessing, sliding windows, feature extraction, target classification and post-processing. Feature extraction must truly reflect the content of an image, so it has always been a key point and a difficulty in detection systems; typical algorithms include the LBP operator, the Canny operator and the Hough transform. Technique 1 is a local binary pattern algorithm that extracts local texture features from pictures and offers advantages such as rotation invariance and gray-scale invariance. Technique 2 is a multi-stage edge detection algorithm for single-channel gray-scale images, which can reduce the data size of an image while preserving its original properties. Technique 3 is one of the basic algorithms for identifying geometry in an image through global features; it is insensitive to image rotation and tolerant of curve discontinuities. These techniques suit training and detection in specific environments, but their manual feature computation is complex, labor-intensive and easily disturbed. Aiming at the difficulty of feature extraction from images, deep learning, an emerging branch of computer vision, extracts features by multiplying multiple convolution templates with the corresponding positions of the input feature map and summing, yielding the convolutional neural network. Compared with the features manually designed in traditional methods, the features extracted by a convolutional neural network are diverse and complex, and the classification accuracy of the fully connected layer is higher when multiple targets appear. At present, scholars at home and abroad have proposed many convolutional neural network models and studied convolutional and pooling layers, such as AlexNet, VGG, RCNN, YOLO and SSD, which have quickly been applied to image recognition, target tracking and other fields. Technique 4 deepens the classic convolutional neural network and adds the ReLU activation function and dropout; the activation function increases the training speed, dropout prevents overfitting, and the number of model parameters also decreases. Technique 5 uses a deep network with small pooling kernels; small convolution kernels reduce the parameter count produced by large kernels, and the improved accuracy shows that increasing network depth can, to a certain extent, reduce the error rate. Technique 6 is a classic candidate-region-based detection algorithm: Selective Search extracts candidate boxes, a convolutional neural network extracts their features, the feature information is fed into a linear SVM classifier and a regression model, and highly overlapping candidate boxes are merged through non-maximum suppression; the combination of candidate boxes and convolutional neural networks was a milestone leap for the target detection problem.
Technique 7 directly performs end-to-end regression of target box positions and categories; it has extremely high inference speed and good detection precision, generalizes well when reasoning globally, and truly realizes real-time detection. Technique 8 adopts the traditional image-pyramid idea to extract feature maps at different scales for convolution; it belongs to the end-to-end one-stage algorithms and, to a certain extent, solves the problem that small targets are detected weakly or not at all. With the development of target detection, the YOLOv5 model has been proposed in the YOLO series; it introduces the new PyTorch framework, occupies little memory and is easy to port, lands mainly in model engineering applications, and lays a foundation for workpiece identification and positioning in intelligent manufacturing and production environments.
Through the above analysis, the problems and defects in the prior art are as follows: the traditional target detection algorithm places high requirements on the environment, has low anti-interference capability, and extracts insufficient features when features are computed manually.
The difficulty in solving these problems and defects is as follows: the detection must adapt strongly to the changeable environments in which the actual workstation machine operates while detecting workpieces quickly and accurately.
The significance of solving these problems and defects is as follows: manual detection cannot meet the automation requirements of workstation equipment, and the traditional target detection algorithm suits recognition in a certain single environment, whereas the environment in which the workstation actually works changes continuously in real time. Therefore, studying an algorithm with stronger applicability can meet the requirements of automated and intelligent workstation equipment.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides an online workpiece detection system with automatic feature extraction and autonomous feature fusion for automatic recognition, which replaces the unsafe practice of manual detection in the prior art and solves problems such as inaccurate feature extraction and susceptibility to environmental interference.
The invention is realized in such a way that the YOLOv5-based workpiece detection method for the trace DNA extraction workstation comprises the following steps:
Step one, collecting workpiece picture data based on the workstation's real scene;
Step two, marking the data acquired from the real scene, and carrying out format processing and division on the marked data;
Step three, based on the YOLOv5 network algorithm, performing iterative training with a pre-trained weight model, and continuously adjusting the weight parameters of the model using the training set and verification set data;
Step four, after the YOLOv5 network algorithm is trained, saving the weight files, judging by comparing the models' verification set evaluation indexes, and selecting the optimal model according to the judgment result to identify and detect the workpiece.
Preferably, in step one, a self-built dataset is obtained by taking photographs of the workpiece samples with a MER-500-14U3C color camera of the Daheng Galaxy series. To ensure sample diversity, images under different conditions of illumination, angle, distance, occlusion and the like are collected in the data collection stage, and 1200 workpiece images are taken in total.
Preferably, in step two, the dataset adopts the PASCAL VOC format, the labelImg tool is used to label the target objects in the pictures, and each label file contains the rectangular coordinate parameters of the real regions. The label files use xml as the suffix, with file names consistent with the picture names, and the dataset is divided into a training set, a verification set and a test set in a 6:2:2 ratio.
Preferably, in step three, the training process uses a weight model pre-trained on the COCO and VOC datasets; training data and weight parameters are saved at each training run, and the training process is tracked through wandb to visualize the training results. Parameter training defaults to the SGD optimization algorithm, with hyperparameters set as follows: the batch size is 8, the number of training epochs is 300, the momentum factor is 0.937, the weight decay coefficient is 0.0005, the initial learning rate is 0.01, the learning rate is dynamically adjusted with a cosine annealing strategy, and the loss function is GIoU Loss.
The loss function is calculated as L_GIoU = 1 - GIoU, where GIoU = A/B - (C - B)/C, C is the area of the minimum enclosing rectangle of the prediction box and the ground-truth box, A is the intersection area of the prediction box and the ground-truth box, and B is their union area.
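For illustration, a minimal sketch of this loss for one predicted box and one ground-truth box (an assumption of this description rather than the patent's code; boxes are taken in [x1, y1, x2, y2] form):

```python
def giou_loss(pred, gt):
    """L_GIoU = 1 - GIoU with GIoU = A/B - (C - B)/C, where A is the
    intersection area, B the union area and C the area of the minimum
    enclosing rectangle, matching the definitions in the text."""
    # Intersection area A
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union area B
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    union = area_p + area_g - inter
    # Minimum enclosing rectangle area C
    cw = max(pred[2], gt[2]) - min(pred[0], gt[0])
    ch = max(pred[3], gt[3]) - min(pred[1], gt[1])
    enclose = cw * ch
    giou = inter / union - (enclose - union) / enclose
    return 1.0 - giou  # L_GIoU
```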
Preferably, in step four, to measure the performance of the model, evaluation criteria commonly used in target detection are adopted to evaluate the algorithm, the three common evaluation indexes being the mean average precision (mAP), Precision and Recall. Based on the performance fed back by the verification set, the trained YOLOv5 optimal model is selected and input to the test set for workpiece identification and testing, and the best.pt obtained from yolov5x.pt of YOLOv5 is chosen as the model weight.
The mean average precision mAP formula is: mAP = (1/Q) Σ_{q=1}^{Q} AP_q, where Q is the total number of categories and AP is the area under the Precision-Recall curve.
The Precision formula is: Precision = TP / (TP + FP), where TP is the number of positive samples predicted as positive and FP is the number of negative samples predicted as positive.
The Recall formula is: Recall = TP / (TP + FN), where FN is the number of positive samples predicted as negative.
The YOLOv5 network is a modified YOLOv5 network, and the modification steps are as follows:
Step one: the conventional non-maximum suppression (NMS) in YOLOv5 is replaced with DIoU-NMS. This repairs the situation where, when identifying overlapping targets, the traditional NMS discards overlapping targets, whereas DIoU-NMS can regress the position information of the center points of overlapping target bounding boxes (see the DIoU-NMS sketch after step two).
Step two: when the YOLOv5 network is trained and the initial anchor boxes (anchors) are calculated, a K-means clustering algorithm is added to recalculate the initial anchor boxes for the targets of this research, replacing the result computed by the network's built-in automatic anchor function.
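For illustration, a minimal NumPy sketch of the DIoU-NMS of step one (an assumption of this description, not the patent's code; boxes are taken as [x1, y1, x2, y2] with one confidence score each):

```python
import numpy as np

def diou_nms(boxes, scores, iou_thres=0.45):
    """NMS with the IoU term penalized by the normalized center-point
    distance, so boxes of overlapping-but-distinct targets are less
    likely to be suppressed. boxes: (N, 4); scores: (N,)."""
    order = scores.argsort()[::-1]          # indices by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # IoU between box i and the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Squared center distance over squared enclosing-box diagonal
        cxi, cyi = (boxes[i, 0] + boxes[i, 2]) / 2, (boxes[i, 1] + boxes[i, 3]) / 2
        cxr, cyr = (boxes[rest, 0] + boxes[rest, 2]) / 2, (boxes[rest, 1] + boxes[rest, 3]) / 2
        ex1 = np.minimum(boxes[i, 0], boxes[rest, 0])
        ey1 = np.minimum(boxes[i, 1], boxes[rest, 1])
        ex2 = np.maximum(boxes[i, 2], boxes[rest, 2])
        ey2 = np.maximum(boxes[i, 3], boxes[rest, 3])
        diag2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9
        dist2 = (cxi - cxr) ** 2 + (cyi - cyr) ** 2
        diou = iou - dist2 / diag2
        order = rest[diou <= iou_thres]     # keep only non-suppressed boxes
    return keep
```

And a corresponding sketch of the K-means anchor recalculation of step two, using 1 - IoU as the clustering distance (the common practice for YOLO anchors; the function name and defaults are assumptions):

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Cluster labelled box sizes into k anchors.
    wh: (N, 2) array of box widths and heights."""
    wh = np.asarray(wh, dtype=float)
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), size=k, replace=False)]
    for _ in range(iters):
        # IoU of each box against each center, comparing sizes only
        inter = (np.minimum(wh[:, None, 0], centers[None, :, 0]) *
                 np.minimum(wh[:, None, 1], centers[None, :, 1]))
        union = (wh[:, 0] * wh[:, 1])[:, None] + centers[:, 0] * centers[:, 1] - inter
        assign = np.argmax(inter / union, axis=1)   # nearest center = highest IoU
        for j in range(k):
            if np.any(assign == j):
                centers[j] = np.median(wh[assign == j], axis=0)
    return centers[np.argsort(centers.prod(axis=1))]  # sorted by area
```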
Drawings
In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly introduced below.
FIG. 1 is a diagram showing the steps of detecting a workpiece in an automatic extraction workstation for trace DNA according to an embodiment of the present invention
FIG. 2 is a flow chart of a work piece detection process in an automatic extraction workstation for trace DNA according to an embodiment of the present invention
FIG. 3 is an example of an automated extraction workstation apparatus for micro DNA according to an embodiment of the present invention
FIG. 4 is the YOLOv5 network architecture model provided by an embodiment of the present invention
FIG. 5 is an image of an identified workpiece marked by an algorithm under different illumination provided by an embodiment of the present invention
FIG. 6 is an image of an identified workpiece marked by an algorithm at different viewing angles according to an embodiment of the present invention
FIG. 7 is an image of a workpiece identified by an algorithm under different occlusion provided by an embodiment of the present invention
Detailed Description
The present invention will be described in further detail with reference to the following examples in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Aiming at the problems existing in the prior art, the invention provides a YOLOv5-based workpiece detection method for a trace DNA extraction workstation; the invention is described in further detail below with reference to the accompanying drawings and the specific implementation process.
As shown in fig. 1, the steps of workpiece detection in the micro-DNA automated extraction workstation based on the deep learning algorithm are described in detail as follows:
s101: collecting workpiece picture data based on the real scenes of the work stations;
Considering the interference between the workstation's own stable light source and external natural light (the trace DNA automatic extraction workstation is shown in fig. 3), workpiece picture acquisition equipment is designed, and a MER-500-14U3C color camera of the Daheng Galaxy series is used to collect workpiece sample pictures for a self-built dataset. To ensure sample diversity, the data collection stage gathers pictures under conditions of different time periods, illumination, angles, distances, occlusion and the like, and 1200 workpiece images are taken in total.
S102: labeling the data acquired by the live-action, and carrying out format processing and division on the labeled data;
The dataset adopts the PASCAL VOC labeling format, which is convenient for the related labeling work and augmentation operations; the labelImg labeling tool is used to manually label the target objects in the workpiece pictures, and each label file contains the rectangular coordinate parameters of the real targets. The labeled data are stored in tag files in xml format; the xml tag files are first converted into the txt files required by YOLOv5, and the labeled original dataset is divided by a program, using the hold-out method, into a 60% training set, a 20% verification set and a 20% test set, as sketched below.
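An illustrative sketch of this conversion and split (hedged: the class names and directory layout are assumptions for this example, not the patent's actual configuration):

```python
import glob
import random
import xml.etree.ElementTree as ET

CLASSES = ["tip_box", "deep_well_plate", "reagent_tank"]  # hypothetical workpiece classes

def voc_to_yolo(xml_path, txt_path):
    """Convert one PASCAL VOC xml label into a YOLOv5 txt label whose rows
    are 'class x_center y_center width height', normalized to [0, 1]."""
    root = ET.parse(xml_path).getroot()
    w = float(root.find("size/width").text)
    h = float(root.find("size/height").text)
    with open(txt_path, "w") as f:
        for obj in root.iter("object"):
            cls = CLASSES.index(obj.find("name").text)
            b = obj.find("bndbox")
            x1, y1 = float(b.find("xmin").text), float(b.find("ymin").text)
            x2, y2 = float(b.find("xmax").text), float(b.find("ymax").text)
            f.write(f"{cls} {(x1 + x2) / 2 / w:.6f} {(y1 + y2) / 2 / h:.6f} "
                    f"{(x2 - x1) / w:.6f} {(y2 - y1) / h:.6f}\n")

# Hold-out split in the 60%/20%/20% ratio described above
files = sorted(glob.glob("labels_voc/*.xml"))
random.seed(0)
random.shuffle(files)
n = len(files)
train = files[:int(0.6 * n)]
val = files[int(0.6 * n):int(0.8 * n)]
test = files[int(0.8 * n):]
```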
S103: iterative training is performed with a pre-trained weight model based on the YOLOv5 network algorithm, and the weight parameters of the model are continuously adjusted using the training set and verification set data.
Training is performed on the training set of the self-made dataset using the YOLOv5 model, and the training process uses weight models pre-trained on the COCO and VOC datasets. With the weight file best.pt obtained by training, verification set data are input and whether the model training is overfitted is judged from the change of the loss value with the epoch; the workpiece detection flow chart is shown in fig. 2. Iterative training proceeds by continuously adjusting the model parameters and hyperparameters. The training process is tracked through wandb, an online visualization tool for model training that automatically records hyperparameters and output indexes, to visualize the training results. Parameter training defaults to the SGD optimization algorithm, with hyperparameters set as follows: the batch size is 8, the number of training epochs is 300, the momentum factor is 0.937, the weight decay coefficient is 0.0005, the initial learning rate is 0.01, the learning rate is dynamically adjusted with a cosine annealing strategy, and the loss function is GIoU Loss. The loss function is calculated as L_GIoU = 1 - GIoU, where GIoU = A/B - (C - B)/C, C is the area of the minimum enclosing rectangle of the prediction box and the ground-truth box, A is the intersection area of the prediction box and the ground-truth box, and B is their union area.
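A minimal PyTorch sketch of this training configuration (hedged: `model`, `train_loader` and `giou_loss_total` are placeholders for the YOLOv5 model, the self-built dataset loader with batch size 8, and the GIoU loss above; this is not the patent's actual code):

```python
import torch

EPOCHS, LR0 = 300, 0.01  # batch size 8 is set when building train_loader

optimizer = torch.optim.SGD(model.parameters(), lr=LR0,
                            momentum=0.937, weight_decay=0.0005)
# Cosine annealing dynamically decays the learning rate over the run
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)

for epoch in range(EPOCHS):
    for imgs, targets in train_loader:
        loss = giou_loss_total(model(imgs), targets)  # GIoU Loss, as defined above
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
    # save training data and weight parameters at each run (weights/ assumed to exist)
    torch.save(model.state_dict(), f"weights/epoch_{epoch}.pt")
```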
S104: after the YOLOv5 network algorithm is trained, the weight files are saved, judgment is made by comparing the models' verification set evaluation indexes, and the optimal model is selected according to the judgment result to identify and detect the workpiece.
To measure the performance of the model, evaluation criteria commonly used in target detection are adopted to evaluate the algorithm, the three common evaluation indexes being the mean average precision (mAP), Precision and Recall. The optimal training model is selected according to the feedback result of the verification set, and the optimal model is used to run identification tests on the workstation workpiece test set. The invention selects the best.pt obtained from yolov5x.pt of YOLOv5 as the model weight. Figs. 5, 6 and 7 show the bounding-box labeling information obtained by detecting unknown samples input to the model.
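For illustration, a hedged sketch of loading the selected best.pt and detecting on test images through the public ultralytics/yolov5 torch.hub interface (the weight and image paths are assumptions):

```python
import torch

# Load the trained weights as a custom YOLOv5 model
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/exp/weights/best.pt")
results = model("test_images/workpiece_001.jpg")
results.print()  # classes, confidences and box coordinates
results.save()   # writes annotated images like those in figs. 5-7
```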
Among these indexes, the mAP formula is: mAP = (1/Q) Σ_{q=1}^{Q} AP_q, where Q is the total number of categories and AP is the area under the Precision-Recall curve.
The Precision formula is: Precision = TP / (TP + FP), where TP is the number of positive samples predicted as positive and FP is the number of negative samples predicted as positive.
The Recall formula is: Recall = TP / (TP + FN), where FN is the number of positive samples predicted as negative.
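A hedged NumPy sketch of these three indexes (assuming per-class precision/recall curves have already been accumulated from TP, FP and FN counts):

```python
import numpy as np

def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(recall, precision):
    """AP = area under the Precision-Recall curve (monotone-envelope form)."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]       # enforce non-increasing precision
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

def mean_average_precision(aps):
    """mAP = (1/Q) * sum of per-class APs, Q being the number of classes."""
    return sum(aps) / len(aps)
```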
The deep learning algorithm YOLOv5 described in S103 and S104 is a target detection model, as shown in fig. 4. During training, data augmentation is performed first: pictures are spliced by random scaling, cropping and arrangement, and adaptive image scaling is applied every few pictures or every few rounds of training. Before the network runs, a K-means clustering algorithm is added to recalculate the initial anchor boxes for the targets of this research, replacing the result computed by the network's built-in automatic anchor function. On entering the Backbone stage, a Focus downsampling structure is employed, comprising 4 slicing operations and 1 convolution operation with 32 convolution kernels. This halves the spatial dimensions of the original image and quadruples the channels, so that although the resolution is reduced, the information integrity of the image is preserved. Meanwhile, a CSPNet local cross-layer fusion structure is adopted, which uses the network's optimization process to obtain richer feature maps while concentrating the gradient transformation within the feature maps, reducing the amount of computation to a certain extent. The network inserts Neck layers before outputting the prediction result to ensure better feature fusion, and the CSP2 module is adopted to strengthen that fusion. An FPN network passes high-level feature information through top-down upsampling to the output features of the CSP modules at different levels, and a fused PAN network aggregates shallow features through a bottom-up feature pyramid. An SPP module applies max pooling at several scales and splices the resulting feature maps along the channel dimension, clearly separating context features and more effectively increasing the receptive field of the trunk features; the parameters of different detection layers are fused across different trunk levels, making the output prediction more accurate. When YOLOv5 produces its output, the traditional non-maximum suppression (NMS) is replaced with DIoU-NMS, repairing the situation where traditional NMS discards overlapping targets when identifying them; the DIoU algorithm can regress the position information of the center points of overlapping target bounding boxes.
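A simplified PyTorch sketch of the Focus block just described (hedged: YOLOv5's own implementation additionally applies BatchNorm and a SiLU activation after the convolution):

```python
import torch
import torch.nn as nn

class Focus(nn.Module):
    """Four slicing operations halve the spatial size and quadruple the
    channels, then one convolution with 32 kernels mixes them."""
    def __init__(self, in_ch=3, out_ch=32):
        super().__init__()
        self.conv = nn.Conv2d(in_ch * 4, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        # Four phase-offset slices: (B, C, H, W) -> (B, 4C, H/2, W/2)
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.conv(x)

# e.g. a 640x640 RGB image becomes a 32-channel 320x320 feature map:
# Focus()(torch.randn(1, 3, 640, 640)).shape  # torch.Size([1, 32, 320, 320])
```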
In the description of the present invention, unless otherwise indicated, the meaning of "a plurality" is two or more; the terms "upper," "lower," "left," "right," "inner," "outer," "front," "rear," "head," "tail," and the like are used as an orientation or positional relationship based on that shown in the drawings, merely to facilitate description of the invention and to simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore should not be construed as limiting the invention. Furthermore, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The foregoing is merely illustrative of specific embodiments of the present invention, and the scope of the invention is not limited thereto, but any modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present invention will be apparent to those skilled in the art within the scope of the present invention.

Claims (4)

1. A YOLOv5-based trace DNA extraction workstation workpiece detection method, characterized by comprising the following steps:
Step one, collecting workpiece picture data based on the workstation's real scene;
Step two, marking the data acquired from the real scene, and carrying out format processing and division on the marked data;
the dataset adopts the PASCAL VOC format, the labelImg tool is used to mark the target objects in the pictures, and each label file contains the rectangular coordinate parameters of the real regions; the label files use xml as the suffix, with file names consistent with the picture names, and the dataset is divided into a training set, a verification set and a test set in a 6:2:2 ratio;
Step three, based on the YOLOv5 network algorithm, performing iterative training with a pre-trained weight model, and continuously adjusting the weight parameters of the model using the training set and verification set data;
the training process of the training set uses weight models pre-trained on the COCO and VOC datasets; at each training run, training data and weight parameters are saved, the training process is tracked through wandb, and the training results are visualized; parameter training defaults to the SGD optimization algorithm, with hyperparameters set as follows: the batch size is 8, the number of training epochs is 300, the momentum factor is 0.937, the weight decay coefficient is 0.0005, the initial learning rate is 0.01, the learning rate is dynamically adjusted with a cosine annealing strategy, and the loss function is GIoU Loss;
the loss function is calculated as L_GIoU = 1 - GIoU, where GIoU = A/B - (C - B)/C, C is the area of the minimum enclosing rectangle of the prediction box and the ground-truth box, A is the intersection area of the prediction box and the ground-truth box, and B is their union area;
Step four, based on the YOLOv5 network algorithm, after training, saving the weight files, judging by comparing the models' verification set evaluation indexes, and selecting the optimal model according to the judgment result to identify and detect the workpiece;
when YOLOv5 performs workpiece detection for the workstation, evaluation criteria commonly used in target detection are adopted to evaluate the algorithm and measure the model's performance, the three common evaluation indexes being the mean average precision (mAP), Precision and Recall; based on the model performance fed back by the verification set, the trained YOLOv5 optimal model is selected and input to the test set to identify and test the workpiece, and the best.pt obtained from yolov5x.pt of YOLOv5 is selected as the model weight;
the mean average precision mAP formula is: mAP = (1/Q) Σ_{q=1}^{Q} AP_q, where Q is the total number of categories and AP is the area under the Precision-Recall curve;
the Precision formula is: Precision = TP / (TP + FP), where TP is the number of positive samples predicted as positive and FP is the number of negative samples predicted as positive;
the Recall formula is: Recall = TP / (TP + FN), where FN is the number of positive samples predicted as negative.
2. The YOLOv5-based trace DNA extraction workstation workpiece detection method according to claim 1, characterized in that, in step one, a self-built dataset is obtained by collecting photographs of workpiece samples with a MER-500-14U3C color camera of the Daheng Galaxy series; to ensure sample diversity, images under different conditions of illumination, angle, distance, occlusion and the like are collected in the data collection stage, and 1200 workpiece images are taken in total.
3. The YOLOv5-based trace DNA extraction workstation workpiece detection method according to claim 1, characterized in that, during YOLOv5 training, data augmentation is performed first: pictures are spliced by random scaling, cropping and arrangement, and adaptive image scaling is applied every few pictures or every few rounds of training; before the network runs, a K-means clustering algorithm is added to recalculate the initial anchor boxes for the targets of this research, replacing the result computed by the network's built-in automatic anchor function; on entering the Backbone stage, a Focus downsampling structure is employed, comprising 4 slicing operations and 1 convolution operation with 32 convolution kernels, which halves the spatial dimensions of the original image and quadruples the channels; meanwhile, a CSPNet local cross-layer fusion structure is adopted, which uses the network's optimization process to obtain richer feature maps while concentrating the gradient transformation within the feature maps, reducing the amount of computation to a certain extent; the network inserts Neck layers before outputting the prediction result to ensure better feature fusion, the CSP2 module is adopted to strengthen that fusion, an FPN network passes high-level feature information through top-down upsampling to the output features of the CSP modules at different levels, and a fused PAN network aggregates shallow features through a bottom-up feature pyramid; an SPP module applies max pooling at several scales and splices the resulting feature maps along the channel dimension, clearly separating context features and more effectively increasing the receptive field of the trunk features, and the parameters of different detection layers are fused across different trunk levels, making the output prediction more accurate; when YOLOv5 produces its output, the traditional non-maximum suppression NMS is replaced with DIoU-NMS, which repairs the situation where traditional NMS discards overlapping targets when identifying them, and the DIoU algorithm can regress the position information of the center points of overlapping target bounding boxes.
4. Use of the YOLOv5-based trace DNA extraction workstation workpiece detection method according to any one of claims 1 to 3 in image processing and target detection for traffic signs, face detection, target tracking, medical imaging, defect and occlusion detection, and the like.
CN202111195733.1A 2021-10-14 YOLOv5-based trace DNA extraction workstation workpiece detection method Active CN113850799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111195733.1A CN113850799B (en) 2021-10-14 YOLOv5-based trace DNA extraction workstation workpiece detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111195733.1A CN113850799B (en) 2021-10-14 YOLOv5-based trace DNA extraction workstation workpiece detection method

Publications (2)

Publication Number Publication Date
CN113850799A (en) 2021-12-28
CN113850799B (en) 2024-06-07


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476756A (en) * 2020-03-09 2020-07-31 重庆大学 Method for identifying casting DR image loose defects based on improved YOLOv3 network model
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
CN112270252A (en) * 2020-10-26 2021-01-26 西安工程大学 Multi-vehicle target identification method for improving YOLOv2 model


Similar Documents

Publication Publication Date Title
CN103324937B (en) The method and apparatus of label target
CN108711148B (en) Tire defect intelligent detection method based on deep learning
CN111368690B (en) Deep learning-based video image ship detection method and system under influence of sea waves
CN111724355B (en) Image measuring method for abalone body type parameters
CN108830332A (en) A kind of vision vehicle checking method and system
CN114638784A (en) Method and device for detecting surface defects of copper pipe based on FE-YOLO
CN107230203A (en) Casting defect recognition methods based on human eye vision attention mechanism
CN109886928A (en) A kind of target cell labeling method, device, storage medium and terminal device
CN110543912B (en) Method for automatically acquiring cardiac cycle video in fetal key section ultrasonic video
CN109284779A (en) Object detecting method based on the full convolutional network of depth
CN109522963A (en) A kind of the feature building object detection method and system of single-unit operation
CN105184229A (en) Online learning based real-time pedestrian detection method in dynamic scene
CN110472581A (en) A kind of cell image analysis method based on deep learning
CN112215203A (en) Pavement disease detection method and device based on deep learning
CN115171045A (en) YOLO-based power grid operation field violation identification method and terminal
CN111898677A (en) Plankton automatic detection method based on deep learning
CN115035082A (en) YOLOv4 improved algorithm-based aircraft transparency defect detection method
Wang et al. Improved YOLOv3 detection method for PCB plug-in solder joint defects based on ordered probability density weighting and attention mechanism
CN114387261A (en) Automatic detection method suitable for railway steel bridge bolt diseases
CN112561885B (en) YOLOv 4-tiny-based gate valve opening detection method
CN113487570A (en) High-temperature continuous casting billet surface defect detection method based on improved yolov5x network model
CN113850799B (en) YOLOv5-based trace DNA extraction workstation workpiece detection method
CN117197085A (en) Road rapid-inspection image pavement disease detection method based on improved YOLOv8 network
CN116052110A (en) Intelligent positioning method and system for pavement marking defects
CN115661446A (en) Pointer instrument indication automatic reading system and method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant