CN114972967A - Airplane part identification and counting method and detection system - Google Patents

Airplane part identification and counting method and detection system

Info

Publication number
CN114972967A
CN114972967A (application CN202210490468.8A)
Authority
CN
China
Prior art keywords
network
loss
training
images
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210490468.8A
Other languages
Chinese (zh)
Inventor
武星
陈成
钟鸣宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Zhongdun Technology Co ltd
Original Assignee
Wuxi Zhongdun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Zhongdun Technology Co ltd filed Critical Wuxi Zhongdun Technology Co ltd
Priority to CN202210490468.8A priority Critical patent/CN114972967A/en
Publication of CN114972967A publication Critical patent/CN114972967A/en
Withdrawn legal-status Critical Current

Classifications

    • G06V20/00: Scenes; Scene-specific elements
    • G06N3/04: Neural networks; Architecture, e.g. interconnection topology
    • G06N3/08: Neural networks; Learning methods
    • G06V10/764: Image or video recognition or understanding using classification, e.g. of video objects
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82: Image or video recognition or understanding using neural networks
    • G06V2201/06: Recognition of objects for industrial automation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an airplane part identification and counting method and detection system. The invention greatly improves the efficiency of identifying and counting airplane parts and avoids the errors introduced by manual identification and counting or by traditional identification and counting methods. No additional capital investment is required: a camera captures a tiled image of the airplane parts, and the image is processed by a software algorithm.

Description

Airplane part identification and counting method and detection system
Technical Field
The invention relates to detection technology, and in particular to an airplane part identification and counting method and detection system.
Background
A modern large civil aircraft is typically assembled from tens of thousands of parts, which come in many varieties and small batches, vary in size, and have complex curved and free-form surface shapes. At present, airplane parts are mostly identified and counted by manual comparison. Because the parts are so varied and some differ only minimally from one another, manual classification and counting is unreliable, its identification accuracy is hard to guarantee, and it is slow and labor-intensive.
Owing to the performance limits of traditional vision algorithms, large-scale identification and counting of airplane parts has so far remained an unsolved problem. With the rapid development of computer hardware in recent years, research in related fields has turned to applying artificial neural networks in industrial production. Deep learning models perform well and can identify and count parts at scale, but a bottleneck remains when they are applied to airplane parts: when targets of many part types overlap and occlude one another, differ in size, and are captured under varying illumination, traditional target detection and counting algorithms achieve low accuracy, produce large errors, and fall short of the expected recognition speed.
Disclosure of Invention
The airplane part recognition model is trained on a feature-pyramid-based target detection network; the trained model identifies the parts on the conveyor belt and outputs their class names.
The technical scheme of the invention is as follows: a visual detection system collects images of the parts on a conveyor belt; the part images are fed into a trained part recognition model to obtain part classification label images; the label images are input into an airplane part counting model built from a multi-branch deep convolutional network to obtain a part density map; and the density map is integrated to obtain the estimated number of parts.
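As a concrete illustration, the following minimal Python sketch shows this two-stage pipeline; the `recognizer` and `counter` objects and their `predict` methods are hypothetical stand-ins for the trained recognition and counting models, not an interface defined by this patent.

```python
import numpy as np

def identify_and_count(frame, recognizer, counter):
    """Two-stage pipeline sketch: recognition model -> label image,
    counting model -> density map, integration -> estimated part count."""
    label_image = recognizer.predict(frame)       # part classification label image
    density_map = counter.predict(label_image)    # per-pixel part density
    estimated_count = float(np.sum(density_map))  # integrate the density map
    return label_image, estimated_count
```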
A method for training the part recognition model in the airplane part identification and counting method comprises the following steps: 1) data acquisition and enhancement: the visual detection system collects images of parts on the conveyor belt, and the part data are augmented by translation, flipping, rotation, and random brightness and contrast adjustment to obtain a part image dataset;
2) part data labeling: the part image data are annotated with the LabelImg labeling software: each part is enclosed in a bounding box and labeled with its name, and the annotation files are saved and exported; the annotation files form the dataset, which is divided into a training set and a test set for model training and verification;
3) part recognition network construction: a DarkNet-53 network serves as the backbone, with multiple residual skip connections; DarkNet-53 operates at three different feature scales, generating feature maps of three different scales for targets at three different positions in the model, on which detection is performed with filters; spatial pyramid pooling serves as an additional module at the network neck to detect targets of different sizes and extract spatial feature information at multiple scales; a path aggregation network serves as the feature fusion module of the network neck, accelerating the acquisition of fine-grained local information;
4) loss function and training: a loss function is set, the training set from step 2) is fed into the part recognition network of step 3) for training, and the network outputs the class name of each part in the image;
5) verification: the trained part recognition network is evaluated on the test set.
Further, the loss function comprises three parts: the loss L_box computed from the prediction box's center coordinates, width, and height; the target classification confidence loss L_obj; and the target classification result loss L_cls. L_box is caused by the offset of the prediction box's center coordinates relative to the labeled box and by the difference in width and height between them. L_obj computes the classification confidence loss whether or not an object is present in the prediction box; it comprises a with-object term and a without-object term, whose weights are determined by a set parameter λ. L_cls measures the difference between the predicted class of the target and its true class, computed with a cross-entropy function.
A multi-branch deep convolutional network used in the airplane part identification and counting method consists of three convolutional branches with large-, medium-, and small-scale convolution kernels. The three branches extract features from parts of different sizes in the input image; the original-image features extracted by the three branches are stacked to obtain a merged feature map, which is mapped to a density map by a 1 × 1 convolution.
Preferably, during the counting training of the multi-branch deep convolutional network, the Euclidean distance is used to measure the difference between the predicted density map and the labeled value, as shown in the following formula:

$$L(\Theta)=\frac{1}{2N}\sum_{i=1}^{N}\left\|F(x_i;\Theta)-F_i\right\|_2^2$$

where N is the number of input images; x_i is an input image; Θ denotes the learnable parameters of the model, which the training process adjusts; F(x_i; Θ) is the density map finally produced by the model; and F_i is the labeled value.
An aircraft part detection system comprises a motion control module, an image acquisition and processing module, and a mechanical control module. The motion control module comprises a conveyor belt for conveying parts; the image acquisition and processing module comprises a CCD camera and a computer; the mechanical control module is a carriage that slides up and down. The CCD camera is fixed on the carriage and, as the carriage moves up and down, photographs the parts parked on the conveyor belt at different resolutions. The images at different resolutions collected by the CCD camera are sent to the computer, which identifies the types and numbers of the parts in the images with the trained models.
The invention has the following beneficial effects: the airplane part identification and counting method and detection system greatly improve the efficiency of identifying and counting airplane parts and avoid the errors introduced by manual identification and counting or by traditional identification and counting methods. No additional capital investment is required: a camera captures a tiled image of the airplane parts, and the image is processed by a software algorithm.
Drawings
FIG. 1 is a schematic diagram of an aircraft part inspection system according to the present invention;
FIG. 2 is a flow chart of the feature pyramid-based aircraft part identification technique of the present invention;
FIG. 3 is a flow chart of the aircraft part counting technique based on the multi-branch deep neural network according to the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The embodiments are implemented on the premise of the technical solution of the invention and are given with a detailed implementation and a specific operating process, but the scope of the invention is not limited to the following embodiments.
The identification, counting, and detection of airplane parts based on the feature pyramid and the multi-branch deep neural network proposed by the invention are described in further detail below with reference to the accompanying drawings.
As shown in FIG. 1, the aircraft part detection system comprises a motion control module, an image acquisition and processing module, and a mechanical control module. The motion control module comprises a conveyor belt responsible for conveying the parts. The image acquisition and processing module comprises a CCD camera and a computer: the CCD camera collects images of the parts carried on the conveyor belt, and the computer processes the collected images to determine the types of parts they contain. The mechanical control module is a carriage that slides up and down; the CCD camera is fixed on the carriage and photographs the parts at different resolutions as the carriage moves. When the system starts, the conveyor belt begins conveying parts and stops when they reach the designated position below the CCD camera. The camera and carriage then collect part images at different resolutions, and the computer identifies the types and numbers of parts in the images with the trained models. The conveyor belt then restarts and conveys parts until the next group reaches the designated position, and the next round of identification and counting begins.
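A schematic of one such acquisition-and-recognition cycle follows; every device interface here (conveyor, carriage, camera, model) is a hypothetical stand-in, since the patent specifies only the behavior, not an API.

```python
def inspection_cycle(conveyor, carriage, camera, model):
    """One detection cycle of the system described above (hypothetical interfaces)."""
    conveyor.run_until_stop_position()                     # stop belt below the CCD camera
    frames = [camera.capture() for _ in carriage.sweep()]  # images at several resolutions
    part_types, part_count = model.identify_and_count(frames)
    conveyor.resume()                                      # convey the next group of parts
    return part_types, part_count
```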
An airplane part identification and counting method based on a feature pyramid and a multi-branch deep neural network comprises training an airplane part recognition model and training an airplane part counting model.
The specific process of training the aircraft part recognition model is shown in FIG. 2 and mainly comprises the following steps.
The method trains the aircraft part recognition model on a feature-pyramid-based target detection network, detects the parts in each picture with the trained model, and outputs the class names of the parts in the picture.
Part data collection and enhancement: first, a visual inspection platform is built from a CCD camera, a conveyor belt, and related equipment, and 10000 pictures of parts are collected. The data are then augmented to increase the variability of the input images, so that the trained recognition model is more robust.
The part data are augmented by translation, flipping, rotation, and random adjustment of brightness and contrast.
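These augmentations could be realized, for example, with torchvision; the specific ranges (15 degrees, 10% translation, ±30% brightness/contrast) below are illustrative assumptions.

```python
from torchvision import transforms

# One possible realization of the augmentations named above.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1)),  # rotation + translation
    transforms.RandomHorizontalFlip(p=0.5),                     # flipping
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),       # brightness/contrast
])
```

For detection training, the geometric transforms would of course also have to be applied to the labeled bounding boxes, not only to the images.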
Part data labeling: the part image data are annotated with the LabelImg labeling software: each part is enclosed in a bounding box and labeled with its name, and the annotation files are saved and exported. The annotation files form the dataset, which is divided into a training set and a test set at a ratio of 8:2.
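A minimal sketch of the 8:2 split, assuming the LabelImg annotation files have already been collected into a list:

```python
import random

def split_dataset(annotation_files, ratio=0.8, seed=0):
    """Split the LabelImg annotation files 8:2 into training and test sets."""
    files = sorted(annotation_files)
    random.Random(seed).shuffle(files)   # deterministic shuffle for reproducibility
    cut = int(len(files) * ratio)
    return files[:cut], files[cut:]
```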
Part recognition model training: for the problem of part identification on a production line, a part recognition network based on a deep learning feature pyramid is built. A DarkNet-53 framework serves as the backbone network; it consists of 53 convolutional layers and uses 1 × 1 and 3 × 3 convolution kernels. DarkNet-53 uses multiple residual skip connections, which reduce the negative influence of the gradient on the model, and operates at three different feature scales, generating feature maps of three different scales for targets at three different positions in the model, on which detection is performed. Downsampling is implemented with 3 × 3 convolution kernels of stride 2, which preserves the continuity of information propagation through the model. To address target overfitting, DarkNet-53 identifies targets with a large number of filters, which greatly reduces the number of parameters at run time and simplifies the computation. Spatial pyramid pooling serves as an additional module at the network neck to detect targets of different sizes, extract spatial feature information at multiple scales, and improve the model's robustness to spatial layout and object deformation. A path aggregation network serves as the feature fusion module of the network neck, making it easier for low-level information to propagate to the higher levels: the path aggregation network introduces a shortcut path that needs only about 10 layers to reach the top level, so the top level can acquire fine-grained local information. The loss function contains three parts: the loss L_box computed from the prediction box's center coordinates, width, and height; the target classification confidence loss L_obj; and the target classification result loss L_cls. The loss due to the predicted box's center coordinates, width, and height is expressed as follows:
$$L_{box}=\lambda_{coord}\sum_{i=0}^{k\times k}\sum_{j=0}^{M}I_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2+(w_i-\hat{w}_i)^2+(h_i-\hat{h}_i)^2\right]$$

where k × k is the grid of cells into which the input picture is divided, manually set to 13 × 13, 26 × 26, and 52 × 52; M is the number of detection boxes contained in each grid cell; I_{ij}^{obj} indicates whether the j-th detection box in the i-th grid cell holds a detected object (1 if so, 0 otherwise); (x_i, y_i) and (w_i, h_i) are the position and size of the ground-truth box and c_i its confidence; (\hat{x}_i, \hat{y}_i) and (\hat{w}_i, \hat{h}_i) are the position and size of the detection box; and λ_coord is manually set to 0.5.
This part of the loss is caused by the offset of the prediction box relative to the center coordinates of the labeled box and by the difference in width and height from the labeled box.
The target classification confidence loss is as follows:

$$L_{obj}=\sum_{i=0}^{k\times k}\sum_{j=0}^{M}I_{ij}^{obj}\left(c_i-\hat{c}_i\right)^2+\lambda_{noobj}\sum_{i=0}^{k\times k}\sum_{j=0}^{M}I_{ij}^{noobj}\left(c_i-\hat{c}_i\right)^2$$

where λ_noobj is manually set to 0.5; \hat{c}_i is the confidence of the detection box; and I_{ij}^{noobj} indicates whether the j-th detection box in the i-th grid cell holds a detected object (0 if so, 1 otherwise).
This classification confidence loss is computed whether or not an object is present in the prediction box: it includes both the with-object and without-object terms, whose weights are determined by the parameter λ, manually set to 0.5.
The loss due to the target classification result is as follows:

$$L_{cls}=-\sum_{i=0}^{k\times k}I_{i}^{obj}\sum_{c\in classes}\left[p_i(c)\log\hat{p}_i(c)+\left(1-p_i(c)\right)\log\left(1-\hat{p}_i(c)\right)\right]$$

where p_i(c) is the class probability of the ground-truth box, \hat{p}_i(c) is the class probability of the detection box, and classes is the set of class labels.
The target classification result loss measures the difference between the predicted class of the target and its true class, computed with the cross-entropy function. In summary, the loss function of the feature pyramid network can be expressed as L = L_box + L_obj + L_cls.
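For illustration, a PyTorch sketch of the combined objective follows. The tensor layout (per-box xywh, confidence, and post-sigmoid class probabilities gathered in dictionaries) is an assumption made for brevity, not the patent's specification.

```python
import torch
import torch.nn.functional as F

def detection_loss(pred, target, lambda_coord=0.5, lambda_noobj=0.5):
    """Sketch of L = L_box + L_obj + L_cls from the formulas above.

    pred/target are dictionaries of per-box tensors: "xywh" (..., 4),
    "conf" (...), "cls" (..., num_classes) holding probabilities in [0, 1];
    target["obj"] is the indicator I_ij^obj (1 where a box holds an object).
    """
    obj = target["obj"]
    noobj = 1.0 - obj                                    # I_ij^noobj

    # L_box: squared error of box center and size, counted only where obj == 1
    l_box = lambda_coord * ((pred["xywh"] - target["xywh"]).pow(2).sum(-1) * obj).sum()

    # L_obj: confidence loss over both object and no-object boxes
    conf_err = (pred["conf"] - target["conf"]).pow(2)
    l_obj = (conf_err * obj).sum() + lambda_noobj * (conf_err * noobj).sum()

    # L_cls: binary cross-entropy over class probabilities, object boxes only
    bce = F.binary_cross_entropy(pred["cls"], target["cls"], reduction="none").sum(-1)
    l_cls = (bce * obj).sum()

    return l_box + l_obj + l_cls
```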
A well-performing part recognition model is finally obtained through training.
For the part identification problem on the production line, the invention sets the initialization parameters as follows: training period 100000, batch size 16, learning rate 0.00261, and momentum 0.949; the trained network model is then output.
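In PyTorch these initialization parameters might be wired up as follows; the one-layer stand-in network and the choice of SGD are assumptions made for illustration.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, kernel_size=3, padding=1))  # stand-in network
optimizer = torch.optim.SGD(model.parameters(), lr=0.00261, momentum=0.949)

for step in range(100_000):            # training period: 100000
    # images, targets = next(loader)   # batch size: 16
    # loss = detection_loss(model(images), targets)
    # loss.backward(); optimizer.step(); optimizer.zero_grad()
    pass
```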
The images produced by the part recognition model are fed into a multi-branch deep convolutional network to recognize part images of different sizes. At this point an ImageNet pre-trained model must be loaded in advance, which avoids the poor training results that random initialization would cause. The multi-branch deep convolutional structure is shown in FIG. 3, which depicts the relationship between the convolutional layers, the pooling layers, the input part image, the fused feature image, and the generated density map. The network consists of three branches: the first row uses large-scale kernels (9 × 9, 7 × 7, 7 × 7, 7 × 7), the second row medium-scale kernels (7 × 7, 5 × 5, 5 × 5, 5 × 5), and the third row small-scale kernels (5 × 5, 3 × 3, 3 × 3, 3 × 3). Because the part images vary in size, a single uniform kernel size would hinder the recognition of undersized or oversized part images, so the three branches recognize part images of different sizes respectively. Finally, the original-image features extracted by the convolutional branches (i.e., the branch outputs) are stacked to obtain a merged feature map, which is mapped to a density map by a 1 × 1 convolution. The Euclidean distance is then used to measure the difference between the predicted density map and the labeled value, as shown in the following formula:
$$L(\Theta)=\frac{1}{2N}\sum_{i=1}^{N}\left\|F(x_i;\Theta)-F_i\right\|_2^2$$

where N is the number of input images; x_i is an input image; Θ denotes the learnable parameters of the model, which the training process adjusts; F(x_i; Θ) is the density map finally produced by the model; and F_i is the labeled value.
The features obtained by the model are stacked and processed by a 1 × 1 convolutional layer to obtain the corresponding density map, and the density map is integrated to obtain the estimated number of parts.
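A minimal PyTorch sketch of such a three-branch counting network follows; the channel widths and the omission of the pooling layers are simplifying assumptions, since the text specifies only the kernel sizes per branch.

```python
import torch
import torch.nn as nn

def branch(kernels, channels):
    """One column of the counting network: stacked same-padding convolutions
    with the kernel sizes listed above (channel widths are assumptions)."""
    layers, in_ch = [], 3
    for k, out_ch in zip(kernels, channels):
        layers += [nn.Conv2d(in_ch, out_ch, k, padding=k // 2), nn.ReLU(inplace=True)]
        in_ch = out_ch
    return nn.Sequential(*layers)

class MultiBranchCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.large = branch([9, 7, 7, 7], [16, 32, 16, 8])    # large-scale kernels
        self.medium = branch([7, 5, 5, 5], [20, 40, 20, 10])  # medium-scale kernels
        self.small = branch([5, 3, 3, 3], [24, 48, 24, 12])   # small-scale kernels
        self.fuse = nn.Conv2d(8 + 10 + 12, 1, kernel_size=1)  # 1 x 1 conv -> density map

    def forward(self, x):                    # x: (N, 3, H, W) part image
        merged = torch.cat([self.large(x), self.medium(x), self.small(x)], dim=1)
        return self.fuse(merged)             # (N, 1, H, W) density map

# Estimated part count = integral (sum) of the density map:
# count = MultiBranchCounter()(image).sum().item()
```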
The above embodiments express only some of the embodiments of the present invention, and while their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (6)

1. An airplane part identification and counting method, characterized in that a visual detection system collects images of parts on a conveyor belt; the part images are fed into a trained part recognition model to obtain part classification label images; the label images are input into an airplane part counting model built from a multi-branch deep convolutional network to obtain a part density map; and the density map is integrated to obtain the estimated number of parts.
2. A method for training the part recognition model in the airplane part identification and counting method according to claim 1, characterized by comprising the following steps:
1) data acquisition and enhancement: the visual detection system collects images of parts on the conveyor belt, and the part data are augmented by translation, flipping, rotation, and random brightness and contrast adjustment to obtain a part image dataset;
2) part data labeling: the part image data are annotated with the LabelImg labeling software: each part is enclosed in a bounding box and labeled with its name, and the annotation files are saved and exported; the annotation files form the dataset, which is divided into a training set and a test set for model training and verification;
3) part recognition network construction: a DarkNet-53 network serves as the backbone, with multiple residual skip connections; DarkNet-53 operates at three different feature scales, generating feature maps of three different scales for targets at three different positions in the model, on which detection is performed with filters; spatial pyramid pooling serves as an additional module at the network neck to detect targets of different sizes and extract spatial feature information at multiple scales; a path aggregation network serves as the feature fusion module of the network neck, accelerating the acquisition of fine-grained local information;
4) loss function and training: a loss function is set, the training set from step 2) is fed into the part recognition network of step 3) for training, and the network outputs the class name of each part in the image;
5) verification: the trained part recognition network is evaluated on the test set.
3. The method for training the part recognition model in the aircraft part identification and counting method according to claim 2, wherein the loss function comprises three parts: the loss L_box computed from the prediction box's center coordinates, width, and height; the target classification confidence loss L_obj; and the target classification result loss L_cls; L_box is caused by the offset of the prediction box's center coordinates relative to the labeled box and by the difference in width and height between them; L_obj computes the classification confidence loss whether or not an object is present in the prediction box, comprising a with-object term and a without-object term whose weights are determined by a set parameter λ; L_cls measures the difference between the predicted class of the target and its true class, computed with a cross-entropy function.
4. A multi-branch deep convolutional network used in the airplane part identification and counting method, characterized by comprising three convolutional branches with large-, medium-, and small-scale convolution kernels; the three branches extract features from parts of different sizes in the input image; the original-image features extracted by the three branches are stacked to obtain a merged feature map, which is mapped to a density map by a 1 × 1 convolution.
5. The multi-branch deep convolutional network according to claim 4, wherein the Euclidean distance is used during counting training to measure the difference between the predicted density map and the labeled value, as shown in the following formula:

$$L(\Theta)=\frac{1}{2N}\sum_{i=1}^{N}\left\|F(x_i;\Theta)-F_i\right\|_2^2$$

where N is the number of input images; x_i is an input image; Θ denotes the learnable parameters of the model, which the training process adjusts; F(x_i; Θ) is the density map finally produced by the model; and F_i is the labeled value.
6. An aircraft part detection system, characterized by comprising a motion control module, an image acquisition and processing module, and a mechanical control module; the motion control module comprises a conveyor belt for conveying parts; the image acquisition and processing module comprises a CCD camera and a computer; the mechanical control module is a carriage that slides up and down; the CCD camera is fixed on the carriage and, as the carriage moves up and down, photographs the parts parked on the conveyor belt at different resolutions; the images at different resolutions collected by the CCD camera are sent to the computer, which identifies the types and numbers of the parts in the images with the trained models.
CN202210490468.8A 2022-05-07 2022-05-07 Airplane part identification and counting method and detection system Withdrawn CN114972967A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210490468.8A CN114972967A (en) 2022-05-07 2022-05-07 Airplane part identification and counting method and detection system

Publications (1)

Publication Number Publication Date
CN114972967A true CN114972967A (en) 2022-08-30

Family

ID=82981482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210490468.8A Withdrawn CN114972967A (en) 2022-05-07 2022-05-07 Airplane part identification and counting method and detection system

Country Status (1)

Country Link
CN (1) CN114972967A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116740549A (en) * 2023-08-14 2023-09-12 南京凯奥思数据技术有限公司 Vehicle part identification method and system
CN116740549B (en) * 2023-08-14 2023-11-07 南京凯奥思数据技术有限公司 Vehicle part identification method and system

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication (application publication date: 20220830)