CN113569734A - Image identification and classification method and device based on feature recalibration - Google Patents

Info

Publication number: CN113569734A
Authority: CN (China)
Prior art keywords: training, vehicle, data, classification, data set
Legal status: Granted (the status is an assumption, not a legal conclusion)
Application number: CN202110856064.1A
Other languages: Chinese (zh)
Other versions: CN113569734B (en)
Inventors: 张凯, 姚丽, 丁冬睿, 杨光远, 逯天斌, 王潇涵
Current Assignee: Shandong Liju Robot Technology Co ltd
Original Assignee: Shandong Liju Robot Technology Co ltd
Application filed by Shandong Liju Robot Technology Co ltd
Priority to CN202110856064.1A
Publication of CN113569734A; application granted; publication of CN113569734B
Legal status: Active


Classifications

    • G06F18/23213: Pattern recognition; non-hierarchical clustering using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06F18/24: Pattern recognition; classification techniques
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/08: Neural networks; learning methods
    • Y02T10/40: Engine management systems (climate change mitigation technologies related to transportation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image identification and classification method and device based on feature recalibration. The method comprises the following steps: acquiring vehicle image data; identifying and classifying the vehicle image data to obtain a training data set; training a preset network through the training data set to obtain a training result; and generating a vehicle identification and classification result according to the training result and the vehicle image data. The invention solves the following technical problems in the prior art: when a convolutional neural network is used for image recognition, traditional image processing algorithms that rely on manual feature extraction are inefficient; and, because pictures shot at many loop crossings contain vehicles that are incomplete at the image edges, most such incomplete vehicle pictures are not correctly recognized when a Faster R-CNN network is trained, background regions are wrongly judged as vehicles, and the precision on the training set is low.

Description

Image identification and classification method and device based on feature recalibration
Technical Field
The invention relates to the technical field of computer vision, in particular to an image recognition and classification method and device based on feature recalibration.
Background
With the continuous development of intelligent science and technology, people increasingly use intelligent devices in daily life, work and study. The use of intelligent technology has improved people's quality of life and increased the efficiency of study and work.
In recent years, highway mileage in China has grown rapidly, and the parallel rapid development of information technology has provided strong support for building intelligent highway management systems. Judging traffic abnormality with artificial intelligence, by means of digital image processing and pattern recognition technology, can achieve automatic detection of moving vehicles on the highway, automatic tracking by vehicle characteristics, detection of traffic accidents, and judgment of road abnormalities. It is of great significance for improving the efficiency of traffic accident rescue and troubleshooting and of highway operation management.
At present, convolutional neural networks are often used to solve image recognition problems. For expressway vehicle images, for example, traditional image processing algorithms that rely on manual feature extraction are time-consuming and labor-intensive, so the better-performing Faster R-CNN can be adopted instead. However, because pictures shot at many road junctions contain vehicles that are incomplete at the image edges, most such incomplete vehicle pictures cannot be correctly identified when the Faster R-CNN network is trained, background areas are wrongly judged as vehicles, and the precision on the training set is low. The Faster R-CNN algorithm therefore needs to be improved to solve these problems.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an image recognition and classification method and device based on feature recalibration, which at least solve the following technical problems in the prior art: when a convolutional neural network is used for image recognition, traditional image processing algorithms that rely on manual feature extraction are inefficient; and, because pictures shot at many road junctions contain vehicles that are incomplete at the image edges, most such incomplete vehicle pictures cannot be correctly recognized when a Faster R-CNN network is trained, background regions are wrongly judged as vehicles, and the precision on the training set is low.
According to an aspect of the embodiments of the present invention, there is provided an image recognition and classification method based on feature recalibration, including: acquiring vehicle image data; identifying and classifying the vehicle image data to obtain a training data set; training a preset network through the training data set to obtain a training result; and generating a vehicle identification and classification result according to the training result and the vehicle image data.
Optionally, the identifying and classifying the vehicle image data to obtain a training data set includes: acquiring vehicle frame data and vehicle classification data according to the vehicle image data; generating a data set file according to the vehicle frame data and the vehicle classification data; and summarizing the vehicle frame data and the vehicle classification data into the data set file to obtain the training data set.
Optionally, after the preset network is trained through the training data set to obtain a training result, the method further includes: and testing the training result.
Optionally, after generating a vehicle recognition and classification result according to the training result and the vehicle image data, the method further includes: and displaying the vehicle identification and classification result.
According to another aspect of the embodiments of the present invention, there is also provided an image recognition and classification apparatus based on feature recalibration, including: the acquisition module is used for acquiring vehicle image data; the recognition module is used for recognizing and classifying the vehicle image data to obtain a training data set; the training module is used for training a preset network through the training data set to obtain a training result; and the generating module is used for generating a vehicle identification and classification result according to the training result and the vehicle image data.
Optionally, the identification module includes: the acquisition unit is used for acquiring vehicle frame data and vehicle classification data according to the vehicle image data; the generating unit is used for generating a data set file through the vehicle frame data and the vehicle classification data; and the summarizing unit is used for summarizing the vehicle frame data and the vehicle classification data into the data set file to obtain the training data set.
Optionally, the apparatus further comprises: and the testing module is used for testing the training result.
Optionally, the apparatus further comprises: and the display module is used for displaying the vehicle identification and classification result.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium including a stored program, wherein the program controls a device in which the non-volatile storage medium is located to perform a feature recalibration-based image recognition and classification method.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a processor and a memory; the memory has stored therein computer readable instructions for execution by the processor, wherein the computer readable instructions when executed perform a method for feature recalibration based image recognition and classification.
In the embodiment of the invention, vehicle image data is acquired; the vehicle image data is identified and classified to obtain a training data set; a preset network is trained with the training data set to obtain a training result; and a vehicle identification and classification result is generated according to the training result and the vehicle image data. This solves the following technical problems in the prior art: when a convolutional neural network is used for image recognition, traditional image processing algorithms that rely on manual feature extraction are inefficient; and, because pictures shot at many road junctions contain vehicles that are incomplete at the image edges, most such incomplete vehicle pictures cannot be correctly recognized when a Faster R-CNN network is trained, background regions are wrongly judged as vehicles, and the precision on the training set is low.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of an image recognition and classification method based on feature re-calibration in accordance with an embodiment of the present invention;
fig. 2 is a block diagram of an image recognition and classification apparatus based on feature re-calibration according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided a method embodiment of an image recognition and classification method based on feature recalibration, it is noted that the steps illustrated in the flowchart of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than here.
Example one
Fig. 1 is a flowchart of an image recognition and classification method based on feature re-calibration according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step S102, vehicle image data is acquired.
And step S104, identifying and classifying the vehicle image data to obtain a training data set.
Optionally, the identifying and classifying the vehicle image data to obtain a training data set includes: acquiring vehicle frame data and vehicle classification data according to the vehicle image data; generating a data set file according to the vehicle frame data and the vehicle classification data; and summarizing the vehicle frame data and the vehicle classification data into the data set file to obtain the training data set.
And step S106, training a preset network through the training data set to obtain a training result.
Optionally, after the preset network is trained through the training data set to obtain a training result, the method further includes: and testing the training result.
And S108, generating a vehicle identification and classification result according to the training result and the vehicle image data.
Optionally, after generating a vehicle recognition and classification result according to the training result and the vehicle image data, the method further includes: and displaying the vehicle identification and classification result.
Specifically, according to the embodiment of the invention, even when many incomplete or partial vehicle images exist in the sample pictures, the method can still accurately identify the target vehicles and achieve higher precision and recall. For example, using the improved Faster R-CNN method, VGG-16 with embedded SE (Squeeze-and-Excitation) units serves as the new feature extraction network, a classification network is designed by combining K-means clustering with the RPN, a detection network is designed with ReLU as the activation function, and the final output is a bounding box (hereinafter, bbox) regression and a normalized classification score. The method specifically comprises the following steps:
1. Collect highway vehicle images.
2. Make a data set for vehicle identification and classification in VOC format, where each xml file contains the bounding-box labels and the category labels of the vehicles in the picture.
3. Place the image files of the different vehicle types processed in step 2 into the /classification/JPEGImages folder; place the xml files containing the labeling information generated in step 2 into the /classification/Annotations folder; and create the files train.txt, trainval.txt, val.txt and test.txt in the /classification/ImageSets/Main folder. The four files correspond to the training set, the training-and-validation set, the validation set and the test set, respectively.
4. Train the vehicle recognition and classification network: take the data sets of steps 2 and 3 as input data of the SENet-based Faster R-CNN, and judge whether the neural network has converged from the observed change curve of the loss function during training; if it has converged, stop training; otherwise, continue training the model on the training set.
5. Test model performance on the test set.
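The SE units embedded in VGG-16 in the steps above recalibrate channel features. A minimal numpy sketch of what one SE unit computes (the weight shapes, the reduction ratio, and the absence of bias terms are simplifying assumptions, not the patent's exact network):

```python
import numpy as np

def se_recalibrate(feature_map, w1, w2):
    """Squeeze-and-Excitation: rescale each channel of a (C, H, W) map.

    w1: (C // r, C) squeeze weights, w2: (C, C // r) excitation weights,
    where r is the channel reduction ratio.
    """
    # Squeeze: global average pooling gives one descriptor per channel.
    z = feature_map.mean(axis=(1, 2))                 # shape (C,)
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid gate in (0, 1).
    s = np.maximum(w1 @ z, 0.0)                       # shape (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))               # shape (C,)
    # Recalibration: scale every channel by its gate.
    return feature_map * s[:, None, None]
```

With all-zero excitation weights every channel is scaled by sigmoid(0) = 0.5; a trained unit instead learns to boost informative channels and suppress background-dominated ones.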
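Steps 2 and 3 describe a Pascal-VOC-style layout. An xml annotation file of the kind placed in the Annotations folder can be generated with the standard library; a sketch reduced to the essential fields (the patent does not specify the exact schema, so the field set here is an assumption):

```python
import xml.etree.ElementTree as ET

def make_voc_annotation(filename, width, height, objects):
    """Build a Pascal-VOC-style annotation; objects is a list of
    (class_name, xmin, ymin, xmax, ymax) tuples. Returns the xml text."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    for tag, v in (("width", width), ("height", height), ("depth", 3)):
        ET.SubElement(size, tag).text = str(v)
    for name, xmin, ymin, xmax, ymax in objects:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        box = ET.SubElement(obj, "bndbox")
        for tag, v in zip(("xmin", "ymin", "xmax", "ymax"),
                          (xmin, ymin, xmax, ymax)):
            ET.SubElement(box, tag).text = str(v)
    return ET.tostring(root, encoding="unicode")
```

The file name, image size, and class names below are hypothetical examples.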
Further, as shown in fig. 1, in the detection stage, the embodiment of the present invention performs a pooling operation through the detection network, classifies each region through the bbox classification network, and predicts the bounding box of the vehicle through the bbox regression network. The detection network has two parallel output layers. The output of the classification layer is the probability distribution of each candidate region over the vehicle and background categories, p = (p0, p1). The output of the regression network is the parameters of the vehicle bounding box coordinates:

t^k = (t_x^k, t_y^k, t_w^k, t_h^k),

where k represents a category. The bounding box regression network and the classification network are trained with the joint loss function L(p, u, t^u, v) = L_cls(p, u) + λ[u ≥ 1]·L_reg(t^u, v), where L_cls(p, u) = −log(p_u) is the log loss of the actual class u, and L_reg is only activated when a vehicle is detected (u ≥ 1).
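The joint loss above can be written out numerically. A short sketch that assumes the smooth-L1 form of L_reg used in Fast/Faster R-CNN (the patent does not spell L_reg out):

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1: 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x * x, x - 0.5)

def joint_loss(p, u, t_u, v, lam=1.0):
    """L(p, u, t^u, v) = L_cls(p, u) + lambda * [u >= 1] * L_reg(t^u, v).

    p: class probabilities; u: true class (0 = background);
    t_u: predicted box parameters for class u; v: ground-truth box targets.
    """
    l_cls = -np.log(p[u])  # log loss of the actual class
    # The regression term is only activated for non-background (u >= 1).
    l_reg = smooth_l1(np.asarray(t_u) - np.asarray(v)).sum() if u >= 1 else 0.0
    return l_cls + lam * l_reg
```

For a background proposal (u = 0) only the classification term contributes; for a vehicle proposal with a perfect box prediction the regression term vanishes.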
The network detection main process comprises the following steps:
s1: raw data is preprocessed into an M × N size image as a network input.
S2: and extracting features through a feature extraction network SE-VGG-16.
S3: and dividing the extracted feature set into two paths, inputting one path of feature set into an RPN network, and transmitting the other path of feature set to a specific convolution layer to obtain a higher-dimensional feature.
S4: the feature map processed by the RPN network will generate a corresponding region score, and then the region suggestion is obtained by the maximum suppression algorithm.
S5: the high-dimensional features obtained in step S4 and the region suggestions obtained in step S5 are simultaneously input to the RoI pooling layer, and the features corresponding to the region suggestions are extracted.
S6: and inputting the obtained region suggestion characteristics into a full connection layer to obtain the classification score of the region and the regressed bbox, classifying the region through a bbox classification network, and predicting a boundary frame of the vehicle through a bbox regression network.
The network training main process comprises the following steps:
First the SE-VGG network is trained, then the RPN; the detection network is trained on the region proposals extracted by the RPN, and the RPN is then fine-tuned with the parameters of the detection network. That is, the RPN and the detection network are trained jointly, and this alternating training is repeated until convergence.
S1: and setting the specification and the number of the candidate areas, and continuously adjusting the parameters of the candidate areas along with the increase of the iteration times to finally approach the real vehicle identification area.
S2: to accelerate the convergence speed, candidate regions similar to the vehicle in the image are clustered using the K-means method.
S3: IoU is an important indicator that reflects the difference between the candidate region and the true region to be detected. The larger the IoU value, the smaller the difference between the two regions.
S4: k-means clustering uses Euclidean distance to measure the distance between two points as follows:
min∑NM(1-IoU(Box[N],Truth[M]))。
where N refers to the category of the cluster, M refers to the sample level of the cluster, Box [ N ] refers to the width and height of the candidate region, and Box [ M ] refers to the width and height of the actual vehicle region.
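The clustering of steps S2 to S4 can be sketched with the 1 − IoU distance above (a minimal illustration; the deterministic farthest-point initialisation is an assumption, and Box[N] is interpreted as the cluster centers):

```python
import numpy as np

def iou_wh(box, centers):
    """IoU between one (w, h) box and (k, 2) centers, all corner-anchored."""
    inter = np.minimum(box[0], centers[:, 0]) * np.minimum(box[1], centers[:, 1])
    union = box[0] * box[1] + centers[:, 0] * centers[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100):
    """Cluster ground-truth (w, h) boxes with d = 1 - IoU as the distance."""
    boxes = np.asarray(boxes, dtype=float)
    # Deterministic farthest-point initialisation of the k centers.
    centers = boxes[:1].copy()
    while len(centers) < k:
        d = np.array([(1.0 - iou_wh(b, centers)).min() for b in boxes])
        centers = np.vstack([centers, boxes[d.argmax()]])
    for _ in range(iters):
        # Assign each box to the nearest center under the 1 - IoU distance.
        dists = np.array([1.0 - iou_wh(b, centers) for b in boxes])
        labels = dists.argmin(axis=1)
        new = np.array([boxes[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers
```

On box sets with distinct small-vehicle and large-vehicle shapes, the two cluster centers settle near those two shape groups, giving anchor sizes adapted to the data.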
Through the above embodiment, the following technical problems in the prior art are solved: when a convolutional neural network is used for image recognition, traditional image processing algorithms that rely on manual feature extraction are inefficient; and, because pictures shot at many loop crossings contain vehicles that are incomplete at the image edges, most such incomplete vehicle pictures cannot be correctly recognized when a Faster R-CNN network is trained, background regions are wrongly judged as vehicles, and the precision on the training set is low.
Example two
Fig. 2 is a block diagram of an image recognition and classification apparatus based on feature re-calibration according to an embodiment of the present invention, as shown in fig. 2, the apparatus includes:
and an obtaining module 20, configured to obtain vehicle image data.
And the recognition module 22 is configured to recognize and classify the vehicle image data to obtain a training data set.
Optionally, the identification module includes: the acquisition unit is used for acquiring vehicle frame data and vehicle classification data according to the vehicle image data; the generating unit is used for generating a data set file through the vehicle frame data and the vehicle classification data; and the summarizing unit is used for summarizing the vehicle frame data and the vehicle classification data into the data set file to obtain the training data set.
And the training module 24 is configured to train the preset network through the training data set to obtain a training result.
Optionally, the apparatus further comprises: and the testing module is used for testing the training result.
And the generating module 26 is configured to generate a vehicle identification and classification result according to the training result and the vehicle image data.
Optionally, the apparatus further comprises: and the display module is used for displaying the vehicle identification and classification result.
Specifically, according to the embodiment of the invention, even when many incomplete or partial vehicle images exist in the sample pictures, the apparatus can still accurately identify the target vehicles and achieve higher precision and recall. For example, using the improved Faster R-CNN method, VGG-16 with embedded SE (Squeeze-and-Excitation) units serves as the new feature extraction network, a classification network is designed by combining K-means clustering with the RPN, a detection network is designed with ReLU as the activation function, and the final output is a bounding box (hereinafter, bbox) regression and a normalized classification score. The method specifically comprises the following steps:
1. Collect highway vehicle images.
2. Make a data set for vehicle identification and classification in VOC format, where each xml file contains the bounding-box labels and the category labels of the vehicles in the picture.
3. Place the image files of the different vehicle types processed in step 2 into the /classification/JPEGImages folder; place the xml files containing the labeling information generated in step 2 into the /classification/Annotations folder; and create the files train.txt, trainval.txt, val.txt and test.txt in the /classification/ImageSets/Main folder. The four files correspond to the training set, the training-and-validation set, the validation set and the test set, respectively.
4. Train the vehicle recognition and classification network: take the data sets of steps 2 and 3 as input data of the SENet-based Faster R-CNN, and judge whether the neural network has converged from the observed change curve of the loss function during training; if it has converged, stop training; otherwise, continue training the model on the training set.
5. Test model performance on the test set.
Further, as shown in fig. 1, in the detection stage, the embodiment of the present invention performs a pooling operation through the detection network, classifies each region through the bbox classification network, and predicts the bounding box of the vehicle through the bbox regression network. The detection network has two parallel output layers. The output of the classification layer is the probability distribution of each candidate region over the vehicle and background categories, p = (p0, p1). The output of the regression network is the parameters of the vehicle bounding box coordinates:

t^k = (t_x^k, t_y^k, t_w^k, t_h^k),

where k represents a category. The bounding box regression network and the classification network are trained with the joint loss function L(p, u, t^u, v) = L_cls(p, u) + λ[u ≥ 1]·L_reg(t^u, v), where L_cls(p, u) = −log(p_u) is the log loss of the actual class u, and L_reg is only activated when a vehicle is detected (u ≥ 1).
The network detection main process comprises the following steps:
s1: raw data is preprocessed into an M × N size image as a network input.
S2: and extracting features through a feature extraction network SE-VGG-16.
S3: and dividing the extracted feature set into two paths, inputting one path of feature set into an RPN network, and transmitting the other path of feature set to a specific convolution layer to obtain a higher-dimensional feature.
S4: the feature map processed by the RPN network will generate a corresponding region score, and then the region suggestion is obtained by the maximum suppression algorithm.
S5: the high-dimensional features obtained in step S4 and the region suggestions obtained in step S5 are simultaneously input to the RoI pooling layer, and the features corresponding to the region suggestions are extracted.
S6: and inputting the obtained region suggestion characteristics into a full connection layer to obtain the classification score of the region and the regressed bbox, classifying the region through a bbox classification network, and predicting a boundary frame of the vehicle through a bbox regression network.
The network training main process comprises the following steps:
First the SE-VGG network is trained, then the RPN; the detection network is trained on the region proposals extracted by the RPN, and the RPN is then fine-tuned with the parameters of the detection network. That is, the RPN and the detection network are trained jointly, and this alternating training is repeated until convergence.
S1: and setting the specification and the number of the candidate areas, and continuously adjusting the parameters of the candidate areas along with the increase of the iteration times to finally approach the real vehicle identification area.
S2: to accelerate the convergence speed, candidate regions similar to the vehicle in the image are clustered using the K-means method.
S3: IoU is an important indicator that reflects the difference between the candidate region and the true region to be detected. The larger the IoU value, the smaller the difference between the two regions.
S4: k-means clustering uses Euclidean distance to measure the distance between two points as follows:
min∑NM(1-IoU(Box[N],Truth[M]))。
where N refers to the category of the cluster, M refers to the sample level of the cluster, Box [ N ] refers to the width and height of the candidate region, and Box [ M ] refers to the width and height of the actual vehicle region.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium including a stored program, wherein the program controls a device in which the non-volatile storage medium is located to perform a feature recalibration-based image recognition and classification method.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a processor and a memory; the memory has stored therein computer readable instructions for execution by the processor, wherein the computer readable instructions when executed perform a method for feature recalibration based image recognition and classification.
Through the above embodiment, the following technical problems in the prior art are solved: when a convolutional neural network is used for image recognition, traditional image processing algorithms that rely on manual feature extraction are inefficient; and, because pictures shot at many loop crossings contain vehicles that are incomplete at the image edges, most such incomplete vehicle pictures cannot be correctly recognized when a Faster R-CNN network is trained, background regions are wrongly judged as vehicles, and the precision on the training set is low.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and refinements can be made without departing from the principle of the present invention, and these modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. An image identification and classification method based on feature recalibration is characterized by comprising the following steps:
acquiring vehicle image data;
identifying and classifying the vehicle image data to obtain a training data set;
training a preset network through the training data set to obtain a training result;
and generating a vehicle identification and classification result according to the training result and the vehicle image data.
2. The method of claim 1, wherein the identifying and classifying the vehicle image data to obtain a training data set comprises:
acquiring vehicle frame data and vehicle classification data according to the vehicle image data;
generating a data set file according to the vehicle frame data and the vehicle classification data;
and summarizing the vehicle frame data and the vehicle classification data into the data set file to obtain the training data set.
3. The method of claim 1, wherein after the training of the predetermined network by the training data set to obtain the training result, the method further comprises:
and testing the training result.
4. The method of claim 1, wherein after the generating a vehicle identification and classification result from the training result and the vehicle image data, the method further comprises:
and displaying the vehicle identification and classification result.
5. An image recognition and classification device based on feature recalibration, comprising:
the acquisition module is used for acquiring vehicle image data;
the recognition module is used for identifying and classifying the vehicle image data to obtain a training data set;
the training module is used for training a preset network through the training data set to obtain a training result;
and the generating module is used for generating a vehicle identification and classification result according to the training result and the vehicle image data.
6. The apparatus of claim 5, wherein the identification module comprises:
the acquisition unit is used for acquiring vehicle frame data and vehicle classification data according to the vehicle image data;
the generating unit is used for generating a data set file through the vehicle frame data and the vehicle classification data;
and the summarizing unit is used for summarizing the vehicle frame data and the vehicle classification data into the data set file to obtain the training data set.
7. The apparatus of claim 5, further comprising:
and the testing module is used for testing the training result.
8. The apparatus of claim 5, further comprising:
and the display module is used for displaying the vehicle identification and classification result.
9. A non-volatile storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the non-volatile storage medium is located to perform the method of any one of claims 1 to 4.
10. An electronic device comprising a processor and a memory; the memory has stored therein computer readable instructions for execution by the processor, wherein the computer readable instructions when executed perform the method of any one of claims 1 to 4.
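For readers approaching the claims from an implementation angle, the four steps of claim 1 (and the mirrored modules of claim 5) can be sketched roughly as follows. All names, the data layout, and the trivial stand-in "model" are hypothetical illustrations only, not the claimed method:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Sample:
    image_id: str
    boxes: List[Tuple[float, float, float, float]]  # vehicle frame data (x, y, w, h)
    labels: List[str]                               # vehicle classification data

def build_training_dataset(samples: List[Sample]) -> List[Sample]:
    """Claim 2: summarize frame and classification data into one data set,
    keeping only samples that carry both kinds of annotation."""
    return [s for s in samples if s.boxes and s.labels]

def train(dataset: List[Sample]) -> Callable[[Sample], List[str]]:
    """Claim 1, step 3: 'train' a trivial stand-in model that memorizes labels
    (a real implementation would train the preset detection network here)."""
    memory = {s.image_id: s.labels for s in dataset}
    return lambda s: memory.get(s.image_id, ["unknown"])

def recognize_and_classify(samples: List[Sample]) -> Dict[str, List[str]]:
    """Claim 1: the four claimed steps end to end."""
    dataset = build_training_dataset(samples)        # steps 1-2: acquire + build data set
    model = train(dataset)                           # step 3: train on the data set
    return {s.image_id: model(s) for s in samples}   # step 4: generate results
```

The sketch only fixes the data flow between the claimed modules; the interesting work (the feature-recalibration network itself) lives behind `train`.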
CN202110856064.1A 2021-07-28 2021-07-28 Image recognition and classification method and device based on feature recalibration Active CN113569734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110856064.1A CN113569734B (en) 2021-07-28 2021-07-28 Image recognition and classification method and device based on feature recalibration


Publications (2)

Publication Number Publication Date
CN113569734A true CN113569734A (en) 2021-10-29
CN113569734B CN113569734B (en) 2023-05-05

Family

ID=78168375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110856064.1A Active CN113569734B (en) 2021-07-28 2021-07-28 Image recognition and classification method and device based on feature recalibration

Country Status (1)

Country Link
CN (1) CN113569734B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194318A (en) * 2017-04-24 2017-09-22 北京航空航天大学 The scene recognition method of target detection auxiliary
CN111553200A (en) * 2020-04-07 2020-08-18 北京农业信息技术研究中心 Image detection and identification method and device
CN112101117A (en) * 2020-08-18 2020-12-18 长安大学 Expressway congestion identification model construction method and device and identification method
CN112329737A (en) * 2020-12-01 2021-02-05 哈尔滨理工大学 Vehicle detection method based on improved Faster RCNN
US20210055737A1 (en) * 2019-08-20 2021-02-25 Volkswagen Ag Method of pedestrian activity recognition using limited data and meta-learning



Similar Documents

Publication Publication Date Title
CN108830188B (en) Vehicle detection method based on deep learning
Liu et al. Machine vision based traffic sign detection methods: Review, analyses and perspectives
CN108921083B (en) Illegal mobile vendor identification method based on deep learning target detection
CN106248559B (en) A kind of five sorting technique of leucocyte based on deep learning
CN101859382B (en) License plate detection and identification method based on maximum stable extremal region
CN107545271B (en) Image recognition method, device and system
CN105574550A (en) Vehicle identification method and device
CN109359696A (en) A kind of vehicle money recognition methods, system and storage medium
CN109523518B (en) Tire X-ray defect detection method
CN106022285A (en) Vehicle type identification method and vehicle type identification device based on convolutional neural network
CN107563280A (en) Face identification method and device based on multi-model
CN110689043A (en) Vehicle fine granularity identification method and device based on multiple attention mechanism
CN108764361B (en) Working condition identification method of indicator diagram of beam-pumping unit based on integrated learning
Yang et al. Improved lane detection with multilevel features in branch convolutional neural networks
CN111950583B (en) Multi-scale traffic signal sign recognition method based on GMM (Gaussian mixture model) clustering
CN111400533B (en) Image screening method, device, electronic equipment and storage medium
CN104615986A (en) Method for utilizing multiple detectors to conduct pedestrian detection on video images of scene change
CN111522951A (en) Sensitive data identification and classification technical method based on image identification
CN105989334A (en) Road detection method based on monocular vision
Maldonado-Bascon et al. Traffic sign recognition system for inventory purposes
CN112084890A (en) Multi-scale traffic signal sign identification method based on GMM and CQFL
CN113155173A (en) Perception performance evaluation method and device, electronic device and storage medium
Sikirić et al. Image representations on a budget: Traffic scene classification in a restricted bandwidth scenario
CN104978569A (en) Sparse representation based incremental face recognition method
CN109800790A (en) A kind of feature selection approach towards high dimensional data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 1409, Floor 14, Building 1, High tech Zone Entrepreneurship Center, No. 177, Gaoxin 6th Road, Rizhao, Shandong 276801

Applicant after: Shandong Liju Robot Technology Co.,Ltd.

Address before: 276808 No.99, Yuquan 2nd Road, antonwei street, Lanshan District, Rizhao City, Shandong Province

Applicant before: Shandong Liju Robot Technology Co.,Ltd.

GR01 Patent grant