CN109740422B - Method and device for identifying automobile - Google Patents

Method and device for identifying automobile

Info

Publication number
CN109740422B
CN109740422B CN201811401131.5A
Authority
CN
China
Prior art keywords
image
automobile
identified
headstock
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811401131.5A
Other languages
Chinese (zh)
Other versions
CN109740422A (en)
Inventor
刘均
李方勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Launch Technology Co Ltd
Original Assignee
Shenzhen Launch Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Launch Technology Co Ltd filed Critical Shenzhen Launch Technology Co Ltd
Priority to CN201811401131.5A
Publication of CN109740422A
Application granted
Publication of CN109740422B
Legal status: Active (current)
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a method and a device for identifying an automobile. The method comprises the following steps: acquiring a preprocessed first image to be identified, wherein the first image to be identified is a head image of the automobile to be identified; performing first image feature processing on the preprocessed first image to be identified to obtain a head feature image, wherein the first image feature processing comprises one or more rounds of head image feature extraction and stepwise reduction of the head image feature size; and identifying the brand or model of the automobile from the head feature image. A corresponding apparatus is also disclosed. In the method, a head neural network model performs convolution operations on the head image of the automobile to be identified and extracts a head feature image; a softmax layer then predicts the brand or model of the automobile to be identified from the head features in that image, autonomously and quickly producing a probability for each candidate brand or model; and the brand or model of the automobile to be identified is finally identified accurately from the probability values.

Description

Method and device for identifying automobile
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a method and a device for identifying an automobile.
Background
In recent years, with rapid economic development and a continuously rising standard of living, more and more people are buying automobiles. Automobiles bring great convenience, but they are, after all, machines and can break down. Automobiles of different models differ in structure and parts, so the model of an automobile needs to be identified accurately when it is repaired.
In some cases it is difficult for a repair technician to identify the specific model of a vehicle by eye, and conventional vehicle identification technologies, which involve vehicle detection and separation, feature extraction and selection, pattern recognition and similar processing, have a low recognition rate against complex backgrounds.
Disclosure of Invention
The application provides a method and a device for identifying an automobile so as to identify the automobile type.
In a first aspect, a method of identifying an automobile is provided, comprising: acquiring a preprocessed first image to be identified; performing first image feature processing on the preprocessed first image to be identified to obtain a headstock feature image, wherein the first image feature processing comprises one or more times of headstock image feature extraction and headstock image feature size gradual reduction; and identifying the brand or model of the automobile according to the head characteristic image.
In one possible implementation manner, the acquiring the preprocessed first image to be identified includes: acquiring a head image of the automobile to be identified; and normalizing the head image to obtain the preprocessed first image to be identified.
In another possible implementation manner, the performing a first image feature processing on the preprocessed first image to be identified includes: performing convolution operation on the preprocessed first image to be identified to obtain extracted headstock image features; reducing the size of the extracted headstock image characteristics to a target size to obtain a first headstock characteristic image; and carrying out the convolution operation and the size reduction on the first headstock characteristic image for target times to obtain the headstock characteristic image.
In yet another possible implementation, the method further includes: predicting the brand or model of the automobile corresponding to the head in the first image to be identified according to the processed image characteristics to obtain one or more probability values, wherein the probability values are probabilities of the model or the brand of the automobile to be identified; and taking the automobile model or the automobile brand corresponding to the maximum value in the one or more probability values as the identified automobile brand or model.
In yet another possible implementation, the method further includes: when only the brand of the automobile is identified from the head feature image, acquiring a preprocessed second image to be identified, wherein the second image to be identified is a tail image of the automobile to be identified; performing second image feature processing on the preprocessed second image to be identified to obtain tail feature images of one or more automobile models associated with the head feature image, wherein the second image feature processing comprises one or more rounds of tail image feature extraction and stepwise reduction of the tail image feature size; and obtaining the model of the automobile to be identified from the tail feature image.
In a second aspect, there is provided an apparatus for identifying an automobile, comprising: the acquisition unit is used for acquiring the preprocessed first image to be identified; the first processing unit is used for carrying out first image feature processing on the preprocessed first image to be identified to obtain a headstock feature image, wherein the first image feature processing comprises one or more times of headstock image feature extraction and step-by-step downsizing of headstock image features; the first identification unit is used for identifying the brand or model of the automobile according to the head characteristic image.
In one possible implementation manner, the acquiring unit includes: the acquisition subunit is used for acquiring a head image of the automobile to be identified; the normalization subunit is used for normalizing the headstock image to obtain the preprocessed first image to be identified.
In another possible implementation manner, the first processing unit includes: the first processing subunit is used for carrying out convolution operation on the preprocessed first image to be identified to obtain extracted headstock image characteristics; the second processing subunit is used for reducing the size of the extracted headstock image characteristic to a target size to obtain a first headstock characteristic image; and the third processing subunit is used for carrying out the convolution operation and the size reduction on the first headstock characteristic image for target times to obtain the headstock characteristic image.
In yet another possible implementation, the apparatus further includes: the prediction unit is used for predicting the brand or model of the automobile corresponding to the head in the first image to be identified according to the processed image characteristics to obtain one or more probability values, wherein the probability values are the probabilities of the model or the brand of the automobile to be identified; and the determining unit is used for taking the automobile model or the automobile brand corresponding to the maximum value in the one or more probability values as the identified automobile brand or model.
In yet another possible implementation, the apparatus further includes: the acquiring unit is further used for acquiring a preprocessed second image to be identified when only the brand of the automobile is identified from the head feature image, wherein the second image to be identified is a tail image of the automobile to be identified; the second processing unit is used for performing second image feature processing on the preprocessed second image to be identified to obtain tail feature images of one or more automobile models associated with the head feature image, wherein the second image feature processing comprises one or more rounds of tail image feature extraction and stepwise reduction of the tail image feature size; and the second identification unit is used for obtaining the model of the automobile to be identified from the tail feature image.
In a third aspect, there is provided an apparatus for identifying an automobile, comprising a processor and a memory. The processor is configured to support the apparatus in performing the corresponding functions of the method of the first aspect and any one of its possible implementations. The memory is coupled to the processor and holds the programs (instructions) and data necessary for the apparatus. Optionally, the apparatus may further comprise an input/output interface for supporting communication between the apparatus and other apparatuses.
In a fourth aspect, a computer readable storage medium is provided, having instructions stored therein, which when run on a computer, cause the computer to perform the method of the first aspect and any of its possible implementations.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect and any one of its possible implementations.
According to the embodiments of the application, a head neural network model performs convolution operations on the head image of the automobile to be identified and extracts a head feature image; a softmax layer then predicts the brand or model of the automobile to be identified from the features in the head feature image, autonomously and quickly producing a probability for each candidate brand or model, so that the brand or model of the automobile to be identified is finally identified accurately from the probability values. Because the recognition result of the head neural network model may only be the brand of the automobile, the tail of the automobile to be identified is then recognized by the neural network model of that automobile brand, yielding the model of the automobile to be identified. The identification process requires no manual intervention; the user only needs to feed the photographed head image and tail image of the automobile to be identified into the respective models to obtain a highly accurate identification result.
Drawings
In order to more clearly describe the technical solutions in the embodiments or the background of the present application, the following description will describe the drawings that are required to be used in the embodiments or the background of the present application.
Fig. 1 is a schematic flow chart of a method for identifying an automobile according to an embodiment of the present application;
FIG. 2 is a flow chart of another method for identifying an automobile according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an apparatus for identifying an automobile according to an embodiment of the present application;
fig. 4 is a schematic hardware structure of an apparatus for identifying an automobile according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings of those embodiments. All other embodiments obtained by one of ordinary skill in the art from the present disclosure without undue burden fall within the scope of protection of the present application.
The terms first, second and the like in the description and in the claims of the present application and in the above-described figures, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Embodiments of the present application are described below with reference to the accompanying drawings in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a flowchart of a method for identifying an automobile according to an embodiment of the present application.
101. And acquiring a preprocessed first image to be identified.
The embodiments of the application identify the specific model of an automobile mainly through a head neural network model, where a specific model means a particular model of a given brand, for example: BMW 3 Series and 5 Series; Audi A4 and A6; Volkswagen Magotan and Passat. The head neural network model has to process the head image, i.e. the first image to be identified, and produce a recognition result, so the first image to be identified is preprocessed first so that the model can process it more effectively.
102. And carrying out first image feature processing on the preprocessed first image to be identified to obtain a headstock feature image.
The head neural network model performs convolution operations on the preprocessed first image to be identified, extracts a head feature image from it, and pools the head features to reduce their size and hence the amount of subsequent computation. It should be noted that the head neural network model consists of a plurality of convolution layers; the features extracted by one convolution layer are used as the input of the next convolution operation, and the more convolution layers there are, the richer the extracted feature information and the higher the accuracy of the finally extracted features.
103. And identifying the brand or model of the automobile according to the head characteristic image.
The softmax layer in the headstock neural network model identifies the brand or model of the automobile to be identified according to the headstock features in the headstock feature images, gives out the probability that the automobile to be identified is a certain brand or model, and finally selects the brand or model of the automobile corresponding to the maximum probability value as the brand or model of the automobile to be identified.
According to this embodiment of the application, the head image of the automobile to be identified is convolved by the head neural network model to extract a head feature image, the softmax layer predicts the brand or model of the automobile to be identified from the head features in that image, a probability for each candidate brand or model is obtained autonomously and quickly, and the brand or model of the automobile to be identified is finally identified accurately from the probability values.
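Purely as an illustration of this flow, the sketch below uses the publicly available VGG16 implementation from torchvision as a stand-in for the head neural network model; the label list, the untrained weights and the random input are assumptions made only so that the example runs.

```python
# Illustrative sketch of the flow in Fig. 1, with torchvision's VGG16 standing in
# for the head neural network model; the labels and weights are placeholders.
import torch
import torchvision

BRAND_OR_MODEL_NAMES = ["brand_or_model_%d" % i for i in range(10)]    # hypothetical labels

model = torchvision.models.vgg16()                                     # untrained VGG16-style network
model.classifier[6] = torch.nn.Linear(4096, len(BRAND_OR_MODEL_NAMES))
model.eval()

def identify_from_head(head_image):
    """head_image: float tensor of shape (3, 256, 256), already preprocessed (step 101)."""
    with torch.no_grad():
        logits = model(head_image.unsqueeze(0))      # step 102: convolution and pooling layers
        probs = torch.softmax(logits, dim=1)[0]      # step 103: softmax probabilities
    best = int(torch.argmax(probs))                  # the largest probability wins
    return BRAND_OR_MODEL_NAMES[best], float(probs[best])

# Example call with a random tensor in place of a real photograph:
print(identify_from_head(torch.rand(3, 256, 256)))
```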
Referring to fig. 2, fig. 2 is a flowchart of another method for identifying an automobile according to an embodiment of the present application.
201. And acquiring a head image of the automobile to be identified.
The embodiments of the application identify the specific model of an automobile mainly through a head neural network model, where a specific model means a particular model of a given brand, for example: BMW 3 Series and 5 Series; Audi A4 and A6; Volkswagen Magotan and Passat. The head neural network model has to process the head image and produce the recognition result, so when a user needs to identify a vehicle of unknown model with the head neural network model, a head image of the vehicle to be identified must first be acquired, i.e. the first image to be identified. Optionally, the head neural network model can be obtained by training a VGG16 deep neural network; the first image to be identified can be captured with any intelligent terminal that has a camera (such as a mobile phone or a tablet computer).
202. Normalizing the headstock image to obtain a preprocessed first image to be identified.
The first image to be identified is then input into the head neural network model for processing. On the one hand, while processing the first image to be identified and extracting features, the deep neural network is effectively fitting the distribution of the data, and it must also generalize well to the subsequent test set; clearly, if the data fed to the deep neural network follows a different distribution each time, feature extraction becomes much harder. On the other hand, during feature extraction the distribution seen by each layer of the deep neural network shifts as the parameters of the previous layer change, and the data distribution changes again after the data passes through that layer, which makes feature extraction harder for the next layer. Therefore, before any subsequent processing, the first image to be identified is normalized to remove correlations in the data and highlight the distribution differences between the feature data, yielding the preprocessed first image to be identified. The specific normalization steps are as follows:
Let the input batch be β = {x_1, ..., x_m}, containing m data in total, and let the output be y_i = BN(x_i). The normalization operates on the data as follows:

First, the mean of the batch β = {x_1, ..., x_m} is computed:

μ_β = (1/m) Σ_{i=1}^{m} x_i

Then, using the mean μ_β, the variance of the batch is computed:

σ_β² = (1/m) Σ_{i=1}^{m} (x_i - μ_β)²

Finally, the batch is normalized using the mean μ_β and the variance σ_β²:

x̂_i = (x_i - μ_β) / √(σ_β² + ε)

where ε is a small constant, and x̂_i is the normalized data.
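A short NumPy sketch mirroring the formulas above is given below; treating the whole batch of images jointly and the exact value of the small constant eps are assumptions made for illustration.

```python
# Minimal sketch of the normalization step, mirroring mu, sigma^2 and x_hat above;
# computing statistics over the whole batch and the value of eps are assumptions.
import numpy as np

def normalize_batch(batch, eps=1e-5):
    """batch: array of shape (m, H, W, C) holding m head images."""
    mu = batch.mean(axis=0)                      # mean of the m images
    var = ((batch - mu) ** 2).mean(axis=0)       # variance of the m images
    return (batch - mu) / np.sqrt(var + eps)     # normalized data

# Example: normalize a batch of four random 256 x 256 RGB "head images".
images = np.random.rand(4, 256, 256, 3).astype(np.float32)
normalized = normalize_batch(images)
print(normalized.mean(), normalized.std())       # roughly 0 and 1
```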
203. And carrying out convolution operation on the preprocessed first image to be identified to obtain the extracted headstock image characteristics.
The head neural network model performs convolution operations on the preprocessed first image to be identified and extracts head image features from it. Specifically, a convolution kernel is slid over the preprocessed first image to be identified; at each position, the gray value of each covered pixel is multiplied by the corresponding value of the convolution kernel, and the sum of all these products is taken as the value of the output feature at the pixel corresponding to the center of the kernel. After this sliding operation has been applied to every position of the preprocessed first image to be identified, the head image features have been extracted.
It should be understood that the features of the vehicle head image extracted by the vehicle head neural network model are determined by training the VGG16 deep neural network, which is not described in detail herein.
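As a rough illustration of the sliding-kernel operation just described (the kernel values here are random, whereas in the application they would come from training the network):

```python
# Naive sketch of the sliding-kernel convolution in step 203: multiply the pixels
# under the kernel by the kernel values and write the sum to the output position.
import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = float((patch * kernel).sum())    # sum of the products
    return out

# Example with a random single-channel image and a random 3 x 3 kernel.
feature_map = convolve2d(np.random.rand(8, 8), np.random.rand(3, 3))
print(feature_map.shape)    # (6, 6)
```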
204. And reducing the size of the extracted headstock image characteristic to a target size to obtain a first headstock characteristic image.
Because the extracted head image features are numerous and large, using them directly for subsequent processing would require an enormous amount of computation. Therefore, before the subsequent processing, the extracted head image features are pooled so that their size is reduced to a target size; this meets the requirements of the subsequent processing while greatly reducing its computational cost.
For a specific implementation of the pooling process, see the following example: assuming the extracted head image feature has size H×W and the target size is h×w, the feature can be divided into h×w cells, each of size (H/h)×(W/w); the average value or the maximum value of the feature within each cell is then computed, giving the head feature image at the target size.
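A small sketch of that cell-based pooling, under the simplifying assumption that H and W divide exactly by h and w:

```python
# Sketch of the pooling in step 204: split an H x W feature map into an h x w grid
# of cells and keep the mean (or the maximum) of each cell.
import numpy as np

def pool_to(feature, h, w, mode="mean"):
    H, W = feature.shape
    assert H % h == 0 and W % w == 0, "the example assumes exact division"
    cells = feature.reshape(h, H // h, w, W // w)
    return cells.mean(axis=(1, 3)) if mode == "mean" else cells.max(axis=(1, 3))

# Reduce a 256 x 256 feature map to a target size of 128 x 128.
pooled = pool_to(np.random.rand(256, 256), 128, 128)
print(pooled.shape)    # (128, 128)
```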
205. And carrying out target times convolution operation and size reduction on the first vehicle head characteristic image to obtain a vehicle head characteristic image.
As described above, the head neural network model in this embodiment of the application is obtained by training a VGG16 deep neural network, so the main architecture of the model is consistent with that of the VGG16 deep neural network. The specific architecture and related parameters are as follows:
the first layer is an input layer, the input layer inputs a first image to be identified, the output layer outputs the preprocessed first image to be identified, and the size of the first image to be identified is 256×256×3.
The second to fourth layers form the first convolutional neural network unit. The second layer is a convolution layer containing 64 convolution kernels of size 3*3; it convolves the preprocessed first image to be identified with these kernels and extracts head features, giving a head feature image. The third layer is a convolution layer containing 64 convolution kernels of size 3*3; it convolves the head feature image extracted by the second layer to extract head features further. The fourth layer is a pooling layer with a 2 x 2 stride; it pools the head feature image obtained so far and reduces the size of the head features.
The fifth to seventh layers form the second convolutional neural network unit. The fifth layer is a convolution layer containing 128 convolution kernels of size 3*3; it convolves the head feature image output by the previous layer and extracts head features further, giving a head feature image. The sixth layer is a convolution layer containing 128 convolution kernels of size 3*3; it convolves the head feature image extracted by the fifth layer to extract head features further. The seventh layer is a pooling layer with a 2 x 2 stride; it pools the head feature image obtained so far and reduces the size of the head features.
The eighth to tenth layers form the third convolutional neural network unit. The eighth layer is a convolution layer containing 256 convolution kernels of size 3*3; it convolves the head feature image output by the previous layer and extracts head features further, giving a head feature image. The ninth layer is a convolution layer containing 256 convolution kernels of size 3*3; it convolves the head feature image extracted by the eighth layer to extract head features further. The tenth layer is a pooling layer with a 2 x 2 stride; it pools the head feature image obtained so far and reduces the size of the head features.
The eleventh to thirteenth layers form the fourth convolutional neural network unit. The eleventh layer is a convolution layer containing 512 convolution kernels of size 3*3; it convolves the head feature image output by the previous layer and extracts head features further, giving a head feature image. The twelfth layer is a convolution layer containing 512 convolution kernels of size 3*3; it convolves the head feature image extracted by the eleventh layer to extract head features further. The thirteenth layer is a pooling layer with a 2 x 2 stride; it pools the head feature image obtained so far and reduces the size of the head features.
The fourteenth to sixteenth layers form the fifth convolutional neural network unit. The fourteenth layer is a convolution layer containing 512 convolution kernels of size 3*3; it convolves the head feature image output by the previous layer and extracts head features further, giving a head feature image. The fifteenth layer is a convolution layer containing 512 convolution kernels of size 3*3; it convolves the head feature image extracted by the fourteenth layer to extract head features further. The sixteenth layer is a pooling layer with a 2 x 2 stride; it pools the head feature image obtained so far and reduces the size of the head features.
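For readability, the layer stack described above can be written down roughly as follows; the padding, the ReLU activations, the flattened size for a 256 x 256 input and the placeholder class count are assumptions, since the text does not state them.

```python
# Rough PyTorch sketch of the layer stack described above: five units of paired 3*3
# convolutions (64/128/256/512/512 kernels) each followed by 2 x 2 pooling, then
# fully connected layers and a softmax over num_classes brands or models.
import torch
import torch.nn as nn

def conv_unit(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2, stride=2),    # pooling layer with a 2 x 2 step
    )

def head_network(num_classes):
    return nn.Sequential(
        conv_unit(3, 64),        # layers 2-4
        conv_unit(64, 128),      # layers 5-7
        conv_unit(128, 256),     # layers 8-10
        conv_unit(256, 512),     # layers 11-13
        conv_unit(512, 512),     # layers 14-16
        nn.Flatten(),
        nn.Linear(512 * 8 * 8, 4096), nn.ReLU(inplace=True),   # layer 17 (256 x 256 input)
        nn.Linear(4096, 4096), nn.ReLU(inplace=True),          # layer 18
        nn.Linear(4096, 4096), nn.ReLU(inplace=True),          # layer 19
        nn.Linear(4096, num_classes), nn.Softmax(dim=1),       # layer 20: softmax over N classes
    )

net = head_network(num_classes=50)           # 50 is a placeholder for N
probs = net(torch.rand(1, 3, 256, 256))      # one preprocessed 256 x 256 x 3 head image
print(probs.shape, float(probs.sum()))       # torch.Size([1, 50]) and a sum of about 1.0
```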
It can be seen from the above network structure and parameters that the head neural network model comprises multiple convolution layers; the features extracted by one convolution layer serve as the input of the next convolution operation, and the more convolution kernels a layer contains, the richer the corresponding feature information and the higher the accuracy of the finally extracted features. Steps 203 to 205 together constitute the first image feature processing performed on the preprocessed first image to be identified to obtain the head feature image, where the first image feature processing comprises one or more rounds of head image feature extraction and stepwise reduction of the head image feature size.
206. And predicting the brand or model of the automobile corresponding to the head in the first image to be identified according to the processed image characteristics to obtain one or more probability values.
The seventeenth, eighteenth and nineteenth layers are fully connected layers containing 4096 nodes. The twentieth layer is a softmax layer with N nodes in total, and its activation function is softmax. It should be noted that the N nodes correspond one-to-one to N automobile brands. For example, if there are M automobile brands on the market, at least one head image is selected for each brand as training material and input into the VGG16 deep neural network for training, so that the softmax layer of the resulting head neural network model has M nodes, used to recognize the M automobile brands respectively.
The softmax layer maps different input features to values between 0 and 1 through its built-in softmax function; the mapped values sum to 1 and correspond one-to-one to the input features. In this way a prediction is produced for each input feature, and the corresponding probability is given as a number. For example, if there are 10 nodes corresponding to 10 input categories, i.e. the input belongs to one of categories 1, 2, 3, ..., 10, the softmax layer gives the probability that the input is category 1, 2, 3, ..., 10 respectively, and the category with the largest probability value is predicted as the input category.
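The mapping performed by the softmax function can be sketched numerically as follows; the scores in the example are invented.

```python
# Sketch of the softmax mapping: raw scores become values in (0, 1) that sum to 1,
# and the class with the largest value is taken as the prediction.
import numpy as np

def softmax(scores):
    e = np.exp(scores - np.max(scores))    # subtract the maximum for numerical stability
    return e / e.sum()

scores = np.array([1.3, 0.2, 4.1, -0.7])               # hypothetical scores for 4 classes
probs = softmax(scores)
print(probs, probs.sum())                               # four values in (0, 1) summing to 1.0
print("predicted class:", int(np.argmax(probs)))        # the class with the largest probability
```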
Optionally, when the VGG16 deep neural network is trained, the head features in the head feature image can be predicted by the softmax layer. Specifically, the softmax layer combines the different head features in the head feature image to predict whether the automobile to be identified is a specific model or a specific brand, gives the probability that it is that model or brand, and finally yields probability values for one or more models or brands. The model corresponding to the largest of these probability values is selected and output as the recognition result. This result is compared with the actual model of the automobile in the input image, and the relevant parameters of the softmax function are adjusted, completing the training of the softmax layer.
It should be noted that, in practical application, the model or brand corresponding to the largest of the one or more probability values is taken as the identified model or brand; that is, the recognition result output by the head neural network model is not necessarily a specific vehicle model.
207. And when the vehicle head characteristic image identifies the brand of the vehicle, acquiring a second image to be identified after pretreatment.
As described above, the recognition result output by the head neural network model is not necessarily a vehicle model; it may be only the brand of the vehicle to be identified, whereas the result the user needs is the model of the vehicle to be identified. When the recognition result output by the head neural network model is an automobile brand, this embodiment of the application recognizes the tail of the vehicle to be identified through the neural network model of that automobile brand. Like the head neural network model, each automobile brand neural network model is also obtained by training a VGG16 deep neural network, so its structure and numbers of convolution kernels can be seen in step 205. Because each automobile brand covers a plurality of models, i.e. each brand corresponds to a plurality of models, once the brand of the vehicle to be identified has been recognized by the head neural network model, model recognition can be restricted to that brand, which narrows the recognition range, speeds up the computation and improves the recognition accuracy. Therefore, in the application a separate neural network model is trained for each automobile brand, and that model can recognize all the models belonging to that brand.
The second image to be identified, i.e. the tail image of the automobile to be identified, is recognized by the automobile brand neural network model, and the whole recognition process can be summarized in the following steps: performing second image feature processing on the preprocessed second image to be identified to obtain tail feature images of one or more automobile models associated with the head feature image, where the second image feature processing comprises one or more rounds of tail image feature extraction and stepwise reduction of the tail image feature size; and obtaining the model of the automobile to be identified from the tail feature image. Since recognizing the second image to be identified is similar to recognizing the first image to be identified, see steps 201 to 206 for details, which are not repeated here.
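A compact sketch of that two-stage decision is given below; the function names, the labels and the dictionary of per-brand networks are hypothetical placeholders rather than anything specified in the application.

```python
# Two-stage sketch of step 207: if the head network only yields a brand, a network
# trained for that brand inspects the tail image to decide the exact model.
# The function names, labels and dictionary of per-brand networks are placeholders.

def identify_vehicle(head_image, tail_image, head_model, brand_models, known_models):
    label, prob = head_model(head_image)        # result of steps 201 to 206
    if label in known_models:                   # the head image already gave a model
        return label, prob
    tail_model = brand_models[label]            # one tail network per automobile brand
    return tail_model(tail_image)               # model identified from the tail image

# Trivial stand-in "networks" so the sketch runs end to end:
head_model = lambda img: ("BMW", 0.93)                        # only the brand is clear here
brand_models = {"BMW": lambda img: ("BMW 3 Series", 0.88)}    # brand-specific network
print(identify_vehicle(None, None, head_model, brand_models, known_models={"Audi A4"}))
```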
According to the embodiments of the application, a head neural network model performs convolution operations on the head image of the automobile to be identified and extracts a head feature image; a softmax layer then predicts the brand or model of the automobile to be identified from the features in the head feature image, autonomously and quickly producing a probability for each candidate brand or model, so that the brand or model of the automobile to be identified is finally identified accurately from the probability values. Because the recognition result of the head neural network model may only be the brand of the automobile, the tail of the automobile to be identified is then recognized by the neural network model of that automobile brand, yielding the model of the automobile to be identified. The identification process requires no manual intervention; the user only needs to feed the photographed head image and tail image of the automobile to be identified into the respective models to obtain a highly accurate identification result.
The foregoing details the method of embodiments of the present application, and the apparatus of embodiments of the present application is provided below.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an apparatus for identifying an automobile according to an embodiment of the present application, where the apparatus 1000 includes: the apparatus comprises an acquisition unit 11, a first processing unit 12, a first identification unit 13, a prediction unit 14, a determination unit 15, a second processing unit 16 and a second identification unit 17. Wherein:
an acquiring unit 11, configured to acquire a preprocessed first image to be identified;
a first processing unit 12, configured to perform a first image feature process on the preprocessed first image to be identified to obtain a headstock feature image, where the first image feature process includes one or more times of headstock image feature extraction and step-by-step downsizing of headstock image features;
a first identifying unit 13, configured to identify a brand or model of the automobile according to the headstock feature image.
Further, the acquisition unit 11 includes: an acquisition subunit 111, configured to acquire the first image to be identified; and the normalization subunit 112 is configured to normalize the first image to be identified, and obtain the preprocessed first image to be identified.
Further, the first processing unit 12 includes: a first processing subunit 121, configured to perform a convolution operation on the preprocessed first image to be identified, so as to obtain an extracted feature of the vehicle head image; a second processing subunit 122, configured to reduce the size of the extracted features of the vehicle head image to a target size, so as to obtain a first feature image of the vehicle head; and a third processing subunit 123, configured to perform the convolution operation and the size reduction on the first headstock feature image for a target number of times, to obtain the headstock feature image.
Further, the apparatus 1000 further comprises: the prediction unit 14 is configured to predict a brand or model of an automobile corresponding to the head in the first image to be identified according to the processed image feature, so as to obtain one or more probability values, where the probability value is a probability of the model or brand of the automobile to be identified; and a determining unit 15, configured to use the model number or the brand of the automobile corresponding to the maximum value of the one or more probability values as the identified brand or model number of the automobile.
Further, the apparatus 1000 further comprises: the obtaining unit 11 is further configured to obtain a second preprocessed image to be identified when the brand of the automobile is identified according to the head feature image, where the second image to be identified is a tail image of the automobile to be identified; a second processing unit 16, configured to perform a second image feature process on the preprocessed second image to be identified, to obtain one or more tail feature images of the vehicle model associated with the head feature image, where the second image feature process includes one or more of extraction of tail image features and gradual reduction of sizes of tail image features; and the second recognition unit 17 is configured to obtain a vehicle model corresponding to the vehicle to be recognized according to the tail feature image.
According to the embodiments of the application, a head neural network model performs convolution operations on the head image of the automobile to be identified and extracts a head feature image; a softmax layer then predicts the brand or model of the automobile to be identified from the features in the head feature image, autonomously and quickly producing a probability for each candidate brand or model, so that the brand or model of the automobile to be identified is finally identified accurately from the probability values. Because the recognition result of the head neural network model may only be the brand of the automobile, the tail of the automobile to be identified is then recognized by the neural network model of that automobile brand, yielding the model of the automobile to be identified. The identification process requires no manual intervention; the user only needs to feed the photographed head image and tail image of the automobile to be identified into the respective models to obtain a highly accurate identification result.
Fig. 4 is a schematic hardware structure of an apparatus for identifying an automobile according to an embodiment of the present application. The identification means comprise a processor 21 and may further comprise input means 22, output means 23 and a memory 24. The input device 22, the output device 23, the memory 24 and the processor 21 are interconnected by a bus.
The memory includes, but is not limited to, random access memory (random access memory, RAM), read-only memory (ROM), erasable programmable read-only memory (erasable programmable read only memory, EPROM), or portable read-only memory (compact disc read-only memory, CD-ROM) for associated instructions and data.
The input means is for inputting data and/or signals and the output means is for outputting data and/or signals. The output device and the input device may be separate devices or may be a single device.
A processor may include one or more processors, including for example one or more central processing units (central processing unit, CPU), which in the case of a CPU may be a single core CPU or a multi-core CPU.
The memory is used to store program codes and data for the network device.
The processor is used for calling the program codes and data in the memory and executing the following steps:
in one implementation, the processor is configured to perform the steps of: acquiring a preprocessed first image to be identified, wherein the first image to be identified is a head image of an automobile to be identified; performing first image feature processing on the preprocessed first image to be identified to obtain a headstock feature image, wherein the first image feature processing comprises one or more times of headstock image feature extraction and headstock image feature size gradual reduction; and identifying the brand or model of the automobile according to the head characteristic image.
In another implementation, the processor is configured to perform the steps of: acquiring the first image to be identified; normalizing the first image to be identified to obtain the preprocessed first image to be identified.
In yet another implementation, the processor is configured to perform the steps of: performing convolution operation on the preprocessed first image to be identified to obtain extracted headstock image features; reducing the size of the extracted headstock image features to obtain a first headstock feature image; and carrying out convolution operation and size reduction on the first headstock characteristic image for a plurality of times to obtain the headstock characteristic image.
In yet another implementation, the processor is configured to perform the steps of: predicting the brand or model of the automobile corresponding to the head in the first image to be identified according to the processed image characteristics to obtain one or more probability values, wherein the probability values are probabilities of the model or the brand of the automobile to be identified; and taking the automobile model or the automobile brand corresponding to the maximum value in the one or more probability values as the identified automobile brand or model.
In yet another implementation, the processor is configured to perform the steps of: when the vehicle head characteristic image identifies the brand of the vehicle, acquiring a preprocessed second image to be identified, wherein the second image to be identified is a vehicle tail image of the vehicle to be identified; performing second image feature processing on the preprocessed second image to be identified to obtain one or more tail feature images of the automobile model associated with the head feature image, wherein the second image feature processing comprises one or more times of tail image feature extraction and tail image feature size gradual reduction; and obtaining the automobile model corresponding to the automobile to be identified according to the automobile tail characteristic image.
It will be appreciated that fig. 4 shows only a simplified design of a device for identifying an automobile. In practical applications, the device for identifying an automobile may also include other necessary elements, including but not limited to any number of input/output devices, processors, controllers, memories, etc.; all devices that can implement the automobile-identification embodiments of the present application fall within the scope of protection of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present application, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in or transmitted across a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line (digital subscriber line, DSL)), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital versatile disk (digital versatile disc, DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that implementing all or part of the above-described method embodiments may be accomplished by a computer program to instruct related hardware, the program may be stored in a computer readable storage medium, and the program may include the above-described method embodiments when executed. And the aforementioned storage medium includes: a read-only memory (ROM) or a random access memory (random access memory, RAM), a magnetic disk or an optical disk, or the like.

Claims (8)

1. A method of identifying an automobile, comprising:
acquiring a preprocessed first image to be identified;
performing first image feature processing on the preprocessed first image to be identified according to a headstock neural network model to obtain a headstock feature image, wherein the first image feature processing comprises one or more headstock image feature extraction and headstock image feature size gradual reduction;
the softmax layer in the headstock neural network model identifies the brand or model of the automobile according to headstock features in the headstock feature image;
when the vehicle head characteristic image identifies the brand of the vehicle, acquiring a preprocessed second image to be identified, wherein the second image to be identified is a vehicle tail image of the vehicle to be identified;
performing second image feature processing on the preprocessed second image to be identified according to an automobile brand neural network model to obtain one or more automobile model tail feature images related to the automobile head feature image, wherein the second image feature processing comprises one or more times of extraction of automobile tail image features and gradual reduction of the sizes of the automobile tail image features;
and the automobile brand neural network model obtains the automobile model corresponding to the automobile to be identified according to the automobile tail characteristic image.
2. The method of claim 1, wherein the acquiring the preprocessed first image to be identified comprises:
acquiring a head image of an automobile to be identified;
normalizing the headstock image to obtain a first image to be identified after pretreatment.
3. The method according to claim 1, wherein the performing, according to the model of the neural network of the vehicle head, the first image feature processing on the preprocessed first image to be identified includes:
the headstock neural network model carries out convolution operation on the preprocessed first image to be identified to obtain extracted headstock image characteristics;
reducing the size of the extracted headstock image characteristics to a target size to obtain a first headstock characteristic image;
and carrying out the convolution operation and the size reduction on the first headstock characteristic image for target times to obtain the headstock characteristic image.
4. A method according to any one of claims 1 to 3, further comprising:
predicting the brand or model of the automobile corresponding to the head in the first image to be identified according to the processed image characteristics to obtain one or more probability values, wherein the probability values are probabilities of the model or the brand of the automobile to be identified;
and taking the automobile model or the automobile brand corresponding to the maximum value in the one or more probability values as the identified automobile brand or model.
5. An apparatus for identifying an automobile, comprising:
the acquisition unit is used for acquiring the preprocessed first image to be identified;
the first processing unit is used for carrying out first image feature processing on the preprocessed first image to be identified according to the headstock neural network model to obtain headstock feature images, wherein the first image feature processing comprises one or more times of headstock image feature extraction and headstock image feature size gradual reduction;
the first identification unit is used for identifying, by a softmax layer in the headstock neural network model, the brand or model of the automobile according to the headstock features in the headstock feature image;
the acquiring unit is further used for acquiring a preprocessed second image to be identified when the vehicle head characteristic image identifies the brand of the vehicle, wherein the second image to be identified is a vehicle tail image of the vehicle to be identified;
the second processing unit is used for performing second image feature processing on the preprocessed second image to be identified according to the automobile brand neural network model to obtain one or more automobile model tail feature images related to the automobile head feature image, wherein the second image feature processing comprises one or more times of extraction of automobile tail image features and gradual reduction of the sizes of the automobile tail image features;
the second recognition unit is used for the automobile brand neural network model to obtain the automobile model corresponding to the automobile to be recognized according to the automobile tail characteristic image.
6. The apparatus of claim 5, wherein the first processing unit further comprises:
the first processing subunit is used for carrying out convolution operation on the preprocessed first image to be identified by the headstock neural network model to obtain extracted headstock image characteristics;
the second processing subunit is used for reducing the size of the extracted headstock image characteristic to a target size to obtain a first headstock characteristic image;
and the third processing subunit is used for carrying out the convolution operation and the size reduction on the first headstock characteristic image for target times to obtain the headstock characteristic image.
7. An apparatus for identifying an automobile, comprising a processor and a memory, wherein the memory stores instructions which, when executed by the processor, cause the apparatus to perform the method of any of claims 1 to 4.
8. A computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of any of claims 1 to 4.
CN201811401131.5A 2018-11-22 2018-11-22 Method and device for identifying automobile Active CN109740422B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811401131.5A CN109740422B (en) 2018-11-22 2018-11-22 Method and device for identifying automobile

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811401131.5A CN109740422B (en) 2018-11-22 2018-11-22 Method and device for identifying automobile

Publications (2)

Publication Number Publication Date
CN109740422A CN109740422A (en) 2019-05-10
CN109740422B true CN109740422B (en) 2023-05-05

Family

ID=66358027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811401131.5A Active CN109740422B (en) 2018-11-22 2018-11-22 Method and device for identifying automobile

Country Status (1)

Country Link
CN (1) CN109740422B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444867A (en) * 2020-03-31 2020-07-24 高新兴科技集团股份有限公司 Convolutional neural network-based vehicle post-beat brand identification method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446949A (en) * 2016-09-26 2017-02-22 成都通甲优博科技有限责任公司 Vehicle model identification method and apparatus
WO2018076575A1 (en) * 2016-10-24 2018-05-03 深圳市元征科技股份有限公司 Method and apparatus for recording illegal driving

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548145A (en) * 2016-10-31 2017-03-29 北京小米移动软件有限公司 Image-recognizing method and device
CN108805934B (en) * 2017-04-28 2021-12-28 华为技术有限公司 External parameter calibration method and device for vehicle-mounted camera

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446949A (en) * 2016-09-26 2017-02-22 成都通甲优博科技有限责任公司 Vehicle model identification method and apparatus
WO2018076575A1 (en) * 2016-10-24 2018-05-03 深圳市元征科技股份有限公司 Method and apparatus for recording illegal driving

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周必书. 基于深度学习特征表达的车辆检测和分析 (Vehicle detection and analysis based on deep-learning feature representation). 信息通信 (Information & Communications), 2018, (04), full text. *

Also Published As

Publication number Publication date
CN109740422A (en) 2019-05-10

Similar Documents

Publication Publication Date Title
CN108875522B (en) Face clustering method, device and system and storage medium
CN111950638B (en) Image classification method and device based on model distillation and electronic equipment
CN109471945B (en) Deep learning-based medical text classification method and device and storage medium
CN111160375B (en) Three-dimensional key point prediction and deep learning model training method, device and equipment
CN107958230B (en) Facial expression recognition method and device
CN111950596A (en) Training method for neural network and related equipment
CN111738403B (en) Neural network optimization method and related equipment
CN113408570A (en) Image category identification method and device based on model distillation, storage medium and terminal
CN110647938B (en) Image processing method and related device
CN111950570B (en) Target image extraction method, neural network training method and device
CN113869282A (en) Face recognition method, hyper-resolution model training method and related equipment
CN110909817B (en) Distributed clustering method and system, processor, electronic device and storage medium
CN113822207A (en) Hyperspectral remote sensing image identification method and device, electronic equipment and storage medium
CN111340213B (en) Neural network training method, electronic device, and storage medium
CN115601692A (en) Data processing method, training method and device of neural network model
CN109740422B (en) Method and device for identifying automobile
CN110717407A (en) Human face recognition method, device and storage medium based on lip language password
CN112699907B (en) Data fusion method, device and equipment
CN116631060A (en) Gesture recognition method and device based on single frame image
CN113139561B (en) Garbage classification method, garbage classification device, terminal equipment and storage medium
CN110135464B (en) Image processing method and device, electronic equipment and storage medium
CN115147434A (en) Image processing method, device, terminal equipment and computer readable storage medium
CN112036504A (en) Temperature measurement model training method, device, equipment and storage medium
CN112070022A (en) Face image recognition method and device, electronic equipment and computer readable medium
CN112906724A (en) Image processing device, method, medium and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant