CN110781904B - Vehicle color recognition method and device, computer equipment and readable storage medium - Google Patents

Vehicle color recognition method and device, computer equipment and readable storage medium

Info

Publication number
CN110781904B
CN110781904B (application CN201911003792.7A)
Authority
CN
China
Prior art keywords
training
channel
saturation
hue
lightness
Prior art date
Legal status
Active
Application number
CN201911003792.7A
Other languages
Chinese (zh)
Other versions
CN110781904A (en)
Inventor
陈媛媛
刘硕迪
周欣
潘薇
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University
Priority to CN201911003792.7A
Publication of CN110781904A
Application granted
Publication of CN110781904B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08 - Detecting or categorising vehicles

Abstract

The embodiment of the application provides a vehicle color identification method and device, a computer device, and a readable storage medium. The method comprises: obtaining an RGB image of a target vehicle and converting it into a target HSV image, wherein the target HSV image comprises a target hue channel, a target saturation channel and a target lightness channel; inputting the target hue channel, the target saturation channel and the target lightness channel into a pre-constructed convolutional neural network for color discrimination, and calculating a hue discrimination result, a saturation discrimination result and a lightness discrimination result; and calculating the color of the target vehicle according to the hue discrimination result, the saturation discrimination result and the lightness discrimination result, so that the color of the target vehicle can be obtained reliably.

Description

Vehicle color recognition method and device, computer equipment and readable storage medium
Technical Field
The present application relates to the field of image recognition, and in particular, to a vehicle color recognition method, apparatus, computer device, and readable storage medium.
Background
At present, vehicle color identification mostly relies on traditional machine learning methods such as the Support Vector Machine (SVM), which are chiefly suited to classifying vehicle colors when samples are limited. Because the number of vehicles is huge and growing rapidly, identification methods based on traditional machine learning are inefficient and error-prone in practical applications.
In view of the above, how to provide a more reliable vehicle color recognition scheme is a problem to be considered by those skilled in the art.
Disclosure of Invention
The embodiment of the application provides a vehicle color identification method and device, computer equipment and a readable storage medium.
The embodiment of the application can be realized as follows:
in a first aspect, an embodiment of the present application provides a vehicle color identification method, including:
the method comprises the steps of obtaining an RGB image of a target vehicle, and converting the RGB image of the target vehicle into a target HSV image, wherein the target HSV image comprises a target hue channel, a target saturation channel and a target lightness channel;
inputting the target hue channel, the target saturation channel and the target lightness channel into a pre-constructed convolutional neural network for color discrimination, and calculating to obtain a hue discrimination result, a saturation discrimination result and a lightness discrimination result;
and calculating the color of the target vehicle according to the hue discrimination result, the saturation discrimination result and the lightness discrimination result.
In an alternative embodiment, the convolutional neural network is constructed by:
acquiring an HSV training image of a target vehicle and an HSV test image of the target vehicle, wherein the HSV training image of the target vehicle comprises a training hue channel, a training saturation channel and a training lightness channel, and the HSV test image of the target vehicle comprises a test hue channel, a test saturation channel and a test lightness channel;
inputting a training hue channel, a training saturation channel and a training lightness channel included in the HSV training image of the target vehicle into a convolutional neural network to be trained for network training and parameter updating;
inputting the test hue channel, the test saturation channel and the test lightness channel included in the HSV test image of the target vehicle into the convolutional neural network to be trained after network training and parameter updating, and judging whether the accuracy of the output results of the convolutional neural network to be trained after network training and parameter updating meets a preset requirement; if so, taking the convolutional neural network to be trained after network training and parameter updating as the pre-constructed convolutional neural network; if not, adjusting the hyper-parameters of the convolutional neural network to be trained, and returning to the step of inputting the training hue channel, the training saturation channel and the training lightness channel included in the HSV training image of the target vehicle into the convolutional neural network to be trained for network training and parameter updating, until the accuracy of the output results of the convolutional neural network to be trained after network training and parameter updating meets the preset requirement.
In an optional embodiment, the convolutional neural network to be trained includes a first convolutional layer, a second convolutional layer, a first pooling layer, a second pooling layer, and a global connection layer, and the step of inputting the training hue channel, the training saturation channel, and the training brightness channel included in the HSV training image of the target vehicle into the convolutional neural network to be trained to perform network training includes:
inputting the training hue channel, the training saturation channel and the training lightness channel into the first convolutional layer, and calculating to obtain first training feature images corresponding to the training hue channel, the training saturation channel and the training lightness channel; inputting the first training feature images corresponding to the three channels into the first pooling layer, and calculating to obtain second training feature images corresponding to the three channels; inputting the second training feature images corresponding to the three channels into the second convolutional layer and then again into the first pooling layer, and calculating to obtain third training feature images corresponding to the three channels; inputting the third training feature images corresponding to the three channels into the second pooling layer, and calculating to obtain fourth training feature images corresponding to the three channels; and inputting the fourth training feature images corresponding to the three channels into the global connection layer, and outputting a training hue channel judgment result, a training saturation channel judgment result and a training lightness channel judgment result;
and judging whether the initial training of the convolutional neural network to be trained is finished or not according to the training hue channel judgment result, the training saturation channel judgment result and the training lightness channel judgment result.
In an optional embodiment, the determining whether the initial training of the convolutional neural network to be trained is completed includes:
calculating a loss function of the training hue channel according to the training hue channel judgment result, calculating a loss function of the training saturation channel according to the training saturation channel judgment result, and calculating a loss function of the training lightness channel according to the training lightness channel judgment result;
and judging whether the values of the loss function of the training hue channel, the loss function of the training saturation channel and the loss function of the training lightness channel reach their respective corresponding loss function thresholds; if so, judging that the initial training of the convolutional neural network to be trained is finished; if not, adjusting the data of the first convolutional layer and the second convolutional layer, and returning to the steps from inputting the training hue channel, the training saturation channel and the training lightness channel into the first convolutional layer through outputting the training hue channel judgment result, the training saturation channel judgment result and the training lightness channel judgment result, until the values of the loss function of the training hue channel, the loss function of the training saturation channel and the loss function of the training lightness channel reach their respective corresponding loss function thresholds.
In an alternative embodiment, the acquiring the RGB image of the target vehicle includes:
acquiring an image containing an RGB image of a target vehicle to be processed;
cutting the RGB image of the target vehicle to be processed out of the image containing it according to a YOLO network, to obtain the RGB image of the target vehicle to be processed;
and adjusting the RGB image of the target vehicle to be processed to a preset size to obtain the RGB image of the target vehicle.
In an optional embodiment, the preset size of the RGB image of the target vehicle is 240 × 240, the convolution kernel size of the first convolutional layer is 3 × 3 with a stride of 1 and a padding of 1, the convolution kernel size of the second convolutional layer is 5 × 5 with a stride of 1 and a padding of 2, the first pooling layer is a 2 × 2 max pooling layer, and the second pooling layer is a 5 × 5 average pooling layer.
In an alternative embodiment, the hyper-parameters of the convolutional neural network to be trained include a learning rate, a batch size, and an iteration number.
In a second aspect, an embodiment of the present application provides a vehicle color identification device, including:
an acquisition module, used for acquiring an RGB (red, green and blue) image of a target vehicle and converting the RGB image of the target vehicle into a target HSV (hue, saturation and value) image, wherein the target HSV image comprises a target hue channel, a target saturation channel and a target lightness channel;
a calculation module, used for inputting the target hue channel, the target saturation channel and the target lightness channel into a pre-constructed convolutional neural network for color discrimination, and calculating to obtain a hue discrimination result, a saturation discrimination result and a lightness discrimination result;
and a judging module, used for calculating the color of the target vehicle according to the hue discrimination result, the saturation discrimination result and the lightness discrimination result.
In a third aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a non-volatile memory storing computer instructions, and when the computer instructions are executed by the processor, the computer device executes the vehicle color identification method according to any one of the foregoing embodiments.
In a fourth aspect, the present application provides a readable storage medium, which includes a computer program, wherein when the computer program runs, it controls the computer device in which the readable storage medium is located to execute the vehicle color identification method according to any one of the foregoing embodiments.
The beneficial effects of the embodiment of the application include, for example:
By adopting the vehicle color identification method and device, computer equipment and readable storage medium described above, the RGB image of the target vehicle is skillfully converted into a target HSV image, and the target hue channel, target saturation channel and target lightness channel included in the target HSV image are input into a pre-constructed convolutional neural network for color identification, so that the color of the target vehicle can be reliably calculated according to the hue discrimination result, the saturation discrimination result and the lightness discrimination result output by the convolutional neural network.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart illustrating steps of a vehicle color recognition method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a convolutional neural network provided in an embodiment of the present application;
FIG. 3 is a block diagram schematically illustrating a structure of a vehicle color recognition device according to an embodiment of the present application;
fig. 4 is a block diagram schematically illustrating a structure of a computer device according to an embodiment of the present disclosure.
Reference numerals: 100 - computer device; 110 - vehicle color recognition device; 1101 - acquisition module; 1102 - calculation module; 1103 - decision module; 111 - memory; 112 - processor; 113 - communication unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Furthermore, the appearances of the terms "first," "second," and the like, if any, are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
It should be noted that the features of the embodiments of the present application may be combined with each other without conflict.
At present, vehicle color identification is applied more and more widely, for example to lock onto a violation vehicle (say, a black automobile that appeared in a certain area within a certain time period) or to let a violation system determine the offender (by binding together information such as license plate number, vehicle color and driver). However, as the number of vehicles increases rapidly, identification methods based on conventional machine learning cannot meet the requirements of practical applications. The embodiment of the present application therefore provides a vehicle color identification method; referring to fig. 1, the method includes steps S201 to S203.
Step S201, obtaining an RGB image of a target vehicle, and converting the RGB image of the target vehicle into a target HSV image, wherein the target HSV image comprises a target hue channel, a target saturation channel and a target lightness channel.
Step S202, inputting the target hue channel, the target saturation channel and the target lightness channel into a pre-constructed convolutional neural network for color discrimination, and calculating to obtain a hue discrimination result, a saturation discrimination result and a lightness discrimination result.
Step S203, calculating the color of the target vehicle according to the hue discrimination result, the saturation discrimination result and the lightness discrimination result.
The RGB image is based on the physical principle of superposition of the three primary colors, can display various colors, and is generally used in display systems; most of the images containing the target vehicle in this embodiment are likewise pictures captured by a monitoring device and shown on a display. The HSV image describes color in three respects: hue, saturation and lightness. Hue is typically used to distinguish a color at the macroscopic level; for example, white, yellow, cyan, green, magenta, red, blue and black are hues. Saturation refers to the purity of the color: in general, the brighter the color, the higher the saturation, and the darker the color, the lower the saturation. Lightness refers to how bright the color is: the higher the lightness, the brighter the color, and the lower the lightness, the darker the color. The HSV color space is not suitable for display systems but better fits the visual characteristics of the human eye (what color (H), how deep (S), how bright (V)). Comparing the two, the RGB color space can be displayed directly, but its disadvantage is that the components are strongly interdependent, and all 3 components must be considered for any color; in the HSV color space, by contrast, the H and S components alone carry the color information. RGB color is also sensitive to changes in light intensity, and color identification needs to overcome the influence of different illumination intensities, so the RGB image of the target vehicle is converted into an HSV image for subsequent processing.
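As an illustration, the conversion and channel separation described above can be done with OpenCV; this is a minimal sketch, and the library choice and the file name are assumptions, since the patent does not prescribe any particular implementation:

```python
import cv2

# OpenCV loads images in BGR order; "target_vehicle.jpg" is a placeholder path
bgr = cv2.imread("target_vehicle.jpg")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)   # convert to the HSV color space
h, s, v = cv2.split(hsv)                     # target hue, saturation and lightness channels
# h, s and v are single-channel images, one per input of the convolutional neural network
```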
The target hue channel, the target saturation channel and the target lightness channel are respectively input into the pre-constructed convolutional neural network for color discrimination, the hue discrimination result, the saturation discrimination result and the lightness discrimination result are calculated, and the three results are then combined by a voting method from machine learning to obtain the color of the target vehicle, as sketched below.
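A minimal sketch of such a voting step, under the assumption that each per-channel discrimination result is simply a color label (the function name is hypothetical; the patent only says "a voting method in machine learning" is used):

```python
from collections import Counter

def vote_color(hue_result: str, saturation_result: str, lightness_result: str) -> str:
    # Majority vote over the three per-channel color decisions; with three
    # voters, any color named at least twice wins.
    votes = Counter([hue_result, saturation_result, lightness_result])
    return votes.most_common(1)[0][0]

print(vote_color("red", "red", "black"))  # -> "red"
```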
On the basis of the foregoing, the present embodiment provides an example of constructing a convolutional neural network, which can be constructed by the following steps:
the method comprises the steps of obtaining an HSV training image of a target vehicle and an HSV test image of the target vehicle, wherein the HSV training image of the target vehicle comprises a training hue channel, a training saturation channel and a training brightness channel, and the HSV test image of the target vehicle comprises a testing hue channel, a testing saturation channel and a training brightness channel.
Inputting a training hue channel, a training saturation channel and a training lightness channel included in the HSV training image of the target vehicle into a convolutional neural network to be trained for network training and parameter updating.
The test hue channel, the test saturation channel and the test lightness channel included in the HSV test image of the target vehicle are input into the convolutional neural network to be trained after network training and parameter updating, and whether the accuracy of the output results of this network meets a preset requirement is judged. If so, the convolutional neural network to be trained after network training and parameter updating is taken as the pre-constructed convolutional neural network; if not, the hyper-parameters of the convolutional neural network to be trained are adjusted, and the step of inputting the training hue channel, the training saturation channel and the training lightness channel included in the HSV training image of the target vehicle into the convolutional neural network to be trained for network training and parameter updating is executed again, until the accuracy of the output results of the convolutional neural network to be trained after network training and parameter updating meets the preset requirement.
The hyper-parameters of the convolutional neural network to be trained may include the learning rate, the batch size and the number of iterations. The learning rate refers to the magnitude of each parameter update during training (iteration), and may also be called the step size. The batch size refers to the number of samples in one batch. In a convolutional neural network it is neither necessary for all training samples to pass through the network at the same time, nor for them to pass through one by one; instead, a suitable batch size is chosen, such as 2^6 = 64 samples. In each iteration, 64 samples are selected in turn and passed through the network until all samples have passed through the network, at which point the iteration (one round of training) is complete. The number of iterations is the number of training rounds of the convolutional neural network. Since each iteration updates the network parameters once and thereby influences the result, the required number of iterations can be determined as the point at which the accuracy of the output of the convolutional neural network no longer improves.
In this embodiment, an HSV training image of the target vehicle and an HSV test image of the target vehicle may be prepared separately to complete the training of the convolutional neural network to be trained. The training hue channel, training saturation channel and training lightness channel included in the HSV training image are input into the convolutional neural network for training. After network training and parameter updating of the convolutional neural network using the HSV training image, the HSV test image can be input into the network: whether the accuracy of the results output by inputting the test hue channel, test saturation channel and test lightness channel included in the HSV test image of the target vehicle into the trained and updated network meets the preset requirement is judged. When the preset requirement is met, the training of the convolutional neural network can be considered complete, and the network can accurately recognize the color of the vehicle. A schematic of this construct-train-test loop is sketched below.
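The following is a schematic of the loop, assuming PyTorch; the function name `construct_network`, the `make_net` factory argument, the 0.95 target accuracy, and the initial learning rate, epoch count and halving rule are all assumptions for the sketch, not values given by the patent:

```python
import torch

def construct_network(make_net, train_loader, test_loader, target_acc=0.95):
    """Train, test, and re-train with adjusted hyper-parameters until the
    accuracy of the output results meets the preset requirement."""
    lr, epochs = 1e-3, 30                      # initial hyper-parameter choices (assumptions)
    while True:
        net = make_net()                       # fresh convolutional neural network to be trained
        opt = torch.optim.SGD(net.parameters(), lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(epochs):                # network training and parameter updating
            for x, y in train_loader:
                opt.zero_grad()
                loss_fn(net(x), y).backward()
                opt.step()
        correct = total = 0
        with torch.no_grad():                  # accuracy on the HSV test images
            for x, y in test_loader:
                correct += (net(x).argmax(1) == y).sum().item()
                total += y.numel()
        if correct / total >= target_acc:
            return net                         # becomes the pre-constructed network
        lr *= 0.5                              # otherwise adjust hyper-parameters and retrain
```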
On this basis, the present embodiment provides an example of a convolutional neural network, please refer to fig. 2, where the convolutional neural network to be trained includes a first convolutional layer, a second convolutional layer, a first pooling layer, a second pooling layer, and a global connection layer, and based on this, the present embodiment provides an example of inputting a training hue channel, a training saturation channel, and a training lightness channel included in an HSV training image of the target vehicle into the convolutional neural network to be trained for network training, which may be implemented by the following steps:
inputting the training hue channel, the training saturation channel and the training lightness channel into the first convolutional layer, and calculating to obtain first training feature images corresponding to the training hue channel, the training saturation channel and the training lightness channel; inputting the first training feature images corresponding to the three channels into the first pooling layer, and calculating to obtain second training feature images corresponding to the three channels; inputting the second training feature images corresponding to the three channels into the second convolutional layer and then again into the first pooling layer, and calculating to obtain third training feature images corresponding to the three channels; inputting the third training feature images corresponding to the three channels into the second pooling layer, and calculating to obtain fourth training feature images corresponding to the three channels; and inputting the fourth training feature images corresponding to the three channels into the global connection layer, and outputting a training hue channel judgment result, a training saturation channel judgment result and a training lightness channel judgment result.
And judging whether the initial training of the convolutional neural network to be trained is finished or not according to the training hue channel judgment result, the training saturation channel judgment result and the training lightness channel judgment result.
It should be understood that, in this embodiment, after the training hue channel, the training saturation channel and the training lightness channel have been input into the convolutional neural network to be trained, the network already has the function of determining the color of the vehicle. Training is a process of fitting data, and the fitted data is the data in the training set; after the preliminary training is completed the network can be used directly, and the training of the convolutional neural network can then be fully completed using the HSV test image of the target vehicle.
Specifically, the preset size of the RGB image of the target vehicle may be 240 × 240; the convolution kernel (filter) size of the first convolutional layer may be 3 × 3 with a stride of 1 and a padding of 1; the convolution kernel size of the second convolutional layer may be 5 × 5 with a stride of 1 and a padding of 2; the first pooling layer may be a 2 × 2 max pooling layer; and the second pooling layer may be a 5 × 5 average pooling layer (avg pooling). After the training hue channel, training saturation channel and training lightness channel are input into the first convolutional layer, convolution yields first training feature images of size 240 × 240 for each of the three channels. These are input into the first pooling layer to obtain second training feature images of size 120 × 120, then into the second convolutional layer and again into the first pooling layer to obtain third training feature images of size 60 × 60, and then into the second pooling layer to obtain fourth training feature images of size 12 × 12. Finally, the 12 × 12 fourth training feature images are input into the global connection layer (which may be configured with eight color classes, e.g., white, yellow, cyan, green, magenta, red, blue and black), and the training hue channel judgment result, training saturation channel judgment result and training lightness channel judgment result (each one of the eight colors) are output. A sketch of this per-channel architecture is given below.
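The following PyTorch sketch instantiates one such per-channel network with exactly the quoted sizes; the framework, the filter count of 16 and the ReLU activations are assumptions, since the patent specifies only the kernel, stride, padding and pooling sizes:

```python
import torch
import torch.nn as nn

class ChannelColorNet(nn.Module):
    """One network per HSV channel; input is a single 240 x 240 channel."""
    def __init__(self, filters: int = 16):                 # 16 filters is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, filters, kernel_size=3, stride=1, padding=1),        # 240 -> 240
            nn.ReLU(),                                                        # activation assumed
            nn.MaxPool2d(2),                                # first pooling layer: 240 -> 120
            nn.Conv2d(filters, filters, kernel_size=5, stride=1, padding=2),  # 120 -> 120
            nn.ReLU(),
            nn.MaxPool2d(2),                                # first pooling layer again: 120 -> 60
            nn.AvgPool2d(5),                                # second pooling layer: 60 -> 12
        )
        self.classifier = nn.Linear(filters * 12 * 12, 8)   # global connection layer, 8 color classes

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (N, 1, 240, 240)
        return self.classifier(self.features(x).flatten(1))

print(ChannelColorNet()(torch.zeros(1, 1, 240, 240)).shape)  # torch.Size([1, 8])
```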
On the basis of the foregoing processing of the convolutional neural network to be trained, this embodiment provides an example of determining whether the initial training of the convolutional neural network to be trained is completed, which may be implemented by the following steps:
and calculating according to the training hue channel judgment result to obtain a loss function of a training hue channel, calculating according to the training saturation channel judgment result to obtain a loss function of a training saturation channel, and calculating according to the training lightness channel to obtain a loss function of a training lightness channel.
Whether the values of the loss function of the training hue channel, the loss function of the training saturation channel and the loss function of the training lightness channel reach their respective corresponding loss function thresholds is then judged. If so, the initial training of the convolutional neural network to be trained is judged to be finished; if not, the data of the first convolutional layer and the second convolutional layer are adjusted, and the steps from inputting the training hue channel, the training saturation channel and the training lightness channel into the first convolutional layer through outputting the training hue channel judgment result, the training saturation channel judgment result and the training lightness channel judgment result are executed again, until the values of the three loss functions reach their respective corresponding loss function thresholds.
The loss function of the training hue channel can be calculated from the training hue channel judgment result output by the convolutional neural network to be trained, the loss function of the training saturation channel from the training saturation channel judgment result, and the loss function of the training lightness channel from the training lightness channel judgment result. When all three values reach their respective preset loss function thresholds, the preliminary training can be considered complete, and the step of completing the training of the convolutional neural network using the HSV test images of the target vehicle can be performed. When the three values have not reached their corresponding preset loss function thresholds, the network parameters need to be adjusted: the data of the first convolutional layer and the second convolutional layer can be adjusted, specifically their convolution kernels. After the network parameters are adjusted, the preliminary training process is executed again until the loss functions of the training hue channel, the training saturation channel and the training lightness channel, each calculated from the corresponding judgment result, all reach their corresponding preset loss function thresholds. A sketch of this stopping test is given below.
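A minimal sketch of the per-channel stopping test; the numeric threshold values are assumptions, since the patent does not state them:

```python
LOSS_THRESHOLDS = {"hue": 0.05, "saturation": 0.05, "lightness": 0.05}  # assumed values

def preliminary_training_done(losses: dict) -> bool:
    # losses maps channel name -> current value of that channel's loss function;
    # preliminary training counts as finished only when every channel's loss
    # has fallen to its corresponding threshold.
    return all(losses[ch] <= LOSS_THRESHOLDS[ch] for ch in LOSS_THRESHOLDS)

print(preliminary_training_done({"hue": 0.04, "saturation": 0.06, "lightness": 0.03}))  # False
```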
In addition, the present embodiment provides an example of acquiring an RGB image of a target vehicle, which may be implemented by the following steps:
an image containing an RGB image of a target vehicle to be processed is acquired.
And cutting the RGB image of the target vehicle to be processed from the image containing the RGB image of the target vehicle to be processed according to a YOLO network.
And adjusting the RGB image of the target vehicle to be processed to a preset size to obtain the RGB image of the target vehicle.
In this embodiment, an image containing the RGB image of the target vehicle to be processed may be obtained first, and the RGB image of the target vehicle to be processed may be cropped out of it by the YOLO network, which can locate a specified target within an image. For example, to determine the color of the vehicle more accurately, the RGB image of the target vehicle in this embodiment may be an image of the hood of the vehicle or of a side door of the vehicle. After such images are obtained, a uniform standard may be applied so they can be input into the neural network: the images are set to a preset size, for example 240 × 240. A sketch of this acquisition step follows.
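A sketch of the acquisition step, assuming a hypothetical `detect_vehicle` callable that wraps whatever YOLO implementation is deployed and returns a bounding box; the patent names YOLO but not a specific version or API:

```python
import cv2

def acquire_target_rgb(frame, detect_vehicle):
    # detect_vehicle is a hypothetical callable wrapping a YOLO network; it
    # returns the (x, y, w, h) box of the target region (e.g., hood or side door)
    x, y, w, h = detect_vehicle(frame)
    crop = frame[y:y + h, x:x + w]               # cut the target vehicle out of the frame
    return cv2.resize(crop, (240, 240))          # adjust to the preset size
```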
Referring to fig. 3, the vehicle color recognition device 110 includes:
the acquiring module 1101 is configured to acquire an RGB image of a target vehicle, and convert the RGB image of the target vehicle into a target HSV image, where the target HSV image includes a target hue channel, a target saturation channel, and a target lightness channel.
The calculating module 1102 is configured to input the target hue channel, the target saturation channel and the target lightness channel into the pre-constructed convolutional neural network for color discrimination, and to calculate the hue discrimination result, the saturation discrimination result and the lightness discrimination result.
The determining module 1103 is configured to calculate the color of the target vehicle according to the hue discrimination result, the saturation discrimination result and the lightness discrimination result.
The implementation principle of the vehicle color recognition device 110 in this embodiment is consistent with that of the foregoing vehicle color recognition method, and is not described herein again.
The present embodiment provides a computer device 100, the computer device 100 includes a processor and a non-volatile memory storing computer instructions, when the computer instructions are executed by the processor, the computer device 100 executes the aforementioned vehicle color identification method. As shown in fig. 4, fig. 4 is a block diagram of a computer device 100 according to an embodiment of the present disclosure. The computer apparatus 100 includes a vehicle color recognition device 110, a memory 111, a processor 112, and a communication unit 113.
The memory 111, the processor 112 and the communication unit 113 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The vehicle color recognition device 110 includes at least one software function module that may be stored in the memory 111 in the form of software or firmware (firmware) or solidified in an Operating System (OS) of the computer apparatus 100. The processor 112 is used for executing executable modules stored in the memory 111, such as software functional modules and computer programs included in the vehicle color identification device 110.
The memory 111 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The present embodiment provides a readable storage medium comprising a computer program which, when executed, controls a computer device 100 in which the readable storage medium is located to perform the aforementioned vehicle color identification method.
In summary, the embodiments of the present application provide a vehicle color identification method and device, a computer device and a readable storage medium. The RGB image of the target vehicle is skillfully converted into a target HSV image; the target hue channel, the target saturation channel and the target lightness channel included in the target HSV image are input into a pre-constructed convolutional neural network for color identification; and the color of the target vehicle is reliably and accurately calculated according to the hue discrimination result, the saturation discrimination result and the lightness discrimination result output by the convolutional neural network.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A vehicle color recognition method, characterized by comprising:
the method comprises the steps of obtaining an RGB image of a target vehicle, and converting the RGB image of the target vehicle into a target HSV image, wherein the target HSV image comprises a target hue channel, a target saturation channel and a target lightness channel;
inputting the target hue channel, the target saturation channel and the target lightness channel into a pre-constructed convolutional neural network for color discrimination, and calculating to obtain a hue discrimination result, a saturation discrimination result and a lightness discrimination result;
calculating the color of the target vehicle according to the hue discrimination result, the saturation discrimination result and the lightness discrimination result;
the convolutional neural network is constructed by the following steps:
acquiring an HSV training image of a target vehicle and an HSV test image of the target vehicle, wherein the HSV training image of the target vehicle comprises a training hue channel, a training saturation channel and a training lightness channel, and the HSV test image of the target vehicle comprises a test hue channel, a test saturation channel and a test lightness channel;
inputting a training hue channel, a training saturation channel and a training lightness channel included in the HSV training image of the target vehicle into a convolutional neural network to be trained for network training and parameter updating;
inputting the test hue channel, the test saturation channel and the test lightness channel included in the HSV test image of the target vehicle into the convolutional neural network to be trained after network training and parameter updating, and judging whether the accuracy of the output results of the convolutional neural network to be trained after network training and parameter updating meets a preset requirement; if so, taking the convolutional neural network to be trained after network training and parameter updating as the pre-constructed convolutional neural network; if not, adjusting the hyper-parameters of the convolutional neural network to be trained, and returning to the step of inputting the training hue channel, the training saturation channel and the training lightness channel included in the HSV training image of the target vehicle into the convolutional neural network to be trained for network training and parameter updating, until the accuracy of the output results of the convolutional neural network to be trained after network training and parameter updating meets the preset requirement;
The convolutional neural network to be trained comprises a first convolutional layer, a second convolutional layer, a first pooling layer, a second pooling layer and a global connecting layer, and the step of inputting a training hue channel, a training saturation channel and a training lightness channel which are included in the HSV training image of the target vehicle into the convolutional neural network to be trained for network training comprises the following steps:
inputting the training hue channel, the training saturation channel and the training lightness channel into the first convolutional layer, and calculating to obtain first training feature images corresponding to the training hue channel, the training saturation channel and the training lightness channel; inputting the first training feature images corresponding to the three channels into the first pooling layer, and calculating to obtain second training feature images corresponding to the three channels; inputting the second training feature images corresponding to the three channels into the second convolutional layer and then again into the first pooling layer, and calculating to obtain third training feature images corresponding to the three channels; inputting the third training feature images corresponding to the three channels into the second pooling layer, and calculating to obtain fourth training feature images corresponding to the three channels; and inputting the fourth training feature images corresponding to the three channels into the global connection layer, and outputting a training hue channel judgment result, a training saturation channel judgment result and a training lightness channel judgment result;
and judging whether the initial training of the convolutional neural network to be trained is finished or not according to the training hue channel judgment result, the training saturation channel judgment result and the training lightness channel judgment result.
2. The method of claim 1, wherein the determining whether the initial training of the convolutional neural network to be trained is completed comprises:
calculating a loss function of the training hue channel according to the training hue channel judgment result, calculating a loss function of the training saturation channel according to the training saturation channel judgment result, and calculating a loss function of the training lightness channel according to the training lightness channel judgment result;
and judging whether the values of the loss function of the training hue channel, the loss function of the training saturation channel and the loss function of the training lightness channel reach their respective corresponding loss function thresholds; if so, judging that the initial training of the convolutional neural network to be trained is finished; if not, adjusting the data of the first convolutional layer and the second convolutional layer, and returning to the steps from inputting the training hue channel, the training saturation channel and the training lightness channel into the first convolutional layer through outputting the training hue channel judgment result, the training saturation channel judgment result and the training lightness channel judgment result, until the values of the loss function of the training hue channel, the loss function of the training saturation channel and the loss function of the training lightness channel reach their respective corresponding loss function thresholds.
3. The method of claim 1, wherein the acquiring the RGB image of the target vehicle comprises:
acquiring an image containing an RGB image of a target vehicle to be processed;
cutting the RGB image of the target vehicle to be processed out of the image containing it according to a YOLO network, to obtain the RGB image of the target vehicle to be processed;
and adjusting the RGB image of the target vehicle to be processed to a preset size to obtain the RGB image of the target vehicle.
4. The method of claim 3, wherein the preset size of the RGB image of the target vehicle is 240 × 240, the convolution kernel size of the first convolutional layer is 3 × 3 with a stride of 1 and a padding of 1, the convolution kernel size of the second convolutional layer is 5 × 5 with a stride of 1 and a padding of 2, the first pooling layer is a 2 × 2 max pooling layer, and the second pooling layer is a 5 × 5 average pooling layer.
5. The method of claim 1, wherein the hyper-parameters of the convolutional neural network to be trained comprise a learning rate, a batch size, and a number of iterations.
6. A vehicle color recognition device, comprising:
an acquisition module, used for acquiring an RGB image of a target vehicle and converting the RGB image of the target vehicle into a target HSV image, wherein the target HSV image comprises a target hue channel, a target saturation channel and a target lightness channel;
a calculation module, used for inputting the target hue channel, the target saturation channel and the target lightness channel into a pre-constructed convolutional neural network for color discrimination, and calculating to obtain a hue discrimination result, a saturation discrimination result and a lightness discrimination result;
a judging module, used for calculating the color of the target vehicle according to the hue discrimination result, the saturation discrimination result and the lightness discrimination result;
the convolutional neural network is constructed by the following method:
acquiring an HSV training image of a target vehicle and an HSV test image of the target vehicle, wherein the HSV training image of the target vehicle comprises a training hue channel, a training saturation channel and a training lightness channel, and the HSV test image of the target vehicle comprises a test hue channel, a test saturation channel and a test lightness channel;
inputting a training hue channel, a training saturation channel and a training lightness channel included in the HSV training image of the target vehicle into a convolutional neural network to be trained for network training and parameter updating;
inputting the test hue channel, the test saturation channel and the test lightness channel included in the HSV test image of the target vehicle into the convolutional neural network to be trained after network training and parameter updating, and judging whether the accuracy of the output results of the convolutional neural network to be trained after network training and parameter updating meets a preset requirement; if so, taking the convolutional neural network to be trained after network training and parameter updating as the pre-constructed convolutional neural network; if not, adjusting the hyper-parameters of the convolutional neural network to be trained, and returning to the step of inputting the training hue channel, the training saturation channel and the training lightness channel included in the HSV training image of the target vehicle into the convolutional neural network to be trained for network training and parameter updating, until the accuracy of the output results of the convolutional neural network to be trained after network training and parameter updating meets the preset requirement;
The convolutional neural network to be trained comprises a first convolutional layer, a second convolutional layer, a first pooling layer, a second pooling layer and a global connecting layer;
inputting the training hue channel, the training saturation channel and the training lightness channel into the first convolutional layer, and calculating to obtain first training feature images corresponding to the training hue channel, the training saturation channel and the training lightness channel; inputting the first training feature images corresponding to the three channels into the first pooling layer, and calculating to obtain second training feature images corresponding to the three channels; inputting the second training feature images corresponding to the three channels into the second convolutional layer and then again into the first pooling layer, and calculating to obtain third training feature images corresponding to the three channels; inputting the third training feature images corresponding to the three channels into the second pooling layer, and calculating to obtain fourth training feature images corresponding to the three channels; and inputting the fourth training feature images corresponding to the three channels into the global connection layer, and outputting a training hue channel judgment result, a training saturation channel judgment result and a training lightness channel judgment result;
and judging whether the initial training of the convolutional neural network to be trained is finished or not according to the training hue channel judgment result, the training saturation channel judgment result and the training lightness channel judgment result.
7. A computer device comprising a processor and a non-volatile memory having computer instructions stored thereon, wherein when the computer instructions are executed by the processor, the computer device performs the vehicle color identification method of any one of claims 1-5.
8. A readable storage medium, characterized in that the readable storage medium comprises a computer program which, when executed, controls a computer device in which the readable storage medium is located to perform the vehicle color identification method according to any one of claims 1 to 5.
CN201911003792.7A 2019-10-22 2019-10-22 Vehicle color recognition method and device, computer equipment and readable storage medium Active CN110781904B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911003792.7A CN110781904B (en) 2019-10-22 2019-10-22 Vehicle color recognition method and device, computer equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN110781904A CN110781904A (en) 2020-02-11
CN110781904B true CN110781904B (en) 2022-08-02

Family

ID=69384422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911003792.7A Active CN110781904B (en) 2019-10-22 2019-10-22 Vehicle color recognition method and device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN110781904B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106384117A (en) * 2016-09-14 2017-02-08 东软集团股份有限公司 Vehicle color recognition method and device
CN106651966A (en) * 2016-09-26 2017-05-10 广东安居宝数码科技股份有限公司 Picture color identification method and system
CN110298893A (en) * 2018-05-14 2019-10-01 桂林远望智能通信科技有限公司 A kind of pedestrian wears the generation method and device of color identification model clothes

Also Published As

Publication number Publication date
CN110781904A (en) 2020-02-11

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant