CN114998619A - Target color classification method, device and computer-readable storage medium - Google Patents

Target color classification method, device and computer-readable storage medium Download PDF

Info

Publication number
CN114998619A
CN114998619A CN202210524015.2A CN202210524015A CN114998619A CN 114998619 A CN114998619 A CN 114998619A CN 202210524015 A CN202210524015 A CN 202210524015A CN 114998619 A CN114998619 A CN 114998619A
Authority
CN
China
Prior art keywords
color
color classification
classification result
target
confidence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210524015.2A
Other languages
Chinese (zh)
Inventor
舒梅
郝行猛
周子寒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202210524015.2A priority Critical patent/CN114998619A/en
Publication of CN114998619A publication Critical patent/CN114998619A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method and a device for classifying target colors and a computer-readable storage medium. The method comprises the following steps: performing feature extraction and color classification on the target image by using a first classification network to obtain at least one first color classification result of the target; determining whether to acquire at least one second color classification result of the target based on a first confidence of the at least one first color classification result; performing feature extraction and color classification on the target image by using a second classification network in response to the at least one second color classification result of the target, so as to obtain at least one second color classification result of the target, wherein the feature extraction mode of the second classification network is different from that of the first classification network; determining a color class to which the target belongs based on the at least one first color classification result and the at least one second color classification result. By the method, the accuracy of the color category to which the determined target belongs can be improved.

Description

Target color classification method, device and computer-readable storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to a method and an apparatus for classifying colors of an object, and a computer-readable storage medium.
Background
Color classification techniques are used to determine the color class to which an object belongs, and may be applied to a variety of scenarios. For example, in a vehicle management scenario, a color classification technique is used to obtain a color class to which a vehicle belongs, and the color class is used for managing the vehicle. And under the condition of handicraft quality detection, acquiring the color class to which the target belongs, and determining the quality of the target. And under a pedestrian tracking scene, acquiring the color category to which the pedestrian clothing belongs, and matching the pedestrians in different video frames.
However, the accuracy of the color class to which the target belongs is not high by the current target color classification method.
Disclosure of Invention
The application provides a method and a device for classifying target colors and a computer readable storage medium, which can solve the problem that the accuracy of the color class of a target obtained by the current method for classifying the target colors is not high.
In order to solve the technical problem, the application adopts a technical scheme that: a method of classifying a target color is provided. The method comprises the following steps: performing feature extraction and color classification on the target image by using a first classification network to obtain at least one first color classification result of the target; determining whether to acquire at least one second color classification result of the target based on a first confidence of the at least one first color classification result; performing feature extraction and color classification on the target image by using a second classification network in response to the at least one second color classification result of the target, so as to obtain at least one second color classification result of the target, wherein the feature extraction mode of the second classification network is different from that of the first classification network; determining a color class to which the target belongs based on the at least one first color classification result and the at least one second color classification result.
In order to solve the above technical problem, another technical solution adopted by the present application is: the target color classification device comprises a processor and a memory connected with the processor, wherein the memory stores program instructions; the processor is configured to execute the program instructions stored by the memory to implement the above-described method.
In order to solve the above technical problem, the present application adopts another technical solution: there is provided a computer readable storage medium storing program instructions that when executed are capable of implementing the above method.
Through the manner, after the first color classification result is obtained, whether the second color classification result is obtained or not is determined based on the first confidence coefficient of the first color classification result, the first confidence coefficient reflects whether the accuracy of the first color classification result is high enough and whether the first color classification result can be directly used for determining the color class to which the target belongs, and the determination of the second color classification result means that the accuracy of the first color classification result is not high enough and cannot be directly used for determining the color class to which the target belongs, so that the color class to which the target belongs is determined based on the first color classification result and the second color classification result. The first classification network and the second classification network have different feature extraction modes, which means that the features extracted by the first classification network and the second classification network have different expression conditions on the color information of the target, so that in the process of determining the color class of the target by combining the first color classification result and the second color classification result, the first color classification result and the second color classification result can play a complementary role, and the determined color class of the target is more accurate.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a method for classifying colors of an object of the present application;
FIG. 2 is a schematic flow chart diagram of another embodiment of a method for classifying colors targeted by the present application;
FIG. 3 is a schematic flow chart diagram illustrating a further embodiment of a method for color classification of an object of the present application;
FIG. 4 is a schematic diagram of an embodiment of the color classification system of the present application;
FIG. 5 is a schematic flow chart diagram illustrating an embodiment of a target color classification method according to the present application;
FIG. 6 is a schematic diagram of an embodiment of a color sorting apparatus according to the present application;
FIG. 7 is a schematic structural diagram of another embodiment of a color sorting apparatus of the present application;
FIG. 8 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application;
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and "third" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any indication of the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Fig. 1 is a schematic flow chart of an embodiment of a target color classification method of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 1 is not limited in this embodiment. As shown in fig. 1, the present embodiment may include:
s11: and performing feature extraction and color classification on the target image by using a first classification network to obtain at least one first color classification result of the target.
The target may be any subject corresponding to a variety of color categories, such as vehicles, apparel, backpacks, artwork, pedestrians, and so on. For ease of understanding, the present application will be described below mainly with reference to a vehicle as an example.
The first color classification result is associated with a first confidence coefficient, and the number of the first color classification results is equal to the number of color categories corresponding to the target. If the color class corresponding to the target is multiple (for example, between 20 and 30), the first color classification result is multiple and is associated with the first confidence level. Each first color classification result is a color class corresponding to the target, and the first confidence of each first color classification result expresses the possibility that the color class to which the target belongs is the first color classification result, and reflects the accuracy of the first color classification result.
S12: determining whether to obtain at least one second color classification result of the target based on the first confidence of the at least one first color classification result.
The condition for obtaining at least one second color classification result of the target may include: the maximum value of the first confidence of each first color classification result is greater than the confidence threshold. Accordingly, in S12, it may be determined whether the maximum value of the first confidence of each first color classification result is greater than the confidence threshold; in response to not being greater than the confidence threshold, obtaining at least one second color classification result for the target; and in response to the confidence coefficient threshold value being larger than the threshold value, not acquiring at least one second color classification result of the target, and determining a first color classification result corresponding to the maximum value of the first confidence coefficient as the color class to which the target belongs.
Alternatively, the condition for obtaining at least one second color classification result of the target may include: the difference between the maximum value of the first confidence of each first color classification result and other first confidence is larger than the difference threshold. Accordingly, in S12, it may be determined whether the maximum value of the first confidence of each first color classification result and the differences between other first confidence are all greater than the difference threshold; in response to the unevenness being greater than the difference threshold, obtaining at least one second color classification result for the target; and in response to the fact that the first confidence degrees are all larger than the difference threshold value, at least one second color classification result of the target is not obtained, and the first color classification result corresponding to the maximum value of the first confidence degrees is determined as the color class to which the target belongs.
If the step of obtaining at least one second color classification result of the target is determined to be executed based on the first confidence degree, which means that the accuracy of the first color classification result is not high enough and cannot be directly used for determining the color class to which the target belongs, executing S13-S14; if it is determined based on the first confidence level that the step of obtaining at least one second color classification result of the object is not performed, which means that the accuracy of the first color classification result is high enough to be directly used for determining the color class to which the object belongs, S15 is performed.
S13: and performing feature extraction and color classification on the target image by using a second classification network to obtain at least one second color classification result of the target.
Wherein the second classification network has a different feature extraction than the first classification network.
The feature extraction mode of the first classification network and the feature extraction mode of the second classification network are different in expression conditions (including expression capability, expression angle and the like) of color information of the target in the target image.
The difference between the feature extraction modes of the first classification network and the second classification network may be caused by the difference of the categories of the feature extraction layers, the difference of the number of the feature extraction layers of the same category, or the difference of the processing sequences of the feature extraction layers of different categories. In some embodiments, the first classification network is EfficientNet, the second classification network is a Transformer, and the Transformer may be based on a single-head attention mechanism or a multi-head attention mechanism.
Similar to the first color classification result, the second color classification result is associated with a second confidence, and the number of the second color classification results is equal to the number of color classes corresponding to the target. And if the color types corresponding to the targets are multiple, the second color classification results are multiple and are respectively associated with second confidence degrees. Each second color classification result is a color class corresponding to one target, and the second confidence of each second color classification result expresses the possibility that the color class to which the target belongs is the second color classification result.
S14: determining a color class to which the target belongs based on the at least one first color classification result and the at least one second color classification result.
In some embodiments, the first confidence of the first color classification result and the second confidence of the corresponding second color classification result may be fused to obtain a final confidence. The second color classification result and the corresponding second color classification result are in the same color class, and the fusion mode includes weighting, multiplication and the like. The color class to which the target belongs is the first color classification result/the second color classification result corresponding to the maximum value of the final confidence degree.
In some embodiments, a number of candidate color classification results may be selected from the first color classification results, a first confidence of the candidate color classification results being greater than first confidences of other first color classification results; and determining the candidate color classification result corresponding to the maximum value of the second confidence coefficient as the color category to which the target belongs. The number of candidate color classification results is an integer not less than 2, such as 2, 3, 4, and so on. The accuracy of the candidate color classification result is higher than the accuracy of the other first color classification results, in other words, the probability that the candidate color classification result includes the color class to which the target belongs is higher than the other first color classification results. For example, the number of candidate color classification results is 3, and the probability of including the color class to which the target belongs in the candidate color classification results is close to 100%.
S15: determining a color class to which the target belongs based on the at least one first color classification result.
The first color classification result corresponding to the maximum value of the first confidence may be used as the color class to which the target belongs.
It is understood that if the accuracy of the first color classification result is not concerned, but the color class to which the target belongs is determined directly based on the first color classification result, the accuracy of determining the color class to which the target belongs may not be high.
By implementing the embodiment, after the first color classification result is obtained, whether to obtain the second color classification result is determined based on the first confidence of the first color classification result, where the first confidence reflects whether the accuracy of the first color classification result is high enough and whether the first color classification result can be directly used for determining the color class to which the target belongs, and determining to obtain the second color classification result means that the accuracy of the first color classification result is not high enough and cannot be directly used for determining the color class to which the target belongs, so that the color class to which the target belongs is determined based on the first color classification result and the second color classification result. The first classification network and the second classification network have different feature extraction modes, which means that the features extracted by the first classification network and the second classification network have different expression conditions on the color information of the target, so that in the process of determining the color class of the target by combining the first color classification result and the second color classification result, the first color classification result and the second color classification result can play a complementary role, and the determined color class of the target is more accurate.
Fig. 2 is a schematic flow chart of another embodiment of the target color classification method of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 2 is not limited in this embodiment. This embodiment is a further extension of the above embodiments S12 to S15. As shown in fig. 2, the present embodiment may include:
s21: and judging whether the maximum value of the first confidence degrees of the first color classification results is greater than a confidence degree threshold value.
In response to not being greater than the confidence threshold, performing S22-S24; otherwise, S26 is executed.
S22: at least one second color classification result of the object is obtained.
S23: a number of candidate color classification results are selected from the first color classification results.
The first confidence of the candidate color classification result is greater than the first confidence of the other first color classification results.
S24: and determining the candidate color classification result corresponding to the maximum value of the second confidence coefficient as the color class to which the target belongs.
The candidate color classification result corresponding to the maximum value of the second confidence coefficient, that is, the second color classification result to which the maximum value of the second confidence coefficient belongs, and the corresponding candidate color classification result.
S25: and determining the first color classification result corresponding to the maximum value of the first confidence coefficient as the color class to which the target belongs.
For further details of this embodiment, please refer to the previous embodiment, which is not repeated herein.
Further, in the case that the first classification network and the second classification network have different feature extraction methods due to different types of feature extraction layers, the first classification network may perform feature extraction in a convolution manner, and the second classification network may perform feature extraction in a non-convolution manner. Or the first classification network adopts a non-convolution mode to extract the features, and the second classification network adopts a convolution mode to extract the features.
The convolution scheme is a scheme in which the first classification network includes a convolution feature extraction layer, and a convolution operation is performed using the convolution feature extraction layer in the feature extraction process. The non-convolution mode means that the second classification network does not include a convolution feature extraction layer, and convolution operation is not adopted in the feature extraction process.
It is understood that the convolution operation may cause the extracted features to express not only the color information of the object but also the edge contour information of the object, so that the features extracted by the convolution (hereinafter referred to as fine-grained features) may not be strong enough to express the color information of the object. The features (hereinafter, referred to as color features) extracted by the non-convolution method only express the color information of the object in the object image, and the expression capability of the color information of the object is strong enough. Fine-grained features are common to various target processing tasks, e.g., where the target is a vehicle, which may include vehicle tracking, license plate detection, vehicle color classification, and so forth. The color space where the color features are located may be RGB, YUV, etc., and in the RGB color space, the color features include R, G and B channels; in the YUV color space, the color features include Y, U and V three color channels.
With respect to color features, the fine-grained features are applied to color classification, so that interference of external environment factors on a color classification process can be resisted, but the edge contour information (appearance) can interfere with the color classification process (if an appearance similar target exists, interference can be caused on the color classification process). And applying the color features to color classification relative to fine-grained features, wherein the interference of edge contour information does not exist in the color classification process, but the interference of external environment factors on the color classification process is possible. The external environmental factors include, but are not limited to, lighting conditions and shading conditions. The external environment factors may cause the color information of the object in the object image to be inconsistent with the actual color information of the object. For example, the illumination intensity is too high, which may cause the target to reflect light, so that the color information of the target in the target image does not match the actual color information of the target; the target is occluded by other subjects, which may cause a situation of string color, so that the color information of the target does not coincide with the actual color information of the target. Thereby interfering with the color classification process.
Therefore, if the color feature is extracted by the first classification network and the fine-grained feature is extracted by the second classification network, the color class to which the target belongs can be determined together based on the second color classification result obtained based on the fine-grained feature and the first color classification result under the condition that the first color classification result obtained based on the color feature is not accurate enough, so that the accuracy of the color class to which the target belongs is improved. Or if the first classification network extracts the fine-grained features and the second classification network extracts the color features, the color class to which the target belongs can be determined together based on the second color classification result and the first color classification result obtained based on the color features under the condition that the first color classification result obtained based on the fine-grained features is not accurate enough, so that the accuracy of the color class to which the target belongs is improved. Therefore, the color classification of the target can be realized by combining the advantages of the color characteristic and the fine-grained characteristic, so that the color classification of the target can be accurately classified under the condition of external environment factor interference, the difficult problem of color classification of the target with similar appearance can be considered, and the accuracy of the color classification of the determined target is integrally improved.
In the case where the first classification network performs feature extraction in a convolution manner and the second classification network performs feature extraction in a non-convolution manner, S13 may be expanded as follows:
fig. 3 is a schematic flow chart of a further embodiment of the object color classification method of the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 3 is not limited in this embodiment. As shown in fig. 3, the present embodiment may include:
s31: the target image is divided into a number of image blocks.
S32: and acquiring the representative pixel value of each image block in each color channel, and sequencing the representative pixel values to form a feature sequence.
The representative pixel value of an image block in a color channel may be a mean value, a maximum value, a minimum value, etc. of the pixel values of the image block in the color channel.
S33: and performing color classification based on the characteristic sequence to obtain a second color classification result.
In S33, the feature sequences may be directly color-classified to obtain a second color classification result. Or, the feature sequence can be processed by using an attention mechanism to obtain a processed feature sequence; and classifying the processed characteristic sequences to obtain a second color classification result. The attention mechanism can be a single-head attention mechanism or a multi-head attention mechanism. The characteristic sequence is processed through an attention mechanism, and the expression capacity of the characteristic sequence on the color information of the target can be improved.
As an illustration of S31 to S33, suppose the second classification network is a Transformer with multi-head attention. The target image may be divided into an 8 × 8 grid of image blocks, and the R, G, and B color channels of each block averaged separately to obtain a 3 × 8 × 8 feature sequence. The 3 × 8 × 8 feature sequence is processed with the multi-head attention mechanism to obtain a processed feature sequence, which is then classified to obtain the second color classification result.
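The Transformer example above can be sketched very roughly as follows: the 3 × 8 × 8 feature sequence is treated as 64 tokens of dimension 3, passed through one multi-head self-attention layer, mean-pooled, and classified. This is a toy, untrained illustration with random placeholder weights, not the patent's network; the six color classes, the model dimension, and all names are our own assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_classify(seq, n_classes=6, d_model=16, n_heads=4, rng=None):
    """Toy sketch of S33: multi-head self-attention over the block
    feature sequence, then a linear classifier. Weights are random
    placeholders standing in for trained parameters."""
    rng = np.random.default_rng(0) if rng is None else rng
    tokens = seq.reshape(seq.shape[0], -1).T          # (64, 3): one token per block
    d_head = d_model // n_heads
    Wq, Wk, Wv = (rng.standard_normal((tokens.shape[1], d_model)) * 0.1
                  for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv   # (64, d_model) each
    heads = []
    for h in range(n_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        # scaled dot-product attention for one head
        attn = softmax(q[:, sl] @ k[:, sl].T / np.sqrt(d_head))
        heads.append(attn @ v[:, sl])
    out = np.concatenate(heads, axis=1)               # (64, d_model)
    logits = out.mean(axis=0) @ rng.standard_normal((d_model, n_classes))
    return softmax(logits)  # one "second confidence" per color class
```

A trained implementation would of course learn the projection and classifier weights; the sketch only shows the data flow from feature sequence to per-class confidences.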
For ease of understanding, an application scenario of the method provided in the present application is introduced as follows:
The target is a vehicle: in a vehicle tracking scene, a vehicle image is acquired; the vehicle is color-classified to determine its color; the vehicle is fuzzy-matched based on its color; and the vehicle is then precisely matched based on the fuzzy matching result. The color categories of the vehicle include red, green, blue, yellow, purple, black, and so on.
Further, a color classification system implementing the color classification method in this application scenario is described with reference to fig. 4. The color classification system may include an image capture device and a target color classification device. The image capture device is used to capture a vehicle image; the target color classification device acquires the captured vehicle image and performs color classification on it to obtain the color class of the vehicle. The image capture device may be a standalone camera or a terminal containing a camera; the color classification device may be any terminal with processing capability.
Further, the method for classifying the vehicle color in the application scenario is described in detail with reference to fig. 5:
The first classification network comprises a first feature extraction layer and a first color classification layer (a fully connected layer), and the second classification network comprises a second feature extraction layer and a second color classification layer.
First stage:
1) Perform feature extraction on the vehicle image with the first feature extraction layer to obtain fine-grained features, and perform color classification on the fine-grained features with the first color classification layer to obtain first color classification results.
2) Sort the first color classification results in descending order of first confidence to obtain {top1, top2, top3, …}.
3) Judge whether the first confidence of top1 is greater than a confidence threshold T; if not, take top1 to top3 as candidate color classification results and proceed to 4) to 7); otherwise, execute 8).
Second stage:
4) Divide the vehicle image into an 8 × 8 grid of image blocks.
5) Perform feature extraction on the vehicle image with the second feature extraction layer to obtain color features; that is, average the R, G, and B channels of each image block separately to obtain a 3 × 8 × 8 feature sequence. Then perform color classification on the color features with the second color classification layer to obtain second color classification results.
6) Sort top1 to top3 in descending order of the second confidences of their corresponding second color classification results, obtaining for example {top2, top1, top3}.
7) Take top2 as the color class to which the target belongs.
8) Take top1 as the color class to which the target belongs.
Test statistics show that, when the first confidence of the first color classification result is greater than the confidence threshold, the accuracy of the color class top1 determined from the first color classification result is close to 100%. As the first confidence decreases, the accuracy of the first color classification result decreases; when the first confidence is not greater than the confidence threshold, the second color classification result must be combined with the first color classification result to assist the judgment.
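The two-stage decision in steps 1) to 8) can be sketched as follows. The label set, the 0.9 threshold, the top-3 candidate count, and the function names are illustrative assumptions, not values taken from the patent.

```python
def classify_color(first_probs, second_scorer, labels, threshold=0.9, k=3):
    """Sketch of the confidence-gated two-stage decision.

    first_probs: dict mapping each color label to the first confidence
                 from the first (convolutional) classification network.
    second_scorer: callable taking a list of candidate labels and
                 returning a dict of second confidences for them.
    """
    # steps 1)-2): rank labels by first confidence, descending
    ranked = sorted(labels, key=lambda c: first_probs[c], reverse=True)
    top1 = ranked[0]
    # steps 3) and 8): if the first network is confident, trust it directly
    if first_probs[top1] > threshold:
        return top1
    # steps 4)-7): otherwise re-rank the top-k candidates by the
    # second network's confidences and take the best
    candidates = ranked[:k]
    second = second_scorer(candidates)
    return max(candidates, key=lambda c: second[c])
```

The key design point is that the (presumably cheaper or more error-prone) second stage runs only when the first stage's top confidence falls below the threshold, matching the observation in the test statistics above.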
Fig. 6 is a schematic structural diagram of an embodiment of a color sorting apparatus according to the present application. As shown in fig. 6, the object color classification apparatus includes a first result acquisition module 11, a judgment module 12, a second result acquisition module 13, and a determination module 14.
The first result obtaining module 11 may be configured to perform feature extraction and color classification on the target image by using a first classification network, so as to obtain a first color classification result of the target.
The judgment module 12 may be configured to determine whether to obtain at least one second color classification result of the target based on the first confidence of the at least one first color classification result.
The second result obtaining module 13 may be configured to, in response to obtaining at least one second color classification result of the target, perform feature extraction and color classification on the target image by using a second classification network to obtain a second color classification result of the target, where a feature extraction manner of the second classification network is different from that of the first classification network.
The determination module 14 may be configured to determine a color class to which the target belongs based on the at least one first color classification result and the at least one second color classification result.
Through this embodiment, after the first result obtaining module obtains the first color classification result, the judgment module determines whether to obtain the second color classification result based on the first confidence of the first color classification result. The first confidence reflects whether the accuracy of the first color classification result is high enough for it to be used directly to determine the color class of the target; deciding to obtain the second color classification result means it is not. In that case, the second result obtaining module obtains the second color classification result, and the determination module determines the color class of the target based on both the first and second color classification results. Because the first and second classification networks extract features in different ways, the features they extract express the color information of the target differently; in the joint determination, the first and second color classification results therefore complement each other, and the determined color class of the target is more accurate.
Fig. 7 is a schematic structural diagram of another embodiment of the color sorting apparatus of the present application. As shown in fig. 7, the object color classification apparatus includes a processor 21, and a memory 22 coupled to the processor 21.
Wherein the memory 22 stores program instructions for implementing the method of any of the above embodiments; processor 21 is operative to execute program instructions stored by memory 22 to implement the steps of the above-described method embodiments. The processor 21 may also be referred to as a CPU (Central Processing Unit). The processor 21 may be an integrated circuit chip having signal processing capabilities. The processor 21 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
FIG. 8 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application. As shown in fig. 8, the computer-readable storage medium 30 of the embodiment of the present application stores program instructions 31, and the program instructions 31, when executed, implement the method provided by the above-mentioned embodiments of the present application. The program instructions 31 may form a program file stored in the computer-readable storage medium 30 in the form of a software product, so as to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned computer-readable storage medium 30 includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, or terminal devices such as a computer, a server, a mobile phone, or a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. The above embodiments are merely examples and are not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings of the present disclosure or those directly or indirectly applied to other related technical fields are intended to be included in the scope of the present disclosure.

Claims (10)

1. A method of classifying a color of an object, comprising:
performing feature extraction and color classification on a target image by using a first classification network to obtain at least one first color classification result of the target;
determining whether to obtain at least one second color classification result of the target based on a first confidence of the at least one first color classification result;
in response to determining to obtain the at least one second color classification result of the target, performing feature extraction and color classification on the target image by using a second classification network to obtain the at least one second color classification result of the target, wherein a feature extraction manner of the second classification network is different from that of the first classification network;
determining a color class to which the object belongs based on the at least one first color classification result and the at least one second color classification result.
2. The method according to claim 1, wherein there are a plurality of first color classification results, each associated with a respective first confidence; and the determining whether to obtain at least one second color classification result of the target based on the first confidence of the at least one first color classification result comprises:
determining to obtain the at least one second color classification result of the target in response to the maximum of the first confidences of the first color classification results not being greater than a confidence threshold.
3. The method of claim 2, wherein determining whether to obtain at least one second color classification result for the target based on the first confidence level of the at least one first color classification result further comprises:
determining, in response to the maximum of the first confidences of the first color classification results being greater than the confidence threshold, the first color classification result corresponding to that maximum as the color class to which the target belongs.
4. The method according to claim 2, wherein there are a plurality of second color classification results, each associated with a respective second confidence; and the determining the color class to which the target belongs based on the at least one first color classification result and the at least one second color classification result comprises:
selecting a plurality of candidate color classification results from the first color classification results, wherein the first confidences of the candidate color classification results are greater than the first confidences of the other first color classification results; and
determining the candidate color classification result corresponding to the maximum of the second confidences as the color class to which the target belongs.
5. The method according to claim 4, wherein the number of candidate color classification results is an integer not less than 2.
6. The method of claim 1, wherein the first classification network performs feature extraction in a convolutional manner and the second classification network performs feature extraction in a non-convolutional manner.
7. The method of claim 6, wherein said performing feature extraction and color classification on the target image using a second classification network comprises:
dividing the target image into a plurality of image blocks;
obtaining the representative pixel value of each image block in each color channel, and arranging the representative pixel values into a feature sequence.
8. The method of claim 7, wherein performing feature extraction and color classification on the target image using a second classification network further comprises:
processing the feature sequence based on an attention mechanism to obtain a processed feature sequence; and
classifying the processed feature sequence to obtain the second color classification result.
9. An object color classification apparatus comprising a processor, a memory connected to the processor, wherein,
the memory stores program instructions;
the processor is configured to execute the program instructions stored by the memory to implement the method of any of claims 1-8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores program instructions executable by a processor, which when executed, implement the method of any one of claims 1-8.
CN202210524015.2A 2022-05-13 2022-05-13 Target color classification method, device and computer-readable storage medium Pending CN114998619A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210524015.2A CN114998619A (en) 2022-05-13 2022-05-13 Target color classification method, device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN114998619A true CN114998619A (en) 2022-09-02

Family

ID=83026504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210524015.2A Pending CN114998619A (en) 2022-05-13 2022-05-13 Target color classification method, device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN114998619A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116563770A (en) * 2023-07-10 2023-08-08 四川弘和数智集团有限公司 Method, device, equipment and medium for detecting vehicle color
CN116563770B (en) * 2023-07-10 2023-09-29 四川弘和数智集团有限公司 Method, device, equipment and medium for detecting vehicle color

Similar Documents

Publication Publication Date Title
CN107665324B (en) Image identification method and terminal
Vezhnevets et al. A survey on pixel-based skin color detection techniques
US7957597B2 (en) Foreground/background segmentation in digital images
CN111797653A (en) Image annotation method and device based on high-dimensional image
Ren et al. Fusion of intensity and inter-component chromatic difference for effective and robust colour edge detection
CN108171247B (en) Vehicle re-identification method and system
US8831357B2 (en) System and method for image and video search, indexing and object classification
CN101436252A (en) Method and system for recognizing vehicle body color in vehicle video image
KR20150039367A (en) Licence plate recognition system
CN107273838A (en) Traffic lights capture the processing method and processing device of picture
US8498496B2 (en) Method and apparatus for filtering red and/or golden eye artifacts
CN106815587A (en) Image processing method and device
CN104660905A (en) Shooting processing method and device
CN114998619A (en) Target color classification method, device and computer-readable storage medium
CN113868457A (en) Image processing method based on image gathering and related device
CN111898448B (en) Pedestrian attribute identification method and system based on deep learning
Mouats et al. Fusion of thermal and visible images for day/night moving objects detection
US9514545B2 (en) Object detection apparatus and storage medium
CN108961357B (en) Method and device for strengthening over-explosion image of traffic signal lamp
CN110492934B (en) Noise suppression method for visible light communication system
Cika et al. Vehicle license plate detection and recognition using symbol analysis
Ciobanu et al. A novel iris clustering approach using LAB color features
CN111415372A (en) Moving target merging method based on HSI color space and context information
Zhang et al. Background subtraction based on pixel clustering
Brown Color retrieval for video surveillance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination