CN115984680A - Identification method and device for cupping mark colors, storage medium and equipment - Google Patents

Identification method and device for cupping mark colors, storage medium and equipment

Info

Publication number
CN115984680A
CN115984680A
Authority
CN
China
Prior art keywords
result
color
cupping mark
color result
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310114824.0A
Other languages
Chinese (zh)
Inventor
张智
滕慧慧
曹晨思
程京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CapitalBio Corp
Original Assignee
CapitalBio Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CapitalBio Corp
Priority to CN202310114824.0A
Publication of CN115984680A
Legal status: Pending

Links

Images

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a method, an apparatus, a storage medium, and a device for identifying cupping mark colors. A cupping mark image to be identified is acquired from an image database; the cupping mark image to be identified is input into a classification model to obtain a classification result output by the classification model; and, for each cupping mark color result, when the probability value of the cupping mark color result is greater than a preset threshold, the cupping mark color result is determined to be a target cupping mark color result. Compared with the prior art, the method does not depend on manual experience: provided a large number of cupping mark images and their corresponding color labels are available, the correspondence between cupping mark images and color labels can be learned, thereby ensuring the accuracy of the diagnosis result.

Description

Identification method and device for cupping mark colors, storage medium and equipment
Technical Field
The present application relates to the field of color recognition, and in particular to a method, an apparatus, a storage medium, and a device for recognizing cupping mark colors.
Background
In traditional Chinese medicine, cupping diagnosis assesses the functional state of the viscera and the overall health of the human body by observing the color and morphological characteristics of the cupping marks left in different areas of the back after cupping, and diseases are diagnosed accordingly.
At present, traditional Chinese medicine cupping diagnosis relies mainly on manual judgment of the color and shape characteristics of the cupping marks. Although there is some consensus on how these characteristics should be judged, the criteria for dividing and grading cupping mark colors and shapes remain only vaguely defined, and no accurate, standardized methods or criteria have been established. As a result, different doctors may reach different diagnostic conclusions for the same cupping mark, and the accuracy of the diagnosis cannot be guaranteed.
Therefore, how to ensure the accuracy of the diagnosis result has become a problem to be solved urgently in the field.
Disclosure of Invention
The present application provides a method, an apparatus, a storage medium, and a device for identifying cupping mark colors, aiming to ensure the accuracy of the diagnosis result.
In order to achieve the above object, the present application provides the following technical solutions:
a method for identifying cupping mark colors, comprising:
acquiring a cupping mark image to be identified from an image database;
inputting the cupping mark image to be identified into a classification model to obtain a classification result output by the classification model; the classification result comprises at least one or more cupping mark color results and the probability value of each cupping mark color result; the classification model is obtained by pre-training based on sample cupping mark images;
and for each cupping mark color result, when the probability value of the cupping mark color result is greater than a preset threshold, determining that the cupping mark color result is a target cupping mark color result.
Optionally, for each cupping mark color result, determining that the cupping mark color result is a target cupping mark color result when its probability value is greater than a preset threshold comprises:
sorting the cupping mark color results in descending order of probability value to obtain a cupping mark color result sequence; the cupping mark color result sequence comprises at least the sequence number of each cupping mark color result;
for each cupping mark color result, judging in ascending order of sequence number whether the probability value of the cupping mark color result is greater than the preset threshold;
and if the probability value of the cupping mark color result is greater than the preset threshold, determining that the cupping mark color result is a target cupping mark color result.
Optionally, the process of obtaining the classification model by pre-training based on sample cupping mark images comprises:
obtaining in advance each sample cupping mark image and the entity label corresponding to the sample cupping mark image;
generating a test set and a training set based on the sample cupping mark images;
determining an initial model according to the number of entity label types;
inputting each sample cupping mark image in the training set into the initial model, and encoding each sample cupping mark image with the encoder of the initial model to obtain a high-dimensional feature vector for each sample cupping mark image;
for each high-dimensional feature vector, inputting the high-dimensional feature vector into the classifier of the initial model to obtain the classification result of the sample cupping mark image output by the classifier; the classification result comprises at least an entity cupping mark color result;
calculating a prediction loss from the entity cupping mark color result and the entity label, and adjusting the parameters of the initial model using a preset algorithm;
when the prediction loss has been calculated for every sample cupping mark image in the training set, calculating the prediction loss and accuracy over the sample cupping mark images in the test set;
when the prediction loss and the accuracy meet a preset condition, identifying the model with the highest accuracy as the initial classification model; the preset condition is that the prediction loss no longer decreases, or the accuracy no longer increases, or the number of training rounds reaches a preset number;
and adjusting the initial classification model according to the hyper-parameters of model training to obtain the classification model.
Optionally, the method further comprises:
for each cupping mark color result, determining that the cupping mark color result is not the target cupping mark color result when its probability value is not greater than the preset threshold.
An apparatus for identifying cupping mark colors, comprising:
an acquisition unit, configured to acquire the cupping mark image to be identified from the image database;
an output unit, configured to input the cupping mark image to be identified into a classification model to obtain a classification result output by the classification model; the classification result comprises at least one or more cupping mark color results and the probability value of each cupping mark color result; the classification model is obtained by pre-training based on sample cupping mark images;
and a determining unit, configured to determine, for each cupping mark color result, that the cupping mark color result is a target cupping mark color result when its probability value is greater than a preset threshold.
Optionally, the determining unit is specifically configured to:
sort the cupping mark color results in descending order of probability value to obtain a cupping mark color result sequence; the cupping mark color result sequence comprises at least the sequence number of each cupping mark color result;
for each cupping mark color result, judge in ascending order of sequence number whether the probability value of the cupping mark color result is greater than the preset threshold;
and if the probability value of the cupping mark color result is greater than the preset threshold, determine that the cupping mark color result is a target cupping mark color result.
Optionally, the output unit is specifically configured to:
obtain in advance each sample cupping mark image and the entity label corresponding to the sample cupping mark image;
generate a test set and a training set based on the sample cupping mark images;
determine an initial model according to the number of entity label types;
input each sample cupping mark image in the training set into the initial model, and encode each sample cupping mark image with the encoder of the initial model to obtain a high-dimensional feature vector for each sample cupping mark image;
for each high-dimensional feature vector, input the high-dimensional feature vector into the classifier of the initial model to obtain the classification result of the sample cupping mark image output by the classifier; the classification result comprises at least an entity cupping mark color result;
calculate a prediction loss from the entity cupping mark color result and the entity label, and adjust the parameters of the initial model using a preset algorithm;
when the prediction loss has been calculated for every sample cupping mark image in the training set, calculate the prediction loss and accuracy over the sample cupping mark images in the test set;
when the prediction loss and the accuracy meet a preset condition, identify the model with the highest accuracy as the initial classification model; the preset condition is that the prediction loss no longer decreases, or the accuracy no longer increases, or the number of training rounds reaches a preset number;
and adjust the initial classification model according to the hyper-parameters of model training to obtain the classification model.
Optionally, the determining unit is further configured to:
for each cupping mark color result, determine that the cupping mark color result is not the target cupping mark color result when its probability value is not greater than the preset threshold.
A computer-readable storage medium comprising a stored program, wherein the program, when executed by a processor, performs the above method for identifying cupping mark colors.
A device for identifying cupping mark colors, comprising: a processor, a memory, and a bus; the processor and the memory are connected through the bus;
the memory is configured to store a program, and the processor is configured to run the program, wherein the program, when executed by the processor, performs the above method for identifying cupping mark colors.
According to the technical scheme, a cupping mark image to be identified is acquired from an image database and input into a classification model to obtain a classification result output by the classification model; for each cupping mark color result, when the probability value of the cupping mark color result is greater than the preset threshold, the cupping mark color result is determined to be the target cupping mark color result.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for identifying cupping mark colors according to an embodiment of the present application;
Fig. 2 is a flowchart of a training method for a classification model according to an embodiment of the present application;
Fig. 3 is a flowchart of another method for identifying cupping mark colors according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an apparatus for identifying cupping mark colors according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
As shown in Fig. 1, the flowchart of a method for identifying cupping mark colors provided in an embodiment of the present application includes:
S101: acquire the cupping mark image to be identified from the image database.
S102: input the cupping mark image to be identified into the classification model to obtain the classification result output by the classification model.
The classification result comprises at least one or more cupping mark color results and the probability value of each cupping mark color result; a cupping mark color result indicates a color present in the cupping mark image.
Optionally, the classification model may be an image classification neural network, including but not limited to VGG, GoogLeNet, ResNet, and Inception.
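For illustration only, the sketch below shows how the probability values attached to the color results can be obtained from a network's raw scores via a softmax. The color names and score values are hypothetical; in practice the raw scores would come from one of the networks listed above.

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw network scores for three colors.
colors = ["red", "cyan", "violet"]
result = dict(zip(colors, softmax([2.0, 1.0, 0.5])))
```

Each color result then carries a probability value in (0, 1), and the values sum to 1, which is what the sorting and thresholding of S103 and S104 below rely on.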
It should be noted that the classification model is trained in advance by taking the sample cupping mark images as input and the entity labels manually annotated on the sample cupping mark images as the training target; for the training process of the classification model, refer to the steps shown in Fig. 2 and their explanation.
S103: sort the cupping mark color results in descending order of probability value to obtain a cupping mark color result sequence.
The cupping mark color result sequence comprises at least the sequence number of each cupping mark color result.
Specifically, suppose there are three cupping mark color results, namely a first, a second, and a third cupping mark color result, with probability values of 0.04, 0.50, and 0.46 respectively. Sorting the three results in descending order of probability value assigns sequence number 3 to the first result, sequence number 1 to the second result, and sequence number 2 to the third result.
It should be noted that, since cupping mark colors are divided into primary and secondary, the results with higher probability values are ranked first as the primary colors for further analysis.
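The sorting of S103 can be sketched as follows; this is a minimal illustration, not the claimed implementation, and the dictionary representation of a color result is an assumption:

```python
def rank_color_results(results):
    """Sort color results by descending probability value and
    attach a 1-based sequence number to each (S103)."""
    ordered = sorted(results, key=lambda r: r["prob"], reverse=True)
    for seq, r in enumerate(ordered, start=1):
        r["seq"] = seq
    return ordered

# The example from the text: probability values 0.04, 0.50, 0.46.
ranked = rank_color_results([
    {"name": "first", "prob": 0.04},
    {"name": "second", "prob": 0.50},
    {"name": "third", "prob": 0.46},
])
```

This reproduces the sequence numbers 3, 1, and 2 assigned in the example above.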
S104: and for each pot color result, sequentially judging whether the probability value of the pot color result is greater than a preset threshold value according to the sequence of the sequence numbers from small to large.
And if the probability value of the pot color result is greater than the preset threshold value, executing S105, otherwise executing S106.
Optionally, the preset threshold is measured and calculated when the optimal model is found through test training in advance.
Specifically, it is assumed that there are three can color results (i.e., a first can color result, a second can color result, and a third can color result), where the number of the first can color result is 3, the probability value is 0.04, the number of the second can color result is 1, the probability value is 0.50, the number of the third can color result is 2, the probability value is 0.46, and the preset threshold value is 0.45, and for each can color result, it is sequentially determined whether the probability value of the can color result is greater than the preset threshold value according to the sequence of the numbers from small to large, obviously, the probability value of the second can color result and the probability value of the third can color result are greater than the preset threshold value, so S105 continues to be executed, the probability value of the first can color result is not greater than the preset threshold value, and S106 continues to be executed.
Optionally, the determining method may be modified to determine whether a difference between the probability value of the color result and the highest probability value is smaller than a preset threshold, and determine the color result smaller than the preset threshold as the final color result.
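Both selection rules described above can be sketched in a few lines. The threshold of 0.45 comes from the example in this section; the gap value of 0.1 is an assumed illustrative constant, not prescribed by the text:

```python
def select_target_colors(ranked, threshold=0.45):
    # S104/S105: keep every color result whose probability value
    # exceeds the preset threshold, in sequence-number order.
    return [r for r in ranked if r["prob"] > threshold]

def select_by_gap(ranked, gap=0.1):
    # Alternative rule: keep results whose distance from the highest
    # probability value is smaller than a preset threshold.
    top = max(r["prob"] for r in ranked)
    return [r for r in ranked if top - r["prob"] < gap]

# The example data, already in sequence-number order.
ranked = [
    {"name": "second", "prob": 0.50},
    {"name": "third", "prob": 0.46},
    {"name": "first", "prob": 0.04},
]
targets = select_target_colors(ranked)
```

On this data both rules select the second and third results and reject the first, matching the walkthrough above.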
S105: and determining the tank color result as a target tank color result.
Wherein the target can color result indicates a final can color result.
It should be noted that when the probability value of the can color result is greater than the preset threshold, the can color result is determined as the target can color result.
S106: determining that the can color result is not the target can color result.
It should be noted that when the probability value of the pot color result is not greater than the preset threshold, it is determined that the pot color result is not the target pot color result.
In summary, for each can color result, whether the probability value of the can color result is greater than a preset threshold is sequentially judged according to the sequence of the sequence numbers from small to large, and if the probability value of the can color result is greater than the preset threshold, the can color result is determined to be the target can color result.
As shown in Fig. 2, the flowchart of a training method for the classification model provided in an embodiment of the present application includes:
S201: obtain in advance each sample cupping mark image and the entity label corresponding to the sample cupping mark image.
S202: generate a test set and a training set based on the sample cupping mark images.
S203: and determining an initial model according to the type number of the entity tags.
It should be noted that, the entity labels (i.e., can colors) are designed into a corresponding number of classification models according to their kinds, for example, if the target can color is 5 colors, the classification model is designed into a five-classification model. For example, when the target can color is 3 colors of red, cyan, and violet, the final can color can be obtained by determining the three binary models (red binary model, cyan binary model, and violet binary model).
Alternatively, the initial model may be determined according to actual conditions, and is not limited specifically herein.
It is emphasized that the type of initial model employed in this embodiment may be an image classification neural network model, including but not limited to: VGG, ***Net, resNet, inclusion.
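As a small illustration of S203, the number of outputs of the classifier head can be derived from the number of distinct entity label types; the color names below are hypothetical:

```python
def classifier_head_size(entity_labels):
    """One output per distinct entity label (color), so 5 target
    colors yield a five-way classification model."""
    return len(set(entity_labels))

# Hypothetical entity labels for a three-color task.
labels = ["red", "cyan", "violet", "red", "cyan"]
n_outputs = classifier_head_size(labels)

# The alternative scheme from the text: one binary model per color.
binary_models = {color: "binary" for color in sorted(set(labels))}
```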
S204: and inputting each sample canned image in the training set into the initial model, and coding each sample canned image through a coder of the initial model to obtain a high-dimensional feature vector of each sample canned image.
S205: and for each high-dimensional feature vector, inputting the high-dimensional feature vector into a classifier of the initial model to obtain a classification result of the sample canned image output by the classifier of the initial model.
Wherein the classification result at least comprises a solid tank color result.
S206: and calculating to obtain the predicted loss according to the entity tank color result and the entity label, and adjusting various parameters of the initial model by using a preset algorithm.
The preset algorithm includes, but is not limited to, a back propagation algorithm which is a deep neural network.
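The prediction loss of S206 and a single parameter update can be sketched as follows. Plain SGD on a flat parameter list stands in for back-propagation through the full network; this is an illustrative simplification, and all numbers are hypothetical:

```python
import math

def cross_entropy(probs, target_idx):
    # Prediction loss between the predicted color distribution and
    # the entity label (a class index).
    return -math.log(probs[target_idx])

def sgd_step(params, grads, lr=0.1):
    # One update of the preset algorithm; plain SGD stands in for
    # back-propagation through every layer of the network.
    return [p - lr * g for p, g in zip(params, grads)]

loss = cross_entropy([0.5, 0.3, 0.2], target_idx=0)
new_params = sgd_step([1.0, 2.0], [0.5, -0.5])
```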
S207: and when each sample canned image in the training set completes the calculation of the predicted loss, calculating the predicted loss and the accuracy of each sample canned image in the testing set.
S208: and when the prediction loss and the accuracy meet preset conditions, identifying the model with the highest accuracy as an initial classification model.
Wherein the preset conditions are as follows: the prediction loss is not reduced any more, the accuracy is not increased any more, and the training times reach the preset times.
S209: and adjusting the initial classification model according to the hyper-parameters of the model training to obtain the classification model.
And adjusting the initial classification model according to the hyper-parameter of the model training, namely adjusting the hyper-parameter to perform multiple times of training and performance comparison, and screening out the optimal model to obtain the classification model.
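The stopping and model-selection logic of S207 to S209 can be sketched as below, assuming a per-epoch history of (loss, accuracy) pairs measured on the test set; the concrete numbers are hypothetical:

```python
def best_model_epoch(history, max_epochs=100):
    """Scan per-epoch (prediction loss, accuracy) pairs; stop when
    the loss no longer decreases, the accuracy no longer increases,
    or the preset number of epochs is reached, and return the epoch
    of the most accurate model seen (the initial classification
    model of S208)."""
    best_epoch, best_acc = 0, -1.0
    prev_loss, prev_acc = float("inf"), -1.0
    for epoch, (loss, acc) in enumerate(history):
        if acc > best_acc:
            best_epoch, best_acc = epoch, acc
        if loss >= prev_loss or acc <= prev_acc or epoch + 1 >= max_epochs:
            break
        prev_loss, prev_acc = loss, acc
    return best_epoch

# Hypothetical history: training stalls after the third epoch.
chosen = best_model_epoch([(1.0, 0.60), (0.8, 0.70), (0.7, 0.75), (0.7, 0.75)])
```

Hyper-parameter tuning (S209) would then repeat this procedure under different hyper-parameter settings and keep the best-performing model.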
In summary, with the scheme shown in this embodiment, the classification model can be obtained through effective training.
As shown in Fig. 3, the flowchart of another method for identifying cupping mark colors provided in an embodiment of the present application includes:
S301: acquire the cupping mark image to be identified from the image database.
S302: input the cupping mark image to be identified into the classification model to obtain the classification result output by the classification model.
The classification result comprises at least one or more cupping mark color results and the probability value of each cupping mark color result; the classification model is obtained by pre-training based on the sample cupping mark images.
S303: for each cupping mark color result, when the probability value of the cupping mark color result is greater than the preset threshold, determine that the cupping mark color result is a target cupping mark color result.
In summary, for each cupping mark color result, whether its probability value is greater than the preset threshold is judged in ascending order of sequence number, and if so, the cupping mark color result is determined to be the target cupping mark color result.
As shown in Fig. 4, the schematic structural diagram of an apparatus for identifying cupping mark colors provided in an embodiment of the present application includes:
an acquisition unit 100, configured to acquire the cupping mark image to be identified from the image database;
an output unit 200, configured to input the cupping mark image to be identified into the classification model to obtain the classification result output by the classification model; the classification result comprises at least one or more cupping mark color results and the probability value of each cupping mark color result; the classification model is obtained by pre-training based on the sample cupping mark images.
The output unit 200 is specifically configured to: obtain in advance each sample cupping mark image and the entity label corresponding to the sample cupping mark image; generate a test set and a training set based on the sample cupping mark images; determine an initial model according to the number of entity label types; input each sample cupping mark image in the training set into the initial model, and encode each sample cupping mark image with the encoder of the initial model to obtain a high-dimensional feature vector for each sample cupping mark image; for each high-dimensional feature vector, input the high-dimensional feature vector into the classifier of the initial model to obtain the classification result of the sample cupping mark image output by the classifier, the classification result comprising at least an entity cupping mark color result; calculate the prediction loss from the entity cupping mark color result and the entity label, and adjust the parameters of the initial model using a preset algorithm; when the prediction loss has been calculated for every sample cupping mark image in the training set, calculate the prediction loss and accuracy over the sample cupping mark images in the test set; when the prediction loss and the accuracy meet the preset condition, identify the model with the highest accuracy as the initial classification model, the preset condition being that the prediction loss no longer decreases, or the accuracy no longer increases, or the number of training rounds reaches the preset number; and adjust the initial classification model according to the hyper-parameters of model training to obtain the classification model.
The determining unit 300 is configured to determine, for each cupping mark color result, that the cupping mark color result is a target cupping mark color result when its probability value is greater than a preset threshold.
The determining unit 300 is specifically configured to: sort the cupping mark color results in descending order of probability value to obtain a cupping mark color result sequence, the sequence comprising at least the sequence number of each cupping mark color result; for each cupping mark color result, judge in ascending order of sequence number whether its probability value is greater than the preset threshold; and if so, determine that the cupping mark color result is a target cupping mark color result.
The determining unit 300 is further configured to determine, for each cupping mark color result, that the cupping mark color result is not the target cupping mark color result when its probability value is not greater than the preset threshold.
In summary, for each cupping mark color result, whether its probability value is greater than the preset threshold is judged in ascending order of sequence number, and if so, the cupping mark color result is determined to be the target cupping mark color result.
The present application further provides a computer-readable storage medium comprising a stored program, wherein the program performs the method for identifying cupping mark colors provided by the present application.
The present application further provides a device for identifying cupping mark colors, comprising: a processor, a memory, and a bus. The processor is connected to the memory through the bus; the memory is configured to store a program, and the processor is configured to run the program, wherein the program, when run, performs the method for identifying cupping mark colors provided by the present application, including the following steps:
acquiring a cupping mark image to be identified from an image database;
inputting the cupping mark image to be identified into a classification model to obtain a classification result output by the classification model; the classification result comprises at least one or more cupping mark color results and the probability value of each cupping mark color result; the classification model is obtained by pre-training based on the sample cupping mark images;
and for each cupping mark color result, when the probability value of the cupping mark color result is greater than a preset threshold, determining that the cupping mark color result is a target cupping mark color result.
Optionally, for each cupping mark color result, determining that the cupping mark color result is a target cupping mark color result when its probability value is greater than a preset threshold comprises:
sorting the cupping mark color results in descending order of probability value to obtain a cupping mark color result sequence; the cupping mark color result sequence comprises at least the sequence number of each cupping mark color result;
for each cupping mark color result, judging in ascending order of sequence number whether the probability value of the cupping mark color result is greater than the preset threshold;
and if the probability value of the cupping mark color result is greater than the preset threshold, determining that the cupping mark color result is a target cupping mark color result.
Optionally, the process of obtaining the classification model by pre-training based on sample cupping mark images comprises:
obtaining in advance each sample cupping mark image and the entity label corresponding to the sample cupping mark image;
generating a test set and a training set based on the sample cupping mark images;
determining an initial model according to the number of entity label types;
inputting each sample cupping mark image in the training set into the initial model, and encoding each sample cupping mark image with the encoder of the initial model to obtain a high-dimensional feature vector for each sample cupping mark image;
for each high-dimensional feature vector, inputting the high-dimensional feature vector into the classifier of the initial model to obtain the classification result of the sample cupping mark image output by the classifier; the classification result comprises at least an entity cupping mark color result;
calculating a prediction loss from the entity cupping mark color result and the entity label, and adjusting the parameters of the initial model using a preset algorithm;
when the prediction loss has been calculated for every sample cupping mark image in the training set, calculating the prediction loss and accuracy over the sample cupping mark images in the test set;
when the prediction loss and the accuracy meet a preset condition, identifying the model with the highest accuracy as the initial classification model; the preset condition is that the prediction loss no longer decreases, or the accuracy no longer increases, or the number of training rounds reaches a preset number;
and adjusting the initial classification model according to the hyper-parameters of model training to obtain the classification model.
Optionally, the method further comprises:
for each cupping mark color result, determining that the cupping mark color result is not the target cupping mark color result when its probability value is not greater than the preset threshold.
The functions described in the methods of the embodiments of the present application, if implemented in the form of software functional units and sold or used as independent products, may be stored in a storage medium readable by a computing device. Based on this understanding, the part of the technical solutions of the embodiments that contributes to the prior art may be embodied in the form of a software product stored in a storage medium, which includes instructions for causing a computing device (a personal computer, a server, a mobile computing device, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, or a magnetic or optical disk.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for identifying can print colors, comprising:
acquiring a can print image to be identified from an image database;
inputting the can print image to be identified into a classification model to obtain a classification result output by the classification model; the classification result at least comprises one or more can color results and probability values of the can color results; the classification model is obtained by pre-training based on sample can print images;
and for each can color result, when the probability value of the can color result is greater than a preset threshold, determining that the can color result is a target can color result.
2. The method according to claim 1, wherein the determining, for each can color result, that the can color result is a target can color result when the probability value of the can color result is greater than a preset threshold comprises:
sorting the can color results in descending order of probability value to obtain a can color result sequence; the can color result sequence at least comprises the serial number of each can color result;
for each can color result, judging in turn, in ascending order of serial number, whether the probability value of the can color result is greater than the preset threshold;
and if the probability value of the can color result is greater than the preset threshold, determining that the can color result is a target can color result.
3. The method according to claim 1, wherein the pre-training of the classification model based on sample can print images comprises:
obtaining in advance each sample can print image and the entity label corresponding to the sample can print image;
generating a test set and a training set based on each sample can print image;
determining an initial model according to the number of types of the entity labels;
inputting each sample can print image in the training set into the initial model, and encoding each sample can print image through an encoder of the initial model to obtain a high-dimensional feature vector of each sample can print image;
for each high-dimensional feature vector, inputting the high-dimensional feature vector into a classifier of the initial model to obtain a classification result of the sample can print image output by the classifier of the initial model; the classification result at least comprises an entity can color result;
calculating a prediction loss according to the entity can color result and the entity label, and adjusting each parameter of the initial model by using a preset algorithm;
when the prediction loss has been calculated for each sample can print image in the training set, calculating the prediction loss and the accuracy over each sample can print image in the test set;
when the prediction loss and the accuracy meet preset conditions, taking the model with the highest accuracy as an initial classification model; the preset conditions are as follows: the prediction loss no longer decreases, the accuracy no longer increases, or the number of training iterations reaches a preset number;
and adjusting the initial classification model according to the hyper-parameters of model training to obtain the classification model.
4. The method according to claim 1, further comprising:
for each can color result, determining that the can color result is not the target can color result when the probability value of the can color result is not greater than the preset threshold.
5. An identification device for can print colors, comprising:
an acquisition unit, configured to acquire a can print image to be identified from an image database;
an output unit, configured to input the can print image to be identified into a classification model to obtain a classification result output by the classification model; the classification result at least comprises one or more can color results and probability values of the can color results; the classification model is obtained by pre-training based on sample can print images;
and a determining unit, configured to determine, for each can color result, that the can color result is a target can color result when the probability value of the can color result is greater than a preset threshold.
6. The device according to claim 5, wherein the determining unit is specifically configured to:
sort the can color results in descending order of probability value to obtain a can color result sequence; the can color result sequence at least comprises the serial number of each can color result;
for each can color result, judge in turn, in ascending order of serial number, whether the probability value of the can color result is greater than the preset threshold;
and if the probability value of the can color result is greater than the preset threshold, determine that the can color result is a target can color result.
7. The device according to claim 5, wherein the output unit is specifically configured to:
obtain in advance each sample can print image and the entity label corresponding to the sample can print image;
generate a test set and a training set based on each sample can print image;
determine an initial model according to the number of types of the entity labels;
input each sample can print image in the training set into the initial model, and encode each sample can print image through an encoder of the initial model to obtain a high-dimensional feature vector of each sample can print image;
for each high-dimensional feature vector, input the high-dimensional feature vector into a classifier of the initial model to obtain a classification result of the sample can print image output by the classifier of the initial model; the classification result at least comprises an entity can color result;
calculate a prediction loss according to the entity can color result and the entity label, and adjust each parameter of the initial model by using a preset algorithm;
when the prediction loss has been calculated for each sample can print image in the training set, calculate the prediction loss and the accuracy over each sample can print image in the test set;
when the prediction loss and the accuracy meet preset conditions, take the model with the highest accuracy as an initial classification model; the preset conditions are as follows: the prediction loss no longer decreases, the accuracy no longer increases, or the number of training iterations reaches a preset number;
and adjust the initial classification model according to the hyper-parameters of model training to obtain the classification model.
8. The device according to claim 5, further configured to:
for each can color result, determine that the can color result is not the target can color result when the probability value of the can color result is not greater than the preset threshold.
9. A computer-readable storage medium, comprising a stored program, wherein the program, when executed by a processor, performs the method for identifying can print colors according to any one of claims 1 to 4.
10. An identification apparatus for can print colors, comprising: a processor, a memory, and a bus; the processor and the memory are connected through the bus;
the memory is configured to store a program, and the processor is configured to execute the program, wherein the program, when executed by the processor, performs the method for identifying can print colors according to any one of claims 1 to 4.
CN202310114824.0A 2023-02-15 2023-02-15 Identification method and device for can printing colors, storage medium and equipment Pending CN115984680A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310114824.0A CN115984680A (en) 2023-02-15 2023-02-15 Identification method and device for can printing colors, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310114824.0A CN115984680A (en) 2023-02-15 2023-02-15 Identification method and device for can printing colors, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN115984680A true CN115984680A (en) 2023-04-18

Family

ID=85965130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310114824.0A Pending CN115984680A (en) 2023-02-15 2023-02-15 Identification method and device for can printing colors, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN115984680A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090016616A1 (en) * 2007-02-19 2009-01-15 Seiko Epson Corporation Category Classification Apparatus, Category Classification Method, and Storage Medium Storing a Program
CN111325256A (en) * 2020-02-13 2020-06-23 上海眼控科技股份有限公司 Vehicle appearance detection method and device, computer equipment and storage medium
CN111340896A (en) * 2020-02-21 2020-06-26 北京迈格威科技有限公司 Object color identification method and device, computer equipment and storage medium
CN115294600A (en) * 2022-06-30 2022-11-04 武汉众智数字技术有限公司 Method, system, electronic device and storage medium for pedestrian clothing color identification


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUANG Dongmei et al.: "Case-Driven Principles, Technologies and Applications of Big Data", Shanghai Jiao Tong University Press *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116843705A (en) * 2023-07-25 2023-10-03 中国中医科学院望京医院(中国中医科学院骨伤科研究所) Segmentation recognition method, device, equipment and medium for tank printing image
CN116843705B (en) * 2023-07-25 2023-12-22 中国中医科学院望京医院(中国中医科学院骨伤科研究所) Segmentation recognition method, device, equipment and medium for tank printing image

Similar Documents

Publication Publication Date Title
CN106803247B (en) Microangioma image identification method based on multistage screening convolutional neural network
EP1229493B1 (en) Multi-mode digital image processing method for detecting eyes
CN109840554B (en) Alzheimer's disease MRI image classification method based on SVM-RFE-MRMR algorithm
CN110414541B (en) Method, apparatus, and computer-readable storage medium for identifying an object
CN111462102B (en) Intelligent analysis system and method based on novel coronavirus pneumonia X-ray chest radiography
Usman et al. Intelligent automated detection of microaneurysms in fundus images using feature-set tuning
CN113569554B (en) Entity pair matching method and device in database, electronic equipment and storage medium
Wen et al. Grouping attributes zero-shot learning for tongue constitution recognition
CN115984680A (en) Identification method and device for can printing colors, storage medium and equipment
CN113240655B (en) Method, storage medium and device for automatically detecting type of fundus image
CN115393351B (en) Method and device for judging cornea immune state based on Langerhans cells
CN114722892A (en) Continuous learning method and device based on machine learning
CN117315379B (en) Deep learning-oriented medical image classification model fairness evaluation method and device
CN111414930B (en) Deep learning model training method and device, electronic equipment and storage medium
CN116664585B (en) Scalp health condition detection method and related device based on deep learning
Lonij et al. Open-world visual recognition using knowledge graphs
CN117274278B (en) Retina image focus part segmentation method and system based on simulated receptive field
CN109509180A (en) Metal button flaw detection method based on machine vision
CN113140309A (en) Traditional Chinese medicine complexion diagnosis method and device
White et al. DevStaR: high-throughput quantification of C. elegans developmental stages
CN116091496B (en) Defect detection method and device based on improved Faster-RCNN
CN116188445A (en) Product surface defect detection and positioning method and device and terminal equipment
CN113537407B (en) Image data evaluation processing method and device based on machine learning
Shaffi et al. Performance evaluation of deep, shallow and ensemble machine learning methods for the automated classification of Alzheimer’s disease
US20220222816A1 (en) Medical image analysis system and method for identification of lesions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230418