CN114170522A - Color classification identification method and system based on chromatographic similarity measurement - Google Patents

Color classification identification method and system based on chromatographic similarity measurement

Info

Publication number
CN114170522A
Authority
CN
China
Prior art keywords
color
target
human visual
image
color spectrum
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210132017.7A
Other languages
Chinese (zh)
Inventor
孟然
柴华
贾勇
王哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Smarter Eye Technology Co Ltd
Original Assignee
Beijing Smarter Eye Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Smarter Eye Technology Co Ltd filed Critical Beijing Smarter Eye Technology Co Ltd
Priority to CN202210132017.7A priority Critical patent/CN114170522A/en
Publication of CN114170522A publication Critical patent/CN114170522A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Spectrometry And Color Measurement (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a color classification identification method and system based on color spectrum similarity measurement. The method comprises: acquiring a target color image to be recognized and converting the target color image into a human visual color space image; performing multi-dimensional color spectrum conversion on each effective pixel point of the human visual color space image and fusing the color spectrum conversion results of each dimension to obtain a target color spectrum; and performing color spectrum similarity measurement between the target color spectrum and a pre-stored target category, and if the similarity measurement result is greater than a threshold value, determining that the target color image belongs to the target category. The method solves the technical problems of the prior art that color identification accuracy is poor and human visual habits are not matched, and improves color identification accuracy and interaction friendliness.

Description

Color classification identification method and system based on chromatographic similarity measurement
Technical Field
The invention relates to the technical field of color identification, in particular to a color classification identification method and system based on color spectrum similarity measurement.
Background
In some practical application scenarios, a target needs to be classified or identified according to its color. For example, color comparison analysis can be performed between standard samples of known classes and images of the targets to be recognized, and the targets can then be classified or identified according to the similarity of their colors. The image output by a camera is typically a grayscale image or an RGB color space color image. However, a grayscale image carries too little information, resulting in low recognition accuracy, while the RGB color space does not match human visual habits and is poorly suited to color recognition.
Disclosure of Invention
Therefore, the invention provides a color classification identification method and system based on color spectrum similarity measurement, which are used to solve the technical problems of the prior art that color identification accuracy is poor and human visual habits are not matched, and to improve color identification accuracy and interaction friendliness.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
A color classification identification method based on a color spectrum similarity measure, the method comprising:
acquiring a target color image to be recognized, and converting the target color image into a human visual color space image;
performing multi-dimensional color spectrum conversion on each effective pixel point of the human visual color space image, and fusing each dimensional color spectrum conversion result to obtain a target color spectrum;
and performing color spectrum similarity measurement on the target color spectrum and a pre-stored target category, and if the similarity measurement result is greater than a threshold value, judging that the target color image is the target category.
Further, performing the multi-dimensional color spectrum conversion on the human visual color space image further comprises:
extracting the brightness value of a target pixel point in the human visual color space image;
if the brightness value is greater than a pre-stored brightness high threshold value, or the brightness value is less than a pre-stored brightness low threshold value, determining the target pixel point to be an invalid point;
and traversing the human visual color space image to obtain and remove all the invalid points, and taking the remaining pixel points as the effective pixel points.
Further, performing the multi-dimensional color spectrum conversion on the human visual color space image further comprises:
extracting the saturation value of a target pixel point in the human visual color space image;
if the saturation value is lower than a pre-stored saturation threshold value, determining the target pixel point to be an invalid point;
and traversing the human visual color space image to obtain and remove all the invalid points, and taking the remaining pixel points as the effective pixel points.
Further, performing multi-dimensional color spectrum conversion on each effective pixel point of the human visual color space image, specifically comprising:
performing projection statistics in the H dimension on each effective pixel point and generating a first color spectrum;
performing projection statistics in the S dimension on each effective pixel point and generating a second color spectrum;
and performing projection statistics in the HS plane dimension on each effective pixel point and generating a third color spectrum.
Further, performing projection statistics in the HS plane dimension on each effective pixel point and generating the third color spectrum specifically comprises:
dividing the HS plane dimension into a plurality of equally divided sectors, and dividing each sector into a high saturation region and a low saturation region;
and traversing all effective pixel points of the human visual color space image, accumulating the number of pixels falling into each color partition, and storing the ratio of each count to the total number of effective pixels into the corresponding element of a third color spectrum array to generate the third color spectrum.
Further, fusing the color spectrum conversion results of each dimension to obtain the target color spectrum specifically comprises:
connecting the first color spectrum, the second color spectrum, and the third color spectrum end to end to form the target color spectrum.
Further, the color spectrum similarity measurement is performed using the Manhattan distance.
The invention also provides a color classification and identification system based on color spectrum similarity measurement, the system comprising:
the color space image conversion unit is used for acquiring a target color image to be recognized and converting the target color image into a human visual color space image;
the color spectrum conversion and fusion unit is used for carrying out multi-dimensional color spectrum conversion on each effective pixel point of the human visual color space image and fusing each dimensional color spectrum conversion result to obtain a target color spectrum;
and the result output unit is used for carrying out color spectrum similarity measurement on the target color spectrum and a pre-stored target category, and if the similarity measurement result is greater than a threshold value, judging that the target color image is the target category.
The present invention also provides an intelligent terminal, comprising: a data acquisition device, a processor and a memory;
the data acquisition device is configured to acquire data; the memory is configured to store one or more program instructions; and the processor is configured to execute the one or more program instructions to perform the method as described above.
The present invention also provides a computer readable storage medium having embodied therein one or more program instructions for executing the method as described above.
According to the color classification and identification method and system based on color spectrum similarity measurement, a target color image to be identified is acquired and converted into a human visual color space image; multi-dimensional color spectrum conversion is performed on each effective pixel point of the human visual color space image, and the color spectrum conversion results of each dimension are fused to obtain a target color spectrum; color spectrum similarity measurement is then performed between the target color spectrum and a pre-stored target category, and if the similarity measurement result is greater than a threshold value, the target color image is determined to belong to the target category.
In this way, the color classification and identification method provided by the invention converts the image to be identified into a human visual color space image, which better matches human visual habits; at the same time, the target color spectrum is obtained through multi-dimensional color spectrum conversion and fusion, improving the precision of the color spectrum representation and therefore the accuracy of color identification. This solves the technical problems of the prior art that color identification accuracy is poor and human visual habits are not matched, and improves color identification accuracy and interaction friendliness.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It should be apparent that the drawings in the following description are merely exemplary, and that other embodiments can be derived from the drawings provided by those of ordinary skill in the art without inventive effort.
The structures, proportions, sizes and the like shown in this specification are only used to match the content disclosed in the specification so that it can be understood and read by those skilled in the art; they are not intended to limit the conditions under which the invention can be implemented and therefore have no limiting technical significance. Any structural modification, change of proportional relationship, or adjustment of size that does not affect the effects and objectives achievable by the invention shall still fall within the scope covered by the technical content disclosed by the invention.
FIG. 1 is a flow chart of an embodiment of a color classification identification method based on a color spectrum similarity measure according to the present invention;
FIG. 2 is a flowchart of a color classification recognition method according to an embodiment of the present invention;
FIGS. 3 and 4 are flow charts of invalid point screening;
FIG. 5 is a histogram corresponding to the first color spectrum;
FIG. 6 is a histogram corresponding to the second color spectrum;
FIGS. 7-9 are color-partition diagrams of the HS plane;
FIG. 10 is a diagram showing examples of membership functions in the hue H direction;
fig. 11 is a block diagram of an embodiment of a color classification and identification system based on a color spectrum similarity measure according to the present invention.
Detailed Description
The present invention is described below in terms of particular embodiments, and other advantages and features of the invention will become apparent to those skilled in the art from the following disclosure. It is to be understood that the described embodiments are merely a subset of the embodiments of the invention and are not intended to limit the invention to the particular embodiments disclosed. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
In order to solve the problems of poor accuracy and unfriendly human-computer interaction in existing color classification and identification, the invention uses a human visual color space image as the basis for color spectrum conversion, and uses a target color spectrum obtained by multi-dimensional color spectrum fusion as the category comparison parameter, so as to improve identification accuracy. The human visual color space (the HSL or HSI color space) models human visual characteristics and reflects the way humans perceive and experience colors; in this embodiment, an HSL color space image is taken as an example for illustration.
In a specific embodiment, as shown in fig. 1, the color classification and identification method based on color spectrum similarity measurement provided by the present invention includes the following steps:
S1: acquiring a target color image to be recognized and converting the target color image into a human visual color space image. In an actual use scenario, the images output by the camera are mostly RGB color space images; in this case, the RGB image needs to be converted into a human visual color space image, and the subsequent operations are then performed in the HSL color space or the HSI color space.
S2: performing multi-dimensional color spectrum conversion on each effective pixel point of the human visual color space image, and fusing each dimensional color spectrum conversion result to obtain a target color spectrum;
the HSL color space is a three-dimensional space, and the calculated amount is relatively large, and the color spectrum can perform one-dimensional description on the three-dimensional color space, and can better maintain the color information of the image, so that before the similarity measurement is performed, in order to reduce the calculated amount and improve the calculation efficiency, the human visual color space image (in this embodiment, the HSL color space image) can be converted into the color spectrum. Meanwhile, in order to improve the accuracy of the chromatogram, chromatograms of multiple dimensions are acquired and fused to obtain a target chromatogram.
S3: performing color spectrum similarity measurement between the target color spectrum and a pre-stored target category, and if the similarity measurement result is greater than a threshold value, determining that the target color image belongs to that target category. That is, after the target color spectrum is obtained, its similarity to the pre-stored image color spectrum corresponding to each target category is compared, completing the color classification and identification of the target. It should be understood that there may be multiple pre-stored target categories, which are compared one by one.
In principle, the color spectrum may characterize the color distribution of a human visual color space image. The HSL color space separates the luminance information from the color information, making the color representation of the object more robust against changes in lighting conditions. Therefore, for the RGB color space image output by the camera, it is first converted into a human visual color space image, and then the color spectrum of the image is generated in the HSL color space.
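A minimal sketch of this conversion step is given below, assuming an OpenCV/NumPy environment with a BGR camera frame as input and mapping to the value conventions used later in this description (H in [0,255], S in [0,99], L in [0,255]); the patent itself does not prescribe a particular library or API.

```python
import cv2
import numpy as np

def to_hsl(bgr_image: np.ndarray) -> np.ndarray:
    """Convert a BGR camera frame into an H/S/L image.

    Channel conventions follow the ranges used in this description:
    H in [0, 255], S in [0, 99], L in [0, 255].
    """
    # OpenCV's HLS_FULL conversion gives H in [0, 255] and L, S in [0, 255].
    hls = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HLS_FULL)
    h = hls[:, :, 0].astype(np.float32)
    l = hls[:, :, 1].astype(np.float32)
    s = hls[:, :, 2].astype(np.float32) * (99.0 / 255.0)  # rescale S to [0, 99]
    return np.stack([h, s, l], axis=-1)
```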
In a usage scenario, as shown in fig. 2, the color image of the target to be recognized is first converted into a human visual color space image, and the human visual color space image is then converted into a color spectrum; the color spectrum of the target to be recognized is compared, via the similarity measurement, one by one with the color spectra corresponding to the pre-stored known target categories, and when the similarity score is greater than a given threshold value, the target to be recognized is determined to belong to that known category.
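The comparison loop of fig. 2 can be sketched as follows; the helper names to_hsl, build_spectrum and similarity_score refer to the other sketches in this description, and the threshold value of 80 is illustrative, since the text does not fix a specific value.

```python
def classify(bgr_image, known_spectra: dict, score_threshold: float = 80.0):
    """Compare the target spectrum against each known category one by one.

    known_spectra maps category names to pre-stored fused color spectra.
    Returns the best-scoring category whose score exceeds the threshold,
    or None if no category passes.
    """
    hsl = to_hsl(bgr_image)
    target_spectrum = build_spectrum(hsl)
    best_name, best_score = None, -1.0
    for name, reference in known_spectra.items():
        score = similarity_score(target_spectrum, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score > score_threshold else None
```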
In order to reduce the amount of computation of the color spectrum conversion, thereby shortening the computation time and improving efficiency, invalid pixel points can be eliminated before the color spectrum conversion is performed on the image.
In principle, in the HSL color space, H represents hue, S represents saturation, and L represents luminance.
Pixels whose values are too bright or too dark cannot reliably reflect the color of the target. Therefore, when the L value of a pixel is greater than the brightness high threshold LThr1, or when its L value is less than the brightness low threshold LThr2, the pixel is judged to be an invalid point and is excluded from the subsequent projection statistics. Typically, L has a value range of [0,255]. For example, LThr1 may be set to 235 and LThr2 to 10, so that a pixel is directly marked as an invalid point when its L value is greater than 235 or less than 10.
Likewise, pixels with low saturation exhibit color randomness. Therefore, when the S value of a pixel is lower than the saturation threshold SThr, it is also judged to be an invalid point and excluded from the subsequent projection statistics. Generally, S has a value range of [0,100]. For example, SThr may be set to 10, so that a pixel is directly marked as an invalid point when its S value is less than 10.
Accordingly, in some preferred embodiments, as shown in fig. 3, performing the multi-dimensional color spectrum conversion on the human visual color space image further comprises:
S011: extracting the brightness value of a target pixel point in the human visual color space image;
S012: if the brightness value is greater than a pre-stored brightness high threshold value, or the brightness value is less than a pre-stored brightness low threshold value, determining the target pixel point to be an invalid point;
S013: traversing the human visual color space image to obtain and remove all the invalid points, and taking the remaining pixel points as the effective pixel points.
When eliminating invalid points, a saturation dimension may also be used in addition to the brightness dimension. As shown in fig. 4, performing the multi-dimensional color spectrum conversion on the human visual color space image may further comprise:
S021: extracting the saturation value of a target pixel point in the human visual color space image;
S022: if the saturation value is lower than a pre-stored saturation threshold value, determining the target pixel point to be an invalid point;
S023: traversing the human visual color space image to obtain and remove all the invalid points, and taking the remaining pixel points as the effective pixel points.
In some embodiments, steps S011-S013 can be combined with steps S1-S3, steps S021-S023 can be combined with steps S1-S3, or steps S011-S013, steps S021-S023 and steps S1-S3 can all be combined; the order in which steps S011-S013 and steps S021-S023 are performed is not limited.
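The two screening passes can be combined into a single mask computation; the sketch below uses the example thresholds from this description (LThr1 = 235, LThr2 = 10, SThr = 10) and the H/S/L channel layout of the earlier conversion sketch, with illustrative names.

```python
def valid_pixel_mask(hsl: np.ndarray,
                     l_high: float = 235.0,
                     l_low: float = 10.0,
                     s_low: float = 10.0) -> np.ndarray:
    """Boolean mask marking the effective (valid) pixels of an H/S/L image."""
    s = hsl[..., 1]
    l = hsl[..., 2]
    # Brightness screening (too bright or too dark) and saturation screening.
    invalid = (l > l_high) | (l < l_low) | (s < s_low)
    return ~invalid
```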
In some embodiments, performing multi-dimensional color spectrum conversion on each effective pixel point of the human visual color space image specifically includes the following steps:
s31: and performing projection statistics on H dimension on each effective pixel point, and generating a first color spectrum.
Specifically, the value range of H in a computer is typically [0,255]. In the implementation, a variable array of length 256 is first established; all effective pixels in the image to be recognized are then traversed, and the array element (variable) corresponding to each pixel's H value is incremented. This forms the histogram over 0 to 255 shown in fig. 5. The histogram is then processed so that the actual counts are converted into count ratios, and the result is recorded as the first color spectrum, denoted SpecH.
S32: performing projection statistics in the S dimension on each effective pixel point and generating a second color spectrum;
specifically, the value range of S in a computer is typically [0,99 ]]And when S is less than 10, the S is set as an invalid point by us, so that the value of S in the statistical range of us is [10,99 ]]. In the implementation process, firstly, a variable array with the length of 90 is established; circulating all effective images in the image to be identifiedAnd (4) performing quantity accumulation on array elements (variables) corresponding to the S values of the pixel points. Thereby forming a histogram of S values from 10 to 99 as shown in fig. 6. Processing the histogram, converting the actual statistical quantity into a quantity ratio, and recording the quantity ratio as a second spectrum
Figure 675804DEST_PATH_IMAGE002
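A sketch of these two projection statistics follows; the array lengths of 256 and 90 follow the text above, while the function names and the NumPy-based implementation are illustrative.

```python
def h_spectrum(hsl: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """First color spectrum: ratio histogram of H over the effective pixels (256 bins)."""
    h_values = hsl[..., 0][mask].astype(np.int64)
    counts = np.bincount(h_values, minlength=256)[:256].astype(np.float64)
    return counts / max(int(mask.sum()), 1)

def s_spectrum(hsl: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Second color spectrum: ratio histogram of S over the effective pixels (90 bins, S in [10, 99])."""
    s_values = hsl[..., 1][mask].astype(np.int64)
    counts = np.bincount(s_values - 10, minlength=90)[:90].astype(np.float64)
    return counts / max(int(mask.sum()), 1)
```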
S33: performing projection statistics in the HS plane dimension on each effective pixel point and generating a third color spectrum.
Specifically, the HS plane dimension is divided into a plurality of equal sectors, and each sector is divided into a high saturation region and a low saturation region; all effective pixel points of the human visual color space image are then traversed, the number of pixels falling into each color partition is accumulated, and the ratio of each count to the total number of effective pixels is stored into the corresponding element of a third color spectrum array to generate the third color spectrum.
Fig. 7 is a color partition diagram of the HSL color space in the HS plane. The hue dimension is divided into a number of equal sectors, and each sector is divided into two parts, as shown in fig. 8; in fig. 9, one part of each sector is a high saturation region and the other part is a low saturation region. Each color partition in fig. 9 corresponds to an element of the third color spectrum array. Thus, if the HSL color space is divided into n sector-shaped regions in the hue dimension, the third color spectrum array consists of 2n elements. For convenience of description, this embodiment takes n = 24, i.e. the hue dimension is divided into 24 sector areas, so the number of third color spectrum array elements is 48.
It should be understood that the more elements in the third color spectrum array, the finer the partitioning of the color space, i.e. the higher the color resolution of the color spectrum. The division number of the color partitions can be determined according to actual needs. In the same sector, the saturation threshold for distinguishing the high and low saturation regions can also be adjusted as needed.
The value of an element in the third color spectrum array represents the proportion of the number of effective pixels in that color partition to the total number of effective pixels. Once the hue dimension division and the saturation threshold are fixed, all effective pixels in the image to be recognized are traversed, the number of pixels falling into each color partition is accumulated, and the ratio of each count to the total number of effective pixels is stored into the corresponding element of the third color spectrum array, recorded as the third color spectrum, denoted SpecHS.
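A sketch of the HS-plane projection statistics, using n = 24 hue sectors (48 partitions) as in the example above; the saturation split value of 50 is an illustrative assumption, since the text leaves that threshold adjustable.

```python
def hs_spectrum(hsl: np.ndarray, mask: np.ndarray,
                n_sectors: int = 24, s_split: float = 50.0) -> np.ndarray:
    """Third color spectrum: 2 * n_sectors ratio bins over the HS plane."""
    h = hsl[..., 0][mask]
    s = hsl[..., 1][mask]
    # Hue sector index in [0, n_sectors - 1], assuming H in [0, 255].
    sector = np.clip((h * n_sectors / 256.0).astype(np.int64), 0, n_sectors - 1)
    high_sat = (s >= s_split).astype(np.int64)  # 0 = low, 1 = high saturation region
    counts = np.bincount(sector * 2 + high_sat, minlength=2 * n_sectors).astype(np.float64)
    return counts / max(int(mask.sum()), 1)
```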
A pixel whose color value lies near the boundary of a color partition may, under the influence of random factors such as illumination, fall into different color partitions, which affects the color recognition result. The membership function addresses this: each color point is no longer subordinate to a single color partition, but may belong, with different weights, to several adjacent color partitions. These weights are called membership degrees, and the membership function describes the membership degrees of a color point with respect to its adjacent color partitions. Since the third color spectrum is generated by partitioning along both the hue H direction and the saturation S direction, there are two membership functions, one for the hue H direction and one for the saturation S direction.
To illustrate the principle of the membership function, this embodiment takes the membership function in the hue H direction as an example; the membership function in the saturation S direction is analogous. As shown in fig. 10, for simplicity of explanation the hue H direction is divided into 4 partitions, each 64 units wide; the horizontal axis represents the hue H (range 0 to 255) and the vertical axis represents the membership degree of the different partitions in the hue direction. The corresponding membership degrees are calculated according to the membership function of Equation 1:

μ0 = 1 - |H - Hc| / WidH    (Equation 1)

where μ0 is the membership degree assigned to the partition in which the pixel lies, WidH is the width of the partition, Hc is the hue value of the partition center, and H is the hue value of the current pixel; the membership degree assigned to the adjacent partition is 1 - μ0.
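A small worked example of this relation, under the same assumption about its triangular form and with 4 partitions of width 64 as in fig. 10; the function name is illustrative, and hue wrap-around at the ends of the range is ignored.

```python
def hue_memberships(h: float, wid_h: float = 64.0) -> dict:
    """Membership of a hue value to its own partition and to the nearest adjacent one."""
    own = int(h // wid_h)                  # index of the partition containing h
    center = own * wid_h + wid_h / 2.0     # hue value of the partition center (Hc)
    mu0 = 1.0 - abs(h - center) / wid_h    # membership in the own partition
    neighbour = own - 1 if h < center else own + 1
    return {own: mu0, neighbour: 1.0 - mu0}

# Example: H = 80 lies in partition 1 (center 96), so it receives
# membership 1 - 16/64 = 0.75 there and 0.25 in the adjacent partition 0.
```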
Further, fusing the color spectrum conversion results of each dimension to obtain the target color spectrum specifically comprises:
connecting the first color spectrum, the second color spectrum, and the third color spectrum end to end to form the target color spectrum.
Specifically, when the statistics for the last effective pixel point of the target image are completed, the color spectrum in the H dimension, SpecH, the color spectrum in the S dimension, SpecS, and the HS plane partition color spectrum, SpecHS, are obtained. The three color spectra (one-dimensional arrays) are connected end to end to form a fused color spectrum with k1 + k2 + k3 elements, denoted Spec.
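In code, the fusion step is a simple concatenation of the three arrays from the earlier sketches (illustrative names; with the lengths above, k1 + k2 + k3 = 256 + 90 + 48 = 394).

```python
def build_spectrum(hsl: np.ndarray) -> np.ndarray:
    """Fused target color spectrum: SpecH, SpecS and SpecHS joined end to end."""
    mask = valid_pixel_mask(hsl)
    return np.concatenate([h_spectrum(hsl, mask),
                           s_spectrum(hsl, mask),
                           hs_spectrum(hsl, mask)])
```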
In some embodiments, the Manhattan distance may be used for the color spectrum similarity measurement.
Specifically, assume that the color spectrum of the color image of the target to be recognized is SpecA = (a1, a2, ..., aN) and that the color spectrum of a known color image is SpecB = (b1, b2, ..., bN), where N is the length of the color spectrum array. The Manhattan distance between the two color spectra is then

d(SpecA, SpecB) = Σ |ai - bi|, summed over i = 1, ..., N.

The Manhattan distance between two color spectra has a value range of [0,2]. The smaller d is, the more similar the color spectra are; the larger d is, the less similar they are. In particular, when d equals zero the two color spectra are identical.

Based on the Manhattan distance d, the color spectrum similarity score is defined as

Score = (1 - d/2) × 100,

where Score is the similarity score between the two color spectra; Score has a value range of [0,100], and a larger Score indicates more similar color spectra.
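A sketch of the similarity measurement, assuming the linear mapping between the stated ranges (d in [0,2] mapped to a score in [0,100]); the function name is illustrative.

```python
def similarity_score(spec_a: np.ndarray, spec_b: np.ndarray) -> float:
    """Manhattan distance between two color spectra, mapped to a [0, 100] similarity score."""
    d = float(np.abs(spec_a - spec_b).sum())  # d = 0 means the spectra are identical
    return (1.0 - d / 2.0) * 100.0
```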
In the above embodiment, the color classification recognition method provided by the invention converts the image to be recognized into a human visual color space image and therefore better matches human visual habits; at the same time, the target color spectrum is obtained through multi-dimensional color spectrum conversion and fusion, improving the precision of the color spectrum representation and therefore the accuracy of color identification. This solves the technical problems of the prior art that color identification accuracy is poor and human visual habits are not matched, and improves color identification accuracy and interaction friendliness.
In addition to the above method, the present invention also provides a color classification recognition system based on a color spectrum similarity measure, as shown in fig. 11, the system comprising:
a color space image conversion unit 100 configured to acquire a target color image to be recognized and convert the target color image into a human visual color space image;
the color spectrum conversion and fusion unit 200 is configured to perform multi-dimensional color spectrum conversion on each effective pixel point of the human visual color space image, and fuse each dimensional color spectrum conversion result to obtain a target color spectrum;
a result output unit 300, configured to perform color spectrum similarity measurement on the target color spectrum and a pre-stored target category, and if a similarity measurement result is greater than a threshold, determine that the target color image is the target category.
The color spectrum conversion and fusion unit 200 is further configured to:
extract the brightness value of a target pixel point in the human visual color space image;
if the brightness value is greater than a pre-stored brightness high threshold value, or the brightness value is less than a pre-stored brightness low threshold value, determine the target pixel point to be an invalid point;
and traverse the human visual color space image to obtain and remove all the invalid points, and take the remaining pixel points as the effective pixel points.
The color spectrum conversion and fusion unit 200 is also configured to:
extract the saturation value of a target pixel point in the human visual color space image;
if the saturation value is lower than a pre-stored saturation threshold value, determine the target pixel point to be an invalid point;
and traverse the human visual color space image to obtain and remove all the invalid points, and take the remaining pixel points as the effective pixel points.
The color spectrum conversion and fusion unit 200 is specifically configured to:
perform projection statistics in the H dimension on each effective pixel point and generate a first color spectrum;
perform projection statistics in the S dimension on each effective pixel point and generate a second color spectrum;
and perform projection statistics in the HS plane dimension on each effective pixel point and generate a third color spectrum.
The color spectrum conversion and fusion unit 200 is further specifically configured to:
divide the HS plane dimension into a plurality of equal sectors, and divide each sector into a high saturation region and a low saturation region;
and traverse all effective pixel points of the human visual color space image, accumulate the number of pixels falling into each color partition, and store the ratio of each count to the total number of effective pixels into the corresponding element of a third color spectrum array to generate the third color spectrum.
In the above embodiment, the color classification recognition system provided by the invention converts the image to be recognized into a human visual color space image and therefore better matches human visual habits; at the same time, the target color spectrum is obtained through multi-dimensional color spectrum conversion and fusion, improving the precision of the color spectrum representation and therefore the accuracy of color identification. This solves the technical problems of the prior art that color identification accuracy is poor and human visual habits are not matched, and improves color identification accuracy and interaction friendliness.
The present invention also provides an intelligent terminal, comprising: a data acquisition device, a processor and a memory;
the data acquisition device is configured to acquire data; the memory is configured to store one or more program instructions; and the processor is configured to execute the one or more program instructions to perform the method as described above.
In correspondence with the above embodiments, embodiments of the present invention also provide a computer storage medium containing one or more program instructions, where the one or more program instructions are used by the color classification recognition system to perform the method described above.
In an embodiment of the invention, the processor may be an integrated circuit chip having signal processing capability. The processor may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or executed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be implemented directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in RAM, flash memory, ROM, PROM or EPROM, registers, or other storage media well known in the art. The processor reads the information in the storage medium and completes the steps of the method in combination with its hardware.
The storage medium may be a memory, for example, which may be volatile memory or nonvolatile memory, or which may include both volatile and nonvolatile memory.
The nonvolatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory.
The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM).
The storage media described in connection with the embodiments of the invention are intended to comprise, without being limited to, these and any other suitable types of memory.
Those skilled in the art will appreciate that the functionality described in the present invention may be implemented in a combination of hardware and software in one or more of the examples described above. When software is applied, the corresponding functionality may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The above embodiments are only for illustrating the embodiments of the present invention and are not to be construed as limiting the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the embodiments of the present invention shall be included in the scope of the present invention.

Claims (10)

1. A color classification identification method based on a color spectrum similarity measure, characterized in that the method comprises:
acquiring a target color image to be recognized, and converting the target color image into a human visual color space image;
performing multi-dimensional color spectrum conversion on each effective pixel point of the human visual color space image, and fusing each dimensional color spectrum conversion result to obtain a target color spectrum;
and performing color spectrum similarity measurement on the target color spectrum and a pre-stored target category, and if the similarity measurement result is greater than a threshold value, judging that the target color image is the target category.
2. The color classification identification method according to claim 1, wherein performing the multi-dimensional color spectrum conversion on the human visual color space image further comprises:
extracting the brightness value of a target pixel point in the human visual color space image;
if the brightness value is greater than a pre-stored brightness high threshold value, or the brightness value is less than a pre-stored brightness low threshold value, determining the target pixel point to be an invalid point;
and traversing the human visual color space image to obtain and remove all the invalid points, and taking the remaining pixel points as the effective pixel points.
3. The color classification identification method according to claim 1, wherein performing the multi-dimensional color spectrum conversion on the human visual color space image further comprises:
extracting the saturation value of a target pixel point in the human visual color space image;
if the saturation value is lower than a pre-stored saturation threshold value, determining the target pixel point to be an invalid point;
and traversing the human visual color space image to obtain and remove all the invalid points, and taking the remaining pixel points as the effective pixel points.
4. The color classification identification method according to claim 1, wherein performing multi-dimensional color spectrum conversion on each effective pixel point of the human visual color space image specifically comprises:
performing projection statistics in the H dimension on each effective pixel point and generating a first color spectrum;
performing projection statistics in the S dimension on each effective pixel point and generating a second color spectrum;
and performing projection statistics in the HS plane dimension on each effective pixel point and generating a third color spectrum.
5. The color classification identification method according to claim 4, wherein performing projection statistics in the HS plane dimension on each effective pixel point and generating the third color spectrum specifically comprises:
dividing the HS plane dimension into a plurality of equal sectors, and dividing each sector into a high saturation region and a low saturation region;
and traversing all effective pixel points of the human visual color space image, accumulating the number of pixels falling into each color partition, and storing the ratio of each count to the total number of effective pixels into the corresponding element of a third color spectrum array to generate the third color spectrum.
6. The color classification identification method according to claim 5, wherein fusing the color spectrum conversion results of each dimension to obtain the target color spectrum specifically comprises:
connecting the first color spectrum, the second color spectrum, and the third color spectrum end to end to form the target color spectrum.
7. The color classification identification method according to claim 1, characterized in that the Manhattan distance is used for the color spectrum similarity measurement.
8. A color classification recognition system based on a color spectrum similarity metric, the system comprising:
the color space image conversion unit is used for acquiring a target color image to be recognized and converting the target color image into a human visual color space image;
the color spectrum conversion and fusion unit is used for carrying out multi-dimensional color spectrum conversion on each effective pixel point of the human visual color space image and fusing each dimensional color spectrum conversion result to obtain a target color spectrum;
and the result output unit is used for carrying out color spectrum similarity measurement on the target color spectrum and a pre-stored target category, and if the similarity measurement result is greater than a threshold value, judging that the target color image is the target category.
9. An intelligent terminal, characterized in that the intelligent terminal comprises: a data acquisition device, a processor and a memory;
the data acquisition device is configured to acquire data; the memory is configured to store one or more program instructions; and the processor is configured to execute the one or more program instructions to perform the method of any of claims 1-7.
10. A computer-readable storage medium having one or more program instructions embodied therein for performing the method of any of claims 1-7.
CN202210132017.7A 2022-02-14 2022-02-14 Color classification identification method and system based on chromatographic similarity measurement Pending CN114170522A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210132017.7A CN114170522A (en) 2022-02-14 2022-02-14 Color classification identification method and system based on chromatographic similarity measurement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210132017.7A CN114170522A (en) 2022-02-14 2022-02-14 Color classification identification method and system based on chromatographic similarity measurement

Publications (1)

Publication Number Publication Date
CN114170522A true CN114170522A (en) 2022-03-11

Family

ID=80489919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210132017.7A Pending CN114170522A (en) 2022-02-14 2022-02-14 Color classification identification method and system based on chromatographic similarity measurement

Country Status (1)

Country Link
CN (1) CN114170522A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020102018A1 (en) * 1999-08-17 2002-08-01 Siming Lin System and method for color characterization using fuzzy pixel classification with application in color matching and color match location
CN102012939A (en) * 2010-12-13 2011-04-13 中国人民解放军国防科学技术大学 Method for automatically tagging animation scenes for matching through comprehensively utilizing overall color feature and local invariant features
CN102156888A (en) * 2011-04-27 2011-08-17 西安电子科技大学 Image sorting method based on local colors and distribution characteristics of characteristic points
CN105005766A (en) * 2015-07-01 2015-10-28 深圳市迈科龙电子有限公司 Vehicle body color identification method
CN109460710A (en) * 2018-10-12 2019-03-12 北京中科慧眼科技有限公司 A kind of color classification recognition methods, device and automated driving system
CN111125416A (en) * 2019-12-27 2020-05-08 郑州轻工业大学 Image retrieval method based on multi-feature fusion


Similar Documents

Publication Publication Date Title
WO2019233264A1 (en) Image processing method, computer readable storage medium, and electronic device
EP3849170B1 (en) Image processing method, electronic device, and computer-readable storage medium
Zhuo et al. Cloud classification of ground-based images using texture–structure features
Dev et al. Systematic study of color spaces and components for the segmentation of sky/cloud images
CN110334635A (en) Main body method for tracing, device, electronic equipment and computer readable storage medium
CN110520768B (en) Hyperspectral light field imaging method and system
CN103888682B (en) Image processing equipment and image processing method
CN108764321B (en) Image-recognizing method and device, electronic equipment, storage medium
CN111784620B (en) Light field camera full-focusing image fusion algorithm for guiding angle information by space information
CN112580433A (en) Living body detection method and device
CN107993209A (en) Image processing method, device, computer-readable recording medium and electronic equipment
CN114719966A (en) Light source determination method and device, electronic equipment and storage medium
CN113609907B (en) Multispectral data acquisition method, device and equipment
CN110111292B (en) Infrared and visible light image fusion method
CN110852207A (en) Blue roof building extraction method based on object-oriented image classification technology
CN111369605A (en) Infrared and visible light image registration method and system based on edge features
CN114745532A (en) White balance processing method and device for mixed color temperature scene, storage medium and terminal
CN113052754B (en) Method and device for blurring picture background
CN107392948B (en) Image registration method of amplitude-division real-time polarization imaging system
CN117975133A (en) Hyperspectral image classification method, hyperspectral image classification system and hyperspectral image classification computer program product
CN114170522A (en) Color classification identification method and system based on chromatographic similarity measurement
Kınlı et al. Modeling the lighting in scenes as style for auto white-balance correction
CN114821333B (en) High-resolution remote sensing image road material identification method and device
CN116612103A (en) Intelligent detection method and system for building structure cracks based on machine vision
CN110852300A (en) Ground feature classification method, map drawing device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220311