CN111127333B - Improved color enhancement method for two-color vision-oriented color image - Google Patents

Improved color enhancement method for two-color vision-oriented color image

Publication number
CN111127333B
CN111127333B (application CN201911103035.7A)
Authority
CN
China
Prior art keywords
color
image
dominant
improved
vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911103035.7A
Other languages
Chinese (zh)
Other versions
CN111127333A (en)
Inventor
Shen Xuming (沈徐铭)
Feng Jianhua (冯健华)
Zhang Xiandou (张显斗)
Wang Yong (王勇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201911103035.7A priority Critical patent/CN111127333B/en
Publication of CN111127333A publication Critical patent/CN111127333A/en
Application granted granted Critical
Publication of CN111127333B publication Critical patent/CN111127333B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an improved color enhancement method for color images oriented to two-color (dichromatic) vision. For an image to be color-corrected, the dominant colors of the image are obtained by clustering in the u'v' space, and each (u', v') is converted to polar coordinates (R, θ). The angles θ of the dominant colors are then remapped to θ' in order of adjacent-confusion priority according to the improved method, and for each pixel belonging to the k-th dominant color the original angle θ_ij is adjusted to θ_ij + (θ'_k - θ_k), which preserves hue differences between different colors. At the same time, the luminance of the image is adjusted based on the pixel's R_ij and L_ij, the mean dominant-color radius R̄, and the maximum dominant-color radius difference R_max - R_min. This increases the visual difference between colors, improves the visual experience of dichromatic viewers, and reduces the dependence on the clustering result.

Description

Improved color enhancement method for two-color vision-oriented color image
Technical Field
The invention relates to an improved color enhancement method for color images oriented to two-color (dichromatic) vision, belonging to the technical fields of computational vision, digital image processing and color enhancement.
Background
Modern anatomy has established that the human eye contains three types of cone cells: L cones (sensitive to long-wavelength light), M cones (sensitive to middle-wavelength light) and S cones (sensitive to short-wavelength light). So-called "color blindness" and "color weakness" are caused by the absence or impairment of certain cone cells, which prevents normal discrimination of specific wavelength bands of visible light in the natural spectrum; apart from extremely costly gene therapy, no other means offers a radical cure.
Among color vision deficiencies, the most common types are dichromacy and anomalous trichromacy, i.e., impaired color discrimination caused by the absence or impairment of one type of cone cell; together they account for more than 95% of cases. According to the missing cone type, dichromacy is classified as red blindness (protanopia, missing L cones), green blindness (deuteranopia, missing M cones) and blue blindness (tritanopia, missing S cones). For red and green blindness, the global prevalence is about 7% in males and about 0.5% in females. A report by Colour Blind Awareness, a public-interest organization for color vision deficiency founded in the United Kingdom, indicates that 4.5% of the world's population is currently color blind. China alone has more than 60 million people with color vision deficiency. At the current level of medicine, gene therapy is prohibitively expensive for ordinary families, and color vision deficiency mainly causes inconvenience in everyday color discrimination rather than posing a direct threat to health or safety; therefore, improving the visual experience of people with color vision deficiency when digital images are displayed plays an important role.
In 1997, Hans Brettel et al., in "Computerized simulation of color appearance for dichromats", derived a visual simulation from trichromatic to dichromatic vision in the LMS color space by linear algebra, based on the assumption that neutral colors appear unchanged to dichromats and on the fact that red and green dichromats perceive 475 nm and 575 nm light correctly while blue dichromats perceive 485 nm and 660 nm light correctly, and provided a specific conversion method. In 1999, Françoise Viénot et al., in "Digital Video Colourmaps for Checking the Legibility of Displays by Dichromats", gave a specific transformation matrix in the LMS color space.
Many experts and scholars have since carried out extensive discussion and research on image enhancement methods for two-color vision, mainly involving remapping, luminance adjustment and other approaches. However, because dichromats lack one type of cone cell, three-dimensional color information is received as two-dimensional information, and remapping or luminance adjustment that is not combined with the image content often creates new color confusion; current research therefore focuses mainly on color enhancement based on image content.
Disclosure of Invention
The invention mainly provides an improved, content-based image color enhancement method oriented to two-color vision. For an image to be color-corrected, clustering is performed in the u'v' space to obtain the dominant colors of the image, and each (u', v') is converted to (R, θ). The angles θ of the dominant colors are then remapped to θ' in order of adjacent-confusion priority according to the improved method, and for each pixel belonging to the k-th dominant color the original angle θ_ij is adjusted to θ_ij + (θ'_k - θ_k), which preserves hue differences between different colors. At the same time, the luminance of the image is adjusted based on the pixel's R_ij, the mean dominant-color radius R̄, and the maximum dominant-color radius difference R_max - R_min, thereby increasing the visual difference between colors, improving the visual experience of dichromatic viewers, and reducing the dependence on the clustering result.
The technical solution adopted by the invention to solve the practical application problem is a processing method for enhancing image colors for the dichromatic-vision population. The processing is carried out in the Lu'v' color space, and the specific steps are as follows:
Step (1): convert the image from the sRGB color space to the Lu'v' color space and cluster in the u'v' space to obtain the dominant-color information based on the image content. The conversion from the sRGB color space to the XYZ color space follows the standard sRGB definition: each channel C ∈ {R, G, B}, normalized to [0, 1], is first linearized as

C_lin = C / 12.92                       if C ≤ 0.04045
C_lin = ((C + 0.055) / 1.055)^2.4       otherwise

and the linear values are then transformed as

[X]   [0.4124  0.3576  0.1805] [R_lin]
[Y] = [0.2126  0.7152  0.0722] [G_lin]
[Z]   [0.0193  0.1192  0.9505] [B_lin]
The conversion from the XYZ color space to the Lu'v' color space follows the CIE 1976 definition (with reference white (X_n, Y_n, Z_n)):

u' = 4X / (X + 15Y + 3Z),  v' = 9Y / (X + 15Y + 3Z)
L = 116 (Y / Y_n)^(1/3) - 16 if Y / Y_n > (6/29)^3, otherwise L = (29/3)^3 (Y / Y_n)
Step (2): express the (u', v') of each dominant color obtained by clustering in polar form in the u'v' space, taking the confusion point of the targeted dichromat as the center: (u'_con, v'_con) = (0.678, 0.501) when the two-color vision results from missing L cones, (u'_con, v'_con) = (-1.217, 0.782) when M cones are missing, and (u'_con, v'_con) = (0.257, 0.0) when S cones are missing. The conversion of (u', v') to the polar representation (R, θ) is:

R = sqrt((u' - u'_con)^2 + (v' - v'_con)^2),  θ = atan2(u' - u'_con, v' - v'_con)
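A minimal sketch of this polar conversion about the confusion point; the dictionary of centers comes from the values above, the function name is an assumption, and the argument order of arctan2 is chosen to match the worked example later in the description (e.g. (u', v') = (0.204268, 0.493217) with the red-blind center gives θ ≈ -1.587).

```python
import numpy as np

CONFUSION_CENTERS = {           # (u'_con, v'_con) per missing cone type, from the description
    "protan": (0.678, 0.501),   # missing L cones
    "deutan": (-1.217, 0.782),  # missing M cones
    "tritan": (0.257, 0.0),     # missing S cones
}

def uv_to_polar(u, v, kind="protan"):
    """Convert u'v' coordinates to (R, theta) about the confusion point."""
    uc, vc = CONFUSION_CENTERS[kind]
    du, dv = u - uc, v - vc
    R = np.hypot(du, dv)
    theta = np.arctan2(du, dv)   # (du, dv) order reproduces the tabulated angles
    return R, theta
```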
Step (3): among the dominant colors that have not yet been remapped, take the one with the highest current adjacent-confusion priority and remap its angle θ_k to θ'_k according to the improved remapping method, repeating until all dominant colors have been remapped. The adjacent-confusion priority is computed as:

M_k = (θ_k - θ'_{k-1})^2 + (θ'_{k+1} - θ_k)^2,   k = 1, 2, …, K

where, if the k-th dominant color has not yet been remapped, θ'_k does not exist and θ_k is used in its place; θ'_0 and θ'_{K+1} approximate the angles of the two boundaries of the u'v' space, θ'_0 = -2.169782992 and θ'_{K+1} = -1.46003987. Among the dominant colors that have not been remapped, the one whose adjacent-confusion priority M_k is the largest is selected next.
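One possible way to code the selection of the next dominant color to remap, assuming (as in the worked example below) that the dominant colors are indexed in increasing order of θ; the helper name and the data layout are assumptions for illustration.

```python
import numpy as np

THETA_B0, THETA_B1 = -2.169782992, -1.46003987   # u'v' boundary angles for the missing-L-cone case

def next_to_remap(theta, theta_prime, remapped):
    """theta: original angles sorted ascending; theta_prime: remapped angles (None if not yet);
    remapped: list of booleans. Returns (index k, M_k) of the unmapped color with largest M_k."""
    K = len(theta)
    # effective neighbour angle: theta'_k if already remapped, else theta_k
    eff = [tp if r else t for t, tp, r in zip(theta, theta_prime, remapped)]
    ext = [THETA_B0] + eff + [THETA_B1]           # prepend theta'_0, append theta'_{K+1}
    best_k, best_M = None, -np.inf
    for k in range(K):
        if remapped[k]:
            continue
        M = (theta[k] - ext[k]) ** 2 + (ext[k + 2] - theta[k]) ** 2
        if M > best_M:
            best_k, best_M = k, M
    return best_k, best_M
```

With the four angles of the worked example and nothing yet remapped, this returns the first cluster (M_1 ≈ 0.342), matching the table in the detailed description.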
The two-color vision lacking L cones or M cones is remapped according to the following improved method:

[improved remapping formula given as an equation image in the original]

where, for a dichromatic viewer lacking L cones, θ_d = -1.49939917 and θ'_0, θ'_{K+1} keep the values above; a is a hyperparameter of the algorithm that keeps the adjustment angle within a certain range, so that the image does not change so much between the original and the enhanced version that its naturalness is degraded. Trials show that a ∈ [0.1, 0.15] maintains image naturalness well while still giving a good enhancement. For dichromatic vision lacking M cones, θ_d = 1.64282159285, θ'_0 = 1.73292258701, θ'_{K+1} = 1.99216810406 and a ∈ [0.0365, 0.0548]; the value of a should be chosen according to the application scenario.
The two-color vision lacking S cones is remapped as follows:

[improved remapping formula for the S-cone case given as an equation image in the original]

where θ'_0 = -0.476983507, θ'_{K+1} = -0.23049605 and a ∈ [0.0347, 0.0521]; the value of a should be chosen according to the application scenario.
Step (4): if pixel (i, j) belongs to the k-th dominant-color cluster, its angle θ_ij is adjusted in the same way as θ_k, giving θ'_ij = θ_ij + (θ'_k - θ_k), k = 1, 2, …, K, where k is the index of the dominant-color cluster and K is the total number of clustered dominant colors;
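Applying the per-cluster offsets θ'_k - θ_k to every pixel is a single vectorized step; this sketch assumes labels is the cluster-label image from step (1) and theta, theta_prime hold the dominant-color angles before and after remapping.

```python
import numpy as np

def adjust_pixel_angles(theta_ij, labels, theta, theta_prime):
    """theta_ij: H x W pixel angles; labels: H x W cluster indices in [0, K);
    theta, theta_prime: length-K arrays of original and remapped dominant-color angles."""
    offsets = np.asarray(theta_prime) - np.asarray(theta)   # theta'_k - theta_k per cluster
    return theta_ij + offsets[labels]                       # broadcast the offset to every pixel
```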
Step (5): for each pixel (i, j), the luminance is adjusted based on R_ij, the mean dominant-color radius R̄ and the maximum dominant-color radius difference |R_max - R_min| to obtain L':

[luminance-adjustment formula given as an equation image in the original]

where b is a hyperparameter of the algorithm. Trials show that with b ∈ [20, 30] the luminance difference between pixels of different hues can be increased while preserving image naturalness, which improves the contrast of confusable colors under two-color vision; in a specific application the value of b should be chosen according to the scenario.
Step (6) converts the enhanced (R, θ ') to (u ', v ') by:
Figure BDA0002270425210000051
in combination with L ', the (L ', u ", v") is converted back to (R ', G ', B ') for display.
And obtaining the color image after color enhancement.
The beneficial effects of the technical solution provided by the invention are as follows:
The adjustment order used before remapping the angle θ is improved, which improves the color enhancement effect and increases the Euclidean distance between different dominant colors in the u'v' plane after enhancement. The remapping itself is improved for dichromats lacking L cones (red blindness), M cones (green blindness) and S cones (blue blindness), so that the color contrast of the image is enhanced while the enhanced colors remain evenly distributed over the u'v' plane. Furthermore, the luminance is adjusted using R_ij, the mean dominant-color radius R̄ and the maximum dominant-color radius difference |R_max - R_min|, which enhances the luminance difference between colors with the same angle θ but different R (i.e., the colors a dichromat would confuse). In the resulting image the differences between originally confusable colors become evident to viewers with color vision deficiency, improving the visual effect while reducing the dependence on the clustering result.
Drawings
FIG. 1 is the original image of the embodiment at the normal trichromatic viewing angle;
FIG. 2 shows the dominant colors (cluster centroids of classes c = 2 and c = 4) of the red-blind-confusable colors in FIG. 1 at the trichromatic viewing angle;
FIG. 3 is the embodiment image at the simulated red-blind viewing angle;
FIG. 4 shows the confusable dominant colors at the red-blind viewing angle in FIG. 3 (cluster centroids of classes c = 2 and c = 4, i.e. the red-blind simulated view of FIG. 2);
FIG. 5 is the clustering result obtained by K-means (K = 4) on the original image, with different clusters shown as different gray levels;
FIG. 6 is the color enhancement result of the original embodiment image at the normal trichromatic viewing angle;
FIG. 7 shows the enhanced dominant colors (cluster centroids of classes c = 2 and c = 4) of the red-blind-confusable colors at the trichromatic viewing angle in FIG. 6;
FIG. 8 is the color enhancement result of the original embodiment image at the simulated red-blind viewing angle;
FIG. 9 shows the confusable dominant colors at the red-blind viewing angle in FIG. 8 (cluster centroids of classes c = 2 and c = 4, i.e. the red-blind simulated view of FIG. 6);
FIG. 10 shows the robustness check results.
Fig. 11 is a flow chart of the present invention.
Detailed Description
The technical solution of the invention can be carried out automatically using computer software. For a better understanding of the technical solution, the invention is described in further detail below with reference to the drawings and an embodiment. The embodiment is a real image containing colors that are difficult for a red-blind viewer to distinguish. Referring to FIG. 1 and FIG. 11, the flow of the embodiment comprises the following steps:
step (1): cluster the image colors in the u'v' space;
step (2): convert the (u', v') of the dominant colors obtained by clustering into (R, θ), find the not-yet-remapped dominant color whose current adjacent-confusion degree M_k is largest, and remap its angle θ_k to θ'_k following the improved method;
step (3): while there are still dominant-color angles θ_k that have not been remapped, repeat step (2);
step (4): for each pixel in the image, apply the adjustment θ' = θ + (θ'_k - θ_k), where the color of the pixel belongs to the k-th dominant color;
step (5): adjust the luminance using R_ij, L_ij, the mean dominant-color radius R̄ and the maximum dominant-color radius difference |R_max - R_min|;
step (6): convert the image from the Lu'v' color space to the sRGB color space.
Step (1) converts the image from the sRGB color space to the XYZ color space and then to the Lu'v' color space, in which the dominant-color clustering is performed; K-means clustering is used as the example here.
The image to be enhanced is first transformed to the XYZ color space using the standard sRGB conversion given above: each channel C ∈ {R, G, B} in [0, 1] is linearized as

C_lin = C / 12.92                       if C ≤ 0.04045
C_lin = ((C + 0.055) / 1.055)^2.4       otherwise

and then

[X]   [0.4124  0.3576  0.1805] [R_lin]
[Y] = [0.2126  0.7152  0.0722] [G_lin]
[Z]   [0.0193  0.1192  0.9505] [B_lin]

The conversion from XYZ to the Lu'v' color space is, as above,

u' = 4X / (X + 15Y + 3Z),  v' = 9Y / (X + 15Y + 3Z)
L = 116 (Y / Y_n)^(1/3) - 16 if Y / Y_n > (6/29)^3, otherwise L = (29/3)^3 (Y / Y_n)
the Kmeans clustering algorithm pseudocode is as follows:
Figure BDA0002270425210000074
the Kmeans clustering result (k=4) is shown in fig. 2, where gray values 0, 50, 100, 150 represent the cluster categories 1,2, 3, 4, respectively.
In step (2), the (u', v') of the dominant colors obtained by clustering are converted into (R, θ), and the angle θ_k of the not-yet-remapped dominant color with the largest current adjacent-confusion degree M_k is remapped to θ'_k following the improved method. The specific procedure is as follows.
The cluster centers of the four dominant colors obtained in step (1) are shown in the following table:
clustering categories u′ k v′ k
1 0.204268 0.493217
2 0.358061 0.512938
3 0.220898 0.522419
4 0.187399 0.524124
Since the enhancement is for the dichromatic viewer lacking L cones (red blindness), (u'_con, v'_con) = (0.678, 0.501), and the (u', v') of the 4 dominant colors are converted to (R, θ) according to

R_k = sqrt((u'_k - u'_con)^2 + (v'_k - v'_con)^2),  θ_k = atan2(u'_k - u'_con, v'_k - v'_con)
The results are shown in the following table:
clustering categories R k θ k
1 0.47379636 -1.58722
2 0.32016166 -1.5335
3 0.45760337 -1.52397
4 0.491146 -1.5237
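As a quick numerical check of the polar conversion, the first cluster center reproduces the tabulated R_1 and θ_1:

```python
import numpy as np

du, dv = 0.204268 - 0.678, 0.493217 - 0.501
print(np.hypot(du, dv), np.arctan2(du, dv))   # ≈ 0.473796, -1.58722, matching the table
```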
The adjacent-confusion degree M_k of the currently not-remapped dominant colors is calculated according to

M_k = (θ_k - θ'_{k-1})^2 + (θ'_{k+1} - θ_k)^2,   k = 1, 2, …, K

where, for the population lacking L cones, θ'_0 = -2.169782992 and θ'_{K+1} = -1.46003987.
Cluster k    θ_k         M_k
1            -1.58722    0.342261974
2            -1.5335     0.002976842
3            -1.52397    9.08785E-05
4            -1.5237     0.004052372
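Spelling out the first entry as a check of the priority formula, with θ'_0 = -2.169782992 and the not-yet-remapped neighbour θ_2 standing in for θ'_2:

$$M_1=(\theta_1-\theta'_0)^2+(\theta_2-\theta_1)^2=(-1.58722+2.169783)^2+(-1.5335+1.58722)^2\approx 0.33938+0.00289\approx 0.3423$$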
At this point the adjacent-confusion degree M_1 of the class-1 color is the largest, so the angle θ_1 of the class-1 color is adjusted first; adjusting the colors with large adjacent confusion first reserves more angular room for the colors adjusted later. The angle mapping for the two-color vision lacking L cones uses θ_d = -1.49939917 in the improved mapping formula above (given as an equation image in the original), and solving it gives θ'_1 = -1.6872.
Here θ_d is the angle, in the (R, θ) representation, of the foot of the perpendicular dropped from the red-blind confusion point (u'_con, v'_con) = (0.678, 0.501) onto the straight line v' = 13.86u' - 2.273 fitted to the range of colors visible to the red-blind viewer in the u'v' plane; that foot is (u' = 0.2026, v' = 0.5353), i.e. the intersection of v' = 13.86u' - 2.273 with the perpendicular through the confusion point. For green blindness, the solution process only changes the confusion point to (u'_con, v'_con) = (-1.217, 0.782); the line fitted to the visible color range in the u'v' plane is the same as for red blindness, so the detailed solution is not repeated here, and θ_d for green blindness is 1.642821592848115. According to the Hans Brettel and Françoise Viénot simulation of the blue-blind viewing angle, the range of visible colors in the u'v' plane is not a straight line, so a different mapping method is used. Here a is taken as 0.1.
After θ_1 is mapped to θ'_1, the adjacent-confusion degree M_k of the currently unmapped colors is recomputed. Now the adjacent-confusion degree M_2 of the class-2 color is the largest, and it is remapped to θ'_2 = -1.6063; the adjacent-confusion degree M_k of the remaining unmapped colors is computed again. Now M_3 of the class-3 color is the largest and it is remapped to θ'_3 = -1.5651; finally the class-4 color, the only one not yet remapped, is remapped to θ'_4 = -1.5126. The results before and after the final remapping are shown below:
clustering categories θ k θ k θ′ kk
1 -1.5872 -1.6872 -0.1
2 -1.5335 -1.6063 -0.0728
3 -1.52397 -1.5651 -0.04113
4 -1.5237 -1.5126 0.0111
Step (4): for the pixel at image position (i, j), θ'_ij = θ_ij + (θ'_k - θ_k), where the color of the pixel belongs to the k-th dominant color;
Step (5): for each pixel (i, j), the luminance is adjusted based on R_ij, the mean dominant-color radius R̄ and the maximum dominant-color radius difference |R_max - R_min| to obtain L', using the luminance-adjustment formula of step (5) above (given as an equation image in the original); b is taken here as 25, and R̄ and |R_max - R_min| are computed from the dominant-color radii R_k obtained above.
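A sketch of those two statistics, under the assumption (the calculation appears only as an image in the original) that R̄ is the unweighted mean of the K dominant-color radii; with the R_k values tabulated above this gives R̄ ≈ 0.4357 and |R_max - R_min| ≈ 0.1710.

```python
import numpy as np

R_k = np.array([0.47379636, 0.32016166, 0.45760337, 0.491146])  # dominant-color radii from the table

R_bar = R_k.mean()                        # assumed: unweighted mean over the clusters
R_spread = np.abs(R_k.max() - R_k.min())
print(R_bar, R_spread)                    # ≈ 0.43567685, 0.17098434
```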
Step (6) converts the image from the Lu'v' color space back to the sRGB color space.
The remapped θ' is converted back to (u'', v'') as follows:

u'' = u'_con + R sin θ'
v'' = v'_con + R cos θ'

The Lu'v' color space is converted to the XYZ color space as follows:

Y = Y_n ((L + 16) / 116)^3 if L > 8, otherwise Y = Y_n L / (29/3)^3
X = Y · 9u'' / (4v''),  Z = Y · (12 - 3u'' - 20v'') / (4v'')

The XYZ color space is converted back to the sRGB color space by the standard inverse transform:

[R_lin]   [ 3.2406  -1.5372  -0.4986] [X]
[G_lin] = [-0.9689   1.8758   0.0415] [Y]
[B_lin]   [ 0.0557  -0.2040   1.0570] [Z]

followed by the sRGB gamma encoding of each channel c ∈ {R, G, B}:

c = 12.92 c_lin                         if c_lin ≤ 0.0031308
c = 1.055 c_lin^(1/2.4) - 0.055         otherwise

This yields the final color-corrected image.
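A sketch of the inverse pipeline used in step (6), combining the inverse polar conversion with the standard Lu'v'-to-XYZ and XYZ-to-sRGB formulas above; the D65 white point and the function names are assumptions for this illustration.

```python
import numpy as np

M_XYZ2RGB = np.array([[ 3.2406, -1.5372, -0.4986],
                      [-0.9689,  1.8758,  0.0415],
                      [ 0.0557, -0.2040,  1.0570]])

def polar_to_uv(R, theta, center):
    """Inverse of the polar conversion about the confusion point."""
    uc, vc = center
    return uc + R * np.sin(theta), vc + R * np.cos(theta)

def luv_to_srgb(L, u, v, white=(0.95047, 1.0, 1.08883)):
    """CIE 1976 Lu'v' -> XYZ -> sRGB, with gamma encoding and clipping to [0, 1]."""
    L = np.asarray(L, dtype=np.float64)
    Y = np.where(L > 8.0, white[1] * ((L + 16.0) / 116.0) ** 3, white[1] * L / ((29 / 3) ** 3))
    X = Y * 9.0 * u / (4.0 * v + 1e-12)
    Z = Y * (12.0 - 3.0 * u - 20.0 * v) / (4.0 * v + 1e-12)
    xyz = np.stack([X, Y, Z], axis=-1)
    lin = xyz @ M_XYZ2RGB.T
    srgb = np.where(lin <= 0.0031308,
                    12.92 * lin,
                    1.055 * np.clip(lin, 0, None) ** (1 / 2.4) - 0.055)
    return np.clip(srgb, 0.0, 1.0)
```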
The feasibility of the technical solution of the invention is demonstrated as follows:
Color difference is a widely used measure of color similarity, and the L*a*b* color difference is currently the most common and the easiest to compute. It is defined as

ΔE*_ab = sqrt((ΔL*)^2 + (Δa*)^2 + (Δb*)^2)

where ΔL*, Δa*, Δb* are the differences between the two colors in the three channels of the Lab space.
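A minimal sketch of this CIE76 color-difference computation; the function name is an assumption.

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two L*a*b* triples."""
    lab1, lab2 = np.asarray(lab1, float), np.asarray(lab2, float)
    return float(np.linalg.norm(lab1 - lab2))
```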
In the original image, the originally confusable colors (red and green) fall into the two K-means clusters c = 2 and c = 4. Their cluster-centroid RGB values (the per-cluster RGB averages), and the corresponding L*a*b* values after conversion from RGB, are listed in tables given as images in the original.
The color differences computed with the formula above are as follows:

L*a*b* color difference of the confusable colors in the example image:
                                     Original image    After enhancement by the method
Normal trichromatic viewing angle    59.07364448       54.21596997
Red-blind viewing angle              11.03078338       36.75849249

The smaller the color difference, the more similar the two colors. Taking the red-blind viewing angle as an example, the color-difference improvement for the example image after enhancement is 233.24%. Subjective comparison of FIG. 3 and FIG. 8 likewise shows that color discrimination is clearly improved under the simulated red-blind viewing angle, effectively improving the visual experience of dichromatic vision.
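The quoted figure is simply the relative increase of the red-blind-view color difference:

$$\frac{36.75849249-11.03078338}{11.03078338}\approx 2.3324=233.24\%$$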
Both the measured color differences and the visual experience from the experiments show that the proposed color correction method improves the distinguishability of different colors in the enhanced image and effectively improves the visual experience of dichromatic viewers.
Robustness test
The method depends only weakly on the clustering algorithm used and on the number of clusters obtained; that is, it still performs well when the clustering result and the number of clusters are not ideal. The results are shown in FIG. 10, demonstrating that the method has high robustness and stability.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and is not intended to limit the practice of the invention to such description. It will be understood by those skilled in the art that various changes in detail may be effected therein without departing from the scope of the invention as defined by the claims appended hereto.

Claims (6)

1. An improved color enhancement method for two-color-vision-oriented color images, characterized by comprising the following steps:
step (1): clustering the image colors in the u'v' space;
step (2): converting the (u', v') of the dominant colors obtained by clustering into (R, θ), finding the not-yet-remapped dominant color whose current adjacent-confusion degree M_k is largest, and remapping its angle θ_k to θ'_k following the improved method;
step (3): while there are still dominant-color angles θ_k that have not been remapped, repeating step (2);
step (4): for each pixel in the image, applying the adjustment θ' = θ + (θ'_k - θ_k), wherein the color of the pixel belongs to the k-th dominant color;
step (5): adjusting the luminance using R_ij, L_ij, the mean dominant-color radius R̄ and the maximum dominant-color radius difference |R_max - R_min|;
step (6): converting the image from the Lu'v' color space to the sRGB color space.
2. The improved color enhancement method for two-color-vision-oriented color images according to claim 1, characterized in that in step (2) the adjustment order of the dominant-color angles is determined by the adjacent-confusion priority M_k of the dominant colors to be remapped, computed as:
M_k = (θ_k - θ'_{k-1})^2 + (θ'_{k+1} - θ_k)^2,   k = 1, 2, …, K.
3. The improved color enhancement method for two-color-vision-oriented color images according to claim 1 or 2, characterized in that the remapping of the angle θ_k in step (2) is implemented as follows:
for both two-color vision lacking L cones (red blindness) and two-color vision lacking M cones (green blindness), remapping follows the improved method, whose formula is given as an equation image in the original, with:
for two-color vision lacking L cones (red blindness): θ_d = -1.49939917, θ'_0 = -2.169782992, θ'_{K+1} = -1.46003987;
for two-color vision lacking M cones (green blindness): θ_d = 1.64282159285, θ'_0 = 1.73292258701, θ'_{K+1} = 1.99216810406.
4. The improved color enhancement method for two-color-vision-oriented color images according to claim 1 or 2, characterized in that in step (2) two-color vision lacking S cones (blue blindness) is remapped by the formula given as an equation image in the original, with:
θ'_0 = -0.476983507, θ'_{K+1} = -0.23049605;
the value of a depends on the type of two-color vision addressed and on the application scenario.
5. The improved color enhancement method for two-color-vision-oriented color images according to claim 3, characterized in that step (5) is implemented as follows:
based on the dominant colors of the clustered image, for each pixel (i, j) the luminance is adjusted to the adjusted luminance L' using R_ij, the mean dominant-color radius R̄ and the maximum dominant-color radius difference |R_max - R_min|, by the luminance formula given as an equation image in the original;
the value of b in a specific application depends on the application scenario.
6. The improved color enhancement method for two-color-vision-oriented color images according to claim 5, characterized in that step (6) is implemented as follows:
the enhanced (R, θ') is converted to (u'', v'') by:
u'' = u'_con + R sin θ'
v'' = v'_con + R cos θ'
and, combined with the luminance L', (L', u'', v'') is converted back to (R', G', B') for display.
CN201911103035.7A 2019-11-12 2019-11-12 Improved color enhancement method for two-color vision-oriented color image Active CN111127333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911103035.7A CN111127333B (en) 2019-11-12 2019-11-12 Improved color enhancement method for two-color vision-oriented color image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911103035.7A CN111127333B (en) 2019-11-12 2019-11-12 Improved color enhancement method for two-color vision-oriented color image

Publications (2)

Publication Number Publication Date
CN111127333A CN111127333A (en) 2020-05-08
CN111127333B true CN111127333B (en) 2023-05-05

Family

ID=70495238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911103035.7A Active CN111127333B (en) 2019-11-12 2019-11-12 Improved color enhancement method for two-color vision-oriented color image

Country Status (1)

Country Link
CN (1) CN111127333B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114863095B (en) * 2022-03-25 2023-11-28 电子科技大学 Answer sheet image segmentation method based on color conversion
CN117116176A (en) * 2022-11-01 2023-11-24 深圳市Tcl云创科技有限公司 Content display method, apparatus, electronic device, and computer-readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006086882A (en) * 2004-09-16 2006-03-30 Ricoh Co Ltd Image forming apparatus, image forming method, program, and recording medium
CN101853492A (en) * 2010-05-05 2010-10-06 浙江理工大学 Method for fusing night-viewing twilight image and infrared image
CN106249406A (en) * 2016-08-30 2016-12-21 喻阳 Improve Color perception and correct artificial intelligence's lens and the method for designing of achromatopsia color weakness vision
CN106530250A (en) * 2016-11-07 2017-03-22 湖南源信光电科技有限公司 Low illumination color image enhancement method based on improved Retinex
CN109710791A (en) * 2018-12-14 2019-05-03 中南大学 A kind of multi-source color image color moving method based on significant filter
US10331207B1 (en) * 2013-03-15 2019-06-25 John Castle Simmons Light management for image and data control
CN110322520A (en) * 2019-07-04 2019-10-11 厦门美图之家科技有限公司 Image key color extraction method, apparatus, electronic equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006086882A (en) * 2004-09-16 2006-03-30 Ricoh Co Ltd Image forming apparatus, image forming method, program, and recording medium
CN101853492A (en) * 2010-05-05 2010-10-06 浙江理工大学 Method for fusing night-viewing twilight image and infrared image
US10331207B1 (en) * 2013-03-15 2019-06-25 John Castle Simmons Light management for image and data control
CN106249406A (en) * 2016-08-30 2016-12-21 喻阳 Improve Color perception and correct artificial intelligence's lens and the method for designing of achromatopsia color weakness vision
CN106530250A (en) * 2016-11-07 2017-03-22 湖南源信光电科技有限公司 Low illumination color image enhancement method based on improved Retinex
CN109710791A (en) * 2018-12-14 2019-05-03 中南大学 A kind of multi-source color image color moving method based on significant filter
CN110322520A (en) * 2019-07-04 2019-10-11 厦门美图之家科技有限公司 Image key color extraction method, apparatus, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Metamer Mismatching and Its Consequences for Predicting How Colours are Affected by the Illuminant; Xiandou Zhang et al.; 2015 IEEE International Conference on Computer Vision Workshop (ICCVW); full text *

Also Published As

Publication number Publication date
CN111127333A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN110276727B (en) Color vision disorder-oriented color image color enhancement method
CN111127333B (en) Improved color enhancement method for two-color vision-oriented color image
US7873213B2 (en) Systems and methods for color-deficient image enhancement
CN105184754B (en) Method for enhancing picture contrast
CN106997584A (en) A kind of haze weather image enchancing method
WO2018113050A1 (en) Drive method and drive apparatus of display panel
CN111915508B (en) Image texture detail enhancement method for color vision disorder
WO2019056445A1 (en) Color gamut mapping method and device
CN108711142A (en) Image processing method and image processing apparatus
Walraven et al. Color displays for the color blind
CN107092753A (en) A kind of intelligent city's planning system
CN110175969A (en) Image processing method and image processing apparatus
CN111105383B (en) Three-color vision-oriented image fusion color enhancement method
CN107122613A (en) A kind of DICOM image display methods based on pseudo-colours
CN115345788B (en) Method and device for improving image color contrast under vision of person with color vision abnormality
CN105208362B (en) Image colour cast auto-correction method based on gray balance principle
CN106599185B (en) HSV-based image similarity identification method
EP2802139B1 (en) Image color adjusting method and electronic device using the same
TWI438718B (en) Image processing method and system by using adaptive inverse hyperbolic curve
CN101036606B (en) Method for rectifying the daltonism on the basis of self-adapted mapping
CN110880306B (en) Color restoration correction method for medical display
CN104796679A (en) Laser display color adjusting method and device thereof
CN110767150B (en) Method for solving squint color cast problem of LED screen
CN107154015B (en) Color blindness correction method based on regional mapping
CN108564632B (en) Black and white image colorizing algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Shen Xuming

Inventor after: Feng Jianhua

Inventor after: Zhang Xiandou

Inventor after: Wang Yong

Inventor before: Zhang Xiandou

Inventor before: Shen Xuming

Inventor before: Feng Jianhua

GR01 Patent grant
GR01 Patent grant