CN114792294B - Underwater image color correction method based on attenuation coefficient

Publication number: CN114792294B
Authority: CN (China)
Prior art keywords: image, underwater, attenuation coefficient, attenuation, image sensor
Legal status: Active (granted)
Application number: CN202210549452.XA
Other versions: CN114792294A (application publication)
Original language: Chinese (zh)
Inventors: 陈恩依, 郑冰, 付民, 孙梦楠, 王晓晓
Assignee: Individual

Classifications

    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T7/90 Determination of colour characteristics
    • G06T2207/10024 Color image (indexing scheme for image analysis or enhancement)
    • Y02A90/30 Assessment of water resources

Abstract

The invention provides an attenuation-coefficient-based color correction method for underwater images. First, an underwater image and an in-air image of the same target at distance d are acquired, linearized to obtain RAW linear images, and preprocessed with DCRaw into tiff images that MATLAB can conveniently read and process. Next, from the pixel energy values recorded in air and in water, the attenuation coefficient of the image sensor at distance d in the water body is calculated. The attenuation coefficient of the image sensor in an unreachable region is then estimated: the characteristic trend of the sensor's three-channel attenuation coefficient curve is fitted using the trend of the underwater attenuation coefficient curve measured with an AC-S, yielding the attenuation coefficient at a distance of alpha times d. Finally, the original image is compensated using Beer's law to complete the correction. The invention achieves more accurate color correction of underwater images under different environmental conditions, and extends the application range of the algorithm by providing a calculation method for image sensor attenuation coefficients at different distances.

Description

Underwater image color correction method based on attenuation coefficient
Technical Field
The invention belongs to the technical field of color correction of underwater images, and particularly relates to an underwater image color correction method based on attenuation coefficients.
Background
The ocean holds about 97% of the world's water and supplies humanity with vast resources; it is of great significance to the survival and development of human society and provides a rich material foundation for high-quality social development. Green, sustainable development rests on the reasonable use of natural resources, of which marine resources are an important component that plays an ever-growing role in the natural resource system; their rational exploitation improves the quality and efficiency of the real economy.
"To be made into its material, must be used for its purpose. The main stream of the current underwater imaging technology is imaging by means of light waves and sound waves, wherein the development of the underwater optical imaging technology such as the high quality of the happiness is the key point of the development and the utilization of ocean resources. The underwater optical imaging technology is widely applied to the fields of underwater optical detection, aquaculture, military camouflage, anti-camouflage and the like. The imaging record is carried out on the target object by using visible light, and the imaging record has the characteristics of high speed, accuracy, convenience, real color and high reliability.
However, when light propagates under water it is affected by conditions in the water body, which greatly limits optical imaging there. Compared with propagation in air, a water body attenuates light significantly, mainly in two ways. The first is absorption: water molecules and small suspended particles absorb the energy carried by photons as they travel through the water, causing an energy loss in optical imaging; the direct visual consequence is that an underwater image is always darker overall than the same scene imaged in air. The second is scattering: larger suspended particles in seawater, such as sediment, phytoplankton, and bacteria, deflect the propagation paths of photons. Under this attenuation, underwater images become blurred in detail, disordered at edges, and shifted in color, which hinders applications in the marine economy, scientific survey and measurement, the military, and elsewhere. Of these effects, color shift has the most pronounced impact on specific applications and in some fields can even lead to completely wrong judgments and irrecoverable losses. For example, in krill aquaculture the health of the krill can be judged in air from their body-surface color, and the body color of healthy krill differs only slightly from that of krill suffering from red-body disease or yellow-head disease; in the water, however, red and yellow light are severely attenuated, and the actual body color is hard to distinguish in an underwater image that has not been color-corrected, delaying treatment and sharply reducing yield. In underwater torpedo camouflage, optical decoy torpedoes exploit the attenuation characteristics of the water body; placed in air, such camouflage would be superfluous and easily recognized. A good color correction method that restores the in-air color of an underwater scene can therefore effectively distinguish an optical decoy torpedo from a real one, reveal the enemy's military intent, and seize the initiative on the battlefield.
Underwater image restoration algorithms rely on an accurate restoration model to restore the distorted underwater image; the more accurately the model parameters and their relationships are calculated and fitted, the better the restoration result. However, the restoration model is constrained by environmental parameters, which often must themselves be estimated with reference to an optical attenuation model before restoration can proceed, and this is a major difficulty in building restoration models.
Basic underwater image restoration algorithms can be roughly divided into the following categories:
1. Restoration algorithms based on the dark channel prior
2. Dehazing (defogging) algorithms
3. Image restoration algorithms based on deep learning
4. Retinex-based image restoration algorithms
In practice, color correction algorithms often combine different methods and are assisted by other denoising techniques, and restoration-based color correction of underwater images has achieved a certain level of success.
In 2007, Hou et al. combined a basic image restoration method with the underwater propagation characteristics of photons to directly estimate the attenuation coefficient of the water body, and used a combination of convolution and deconvolution to color-correct underwater images, raising the information entropy of the underwater image from 7.055 to 7.278.
In 2010, He et al. applied the dark channel prior to underwater image restoration. The dark channel prior was originally a dehazing algorithm, but since the degradation of underwater images, like that of hazy images, involves both absorption and scattering, it transfers to the underwater case; the result reached a CIELAB1976 value of 47.258. Because images taken in fog show almost no color shift, however, the algorithm offers no effective handling of color, and color distortion remains.
In 2012, Chiang et al., aiming at the chromatic aberration caused by inaccurate underwater attenuation models, creatively proposed an algorithm combining wavelength-dependent sub-band compensation with the dark channel prior, effectively reducing the color difference between the restored underwater image and the true underwater scene; its drawbacks are high hardware requirements, strong dependence on them, and an excessive number of model parameters to estimate, making it hard to reproduce.
In 2013, Wen et al. improved the dark channel prior algorithm on the basis of Chiang et al., estimated the attenuation coefficient of the water, and proposed a new background-light calculation method; the restored underwater image has good contrast and a CIELAB1976 value of 41.468. In the same year, Drews et al. focused on the G and B channels of the image and markedly improved the removal of the "blue" fogging effect of seawater, obtaining a CIELAB1976 value of 44.773; however, because the restoration model for the R channel, the most severely attenuated channel in underwater images, was overly simplified, the color distance between the restored image and the original image remained large.
In 2014, Serikawa et al. designed a joint trilateral filter to address the color distortion of underwater images; it reduces the color difference between the distorted and original images to some extent, and the restored image reaches a CIEDE2000 value of 3.8542.
In 2015, Galdran et al. pioneered taking the red channel as the starting point, comprehensively using the dark channel prior and dehazing algorithms to fit the attenuation model parameters of the underwater image. Image quality metrics show that the method clearly improves the contrast of underwater images and strengthens edges between regions, meeting human visual requirements; using CIELAB1976 as the comparison metric, the method's result of 28.333 is lower than the best of the compared algorithms, 31.107, giving effective correction of color deviation.
In 2016, Lu et al. rebuilt the attenuation model of light propagating under water, proposed a new underwater image transmission model together with a locally adaptive filtering algorithm intended to address image blurring and color distortion; the processed underwater image has a CIELAB1976 value of 29.864, lower than the best of the compared algorithms at 32.461, indicating a smaller color difference than other methods.
In 2018, Berman et al. combined background-light intensity estimation with the dark channel prior as improved by Wen et al.; after reasonably calculating the background light, they combined it with the dark channel prior algorithm to re-establish the attenuation model of the underwater image. In the same year, Barbosa et al., inspired by Shin, added tea of different concentrations to water to further enrich the underwater image dataset, simulating underwater degradation environments with different attenuation coefficients and offering a new approach to color research under different environments.
In 2019, Hu et al. proposed a deep-learning-based underwater image restoration algorithm to address inaccurate background-light estimation in underwater environments. Reasonably considering the attenuation characteristics of the three channels, and following the principle that the red channel attenuates more strongly than the blue-green channels, they designed a convolutional neural network to invert the underwater imaging model; the CIEDE2000 index of the restored image is 8.6316, lower than the compared methods, and the color correction effect is clearly improved.
In 2020, Li et al. studied underwater image restoration based on a variational method; the processed image has a CQE index of 0.9599, higher than the compared algorithms, indicating that the method contributes to color restoration.
In 2021, Gong et al., based on studies of the imaging characteristics and imaging models of underwater images, converted the RGB color space of the image to the YCbCr color space, calculated an attenuation offset parameter matrix from the three-channel attenuation coefficients, corrected the Y, Cb and Cr values, and redefined the image "white point", thereby calculating the gain of each channel and converting the bluish-green hue of the underwater image toward a white hue; the UCIQE of the underwater raw image was raised from 0.4654 to 0.5421, showing a certain effect in correcting color offset.
In summary, the development of underwater image color correction has proceeded by continuously improving the underwater image restoration model with different ideas; as the model parameters have become more accurate, restoration has gradually achieved results in various application fields. The work is not flawless, however: the color-corrected image still deviates to a certain degree from the true image, and in underwater applications with high color-accuracy requirements, such as the krill aquaculture and torpedo camouflage cases mentioned above, the accuracy of restoration algorithms still needs to improve.
Moreover, although color correction of underwater images at a fixed distance has been achieved, its range of application is limited. In some scenarios, such as underwater torpedo camouflage or diagnosing krill disease, certain regions are unreachable: a light source cannot be placed at the corresponding position to directly measure the attenuation coefficient of the image sensor, so color correction cannot be performed. It is therefore necessary to extend the application range of the color correction algorithm and realize color correction of underwater images in unreachable regions.
Disclosure of Invention
The essence of underwater image restoration is the continual attempt to establish more applicable restoration models and to solve the color distortion of underwater images from different angles.
The first aspect of the invention provides an underwater image color correction method based on attenuation coefficients, which comprises the following steps:
Step 1: acquire an underwater target image and an in-air target image at the same distance d, apply a unified linearization to the acquired original target images to obtain RAW linear images, and preprocess the RAW linear images with DCRaw to obtain tiff images that MATLAB can conveniently read and process;
Step 2: from the tiff images obtained in step 1, obtain the pixel energy value E_air recorded in the air and the pixel energy value E_water recorded in the water, and calculate the attenuation coefficient of the image sensor at distance d in the water body as

    E_air·exp(-C_s·d) = E_water, i.e. C_s = ln(E_air/E_water)/d

wherein e is the natural constant 2.71828 and C_s is the attenuation coefficient of the image sensor at distance d;
Step 3: estimate the attenuation coefficient of the image sensor in the unreachable region; the characteristic trend of the three-channel attenuation coefficient curve of the image sensor is fitted with the characteristic trend of the underwater attenuation coefficient curve measured by the AC-S, and the attenuation coefficient C_s at the fixed distance d obtained in step 2 is combined with the spectral response characteristic curve of the image sensor to obtain the attenuation coefficient C of the image sensor at a distance of α times d, according to formula (10) derived below, wherein d represents the imaging distance, T represents the transmittance of any channel of the image sensor, λ represents the wavelength, and c_i represents the attenuation coefficients of the water body in different wave bands;
Step 4: using the attenuation coefficient of the image sensor at the required distance calculated by the formula of step 3, compensate the original image with Beer's law, thereby completing the color correction of the original image.
In one possible design, when the original target image in step 1 is in a nonlinear image format, the specific linearization process is as follows.
Let a be the pixel value of the image before gamma transformation and before normalization, z the recorded pixel value of the image, s the maximum saturation value, and b the black-level pixel value; z is the gamma-encoded counterpart of a. Normalizing the acquired pixel value gives

    z_norm = (z - b) / (s - b)

and applying the inverse gamma transformation to the normalized value gives

    A = z_norm^γ

where γ is the gamma exponent of the original encoding. The resulting A is an image whose pixel values correspond linearly to the recorded energy.
In one possible design, obtaining the pixel energy values of the image in step 2 means sorting the pixel values of the whole photograph separately by the magnitude of each of the three channel values and taking, for each channel, the 18 points with the largest values to measure the attenuation coefficient of the image sensor.
In one possible design, the specific process of estimating the attenuation coefficient of the image sensor in the unreachable region in step 3 is as follows:
When the measured distance is d, assume the spectral response curve of the three channels of the image sensor is divided into n segments in total, so that the channel transmission at distance d can be written segment by segment, where a_m is the spectral response coefficient of the m-th segment; any a_m can be expressed as

    a_m = ∫[λ_m, λ_(m+1)] T(λ) dλ

wherein T is the transmittance of the spectral response curve and λ_m, λ_(m+1) are respectively the start and end wavelengths of the m-th segment;
the characteristic trend of the three-channel attenuation coefficient curve of the image sensor is fitted with the characteristic trend of the underwater attenuation coefficient curve measured by the AC-S, so that for any wave band coefficient c_m within 400-700 nm, c_m follows the trend of c_acs, the underwater attenuation coefficient measured by the AC-S (formula (3));
considering that the attenuation coefficient of the image sensor is known when the underwater imaging distance is d, and applying it to attenuation compensation at an arbitrary distance, let the attenuation coefficient of the three channels of the image sensor at the known distance d be C_s;
starting from d, consider the cases where the distance is d, 2d, 3d, ..., αd respectively and construct the system of equations (4), one equation per distance;
assuming α ∈ Z, the system contains α equations in total; viewing all the terms on the left-hand side of the equations as an α × n matrix, the elements in each column of the matrix form a geometric sequence with common ratio e^(-c_m·d);
add the α equations of equation set (4) in turn and denote the sum by B_α; add the first (α-1) equations in turn and denote that sum by B_(α-1); taking the difference of the two expressions gives formula (7), and substituting the result of formula (3) into (7) gives a new expression whose fractional term inside each bracket can be reduced by the common factor, yielding formula (10);
in formula (10), c, d and T are constants; d represents the imaging distance, T represents the transmittance of any channel of the image sensor, λ represents the wavelength, c_i represents the attenuation coefficients of the water body in different wave bands, and C is the attenuation coefficient of the image sensor when the distance is α times d.
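To illustrate the idea behind this design (without reproducing formula (10) itself), the following Python sketch models a channel's transmission at distance x as a T(λ)-weighted sum of per-band Beer's law terms, uses the AC-S curve only for the spectral shape of the per-band coefficients, scales that shape so the model reproduces the measured C_s at the known distance d, and then reads off the effective coefficient at αd; the numeric inputs, the proportional-scaling assumption and the brute-force search are illustrative assumptions rather than the patent's reference implementation:

    import numpy as np

    def effective_coefficient(scale, c_band, weights, x):
        # Effective single-channel coefficient at distance x when the per-band
        # coefficients are scale * c_band and the sub-bands carry the given
        # (normalized) spectral-response weights.
        transmission = np.sum(weights * np.exp(-scale * c_band * x))
        return -np.log(transmission) / x

    def estimate_c_at_alpha_d(c_s_d, d, alpha, c_band, weights):
        # Find the scale that reproduces the measured C_s at distance d, then
        # evaluate the effective coefficient at alpha * d.
        weights = weights / weights.sum()
        scales = np.linspace(0.01, 10.0, 5000)   # brute-force 1-D search (illustrative)
        errors = [abs(effective_coefficient(s, c_band, weights, d) - c_s_d) for s in scales]
        best = scales[int(np.argmin(errors))]
        return effective_coefficient(best, c_band, weights, alpha * d)

    # Placeholder inputs: 4 nm sub-bands in 400-700 nm, a made-up AC-S attenuation
    # spectrum, and a made-up single-channel spectral response curve.
    wl = np.arange(400.0, 701.0, 4.0)
    c_acs = 0.2 + 0.004 * (wl - 400.0)            # rises toward the red end (placeholder)
    T = np.exp(-((wl - 600.0) / 40.0) ** 2)       # placeholder R-channel response

    c_alpha_d = estimate_c_at_alpha_d(c_s_d=0.9, d=0.5, alpha=3, c_band=c_acs, weights=T)
    print("estimated sensor attenuation coefficient at 3d:", c_alpha_d)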
In one possible design, the image color correction in step 4 is specifically: let A be the pixel value of the original image to be corrected; the value is compensated on the basis of A, and the compensated value B satisfies the Beer's law relation

    B = A·exp(c·l)

To meet the visual requirements of the human eye, the compensated value is gamma-transformed once more to obtain the pixel value of the final in-air photograph, where e is the natural constant 2.71828, c is the attenuation coefficient of the image sensor, and l is the distance.
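A minimal Python sketch of this step-4 compensation, assuming the image has already been linearized, that the per-channel coefficients come from step 2 or step 3, and that the final gamma value is 2.2 (an assumed value, not fixed by the text above):

    import numpy as np

    def correct_color(linear_img, c, l, gamma=2.2):
        # linear_img: H x W x 3 linear image A (values in [0, 1])
        # c: per-channel attenuation coefficients of the image sensor (m^-1)
        # l: imaging distance in metres
        B = linear_img * np.exp(np.asarray(c) * l)   # B = A * exp(c * l), undoing the attenuation
        B = np.clip(B, 0.0, 1.0)                     # keep values in the displayable range
        return B ** (1.0 / gamma)                    # gamma re-encode for human viewing

    # Usage with illustrative numbers (the red channel is attenuated most strongly):
    img = np.full((2, 2, 3), 0.1)
    print(correct_color(img, c=[1.9, 0.7, 0.5], l=1.0)[0, 0])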
The second aspect of the present invention also provides an underwater image color correction device based on attenuation coefficients, the device comprising at least one processor and at least one memory; the memory stores an execution program of color correction of the underwater image based on the attenuation coefficient; the processor may implement the method for color correction of an underwater image based on attenuation coefficients according to the first aspect when executing the execution program stored in the memory.
The third aspect of the present invention also provides a computer-readable storage medium having stored therein a computer-executable program for implementing the attenuation coefficient-based underwater image color correction method as described in the first aspect when executed by a processor.
Compared with the prior art, the present invention provides a method, a device and a storage medium for attenuation-coefficient-based color correction of underwater images. The degradation function and the system noise are merged into a single consideration, and a reasonable medium is found that expresses both at once, simplifying them; in theory, once a reasonable medium relation is established, the degree of degradation of the underwater true-color image can be expressed accurately, and accurate compensation based on that expression realizes the color correction of the underwater true-color image.
1. The invention starts from the instrument that records the underwater image, the image sensor, and explores the causes of underwater color distortion at their source by a holistic method. To capture the total energy loss of every process in underwater imaging, an attenuation coefficient of the image sensor is defined that covers the attenuation of light waves by the water body, ambient light, system noise, and other influencing factors. The attenuation coefficient of the image sensor is calculated from the linear underwater distorted image and the linear original image obtained with DCRaw.
2. The invention provides an underwater image color correction algorithm based on the attenuation coefficient of the image sensor. The whole imaging process of the camera is reproduced step by step in Matlab, giving control over the parameters of the entire imaging chain, and the nonlinear steps of pixel recording are removed so as to improve the accuracy of the energy-loss calculation during underwater image propagation. The linear recording deviation of the camera is compensated, indirectly and jointly compensating the water body, ambient light and other noise, and the compensated image is gamma-transformed to obtain an image that matches human visual perception. More accurate color correction of underwater images can therefore be performed under different environmental conditions.
3. The invention extends the application range of the algorithm and obtains a calculation method for image sensor attenuation coefficients at different distances. By combining the water attenuation coefficient measured with the AC-S and the CMOS spectral response characteristic curve of the image sensor, an expression for the image sensor attenuation coefficient is derived, image sensor attenuation coefficients at different distances are calculated, and color correction of underwater images at different distances in unreachable regions is realized.
4. The RGB attenuation coefficients are most stable when 18 pixel points are selected for obtaining the image pixel values; the actual area in space corresponding to the 18 pixel points is extremely small, so the attenuation due to scattering in the water body can be neglected, and accidental influence from the pixels surrounding a single pixel is avoided.
Drawings
Fig. 1 is a conventional underwater image degradation and restoration model.
FIG. 2 is an image sensor underwater attenuation imaging model of the present invention.
FIG. 3 is a flow chart of the overall method of the present invention.
Fig. 4 is a graph showing the spectral response characteristics of Nikon D5100.
FIG. 5 is a plot of the absorption, scattering, and attenuation coefficients measured by the AC-S.
FIG. 6 is a comparison of color correction results for underwater images in Class I water.
FIG. 7 is a comparison of color correction results for underwater images in Class II water.
FIG. 8 is a comparison of color correction results for underwater images in Class III water.
FIG. 9 is a schematic diagram of the apparatus in embodiment 2.
Detailed Description
The principle explanation about the water attenuation coefficient measurement is as follows:
the total attenuation of seawater is described by the beam attenuation coefficient c (λ), which is a linear combination of absorption coefficient and scattering coefficient, expressed as:
c(λ)=a(λ)+b(λ)
where a(λ) is the absorption coefficient, b(λ) is the scattering coefficient, and all three quantities have units of m^-1.
The attenuation coefficient of underwater light describes the exponential decay of non-scattered light, and on the direct axis of the light source, the intensity of light at z meters from the light source can be expressed according to beer's law as:
I=I0exp(-cz)
Where I0 is the light intensity at the target and z is the imaging distance. The combined term cz is a unitless parameter called the attenuation length (AL); when cz = 1, the path corresponds to exactly one attenuation length, and each attenuation length represents a loss of the beam by a factor of exp(-1).
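As a worked example with illustrative numbers: in water with c = 0.5 m^-1, a 2 m path gives cz = 1, exactly one attenuation length, so the non-scattered beam retains only exp(-1) ≈ 37% of its initial intensity.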
A restoration-based approach to color correction needs a clear restoration model, and how finely the model parameters simulate the environmental variables and internal noise often directly determines the authenticity and accuracy of underwater image color correction. The present invention takes a new path in model building; a conventional underwater image degradation and restoration model is shown in fig. 1.
Here f(x, y) is the actual image of the object and g(x, y) is the image after being affected by the degradation function and system noise. The degradation function H characterizes the attenuation of the water body and arises mainly from absorption, forward scattering and backward scattering; the noise η(x, y) is caused by the internal working mechanism of the system. Restoration must consider the influence of the degradation function H and the noise η(x, y) at the same time.
The invention merges the degradation function and the system noise into a single consideration and finds a reasonable medium that expresses both at once, thereby simplifying them. In theory, an accurate expression of the degradation degree of the underwater true-color image can be achieved simply by establishing a reasonable medium relation, and accurate compensation based on that expression realizes the color correction of the underwater true-color image.
The invention takes the image sensor as the medium and the image as its concrete expression, establishes an underwater attenuation imaging model, and characterizes the overall influence of the degradation function H and the noise η(x, y) on the ideal image f(x, y) as the degree of attenuation at the image sensor's pixels, as shown in fig. 2. In principle the attenuation coefficient of the image sensor involves every factor that must be compensated to restore the underwater true-color image, so the true image before degradation can be obtained by compensating the pixel data of the degraded image, and as long as the degree of attenuation of the image sensor is characterized accurately, restoration of the underwater true-color image can be realized accurately.
Definition of the image sensor attenuation coefficient:
The holistic method above explains the degree of attenuation of the image sensor qualitatively, but quantitative energy compensation of the image is needed to realize color correction of the underwater degraded image. According to Beer's law:
I=I0exp(-cz)
Since light is a form of energy, and it is the energy that must be compensated, the relation is written here in terms of energy:
E_0·exp(-c·d) = E_d
where E_0 represents the energy before the pixel is attenuated and E_d the energy after the overall attenuation has acted. Beer's law does not account for system noise, and representing the overall attenuation of the image degradation process directly by the water attenuation coefficient is clearly inaccurate.
In the image sensor attenuation model, to account for the overall attenuation at the image sensor, an image sensor attenuation coefficient c_s is defined such that
E_0·exp(-c_s·d) = E_d
c_s represents the energy loss mapped onto the whole attenuation process of the image sensor, and mainly includes the influence of the water-body degradation function and the background noise. Expressing the underwater attenuation characteristics with the sensor as the medium lays the foundation for color correction of the underwater image.
Method for calculating the attenuation coefficient of the image sensor:
Because light waves undergo absorption and scattering when propagating under water, their energy falls off sharply over a long distance, whereas propagation in air involves almost no absorption or attenuation. The energy after the light has travelled a distance d in air is therefore used in place of the energy before propagation in water; as long as the propagation distance d is fixed, the target subtends the same divergence angle at the camera lens, and the formula evolves into
E_air·exp(-c_s·d) = E_water
To guarantee the accuracy of the recorded energy data, Raw is used as the recording tool when measuring the attenuation coefficient of the image sensor. E_air is the energy recorded in Raw in air and E_water is the energy recorded in Raw in water. This formula makes it convenient to obtain the energies at the start and end of the underwater imaging process and thus to calculate the attenuation coefficient of the image sensor. E_air and E_water are obtained by photographing light sources at the same distance from the camera lens in water and in air respectively; if the imaging distance d is recorded accurately, the sensor attenuation coefficient can be measured as c_s = ln(E_air/E_water)/d.
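As an illustrative example with made-up values: if the recorded energies for one channel are E_air = 0.80 in air and E_water = 0.30 at an imaging distance d = 1 m under water, then c_s = ln(0.80/0.30)/1 ≈ 0.98 m^-1 for that channel.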
The invention will be further described with reference to specific examples.
Example 1:
As shown in fig. 3, the invention provides an underwater image color correction method based on attenuation coefficients, which comprises the following steps:
Step 1: acquire an underwater target image and an in-air target image at the same distance d, apply a unified linearization to the acquired original target images to obtain RAW linear images, and preprocess the RAW linear images with DCRaw to obtain tiff images that MATLAB can conveniently read and process;
Step 2: from the tiff images obtained in step 1, obtain the pixel energy value E_air recorded in the air and the pixel energy value E_water recorded in the water, and calculate the attenuation coefficient of the image sensor at distance d in the water body as

    E_air·exp(-C_s·d) = E_water, i.e. C_s = ln(E_air/E_water)/d

wherein e is the natural constant 2.71828 and C_s is the attenuation coefficient of the image sensor at distance d;
Step 3: estimate the attenuation coefficient of the image sensor in the unreachable region; the characteristic trend of the three-channel attenuation coefficient curve of the image sensor is fitted with the characteristic trend of the underwater attenuation coefficient curve measured by the AC-S, and the attenuation coefficient C_s at the fixed distance d obtained in step 2 is combined with the spectral response characteristic curve of the image sensor to obtain the attenuation coefficient C of the image sensor at a distance of α times d, according to formula (10) derived below, wherein d represents the imaging distance, T represents the transmittance of any channel of the image sensor, λ represents the wavelength, and c_i represents the attenuation coefficients of the water body in different wave bands;
Step 4: using the attenuation coefficient of the image sensor at the required distance calculated by the formula of step 3, compensate the original image with Beer's law, thereby completing the color correction of the original image.
The specific explanation about each step is as follows:
1. If the image source is a nonlinear image, such as JPEG format, it is first linearized; if the image source is a Raw linear image, the Raw original image is preprocessed directly with DCRaw to obtain a tiff image that MATLAB can conveniently read and process.
1. Regarding the Raw characteristics:
The Raw linear image consists of the image sensor's pixel data together with metadata generated by the camera. The metadata describes how the image was captured, and when shooting Raw the various capture parameters can be adjusted freely. Setting aside the color filter array, a Raw file contains only the individual values recorded at each pixel and is ultimately a gray-scale image; unlike an ordinary gray-scale image, however, the Raw file records color information indirectly through the arrangement of the color filters. Some image applications can display a Raw image directly with colors visible to the human eye, but only because the program internally performs color interpolation, gamma conversion and similar processing by default; this chain of processing caters to the human eye and does not show the true original appearance of the Raw data.
Raw contains the most original information of the scene captured by the camera, with no white balance, brightness adjustment, demosaicing, gamma correction or other scene-specific processing applied, which is why Raw original data is also called the "electronic negative".
Recording the Raw image gives the fullest possible control over the image: when processing a Raw image, every one of the above processing steps can be manipulated.
When a JPEG image is taken, the camera by default performs all of the aforementioned operations to generate a color image directly and then compresses it with JPEG compression. Although some cameras expose proprietary parameters for this conversion, they are limited to choices such as color space or tone curve, and the photograph still depends on the shooting environment. JPEG does not record the scene illuminance data as such and has in effect already altered the actual color of the scene.
The pixel recording of Raw is linear: the pixel values correspond linearly to the intensity of the recorded energy. This is an essential requirement for measuring underwater attenuation and a direct expression of the advantage of the present color correction algorithm.
Raw data also has a larger bit depth than JPEG and similar formats. Today's image sensors can capture 12 bits of data per pixel, up to 4096 levels per channel; the larger the bit depth, the more accurately the scene illuminance information is reflected, whereas the bit depth of each pixel in a JPEG-format photograph is limited to 8 bits.
In the Raw recording process, the camera parameters that affect the captured pixel data are the automatic exposure gain, the shutter speed, and the aperture size. During the experiments the camera is set to the M (manual) mode, in which the shutter time and aperture size can be selected manually; the remaining parameters are controlled when the Raw converter converts the Raw image, achieving full control of the image recording process. If nonlinear processing is avoided throughout the recording of the image, the Raw data recorded by the image sensor can be considered to reflect the most realistic scene illuminance information.
Raw original data therefore gives the truest reflection of scene information and is of great value for measuring the attenuation of underwater optical signals. To reflect the underwater attenuation of light waves accurately and truthfully, the pixels of the Raw original image are selected as the basic data for processing and calculation.
2. Regarding JPEG image linear processing:
Gamma transformation is a nonlinear image processing step whose main purpose is to make the image match human visual perception. A gamma-transformed image is nonlinear: the data recorded at its pixels no longer map linearly to the actual illuminance of the underwater scene, so the image cannot be compensated directly. The nonlinear image must first undergo an inverse gamma transformation; only after the image has been linearized can attenuation compensation, and hence color correction, be performed. Gamma transformation and its inverse both presuppose normalized data, whereas everyday photographs are mostly non-normalized 8-bit-depth images, so normalization is required before the inverse gamma transformation.
Let a be the pixel value of the image before gamma transformation and before normalization, z the recorded pixel value of the image, s the maximum saturation value, and b the black-level pixel value; z is the gamma-encoded counterpart of a. Normalizing the acquired pixel value gives

    z_norm = (z - b) / (s - b)

and applying the inverse gamma transformation to the normalized value gives

    A = z_norm^γ

where γ is the gamma exponent of the original encoding. The resulting A is an image whose pixel values correspond linearly to the recorded energy.
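A minimal Python sketch of this linearization, assuming an 8-bit JPEG-like input with black level b = 0, saturation s = 255, and an assumed gamma of 2.2:

    import numpy as np

    def linearize(z, black_level=0.0, saturation=255.0, gamma=2.2):
        # Normalize by the black level b and the saturation value s, then undo the
        # gamma encoding so pixel values map linearly to the recorded energy.
        z = np.asarray(z, dtype=np.float64)
        z_norm = (z - black_level) / (saturation - black_level)
        z_norm = np.clip(z_norm, 0.0, 1.0)
        return z_norm ** gamma        # A = ((z - b) / (s - b)) ** gamma

    # Example: an 8-bit mid-gray value of 128 maps to roughly 0.22 in linear energy.
    print(linearize(128))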
3. Regarding Raw linear image preprocessing:
DCRaw is a Raw development tool written by David Coffin that supports conversion of the Raw format. DCRaw can be used on Linux, Windows and Mac systems, and it is operated in exactly the same way on all three, which improves the program's compatibility to a certain extent. DCRaw needs no installation: the executable file is simply copied to the appropriate path and then run from a terminal console.
Some front-end interfaces, such as UFRaw, make DCRaw easier to use, but from a developer's point of view the limited practicality and flexibility of front-end interfaces restricts what can be done with DCRaw. As for the commands, once the official list of DCRaw options has been consulted and the meaning of each option mastered, they can be used without difficulty. In addition, DCRaw receives functional updates almost every month, so any front-end interface quickly becomes outdated. Taken together, using DCRaw directly is the most convenient and fast way to handle Raw image conversion, so DCRaw is used directly as the Raw image converter without any front-end interface.
DCRaw on its own is not a friendly tool, nor is it an ideal choice for routine image processing. What DCRaw does offer is transparent control of the image together with powerful functionality, which makes it an ideal tool for demanding Raw processing tasks.
DCRaw can perform almost all of the work that other Raw processing tools can do, and, as a tool that can deliver the original image without applying any processing to it, it guarantees high-quality development under absolute control. DCRaw can also perform tasks that commercial Raw processors such as Adobe Camera Raw cannot. In summary, DCRaw is a conversion program that gives the operator absolute control over Raw images.
The system with the best compatibility with DCRaw is Windows XP. Physical machines in normal use today are essentially all 64-bit, whereas Windows XP normally has to run on a 32-bit machine, and the later Windows XP edition claimed to run on 64-bit machines is too unstable to be suitable for development and research, so running DCRaw under the original Windows XP system is the most stable way to preprocess the images. However, 32-bit computers were phased out years ago and are now hard to find on the market; most surviving 32-bit machines are antiques with low running speed and little memory, unsuitable as a DCRaw development platform.
Considering all of the above factors, DCRaw is run inside a virtual machine, which avoids the incompatibility problems brought by the operating system.
An original Windows XP image is installed in a VMware virtual machine; VMware automatically configures the virtual machine as a 32-bit machine according to the installed system, and only 512 MB of running memory and 40 GB of storage need to be allocated to the 32-bit virtual machine.
In the data processing flow of the attenuation measurement, DCRaw is used to develop the Raw original data in a Windows environment, and the operation of DCRaw under Windows is described here; the procedure in other environments is largely the same with only minor differences, so it is not repeated.
The Windows version of DCRaw needs no installation and consists of a single compact console application file named dcraw.exe, which can be copied directly into the C:\WINDOWS\ path so that it can be invoked from any folder on the command line.
A further reason for choosing DCRaw as the Raw original image converter is that, compared with some commercial Raw converters, it not only allows in-depth study of the color interpolation applied in different fields but also yields tiff files of higher quality and lower noise. DCRaw supports a range of functions unavailable in most commercial Raw converters and is of great value for measuring underwater light attenuation.
After the above preparation is made, the conversion of Raw to a tiff format that can be recognized and processed by matlab begins.
To meet the attenuation measurement requirements, DCRaw is instructed to prohibit any nonlinear processing of the Raw data, and the final instruction entered is:
dcraw -4 -T -D -v path\filename
This completes the preprocessing of the image and yields a tiff image that MATLAB can read conveniently.
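For scripted use, the same conversion can be driven from a script and the resulting 16-bit tiff loaded for analysis; the sketch below uses the flags from the instruction above (-4 linear 16-bit, -T tiff output, -D document mode without demosaicing, -v verbose), while the file names and the tifffile package are assumptions:

    import subprocess
    import numpy as np
    import tifffile   # assumed TIFF reader; any 16-bit-capable reader works

    raw_file = "DSC_0001.NEF"                                   # placeholder Raw file name
    subprocess.run(["dcraw", "-4", "-T", "-D", "-v", raw_file], check=True)

    # dcraw writes the converted image next to the input with a .tiff extension.
    linear = tifffile.imread("DSC_0001.tiff").astype(np.float64)
    print(linear.shape, linear.dtype, linear.max())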
2. Measuring the attenuation coefficient of the image sensor at a fixed distance.
Because light waves undergo absorption and scattering when propagating under water, their energy falls off sharply over a long distance, whereas propagation in air involves almost no absorption or attenuation. The energy after the light has travelled a distance d in air is therefore used in place of the energy before propagation in water; as long as the propagation distance d is fixed, the target subtends the same divergence angle at the camera lens, and the formula evolves into
E_air·exp(-c_s·d) = E_water
From the tiff images obtained in step 1, the pixel energy value E_air recorded in air and the pixel energy value E_water recorded in water are obtained, where e is the natural constant 2.71828. To guarantee the accuracy of the recorded energy data, the high-bit-depth tiff obtained in the first step is used as the recording tool when measuring the attenuation coefficient of the image sensor. The formula makes it convenient to obtain the energies at the start and end of the underwater imaging process and thus to calculate the attenuation coefficient of the image sensor. E_air and E_water are obtained by photographing light sources at the same distance from the camera lens in water and in air respectively; if the imaging distance d is recorded accurately, the sensor attenuation coefficient can be measured as c_s = ln(E_air/E_water)/d.
To ensure that the attenuation coefficient of the image sensor reflects the energy attenuation of the whole underwater scene, the obvious approach is to select all pixels of the whole photograph and average them, but this fails to account for all of the water body's attenuation effects.
The attenuation of photons propagating in a water body divides into absorption and scattering, and the scattering is mostly small-angle scattering. If the whole photograph is taken as the object of study, it maps to a rather large area of the actual scene. Photons that should have been received by certain pixels are, after small-angle scattering, no longer received by those pixels and are counted as attenuated; but because the studied object covers a large area, many photons that should have arrived at one pixel are, after small-angle scattering, received by neighbouring pixels, which strengthens the recorded energy there. Calculating the attenuation this way amounts to neglecting the scattering experienced by most photons in the water body.
To measure the energy loss accurately, one would in theory measure, at a specific fixed point of the real scene, the energy after traversing the distance d in water and the energy without traversing it. In air this idea is easy to realize: it suffices to select a pixel at a fixed position within the roughly fifteen-million-pixel array of the high-resolution image sensor to obtain an exact correspondence. In water, however, it is not easy to select precisely the corresponding pixel from so many pixels.
There are three ways to select the corresponding pixel:
1. Directly selecting the pixel at the same position in the pixel array
The image is imported into Matlab and the pixel at the corresponding array position is selected. In the experiments, however, although the image sensor can be fixed in position and the light source rigidly connected to the guide rail, the sensor attenuation coefficients must be measured at different distances, and tiny changes in the relative position of the light source are unavoidable when the rail is moved. Moreover, refraction occurs when the light source is photographed in the water body, so the spatial positions of the sensor pixels in air and in water cannot be made to correspond exactly.
2. Selecting the locally brightest pixel of the image
Although this approach removes the refraction problem in the water body, the light source is only an approximately ideal diffuse reflector whose brightness cannot be exactly uniform in every direction at every spatial position, so the tiny positional deviations produced when the guide rail is moved still strongly affect the pixel recording. Even if exact correspondence could be achieved, another drawback remains: when the brightest pixel is chosen as the measurement object, there is a certain probability that photons which should have been received by pixels near the brightest pixel are, after scattering through a small angular deviation, received by the brightest pixel itself, making the attenuation measurement somewhat inaccurate; attenuation measured this way shows large errors across repeated measurements, and a single-pixel calculation is also severely affected by the dark current generated while the image sensor operates.
To avoid such chance events and to take the scattering of photons in the water body into account, the attenuation is measured by selecting several of the brighter pixels.
3. Selecting several locally brightest pixels of the image
An appropriate number of pixels is chosen: after the pixel data recorded by the image sensor are acquired, the pixel values of the whole photograph are sorted separately by the magnitude of each of the three channel values, and the points with the largest values in each channel are taken to measure the attenuation coefficient of the image sensor.
Theory and practice always differ somewhat, and a large number of comparison experiments show that the measured attenuation coefficient of the image sensor is most stable when 18 pixels are selected. The actual area in space corresponding to 18 pixels is extremely small, so the attenuation due to scattering in the water body is negligible and the chance influence of the pixels surrounding a single pixel is avoided; the number of selected pixels is therefore finally set to 18.
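A Python sketch of this fixed-distance measurement under stated assumptions: the in-air and underwater tiff images are assumed to have already been arranged into H x W x 3 linear RGB arrays (DCRaw's -D output is an undemosaiced mosaic, so separating the channels is an extra step), the file names are placeholders, and tifffile is an assumed dependency:

    import numpy as np
    import tifffile

    def top18_channel_energy(rgb_linear):
        # For each channel, average the 18 largest pixel values (the 18 points
        # chosen above), giving one energy value per channel.
        flat = rgb_linear.reshape(-1, 3)
        return np.array([np.sort(flat[:, ch])[-18:].mean() for ch in range(3)])

    d = 0.5  # imaging distance in metres (example value)
    E_air = top18_channel_energy(tifffile.imread("light_in_air.tiff") / 65535.0)
    E_water = top18_channel_energy(tifffile.imread("light_in_water.tiff") / 65535.0)

    # E_water = E_air * exp(-c_s * d)  =>  c_s = ln(E_air / E_water) / d
    c_s = np.log(E_air / E_water) / d
    print("per-channel sensor attenuation coefficients (m^-1):", c_s)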
3. The image sensor attenuation coefficient of the unreachable area is estimated.
In order to expand the application range, the thought is used as a reference from a professional attenuation measuring instrument, the light sensation characteristic of the attenuation measuring instrument is compared and analyzed with the light sensation characteristic of the image sensor, and finally, the calculation of the attenuation coefficients of the image sensors with different distances is realized by starting with a spectral response characteristic curve.
The AC-S attenuator has a large difference in light-sensing characteristics from the image sensor, and the same difference exists between different channel sensitivities.
The AC-S attenuator is used as a professional level precise attenuation measuring instrument, and technical parameters in all aspects are strictly controlled. Spectral range is 400-730nm, band pass is 15nm per channel, measuring range: 0.001-10m -1, the optical path can be set to be in two modes of 10cm or 25cm, the diameter of the beam section is 8mm, and the spectral resolution is set to be 4nm for being close to the most realistic attenuation result. The accuracy can reach +/-0.01 m -1.
The optical sensor has excellent spectral performance, but the quantum efficiency is far less than 100%, in order to realize accurate measurement, a photon compensation algorithm is arranged in an AC-S instrument, partial photons which do not realize conversion and generate photoelectric effect are compensated and counted, the spectral resolution is 4nm, the band division is extremely fine, and therefore the purpose of accurately measuring the attenuation coefficient of the water body is achieved.
The main influencing parameters of the current CMOS image sensor are spectral response characteristics and quantum efficiency, the spectral response range of the imaging device of the CMOS image sensor is determined by the material of the photosurface, and the spectral response range of intrinsic silicon is between 400 nm and 1100 nm.
The spectral performance and quantum efficiency of CMOS imaging devices depend on the image-sensitive cell, the photodiode. The spectral response characteristics of photodiodes and the quantum efficiency of devices are generally affected by factors such as light reflection from the surface of the device, light interference, differences in the transmittance of light through the surface layer, and photoelectron recombination, and the quantum efficiency is generally less than 100%. In addition, since device impact varies with wavelength, quantum efficiency also varies with wavelength.
The quantum efficiency of the CMOS imaging device of Nikon D5100 is only about 60% in the 400nm-700nm wave band on average. Because the CMOS imaging device of the image sensor does not use any photon compensation algorithm, namely, the Raw data is directly utilized to carry out attenuation measurement, the method is equivalent to that photons which do not generate photoelectric effect on the CMOS imaging device of the image sensor are also calculated into the photon category of attenuation caused by absorption or scattering in the water body, and the method is a main reason that the attenuation coefficient value of the image sensor is greatly different from the attenuation coefficient value of the water body.
Therefore, the image sensor attenuation coefficient used in the invention differs from the water attenuation coefficient in the strict sense and cannot reflect the inherent characteristics of the water body; rather, it reflects the attenuation characteristics of the water body as seen through the optical characteristics of the CMOS imaging device of the image sensor. The coefficient is therefore not universal and can only be applied to the color correction of underwater true color images based on the image sensor attenuation coefficient.
The spectral response curve of the image sensor of the Nikon D5100 camera used in the present invention is shown in fig. 4.
Observing the normalized spectral response characteristic curves of the image sensor, the transmittance of all three channels is approximately 0 outside the 400 nm-700 nm band and greater than 0 within it, so only the 400 nm-700 nm band needs to be studied when calculating the image sensor attenuation coefficients at different distances. Following the AC-S calculation method, the image sensor attenuation coefficient is interpolated band by band.
When the measured distance is d, assuming that the spectral response curves of the three channels of the image sensor are divided into n segments in total, then:
where a_m is the spectral response coefficient of the m-th segment; any a_m can be expressed as:
wherein T is the transmittance of the spectral response curve, and λ_n and λ_{n+1} are respectively the start and end wavelengths of the n-th segment;
The underwater attenuation coefficient differs between wave bands. Observing the underwater attenuation coefficients of different bands measured by the AC-S, the shapes of the attenuation coefficient curves are found to be roughly similar across different water-quality environments. The characteristic trend of the underwater attenuation coefficient curve measured by the AC-S is therefore adopted to fit the characteristic trend of the three-channel attenuation coefficient curve of the image sensor; for any band c_m within 400 nm-700 nm:
Wherein c_acs is the underwater attenuation coefficient measured by the AC-S; the curves of the absorption, scattering and attenuation coefficients of the water body in different wave bands measured by the AC-S are shown in figure 5.
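For readability, the following is a hedged LaTeX sketch of one plausible reading of formulas (2) and (3), based only on the surrounding definitions; the original equation images are not reproduced in the text, so the exact normalization is not recoverable. Here the spectral response coefficient of a segment is taken as the integrated transmittance over that segment, and each band attenuation coefficient is taken as the AC-S water attenuation value scaled by an assumed channel-dependent factor k_ch fixed by the measured coefficient C_s at the known distance d:

$$a_m \;\approx\; \int_{\lambda_m}^{\lambda_{m+1}} T(\lambda)\,\mathrm{d}\lambda, \qquad c_m \;\approx\; k_{\mathrm{ch}} \cdot c_{acs}(\lambda_m).$$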
Next, consider the case in which the attenuation coefficient of the image sensor at a known underwater imaging distance d is available and is applied to attenuation compensation at an arbitrary distance. Let the three-channel attenuation coefficient of the image sensor at the known distance d be
Based on d, consider the cases in which the distance is d, 2d, 3d, ..., αd in turn, and construct a system of equations:
For convenience, assume α ∈ Z, so that the system contains α equations in total. Examining all terms on the left-hand side of the equations, they can be regarded as an α×n matrix in which all the elements of the same column form a geometric series with a fixed common ratio;
Add the α equations of equation set (4) in turn and denote the sum as B_α;
Then there are:
The first (α-1) equations of equation set (4) are added in turn, and the sum is denoted B_{α-1}:
The difference between the two expressions then gives:
substituting the result of equation (3) into (7) yields:
The fractional term in each bracket of the above formula can be simplified by cancelling the common factor:
In formula (10), c_i, d and T are constants: d denotes the imaging distance, T denotes the transmittance of any channel of the image sensor, λ denotes the wavelength, and c_i denotes the attenuation coefficient of the water body in the corresponding band; c is the attenuation coefficient of the image sensor when the distance is α times d.
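The summation-and-difference step above can be made concrete with a short worked sketch. Treating the m-th column of the α×n matrix as a geometric series with ratio r_m = e^{-c_m d} and per-column coefficient a_m (an assumed form consistent with the surrounding description; the original equation images are not reproduced here), the two sums telescope so that only the terms at distance αd remain:

$$B_\alpha=\sum_{m=1}^{n}a_m\sum_{k=1}^{\alpha}r_m^{\,k}=\sum_{m=1}^{n}a_m\,\frac{r_m\left(1-r_m^{\,\alpha}\right)}{1-r_m},\qquad B_\alpha-B_{\alpha-1}=\sum_{m=1}^{n}a_m\,r_m^{\,\alpha}=\sum_{m=1}^{n}a_m\,e^{-c_m\alpha d}.$$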
4. Image color correction is performed according to the image sensor attenuation coefficient at the specific distance obtained in step 3.
With the estimation of the image sensor attenuation coefficient completed, a sufficient condition is provided for color correction of underwater degraded images in unreachable areas. Whether the image is the tiff obtained by preprocessing Raw data or the linear image obtained by linearizing a JPEG in step 1, it can be compensated directly by Beer's law using the sensor attenuation coefficient (either the attenuation coefficient of the measurable area from step 2 or the attenuation coefficient calculated for the unreachable area in step 3).
Let the pixel value of the original image to be corrected be A; the pixel value is compensated on the basis of A, and the compensated value is denoted B. According to the Beer's law relation:
To meet the visual requirements of the human eye, the compensated value must be gamma-converted again to obtain the pixel value of the final in-air photograph:
where e is the natural constant 2.71828 and l is the distance. Thus the restoration compensation of the underwater degraded image is completed, and the color correction of the underwater unreachable area is realized.
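A minimal sketch of this compensation step follows, assuming linearized pixel values in [0, 1], a per-channel attenuation coefficient, and a display gamma of 2.2; the gamma value, the clipping, and the function name are assumptions and are not specified by the patent.

```python
import numpy as np

def beer_law_color_correction(linear_img, c, distance, gamma=2.2):
    # linear_img: H x W x 3 linearized image A; c: per-channel attenuation
    # coefficient of the image sensor; distance: imaging distance l.
    # Beer's law compensation: B = A * exp(c * l), applied per channel.
    compensated = linear_img * np.exp(np.asarray(c)[None, None, :] * distance)
    compensated = np.clip(compensated, 0.0, 1.0)
    # Re-apply gamma so the result is viewable like an in-air photograph.
    return compensated ** (1.0 / gamma)
```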
The CIEDE2000 color difference evaluation system is introduced to analyze the image correction result of the invention:
In each ocean body, the inherent optical characteristics are influenced by the longitude and latitude and the composition and concentration of the contained particles. Scientific researchers quantify the turbidity of a water body according to attenuation coefficients and divide ocean water bodies into the following four types:
(1) Pure seawater. Pure seawater is a water body composed almost entirely of water molecules, containing only small amounts of major and trace elements such as sodium, potassium, calcium and silicon ions. Ideal pure seawater does not exist in nature.
(2) Type I water. On the basis of pure seawater, this water additionally contains a certain amount of dissolved inorganic salts. Within the 50-200 m depth range of clear seawater more phytoplankton are distributed, but the water remains relatively clear; water bodies of this type tend to be found in the deep ocean.
(3) Type II water. Type II water contains a higher proportion of photosynthetic particles than pure seawater and Type I water, and its attenuation is correspondingly stronger than in both.
(4) Type III water. This is the most turbid water in the standard water model; it contains a large amount of dissolved and suspended particles and is a mixture of organic particles, such as plankton and organic debris, with suspended inorganic particles such as sediment and loess. The strong absorption and scattering severely limit beam transmission, and color correction of images in such water bodies is a major difficulty.
TABLE 1. Typical coefficient values for the three types of water bodies
These three water types are the ones commonly used in current underwater optical research, and the study of underwater image color correction here is based on them.
Through analysis and testing of existing color difference formulas and visual evaluation data, the CIE technical committee proposed a new color evaluation formula, CIEDE2000, in 2000, and it was recommended by the International Commission on Illumination in 2001; the smaller its value, the smaller the perceived color difference.
The CIEDE2000 color difference formula is as follows:
This color difference model is more refined and involves more parameters; the expressions of the parameters are as follows:
X, Y, Z are the tristimulus values of the target, and X_0, Y_0, Z_0 are the tristimulus values of the standard illuminant irradiating a perfect diffuse reflector and reflected into the observer's field of view, where f is a piecewise function:
L′ = L*
a′ = (1 + G)·a*
b′ = b*
where G represents the CIELAB adjustment factor.
ΔL′ = L′_1 − L′_2
ΔC′_ab = C′_ab,1 − C′_ab,2
R_T = −sin(2Δθ)·R_C
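Rather than re-implementing the full formula, the evaluation can be sketched with an existing CIEDE2000 implementation; the sketch below uses scikit-image purely as an assumed convenience, since the patent does not state which implementation was used for the comparisons.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_ciede2000(corrected_rgb, ideal_rgb):
    # Both inputs: H x W x 3 RGB arrays with values in [0, 1].
    # Convert to CIELAB, then average the per-pixel CIEDE2000 color difference.
    lab1 = rgb2lab(corrected_rgb)
    lab2 = rgb2lab(ideal_rgb)
    return float(np.mean(deltaE_ciede2000(lab1, lab2)))
```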
The color correction results for underwater images in the three types of water under this evaluation system are shown in figs. 6 to 8. Overall, the color correction method and the distance calculation method of the invention achieve results superior to other current algorithms: at the same imaging distance, the color difference between the processed image and the ideal image is smaller than that obtained by other restoration algorithms, so statistically the corrected image is closest to the ideal image. Both methods make reasonable use of the higher bit depth of Raw data and restore the three-channel pixel values closer to those of the ideal image.
Qualitatively, the algorithm of Hu et al. produces obvious halos that seriously affect the hue of the image regions; the image restored by the algorithm of Berman et al. is darker overall, so its difference from the ideal image in LAB space is larger; the algorithm of Gong et al. obtains good restoration results, and at long distances in Type III water its color difference is smaller than that of the distance calculation algorithm, indicating stronger noise resistance. Both the distance calculation algorithm and the color correction algorithm of this work perform well in close-range imaging scenes, and the color correction algorithm for the detectable region together with the distance calculation algorithm for the non-detectable region give good color correction of close-range underwater true color degraded images. As the imaging distance grows and the water quality worsens, the color correction of the true color degraded image by the algorithm of the invention performs worse than the algorithm of Gong et al. and gradually deviates from the curve of the color correction algorithm, indicating that the deviation of the image sensor attenuation coefficient calculation model becomes more pronounced at long range.
Further quantitative analysis shows that the distances at which the distance calculation algorithm begins to deviate from the curve of the color correction algorithm in Type I, II and III water are approximately 4.6 m, 3.8 m and 3.2 m respectively. In Type I water, the restoration result of the distance calculation algorithm is better than that of the Gong et al. algorithm at all tested imaging distances; in Type II water, it is better at imaging distances of 0-3.5 m, while at 3.5-5 m its color difference is larger than that of Gong et al.; in Type III water, it is better at 0-2.6 m and worse at 2.6-5 m.
Example 2:
As shown in fig. 9, the present invention further provides an underwater image color correction device based on attenuation coefficients. The device includes at least one processor and at least one memory, as well as a communication interface and an internal bus; the memory stores the executable program code for attenuation-coefficient-based underwater image color correction; when the processor executes the program stored in the memory, the underwater image color correction method based on attenuation coefficients described in embodiment 1 can be realized.
Wherein the internal bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the bus in the drawings of the present application is not limited to only one bus or one type of bus. The memory may include a high-speed RAM memory, may further include a nonvolatile memory (NVM) such as at least one magnetic disk memory, and may also be a U-disk, a removable hard disk, a read-only memory, a magnetic disk, or an optical disk.
The processor includes one or more general-purpose processors that execute the various functional modules by invoking program code in the memory. The general-purpose processor may be any type of device capable of processing electronic instructions, including a Central Processing Unit (CPU), a microprocessor, a microcontroller, a main processor, a controller, an Application Specific Integrated Circuit (ASIC), and so on. The processor reads the program code stored in the memory and cooperates with the communication interface to perform all the steps of the method of the above-described embodiments of the application.
The communication interface may be a wired interface (e.g., an ethernet interface) for communicating with other computing nodes or users. When the communication interface is a wired interface, the communication interface may employ a protocol family over TCP/IP, such as RAAS protocol, remote function call (Remote Function Call, RFC) protocol, simple object access protocol (Simple Object Access Protocol, SOAP) protocol, simple network management protocol (Simple Network Management Protocol, SNMP) protocol, common object request broker architecture (Common Object Request Broker Architecture, CORBA) protocol, and distributed protocol, among others.
The device is in the form of a general purpose computing device, which may be provided as a terminal, server or other form of device.
Example 3:
The present invention also provides a non-transitory computer-readable storage medium in which a computer-executable program is stored; when executed by a processor, the program implements the attenuation-coefficient-based underwater image color correction method described in embodiment 1.
In particular, a system, apparatus or device may be provided with a readable storage medium on which software program code implementing the functions of any of the above embodiments is stored, and the computer or processor of that system, apparatus or device is caused to read and execute the instructions stored in the readable storage medium.
In this case, the program code itself read from the readable medium may implement the functions of any of the above-described embodiments, and thus the machine-readable code and the readable storage medium storing the machine-readable code form part of the present invention.
The storage medium may be implemented by any type or combination of volatile or nonvolatile memory devices, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW), magnetic tape, and the like. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer.
It should be understood that the above processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in a processor for execution.
It should be understood that the storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an Application Specific Integrated Circuit (ASIC). The processor and the storage medium may also reside as discrete components in a terminal or server.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer readable program instructions, and the electronic circuitry can execute the computer readable program instructions.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
While the foregoing describes the embodiments of the present invention, it should be understood that the present invention is not limited to the embodiments, and that various modifications and changes can be made by those skilled in the art without any inventive effort.

Claims (7)

1. An underwater image color correction method based on attenuation coefficients is characterized by comprising the following steps:
Step 1, acquiring an underwater target image and an above-water target image at the same distance d respectively, carrying out unified linearization processing on the acquired original target images to obtain Raw linear images, and preprocessing the Raw linear images with DCRaw to obtain tiff images that are convenient for MATLAB to read and process;
Step 2, according to the tiff image obtained in step 1, obtaining the pixel energy value E_air recorded in the air and the pixel energy value E_water recorded in the water, and calculating the attenuation coefficient of the image sensor at the underwater distance d, wherein the calculation formula is as follows:
wherein e is the natural constant 2.71828, and C_s is the attenuation coefficient of the image sensor when the distance is d;
Step 3, estimating the attenuation coefficient of the image sensor in the unreachable area: the characteristic trend of the underwater attenuation coefficient curve measured by the AC-S is adopted to fit the characteristic trend of the three-channel attenuation coefficient curve of the image sensor, and the attenuation coefficient C_s at the fixed distance d obtained in step 2 is combined with the spectral response characteristic curve of the image sensor to obtain the attenuation coefficient C of the image sensor when the distance is α times d, wherein the calculation formula is as follows:
wherein d represents the imaging distance, T represents the transmittance of the spectral response curve, λ represents the wavelength, and c_i represents the water attenuation coefficients of different wave bands;
Step 4, calculating the attenuation coefficient of the image sensor at a specific distance according to the formula in step 3, and compensating the original image using Beer's law, thereby completing the color correction of the original image.
2. The method for color correction of an underwater image based on attenuation coefficients as claimed in claim 1, wherein, when the original target image in step 1 is in a nonlinear image format, the specific process of linearization is as follows:
Assuming that the obtained original image is not subjected to gamma transformation and the pixel value before normalization is a, the pixel value of the image is z, the maximum saturation value is s, and the black pixel value is b, the image relationship is:
normalizing the pixel value of the acquired image to obtain:
assuming that the pixel value obtained by performing inverse gamma conversion on the normalized pixel value is A, then:
At this time, the obtained A is an image in which the pixel values correspond linearly to energy.
3. The method for color correction of an underwater image based on attenuation coefficients as claimed in claim 1, wherein: the pixel energy values of the image in step 2 are acquired by sorting the pixel values of the whole photograph separately according to the three channel values and taking, for each channel, the 18 points with the largest values to measure the attenuation coefficient of the image sensor.
4. The method for correcting colors of an underwater image based on attenuation coefficients as claimed in claim 1, wherein the specific process of estimating the attenuation coefficients of the image sensor in the unreachable area in the step 3 is as follows:
When the measured distance is d, assuming that the spectral response curves of the three channels of the image sensor are divided into n segments in total, then:
where a_m is the spectral response coefficient of the m-th segment; any a_m can be expressed as:
wherein T is the transmittance of the spectral response curve, and λ_n and λ_{n+1} are respectively the start and end wavelengths of the n-th segment;
The characteristic trend of the underwater attenuation coefficient curve measured by the AC-S is adopted to fit the characteristic trend of the three-channel attenuation coefficient curve of the image sensor; for any band c_m within 400-700 nm:
wherein c_acs is the underwater attenuation coefficient measured by the AC-S;
Considering the case in which the attenuation coefficient of the image sensor at a known underwater imaging distance d is applied to attenuation compensation at an arbitrary distance, the three-channel attenuation coefficient of the image sensor at the known distance d is set as
Based on d, the cases in which the distance is d, 2d, 3d, ..., αd are considered in turn, and a system of equations is constructed:
Assuming α ∈ Z, so that the system contains α equations in total, and examining all terms on the left-hand side of the equations, they can be regarded as an α×n matrix in which all the elements of the same column form a geometric series with a fixed common ratio;
The α equations of equation set (4) are added in turn and the sum is denoted B_α;
Then there are:
The first (α-1) equations of equation set (4) are added in turn, and the sum is denoted B_{α-1}:
The difference between the two expressions then gives:
substituting the result of equation (3) into (7) yields:
The fractional term in each bracket of the above formula can be simplified by cancelling the common factor:
In formula (10), c_i, d and T are constants: d represents the imaging distance, T represents the transmittance of any channel of the image sensor, λ represents the wavelength, c_i represents the attenuation coefficients of the water body in different wave bands, and c is the attenuation coefficient of the image sensor when the distance is α times d.
5. The method for color correction of an underwater image based on attenuation coefficients as claimed in claim 2, wherein the process of image color correction in step 4 is specifically as follows: assuming that the pixel value obtained by performing inverse gamma conversion on the normalized pixel value to be corrected is A, the pixel value is compensated on the basis of A, the compensated value is denoted B, and according to the Beer's law relation:
to meet the visual requirements of the human eye, the compensated value needs to be gamma-converted again to obtain the pixel value of the final in-air photograph:
Where e is the natural constant 2.71828 and l is the distance between the optical sensor and the target.
6. An underwater image color correction device based on attenuation coefficients, characterized in that: the apparatus includes at least one processor and at least one memory; the memory stores an execution program of color correction of the underwater image based on the attenuation coefficient; the processor, when executing the execution program stored in the memory, can implement the method for correcting color of an underwater image based on attenuation coefficients as set forth in any one of claims 1 to 5.
7. A computer-readable storage medium, in which a computer-executable program is stored, which when executed by a processor is adapted to implement the attenuation coefficient-based underwater image color correction method as claimed in any one of claims 1 to 5.
CN202210549452.XA 2022-05-20 2022-05-20 Underwater image color correction method based on attenuation coefficient Active CN114792294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210549452.XA CN114792294B (en) 2022-05-20 2022-05-20 Underwater image color correction method based on attenuation coefficient

Publications (2)

Publication Number Publication Date
CN114792294A CN114792294A (en) 2022-07-26
CN114792294B true CN114792294B (en) 2024-07-16

Family

ID=82463662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210549452.XA Active CN114792294B (en) 2022-05-20 2022-05-20 Underwater image color correction method based on attenuation coefficient

Country Status (1)

Country Link
CN (1) CN114792294B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564543A (en) * 2018-04-11 2018-09-21 长春理工大学 A kind of underwater picture color compensation method based on electromagnetic theory
CN110415178A (en) * 2019-06-06 2019-11-05 长春理工大学 A kind of underwater picture clarification method estimated based on electromagnetic wave energy residue ratio and bias light

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10885611B2 (en) * 2016-04-07 2021-01-05 Carmel Haifa University Economic Corporation Ltd. Image dehazing and restoration

Also Published As

Publication number Publication date
CN114792294A (en) 2022-07-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant