CN113850747A - Underwater image sharpening processing method based on light attenuation and depth estimation - Google Patents

Underwater image sharpening processing method based on light attenuation and depth estimation Download PDF

Info

Publication number
CN113850747A
CN113850747A CN202111151760.9A CN202111151760A CN113850747A CN 113850747 A CN113850747 A CN 113850747A CN 202111151760 A CN202111151760 A CN 202111151760A CN 113850747 A CN113850747 A CN 113850747A
Authority
CN
China
Prior art keywords
image
underwater
processed
depth
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111151760.9A
Other languages
Chinese (zh)
Other versions
CN113850747B (en
Inventor
陈芬
史启超
彭宗举
蒋东荣
雷晨阳
张鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Technology
Original Assignee
Chongqing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Technology filed Critical Chongqing University of Technology
Priority to CN202111151760.9A priority Critical patent/CN113850747B/en
Publication of CN113850747A publication Critical patent/CN113850747A/en
Application granted granted Critical
Publication of CN113850747B publication Critical patent/CN113850747B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an underwater image sharpening processing method based on light attenuation and depth estimation, which comprises the following steps: s1, acquiring an underwater image to be processed; s2, carrying out scene depth estimation operation on the underwater image to be processed to obtain a scene depth map of the underwater image to be processed; s3, extracting a background light value of the underwater image to be processed, performing color preprocessing on the underwater image to be processed according to the extracted background light value, and estimating the light attenuation rate of the underwater image to be processed after the color preprocessing; s4, estimating the water depth value of the underwater image to be processed according to the scene depth map, the background light value and the light attenuation rate of the underwater image to be processed; and S5, restoring the underwater image to be processed according to the estimated water depth value, the scene depth map of the underwater image to be processed, the background light value and the light attenuation rate. Compared with the prior art, the underwater image color and contrast restoring method can improve the accuracy of restoring the colors and the contrast of the underwater image and obtain better underwater image clear imaging quality.

Description

Underwater image sharpening processing method based on light attenuation and depth estimation
Technical Field
The invention relates to the field of underwater image processing, in particular to an underwater image sharpening processing method based on light attenuation and depth estimation.
Background
The high-quality underwater image can provide abundant visual information and has an important role in near-distance underwater engineering. However, light travels underwater with an exponential decay, which varies at different wavelengths. The attenuation of light of the same wavelength varies from public waters to coastal waters under different water qualities. Meanwhile, the suspended matter and the water body medium change the light propagation track, and the reliable information received by the camera is further reduced. These complications cause underwater images to exhibit different levels of color distortion. Therefore, the current underwater image acquired by the camera is difficult to directly meet the requirements of underwater vision application due to color distortion, and needs to rely on color and contrast restoration processing of the underwater image as a technical supplement of the underwater vision application.
Due to the complexity of the underwater environment, water produces absorption and scattering effects on light, which can cause problems of color shift and reduced contrast of underwater images. The underwater image sharpening technology is to correct the problems of color cast and contrast reduction of the underwater image as much as possible by carrying out technical processing such as color adjustment, enhancement and the like on the image, so that the underwater image is more natural in color and more clear in appearance.
The current underwater image restoration processing algorithm can be divided into an imaging model method and an image enhancement method. The imaging model method is generally based on an original atmospheric scattering model, and an underwater imaging model is constructed, wherein the main parameters in the model are transmissivity and background light. On the imaging model, the light attenuation rate and the depth value have a high correlation with these two parameters. However, most of the existing algorithms based on the imaging model adopt a fixed light attenuation rate to restore the image, and the algorithm robustness is low in multi-type underwater image processing; meanwhile, the scene depth information related to the depth value can be divided into a scene depth map and a water depth value from a water surface to a target object, some mature scene depth estimation algorithms aiming at images exist at present, the algorithm is mainly used for carrying out scene depth estimation of a conventional overwater space scene image, and because the underwater image is influenced by an underwater shadow environment, if the existing scene depth estimation algorithm is directly used for carrying out scene depth estimation on the underwater image, a certain estimation error is caused, on the other hand, the underwater scene depth data of the known image data is rare and not enough for realizing supervised network training of underwater scene depth estimation, so that a scheme for carrying out underwater scene depth estimation by training a supervised network model is also hindered by technical conditions, and in addition, the existing water depth value estimation methods related to the underwater images are generally complex, it is difficult to widely implement and apply. These factors all adversely affect the accuracy and effectiveness of the color and contrast restoration process for underwater images.
In summary, how to more accurately perform color and contrast restoration processing on an underwater image to obtain better quality of the underwater image, which is a technical problem urgently needed to be solved by the technical personnel in the field.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an underwater image sharpening processing method based on light attenuation and depth estimation, which is used for improving the accuracy of underwater image color and contrast restoration processing so as to obtain better underwater image sharpening imaging quality.
In order to solve the technical problems, the invention adopts the following technical scheme:
an underwater image sharpening processing method based on light attenuation and depth estimation comprises the following steps:
s1, acquiring an underwater image to be processed;
s2, carrying out scene depth estimation operation on the underwater image to be processed to obtain a scene depth map of the underwater image to be processed;
s3, extracting a background light value of the underwater image to be processed, performing color preprocessing on the underwater image to be processed according to the extracted background light value, and estimating the light attenuation rate of the underwater image to be processed after the color preprocessing;
s4, estimating the water depth value of the underwater image to be processed according to the scene depth map, the background light value and the light attenuation rate of the underwater image to be processed;
and S5, restoring the underwater image to be processed according to the estimated water depth value, the scene depth map of the underwater image to be processed, the background light value and the light attenuation rate to obtain a clear image of the underwater image to be processed.
In the above method for processing an underwater image with sharpness based on light attenuation and depth estimation, as a preferred scheme, in step S2, the scene depth estimation network is trained by using the known underwater sample image and the pseudo-underwater depth image as training input data, and then the trained scene depth estimation network is used to process the underwater image to be processed, so as to obtain a scene depth map of the underwater image to be processed; the pseudo-underwater depth image is an image obtained by carrying out atomization synthesis processing on an overwater space scene image with a depth label and then carrying out underwater image style migration processing.
In the underwater image sharpening processing method based on light attenuation and depth estimation, as a preferred scheme, the training step of the scene depth estimation network includes:
s201, acquiring an underwater sample image and an overwater space scene image with a depth label; the depth label of the water space scene image is used for indicating scene depth information of the water space scene image; the underwater sample image is an existing underwater image serving as a usable training sample, but the original underwater sample image is not provided with a depth label.
S202, carrying out atomization synthesis processing on the acquired overwater space scene image by means of atmospheric scattering data to obtain an overwater space scene atomization synthetic image;
s203, performing style migration training by taking the underwater sample image as training input data of a style migration network, performing underwater image style migration processing on the atomized synthetic image of the overwater space scene by using the trained style migration network, and taking the processed image as a pseudo underwater depth image;
s204, taking the pseudo-underwater depth image as training input data of the scene depth estimation network, taking a depth label of the overwater space scene image corresponding to the pseudo-underwater depth image as a training result label of the scene depth estimation network, and performing preliminary training on the scene depth estimation network;
s205, carrying out scene depth estimation on the underwater sample image by using the scene depth estimation network after the pre-training to obtain a depth label of the underwater sample image; the depth label of the underwater sample image is used for indicating scene depth information of the underwater sample image;
s206, taking the underwater sample image and the pseudo-underwater depth image as training input data of the scene depth estimation network after the preparation training, taking the depth label of the underwater sample image and the depth label of the overwater space scene image corresponding to the pseudo-underwater depth image as training result labels of the scene depth estimation network after the preparation training, and training again to obtain the completely trained scene depth estimation network;
s207, carrying out scene depth estimation on the underwater image to be processed by using the completely trained scene depth estimation network to obtain a scene depth value of the underwater image to be processed, and carrying out depth image conversion processing according to the scene depth value of the underwater image to be processed to obtain a scene depth image of the underwater image to be processed.
In the above method for processing underwater image sharpening based on light attenuation and depth estimation, as a preferred scheme, in step S3, the manner of extracting the background light value of the underwater image to be processed is as follows:
according to a scene depth map of an underwater image to be processed, selecting an image area which is 5% larger than a field depth value in the underwater image to be processed as a background light value area, then selecting pixel points which are 1% larger than the sum of RGB (red, green and blue) three-color channels from the background light value area as background light candidate points, and taking the median value of each background light candidate point in each color channel as the background light value of the corresponding color channel:
Bc=Median(Bc_cand),c∈{r,g,b};
wherein, when c is ∈ { r, g, B }, Br、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G, B color channels; b isr_cand、Bg_candAnd Bb_candRespectively obtaining an R color channel value set, a G color channel value set and a B color channel value set of each background light candidate point in the underwater image to be processed; median () is a Median operator.
In the method for processing the underwater image with the sharpness based on the light attenuation and the depth estimation, as a preferred scheme, if the underwater image to be processed is a pseudo-underwater depth image, after the scene depth value of the underwater image to be processed is obtained through the processing of the step S2, before the step S3 is executed, the scene depth conversion processing is firstly performed on the underwater image to be processed;
carrying out scene depth transformation processing on an underwater image to be processed according to the following formula:
Figure BDA0003287405890000031
dep is a scene depth value of the underwater image to be processed after the scene depth is transformed; median () is a Median operator; d is the scene depth value of the underwater image to be processed before the scene depth is transformed; sigrThe actual space apparent distance range value, Sig, of the original overwater space scene image corresponding to the pseudo-underwater depth imager-simThe range value of the sight distance of the underwater image scene to be simulated.
In the above method for processing underwater image sharpening based on light attenuation and depth estimation, as a preferred scheme, in step S3, the method for performing color preprocessing on the underwater image to be processed is as follows:
for each pixel point in the underwater image to be processed, carrying out color preprocessing on R, B color channels based on the value of the G color channel under the condition of the following light attenuation ratio:
Figure BDA0003287405890000041
in the formula Ir_pAnd Ib_pRespectively an R color channel value and a B color channel value, I, of the underwater image to be processed after color preprocessingr、IbAnd IgRespectively obtaining an R color channel value, a B color channel value and a G color channel value of an original underwater image to be processed; b isr、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G, B color channels; abs () is the operator to find the absolute value;
Figure BDA0003287405890000042
the light attenuation ratio of the green light wavelength to the red light wavelength;
Figure BDA0003287405890000043
is the light attenuation ratio of green light wavelength to blue light wavelength.
In the above method for processing underwater image sharpening based on light attenuation and depth estimation, as a preferred scheme, in step S3, the method for estimating the light attenuation rate of the underwater image to be processed after color preprocessing is as follows:
firstly, according to a scene depth map of an underwater image to be processed, selecting an image area which is 5% larger than a field depth value in the underwater image to be processed as a background light value area, then selecting pixel points which are 1% larger than the sum of RGB (red, green and blue) three-color channels from each pixel corresponding to the background light value area in the underwater image to be processed after color preprocessing as background light candidate points, and taking the median value of each background light candidate point in each color channel as the background light value of the corresponding color channel:
then, estimating the light attenuation rate of the underwater image to be processed after color preprocessing in an R color channel according to the following relational expression:
Figure BDA0003287405890000044
in the formula, betarIs an underwater picture to be processed after color pretreatmentLight attenuation rate like in the R color channel; when c ∈ { g, b },
Figure BDA0003287405890000045
is the light attenuation ratio of green light wavelength to red light wavelength,
Figure BDA0003287405890000046
the light attenuation ratio of the blue light wavelength to the red light wavelength; coefficient of performance
Figure BDA0003287405890000047
The values of (a) are determined according to the following table:
Figure BDA0003287405890000048
after the light attenuation rate of the underwater image to be processed after the color pretreatment in the R color channel is obtained, the light attenuation rate beta of the underwater image to be processed after the color pretreatment in the G color channel can be respectively obtained according to the light attenuation ratio relation among the wavelengths of the red light, the green light and the blue lightgAnd light attenuation ratio beta of B color channelb
In the underwater image sharpening processing method based on light attenuation and depth estimation, as a preferable aspect, the step S4 includes:
s401, determining a first water depth candidate value D according to a relational expression of the light attenuation coefficient, the water depth value and the background light1
Figure BDA0003287405890000051
In the formula, Br、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G, B color channels; beta is ar、βgAnd betabRespectively representing the light attenuation rates of the underwater image to be processed after the color preprocessing in R, G, B color channels; vmaxThe maximum brightness intensity of the underwater image to be processed after color preprocessing is obtained; min () is the operator to find the minimum;
s402, determining a second water depth candidate value D according to the following formula2
Figure BDA0003287405890000052
Wherein the content of the first and second substances,
Figure BDA0003287405890000053
wherein, when c is ∈ { r, g, B }, Br、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G, B color channels; i isr、IgAnd IbRespectively obtaining an R color channel value, a G color channel value and a B color channel value of an original underwater image to be processed; d (i) is the scene depth value of the ith pixel point in the underwater image to be processed, wherein i represents the position label of the pixel point in the image; beta is ar、βgAnd betabRespectively representing the light attenuation rates of the underwater image to be processed after the color preprocessing in R, G, B color channels; e is a natural exponential constant; abs () is the operator to find the absolute value; mean () is the averaging operator;
s403, determining a final water depth value Df
Df=Max(D1,D2);
In the formula, Max () is an operator for solving the maximum value; the final water depth value D is obtainedfAs an estimated water depth value of the underwater image to be processed.
In the above method for processing underwater image sharpening based on light attenuation and depth estimation, as a preferred solution, in step S5, the underwater image to be processed is subjected to image restoration processing according to the following formula:
Figure BDA0003287405890000061
wherein the content of the first and second substances,
Figure BDA0003287405890000062
wherein J (i) isCarrying out image restoration on an underwater image to be processed to obtain a pixel value of an ith pixel point in a clarified image; h (i) is the pixel value of the ith pixel point in the no-light attenuation image obtained after the no-light attenuation treatment is carried out on the underwater image to be treated; when c ∈ { r, g, b }, Hr(i)、Hg(i) And Hb(i) Respectively carrying out non-light attenuation treatment on the underwater image to be treated to obtain an R color channel value, a G color channel value and a B color channel value of an ith pixel point in the non-light attenuation image; i isr(i)、Ig(i) And Ib(i) Respectively obtaining an R color channel value, a G color channel value and a B color channel value of an ith pixel point in an original underwater image to be processed; d is a water depth value of the underwater image to be processed; b isr、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G and B color channels; beta is ar、βgAnd betabRespectively representing the light attenuation rates of the underwater image to be processed after the color preprocessing at R, G and a color channel B; e is a natural exponential constant; a is the value of atmospheric light, and alpha is the scattering coefficient;
if the underwater image to be processed is a real underwater image, d (i) is a scene depth value of the ith pixel point in the underwater image to be processed, and dep (i) is d (i);
if the underwater image to be processed is the pseudo underwater depth image, dep (i) is the scene depth value of the ith pixel point after the underwater image to be processed is subjected to scene depth transformation, and d (i) is the scene depth value of the ith pixel point before the underwater image to be processed is subjected to scene depth transformation.
In the above method for processing underwater image sharpening based on light attenuation and depth estimation, as a preferred scheme, in step S5, after obtaining a sharpened image of the underwater image to be processed, contrast enhancement processing is further performed to obtain a final sharpened image of the underwater image to be processed.
Compared with the prior art, the invention has the beneficial effects that:
1. the underwater image sharpening processing method disclosed by the invention combines multi-dimensional data such as an estimated water depth value, a scene depth map, a background light value and a light attenuation rate of the underwater image, and comprehensively restores the underwater image so as to reduce a restoration error of the underwater image and improve the accuracy of restoration processing of colors and contrast of the underwater image, thereby obtaining better underwater image sharpening imaging quality.
2. According to the method, the mathematical relation among background light, illumination intensity and light attenuation rate is established by deducing the underwater optical imaging model, so that the self-adaptive estimation of the light attenuation rate of a single image is realized, the underwater real scene can be effectively recovered, and the robustness of the method in various water body environments is ensured.
3. Aiming at the problems that underwater RGB-D data are rare and supervised network training on an underwater scene depth estimation network is difficult, the method provides a scene depth estimation network training strategy combining an underwater image and a pseudo-underwater depth map, and greatly expands the data volume of training data by utilizing a double-task network joint training mode to meet the training data volume requirement of the training scene depth estimation network, thereby realizing unsupervised training of the underwater scene depth estimation network and better ensuring the scene depth estimation accuracy of the underwater image to be processed.
4. The invention provides a single image global water depth estimation method based on light scattering characteristics, which is less related to water depth value estimation in the prior art, and carries out water depth value estimation by combining a scene depth map, a background light value and a light attenuation rate of an underwater image so as to better ensure the accuracy of water depth estimation.
Drawings
FIG. 1 is a flow chart of an underwater image sharpening processing method based on light attenuation and depth estimation.
FIG. 2 is a flowchart illustrating an example of an underwater image sharpening processing method based on light attenuation and depth estimation according to the present invention.
Fig. 3 is a schematic diagram of the structural similarity relationship between the pseudo underwater depth image and the real underwater image.
Fig. 4 is a schematic diagram of a dual-task network joint training process of the style transition network and the scene depth estimation network.
FIG. 5 is a graph of the ratio of light attenuation between R, G, B color lights and a fit.
Fig. 6 is a comparison graph of the underwater image sharpening effect of different final water depth values in different selection modes.
FIG. 7 is a graph showing the results of light attenuation estimation in an experiment according to the present invention.
Fig. 8 is a comparative example diagram of scene depth estimation results of underwater images by using different scene depth estimation methods in an experiment of the present invention.
Fig. 9 is an exemplary diagram of comparison results of processing results of different underwater image sharpening processing methods in the experiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, the invention discloses an underwater image sharpening processing method based on light attenuation and depth estimation, comprising the following steps:
s1, acquiring an underwater image to be processed;
s2, carrying out scene depth estimation operation on the underwater image to be processed to obtain a scene depth map of the underwater image to be processed;
s3, extracting a background light value of the underwater image to be processed, performing color preprocessing on the underwater image to be processed according to the extracted background light value, and estimating the light attenuation rate of the underwater image to be processed after the color preprocessing;
s4, estimating the water depth value of the underwater image to be processed according to the scene depth map, the background light value and the light attenuation rate of the underwater image to be processed;
and S5, restoring the underwater image to be processed according to the estimated water depth value, the scene depth map of the underwater image to be processed, the background light value and the light attenuation rate to obtain a clear image of the underwater image to be processed.
The underwater image sharpening processing method provided by the invention carries out light attenuation and depth estimation aiming at the self data characteristics of an underwater image to be processed, on one hand, a scene depth map of the underwater image to be processed is obtained through scene depth estimation operation, on the other hand, the light attenuation rate is estimated according to the self background light of the underwater image to be processed, the water depth value is estimated according to the scene depth map, the background light value and the light attenuation rate of the underwater image to be processed, and finally, all the data are collected for underwater image restoration, the accuracy of underwater image color and contrast restoration processing is improved through the combination means, and the sharpened underwater image with better imaging quality and effect is obtained. Furthermore, if the underwater image is sharpened in step S5 to obtain a better image, the contrast enhancement process may be further performed to obtain a final sharpened image with better imaging effect.
The present invention will be described in more detail below to show more technical details of the present invention.
An exemplary flowchart of the underwater image sharpening processing method based on light attenuation and depth estimation is shown in fig. 2, and specifically includes the following steps:
and S1, acquiring the underwater image to be processed.
The acquired underwater image to be processed is an underwater image which is prepared as an image color restoration object.
And S2, carrying out scene depth estimation operation on the underwater image to be processed to obtain a scene depth map of the underwater image to be processed.
In specific implementation, the existing scene depth estimation algorithm can be adopted to perform scene depth estimation operation on the underwater image to be processed. However, the existing mature scene depth estimation algorithm has a good effect when mainly applied to processing of an overwater space scene image, and if the algorithm is used for estimating the scene depth of an underwater image, a certain depth estimation error is easily caused due to the influence of the underwater image on an underwater shadow environment and the like.
Therefore, as a more preferable scheme, it is preferable that a scene depth estimation network is trained by using a known underwater image with scene depth information, and then the trained scene depth estimation network is used to perform scene depth estimation operation on the underwater image to be processed, so that accuracy of scene depth estimation is easier to ensure. However, the implementation of the scheme faces a new problem, and the existing underwater image sample data with scene depth information is not enough to be used for realizing supervised network training for underwater scene depth estimation.
In order to solve the problem, the invention provides a new solution, an underwater image is adopted to train a scene depth estimation network in combination with a pseudo depth map strategy, namely, the scene depth estimation network is trained by using known underwater sample images and pseudo underwater depth images as training input data; the pseudo underwater depth image is an image obtained by carrying out atomization synthesis processing on an overwater space scene image with a depth label and then carrying out underwater image style migration processing, so that the data volume of training data is greatly expanded to meet the training data volume requirement of a training scene depth estimation network; and then, the trained scene depth estimation network is used for carrying out scene depth estimation operation on the underwater image to be processed, so that the accuracy of scene depth estimation can be better ensured.
The invention provides a scene depth estimation network training strategy based on the combination of the underwater image and the pseudo-underwater depth map, which comprises the following specific training steps:
s201, acquiring an underwater sample image and an overwater space scene image with a depth label; the depth label of the water space scene image is used for indicating scene depth information of the water space scene image.
The underwater sample image is an existing underwater image serving as an available training sample, the existing underwater image serving as the available training sample can be from existing image databases of an internet such as a UIEB (ultra information Board) data set and a RUIE (RuIE) data set, but because the underwater image data in the existing image database often rarely has depth data information (because the technical difficulty of obtaining the depth data information of the underwater image through the existing technology is high), the obtained underwater sample image usually does not have a depth label. It is therefore desirable to consider using other means to obtain scene depth information for an underwater sample image.
The image of the scene in the space above the water refers to the image of the scene in the space of the air medium (the expression above the water is relative to the image below the water). In specific implementation, the aquatic space scene image is acquired through a plurality of channels, for example, the aquatic space scene image can be acquired from image acquisition in daily life or from some known internet existing image databases such as an NYU data set; meanwhile, as the depth data information acquisition technology of the water space scene image is mature, the water space scene image with a depth label (indicating scene depth information) can be easily obtained.
S202, carrying out atomization synthesis processing on the acquired overwater space scene image by means of atmospheric scattering data to obtain an overwater space scene atomization synthetic image.
In the step, atomizing and synthesizing treatment can be carried out on the overwater space scene images obtained from an existing image database such as an NYU data set by means of some existing atmospheric scattering data models to form atomized overwater space scene atomizing and synthesizing images, so that the light attenuation rate presented in the images is increased, the contrast is reduced, and the actual contrast condition of the underwater images is better simulated. The water space scene atomization synthetic image with the scene depth information can be used for training a scene depth estimation network and assisting in extracting the scene depth information of the underwater sample image in a subsequent processing link.
The method for performing the atomization synthesis processing on the image is the prior art, and for example, the atomization synthesis method in the literature "HAHNER M, DAI D, SAKARIDIS C, et al.
To illustrate the similarity of the contrast between the synthetic fog image of the water space scene obtained by the synthetic fog process and the real underwater image, the transmittance t of the synthetic fog image data set of the water space scene can be estimated by the DCP algorithm (dark channel prior defogging algorithm)fAccording to the transmittance tfObtaining a water scene pseudo-depth map d corresponding to the water space scene atomization synthetic imagef
df=Norm(ln(-tf));
In the formula, Norm () is a normalization function.
Then, respectively calculating a gray level image of the real underwater image and a gray level image of an overwater scene pseudo-depth image corresponding to the overwater space scene atomization synthetic image, and comparing the Structural Similarity (SSIM) between the gray level images; as shown in fig. 3, (a) is a diagram of a water space scene fog synthetic image, (b) is a diagram of a real underwater image, (c) is a gray scale diagram of a water scene pseudo depth map corresponding to the water space scene fog synthetic image (a), (d) is a gray scale diagram of the real underwater image (b), and (e) is a Structural Similarity (SSIM) curve diagram of the gray scale diagram (c) and the gray scale diagram (d); as can be seen from the structural similarity curve of the two gray-scale images shown in the graph (e) in fig. 3, the contrast of the fog synthetic image of the water space scene represented by the gray-scale images has higher similarity than the contrast of the real underwater image.
The above shows that the above-mentioned steps can be used to obtain a fog synthetic image of the above-mentioned spatial scene, which can well simulate the actual contrast condition of the underwater image.
S203, performing style migration training by taking the underwater sample image as training input data of a style migration network, performing underwater image style migration processing on the atomized synthetic image of the overwater space scene by using the trained style migration network, and taking the processed image as a pseudo underwater depth image.
Compared with the overwater space scene image, the real underwater image has the advantages that the difference is mainly reflected in color cast of the image scene except the difference that the light attenuation rate is high and the contrast ratio is reduced, and the actual color cast condition of the real underwater image is difficult to adjust and simulate due to the fact that the light and shadow condition of the underwater scene is complex and if the color mixing processing is carried out through simple manual operation. The present invention therefore addresses this problem by way of image style migration.
The processing purpose of step S203 is to perform style migration processing on the above-water space scene fog synthetic image by using the underwater sample image, so that the color style of the processed pseudo-underwater depth image is closer to that of the real underwater image. The style migration is a mature image processing technology, and the technical idea is that the style migration network is trained by using image data with image style characteristics; and processing the input images by using the trained style migration network to obtain images with similar style characteristics. In the scheme of the invention, the color style of the pseudo-underwater depth image is closer to that of the real underwater image by mainly applying style migration processing, and the actual contrast condition of the underwater image is simulated by atomization synthesis processing so that the presentation effect of the pseudo-underwater depth image obtained in the step can be very close to that of the real underwater image. And it is worth noting that the obtained pseudo-underwater depth image is derived from an overwater space scene image and provided with scene depth label information, so that the pseudo-underwater depth image becomes underwater image simulation data with the scene depth label information and can be used for training a scene depth estimation network and extracting scene depth information of an underwater sample image.
In specific implementation, FC-DenseNets (full volume dense block network) may be used as a main network of the style migration network, and the style migration network adopts a structure of CycleGANs (cyclic generation countermeasure network).
S204, taking the pseudo-underwater depth image as training input data of the scene depth estimation network, taking a depth label of the overwater space scene image corresponding to the pseudo-underwater depth image as a training result label of the scene depth estimation network, and performing preliminary training on the scene depth estimation network.
The pseudo-underwater depth image is provided with a depth label, so that the scene depth estimation network is subjected to preliminary training by using the pseudo-underwater depth image, the depth label of the overwater space scene image corresponding to the pseudo-underwater depth image is used as a training result label of the scene depth estimation network, and the scene depth estimation network has certain scene depth estimation capacity through the preliminary training so as to be used for carrying out scene depth estimation on the underwater sample image subsequently. In addition, the image of the water space scene with the depth label is easy to obtain, so that the training data source and the training effect of the preliminary training can be better ensured.
In a specific implementation, FC-DenseNets may be used as a main network of the scene depth estimation network, and the decider uses 70 × 70 patchgains (markov arbiter).
S205, carrying out scene depth estimation on the underwater sample image by using the scene depth estimation network after the pre-training to obtain a depth label of the underwater sample image; the depth label to the underwater sample image is used to indicate scene depth information of the underwater sample image.
The depth label of the obtained underwater sample image is estimated through the step, so that the underwater sample image with the depth label can also be used as basic data for further training the scene depth estimation network.
S206, taking the underwater sample image and the pseudo-underwater depth image as training input data of the scene depth estimation network after the preparation training, taking the depth label of the underwater sample image and the depth label of the overwater space scene image corresponding to the pseudo-underwater depth image as training result labels of the scene depth estimation network after the preparation training, and training again to obtain the completely trained scene depth estimation network.
Although the scene depth estimation network has certain scene depth estimation capacity through preliminary training, the pseudo underwater depth image is not real underwater image data after all; therefore, the scene depth estimation network is trained again by further combining the real underwater sample image with the depth label and the pseudo underwater depth image, and the method has important significance and practical improvement effect for improving the scene depth estimation capability of the scene depth estimation network for the real underwater image.
S207, carrying out scene depth estimation on the underwater image to be processed by using the completely trained scene depth estimation network to obtain a scene depth value of the underwater image to be processed, and carrying out depth image conversion according to the scene depth value of the underwater image to be processed to obtain a scene depth image of the underwater image to be processed.
The completely trained scene depth estimation network has the scene depth estimation capability on the real underwater image, so that the accuracy of scene depth information estimation on the underwater image to be processed can be better ensured.
In fact, the training and processing procedure of the style transition network and the scene depth estimation network in steps S203 to S206 can be regarded as a dual task network joint training processing procedure of the overall joint network formed by the style transition network and the scene depth estimation network, and the dual task network joint training processing procedure can be represented by the schematic flow shown in fig. 4.
In the specific implementation, in the joint training process, the main networks of the style migration network and the scene depth estimation network can both adopt FC-DenseNets (full volume compact block network), and the epoch number can be set to 200; the batch size may be set to 1; the initial value of the learning rate can be set to be 0.0001, sectional attenuation is adopted, 1-50 epoch is the initial value, 50-100 epoch is 0.99 x 0.0001,100-200 epoch is 0.0001 x (1-0.01 x (epoch-100)); the optimizer may employ the Adam algorithm.
Wherein the loss function L of the style migration partSTNIncluding round robin uniform loss, antagonistic loss, and style loss:
Figure BDA0003287405890000121
in the formula, LcycleIs a cyclic consistent loss term, LganIs against the loss term, LstyleIs a style loss term; λ is a weight value, and its value is 5.
Loss function L of scene depth estimation portionDENIncluding true depth loss terms, pseudo depth loss terms, countermeasure loss terms, and style loss terms:
Figure BDA0003287405890000122
in the formula, Ldep、Lfdep、LganAnd LstyleRespectively a real depth loss term, a pseudo depth loss term, a countermeasure loss term and a style loss term; gamma is the weight of each item of the loss function, gamma1、γ2、γ3And gamma4Are 0.5, 0.1 and 0.1, respectively. Wherein, the real depth loss term is from the fog synthetic image and the corresponding real depth label, and the error between the estimated value and the real value is calculated respectively from the pixel level, the gradient (grad) and the Structural Similarity (SSIM):
Figure BDA0003287405890000123
in the formula, the weight ω has a value of 0.1.
The pseudo-depth loss term represents the structural error between the pseudo-underwater depth map and the depth estimation map by the SSIM value:
Figure BDA0003287405890000124
wherein the content of the first and second substances,
Figure BDA0003287405890000125
water scene pseudo depth map d corresponding to representing water space scene atomization synthetic imagefThe depth estimation map is obtained by carrying out scene depth estimation on a pseudo underwater depth image obtained by processing the synthetic image atomized with an overwater space scene through a scene depth estimation network
Figure BDA0003287405890000126
Structural similarity normalization values between; then it is determined that,
Figure BDA0003287405890000127
then the pseudo depth map d of the above-mentioned water scene is representedfAnd depth estimation map
Figure BDA0003287405890000128
Structural error between.
And S3, extracting a background light value of the underwater image to be processed, performing color preprocessing on the underwater image to be processed according to the extracted background light value, and estimating the light attenuation rate of the underwater image to be processed after the color preprocessing.
The water depth directly affects the background light brightness and the light attenuation intensity of the underwater scene environment, and the specific expression is that the background light brightness of the underwater image is weakened and the light attenuation intensity is enhanced along with the increase of the water depth.
The step is to extract the background light value and estimate the light attenuation rate of the underwater image to be processed, so as to be more accurate to estimate the water depth value by combining the data information subsequently.
In specific implementation, the background light is considered to be located in an area with a large depth value, so that an image area which is 5% of the depth value of the field in the underwater image to be processed is selected as a background light value area according to the scene depth image of the underwater image to be processed; and because the background light on the imaging model of the underwater image is formed by white light attenuation, the RGB channel values are known to be larger, therefore, a pixel point which is 1% larger than the sum of RGB three-color channels is selected from a background light value area of the underwater image to be processed as a background light candidate point, and the median value of each background light candidate point in each color channel is taken as the background light value of the corresponding color channel:
Bc=Median(Bc_cand),c∈{r,g,b};
wherein, when c is ∈ { r, g, B }, Br、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G, B color channels; b isr_cand、Bg_candAnd Bb_candRespectively obtaining an R color channel value set, a G color channel value set and a B color channel value set of each background light candidate point in the underwater image to be processed; median () is a Median operator.
In addition, in specific implementation, before extracting the backlight value of the underwater image to be processed, whether scene depth transformation processing needs to be performed on the underwater image to be processed first or not can be considered.
If the underwater image to be processed is a real underwater scene image, scene depth transformation is not needed; if the test is performed, for example, by using RGB-D image data (RGB-D image data refers to RGB image data with scene depth information) such as a pseudo-underwater depth map as the underwater image to be processed for the test, the scene depth is converted by considering the difference between the scene view distance of the pseudo-underwater depth map and the underwater scene view distance before the background light value extraction process of step S3) is performed after the scene depth value of the underwater image to be processed is obtained through the process of step S2).
For example, if the RGB-D image data used in the test is a pseudo-underwater depth image (an image obtained by performing atomization synthesis processing on an underwater space scene image with a depth label and then performing underwater image style migration processing), the pseudo-underwater depth image corresponds to the actual spatial apparent distance range Sig of the original underwater space scene imagerIf the depth of field is only 10 meters, the sight distance range Sig of the underwater image scene needing simulationr-simThe depth of the scene is 100 meters, the transformation processing of the scene depth estimation result is needed to adapt to the visual range of the underwater scene image to be simulated, at this time, the transformation mode of the scene depth transformation is that the median value of the depth estimation value D of the RGB-D image data (pseudo-underwater depth image) used in the test is taken as the maximum visual range, and the underwater scene depth transformation is carried out according to the following formula:
Figure BDA0003287405890000131
dep is a scene depth value of the underwater image to be processed after the scene depth is transformed; median () is a Median operator; d is a scene depth value of the underwater image to be processed before the scene depth transformation, and can be obtained by utilizing a scene depth estimation network to perform depth estimation processing on the pseudo-underwater depth image; sigrThe actual space apparent distance range value, Sig, of the original overwater space scene image corresponding to the pseudo-underwater depth imager-simThe range value of the sight distance of the underwater image scene to be simulated.
The reason that the underwater scene depth transformation processing is required for the pseudo-underwater depth image is that compared with a real underwater image, the background light state of the pseudo-underwater depth image obtained by processing the water space scene image still has a certain difference, and the extracted background light value can reflect the background light state of the pseudo-underwater depth image simulation scene more truly through the underwater scene depth transformation processing.
Then, to estimate the light attenuation rate of the underwater image, an underwater optical model needs to be deduced first to determine the relation of the attenuation rate of each wavelength of light; in particular, the optical model may be expressed as:
I(i)=(H(i)·e-βD(i))·e-βd(i)+(1-e-βd(i))·B;
wherein, i (i) is the pixel value of the ith pixel point in the underwater image, and h (i) is the pixel value of the ith pixel point in the no-light attenuation image corresponding to the underwater image; d is the water depth value of the underwater image; d (i) is the scene depth of the ith pixel point in the underwater image; b, the underwater image is background light; β is the light attenuation rate; e is a natural exponential constant.
Figure BDA0003287405890000141
Wherein, when c is ∈ { r, g, B }, Br、BgAnd BbBackground light values for the underwater image at R, G and the B color channel, respectively; beta is ar、βgAnd betabLight attenuation rates for the underwater image at R, G and the B color channel, respectively; vmaxThe maximum brightness intensity of the underwater image; d (∞) represents the depth of the scene at infinity, corresponding to an underwater image, and may represent the deepest depth of the scene in the underwater image.
The relationship between the light attenuation rate and the related parameter is as follows:
Figure BDA0003287405890000142
determining the ratio relation among the optical attenuation rates of the wavelengths as follows:
Figure BDA0003287405890000143
the relationship is expressed as a ratio of the background light and the maximum luminance intensity between the wavelengths.
FIG. 5 shows a graph of fitted curves and the ratio of light attenuation between R, G, B color lights; in FIG. 5, (a) is a graph showing a ratio of light attenuations of B color light and G color light, (B) is a graph showing a fit of R color light and B/R, and (c) is a graph showing a fit of R color light and G/R.
Since there is an approximately linear correlation between the blue light (B) and the green light (G), as shown in (a) of fig. 5, the polynomial fitting method has the advantage of low error in a certain data range, and a light attenuation estimation model of the red light can be constructed by using the polynomial fitting method:
Figure BDA0003287405890000144
in the formula (I), the compound is shown in the specification,
Figure BDA0003287405890000151
is the light attenuation ratio of green light wavelength to red light wavelength,
Figure BDA0003287405890000152
the light attenuation ratio of the blue light wavelength to the red light wavelength; coefficient of performance
Figure BDA0003287405890000153
The values of (A) are shown in Table 1. As shown in fig. 5 (b) and (c), the two curves have high coincidence with the real data, and the curve trends are substantially consistent. And averaging the estimation results of the red light attenuation rate, and finally obtaining the attenuation rate of the blue light and the green light according to the ratio relation.
TABLE 1 light attenuation model coefficients
Figure BDA0003287405890000154
In addition, under the action of background light, the distribution characteristics of different color components of light rays in an underwater scene are different, and in consideration of R, G, B color distribution characteristics of an underwater image, before the light attenuation rate of an underwater image to be processed is carried out, color preprocessing is carried out on R, B color channels based on the value of the G color channel of each pixel point in the underwater image to be processed under the condition of the following light attenuation rate ratio:
Figure BDA0003287405890000155
in the formula Ir_pAnd Ib_pRespectively an R color channel value and a B color channel value, I, of the underwater image to be processed after color preprocessingr、IbAnd IgRespectively obtaining an R color channel value, a B color channel value and a G color channel value of an original underwater image to be processed; (ii) a B isr、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G, B color channels; abs () is the operator to find the absolute value;
Figure BDA0003287405890000156
the light attenuation ratio of the green light wavelength to the red light wavelength;
Figure BDA0003287405890000157
is the light attenuation ratio of green light wavelength to blue light wavelength. The light attenuation ratio between different wavelengths can be determined according to the existing 10 types of light attenuation coefficient.
In summary, after the color preprocessing is performed on the underwater image to be processed, the light attenuation rate of the underwater image to be processed after the color preprocessing is estimated as follows:
firstly, according to a scene depth map of an underwater image to be processed, selecting an image area which is 5% larger than a field depth value in the underwater image to be processed as a background light value area, then selecting pixel points which are 1% larger than the sum of RGB (red, green and blue) three-color channels from each pixel corresponding to the background light value area in the underwater image to be processed after color preprocessing as background light candidate points, and taking the median value of each background light candidate point in each color channel as the background light value of the corresponding color channel:
then, estimating the light attenuation rate of the underwater image to be processed after color preprocessing in an R color channel according to the following relational expression:
Figure BDA0003287405890000161
in the formula, betar、βgAnd betabRespectively representing the light attenuation rates of the underwater image to be processed after the color preprocessing in R, G, B color channels; when c ∈ { g, b },
Figure BDA0003287405890000162
is the light attenuation ratio of green light wavelength to red light wavelength,
Figure BDA0003287405890000163
the light attenuation ratio of the blue light wavelength to the red light wavelength; coefficient of performance
Figure BDA0003287405890000164
The values of (a) are determined according to the above table 1;
after the light attenuation rate of the underwater image to be processed after the color pretreatment in the R color channel is obtained, the light attenuation rate beta of the underwater image to be processed after the color pretreatment in the G color channel can be respectively obtained according to the light attenuation ratio relation among the wavelengths of the red light, the green light and the blue lightgAnd light attenuation ratio beta of B color channelb
And S4, estimating the water depth value of the underwater image to be processed according to the scene depth map, the background light value and the light attenuation rate of the underwater image to be processed.
After the scene depth map, the background light value and the light attenuation rate data of the underwater image to be processed are obtained through the previous processing, the water depth value of the underwater image to be processed can be estimated and processed according to the data.
In specific implementation, step S4 includes:
s401, determining a first water depth candidate value D according to a relational expression of the light attenuation coefficient, the water depth value and the background light1
Figure BDA0003287405890000165
In the formula, Br、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G, B color channels; beta is ar、βgAnd beta is the light attenuation rate of the underwater image to be processed after color preprocessing in R, G, B color channels respectively; vmaxThe maximum brightness intensity of the underwater image to be processed after color preprocessing is obtained; min () is the operator to find the minimum;
s402, based on the characteristic that the mean values of all channels of the foggy day image are very close, namely:
Mean(Hr(i))=Mean(Hg(i))=Mean(Hb(i));
Figure BDA0003287405890000166
therefore, the second depth candidate D is determined as follows2
Figure BDA0003287405890000167
Wherein the content of the first and second substances,
Figure BDA0003287405890000171
wherein, when c is ∈ { r, g, B }, Br、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G, B color channels; i isr、IgAnd IbRespectively obtaining an R color channel value, a G color channel value and a B color channel value of an original underwater image to be processed; d (i) is the ith pixel point in the underwater image to be processedI represents the position label of the pixel point in the image; beta is ar、βgAnd betabRespectively representing the light attenuation rates of the underwater image to be processed after the color preprocessing in R, G, B color channels; e is a natural exponential constant; abs () is the operator to find the absolute value; mean () is the averaging operator;
s403, determining a final water depth value Df
Df=Max(D1,D2);
In the formula, Max () is an operator for solving the maximum value; the final water depth value D is obtainedfAs an estimated water depth value of the underwater image to be processed.
During the actual recovery process, if the candidate value D is obtained from two water depths1、D2And a smaller water depth value is selected, the restored image still has certain color distortion, and the restoration effect is poor. Fig. 6 shows the comparison of the underwater image sharpening effects of different final water depth values in different selection manners, where in fig. 6, (a) is an original underwater image, (b) is a sharpened image whose final water depth value is determined by a minimum value selection method, and (c) is a sharpened image whose final water depth value is determined by a maximum value selection method; it can be seen that the color restoration effect of the sharpened image obtained by the maximum value selection method is better. Therefore, in the present solution, the final water depth value D is obtainedfThe selection mode of (2) adopts a maximum value selection method.
And S5, restoring the underwater image to be processed according to the estimated water depth value, the scene depth map of the underwater image to be processed, the background light value and the light attenuation rate to obtain a clear image of the underwater image to be processed.
In specific implementation, the estimated depth value and parameters such as a scene depth map, a background light value, a light attenuation rate and the like of the underwater image to be processed can be used for performing no light attenuation processing on the underwater image to be processed:
Figure BDA0003287405890000172
in the formulac is in { r, g, b }, Hr(i)、Hg(i) And Hb(i) Respectively carrying out non-light attenuation treatment on the underwater image to be treated to obtain an R color channel value, a G color channel value and a B color channel value of an ith pixel point in the non-light attenuation image; i isr(i)、Ig(i) And Ib(i) Respectively obtaining an R color channel value, a G color channel value and a B color channel value of an ith pixel point in an original underwater image to be processed; d is a water depth value of the underwater image to be processed; b isr、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G, B color channels; beta is ar、βgAnd betabRespectively representing the light attenuation rates of the underwater image to be processed after the color preprocessing in R, G, B color channels; e is a natural exponential constant.
If the underwater image to be processed is a real underwater image, d (i) is a scene depth value of the ith pixel point in the underwater image to be processed, and dep (i) is d (i);
if the underwater image to be processed is the pseudo underwater depth image, dep (i) is the scene depth value of the ith pixel point after the underwater image to be processed is subjected to scene depth transformation, and d (i) is the scene depth value of the ith pixel point before the underwater image to be processed is subjected to scene depth transformation. The manner of performing the scene depth transformation process on the underwater image to be processed is as described in the foregoing step S3).
Then, considering that the problem of back scattering still exists in the no-light attenuation image, an atmospheric scattering model is adopted for defogging, and under the condition that the image content is the same, the scene depth estimated value is not changed, so that the scattering recovery is further carried out to obtain a clear image of the underwater image to be processed:
Figure BDA0003287405890000181
in the formula, J (i) is the pixel value of the ith pixel point in the sharpening image obtained by carrying out image restoration processing on the underwater image to be processed; h (i) is the pixel value of the ith pixel point in the no-light attenuation image obtained after the no-light attenuation treatment is carried out on the underwater image to be treated; a is the value of atmospheric light, typically set to 255; α is a scattering coefficient, and is usually set to 0.4. In the calculation, because the estimated value of the scene depth of the scattering recovery defogging process is not changed no matter whether the underwater image is a real underwater image or a pseudo underwater depth image, the scene depth value d (i) of the ith pixel point in the underwater image to be processed is directly adopted (even if the underwater image to be processed is a pseudo underwater depth image, the scene depth value d (i) of the pixel point before the scene depth transformation is adopted) to participate in the calculation.
And finally, outputting and displaying the clear image of the underwater image to be processed, which is obtained by processing.
In summary, compared with the prior art, the invention has the following technical advantages:
(1) the underwater image sharpening processing method disclosed by the invention combines multi-dimensional data such as an estimated water depth value, a scene depth map, a background light value and a light attenuation rate of the underwater image, and comprehensively restores the underwater image so as to reduce a restoration error of the underwater image and improve the accuracy of restoration processing of colors and contrast of the underwater image, thereby obtaining better underwater image sharpening imaging quality.
(2) According to the method, the mathematical relation among background light, illumination intensity and light attenuation rate is established by deducing the underwater optical imaging model, so that the self-adaptive estimation of the light attenuation rate of a single image is realized, the underwater real scene can be effectively recovered, and the robustness of the method in various water body environments is ensured.
(3) Aiming at the problems that underwater RGB-D data are rare and supervised network training on an underwater scene depth estimation network is difficult, the method provides a scene depth estimation network training strategy combining an underwater image and a pseudo-underwater depth map, and greatly expands the data volume of training data by utilizing a double-task network joint training mode to meet the training data volume requirement of the training scene depth estimation network, thereby realizing unsupervised training of the underwater scene depth estimation network and better ensuring the scene depth estimation accuracy of the underwater image to be processed.
(4) The invention provides a single image global water depth estimation method based on light scattering characteristics, which is less related to water depth value estimation in the prior art, and carries out water depth value estimation by combining a scene depth map, a background light value and a light attenuation rate of an underwater image so as to better ensure the accuracy of water depth estimation.
Experiments and experimental data
In the experiment, underwater images are synthesized on an NYU data set according to the existing 10 light attenuation rates by using an underwater imaging model, and 10 types of underwater synthetic image data sets are established, wherein each type of underwater synthetic image comprises 1449 images. In the imaging model, the water depth value was set to 7, the true backlight value was set to 255, and the light attenuation rates of 10 types were as shown in table 2.
TABLE 2 Ten types of light attenuation coefficient β
Figure BDA0003287405890000191
According to the method provided by the invention, the background light of each image is obtained, the light attenuation coefficient is estimated, all estimation results of the same type are averaged, and the final result is shown in fig. 7. The solid line is the real light attenuation coefficient, the dotted line is the estimated value of the light attenuation coefficient, and the estimated value is basically equal to the real value. In the 10 th water body environment, the error of the real value and the estimated value of the blue light is about 0.1, which is related to serious attenuation of the blue light and dark image. In general, experimental results show that the light attenuation estimation method of the present invention is accurate in underwater synthetic images.
In order to embody the effect of scene depth estimation on the underwater images in the method, the experiment also compares the scene depth estimation algorithm used in the underwater image clarification methods such as Berman, UDCP, Galdran, Peng, UWD and the like with the method, and compares the scene depth estimation algorithm with the method, as shown in FIG. 8; in fig. 8, columns from left to right are an original image, a real depth map, Berman, UDCP, Galdran, Peng, UWD, and a scene depth estimation result in the present invention, respectively. The network structure for estimating the scene depth of the underwater image in the method is similar to that of the UWD, and the difference is that a pseudo-underwater depth map is introduced to improve the accuracy of depth estimation. Sea-thu provides an RGB-D data set with real scene depth underwater, on which several advanced underwater scene depth estimation methods are compared, respectively. Considering that part of the method estimates transmittance, the result of the directional filtering for each algorithm derives the scene depth from S201, and the subjective result is shown in fig. 8. It can be seen that the scene depth estimation result of the method of the present invention is closer to the true depth map than other methods. To verify the effectiveness of unsupervised training, Sea-thu training and fine-tuning are not used by the depth estimation network. The invention measures the error of the depth estimation result and the depth true value of each algorithm through the RMSE objective index, and the result is shown in the table 3.
TABLE 3 Objective score of Underwater scene depth estimation Algorithm
Figure BDA0003287405890000201
In combination with subjective and objective comparison, Berman adopts a fog line method to estimate depth, but the prior condition of the method comes from a natural fog image, which does not accord with the underwater characteristics; UDCP estimates depth on GB channel, resulting in overall lower depth value estimation; galdran estimates the depth by combining the RGB channel according to the characteristics of the red channel, and test data come from a public sea area and accord with the algorithm principle, so the overall depth is more accurate; peng estimates the depth by using fuzzy characteristics, so that the influence of light attenuation is avoided, and an accurate depth value is obtained; the UWD performs the underwater depth estimation by interconversion of the foggy RGB-D image and the underwater RGB image, with artifacts in the result, which are related to changes in image content during the 3-channel to 4-channel conversion. The depth estimation subjective result of the invention is closer to the real depth, and has the optimal result in the objective score.
Finally, in order to embody the final restoration processing effect of the method for the underwater image, the experiment also adopts three classic underwater image clarification algorithms and four advanced underwater image clarification algorithms of nearly two years to compare with the method, wherein the three classic underwater image clarification algorithms are respectively UDCP, ULA and NUDCP algorithms, the four advanced underwater image clarification algorithms of nearly two years are respectively the underwater image clarification algorithms proposed by Galdran, Li, Peng, Berman and the like, and the underwater image restoration processing result of each algorithm is shown in FIG. 9; in fig. 9, columns from left to right are an original image, a preprocessing result image of the original image, UDCP, Galdran, Li, Peng, ULA, Berman, NUDCP, and an image restoration processing result by the method of the present invention, respectively. UDCP mainly uses GB channel to estimate transmittance to solve the scattering problem, but does not solve the light attenuation problem, and even if the image color cast is weak, the final result still has color distortion. And the Galdran estimates the transmittance after performing inversion operation on the R channel, which compensates the attenuated transmission diagram to a certain extent, but has poor treatment effect on the environment of the green water body. Li introduces a light attenuation coefficient of class I in the transmittance and further corrects the image using histogram specification from the natural image histogram prior, however there are artifacts and blocking artifacts in the image that do not conform to the prior, as shown in the second and fourth lines of fig. 9. Peng estimates the scene depth according to fuzzy prior, has certain effect on the scattering problem, but does not solve the color distortion problem, and has content distortion in a high-brightness area. The ULA estimates the scene depth through the linear relation between RGB, and the class I light attenuation coefficient is also used, however, the R channel oversaturation exists in the partial image, and the processing result is red overall. Berman processes the image with ten light attenuation coefficients and selects the best result by gray world priors, but also has the problem of poor robustness. The NUDCP obtains image background light through a background light estimation model, utilizes the background light to compensate the transmissivity, and the result is better, but the distortion phenomenon caused by overexposure exists. The method obtains the light attenuation rate through the background light and the estimation model, introduces the water depth factor, has better subjective effect on various images, and has high robustness.
The invention selects 70 images containing various water quality conditions, and measures the performance of the algorithm from four aspects of chroma, contrast, image content and underwater comprehensive indexes. In consideration of the limitation of evaluation indexes, extremely high abnormal scores in all algorithms are removed, the rational scores are averaged, objective evaluation results are shown in table 4, and the bold font is the highest score in a single index.
TABLE 4 Objective scores for Underwater processing Algorithm
Figure BDA0003287405890000211
And a dispersion evaluation method is used on the chromaticity index, and the method is used for measuring the color cast condition of the image based on human eye perception on the CIELab color space. K represents the color shift score, with a higher score indicating greater color distortion. The estimated value of the light attenuation model is adaptive to the attenuation condition of each image, and can better solve various water quality images, so that the estimated value has the highest score in all comparison algorithms. NUDCP has a second highest color recovery score, which is consistent with subjective results. Peng over-enhances the image contrast, which leads to an increased color distortion and therefore a lower evaluation score.
The average gradient provides a better measure of the image contrast, with higher values of the index giving higher contrast. In all comparison algorithms, the score of Li is closer to that of the present invention, however, partial images still have color cast, and therefore the score is slightly lower than that of the present invention. Although neither UDCP nor Galdran introduce a light attenuation coefficient, UDCP is higher than Galdran on the average gradient fraction because UDCP has a better defogging effect than Galdran.
The invention adopts the energy entropy index to represent the performance of the restoration algorithm in the restoration of the image content, and the higher the value is, the more the content of the image is. The defogging effect of Li is better than the present invention in some images, so the fraction of the present method is slightly lower than the fraction of Li. However, artifacts and blocking artifacts in flat areas also affect the index, and in combination with subjective images, the method of the present invention recovers underwater image content well. The processing results of other algorithms have content distortion caused by overexposure, contrast overexcitation, and red oversaturation, and thus the energy entropy scores are all low.
UCIQE is an underwater comprehensive evaluation method and is widely applied to quality evaluation of underwater processing algorithms. Among all comparison algorithms, the invention has the highest score, which indicates that the algorithm can better recover the underwater distorted image. Li takes a natural image histogram prior as a correction standard, while Berman takes the color constancy of a natural image as a standard, and the two methods both take the natural image prior as a standard and therefore have higher comprehensive scores. NUDCP has a good subjective effect, but overexposure distortion is the main cause of the low score.
Finally, it is noted that the above-mentioned embodiments illustrate rather than limit the invention, and that, while the invention has been described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An underwater image sharpening processing method based on light attenuation and depth estimation is characterized by comprising the following steps:
s1, acquiring an underwater image to be processed;
s2, carrying out scene depth estimation operation on the underwater image to be processed to obtain a scene depth map of the underwater image to be processed;
s3, extracting a background light value of the underwater image to be processed, performing color preprocessing on the underwater image to be processed according to the extracted background light value, and estimating the light attenuation rate of the underwater image to be processed after the color preprocessing;
s4, estimating the water depth value of the underwater image to be processed according to the scene depth map, the background light value and the light attenuation rate of the underwater image to be processed;
and S5, restoring the underwater image to be processed according to the estimated water depth value, the scene depth map of the underwater image to be processed, the background light value and the light attenuation rate to obtain a clear image of the underwater image to be processed.
2. The method for processing underwater image sharpening based on light attenuation and depth estimation according to claim 1, wherein in step S2, the scene depth estimation network is trained by using the known underwater sample image and the pseudo-underwater depth image as training input data, and the trained scene depth estimation network is used to process the underwater image to be processed to obtain a scene depth map of the underwater image to be processed; the pseudo-underwater depth image is an image obtained by carrying out atomization synthesis processing on an overwater space scene image with a depth label and then carrying out underwater image style migration processing.
3. The method for underwater image sharpening processing based on light attenuation and depth estimation according to claim 2, wherein the training step of the scene depth estimation network comprises:
s201, acquiring an underwater sample image and an overwater space scene image with a depth label; the depth label of the water space scene image is used for indicating scene depth information of the water space scene image; the underwater sample image is an existing underwater image serving as a usable training sample, but the original underwater sample image is not provided with a depth label.
S202, carrying out atomization synthesis processing on the acquired overwater space scene image by means of atmospheric scattering data to obtain an overwater space scene atomization synthetic image;
s203, performing style migration training by taking the underwater sample image as training input data of a style migration network, performing underwater image style migration processing on the atomized synthetic image of the overwater space scene by using the trained style migration network, and taking the processed image as a pseudo underwater depth image;
s204, taking the pseudo-underwater depth image as training input data of the scene depth estimation network, taking a depth label of the overwater space scene image corresponding to the pseudo-underwater depth image as a training result label of the scene depth estimation network, and performing preliminary training on the scene depth estimation network;
s205, carrying out scene depth estimation on the underwater sample image by using the scene depth estimation network after the pre-training to obtain a depth label of the underwater sample image; the depth label of the underwater sample image is used for indicating scene depth information of the underwater sample image;
s206, taking the underwater sample image and the pseudo-underwater depth image as training input data of the scene depth estimation network after the preparation training, taking the depth label of the underwater sample image and the depth label of the overwater space scene image corresponding to the pseudo-underwater depth image as training result labels of the scene depth estimation network after the preparation training, and training again to obtain the completely trained scene depth estimation network;
s207, carrying out scene depth estimation on the underwater image to be processed by using the completely trained scene depth estimation network to obtain a scene depth value of the underwater image to be processed, and carrying out depth image conversion processing according to the scene depth value of the underwater image to be processed to obtain a scene depth image of the underwater image to be processed.
4. The method for processing underwater image sharpening based on light attenuation and depth estimation according to claim 1, wherein in step S3, the manner of extracting the background light value of the underwater image to be processed is as follows:
according to a scene depth map of an underwater image to be processed, selecting an image area which is 5% larger than a field depth value in the underwater image to be processed as a background light value area, then selecting pixel points which are 1% larger than the sum of RGB (red, green and blue) three-color channels from the background light value area as background light candidate points, and taking the median value of each background light candidate point in each color channel as the background light value of the corresponding color channel:
Bc=Median(Bc_cand),c∈{r,g,b};
wherein, when c is ∈ { r, g, B }, Br、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G, B color channels; b isr_cand、Bg_candAnd Bb_candRespectively obtaining an R color channel value set, a G color channel value set and a B color channel value set of each background light candidate point in the underwater image to be processed; median () is a Median operator.
5. The method for sharpening underwater images based on light attenuation and depth estimation as claimed in claim 1, wherein if the underwater image to be processed is a pseudo underwater depth image, after the scene depth value of the underwater image to be processed is obtained through the processing of step S2, before the step S3 is executed, the underwater image to be processed is subjected to scene depth transformation processing; the pseudo underwater depth image is an image obtained by carrying out atomization synthesis processing on an overwater space scene image with a depth label and then carrying out underwater image style migration processing;
carrying out scene depth transformation processing on an underwater image to be processed according to the following formula:
Figure FDA0003287405880000021
dep is a scene depth value of the underwater image to be processed after the scene depth is transformed; median () is a Median operator; d is the scene depth value of the underwater image to be processed before the scene depth is transformed; sigrThe actual space apparent distance range value, Sig, of the original overwater space scene image corresponding to the pseudo-underwater depth imager-simThe range value of the sight distance of the underwater image scene to be simulated.
6. The method for underwater image sharpness processing based on light attenuation and depth estimation according to claim 1, wherein in step S3, the underwater image to be processed is color preprocessed by:
for each pixel point in the underwater image to be processed, carrying out color preprocessing on R, B color channels based on the value of the G color channel under the condition of the following light attenuation ratio:
Figure FDA0003287405880000031
in the formula Ir_pAnd Ib_pRespectively an R color channel value and a B color channel value, I, of the underwater image to be processed after color preprocessingr、IbAnd IgRespectively obtaining an R color channel value, a B color channel value and a G color channel value of an original underwater image to be processed; b isr、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G, B color channels; abs () is the operator to find the absolute value;
Figure FDA0003287405880000032
the light attenuation ratio of the green light wavelength to the red light wavelength;
Figure FDA0003287405880000033
is the light attenuation ratio of green light wavelength to blue light wavelength.
7. The method for underwater image sharpness processing based on light attenuation and depth estimation according to claim 1, wherein in step S3, the light attenuation rate of the underwater image to be processed after color preprocessing is estimated by:
firstly, according to a scene depth map of an underwater image to be processed, selecting an image area which is 5% larger than a field depth value in the underwater image to be processed as a background light value area, then selecting pixel points which are 1% larger than the sum of RGB (red, green and blue) three-color channels from each pixel corresponding to the background light value area in the underwater image to be processed after color preprocessing as background light candidate points, and taking the median value of each background light candidate point in each color channel as the background light value of the corresponding color channel:
then, estimating the light attenuation rate of the underwater image to be processed after color preprocessing in an R color channel according to the following relational expression:
Figure FDA0003287405880000034
in the formula, betarThe light attenuation rate of the underwater image to be processed after color preprocessing in an R color channel is obtained; when c ∈ { g, b },
Figure FDA0003287405880000035
is the light attenuation ratio of green light wavelength to red light wavelength,
Figure FDA0003287405880000036
the light attenuation ratio of the blue light wavelength to the red light wavelength; coefficient of performance
Figure FDA0003287405880000037
The values of (a) are determined according to the following table:
Figure FDA0003287405880000038
after the light attenuation rate of the underwater image to be processed after the color pretreatment in the R color channel is obtained, the light attenuation rate beta of the underwater image to be processed after the color pretreatment in the G color channel can be respectively obtained according to the light attenuation ratio relation among the wavelengths of the red light, the green light and the blue lightgAnd light attenuation ratio beta of B color channelb
8. The method for underwater image sharpness processing based on light attenuation and depth estimation according to claim 1, wherein the step S4 includes:
s401, determining a first water depth candidate value D according to a relational expression of the light attenuation coefficient, the water depth value and the background light1
Figure FDA0003287405880000041
In the formula, Br、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G, B color channels; beta is ar、βgAnd betabRespectively representing the light attenuation rates of the underwater image to be processed after the color preprocessing in R, G, B color channels; vmaxThe maximum brightness intensity of the underwater image to be processed after color preprocessing is obtained; min () is the operator to find the minimum;
s402, determining according to the following formulaSecond water depth candidate value D2
Figure FDA0003287405880000042
Wherein the content of the first and second substances,
Figure FDA0003287405880000043
wherein, when c is ∈ { r, g, B }, Br、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G and B color channels; i isr、IgAnd IbRespectively obtaining an R color channel value, a G color channel value and a B color channel value of an original underwater image to be processed; d (i) is the scene depth value of the ith pixel point in the underwater image to be processed, wherein i represents the position label of the pixel point in the image; beta is ar、βgAnd betabRespectively representing the light attenuation rates of the underwater image to be processed after the color preprocessing at R, G and a color channel B; e is a natural exponential constant; abs () is the operator to find the absolute value; mean () is the averaging operator;
s403, determining a final water depth value Df
Df=Max(D1,D2);
In the formula, Max () is an operator for solving the maximum value; the final water depth value D is obtainedfAs an estimated water depth value of the underwater image to be processed.
9. The method for underwater image sharpness processing based on light attenuation and depth estimation according to claim 1, wherein in step S5, the underwater image to be processed is subjected to image restoration according to the following formula:
Figure FDA0003287405880000044
wherein the content of the first and second substances,
Figure FDA0003287405880000045
in the formula, J (i) is the pixel value of the ith pixel point in the sharpening image obtained by carrying out image restoration on the underwater image to be processed; h (i) is the pixel value of the ith pixel point in the no-light attenuation image obtained after the no-light attenuation treatment is carried out on the underwater image to be treated; when c ∈ { r, g, b }, Hr(i)、Hg(i) And Hb(i) Respectively carrying out non-light attenuation treatment on the underwater image to be treated to obtain an R color channel value, a G color channel value and a B color channel value of an ith pixel point in the non-light attenuation image; i isr(i)、Ig(i) And Ib(i) Respectively obtaining an R color channel value, a G color channel value and a B color channel value of an ith pixel point in an original underwater image to be processed; d is a water depth value of the underwater image to be processed; b isr、BgAnd BbRespectively representing the background light values of the underwater image to be processed in R, G and B color channels; beta is ar、βgAnd betabRespectively representing the light attenuation rates of the underwater image to be processed after the color preprocessing at R, G and a color channel B; e is a natural exponential constant; a is the value of atmospheric light, and alpha is the scattering coefficient;
if the underwater image to be processed is a real underwater image, d (i) is a scene depth value of the ith pixel point in the underwater image to be processed, and dep (i) is d (i);
if the underwater image to be processed is a pseudo underwater depth image, dep (i) is the scene depth value of the ith pixel point after the underwater image to be processed is subjected to scene depth transformation, and d (i) is the scene depth value of the ith pixel point before the underwater image to be processed is subjected to scene depth transformation; the method for carrying out scene depth transformation processing on the underwater image to be processed comprises the following steps:
Figure FDA0003287405880000051
dep is a scene depth value of the underwater image to be processed after the scene depth is transformed; median () is a Median operator; d is the scene depth value of the underwater image to be processed before the scene depth is transformed; sigrCorresponding original for pseudo-underwater depth imageActual spatial range of view, Sig, of spatial scene images over waterr-simThe range value of the sight distance of the underwater image scene to be simulated.
10. The method for underwater image sharpening based on light attenuation and depth estimation according to claim 9, wherein in step S5, after obtaining the sharpened image of the underwater image to be processed, the method further performs contrast enhancement processing to obtain the final sharpened image of the underwater image to be processed.
CN202111151760.9A 2021-09-29 2021-09-29 Underwater image sharpening processing method based on light attenuation and depth estimation Active CN113850747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111151760.9A CN113850747B (en) 2021-09-29 2021-09-29 Underwater image sharpening processing method based on light attenuation and depth estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111151760.9A CN113850747B (en) 2021-09-29 2021-09-29 Underwater image sharpening processing method based on light attenuation and depth estimation

Publications (2)

Publication Number Publication Date
CN113850747A true CN113850747A (en) 2021-12-28
CN113850747B CN113850747B (en) 2024-06-14

Family

ID=78976960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111151760.9A Active CN113850747B (en) 2021-09-29 2021-09-29 Underwater image sharpening processing method based on light attenuation and depth estimation

Country Status (1)

Country Link
CN (1) CN113850747B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174774A (en) * 2022-06-29 2022-10-11 上海飞机制造有限公司 Depth image compression method, device, equipment and storage medium
CN115760582A (en) * 2023-01-09 2023-03-07 吉林大学 Super-resolution method for underwater depth map
CN115908998A (en) * 2022-11-17 2023-04-04 北京星天科技有限公司 Training method of water depth data identification model, water depth data identification method and device
CN116452470A (en) * 2023-06-20 2023-07-18 深圳市欧冶半导体有限公司 Image defogging method and device based on deep learning staged training

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080151057A1 (en) * 2006-12-22 2008-06-26 Nikon Corporation Image capturing apparatus with clarity sensor, underwater image compensation and underwater flash compensation
CN107316278A (en) * 2017-05-13 2017-11-03 天津大学 A kind of underwater picture clearness processing method
WO2017198746A1 (en) * 2016-05-18 2017-11-23 Tomtom International B.V. Methods and systems for underwater digital image processing
CN107563980A (en) * 2017-09-04 2018-01-09 天津大学 Underwater picture clarification method based on Underwater Imaging model and the depth of field
US20180286066A1 (en) * 2015-09-18 2018-10-04 The Regents Of The University Of California Cameras and depth estimation of images acquired in a distorting medium
CN112070683A (en) * 2020-07-21 2020-12-11 西北工业大学 Underwater polarization image restoration method based on polarization and wavelength attenuation joint optimization

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080151057A1 (en) * 2006-12-22 2008-06-26 Nikon Corporation Image capturing apparatus with clarity sensor, underwater image compensation and underwater flash compensation
US20180286066A1 (en) * 2015-09-18 2018-10-04 The Regents Of The University Of California Cameras and depth estimation of images acquired in a distorting medium
WO2017198746A1 (en) * 2016-05-18 2017-11-23 Tomtom International B.V. Methods and systems for underwater digital image processing
CN107316278A (en) * 2017-05-13 2017-11-03 天津大学 A kind of underwater picture clearness processing method
CN107563980A (en) * 2017-09-04 2018-01-09 天津大学 Underwater picture clarification method based on Underwater Imaging model and the depth of field
CN112070683A (en) * 2020-07-21 2020-12-11 西北工业大学 Underwater polarization image restoration method based on polarization and wavelength attenuation joint optimization

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李佳宽;于洪志;: "双目立体视觉的水下应用", 科技创新与应用, no. 32, 13 November 2018 (2018-11-13) *
翟艺书;王宏: "基于光学深度估计的雾天降质图像清晰化复原", 计算机仿真, vol. 27, no. 3, 15 March 2010 (2010-03-15) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174774A (en) * 2022-06-29 2022-10-11 上海飞机制造有限公司 Depth image compression method, device, equipment and storage medium
CN115174774B (en) * 2022-06-29 2024-01-26 上海飞机制造有限公司 Depth image compression method, device, equipment and storage medium
CN115908998A (en) * 2022-11-17 2023-04-04 北京星天科技有限公司 Training method of water depth data identification model, water depth data identification method and device
CN115760582A (en) * 2023-01-09 2023-03-07 吉林大学 Super-resolution method for underwater depth map
CN116452470A (en) * 2023-06-20 2023-07-18 深圳市欧冶半导体有限公司 Image defogging method and device based on deep learning staged training
CN116452470B (en) * 2023-06-20 2023-09-15 深圳市欧冶半导体有限公司 Image defogging method and device based on deep learning staged training

Also Published As

Publication number Publication date
CN113850747B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
CN113850747B (en) Underwater image sharpening processing method based on light attenuation and depth estimation
CN108596853A (en) Underwater picture Enhancement Method based on bias light statistical model and transmission map optimization
CN111161170B (en) Underwater image comprehensive enhancement method for target recognition
CN106485681B (en) Underwater color image restoration method based on color correction and red channel prior
CN107798661B (en) Self-adaptive image enhancement method
CN110570360B (en) Retinex-based robust and comprehensive low-quality illumination image enhancement method
CN110288550B (en) Single-image defogging method for generating countermeasure network based on priori knowledge guiding condition
CN107527325B (en) Monocular underwater vision enhancement method based on dark channel priority
CN111861896A (en) UUV-oriented underwater image color compensation and recovery method
CN109741285B (en) Method and system for constructing underwater image data set
CN113284061B (en) Underwater image enhancement method based on gradient network
CN109118450B (en) Low-quality image enhancement method under sand weather condition
CN111462002B (en) Underwater image enhancement and restoration method based on convolutional neural network
CN112070683A (en) Underwater polarization image restoration method based on polarization and wavelength attenuation joint optimization
CN108596843B (en) Underwater image color recovery algorithm based on bright channel
Huang et al. Underwater image enhancement based on color restoration and dual image wavelet fusion
CN116757949A (en) Atmosphere-ocean scattering environment degradation image restoration method and system
Zhang et al. An underwater image enhancement method based on local white balance
CN115456910A (en) Color recovery method for serious color distortion underwater image
Ting et al. Underwater Image Enhancement Based on IMSRCR and CLAHE-WGIF
Zhang et al. A two-stage underwater image enhancement method
Arora et al. HOG and SIFT Transformation Algorithms for the Underwater Image Fusion
Guodong et al. Underwater image enhancement and detection based on convolutional DCP and YOLOv5
Li et al. Research on improved image recovery algorithm based on Dark-Channel and multi-scale Retinex theory
Fayaz et al. GALEIR: Global Atmospheric Light Estimation based Underwater Image Restoration

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant