CN115439349A - Underwater SLAM optimization method based on image enhancement - Google Patents


Info

Publication number
CN115439349A
Authority
CN
China
Prior art keywords
underwater
image
light
optimization method
frequency
Prior art date
Legal status
Pending
Application number
CN202210965506.0A
Other languages
Chinese (zh)
Inventor
陈凯锐
黄沛昇
赖冠宇
杨嘉俊
蔡培周
周晓阳
Current Assignee
Guangzhou University
Original Assignee
Guangzhou University
Priority date
Filing date
Publication date
Application filed by Guangzhou University filed Critical Guangzhou University
Priority to CN202210965506.0A priority Critical patent/CN115439349A/en
Publication of CN115439349A publication Critical patent/CN115439349A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V 10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/60 Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 Proximity, similarity or dissimilarity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an underwater SLAM (simultaneous localization and mapping) optimization method based on image enhancement, which comprises the following steps: S1, obtaining corrected feature point coordinates according to the refraction principle of light; S2, processing the gray value of each pixel point of the original image acquired by the underwater robot through a linear transformation in the spatial domain; S3, applying a high-pass filter in the frequency domain, setting a proper cut-off frequency, and allowing only signals above the cut-off frequency to pass; S4, introducing a land-based dark channel defogging algorithm, and controlling the defogging degree, the minimum transmittance and the size of the partitioned window region; and S5, performing feature extraction as in traditional visual SLAM on the image processed in S4 to obtain a global map. The method improves the definition of images acquired underwater, is beneficial to extracting better feature points, and thus yields a clearer underwater three-dimensional model.

Description

Underwater SLAM optimization method based on image enhancement
Technical Field
The invention relates to the field of visual images, in particular to an underwater SLAM optimization method based on image enhancement.
Background
Currently, visual SLAM technology mainly describes a robot synchronously performing mapping and self-localization in an unknown environment. The robot builds a map of the unknown environment from the data acquired by its sensors, and then localizes itself and extends the map by matching the environmental features observed during motion with the features in the map. Visual SLAM greatly helps robot positioning and navigation, and has the advantages of low cost, rich scene information, and the like.
The existing visual SLAM technology is strongly affected by the environment, so it is mostly applied on land and in the air in clear weather. It is used relatively rarely in underwater environments, mainly because, owing to the particularity of the underwater environment, the image information acquired by the robot's vision sensor suffers light refraction and scattering and is affected by environmental factors such as water quality and brightness. The resulting images are dim and blurry, which hinders subsequent feature extraction and matching and degrades both the positioning accuracy of the underwater robot and the final three-dimensional map reconstruction.
Therefore, how to design an image acquisition method by which a robot vision sensor can acquire high-definition images in an underwater environment is a problem to be solved by those skilled in the art.
Disclosure of Invention
To solve the above problems, the invention provides an underwater SLAM optimization method based on image enhancement that adopts an improved front-end image processing algorithm.
The invention provides the following technical scheme:
An underwater SLAM optimization method based on image enhancement comprises the following steps:
S1, obtaining corrected feature point coordinates according to the refraction principle of light;
S2, processing the gray value of each pixel point of the original image acquired by the underwater robot through a linear transformation in the spatial domain;
S3, applying a high-pass filter in the frequency domain, setting a proper cut-off frequency, and allowing only signals above the cut-off frequency to pass;
S4, introducing a land-based dark channel defogging algorithm, and controlling the defogging degree, the minimum transmittance and the size of the partitioned window region;
and S5, performing feature extraction as in traditional visual SLAM on the image processed in S4 to obtain a global map.
Preferably, in S1, the corrected feature point coordinates are obtained through the feature point coordinate equations:

x_B = x_A + d·(tan θ − tan λ)
y_B = y_A + d·(tan α − tan β)

where point A(x_A, y_A) is the projection point after underwater refraction, point B(x_B, y_B) is the projection point without underwater refraction, α and β respectively denote the incident angles of the incident light and refracted light in the y-axis direction, θ and λ respectively denote the incident angles of the incident light and refracted light in the x-axis direction, and d is the distance between the protective glass layer and the lens. The corrected feature point coordinates (x_B, y_B) are calculated through the feature point coordinate equations.
Preferably, in S2, processing the gray value of each pixel point of the original image acquired by the underwater robot is represented as:

g(x, y) = E_H[f(x, y)]

where g(x, y) is the enhanced image, f(x, y) is the original image, and E_H is an enhancement function.
Preferably, in S3, the original image is converted from the spatial domain into the Fourier (frequency) domain, a cut-off frequency is set, a high-pass filter is used to pass signals above the cut-off frequency, and signals below the cut-off frequency are filtered out.
Preferably, in S4, underwater optical imaging is modeled; the underwater optical imaging model comprises a reflection model and an illumination model, where the expressions of the reflection model S and the illumination model E are as follows:

S = L(x, y)R(x, y)
E = (L(x, y)R(x, y))·e^(−βd) + E_∞·(1 − e^(−βd))

where L is the illumination intensity of the incident light, and L(x, y) is the intensity at coordinate position (x, y) after the light reaches the surface of the target object through attenuation during propagation; R(x, y) denotes the reflection function of the point at coordinate position (x, y) on the target object; β is the attenuation parameter of the water medium; d is the light propagation length; E_∞ and E are respectively the light intensity at infinity and the light intensity reaching the underwater robot camera, where E_∞ is a constant.
Preferably, based on the underwater optical imaging model, a land defogging algorithm is combined with the similarity between the underwater environment and foggy weather to obtain an underwater defogging function based on the dark channel prior, where the dark channel image J_dark is defined as:

J_dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} J_c(y) )

where J_c is one color channel of J, and r, g, b denote the three color channels; Ω(x) is a window region centered on x; the value of J_dark is very low and approaches 0; J is the image acquired a priori, and J_dark is the dark channel of J.
Preferably, assuming that the atmospheric light is constant within a small window, it follows that:

min_{y∈Ω(x)} ( min_c E_c(y) / E_∞^c ) = t(x) · min_{y∈Ω(x)} ( min_c S_c(y) / E_∞^c ) + (1 − t(x))
t(x) = e^(−βd)
t(x) = 1 − min_{y∈Ω(x)} ( min_c E_c(y) / E_∞^c )

where E_∞^c denotes the atmospheric light of channel c, and E_c(y) and S_c(y) denote one color channel of the illumination model and the reflection model, respectively.
Preferably, a constant λ (0 < λ < 1) is introduced, and fog in part of the scene is preserved by adjusting the value of λ:

t(x) = 1 − λ · min_{y∈Ω(x)} ( min_c E_c(y) / E_∞^c )

where Ω(x) is a fixed window and t(x) denotes the transmission factor; a lower limit t_0 is set for the transmission factor t(x), and the defogged image S(x) is calculated as:

S(x) = (E(x) − E_∞) / max(t(x), t_0) + E_∞

where E(x) is the atmospheric light component and E_∞ denotes the maximum-intensity pixel of the light component.
Preferably, in S5, the feature extraction steps as in traditional visual SLAM comprise ORB descriptor extraction, feature matching, local mapping, loop detection, and global mapping.
The invention has the beneficial effects that:
the method is different from the traditional land vision SLAM image processing method, innovatively improves the original algorithm, and has the greatest advantages of solving the problems caused by the particularity of the underwater environment, such as dim and fuzzy images caused by light refraction, light scattering, water turbidity and the like, improving the definition of the images acquired underwater, being beneficial to extracting more excellent characteristic points and further obtaining a clearer underwater three-dimensional model.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, without inventive effort, further drawings may be derived from the following figures.
FIG. 1 is a flowchart of an underwater SLAM optimization method based on image enhancement according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the refraction of rays by an underwater lens according to an embodiment of the present invention;
FIG. 3 is a schematic view of an underwater optical imaging model according to an embodiment of the present invention;
FIG. 4 is a flowchart of an underwater image processing algorithm with dark primary colors according to an embodiment of the present invention.
Detailed Description
An image enhancement-based underwater SLAM optimization method is described in further detail below with reference to specific embodiments, which are provided for purposes of comparison and explanation only, and the present invention is not limited to these embodiments.
Examples
Referring to fig. 1, the underwater SLAM optimization method based on image enhancement in the embodiment of the present invention includes the following steps:
S1, establishing the coordinate equations of the feature points before and after correction according to the refraction principle of light;
As shown in fig. 2, which illustrates the refraction of rays at the underwater lens, point A is the refracted projection point and point B is the un-refracted projection point. The incident angles α and β of the incident and refracted light in the y-axis direction, and the incident angles θ and λ in the x-axis direction, are measured from the real scene, and the new coordinate equations are established:

x_B = x_A + d·(tan θ − tan λ)
y_B = y_A + d·(tan α − tan β)

where d is the distance between the protective glass layer and the lens, from which the corrected feature point coordinates (x_B, y_B) can be calculated.
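The correction above can be sketched as follows. This is a hypothetical reading of the equations (the sign convention and the exact form are assumptions, since the original equations are rendered as images in the patent); the function name and arguments are illustrative.

```python
import math

def correct_feature_point(x_a, y_a, theta, lam, alpha, beta, d):
    """Shift the refracted projection point A back toward the
    un-refracted point B, given the measured incidence angles
    (radians) and the glass-to-lens distance d.

    Assumption: the lateral displacement in each axis equals d times
    the difference of the tangents of the incident and refracted
    angles; the sign convention is illustrative.
    """
    x_b = x_a + d * (math.tan(theta) - math.tan(lam))
    y_b = y_a + d * (math.tan(alpha) - math.tan(beta))
    return x_b, y_b
```

When the incident and refracted angles coincide (no refraction), the point is left unchanged, as the model requires.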
S2, processing the gray value of each pixel point of the original image acquired by the underwater robot through a linear transformation in the spatial domain;
the spatial domain image enhancement is adopted, the gray value of the pixel point is directly processed, and the processing method can be expressed as follows:
g(x,y)=E H [f(x,y)]
where g (x, y) is the enhanced image, f (x, y) is the original image, E H For enhancement functions, usually linear transformations, histogram equalization and histogram specification are used. Firstly, more balancing gray values which are distributed more intensively in the original image through histogram balancing of the original image; the histogram is specified through a histogram, a change function of a specified histogram is calculated, the equalized image histogram is subjected to linear transformation and is transformed into a corresponding specified histogram, and the contrast of the image can be obviously enhanced;
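A minimal sketch of the histogram-equalization step described above, for an 8-bit grayscale image (a standard CDF-based mapping; the function name is illustrative and not from the patent):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram-equalize an 8-bit grayscale image: map each gray
    level through the normalized cumulative histogram so the output
    levels spread over the full [0, 255] range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # first non-zero CDF value
    denom = max(int(cdf[-1] - cdf_min), 1)     # guard against flat images
    # classic equalization mapping: scale the CDF to the full gray range
    lut = np.clip(np.round((cdf - cdf_min) / denom * 255), 0, 255).astype(np.uint8)
    return lut[img]
```

Applied to a low-contrast image whose gray values cluster in a narrow band, the output occupies the full dynamic range.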
s3, a high-pass filter is used in the frequency domain, a proper cut-off frequency is set, and only signals above the cut-off frequency are allowed to pass;
the method has the advantages that the frequency domain image enhancement is adopted, the time domain transform domain of the original image is converted into the Fourier transform domain, a proper cut-off frequency is set, signals above the cut-off frequency are passed through a high-pass filter, signals below the cut-off frequency are filtered, the effect of enhancing the edge information of the image is achieved, the outline of the image information is clear, and the identification degree of the image is improved.
S4, introducing a land-based dark channel defogging algorithm, and controlling the defogging degree, the minimum transmittance and the size of the partitioned window region;
FIG. 3 shows the underwater optical imaging model, where L is the illumination intensity of the incident light, and L(x, y) is the intensity at coordinate position (x, y) after the light reaches the surface of the target object through attenuation during propagation; R(x, y) denotes the reflection function of the point at coordinate position (x, y) on the target object, a function related only to the color characteristics of the target object itself; β is the attenuation parameter of the water medium; d is the light propagation length; E_∞ and E are respectively the light intensity at infinity and the light intensity reaching the camera, where E_∞ is taken as a constant. The expressions of the reflection model and the illumination model are as follows:

S = L(x, y)R(x, y)
E = (L(x, y)R(x, y))·e^(−βd) + E_∞·(1 − e^(−βd))
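The illumination model above can be exercised directly. The sketch below applies the model's two terms (attenuated scene radiance plus backscattered light) to a scene-radiance array; the function name is illustrative:

```python
import numpy as np

def underwater_image(S, beta, d, E_inf):
    """Apply the illumination model E = S*e^(-beta*d) + E_inf*(1 - e^(-beta*d)):
    scene radiance S is attenuated exponentially with propagation
    length d and mixed with the backscattered light E_inf."""
    t = np.exp(-beta * d)      # transmission factor t(x) = e^(-beta*d)
    return S * t + E_inf * (1 - t)
```

At zero distance the camera sees the scene radiance unchanged; as the distance grows the image converges to the backscatter E_inf, which is why distant underwater scenes look like uniform haze.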
according to the model, a land defogging algorithm is combined, and then the underwater defogging function can be obtained according to the similarity of the underwater environment and the foggy weather. Wherein the dark primary principle image is defined as:
Figure BDA0003794739990000071
in the formula, J c A certain color channel of J; Ω (x) is a region centered on x. Experiments prove that J dark The value of (d) is very low and approaches 0. Let J be the image acquired a priori, then J dark J, and the empirical rule obtained by observation experiments is called the dark channel prior rule. When image enhancement is performed by this method, the transmittance needs to be calculated. Here, the
Figure BDA0003794739990000072
The optical factor parameter representing the atmosphere, assuming that the optical factor is constant in a certain small area, can be further obtained:
Figure BDA0003794739990000073
t(X)=e -βd
as known from the prior law of the dark primaries,
Figure BDA0003794739990000074
toward 0, then:
Figure BDA0003794739990000075
E c (y) and S c (y) represents a certain color channel of the illumination model and the reflection model, respectively.
In practice, even clear water may contain some suspended particles, and a distant scene may still be perceived as hazy. If the fog in the image is removed too thoroughly, depth information is lost. Therefore, in the defogging process, a constant λ (0 < λ < 1) is introduced into the above equation, and "fog" in part of the scene is retained by adjusting the value of λ:

t(x) = 1 − λ · min_{y∈Ω(x)} ( min_c E_c(y) / E_∞^c )

In the above formula, Ω(x) is taken as a fixed window, so the t(x) obtained by defogging exhibits a blocking effect, and removing the blocking effect makes the transmittance more accurate. In addition, when t(x) tends to zero, the term t(x)·S(x) in the imaging equation also tends to zero. Therefore, a lower limit t_0 is set for the transmission factor t(x), from which the defogged image can be calculated:

S(x) = (E(x) − E_∞) / max(t(x), t_0) + E_∞

where E(x) is the atmospheric light component and E_∞ denotes the maximum-intensity pixel of the light component.
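The dark-channel pipeline above (dark channel → transmission with the retention factor λ → recovery with the floor t_0) can be sketched as follows. This is a simplified NumPy sketch, not the patent's implementation: the windowed minimum uses a plain loop, atmospheric light is taken as the single brightest pixel, and the parameter defaults are assumptions.

```python
import numpy as np

def dark_channel(img, win):
    """Per-pixel minimum over color channels, followed by a win x win
    minimum filter (a simple loop-based erosion)."""
    mins = img.min(axis=2)
    pad = win // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    h, w = mins.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + win, j:j + win].min()
    return out

def dehaze(E, lam=0.95, t0=0.1, win=5):
    """Dark-channel dehazing sketch: estimate atmospheric light E_inf
    as the brightest pixel, compute transmission
    t(x) = 1 - lam * dark_channel(E / E_inf), then invert the imaging
    model with the floor t0. Assumes E has positive values."""
    E = E.astype(float)
    flat = E.reshape(-1, 3)
    E_inf = flat[flat.sum(axis=1).argmax()]    # max-intensity pixel
    t = 1.0 - lam * dark_channel(E / E_inf, win)
    t = np.maximum(t, t0)[..., None]           # lower limit t0
    return (E - E_inf) / t + E_inf
```

Setting λ below 1 deliberately leaves residual haze, which, as noted above, preserves the depth cue that full removal would destroy.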
The dark channel underwater image processing algorithm flow is shown in fig. 4.
S5, performing feature extraction as in traditional visual SLAM on the image processed in S4 to obtain a global map;
and performing feature extraction in the traditional visual SALM on the processed image, extracting an ORB descriptor, performing feature matching, and removing points which do not meet the graph optimization by utilizing a PNP optimization matching pair. Estimating the relative motion of the current frame and the previous key frame according to the matched characteristic points, calculating the pose relationship, repositioning when tracking is lost, and tracking a local map for local optimization when enough matching time is available.
And then, local map building is carried out, the key frames are inserted into the map, the common view is updated, map points which do not meet the requirements are removed, new map points are triangulated, more matching pairs are searched in the domain key frames, local BA is executed, and then redundant local key frames are checked and removed.
In the loop detection step, whether a key frame exists in the queue is checked; in an embodiment, if fewer than 10 key frames have passed since the last loop, loop detection is skipped. Loop detection mainly performs appearance verification on the key frames and geometric verification on the relative motion relationship.
Finally, global mapping is performed: the map points obtained in the previous steps are processed, a global BA is started, the poses of all key frames and all map points are updated, and the results are assembled into the final global map.
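The descriptor-matching step in the pipeline above can be illustrated with brute-force Hamming matching of ORB-style binary descriptors. This is a pure-NumPy sketch standing in for a real matcher (e.g. OpenCV's); the function name, the 32-byte descriptor shape, and the ratio-test threshold are illustrative assumptions:

```python
import numpy as np

def match_binary_descriptors(desc_a, desc_b, ratio=0.8):
    """Brute-force matching of binary (ORB-style) descriptors given
    as uint8 arrays of shape (n, 32). For each descriptor in desc_a,
    find its nearest neighbour in desc_b by Hamming distance and keep
    the match only if it passes the ratio test against the
    second-best candidate."""
    # Hamming distance for every pair via XOR + bit count
    xor = np.bitwise_xor(desc_a[:, None, :], desc_b[None, :, :])
    dist = np.unpackbits(xor, axis=2).sum(axis=2)      # (n_a, n_b)
    matches = []
    for i, row in enumerate(dist):
        order = np.argsort(row)
        best = order[0]
        second = order[1] if len(order) > 1 else order[0]
        if row[best] < ratio * row[second]:            # ratio test
            matches.append((i, int(best), int(row[best])))
    return matches
```

The ratio test discards ambiguous matches, which is the kind of filtering the PnP-based outlier removal described above then refines geometrically.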
The embodiment of the invention mainly solves the problems caused by the particularity of the underwater environment, so that the underwater robot can extract features well during motion and construct a clearer underwater three-dimensional model. Different from the traditional land visual SLAM image processing method, the original algorithm is innovatively improved; the greatest advantage is solving the problems caused by the particularity of the underwater environment, such as dim and blurry images caused by light refraction, light scattering and water turbidity, improving the definition of images acquired underwater, helping to extract better feature points, and thus obtaining a clearer underwater three-dimensional model.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the protection scope of the present invention, although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (9)

1. An underwater SLAM optimization method based on image enhancement, characterized by comprising the following steps:
S1, obtaining corrected feature point coordinates according to the refraction principle of light;
S2, processing the gray value of each pixel point of the original image acquired by the underwater robot through a linear transformation in the spatial domain;
S3, applying a high-pass filter in the frequency domain, setting a proper cut-off frequency, and allowing only signals above the cut-off frequency to pass;
S4, introducing a land-based dark channel defogging algorithm, and controlling the defogging degree, the minimum transmittance and the size of the partitioned window region;
and S5, performing feature extraction as in traditional visual SLAM on the image processed in S4 to obtain a global map.
2. The image enhancement-based underwater SLAM optimization method according to claim 1, wherein in S1, the corrected feature point coordinates are obtained through the feature point coordinate equations:

x_B = x_A + d·(tan θ − tan λ)
y_B = y_A + d·(tan α − tan β)

where point A(x_A, y_A) is the projection point after underwater refraction, point B(x_B, y_B) is the projection point without underwater refraction, α and β respectively denote the incident angles of the incident light and refracted light in the y-axis direction, θ and λ respectively denote the incident angles of the incident light and refracted light in the x-axis direction, and d is the distance between the protective glass layer and the lens; the corrected feature point coordinates (x_B, y_B) can be calculated through the feature point coordinate equations.
3. The image enhancement-based underwater SLAM optimization method according to claim 1, wherein in S2, processing the gray value of each pixel point of the original image acquired by the underwater robot is represented as:

g(x, y) = E_H[f(x, y)]

where g(x, y) is the enhanced image, f(x, y) is the original image, and E_H is an enhancement function.
4. The image enhancement-based underwater SLAM optimization method according to claim 1, wherein in S3, the original image is converted from the spatial domain into the Fourier (frequency) domain, a cut-off frequency is set, a high-pass filter is used to pass signals above the cut-off frequency, and signals below the cut-off frequency are filtered out.
5. The image enhancement-based underwater SLAM optimization method according to claim 1, wherein in S4, underwater optical imaging is modeled, the underwater optical imaging model comprising a reflection model and an illumination model, where the expressions of the reflection model S and the illumination model E are as follows:

S = L(x, y)R(x, y)
E = (L(x, y)R(x, y))·e^(−βd) + E_∞·(1 − e^(−βd))

where L is the illumination intensity of the incident light, and L(x, y) is the intensity at coordinate position (x, y) after the light reaches the surface of the target object through attenuation during propagation; R(x, y) denotes the reflection function of the point at coordinate position (x, y) on the target object; β is the attenuation parameter of the water medium; d is the light propagation length; E_∞ and E are respectively the light intensity at infinity and the light intensity reaching the underwater robot camera, where E_∞ is a constant.
6. The image enhancement-based underwater SLAM optimization method according to claim 5, wherein, based on the underwater optical imaging model, a land defogging algorithm is combined with the similarity between the underwater environment and foggy weather to obtain an underwater defogging function based on the dark channel prior, where the dark channel image J_dark is defined as:

J_dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} J_c(y) )

where J_c is one color channel of J, and r, g, b denote the three color channels; Ω(x) is a window region centered on x; the value of J_dark is very low and approaches 0; J is the image acquired a priori, and J_dark is the dark channel of J.
7. The image enhancement-based underwater SLAM optimization method according to claim 6, wherein, assuming that the atmospheric light is constant within a small window, it follows that:

min_{y∈Ω(x)} ( min_c E_c(y) / E_∞^c ) = t(x) · min_{y∈Ω(x)} ( min_c S_c(y) / E_∞^c ) + (1 − t(x))
t(x) = e^(−βd)
t(x) = 1 − min_{y∈Ω(x)} ( min_c E_c(y) / E_∞^c )

where E_∞^c denotes the atmospheric light of channel c, and E_c(y) and S_c(y) denote one color channel of the illumination model and the reflection model, respectively.
8. The image enhancement-based underwater SLAM optimization method according to claim 7, wherein a constant λ (0 < λ < 1) is introduced and fog in part of the scene is preserved by adjusting the value of λ:

t(x) = 1 − λ · min_{y∈Ω(x)} ( min_c E_c(y) / E_∞^c )

where Ω(x) is a fixed window and t(x) denotes the transmission factor; a lower limit t_0 is set for the transmission factor t(x), and the defogged image S(x) is calculated as:

S(x) = (E(x) − E_∞) / max(t(x), t_0) + E_∞

where E(x) is the atmospheric light component and E_∞ denotes the maximum-intensity pixel of the light component.
9. The image enhancement-based underwater SLAM optimization method according to claim 1, wherein in S5, the feature extraction steps as in traditional visual SLAM comprise ORB descriptor extraction, feature matching, local mapping, loop detection and global mapping.
CN202210965506.0A 2022-08-12 2022-08-12 Underwater SLAM optimization method based on image enhancement Pending CN115439349A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210965506.0A CN115439349A (en) 2022-08-12 2022-08-12 Underwater SLAM optimization method based on image enhancement


Publications (1)

Publication Number Publication Date
CN115439349A true CN115439349A (en) 2022-12-06

Family

ID=84242136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210965506.0A Pending CN115439349A (en) 2022-08-12 2022-08-12 Underwater SLAM optimization method based on image enhancement

Country Status (1)

Country Link
CN (1) CN115439349A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116580291A (en) * 2023-05-24 2023-08-11 中国海洋大学 Visual synchronous positioning and mapping method and system for underwater turbid strong scattering


Similar Documents

Publication Publication Date Title
CN108596849B (en) Single image defogging method based on sky region segmentation
CN110232666B (en) Underground pipeline image rapid defogging method based on dark channel prior
CN111079556A (en) Multi-temporal unmanned aerial vehicle video image change area detection and classification method
Tripathi et al. Removal of fog from images: A review
CN110956661B (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
JP5982719B2 (en) Image fog removal apparatus, image fog removal method, and image processing system
CN112837233A (en) Polarization image defogging method for acquiring transmissivity based on differential polarization
TWI489416B (en) Image recovery method
CN111553862B (en) Defogging and binocular stereoscopic vision positioning method for sea and sky background image
CN110675340A (en) Single image defogging method and medium based on improved non-local prior
CN110245600B (en) Unmanned aerial vehicle road detection method for self-adaptive initial quick stroke width
CN111210396A (en) Multispectral polarization image defogging method combined with sky light polarization model
CN111833258A (en) Image color correction method based on double-transmittance underwater imaging model
CN112465708A (en) Improved image defogging method based on dark channel
CN112561996A (en) Target detection method in autonomous underwater robot recovery docking
CN111539246A (en) Cross-spectrum face recognition method and device, electronic equipment and storage medium thereof
Bansal et al. A review of image restoration based image defogging algorithms
CN113610728A (en) Polarization double-image defogging method based on four-dark-channel mean comparison
CN115439349A (en) Underwater SLAM optimization method based on image enhancement
CN111951339A (en) Image processing method for performing parallax calculation by using heterogeneous binocular cameras
CN113487509B (en) Remote sensing image fog removal method based on pixel clustering and transmissivity fusion
CN111899269B (en) Unmanned aerial vehicle image and SAR satellite image matching method based on edge structure information
CN112862721A (en) Underground pipeline image defogging method based on dark channel and Retinex
CN109191405B (en) Aerial image defogging algorithm based on transmittance global estimation
CN116757949A (en) Atmosphere-ocean scattering environment degradation image restoration method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination