CN111899200B - Infrared image enhancement method based on 3D filtering - Google Patents


Info

Publication number
CN111899200B
Authority
CN
China
Prior art keywords
image
filtering
detail
window
guide
Prior art date
Legal status
Active
Application number
CN202010794350.5A
Other languages
Chinese (zh)
Other versions
CN111899200A (en)
Inventor
贺明
Current Assignee
Guoke Tiancheng Technology Co ltd
Original Assignee
Guoke Tiancheng Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guoke Tiancheng Technology Co ltd
Priority to CN202010794350.5A
Publication of CN111899200A
Application granted
Publication of CN111899200B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 — Image enhancement or restoration
    • G06T 5/70 — Denoising; Smoothing
    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/00 — Image analysis
    • G06T 7/20 — Analysis of motion
    • G06T 7/207 — Analysis of motion for motion estimation over a hierarchy of resolutions
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10048 — Infrared image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an infrared image enhancement method based on 3D filtering. First, an improved guided filter performs fast guided filtering on the infrared image, decomposing it into a base image and a detail image. Optical-flow motion estimation is then applied to local blocks of the detail image to obtain its motion vectors, which are used to align consecutive frames. The detail image is filtered jointly in the spatial, gray-level, and temporal dimensions, and the filtered detail image is adaptively weighted and fused with the base image to produce the final enhanced infrared image.

Description

Infrared image enhancement method based on 3D filtering
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an infrared image enhancement method based on 3D filtering.
Background
With the continuous development of science and technology, infrared imaging, which combines infrared and imaging technology, is being applied ever more widely and is already used in many fields such as security monitoring, military target detection and tracking, and medical treatment. When the temperature of an object is actually measured, the measurement is easily affected by heat transfer, heat radiation, and atmospheric attenuation, resulting in low image contrast and unclear texture details. In particular, the low contrast between target and background makes the two difficult to distinguish in an infrared image, which greatly hinders target recognition and tracking. Studying infrared enhancement algorithms is therefore very important.
Traditional image enhancement algorithms are divided into spatial-domain and frequency-domain enhancement. Spatial-domain enhancement processes pixel gray values directly; the main methods include gray stretching, histogram equalization, and unsharp masking. Frequency-domain enhancement first transforms the image to the frequency domain and then applies a frequency-domain filter to realize the enhancement. Neither spatial-domain nor frequency-domain enhancement alone can meet the requirement of existing systems to both suppress noise and enhance detail.
Disclosure of Invention
Aiming at the above deficiencies in the prior art, the infrared image enhancement method based on 3D filtering provided by the invention addresses the difficulty existing methods have in simultaneously suppressing noise and enhancing detail.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that: an infrared image enhancement method based on 3D filtering comprises the following steps:
s1, acquiring an original infrared image, and dividing the original infrared image into a basic image and a detail image through a guiding filter;
s2, carrying out motion estimation on local image blocks in the detail image by adopting a local optical flow method to obtain motion vectors of the detail image;
s3, carrying out 3D filtering on the detail image based on the motion vector to obtain a 3D filtered detail image;
and S4, carrying out self-adaptive weighted synthesis on the 3D filtered detail image and the basic image to obtain an enhanced infrared image.
Further, the formula of the guided filter in step S1 is:

$$q_u = a_k I_f(u) + b_k, \quad \forall u \in w_k$$

where $q_u$ is the output image, $I_f$ is the guide image, $w_k$ is the pixel-block window, the subscript $u$ denotes a pixel, and $a_k$ and $b_k$ are window coefficients within the pixel-block window; wherein

$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \varepsilon}, \qquad b_k = (1 - a_k)\,\bar{P}_k$$

where $\sigma_k^2$ is the variance of the guide image in the window, $\varepsilon$ is the linear-regression (regularization) coefficient, and $\bar{P}_k$ is the mean of the image to be smoothed in the window.
Further, the step S1 is specifically:
s11, taking the original infrared image as a guide image in a guide filter;
s12, respectively calculating a guide filtering window and a coefficient window of a guide filter based on the guide image to obtain a corresponding basic image;
and S13, subtracting the basic image from the original infrared image to obtain a corresponding detail image.
Further, the size of the guided filtering window is 8 × 8, and the size of the coefficient window is 4 × 4.
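As a rough illustration (not the patent's implementation), the base/detail decomposition with a self-guided filter and the 8 × 8 / 4 × 4 window pair above can be sketched in NumPy; the box-filter implementation, the value of ε, and the edge-replicated border handling are all assumptions:

```python
import numpy as np

def box_filter(img, r):
    """Mean filter over an r x r sliding window (edge-replicated borders)."""
    pad = r // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for di in range(r):
        for dj in range(r):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (r * r)

def guided_decompose(img, r_filt=8, r_coef=4, eps=0.01):
    """Split an image into base + detail with a self-guided filter.

    The guide equals the input, so a_k = var/(var+eps), b_k = (1-a_k)*mean.
    r_filt is the guided-filtering window and r_coef the smaller coefficient
    window, mirroring the 8x8 / 4x4 choice in the text.
    """
    img = img.astype(np.float64)
    mean = box_filter(img, r_filt)
    mean_sq = box_filter(img * img, r_filt)
    var = mean_sq - mean * mean
    a = var / (var + eps)          # a_k = sigma^2 / (sigma^2 + eps)
    b = (1.0 - a) * mean           # b_k = (1 - a_k) * mean
    # Average the per-window coefficients over the smaller coefficient window.
    a_bar = box_filter(a, r_coef)
    b_bar = box_filter(b, r_coef)
    base = a_bar * img + b_bar
    detail = img - base            # step S13: detail = original - base
    return base, detail
```

By construction base + detail reproduces the input exactly, and a flat region yields a near-zero detail layer.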
Further, the step S2 is specifically:
s21, determining an optical flow calculation model for motion estimation in a two-dimensional plane:
$$I_x(u)V_x + I_y(u)V_y + I_t(u) = 0, \quad u = 1, 2, \dots, n$$

where $I_x(u)$ and $I_y(u)$ are the spatial-dimension information of pixel $u$ of the detail image, $I_t(u)$ is the time-dimension information of pixel $u$, and $V_x$ and $V_y$ are the components of the motion vector $(V_x, V_y)$ in the horizontal and vertical directions;
S22, solving the optical flow calculation model by the least-squares method, the motion vector $(V_x, V_y)$ is given by:

$$\begin{pmatrix} V_x \\ V_y \end{pmatrix} = -\begin{pmatrix} \sum_i \omega I_x^2(i) & \sum_i \omega I_x(i)I_y(i) \\ \sum_i \omega I_x(i)I_y(i) & \sum_i \omega I_y^2(i) \end{pmatrix}^{-1} \begin{pmatrix} \sum_i \omega I_x(i)I_t(i) \\ \sum_i \omega I_y(i)I_t(i) \end{pmatrix}$$

where $\omega$ is a weight coefficient;

S23, setting intermediate calculation parameters in the expression for $(V_x, V_y)$, the motion vector becomes:

$$V_x = \frac{AA_{xy}\,AB_{yt} - AA_{yy}\,AB_{xt}}{AA_{xx}\,AA_{yy} - AA_{xy}^2}, \qquad V_y = \frac{AA_{xy}\,AB_{xt} - AA_{xx}\,AB_{yt}}{AA_{xx}\,AA_{yy} - AA_{xy}^2}$$

where $AA_{xx}$, $AA_{yy}$, $AA_{xy}$, $AB_{xt}$ and $AB_{yt}$ are the set intermediate calculation parameters

$$AA_{xx} = \sum_i \omega I_x^2(i), \quad AA_{yy} = \sum_i \omega I_y^2(i), \quad AA_{xy} = \sum_i \omega I_x(i)I_y(i), \quad AB_{xt} = \sum_i \omega I_x(i)I_t(i), \quad AB_{yt} = \sum_i \omega I_y(i)I_t(i).$$
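The least-squares solve above is the classic Lucas-Kanade step. A minimal NumPy sketch using the intermediate sums $AA_{xx} \dots AB_{yt}$ follows; the fallback for a degenerate (rank-deficient) window is an assumption, not taken from the patent:

```python
import numpy as np

def lk_motion_vector(Ix, Iy, It, w=None):
    """Least-squares solution of Ix*Vx + Iy*Vy + It = 0 over one local block.

    Ix, Iy, It are the spatial and temporal derivative samples of the block;
    w is an optional per-pixel weight (the omega of the text).
    """
    Ix, Iy, It = (np.asarray(a, dtype=np.float64).ravel() for a in (Ix, Iy, It))
    if w is None:
        w = np.ones_like(Ix)
    # The intermediate sums named AAxx ... AByt in the text.
    AAxx = np.sum(w * Ix * Ix)
    AAyy = np.sum(w * Iy * Iy)
    AAxy = np.sum(w * Ix * Iy)
    ABxt = np.sum(w * Ix * It)
    AByt = np.sum(w * Iy * It)
    det = AAxx * AAyy - AAxy * AAxy
    if abs(det) < 1e-12:
        return 0.0, 0.0  # aperture problem: no reliable vector for this block
    Vx = (AAxy * AByt - AAyy * ABxt) / det
    Vy = (AAxy * ABxt - AAxx * AByt) / det
    return Vx, Vy
```

For derivatives that exactly satisfy the constraint with a known flow, the solver recovers that flow.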
Further, step S3 specifically comprises:
S31, determining a 3 × 3 local window image around each pixel in the current-frame detail image;
S32, using the motion vector to locate the corresponding 3 × 3 local window images in the two detail-image frames preceding the current frame;
and S33, taking the center of the 3 × 3 local window as the center point, performing 3D filtering over the three 3 × 3 local window images in three dimensions (spatial similarity factor, gray-scale similarity factor and temporal similarity factor) to obtain the 3D-filtered detail image.
Further, the expression of the 3D filtered detail image is:

$$h(y) = \frac{\sum_{(i,j)\in S} w_s(i,j)\,w_r(i,j)\,w_k(i,j)\,h_{ij}(y)}{\sum_{(i,j)\in S} w_s(i,j)\,w_r(i,j)\,w_k(i,j)}$$

where $h(y)$ is the gray value of the 3D-filtered detail image, $h_{ij}(y)$ is the gray value of the $(i,j)$-th neighborhood element, $w_r(i,j)$ is the gray-level similarity factor of the $(i,j)$-th neighborhood element, $w_s(i,j)$ is its spatial proximity factor, $w_k(i,j)$ is the time-domain factor, and $(i,j)$ indexes the neighborhood element, with $i, j$ its horizontal and vertical coordinates; wherein

$$w_s(i,j) = \exp\!\left(-\frac{(i-k)^2 + (j-l)^2}{2\sigma_s^2}\right), \qquad w_r(i,j) = \exp\!\left(-\frac{\big(f(i,j) - f(k,l)\big)^2}{2\sigma_r^2}\right), \qquad w_k(i,j) = \exp\!\left(-\frac{\big(f_{t-1}(i,j) - f(k,l)\big)^2}{2\sigma_t^2}\right)$$

where $\sigma_s^2$ is the spatial position variance, $f(i,j)$ is the original infrared image, $\sigma_r^2$ is the gray-value variance, $f(k,l)$ is the gray value of the center-point pixel of the current frame image, $f_{t-1}(i,j)$ is the gray value of the motion-compensated previous frame, and $\sigma_t^2$ is the time variance.
Further, in step S4, the enhanced infrared image $f_{out}(i,j)$ is:

$$f_{out}(i,j) = LP[f(i,j)] + \alpha \cdot h(y)$$
where α is a weighting factor, LP [ f (i, j) ] is the base image after guided filtering, and h (y) is the detail image after 3D filtering.
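A minimal sketch of this fusion step, assuming a scalar weighting factor α and an 8-bit display range (both illustrative choices — the patent's weighting is adaptive and its exact form is not given here):

```python
import numpy as np

def fuse(base, detail_3d, alpha=1.5):
    """f_out = LP[f] + alpha * h(y): add the amplified, denoised detail layer
    back onto the guided-filtered base image. alpha > 1 boosts detail; 1.5 is
    an illustrative value, not taken from the patent."""
    out = base + alpha * detail_3d
    # Clip back to the valid 8-bit display range before quantisation.
    return np.clip(out, 0.0, 255.0)
```

For example, a base value of 100 with a detail value of 10 and α = 2 yields an output of 120, and values above 255 are clipped.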
The invention has the beneficial effects that:
The method first applies an improved guided filter to perform fast guided filtering on the infrared image, obtaining a base image and a detail image. It then performs optical-flow motion estimation on local blocks of the detail image to obtain motion vectors, aligns consecutive frames using these vectors, filters the detail image jointly in the spatial, gray-level and temporal dimensions, and adaptively weights and fuses the filtered detail image with the base image to obtain the final enhanced infrared image.
Drawings
Fig. 1 is a flowchart of an infrared image enhancement method based on 3D filtering according to the present invention.
Fig. 2 is a comparison diagram of the first infrared image enhancement effect provided by the present invention.
Fig. 3 is a comparison diagram of the second infrared image enhancement effect provided by the present invention.
Fig. 4 is a comparison diagram of the third infrared image enhancement effect provided by the present invention.
Detailed Description
The following description of the embodiments is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of the embodiments. To those skilled in the art, various changes are apparent within the spirit and scope of the invention as defined by the appended claims, and all matter produced using the inventive concept is protected.
Example 1:
as shown in fig. 1, an infrared image enhancement method based on 3D filtering includes the following steps:
s1, acquiring an original infrared image, and dividing the original infrared image into a basic image and a detail image through a guiding filter;
s2, carrying out motion estimation on local image blocks in the detail image by adopting a local optical flow method to obtain motion vectors of the detail image;
s3, carrying out 3D filtering on the detail image based on the motion vector to obtain a 3D filtered detail image;
and S4, carrying out self-adaptive weighted synthesis on the 3D filtered detail image and the basic image to obtain an enhanced infrared image.
In step S1, the guided filter performs the same edge-preserving smoothing function as the bilateral filter; its formula is:

$$q_u = a_k I_f(u) + b_k, \quad \forall u \in w_k$$

where $q_u$ is the output image, $I_f$ is the guide image, $w_k$ is the pixel-block window, the subscript $u$ denotes a pixel, and $a_k$ and $b_k$ are window coefficients within the pixel-block window. In determining $a_k$ and $b_k$ according to the minimum mean-square-error criterion, one can define:

$$a_k = \frac{\frac{1}{|w|}\sum_{u \in w_k} I_f(u) P_u - \mu_k \bar{P}_k}{\sigma_k^2 + \varepsilon}$$

$$b_k = \bar{P}_k - a_k \mu_k$$

where $\mu_k$ is the mean of the guide image in the window, $\sigma_k^2$ is the variance of the guide image in the window, $|w|$ is the number of pixels in the window, $P_u$ denotes the pixels of the image to be smoothed, $\bar{P}_k$ is the mean of the image to be smoothed in the window, and $\varepsilon$ is a linear-regression (regularization) coefficient that determines the smoothing degree of the filter.

In this embodiment, the guided filter is improved: a large window is used as the guided-filtering window and a small window as the coefficient window, which greatly increases computation speed and algorithm efficiency. Since the guide image is the input image itself, $a_k$ and $b_k$ simplify to:

$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \varepsilon}, \qquad b_k = (1 - a_k)\,\mu_k$$
based on the guiding filter, the method for guiding and filtering the original infrared image in step S1 specifically includes:
s11, taking the original infrared image as a guide image in a guide filter;
s12, respectively calculating a guide filtering window and a coefficient window of a guide filter based on the guide image to obtain a corresponding basic image;
and S13, subtracting the basic image from the original infrared image to obtain a corresponding detail image.
Specifically, in this embodiment the guided filtering window is 8 × 8 and the coefficient window is 4 × 4; using a coefficient window smaller than the filtering window reduces the amount of calculation.
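The simplification of $a_k$ and $b_k$ when the guide image equals the input can be checked numerically; the helper names below are illustrative, and each function computes the coefficients for a single window:

```python
import numpy as np

def general_coeffs(I, P, eps):
    """MMSE guided-filter coefficients for one window: guide I, target P.

    a_k = (mean(I*P) - mu * P_bar) / (var(I) + eps),  b_k = P_bar - a_k * mu.
    """
    mu, p_bar = I.mean(), P.mean()
    cov = (I * P).mean() - mu * p_bar
    var = (I * I).mean() - mu * mu
    a = cov / (var + eps)
    return a, p_bar - a * mu

def self_guided_coeffs(I, eps):
    """Simplified coefficients when the guide is the input itself:
    a_k = var/(var+eps), b_k = (1-a_k)*mean."""
    mu = I.mean()
    var = (I * I).mean() - mu * mu
    a = var / (var + eps)
    return a, (1.0 - a) * mu
```

Passing the same array as both guide and target to `general_coeffs` reproduces `self_guided_coeffs` exactly, since the covariance collapses to the variance.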
In step S2, an image sequence $g(\mathbf{x})$ may be parameterized by the three-dimensional column vector $\mathbf{x} = (x, y, t)^T$, where $x$ and $y$ are spatial components and $t$ is the temporal component. Under the brightness-constancy constraint, object motion in the space-time domain produces a brightness pattern with a definite direction; brightness constancy gives:

$$I(x, y, t) = I'(x + dx, y + dy, t + dt)$$

where $I$ and $I'$ represent adjacent frames of the image and $dx$ and $dy$ represent the incremental displacement of the pixels in the $x$ and $y$ directions over time $dt$. Under the small-motion assumption, expanding the above formula as a Taylor series and omitting the higher-order terms yields the optical-flow equation:

$$I_x V_x + I_y V_y + I_t = 0$$

where $V_x$ and $V_y$ are the components of the optical-flow vector in the horizontal and vertical directions, and $I_x$, $I_y$ and $I_t$ denote the spatial and temporal dimension information (partial derivatives) of the image.
In a three-dimensional world, if points belonging to the same object plane have the same velocity, points projected to the two-dimensional plane also have the same velocity.
Based on the above, in this embodiment, the step S2 is specifically:
s21, determining an optical flow calculation model for motion estimation in a two-dimensional plane:
$$I_x(u)V_x + I_y(u)V_y + I_t(u) = 0, \quad u = 1, 2, \dots, n$$

where $I_x(u)$ and $I_y(u)$ are the spatial-dimension information of pixel $u$ of the detail image, $I_t(u)$ is the time-dimension information of pixel $u$, and $V_x$ and $V_y$ are the components of the motion vector $(V_x, V_y)$ in the horizontal and vertical directions;
S22, solving the optical flow calculation model by the least-squares method, the motion vector $(V_x, V_y)$ is given by:

$$\begin{pmatrix} V_x \\ V_y \end{pmatrix} = -\begin{pmatrix} \sum_i \omega I_x^2(i) & \sum_i \omega I_x(i)I_y(i) \\ \sum_i \omega I_x(i)I_y(i) & \sum_i \omega I_y^2(i) \end{pmatrix}^{-1} \begin{pmatrix} \sum_i \omega I_x(i)I_t(i) \\ \sum_i \omega I_y(i)I_t(i) \end{pmatrix}$$

where $\omega$ is a weight coefficient;

S23, setting intermediate calculation parameters in the expression for $(V_x, V_y)$, the motion vector becomes:

$$V_x = \frac{AA_{xy}\,AB_{yt} - AA_{yy}\,AB_{xt}}{AA_{xx}\,AA_{yy} - AA_{xy}^2}, \qquad V_y = \frac{AA_{xy}\,AB_{xt} - AA_{xx}\,AB_{yt}}{AA_{xx}\,AA_{yy} - AA_{xy}^2}$$

where $AA_{xx}$, $AA_{yy}$, $AA_{xy}$, $AB_{xt}$ and $AB_{yt}$ are the set intermediate calculation parameters

$$AA_{xx} = \sum_i \omega I_x^2(i), \quad AA_{yy} = \sum_i \omega I_y^2(i), \quad AA_{xy} = \sum_i \omega I_x(i)I_y(i), \quad AB_{xt} = \sum_i \omega I_x(i)I_t(i), \quad AB_{yt} = \sum_i \omega I_y(i)I_t(i).$$
Step S3 specifically comprises:
S31, determining a 3 × 3 local window image around each pixel in the current-frame detail image;
S32, using the motion vector to locate the corresponding 3 × 3 local window images in the two detail-image frames preceding the current frame;
and S33, taking the center of the 3 × 3 local window as the center point, performing 3D filtering over the three 3 × 3 local window images in three dimensions (spatial similarity factor, gray-scale similarity factor and temporal similarity factor) to obtain the 3D-filtered detail image.
The expression of the 3D-filtered detail image is:

$$h(y) = \frac{\sum_{(i,j)\in S} w_s(i,j)\,w_r(i,j)\,w_k(i,j)\,h_{ij}(y)}{\sum_{(i,j)\in S} w_s(i,j)\,w_r(i,j)\,w_k(i,j)}$$

where $h(y)$ is the gray value of the 3D-filtered detail image, $S$ denotes the nine-point neighborhood centered at $(k,l)$, $h_{ij}(y)$ is the gray value of the $(i,j)$-th neighborhood element, $w_r(i,j)$ is the gray-level similarity factor of the $(i,j)$-th neighborhood element, which decreases as the gray-level difference increases, $w_s(i,j)$ is its spatial proximity factor, which decreases as the Euclidean distance from the center point increases, $w_k(i,j)$ is the time-domain factor, which decreases with the time-domain gray-level difference, and $(i,j)$ indexes the neighborhood element, with $i, j$ its horizontal and vertical coordinates; wherein

$$w_s(i,j) = \exp\!\left(-\frac{(i-k)^2 + (j-l)^2}{2\sigma_s^2}\right), \qquad w_r(i,j) = \exp\!\left(-\frac{\big(f(i,j) - f(k,l)\big)^2}{2\sigma_r^2}\right), \qquad w_k(i,j) = \exp\!\left(-\frac{\big(f_{t-1}(i,j) - f(k,l)\big)^2}{2\sigma_t^2}\right)$$

where $\sigma_s^2$ is the spatial position variance, $f(i,j)$ is the original infrared image, $\sigma_r^2$ is the gray-value variance, $f(k,l)$ is the gray value of the center-point pixel of the current frame image, $f_{t-1}(i,j)$ is the gray value of the motion-compensated previous frame, and $\sigma_t^2$ is the time variance.
During 3D filtering, in regions where the image is smooth and inter-frame motion is small, the gray-level differences within the neighborhood are small and the bilateral filtering degenerates into a Gaussian low-pass filter; where the gray level changes abruptly, the filter replaces the original value with the average gray level of the nearby elements whose gray values are similar to the edge pixel. The three-dimension detail filter therefore both smooths the image and preserves its edges.
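A sketch of this three-factor (space / gray level / time) filter for one pixel, assuming Gaussian kernels and illustrative σ values; the patch layout (current frame plus two motion-compensated previous frames) follows steps S31-S33:

```python
import numpy as np

def filter_3d(stack, sigma_s=1.0, sigma_r=10.0, sigma_t=10.0):
    """Trilateral filtering of one pixel from three aligned 3x3 patches.

    stack[0] is the current frame's 3x3 detail neighbourhood; stack[1] and
    stack[2] are the motion-compensated patches from the two previous frames.
    The sigma values are illustrative, not taken from the patent.
    """
    center = stack[0, 1, 1]  # gray value f(k, l) at the centre point
    num, den = 0.0, 0.0
    for t in range(stack.shape[0]):
        for i in range(3):
            for j in range(3):
                g = stack[t, i, j]
                # Spatial proximity: decays with distance from the centre.
                ws = np.exp(-((i - 1) ** 2 + (j - 1) ** 2) / (2 * sigma_s ** 2))
                # Gray-level similarity: decays with gray-level difference.
                wr = np.exp(-(g - center) ** 2 / (2 * sigma_r ** 2))
                # Time-domain factor: decays with temporal gray difference.
                wk = np.exp(-(stack[t, 1, 1] - center) ** 2 / (2 * sigma_t ** 2))
                num += ws * wr * wk * g
                den += ws * wr * wk
    return num / den
```

Because the output is a positively weighted average, it always lies between the minimum and maximum of the patch stack, and a constant stack is returned unchanged.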
Based on the above process, the enhanced infrared image $f_{out}(i,j)$ obtained in step S4 is:

$$f_{out}(i,j) = LP[f(i,j)] + \alpha \cdot h(y)$$
where α is a weighting factor, LP [ f (i, j) ] is the base image after guided filtering, and h (y) is the detail image after 3D filtering.
Example 2:
the infrared image is enhanced by the method of the invention to obtain the effect contrast images (a is the original infrared image, and b is the enhanced infrared image) of fig. 2-4, and the images enhanced by the algorithm can be seen from the images, so that the noise can be effectively filtered, the image contrast is improved, and the image details are greatly improved.

Claims (4)

1. An infrared image enhancement method based on 3D filtering is characterized by comprising the following steps:
s1, acquiring an original infrared image, and dividing the original infrared image into a basic image and a detail image through a guiding filter;
s2, carrying out motion estimation on local image blocks in the detail image by adopting a local optical flow method to obtain motion vectors of the detail image;
s3, carrying out 3D filtering on the detail image based on the motion vector to obtain a 3D filtered detail image;
s4, carrying out self-adaptive weighted synthesis on the 3D filtered detail image and the basic image to obtain an enhanced infrared image;
the step S1 specifically includes:
s11, taking the original infrared image as a guide image in a guide filter;
s12, respectively calculating a guide filtering window and a coefficient window of a guide filter based on the guide image to obtain a corresponding basic image;
s13, subtracting the basic image from the original infrared image to obtain a corresponding detail image;
the size of the guide filtering window is 8 × 8, and the size of the coefficient window is 4 × 4;
the step S3 specifically includes:
s31, determining a local window image of 3 x 3 around each pixel in the detail image of the current frame;
s32, determining local window images in the first two frames of detail images of the current frame of detail image by using the motion vector;
S33, taking the centers of the 3 × 3 local window images as center points, performing 3D filtering in three dimensions (spatial similarity factor, gray-scale similarity factor and time similarity factor) on the three 3 × 3 local window images to obtain the 3D-filtered detail image;
in the 3D filtering process, in regions where the image is smooth and inter-frame motion is small, the gray-level differences within the neighborhood are small and the bilateral filtering degenerates into a Gaussian low-pass filter; where the gray level changes abruptly, the filter replaces the original value with the average of the gray levels of nearby elements similar to the edge pixel;
the expression of the 3D filtered detail image is:

$$h(y) = \frac{\sum_{(i,j)\in S} w_s(i,j)\,w_r(i,j)\,w_k(i,j)\,h_{ij}(y)}{\sum_{(i,j)\in S} w_s(i,j)\,w_r(i,j)\,w_k(i,j)}$$

where $h(y)$ is the gray value of the 3D-filtered detail image, $S$ is the nine-point neighborhood centered at $(k,l)$, $h_{ij}(y)$ is the gray value of the $(i,j)$-th neighborhood element, $w_r(i,j)$ is the gray-level similarity factor of the $(i,j)$-th neighborhood element, $w_s(i,j)$ is its spatial proximity factor, $w_k(i,j)$ is the time-domain factor, and $(i,j)$ indexes the neighborhood element, with $i, j$ its horizontal and vertical coordinates; wherein

$$w_s(i,j) = \exp\!\left(-\frac{(i-k)^2 + (j-l)^2}{2\sigma_s^2}\right), \qquad w_r(i,j) = \exp\!\left(-\frac{\big(f(i,j) - f(k,l)\big)^2}{2\sigma_r^2}\right), \qquad w_k(i,j) = \exp\!\left(-\frac{\big(f_{t-1}(i,j) - f(k,l)\big)^2}{2\sigma_t^2}\right)$$

where $\sigma_s^2$ is the spatial position variance, $f(i,j)$ is the original infrared image, $\sigma_r^2$ is the gray-value variance, $f(k,l)$ is the gray value of the center-point pixel of the current frame image, $f_{t-1}(i,j)$ is the gray value of the motion-compensated previous frame, and $\sigma_t^2$ is the time variance.
2. The infrared image enhancement method based on 3D filtering as claimed in claim 1, wherein the formula of the guided filter in step S1 is:

$$q_u = a_k I_f(u) + b_k, \quad \forall u \in w_k$$

where $q_u$ is the output image, $I_f$ is the guide image, $w_k$ is the pixel-block window, the subscript $u$ denotes a pixel, and $a_k$ and $b_k$ are window coefficients within the pixel-block window; wherein

$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \varepsilon}, \qquad b_k = (1 - a_k)\,\bar{P}_k$$

where $\sigma_k^2$ is the variance of the guide image in the window, $\varepsilon$ is the linear regression coefficient, and $\bar{P}_k$ is the mean of the image to be smoothed in the window.
3. The infrared image enhancement method based on 3D filtering according to claim 1, wherein the step S2 specifically includes:
S21, determining an optical flow calculation model for motion estimation in a two-dimensional plane:

$$I_x(u)V_x + I_y(u)V_y + I_t(u) = 0, \quad u = 1, 2, \dots, n$$

where $I_x(u)$ and $I_y(u)$ are the spatial-dimension information of the pixels of the detail image, $I_t(u)$ is the time-dimension information of pixel $u$, and $V_x$ and $V_y$ are the components of the motion vector $(V_x, V_y)$ in the horizontal and vertical directions;

S22, solving the optical flow calculation model by the least-squares method, the motion vector $(V_x, V_y)$ is given by:

$$\begin{pmatrix} V_x \\ V_y \end{pmatrix} = -\begin{pmatrix} \sum_i \omega I_x^2(i) & \sum_i \omega I_x(i)I_y(i) \\ \sum_i \omega I_x(i)I_y(i) & \sum_i \omega I_y^2(i) \end{pmatrix}^{-1} \begin{pmatrix} \sum_i \omega I_x(i)I_t(i) \\ \sum_i \omega I_y(i)I_t(i) \end{pmatrix}$$

where $\omega$ is a weight coefficient, and $I_x(i)$, $I_y(i)$ and $I_t(i)$ are the horizontal, vertical and time-dimension information at pixel $i$;

S23, setting intermediate calculation parameters in the expression for $(V_x, V_y)$, the motion vector becomes:

$$V_x = \frac{AA_{xy}\,AB_{yt} - AA_{yy}\,AB_{xt}}{AA_{xx}\,AA_{yy} - AA_{xy}^2}, \qquad V_y = \frac{AA_{xy}\,AB_{xt} - AA_{xx}\,AB_{yt}}{AA_{xx}\,AA_{yy} - AA_{xy}^2}$$

where $AA_{xx}$, $AA_{yy}$, $AA_{xy}$, $AB_{xt}$ and $AB_{yt}$ are the set intermediate calculation parameters

$$AA_{xx} = \sum_i \omega I_x^2(i), \quad AA_{yy} = \sum_i \omega I_y^2(i), \quad AA_{xy} = \sum_i \omega I_x(i)I_y(i), \quad AB_{xt} = \sum_i \omega I_x(i)I_t(i), \quad AB_{yt} = \sum_i \omega I_y(i)I_t(i).$$
4. The infrared image enhancement method based on 3D filtering as claimed in claim 1, wherein in step S4 the enhanced infrared image $f_{out}(i,j)$ is:

$$f_{out}(i,j) = LP[f(i,j)] + \alpha \cdot h(y)$$
where α is a weighting factor, LP [ f (i, j) ] is the base image after guided filtering, and h (y) is the detail image after 3D filtering.
CN202010794350.5A 2020-08-10 2020-08-10 Infrared image enhancement method based on 3D filtering Active CN111899200B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010794350.5A CN111899200B (en) 2020-08-10 2020-08-10 Infrared image enhancement method based on 3D filtering


Publications (2)

Publication Number Publication Date
CN111899200A CN111899200A (en) 2020-11-06
CN111899200B true CN111899200B (en) 2021-06-22

Family

ID=73246713


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822352B (en) * 2021-09-15 2024-05-17 中北大学 Infrared dim target detection method based on multi-feature fusion
CN113935922B (en) * 2021-10-21 2024-05-24 燕山大学 Infrared and visible light image characteristic enhancement fusion method
CN115239558A (en) * 2022-07-19 2022-10-25 河南省肿瘤医院 Low-dose lung CT image detail super-resolution reconstruction method and system
CN118072224A (en) * 2024-03-26 2024-05-24 中国科学院空天信息创新研究院 Flying target detection method based on multidirectional filtering

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015172235A1 (en) * 2014-05-15 2015-11-19 Tandemlaunch Technologies Inc. Time-space methods and systems for the reduction of video noise



Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 100094 room 901, 9 / F, building 4, zone 1, 81 Beiqing Road, Haidian District, Beijing
Applicant after: Guoke Tiancheng Technology Co.,Ltd.
Address before: 100094 room 901, 9 / F, building 4, zone 1, 81 Beiqing Road, Haidian District, Beijing
Applicant before: TEEMSUN (BEIJING) TECHNOLOGY Co.,Ltd.
GR01 Patent grant