CN112488958A - Image contrast enhancement method based on scale space - Google Patents

Image contrast enhancement method based on scale space

Info

Publication number
CN112488958A
Authority
CN
China
Prior art keywords
image
pyramid
gaussian
scale
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011477203.1A
Other languages
Chinese (zh)
Inventor
杜小智
罗晓东
姜洪超
岳合合
虞挺
张博文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202011477203.1A
Publication of CN112488958A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image contrast enhancement method based on scale space, which comprises the following steps: S1, preprocessing original image data to obtain input image data; S2, constructing an image pyramid from the input image data and an image difference pyramid from the image pyramid; S3, superimposing the images of adjacent layers within the same group of the image difference pyramid obtained in S2 to obtain a new image scale space; S4, performing feature detection, feature description and feature matching in the new image scale space constructed in S3. By superimposing and fusing images at different scales of the scale space, the invention enhances image contrast without changing the feature structure of the image, and at the same time suppresses image noise to a certain degree. The method also has a clear advantage in computational cost and performs well in conventional image recognition, image matching and image stitching algorithms.

Description

Image contrast enhancement method based on scale space
Technical Field
The invention relates to the technical field of image processing, in particular to image contrast enhancement, and more particularly to an image contrast enhancement method based on scale space.
Background
With the development of science and technology, the artificial intelligence industry has risen rapidly and the field of computer vision has flourished. Numerous algorithms have enabled the deployment of artificial intelligence products such as fingerprint recognition, face recognition and license plate recognition, bringing great convenience to daily life, and technical innovation continues to address the problems that remain.
In image processing, academia has proposed many algorithms for tasks such as image denoising and image contrast enhancement, which perform well under image noise of different characteristics. Image quality, however, remains a major factor in the practical performance of image recognition, image matching and image stitching. Existing filtering algorithms smooth away fine detail while removing abnormal information, so in a low-quality imaging environment their contrast enhancement effect is limited and actual requirements are difficult to meet.
Disclosure of Invention
In view of the limited contrast enhancement that existing image processing algorithms achieve on low-quality images, the invention aims to provide an image contrast enhancement method based on scale space, so as to address low-quality image enhancement and improve performance in image recognition, image matching and image stitching applications.
The invention adopts the following technical scheme:
An image contrast enhancement method based on scale space comprises the following steps:
S1, preprocessing the original image data to obtain input image data;
S2, constructing an image pyramid from the input image data and an image difference pyramid from the image pyramid;
S3, superimposing the images of adjacent layers within the same group of the image difference pyramid obtained in S2 to obtain a new image scale space;
S4, performing feature detection, feature description and feature matching in the new image scale space constructed in S3.
Preferably, in S1, the original image data is subjected to denoising processing to obtain input image data.
Preferably, when denoising the original image data, the filter is selected according to the noise property of the image.
Preferably, in S2, the image pyramid is an image Gaussian pyramid, and the image difference pyramid is an image Gaussian difference pyramid.
Preferably, Gaussian functions with different scale factors are convolved with the input image data to obtain the image Gaussian pyramid, whose calculation formula is:
L(x,y,σ) = G(x,y,σ) * I(x,y)
G(x,y,σ) = (1/(2πσ²)) · exp(-((x - m/2)² + (y - n/2)²)/(2σ²))
where * denotes the convolution operation, σ is the scale factor, G(x,y,σ) is the Gaussian function with scale factor σ, I(x,y) is the input image data, L(x,y,σ) is the Gaussian pyramid layer image obtained after convolution, x is the horizontal coordinate of the image, y is the vertical coordinate of the image, m is the Gaussian kernel length, and n is the Gaussian kernel width.
Preferably, in S2, the images at two adjacent Gaussian scales within the same group of the image Gaussian pyramid are subtracted to obtain the image Gaussian difference pyramid, whose calculation formula is:
D(x,y,σ) = (G(x,y,kσ) - G(x,y,σ)) * I(x,y) = L(x,y,kσ) - L(x,y,σ)
where D(x,y,σ) is the image Gaussian difference pyramid, G(x,y,kσ) is the Gaussian function with scale factor kσ, G(x,y,σ) is the Gaussian function with scale factor σ, * denotes the convolution operation, I(x,y) is the input image data, L(x,y,kσ) is the Gaussian pyramid layer image at scale kσ, and L(x,y,σ) is the Gaussian pyramid layer image at scale σ.
Preferably, in S3, the images of different layers within the same group of the image Gaussian difference pyramid are superimposed with different energy coefficients to obtain the new image scale space, whose calculation formula is:
D_i(x,y) = a·D_i(x,y) + b·D_(i+2)(x,y)
where a and b are energy coefficients in the range (0.0, 1.0), i is the layer index of the Gaussian difference pyramid image, and D_i(x,y) is the new image scale space.
Preferably, in S4, the feature detection process includes:
and comparing the current pixel point with a plurality of adjacent pixel points with the same scale and a plurality of pixel point response values corresponding to the upper and lower adjacent scales, judging whether the current pixel point is an extreme point, if so, finding the accurate position of the point on the sub-pixel in a curve fitting mode, recording the accurate position as a characteristic point, and finishing the characteristic detection.
Preferably, in S4, the feature description describes the feature points according to the SIFT operator, including the calculation of the feature point principal direction and the descriptor.
Preferably, in S4, the feature matching process comprises: performing a first coarse matching using the Euclidean distance between feature points, and then performing a second accurate matching using the random sample consensus (RANSAC) algorithm to obtain the image feature matching result.
The invention has the following beneficial effects:
the image contrast enhancement method based on the scale space enhances the image contrast by overlapping and fusing images under different scales in the scale space, enhances the image contrast on the premise of not changing the characteristic structure of the image, and simultaneously has a certain inhibition effect on image noise. In addition, the method of the invention also has great advantages in the calculation amount, and has good effects in the traditional algorithms of image recognition, image matching and image splicing.
Further, in S3, the images of different layers within the same group of the image Gaussian difference pyramid are superimposed with different energy coefficients to obtain the new image scale space. Because the images in the scale space are representations of the same input image at different scales, this operation does not cause abnormal changes in the image structure.
Drawings
FIG. 1 is a flow chart of a method for scale-space based image contrast enhancement according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the image pyramid described in step S2 according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the image difference pyramid calculation described in step S2 according to an embodiment of the present invention;
FIG. 4 is a comparison of the original scale space and the new scale space obtained after the superposition of images at different scales described in step S3 according to an embodiment of the present invention;
FIG. 5 is a comparison of the effects of feature detection, description and matching described in step S4 according to an embodiment of the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the attached drawings, which are illustrative, but not limiting, of the present invention.
Referring to fig. 1-4, the image contrast enhancement method based on scale space of the present invention includes the following steps:
S1, preprocessing the original image data to obtain input image data, wherein the preprocessing comprises image denoising;
S2, constructing an image scale space for the input image, comprising an image pyramid and an image difference pyramid; here a Gaussian pyramid and a Gaussian difference pyramid are adopted.
S3, superimposing the images of adjacent layers within the same group of the image difference pyramid to obtain the scale space of the method.
S4, performing feature detection, feature description and feature matching in the scale space of the invention, with the scale-invariant feature transform (SIFT) operator used for the related experiments.
S5, evaluating the method by comparing the visual effect of the image scale space and the feature matching results.
In step S1, preprocessing such as denoising is performed on the original image data; a mean filter or another filter is selected according to the noise characteristics of the image, giving the input image data.
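For illustration only, the preprocessing of S1 could be sketched in Python with OpenCV as below; the noise-type labels and the 3×3 kernel size are assumptions for the example, not values specified by the patent.
import cv2

def preprocess(raw_img, noise_type="gaussian", ksize=3):
    """Denoise the raw image; the filter is chosen by the (assumed) noise type."""
    if noise_type == "salt_pepper":
        # Median filtering suppresses impulse (salt-and-pepper) noise.
        return cv2.medianBlur(raw_img, ksize)
    if noise_type == "gaussian":
        # A small Gaussian blur suppresses additive Gaussian noise.
        return cv2.GaussianBlur(raw_img, (ksize, ksize), 0)
    # Default: mean (box) filtering.
    return cv2.blur(raw_img, (ksize, ksize))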
In the image scale space described in step S2, Gaussian functions with different scale factors are convolved with the input image to obtain the image Gaussian pyramid. The calculation formula is as follows:
L(x,y,σ) = G(x,y,σ) * I(x,y)
G(x,y,σ) = (1/(2πσ²)) · exp(-((x - m/2)² + (y - n/2)²)/(2σ²))
where * denotes the convolution operation, σ is the scale factor, G(x,y,σ) is the Gaussian function with scale factor σ, I(x,y) is the input image, L(x,y,σ) is the Gaussian pyramid layer image obtained after convolution, and m and n are the length and width of the Gaussian kernel.
Step S2 further includes constructing the image difference pyramid: the images at two adjacent Gaussian scales within the same group of the Gaussian pyramid are subtracted to obtain a Gaussian difference pyramid layer image. The calculation formula is as follows:
D(x,y,σ) = (G(x,y,kσ) - G(x,y,σ)) * I(x,y) = L(x,y,kσ) - L(x,y,σ)
where D(x,y,σ) is the Gaussian difference pyramid layer image obtained by the subtraction.
In step S3, the images of different layers within the same group of the Gaussian difference pyramid are superimposed with different energy coefficients to enhance the image contrast. Because the images in the scale space are representations of the same input image at different scales, this operation does not cause abnormal changes in the image structure. The calculation formula is as follows:
D_i(x,y) = a·D_i(x,y) + b·D_(i+2)(x,y)
where a and b are energy coefficients in the range (0.0, 1.0), i is the layer index of the Gaussian difference pyramid image, and D_i(x,y) is the enhanced image.
In step S4, comparison experiments on feature detection, feature description and feature matching are carried out in the original scale space and the new scale space, taking the scale-invariant feature transform (SIFT) operator as an example. Feature detection is performed in the Gaussian difference pyramid: the response value of the current pixel is compared with those of its 8 neighbours at the same scale and the 9 × 2 corresponding pixels at the adjacent scales above and below, 26 points in total, to judge whether the current pixel is an extreme point. If so, its accurate sub-pixel position is found by curve fitting and recorded as a feature point.
In step S4, the feature points are described according to the SIFT operator; in the feature matching stage, a first coarse matching is performed using the Euclidean distance between feature points, followed by a second accurate matching using the RANSAC algorithm.
In step S5, the visual effect of the image in the two different scale spaces and the feature matching results are used as the evaluation indexes of the method.
Examples
Referring to fig. 1, a flow chart of an image contrast enhancement method based on scale space includes the following steps:
and S1, carrying out denoising preprocessing on the original image data to obtain input image data.
And S2, constructing an image pyramid and an image difference pyramid.
And S3, overlapping and fusing the images under different scales in the same group of the image difference pyramid by using different energy coefficients.
And S4, performing feature detection, description and matching in two different scale spaces, and outputting the result.
Referring to fig. 2, the image pyramid is constructed in the following calculation manner:
the scale space L (x, y, σ) is defined as the convolution of the original image I (x, y) with a two-dimensional gaussian function of variable scale, and has the formula L (x, y, σ) ═ G (x, y, σ) × I (x, y), where x represents the convolution operation,
Figure BDA0002837585470000061
the image scale of each layer in the Gaussian pyramid group is sigmai=kσ0Wherein
Figure BDA0002837585470000062
S is the number of layers used for feature point detection, N is the number of layers in each group of Gaussian pyramid, sigma0Is the scale of the input image.
The images within the same group are obtained by convolving the input image with the Gaussian convolution factors σ'_i = sqrt((k^i·σ_0)² − σ_0²).
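A minimal sketch of the construction of one Gaussian pyramid group, assuming the scale relations above (σ_0 = 1.6 and S = 3 are illustrative defaults, and the helper name gaussian_octave is hypothetical); scipy's gaussian_filter stands in for the Gaussian convolution and is not part of the patent.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_octave(img, sigma0=1.6, S=3):
    """Build one group (octave) of the Gaussian pyramid with N = S + 3 layers,
    layer i having scale sigma_i = k**i * sigma0 with k = 2**(1/S)."""
    k = 2.0 ** (1.0 / S)
    layers = []
    for i in range(S + 3):
        sigma_i = sigma0 * (k ** i)
        # L(x, y, sigma_i) = G(x, y, sigma_i) * I(x, y)
        layers.append(gaussian_filter(img.astype(np.float32), sigma_i))
    return layers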
Referring to fig. 3, the image difference pyramid is constructed as follows:
each image of the difference pyramid is obtained by subtracting two adjacent layers of the Gaussian pyramid, with the calculation formula:
D(x,y,σ) = (G(x,y,kσ) - G(x,y,σ)) * I(x,y) = L(x,y,kσ) - L(x,y,σ)
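The difference layers of a group then follow directly by subtracting adjacent layers of the Gaussian group produced by the hypothetical gaussian_octave helper above:
def dog_octave(gauss_layers):
    """D_i(x, y) = L(x, y, k*sigma_i) - L(x, y, sigma_i) for adjacent layers of one group."""
    return [gauss_layers[i + 1] - gauss_layers[i] for i in range(len(gauss_layers) - 1)]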
referring to fig. 4, the specific method for superimposing images of different scales of the image difference pyramid is as follows:
the superposition of different energy coefficients is carried out on the interlayer images in the same group, so that the characteristic structure of the images cannot be abnormally changed, because the images in the same group are obtained by carrying out convolution on the reference layer images, the blurring degree is different, and the image sizes are consistent. The calculation formula is as follows:
Di(x,y)=a*Di(x,y)+b*Di+2(x,y)
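A sketch of the S3 superposition applied to those difference layers; the energy coefficients a = 0.8 and b = 0.4 are illustrative choices within (0.0, 1.0), and how the top two layers (which have no layer two levels above) are handled is not specified by the patent — here they are simply kept unchanged.
def enhance_dog(dog_layers, a=0.8, b=0.4):
    """New scale space: D_i(x, y) = a * D_i(x, y) + b * D_(i+2)(x, y)."""
    enhanced = []
    for i in range(len(dog_layers)):
        if i + 2 < len(dog_layers):
            enhanced.append(a * dog_layers[i] + b * dog_layers[i + 2])
        else:
            # Assumption: layers without a partner two levels up are left unchanged.
            enhanced.append(dog_layers[i])
    return enhanced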
referring to fig. 5, feature detection, description, and matching are performed in a scale space:
the characteristic detection is obtained by calculation in three continuous layers of a scale space, and the current pixel point is compared with 8 adjacent points with the same scale and 9-2 points corresponding to the upper and lower adjacent scales, and the total response value of 26 points is judged to determine whether the current pixel point is an extreme point. If yes, the accurate position of the sub-pixel is found through a curve fitting mode, and the position is recorded as a feature point.
The feature description requires computing the principal direction, descriptor and related information for each feature point, which are calculated using the method provided by the scale-invariant feature transform (SIFT) operator.
The Euclidean distances between the feature points extracted from the two images to be matched are computed. If Dis_min < th·Dis_sec_min, with 0.0 < th < 1.0, the point pair corresponding to Dis_min is recorded as a matching pair. The feature points of the two images are then matched in the reverse direction, and a pair that also satisfies the above formula is recorded as a high-quality pair. Finally, all matching pairs are purified with the RANSAC algorithm to obtain the final matching result.
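A sketch of this matching stage using OpenCV, assuming a build that provides cv2.SIFT_create; the ratio threshold th = 0.75 and the RANSAC reprojection error of 5.0 pixels are illustrative values, not taken from the patent.
import cv2
import numpy as np

def match_features(img1, img2, th=0.75):
    """Ratio-test matching (Dis_min < th * Dis_sec_min), cross-checked, then refined by RANSAC."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    bf = cv2.BFMatcher(cv2.NORM_L2)
    # Forward ratio test between the two descriptor sets.
    fwd = [m for m, n in bf.knnMatch(des1, des2, k=2) if m.distance < th * n.distance]
    # Reverse ratio test; keep only mutually consistent (high-quality) pairs.
    rev = {m.trainIdx: m.queryIdx
           for m, n in bf.knnMatch(des2, des1, k=2) if m.distance < th * n.distance}
    good = [m for m in fwd if rev.get(m.queryIdx) == m.trainIdx]
    if len(good) < 4:  # findHomography needs at least 4 point pairs
        return good

    # RANSAC purification of the matched coordinates.
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return [m for m, keep in zip(good, mask.ravel()) if keep]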
In this embodiment, the number of feature matching point pairs (CNT) and the number of best matching point pairs (BCNT) obtained in the original scale space and in the scale space computed by the proposed method, together with the visual effect of the image, are used for evaluation, as shown in Table 1.
Method                      CNT    BCNT
Original method             107    48
Method of the invention     172    79
Table 1
As can be seen from Table 1, the method proposed by the present invention yields a very significant improvement in both indexes.
The invention discloses an image contrast enhancement method based on scale space, which superimposes and fuses images at different scales using different energy coefficients. By exploiting the properties of the images in the scale space, the superposition not only enhances image contrast but also has a clear denoising effect, and the newly detected features are of higher quality. In addition, the method has a strong advantage in computational cost and considerably improves the performance of conventional image processing algorithms.
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (10)

1. An image contrast enhancement method based on scale space is characterized by comprising the following steps:
S1, preprocessing original image data to obtain input image data;
S2, constructing an image pyramid from the input image data and an image difference pyramid from the image pyramid;
S3, superimposing the images of adjacent layers within the same group of the image difference pyramid obtained in S2 to obtain a new image scale space;
S4, performing feature detection, feature description and feature matching in the new image scale space constructed in S3.
2. The method of claim 1, wherein in S1, denoising processing is performed on original image data to obtain input image data.
3. The method of claim 2, wherein the denoising of the original image data is performed by selecting a filter according to the noise property of the image.
4. The method of claim 1, wherein in step S2, the image pyramid is an image gaussian pyramid, and the image difference pyramid is an image gaussian difference pyramid.
5. The method of claim 4, wherein the Gaussian functions with different scale factors are used to perform convolution calculation on the input image data to obtain the Gaussian pyramid of the image, and the calculation formula of the Gaussian pyramid of the image is as follows:
L(x,y,σ)=G(x,y,σ)*I(x,y)
G(x,y,σ) = (1/(2πσ²)) · exp(-((x - m/2)² + (y - n/2)²)/(2σ²))
where * denotes the convolution operation, σ is the scale factor, G(x,y,σ) is the Gaussian function with scale factor σ, I(x,y) is the input image data, L(x,y,σ) is the Gaussian pyramid layer image obtained after convolution, x is the horizontal coordinate of the image, y is the vertical coordinate of the image, m is the Gaussian kernel length, and n is the Gaussian kernel width.
6. The method of claim 4, wherein in step S2, the subtraction is performed between two adjacent gaussian-scale images in the same group of gaussian pyramids of the image to obtain a gaussian difference pyramid of the image, and the calculation formula of the gaussian difference pyramid of the image is as follows:
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)
=L(x,y,kσ)-L(x,y,σ)
where D(x,y,σ) is the image Gaussian difference pyramid, G(x,y,kσ) is the Gaussian function with scale factor kσ, G(x,y,σ) is the Gaussian function with scale factor σ, * denotes the convolution operation, I(x,y) is the input image data, L(x,y,kσ) is the Gaussian pyramid layer image at scale kσ, and L(x,y,σ) is the Gaussian pyramid layer image at scale σ.
7. The method of claim 4, wherein in step S3, different energy coefficients are superimposed on the interlayer images in the same group of Gaussian difference pyramids of the images to obtain a new image scale space, and a calculation formula of the new image scale space is as follows:
D_i(x,y) = a·D_i(x,y) + b·D_(i+2)(x,y)
where a and b are energy coefficients in the range (0.0, 1.0), i is the layer index of the Gaussian difference pyramid image, and D_i(x,y) is the new image scale space.
8. The method of claim 4, wherein in step S4, the feature detection process includes:
and comparing the current pixel point with a plurality of adjacent pixel points with the same scale and a plurality of pixel point response values corresponding to the upper and lower adjacent scales, judging whether the current pixel point is an extreme point, if so, finding the accurate position of the point on the sub-pixel in a curve fitting mode, recording the accurate position as a characteristic point, and finishing the characteristic detection.
9. The method of claim 4, wherein in step S4, the feature description describes the feature points according to the SIFT operator, including the calculation of the feature point principal direction and the descriptor.
10. The method of claim 4, wherein in step S4, the feature matching process comprises: performing a first coarse matching using the Euclidean distance between feature points, and then performing a second accurate matching using the random sample consensus (RANSAC) algorithm to obtain the image feature matching result.
CN202011477203.1A 2020-12-15 2020-12-15 Image contrast enhancement method based on scale space Pending CN112488958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011477203.1A CN112488958A (en) 2020-12-15 2020-12-15 Image contrast enhancement method based on scale space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011477203.1A CN112488958A (en) 2020-12-15 2020-12-15 Image contrast enhancement method based on scale space

Publications (1)

Publication Number Publication Date
CN112488958A (en) 2021-03-12

Family

ID=74917084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011477203.1A Pending CN112488958A (en) 2020-12-15 2020-12-15 Image contrast enhancement method based on scale space

Country Status (1)

Country Link
CN (1) CN112488958A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103593832A (en) * 2013-09-25 2014-02-19 重庆邮电大学 Method for image mosaic based on feature detection operator of second order difference of Gaussian
CN104834931A (en) * 2015-03-13 2015-08-12 江南大学 Improved SIFT algorithm based on wavelet transformation
CN108304883A (en) * 2018-02-12 2018-07-20 西安电子科技大学 Based on the SAR image matching process for improving SIFT
CN108830237A (en) * 2018-06-21 2018-11-16 北京师范大学 A kind of recognition methods of human face expression
CN109785371A (en) * 2018-12-19 2019-05-21 昆明理工大学 A kind of sun image method for registering based on normalized crosscorrelation and SIFT
CN111223068A (en) * 2019-11-12 2020-06-02 西安建筑科技大学 Retinex-based self-adaptive non-uniform low-illumination image enhancement method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255523A (en) * 2021-05-26 2021-08-13 上海弘遥电子研究开发有限公司 Method, system, device and storage medium for improving gesture recognition precision and recognition
CN113255523B (en) * 2021-05-26 2022-07-12 上海弘遥电子研究开发有限公司 Method, system, device and storage medium for improving gesture recognition precision and recognition
CN114170445A (en) * 2022-02-10 2022-03-11 河北工业大学 Indoor smoke environment image matching method suitable for fire-fighting robot
CN114170445B (en) * 2022-02-10 2022-04-12 河北工业大学 Indoor smoke environment image matching method suitable for fire-fighting robot
CN116385280A (en) * 2023-01-09 2023-07-04 爱芯元智半导体(上海)有限公司 Image noise reduction system and method and noise reduction neural network training method
CN116385280B (en) * 2023-01-09 2024-01-23 爱芯元智半导体(上海)有限公司 Image noise reduction system and method and noise reduction neural network training method

Similar Documents

Publication Publication Date Title
CN112488958A (en) Image contrast enhancement method based on scale space
Park et al. Double JPEG detection in mixed JPEG quality factors using deep convolutional neural network
CN106355577B (en) Rapid image matching method and system based on significant condition and global coherency
CN115294158A (en) Hot continuous rolling strip steel image segmentation method based on machine vision
CN112819772B (en) High-precision rapid pattern detection and recognition method
CN108416789A (en) Method for detecting image edge and system
CN112017223B (en) Heterologous image registration method based on improved SIFT-Delaunay
Yu et al. A new edge detection approach based on image context analysis
CN116228747B (en) Metal cabinet processing quality monitoring method
CN110782477A (en) Moving target rapid detection method based on sequence image and computer vision system
CN112580661B (en) Multi-scale edge detection method under deep supervision
CN108550166B (en) Spatial target image matching method
CN109509163B (en) FGF-based multi-focus image fusion method and system
CN113870235A (en) Method for detecting defects of circular stamping part based on quantum firework arc edge extraction
Cao et al. Infrared small target detection based on derivative dissimilarity measure
CN111915486B (en) Confrontation sample defense method based on image super-resolution reconstruction
CN110648316A (en) Steel coil end face edge detection algorithm based on deep learning
CN108205657A (en) Method, storage medium and the mobile terminal of video lens segmentation
CN106600613A (en) Embedded GPU-based improved LBP infrared target detection method
CN111223063A (en) Finger vein image NLM denoising method based on texture features and binuclear function
CN111950635B (en) Robust feature learning method based on layered feature alignment
CN108596928A (en) Based on the noise image edge detection method for improving Gauss-Laplace operator
CN108470345A (en) A kind of method for detecting image edge of adaptive threshold
CN111368856A (en) Spine extraction method and device of book checking system based on vision
CN108010076B (en) End face appearance modeling method for intensive industrial bar image detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210312)