CN113139557B - Feature extraction method based on two-dimensional multi-element empirical mode decomposition - Google Patents

Feature extraction method based on two-dimensional multi-element empirical mode decomposition

Info

Publication number
CN113139557B
Authority
CN
China
Prior art keywords
image
decomposition
color
dimensional
bimf
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110505248.3A
Other languages
Chinese (zh)
Other versions
CN113139557A (en)
Inventor
夏亦犁
闫溪
裴文江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN202110505248.3A
Publication of CN113139557A
Application granted
Publication of CN113139557B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a feature extraction method based on two-dimensional multivariate empirical mode decomposition (Bidimensional Multivariate Empirical Mode Decomposition, BMEMD), which extends BEMD by adding multiple channels to realize multi-channel image decomposition. The method converts the input two-dimensional multivariate signal into a plurality of two-dimensional univariate signals through real-valued projections in multiple directions. The invention applies BMEMD to the processing of color images: the color image is divided into its R, G and B channels for two-dimensional empirical mode decomposition, and the BIMFs (Bidimensional Intrinsic Mode Functions, two-dimensional intrinsic mode functions) obtained from the decomposition are synthesized into color BIMFs and a residual component, which can be used to extract the color information and texture features of the color image.

Description

Feature extraction method based on two-dimensional multi-element empirical mode decomposition
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to an image feature extraction method, specifically a feature extraction method based on two-dimensional multivariate empirical mode decomposition (Bidimensional Multivariate Empirical Mode Decomposition, BMEMD).
Background
Texture is an important feature of images and an important information source for computer vision and graphical signal processing. Although its form is difficult to define, it can be described by approaches based on statistics, transforms, models and structures. Since texture measures image characteristics such as roughness, contrast, structure, regularity and directionality, extracting meaningful texture from an image provides the most direct prior information for subsequent processing and analysis. Image feature extraction is typically performed in terms of texture, color and edges.
Texture, one of the important attributes of an image, is the macroscopic manifestation of locally repeating patterns of pixel intensities in the image. Common texture extraction methods include statistical methods, geometric methods, model-based methods, signal processing methods, structural methods, and the like. Color is another important feature of an image. The usual idea for processing a color image is to decompose the colors into three channels and analyze the values taken by each pixel to obtain its color feature components; common color representations include the color histogram, color moments, and so on. Edge detection is a feature extraction method that captures the boundaries of regions of abrupt brightness change in an image. Regions where luminance discontinuities occur are typically caused by four conditions: (1) discontinuities in image gradient (orientation); (2) discontinuities in image intensity (illumination); (3) discontinuities in image depth; (4) texture changes. Ideally, edge detection yields continuous curves that delineate the boundaries of the detected objects, thereby preserving important structures and filtering out redundant information. In practice, however, the detected boundary curves are usually discontinuous, which introduces information that is useless for further analysis, so the accuracy of edge detection algorithms needs to be improved.
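As a small illustration of gradient-based edge detection (not a method claimed here), the sketch below computes a Sobel gradient magnitude with SciPy and thresholds it into a binary edge map; the threshold value and the synthetic test image are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(gray: np.ndarray, thresh: float = 0.2) -> np.ndarray:
    """Return a binary edge map of a gray image via the Sobel gradient magnitude."""
    g = gray.astype(float)
    gx = ndimage.sobel(g, axis=1)      # horizontal gradient
    gy = ndimage.sobel(g, axis=0)      # vertical gradient
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12           # normalise to [0, 1]
    return mag > thresh                # threshold into a binary edge map

# Example: a bright square on a dark background; its border shows up as edges.
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
print(sobel_edges(img).sum())          # number of detected edge pixels
```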
In color natural images, color and texture are the two most prominent visual features, and they are two factors that cannot be ignored in color image analysis. Many current feature extraction methods are designed only for gray-level images and ignore color information, yet most images to be processed in real life are color images, so color is also a highly distinctive visual feature. In view of this, if color information is introduced into image feature extraction and is extracted and exploited effectively, the accuracy of texture image recognition can be expected to improve.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a feature extraction method based on two-dimensional multivariate empirical mode decomposition, which aims to reduce image redundancy while preserving the basic features of the image for subsequent image recognition and analysis.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a feature extraction method based on two-dimensional multi-element empirical mode decomposition comprises the following steps:
S1, carrying out multi-scale multivariate decomposition on a color image by using BMEMD to obtain a series of two-dimensional intrinsic mode functions (BIMFs) ordered from high frequency to low frequency, and screening out the component maps that describe the characteristics of the color image;
S2, extracting the color and texture features of the feature maps obtained by the decomposition.
Further, the specific steps of the BMEMD algorithm in step S1 are as follows:
S11, combining a plurality of two-dimensional signals into one two-dimensional multivariate signal, wherein each two-dimensional signal is one channel of the multivariate signal, which serves as the input signal of BMEMD;
S12, performing real-valued projections in a plurality of directions to convert the input two-dimensional multivariate signal into a plurality of two-dimensional univariate signals, called the projection signals in those directions;
S13, decomposing the obtained signals with BEMD to obtain a series of BIMF components and residual components.
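A minimal sketch of the projection step S12, under stated assumptions: the multichannel image is stored as an (H, W, C) NumPy array, and direction vectors are obtained by normalizing Gaussian draws on the unit sphere (the text does not specify how directions are sampled; low-discrepancy sequences are also common). The function names are illustrative, not taken from the original.

```python
import numpy as np

def sample_directions(n_dirs: int, n_channels: int, seed: int = 0) -> np.ndarray:
    """Draw n_dirs unit direction vectors on the (n_channels - 1)-sphere."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal((n_dirs, n_channels))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def project_multichannel(signal: np.ndarray, directions: np.ndarray) -> np.ndarray:
    """Project an (H, W, C) two-dimensional multivariate signal onto each direction,
    giving one two-dimensional univariate (projection) signal per direction (S12)."""
    # Contract over the channel axis: result has shape (n_dirs, H, W).
    return np.einsum('hwc,dc->dhw', signal, directions)

# Example: a 3-channel (RGB-like) image projected in 8 directions.
img = np.random.rand(64, 64, 3)
dirs = sample_directions(8, img.shape[-1])
projections = project_multichannel(img, dirs)   # shape (8, 64, 64)
print(projections.shape)
```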
Further, the BEMD decomposition in step S13 is as follows: let the original image be I(x,y), where I is the original image and x, y are the horizontal and vertical coordinates of the image pixels;
the specific decomposition steps are as follows:
S131, initialization: let i=1 and f_0(x,y)=I(x,y); i is the decomposition index and f_0(x,y) is the initial input image;
S132, solving for the i-th order two-dimensional intrinsic mode function (Bidimensional Intrinsic Mode Function, BIMF);
S1321, initialization: let k=0 and h_k(x,y)=f_{i-1}(x,y); k is the sifting iteration index, h_k(x,y) is the current input of the BIMF sifting, and f_{i-1}(x,y) is the input image of the i-th decomposition;
S1322, find all local maximum and minimum points of h_k(x,y);
S1323, interpolate the maximum and minimum points to obtain the corresponding upper envelope e_upper(x,y) and lower envelope e_lower(x,y);
S1324, calculate the mean envelope e_m(x,y):
e_m(x,y) = (e_upper(x,y) + e_lower(x,y)) / 2
S1325, subtract the mean envelope from the current signal to obtain the (k+1)-th sifting result h_{k+1}(x,y):
h_{k+1}(x,y) = h_k(x,y) - e_m(x,y)
S1326, verify whether h_{k+1}(x,y) satisfies the iteration termination condition by calculating the standard deviation criterion SD:
SD = Σ_{x=1}^{m} Σ_{y=1}^{n} |h_k(x,y) - h_{k+1}(x,y)|^2 / h_k^2(x,y)
where m and n are the length and width of the image, respectively;
S1327, if SD is smaller than the termination threshold, the obtained h_{k+1}(x,y) is a qualified BIMF; let the intrinsic mode function imf_i(x,y) = h_{k+1}(x,y) and go to step S1328; otherwise let k=k+1 and return to step S1322;
S1328, obtain the residual component res_i(x,y) by subtracting the intrinsic mode function imf_i(x,y) from the residual component res_{i-1}(x,y) of the previous cycle:
res_i(x,y) = res_{i-1}(x,y) - imf_i(x,y)
Decomposition stop condition: the residual component res_i(x,y) has no more than two extreme points, or the number of BIMFs obtained by the decomposition reaches the number required by the practical application; otherwise let i=i+1, take res_i(x,y) as the new initial signal and return to step S1322, until the stop condition is satisfied and the decomposed signal is obtained.
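A compact sketch of the sifting loop S131–S1328 for a single gray channel, under simplifying assumptions that are not specified in the text: extrema are detected with morphological max/min filters, envelopes are fitted with scipy.interpolate.griddata (the steps only say "interpolation"), and the SD threshold, window size and maximum sifting count are illustrative parameters.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import maximum_filter, minimum_filter

def _envelope(points, values, shape):
    """Interpolate scattered extrema to a full-size envelope surface (S1323)."""
    gy, gx = np.mgrid[0:shape[0], 0:shape[1]]
    surf = griddata(points, values, (gy, gx), method='cubic')
    nan = np.isnan(surf)
    if nan.any():  # cubic interpolation leaves NaNs outside the convex hull of the extrema
        surf[nan] = griddata(points, values, (gy[nan], gx[nan]), method='nearest')
    return surf

def bemd(image, n_bimfs=4, sd_thresh=0.2, max_sift=10, win=3):
    """Sketch of S131-S1328: sift out up to n_bimfs BIMFs and return them with the residue."""
    res = image.astype(float)
    bimfs = []
    for _ in range(n_bimfs):                           # outer loop over BIMF orders (i)
        h = res.copy()                                 # S1321
        for _ in range(max_sift):                      # inner sifting loop (k)
            max_mask = (h == maximum_filter(h, win))   # S1322: local maxima
            min_mask = (h == minimum_filter(h, win))   # S1322: local minima
            if max_mask.sum() < 4 or min_mask.sum() < 4:
                return bimfs, res                      # too few extrema: stop decomposing
            e_up = _envelope(np.argwhere(max_mask), h[max_mask], h.shape)
            e_lo = _envelope(np.argwhere(min_mask), h[min_mask], h.shape)
            e_m = (e_up + e_lo) / 2.0                  # S1324: mean envelope
            h_next = h - e_m                           # S1325
            sd = np.sum((h - h_next) ** 2 / (h ** 2 + 1e-12))  # S1326 (guard against /0)
            h = h_next
            if sd < sd_thresh:                         # S1327: qualified BIMF
                break
        bimfs.append(h)                                # imf_i
        res = res - h                                  # S1328: res_i = res_{i-1} - imf_i
    return bimfs, res

# Example on a synthetic gray image.
img = np.random.rand(64, 64)
bimfs, residue = bemd(img, n_bimfs=2)
print(len(bimfs), residue.shape)
```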
Further, the specific steps of step S2 are as follows:
S21, converting the color image I to be processed into the RGB color space to obtain its R, G and B components;
S22, selecting a texture extraction method that is convenient to combine with the color features, and extracting the gray-level texture feature f_texture of the RGB components;
S23, selecting a suitable method to extract the color feature f_color of the image I;
S24, fusing the color feature and the gray-level texture feature into the fused feature f_ct:
f_ct = [f_color, f_texture].
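Steps S22 and S23 leave the texture and color descriptors open ("a suitable method"), so the sketch below is one plausible pairing rather than the method prescribed here: gray-level co-occurrence (GLCM) contrast and homogeneity from scikit-image as f_texture and per-channel color moments (mean, standard deviation, skewness) as f_color, concatenated as in S24.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def color_moments(image_rgb: np.ndarray) -> np.ndarray:
    """First three color moments (mean, std, skewness) per R/G/B channel (S23)."""
    feats = []
    for c in range(3):
        ch = image_rgb[..., c].astype(float).ravel()
        mean, std = ch.mean(), ch.std()
        skew = np.cbrt(np.mean((ch - mean) ** 3))
        feats += [mean, std, skew]
    return np.array(feats)

def glcm_texture(gray_u8: np.ndarray) -> np.ndarray:
    """GLCM contrast and homogeneity of one 8-bit gray channel (S22)."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.concatenate([graycoprops(glcm, 'contrast').ravel(),
                           graycoprops(glcm, 'homogeneity').ravel()])

def fuse_features(image_rgb: np.ndarray) -> np.ndarray:
    """f_ct = [f_color, f_texture] over the three channels (S21-S24)."""
    f_color = color_moments(image_rgb)
    f_texture = np.concatenate([glcm_texture(image_rgb[..., c]) for c in range(3)])
    return np.concatenate([f_color, f_texture])

# Example with a synthetic 8-bit RGB image.
img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
print(fuse_features(img).shape)
```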
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a feature extraction method based on two-dimensional multi-element empirical mode decomposition, which introduces a BMEMD algorithm because BEMD can only be applied to gray images, provides an idea of dividing a color image into R, G, B channels for decomposition and re-fusion, takes first-order BIMF and residual components of a decomposition result as texture and color features of the color image, reduces information redundancy of the color image on the basis of ensuring effective information of the image, and provides conditions for follow-up refinement study image recognition, image restoration and the like.
Drawings
Fig. 1 is a multi-focal image in which (a) the focal position is upper left, (b) the focal position is upper right, (c) the focal position is lower left, (d) the focal position is lower right, and (e) the focal position is center;
FIG. 2 is a diagram of the BIMF components of each channel of FIG. 1 after BMEMD decomposition, wherein: (a) is a first channel, (b) is a second channel, (c) is a third channel, (d) is a fourth channel, and (e) is a fifth channel;
FIG. 3 is the image of a man wearing glasses to be decomposed;
FIG. 4 shows the decomposition results of the three RGB channels of FIG. 3, wherein: (a) is the R channel, (b) is the G channel, and (c) is the B channel;
FIG. 5 shows the synthesized results of the BMEMD decomposition of the channels in FIG. 4, wherein: (a) is BIMF1, (b) is BIMF2, (c) is BIMF3, (d) is BIMF4, and (e) is res;
FIG. 6 shows the fusion effect of FIG. 3 after texture feature and color information extraction, wherein the feature of (a) is glasses, of (b) willow-leaf eyebrows, and of (c) a smile.
Detailed Description
The invention will be further illustrated with reference to examples.
Example 1:
BEMD decomposition: setting an original image as I (x, y), wherein I is the original image, and x, y are respectively the horizontal coordinate points and the vertical coordinate points of the image pixel points;
the specific decomposition steps are as follows:
s131, initializing, to make i=1, f 0 (x, y) =i (x, y); i is a scoreNumber of solutions f 0 (x, y) is an initial input image;
s132, solving a t-th order two-dimensional eigenmode function (Bidimensional Intrinsic Mode Function, BIMF);
s1321, initializing, let k=0, h k (x,y)=f i-1 (x, y); k is the iteration number, h k (x, y) decomposing the input image for BIMF, f i-1 (x, y) is the ith input image;
s1322 find h k (x, y) all maxima and minima points;
s1323, performing interpolation fitting on the maximum and minimum value points by adopting interpolation to obtain a corresponding upper envelope e upper (x, y) and lower envelope e lower (x,y);
S1324 calculating the mean envelope e m (x,y):
S1325, subtracting the mean envelope from the original signal to obtain the k+1st residual component h k+1 (x,y):
h k+1 (x,y)=h k (x,y)-e m (x,y)
S1326, verifying h by calculating standard deviation criterion SD k+1 (x, y) whether the iteration termination condition is satisfied:
wherein: m and n are the length and width of the image respectively;
s1327, if SD is less than the termination threshold, obtaining a qualified BIMF as the remainder, and then letting the eigen-solid function imf i (x,y)=h k+1 (x, y), go to step S1328; otherwise, let k=k+1, go to step S132 to continue execution;
s1328 residual component res i (x, y) the residual component res resulting from the previous cycle i-1 (x, y) subtracting the eigensolid state function imf i (x, y) to obtain:
res i (x,y)=res i-1 (x,y)-imf i (x,y)
decomposition stop condition: residual component res i The number of the BIMF obtained by decomposition or not more than two extreme points of (x, y) reaches the number required by practical application; otherwise, let i=i+1, will res i (x, y) is regarded as a new initial signal to step S1322 until the stop condition is satisfied, and the decomposed signal is obtained.
Example 2
BMEMD is a multivariate extension of BEMD and is implemented as follows:
(1) Combine a plurality of two-dimensional signals into one two-dimensional multivariate signal, where each two-dimensional signal is one channel of the multivariate signal, which serves as the input signal of BMEMD;
(2) Through real-valued projections in a plurality of directions, convert the input two-dimensional multivariate signal into a plurality of two-dimensional univariate signals, called the projection signals in those directions;
(3) Decompose the resulting signals using BEMD to obtain a series of BIMF components and residual components.
Fig. 1 and Fig. 2 show the effect of BMEMD on five differently focused versions of the same picture.
Fig. 1 provides five images with different focus positions as the inputs of the five BMEMD channels; Fig. 2 shows the decomposition results, which from top to bottom are BIMF1 to BIMF4 and the residual component res. The spatial frequency of the BIMFs gradually decreases from top to bottom, which shows up in the texture details of the images becoming progressively smoother.
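To check that the spatial frequency does decrease from BIMF1 down to the residue, one can compare a simple high-frequency energy measure across the components; the Laplacian-energy proxy below is an illustrative choice, not taken from the text, and the random arrays only stand in for real decomposition outputs.

```python
import numpy as np
from scipy import ndimage

def high_freq_energy(component: np.ndarray) -> float:
    """Mean absolute Laplacian response, a crude proxy for spatial frequency."""
    return float(np.mean(np.abs(ndimage.laplace(component.astype(float)))))

# With a real decomposition, the values should decrease from BIMF1 to res.
components = [np.random.rand(64, 64) for _ in range(5)]   # placeholders for BIMF1..BIMF4, res
print([round(high_freq_energy(c), 3) for c in components])
```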
Example 3
(1) Guided by the multi-channel idea of BMEMD, the color image to be decomposed is split into its R, G and B channels for processing. The main idea is as follows: each channel can be treated as a gray-level image, BEMD decomposition is then performed on each channel separately to obtain the required BIMF components, and finally, for each order, the BIMF components of the R, G and B channels are fused laterally, which yields the color BIMF component of each order. The invention selects an image of a man wearing glasses from the CelebA-HQ dataset. As shown in Fig. 3, it is divided into R, G and B channels for BMEMD decomposition, and the results are shown in Fig. 4. The decomposed BIMF maps of each order are then synthesized laterally across the RGB channels to obtain the color BIMF components of each order; the invention takes the first four orders and one residual component, and the result is shown in Fig. 5.
(2) The decomposition of the three RGB channels is then carried out:
the RGB split maps the original color image onto three channels, red, green and blue, each of which is still processed as a gray-level image. Taking the R channel as an example, the more red the original image contains, the more intense the corresponding response in the R channel.
(3) The result of the BMEMD decomposition of the color image in FIG. 3 is shown in FIG. 5, where the first four orders of BIMF components and one residual component are taken. As can be seen from (a) to (d) in FIG. 5, BIMF1 clearly retains the texture details of the image such as the glasses and the cap visor, and the texture intensity weakens order by order. Although BIMF1 strongly highlights the texture features of the original image, its color information is not obvious; notably, the residual component contains a large amount of the color information of the image. In view of this, the BIMF1 and res components are added together so that the image combines texture and color features as fully as possible while redundant information is removed.
The effect of adding the residual component res to BIMF1 after the decomposition of FIG. 3 is shown in FIG. 6(a). Experiments show that the fused image not only highlights detail information such as the hat brim and the glasses frame, but also fuses the color into that detail. FIG. 6(b) and (c) are two representative pictures selected from the CelebA-HQ dataset, with the feature labels "willow-leaf eyebrows" and "smile", respectively; the results show that the fusion of texture and color also performs well on them, providing conditions for subsequent image recognition and analysis.
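The sketch below covers the channel bookkeeping of this example under stated assumptions: the per-channel BIMFs are random stand-ins for real BEMD outputs (a real run would take them from a routine such as the BEMD sketch given earlier), the lateral fusion is a plain channel stack, and clipping the BIMF1 + res image to an 8-bit range is an assumption made for display.

```python
import numpy as np

def fuse_color_bimfs(bimfs_per_channel, res_per_channel):
    """Stack the k-th BIMF of the R, G, B channels into one color BIMF_k,
    and the three residues into one color residue ('lateral' fusion)."""
    color_bimfs = [np.dstack(t) for t in zip(*bimfs_per_channel)]
    color_res = np.dstack(res_per_channel)
    return color_bimfs, color_res

def texture_color_image(color_bimf1, color_res):
    """Add the first color BIMF (texture detail) to the residue (color layout),
    as done for Fig. 6, and clip to a displayable 8-bit range."""
    return np.clip(color_bimf1 + color_res, 0, 255).astype(np.uint8)

# Stand-in per-channel decompositions: 4 BIMFs + residue for each of R, G, B.
h, w, n = 64, 64, 4
bimfs_rgb = [[np.random.randn(h, w) * 20 for _ in range(n)] for _ in range(3)]
res_rgb = [np.random.rand(h, w) * 255 for _ in range(3)]
color_bimfs, color_res = fuse_color_bimfs(bimfs_rgb, res_rgb)
fused = texture_color_image(color_bimfs[0], color_res)
print(len(color_bimfs), color_bimfs[0].shape, fused.shape)   # 4 (64, 64, 3) (64, 64, 3)
```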
Example 4
A feature extraction method based on two-dimensional multi-element empirical mode decomposition comprises the following steps:
S1, carrying out multi-scale multivariate decomposition on a color image by using BMEMD to obtain a series of two-dimensional intrinsic mode functions (BIMFs) ordered from high frequency to low frequency, and screening out the component maps that describe the characteristics of the color image;
the specific steps of the BMEMD algorithm in step S1 are as follows:
S11, combining a plurality of two-dimensional signals into one two-dimensional multivariate signal, wherein each two-dimensional signal is one channel of the multivariate signal, which serves as the input signal of BMEMD;
S12, performing real-valued projections in a plurality of directions to convert the input two-dimensional multivariate signal into a plurality of two-dimensional univariate signals, called the projection signals in those directions;
S13, decomposing the obtained signals with BEMD to obtain a series of BIMF components and residual components.
In step S13, the BEMD decomposition is as follows: let the original image be I(x,y), where I is the original image and x, y are the horizontal and vertical coordinates of the image pixels;
the specific decomposition steps are as follows:
S131, initialization: let i=1 and f_0(x,y)=I(x,y); i is the decomposition index and f_0(x,y) is the initial input image;
S132, solving for the i-th order two-dimensional intrinsic mode function (Bidimensional Intrinsic Mode Function, BIMF);
S1321, initialization: let k=0 and h_k(x,y)=f_{i-1}(x,y); k is the sifting iteration index, h_k(x,y) is the current input of the BIMF sifting, and f_{i-1}(x,y) is the input image of the i-th decomposition;
S1322, find all local maximum and minimum points of h_k(x,y);
S1323, interpolate the maximum and minimum points to obtain the corresponding upper envelope e_upper(x,y) and lower envelope e_lower(x,y);
S1324, calculate the mean envelope e_m(x,y):
e_m(x,y) = (e_upper(x,y) + e_lower(x,y)) / 2
S1325, subtract the mean envelope from the current signal to obtain the (k+1)-th sifting result h_{k+1}(x,y):
h_{k+1}(x,y) = h_k(x,y) - e_m(x,y)
S1326, verify whether h_{k+1}(x,y) satisfies the iteration termination condition by calculating the standard deviation criterion SD:
SD = Σ_{x=1}^{m} Σ_{y=1}^{n} |h_k(x,y) - h_{k+1}(x,y)|^2 / h_k^2(x,y)
where m and n are the length and width of the image, respectively;
S1327, if SD is smaller than the termination threshold, the obtained h_{k+1}(x,y) is a qualified BIMF; let the intrinsic mode function imf_i(x,y) = h_{k+1}(x,y) and go to step S1328; otherwise let k=k+1 and return to step S1322;
S1328, obtain the residual component res_i(x,y) by subtracting the intrinsic mode function imf_i(x,y) from the residual component res_{i-1}(x,y) of the previous cycle:
res_i(x,y) = res_{i-1}(x,y) - imf_i(x,y)
Decomposition stop condition: the residual component res_i(x,y) has no more than two extreme points, or the number of BIMFs obtained by the decomposition reaches the number required by the practical application; otherwise let i=i+1, take res_i(x,y) as the new initial signal and return to step S1322, until the stop condition is satisfied and the decomposed signal is obtained.
S2, extracting the color and texture features of the feature maps obtained by the decomposition;
the specific steps of step S2 are as follows:
S21, converting the color image I to be processed into the RGB color space to obtain its R, G and B components;
S22, selecting a texture extraction method that is convenient to combine with the color features, and extracting the gray-level texture feature f_texture of the RGB components;
S23, selecting a suitable method to extract the color feature f_color of the image I;
S24, fusing the color feature and the gray-level texture feature into the fused feature f_ct:
f_ct = [f_color, f_texture].
The invention discloses a feature extraction method based on two-dimensional multivariate empirical mode decomposition (Bidimensional Multivariate Empirical Mode Decomposition, BMEMD) and belongs to the technical field of image processing. BMEMD adds multiple channels on the basis of BEMD to realize multi-channel image decomposition. The method converts the input two-dimensional multivariate signal into a plurality of two-dimensional univariate signals through real-valued projections in multiple directions. The invention applies BMEMD to the processing of color images: the color image is divided into its R, G and B channels for two-dimensional empirical mode decomposition, and the BIMFs (Bidimensional Intrinsic Mode Functions, two-dimensional intrinsic mode functions) obtained from the decomposition are synthesized to obtain color BIMFs and a residual component, which can be used to extract the color information and texture features of the color image.
The foregoing is only a preferred embodiment of the invention. It should be noted that various modifications and adaptations can be made by those skilled in the art without departing from the principles of the present invention, and such modifications and adaptations are intended to fall within the scope of the invention.

Claims (2)

1. A feature extraction method based on two-dimensional multi-element empirical mode decomposition, characterized by comprising the following steps:
S1, carrying out multi-scale multivariate decomposition on a color image by using BMEMD to obtain a series of two-dimensional intrinsic mode functions (BIMFs) ordered from high frequency to low frequency, and screening out the component maps that describe the characteristics of the color image;
S2, extracting the color and texture features of the feature maps obtained by the decomposition;
the specific steps of the BMEMD algorithm in step S1 are as follows:
S11, combining a plurality of two-dimensional signals into one two-dimensional multivariate signal, wherein each two-dimensional signal is one channel of the multivariate signal, which serves as the input signal of BMEMD;
S12, performing real-valued projections in a plurality of directions to convert the input two-dimensional multivariate signal into a plurality of two-dimensional univariate signals, called the projection signals in those directions;
S13, decomposing the obtained signals with BEMD to obtain a series of BIMF components and residual components;
in step S13, the BEMD decomposition is as follows: let the original image be I(x,y), where I is the original image and x, y are the horizontal and vertical coordinates of the image pixels;
the specific decomposition steps are as follows:
S131, initialization: let i=1 and f_0(x,y)=I(x,y); i is the decomposition index and f_0(x,y) is the initial input image;
S132, solving for the i-th order two-dimensional intrinsic mode function BIMF;
S1321, initialization: let k=0 and h_k(x,y)=f_{i-1}(x,y); k is the sifting iteration index, h_k(x,y) is the current input of the BIMF sifting, and f_{i-1}(x,y) is the input image of the i-th decomposition;
S1322, find all local maximum and minimum points of h_k(x,y);
S1323, interpolate the maximum and minimum points to obtain the corresponding upper envelope e_upper(x,y) and lower envelope e_lower(x,y);
S1324, calculate the mean envelope e_m(x,y):
e_m(x,y) = (e_upper(x,y) + e_lower(x,y)) / 2
S1325, subtract the mean envelope from the current signal to obtain the (k+1)-th sifting result h_{k+1}(x,y):
h_{k+1}(x,y) = h_k(x,y) - e_m(x,y)
S1326, verify whether h_{k+1}(x,y) satisfies the iteration termination condition by calculating the standard deviation criterion SD:
SD = Σ_{x=1}^{m} Σ_{y=1}^{n} |h_k(x,y) - h_{k+1}(x,y)|^2 / h_k^2(x,y)
where m and n are the length and width of the image, respectively;
S1327, if SD is smaller than the termination threshold, the obtained h_{k+1}(x,y) is a qualified BIMF; let the intrinsic mode function imf_i(x,y) = h_{k+1}(x,y) and go to step S1328; otherwise let k=k+1 and return to step S1322;
S1328, obtain the residual component res_i(x,y) by subtracting the intrinsic mode function imf_i(x,y) from the residual component res_{i-1}(x,y) of the previous cycle:
res_i(x,y) = res_{i-1}(x,y) - imf_i(x,y)
decomposition stop condition: the residual component res_i(x,y) has no more than two extreme points, or the number of BIMFs obtained by the decomposition reaches the number required by the practical application; otherwise let i=i+1, take res_i(x,y) as the new initial signal and return to step S1322, until the stop condition is satisfied and the decomposed signal is obtained.
2. The feature extraction method based on two-dimensional multivariate empirical mode decomposition according to claim 1, wherein the specific steps of step S2 are as follows:
S21, converting the color image I to be processed into the RGB color space to obtain its R, G and B components;
S22, selecting a texture extraction method that is convenient to combine with the color features, and extracting the gray-level texture feature f_texture of the RGB components;
S23, selecting a suitable method to extract the color feature f_color of the image I;
S24, fusing the color feature and the gray-level texture feature into the fused feature f_ct:
f_ct = [f_color, f_texture].
CN202110505248.3A 2021-05-10 2021-05-10 Feature extraction method based on two-dimensional multi-element empirical mode decomposition Active CN113139557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110505248.3A CN113139557B (en) 2021-05-10 2021-05-10 Feature extraction method based on two-dimensional multi-element empirical mode decomposition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110505248.3A CN113139557B (en) 2021-05-10 2021-05-10 Feature extraction method based on two-dimensional multi-element empirical mode decomposition

Publications (2)

Publication Number Publication Date
CN113139557A CN113139557A (en) 2021-07-20
CN113139557B (en) 2024-03-29

Family

ID=76817945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110505248.3A Active CN113139557B (en) 2021-05-10 2021-05-10 Feature extraction method based on two-dimensional multi-element empirical mode decomposition

Country Status (1)

Country Link
CN (1) CN113139557B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703059B (en) * 2021-09-02 2023-11-17 中船海洋探测技术研究院有限公司 Remote magnetic detection method for water ferromagnetic target clusters
CN113742802A (en) * 2021-09-03 2021-12-03 国网经济技术研究院有限公司 Two-dimensional multi-element signal empirical mode decomposition rapid method for engineering drawing fusion
CN118155194B (en) * 2024-05-11 2024-07-23 国网安徽省电力有限公司营销服务中心 Intelligent comparison method and system for components of electric energy meter


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542296A (en) * 2012-01-10 2012-07-04 哈尔滨工业大学 Method for extracting image characteristics by multivariate gray model-based bi-dimensional empirical mode decomposition
CN110287446A (en) * 2019-05-29 2019-09-27 东南大学 A kind of polynary empirical mode decomposition algorithm of fast two-dimensional
CN111369488A (en) * 2020-05-28 2020-07-03 江苏集萃移动通信技术研究所有限公司 Two-dimensional multi-element signal empirical mode decomposition fast algorithm for multi-image fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SAR image target detection based on bidimensional intrinsic mode functions; 黄世奇; 黄文准; 刘哲; Journal of Ordnance Equipment Engineering (No. 08); full text *

Also Published As

Publication number Publication date
CN113139557A (en) 2021-07-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant