CN116843596B - Method, system and device for adaptive fusion of multi-mode images of X-ray grating - Google Patents

Method, system and device for adaptive fusion of multi-mode images of X-ray grating

Info

Publication number: CN116843596B (granted publication of application CN202311083848.0A; earlier publication CN116843596A)
Authority: CN (China)
Prior art keywords: component, empirical mode, decomposition, image, residual
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 徐月暑, 田宗翰, 匡翠方, 陶思玮, 柏凌, 刘旭
Original and current assignee: ZJU Hangzhou Global Scientific and Technological Innovation Center (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Priority: CN202311083848.0A

Classifications

    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general; G06T 5/00: Image enhancement or restoration)
    • G06T 2207/10116 — X-ray image (G06T 2207/00: Indexing scheme for image analysis or image enhancement; G06T 2207/10: Image acquisition modality)
    • G06T 2207/20221 — Image fusion; image merging (G06T 2207/20: Special algorithmic details; G06T 2207/20212: Image combination)

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method, a system and a device for adaptive fusion of multi-mode X-ray grating images, wherein the method comprises the following steps: acquiring an absorption contrast image, a dark-field contrast image and a differential phase contrast image of a sample to be measured; decomposing them to obtain a first decomposition component, a second decomposition component and a third decomposition component; preprocessing each before fusion to obtain preprocessed results; fusing the preprocessed results to obtain a second fused empirical mode component and a second fused residual component; performing detail enhancement to obtain a second enhanced empirical mode component, and contrast enhancement to obtain a second enhanced residual component; and reconstructing, with image sharpening, to obtain a reconstructed image. The invention solves the problem that the feature information carried by the three contrast images cannot be presented directly in a single unified result, leaves the image data unaffected by the fusion, and therefore has the advantage of performing image denoising and enhancement directly within the decomposition and reconstruction process.

Description

Method, system and device for adaptive fusion of multi-mode images of X-ray grating
Technical Field
The invention relates to the technical field of image processing, and in particular to a method, a system and a device for adaptive fusion of multi-mode X-ray grating images.
Background
Conventional X-ray imaging yields only a single absorption contrast image, whose feature information is limited. X-ray imaging systems that can additionally capture differential phase contrast and dark-field contrast place high coherence requirements on the X-ray source, which is typically a synchrotron radiation source or a costly liquid-metal-target X-ray source.
With an X-ray three-grating phase contrast imaging system based on the Talbot-Lau effect, an absorption contrast image, a differential phase contrast image and a dark-field contrast image of the experimental sample under test can be obtained using a conventional X-ray source. However, the three contrast images cannot be directly reflected in a single image, and must be switched back and forth during image comparison, which makes practical application cumbersome.
Most existing image fusion methods are based on prior decomposition techniques such as wavelet decomposition or Fourier decomposition, which cause obvious loss of feature information in the fused image and easily discard key features.
Disclosure of Invention
To address the above defects of the prior art, the invention provides a method, a system and a device for adaptive fusion of multi-mode X-ray grating images.
The invention solves the above problems through the following technical solution:
an adaptive fusion method of multi-mode images of an X-ray grating comprises the following steps:
acquiring an absorption contrast image, a dark field contrast image and a differential phase contrast image of a sample to be measured;
decomposing the absorption contrast image, the dark-field contrast image and the differential phase contrast image respectively to obtain the corresponding first, second and third decomposition components, wherein the first decomposition component comprises a first empirical mode component and a first residual component, the second comprises a second empirical mode component and a second residual component, and the third comprises a third empirical mode component and a third residual component;
preprocessing the first, second and third decomposition components before fusion to obtain the preprocessed first empirical mode component and first residual component, second empirical mode component and second residual component, and third empirical mode component and third residual component;
fusing the preprocessed first and second empirical mode components and the first and second residual components respectively to obtain a first fused empirical mode component and a first fused residual component of the first fused component, and then fusing these with the preprocessed third empirical mode component and third residual component to obtain a second fused empirical mode component and a second fused residual component of the second fused component;
performing detail enhancement processing on the second fused empirical mode component to obtain a second enhanced empirical mode component, and performing contrast enhancement processing on the second fused residual component to obtain a second enhanced residual component;
and reconstructing based on the second enhanced empirical mode component and the second enhanced residual component, and performing image sharpening to obtain a reconstructed image.
As an implementation, acquiring the absorption contrast image, the dark-field contrast image and the differential phase contrast image of the sample to be measured comprises the following steps:
acquiring a first Moiré fringe image generated without the sample under test, wherein gratings are placed in front of the X-ray source and the Moiré fringe image is formed through the interaction between the light source and the gratings;
acquiring a second Moiré fringe image generated with the sample under test in place, and acquiring a plurality of first and second Moiré fringe images at different phases, thereby obtaining the absorption contrast image, dark-field contrast image and differential phase contrast image of the sample to be measured.
As an embodiment, decomposing the absorption contrast image, the dark-field contrast image and the differential phase contrast image to obtain the corresponding first, second and third decomposition components comprises the following steps:
decomposing the absorption contrast image, the dark-field contrast image and the differential phase contrast image while adjusting the number of decomposition layers, the threshold value, the decomposition window size and the stopping condition to obtain the first, second and third decomposition components, wherein the first decomposition component comprises a first empirical mode component and a first residual component, the second comprises a second empirical mode component and a second residual component, and the third comprises a third empirical mode component and a third residual component;
the first, second and third decomposition components are expressed as:

$$I_k(x,y)=\sum_{j=1}^{n} C_{k,j}(x,y)+R_k(x,y)$$

where $I_k$ denotes the decomposed image, with $k$ corresponding to the absorption contrast image, the dark-field contrast image and the differential phase contrast image respectively; $C_{k,j}$ denotes the $j$-th order empirical mode component; $n$ denotes the number of decomposition layers; and $R_k$ denotes the residual component remaining after decomposition.
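The decomposition above can be sketched with a greatly simplified fast adaptive bidimensional empirical mode decomposition (FABEMD-style), in which each level's envelope mean is approximated by max/min order-statistics filtering followed by smoothing, and the window grows with the level. The function name, window schedule and level count below are illustrative assumptions, not taken from the patent:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def bemd_decompose(img, n_levels=3, base_window=5):
    """Simplified FABEMD: peel off n_levels empirical mode components (BIMFs).

    Each BIMF is the difference between the current residue and its envelope
    mean (the mean of smoothed upper/lower envelopes); the window size grows
    at each level so later components are coarser.
    """
    residue = img.astype(float)
    bimfs = []
    for level in range(n_levels):
        w = base_window * (2 ** level)          # coarser window each level
        upper = uniform_filter(maximum_filter(residue, size=w), size=w)
        lower = uniform_filter(minimum_filter(residue, size=w), size=w)
        env_mean = 0.5 * (upper + lower)
        bimfs.append(residue - env_mean)        # j-th empirical mode component
        residue = env_mean                      # pass the remainder down
    return bimfs, residue                       # residue = residual component R

rng = np.random.default_rng(0)
image = rng.random((64, 64))
components, residual = bemd_decompose(image)
# By construction, the components plus the residual reconstruct the input.
recon = sum(components) + residual
```

Because each component is the exact difference between successive residues, the sum telescopes back to the input, matching the claim that the components derive entirely from the raw image data.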
As an implementation, preprocessing the first, second and third decomposition components before fusion to obtain the preprocessed first empirical mode component and first residual component, second empirical mode component and second residual component, and third empirical mode component and third residual component comprises the following steps:
denoising and background optimization are performed on the first, second and third decomposition components respectively, reducing the influence of irregular noise, uneven illumination and slowly varying components on the fusion quality;
adaptive filtering is then applied to the remaining components of the first, second and third decomposition components, yielding the preprocessed first empirical mode component and first residual component, second empirical mode component and second residual component, and third empirical mode component and third residual component.
As an embodiment, the adaptive filtering process is expressed as follows:

$$\hat{C}(x,y)=\mu+\frac{\sigma^2-\sigma_n^2}{\sigma^2}\bigl(C(x,y)-\mu\bigr)$$

where $C$ denotes the input empirical mode component, $\hat{C}$ the empirical mode component after adaptive filtering, $\mu$ the mean of the input empirical mode component, $\sigma^2$ the variance of the input empirical mode component, and $\sigma_n^2$ the estimated noise variance.
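The adaptive filtering step matches the form of the classic local Wiener (Lee) filter. A minimal sketch, assuming sliding-window local statistics and a crude global noise-variance estimate (the `window` size and the estimator are assumptions, not specified by the patent):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_wiener(c, window=5, noise_var=None):
    """y = mu + (sigma^2 - sigma_n^2)/sigma^2 * (c - mu), computed locally."""
    c = c.astype(float)
    mu = uniform_filter(c, size=window)                 # local mean
    var = uniform_filter(c * c, size=window) - mu * mu  # local variance
    var = np.maximum(var, 1e-12)                        # avoid division by zero
    if noise_var is None:
        noise_var = np.mean(var)        # crude global noise estimate
    gain = np.maximum(var - noise_var, 0.0) / var       # shrink toward the mean
    return mu + gain * (c - mu)

rng = np.random.default_rng(1)
noisy = rng.normal(0.0, 1.0, (64, 64))
filtered = adaptive_wiener(noisy)
```

In flat (noise-dominated) regions the gain collapses toward zero and the output follows the local mean; in structured regions the gain approaches one and detail is preserved.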
As an embodiment, the method further comprises the steps of:
normalizing the preprocessed first, second and third empirical mode components respectively to obtain the normalized first, second and third empirical mode components;
computing the corresponding principal components from the normalized first, second and third empirical mode components;
performing a principal component analysis transform on all normalized first, second and third empirical mode components participating in the fusion to obtain the fused empirical mode components, namely the first fused empirical mode component and the second fused empirical mode component;
the normalization process is expressed as follows:

$$z_i(x,y)=\frac{C_i(x,y)-\mu_i}{\sigma_i}$$

where $z_i$ is each normalized empirical mode component, $C_i$ each input empirical mode component, $\mu_i$ the mean of each input empirical mode component, and $\sigma_i$ the standard deviation of each input empirical mode component;

the eigenvalues $\lambda_i$ of the correlation coefficient matrix $R$ and the corresponding eigenvectors $a_i$ are obtained by the Jacobi method, where $R$ is the matrix of correlation coefficients between the normalized components $z_i$ and $z_j$;

the principal components are expressed as follows:

$$F_i=\sum_j a_{ij}\,z_j$$
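The normalization and principal-component fusion can be sketched as below. Numpy's symmetric eigensolver stands in for the Jacobi iteration named in the text, and the absolute-value weight normalization is an assumed convention:

```python
import numpy as np

def pca_fuse(components):
    """Fuse same-shape components, weighting by the leading principal
    component of their correlation coefficient matrix."""
    # z-score normalization: z = (c - mean) / std
    zs = [(c - c.mean()) / c.std() for c in components]
    data = np.stack([z.ravel() for z in zs])     # one row per component
    corr = np.corrcoef(data)                     # correlation coefficient matrix
    eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues in ascending order
    lead = eigvecs[:, -1]                        # eigenvector of the largest eigenvalue
    weights = np.abs(lead) / np.abs(lead).sum()  # normalize weights to sum to 1
    fused = sum(w * z for w, z in zip(weights, zs))
    return fused, weights

rng = np.random.default_rng(2)
a = rng.random((32, 32))
b = a + 0.1 * rng.random((32, 32))   # strongly correlated second component
fused, w = pca_fuse([a, b])
```

Weighting by the leading eigenvector concentrates the fused result on the shared (maximally correlated) structure of the inputs, which is why PCA fusion tends to preserve the energy and spectral characteristics mentioned later in the text.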
As an implementation, the first, second and third residual components are fused using a maximum energy fusion method, expressed as follows:

$$E_i=\mu_i^2+\sigma_i^2,\qquad R_F=\lambda\left(\frac{E_A}{E_A+E_B}\,R_A+\frac{E_B}{E_A+E_B}\,R_B\right)$$

where $R_A$ and $R_B$ are the residual components of the two decomposition components participating in the fusion, $\mu_A$ and $\mu_B$ are the means of $R_A$ and $R_B$ respectively, $\sigma_A^2$ and $\sigma_B^2$ are their variances, $\lambda$ is the modulation factor, and $R_F$ is the fused residual component, namely the first fused residual component or the second fused residual component.
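A sketch of one common reading of "maximum energy" fusion: each residual is weighted by its regional energy E = μ² + σ² (the mean square), with λ as the modulation factor. This specific weighting rule is an assumption consistent with the listed symbols, not confirmed by the source:

```python
import numpy as np

def max_energy_fuse(r_a, r_b, lam=1.0):
    """Energy-weighted fusion of two residual components.

    E_i = mu_i^2 + sigma_i^2 is the mean square (signal energy); the residual
    with the larger energy dominates the result; lam is the modulation factor.
    """
    e_a = r_a.mean() ** 2 + r_a.var()
    e_b = r_b.mean() ** 2 + r_b.var()
    w_a = e_a / (e_a + e_b)
    return lam * (w_a * r_a + (1.0 - w_a) * r_b)

rng = np.random.default_rng(3)
res = rng.random((16, 16))
# Fusing a residual with itself (lam = 1) must return it unchanged.
same = max_energy_fuse(res, res)
```

Because the weights come from global mean and variance, this rule preserves the gray-level distribution of the dominant residual, matching the low-frequency role the text assigns to the residuals.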
As an embodiment, the second enhanced empirical mode component is obtained by the detail enhancement process, expressed as follows:

$$C_F'(x,y)=C_F(x,y)\otimes h(x,y)$$

the second enhanced residual component is obtained by the contrast enhancement process, as follows:

$$R_F'(x,y)=\mathrm{HE}\bigl(R_F(x,y)\bigr)$$

where $\otimes$ denotes the convolution calculation, $h$ the frequency-domain high-pass filter function, $\mathrm{HE}$ the histogram equalization operation, $C_F'$ the second enhanced empirical mode component, and $R_F'$ the second enhanced residual component.
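The two enhancement steps can be sketched as follows; a simple Laplacian kernel stands in for the unspecified high-pass filter h (an assumption), and the histogram equalization maps values through the empirical CDF:

```python
import numpy as np
from scipy.ndimage import convolve

HIGH_PASS = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=float)  # Laplacian high-pass kernel

def enhance_detail(c):
    """C' = C convolved with h: high-pass emphasis of the empirical mode component."""
    return convolve(c.astype(float), HIGH_PASS, mode="reflect")

def enhance_contrast(r, bins=256):
    """R' = HE(R): histogram equalization, output rescaled into [0, 1]."""
    r = r.astype(float)
    hist, edges = np.histogram(r, bins=bins)
    cdf = hist.cumsum() / hist.sum()             # cumulative distribution
    return np.interp(r, edges[:-1], cdf)         # map values through the CDF

rng = np.random.default_rng(4)
emc = rng.random((32, 32))
residual = rng.random((32, 32)) ** 3     # skewed, low-contrast distribution
detail = enhance_detail(emc)
equalized = enhance_contrast(residual)
```

High-pass convolution sharpens the texture carried by the fused empirical mode component, while equalizing the residual spreads its gray levels and raises global contrast.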
As an embodiment, the reconstructed image is expressed as follows:

$$F(x,y)=\sum_{j=1}^{n}\omega_j\,C_{F,j}'(x,y)+R_F'(x,y)$$

where $\omega_j$ is the weight factor, $j$ is the order, $F$ denotes the reconstructed image, $C_{F,j}'$ denotes the second enhanced empirical mode component, and $R_F'$ denotes the second enhanced residual component.
As an embodiment, the first Moiré fringe image intensity is expressed as follows:

$$I_{1,k}(x,y)=a_0(x,y)+a_1(x,y)\cos\!\left(\varphi_r(x,y)+\frac{2\pi k}{M}\right)$$

the second Moiré fringe image intensity is expressed as follows:

$$I_{2,k}(x,y)=b_0(x,y)+b_1(x,y)\cos\!\left(\varphi_s(x,y)+\frac{2\pi k}{M}\right)$$

where $I_{2,k}$ denotes the second Moiré fringe image intensity, $I_{1,k}$ the first Moiré fringe image intensity, $x$ the abscissa, $y$ the ordinate, $a_0$ and $a_1$ the zero-order and first-order coefficients of the decomposed $I_1$, $b_0$ and $b_1$ the zero-order and first-order coefficients of the decomposed $I_2$, and $k$ the index over the different stepping times ($M$ steps in total);

the absorption contrast image, dark-field contrast image and differential phase contrast image are respectively expressed as follows:

$$\mathrm{ABS}(x,y)=\frac{b_0(x,y)}{a_0(x,y)},\qquad \mathrm{DF}(x,y)=\frac{b_1(x,y)/b_0(x,y)}{a_1(x,y)/a_0(x,y)},\qquad \mathrm{DPC}(x,y)=\theta_s(x,y)-\theta_r(x,y),\quad \theta_{s,r}(x,y)=\frac{p_2}{2\pi d}\,\varphi_{s,r}(x,y)$$

where $\mathrm{ABS}$ denotes the absorption contrast image, $\mathrm{DF}$ the dark-field contrast image, $\mathrm{DPC}$ the differential phase contrast image, $\varphi_s$ the image phase when the sample to be measured is placed, $\varphi_r$ the image phase when no sample is placed, $p_2$ the period of the analysis grating, $d$ the distance between the phase grating and the analysis grating, $\theta_s$ and $\theta_r$ the differential phases with and without the sample to be measured, $\arg$ the complex argument calculation (by which the phases are retrieved), and $\mathrm{rem}$ the modulo (remainder) calculation (by which the phase difference is wrapped).
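The phase-stepping retrieval can be sketched by fitting the sinusoidal fringe model with a DFT along the stepping axis, shown here on simulated data. The coefficient names follow the text; taking ABS as the raw transmission ratio b0/a0 (rather than its negative logarithm) is an assumption:

```python
import numpy as np

def fit_fringes(stack):
    """Recover a0 (offset), a1 (amplitude) and phase from M phase steps.

    stack has shape (M, H, W); the first DFT coefficient along axis 0 gives
    the amplitude and phase of the cosine term, the zeroth gives the mean.
    """
    M = stack.shape[0]
    f = np.fft.fft(stack, axis=0)
    a0 = np.real(f[0]) / M
    a1 = 2.0 * np.abs(f[1]) / M
    phi = np.angle(f[1])         # arg: complex argument of the first coefficient
    return a0, a1, phi

M, shape = 5, (8, 8)
k = np.arange(M).reshape(M, 1, 1)
# Reference scan (no sample) and sample scan with halved transmission.
ref = np.broadcast_to(1.0 + 0.5 * np.cos(2 * np.pi * k / M + 0.3), (M,) + shape)
sam = np.broadcast_to(0.5 + 0.2 * np.cos(2 * np.pi * k / M + 0.8), (M,) + shape)

a0, a1, phi_r = fit_fringes(ref)
b0, b1, phi_s = fit_fringes(sam)
absorption = b0 / a0                          # transmission ratio
dark_field = (b1 / b0) / (a1 / a0)            # fringe visibility ratio
dpc = np.angle(np.exp(1j * (phi_s - phi_r)))  # wrapped phase difference
```

For the simulated parameters the recovered maps are flat with absorption 0.5, dark-field 0.8 and phase difference 0.5 rad, confirming that the DFT fit recovers the zero-order coefficients, first-order coefficients and phases of both scans.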
An X-ray grating multi-mode image self-adaptive fusion system comprises an image acquisition module, an image decomposition module, a component processing module, a component fusion module and an image reconstruction module;
The image acquisition module is used for acquiring an absorption contrast image, a dark field contrast image and a differential phase contrast image of the sample to be detected;
the image decomposition module is used for respectively decomposing the absorption contrast image, the dark field contrast image and the differential phase contrast image to obtain a corresponding first decomposition component, a second decomposition component and a third decomposition component, wherein the first decomposition component comprises a first empirical mode component and a first residual error component, the second decomposition component comprises a second empirical mode component and a second residual error component, and the third decomposition component comprises a third empirical mode component and a third residual error component;
the component processing module is used for preprocessing the first, second and third decomposition components before fusion to obtain the preprocessed first empirical mode component and first residual component, second empirical mode component and second residual component, and third empirical mode component and third residual component;
the component fusion module is used for fusing the preprocessed first and second empirical mode components and the first and second residual components respectively to obtain the first fused empirical mode component and first fused residual component of the first fused component, and for fusing these with the preprocessed third empirical mode component and third residual component to obtain the second fused empirical mode component and second fused residual component of the second fused component;
the image reconstruction module is used for performing detail enhancement on the second fused empirical mode component to obtain the second enhanced empirical mode component, and contrast enhancement on the second fused residual component to obtain the second enhanced residual component; and for reconstructing based on the second enhanced empirical mode component and the second enhanced residual component and performing image sharpening to obtain the reconstructed image.
A computer readable storage medium storing a computer program which, when executed by a processor, implements the following method:
acquiring an absorption contrast image, a dark-field contrast image and a differential phase contrast image of a sample to be measured;
decomposing the absorption contrast image, the dark-field contrast image and the differential phase contrast image respectively to obtain the corresponding first, second and third decomposition components, wherein the first decomposition component comprises a first empirical mode component and a first residual component, the second comprises a second empirical mode component and a second residual component, and the third comprises a third empirical mode component and a third residual component;
preprocessing the first, second and third decomposition components before fusion to obtain the preprocessed first empirical mode component and first residual component, second empirical mode component and second residual component, and third empirical mode component and third residual component;
fusing the preprocessed first and second empirical mode components and the first and second residual components respectively to obtain a first fused empirical mode component and a first fused residual component of the first fused component, and then fusing these with the preprocessed third empirical mode component and third residual component to obtain a second fused empirical mode component and a second fused residual component of the second fused component;
performing detail enhancement on the second fused empirical mode component to obtain a second enhanced empirical mode component, and contrast enhancement on the second fused residual component to obtain a second enhanced residual component;
and reconstructing based on the second enhanced empirical mode component and the second enhanced residual component, and performing image sharpening to obtain a reconstructed image.
An X-ray grating phase-contrast multi-mode image fast adaptive fusion apparatus comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the following method:
acquiring an absorption contrast image, a dark-field contrast image and a differential phase contrast image of a sample to be measured;
decomposing the absorption contrast image, the dark-field contrast image and the differential phase contrast image respectively to obtain the corresponding first, second and third decomposition components, wherein the first decomposition component comprises a first empirical mode component and a first residual component, the second comprises a second empirical mode component and a second residual component, and the third comprises a third empirical mode component and a third residual component;
preprocessing the first, second and third decomposition components before fusion to obtain the preprocessed first empirical mode component and first residual component, second empirical mode component and second residual component, and third empirical mode component and third residual component;
fusing the preprocessed first and second empirical mode components and the first and second residual components respectively to obtain a first fused empirical mode component and a first fused residual component of the first fused component, and then fusing these with the preprocessed third empirical mode component and third residual component to obtain a second fused empirical mode component and a second fused residual component of the second fused component;
performing detail enhancement on the second fused empirical mode component to obtain a second enhanced empirical mode component, and contrast enhancement on the second fused residual component to obtain a second enhanced residual component;
and reconstructing based on the second enhanced empirical mode component and the second enhanced residual component, and performing image sharpening to obtain a reconstructed image.
By adopting the above technical solution, the invention achieves remarkable technical effects:
the method eliminates the need to switch back and forth between pictures during image comparison, and at the same time solves the problem that existing image fusion methods easily lose key feature information;
on the basis of the original image feature information, the image gains clearer detail expression and a better visual effect.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in their description are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the invention, and that other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 is an overall schematic of the system of the present invention;
FIG. 3 is an overall flow chart of the method of the present invention;
FIG. 4 is a schematic diagram of the device structure used to obtain the three contrast images;
fig. 5 is a schematic diagram of subjective evaluation results of the fusion image of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to examples, which illustrate the invention and are not intended to limit it.
Example 1:
an adaptive fusion method of phase-contrast multi-mode images of an X-ray grating is shown in fig. 1, and comprises the following steps:
S100: acquiring an absorption contrast image, a dark-field contrast image and a differential phase contrast image of the sample to be measured;
S200: decomposing the absorption contrast image, the dark-field contrast image and the differential phase contrast image respectively to obtain the corresponding first, second and third decomposition components, wherein the first decomposition component comprises a first empirical mode component and a first residual component, the second comprises a second empirical mode component and a second residual component, and the third comprises a third empirical mode component and a third residual component;
S300: preprocessing the first, second and third decomposition components before fusion to obtain the preprocessed first empirical mode component and first residual component, second empirical mode component and second residual component, and third empirical mode component and third residual component;
S400: fusing the preprocessed first and second empirical mode components and the first and second residual components respectively to obtain the first fused empirical mode component and first fused residual component of the first fused component, and then fusing these with the preprocessed third empirical mode component and third residual component to obtain the second fused empirical mode component and second fused residual component of the second fused component;
S500: performing detail enhancement on the second fused empirical mode component to obtain the second enhanced empirical mode component, and contrast enhancement on the second fused residual component to obtain the second enhanced residual component;
S600: reconstructing based on the second enhanced empirical mode component and the second enhanced residual component, and performing image sharpening to obtain the reconstructed image.
The method eliminates the cumbersome back-and-forth switching between the three contrast images during comparison, solves the problem that conventional image fusion methods easily lose key feature information, and improves the detail expression and visual effect of the images. Fig. 3 is an overall flow chart of the method and describes its specific implementation.
In conventional image fusion, decomposition introduces a certain distortion into the data of the original image, so the result cannot truly reflect the original feature information. In the present method, the registered absorption contrast, dark-field contrast and differential phase contrast images are decomposed by fast adaptive bidimensional empirical mode decomposition into corresponding two-dimensional empirical mode components and residual components, which reflect the high-frequency features and low-frequency information of the original data respectively. After further preprocessing, the high-frequency components and the low-frequency information are fused using different fusion rules. Finally, the fused components are linearly weighted and contrast enhanced, further improving the quality of the reconstructed image.
Unlike conventional wavelet and Fourier decomposition methods, the method of the invention starts from the raw data of the image: using the information features present in the raw data, the original image is decomposed repeatedly through a 'window', finally yielding a series of components called two-dimensional empirical mode components. By combining two-dimensional empirical mode components and residuals of different levels, the feature information of the image is separated into the empirical mode components from coarse to fine. Since the two-dimensional empirical mode components are derived entirely from the raw image data, image quality is not affected.
Two-dimensional empirical mode components of different levels reflect different characteristics of the original image: lower-level components carry its high-frequency information, such as textures, details and image boundaries, while higher-level components and the residual mainly carry its low-frequency characteristics, such as gray-level distribution, overview and contours. After the different levels of empirical mode components and residuals of the decomposed image are obtained, corresponding preprocessing, fusion and post-fusion enhancement methods are applied according to the feature information each level carries, improving image contrast while preserving the original feature information, so that the image has clearer detail expression and visual effect, realizing adaptive multi-mode image fusion based on the raw data features. The empirical mode components and the residuals are fused with a principal component analysis method and a maximum energy method respectively: principal component analysis better preserves the energy and spectral characteristics of the images before and after fusion, while maximum energy fusion better preserves the gray-level distribution of the residuals, which contain more low-frequency information.
In step S100, the absorption contrast image, dark field contrast image and differential phase contrast image of the sample to be measured are acquired; the structure of the device producing the three contrast images is shown in Fig. 4. The acquisition includes the following steps:
S110: an absorption grating, called the source grating G0, is placed in front of the X-ray tabletop source 1; it divides the circular-spot source into a number of mutually incoherent line sources, improving the coherence of the source so that a low-coherence laboratory X-ray source can be used for phase contrast imaging experiments;
S120: each of the sub-sources produced by the division of the source grating G0 has high coherence in the direction perpendicular to the dividing direction, and can independently form a Talbot self-image at certain distances behind the phase grating G1;
S130: through a specific design of the geometric distance parameters, the self-images of the mutually incoherent sub-sources, shifted by one or several periods, do not interfere at the analyzer grating G2 but simply add in intensity, producing the self-imaging fringes required by the experiment;
S140: in the detector plane, the self-imaging fringes formed by intensity superposition of the line sub-sources combine with the absorption fringe pattern of the analyzer grating G2 to form moiré fringes;
S150: the grating G2 is stepped 5 times with equal step length; using the phase stepping method, the first moiré fringe images P1_1 to P1_5 generated without the sample to be measured and the second moiré fringe images P2_1 to P2_5 generated with the sample in place are obtained under the different phase states;
S160: by analyzing P1_1 to P1_5 and P2_1 to P2_5, the absorption contrast image, differential phase contrast image and dark field contrast image of the measured experimental sample are obtained.
It can be understood that the first moiré fringe image is acquired without the sample under test and the second moiré fringe image is acquired with the sample under test; their intensities can be expressed as follows:
The first moiré fringe image intensity is expressed as follows:

I1(x, y, k) = a0(x, y) + a1(x, y) · cos( φ1(x, y) + 2πk/M )

The second moiré fringe image intensity is expressed as follows:

I2(x, y, k) = b0(x, y) + b1(x, y) · cos( φ2(x, y) + 2πk/M )

wherein I2 denotes the second moiré fringe image intensity, I1 the first moiré fringe image intensity, x the abscissa, y the ordinate, a0 and a1 the zero-order and first-order coefficients of the decomposition of I1, b0 and b1 the zero-order and first-order coefficients of the decomposition of I2, and k the stepping index (M steps in total);

The absorption contrast image, dark field contrast image and differential phase contrast image are expressed respectively as follows:

A(x, y) = −ln( b0(x, y) / a0(x, y) )

D(x, y) = ( b1(x, y) / b0(x, y) ) / ( a1(x, y) / a0(x, y) )

Φ(x, y) = ( p2 / (2π · d) ) · rem( φ2(x, y) − φ1(x, y), 2π )

wherein A denotes the absorption contrast image, D the dark field contrast image, Φ the differential phase contrast image, φ2 the image phase with the sample in place, φ1 the image phase without the sample, p2 the period of the analyzer grating, d the distance between the phase grating and the analyzer grating, φ2 − φ1 the differential phase between sample and reference scans, arg the argument (phase-angle) operation used to extract the phases from the first-order coefficients, and rem the remainder (modulo) operation used to wrap the phase difference.
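The retrieval described above can be illustrated with a minimal numerical sketch (not the patent's implementation): the zero- and first-order coefficients of the phase-stepping curve are recovered with a discrete Fourier sum, and the three contrast signals are then formed from the reference and sample coefficients. The function names, the logarithmic form of the absorption signal, and the use of `math.remainder` for the rem operation are assumptions for illustration.

```python
import cmath
import math

def phase_step_coeffs(samples):
    """Recover a0 (zero-order), a1 (first-order amplitude) and phi
    (fringe phase) from M phase-stepping samples, assuming
    I[k] = a0 + a1*cos(phi + 2*pi*k/M)."""
    m = len(samples)
    a0 = sum(samples) / m
    # first Fourier coefficient: c = (M/2) * a1 * exp(i*phi)
    c = sum(s * cmath.exp(-2j * math.pi * k / m)
            for k, s in enumerate(samples))
    return a0, 2 * abs(c) / m, cmath.phase(c)

def contrasts(ref, sam):
    """Absorption, dark-field and differential-phase signals from the
    reference (no sample) and sample phase-stepping curves."""
    a0, a1, phi1 = phase_step_coeffs(ref)
    b0, b1, phi2 = phase_step_coeffs(sam)
    absorption = -math.log(b0 / a0)                   # attenuation (log form assumed)
    dark_field = (b1 / b0) / (a1 / a0)                # visibility reduction
    dphi = math.remainder(phi2 - phi1, 2 * math.pi)   # wrapped phase difference
    return absorption, dark_field, dphi
```

With a 5-step scan, for example, a fringe curve 2 + 0.5·cos(0.3 + 2πk/5) yields a0 = 2, a1 = 0.5 and φ = 0.3 back exactly, which is why the three contrast maps inherit pixel-level registration from the raw stepping data.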
For the absorption contrast image, dark field contrast image and differential phase contrast image, the gray value of each pixel is obtained by a series of operations on the values of the corresponding pixels in the two groups of original moiré fringe images of the sample to be measured. Because these operations are pixel-wise, the three contrast images correspond to each other pixel by pixel, so the spatial registration required before image fusion is completed automatically.
In the device producing the three contrast images, namely the X-ray three-grating phase contrast interference system, the acquisition of the absorption, dark field and differential phase contrast images required for fusion avoids the errors that may arise during image registration: since the three images are obtained from one group of original image data, their spatial distributions are almost identical, which makes registration much simpler than in other types of fusion work.
In step S200, the absorption contrast image, the dark field contrast image, and the differential phase contrast image are decomposed to obtain a first decomposition component, a second decomposition component, and a third decomposition component, respectively, which includes the steps of:
S210: adjust the number of decomposition layers, the threshold and the stopping condition to obtain decomposition components of the absorption contrast image, dark field contrast image and differential phase contrast image at different levels, as follows:

Step 1: initialize the decomposition level index n = 1;

Step 2: initialize the residual function required in the decomposition process, r_0(x, y) = f(x, y), where f(x, y) is the gray-value distribution of the input contrast image;

Step 3: let the transition function h(x, y) = r_{n−1}(x, y);

Step 4: traverse the input image and, from the 8-neighborhood of each pixel, compute the maximum point set P_max and the minimum point set P_min of h(x, y);

Step 5: interpolate and fit the maximum point set obtained in Step 4 to obtain the upper envelope surface E_max(x, y) of h(x, y);

Step 6: interpolate and fit the minimum point set obtained in Step 4 to obtain the lower envelope surface E_min(x, y) of h(x, y);

Step 7: compute the mean surface of the upper and lower envelopes obtained in Steps 5 and 6, m(x, y) = ( E_max(x, y) + E_min(x, y) ) / 2;

Step 8: compute a new transition function from the mean surface and the current transition function, h′(x, y) = h(x, y) − m(x, y);

Step 9: verify whether h′(x, y) satisfies the iteration termination condition of the two-dimensional empirical mode decomposition set at the start, generally judged by the standard deviation SD between successive transition functions;

Step 10: if the SD of h′(x, y) is small enough, h′(x, y) can be taken as the n-th empirical mode component c_n(x, y); otherwise, take h′(x, y) as the new input and return to Step 4, continuing the iteration until the termination condition is met or the preset maximum number of iterations is reached;

Step 11: compute r_n(x, y) = r_{n−1}(x, y) − c_n(x, y). If r_n(x, y) has fewer extreme points than the preset number, or the decomposition scale has reached the preset upper limit of two-dimensional empirical mode decomposition layers, the decomposition ends and r_n(x, y) is output as the residual; otherwise, set n = n + 1 and return to Step 2 to continue.
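The sifting loop of S210 can be illustrated with a one-dimensional analogue (the patent's method is two-dimensional with surface interpolation over the 8-neighborhood extrema; this sketch uses piecewise-linear envelopes and a fixed iteration count in place of the SD test, and all function names are illustrative). By construction, the extracted empirical mode components and the final residual sum back exactly to the input signal:

```python
import math

def local_extrema(x):
    """Indices of strict interior local maxima and minima."""
    maxi = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    mini = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] < x[i + 1]]
    return maxi, mini

def envelope(idx, y):
    """Piecewise-linear envelope through the extrema (ends clamped)."""
    n = len(y)
    pts = [(0, y[idx[0]])] + [(i, y[i]) for i in idx] + [(n - 1, y[idx[-1]])]
    out, k = [], 0
    for i in range(n):
        while pts[k + 1][0] < i:
            k += 1
        (x0, y0), (x1, y1) = pts[k], pts[k + 1]
        out.append(y0 + (y1 - y0) * (i - x0) / (x1 - x0))
    return out

def sift(r, iters=8):
    """Steps 3-10: repeatedly subtract the mean of the upper and lower
    envelopes (a fixed iteration count stands in for the SD criterion)."""
    h = list(r)
    for _ in range(iters):
        maxi, mini = local_extrema(h)
        if len(maxi) < 2 or len(mini) < 2:
            break
        upper, lower = envelope(maxi, h), envelope(mini, h)
        h = [hv - (u + l) / 2 for hv, u, l in zip(h, upper, lower)]
    return h

def emd(x, max_levels=3):
    """Steps 1-11: peel off empirical mode components until the residual
    has too few extrema or the level limit is reached."""
    r, comps = list(x), []
    for _ in range(max_levels):
        maxi, mini = local_extrema(r)
        if len(maxi) < 2 or len(mini) < 2:
            break
        c = sift(r)
        comps.append(c)
        r = [rv - cv for rv, cv in zip(r, c)]
    return comps, r
```

Because each level is formed purely by subtraction from the previous residual, the decomposition is lossless, matching the statement that the components are derived entirely from the raw data.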
S220: select appropriate decomposition parameters; after the decomposition step S210, the first decomposition component, the second decomposition component and the third decomposition component are obtained.
In one embodiment, the first, second and third decomposition components are expressed as:

f_j(x, y) = Σ_{i=1}^{K} c_{j,i}(x, y) + r_j(x, y),  j = 1, 2, 3

wherein f_j denotes the decomposition component, with j = 1, 2, 3 corresponding respectively to the absorption contrast image, dark field contrast image and differential phase contrast image; c_{j,i} denotes the i-th order empirical mode component, K denotes the number of decomposition layers, and r_j denotes the residual component remaining after decomposition.
In step S400, pre-fusion preprocessing is performed on the first, second and third decomposition components to obtain the preprocessed first empirical mode component and first residual component, second empirical mode component and second residual component, and third empirical mode component and third residual component, comprising the following steps:
Denoising and background optimization are respectively carried out on the first decomposition component, the second decomposition component and the third decomposition component, so that the influence of irregular noise, uneven illumination and slowly-changing components on the fusion quality is reduced;
and performing adaptive filtering processing on the first decomposition component, the second decomposition component and the remaining other components of the third decomposition component to obtain a preprocessed first empirical mode component and a preprocessed first residual component, a preprocessed second empirical mode component and a preprocessed second residual component, and a preprocessed third empirical mode component and a preprocessed third residual component.
Specifically, the adaptive filtering is expressed as follows:

ĉ(x, y) = c(x, y) − ( σ_n² / σ²(x, y) ) · ( c(x, y) − μ(x, y) )

wherein c denotes the input empirical mode component, ĉ the empirical mode component after adaptive filtering, μ the local mean of the input empirical mode component, σ² its local variance, and σ_n² the estimated noise variance.
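A pixelwise sketch of this adaptive (Wiener-type) filter follows, using a 3×3 local window for the mean and variance; the window size and the clamping of the correction ratio at 1 are illustrative choices not specified in the text:

```python
def wiener_filter(img, noise_var, k=1):
    """Adaptive filtering: pull each pixel toward its local mean in
    proportion to the ratio of noise variance to local variance.
    img is a list of rows; k is the window radius (3x3 window for k=1)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - k), min(h, y + k + 1))
                    for i in range(max(0, x - k), min(w, x + k + 1))]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals)
            # flat regions (var <= noise) collapse to the local mean
            ratio = min(1.0, noise_var / var) if var > 0 else 1.0
            out[y][x] = img[y][x] - ratio * (img[y][x] - mu)
        # rows are processed independently; no state carried between pixels
    return out
```

In smooth regions the local variance is dominated by noise and the filter averages strongly; around edges and texture the variance exceeds the noise estimate and the pixel is left nearly untouched, which is why this step suppresses noise without blurring the high-frequency empirical mode components.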
Through the above preprocessing, noise is suppressed and the uneven distribution of background light is improved.
In the fusion process, a fusion strategy is chosen according to the characteristics of the different frequency bands: the high-frequency information reflects the detail features of the image, while the low-frequency information reflects the overall gray-level and energy distribution of the image. The empirical mode components and the residual components are therefore fused separately.
The process of merging empirical mode components is detailed in the following steps:
respectively carrying out standardization processing on the preprocessed first empirical mode component, the second empirical mode component and the third empirical mode component to obtain a standardized first empirical mode component, a standardized second empirical mode component and a standardized third empirical mode component;
based on the normalized first empirical mode component, the second empirical mode component and the third empirical mode component, obtaining a corresponding component principal component;
performing principal component analysis transformation on all normalized first empirical mode components, second empirical mode components and third empirical mode components which participate in fusion to obtain fused empirical mode components, namely a first fused empirical mode component and a second fused empirical mode component;
wherein the normalization is expressed as follows:

z_j(x, y) = ( c_j(x, y) − μ_j ) / σ_j

The eigenvalues λ_i of the correlation coefficient matrix R and the corresponding eigenvectors v_i are obtained by the Jacobi method; the correlation coefficient matrix R is expressed as:

R = [ r_{pq} ],  r_{pq} = (1/N) Σ_{x,y} z_p(x, y) · z_q(x, y)

The component principal component is expressed as follows:

P(x, y) = Σ_j v_{1,j} · z_j(x, y)

wherein z_j is each empirical mode component after normalization, c_j denotes each input empirical mode component, μ_j is the mean of each input empirical mode component, σ_j is the standard deviation of each input empirical mode component, and v_1 is the eigenvector corresponding to the largest eigenvalue.
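A compact sketch of the principal-component fusion on flattened components follows. Power iteration stands in here for the Jacobi eigen-solver named in the text, and normalizing the principal eigenvector into mixing weights that sum to one is an illustrative choice:

```python
def pca_fuse(components):
    """Fuse equally sized flattened components by projecting onto the
    first principal component of their correlation matrix."""
    n, big_n = len(components), len(components[0])
    # standardize each component (zero mean, unit standard deviation)
    zs = []
    for c in components:
        mu = sum(c) / big_n
        sd = (sum((v - mu) ** 2 for v in c) / big_n) ** 0.5 or 1.0
        zs.append([(v - mu) / sd for v in c])
    # correlation coefficient matrix R
    R = [[sum(zs[i][t] * zs[j][t] for t in range(big_n)) / big_n
          for j in range(n)] for i in range(n)]
    # principal eigenvector by power iteration (stand-in for Jacobi)
    v = [1.0] * n
    for _ in range(100):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # convert the eigenvector into mixing weights that sum to 1
    s = sum(v)
    wts = [x / s for x in v]
    return [sum(wts[i] * zs[i][t] for i in range(n)) for t in range(big_n)]
```

For two perfectly correlated components the correlation matrix is all ones, the principal eigenvector is uniform, and the fused result reduces to the (identical) standardized inputs, illustrating why this scheme preserves the common energy structure of the components.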
The residual components are fused as follows: the first residual component, the second residual component and the third residual component are fused by the maximum-energy fusion method. An energy measure is computed for each residual,

E_i(x, y) = μ_i² + λ · σ_i²,  i = 1, 2

and the fused residual takes the residual with the larger energy:

r_F(x, y) = r_1(x, y) if E_1 ≥ E_2, otherwise r_2(x, y)

wherein r_1 and r_2 are the residual components of the two decomposition components participating in the fusion, μ_1 and μ_2 are the means of r_1 and r_2 respectively, σ_1² and σ_2² are the variances of r_1 and r_2 respectively, λ is the modulation factor, and r_F is the fused residual component (the first fused residual component or the second fused residual component, respectively).
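The exact maximum-energy rule survives only as placeholders in the source; the sketch below assumes one common reading, in which each residual's energy combines its mean and variance through the modulation factor λ and the higher-energy residual is selected. The energy formula and the function names are assumptions:

```python
def stats(r):
    """Mean and variance of a flattened residual component."""
    n = len(r)
    mu = sum(r) / n
    var = sum((v - mu) ** 2 for v in r) / n
    return mu, var

def max_energy_fuse(r1, r2, lam=1.0):
    """Select the residual with the larger energy E = mu^2 + lam*var
    (assumed form of the maximum-energy criterion; lam is the
    modulation factor weighting variance against mean brightness)."""
    mu1, var1 = stats(r1)
    mu2, var2 = stats(r2)
    e1 = mu1 ** 2 + lam * var1
    e2 = mu2 ** 2 + lam * var2
    return list(r1) if e1 >= e2 else list(r2)
```

Raising λ favors the residual with stronger gray-level variation over the merely brighter one, which matches the stated goal of preserving the low-frequency gray distribution.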
In one embodiment, to obtain better image quality, richer detail features and better visual effect, quality enhancement is applied to the fused empirical mode components and the fused residual. Exploiting the fact that different frequency bands reflect different characteristics of the image, the high-frequency information, which mainly reflects image detail, is sharpened, while the residual, which mainly reflects the gray-level distribution of the image, undergoes contrast enhancement and histogram equalization.
The second enhanced empirical mode component is obtained by detail enhancement, expressed as follows:

c_E(x, y) = c_F(x, y) + c_F(x, y) ∗ H(x, y)

The second enhanced residual component is obtained by contrast enhancement, expressed as follows:

r_E(x, y) = HE( r_F(x, y) )

wherein ∗ denotes the convolution operation, H the high-pass filter function (defined in the frequency domain and applied equivalently as a convolution kernel), HE(·) the histogram equalization operation, c_E the second enhanced empirical mode component, and r_E the second enhanced residual component.
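The histogram equalization of the residual can be sketched as follows for 8-bit integer data; this is the standard CDF-remapping formula (the detail-enhancement convolution is omitted here):

```python
def hist_equalize(img, levels=256):
    """Classic histogram equalization: remap each gray level through the
    normalized cumulative distribution of the image histogram."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative distribution function
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)   # first occupied bin
    span = n - cdf_min
    lut = [round((c - cdf_min) / span * (levels - 1)) if span else 0
           for c in cdf]
    return [[lut[p] for p in row] for row in img]
```

A two-level image, for instance, is stretched to the extremes of the gray range, which is the contrast improvement the residual enhancement step aims at.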
In one embodiment, the reconstructed image is expressed as follows:

F(x, y) = Σ_{i=1}^{K} w_i · c_E^{(i)}(x, y) + w_{K+1} · r_E(x, y)

wherein w_i is the weight factor, i is the order, F denotes the reconstructed image, c_E^{(i)} denotes the i-th order second enhanced empirical mode component (K orders in total), and r_E denotes the second enhanced residual component. The weight factor of the first order generally takes a value greater than 1.5, and the weights of the other orders take values of 0.9 to 1.1.
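The weighted recombination can be sketched directly; boosting the first-order (finest detail) weight above 1.5 while keeping the others near 1 follows the text, and the function name is illustrative:

```python
def reconstruct(comps, residual, weights, w_res=1.0):
    """Weighted sum of enhanced empirical mode components plus the
    weighted enhanced residual (all flattened to 1-D lists)."""
    out = []
    for t in range(len(residual)):
        v = sum(w * c[t] for w, c in zip(weights, comps))
        out.append(v + w_res * residual[t])
    return out
```

For example, with components [2.0] and [3.0], residual [5.0] and weights [1.6, 1.0], the reconstructed pixel is 1.6·2 + 1·3 + 1·5 = 11.2, i.e. the fine detail is amplified while the gray-level base is preserved.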
Finally, image quality evaluation is performed on the reconstructed image: the quality of the fused image is assessed on the basis of objective image quality indexes (such as information entropy, structural similarity and peak signal-to-noise ratio) and subjective evaluation results (such as whether the feature information of the contrast images reflecting different information is embodied in the fused image, together with factors such as intuitiveness and distinguishability), forming negative feedback for the multi-mode feature image fusion process. The objective image quality indexes used are four: average gradient, standard deviation, peak signal-to-noise ratio and computation speed.
The average gradient expresses the rate of gray-level change near borders and fine structures of the image, and its magnitude indicates image sharpness: it reflects the rate of change of contrast in the fine details, i.e. the density change rate along the multidimensional directions of the image, representing its relative clarity. The larger the average gradient, the richer the detail information and the clearer the image. The standard deviation describes the dispersion of the pixel values around the overall mean of the image, reflecting its gray-level distribution and brightness variation; the larger the standard deviation, the higher the light-dark contrast and the better the image quality. The peak signal-to-noise ratio, based on the error between corresponding pixels of the actual image and the reference image, evaluates the degree to which noise and similar factors affect the fused image; the larger the peak signal-to-noise ratio, the higher the image quality.
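Three of the objective indexes can be sketched as follows. The forward-difference gradient, the √((gx² + gy²)/2) convention for the average gradient, and the 8-bit peak of 255 for PSNR are common conventions the text does not fix:

```python
import math

def avg_gradient(img):
    """Mean gradient magnitude over the image interior (sharpness proxy)."""
    h, w = len(img), len(img[0])
    total, cnt = 0.0, 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            total += math.sqrt((gx * gx + gy * gy) / 2)
            cnt += 1
    return total / cnt

def std_dev(img):
    """Standard deviation of all pixel values (contrast proxy)."""
    flat = [p for row in img for p in row]
    mu = sum(flat) / len(flat)
    return math.sqrt(sum((p - mu) ** 2 for p in flat) / len(flat))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two images (higher is better)."""
    sq = [(p - q) ** 2 for ra, rb in zip(a, b) for p, q in zip(ra, rb)]
    mse = sum(sq) / len(sq)
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)
```

A perfectly flat image has zero average gradient and infinite self-PSNR, so rising values of all three indexes after fusion indicate sharper detail, stronger contrast and lower noise.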
Fig. 5 is a schematic diagram of subjective evaluation results of the fusion image according to the present invention.
Further, the objective evaluation results of the fused image of the present invention are given in Table 1 below:
TABLE 1
Comparison of the data in Table 1 shows that the method of the present invention achieves the best results.
Example 2:
an X-ray grating phase-contrast multi-mode image rapid self-adaptive fusion system is shown in fig. 2, and comprises an image acquisition module 100, an image decomposition module 200, a component processing module 300, a component fusion module 400 and an image reconstruction module 500;
the image acquisition module 100 is configured to acquire an absorption contrast image, a dark field contrast image, and a differential phase contrast image of a sample to be measured;
the image decomposition module 200 decomposes the absorption contrast image, the dark field contrast image, and the differential phase contrast image to obtain a first decomposition component, a second decomposition component, and a third decomposition component, where the first decomposition component includes a first empirical mode component and a first residual component, the second decomposition component includes a second empirical mode component and a second residual component, and the third decomposition component includes a third empirical mode component and a third residual component;
the component processing module 300 is configured to perform pre-fusion preprocessing on the first decomposition component, the second decomposition component, and the third decomposition component, to obtain a preprocessed first tested modal component and a preprocessed first residual component, a preprocessed second empirical modal component and a preprocessed second residual component, and a preprocessed third empirical modal component and a preprocessed third residual component;
The component fusion module 400 is configured to fuse the preprocessed first empirical mode component and the first residual component, and the second empirical mode component and the second residual component to obtain a first fused empirical mode component and a first fused residual component of the first fused component, and fuse the first fused empirical mode component and the first fused residual component with the preprocessed third empirical mode component and the third residual component to obtain a second fused empirical mode component and a second fused residual component of the second fused component;
the image reconstruction module 500 is configured to perform detail enhancement processing on the second fused empirical mode component to obtain a second enhanced empirical mode component, and perform contrast enhancement processing on the second fused residual component to obtain a second enhanced residual component; and reconstructing based on the second enhanced empirical mode component and the second enhanced residual error component, and performing image definition processing to obtain a reconstructed image.
All changes and modifications that come within the spirit and scope of the invention are desired to be protected, and all equivalents thereof are deemed to be within the scope of the invention.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different manner from other embodiments, so that identical and similar parts of each embodiment are mutually referred to.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that:
reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
In addition, the specific embodiments described in the present specification may differ in terms of parts, shapes of components, names, and the like. All equivalent or simple changes of the structure, characteristics and principle according to the inventive concept are included in the protection scope of the present invention. Those skilled in the art may make various modifications or additions to the described embodiments or substitutions in a similar manner without departing from the scope of the invention as defined in the accompanying claims.

Claims (10)

1. An adaptive fusion method for multi-mode images of an X-ray grating, characterized by comprising the following steps:
acquiring an absorption contrast image, a dark field contrast image and a differential phase contrast image of a sample to be measured;
decomposing the absorption contrast image, the dark field contrast image and the differential phase contrast image respectively to obtain a corresponding first decomposition component, a second decomposition component and a third decomposition component, wherein the first decomposition component comprises a first empirical mode component and a first residual error component, the second decomposition component comprises a second empirical mode component and a second residual error component, and the third decomposition component comprises a third empirical mode component and a third residual error component;
decomposing the absorption contrast image, the dark field contrast image and the differential phase contrast image, including adjusting the number of decomposition layers, the threshold, the decomposition window size and the stopping condition, to obtain the first decomposition component, the second decomposition component and the third decomposition component, wherein the first decomposition component comprises the first empirical mode component and the first residual component, the second decomposition component comprises the second empirical mode component and the second residual component, and the third decomposition component comprises the third empirical mode component and the third residual component;
the first, second and third decomposition components being expressed as:

f_j(x, y) = Σ_{i=1}^{K} c_{j,i}(x, y) + r_j(x, y),  j = 1, 2, 3

wherein f_j denotes the decomposition component, with j = 1, 2, 3 corresponding respectively to the absorption contrast image, dark field contrast image and differential phase contrast image; c_{j,i} denotes the i-th order empirical mode component, K denotes the number of decomposition layers, and r_j denotes the residual component remaining after decomposition;
respectively carrying out pre-fusion pretreatment on the first decomposition component, the second decomposition component and the third decomposition component to obtain a first empirical mode component and a first residual error component after pretreatment, a second empirical mode component and a second residual error component, and a third empirical mode component and a third residual error component;
respectively fusing the preprocessed first empirical mode component and the preprocessed first residual error component, and fusing the second empirical mode component and the second residual error component to obtain a first fused empirical mode component and a first fused residual error component of the first fused component, and fusing the first fused empirical mode component and the first fused residual error component with the preprocessed third empirical mode component and the preprocessed third residual error component to obtain a second fused empirical mode component and a second fused residual error component of the second fused component;
The empirical mode component fusion process is expressed as follows:
respectively carrying out standardization processing on the preprocessed first empirical mode component, the second empirical mode component and the third empirical mode component to obtain a standardized first empirical mode component, a standardized second empirical mode component and a standardized third empirical mode component;
based on the normalized first empirical mode component, the second empirical mode component and the third empirical mode component, obtaining a corresponding component principal component;
performing principal component analysis transformation on all normalized first empirical mode components, second empirical mode components and third empirical mode components which participate in fusion to obtain fused empirical mode components, namely a first fused empirical mode component and a second fused empirical mode component;
the normalization being expressed as follows:

z_j(x, y) = ( c_j(x, y) − μ_j ) / σ_j

the eigenvalues λ_i of the correlation coefficient matrix R and the corresponding eigenvectors v_i being obtained by the Jacobi method, the correlation coefficient matrix R being expressed as:

R = [ r_{pq} ],  r_{pq} = (1/N) Σ_{x,y} z_p(x, y) · z_q(x, y)

the component principal component being expressed as follows:

P(x, y) = Σ_j v_{1,j} · z_j(x, y)

wherein z_j is each empirical mode component after normalization, c_j denotes each input empirical mode component, μ_j is the mean of each input empirical mode component, and σ_j is the standard deviation of each input empirical mode component;
the first residual component, the second residual component and the third residual component being fused by the maximum-energy fusion method, expressed as follows: an energy measure E_i(x, y) = μ_i² + λ · σ_i² (i = 1, 2) is computed for each residual, and

r_F(x, y) = r_1(x, y) if E_1 ≥ E_2, otherwise r_2(x, y)

wherein r_1 and r_2 are the residual components of the two decomposition components participating in the fusion, μ_1 and μ_2 are the means of r_1 and r_2 respectively, σ_1² and σ_2² are the variances of r_1 and r_2 respectively, λ is the modulation factor, and r_F is the fused residual component (the first fused residual component or the second fused residual component, respectively);
performing detail enhancement processing on the second fused empirical mode component to obtain a second enhanced empirical mode component, and performing contrast enhancement processing on the second fused residual component to obtain a second enhanced residual component;
and reconstructing based on the second enhanced empirical mode component and the second enhanced residual error component, and performing image definition processing to obtain a reconstructed image.
2. The method for adaptively fusing the multi-modal image of the X-ray grating according to claim 1, wherein the steps of obtaining the absorption contrast image, the dark field contrast image and the differential phase contrast image of the sample to be tested comprise the following steps:
acquiring the first moiré fringe image generated without the sample under test, wherein a grating is placed in front of the X-ray source and the moiré fringe image is formed by the interaction between the source and a plurality of gratings;
And obtaining a second molar fringe image generated by the sample to be tested, and obtaining a plurality of first molar fringe images and second molar fringe images under different phases, thereby obtaining an absorption contrast image, a dark field contrast image and a differential phase contrast image of the sample to be tested.
3. The method for adaptive fusion of multi-mode images of an X-ray grating according to claim 1, characterized in that the pre-fusion preprocessing of the first, second and third decomposition components to obtain the preprocessed first empirical mode component and first residual component, second empirical mode component and second residual component, and third empirical mode component and third residual component comprises the following steps:
denoising and background optimization are respectively carried out on the first decomposition component, the second decomposition component and the third decomposition component, so that the influence of irregular noise, uneven illumination and slowly-changing components on the fusion quality is reduced;
and performing adaptive filtering processing on the first decomposition component, the second decomposition component and the remaining other components of the third decomposition component to obtain a preprocessed first empirical mode component and a preprocessed first residual component, a preprocessed second empirical mode component and a preprocessed second residual component, and a preprocessed third empirical mode component and a preprocessed third residual component.
4. The method for adaptive fusion of multi-mode images of an X-ray grating according to claim 3, characterized in that the adaptive filtering is expressed as follows:

ĉ(x, y) = c(x, y) − ( σ_n² / σ²(x, y) ) · ( c(x, y) − μ(x, y) )

wherein c denotes the input empirical mode component, ĉ the empirical mode component after adaptive filtering, μ the local mean of the input empirical mode component, σ² its local variance, and σ_n² the estimated noise variance.
5. The method for adaptive fusion of multi-mode images of an X-ray grating according to claim 1, characterized in that the second enhanced empirical mode component is obtained by detail enhancement, expressed as follows:

c_E(x, y) = c_F(x, y) + c_F(x, y) ∗ H(x, y)

the second enhanced residual component being obtained by contrast enhancement, expressed as follows:

r_E(x, y) = HE( r_F(x, y) )

wherein ∗ denotes the convolution operation, H the high-pass filter function, HE(·) the histogram equalization operation, c_E the second enhanced empirical mode component, and r_E the second enhanced residual component.
6. The method for adaptive fusion of multi-mode images of an X-ray grating according to claim 1, characterized in that the reconstructed image is expressed as follows:

F(x, y) = Σ_{i=1}^{K} w_i · c_E^{(i)}(x, y) + w_{K+1} · r_E(x, y)

wherein w_i is the weight factor, i is the order, F denotes the reconstructed image, c_E^{(i)} denotes the i-th order second enhanced empirical mode component, and r_E denotes the second enhanced residual component.
7. The method for adaptive fusion of multi-modal images of an X-ray grating as defined in claim 2 wherein,
the first molar fringe image intensity is expressed as follows:
the second moire image intensity is expressed as follows:
wherein,representing the second molar fringe image intensity, +.>Representing the first molar fringe image intensity, +.>Represents abscissa, ++>Indicates the ordinate, ++>And->Respectively indicate->The zero-order and first-order coefficients after decomposition,and->Respectively indicate->Zero-order and first-order coefficients after decomposition, +.>Representing different stepping times;
the absorption contrast image, the dark field contrast image, and the differential phase contrast image are respectively expressed as follows:
A(x, y) = a0,s(x, y)/a0,r(x, y)

D(x, y) = (a1,s(x, y)/a0,s(x, y)) / (a1,r(x, y)/a0,r(x, y))

P(x, y) = (p2/(2πd))·(φs(x, y) − φr(x, y)) = θs(x, y) − θr(x, y)

wherein A(x, y) represents the absorption contrast image, D(x, y) represents the dark field contrast image, P(x, y) represents the differential phase contrast image, the subscripts s and r denote the coefficients measured with and without the sample to be tested, φs represents the phase of the image when the sample to be tested is placed, φr represents the phase of the image when no sample is placed, p2 represents the period of the analysis grating, d represents the distance between the phase grating and the analysis grating, θs represents the differential phase when the sample to be tested is placed, θr represents the differential phase when the sample to be tested is not placed, arg represents the argument (phase angle) calculation, and rem represents the complex modulus calculation.
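The standard Talbot-Lau retrieval behind claim 7 extracts a0, a1 and the fringe phase from the stepping curve by Fourier analysis (the modulus for amplitudes, the argument for phase), then forms the three modalities. A numpy sketch, with the stepping axis as axis 0 and placeholder values for p2 and d (not the patent's):

```python
import numpy as np

def retrieve_modalities(I_sample, I_ref, p2=2.4e-6, d=0.02):
    """Fourier retrieval over the phase-stepping axis (axis 0): a0 is the
    stepping mean (zero order); a1 and phi come from the modulus (rem) and
    argument (arg) of the first Fourier coefficient. p2 (analysis-grating
    period) and d (phase-to-analysis-grating distance) are placeholders."""
    def fourier_coeffs(I):
        F = np.fft.fft(I, axis=0)
        K = I.shape[0]
        a0 = np.abs(F[0]) / K        # zero-order coefficient
        a1 = 2.0 * np.abs(F[1]) / K  # first-order coefficient
        phi = np.angle(F[1])         # fringe phase
        return a0, a1, phi
    a0s, a1s, phs = fourier_coeffs(I_sample)
    a0r, a1r, phr = fourier_coeffs(I_ref)
    A = a0s / a0r                              # absorption contrast
    D = (a1s / a0s) / (a1r / a0r)              # dark-field contrast
    dphi = np.angle(np.exp(1j * (phs - phr)))  # wrapped phase difference
    P = p2 / (2.0 * np.pi * d) * dphi          # differential phase contrast
    return A, D, P
```

Feeding in synthetic stepping curves with known a0, a1 and phase recovers the planted contrast values exactly.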
8. An X-ray grating multi-modal image adaptive fusion system, characterized by comprising an image acquisition module, an image decomposition module, a component processing module, a component fusion module and an image reconstruction module;
the image acquisition module is used for acquiring an absorption contrast image, a dark field contrast image and a differential phase contrast image of the sample to be detected;
the image decomposition module is used for respectively decomposing the absorption contrast image, the dark field contrast image and the differential phase contrast image to obtain a corresponding first decomposition component, a second decomposition component and a third decomposition component, wherein the first decomposition component comprises a first empirical mode component and a first residual error component, the second decomposition component comprises a second empirical mode component and a second residual error component, and the third decomposition component comprises a third empirical mode component and a third residual error component;
decomposing the absorption contrast image, the dark field contrast image and the differential phase contrast image includes adjusting the number of decomposition layers, a threshold value, a decomposition window size and a stopping condition to obtain a first decomposition component, a second decomposition component and a third decomposition component, wherein the first decomposition component comprises a first empirical mode component and a first residual component, the second decomposition component comprises a second empirical mode component and a second residual component, and the third decomposition component comprises a third empirical mode component and a third residual component;
The first, second and third decomposition components are expressed as:
Ii(x, y) = Σ_{j=1..n} cj(x, y) + rn(x, y)

wherein Ii(x, y) represents the decomposed image, i = 1, 2, 3 corresponding respectively to the absorption contrast image, the dark field contrast image and the differential phase contrast image, cj(x, y) represents the j-th order empirical mode component, n represents the number of decomposition layers, and rn(x, y) represents the residual component remaining after decomposition;
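The additive structure I = Σ cj + r can be illustrated with a crude smoothing-based decomposition (a stand-in only, not the patent's empirical mode decomposition, which sifts components via upper/lower envelopes):

```python
import numpy as np

def crude_decompose(img, n_layers=3, win=3):
    """Illustrative stand-in for empirical mode decomposition: peel off
    n_layers of detail by repeated box-blur, so that
    img == sum(components) + residual holds exactly (the additive
    structure of the claim)."""
    def box_blur(a, w):
        pad = w // 2
        p = np.pad(a, pad, mode='edge')
        out = np.zeros_like(a, dtype=float)
        for dy in range(w):
            for dx in range(w):
                out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (w * w)
    components, current = [], np.asarray(img, dtype=float)
    for _ in range(n_layers):
        smooth = box_blur(current, win)
        components.append(current - smooth)  # detail layer of this order
        current = smooth
    return components, current  # residual
```

The sum of the extracted layers plus the residual telescopes back to the input image exactly.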
the component processing module is used for respectively carrying out pre-fusion pretreatment on the first decomposition component, the second decomposition component and the third decomposition component to obtain a first empirical mode component and a first residual component, a second empirical mode component and a second residual component after pretreatment, and a third empirical mode component and a third residual component after pretreatment;
the component fusion module is used for respectively fusing the preprocessed first empirical mode component and the first residual component, the second empirical mode component and the second residual component to obtain a first fused empirical mode component and a first fused residual component of the first fused component, and fusing the first fused empirical mode component and the first fused residual component with the preprocessed third empirical mode component and the third residual component to obtain a second fused empirical mode component and a second fused residual component of the second fused component;
The empirical mode component fusion process is expressed as follows:
respectively carrying out standardization processing on the preprocessed first empirical mode component, the second empirical mode component and the third empirical mode component to obtain a standardized first empirical mode component, a standardized second empirical mode component and a standardized third empirical mode component;
based on the normalized first empirical mode component, the second empirical mode component and the third empirical mode component, obtaining a corresponding component principal component;
performing principal component analysis transformation on all normalized first empirical mode components, second empirical mode components and third empirical mode components which participate in fusion to obtain fused empirical mode components, namely a first fused empirical mode component and a second fused empirical mode component;
the normalization process is expressed as follows:

c̃(x, y) = (c(x, y) − μ)/σ

obtaining, based on the Jacobi method, the eigenvalues λi of the correlation coefficient matrix R and the corresponding eigenvectors φi, the correlation coefficient matrix R being expressed as follows:

R = [r_pq], where r_pq is the correlation coefficient between the p-th and q-th normalized empirical mode components;

the component principal components are expressed as follows:

Pi(x, y) = Σ_p φi,p·c̃p(x, y)

wherein c̃(x, y) is each empirical mode component after normalization processing, c(x, y) represents each input empirical mode component, μ is the mean value of each input empirical mode component, and σ is the standard deviation of each input empirical mode component;
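The normalization / correlation-matrix / principal-component pipeline can be sketched with numpy (`np.linalg.eigh` stands in for the Jacobi eigenvalue iteration the claim names; normalizing the leading eigenvector by its absolute entries into fusion weights is our choice):

```python
import numpy as np

def pca_fuse(components):
    """Fuse co-registered empirical mode components: z-score normalize
    each component, form the correlation coefficient matrix, take the
    leading eigenvector, and combine the normalized components with the
    eigenvector entries as weights."""
    X = np.stack([np.asarray(c, dtype=float).ravel() for c in components])
    Xn = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    R = np.corrcoef(Xn)          # correlation coefficient matrix
    _, vecs = np.linalg.eigh(R)  # eigenvalues in ascending order
    w = np.abs(vecs[:, -1])      # leading eigenvector
    w = w / w.sum()              # normalized fusion weights
    fused = w @ Xn
    return fused.reshape(np.shape(components[0]))
```

Two perfectly correlated inputs collapse to a single z-scored component, since the leading eigenvector then weights them equally.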
the first residual component, the second residual component and the third residual component are fused by adopting a maximum energy fusion method, expressed as follows:

RF(x, y) = w·RA(x, y) + (1 − w)·RB(x, y), with w = α·σA²/(σA² + σB²) + (1 − α)/2

wherein RA and RB are the residual components of the two decomposition components participating in the fusion, μA and μB are respectively the mean values of RA and RB, σA² and σB² are respectively the variances of RA and RB, α is the modulation factor, and RF is the fused residual component, namely the first fused residual component or the second fused residual component;
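A sketch of energy-weighted residual fusion consistent with the symbols the claim lists (the exact blending of the energy weight with the modulation factor α is an assumption):

```python
import numpy as np

def fuse_residuals(r_a, r_b, alpha=0.5):
    """Fuse two residual components by energy: the residual with the
    larger variance (energy) receives the larger weight; alpha blends
    the energy weighting with an equal split (assumed form)."""
    e_a, e_b = float(np.var(r_a)), float(np.var(r_b))
    w_a = alpha * e_a / (e_a + e_b + 1e-12) + (1.0 - alpha) * 0.5
    return w_a * r_a + (1.0 - w_a) * r_b
```

With `alpha = 1` the fusion is purely energy-driven (a flat residual contributes nothing); with `alpha = 0` it degenerates to a plain average.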
the image reconstruction module is used for carrying out detail enhancement processing on the second fused empirical mode component to obtain a second enhanced empirical mode component, and carrying out contrast enhancement processing on the second fused residual component to obtain a second enhanced residual component; and reconstructing based on the second enhanced empirical mode component and the second enhanced residual error component, and performing image definition processing to obtain a reconstructed image.
9. A computer readable storage medium storing a computer program, which when executed by a processor implements the method of any one of claims 1 to 7.
10. An X-ray grating phase-contrast multi-modality image fast adaptive fusion apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 7 when executing the computer program.
CN202311083848.0A 2023-08-28 2023-08-28 Method, system and device for adaptive fusion of multi-mode images of X-ray grating Active CN116843596B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311083848.0A CN116843596B (en) 2023-08-28 2023-08-28 Method, system and device for adaptive fusion of multi-mode images of X-ray grating

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311083848.0A CN116843596B (en) 2023-08-28 2023-08-28 Method, system and device for adaptive fusion of multi-mode images of X-ray grating

Publications (2)

Publication Number Publication Date
CN116843596A CN116843596A (en) 2023-10-03
CN116843596B true CN116843596B (en) 2023-11-14

Family

ID=88174611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311083848.0A Active CN116843596B (en) 2023-08-28 2023-08-28 Method, system and device for adaptive fusion of multi-mode images of X-ray grating

Country Status (1)

Country Link
CN (1) CN116843596B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117575901B (en) * 2024-01-11 2024-05-07 浙江大学杭州国际科创中心 X-ray phase contrast micro-splicing method and system based on multilayer film Laue lens

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129676A (en) * 2010-01-19 2011-07-20 中国科学院空间科学与应用研究中心 Microscopic image fusing method based on two-dimensional empirical mode decomposition
CN103858002A (en) * 2011-07-28 2014-06-11 保罗·谢勒学院 Method for image fusion based on principal component analysis
CN106981059A (en) * 2017-03-30 2017-07-25 中国矿业大学 With reference to PCNN and the two-dimensional empirical mode decomposition image interfusion method of compressed sensing
EP3427663A1 (en) * 2017-07-13 2019-01-16 Agfa Nv Phase contrast imaging method
CN111325703A (en) * 2020-01-20 2020-06-23 上海联影医疗科技有限公司 Multi-modal imaging guided radiotherapy method, device and system
EP3705044A1 (en) * 2019-03-08 2020-09-09 Koninklijke Philips N.V. System for x-ray dark field; phase contrast and attenuation tomosynthesis image acquisition
CN112837253A (en) * 2021-02-05 2021-05-25 Rocket Force University of Engineering of PLA Night infrared medium-long wave image fusion method and system
CN114137002A (en) * 2021-11-18 2022-03-04 北京航空航天大学 Contrast enhancement-based low-dose X-ray differential phase contrast imaging method
WO2022120983A1 (en) * 2020-12-10 2022-06-16 中国科学院深圳先进技术研究院 X-ray phase contrast image extraction method and device, terminal and storage medium
CN115100093A (en) * 2022-07-28 2022-09-23 西安理工大学 Medical image fusion method based on gradient filtering
CN116029956A (en) * 2023-03-29 2023-04-28 成都理工大学工程技术学院 Image fusion method and system based on NSCT-SCM
WO2023109717A1 (en) * 2021-12-15 2023-06-22 深圳先进技术研究院 Terahertz time domain signal noise reduction method, and terahertz image reconstruction method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014194995A1 (en) * 2013-06-07 2014-12-11 Paul Scherrer Institut Image fusion scheme for differential phase contrast imaging

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129676A (en) * 2010-01-19 2011-07-20 中国科学院空间科学与应用研究中心 Microscopic image fusing method based on two-dimensional empirical mode decomposition
CN103858002A (en) * 2011-07-28 2014-06-11 保罗·谢勒学院 Method for image fusion based on principal component analysis
CN106981059A (en) * 2017-03-30 2017-07-25 中国矿业大学 With reference to PCNN and the two-dimensional empirical mode decomposition image interfusion method of compressed sensing
EP3427663A1 (en) * 2017-07-13 2019-01-16 Agfa Nv Phase contrast imaging method
EP3705044A1 (en) * 2019-03-08 2020-09-09 Koninklijke Philips N.V. System for x-ray dark field; phase contrast and attenuation tomosynthesis image acquisition
CN111325703A (en) * 2020-01-20 2020-06-23 上海联影医疗科技有限公司 Multi-modal imaging guided radiotherapy method, device and system
WO2022120983A1 (en) * 2020-12-10 2022-06-16 中国科学院深圳先进技术研究院 X-ray phase contrast image extraction method and device, terminal and storage medium
CN112837253A (en) * 2021-02-05 2021-05-25 Rocket Force University of Engineering of PLA Night infrared medium-long wave image fusion method and system
CN114137002A (en) * 2021-11-18 2022-03-04 北京航空航天大学 Contrast enhancement-based low-dose X-ray differential phase contrast imaging method
WO2023109717A1 (en) * 2021-12-15 2023-06-22 深圳先进技术研究院 Terahertz time domain signal noise reduction method, and terahertz image reconstruction method and system
CN115100093A (en) * 2022-07-28 2022-09-23 西安理工大学 Medical image fusion method based on gradient filtering
CN116029956A (en) * 2023-03-29 2023-04-28 成都理工大学工程技术学院 Image fusion method and system based on NSCT-SCM

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Infrared and visible image fusion algorithm based on 2D empirical mode decomposition and nonsubsampled directional filter banks; Xiong Fangfang; Xiao Ning; Optical Technique (03); full text *
MIMO Through-Wall-Radar Spotlight Imaging Based on Arithmetic Image Fusion; Yong Jia et al.; 2018 21st International Conference on Information Fusion; full text *
Research on large field-of-view grating imaging methods and systems with conventional X-ray sources; Li Xinbin; China Doctoral Dissertations Full-text Database (Information Science); full text *
Research on spectral CT image reconstruction and material decomposition algorithms; Wang Shaoyu; China Doctoral Dissertations Full-text Database (Information Science); full text *
Dynamic color-coded fusion imaging detection technology for surface defects; Miao Jie; Li Zhan; Cui Zijian; Liu De'an; Zhu Jianqiang; Acta Optica Sinica (09); full text *

Also Published As

Publication number Publication date
CN116843596A (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN116843596B (en) Method, system and device for adaptive fusion of multi-mode images of X-ray grating
Trusiak et al. Advanced processing of optical fringe patterns by automated selective reconstruction and enhanced fast empirical mode decomposition
US7929746B2 (en) System and method for processing imaging data
CN106485721B (en) The method and its system of retinal structure are obtained from optical coherence tomography image
US10417746B2 (en) Image processing apparatus and image processing method for estimating fixed-pattern noise attributable to image sensor
CN111784620B (en) Light field camera full-focusing image fusion algorithm for guiding angle information by space information
CN111145089A (en) High fidelity image reconstruction method, system, computer equipment and storage medium
JP2013146558A (en) Method and system for image denoising using discrete total variation (tv) minimization with one-direction condition
US20080107352A1 (en) System and Method for Structure Enhancement and Noise Reduction in Medical Images
CN112017130B (en) Image restoration method based on self-adaptive anisotropic total variation regularization
Chang et al. Brain MR image restoration using an automatic trilateral filter with GPU-based acceleration
CN115546136A (en) Image processing method, equipment and device for medical image
Zhang Two-step non-local means method for image denoising
Van Der Jeught et al. Optimized loss function in deep learning profilometry for improved prediction performance
CN114298950A (en) Infrared and visible light image fusion method based on improved GoDec algorithm
Qiu et al. Edge structure preserving 3D image denoising by local surface approximation
CN114202476A (en) Infrared image enhancement method, device, equipment and computer readable medium
CN111652821B (en) Low-light video image noise reduction processing method, device and equipment based on gradient information
Dunn et al. Optimal Gabor-filter design for texture segmentation
CN112697751A (en) Multi-angle illumination lens-free imaging method, system and device
CN107907542B (en) IVMD and energy estimation combined DSPI phase filtering method
WO2014197658A1 (en) Method and system for noise standard deviation estimation
Zhang et al. Multi-resolution depth image restoration
JP2017098933A (en) Image processing apparatus and image processing method
Pan et al. Complex composite derivative and its application to edge detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant