CN117408905B - Medical image fusion method based on multi-modal feature extraction - Google Patents

Medical image fusion method based on multi-modal feature extraction

Info

Publication number
CN117408905B
CN117408905B CN202311682241.4A
Authority
CN
China
Prior art keywords
medical image
image
low
data
frequency information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311682241.4A
Other languages
Chinese (zh)
Other versions
CN117408905A (en)
Inventor
周红艳
田超
周鹏
刘杰克
尹刚
匡平
赵宇倩
武文博
胡彬
杨学刚
高宇亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Cancer Hospital
Original Assignee
Sichuan Cancer Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Cancer Hospital filed Critical Sichuan Cancer Hospital
Priority to CN202311682241.4A priority Critical patent/CN117408905B/en
Publication of CN117408905A publication Critical patent/CN117408905A/en
Application granted granted Critical
Publication of CN117408905B publication Critical patent/CN117408905B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image data processing, and provides a medical image fusion method based on multi-modal feature extraction, which comprises the following steps: manually labeling the target features of each original medical image to form a first training set; preprocessing each original image to obtain first feature data; acquiring and removing the noise data of the first feature data to obtain second feature data; performing a shearlet (shear wave) transform on the second feature data to obtain a low-frequency information image and a high-frequency information image; calculating the Laplace energy sum of the low-frequency information image and of the high-frequency information image; fusing the low-frequency information image and the high-frequency information image using sparse representation to obtain third feature data; carrying out weighted superposition fusion of the first, second and third feature data to form a multi-modal medical image; and taking the multi-modal medical image as a second training set and training a feature extraction neural network together with the first training set.

Description

Medical image fusion method based on multi-modal feature extraction
Technical Field
The invention relates to the technical field of image data processing, and in particular to a medical image fusion method based on multi-modal feature extraction.
Background
With the rapid development of image data processing technology, large numbers of multi-modal medical images can be obtained from a variety of devices. Feature extraction is performed on these images by suitable algorithms, and the parameters that best represent the image information are selected and fused to obtain a new image, achieving complementarity between images and eliminating redundancy, thereby helping doctors read lesion information from the images quickly.
However, current medical image processing and fusion is simple: the noise present in the original medical images, the frequency band information and the like are not considered during processing, or noise is removed only by common filtering, which cannot meet the high accuracy now required of image feature extraction.
Disclosure of Invention
The invention aims to provide a medical image fusion method based on multi-modal feature extraction which, by accounting for noise, feature information and detail information when processing medical images, fully meets the high-standard requirements of medical image feature extraction.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
the medical image fusion method based on multi-modal feature extraction comprises the following steps:
step 1, acquiring a large number of historical original medical images, and manually marking target features of each original medical image to form a first training set;
step 2, preprocessing each original image to obtain first feature data;
step 3, obtaining the noise data of the first feature data, and removing the noise data to obtain second feature data;
step 4, performing a shearlet (shear wave) transform on the second feature data to obtain a low-frequency information image and a high-frequency information image; respectively calculating the Laplace energy sum of the low-frequency information image and the high-frequency information image; and fusing the low-frequency information image and the high-frequency information image using sparse representation to obtain third feature data;
step 5, carrying out weighted superposition fusion of the first feature data, the second feature data and the third feature data to form a multi-modal medical image; and taking the multi-modal medical image as a second training set, and training the feature extraction neural network together with the first training set.
In the step 2, the preprocessing includes gray value processing:
setting a high threshold C high Low threshold C low The pixel value in the original medical image is smaller than or equal to the low-order threshold C low Is set to 0; will be greater than the low threshold C low And is smaller than the high threshold C high Pixels according to the upper threshold C high Low threshold C low And determining a maximum gray value; will be greater than or equal to the high threshold C high Is set to a maximum gray value.
Wherein f (x, y) is the pixel value of the original medical image with the position (x, y); g (x, y) is the gray value of the pixel point with the position (x, y) after gray value processing; max is the maximum gray value in the original medical image.
In the step 2, the preprocessing further includes binarization processing:
wherein ThresholdC represents the binarized image; pixel represents a gray value; n_pixel represents the number of pixels whose gray value is pixel; N represents the total number of pixels in the original medical image; and L represents the total number of gray levels, which is 256.
In the step 2, the preprocessing further includes a de-artifact processing.
In the step 3, the obtained noise data is:
wherein, noise r Is noise data; n is the total number of pixel points, i is the ith pixel point, i epsilon N; a is the lower integral limit, b is the upper integral limit; x is pixel data of the first feature data;as the error weight coefficient, G r Is image error data; />Is the weight coefficient of noise category, J r Is the number of noise types; />Is noise weight coefficient, +>Is the standard deviation of noise; d (D) r Is the first characteristic data; />The term is adjusted for the curve.
In the step 4, the step of calculating the Laplace energy sum of the low-frequency information image is as follows:
wherein SML[f(x, y), I_low] is the Laplace energy sum of the low-frequency information image; f(x, y) is the value of the pixel (x, y); N is the total number of pixels, N1 is the maximum number of pixels by which the pixel (x, y) extends in the horizontal direction, and N2 is the maximum number of pixels by which it extends in the vertical direction; ∇² is the Laplacian operator; (i, j) ranges over the pixels reached by extending (x, y) in the horizontal and vertical directions, and f(i, j) is the value of the pixel (i, j); step is the step size, step = 1; I_LRS and I_MED are intermediate parameters; and f̄ is the average pixel value of the N pixels.
In the step 4, the Laplace energy sum of the high-frequency information image is calculated in the same way as for the low-frequency information image; if there are M high-frequency information images, the average of their Laplace energy sums is taken as SML[f(x, y), I_high].
In the step 4, the step of fusing the low-frequency information image and the high-frequency information image by using sparse representation includes:
giving the low-frequency information image I_low a linear representation, with α_low denoting the sparse coefficients of this representation;
giving the high-frequency information image I_high a linear representation, with α_high denoting the sparse coefficients of this representation;
determining the sparse coefficients using the 2-norm;
and representing the low-frequency information image and the high-frequency information image with the sparse coefficients to obtain the third feature data F3:
wherein r is the pixel position (x, y); α_low(r) are the sparse coefficients of the low-frequency image information and α_high(r) are the sparse coefficients of the high-frequency image information; ‖·‖₂ denotes the 2-norm.
In the step 5, the step of performing weighted superposition fusion of the first feature data, the second feature data and the third feature data to form a multi-modal medical image includes:
wherein W is the fused multi-modal medical image; ω1, ω2 and ω3 are the weight coefficients of the first feature data F1, the second feature data F2 and the third feature data F3, respectively; h is a smoothing coefficient; ε is an error term; and δ is a correction term.
In the step 5, the Loss function Loss of the feature extraction neural network is:
wherein X is the total number of original medical images and k denotes the k-th original medical image; A^(k) is the first training set corresponding to the k-th original medical image, and B^(k) is the second training set corresponding to the k-th original medical image; γ is a trade-off parameter; λ1 and λ2 are weights used to balance the parameter terms; I is the label of the target feature; Y is the background of the original medical image; ‖·‖₁ denotes the 1-norm; and ⊙ denotes a point (element-wise) operation.
Further comprising step 6: inputting the newly acquired original medical image into a trained feature extraction neural network, and outputting target features in the original medical image.
Compared with the prior art, the invention has the beneficial effects that:
When processing medical images, the invention accounts for the extraction of noise, feature information and detail information, fully meeting the high-standard requirements of medical image feature extraction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Also, in the description of the present invention, the terms "first," "second," and the like are used merely to distinguish one from another, and are not to be construed as indicating or implying a relative importance or implying any actual such relationship or order between such entities or operations. In addition, the terms "connected," "coupled," and the like may be used to denote a direct connection between elements, or an indirect connection via other elements.
The invention is realized by the following technical scheme. As shown in FIG. 1, the medical image fusion method based on multi-modal feature extraction comprises the following steps:
step 1, a large number of historical original medical images are obtained, and target features of each original medical image are manually marked to form a first training set.
X original medical images (e.g., MRI images, CT images) are manually labeled with the required targets, such as lesion targets and organ targets, to form the first training set.
Step 2, preprocessing each original image to obtain first feature data.
The X original medical images from step 1 are preprocessed; the preprocessing comprises gray-value processing, binarization processing and artifact removal processing.
The gray value processing specifically comprises the following steps:
setting a high threshold C high Low threshold C low The pixel value in the original medical image is smaller than or equal to the low-order threshold C low Is set to 0; will be greater than the low threshold C low And is smaller than the high threshold C high Pixels according to the upper threshold C high Low threshold C low And determining a maximum gray value; will be greater than or equal to the high threshold C high Is set to a maximum gray value.
Wherein f (x, y) is the pixel value of the original medical image with the position (x, y); g (x, y) is the gray value of the pixel point with the position (x, y) after gray value processing; max is the maximum gray value in the original medical image.
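A minimal sketch of this mapping, assuming (as in the reconstruction above) that the band between the two thresholds is stretched linearly; the function name and threshold values are illustrative:

```python
import numpy as np

def gray_value_processing(f: np.ndarray, c_low: float, c_high: float) -> np.ndarray:
    """Double-threshold gray-value mapping (step 2, sketch).

    Pixels <= c_low become 0, pixels >= c_high become the image maximum Max,
    and the band in between is stretched linearly (assumed form).
    """
    max_gray = float(f.max())               # Max: maximum gray value in the image
    g = np.zeros_like(f, dtype=np.float64)  # pixels <= c_low stay at 0
    mid = (f > c_low) & (f < c_high)
    g[mid] = max_gray * (f[mid] - c_low) / (c_high - c_low)
    g[f >= c_high] = max_gray
    return g
```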
The binarization processing specifically comprises the following steps:
wherein ThresholdC represents the binarized image; pixel represents a gray value; n_pixel represents the number of pixels whose gray value is pixel; N represents the total number of pixels in the original medical image; and L represents the total number of gray levels, which is 256.
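The threshold formula itself did not survive extraction; as a stand-in, the sketch below binarizes from the same normalized histogram p[k] = n_pixel/N over L = 256 gray levels, choosing the threshold by Otsu's criterion:

```python
import numpy as np

def binarize(gray: np.ndarray, levels: int = 256) -> np.ndarray:
    """Histogram-based binarization sketch (the patent's exact threshold
    rule is not recoverable; Otsu's criterion is used as a stand-in)."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / gray.size                    # p[k]: fraction of pixels at level k
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities below/above t
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, levels) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return (gray >= best_t).astype(np.uint8)
```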
The artifact removal processing specifically includes:
artifacts are common interference components in medical images, and can cause unreal or distorted details in the images. The present embodiment uses the prior art to remove the artifact, and will not be described in detail here.
The first feature data F1 is obtained by applying gray-value processing, binarization and artifact removal to the original medical image in sequence. This preprocessing stage yields more comprehensive and richer image data and enhances the detail and information content of the image, providing a fuller image description and a more accurate basis for feature acquisition.
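Chaining the operations, reusing the gray_value_processing and binarize sketches above; the artifact removal step defers to prior art and is left as a pass-through placeholder:

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Step 2 pipeline sketch: gray-value processing, binarization, then
    artifact removal, yielding the first feature data F1."""
    g = gray_value_processing(image, c_low=30.0, c_high=220.0)  # thresholds are illustrative
    b = binarize(g.astype(np.uint8))
    f1 = b  # artifact removal: prior-art method, pass-through in this sketch
    return f1
```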
Step 3, acquiring the noise data of the first feature data, and removing the noise data to obtain the second feature data.
Wherein, noise r Is noise data; n is the total number of pixel points, i is the ith pixel point, i epsilon N; a is the lower integral limit, b is the upper integral limit; x is pixel data of the first feature data;as the error weight coefficient, G r Is image error data; />Is the weight coefficient of noise category, J r Is the number of noise types; />Is noise weight coefficient, +>Is the standard deviation of noise; d (D) r Is the first characteristic data; />The term is adjusted for the curve.
The noise data reflects the noise components present in the image and is used to evaluate its noise quality. The detection result is determined jointly by the parameters in the formula, including the error weight coefficient, the noise-category weight coefficient and the noise weight coefficient, so the detection accounts for the noise category, the error and the noise of the first feature data F1 obtained by preprocessing the original medical image. The noise data is extracted and removed from the first feature data, providing interference-free second feature data F2 for subsequent image processing.
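The Noise_r expression above survives only as its symbol list, so the sketch below substitutes a simple residual-based estimate: the noise data is taken as the difference between F1 and a median-filtered copy and is then removed. This is an assumed stand-in, not the patent's formula:

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_noise(f1: np.ndarray, size: int = 3) -> np.ndarray:
    """Step 3 sketch: estimate and remove noise from the first feature data F1."""
    smoothed = median_filter(f1, size=size)  # structure estimate
    noise = f1 - smoothed                    # stand-in for the Noise_r data
    f2 = f1 - noise                          # second feature data F2
    return f2
```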
Step 4, performing shear wave transformation on the second characteristic data to respectively obtain a low-frequency information image and a high-frequency information image; and respectively calculating the Laplace energy sum of the low-frequency information image and the high-frequency information image, and fusing the low-frequency information image and the high-frequency information image by using sparse representation to obtain third characteristic data.
A shearlet (shear wave) transform is performed on the second feature data F2 to obtain one low-frequency information image and a plurality of high-frequency information images. The Laplace energy sum of the low-frequency information image is calculated:
wherein SML[f(x, y), I_low] is the Laplace energy sum of the low-frequency information image; f(x, y) is the value of the pixel (x, y); N is the total number of pixels, N1 is the maximum number of pixels by which the pixel (x, y) extends in the horizontal direction, and N2 is the maximum number of pixels by which it extends in the vertical direction; ∇² is the Laplacian operator; (i, j) ranges over the pixels reached by extending (x, y) in the horizontal and vertical directions, and f(i, j) is the value of the pixel (i, j); step is the step size, step = 1; I_LRS and I_MED are intermediate parameters; and f̄ is the average pixel value of the N pixels.
The Laplace energy sum of a high-frequency information image is calculated in the same way as for the low-frequency information image; if there are M high-frequency information images, the average of their Laplace energy sums is taken as SML[f(x, y), I_high].
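A sketch of this step. PyWavelets' wavedec2 stands in for the shearlet (shear wave) transform, and sml implements the standard sum-modified-Laplacian with step = 1; the patent's version additionally involves the intermediate parameters I_LRS, I_MED and the mean pixel value, which are omitted here:

```python
import numpy as np
import pywt

def decompose(f2: np.ndarray):
    """Stand-in for the shearlet transform: a 2-level wavelet decomposition
    yielding one low-frequency band and several high-frequency bands."""
    low, *details = pywt.wavedec2(f2, "db2", level=2)
    highs = [band for triple in details for band in triple]  # flatten (cH, cV, cD)
    return low, highs

def sml(img: np.ndarray, step: int = 1) -> float:
    """Standard sum-modified-Laplacian, summed over the whole band."""
    f = img.astype(np.float64)
    ml = (np.abs(2 * f - np.roll(f, step, axis=0) - np.roll(f, -step, axis=0))
          + np.abs(2 * f - np.roll(f, step, axis=1) - np.roll(f, -step, axis=1)))
    return float((ml ** 2).sum())

# Usage: per the text, the M high-frequency SML values are averaged.
# low, highs = decompose(f2)
# sml_low, sml_high = sml(low), float(np.mean([sml(b) for b in highs]))
```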
The low-frequency information image and the high-frequency information image are fused using sparse representation:
giving the low-frequency information image I_low a linear representation, with α_low denoting the sparse coefficients of this representation;
giving the high-frequency information image I_high a linear representation, with α_high denoting the sparse coefficients of this representation;
determining the sparse coefficients using the 2-norm;
and representing the low-frequency information image and the high-frequency information image with the sparse coefficients to obtain the third feature data F3:
wherein r is the pixel position (x, y); α_low(r) are the sparse coefficients of the low-frequency image information and α_high(r) are the sparse coefficients of the high-frequency image information; ‖·‖₂ denotes the 2-norm.
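A minimal sketch of the sparse-representation fusion, assuming a shared dictionary D, same-shape input bands, a least-squares (2-norm) fit for the coefficients as the text indicates, and an entry-wise max-magnitude rule for combining them; the fusion rule and dictionary are assumptions (in practice D would be learned or a DCT basis):

```python
import numpy as np

def fuse_sparse(i_low: np.ndarray, i_high: np.ndarray, dictionary: np.ndarray) -> np.ndarray:
    """Step 4 fusion sketch: 2-norm coefficient fit over a shared dictionary,
    max-magnitude fusion, reconstruction as third feature data F3."""
    x_low, x_high = i_low.ravel(), i_high.ravel()
    a_low, *_ = np.linalg.lstsq(dictionary, x_low, rcond=None)    # alpha_low
    a_high, *_ = np.linalg.lstsq(dictionary, x_high, rcond=None)  # alpha_high
    a_fused = np.where(np.abs(a_low) >= np.abs(a_high), a_low, a_high)
    f3 = (dictionary @ a_fused).reshape(i_low.shape)
    return f3
```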
Step 5, carrying out weighted superposition fusion of the first feature data, the second feature data and the third feature data to form a multi-modal medical image; and taking the multi-modal medical image as a second training set, and training the feature extraction neural network together with the first training set.
The first feature data F1, the second feature data F2 and the third feature data F3 are fused by weighted superposition:
wherein W is the fused multi-modal medical image; ω1, ω2 and ω3 are the weight coefficients of the first feature data F1, the second feature data F2 and the third feature data F3, respectively; h is a smoothing coefficient; ε is an error term; and δ is a correction term.
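A sketch of the weighted superposition; the weight values are illustrative, and the error and correction terms of the formula are application-specific and omitted here:

```python
import numpy as np

def fuse_weighted(f1: np.ndarray, f2: np.ndarray, f3: np.ndarray,
                  w1: float = 0.2, w2: float = 0.3, w3: float = 0.5,
                  h: float = 1.0) -> np.ndarray:
    """Step 5 sketch: W = h * (w1*F1 + w2*F2 + w3*F3), with the error and
    correction terms set to zero."""
    return h * (w1 * f1 + w2 * f2 + w3 * f3)
```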
The multi-modal medical image W is taken as the second training set, and the first and second training sets together serve as the training data of the feature extraction neural network, which is then trained. The loss function Loss during training is:
wherein X is the total number of original medical images and k denotes the k-th original medical image; A^(k) is the first training set corresponding to the k-th original medical image, and B^(k) is the second training set corresponding to the k-th original medical image; γ is a trade-off parameter; λ1 and λ2 are weights used to balance the parameter terms; I is the label of the target feature; Y is the background of the original medical image; ‖·‖₁ denotes the 1-norm; and ⊙ denotes a point (element-wise) operation.
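One plausible reading of this loss, sketched in PyTorch; since the formula survives only as its symbol list, the network net, the placement of the trade-off parameter gamma and the balance weights lam1, lam2 are assumptions:

```python
import torch

def fusion_loss(net: torch.nn.Module, A: torch.Tensor, B: torch.Tensor,
                I: torch.Tensor, Y: torch.Tensor,
                gamma: float = 0.5, lam1: float = 1.0, lam2: float = 1.0) -> torch.Tensor:
    """Training-loss sketch: 1-norm terms over the first training set A (vs.
    the target-feature labels I) and the second training set B (point-
    multiplied with the background Y), averaged over the X images."""
    term_a = lam1 * (net(A) - I).abs().flatten(1).sum(dim=1)  # ||net(A) - I||_1
    term_b = lam2 * (net(B) * Y).abs().flatten(1).sum(dim=1)  # ||net(B) (.) Y||_1
    return (term_a + gamma * term_b).mean()                   # average over X
```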
Step 6: the newly acquired original medical image is input into the trained feature extraction neural network, which outputs the target features in the image.
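For step 6, a minimal inference sketch; the (1, C, H, W) tensor layout and the torch-based network carry over from the loss sketch above and are assumptions:

```python
import torch

def extract_targets(net: torch.nn.Module, new_image: torch.Tensor) -> torch.Tensor:
    """Step 6 sketch: pass a newly acquired original medical image of shape
    (1, C, H, W) through the trained network and return the predicted
    target features."""
    net.eval()
    with torch.no_grad():
        return net(new_image)
```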
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (5)

1. A medical image fusion method based on multi-modal feature extraction, characterized by comprising the following steps:
step 1, acquiring a large number of historical original medical images, and manually marking target features of each original medical image to form a first training set;
step 2, preprocessing each original image to obtain first feature data;
step 3, obtaining the noise data of the first feature data, and removing the noise data to obtain second feature data;
in the step 3, the obtained noise data is:
wherein, noise r Is noise data; n is the total number of pixel points, i is the ith pixel point, i epsilon N; a is the lower integral limit, b is the upper integral limit; x is pixel data of the first feature data;as the error weight coefficient, G r Is image error data; />Is the weight coefficient of noise category, J r Is the number of noise types; />Is noise weight coefficient, +>Is the standard deviation of noise; d (D) r Is the first characteristic data; />Is a curve adjustment term;
step 4, performing a shearlet (shear wave) transform on the second feature data to obtain a low-frequency information image and a high-frequency information image; respectively calculating the Laplace energy sum of the low-frequency information image and the high-frequency information image; and fusing the low-frequency information image and the high-frequency information image using sparse representation to obtain third feature data;
in the step 4, the step of calculating the laplace energy sum of the low-frequency information image is as follows:
wherein SML [ f (x, y), I low ]Laplace energy and formula for low frequency information image; f (x, y) is the value of the pixel point (x, y); n is the total number of pixel points, N1 is the maximum number of pixels of the pixel points (x, y) extending in the horizontal direction, and N2 is the maximum number of pixels of the pixel points (x, y) extending in the vertical direction;is a Laplacian operator; (i, j) is a pixel point (x, y) extending in the horizontal direction and the vertical direction, and f (i, j) is a value of the pixel point (i, j); step is the step size, step=1; i LRS 、I MED Is an intermediate parameter; />An average pixel value of N pixel points;
step 5, carrying out weighted superposition fusion of the first feature data, the second feature data and the third feature data to form a multi-modal medical image; and taking the multi-modal medical image as a second training set, and training the feature extraction neural network together with the first training set.
2. The medical image fusion method based on multi-modal feature extraction according to claim 1, wherein: in the step 2, the preprocessing includes gray value processing:
setting a high threshold C_high and a low threshold C_low: pixels of the original medical image whose value is less than or equal to the low threshold C_low are set to 0; pixels whose value is greater than C_low and less than the high threshold C_high are rescaled according to C_high, C_low and the maximum gray value; and pixels whose value is greater than or equal to C_high are set to the maximum gray value:
g(x, y) = 0, if f(x, y) ≤ C_low
g(x, y) = Max · (f(x, y) − C_low) / (C_high − C_low), if C_low < f(x, y) < C_high
g(x, y) = Max, if f(x, y) ≥ C_high
wherein f(x, y) is the pixel value of the original medical image at position (x, y); g(x, y) is the gray value of the pixel at position (x, y) after gray-value processing; and Max is the maximum gray value in the original medical image.
3. The medical image fusion method based on multi-modal feature extraction according to claim 2, wherein: in the step 2, the preprocessing further includes binarization processing:
wherein ThresholdC represents the binarized image; pixel represents a gray value; n_pixel represents the number of pixels whose gray value is pixel; N represents the total number of pixels in the original medical image; and L represents the total number of gray levels, which is 256.
4. The medical image fusion method based on multi-modal feature extraction according to claim 1, wherein: in the step 5, the step of performing weight superposition fusion on the first feature data, the second feature data and the third feature data to form a multi-mode medical image includes:
wherein W is the fused multi-modal medical image;、/>、/>the weight coefficients of the first characteristic data F1, the second characteristic data F2 and the third characteristic data F3 are respectively; h is a smoothing coefficient; />Is an error term; />Is a correction term.
5. The medical image fusion method based on multi-modal feature extraction according to claim 1, wherein: further comprising step 6: inputting the newly acquired original medical image into a trained feature extraction neural network, and outputting target features in the original medical image.
CN202311682241.4A 2023-12-08 2023-12-08 Medical image fusion method based on multi-modal feature extraction Active CN117408905B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311682241.4A CN117408905B (en) 2023-12-08 2023-12-08 Medical image fusion method based on multi-modal feature extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311682241.4A CN117408905B (en) 2023-12-08 2023-12-08 Medical image fusion method based on multi-modal feature extraction

Publications (2)

Publication Number Publication Date
CN117408905A CN117408905A (en) 2024-01-16
CN117408905B (en) 2024-02-13

Family

ID=89496477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311682241.4A Active CN117408905B (en) 2023-12-08 2023-12-08 Medical image fusion method based on multi-modal feature extraction

Country Status (1)

Country Link
CN (1) CN117408905B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117974440B (en) * 2024-04-01 2024-06-07 四川省肿瘤医院 Method and system for stitching endoscope images

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009301584A (en) * 2009-09-28 2009-12-24 Seiko Epson Corp Image processor, image processing method and image processing program
CN110060225A (en) * 2019-03-28 2019-07-26 南京信息工程大学 A kind of Medical image fusion method based on rapid finite shearing wave conversion and rarefaction representation
CN110415198A (en) * 2019-07-16 2019-11-05 南京信息工程大学 A kind of Method of Medical Image Fusion based on laplacian pyramid Yu parameter adaptive Pulse Coupled Neural Network
CN111598822A (en) * 2020-05-18 2020-08-28 西安邮电大学 Image fusion method based on GFRW and ISCM
CN114445308A (en) * 2020-11-05 2022-05-06 江西理工大学 Infrared and visible light image fusion method based on novel regional feature fusion rule
KR102402677B1 (en) * 2021-06-15 2022-05-26 (주)지큐리티 Method and apparatus for image convergence
CN115018728A (en) * 2022-06-15 2022-09-06 济南大学 Image fusion method and system based on multi-scale transformation and convolution sparse representation
CN115100172A (en) * 2022-07-11 2022-09-23 西安邮电大学 Fusion method of multi-modal medical images
CN116630762A (en) * 2023-06-25 2023-08-22 山东卓业医疗科技有限公司 Multi-mode medical image fusion method based on deep learning
CN116721761A (en) * 2023-06-20 2023-09-08 四川省肿瘤医院 Radiotherapy data processing method, system, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8155462B2 (en) * 2006-12-29 2012-04-10 Fastvdo, Llc System of master reconstruction schemes for pyramid decomposition

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009301584A (en) * 2009-09-28 2009-12-24 Seiko Epson Corp Image processor, image processing method and image processing program
CN110060225A (en) * 2019-03-28 2019-07-26 南京信息工程大学 A kind of Medical image fusion method based on rapid finite shearing wave conversion and rarefaction representation
CN110415198A (en) * 2019-07-16 2019-11-05 南京信息工程大学 A kind of Method of Medical Image Fusion based on laplacian pyramid Yu parameter adaptive Pulse Coupled Neural Network
CN111598822A (en) * 2020-05-18 2020-08-28 西安邮电大学 Image fusion method based on GFRW and ISCM
CN114445308A (en) * 2020-11-05 2022-05-06 江西理工大学 Infrared and visible light image fusion method based on novel regional feature fusion rule
KR102402677B1 (en) * 2021-06-15 2022-05-26 (주)지큐리티 Method and apparatus for image convergence
CN115018728A (en) * 2022-06-15 2022-09-06 济南大学 Image fusion method and system based on multi-scale transformation and convolution sparse representation
CN115100172A (en) * 2022-07-11 2022-09-23 西安邮电大学 Fusion method of multi-modal medical images
CN116721761A (en) * 2023-06-20 2023-09-08 四川省肿瘤医院 Radiotherapy data processing method, system, equipment and medium
CN116630762A (en) * 2023-06-25 2023-08-22 山东卓业医疗科技有限公司 Multi-mode medical image fusion method based on deep learning

Also Published As

Publication number Publication date
CN117408905A (en) 2024-01-16

Similar Documents

Publication Publication Date Title
CN117408905B (en) Medical image fusion method based on multi-modal feature extraction
CN109816742B (en) Cone beam CT geometric artifact removing method based on fully-connected convolutional neural network
CN109919160B (en) Verification code identification method, device, terminal and storage medium
CN109903254B (en) Improved bilateral filtering method based on Poisson nucleus
CN112330686A (en) Method for segmenting and calibrating lung bronchus
CN108492263B (en) Lens radial distortion correction method
CN111080592B (en) Rib extraction method and device based on deep learning
CN112053302B (en) Denoising method and device for hyperspectral image and storage medium
WO2018103015A1 (en) Ring artifact correction method and apparatus
CN106651981B (en) Method and device for correcting ring artifact
CN117710365B (en) Processing method and device for defective pipeline image and electronic equipment
CN114998356A (en) Axle defect detection method based on image processing
CN117689574B (en) Medical image processing method for tumor radio frequency ablation diagnosis and treatment positioning
CN109949334B (en) Contour detection method based on deep reinforced network residual error connection
CN117197345B (en) Intelligent bone joint three-dimensional reconstruction method, device and equipment based on polynomial fitting
CN117893550A (en) Moving object segmentation method under complex background based on scene simulation
CN115222755B (en) Medical image target segmentation method and device based on medical imaging equipment
CN112288680B (en) Automatic defect area extraction method and system for automobile hub X-ray image
CN110349129B (en) Appearance defect detection method for high-density flexible IC substrate
CN117252818A (en) PCB defect detection method based on improved YOLOv5
CN115272184A (en) Defect identification method based on optimization of industrial image quality
CN104764402A (en) Visual inspection method for citrus size
CN113361633A (en) Medical image big data 3D residual error network classification method
CN112906690A (en) License plate segmentation model training method, license plate segmentation method and related device
CN112784840A (en) License plate recognition method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant