CN117455786A - Multi-focus image fusion method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN117455786A
Authority
CN
China
Prior art keywords
image
source image
focus
information
source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311422161.5A
Other languages
Chinese (zh)
Inventor
金岩
付宏语
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qizhiming Photoelectric Intelligent Technology Suzhou Co ltd
Original Assignee
Qizhiming Photoelectric Intelligent Technology Suzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qizhiming Photoelectric Intelligent Technology Suzhou Co ltd filed Critical Qizhiming Photoelectric Intelligent Technology Suzhou Co ltd
Priority to CN202311422161.5A priority Critical patent/CN117455786A/en
Publication of CN117455786A publication Critical patent/CN117455786A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10056 - Microscopic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-focus image fusion method and apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring a multi-focus image set; performing edge feature extraction on each source image to obtain a rough focus information map of the source image; performing information extraction on the source image to obtain high-frequency information of the source image; obtaining a target focus information map of the source image based on the high-frequency information and the rough focus information map; acquiring the pixel points located at the same position in the target focus information maps of all source images; comparing all the pixel points located at the same position in the target focus information maps to obtain a best-focus pixel point set for each source image; binarizing the pixel points in the target focus information map of each source image based on the best-focus pixel point set to obtain a first decision map of the source image; obtaining a second decision map of the source image based on the first decision map; and, based on the second decision maps, fusing all source images in the multi-focus image set to obtain a fused image.

Description

Multi-focus image fusion method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a multi-focus image fusion method, apparatus, computer device, and storage medium.
Background
Large-field-of-view, high-resolution microscopic imaging is a major research direction in modern biomedicine. However, because the depth of field of an optical lens is limited, it is difficult to obtain a fully focused, sharp image of a large sample in a single exposure. Multi-focus image fusion techniques were developed to extend the effective depth of field of the optical lens. Multi-focus image fusion refers to fusing a series of images of the same field of view, each focused on a different region, into a single fully focused image.
Most existing multi-focus image fusion methods fuse two images at a time, one focused on the foreground and one on the background. Each fusion produces a new intermediate result, and fusion continues on that intermediate result until the final result is obtained. The process resembles a cascade: the output of one stage serves as the input of the next.
However, when the existing multi-focus image fusion methods are used to fuse images acquired under a large-field-of-view, high-resolution microscope, the focus regions in each pair of images cannot be extracted and fused accurately, and repeated cascading leaves the final fusion result with low accuracy.
Disclosure of Invention
In view of the above, the present invention provides a multi-focus image fusion method, apparatus, computer device and storage medium, so as to solve the problem of low accuracy of image fusion results obtained by the existing multi-focus image fusion.
In a first aspect, the present invention provides a multi-focus image fusion method, the method comprising:
acquiring a multi-focus image set, wherein the multi-focus image set comprises at least two source images and is acquired by a large-field-of-view, high-resolution microscope imaging the same scene along the Z axis of a displacement stage;
performing edge feature extraction processing on each source image in the multi-focus image set to obtain a rough focus information image corresponding to each source image;
performing information extraction processing on each source image in the multi-focus image set to obtain high-frequency information corresponding to each source image;
performing fusion optimization processing on the high-frequency information and the rough focusing information map corresponding to each source image to obtain a target focusing information map corresponding to each source image;
stacking target focusing information graphs corresponding to all source images in a three-dimensional direction to obtain pixel points positioned at the same position in the target focusing information graphs corresponding to all source images;
Comparing the pixel points positioned at the same position in the target focusing information diagrams corresponding to all the source images to obtain an optimal focusing pixel point set corresponding to each source image;
performing binarization processing on pixels in the target focus information graph corresponding to each source image based on the optimal focus pixel point set corresponding to each source image to obtain a first decision graph corresponding to each source image, wherein the value at the best-focus pixel positions differs from the value at the other pixel positions;
performing rapid guided filtering optimization processing on the first decision graph corresponding to each source image to obtain a second decision graph corresponding to each source image;
and, based on the second decision graph corresponding to each source image, performing fusion processing on all source images in the multi-focus image set to obtain a fused image.
Advantageous effects
In the embodiments of the present application, the multi-focus image fusion method acquires a multi-focus image set; performs edge feature extraction on each source image in the multi-focus image set to obtain the rough focus information map corresponding to each source image; performs information extraction on each source image to obtain the high-frequency information corresponding to each source image; performs fusion optimization on the high-frequency information and the rough focus information map corresponding to each source image to obtain the target focus information map corresponding to each source image; stacks the target focus information maps corresponding to all source images in the three-dimensional direction to obtain the pixel points located at the same position in those maps; compares the pixel points located at the same position to obtain the best-focus pixel point set corresponding to each source image; binarizes the pixel points in the target focus information map corresponding to each source image based on its best-focus pixel point set to obtain the first decision map corresponding to each source image, wherein the value at the best-focus pixel positions differs from the value at the other pixel positions; performs rapid guided filtering optimization on the first decision map corresponding to each source image to obtain the second decision map corresponding to each source image; and, based on the second decision map corresponding to each source image, fuses all source images in the multi-focus image set to obtain the fused image.
The best-focus pixel point set corresponding to each source image is obtained by comparing the target focus information of all source images. Compared with the pairwise fusion of the prior art, the present application considers all source images as a whole rather than relying on a previous fusion result, which improves fusion accuracy.
In a second aspect, the present invention provides a multi-focus image fusion apparatus, the apparatus comprising:
the multi-focus image set acquisition module is used for acquiring a multi-focus image set, wherein the multi-focus image set comprises at least two source images and is acquired by a large-field-of-view, high-resolution microscope imaging the same scene along the Z axis of a displacement stage;
the rough focusing information graph acquisition module is used for carrying out edge feature extraction processing on each source image in the multi-focusing image set to obtain a rough focusing information graph corresponding to each source image;
the high-frequency information acquisition module is used for extracting information of each source image in the multi-focus image set to acquire high-frequency information corresponding to each source image;
the target focusing information diagram acquisition module is used for carrying out fusion optimization processing on the high-frequency information and the rough focusing information diagram corresponding to each source image to obtain a target focusing information diagram corresponding to each source image;
the pixel point acquisition module is used for stacking the target focusing information graphs corresponding to all the source images in the three-dimensional direction to acquire the pixel points positioned at the same position in the target focusing information graphs corresponding to all the source images;
The optimal focusing pixel point set acquisition module is used for comparing the pixel points positioned at the same position in the target focusing information graph corresponding to all the source images to obtain an optimal focusing pixel point set corresponding to each source image;
the first decision diagram obtaining module is used for performing binarization processing on pixel points in the target focusing information diagram corresponding to each source image based on the optimal focusing pixel point set corresponding to each source image to obtain a first decision diagram corresponding to each source image, wherein the value at the best-focus pixel positions differs from the values at the other pixel positions;
the second decision diagram acquisition module is used for carrying out rapid guided filtering optimization processing on the first decision diagram corresponding to each source image to obtain a second decision diagram corresponding to each source image;
and the fusion image acquisition module is used for carrying out fusion processing on all the source images in the multi-focus image set based on the second decision graph corresponding to each source image to obtain a fused image after fusion.
In a third aspect, the present invention provides a computer device comprising a memory and a processor in communicative connection; the memory stores computer instructions, and the processor executes the computer instructions to perform the multi-focus image fusion method of the first aspect or any implementation corresponding to the first aspect.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon computer instructions for causing a computer to perform a multi-focus image fusion method according to the first aspect or any of its corresponding embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a hardware environment of an alternative multi-focus image fusion method according to an embodiment of the present application;
FIG. 2 is a flow chart of an alternative multi-focus image fusion method according to an embodiment of the invention;
FIG. 3 is a block diagram of an alternative multi-focus image fusion apparatus according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Large-field-of-view, high-resolution microscopic imaging is a major research method in modern biomedicine. However, because the depth of field of an optical lens is limited, it is difficult to obtain a fully focused, sharp image of a large sample in a single exposure. Multi-focus image fusion techniques were developed to extend the effective depth of field of the optical lens. Multi-focus image fusion refers to fusing a series of images of the same field of view, each focused on a different region, into a single fully focused image.
Most related multi-focus image fusion methods fuse two images at a time, one focused on the foreground and one on the background. Each fusion produces a new intermediate result, and fusion continues on that intermediate result until the final result is obtained. The process resembles a cascade: the output of one stage serves as the input of the next.
However, when the existing multi-focus image fusion methods are used to fuse images acquired under a large-field-of-view, high-resolution microscope, the focus regions in each pair of images cannot be accurately extracted and fused, and repeated cascading leaves the final fusion result with low accuracy.
To solve the above problems, the embodiments of the present application provide a multi-focus image fusion method. A target focus information map is obtained by jointly optimizing the high-frequency information and the rough focus information map corresponding to each source image, so the information of the source image is considered comprehensively, the focus region of each source image is extracted accurately, and the accuracy of the subsequent fusion is improved. The best-focus pixel point set corresponding to each source image is obtained by comparing the target focus information of all source images; compared with the pairwise fusion of the prior art, the present application considers all source images as a whole rather than relying on a previous fusion result, thereby improving the accuracy of the fusion result.
Alternatively, in the present embodiment, the above multi-focus image fusion method may be applied to a hardware environment constituted by the terminal 10 and the server 11 shown in fig. 1. As shown in fig. 1, the server 11 is connected to the terminal 10 through a network and may provide services to the terminal or to a client installed on the terminal. A database 12 may be provided on the server, or independently of it, to provide data storage services for the server 11. The network includes, but is not limited to, a wired or wireless network, and the terminal 10 is not limited to a PC, a mobile phone, a tablet computer, or the like. The multi-focus image fusion method of the embodiments of the present application may be performed by the terminal 10, or by a client installed on it. A multi-focus image fusion method according to an embodiment of the present application is described below taking the terminal as an example.
Fig. 2 is a flowchart of an alternative multi-focus image fusion method according to an embodiment of the present invention, as shown in fig. 2, the method may include the steps of:
step S20, acquiring a multi-focus image set.
The multi-focus image set comprises at least two source images and is acquired by a large-field-of-view, high-resolution microscope imaging the same scene along the Z axis of a displacement stage.
By way of example, the large-field-of-view, high-resolution microscope used for multi-focus image acquisition can achieve a lateral resolution of 0.68 μm and a maximum imaging field of view of up to 6 mm.
As a further example, the large-field-of-view, high-resolution microscope uses a motorized axial displacement stage with positioning accuracy better than 1 μm to acquire the multi-focus images along the Z axis.
It should be noted that the source images in the multi-focus image set are multiple images of the same scene; their focus regions may be the same or different. To improve the accuracy of multi-focus image fusion, multiple source images with different focus regions may be acquired; for example, 10 source images with different focus regions are acquired for image fusion.
And S21, carrying out edge feature extraction processing on each source image in the multi-focus image set to obtain a rough focus information graph corresponding to each source image.
After the multi-focus image set is obtained, in order to obtain the focus region of each source image, edge feature extraction is performed on each source image in the multi-focus image set, which preliminarily yields the edge features of each source image, that is, the rough focus information of the source image.
It will be appreciated that the acquired coarse focus information for each source image is displayed in its corresponding coarse focus information map.
And S22, carrying out information extraction processing on each source image in the multi-focus image set to obtain high-frequency information corresponding to each source image.
Step S21 and step S22 may be performed simultaneously or in either order.
In this embodiment, after the multi-focus image set is obtained, in order to obtain the focus area of each source image, information extraction processing needs to be performed on each source image in the multi-focus image set, and it is understood that the information extraction processing refers to extracting high-frequency information in each source image, so as to obtain high-frequency information corresponding to each source image in the multi-focus image set.
And S23, carrying out fusion optimization processing on the high-frequency information and the rough focusing information corresponding to each source image to obtain a target focusing information diagram corresponding to each source image.
It should be noted that the obtained high-frequency information and rough focus information map corresponding to each source image are both preliminary extractions of the focus region. To improve the accuracy of multi-focus image fusion, fusion optimization processing is performed on the preliminarily extracted high-frequency information and rough focus information map corresponding to each source image, so as to obtain a more accurate focus region, namely the target focus information map corresponding to each source image.
And step S24, stacking the target focusing information graphs corresponding to all the source images in the three-dimensional direction, and obtaining the pixel points positioned at the same position in the target focusing information graphs corresponding to all the source images.
The target focusing information graphs corresponding to all the source images are stacked on the Z axis, and the pixel points at the same position in the target focusing information graphs corresponding to all the source images are acquired along the Z axis.
It will be appreciated that the target focus information maps corresponding to the source images are the same size, but their contents generally differ, because the high-frequency information and coarse focus information maps of the source images generally differ. When the target focus information maps corresponding to all source images are stacked in the three-dimensional direction, the pixel points located at the same position in all of those maps are obtained more easily.
The pixel points located at the same position in the target focusing information graph corresponding to all the source images refer to the pixel points with the same plane coordinates in the target focusing information graph corresponding to all the source images.
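The stacking described in step S24 can be sketched with NumPy; the three random 4×4 maps below are purely illustrative stand-ins for real target focus information maps:

```python
import numpy as np

# Sketch of step S24: stacking N equally sized target focus information
# maps along a new Z axis, after which the pixels "located at the same
# position" in all maps can be read out with a single index.
maps = [np.random.rand(4, 4) for _ in range(3)]  # three hypothetical focus maps
stack = np.stack(maps, axis=0)                   # shape (3, 4, 4): maps stacked in the Z direction
same_pos = stack[:, 1, 2]                        # the pixel at plane coordinates (1, 2) in every map
```

Indexing `stack[:, y, x]` returns one value per source image, which is exactly the set of same-position pixels that step S25 compares.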
And S25, comparing the pixel points at the same position in the target focusing information graph corresponding to all the source images to obtain an optimal focusing pixel point set corresponding to each source image.
After the pixel points at the same position in the target focusing information graph corresponding to all the source images are obtained, comparing all the pixel points at the same position, and judging whether the pixel points are the best focusing pixel points or not according to the comparison result.
It can be understood that every pixel in the target focus information map of one source image has a counterpart at the same position in the target focus information maps of the remaining source images; that is, each position yields one best-focus pixel.
It should be noted that the best-focus pixels at the various positions are generally distributed across the target focus information maps of different source images. Once all positions have been compared, the best-focus pixel point set corresponding to each source image is obtained.
Step S26, performing binarization processing on pixels in the target focus information graph corresponding to each source image based on the optimal focus pixel set corresponding to each source image, and obtaining a first decision graph corresponding to each source image, wherein the value of the optimal pixel position in the optimal focus pixel set is different from the values of other pixel positions.
After the optimal focusing pixel point set corresponding to each source image is obtained, binarization processing is carried out on the pixels in the target focusing information graph corresponding to each source image. Wherein, the value of the best focus pixel point position in the best focus pixel point set is different from the value of the other pixel point positions. Illustratively, the value of the best focus pixel location is 1 and the values of the other pixel locations are 0.
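Steps S25 and S26 can be sketched together in NumPy, assuming the per-pixel comparison simply selects the largest focus measure (the excerpt does not state the exact comparison criterion, so that choice is an assumption):

```python
import numpy as np

def first_decision_maps(focus_maps):
    """Binarize each target focus information map: 1 where this source image
    holds the best-focused pixel for that position, 0 elsewhere (the 1/0
    values follow the example in the text).  Choosing the per-pixel maximum
    as the comparison rule is an assumption."""
    winners = np.argmax(np.stack(focus_maps, axis=0), axis=0)  # index of winning map per pixel
    return [(winners == i).astype(np.uint8) for i in range(len(focus_maps))]
```

Because every position has exactly one winner, the resulting first decision maps sum to 1 at every pixel.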
Step S27, performing rapid guided filtering optimization processing on the first decision graph corresponding to each source image to obtain a second decision graph corresponding to each source image.
Specifically, each source image is used as a guide image to perform rapid guided filtering optimization on its corresponding first decision graph, so as to obtain the second decision graph corresponding to each source image. It can be understood that performing guided filtering optimization on the basis of the first decision graph, with the source image as the guide image, further improves the accuracy of the fusion result.
And step S28, based on the second decision diagram corresponding to each source image, carrying out fusion processing on all source images in the multi-focus image set to obtain a fused image after fusion.
Under the condition that a second decision diagram corresponding to each source image is obtained, all source images in the multi-focus image set are fused according to the second decision diagram corresponding to each source image, and a fused image, namely a multi-focus image fusion result, is obtained.
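The fusion of step S28 can be sketched as a per-pixel weighted sum, using each second decision map as the weight of its source image; normalizing overlapping weights is an assumption about how refined (no longer strictly binary) decision maps are combined:

```python
import numpy as np

def fuse(sources, decision_maps):
    """Per-pixel weighted fusion: each second decision map weights its
    source image.  Normalizing the weights keeps fused intensities in
    range where the refined maps overlap (an assumed handling of
    non-binary decision values)."""
    w = np.stack(decision_maps, axis=0).astype(np.float64)
    w /= np.clip(w.sum(axis=0, keepdims=True), 1e-12, None)  # normalize weights per pixel
    imgs = np.stack(sources, axis=0).astype(np.float64)
    return (w * imgs).sum(axis=0)                            # fused all-in-focus image
```

With strictly binary, non-overlapping decision maps this reduces to copying each pixel from the single source image that won the focus comparison.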
The multi-focus image fusion method provided by this embodiment acquires a multi-focus image set; performs edge feature extraction on each source image in the multi-focus image set to obtain the rough focus information map corresponding to each source image; performs information extraction on each source image to obtain the high-frequency information corresponding to each source image; performs fusion optimization on the high-frequency information and the rough focus information map corresponding to each source image to obtain the target focus information map corresponding to each source image; stacks the target focus information maps corresponding to all source images in the three-dimensional direction to obtain the pixel points located at the same position in those maps; compares the pixel points located at the same position to obtain the best-focus pixel point set corresponding to each source image; binarizes the pixel points in the target focus information map corresponding to each source image based on its best-focus pixel point set to obtain the first decision map corresponding to each source image, wherein the value at the best-focus pixel positions differs from the value at the other pixel positions; performs rapid guided filtering optimization on the first decision map corresponding to each source image to obtain the second decision map corresponding to each source image; and, based on the second decision map corresponding to each source image, fuses all source images in the multi-focus image set to obtain the fused image.
The best-focus pixel point set corresponding to each source image is obtained by comparing the target focus information of all source images. Compared with the pairwise fusion of the prior art, the present application considers all source images as a whole rather than relying on a previous fusion result, which improves fusion accuracy.
In an alternative embodiment, the step S21 includes:
step S210: and smoothing each source image in the multi-focus image set through an average filter to obtain a fuzzy image corresponding to each source image.
In particular, the mean filter is a commonly used image processing filter, which functions to smooth or blur an image. After each source image in the multi-focus image set is subjected to smoothing treatment of the mean value filter, a blurred image corresponding to each smoothed source image can be obtained.
Step S211: inputting each source image and its corresponding blurred image into a preset rough focus information map acquisition model to obtain the rough focus information map corresponding to each source image, wherein
the preset rough focus information map acquisition model is:
RI_i(x, y) = |I_i(x, y) - M_i(x, y)|
where RI_i(x, y) is the pixel value of the pixel point with coordinates (x, y) in the rough focus information map corresponding to the i-th source image, I_i(x, y) is the pixel value of the pixel point with coordinates (x, y) in the i-th source image, and M_i(x, y) is the pixel value of the pixel point with coordinates (x, y) in the blurred image corresponding to the i-th source image.
It should be noted that the blurred image blurs the sharp edge features of the source image; therefore, taking the difference between the source image and its blurred image yields the edge features of the source image, that is, the rough focus information map corresponding to the source image.
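Steps S210 and S211 can be sketched as follows; the 7×7 mean-filter window is an assumed size, since the excerpt does not specify one:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coarse_focus_map(src: np.ndarray, ksize: int = 7) -> np.ndarray:
    """Rough focus information map RI_i = |I_i - M_i|, where M_i is the
    source image smoothed with a mean (averaging) filter.  The absolute
    difference highlights sharp edges, i.e. in-focus regions.  The 7x7
    window is an assumption; the patent does not state a window size."""
    img = src.astype(np.float64)
    blurred = uniform_filter(img, size=ksize)  # mean-filter smoothing -> blurred image M_i
    return np.abs(img - blurred)               # |I_i(x, y) - M_i(x, y)|
```

On a perfectly flat region the blurred image equals the source, so the map is zero there; near a sharp edge the difference is large, matching the intuition that in-focus regions carry strong edges.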
In an alternative embodiment, the step S22 includes:
Step S220: carrying out gradient acquisition processing on each source image in the multi-focus image set with a Gaussian-like four-neighborhood gradient operator, to obtain gradient values of all pixel points in each source image in the multi-focus image set.
In order to extract high-frequency information in the source images, the image gradient of each source image in the multi-focus image set is acquired by using a Gaussian-like four-neighborhood gradient operator GFG. Specifically, gradient values of all pixel points in each source image in the multi-focus image set are obtained by using a Gaussian-like four-neighborhood gradient operator.
Step S221, filtering the low-frequency information of each source image based on the gradient values of all the pixels in each source image in the multi-focus image set to obtain the high-frequency information corresponding to each source image.
Specifically, after gradient values of all pixel points in each source image are obtained, low-frequency information and high-frequency information contained in each source image are determined according to the gradient values, the low-frequency information in each source image is filtered, and the high-frequency information is reserved.
In an alternative embodiment, the step S220 includes:
Step S2200, inputting the pixel value of each pixel point in each source image in the multi-focus image set and the Gaussian-like convolution template into a preset convolution value acquisition model to acquire the convolution value corresponding to each pixel point.
The preset convolution value acquisition model comprises:
P_i(x, y) = (I_i ⊗ T)(x, y)
where P_i(x, y) is the convolution value corresponding to the pixel point with coordinate value (x, y) in the i-th source image, I_i is the i-th source image, ⊗ denotes convolution, and T is the Gaussian-like convolution template.
Illustratively, the Gaussian-like convolution template may be:
step S2201, inputting the convolution value corresponding to each pixel point and the pixel value of the horizontal neighborhood pixel point of each pixel point to a preset horizontal gradient value acquisition model, to acquire the horizontal gradient value of each pixel point.
The preset horizontal gradient value acquisition model comprises the following steps:
G_∥i = {|I_i(x+1, y) - P_i| + |I_i(x-1, y) - P_i|}^2
where I_i(x+1, y) is the pixel value corresponding to the pixel point with coordinate value (x+1, y) in the i-th source image, P_i is the convolution value corresponding to the pixel point with coordinate value (x, y) in the i-th source image, I_i(x-1, y) is the pixel value corresponding to the pixel point with coordinate value (x-1, y) in the i-th source image, and G_∥i is the horizontal gradient value of the pixel point with coordinate value (x, y) in the i-th source image.
Step S2202, inputting the convolution value corresponding to each pixel point and the pixel value of the pixel point in the vertical neighborhood of each pixel point to a preset vertical gradient value acquisition model, and acquiring the vertical gradient value of each pixel point.
The preset vertical gradient value acquisition model comprises:
G_⊥i = {|I_i(x, y+1) - P_i| + |I_i(x, y-1) - P_i|}^2
where I_i(x, y+1) is the pixel value corresponding to the pixel point with coordinate value (x, y+1) in the i-th source image, I_i(x, y-1) is the pixel value corresponding to the pixel point with coordinate value (x, y-1) in the i-th source image, and G_⊥i is the vertical gradient value of the pixel point with coordinate value (x, y) in the i-th source image.
Step S2203, inputting the horizontal gradient value and the vertical gradient value of each pixel point to a preset gradient value acquisition model, to obtain the gradient value of each pixel point.
The preset gradient value acquisition model comprises:
G_i(x, y) = G_∥i + G_⊥i
where G_i(x, y) is the gradient value of the pixel point with coordinate value (x, y) in the i-th source image.
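Steps S2200 through S2203 can be sketched in NumPy as below. The 3×3 Gaussian-like template values here are an assumption for illustration (the patent does not publish the template); the rest follows the formulas above: a convolution value P_i per pixel, squared sums of absolute horizontal and vertical neighbour differences, and their sum as the gradient.

```python
import numpy as np

def gfg_gradient(src: np.ndarray) -> np.ndarray:
    """Gradient map via the Gaussian-like four-neighbourhood operator:
    G_i = G_par + G_perp, where G_par/G_perp are the squared sums of
    absolute differences between horizontal/vertical neighbours and the
    convolution value P_i (steps S2200-S2203)."""
    # Hypothetical Gaussian-like 3x3 template, normalised to sum to 1.
    t = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.float64)
    t /= t.sum()
    img = src.astype(np.float64)
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    p = np.zeros((h, w), dtype=np.float64)
    for dy in range(3):  # P_i: convolution of the image with template T
        for dx in range(3):
            p += t[dy, dx] * padded[dy:dy + h, dx:dx + w]
    # Horizontal term: right neighbour I(x+1, y) and left neighbour I(x-1, y).
    g_par = (np.abs(padded[1:1 + h, 2:2 + w] - p)
             + np.abs(padded[1:1 + h, 0:w] - p)) ** 2
    # Vertical term: lower neighbour I(x, y+1) and upper neighbour I(x, y-1).
    g_perp = (np.abs(padded[2:2 + h, 1:1 + w] - p)
              + np.abs(padded[0:h, 1:1 + w] - p)) ** 2
    return g_par + g_perp
```

A flat image yields zero gradient everywhere, while a step edge yields a strong response along the edge, consistent with the operator's role of extracting high-frequency information.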
In an alternative embodiment, the step S221 includes:
step S2210, comparing the gradient values of all the pixels in each source image in the multi-focus image set with a preset gradient threshold.
It should be noted that the preset gradient threshold is determined according to the obtained gradient values of all the pixels, so as to filter out the low-frequency information.
A clearly focused region in the source image is one where the gray value changes sharply; the larger the gray-value change, the more obvious the gradient change, so the magnitude of the gradient value indicates whether edge information is obvious. The gradient values of all pixel points are compared with a preset gradient threshold, and the high-frequency information is obtained to determine the clearly focused regions of the source image.
Step S2211: taking the information corresponding to pixel points whose gradient value is smaller than the preset gradient threshold as low-frequency information, and the information corresponding to pixel points whose gradient value is greater than or equal to the preset gradient threshold as high-frequency information.
The comparison result of the gradient values of all the pixel points and a preset gradient threshold value is obtained and used for distinguishing high-frequency information and low-frequency information in each source image.
Step S2212, filtering the low frequency information to retain the high frequency information, and obtaining the high frequency information of each source image.
Specifically, in the case of distinguishing the low-frequency information and the high-frequency information of each source image, the low-frequency information in the source image is filtered, and the high-frequency information of the source image is retained.
In an alternative embodiment, the step S23 includes:
Step S230: inputting the high-frequency information and the rough focus information map of each source image into a preset accurate focus information map acquisition model, to obtain an accurate focus information map corresponding to each source image, wherein
the preset accurate focus information map acquisition model comprises:
AI_i(x, y) = RI_i(x, y), if G_i(x, y) ≥ T_0; AI_i(x, y) = 0, otherwise
where RI_i(x, y) is the pixel value corresponding to the pixel point with coordinate value (x, y) in the rough focus information map corresponding to the i-th source image, G_i(x, y) is the gradient value of the pixel point with coordinate value (x, y) in the i-th source image, T_0 is the preset gradient threshold, and AI_i(x, y) is the pixel value corresponding to the pixel point with coordinate value (x, y) in the accurate focus information map corresponding to the i-th source image.
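The combination of the rough focus map and the high-frequency mask can be sketched as follows. This assumes the piecewise form suggested by the surrounding description (keep the rough focus value where the gradient reaches the threshold, zero it elsewhere); function and parameter names are illustrative.

```python
import numpy as np

def accurate_focus_map(rough: np.ndarray, grad: np.ndarray,
                       t0: float) -> np.ndarray:
    """Accurate focus information map AI_i: keeps the rough focus value
    RI_i where the gradient G_i reaches the threshold T_0 (high-frequency
    information) and sets it to zero elsewhere (low-frequency filtered)."""
    return np.where(grad >= t0, rough.astype(np.float64), 0.0)
```

In effect, the gradient map acts as a gate on the rough focus map, so only pixels backed by strong edge evidence survive into the accurate focus map.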
And step S231, performing rapid guided filtering optimization processing on the accurate focusing information graph corresponding to each source image to obtain a target focusing information graph of each source image.
Guided filtering takes the source image as the guide image and the accurate focus information map as the input image, and generates an output image, namely the target focus information map. Fast guided filtering is an improvement on guided filtering suited to processing large-scale image data; it can provide high-quality results with low latency.
Specifically, each source image and the accurate focusing information diagram corresponding to each source image are input into a preset target focusing information diagram acquisition model to acquire the target focusing information diagram corresponding to each source image, wherein,
the preset target focusing information graph acquisition model comprises the following steps:
FI_i(x, y) = FGF[I_i(x, y), AI_i(x, y)]
where FGF denotes fast guided filtering, I_i(x, y) is the pixel value corresponding to the pixel point with coordinate value (x, y) in the i-th source image, AI_i(x, y) is the pixel value corresponding to the pixel point with coordinate value (x, y) in the accurate focus information map corresponding to the i-th source image, and FI_i(x, y) is the pixel value corresponding to the pixel point with coordinate value (x, y) in the target focus information map corresponding to the i-th source image.
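A minimal sketch of the guided-filtering step is given below. Note the hedge: this implements the plain guided filter (box-filter means of the guide/input, per-window linear coefficients a and b); the fast variant referenced by the patent additionally subsamples before computing the means, which is omitted here, and the radius and regularisation values are illustrative.

```python
import numpy as np

def box(img: np.ndarray, r: int) -> np.ndarray:
    """Box filter: mean over a (2r+1) x (2r+1) window, edge-padded."""
    k = 2 * r + 1
    padded = np.pad(img.astype(np.float64), r, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def guided_filter(guide: np.ndarray, src: np.ndarray,
                  r: int = 2, eps: float = 1e-3) -> np.ndarray:
    """Plain guided filter: guide = source image I_i, src = accurate
    focus map AI_i, output = target focus map FI_i."""
    I, p = guide.astype(np.float64), src.astype(np.float64)
    mean_I, mean_p = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mean_I * mean_p
    var_I = box(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)       # per-window linear coefficient
    b = mean_p - a * mean_I          # per-window offset
    return box(a, r) * I + box(b, r)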
In an alternative embodiment, the step S25 includes:
Step S250: comparing the gray values of the pixel points located at the same position in the target focus information maps corresponding to all the source images, and taking the pixel point with the maximum gray value as the optimal focusing pixel point.
Specifically, the gray values of the pixel points located at the same position in the target focus information maps corresponding to different source images are compared. It should be noted that a larger gray value indicates clearer focus information.
The pixel point with the largest gray value is obtained as the best focusing pixel point, and it can be understood that the best focusing pixel points at different positions can be located in different source images.
Step S251: obtaining the optimal focusing pixel points at all positions, which together form the optimal focusing pixel point set corresponding to each source image.
The best focus pixel point includes source image information to which the best focus pixel point belongs and position information of the best focus pixel point in the source image to which the best focus pixel point belongs.
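Steps S250 and S251 amount to an argmax across the stacked target focus maps. A NumPy sketch (hypothetical function name) under the assumption that ties are resolved elsewhere (see step S252 below in the original):

```python
import numpy as np

def best_focus_index(focus_maps: list) -> np.ndarray:
    """Stack the target focus information maps along a third axis and, at
    every position, select the index of the source image whose map has
    the largest gray value; the result encodes, per pixel, which source
    image contributes the optimal focusing pixel point."""
    stack = np.stack(focus_maps, axis=0)   # shape: (n_images, H, W)
    return np.argmax(stack, axis=0)        # per-pixel best source index
```

The returned index map carries exactly the two pieces of information the text mentions: which source image each best-focus pixel belongs to (the index value) and its position (the array coordinates).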
In an alternative embodiment, the step S25 further includes:
step S252, when the gray values of the pixel points at the same position in the target focusing information graph corresponding to at least two source images are the same and are all the maximum gray values, a definition evaluation operator is utilized to obtain local gradient values of a preset neighborhood of the pixel points with the same gray values and the maximum gray values.
In microscopic images, pixel points at the same position in adjacent target focus information maps may share the same gray value. Considering that a large-field image contains a huge number of pixels, focus measurement and traversal calculation over a large area would significantly increase the processing time of the multi-focus image fusion method. The embodiment of the application therefore adopts small-area focus measurement, realized through the sharpness evaluation operator Sobel.
The Sobel operator can provide more accurate edge information, is more sensitive to gradient change, and is suitable for microscopic images with gray information gradual change.
For example, the preset neighborhood of a pixel point having the same and maximum gray value may be its 3×3 neighborhood, with that pixel point as the center pixel.
Step S253: taking the pixel point with the largest local gradient value over the preset neighborhood as the optimal focusing pixel point.
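The tie-breaking measure of steps S252 and S253 can be sketched as follows, assuming the standard 3×3 Sobel kernels and using the sum of absolute horizontal and vertical responses over the neighbourhood as the local sharpness score (the patent names the Sobel operator but not the exact aggregation):

```python
import numpy as np

# Standard Sobel kernels (an assumption; the patent names Sobel only).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def local_sobel_energy(img: np.ndarray, x: int, y: int) -> float:
    """Sharpness score over the 3x3 neighbourhood centred at row y,
    column x: sum of absolute Sobel responses in both directions. Used
    to break ties when several focus maps share the maximum gray value
    at the same position."""
    patch = img[y - 1:y + 2, x - 1:x + 2].astype(np.float64)
    gx = np.sum(SOBEL_X * patch)
    gy = np.sum(SOBEL_Y * patch)
    return abs(gx) + abs(gy)
```

Among the tied candidates, the source image whose neighbourhood yields the largest score would supply the optimal focusing pixel point, keeping the measurement confined to a small area as the text requires.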
In an alternative embodiment, step S28 includes:
step S280, fusing each source image with a second decision graph corresponding to each source image to obtain a focusing image of each source image;
it can be understood that in the second decision diagram, the pixel value of the pixel point where the focusing information is located may be 1, and the pixel values of the pixel points of the rest positions may be 0, and then after each source image and the second decision diagram corresponding thereto are fused, the obtained focusing image is the focusing area image of each source image.
And S281, carrying out weighted fusion on the focused images of all the source images to obtain fused images.
Specifically, under the condition that the focused images of all the source images are acquired, the focused images of all the source images are subjected to weighted fusion, and a multi-focused image fusion result is obtained.
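Steps S280 and S281 can be sketched as a decision-map-weighted sum. A hedged NumPy illustration (function name hypothetical): each source is multiplied by its second decision map to get its focus image, and the focus images are combined with per-pixel normalisation so that overlapping, non-binary decision weights still yield a weighted fusion.

```python
import numpy as np

def fuse(sources: list, decision_maps: list) -> np.ndarray:
    """Fuse source images using their second decision maps: 1 where a
    source is in focus, 0 elsewhere. The per-pixel sum of decision
    weights normalises the result, covering both strictly binary maps
    and softened (guided-filtered) ones."""
    srcs = np.stack([s.astype(np.float64) for s in sources])
    maps = np.stack([m.astype(np.float64) for m in decision_maps])
    weight = maps.sum(axis=0)
    weight[weight == 0] = 1.0   # avoid division by zero in empty regions
    return (srcs * maps).sum(axis=0) / weight
```

With complementary binary decision maps this reduces to simply picking, at every pixel, the value from the source image that is in focus there.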
In this embodiment, a multi-focus image fusion device is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, and will not be described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a multi-focus image fusion apparatus, as shown in fig. 3, including:
a multi-focus image set acquisition module 30, configured to acquire a multi-focus image set, where the multi-focus image set includes at least two source images and is acquired by a large-field high-resolution microscope on the same scene along the Z axis of the displacement stage;
the rough focus information map obtaining module 31 is configured to perform edge feature extraction processing on each source image in the multi-focus image set, so as to obtain a rough focus information map corresponding to each source image;
a high-frequency information acquisition module 32, configured to perform information extraction processing on each source image in the multi-focus image set, so as to obtain high-frequency information corresponding to each source image;
the target focus information map obtaining module 33 is configured to perform fusion optimization processing on the high-frequency information and the rough focus information map corresponding to each source image, so as to obtain a target focus information map corresponding to each source image;
the pixel point obtaining module 34 is configured to stack the target focus information maps corresponding to all the source images in a three-dimensional direction, and obtain the pixels located at the same position in the target focus information maps corresponding to all the source images;
the optimal focusing pixel point set acquisition module 35 is configured to compare the pixel points located at the same position in the target focus information maps corresponding to all the source images, to obtain an optimal focusing pixel point set corresponding to each source image;
the first decision diagram obtaining module 36 is configured to perform binarization processing on pixels in the target focus information diagram corresponding to each source image based on the best focus pixel set corresponding to each source image, to obtain a first decision diagram corresponding to each source image, where a value of a best pixel position in the best focus pixel set is different from a value of other pixel positions;
the second decision diagram obtaining module 37 is configured to perform fast guided filtering optimization processing on the first decision diagram corresponding to each source image, so as to obtain a second decision diagram corresponding to each source image.
The fused image acquisition module 38 is configured to perform fusion processing on all the source images in the multi-focus image set based on the second decision graph corresponding to each source image, so as to obtain a fused image.
In some alternative embodiments, the coarse focus information map acquisition module 31 includes:
and the fuzzy image acquisition unit is used for carrying out smoothing processing on each source image in the multi-focus image set through an average filter to obtain a fuzzy image corresponding to each source image.
A rough focus information graph acquisition subunit, configured to input each source image and its corresponding blurred image into a preset rough focus information graph acquisition model, acquire a rough focus information graph corresponding to each source image, where,
presetting a rough focusing information graph acquisition model, which comprises the following steps:
RI_i(x, y) = |I_i(x, y) - M_i(x, y)|
where RI_i(x, y) is the pixel value corresponding to the pixel point with coordinate value (x, y) in the rough focus information map corresponding to the i-th source image, I_i(x, y) is the pixel value corresponding to the pixel point with coordinate value (x, y) in the i-th source image, and M_i(x, y) is the pixel value corresponding to the pixel point with coordinate value (x, y) in the blurred image corresponding to the i-th source image.
In some alternative embodiments, the high frequency information acquisition module 32 includes:
the gradient value acquisition unit is used for carrying out gradient acquisition processing on each source image in the multi-focus image set through a Gaussian-like four-neighborhood gradient operator to acquire gradient values of all pixel points of each source image in the multi-focus image set.
The high-frequency information acquisition subunit is used for filtering the low-frequency information of each source image based on the gradient values of all pixel points in each source image in the multi-focus image set to obtain the high-frequency information corresponding to each source image.
In some alternative embodiments, the gradient value acquisition unit includes:
the gradient value acquisition subunit is used for inputting the pixel value of each pixel point in each source image in the multi-focus image set and the Gaussian-like convolution template into a preset convolution value acquisition model to acquire a convolution value corresponding to each pixel point;
the convolution value corresponding to each pixel point and the pixel value of the horizontal neighborhood pixel point of each pixel point are input into a preset horizontal gradient value acquisition model to acquire the horizontal gradient value of each pixel point;
the convolution value corresponding to each pixel point and the pixel value of the vertical neighborhood pixel point of each pixel point are input into a preset vertical gradient value acquisition model to acquire the vertical gradient value of each pixel point;
and inputting the horizontal gradient value and the vertical gradient value of each pixel point into a preset gradient value acquisition model to obtain the gradient value of each pixel point.
In some alternative embodiments, the high frequency information acquisition subunit comprises:
the first high-frequency information acquisition subunit is used for comparing gradient values of all pixel points in each source image in the multi-focus image set with a preset gradient threshold value;
taking the pixel point corresponding information with the gradient value smaller than the preset gradient threshold value as low-frequency information, and taking the pixel point corresponding information with the gradient value larger than or equal to the preset gradient threshold value as high-frequency information;
And filtering the low-frequency information to retain the high-frequency information, and obtaining the high-frequency information of each source image.
In some alternative embodiments, the target focus information map acquisition module 33 includes:
a target focus information map acquisition subunit, configured to input the high-frequency information and the rough focus information map corresponding to each source image to a preset precise focus information map acquisition model, acquire a precise focus information map corresponding to each source image,
the preset accurate focus information map acquisition model comprises:
AI_i(x, y) = RI_i(x, y), if G_i(x, y) ≥ T_0; AI_i(x, y) = 0, otherwise
where RI_i(x, y) is the pixel value corresponding to the pixel point with coordinate value (x, y) in the rough focus information map corresponding to the i-th source image, G_i(x, y) is the gradient value of the pixel point with coordinate value (x, y) in the i-th source image, T_0 is the preset gradient threshold, and AI_i(x, y) is the pixel value corresponding to the pixel point with coordinate value (x, y) in the accurate focus information map corresponding to the i-th source image;
and carrying out rapid guided filtering optimization processing on the accurate focusing information graph corresponding to each source image to obtain the target focusing information graph corresponding to each source image.
In some alternative embodiments, the best focus pixel set acquisition module 35 includes:
The first optimal focusing pixel point set acquisition subunit is used for comparing the gray values of the pixels positioned at the same position in the target focusing information graph corresponding to all the source images, and taking the pixel with the maximum gray value as the optimal focusing pixel point;
and obtaining the optimal focusing pixel points at all positions, and correspondingly forming an optimal focusing pixel point set corresponding to each source image.
In some alternative embodiments, the best focus pixel set acquisition module 35 further comprises:
the second best focusing pixel point set obtaining subunit is used for obtaining local gradient values of preset neighborhoods of the pixels with the same gray values and the maximum gray values by utilizing the definition evaluation operator under the condition that the gray values of the pixels at the same position in the target focusing information graph corresponding to at least two source images are the same and the gray values are the maximum gray values;
and taking the pixel point with the largest local gradient value of the preset neighborhood as the optimal focusing pixel point.
In some alternative embodiments, the fused image acquisition module 38 includes:
the focusing image acquisition unit is used for fusing each source image with the second decision graph corresponding to each source image to acquire a focusing image of each source image;
And the fusion image acquisition subunit is used for carrying out weighted fusion on the focused images of all the source images to obtain a fused image after fusion.
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
The multi-focus image fusion apparatus in this embodiment is presented in the form of functional units, where the units refer to ASIC (Application-Specific Integrated Circuit) circuits, processors and memory executing one or more software or firmware programs, and/or other devices that can provide the above-described functionality.
The embodiment of the invention also provides a computer device provided with the multi-focus image fusion apparatus shown in fig. 3 above.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a computer device according to an alternative embodiment of the present invention. As shown in fig. 4, the computer device includes: one or more processors 40, a memory 41, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the computer device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Also, multiple computer devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 40 is illustrated in fig. 4.
The processor 40 may be a central processor, a network processor, or a combination thereof. The processor 40 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
Wherein the memory 41 stores instructions executable by the at least one processor 40 to cause the at least one processor 40 to perform the method shown in implementing the above embodiments.
The memory 41 may include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created according to the use of the computer device, etc. In addition, the memory 41 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, memory 41 may optionally include memory located remotely from processor 40, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The memory 41 may include a volatile memory, for example, a random access memory; the memory may also include non-volatile memory, such as flash memory, hard disk, or solid state disk; the memory 41 may also comprise a combination of memories of the kind described above.
The computer device further comprises input means 42 and output means 43. The processor 40, memory 41, input device 42 and output device 43 may be connected by a bus or otherwise, for example in fig. 4.
The input device 42 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the computer device, such as a touch screen, keypad, mouse, trackpad, touchpad, pointing stick, one or more mouse buttons, trackball, joystick, and the like. The output device 43 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. Such display devices include, but are not limited to, liquid crystal displays, light-emitting diode displays, and plasma displays. In some alternative implementations, the display device may be a touch screen.
The embodiments of the present invention also provide a computer-readable storage medium. The method according to the above embodiments may be implemented in hardware or firmware, or realized as computer code recorded on a storage medium, or as computer code originally stored on a remote storage medium or a non-transitory machine-readable storage medium, downloaded through a network and stored in a local storage medium, so that the method described herein can be executed from such a storage medium by a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware. The storage medium can be a magnetic disk, an optical disk, a read-only memory, a random access memory, a flash memory, a hard disk, a solid-state disk, or the like; further, the storage medium may also comprise a combination of the above kinds of memory. It will be appreciated that a computer, processor, microprocessor controller or programmable hardware includes a storage element that can store or receive software or computer code which, when accessed and executed by the computer, processor or hardware, implements the methods illustrated by the above embodiments.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.

Claims (12)

1. A method of multi-focus image fusion, the method comprising:
acquiring a multi-focus image set, wherein the multi-focus image set comprises at least two source images, and the multi-focus image set is acquired by a large-view-field high-resolution microscope on the same scene along the Z axial direction of a displacement table;
performing edge feature extraction processing on each source image in the multi-focus image set to obtain a rough focus information image corresponding to each source image;
information extraction processing is carried out on each source image in the multi-focus image set, and high-frequency information corresponding to each source image is obtained;
performing fusion optimization processing on the high-frequency information and the rough focusing information corresponding to each source image to obtain a target focusing information diagram corresponding to each source image;
stacking target focusing information graphs corresponding to all source images in a three-dimensional direction to obtain pixel points positioned at the same position in the target focusing information graphs corresponding to all source images;
comparing the pixel points positioned at the same position in the target focusing information diagrams corresponding to all the source images to obtain an optimal focusing pixel point set corresponding to each source image;
Performing binarization processing on pixels in the target focus information graph corresponding to each source image based on the optimal focus pixel point set corresponding to each source image to obtain a first decision graph corresponding to each source image, wherein the value of the optimal focus pixel point set at the optimal pixel point position is different from the values of other pixel point positions;
performing rapid guided filtering optimization processing on the first decision graph corresponding to each source image to obtain a second decision graph corresponding to each source image;
and based on the second decision graph corresponding to each source image, carrying out fusion processing on all source images in the multi-focus image set to obtain a fused image after fusion.
2. The method according to claim 1, wherein performing edge feature extraction processing on each source image in the multi-focus image set to obtain a rough focus information map corresponding to each source image, includes:
smoothing each source image in the multi-focus image set through an average filter to obtain a fuzzy image corresponding to each source image;
inputting each source image and the corresponding blurred image into a preset rough focusing information image acquisition model to acquire a rough focusing information image corresponding to each source image, wherein,
The preset rough focusing information graph acquisition model comprises the following steps:
RI_i(x, y) = |I_i(x, y) - M_i(x, y)|
where RI_i(x, y) is the pixel value corresponding to the pixel point with coordinate value (x, y) in the rough focus information map corresponding to the i-th source image, I_i(x, y) is the pixel value corresponding to the pixel point with coordinate value (x, y) in the i-th source image, and M_i(x, y) is the pixel value corresponding to the pixel point with coordinate value (x, y) in the blurred image corresponding to the i-th source image.
3. The method according to claim 1, wherein the performing information extraction processing on each source image in the multi-focus image set to obtain high-frequency information corresponding to each source image includes:
carrying out gradient acquisition processing on each source image in the multi-focus image set through a Gaussian-like four-neighborhood gradient operator to acquire gradient values of all pixel points in each source image in the multi-focus image set;
and filtering the low-frequency information of each source image based on the gradient values of all pixel points in each source image in the multi-focus image set to obtain the high-frequency information corresponding to each source image.
4. The method according to claim 3, wherein performing the gradient acquisition processing on each source image in the multi-focus image set through a Gaussian-like four-neighborhood gradient operator to obtain gradient values of all pixel points in each source image in the multi-focus image set includes:
inputting the pixel value of each pixel point in each source image in the multi-focus image set and a Gaussian-like convolution template into a preset convolution value acquisition model to obtain a convolution value corresponding to each pixel point;
inputting the convolution value corresponding to each pixel point and the pixel values of the horizontal-neighborhood pixel points of each pixel point into a preset horizontal gradient value acquisition model to obtain the horizontal gradient value of each pixel point;
inputting the convolution value corresponding to each pixel point and the pixel value of the pixel point in the vertical neighborhood of each pixel point into a preset vertical gradient value acquisition model to acquire the vertical gradient value of each pixel point;
and inputting the horizontal gradient value and the vertical gradient value of each pixel point into a preset gradient value acquisition model to obtain the gradient value of each pixel point.
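The claims do not disclose the Gaussian-like convolution template itself (it appears only in the description), so the sketch below substitutes plain central differences over the four-neighborhood to show the shape of the claim 4 pipeline; the template, the intermediate convolution-value model, and the magnitude formula are all assumptions:

```python
import numpy as np

def four_neighborhood_gradient(image: np.ndarray) -> np.ndarray:
    """Gradient magnitude from horizontal and vertical four-neighborhood
    differences; a stand-in for the patent's Gaussian-like operator."""
    img = np.pad(image.astype(np.float64), 1, mode="edge")
    gx = (img[1:-1, 2:] - img[1:-1, :-2]) / 2.0   # left/right neighbors
    gy = (img[2:, 1:-1] - img[:-2, 1:-1]) / 2.0   # up/down neighbors
    return np.sqrt(gx ** 2 + gy ** 2)             # assumed combination: sqrt(Gx^2 + Gy^2)
```

On a horizontal intensity ramp the interior gradient magnitude equals the ramp slope, and a constant image yields zero everywhere.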
5. The method according to claim 3, wherein the filtering the low-frequency information of each source image based on the gradient values of all pixels in each source image in the multi-focus image set to obtain the high-frequency information corresponding to each source image includes:
comparing gradient values of all pixel points in each source image in the multi-focus image set with a preset gradient threshold;
taking the information corresponding to pixel points whose gradient values are smaller than the preset gradient threshold as low-frequency information, and the information corresponding to pixel points whose gradient values are greater than or equal to the preset gradient threshold as high-frequency information;
and filtering the low-frequency information to retain the high-frequency information, and obtaining the high-frequency information corresponding to each source image.
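Claim 5's high/low-frequency split is a per-pixel threshold test; a minimal sketch, where the threshold value itself is application-dependent and "filtering out" low-frequency pixels is assumed to mean zeroing them:

```python
import numpy as np

def keep_high_frequency(image: np.ndarray, gradients: np.ndarray,
                        threshold: float) -> np.ndarray:
    """Pixels with gradient >= threshold are high frequency and kept;
    pixels below the threshold are low frequency and zeroed out."""
    mask = gradients >= threshold
    return np.where(mask, image.astype(np.float64), 0.0)
```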
6. The method according to claim 1, wherein the performing fusion optimization processing on the high-frequency information and the rough focus information map corresponding to each source image to obtain the target focus information map corresponding to each source image includes:
inputting the high-frequency information and the rough focusing information corresponding to each source image into a preset accurate focusing information graph acquisition model to obtain an accurate focusing information graph corresponding to each source image, wherein
the preset accurate focusing information graph acquisition model is:
AI_i(x, y) = RI_i(x, y), if G_i(x, y) ≥ T_0; otherwise AI_i(x, y) = 0
wherein RI_i(x, y) is the pixel value corresponding to the pixel point with coordinates (x, y) in the rough focusing information graph corresponding to the i-th source image, G_i(x, y) is the gradient value of the pixel point with coordinates (x, y) in the i-th source image, T_0 is the preset gradient threshold, and AI_i(x, y) is the pixel value corresponding to the pixel point with coordinates (x, y) in the accurate focusing information graph corresponding to the i-th source image;
and carrying out fast guided filtering optimization processing on the accurate focusing information graph corresponding to each source image to obtain a target focusing information graph corresponding to each source image.
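Reading claim 6's variable definitions together with claim 5's threshold rule, the accurate focusing information graph plausibly gates the rough map by the gradient threshold. That form is an inference (the claim's formula is not reproduced in the text), and the fast guided filtering step is omitted from this sketch:

```python
import numpy as np

def accurate_focus_map(rough_map: np.ndarray, gradients: np.ndarray,
                       t0: float) -> np.ndarray:
    """Assumed form: AI_i(x, y) = RI_i(x, y) where G_i(x, y) >= T_0, else 0.
    Keeps rough-focus evidence only at high-frequency (edge) pixels."""
    return np.where(gradients >= t0, rough_map.astype(np.float64), 0.0)
```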
7. The method according to claim 1, wherein the comparing the pixels located at the same position in the target focus information map corresponding to all the source images to obtain the best focus pixel set corresponding to each source image includes:
comparing the gray values of the pixel points positioned at the same position in the target focusing information graph corresponding to all the source images, and taking the pixel point with the maximum gray value as the optimal focusing pixel point;
and obtaining the optimal focusing pixel points at all positions to form the optimal focusing pixel point set corresponding to each source image.
8. The method of claim 7, wherein the method further comprises:
when the gray values of the pixel points at the same position in the target focusing information graphs corresponding to at least two source images are equal and are all the maximum gray value, obtaining, with a sharpness evaluation operator, the local gradient values of a preset neighborhood of each of the pixel points having that maximum gray value;
and taking the pixel point with the largest preset-neighborhood local gradient value as the optimal focusing pixel point.
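Claims 7–8 select, per pixel position, the source whose target focusing information graph is largest. The sketch below uses argmax, which resolves claim 8's ties by lowest image index instead of the local-gradient rule:

```python
import numpy as np

def first_decision_maps(focus_maps):
    """Stack the target focus maps and mark, per pixel, the winning source.

    Returns one binary (first) decision map per source image; ties are
    broken by argmax's lowest-index rule rather than claim 8's
    sharpness-evaluation step."""
    stack = np.stack(focus_maps, axis=0)   # shape: (n_sources, H, W)
    winner = np.argmax(stack, axis=0)      # index of the best-focused source
    return [(winner == i).astype(np.uint8) for i in range(stack.shape[0])]
```

By construction the binary maps partition the image: every pixel position is assigned to exactly one source.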
9. The method according to claim 1, wherein the fusing all the source images in the multi-focus image set based on the second decision graph corresponding to each source image to obtain a fused image includes:
fusing each source image with a second decision graph corresponding to each source image to obtain a focusing image of each source image;
and carrying out weighted fusion on the focused images of all the source images to obtain the fused image.
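Claim 9's final step, sketched with per-pixel weights: after guided filtering, the second decision maps are generally soft (values in [0, 1]), so masking each source and summing amounts to a weighted fusion. Normalizing by the per-pixel weight sum is an assumption added here to keep the sketch well-defined:

```python
import numpy as np

def fuse_images(sources, decision_maps):
    """Mask each source by its second decision map and sum (claim 9);
    dividing by the per-pixel weight sum is an added safeguard for
    weights that do not sum exactly to one."""
    num = np.zeros(sources[0].shape, dtype=np.float64)
    den = np.zeros(sources[0].shape, dtype=np.float64)
    for img, d in zip(sources, decision_maps):
        num += img.astype(np.float64) * d
        den += d
    return num / np.maximum(den, 1e-12)
```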
10. A multi-focus image fusion apparatus, the apparatus comprising:
the multi-focus image set acquisition module is used for acquiring a multi-focus image set, wherein the multi-focus image set comprises at least two source images, and the multi-focus image set is acquired by a large-field-of-view, high-resolution microscope imaging the same scene at positions along the Z-axis of a displacement stage;
the rough focusing information graph acquisition module is used for carrying out edge feature extraction processing on each source image in the multi-focusing image set to obtain a rough focusing information graph corresponding to each source image;
the high-frequency information acquisition module is used for extracting information of each source image in the multi-focus image set to acquire high-frequency information corresponding to each source image;
the target focusing information diagram acquisition module is used for carrying out fusion optimization processing on the high-frequency information and the rough focusing information diagram corresponding to each source image to obtain a target focusing information diagram corresponding to each source image;
the pixel point acquisition module is used for stacking the target focusing information graphs corresponding to all the source images in the three-dimensional direction to acquire the pixel points positioned at the same position in the target focusing information graphs corresponding to all the source images;
the optimal focusing pixel point set acquisition module is used for comparing the pixel points positioned at the same position in the target focusing information graph corresponding to all the source images to obtain an optimal focusing pixel point set corresponding to each source image;
the first decision diagram obtaining module is used for carrying out binarization processing on pixel points in the target focusing information diagram corresponding to each source image based on the optimal focusing pixel point set corresponding to each source image to obtain a first decision diagram corresponding to each source image, wherein the value of the optimal pixel point position in the optimal focusing pixel point set is different from the value of the other pixel point positions;
the second decision diagram acquisition module is used for carrying out fast guided filtering optimization processing on the first decision diagram corresponding to each source image to obtain a second decision diagram corresponding to each source image;
and the fusion image acquisition module is used for carrying out fusion processing on all the source images in the multi-focus image set based on the second decision graph corresponding to each source image to obtain a fused image after fusion.
11. A computer device, comprising:
a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the multi-focus image fusion method of any one of claims 1 to 9.
12. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the multi-focus image fusion method of any one of claims 1 to 9.
CN202311422161.5A 2023-10-30 2023-10-30 Multi-focus image fusion method and device, computer equipment and storage medium Pending CN117455786A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311422161.5A CN117455786A (en) 2023-10-30 2023-10-30 Multi-focus image fusion method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117455786A (en) 2024-01-26

Family

ID=89592428


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118134788A (en) * 2024-05-08 2024-06-04 江苏艾玮得生物科技有限公司 Image fusion method, device, storage medium and terminal



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination