WO2023279920A1 - Microscope-based super-resolution method, apparatus, device and medium (基于显微镜的超分辨率方法、装置、设备及介质) - Google Patents

Info

Publication number
WO2023279920A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
resolution
tested
sample
auxiliary
Prior art date
Application number
PCT/CN2022/098411
Other languages
English (en)
French (fr)
Inventor
蔡德
韩骁
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority to EP22836674.6A (EP4365774A1)
Publication of WO2023279920A1
Priority to US18/127,502 (US20230237617A1)


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38Registration of image sequences
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro

Definitions

  • the present application relates to the field of image processing, in particular to a microscope-based super-resolution method, device, equipment and medium.
  • Super-resolution is a commonly used method when processing microscopy images.
  • Super-resolution refers to improving the resolution of the original image by means of hardware or software. For example, in the field of medical technology, a corresponding high-resolution microscope image is restored from a low-resolution microscope image.
  • In the related art, super-resolution models include feature extraction, nonlinear mapping, and reconstruction modules, and technicians pre-train them on pairs of low-resolution and high-resolution microscope images.
  • when using the super-resolution model, a low-resolution microscope image is input and the corresponding high-resolution microscope image is output.
  • the related art only considers the correspondence between a single low-resolution microscope image and a single high-resolution microscope image; the resulting high-resolution microscope image has poor quality and lacks accurate image details.
  • Embodiments of the present application provide a microscope-based super-resolution method, apparatus, device, and medium, and the method is used to obtain high-resolution microscope images with better quality. The technical solution is as follows:
  • a microscope-based super-resolution method, the method being performed by a computer device, the method comprising:
  • acquiring an image to be tested and at least one auxiliary image, where the image to be tested includes a target area, the display area of the auxiliary image overlaps with the target area, and both the image to be tested and the auxiliary image are microscope images of a first resolution;
  • registering the image to be tested and the auxiliary image to obtain a registration image, and extracting high-resolution features from the registration image, where the high-resolution features are used to represent the image features of the target area at a second resolution, the second resolution being greater than the first resolution;
  • reconstructing the high-resolution features to obtain a target image of the second resolution corresponding to the image to be tested.
  • a microscope-based super-resolution device comprising:
  • An acquisition unit configured to acquire an image to be tested and at least one auxiliary image, where the image to be tested includes a target area, the display area of the auxiliary image overlaps with the target area, and both the image to be tested and the auxiliary image are microscope images of the first resolution;
  • a registration unit configured to register the image to be tested and the auxiliary image to obtain a registration image
  • An extraction unit configured to extract high-resolution features from the registration image, the high-resolution features are used to represent the image features of the target area at a second resolution, and the second resolution is greater than said first resolution;
  • a reconstruction unit configured to reconstruct the high-resolution features to obtain the target image of the second resolution corresponding to the image to be tested.
  • a computer device comprising a processor and a memory, where at least one instruction, program, code set, or instruction set is stored in the memory and is loaded and executed by the processor, so that the computer device implements the microscope-based super-resolution method described above.
  • a computer-readable storage medium in which at least one computer program is stored, the computer program being loaded and executed by a processor so that the computer implements the microscope-based super-resolution method.
  • a computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the microscope-based super-resolution method as described in the above aspect.
  • the image to be tested is registered through the auxiliary image, high-resolution features are extracted from the registered image, and a higher-resolution image is reconstructed based on the high-resolution features.
  • This method uses the auxiliary image to register the image to be tested, supplementing the image to be tested with the image details contained in the auxiliary image.
  • the registration image obtained by registration fuses the image features of the image to be tested and the auxiliary image, so the correlation between multiple images can be modeled and mined, which benefits subsequent feature extraction and image reconstruction and allows a higher-resolution image with more accurate image details to be reconstructed.
  • Fig. 1 is a schematic structural diagram of a computer system provided by an exemplary embodiment of the present application
  • Fig. 2 is a schematic diagram of a microscope-based super-resolution model provided by an exemplary embodiment of the present application
  • Fig. 3 is a schematic flow chart of a microscope-based super-resolution method provided by an exemplary embodiment of the present application
  • Fig. 4 is a schematic diagram of an image registration model provided by an exemplary embodiment of the present application.
  • Fig. 5 is a schematic flowchart of an image registration method provided by an exemplary embodiment of the present application.
  • Fig. 6 is a schematic diagram of motion compensation provided by an exemplary embodiment of the present application.
  • Fig. 7 is a schematic flowchart of a super-resolution model training method provided by an exemplary embodiment of the present application.
  • Fig. 8 is a schematic flowchart of a microscope-based super-resolution method provided by an exemplary embodiment of the present application.
  • Fig. 9 is a schematic diagram of a microscope-based super-resolution method provided by an exemplary embodiment of the present application.
  • Fig. 10 is a schematic structural diagram of a microscope-based super-resolution device provided by an exemplary embodiment of the present application.
  • Fig. 11 is a schematic structural diagram of a computer device provided by an exemplary embodiment of the present application.
  • Super-resolution: improving the resolution of the original image by means of hardware or software.
  • For example, the low-resolution image can be an image captured under a 10X objective, and the image of the corresponding area under a 20X objective is determined based on the 10X image.
  • Optical flow: the apparent motion of objects, expressed through the brightness pattern of the image. Optical flow expresses the change of the image and, because it contains information about the target's motion, can be used by an observer to determine the motion of the target.
  • Motion compensation: a method of describing the difference between adjacent frames, specifically how each small block of the previous frame moves to a certain position in the current frame.
  • Image registration: the process of matching and superimposing two or more images acquired at different times, by different sensors (imaging devices), or under different conditions (weather, illuminance, camera position and angle, etc.).
  • the method for image registration includes at least one of an image registration method based on motion estimation and compensation of a translation stage, and an image registration method based on an image registration module.
  • Microscopes are currently equipped with image acquisition devices that can collect digital images of the eyepiece's field of view in real time for subsequent data storage and analysis.
  • For a given objective magnification, the highest resolution of the microscope is limited by the numerical aperture of the objective lens; this is the diffraction-limited resolution.
  • At present, there are many ways to break through the diffraction-limited resolution and thereby realize super-resolution of microscope images.
  • the super-resolution method can obtain higher resolution than ordinary microscopes and reveal clearer sample details, so it is widely used in scientific research, disease diagnosis, and other fields.
  • Fig. 1 shows a block diagram of a computer system provided by an exemplary embodiment of the present application.
  • the computer system includes: a computer device 120 , an image acquisition device 140 and a microscope 160 .
  • the computer device 120 runs an application program for image processing, and the application program may be a small program in an app (application, application program), may also be a special application program, or may be a webpage client.
  • the computer device 120 is at least one of a host computer, a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, and a desktop computer.
  • the computer device 120 is connected to the image acquisition device 140 in a wired or wireless manner.
  • the image capture device 140 is used to capture microscope images.
  • the image acquisition device 140 is at least one of a camera, a video camera, a scanner, a smartphone, a tablet computer, and a laptop computer.
  • Microscope 160 is used to obtain magnified images of the sample.
  • the microscope 160 is at least one of an optical microscope and a polarizing microscope.
  • Fig. 2 shows a schematic diagram of a microscope-based super-resolution model provided by an exemplary embodiment of the present application.
  • the super-resolution model includes a registration module 201 , a feature extraction and fusion module 202 and a reconstruction module 203 .
  • the registration module 201 is used to register the input microscope image sequence, the input of the registration module 201 is the microscope image sequence 204, and the output is the registration image.
  • the microscope image sequence 204 is at least two microscope images arranged in time order, and the resolution of the microscope image sequence 204 is the first resolution.
  • the microscope image sequence 204 includes an image to be tested and an auxiliary image.
  • the image to be tested refers to an image containing the target area, and the display area of the auxiliary image overlaps with the target area.
  • the overlap here can be either complete overlap or partial overlap.
  • the feature extraction and fusion module 202 is used to extract and fuse high-resolution features from the registered images.
  • the input of the feature extraction and fusion module 202 is the registered image, and the output is the high-resolution features.
  • the high-resolution feature is used to represent the image feature in the case of the second resolution, and the second resolution is greater than the first resolution.
  • the reconstruction module 203 is used to reconstruct a higher-resolution image of the target area, the input of the reconstruction module 203 is the high-resolution feature, and the output is the target image 205 .
  • the target image 205 refers to an image of the second resolution corresponding to the target area.
  • the target image of the second resolution will be obtained, which improves the resolution of the microscope image.
  • the image to be tested and the auxiliary image in the microscope image sequence of the first resolution are considered comprehensively, and the correlation between microscope images corresponding to different positions can be fully utilized by modeling the preceding and subsequent frame images as a whole, so as to better reconstruct super-resolution microscope images.
  • the preceding and subsequent frame images refer to the image to be tested and the auxiliary image.
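  • Exemplarily, the chaining of the three modules in Fig. 2 can be sketched as follows; this is a minimal PyTorch-style illustration, and all class and parameter names are hypothetical rather than taken from the patent:

```python
# Minimal sketch of the three-stage pipeline of Fig. 2 (illustrative only).
# The three components passed in are hypothetical stand-ins for the
# registration module 201, feature extraction and fusion module 202,
# and reconstruction module 203.
import torch.nn as nn

class SuperResolutionModel(nn.Module):
    def __init__(self, registration, extraction_fusion, reconstruction):
        super().__init__()
        self.registration = registration            # registration module 201
        self.extraction_fusion = extraction_fusion  # feature extraction and fusion module 202
        self.reconstruction = reconstruction        # reconstruction module 203

    def forward(self, image_under_test, auxiliary_images):
        # Register the image to be tested with the auxiliary images.
        registered = self.registration(image_under_test, auxiliary_images)
        # Extract and fuse high-resolution features from the registered image.
        features = self.extraction_fusion(registered)
        # Reconstruct the second-resolution target image from the features.
        return self.reconstruction(features)
```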
  • Fig. 3 shows a schematic flowchart of a microscope-based super-resolution method provided by an exemplary embodiment of the present application. The method can be performed by the computer device 120 shown in FIG. 1, and the method includes the following steps:
  • Step 302 Obtain an image to be tested and at least one auxiliary image.
  • the image to be tested includes a target area, and the display area of the auxiliary image overlaps with the target area.
  • Both the image to be tested and the auxiliary image are microscope images of the first resolution.
  • the first resolution is set according to experience, or flexibly adjusted according to application scenarios.
  • the first resolution may refer to the resolution of an image observed under a microscope with a magnification of 10X, or may refer to the resolution of an image observed under a microscope with a magnification of 20X.
  • the microscope image of the first resolution refers to the image obtained by observing the sample under the magnification lens corresponding to the first resolution with a microscope.
  • the embodiment of the present application does not limit the method of obtaining the image to be tested and the auxiliary image, as long as both are microscope images of the first resolution and the display area of the auxiliary image has an overlapping area with the target area contained in the image to be tested.
  • the image under test and the auxiliary image are selected from a library of microscope images of the first resolution.
  • the image under test and the auxiliary image are obtained from a sequence of microscope images at a first resolution.
  • a microscope image sequence refers to a sequence containing at least two microscope images.
  • the microscope image sequence is obtained from a microscope video.
  • the microscope video includes multiple frames of microscope images
  • the microscope images in the microscope video are arranged in time order to obtain a sequence of microscope images.
  • the auxiliary image is used to provide auxiliary information for the process of reconstructing the higher-resolution image corresponding to the image to be tested, so as to improve the reconstruction quality of the higher-resolution image.
  • there may be one auxiliary image or multiple auxiliary images.
  • the target area refers to the area of the sample observed with the microscope whose details need to be magnified.
  • the target area can be set according to experience, or flexibly adjusted according to the application scene, which is not limited in the embodiment of the present application.
  • the image to be tested is an image containing the target area in the microscope image sequence.
  • that the image to be tested includes the target area may mean that the display area of the image to be tested is the same as the target area, or that the display area of the image to be tested is larger than the target area.
  • the display area of the image to be tested refers to the area in the observed sample presented by the image to be tested.
  • the display area of the auxiliary image and the target area may overlap completely or partially. For example, there is a 60% overlapping area between the display area of the auxiliary image and the target area.
  • the manner of acquiring the image to be tested and at least one auxiliary image includes: in the microscope image sequence of the first resolution, determining the image to be tested and the images that satisfy an association condition with it; among those images, determining as an auxiliary image any image that has an overlapping area with the target area and whose overlapping-area proportion is greater than a reference value.
  • the manner of determining the image to be tested includes: in the microscope image sequence of the first resolution, determining the images containing the target area; if there is only one such image, it is used as the image to be tested; if there are multiple such images, any one of them can be used as the image to be tested, or
  • the one that satisfies a selection condition is used as the image to be tested. The selection condition can be set according to experience; exemplarily, it may be that the position of the target area is closest to the center of the display area.
  • the ratio of the overlapping area indicates the ratio of the overlapping area to the display area, that is, the ratio of the size of the overlapping area to the size of the display area.
  • the association condition is set according to experience.
  • exemplarily, satisfying the association condition may refer to being among the first reference number of images before and the second reference number of images after the image to be tested in the microscope image sequence.
  • the first reference quantity and the second reference quantity are set according to experience, and the first reference quantity and the second reference quantity may be the same or different. For example, both the first reference number and the second reference number are 1, or both the first reference number and the second reference number are 2, etc.
  • an image that satisfies an association condition with the image to be tested may also be referred to as a nearby image of the image to be tested, an image around the image to be tested, or the like.
  • for example, the microscope image sequence includes image 1, image 2, image 3, image 4, and image 5 arranged in chronological order. Assuming that image 3 is the image to be tested, the reference value is 60%, and the first and second reference numbers are both 2, then among images 1, 2, 4, and 5 surrounding image 3, any image that has an overlapping area with the target area whose proportion exceeds 60% is determined as an auxiliary image.
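  • The selection of auxiliary images in step 302 can be sketched as follows; this is an illustrative Python fragment that assumes each image carries its display area as an axis-aligned rectangle (x, y, w, h), which is an assumption and not a requirement of the patent:

```python
# Illustrative selection of auxiliary images for step 302. Each image is
# assumed to be a dict with an "area" entry holding its display area as an
# axis-aligned rectangle (x, y, w, h); this representation is an assumption.

def overlap_ratio(area, target_area):
    """Ratio of the overlapping area to the image's own display area."""
    ax, ay, aw, ah = area
    tx, ty, tw, th = target_area
    ow = max(0.0, min(ax + aw, tx + tw) - max(ax, tx))
    oh = max(0.0, min(ay + ah, ty + th) - max(ay, ty))
    return (ow * oh) / (aw * ah)

def select_auxiliary(sequence, test_idx, target_area,
                     n_before=2, n_after=2, reference_value=0.6):
    """Pick neighbours of the image to be tested whose overlap with the
    target area exceeds the reference value (the association condition is
    adjacency in the sequence, as in the example above)."""
    lo = max(0, test_idx - n_before)
    hi = min(len(sequence), test_idx + n_after + 1)
    return [img for k, img in enumerate(sequence[lo:hi], start=lo)
            if k != test_idx
            and overlap_ratio(img["area"], target_area) > reference_value]
```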
  • Step 304 Register the image to be tested and the auxiliary image to obtain a registered image.
  • the registered image includes the target area.
  • the image to be tested and the auxiliary image are registered by at least one of an image registration method based on motion estimation and compensation of a displacement platform, and an image registration method based on an image registration module.
  • Step 306 Extracting high-resolution features from the registered image, the high-resolution features are used to represent image features of the target area at a second resolution, and the second resolution is greater than the first resolution.
  • the resolution of the registered image is the first resolution.
  • the network structure includes at least one of a neural network structure based on 4D (four-dimensional) image data (the three dimensions of an RGB (red green blue) image plus a time dimension) and a neural network structure based on long short-term memory modules.
  • low-resolution features are extracted from the registration image; the low-resolution features are used to represent the image features of the target area at the first resolution and are mapped to obtain the high-resolution features.
  • exemplarily, the low-resolution feature has size f2 × f2, and the high-resolution feature of size f3 × f3 is obtained by performing a nonlinear mapping on the low-resolution feature.
  • the size of the low-resolution feature is smaller than the size of the high-resolution feature, that is, f2 is smaller than f3.
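  • Exemplarily, the extraction of low-resolution features and their nonlinear mapping to high-resolution features might look like the following sketch; the specific layer choices (convolution, PReLU, transposed convolution) are assumptions:

```python
# Sketch of step 306: low-resolution features (size f2 x f2) are extracted
# from the registration image and nonlinearly mapped to high-resolution
# features (size f3 x f3, with f3 > f2). The layer choices are assumptions.
import torch.nn as nn

class FeatureMapper(nn.Module):
    def __init__(self, channels=64, scale=2):
        super().__init__()
        # Extract low-resolution features at the first resolution.
        self.extract = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1), nn.PReLU())
        # Nonlinear mapping that enlarges the feature map from f2 to f3.
        self.mapping = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, kernel_size=2 * scale,
                               stride=scale, padding=scale // 2),
            nn.PReLU())

    def forward(self, registration_image):
        low_res = self.extract(registration_image)   # f2 x f2 features
        return self.mapping(low_res)                 # f3 x f3 features
```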
  • the registration image and the auxiliary image are fused to obtain a fused image; high-resolution features are extracted from the fused image.
  • the fusion of the registration image and the auxiliary image may be complete fusion or partial fusion.
  • the overlapping areas of the registration image and the auxiliary image are fused to obtain a fused image, or all display areas of the registration image and the auxiliary image are fused to obtain a fused image.
  • in some cases, the microscope image sequence is obtained from real-time microscope video in which the observed sample is moving. To meet users' needs for real-time observation, the observed target area changes in real time, and the image to be tested changes accordingly. For example, at time t the image to be tested is an image of area a, while at time t+1 it is an image of area b.
  • fusing the first registration image and the second registration image to obtain a fusion registration image; extracting high-resolution features from the fusion registration image.
  • the first registration image and the second registration image are registration images with overlapping regions
  • the first registration image or the second registration image is the registration image obtained in step 304 .
  • fusing the first registration image and the second registration image may be complete fusion or partial fusion.
  • the overlapping regions of the first registration image and the second registration image are fused to obtain a fused registration image, or all display areas of the first registration image and the second registration image are fused to obtain a fused registration image.
  • Step 308 Reconstruct the high-resolution features to obtain a second-resolution target image corresponding to the image to be tested.
  • the image reconstruction is used to restore the target image of the target area with the second resolution.
  • the target image is an image corresponding to the image to be tested, and the target image also includes a target area.
  • optionally, the high-resolution features are reconstructed through a neural network structure, where the network structure includes at least one of a neural network structure based on 4D image data (the three dimensions of an RGB image plus a time dimension) and a neural network structure based on long short-term memory modules.
  • the high-resolution features are converted into pixel values of pixels in the target image through an image reconstruction network; the target image of the second resolution corresponding to the image to be tested is obtained from these pixel values.
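  • A minimal sketch of such an image reconstruction network is given below, assuming plain convolutional layers that map the high-resolution features to per-pixel RGB values; the architecture is illustrative only:

```python
# Minimal sketch of the image reconstruction network of step 308: the
# high-resolution features are converted into per-pixel RGB values of the
# second-resolution target image. The architecture is an assumption.
import torch.nn as nn

class Reconstructor(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.to_pixels = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1))

    def forward(self, high_res_features):
        # Each output position holds a pixel value of the target image.
        return self.to_pixels(high_res_features)
```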
  • step 304, step 306 and step 308 can be implemented by calling the target super-resolution model, that is, calling the target super-resolution model to register the image to be tested and the auxiliary image to obtain the registration image; call the target super-resolution model to extract high-resolution features from the registration image; call the target super-resolution model to reconstruct high-resolution features, and obtain a second-resolution target image corresponding to the image to be tested.
  • the target super-resolution model is a model that can reconstruct a second-resolution image based on the first-resolution image to be tested and the auxiliary image.
  • the target super-resolution model is obtained through training; the training process is detailed in the embodiment shown in Fig. 7 and will not be repeated here.
  • the target super-resolution model includes a target registration module, a target feature extraction and fusion module, and a target reconstruction module.
  • optionally, the above steps are implemented by calling the target registration module, the target feature extraction and fusion module, and the target reconstruction module respectively. That is to say, the target registration module is called to register the image to be tested and the auxiliary image to obtain a registered image; the target feature extraction and fusion module is called to extract high-resolution features from the registered image; and the target reconstruction module is called to reconstruct the high-resolution features to obtain a target image of the second resolution corresponding to the image to be tested.
  • an auxiliary image is used to register the image to be tested, high-resolution features are extracted from the registered image, and an image with a higher resolution is reconstructed based on the high-resolution features.
  • This method uses the auxiliary image to register the image to be tested, supplementing the image to be tested with the image details contained in the auxiliary image.
  • the registration image obtained by registration fuses the image features of the image to be tested and the auxiliary image, so the correlation between multiple images can be modeled and mined, which benefits subsequent feature extraction and image reconstruction and allows a higher-resolution image with more accurate image details to be reconstructed.
  • an exemplary image registration method is provided, which can realize registration between the image to be tested and the auxiliary image, establish the correlation between them, and facilitate the downstream image processing flow.
  • Fig. 4 shows a schematic diagram of an image registration model provided by an exemplary embodiment of the present application.
  • the image registration model includes an optical flow prediction network 401 , a super-resolution network 402 and a deconvolution network 403 .
  • the image registration model may be the registration module 201 in the super-resolution model shown in FIG. 2 .
  • the optical flow prediction network 401 is used to determine the optical flow prediction map of the image to be tested and the auxiliary image, and the optical flow prediction map is used to predict the optical flow change between the image to be tested and the auxiliary image.
  • the input of the optical flow prediction network 401 is the image to be tested 404 and the auxiliary image 405 , and the output is an optical flow prediction map 406 .
  • the super-resolution network 402 is used to perform motion compensation on the optical flow prediction map to obtain a compensated image.
  • the input of the super-resolution network 402 is the optical flow prediction map 406 and the auxiliary image 405, and the output is the compensated image with motion compensation.
  • the deconvolution network 403 is used to encode and decode the compensated image to obtain a registered image.
  • the input of the deconvolution network 403 is the compensation image and the image to be tested 404 , and the output is the registered image 407 .
  • Fig. 5 shows a schematic flowchart of an image registration method provided by an exemplary embodiment of the present application. The method can be performed by the computer device 120 shown in FIG. 1, and the method includes the following steps:
  • Step 501 Calculate the optical flow prediction map between the image to be tested and the auxiliary image.
  • the optical flow prediction map is used to predict the optical flow change between the image to be tested and the auxiliary image. Since the image to be tested and the auxiliary image are collected at different times, their optical flow information differs, and the optical flow can represent the change of the image or the movement of a region.
  • this step includes the following sub-step: calling the optical flow prediction network and calculating the optical flow prediction map from the optical flow field of the image to be tested and the optical flow field of the auxiliary image.
  • the image to be tested is the i-th frame image in the microscope image sequence
  • the auxiliary image is the j-th frame image in the microscope image sequence
  • the optical flow prediction map is F_{i→j} = (h_{i→j}, v_{i→j}) = ME(I_i, I_j; θ_ME), where h_{i→j} and v_{i→j} are the horizontal and vertical components of the optical flow prediction map F_{i→j}, ME(·) is the function that calculates the optical flow, θ_ME is its function parameter, and i and j are positive integers.
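  • Exemplarily, the function ME(·) could be realized by a small convolutional network as sketched below; the architecture is a placeholder assumption, and only the input/output convention (frame pair in, horizontal and vertical flow components out) follows the formula above:

```python
# Placeholder realisation of ME(.): a small CNN that takes the frame pair
# (I_i, I_j) and predicts the two flow components (h_{i->j}, v_{i->j}).
# The architecture is an assumption; only the interface follows the formula.
import torch
import torch.nn as nn

class OpticalFlowPredictor(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2, kernel_size=3, padding=1))

    def forward(self, image_i, image_j):
        # Concatenate the frame pair along the channel axis; channel 0 of the
        # output is h_{i->j} and channel 1 is v_{i->j}.
        return self.net(torch.cat([image_i, image_j], dim=1))
```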
  • Step 502 Obtain a compensated image with motion compensation according to the optical flow prediction map and the auxiliary image.
  • this step includes the following sub-steps:
  • the upsampling map is a grid map obtained by upsampling the optical flow prediction map.
  • the optical flow prediction map 601 is up-sampled by a grid generator 603 (Grid Generator) to obtain an up-sampled map.
  • the size of the upsampling map is larger than the size of the optical flow prediction map. For example, if the size of the optical flow prediction map is 4×4, the size of the upsampling map may be 16×16.
  • the super-resolution network is an SPMC (Sub-Pixel Motion Compensation) network.
  • the upsampling map is incomplete because some grid cells in it have no values, so the upsampling map needs to be completed by interpolation.
  • the interpolation manner may be linear interpolation, or bilinear interpolation, or the like.
  • the auxiliary image 602 is sampled by the sampler 604 to obtain a sampling result; the sampling result is inserted into the up-sampling image to obtain a motion-compensated compensated image 605 .
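  • The sub-pixel motion compensation of step 502 can be sketched as follows, using bilinear grid sampling to fill the upsampled grid from the auxiliary image; this follows the general SPMC idea, and the exact scheme in the patent may differ:

```python
# Sketch of step 502 (sub-pixel motion compensation): the optical flow
# prediction map is upsampled, a sampling grid is generated from it, and
# the auxiliary image is sampled with bilinear interpolation to fill the
# grid. This follows the general SPMC idea; details are assumptions.
import torch
import torch.nn.functional as F

def motion_compensate(aux_image, flow, scale=4):
    """aux_image: (N, 3, H, W); flow: (N, 2, H, W) in pixel units."""
    n, _, h, w = aux_image.shape
    # Upsample the optical flow prediction map (e.g. 4x4 -> 16x16).
    up_flow = F.interpolate(flow, scale_factor=scale,
                            mode="bilinear", align_corners=False) * scale
    hs, ws = h * scale, w * scale
    # Base sampling grid in normalised [-1, 1] coordinates (grid generator).
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, hs),
                            torch.linspace(-1, 1, ws), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).expand(n, hs, ws, 2).clone()
    # Shift the grid by the predicted flow, normalised to [-1, 1].
    grid[..., 0] += up_flow[:, 0] * 2.0 / max(ws - 1, 1)
    grid[..., 1] += up_flow[:, 1] * 2.0 / max(hs - 1, 1)
    # Sample the auxiliary image; empty cells are filled by bilinear
    # interpolation, yielding the motion-compensated compensation image.
    return F.grid_sample(aux_image, grid, mode="bilinear",
                         align_corners=False)
```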
  • Step 503 Encoding and decoding the compensation image to obtain a registration image.
  • this step includes the following sub-steps:
  • since the size of the compensated image matches the size of the upsampling map, the compensated image needs to be downsampled to restore its original size.
  • the deconvolution network may be an encoder-decoder network, that is, the compensated image is encoded and decoded by an encoder-decoder network to obtain a registered image.
  • the encoder-decoder network is composed of an encoder, an LSTM (Long Short-Term Memory) artificial neural network, and a decoder.
  • the image residual is fused with the image to be tested to obtain a registered image.
  • exemplarily, the registered image is obtained by summing the image residual and the pixel values of the image to be tested.
  • the image to be tested is connected to the deconvolution network by way of skip connections (Skip Connections).
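  • A minimal sketch of step 503 is given below; for brevity the LSTM between encoder and decoder is replaced by a plain convolutional bottleneck, which is an assumption:

```python
# Minimal sketch of step 503: the compensated image is downsampled back to
# the original size, encoded and decoded into an image residual, and the
# residual is added to the skip-connected image to be tested. The LSTM
# between encoder and decoder is replaced by a plain convolution here.
import torch.nn as nn
import torch.nn.functional as F

class ResidualRegistration(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU())
        self.bottleneck = nn.Conv2d(channels, channels, kernel_size=3,
                                    padding=1)  # stand-in for the LSTM
        self.decoder = nn.ConvTranspose2d(channels, 3, kernel_size=4,
                                          stride=2, padding=1)

    def forward(self, compensated, image_under_test):
        # Restore the compensated image to the size of the image to be tested.
        x = F.interpolate(compensated, size=image_under_test.shape[-2:],
                          mode="bilinear", align_corners=False)
        residual = self.decoder(self.bottleneck(self.encoder(x)))
        # Skip connection: registered image = image residual + image to be
        # tested (assumes even height and width so the sizes match).
        return residual + image_under_test
```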
  • the image to be tested is registered through the auxiliary image to obtain the registered image. Since the registered image already integrates the image features of the image to be tested and the auxiliary image, the correlation between the images can be modeled and mined, which benefits subsequent feature extraction and image reconstruction.
  • the image to be tested and the auxiliary image can be regarded as consecutive video frames corresponding to different displacement positions. This embodiment can make full use of the correlation of consecutive video frames corresponding to different displacement positions and model the preceding and subsequent frame images as a whole, which is conducive to better reconstruction of super-resolution microscope images.
  • a method for training a super-resolution model is provided.
  • This embodiment enables the image with improved resolution produced by the super-resolution model to be stored on the computer side for subsequent processing, or projected into the field of view of the eyepiece through virtual reality technology, thereby realizing a super-resolution microscope.
  • Fig. 7 shows a schematic flowchart of a super-resolution model training method provided by an exemplary embodiment of the present application.
  • the method can be performed by the computer device 120 shown in FIG. 1 or other computer devices, and the method includes the following steps:
  • Step 701 Obtain a training data set.
  • the training data set includes a sample image to be tested at a first resolution, at least one sample auxiliary image, and real annotations at a second resolution.
  • the real annotation corresponds to at least two sample images, and the at least two sample images include one sample image to be tested and at least one sample auxiliary image.
  • the sample image to be tested and the sample auxiliary image are determined from the first sample image sequence, in which the resolution of the images is the first resolution; the real label is determined from the second sample image sequence, in which the resolution of the images is the second resolution.
  • a method for determining the training data set takes the i-th frame image in the second sample image sequence as the real label, and determines, according to the display area of the real label, 2n+1 images from the first sample image sequence that have overlapping regions with that display area.
  • these 2n+1 images with overlapping regions are the sample images corresponding to the real label, and they comprise one sample image to be tested and 2n sample auxiliary images.
  • the manner of determining the sample image to be tested and the sample auxiliary images based on the images with overlapping areas includes: taking the image with the largest overlapping-area proportion as the sample image to be tested, and using the other images as sample auxiliary images.
  • the ratio of the overlapping area refers to the ratio of the size of the overlapping area to the size of the display area of the image.
  • if the images with overlapping regions are 2n+1 consecutively arranged images in the first sample image sequence, the image in the middle position can be used as the sample image to be tested and the other images as sample auxiliary images.
  • exemplarily, the first sample image sequence I_j ∈ R^(H×W×3) is collected by the microscope under low magnification, and the corresponding second sample image sequence is I'_i ∈ R^(sH×sW×3), where s represents the magnification factor, H and W represent the height and width of the image, and i and j represent frame indices (positive integers).
  • the frame indices j of the images I_j corresponding to I'_i belong to the interval [i−n, i+n], where n is a positive integer, giving 2n+1 frames in total.
  • taking super-resolution from 10X to 20X as an example, continuous frame data of the sample is first collected under the microscope at 10X and 20X. For a given frame of the 20X image, 2n+1 frames of low-magnification images that overlap with the high-magnification image are selected. In this way, 2n+1 frames of low-magnification images are obtained for each frame of high-magnification image, forming the training data set for the super-resolution model. Among the 2n+1 low-magnification frames corresponding to each high-magnification frame, there is one sample image to be tested and 2n sample auxiliary images. Exemplarily, the high-magnification image is the higher-resolution image and the low-magnification image is the lower-resolution image.
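  • Exemplarily, the construction of the training data set can be sketched as follows; the indexing details are assumptions consistent with the interval [i−n, i+n] described above:

```python
# Sketch of the training-set construction of step 701: each frame of the
# second (high-magnification) sequence is a real label paired with the
# 2n+1 overlapping frames of the first (low-magnification) sequence: one
# sample image to be tested plus 2n sample auxiliary images. Indexing
# details are assumptions.

def build_training_set(low_mag_frames, high_mag_frames, n=2):
    dataset = []
    for i, real_label in enumerate(high_mag_frames):
        if i - n < 0 or i + n >= len(low_mag_frames):
            continue                                 # no full [i-n, i+n] window
        window = low_mag_frames[i - n:i + n + 1]     # 2n+1 low-mag frames
        sample_under_test = window[n]                # middle frame
        sample_auxiliary = window[:n] + window[n + 1:]
        dataset.append((sample_under_test, sample_auxiliary, real_label))
    return dataset
```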
  • Step 702 call the initial super-resolution model to register the sample image to be tested and the sample auxiliary image to obtain a sample registration image.
  • since the display areas of the sample image to be tested and the sample auxiliary image both overlap with the display area of the real label, there is a high probability of an overlapping area between them, so the sample image to be tested and the sample auxiliary image need to be registered to determine the relationship between them and to supplement the image features contained in the sample image to be tested.
  • the sample registration image includes the sample target area.
  • the image to be tested of the sample and the auxiliary image of the sample are registered by at least one of an image registration method based on motion estimation and compensation of a displacement platform, and an image registration method based on an image registration module.
  • the initial super-resolution model includes an initial registration module
  • the implementation process of step 702 includes: calling the initial registration module in the initial super-resolution model to register the sample image to be tested and the sample auxiliary image to obtain the sample Register images.
  • Step 703 Extract high-resolution features of the sample from the sample registration image.
  • the network structure includes at least one of a neural network structure based on 4D image data (the three dimensions of an RGB image plus a time dimension) and a neural network structure based on long short-term memory modules.
  • sample low-resolution features are extracted from the sample registration image; they represent the image features of the sample target region at the first resolution and are mapped to obtain the sample high-resolution features.
  • the sample registration image and the sample auxiliary image are fused to obtain a sample fusion image, and the sample high-resolution features are extracted from the sample fusion image.
  • the initial super-resolution model includes an initial feature extraction and fusion module
  • the implementation process of step 703 includes: calling the initial feature extraction and fusion module in the initial super-resolution model to extract sample high-resolution features from the sample registration image.
  • in some cases, the microscope image sequence is obtained from real-time microscope video in which the observed sample is moving. To meet users' needs for real-time observation, the observed target area changes in real time, and the image to be tested changes accordingly. For example, at time t the image to be tested is an image of area a, while at time t+1 it is an image of area b.
  • the first sample registration image and the second sample registration image are fused to obtain a sample fusion registration image; sample high-resolution features are extracted from the sample fusion registration image.
  • the first sample registration image and the second sample registration image are sample registration images with overlapping regions
  • the first sample registration image or the second sample registration image is the sample registration image obtained in step 702 .
  • the fusing of the first sample registration image and the second sample registration image may be complete or partial. For example, the overlapping areas of the two sample registration images are fused to obtain the sample fusion registration image, or their entire display areas are fused to obtain the sample fusion registration image.
  • Step 704 Reconstruct the high-resolution features of the sample to obtain a sample target image of a second resolution corresponding to the sample image to be tested.
  • the image reconstruction is used to restore the sample target image of the sample target area with the second resolution.
  • the sample target image is an image corresponding to the sample image to be tested, and the sample target image also includes the sample target area.
  • the high-resolution feature of the sample is reconstructed through a neural network structure.
  • the network structure includes at least one of a neural network structure based on 4D image data (the three dimensions of an RGB image plus a time dimension) and a neural network structure based on long short-term memory modules.
  • the high-resolution feature of the sample is converted into pixel values of pixels in the sample target image through an image reconstruction network; a sample target image of a second resolution corresponding to the sample image to be tested is obtained through the pixel values of the pixels.
  • the initial super-resolution model includes an initial reconstruction module
  • the implementation process of step 704 includes: calling the initial reconstruction module in the initial super-resolution model to reconstruct the high-resolution features of the sample, and obtaining the second image corresponding to the sample image to be tested. Resolution sample target image.
  • Step 705 According to the difference between the sample target image and the real label, train the initial super-resolution model to obtain the target super-resolution model.
  • the initial super-resolution model is trained through an error backpropagation algorithm according to the difference between the sample target image and the real label.
  • exemplarily, a loss function is set, and the difference between the sample target image and the real label is substituted into the loss function to obtain a loss value; the initial super-resolution model is trained according to the loss value.
  • the initial super-resolution model includes an initial registration module, an initial feature extraction and fusion module, and an initial reconstruction module
  • training the initial super-resolution model refers to training the initial registration module, the initial feature extraction and fusion module, and the initial reconstruction module.
  • the process of training the initial super-resolution model is iterative: after each training round, it is judged whether the current training process satisfies the training termination condition. If it does, the super-resolution model obtained by the current training is used as the target super-resolution model; if it does not, training continues until the termination condition is met, and the model obtained at that point is used as the target super-resolution model.
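  • A minimal sketch of this training procedure is given below, assuming an L1 loss and a fixed number of epochs as the termination condition; the patent leaves both the loss function and the termination condition open:

```python
# Minimal sketch of the training loop of step 705, assuming an L1
# reconstruction loss and a fixed epoch budget as the termination
# condition; both choices are assumptions.
import torch
import torch.nn.functional as F

def train(model, dataset, epochs=10, lr=1e-4):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):                          # termination condition
        for sample_under_test, sample_auxiliary, real_label in dataset:
            target = model(sample_under_test, sample_auxiliary)
            loss = F.l1_loss(target, real_label)     # difference vs. real label
            optimizer.zero_grad()
            loss.backward()                          # error backpropagation
            optimizer.step()
    return model
```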
  • this embodiment provides a super-resolution model training method.
  • the method trains on multiple sample images to be tested and sample auxiliary images at the first resolution together with real labels at the second resolution, to ensure that a qualified super-resolution model is obtained.
  • the sample image to be tested and the sample auxiliary images can be regarded as consecutive video frames corresponding to different displacement positions in the first sample image sequence.
  • This training method can make full use of the correlation of consecutive video frames corresponding to different displacement positions and model the preceding and subsequent frame images as a whole, which facilitates training a super-resolution model that can better reconstruct super-resolution microscope images.
  • Fig. 8 shows a schematic flowchart of a microscope-based super-resolution method provided by an exemplary embodiment of the present application.
  • the method can be performed by the computer device 120 shown in FIG. 1 or other computer devices, and the method includes the following steps:
  • Step 801 Obtain a microscope video.
  • the microscope video is the first-resolution video under the microscope captured by the image acquisition device.
  • the sample on the microscope is moved, and the microscope video is collected by the image acquisition device.
  • the architecture is shown in FIG. 9 , and the sample is moved according to the moving direction of the microscope to obtain a microscope video 901 .
  • the resolution of the microscope video is the first resolution, and the magnification corresponding to the first resolution is relatively low.
  • the magnification of the first resolution can be 10X.
  • Step 802 Determine the image to be tested and the auxiliary image from the microscope video.
  • the image to be tested is an image at any moment in the microscope video. Exemplarily, as shown in FIG. 9 , the image at time t in the microscope video 901 is determined as the image to be tested.
  • the auxiliary image is an image in the microscope video that overlaps with the image to be tested.
  • the images at time t−2, time t−1, time t+1, and time t+2 in the microscope video 901 are determined as auxiliary images.
  • Step 803 call the super-resolution model, and determine the target image according to the image to be tested and the auxiliary image.
  • the super-resolution model is used to improve the resolution of the image under test based on the auxiliary image.
  • the super-resolution model here may be the target super-resolution model trained in the embodiment shown in FIG. 7 .
  • the target image is an image with the second resolution corresponding to the image to be tested.
  • the display area of the target image is the same as that of the image to be tested.
  • the second resolution is greater than the first resolution.
  • the magnification of the first resolution is 10X
  • the magnification of the second resolution is 40X.
  • the super-resolution model includes a registration module, a feature extraction and fusion module, and a reconstruction module.
  • the registration module is used to register the input microscope image sequence;
  • the feature extraction and fusion module is used to extract and fuse high-resolution features from the registration image;
  • the reconstruction module is used to reconstruct the higher-resolution image of the target area. image.
  • a super-resolution model 902 is called, and a microscope video 901 is substituted into the super-resolution model 902 to obtain a target image 903 .
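  • Exemplarily, the end-to-end use of steps 801 to 803 can be sketched as follows; the frame-extraction details (OpenCV) and the neighbourhood size n are assumptions:

```python
# Sketch of steps 801-803: read the microscope video, take the frame at
# time t as the image to be tested and its neighbours as auxiliary images,
# then call the super-resolution model. OpenCV frame extraction and the
# neighbourhood size n are assumptions.
import cv2
import torch

def super_resolve_at(video_path, model, t, n=2):
    cap = cv2.VideoCapture(video_path)
    frames = []
    ok, frame = cap.read()
    while ok:
        # OpenCV yields HxWx3 uint8 (BGR); convert to a float CHW tensor.
        frames.append(torch.from_numpy(frame).permute(2, 0, 1).float() / 255)
        ok, frame = cap.read()
    cap.release()
    image_under_test = frames[t].unsqueeze(0)
    auxiliary = [frames[k].unsqueeze(0)
                 for k in range(max(0, t - n), min(len(frames), t + n + 1))
                 if k != t]
    with torch.no_grad():
        return model(image_under_test, auxiliary)  # second-resolution target image
```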
  • this embodiment provides an end-to-end super-resolution model that can obtain high-resolution microscope images directly from input microscope videos. Under a low-magnification objective the field of view is larger and scanning is faster, so this method can obtain high-resolution data more quickly for subsequent image processing and analysis, such as various artificial-intelligence-based auxiliary diagnoses. It can also make full use of the existing hardware of ordinary microscopes without additional equipment investment. Moreover, algorithm-based super-resolution does not need to be bound to reagents or samples; it only requires collecting continuous videos of different samples to train the corresponding model and then deploy the application.
  • the auxiliary diagnosis based on artificial intelligence technology includes pathological diagnosis on pathological slices. In this case, the microscope video can be obtained by moving the pathological slices and observing the pathological slices with a microscope.
  • Fig. 10 shows a schematic structural diagram of a microscope-based super-resolution device provided by an exemplary embodiment of the present application.
  • the device can be implemented as all or a part of computer equipment through software, hardware or a combination of the two, and the device 1000 includes:
  • the acquiring unit 1001 is configured to acquire an image to be tested and at least one auxiliary image, where the image to be tested includes a target area, the display area of the auxiliary image overlaps with the target area, and both the image to be tested and the auxiliary image are microscope images of the first resolution;
  • a registration unit 1002 configured to register the image to be tested and the auxiliary image to obtain a registration image
  • An extraction unit 1003 configured to extract high-resolution features from the registration image, where the high-resolution features are used to represent the image features of the target region at a second resolution, the second resolution being greater than the first resolution;
  • the reconstruction unit 1004 is configured to reconstruct the high-resolution features to obtain the target image of the second resolution corresponding to the image to be tested.
  • the registration unit 1002 is further configured to calculate an optical flow prediction map between the image to be tested and the auxiliary image, where the optical flow prediction map is used to predict the optical flow change between the image to be tested and the auxiliary image; obtain a motion-compensated compensation image according to the optical flow prediction map and the auxiliary image; and encode and decode the compensation image to obtain the registration image.
  • the registration unit 1002 is also configured to call the optical flow prediction network to calculate the optical flow prediction map described above.
  • the registration unit 1002 is also used to invoke the super-resolution network to upsample the optical flow prediction map to obtain an upsampling map, and to perform bilinear interpolation on the upsampling map using the auxiliary image to obtain the motion-compensated compensation image.
  • the registration unit 1002 is also configured to invoke a deconvolution network to encode and decode the compensated image to obtain an image residual, and to fuse the image residual with the image to be tested to obtain the registration image.
  • the extracting unit 1003 is further configured to extract a low-resolution feature from the registration image, where the low-resolution feature is used to represent the image features of the target area at the first resolution, and to map the low-resolution feature to obtain the high-resolution feature.
  • the extraction unit 1003 is further configured to fuse the registration image and the auxiliary image to obtain a fusion image; extract the high-resolution feature from the fusion image.
  • the reconstruction unit 1004 is further configured to convert the high-resolution features into pixel values of pixels in the target image through an image reconstruction network, and to obtain from these pixel values the target image of the second resolution corresponding to the image to be tested.
  • the acquiring unit 1001 is further configured to determine the image to be tested and an image that satisfies an association condition with the image to be tested in the microscope image sequence of the first resolution ; among the images that satisfy the association condition with the image to be tested, determine the image that has the overlapping area with the target area and whose proportion of the overlapping area is greater than a reference value as the auxiliary image.
  • The registration unit 1002 is configured to invoke the target super-resolution model to register the image to be tested and the auxiliary image, obtaining the registration image;
  • the extraction unit 1003 is configured to invoke the target super-resolution model to extract the high-resolution features from the registration image;
  • the reconstruction unit 1004 is configured to invoke the target super-resolution model to reconstruct the high-resolution features, obtaining the target image of the second resolution corresponding to the image to be tested.
  • The apparatus further includes a training unit 1005.
  • The training unit 1005 is configured to obtain a training data set including the sample image to be tested of the first resolution, at least one sample auxiliary image, and the ground-truth label of the second resolution; invoke the initial super-resolution model to register the sample image to be tested and the sample auxiliary image, obtaining a sample registration image; extract sample high-resolution features from the sample registration image; reconstruct the sample high-resolution features, obtaining a sample target image of the second resolution corresponding to the sample image to be tested; and train the initial super-resolution model according to the difference between the sample target image and the ground-truth label, obtaining the target super-resolution model.
  • The sample image to be tested and the sample auxiliary image are determined from a first sample image sequence, all of whose images are of the first resolution; the ground-truth label is determined from a second sample image sequence, all of whose images are of the second resolution. The training unit 1005 is further configured to take the i-th frame of the second sample image sequence as the ground-truth label; determine from the first sample image sequence, based on the display region of the ground-truth label, 2n+1 images whose display regions overlap that of the ground-truth label, i and n being positive integers; and determine the sample image to be tested and the sample auxiliary image from those overlapping images.
  • An auxiliary image is used to register the image to be tested, high-resolution features are extracted from the registration image, and a higher-resolution image is reconstructed from the high-resolution features. This method registers the image to be tested against the auxiliary image and thereby supplements it with the image details of the auxiliary image; the resulting registration image fuses the image features of both, and the correlation between multiple images can be modelled and mined, which benefits subsequent feature extraction and image reconstruction, so that a higher-resolution image with more accurate image details is reconstructed.
  • Fig. 11 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • The computer device 1100 includes a central processing unit (CPU) 1101, a system memory 1104 including a random access memory (RAM) 1102 and a read-only memory (ROM) 1103, and a system bus 1105 connecting the system memory 1104 and the central processing unit 1101.
  • The computer device 1100 also includes a basic input/output system (I/O system) 1106 that helps transfer information between the components within the computer, and a mass storage device 1107 for storing an operating system 1113, application programs 1114, and other program modules 1115.
  • The input/output system 1106 includes a display 1108 for displaying information and an input device 1109, such as a mouse or keyboard, for user input. Both the display 1108 and the input device 1109 are connected to the central processing unit 1101 through an input/output controller 1110 connected to the system bus 1105.
  • The input/output system 1106 may also include the input/output controller 1110 for receiving and processing input from a number of other devices such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 1110 also provides output to a display screen, printer, or other type of output device.
  • The mass storage device 1107 is connected to the central processing unit 1101 through a mass storage controller (not shown) connected to the system bus 1105.
  • The mass storage device 1107 and its associated computer-readable media provide non-volatile storage for the computer device 1100. That is, the mass storage device 1107 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.
  • Computer-readable media may comprise computer storage media and communication media.
  • Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data.
  • Computer storage media include RAM, ROM, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state storage technologies, CD-ROM, digital versatile discs (DVD) or other optical storage, tape cartridges, magnetic tape, disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media are not limited to the above. The system memory 1104 and the mass storage device 1107 may be collectively referred to as memory.
  • The computer device 1100 may also run via a remote computer connected to a network, such as the Internet. That is, the computer device 1100 can be connected to the network 1112 through the network interface unit 1111 connected to the system bus 1105, or the network interface unit 1111 can be used to connect to other types of networks or remote computer systems (not shown).
  • the computer device 1100 may refer to a terminal, or may refer to a server.
  • A computer device includes a processor and a memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to cause the computer device to implement the microscope-based super-resolution method described above.
  • A computer storage medium is also provided. The computer-readable storage medium stores at least one computer program, which is loaded and executed by a processor to cause a computer to implement the microscope-based super-resolution method described above.
  • a computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • a processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the microscope-based super-resolution method as described above.
  • the present application further provides a computer program product containing instructions, which, when run on a computer device, causes the computer device to execute the microscope-based super-resolution method described in the above aspects.
  • The program may be stored in a computer-readable storage medium; the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.


Abstract

A microscope-based super-resolution method, apparatus, device, and medium, relating to the field of image processing. The method includes: acquiring an image to be tested and at least one auxiliary image, the image to be tested containing a target region, the display region of the auxiliary image overlapping the target region, the image to be tested and the auxiliary image both being microscope images of a first resolution (302); registering the image to be tested and the auxiliary image to obtain a registration image (304); extracting high-resolution features from the registration image, the high-resolution features representing the image features of the target region at a second resolution, the second resolution being greater than the first resolution (306); and reconstructing the high-resolution features to obtain a target image of the second resolution corresponding to the image to be tested (308). With the above method, apparatus, device, and medium, a microscope image sequence is processed to obtain a high-resolution image with good reconstruction quality.

Description

Microscope-based super-resolution method, apparatus, device, and medium
This application claims priority to Chinese Patent Application No. 202110758195.6, entitled "Microscope-based super-resolution method, apparatus, device, and medium" and filed on July 5, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image processing, and in particular to a microscope-based super-resolution method, apparatus, device, and medium.
Background
Super-resolution is a commonly used technique when processing microscope images. Super-resolution refers to increasing the resolution of an original image by hardware or software means; for example, in the field of medical technology, recovering a corresponding high-resolution microscope image from a low-resolution microscope image.
The related art achieves this with a super-resolution model that includes modules such as feature extraction, non-linear mapping, and reconstruction. Technicians train the super-resolution model in advance on pairs of low-resolution and high-resolution microscope images; in use, a single low-resolution microscope image is input and a corresponding high-resolution microscope image is output.
The related art considers only the correspondence between a single low-resolution microscope image and a single high-resolution microscope image, so the resulting high-resolution microscope image is of poor quality and accurate image details cannot be obtained.
Summary
Embodiments of this application provide a microscope-based super-resolution method, apparatus, device, and medium for obtaining high-resolution microscope images with good reconstruction quality. The technical solution is as follows:
According to one aspect of this application, a microscope-based super-resolution method is provided. The method is executed by a computer device and includes:
acquiring an image to be tested and at least one auxiliary image, the image to be tested containing a target region, the display region of the auxiliary image overlapping the target region, the image to be tested and the auxiliary image both being microscope images of a first resolution;
registering the image to be tested and the auxiliary image to obtain a registration image;
extracting high-resolution features from the registration image, the high-resolution features representing the image features of the target region at a second resolution, the second resolution being greater than the first resolution;
reconstructing the high-resolution features to obtain a target image of the second resolution corresponding to the image to be tested.
According to one aspect of this application, a microscope-based super-resolution apparatus is provided, including:
an acquiring unit configured to acquire an image to be tested and at least one auxiliary image, the image to be tested containing a target region, the display region of the auxiliary image overlapping the target region, the image to be tested and the auxiliary image both being microscope images of a first resolution;
a registration unit configured to register the image to be tested and the auxiliary image to obtain a registration image;
an extraction unit configured to extract high-resolution features from the registration image, the high-resolution features representing the image features of the target region at a second resolution, the second resolution being greater than the first resolution;
a reconstruction unit configured to reconstruct the high-resolution features to obtain a target image of the second resolution corresponding to the image to be tested.
According to another aspect of this application, a computer device is provided. The computer device includes a processor and a memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to cause the computer device to implement the microscope-based super-resolution method described above.
According to another aspect of this application, a computer storage medium is provided. The computer-readable storage medium stores at least one computer program, which is loaded and executed by a processor to cause a computer to implement the microscope-based super-resolution method described above.
According to another aspect of this application, a computer program product or computer program is provided. The computer program product or computer program includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the microscope-based super-resolution method described above.
The auxiliary image is used to register the image to be tested, high-resolution features are extracted from the registration image, and an image of higher resolution is reconstructed from the high-resolution features. Because the method registers the image to be tested against the auxiliary image, it supplements the image to be tested with image details from the auxiliary image; the resulting registration image fuses the image features of both, modelling and mining the correlation between multiple images, which benefits subsequent feature extraction and image reconstruction, so that a higher-resolution image with more accurate image details can be reconstructed.
Brief Description of the Drawings
Fig. 1 is a schematic structural diagram of a computer system provided by an exemplary embodiment of this application;
Fig. 2 is a schematic diagram of a microscope-based super-resolution model provided by an exemplary embodiment of this application;
Fig. 3 is a schematic flowchart of a microscope-based super-resolution method provided by an exemplary embodiment of this application;
Fig. 4 is a schematic diagram of an image registration model provided by an exemplary embodiment of this application;
Fig. 5 is a schematic flowchart of an image registration method provided by an exemplary embodiment of this application;
Fig. 6 is a schematic diagram of motion compensation provided by an exemplary embodiment of this application;
Fig. 7 is a schematic flowchart of a super-resolution model training method provided by an exemplary embodiment of this application;
Fig. 8 is a schematic flowchart of a microscope-based super-resolution method provided by an exemplary embodiment of this application;
Fig. 9 is a schematic diagram of a microscope-based super-resolution method provided by an exemplary embodiment of this application;
Fig. 10 is a schematic structural diagram of a microscope-based super-resolution apparatus provided by an exemplary embodiment of this application;
Fig. 11 is a schematic structural diagram of a computer device provided by an exemplary embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, implementations of this application are described in further detail below with reference to the accompanying drawings.
First, the terms used in the embodiments of this application are introduced:
Super-resolution: increasing the resolution of an original image by hardware or software means. When super-resolution is applied to microscope images, the low-resolution image may be an image under a 10X objective, from which the image of the corresponding region under a 20X objective is determined.
Optical flow: the apparent motion of objects expressed through image brightness patterns. Optical flow describes how an image changes and, because it carries information about target motion, can be used by an observer to determine how a target is moving.
Motion compensation: a method of describing the difference between adjacent frames, specifically how each small block of the previous frame moves to a certain position in the current frame.
Image registration: the process of matching and superimposing two or more images acquired at different times, by different sensors (imaging devices), or under different conditions (weather, illumination, camera position and angle, etc.). For example, registration methods include at least one of an image registration method based on stage motion estimation and compensation and an image registration method based on an image registration module.
As microscopes become digital, most microscopes are now equipped with an image acquisition device that captures digital images of the eyepiece field of view in real time for subsequent storage and analysis. At a given objective magnification, the highest resolution of a microscope is limited by the numerical aperture of the objective, that is, the diffraction-limited resolution. Various methods now exist to break through the diffraction limit and thereby achieve super-resolution of microscope images; super-resolution methods yield a resolution higher than that of an ordinary microscope and reveal clearer details of the sample, which benefits fields such as scientific research and disease diagnosis.
Fig. 1 shows a block diagram of a computer system provided by an exemplary embodiment of this application. The computer system includes a computer device 120, an image acquisition device 140, and a microscope 160.
The computer device 120 runs an image-processing application, which may be a mini program within an app, a dedicated application, or a web client. The computer device 120 is at least one of a desktop host, a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, and a desktop computer.
The computer device 120 and the image acquisition device 140 are connected in a wired or wireless manner.
The image acquisition device 140 is used to capture microscope images. The image acquisition device 140 is at least one of a camera, a video camera, a still camera, a scanner, a smartphone, a tablet computer, and a laptop computer.
The microscope 160 is used to obtain magnified images of a sample. The microscope 160 is at least one of an optical microscope and a polarizing microscope.
Fig. 2 shows a schematic diagram of a microscope-based super-resolution model provided by an exemplary embodiment of this application. The super-resolution model includes a registration module 201, a feature extraction and fusion module 202, and a reconstruction module 203.
The registration module 201 registers the input microscope image sequence; its input is the microscope image sequence 204 and its output is a registration image. The microscope image sequence 204 consists of at least two microscope images arranged in chronological order, all of the first resolution. Optionally, the microscope image sequence 204 includes an image to be tested and auxiliary images; the image to be tested is the image containing the target region, and the display region of an auxiliary image coincides with the target region, either completely or partially.
The feature extraction and fusion module 202 extracts and fuses high-resolution features from the registration image; its input is the registration image and its output is the high-resolution features. The high-resolution features represent image features at a second resolution, the second resolution being greater than the first resolution.
The reconstruction module 203 reconstructs a higher-resolution image of the target region; its input is the high-resolution features and its output is the target image 205, that is, the image of the second resolution corresponding to the target region.
In summary, after the microscope image sequence of the first resolution passes through the super-resolution model, a target image of the second resolution is obtained, increasing the resolution of the microscope image. In obtaining the target image of the second resolution, both the image to be tested and the auxiliary images in the first-resolution sequence are considered, so the correlation between microscope images corresponding to different positions is fully exploited and the preceding and following frames are modelled as a whole, allowing a super-resolved microscope image to be better reconstructed. Illustratively, the preceding and following frames refer to the image to be tested and the auxiliary images.
Fig. 3 shows a schematic flowchart of a microscope-based super-resolution method provided by an exemplary embodiment of this application. The method may be executed by the computer device 120 shown in Fig. 1 and includes the following steps:
Step 302: acquire an image to be tested and at least one auxiliary image, where the image to be tested contains a target region, the display region of the auxiliary image overlaps the target region, and the image to be tested and the auxiliary image are both microscope images of a first resolution.
The first resolution is set empirically or adjusted flexibly for the application scenario. Illustratively, the first resolution may be the resolution of an image observed under a 10X objective, or under a 20X objective, and so on. A microscope image of the first resolution is an image obtained by observing a sample under the objective corresponding to the first resolution.
The embodiments of this application do not limit how the image to be tested and the auxiliary image are acquired, as long as both are microscope images of the first resolution and the display region of the auxiliary image overlaps the target region contained in the image to be tested. In an exemplary embodiment, the image to be tested and the auxiliary image are selected from a library of microscope images of the first resolution. In another exemplary embodiment, they are acquired from a microscope image sequence of the first resolution.
A microscope image sequence is a sequence containing at least two microscope images. Optionally, the sequence is obtained from a microscope video: if the video contains multiple frames, the frames are arranged in chronological order to obtain the microscope image sequence.
The auxiliary image provides auxiliary information for the process of reconstructing a higher-resolution image corresponding to the image to be tested, improving the reconstruction quality. There may be one auxiliary image or several.
The target region is the region of the observed sample whose details need to be magnified; it can be set empirically or adjusted flexibly for the application scenario, which the embodiments of this application do not limit. The image to be tested is the image in the microscope image sequence that contains the target region. Optionally, "containing the target region" means that the display region of the image to be tested is the same as the target region, or larger than it. Illustratively, the display region of the image to be tested is the region of the observed sample that the image presents.
It should be noted that the display region of the auxiliary image may overlap the target region completely or partially. For example, there may be a 60% overlap between the display region of the auxiliary image and the target region.
Optionally, acquiring the image to be tested and the at least one auxiliary image includes: in the microscope image sequence of the first resolution, determining the image to be tested and the images satisfying an association condition with it; and, among those images, determining as auxiliary images the images that overlap the target region with an overlap proportion greater than a reference value.
Illustratively, determining the image to be tested in the first-resolution microscope image sequence includes: determining the images that contain the target region; if there is exactly one such image, taking it as the image to be tested; if there are several, taking any one of them, or the one that satisfies a selection condition. The selection condition can be set empirically; illustratively, it may be that the target region lies closest to the centre of the display region.
The overlap proportion is the ratio of the overlapping region to the display region, that is, the ratio of their sizes. The association condition is set empirically; illustratively, it may select the first reference number of images preceding and the second reference number of images following the image to be tested in the sequence. The two reference numbers are set empirically and may be equal or different; for example, both may be 1, or both may be 2. Illustratively, images satisfying the association condition with the image to be tested may also be called its nearby or surrounding images. For example, if the sequence consists of images 1 to 5 in chronological order, image 3 is the image to be tested, the reference value is 60%, and both reference numbers are 2, then among images 1, 2, 4, and 5 around image 3, the images that overlap the target region with an overlap proportion reaching 60% are determined as auxiliary images.
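As an illustration of the selection rule just described, the following minimal Python sketch picks auxiliary images from the frames surrounding the image to be tested by overlap proportion. The helper names, the rectangle representation of display regions, and the availability of stage coordinates for each frame are assumptions made for illustration; the application does not prescribe an implementation.

```python
# Minimal sketch of auxiliary-image selection (hypothetical helpers).
# Display regions are (x0, y0, x1, y1) rectangles in stage coordinates.

def overlap_ratio(region_a, region_b):
    """Ratio of the intersection area to the area of region_a."""
    ax0, ay0, ax1, ay1 = region_a
    bx0, by0, bx1, by1 = region_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    area_a = (ax1 - ax0) * (ay1 - ay0)
    return (iw * ih) / area_a if area_a > 0 else 0.0

def select_auxiliary(frames, regions, test_idx, n_neighbors=2, ref_value=0.6):
    """Pick neighbouring frames whose display regions overlap the target
    region of the image to be tested by more than the reference value."""
    lo = max(0, test_idx - n_neighbors)
    hi = min(len(frames), test_idx + n_neighbors + 1)
    target = regions[test_idx]
    return [frames[j] for j in range(lo, hi)
            if j != test_idx and overlap_ratio(regions[j], target) > ref_value]
```

With the example above (images 1 to 5, both reference numbers equal to 2, reference value 60%), select_auxiliary(frames, regions, test_idx=2) returns exactly those of images 1, 2, 4, and 5 whose overlap with the target region of image 3 exceeds 0.6 (indices are zero-based).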
Step 304: register the image to be tested and the auxiliary image to obtain a registration image.
Since the image to be tested and the auxiliary image share an overlapping region, they need to be registered to determine the relationship between them and to supplement the image features contained in the image to be tested. The registration image includes the target region.
Optionally, the image to be tested and the auxiliary image are registered by at least one of an image registration method based on stage motion estimation and compensation and an image registration method based on an image registration module.
Step 306: extract high-resolution features from the registration image. The high-resolution features represent the image features of the target region at a second resolution, the second resolution being greater than the first resolution.
Because the registration image is obtained by registering the first-resolution image to be tested with the auxiliary image, without any other processing, its resolution is the first resolution.
Optionally, the high-resolution features are extracted from the registration image with a neural network structure; illustratively, the network structure includes at least one of a neural network based on 4D (four-dimensional) image data (the three dimensions of an RGB (red, green, blue) image plus the time dimension) and a neural network based on long short-term memory modules.
Optionally, low-resolution features are first extracted from the registration image, the low-resolution features representing the image features of the target region at the first resolution; the low-resolution features are then mapped to obtain the high-resolution features. Illustratively, the low-resolution features are f2×f2 and are non-linearly mapped into high-resolution features f3×f3; the size of the low-resolution features is smaller than that of the high-resolution features, that is, f2 is smaller than f3.
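The extract-then-map idea can be sketched as a small convolutional module, written here in PyTorch as an illustrative assumption rather than the application's actual architecture. The only point carried over from the text is that low-resolution features of smaller size are lifted by a non-linear mapping into high-resolution features of larger size; in this sketch the lift is modelled as a widening of the feature channels.

```python
import torch.nn as nn

class ExtractAndMap(nn.Module):
    """Extract low-resolution features, then non-linearly map them to
    high-resolution features (all layer sizes are illustrative)."""
    def __init__(self, in_ch=3, low_ch=32, high_ch=64):
        super().__init__()
        self.extract = nn.Sequential(           # low-resolution features
            nn.Conv2d(in_ch, low_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.map = nn.Sequential(               # non-linear mapping to
            nn.Conv2d(low_ch, high_ch, 1),      # high-resolution features
            nn.ReLU(inplace=True),
        )

    def forward(self, registered):
        return self.map(self.extract(registered))
```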
Optionally, the registration image is fused with the auxiliary image to obtain a fused image, and the high-resolution features are extracted from the fused image. Illustratively, the fusion may be complete or partial: for example, the overlapping regions of the registration image and the auxiliary image may be fused, or their entire display regions may be fused, to obtain the fused image.
In practical scenarios, the microscope image sequence is obtained from a real-time microscope video in which the observed sample is moving; to meet the user's need to observe the sample in real time, the observed target region changes in real time, and the image to be tested changes with it. For example, at time t the image to be tested concerns region a, while at time t+1 it concerns region b. Optionally, a first registration image and a second registration image are fused to obtain a fused registration image, from which the high-resolution features are extracted. The first and second registration images are registration images with an overlapping region; the first or the second registration image is the registration image obtained in step 304. Further, the fusion of the two may be complete or partial: their overlapping regions may be fused, or their entire display regions, to obtain the fused registration image.
Step 308: reconstruct the high-resolution features to obtain a target image of the second resolution corresponding to the image to be tested.
Image reconstruction restores the target image of the target region at the second resolution. The target image corresponds to the image to be tested and also includes the target region.
Optionally, the high-resolution features are reconstructed with a neural network structure, the network structure including at least one of a neural network based on 4D image data (the three dimensions of an RGB image plus the time dimension) and a neural network based on long short-term memory modules.
Optionally, an image reconstruction network converts the high-resolution features into the pixel values of the pixels in the target image, and the target image of the second resolution corresponding to the image to be tested is obtained from those pixel values.
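One hedged way to realise "converting high-resolution features into the pixel values of the target image" is a sub-pixel convolution head, sketched below in PyTorch. The scale factor and channel width are assumptions; the text fixes only that the reconstruction network outputs the pixel values of the second-resolution target image.

```python
import torch.nn as nn

class Reconstruct(nn.Module):
    """Turn high-resolution features into target-image pixel values:
    a convolution produces scale*scale RGB sub-pixel channels, and
    PixelShuffle rearranges them into an image at scale x resolution."""
    def __init__(self, high_ch=64, scale=2):
        super().__init__()
        self.to_pixels = nn.Conv2d(high_ch, 3 * scale * scale, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, high_res_features):
        return self.shuffle(self.to_pixels(high_res_features))
```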
In an exemplary embodiment, steps 304, 306, and 308 above may be implemented by invoking a target super-resolution model; that is, the target super-resolution model is invoked to register the image to be tested and the auxiliary image to obtain the registration image, to extract the high-resolution features from the registration image, and to reconstruct the high-resolution features to obtain the target image of the second resolution corresponding to the image to be tested. The target super-resolution model is a model capable of reconstructing an image of the second resolution from the first-resolution image to be tested and auxiliary image; it is obtained by training, and the training process is detailed in the embodiment shown in Fig. 7, so it is not repeated here.
In an exemplary embodiment, the target super-resolution model includes a target registration module, a target feature extraction and fusion module, and a target reconstruction module, and steps 304, 306, and 308 may be implemented by invoking these modules respectively: the target registration module registers the image to be tested and the auxiliary image to obtain the registration image; the target feature extraction and fusion module extracts the high-resolution features from the registration image; and the target reconstruction module reconstructs the high-resolution features to obtain the target image of the second resolution corresponding to the image to be tested.
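Putting the three invocations together, driving a trained target super-resolution model could look like the following sketch; the attribute names registration, extract_and_fuse, and reconstruct are hypothetical stand-ins for the target registration module, target feature extraction and fusion module, and target reconstruction module.

```python
# Minimal sketch of steps 304-308 with a trained target model
# (attribute names are hypothetical).
def super_resolve(model, image_to_test, auxiliary_images):
    registered = model.registration(image_to_test, auxiliary_images)  # step 304
    features = model.extract_and_fuse(registered)                     # step 306
    return model.reconstruct(features)                                # step 308
```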
In summary, this embodiment registers the image to be tested against the auxiliary image, extracts high-resolution features from the registration image, and reconstructs a higher-resolution image from them. The method supplements the image to be tested with image details from the auxiliary image; the registration image obtained by registration fuses the image features of both and models and mines the correlation between multiple images, which benefits subsequent feature extraction and image reconstruction, so that a higher-resolution image with more accurate image details is reconstructed.
The following embodiment provides an exemplary image registration method that registers the image to be tested with the auxiliary image and establishes the correlation between them, which benefits the subsequent processing flow.
Fig. 4 shows a schematic diagram of an image registration model provided by an exemplary embodiment of this application. The image registration model includes an optical flow prediction network 401, a super-resolution network 402, and a deconvolution network 403. Illustratively, the image registration model may be the registration module 201 in the super-resolution model shown in Fig. 2.
The optical flow prediction network 401 determines the optical flow prediction map of the image to be tested and the auxiliary image; the optical flow prediction map predicts the optical flow changes between the two. Its inputs are the image to be tested 404 and the auxiliary image 405, and its output is the optical flow prediction map 406.
The super-resolution network 402 performs motion compensation on the optical flow prediction map to obtain a compensated image. Its inputs are the optical flow prediction map 406 and the auxiliary image 405, and its output is the compensated image with motion compensation.
The deconvolution network 403 encodes and decodes the compensated image to obtain the registration image. Its inputs are the compensated image and the image to be tested 404, and its output is the registration image 407.
Fig. 5 shows a schematic flowchart of an image registration method provided by an exemplary embodiment of this application. The method may be executed by the computer device 120 shown in Fig. 1 and includes the following steps:
Step 501: compute the optical flow prediction map between the image to be tested and the auxiliary image.
The optical flow prediction map predicts the optical flow changes between the image to be tested and the auxiliary image. Since the two images are captured at different times, they differ in optical flow information, and optical flow can express changes in an image or the motion of a region.
Optionally, this step includes the following sub-step: invoke the optical flow prediction network and compute the optical flow prediction map from the optical flow field of the image to be tested and that of the auxiliary image.
Illustratively, suppose the image to be tested is the i-th frame of the microscope image sequence and the auxiliary image is the j-th frame; let I_i denote the optical flow field of the image to be tested and I_j that of the auxiliary image. The optical flow prediction map is then F_{i→j}(h_{i→j}, v_{i→j}) = ME(I_i, I_j; θ_ME), where h_{i→j} and v_{i→j} are the horizontal and vertical components of F_{i→j}, ME(·) is the optical flow estimation function, and θ_ME its parameters; i and j are positive integers.
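In code, the call F_{i→j} = ME(I_i, I_j; θ_ME) might look as follows, assuming PyTorch; flow_net stands in for the optical flow prediction network (whose architecture the text does not fix), and the two output channels are read as the horizontal component h_{i→j} and the vertical component v_{i→j}.

```python
import torch

# Minimal sketch of the flow-prediction call (flow_net is an assumed
# module mapping a concatenated frame pair to a 2-channel flow map).
def predict_flow(flow_net, frame_i, frame_j):
    pair = torch.cat([frame_i, frame_j], dim=1)  # (B, 6, H, W) for RGB inputs
    flow = flow_net(pair)                        # (B, 2, H, W)
    h, v = flow[:, 0], flow[:, 1]                # horizontal / vertical parts
    return flow, h, v
```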
Step 502: obtain a compensated image with motion compensation from the optical flow prediction map and the auxiliary image.
Optionally, this step includes the following sub-steps:
1. Invoke the super-resolution network to up-sample the optical flow prediction map, obtaining an up-sampled map.
The up-sampled map is a grid map obtained by up-sampling the optical flow prediction map.
Illustratively, as shown in Fig. 6, the optical flow prediction map 601 is up-sampled by a grid generator 603 to obtain the up-sampled map. The up-sampled map is larger than the optical flow prediction map; for example, if the optical flow prediction map is 4×4, the up-sampled map may be 16×16.
Optionally, the super-resolution network is an SPMC (Sub-Pixel Motion Compensation) network.
2. Interpolate the up-sampled map with the auxiliary image to obtain the compensated image with motion compensation.
Because the up-sampled map is larger than the optical flow prediction map, it is incomplete: some of its grid cells carry no values, so interpolation is needed to complete it. Illustratively, the interpolation may be linear or bilinear.
Illustratively, as shown in Fig. 6, the auxiliary image 602 is sampled by a sampler 604 to obtain a sampling result, which is inserted into the up-sampled map to obtain the compensated image 605 with motion compensation.
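A minimal PyTorch sketch of this up-sample-and-interpolate step follows: the flow map is up-sampled (the role of the grid generator 603) and the auxiliary image is bilinearly sampled along the up-sampled flow with grid_sample (the role of the sampler 604). This approximates sub-pixel motion compensation under stated assumptions; a true SPMC layer scatters low-resolution pixels onto the high-resolution grid rather than sampling an up-sampled image, and the scale factor here is an assumption.

```python
import torch
import torch.nn.functional as F

def motion_compensate(aux, flow, scale=4):
    """Warp the auxiliary image along an up-sampled flow field."""
    b, _, h, w = flow.shape
    hh, ww = h * scale, w * scale
    # up-sample the flow and rescale its displacements to the new grid
    up_flow = F.interpolate(flow, size=(hh, ww), mode='bilinear',
                            align_corners=True) * scale
    ys, xs = torch.meshgrid(torch.arange(hh), torch.arange(ww), indexing='ij')
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, hh, ww)
    coords = base + up_flow                                   # displaced pixels
    # normalise pixel coordinates to [-1, 1] for grid_sample
    coords[:, 0] = 2 * coords[:, 0] / (ww - 1) - 1
    coords[:, 1] = 2 * coords[:, 1] / (hh - 1) - 1
    grid = coords.permute(0, 2, 3, 1)                         # (B, hh, ww, 2)
    aux_up = F.interpolate(aux, size=(hh, ww), mode='bilinear',
                           align_corners=True)
    return F.grid_sample(aux_up, grid, mode='bilinear', align_corners=True)
```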
Step 503: encode and decode the compensated image to obtain the registration image.
Optionally, this step includes the following sub-steps:
1. Invoke the deconvolution network to encode and decode the compensated image, obtaining an image residual.
Because the size of the compensated image matches that of the up-sampled map, the compensated image must be down-sampled to restore its original size.
Optionally, the deconvolution network may be an encoder-decoder network; that is, the compensated image is encoded and decoded by an encoder-decoder network to obtain the registration image.
Optionally, the encoder-decoder network consists of an encoder, an LSTM (Long Short-Term Memory network), and a decoder.
2. Fuse the image residual with the image to be tested to obtain the registration image.
Optionally, the image residual is summed with the pixels of the image to be tested to obtain the registration image.
Optionally, the image to be tested is connected to the deconvolution network via skip connections.
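A minimal sketch of this encode-decode step follows, assuming (as in the SPMC example above) that the compensated image is four times the size of the image to be tested, so that two stride-2 convolutions restore the original size. The LSTM that the text places between encoder and decoder is omitted for brevity, and all widths are illustrative.

```python
import torch.nn as nn

class ResidualRegistration(nn.Module):
    """Encoder-decoder producing an image residual, which a skip
    connection then fuses with the image to be tested."""
    def __init__(self, ch=3, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(  # two stride-2 convs undo 4x upsampling
            nn.Conv2d(ch, hidden, 4, stride=2, padding=1), nn.ReLU(True),
            nn.Conv2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(True),
        )
        self.decoder = nn.ConvTranspose2d(hidden, ch, 3, stride=1, padding=1)

    def forward(self, compensated, image_to_test):
        residual = self.decoder(self.encoder(compensated))  # image residual
        return image_to_test + residual                     # skip-connection fusion
```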
In summary, this embodiment registers the image to be tested against the auxiliary image to obtain the registration image. Because the registration image already fuses the image features of both, it models and mines the correlation between multiple images, benefiting subsequent feature extraction and image reconstruction. The image to be tested and the auxiliary image can be regarded as consecutive video frames corresponding to different stage positions; this embodiment makes full use of the correlation between such frames and models the preceding and following frames as a whole, which helps to better reconstruct a super-resolved microscope image.
The following embodiment provides a method for training the super-resolution model. With this embodiment, the super-resolution model can store the resolution-enhanced image on the computer for subsequent processing, or project it into the eyepiece field of view by virtual reality techniques, thereby realising a super-resolution microscope.
Fig. 7 shows a schematic flowchart of a super-resolution model training method provided by an exemplary embodiment of this application. The method may be executed by the computer device 120 shown in Fig. 1 or another computer device and includes the following steps:
Step 701: obtain a training data set.
The training data set includes sample images to be tested of the first resolution and at least one sample auxiliary image, together with ground-truth labels of the second resolution. Each ground-truth label corresponds to at least two sample images: one sample image to be tested and at least one sample auxiliary image.
Illustratively, the sample image to be tested and the sample auxiliary images are determined from a first sample image sequence, all of whose images are of the first resolution; the ground-truth label is determined from a second sample image sequence, all of whose images are of the second resolution.
Optionally, a method for determining the training data set is as follows: take the i-th frame of the second sample image sequence as the ground-truth label; based on the display region of the ground-truth label, determine from the first sample image sequence 2n+1 images whose display regions overlap that of the ground-truth label, i and n being positive integers; and determine the sample image to be tested and the sample auxiliary images from those overlapping images. Illustratively, the 2n+1 overlapping images are the sample images corresponding to the ground-truth label: one sample image to be tested and 2n sample auxiliary images.
Illustratively, the sample image to be tested and the sample auxiliary images are determined from the overlapping images as follows: the overlapping image with the largest overlap proportion is taken as the sample image to be tested, and the other images as sample auxiliary images. The overlap proportion is the ratio of the size of the overlapping region to the size of the image's display region. Illustratively, if the overlapping images are 2n+1 consecutively ordered images of the first sample image sequence, the middle image may be taken as the sample image to be tested and the others as sample auxiliary images.
Illustratively, let the first sample image sequence captured under the low-magnification objective be I_j ∈ R^{H×W×3} and the corresponding second sample image sequence be I'_i ∈ R^{sH×sW×3}, where s is the magnification factor, H and W are the image height and width, and i, j are the frame indices, both positive integers. The frame indices j of the I_j corresponding to I'_i then lie in the interval [i-n, i+n], n a positive integer, for a total of 2n+1 frames. Taking 10X-to-20X super-resolution as an example, consecutive frames of the sample are first captured under the microscope at 10X and 20X; for a given 20X frame, the 2n+1 low-magnification frames that overlap that high-magnification frame are selected. In this way, each high-magnification frame is paired with 2n+1 low-magnification frames, forming a training data set for the super-resolution model. Among the 2n+1 low-magnification frames of each high-magnification frame, there is one sample image to be tested and 2n sample auxiliary images. Illustratively, a high-magnification image is an image of higher resolution relative to a low-magnification image, and vice versa.
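The pairing rule can be sketched as below; aligning the window by frame index is an assumption made for brevity, whereas the text matches frames by their overlapping display regions.

```python
# Minimal sketch: pair each high-magnification frame (ground-truth label)
# with 2n+1 low-magnification frames, taking the middle frame as the
# sample image to be tested and the rest as sample auxiliary images.
def build_training_pairs(low_frames, high_frames, n=2):
    pairs = []
    for i, label in enumerate(high_frames):
        lo, hi = i - n, i + n + 1
        if lo < 0 or hi > len(low_frames):
            continue                        # not enough neighbouring frames
        window = low_frames[lo:hi]          # 2n + 1 candidate frames
        image_to_test = window[n]           # middle frame: largest overlap
        auxiliaries = window[:n] + window[n + 1:]
        pairs.append((image_to_test, auxiliaries, label))
    return pairs
```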
Step 702: invoke the initial super-resolution model to register the sample image to be tested and the sample auxiliary images, obtaining a sample registration image.
Since both the sample image to be tested and the sample auxiliary images overlap the display region of the ground-truth label, an overlapping region between them is very likely, so they must be registered to determine the relationship between them and to supplement the image features contained in the sample image to be tested. The sample registration image includes the sample target region.
Optionally, the sample image to be tested and the sample auxiliary images are registered by at least one of an image registration method based on stage motion estimation and compensation and an image registration method based on an image registration module.
Illustratively, the initial super-resolution model includes an initial registration module, and step 702 is implemented by invoking the initial registration module of the initial super-resolution model to register the sample image to be tested and the sample auxiliary images, obtaining the sample registration image.
Step 703: extract sample high-resolution features from the sample registration image.
Optionally, the high-resolution features are extracted from the sample registration image with a neural network structure; illustratively, the network structure includes at least one of a neural network based on 4D image data (the three dimensions of an RGB image plus the time dimension) and a neural network based on long short-term memory modules.
Optionally, sample low-resolution features, representing the image features of the sample target region at the first resolution, are extracted from the sample registration image and then mapped to obtain the sample high-resolution features.
Optionally, the sample registration image is fused with the sample auxiliary images to obtain a sample fused image, from which the sample high-resolution features are extracted.
Illustratively, the initial super-resolution model includes an initial feature extraction and fusion module, and step 703 is implemented by invoking it to extract the sample high-resolution features from the sample registration image.
In practical scenarios, the microscope image sequence is obtained from a real-time microscope video in which the observed sample is moving; to meet the user's need to observe the sample in real time, the observed target region changes in real time and the image to be tested changes with it. For example, at time t the image to be tested concerns region a, while at time t+1 it concerns region b. Optionally, a first sample registration image and a second sample registration image are fused to obtain a sample fused registration image, from which the sample high-resolution features are extracted. The first and second sample registration images are sample registration images with an overlapping region; the first or the second is the sample registration image obtained in step 702. Illustratively, the fusion of the two may be complete or partial: their overlapping regions may be fused, or their entire display regions, to obtain the sample fused registration image.
Step 704: reconstruct the sample high-resolution features to obtain a sample target image of the second resolution corresponding to the sample image to be tested.
Image reconstruction restores the sample target image of the sample target region at the second resolution. The sample target image corresponds to the sample image to be tested and also includes the sample target region.
Optionally, the sample high-resolution features are reconstructed with a neural network structure; illustratively, the network structure includes at least one of a neural network based on 4D image data (the three dimensions of an RGB image plus the time dimension) and a neural network based on long short-term memory modules.
Optionally, an image reconstruction network converts the sample high-resolution features into the pixel values of the pixels in the sample target image, from which the sample target image of the second resolution corresponding to the sample image to be tested is obtained.
Illustratively, the initial super-resolution model includes an initial reconstruction module, and step 704 is implemented by invoking it to reconstruct the sample high-resolution features, obtaining the sample target image of the second resolution corresponding to the sample image to be tested.
Step 705: train the initial super-resolution model according to the difference between the sample target image and the ground-truth label, obtaining the target super-resolution model.
Optionally, the initial super-resolution model is trained by error back-propagation according to the difference between the sample target image and the ground-truth label.
Optionally, a loss function is set, the difference between the sample target image and the ground-truth label is substituted into it to obtain a loss value, and the initial super-resolution model is trained according to the loss value.
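A minimal sketch of one training step under these choices follows; L1 loss is an assumption, as the text fixes neither the loss function nor the optimiser (which is passed in here), only that the difference between the sample target image and the ground-truth label drives error back-propagation.

```python
import torch.nn as nn

def train_step(model, optimizer, image_to_test, auxiliaries, label):
    criterion = nn.L1Loss()                        # assumed loss function
    optimizer.zero_grad()
    predicted = model(image_to_test, auxiliaries)  # sample target image
    loss = criterion(predicted, label)             # difference to the label
    loss.backward()                                # error back-propagation
    optimizer.step()
    return loss.item()
```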
Illustratively, the initial super-resolution model includes the initial registration module, the initial feature extraction and fusion module, and the initial reconstruction module; training the initial super-resolution model means training these three modules.
Illustratively, training the initial super-resolution model is an iterative process: after each training pass, it is judged whether the current training process satisfies a termination condition. If it does, the super-resolution model obtained by the current training is taken as the target super-resolution model; if not, training continues on the current model until the termination condition is satisfied, and the model obtained at that point is taken as the target super-resolution model.
In summary, this embodiment provides a training method for a super-resolution model. During training, the model is trained on multiple first-resolution sample images to be tested and sample auxiliary images together with second-resolution ground-truth labels, ensuring that a qualified super-resolution model is obtained. The sample image to be tested and the sample auxiliary images can be regarded as consecutive video frames corresponding to different stage positions in the first sample image sequence; this training method makes full use of the correlation between such frames and models the preceding and following frames as a whole, which helps to train a super-resolution model that better reconstructs super-resolved microscope images.
Fig. 8 shows a schematic flowchart of a microscope-based super-resolution method provided by an exemplary embodiment of this application. The method may be executed by the computer device 120 shown in Fig. 1 or another computer device and includes the following steps:
Step 801: acquire a microscope video.
The microscope video is a first-resolution video under the microscope captured by the image acquisition device. Illustratively, the sample on the microscope is moved and the microscope video is captured by the image acquisition device; in the architecture shown in Fig. 9, the sample is moved in the direction of microscope movement to obtain the microscope video 901.
The resolution of the microscope video is the first resolution, whose magnification is relatively low. Taking an optical microscope as an example, the magnification of the first resolution may be 10X.
Step 802: determine the image to be tested and the auxiliary images from the microscope video.
The image to be tested is the image at any moment of the microscope video. Illustratively, as shown in Fig. 9, the image at time t in the microscope video 901 is determined as the image to be tested.
An auxiliary image is an image of the microscope video whose region coincides with that of the image to be tested. Illustratively, as shown in Fig. 9, the images at times t-2, t-1, t+1, and t+2 in the microscope video 901 are determined as auxiliary images.
Step 803: invoke the super-resolution model and determine the target image from the image to be tested and the auxiliary images.
The super-resolution model increases the resolution of the image to be tested using the auxiliary images. For the specific model structure, refer to the embodiment shown in Fig. 2. Illustratively, the super-resolution model here may be the target super-resolution model trained in the embodiment shown in Fig. 7.
The target image is the image of the second resolution corresponding to the image to be tested; its display region is the same as that of the image to be tested. The second resolution is greater than the first; illustratively, taking an optical microscope as an example, if the magnification of the first resolution is 10X, the magnification of the second resolution is 40X.
Optionally, the super-resolution model includes a registration module, a feature extraction and fusion module, and a reconstruction module. The registration module registers the input microscope image sequence; the feature extraction and fusion module extracts and fuses high-resolution features from the registration image; and the reconstruction module reconstructs a higher-resolution image of the target region.
Illustratively, as shown in Fig. 9, the super-resolution model 902 is invoked, the microscope video 901 is fed into it, and the target image 903 is obtained.
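End-to-end use over a video can be sketched as a sliding window, matching Fig. 9: frame t is the image to be tested and frames t-2, t-1, t+1, and t+2 are its auxiliary images (here with a configurable half-window n).

```python
# Minimal sketch of end-to-end inference on a microscope video.
def super_resolve_video(model, frames, n=2):
    outputs = []
    for t in range(n, len(frames) - n):
        image_to_test = frames[t]
        auxiliaries = frames[t - n:t] + frames[t + 1:t + n + 1]
        outputs.append(model(image_to_test, auxiliaries))
    return outputs
```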
In summary, this embodiment provides an end-to-end super-resolution model: a high-resolution microscope image is obtained directly simply by inputting a microscope video. Moreover, under a low-magnification objective the field of view of the microscope is larger and scanning is faster, so this method obtains high-resolution data more quickly for subsequent image processing and analysis, such as various kinds of auxiliary diagnosis based on artificial intelligence. It also makes full use of the existing hardware of an ordinary microscope, requiring no additional equipment investment. And because super-resolution is realised algorithmically, it does not need to be bound to reagents or samples: it suffices to capture continuous videos of different samples to train the corresponding model and then deploy the application. Illustratively, auxiliary diagnosis based on artificial intelligence includes pathological diagnosis of pathological slides, in which case the microscope video can be obtained by moving a pathological slide and observing it with the microscope.
The following is an apparatus embodiment of this application; for details not described in the apparatus embodiment, reference may be made to the corresponding description in the method embodiments above, which is not repeated here.
Fig. 10 shows a schematic structural diagram of a microscope-based super-resolution apparatus provided by an exemplary embodiment of this application. The apparatus can be implemented as all or part of a computer device in software, hardware, or a combination of the two. The apparatus 1000 includes:
an acquiring unit 1001 configured to acquire an image to be tested and at least one auxiliary image, the image to be tested containing a target region, the display region of the auxiliary image overlapping the target region, and the image to be tested and the auxiliary image both being microscope images of a first resolution;
a registration unit 1002 configured to register the image to be tested and the auxiliary image to obtain a registration image;
an extraction unit 1003 configured to extract high-resolution features from the registration image, the high-resolution features representing the image features of the target region at a second resolution, the second resolution being greater than the first resolution;
a reconstruction unit 1004 configured to reconstruct the high-resolution features to obtain a target image of the second resolution corresponding to the image to be tested.
In an optional design of this application, the registration unit 1002 is further configured to compute an optical flow prediction map between the image to be tested and the auxiliary image, the optical flow prediction map predicting the optical flow changes between the two; to obtain a compensated image with motion compensation from the optical flow prediction map and the auxiliary image; and to encode and decode the compensated image to obtain the registration image.
In an optional design of this application, the registration unit 1002 is further configured to invoke the optical flow prediction network and compute the optical flow prediction map from the optical flow field of the image to be tested and that of the auxiliary image.
In an optional design of this application, the registration unit 1002 is further configured to invoke the super-resolution network to up-sample the optical flow prediction map, obtaining an up-sampled map, and to perform bilinear interpolation on the up-sampled map with the auxiliary image, obtaining the compensated image with motion compensation.
In an optional design of this application, the registration unit 1002 is further configured to invoke the deconvolution network to encode and decode the compensated image, obtaining an image residual, and to fuse the image residual with the image to be tested, obtaining the registration image.
In an optional design of this application, the extraction unit 1003 is further configured to extract low-resolution features from the registration image, the low-resolution features representing the image features of the target region at the first resolution, and to map the low-resolution features to obtain the high-resolution features.
In an optional design of this application, the extraction unit 1003 is further configured to fuse the registration image with the auxiliary image to obtain a fused image and to extract the high-resolution features from the fused image.
In an optional design of this application, the reconstruction unit 1004 is further configured to convert, through an image reconstruction network, the high-resolution features into the pixel values of the pixels in the target image, and to obtain from those pixel values the target image of the second resolution corresponding to the image to be tested.
In an optional design of this application, the acquiring unit 1001 is further configured to determine, in the microscope image sequence of the first resolution, the image to be tested and the images satisfying the association condition with it, and, among those images, to determine as auxiliary images the images that overlap the target region with an overlap proportion greater than the reference value.
In an optional design of this application, the registration unit 1002 is configured to invoke the target super-resolution model to register the image to be tested and the auxiliary image, obtaining the registration image;
the extraction unit 1003 is configured to invoke the target super-resolution model to extract the high-resolution features from the registration image;
the reconstruction unit 1004 is configured to invoke the target super-resolution model to reconstruct the high-resolution features, obtaining the target image of the second resolution corresponding to the image to be tested.
In an optional design of this application, the apparatus further includes a training unit 1005.
The training unit 1005 is configured to obtain a training data set including the sample image to be tested of the first resolution, at least one sample auxiliary image, and the ground-truth label of the second resolution; invoke the initial super-resolution model to register the sample image to be tested and the sample auxiliary image, obtaining a sample registration image; extract sample high-resolution features from the sample registration image; reconstruct the sample high-resolution features, obtaining a sample target image of the second resolution corresponding to the sample image to be tested; and train the initial super-resolution model according to the difference between the sample target image and the ground-truth label, obtaining the target super-resolution model.
In an optional design of this application, the sample image to be tested and the sample auxiliary image are determined from the first sample image sequence, all of whose images are of the first resolution; the ground-truth label is determined from the second sample image sequence, all of whose images are of the second resolution. The training unit 1005 is further configured to take the i-th frame of the second sample image sequence as the ground-truth label; determine from the first sample image sequence, based on the display region of the ground-truth label, 2n+1 images whose display regions overlap that of the ground-truth label, i and n being positive integers; and determine the sample image to be tested and the sample auxiliary image from those overlapping images.
In summary, this embodiment registers the image to be tested against the auxiliary image, extracts high-resolution features from the registration image, and reconstructs a higher-resolution image from them. The method supplements the image to be tested with image details from the auxiliary image; the registration image obtained by registration fuses the image features of both and models and mines the correlation between multiple images, benefiting subsequent feature extraction and image reconstruction, so that a higher-resolution image with more accurate image details is reconstructed.
Fig. 11 is a schematic structural diagram of a computer device provided by an embodiment of this application. Illustratively, the computer device 1100 includes a central processing unit (CPU) 1101, a system memory 1104 including a random access memory (RAM) 1102 and a read-only memory (ROM) 1103, and a system bus 1105 connecting the system memory 1104 and the central processing unit 1101. The computer device 1100 also includes a basic input/output system (I/O system) 1106 that helps transfer information between the components within the computer, and a mass storage device 1107 for storing an operating system 1113, application programs 1114, and other program modules 1115.
The input/output system 1106 includes a display 1108 for displaying information and an input device 1109, such as a mouse or keyboard, for user input. Both the display 1108 and the input device 1109 are connected to the central processing unit 1101 through an input/output controller 1110 connected to the system bus 1105. The input/output system 1106 may also include the input/output controller 1110 for receiving and processing input from a number of other devices such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 1110 also provides output to a display screen, printer, or other type of output device.
The mass storage device 1107 is connected to the central processing unit 1101 through a mass storage controller (not shown) connected to the system bus 1105. The mass storage device 1107 and its associated computer-readable media provide non-volatile storage for the computer device 1100; that is, the mass storage device 1107 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.
Without loss of generality, computer-readable media may include computer storage media and communication media. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state storage technologies, CD-ROM, digital versatile discs (DVD) or other optical storage, tape cartridges, magnetic tape, disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media are not limited to the above. The system memory 1104 and the mass storage device 1107 may be collectively referred to as memory.
According to various embodiments of this application, the computer device 1100 may also run via a remote computer connected to a network such as the Internet. That is, the computer device 1100 can be connected to the network 1112 through the network interface unit 1111 connected to the system bus 1105, or the network interface unit 1111 may be used to connect to other types of networks or remote computer systems (not shown). Illustratively, the computer device 1100 may be a terminal or a server.
According to another aspect of this application, a computer device is also provided, including a processor and a memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to cause the computer device to implement the microscope-based super-resolution method described above.
According to another aspect of this application, a computer storage medium is also provided. The computer-readable storage medium stores at least one computer program, which is loaded and executed by a processor to cause a computer to implement the microscope-based super-resolution method described above.
According to another aspect of this application, a computer program product or computer program is also provided, including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the microscope-based super-resolution method described above.
Optionally, this application further provides a computer program product containing instructions which, when run on a computer device, causes the computer device to perform the microscope-based super-resolution method described in the above aspects.
It should be understood that "multiple" herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects.
A person of ordinary skill in the art will understand that all or some of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are merely optional embodiments of this application and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the principles of this application shall fall within its scope of protection.

Claims (18)

  1. A microscope-based super-resolution method, wherein the method is executed by a computer device and the method comprises:
    acquiring an image to be tested and at least one auxiliary image, the image to be tested containing a target region, the display region of the auxiliary image overlapping the target region, the image to be tested and the auxiliary image both being microscope images of a first resolution;
    registering the image to be tested and the auxiliary image to obtain a registration image;
    extracting high-resolution features from the registration image, the high-resolution features representing the image features of the target region at a second resolution, the second resolution being greater than the first resolution;
    reconstructing the high-resolution features to obtain a target image of the second resolution corresponding to the image to be tested.
  2. The method according to claim 1, wherein the registering the image to be tested and the auxiliary image to obtain a registration image comprises:
    computing an optical flow prediction map between the image to be tested and the auxiliary image, the optical flow prediction map predicting the optical flow changes between the image to be tested and the auxiliary image;
    obtaining a compensated image with motion compensation from the optical flow prediction map and the auxiliary image;
    encoding and decoding the compensated image to obtain the registration image.
  3. The method according to claim 2, wherein the computing an optical flow prediction map between the image to be tested and the auxiliary image comprises:
    invoking an optical flow prediction network and computing the optical flow prediction map from the optical flow field of the image to be tested and the optical flow field of the auxiliary image.
  4. The method according to claim 2, wherein the obtaining a compensated image with motion compensation from the optical flow prediction map and the auxiliary image comprises:
    invoking a super-resolution network to up-sample the optical flow prediction map, obtaining an up-sampled map;
    performing bilinear interpolation on the up-sampled map with the auxiliary image, obtaining the compensated image with motion compensation.
  5. The method according to claim 2, wherein the encoding and decoding the compensated image to obtain the registration image comprises:
    invoking a deconvolution network to encode and decode the compensated image, obtaining an image residual;
    fusing the image residual with the image to be tested, obtaining the registration image.
  6. The method according to any one of claims 1 to 5, wherein the extracting high-resolution features from the registration image comprises:
    extracting low-resolution features from the registration image, the low-resolution features representing the image features of the target region at the first resolution;
    mapping the low-resolution features to obtain the high-resolution features.
  7. The method according to any one of claims 1 to 5, wherein the extracting high-resolution features from the registration image comprises:
    fusing the registration image and the auxiliary image to obtain a fused image;
    extracting the high-resolution features from the fused image.
  8. The method according to any one of claims 1 to 5, wherein the reconstructing the high-resolution features to obtain a target image of the second resolution corresponding to the image to be tested comprises:
    converting, through an image reconstruction network, the high-resolution features into the pixel values of the pixels in the target image;
    obtaining, from the pixel values of the pixels, the target image of the second resolution corresponding to the image to be tested.
  9. The method according to any one of claims 1 to 5, wherein the acquiring an image to be tested and at least one auxiliary image comprises:
    determining, in a microscope image sequence of the first resolution, the image to be tested and the images satisfying an association condition with the image to be tested;
    determining, among the images satisfying the association condition with the image to be tested, the images that overlap the target region with an overlap proportion greater than a reference value as the auxiliary images.
  10. The method according to any one of claims 1 to 5, wherein the registering the image to be tested and the auxiliary image to obtain a registration image comprises:
    invoking a target super-resolution model to register the image to be tested and the auxiliary image, obtaining the registration image;
    the extracting high-resolution features from the registration image comprises:
    invoking the target super-resolution model to extract the high-resolution features from the registration image;
    the reconstructing the high-resolution features to obtain a target image of the second resolution corresponding to the image to be tested comprises:
    invoking the target super-resolution model to reconstruct the high-resolution features, obtaining the target image of the second resolution corresponding to the image to be tested.
  11. The method according to claim 10, wherein the method further comprises:
    obtaining a training data set, the training data set including a sample image to be tested of the first resolution and at least one sample auxiliary image, together with a ground-truth label of the second resolution;
    invoking an initial super-resolution model to register the sample image to be tested and the sample auxiliary image, obtaining a sample registration image; extracting sample high-resolution features from the sample registration image; reconstructing the sample high-resolution features, obtaining a sample target image of the second resolution corresponding to the sample image to be tested;
    training the initial super-resolution model according to the difference between the sample target image and the ground-truth label, obtaining the target super-resolution model.
  12. The method according to claim 11, wherein the sample image to be tested and the sample auxiliary image are determined from a first sample image sequence, the images in the first sample image sequence all being of the first resolution; the ground-truth label is determined from a second sample image sequence, the images in the second sample image sequence all being of the second resolution;
    the obtaining a training data set comprises:
    taking the i-th frame of the second sample image sequence as the ground-truth label, and determining, from the first sample image sequence based on the display region of the ground-truth label, 2n+1 images whose display regions overlap that of the ground-truth label, i and n being positive integers;
    determining the sample image to be tested and the sample auxiliary image from the images having the overlapping regions.
  13. A microscope-based super-resolution apparatus, wherein the apparatus comprises:
    an acquiring unit configured to acquire an image to be tested and at least one auxiliary image, the image to be tested containing a target region, the display region of the auxiliary image overlapping the target region, the image to be tested and the auxiliary image both being microscope images of a first resolution;
    a registration unit configured to register the image to be tested and the auxiliary image to obtain a registration image;
    an extraction unit configured to extract high-resolution features from the registration image, the high-resolution features representing the image features of the target region at a second resolution, the second resolution being greater than the first resolution;
    a reconstruction unit configured to reconstruct the high-resolution features to obtain a target image of the second resolution corresponding to the image to be tested.
  14. The apparatus according to claim 13, wherein the registration unit is configured to invoke a target super-resolution model to register the image to be tested and the auxiliary image, obtaining the registration image;
    the extraction unit is configured to invoke the target super-resolution model to extract the high-resolution features from the registration image;
    the reconstruction unit is configured to invoke the target super-resolution model to reconstruct the high-resolution features, obtaining the target image of the second resolution corresponding to the image to be tested.
  15. The apparatus according to claim 14, wherein the apparatus further comprises:
    a training unit configured to obtain a training data set, the training data set including a sample image to be tested of the first resolution and at least one sample auxiliary image, together with a ground-truth label of the second resolution; invoke an initial super-resolution model to register the sample image to be tested and the sample auxiliary image, obtaining a sample registration image; extract sample high-resolution features from the sample registration image; reconstruct the sample high-resolution features, obtaining a sample target image of the second resolution corresponding to the sample image to be tested; and train the initial super-resolution model according to the difference between the sample target image and the ground-truth label, obtaining the target super-resolution model.
  16. A computer device, wherein the computer device comprises a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to cause the computer device to implement the microscope-based super-resolution method according to any one of claims 1 to 12.
  17. A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores at least one computer program, which is loaded and executed by a processor to cause a computer to implement the microscope-based super-resolution method according to any one of claims 1 to 12.
  18. A computer program product, wherein the computer program product comprises computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the microscope-based super-resolution method according to any one of claims 1 to 12.
PCT/CN2022/098411 2021-07-05 2022-06-13 Microscope-based super-resolution method, apparatus, device, and medium WO2023279920A1

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22836674.6A EP4365774A1 (en) 2021-07-05 2022-06-13 Microscope-based super-resolution method and apparatus, device and medium
US18/127,502 US20230237617A1 (en) 2021-07-05 2023-03-28 Microscope-based super-resolution

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110758195.6 2021-07-05
CN202110758195.6A Microscope-based super-resolution method, apparatus, device, and medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/127,502 Continuation US20230237617A1 (en) 2021-07-05 2023-03-28 Microscope-based super-resolution

Publications (1)

Publication Number Publication Date
WO2023279920A1

Family

ID=78924151

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/098411 WO2023279920A1 2021-07-05 2022-06-13 Microscope-based super-resolution method, apparatus, device, and medium

Country Status (4)

Country Link
US (1) US20230237617A1 (zh)
EP (1) EP4365774A1 (zh)
CN (1) CN113822802A (zh)
WO (1) WO2023279920A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822802A 2021-07-05 2021-12-21 Tencent Technology (Shenzhen) Co., Ltd. Microscope-based super-resolution method, apparatus, device, and medium


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931210A * 2016-04-15 2016-09-07 Luoyang Institute of Electro-Optical Equipment of AVIC High-resolution image reconstruction method
CN107633482A * 2017-07-24 2018-01-26 Xidian University Super-resolution reconstruction method based on sequence images
CN109671023A * 2019-01-24 2019-04-23 Jiangsu University Secondary super-resolution reconstruction method for face images
CN112734646A * 2021-01-19 2021-04-30 Qingdao University Image super-resolution reconstruction method based on feature-channel partitioning
CN113822802A * 2021-07-05 2021-12-21 Tencent Technology (Shenzhen) Co., Ltd. Microscope-based super-resolution method, apparatus, device, and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xin Tao; Hongyun Gao; Renjie Liao; Jue Wang; Jiaya Jia: "Detail-Revealing Deep Video Super-Resolution", arXiv, 10 April 2017, XP080761872, DOI: 10.1109/ICCV.2017.479 *

Also Published As

Publication number Publication date
US20230237617A1 (en) 2023-07-27
CN113822802A (zh) 2021-12-21
EP4365774A1 (en) 2024-05-08

Similar Documents

Publication Publication Date Title
CN109472270B Image style conversion method, apparatus, and device
CN110136066B Video-oriented super-resolution method, apparatus, device, and storage medium
Hui et al. Progressive perception-oriented network for single image super-resolution
CN107067380B High-resolution image reconstruction method based on low-rank tensor and hierarchical dictionary learning
CN112767290B Image fusion method, image fusion apparatus, storage medium, and terminal device
CN104011581A Image processing apparatus, image processing system, image processing method, and image processing program
EP4207051A1 Image super-resolution method and electronic device
CN108876716B Super-resolution reconstruction method and apparatus
Chen et al. MICU: Image super-resolution via multi-level information compensation and U-net
WO2023279920A1 Microscope-based super-resolution method, apparatus, device, and medium
Rad et al. Benefiting from multitask learning to improve single image super-resolution
CN113269672B Super-resolution cell image construction method and system
CN117237648B Training method, apparatus, and device for a context-aware semantic segmentation model
WO2024032331A9 Image processing method and apparatus, electronic device, and storage medium
CN112884702A Polyp recognition system and method based on endoscopic images
CN113205451B Image processing method, apparatus, electronic device, and storage medium
CN112927139B Binocular thermal imaging system and super-resolution image acquisition method
CN114170084A Image super-resolution processing method, apparatus, and device
Liu et al. Adaptive pixel aggregation for joint spatial and angular super-resolution of light field images
CN113888551A Liver tumor image segmentation method based on a densely connected network with high- and low-level feature fusion
Liu et al. Hyperspectral image super-resolution employing nonlocal block and hybrid multiscale three-dimensional convolution
CN110111254B Depth map super-resolution method based on multi-level recursive guidance and progressive supervision
WO2011158225A1 System and method for enhancing images
Gherardi et al. Real-time whole slide mosaicing for non-automated microscopes in histopathology analysis
Chu et al. Dual attention with the self-attention alignment for efficient video super-resolution

Legal Events

Code Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 22836674; country of ref document: EP; kind code of ref document: A1)
WWE WIPO information: entry into national phase (ref document number: 2022836674; country of ref document: EP)
NENP Non-entry into the national phase (ref country code: DE)
ENP Entry into the national phase (ref document number: 2022836674; country of ref document: EP; effective date: 20240202)