CN115631196A - Image segmentation method, model training method, device, equipment and storage medium - Google Patents

Image segmentation method, model training method, device, equipment and storage medium

Info

Publication number
CN115631196A
Authority
CN
China
Prior art keywords
feature
optical flow
feature map
region
interest
Prior art date
Legal status
Granted
Application number
CN202211638011.3A
Other languages
Chinese (zh)
Other versions
CN115631196B (en)
Inventor
张俊杰 (Zhang Junjie)
霍志敏 (Huo Zhimin)
Current Assignee
Hangzhou Taimei Xingcheng Pharmaceutical Technology Co., Ltd.
Original Assignee
Hangzhou Taimei Xingcheng Pharmaceutical Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Hangzhou Taimei Xingcheng Pharmaceutical Technology Co., Ltd.
Priority to CN202211638011.3A
Publication of CN115631196A
Application granted
Publication of CN115631196B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image segmentation method, a model training method, a device, equipment and a storage medium, which are used to address the shortage of image assessment resources and the difficulty of guaranteeing accuracy in the prior art. The medical image segmentation method comprises the following steps: acquiring a target image sequence, and performing feature extraction on a plurality of images to obtain a plurality of feature maps; performing M layers of feature dimensionality reduction on the feature map to obtain M intermediate-layer feature maps; performing N layers of feature dimensionality reduction on the optical flow feature map extracted for the region of interest in the feature map to obtain N intermediate-layer optical flow feature maps; fusing the intermediate-layer feature map obtained by the M-th layer of feature dimensionality reduction with the intermediate-layer optical flow feature map obtained by the N-th layer of feature dimensionality reduction to obtain a spatial cross feature map of the feature map; and segmenting the region of interest from the plurality of images in the target image sequence based on the spatial cross feature maps of the plurality of feature maps.

Description

Image segmentation method, model training method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of computer data processing, and particularly relates to an image segmentation method, a model training method, an image segmentation device, electronic equipment and a storage medium.
Background
Independent medical image evaluation generally requires senior radiologists with comprehensive medical knowledge and rich experience, but such radiologists are a scarce resource domestically, and the large volume of medical image data makes independent film reading extremely inefficient. Moreover, even among senior physicians, differences in knowledge structure and experience can lead to large discrepancies in the evaluation of a drug's therapeutic effect for the same patient, caused by inaccurate measurement of tumor size, weight and other indicators at a given visit.
The information disclosed in this background section is only for enhancement of understanding of the general background of the application and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art that is already known to a person skilled in the art.
Disclosure of Invention
The present application aims to provide a medical image segmentation method for solving the problems in the prior art that image evaluation resources are in short supply and accuracy is difficult to guarantee.
To achieve the above object, the present application provides a medical image segmentation method, including:
acquiring a target image sequence, and performing feature extraction on a plurality of images to obtain a plurality of feature maps, wherein the feature maps comprise regions of interest;
performing M layers of feature dimensionality reduction on the feature map to obtain M intermediate-layer feature maps;
performing N layers of feature dimensionality reduction on the optical flow feature map extracted for the region of interest in the feature map to obtain N intermediate-layer optical flow feature maps, wherein at least one of the N intermediate-layer optical flow feature maps is fused with the optical flow features extracted for the region of interest on the intermediate-layer feature map of the corresponding dimension;
fusing the intermediate-layer feature map obtained by the M-th layer of feature dimensionality reduction with the intermediate-layer optical flow feature map obtained by the N-th layer of feature dimensionality reduction to obtain a spatial cross feature map of the feature map;
and segmenting the region of interest from the plurality of images in the target image sequence based on the spatial cross feature maps of the plurality of feature maps.
In an embodiment, segmenting the region of interest from a plurality of images in the target image sequence based on the spatial cross feature maps of the plurality of feature maps specifically includes:
fusing a plurality of the spatial cross feature maps to obtain a multi-dimensional spatiotemporal cross feature map;
and decoding after respectively fusing the multi-dimensional spatiotemporal cross feature map with the plurality of spatial cross feature maps, so as to segment the region of interest from the plurality of images in the target image sequence.
In one embodiment, the method specifically includes:
performing a weighted average of the corresponding features in the plurality of spatial cross feature maps to obtain a multi-dimensional spatiotemporal cross feature map;
and decoding after respectively multiplying the multi-dimensional spatiotemporal cross feature map with the corresponding features in the plurality of spatial cross feature maps, so as to segment the region of interest from the plurality of images in the target image sequence.
In an embodiment, performing N-layer feature dimensionality reduction on the optical flow feature map extracted for the region of interest in the feature map to obtain N intermediate-layer optical flow feature maps specifically includes:
when performing the first N-1 layers of feature dimensionality reduction on the optical flow feature map, fusing the optical flow information extracted for the region of interest on the intermediate-layer feature map of the corresponding dimension to obtain the corresponding intermediate-layer optical flow feature map.
The application also provides a training method of the medical image segmentation model, and in the model training stage, the method comprises the following steps:
acquiring a training sample set, wherein the training sample set comprises a plurality of first feature maps and first region-of-interest masks corresponding to the first feature maps;
determining a first region of interest on a corresponding first feature map based on the first region of interest mask;
extracting optical flow information of the first region of interest to obtain an optical flow feature map;
wherein the medical image segmentation model performs image segmentation based on the medical image segmentation method as described above.
In one embodiment, in the model test phase, the method further comprises:
obtaining a test sample set, wherein the test sample set comprises a plurality of second feature maps;
performing feature normalization on the second feature map in the test sample set based on a set threshold value to obtain a corresponding binary image;
reserving an image area with the largest area in the binarized image as a second region-of-interest mask;
determining a second region of interest on a corresponding second feature map based on the second region of interest mask;
and extracting optical flow information of the second region of interest to obtain an optical flow feature map.
The present application also provides a medical image segmentation apparatus, comprising:
the feature extraction module is used for acquiring a target image sequence and performing feature extraction on a plurality of images to obtain a plurality of feature maps, wherein the feature maps comprise regions of interest;
the feature dimensionality reduction module is used for performing M layers of feature dimensionality reduction on the feature map to obtain M intermediate-layer feature maps;
the first optical flow extraction module is configured to perform N layers of feature dimensionality reduction on the optical flow feature map extracted for the region of interest in the feature map to obtain N intermediate-layer optical flow feature maps, where at least one of the N intermediate-layer optical flow feature maps is fused with the optical flow features extracted for the region of interest on the intermediate-layer feature map of the corresponding dimension;
the fusion module is used for fusing the intermediate-layer feature map obtained by the M-th layer of feature dimensionality reduction with the intermediate-layer optical flow feature map obtained by the N-th layer of feature dimensionality reduction to obtain a spatial cross feature map of the feature map;
and the segmentation module is used for segmenting the region of interest from a plurality of images in the target image sequence based on the spatial cross feature maps of the plurality of feature maps.
The present application further provides a training device for a medical image segmentation model, including:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a training sample set, and the training sample set comprises a plurality of first feature maps and first region-of-interest masks corresponding to the first feature maps;
a determination module for determining a first region of interest on a corresponding first feature map based on the first region of interest mask;
a second optical flow extraction module, configured to extract optical flow information of the first region of interest to obtain an optical flow feature map;
wherein the medical image segmentation model performs image segmentation based on the medical image segmentation method as described above.
The present application further provides an electronic device, comprising:
at least one processor; and
a memory storing instructions that, when executed by the at least one processor, cause the at least one processor to perform a medical image segmentation method or a training method of a medical image segmentation model as described above.
The present application also provides a machine-readable storage medium having stored thereon executable instructions that, when executed, cause the machine to perform a medical image segmentation method or a training method of a medical image segmentation model as described above.
Compared with the prior art, the medical image segmentation method of the application obtains the spatial cross feature map by fusing spatial features (feature dimensionality reduction of the feature map), optical flow features (optical flow feature extraction for the region of interest) and temporal features (the plurality of feature maps in the target image sequence), which gives the segmentation model more reference information and improves segmentation accuracy.
On the other hand, compared with extracting optical flow features from the whole image, extracting optical flow features only for the region of interest reduces resource consumption and avoids possible interference from peripheral image regions with the optical flow features of the region of interest.
In another aspect, the depth of the spatiotemporal feature fusion and the saliency of the region of interest can be further enhanced by fusing the multi-dimensional spatiotemporal cross feature map with the plurality of spatial cross feature maps.
Drawings
FIG. 1 is a schematic diagram of an application scenario of the medical image segmentation method of the present application;
FIG. 2 is a flow chart of a medical image segmentation method according to an embodiment of the present application;
FIG. 3 is a network architecture diagram for generating a multi-dimensional spatiotemporal cross feature map from images of an image sequence in a medical image segmentation method according to an embodiment of the present application;
FIG. 4 is a flow chart of a training phase in a training method of a medical image segmentation model according to an embodiment of the present application;
FIG. 5 is a flow chart of a testing phase in a training method of a medical image segmentation model according to an embodiment of the present application;
FIG. 6 is a diagram of a network architecture in a training phase and a testing phase of a training method for a medical image segmentation model according to an embodiment of the present application;
FIG. 7 is a block diagram of a medical image segmentation apparatus according to an embodiment of the present application;
FIG. 8 is a block diagram of a training apparatus for a medical image segmentation model according to an embodiment of the present application;
FIG. 9 is a hardware block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the embodiments shown in the drawings. The application is not limited to these embodiments, and structural, methodological or functional changes made by those skilled in the art according to these embodiments are included in the scope of the present disclosure.
The terms "first," "second," "third," "fourth," and the like in the description and claims of this application and in the above-described drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "corresponding" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
With increasingly fierce competition in the global pharmaceutical market, the strong demand of pharmaceutical enterprises for controlling R&D, production and sales costs and for improving efficiency has promoted the emergence and development of the pharmaceutical outsourcing industry. A Site Management Organization (SMO) in the pharmaceutical outsourcing industry provides professional services for the clinical trials of pharmaceutical enterprises. A professional Clinical Research Coordinator (CRC) of the SMO is assigned to the clinical trial site, supporting daily non-clinical work under the direction of the Principal Investigator (PI). Among the services provided by an SMO, independent image evaluation is one of the important items.
In the development of new drugs, clinical data of subjects are required as the basis for evaluating drug efficacy, and only drugs that pass clinical trials can subsequently be marketed. Taking the development of new oncology drugs as an example, independent image evaluation is designated by the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) as a recommended method for evaluating the efficacy of new chemotherapeutic drugs. In this process, the medical imaging specialist of the SMO submits the medical image data of each visit to an Independent Review Committee (IRC) for independent film reading, so as to assess the treatment effect of the test drug on the tumor.
In recent years, with the rapid development and continued deployment of Artificial Intelligence (AI) technology, great potential has been released in tumor segmentation within the independent image evaluation process, providing digital and intelligent support for medical image processing along the two dimensions of quality assurance and speed improvement.
Based on this, in a specific scenario example, a medical image segmentation method is proposed. The medical image segmentation method may run on an independent image evaluation platform, which may be deployed on a server. At a study center (e.g., a hospital), a doctor or researcher performs a medical image examination of a specific body part (which may be referred to as a body region) of a subject with a medical image acquisition device to obtain medical image data. The clinical research coordinator (CRC) uploads the collected medical image data to a server or a computer device communicatively connected to the server. The collected medical image data includes medical image sequences from a series of visits of a subject; a medical image sequence comprises a plurality of medical images captured of a body region. The server may be deployed with a medical image segmentation model that processes the images and segments the region of interest from them using optical flow features, spatial features, temporal features and the like.
Referring to fig. 1, in a typical system architecture to which the present application is applied, a server and a terminal may be included. The user may use the terminal to interact with the server, for example by instructing the server to receive a sequence of target images via the terminal, to receive or send a message, etc. The medical image segmentation method disclosed by the application can be executed by a server, and accordingly, the medical image segmentation device disclosed by the application can be arranged in the server.
The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
In a system architecture in which the terminal can provide matching calculation power, the medical image segmentation method disclosed by the application can also be directly executed by the terminal, and accordingly, the medical image segmentation device disclosed by the application can be arranged in the terminal.
Referring to fig. 2, an embodiment of the medical image segmentation method of the present application is described. In this embodiment, the method includes:
s11, acquiring a target image sequence, and performing feature extraction on a plurality of images to obtain a plurality of feature maps.
With reference to fig. 3, according to the image collection scheme, the target object may be examined by various technical means such as CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) to generate a corresponding medical image sequence.
In this embodiment, the target image sequence may be saved as DICOM images (i.e., DICOM files). Each medical image in a CT scan sequence is stored as one DICOM file, so an acquired image series (for example, of the brain or the whole body) is stored as a corresponding number of DICOM files. A DICOM file here refers to a separately stored file (e.g., a file with the .dcm suffix). The image data of each DICOM file corresponds to one image slice in the medical image sequence, and the image sequences may correspond to the same examination (study) of a subject.
Taking a CT image sequence as the target image sequence as an example, the medical image sequence may be read using software packages such as pydicom, dcm4che or DCMTK; it can be understood that each programming language has one or more corresponding software packages for parsing DICOM-format medical image sequences.
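For illustration only, a CT series stored as one .dcm file per slice could be loaded into a 3D volume with pydicom roughly as follows; the directory name and the sorting key are assumptions, not details from the patent:

```python
# Hypothetical sketch: load a DICOM series (one .dcm file per slice) with
# pydicom. The directory name "ct_series" is an illustrative assumption.
from pathlib import Path

import numpy as np
import pydicom

slices = [pydicom.dcmread(p) for p in Path("ct_series").glob("*.dcm")]
# Order slices along the scan axis using the z component of ImagePositionPatient.
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
# Apply the rescale tags (if present) to obtain Hounsfield-like values.
volume = np.stack([
    s.pixel_array * float(getattr(s, "RescaleSlope", 1.0))
    + float(getattr(s, "RescaleIntercept", 0.0))
    for s in slices
])
print(volume.shape)  # (num_slices, rows, cols)
```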
The feature extraction of the image may be understood as a dimension reduction process on the image, and the image may be subjected to convolution processing, pooling processing, and the like, which is not limited in this embodiment.
For example, the image may be subjected to spatial feature extraction by two cascaded feature extraction layers in a feature extraction network, and the original image may be converted into a grayscale natural image. The image feature extraction network may be a neural network model such as VGGNet (Visual Geometry Group Network), ResNet (Residual Network) or DenseNet (Densely Connected Convolutional Network), but it should be understood that the neural network of this embodiment is not limited to the types listed above. Exemplarily, each feature extraction layer used for dimensionality reduction may include a convolution (Conv) layer, a batch normalization (BatchNorm) layer, an activation (ReLU) layer and a pooling layer.
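As a minimal sketch of one such feature extraction layer (the channel counts and kernel sizes are assumptions for illustration, not values fixed by the patent):

```python
# Minimal sketch of one feature extraction layer: Conv -> BatchNorm -> ReLU ->
# Pooling. Channel counts and kernel sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FeatureExtractionLayer(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),  # halves the spatial resolution
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

layer = FeatureExtractionLayer(1, 32)
print(layer(torch.randn(1, 1, 256, 256)).shape)  # torch.Size([1, 32, 128, 128])
```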
S12, performing M-layer feature dimensionality reduction on the feature map to obtain M intermediate-layer feature maps.
Similarly, the feature dimensionality reduction here may use the feature extraction operation of step S11, where the input of the first layer of feature dimensionality reduction is the original feature map and the input of each subsequent layer is the output of the previous layer; that is, downsampling is performed stage by stage. In this embodiment, through the 1st, 2nd, ..., M-th layers of feature dimensionality reduction, the resolution of the corresponding 1st, 2nd, ..., M-th intermediate-layer feature maps gradually decreases.
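A sketch of the cascaded M-layer dimensionality reduction, under the assumption that each stage uses two stride-2 convolutions (a 4x reduction per stage, which matches the 128 -> 32 -> 8 size example given further below); all channel counts are illustrative:

```python
# Sketch of M-layer feature dimensionality reduction. Each stage downsamples by
# 4x via two stride-2 convolutions; sizes and channels are assumptions.
import torch
import torch.nn as nn

def down_stage(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
    )

M = 2
stages = nn.ModuleList([down_stage(32 * 4**i, 32 * 4**(i + 1)) for i in range(M)])

x = torch.randn(1, 32, 128, 128)      # original feature map
intermediate_maps = []
for stage in stages:                  # each stage consumes the previous output
    x = stage(x)
    intermediate_maps.append(x)
print([m.shape[-1] for m in intermediate_maps])  # [32, 8]
```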
S13, performing N-layer feature dimensionality reduction on the optical flow feature map extracted for the region of interest in the feature map to obtain N intermediate-layer optical flow feature maps.
Extracting optical flow features for the region of interest in the feature map yields local sparse optical flow features, which saves computing resources compared with extracting optical flow features for the whole feature map. It should be noted that, in the embodiments of the present application, the optical flow features extracted for the region of interest in a feature map may be computed by comparing that region of interest with the region of interest in another feature map of the target image sequence.
The optical flow features may be represented by optical flow vectors. The motion of pixel positions can be determined using the temporal variation and correlation of pixel intensity data in the image sequence, and the motion speed of each pixel point is obtained as optical flow information. The optical flow information corresponding to the region of interest between the optical flow reference feature map and the current feature map can be acquired as the optical flow information corresponding to the region of interest in the current feature map. Optical flow extraction may be, for example, gradient-based, matching-based, energy-based or phase-based. For example, the Lucas-Kanade method assumes that the motion vector remains constant within a certain spatial neighborhood and estimates the optical flow using weighted least squares. The optical flow calculation method may be a dense optical flow extraction method, or an extraction method based on the deep learning model FlowNet (an optical flow neural network), which predicts optical flow with a convolutional neural network and models optical flow prediction as a supervised deep learning problem. There are also algorithms such as the HS (Horn-Schunck) optical flow algorithm and the pyramidal LK algorithm, and this embodiment is not limited in this regard.
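As one concrete possibility (the patent does not fix the algorithm), sparse pyramidal Lucas-Kanade flow restricted to a region of interest could be computed with OpenCV roughly as below; the ROI mask, frame contents and all parameters are assumptions:

```python
# Sketch: sparse pyramidal Lucas-Kanade optical flow restricted to a region of
# interest. prev_img/next_img stand in for consecutive grayscale frames and
# roi_mask marks the ROI; every parameter value is an illustrative assumption.
import cv2
import numpy as np

prev_img = np.zeros((128, 128), np.uint8)   # stand-in for the reference frame
next_img = np.zeros((128, 128), np.uint8)   # stand-in for the current frame
roi_mask = np.zeros((128, 128), np.uint8)
roi_mask[40:90, 40:90] = 255                # hypothetical region of interest

# Track only points inside the ROI: this yields local sparse optical flow.
pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=100, qualityLevel=0.01,
                              minDistance=5, mask=roi_mask)
if pts is not None:
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_img, next_img, pts, None, winSize=(15, 15), maxLevel=2)
    flow_vectors = (next_pts - pts)[status.ravel() == 1]  # per-point motion
```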
Since optical flow feature extraction does not reduce the dimension of the original feature map, the dimension of the optical flow feature map is the same as that of the original feature map; thereafter, each layer of feature dimensionality reduction of the optical flow feature map can apply the same reduction as the feature dimensionality reduction of the feature map at the corresponding layer. In this way, the N intermediate-layer optical flow feature maps and the M intermediate-layer feature maps can be dimensionally matched at corresponding levels.
Taking an original feature map of size 128×128 as an example, the intermediate-layer feature map after one layer of feature dimensionality reduction is 32×32, and after two layers it is 8×8. The optical flow feature map extracted from the original feature map remains 128×128; after one layer of feature dimensionality reduction the intermediate optical flow feature map is 32×32, and after two layers it is 8×8.
Of course, in alternative embodiments, the dimensionality reduction of each layer of the feature map and the optical flow feature map may also be set to be different, and will not be described herein again.
As shown in fig. 3, the network structure includes two branch networks, where the first branch network and the second branch network correspond to the feature dimensionality reduction of the feature map in step S12 and the feature dimensionality reduction of the optical flow feature map in step S13, respectively. The first branch network and the second branch network may each use two dimension reduction layers per reduction stage, and the dimension reduction layers in the two branch networks may be identical or shared.
In this embodiment, at least one of the N intermediate-layer optical flow feature maps is fused with the optical flow features extracted for the region of interest on the intermediate-layer feature map of the corresponding dimension. Taking the network structure shown in fig. 3 as an example, in the second branch network, after one layer of feature dimensionality reduction the optical flow feature map is fused with the optical flow features extracted for the region of interest on intermediate-layer feature map 1, yielding intermediate-layer optical flow feature map 1; intermediate-layer optical flow feature map 1 then undergoes another layer of feature dimensionality reduction to directly obtain intermediate-layer optical flow feature map 2.
Specifically, when performing the first N-1 layers of feature dimensionality reduction on the optical flow feature map, the optical flow information extracted on the intermediate-layer feature map of the corresponding dimension is fused in to obtain the corresponding intermediate-layer optical flow feature map. That is, the intermediate-layer optical flow feature map obtained by the last layer of feature dimensionality reduction is not fused with optical flow information extracted for the region of interest on the intermediate-layer feature map of the corresponding dimension. The term "fusion" here may refer to elementwise addition or multiplication of the optical flow information extracted from the intermediate-layer feature map with the corresponding features in the optical flow feature map.
S14, fusing the intermediate-layer feature map obtained by the M-th layer of feature dimensionality reduction with the intermediate-layer optical flow feature map obtained by the N-th layer of feature dimensionality reduction to obtain a spatial cross feature map of the feature map.
The spatial cross feature map realizes the spatial feature fusion of the image and the optical flow. Similarly, the term "fusion" as used herein may also refer to the calculation of the sum or product of the corresponding features in the intermediate-layer feature map and the intermediate-layer optical flow feature map.
When the intermediate-layer feature map obtained by the M-th layer of feature dimensionality reduction and the intermediate-layer optical flow feature map obtained by the N-th layer of feature dimensionality reduction have the same dimensions, their features correspond one-to-one and can be directly added or multiplied. If their dimensions differ, the dimensions may be unified in advance, which will not be repeated here.
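A minimal sketch of this fusion step, assuming elementwise multiplication and bilinear interpolation to unify mismatched sizes (both are illustrative choices within the options the text allows):

```python
# Sketch: fuse the M-th intermediate-layer feature map with the N-th
# intermediate-layer optical flow feature map into a spatial cross feature map.
# Multiplicative fusion and bilinear resizing are illustrative assumptions.
import torch
import torch.nn.functional as F

feat = torch.randn(1, 256, 8, 8)         # intermediate-layer feature map (layer M)
flow_feat = torch.randn(1, 256, 16, 16)  # intermediate-layer optical flow map (layer N)

if flow_feat.shape[-2:] != feat.shape[-2:]:
    # Unify dimensions in advance when the two maps do not match.
    flow_feat = F.interpolate(flow_feat, size=feat.shape[-2:],
                              mode="bilinear", align_corners=False)

spatial_cross = feat * flow_feat          # or feat + flow_feat for additive fusion
```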
S15, segmenting the region of interest from the plurality of images in the target image sequence based on the spatial cross feature maps of the plurality of feature maps.
For an image sequence, one scan may cover a single body part or multiple body parts, for example at least one of head, neck, chest, abdomen, pelvis, head-neck, neck-chest, chest-abdomen, abdomen-pelvis, neck-chest-abdomen, chest-abdomen-pelvis, head-neck-chest-abdomen-pelvis, and so on. Head, neck, chest, abdomen and pelvis can be understood as single body parts, while combinations such as head-neck-chest-abdomen-pelvis can be understood as composite body parts.
It can be understood that the scan times of the plurality of images in the image sequence also differ, so segmenting the region of interest based on the spatial cross feature maps of the plurality of feature maps in the image sequence effectively adds a temporal feature (along the scanning-direction dimension), expanding the 2D segmentation network into a 3D segmentation network and increasing the reliability of region-of-interest segmentation.
Specifically, a plurality of spatial cross feature maps may be fused first to obtain a multi-dimensional spatiotemporal cross feature map; the multi-dimensional spatiotemporal cross feature map is then respectively fused with the plurality of spatial cross feature maps and decoded, so as to segment the region of interest from the plurality of images in the target image sequence. Here, fusing the multi-dimensional spatiotemporal cross feature map with the spatial cross feature maps can enhance the depth of the spatiotemporal feature fusion and the saliency of the tumor features of the region of interest to be segmented.
Similarly, in the fusion process, the corresponding features in the plurality of spatial cross feature maps may be weighted-averaged to obtain the multi-dimensional spatiotemporal cross feature map; the multi-dimensional spatiotemporal cross feature map is then multiplied with the corresponding features in each of the spatial cross feature maps before decoding, so as to segment the region of interest from the plurality of images in the target image sequence.
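A sketch of this two-step fusion, assuming equal weights for the weighted average (the weighting scheme is not specified by the text):

```python
# Sketch: fuse several spatial cross feature maps into one multi-dimensional
# spatiotemporal cross feature map by weighted averaging, then multiply it back
# into each spatial map before decoding. Equal weights are an assumption.
import torch

spatial_cross_maps = [torch.randn(1, 256, 8, 8) for _ in range(3)]  # one per image
weights = torch.full((len(spatial_cross_maps),), 1.0 / len(spatial_cross_maps))

spatiotemporal = sum(w * m for w, m in zip(weights, spatial_cross_maps))
fused_for_decoding = [spatiotemporal * m for m in spatial_cross_maps]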
In various embodiments of the present application, the decoding part may be implemented using the conventional upsampling or deconvolution methods used by UNet, FCN (Fully Convolutional Network), deep convolutional neural networks and the like.
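For illustration, a conventional deconvolution decoder of the kind used by UNet or FCN might look as follows; the stage count and channel widths are assumptions chosen to map an 8×8 map back to 128×128:

```python
# Sketch of a deconvolution decoder: each ConvTranspose2d stage doubles the
# spatial resolution; channel counts are illustrative assumptions.
import torch
import torch.nn as nn

decoder = nn.Sequential(
    nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2), nn.ReLU(inplace=True),
    nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(inplace=True),
    nn.Conv2d(16, 1, kernel_size=1),  # 1-channel logit map for the ROI
)
mask_logits = decoder(torch.randn(1, 256, 8, 8))  # -> (1, 1, 128, 128)
mask = torch.sigmoid(mask_logits) > 0.5           # binary segmentation mask
```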
Referring to fig. 4, an embodiment of a training method of a medical image segmentation model according to the present application is described. The medical image segmentation model mentioned in the present application may perform image segmentation using the medical image segmentation method in the above embodiments.
In the training phase of the medical image segmentation model of the application:
and S211, acquiring a training sample set.
With reference to fig. 6, the training sample set includes a plurality of first feature maps F_i (i = 1, ..., 3) and first region-of-interest masks corresponding to the first feature maps. The first region-of-interest mask may be obtained by manually labeling the binarized image of the first feature map; for example, the portion corresponding to the first region of interest is labeled 1, and the non-region-of-interest portion is labeled 0.
S212, determining a first region of interest on the corresponding first feature map based on the first region of interest mask.
In particular, a product calculation may be performed between the first region-of-interest mask and the corresponding pixels in the first feature map. In the first region-of-interest mask, the value corresponding to the first region of interest is 1 and the value of the remaining region is 0. Through the product calculation, the pixel values of the non-region-of-interest portion of the first feature map become 0 while the pixel values of the first region of interest are unchanged, thereby determining the first region of interest on the first feature map.
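This masking step is a simple elementwise product; a minimal sketch (shapes are illustrative assumptions):

```python
# Sketch: determine the first region of interest via the elementwise product
# of the 0/1 mask with the feature map; non-ROI pixels become 0, ROI pixels
# keep their values. All shapes are illustrative.
import numpy as np

feature_map = np.random.rand(128, 128).astype(np.float32)
roi_mask = np.zeros((128, 128), np.float32)
roi_mask[40:90, 40:90] = 1.0            # hypothetical annotated ROI

first_roi = roi_mask * feature_map      # pixelwise product
```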
S213, extracting the optical flow information of the first region of interest to obtain an optical flow feature map.
Similarly, the optical flow information can be extracted based on the HS optical flow algorithm, the Lucas-Kanade algorithm, the pyramidal LK algorithm and the like, which will not be repeated here.
Referring to fig. 5 and 6, in the testing phase of the medical image segmentation model of the present application:
and S221, obtaining a test sample set.
The test sample set includes a plurality of second feature maps.
S222, performing feature normalization on the second feature map in the test sample set based on a set threshold value to obtain a corresponding binary image.
In the feature normalization of the second feature map, a suitable threshold, for example 0.9, may be set according to experience or the scene. That is, during normalization, if the normalized value of a pixel in the second feature map is greater than 0.9, that pixel is set to 1; otherwise it is set to 0.
S223, reserving the image region with the largest area in the binarized image as the second region-of-interest mask.
In this embodiment, the second region of interest in the binarized image is determined based on area, that is, the image region with the largest area is considered to have the highest confidence of being the second region of interest. Here, the region with the largest area in the binarized image is retained as the second region-of-interest mask, and the pixel values of the remaining regions of the binarized image may be set to 0.
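A sketch of steps S222-S223 together, normalizing, thresholding at 0.9, and keeping only the largest connected region; the use of scipy's connected-component labeling is an assumption, since the text does not name a tool:

```python
# Sketch of the test-phase mask construction: normalize, threshold at 0.9, and
# keep only the largest connected region as the second region-of-interest mask.
import numpy as np
from scipy import ndimage

feat = np.random.rand(128, 128).astype(np.float32)  # stand-in second feature map
norm = (feat - feat.min()) / (np.ptp(feat) + 1e-8)
binary = norm > 0.9                                  # set threshold, e.g. 0.9

labels, num = ndimage.label(binary)                  # connected components
if num > 0:
    sizes = ndimage.sum(binary, labels, index=list(range(1, num + 1)))
    roi_mask = labels == (int(np.argmax(sizes)) + 1)  # largest-area region only
else:
    roi_mask = np.zeros_like(binary)
```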
S224, determining a second region of interest on the corresponding second feature map based on the second region-of-interest mask.
S225, extracting the optical flow information of the second region of interest to obtain an optical flow feature map.
Similarly, the method for determining the second region of interest and extracting the optical flow information of the second region of interest in the second feature map can refer to the method described in the above model training stage, and is not described herein again.
In the training method of the medical image segmentation model of the present application, parts that are not involved in the improvement are not specifically explained. As known to those skilled in the art, during model training the training effect can also be verified on a validation sample set with the objective of minimizing a loss function. Exemplarily, the loss function may be a cross-entropy loss function, a Dice loss function, or the like.
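As a minimal sketch of a Dice loss of the kind mentioned above (the (B, 1, H, W) layout and sigmoid probabilities are assumptions):

```python
# Sketch of a Dice loss; pred holds sigmoid probabilities and target a binary
# ground-truth mask, both shaped (B, 1, H, W). Layout is an assumption.
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

loss = dice_loss(torch.rand(2, 1, 128, 128),
                 torch.randint(0, 2, (2, 1, 128, 128)).float())
```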
Referring to fig. 7, an embodiment of a medical image segmentation apparatus according to the present application is described. In this embodiment, the medical image segmentation apparatus comprises a feature extraction module, a feature dimensionality reduction module, a first optical flow extraction module, a fusion module and a segmentation module.
The feature extraction module is used for acquiring a target image sequence and performing feature extraction on a plurality of images to obtain a plurality of feature maps, wherein the feature maps comprise regions of interest; the feature dimensionality reduction module is used for performing M layers of feature dimensionality reduction on the feature map to obtain M intermediate-layer feature maps; the first optical flow extraction module is configured to perform N layers of feature dimensionality reduction on the optical flow feature map extracted for the region of interest in the feature map to obtain N intermediate-layer optical flow feature maps, where at least one of the N intermediate-layer optical flow feature maps is fused with the optical flow features extracted for the region of interest on the intermediate-layer feature map of the corresponding dimension; the fusion module is used for fusing the intermediate-layer feature map obtained by the M-th layer of feature dimensionality reduction with the intermediate-layer optical flow feature map obtained by the N-th layer of feature dimensionality reduction to obtain a spatial cross feature map of the feature map; and the segmentation module is used for segmenting the region of interest from a plurality of images in the target image sequence based on the spatial cross feature maps of the plurality of feature maps.
In an embodiment, the segmentation module is specifically configured to fuse a plurality of the spatial cross feature maps to obtain a multi-dimensional spatiotemporal cross feature map, and to decode after respectively fusing the multi-dimensional spatiotemporal cross feature map with the plurality of spatial cross feature maps, so as to segment the region of interest from the plurality of images in the target image sequence.
In an embodiment, the segmentation module is specifically configured to perform a weighted average of the corresponding features in the plurality of spatial cross feature maps to obtain a multi-dimensional spatiotemporal cross feature map, and to decode after respectively multiplying the multi-dimensional spatiotemporal cross feature map with the corresponding features in the plurality of spatial cross feature maps, so as to segment the region of interest from the plurality of images in the target image sequence.
In one embodiment, the first optical flow extraction module is specifically configured to, when performing the first N-1 layers of feature dimensionality reduction on the optical flow feature map, fuse the optical flow information extracted for the region of interest on the intermediate-layer feature map of the corresponding dimension to obtain the corresponding intermediate-layer optical flow feature map.
Referring to fig. 8, an embodiment of a training apparatus for a medical image segmentation model according to the present application is described. The medical image segmentation model performs image segmentation based on the medical image segmentation method as described in the above embodiments. In the embodiment, the training device of the medical image segmentation model comprises an acquisition module, a determination module and a second optical flow extraction module.
The acquisition module is used for acquiring a training sample set, wherein the training sample set comprises a plurality of first feature maps and first region-of-interest masks corresponding to the first feature maps; the determination module is used for determining a first region of interest on the corresponding first feature map based on the first region of interest mask; the second optical flow extraction module is used for extracting the optical flow information of the first region of interest so as to obtain an optical flow feature map.
The obtaining module is further configured to obtain a test sample set, where the test sample set includes a plurality of second feature maps; the determining module is further used for performing feature normalization on the second feature map in the test sample set based on a set threshold value to obtain a corresponding binary image; reserving an image area with the largest area in the binarized image as a second region-of-interest mask; determining a second region of interest on a corresponding second feature map based on the second region of interest mask; the second optical flow extraction module is further used for extracting optical flow information of the second region of interest to obtain an optical flow feature map.
As described above with reference to fig. 1 to 6, a medical image segmentation method and a training method of a medical image segmentation model according to an embodiment of the present specification are described. The details mentioned in the above description of the method embodiments are also applicable to the medical image segmentation apparatus and the training apparatus of the medical image segmentation model according to the embodiments of the present specification. The above medical image segmentation apparatus and the training apparatus for the medical image segmentation model may be implemented by hardware, or may be implemented by software, or a combination of hardware and software.
Fig. 9 illustrates a hardware configuration diagram of an electronic device according to an embodiment of the present specification. As shown in fig. 9, the electronic device 30 may include at least one processor 31, a storage 32 (e.g., a non-volatile storage), a memory 33 and a communication interface 34, which are connected together via an internal bus 35. The at least one processor 31 executes at least one computer-readable instruction stored or encoded in the storage 32.
It should be understood that the computer-executable instructions stored in the memory 32, when executed, cause the at least one processor 31 to perform the various operations and functions described above in connection with fig. 1-6 in the various embodiments of the present description.
In embodiments of the present description, the electronic device 30 may include, but is not limited to: personal computers, server computers, workstations, desktop computers, laptop computers, notebook computers, mobile electronic devices, smart phones, tablet computers, cellular phones, personal Digital Assistants (PDAs), handsets, messaging devices, wearable electronic devices, consumer electronic devices, and the like.
According to one embodiment, a program product, such as a machine-readable medium, is provided. A machine-readable medium may have instructions (i.e., elements described above as being implemented in software) that, when executed by a machine, cause the machine to perform various operations and functions described above in connection with fig. 1-6 in the various embodiments of the present specification. Specifically, a system or apparatus may be provided which is provided with a readable storage medium on which software program code implementing the functions of any of the above embodiments is stored, and causes a computer or processor of the system or apparatus to read out and execute instructions stored in the readable storage medium.
In this case, the program code itself read from the readable medium can realize the functions of any of the above-described embodiments, and thus the machine-readable code and the readable storage medium storing the machine-readable code form part of this specification.
Examples of the readable storage medium include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROMs, CD-R, CD-RWs, DVD-ROMs, DVD-RAMs, DVD-RWs), magnetic tapes, nonvolatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer or from the cloud via a communications network.
It will be understood by those skilled in the art that various changes and modifications may be made in the above-disclosed embodiments without departing from the spirit of the invention. Accordingly, the scope of the present description should be limited only by the attached claims.
It should be noted that not all steps and units in the above flows and system structure diagrams are necessary, and some steps or units may be omitted according to actual needs. The execution order of the steps is not fixed, and can be determined as required. The apparatus structures described in the above embodiments may be physical structures or logical structures, that is, some units may be implemented by the same physical client, or some units may be implemented by multiple physical clients, or some units may be implemented by some components in multiple independent devices.
In the above embodiments, the hardware units or modules may be implemented mechanically or electrically. For example, a hardware unit, module or processor may comprise permanently dedicated circuitry or logic (such as a dedicated processor, FPGA or ASIC) to perform the corresponding operations. The hardware elements or processors may also comprise programmable logic or circuitry (e.g., a general-purpose processor or other programmable processor) that may be temporarily configured by software to perform corresponding operations. The specific implementation (mechanical, or dedicated permanent, or temporarily set) may be determined based on cost and time considerations.
The detailed description set forth above in connection with the appended drawings describes exemplary embodiments but does not represent all embodiments that may be practiced or fall within the scope of the claims. The term "exemplary" used throughout this specification means "serving as an example, instance, or illustration," and does not mean "preferred" or "advantageous" over other embodiments. The detailed description includes specific details for the purpose of providing an understanding of the described technology. However, the techniques may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described embodiments.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of medical image segmentation, the method comprising:
acquiring a target image sequence, and performing feature extraction on a plurality of images to obtain a plurality of feature maps, wherein the feature maps comprise regions of interest;
performing M-layer feature dimensionality reduction on the feature map to obtain M intermediate-layer feature maps;
performing N-layer feature dimensionality reduction on the optical flow feature map extracted from the region of interest in the feature map to obtain N intermediate-layer optical flow feature maps, wherein at least one of the N intermediate-layer optical flow feature maps is fused with optical flow features extracted from the region of interest on the intermediate-layer feature map with corresponding dimensionality;
fusing an intermediate layer feature map obtained by dimensionality reduction of the M-th layer of features and an intermediate layer optical flow feature map obtained by dimensionality reduction of the N-th layer of features to obtain a spatial cross feature map of the feature map;
and segmenting the region of interest from a plurality of images in the target image sequence based on the spatial cross feature maps of the plurality of feature maps.
2. The medical image segmentation method according to claim 1, wherein segmenting the region of interest from a plurality of images in the target image sequence based on the spatial cross feature maps of the plurality of feature maps specifically comprises:
fusing a plurality of the spatial cross feature maps to obtain a multi-dimensional spatiotemporal cross feature map;
and decoding after respectively fusing the multi-dimensional spatiotemporal cross feature map with the plurality of spatial cross feature maps, so as to segment the region of interest from the plurality of images in the target image sequence.
3. The medical image segmentation method according to claim 2, characterized in that the method specifically includes:
performing a weighted average of the corresponding features in the plurality of spatial cross feature maps to obtain a multi-dimensional spatiotemporal cross feature map;
and decoding after respectively multiplying the multi-dimensional spatiotemporal cross feature map with the corresponding features in the plurality of spatial cross feature maps, so as to segment the region of interest from the plurality of images in the target image sequence.
4. The medical image segmentation method according to claim 1, wherein performing N-layer feature dimensionality reduction on the optical flow feature map extracted for the region of interest in the feature map to obtain N intermediate-layer optical flow feature maps specifically includes:
when performing the first N-1 layers of feature dimensionality reduction on the optical flow feature map, fusing the optical flow information extracted for the region of interest on the intermediate-layer feature map of the corresponding dimension to obtain the corresponding intermediate-layer optical flow feature map.
5. A method for training a medical image segmentation model, wherein in a model training phase, the method comprises:
acquiring a training sample set, wherein the training sample set comprises a plurality of first feature maps and first region-of-interest masks corresponding to the first feature maps;
determining a first region of interest on a corresponding first feature map based on the first region of interest mask;
extracting optical flow information of the first region of interest to obtain an optical flow feature map;
wherein the medical image segmentation model performs image segmentation based on the medical image segmentation method according to any one of claims 1 to 4.
6. The method for training a medical image segmentation model according to claim 5, wherein in a model testing phase, the method further comprises:
obtaining a test sample set, wherein the test sample set comprises a plurality of second feature maps;
performing feature normalization on the second feature map in the test sample set based on a set threshold value to obtain a corresponding binary image;
reserving an image area with the largest area in the binarized image as a second region-of-interest mask;
determining a second region of interest on a corresponding second feature map based on the second region of interest mask;
and extracting optical flow information of the second region of interest to obtain an optical flow feature map.
7. A medical image segmentation apparatus, comprising:
a feature extraction module, configured to acquire a target image sequence and perform feature extraction on a plurality of images to obtain a plurality of feature maps, wherein the feature maps include a region of interest;
a feature dimensionality reduction module, configured to perform M layers of feature dimensionality reduction on the feature map to obtain M intermediate-layer feature maps;
a first optical flow extraction module, configured to perform N layers of feature dimensionality reduction on the optical flow feature map extracted for the region of interest in the feature map to obtain N intermediate-layer optical flow feature maps, wherein at least one of the N intermediate-layer optical flow feature maps is fused with the optical flow features extracted for the region of interest on the intermediate-layer feature map of the corresponding dimensionality;
a fusion module, configured to fuse the intermediate-layer feature map obtained by the M-th layer of feature dimensionality reduction and the intermediate-layer optical flow feature map obtained by the N-th layer of feature dimensionality reduction, to obtain a spatial cross feature map of the feature map; and
a segmentation module, configured to segment the region of interest from the plurality of images in the target image sequence based on the spatial cross feature maps of the plurality of feature maps.
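To show how the modules of claim 7 compose, a schematic pipeline follows; every callable signature here is an assumed interface for illustration, not the claimed API:

```python
def segment_sequence(images, extract, reduce_m, flow, reduce_n, fuse, decode):
    """Schematic composition of the claimed modules: feature extraction,
    M-layer reduction, ROI optical flow with N-layer reduction, fusion of
    the deepest intermediates, and sequence-level decoding into masks."""
    cross_maps = []
    for img in images:
        feat = extract(img)                    # feature extraction module
        feat_layers = reduce_m(feat)           # M intermediate feature maps
        flow_layers = reduce_n(flow(feat))     # N intermediate flow maps
        cross_maps.append(fuse(feat_layers[-1], flow_layers[-1]))  # fusion module
    return decode(cross_maps)                  # segmentation module
```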
8. A training apparatus for a medical image segmentation model, characterized by comprising:
an acquisition module, configured to acquire a training sample set, wherein the training sample set comprises a plurality of first feature maps and first region-of-interest masks corresponding to the first feature maps;
a determination module, configured to determine a first region of interest on a corresponding first feature map based on the first region-of-interest mask;
a second optical flow extraction module, configured to extract optical flow information of the first region of interest to obtain an optical flow feature map;
wherein the medical image segmentation model performs image segmentation based on the medical image segmentation method according to any one of claims 1 to 4.
9. An electronic device, comprising:
at least one processor; and
a memory storing instructions that, when executed by the at least one processor, cause the at least one processor to perform the method of medical image segmentation of any one of claims 1 to 4 or the method of training a medical image segmentation model of any one of claims 5 to 6.
10. A machine-readable storage medium storing executable instructions that, when executed, cause the machine to perform a method of medical image segmentation of any one of claims 1 to 4, or a method of training a medical image segmentation model of any one of claims 5 to 6.
CN202211638011.3A 2022-12-20 2022-12-20 Image segmentation method, model training method, device, equipment and storage medium Active CN115631196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211638011.3A CN115631196B (en) 2022-12-20 2022-12-20 Image segmentation method, model training method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115631196A true CN115631196A (en) 2023-01-20
CN115631196B CN115631196B (en) 2023-03-10

Family

ID=84911166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211638011.3A Active CN115631196B (en) 2022-12-20 2022-12-20 Image segmentation method, model training method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115631196B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060228002A1 (en) * 2005-04-08 2006-10-12 Microsoft Corporation Simultaneous optical flow estimation and image segmentation
CN111027472A (en) * 2019-12-09 2020-04-17 北京邮电大学 Video identification method based on fusion of video optical flow and image space feature weight
CN111652081A (en) * 2020-05-13 2020-09-11 电子科技大学 Video semantic segmentation method based on optical flow feature fusion
CN113570608A (en) * 2021-06-30 2021-10-29 北京百度网讯科技有限公司 Target segmentation method and device and electronic equipment
CN113705575A (en) * 2021-10-27 2021-11-26 北京美摄网络科技有限公司 Image segmentation method, device, equipment and storage medium
CN114037839A (en) * 2021-10-21 2022-02-11 长沙理工大学 Small target identification method, system, electronic equipment and medium
US20220051416A1 (en) * 2019-08-19 2022-02-17 Tencent Technology (Shenzhen) Company Limited Medical image segmentation method and apparatus, computer device, and readable storage medium
US20220230282A1 (en) * 2021-01-12 2022-07-21 Samsung Electronics Co., Ltd. Image processing method, image processing apparatus, electronic device and computer-readable storage medium
CN114821105A (en) * 2022-05-05 2022-07-29 南昌航空大学 Optical flow calculation method combining image pyramid guidance and circular cross attention
CN115272086A (en) * 2022-09-29 2022-11-01 杭州太美星程医药科技有限公司 Medical image stitching method and device, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MICHAEL MU-CHIEN HSU ET AL.: "Using Segmentation to Enhance Frame Prediction in a Multi-Scale Spatial-Temporal Feature Extraction Network" *
HE Xiaoyun et al.: "Video facial expression recognition based on an attention mechanism" *
LI Hongjun; DING Yupeng; LI Chaobo; ZHANG Shibing: "Research on action recognition based on feature-fusion temporal segment networks" *

Similar Documents

Publication Publication Date Title
US10482603B1 (en) Medical image segmentation using an integrated edge guidance module and object segmentation network
Poudel et al. Recurrent fully convolutional neural networks for multi-slice MRI cardiac segmentation
US10210613B2 (en) Multiple landmark detection in medical images based on hierarchical feature learning and end-to-end training
CN109003267B (en) Computer-implemented method and system for automatically detecting target object from 3D image
Guan et al. Thigh fracture detection using deep learning method based on new dilated convolutional feature pyramid network
CN106339571B (en) Artificial neural network for classifying medical image data sets
Tang et al. High-resolution 3D abdominal segmentation with random patch network fusion
An et al. Medical image segmentation algorithm based on multilayer boundary perception-self attention deep learning model
Wang et al. Context-aware spatio-recurrent curvilinear structure segmentation
CN114120030A (en) Medical image processing method based on attention mechanism and related equipment
Sarica et al. A dense residual U-net for multiple sclerosis lesions segmentation from multi-sequence 3D MR images
Pradhan et al. Machine learning model for multi-view visualization of medical images
CN113724185B (en) Model processing method, device and storage medium for image classification
Salih et al. The local ternary pattern encoder–decoder neural network for dental image segmentation
US20230005158A1 (en) Medical image segmentation and atlas image selection
Daza et al. Cerberus: A multi-headed network for brain tumor segmentation
CN111209946B (en) Three-dimensional image processing method, image processing model training method and medium
Yang et al. Dual-path network for liver and tumor segmentation in CT images using Swin Transformer encoding approach
CN115272086B (en) Medical image stitching method and device, electronic equipment and storage medium
CN115631196B (en) Image segmentation method, model training method, device, equipment and storage medium
CN116128876A (en) Medical image classification method and system based on heterogeneous domain
CN111369564A (en) Image processing method, model training method and model training device
CN116129184A (en) Multi-phase focus classification method, device, equipment and readable storage medium
Waseem Sabir et al. FibroVit—Vision transformer-based framework for detection and classification of pulmonary fibrosis from chest CT images
CN114241198A (en) Method, device, equipment and storage medium for obtaining local imagery omics characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant