CN114708283A - Image object segmentation method and device, electronic equipment and storage medium - Google Patents

Image object segmentation method and device, electronic equipment and storage medium

Info

Publication number
CN114708283A
CN114708283A (application CN202210421832.5A)
Authority
CN
China
Prior art keywords
image
target
segmentation
registration
result
Prior art date
Legal status
Pending
Application number
CN202210421832.5A
Other languages
Chinese (zh)
Inventor
余航
黄文豪
王少康
陈宽
Current Assignee
Infervision Medical Technology Co Ltd
Original Assignee
Infervision Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Infervision Medical Technology Co Ltd filed Critical Infervision Medical Technology Co Ltd
Priority to CN202210421832.5A priority Critical patent/CN114708283A/en
Publication of CN114708283A publication Critical patent/CN114708283A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/11 Region-based segmentation
    • G06N 3/045 Neural networks: combinations of networks
    • G06N 3/084 Learning methods: backpropagation, e.g. using gradient descent
    • G06T 3/02 Affine transformations
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/344 Image registration using feature-based methods involving models
    • G06T 2207/10081 Image acquisition modality: computed x-ray tomography [CT]
    • G06T 2207/20081 Algorithmic details: training; learning
    • G06T 2207/20084 Algorithmic details: artificial neural networks [ANN]
    • G06T 2207/30056 Subject of image: liver; hepatic


Abstract

The application relates to an image target segmentation method and apparatus, an electronic device, and a storage medium. The segmentation method comprises the following steps: performing image target segmentation on a medical image to obtain a first segmentation result; extracting, from the medical image according to the first segmentation result, a target image containing the image target; registering the target image with a preset standard image to obtain a registration result, the preset standard image being an image containing a standard instance of the image target; and performing image target segmentation on the target image according to the registration result to obtain the image target segmentation result. This effectively improves the accuracy of the image target segmentation result and provides a more reliable basis for clinical diagnosis and pathology research.

Description

Image object segmentation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for segmenting an image target, an electronic device, and a storage medium.
Background
Medical image segmentation is a necessary prerequisite for subsequent processing of medical images. Its aim is to segment the regions of a medical image that carry particular clinical meaning, extract relevant features, provide a reliable basis for clinical diagnosis and pathology research, and help doctors make more accurate diagnoses.
However, because the environment around an image target in a medical image is complicated, segmentation results are often unsatisfactory. For example, the liver is surrounded by fat and many other organs, and it is difficult to cleanly separate it from the surrounding abdominal fat and organs, so a doctor cannot make an accurate diagnosis from the result. Improving the accuracy of image target segmentation is therefore of real significance for clinical diagnosis.
Disclosure of Invention
In view of the above, the present application provides a method, an apparatus, an electronic device and a storage medium for segmenting an image target, which can improve the accuracy of the segmentation result of the image target in a medical image.
To achieve this, the application adopts the following technical solutions:
a first aspect of the present application provides a method for segmenting an image target, including:
performing image target segmentation processing on the medical image to obtain a first segmentation result;
extracting a target image containing the image target from the medical image according to the first segmentation result;
carrying out registration processing on the target image and a preset standard image to obtain a registration result; the preset standard image is an image containing a standard image target;
and performing image target segmentation processing on the target image according to the registration result to obtain an image target segmentation result.
Optionally, the registering the target image and the preset standard image to obtain a registration result includes:
and registering the target image and a preset standard image based on a non-rigid body registration method to obtain a registration result.
Optionally, before registering the target image with a preset standard image based on the non-rigid body registration method, the method further includes:
determining an affine transformation relation between the target image and the preset standard image based on a rigid body registration method;
the registering of the target image with the preset standard image based on the non-rigid body registration method to obtain the registration result comprises:
and registering the target image by using the affine transformation relation and the preset standard image based on a non-rigid body registration method to obtain a registration result.
Optionally, the determining an affine transformation relationship between the target image and the preset standard image based on the rigid body registration method includes:
and inputting the target image and the preset standard image into a rigid body registration model which is constructed in advance to obtain the affine transformation relation.
Optionally, the rigid body registration model is obtained by training as follows:
performing rigid body registration on the first sample information by using a pre-constructed rigid body registration model to determine a first loss function and a second loss function; the first sample information includes: a reference image containing a standard image target and a corresponding floating image;
according to the first loss function and the second loss function, parameter correction is carried out on the rigid body registration model;
wherein the first loss function is used for constraining model parameters so that the floating image subjected to model affine transformation is close to the reference image in shape; the second loss function is used to constrain model parameters to prevent an affine transformation relationship of the floating image and the reference image from overfitting.
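The two training losses can be sketched numerically. The following is a minimal stand-in, assuming mean-squared intensity difference for the shape-matching loss and an identity-deviation penalty for the anti-overfitting constraint; the patent does not name its concrete loss functions, so both choices here are illustrative only.

```python
import numpy as np

def similarity_loss(warped: np.ndarray, reference: np.ndarray) -> float:
    """First loss: mean-squared intensity difference, pulling the
    affine-warped floating image toward the reference in shape."""
    return float(np.mean((warped - reference) ** 2))

def affine_regularizer(A: np.ndarray, weight: float = 0.1) -> float:
    """Second loss: penalise deviation of the predicted affine matrix from
    the identity, discouraging extreme transforms (an assumed stand-in for
    the patent's unspecified anti-overfitting constraint)."""
    return weight * float(np.sum((A - np.eye(A.shape[0])) ** 2))
```

During training, the two terms would be summed and minimised together, so the model fits the reference shape without drifting to implausible affine parameters.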
Optionally, the registering of the target image by using the affine transformation relation and the preset standard image based on the non-rigid body registration method to obtain the registration result includes:
and inputting the affine transformation relation, the preset standard image and the target image into a pre-constructed non-rigid registration model to obtain the registration result.
Optionally, the performing image target segmentation processing on the target image according to the registration result to obtain an image target segmentation result includes:
performing image target segmentation processing on the target image according to the registration result to obtain a second segmentation result;
and screening the second segmentation result to obtain the image target segmentation result based on a preset screening rule.
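One common screening rule for organ masks is to keep only the largest connected foreground component. The patent leaves its "preset screening rule" unspecified, so the sketch below (NumPy plus a breadth-first search over 6-connected neighbours) is an assumed, illustrative choice:

```python
import numpy as np
from collections import deque

# 6-connectivity offsets for a 3-D volume
OFFSETS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def largest_component(mask: np.ndarray) -> np.ndarray:
    """Return a mask keeping only the largest 6-connected component."""
    visited = np.zeros(mask.shape, dtype=bool)
    best = []
    for seed in map(tuple, np.argwhere(mask > 0)):
        if visited[seed]:
            continue
        comp, queue = [], deque([seed])
        visited[seed] = True
        while queue:
            p = queue.popleft()
            comp.append(p)
            for d in OFFSETS:
                q = (p[0] + d[0], p[1] + d[1], p[2] + d[2])
                if (0 <= q[0] < mask.shape[0] and 0 <= q[1] < mask.shape[1]
                        and 0 <= q[2] < mask.shape[2]
                        and mask[q] and not visited[q]):
                    visited[q] = True
                    queue.append(q)
        if len(comp) > len(best):
            best = comp
    out = np.zeros_like(mask)
    for p in best:
        out[p] = 1
    return out
```

In practice a library routine (e.g. a labelled connected-component pass) would replace the hand-rolled search; the point is that small spurious islands in the second segmentation result are discarded.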
Optionally, the performing image target segmentation processing on the medical image to obtain a first segmentation result includes:
performing region cutting on the medical image based on a preset cutting size to obtain a cutting result of each region;
carrying out segmentation processing on the cutting result to obtain a region segmentation result;
and splicing the region segmentation results carrying the partial image targets to obtain the first segmentation result of the image targets.
A second aspect of the present application provides an image object segmentation apparatus, comprising:
the first segmentation module is used for carrying out image target segmentation processing on the medical image to obtain a first segmentation result;
an obtaining module, configured to obtain a target image according to the first segmentation result and the medical image;
the registration module is used for carrying out registration processing on the target image and a preset standard image to obtain a registration result;
and the second segmentation module is used for carrying out image target segmentation processing on the target image according to the registration result to obtain an image target segmentation result.
A third aspect of the present application provides an electronic device comprising:
a processor, and a memory coupled to the processor;
the memory is used for storing a computer program;
the processor is configured to invoke and execute the computer program in the memory to perform the method according to the first aspect of the application.
A fourth aspect of the present application provides a storage medium storing a computer program which, when executed by a processor, implements the steps of the method of segmenting an image object as described in the first aspect of the present application.
The technical scheme provided by the application can comprise the following beneficial effects:
In the scheme of the application, image target segmentation is first performed on the medical image to obtain a first segmentation result, which locates the general position of the image target in the medical image. A target image is then extracted from the medical image according to the first segmentation result, giving an approximate image of the target and filtering out part of the interference in the medical image. Next, the target image is registered with a preset standard image to obtain a registration result. Finally, image target segmentation is performed on the target image according to the registration result to obtain the image target segmentation result. Because the target image containing the target to be segmented is registered with an image containing a standard instance of the target, the difference between the two can be made explicit. Using the registration result to assist segmentation enriches the reference data available for segmentation and helps identify and segment the image target from the target image more accurately.
Drawings
In order to more clearly illustrate the embodiments of the present application and the technical solutions in the prior art, the drawings used in their description are briefly introduced below. The drawings described below are clearly only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment provided in an embodiment of the present application.
Fig. 2 is a flowchart of a segmentation method for an image object according to an embodiment of the present application.
Fig. 3 is a flowchart of a segmentation method for an image object according to another embodiment of the present application.
Fig. 4 is a flowchart of a segmentation method for an image object according to another embodiment of the present application.
Fig. 5 is a flowchart of a segmentation method for an image object according to still another embodiment of the present application.
Fig. 6 is a schematic diagram of a liver segmentation result without assistance of a registration result according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a liver segmentation result processed by an image object segmentation method according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an image object segmentation apparatus according to an embodiment of the present application.
Fig. 9 is a block diagram of an electronic device according to another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail below. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without making any creative effort, shall fall within the protection scope of the present application.
Summary of the application
CT (computed tomography) scans cross-sections of the human body one after another around a given body part, using precisely collimated X-ray beams, gamma rays, or ultrasonic waves together with detectors of extremely high sensitivity. It has the advantages of fast scan times and clear images, and can be used in the examination of many diseases. By the radiation used, CT can be classified into X-ray CT (X-CT) and gamma-ray CT (γ-CT).
CT angiography is a non-invasive angiography technique synthesized by computer three-dimensional reconstruction. It uses the fast scanning technique of spiral CT to complete cross-sectional scanning of a region in a short time, i.e., while the contrast agent is still concentrated in the blood vessels. The acquired image data are then sent to the image reconstruction unit of an image workstation or the CT machine. Reconstruction generally adopts the maximum intensity projection (MIP) method or the volume reconstruction (VR) method; by adjusting the image display threshold, a continuous, clear blood vessel image can be obtained without the surrounding tissue structure. With a suitable reconstruction method and display threshold, a three-dimensional image showing both the blood vessels and the tissue structure can be obtained, observed from any angle, and cut in any direction by computer software.
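The MIP step mentioned above reduces a volume along each viewing ray to its brightest voxel, so contrast-filled vessels survive while darker surrounding tissue is suppressed. A minimal sketch (the projection axis and the toy data are illustrative):

```python
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection: collapse a CT volume along one axis,
    keeping the brightest voxel on each ray."""
    return volume.max(axis=axis)
```

Applying a display threshold to the projection then hides tissue below the chosen intensity, as the text describes.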
CT angiography (CTA) offers the advantages of non-invasive angiography: although it requires injection of a contrast medium, it avoids puncture and vessel cannulation, procedures that carry real risk, and it has few complications beyond adverse reactions to the contrast medium. CTA helps clinicians understand the relationship between blood vessels and the surrounding tissue or lesions, as well as the condition of the vessels themselves, which conventional angiography cannot provide. CTA has its drawbacks, however: small blood vessels are displayed unclearly, individual image reconstructions may contain artifacts, and continuous dynamic display of arterial and venous vessels has not yet been realized.
An image segmentation model can segment the target region in an image (such as an original CT image), so that the user knows the target's position, size, and related information and can make corresponding decisions for the target region. Such a model can be obtained by training a deep learning model on training samples. Compared with traditional image segmentation methods, deep-learning-based segmentation offers high efficiency, good robustness, and adaptability to a variety of scenes.
Exemplary System
Fig. 1 is a schematic diagram illustrating an implementation environment provided by an embodiment of the present application. The implementation environment includes a computer device 110 and a CT scanner 120.
The CT scanner 120 is used to scan human tissue and obtain CT images of that tissue. In one embodiment, the medical image of the present application can be obtained by scanning the abdomen with the CT scanner 120.
The computer device 110 may be a general-purpose computer or a device built from an application-specific integrated circuit, which is not limited in this embodiment. For example, the computer device 110 may be a mobile terminal such as a tablet, or a personal computer (PC) such as a laptop or desktop computer. Those skilled in the art will appreciate that there may be one or more computer devices 110, of the same or different types. The number and type of computer devices 110 are not limited in the embodiments of the present application.
In an embodiment, the implementation environment of fig. 1 may be used to perform the segmentation method for the image target provided by the embodiment of the present application. The computer device 110 may acquire a medical image from the CT scanner 120 and process the medical image to obtain an image object segmentation result by performing the image object segmentation method provided by the embodiment of the present application.
In some embodiments, the computer device 110 may be a server, i.e., the CT scanner 120 is directly communicatively connected to the server.
In other embodiments, the computer device 110 may be communicatively connected to the CT scanner 120 and the server, respectively, and transmit the medical image acquired from the CT scanner 120 to the server, so that the server performs the segmentation method of the image target based on the medical image.
Exemplary method
Fig. 2 is a schematic flowchart illustrating a segmentation method for an image object according to an exemplary embodiment of the present application. The method of fig. 2 may be performed by a computer device, for example, by the computer device or server of fig. 1. As shown in fig. 2, the image object segmentation method at least includes the following implementation steps:
s201, carrying out image target segmentation processing on the medical image to obtain a first segmentation result.
In practice, before the image target segmentation is performed, the medical image may first be acquired. The medical image must contain the image target; for example, with the liver as the image target, the medical image may be an abdominal CT image. The embodiments of the present application do not limit the specific type of medical image; that is, the segmentation method can be applied to various types of images.
After the medical image is obtained, the image target in the medical image may be subjected to preliminary segmentation processing to obtain a first segmentation result of the image target.
When the preliminary segmentation processing is performed, a segmentation network may be used to perform image target segmentation processing on the medical image, so as to obtain a first segmentation result of the image target from the medical image. When applied, the first segmentation result comprises a coarse localization result of the image object in the medical image.
S202, extracting a target image containing an image target from the medical image according to the first segmentation result.
After the first segmentation result of the image target is obtained, the position of the center point and the minimum bounding box of the image target can be roughly determined from the segmentation result. Based on this position information, the corresponding region can be cropped from the medical image, and the cropped image is taken as the target image.
In the implementation process, because the first segmentation result roughly localizes the image target, the approximate region of the target can be cropped from the medical image based on it. This filters out, to a certain extent, the interference from other information in the medical image and provides a basis for the subsequent, finer segmentation.
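The crop described above, taking the minimum bounding box of the coarse mask plus a safety margin, can be sketched as follows; the `margin` parameter is an assumption, since the patent does not specify one:

```python
import numpy as np

def crop_to_target(image: np.ndarray, coarse_mask: np.ndarray, margin: int = 2):
    """Crop `image` to the minimum bounding box of the nonzero voxels in
    `coarse_mask`, expanded by `margin` voxels on every side.

    Returns the cropped sub-volume and the slice objects used, so the
    result can later be pasted back at its original location.
    """
    coords = np.argwhere(coarse_mask > 0)
    if coords.size == 0:
        raise ValueError("coarse mask is empty; nothing to crop")
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, image.shape)
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return image[slices], slices
```

Returning the slices alongside the crop makes it easy to map the final fine segmentation back into medical-image coordinates.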
S203, registering the target image and a preset standard image to obtain a registration result; the preset standard image is an image containing a standard image target.
During registration, a preselected preset standard image and the target image are input into a registration algorithm, and a deformation field is obtained by finding the relationship between the two images. Based on the deformation field, the region of the standard image target contained in the preset standard image can be registered onto the target image to obtain the registration result. The standard target region is a gold standard for the image target, which may be the annotation of a public dataset of the image target or an annotation produced and reviewed by technicians.
For example, with the liver as the image target, the minimum bounding box of the liver can be found in an abdominal CT image according to a preselected gold standard of liver segmentation, and the image within that bounding box is cropped out as the preset standard image. Meanwhile, the abdominal CT image is subjected to liver segmentation to obtain the first segmentation result of the liver, and the target image is cropped from the abdominal CT image based on that result. The two images are then registered: a deformation field is determined from the correspondence of their contours, and the liver registration result, a preliminary shape of the liver, is obtained from the deformation field. The registration result includes a liver mask, i.e., it determines the liver's region of interest in the target image.
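A deformation field as described here assigns each voxel of the target grid a displacement into the source image. The toy sketch below only applies such a field to a binary mask with nearest-neighbour sampling; a production registration algorithm would also compute the field itself, which is out of scope for this illustration:

```python
import numpy as np

def warp_mask(mask: np.ndarray, displacement: np.ndarray) -> np.ndarray:
    """Warp a binary mask with a dense displacement field.

    `displacement` has shape mask.shape + (ndim,), giving for each output
    voxel the offset (in voxels) of the source location to sample from.
    Nearest-neighbour sampling keeps the result binary; out-of-bounds
    samples are treated as background.
    """
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in mask.shape],
                                indexing="ij"), axis=-1)
    src = np.rint(grid + displacement).astype(int)
    inside = np.all((src >= 0) & (src < np.array(mask.shape)), axis=-1)
    out = np.zeros_like(mask)
    out[inside] = mask[tuple(src[inside].T)]
    return out
```

Registering the gold-standard liver region onto the target image amounts to warping the standard mask through the field estimated between the two images.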
And S204, performing image target segmentation processing on the target image according to the registration result to obtain an image target segmentation result.
The obtained liver registration result is then used as a segmentation prompt when performing liver segmentation on the target image, which reduces under-segmentation and false positives and improves the segmentation effect.
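The patent does not say how the registration result "prompts" the second segmentation; one common design is to concatenate the registered mask to the target image as an extra input channel, sketched here under that assumption:

```python
import numpy as np

def with_prompt_channel(target_image: np.ndarray,
                        registered_mask: np.ndarray) -> np.ndarray:
    """Stack the registered liver mask onto the target image as an extra
    input channel, so the second segmentation network receives the
    registration result as a spatial prompt.  (Channel concatenation is an
    assumed, illustrative injection mechanism, not the patent's stated one.)
    """
    return np.stack(
        [target_image, registered_mask.astype(target_image.dtype)], axis=0)
```

The fine segmentation network would then take a 2-channel volume instead of a single-channel one, with the mask channel biasing it toward the registered liver region.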
In this embodiment, image target segmentation is first performed on the medical image to obtain a first segmentation result, locating the general position of the image target. A target image is then extracted from the medical image according to the first segmentation result, giving an approximate image of the target and filtering out part of the interference in the medical image. Next, the target image is registered with the preset standard image to obtain a registration result. Finally, image target segmentation is performed on the target image according to the registration result to obtain the image target segmentation result. Because the target image containing the target to be segmented is registered with an image containing a standard instance of the target, the difference between the two can be made explicit; using the registration result to assist segmentation enriches the reference data available for segmentation and helps identify and segment the image target from the target image more accurately.
Fig. 3 is a flowchart illustrating a segmentation method for an image object according to another embodiment of the present application. The embodiment shown in fig. 3 of the present application is extended based on the embodiment shown in fig. 2 of the present application, and the differences between the embodiments shown in fig. 3 are emphasized below, and the same parts are not repeated.
As shown in fig. 3, in an embodiment of the present application, the step of obtaining the first segmentation result based on the image target segmentation processing performed on the medical image may include at least the following implementation steps:
and S2011, performing region cutting on the medical image based on the preset cutting size to obtain a cutting result of each region.
Wherein, preset cutting size can set up according to actual need, does not limit here.
And S2012, performing segmentation processing on the segmentation result to obtain a region segmentation result.
And S2013, splicing the region segmentation results carrying the partial image targets to obtain a first segmentation result of the image targets.
In a specific implementation, take an abdominal CT image as the medical image and the liver as the image target. Given the characteristics of CT liver imaging, each pixel of the abdominal CT image takes values in [-1024, 1024], and the liver appears as a relatively bright region concentrated roughly between -200 and 300; this prior knowledge guides the liver segmentation toward the pixels in the liver range. Considering the size of video memory, directly loading 500 or even 2000 slices of 512 × 512 abdominal CT would not only occupy a large amount of memory but also slow the processing of the images and reduce efficiency. The image can therefore be cut into blocks of a preset size (e.g., 128 × 128 × 128); each block is the cutting result of one region, and the blocks are fed in turn into the segmentation network to obtain the region segmentation result of each region of the abdominal CT image. If a block contains no liver, its region segmentation result likewise contains no liver. Finally, all region segmentation results that carry part of the liver are stitched together to obtain the complete coarse segmentation of the liver, i.e., the first segmentation result. This effectively reduces video-memory usage and improves image-processing efficiency.
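The cut-segment-stitch loop above can be sketched as follows. The stand-in "network" below is just the HU window prior from the text (roughly -200 to 300 for liver); a real implementation would call the trained segmentation network on each block instead:

```python
import numpy as np

def segment_by_patches(volume: np.ndarray, patch: tuple, segment_fn) -> np.ndarray:
    """Cut `volume` into non-overlapping blocks of size `patch`, run
    `segment_fn` on each block, and stitch the per-block masks back into a
    full-size mask.  Edge blocks are simply smaller when the volume size is
    not an exact multiple of the patch size.
    """
    out = np.zeros(volume.shape, dtype=np.uint8)
    for z in range(0, volume.shape[0], patch[0]):
        for y in range(0, volume.shape[1], patch[1]):
            for x in range(0, volume.shape[2], patch[2]):
                sl = (slice(z, z + patch[0]),
                      slice(y, y + patch[1]),
                      slice(x, x + patch[2]))
                out[sl] = segment_fn(volume[sl])
    return out

# Illustrative stand-in "network": threshold to the liver HU window.
def liver_window(block: np.ndarray) -> np.ndarray:
    return ((block > -200) & (block < 300)).astype(np.uint8)
```

With `patch=(128, 128, 128)` this matches the block size mentioned in the text; only one block at a time needs to reside on the GPU.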
In some embodiments, in order to improve the registration accuracy, when the target image and the preset standard image are subjected to registration processing to obtain a registration result, the target image and the preset standard image may be registered based on a non-rigid registration method to obtain the registration result. In this way, a more accurate segmentation indication can be provided for subsequent processing of the image object.
There are many possible non-rigid registration methods. For example, a registration algorithm combining thin-plate spline point constraints with mutual information can first perform a global coarse registration of the target image and the preset standard image and then apply local correction using mutual information, achieving a good registration effect.
Specifically, the non-rigid registration method may be selected according to actual requirements, and is not limited herein.
In some embodiments, in order to improve the registration speed and further improve the registration accuracy, rigid registration and non-rigid registration may be combined. On this basis, before the target image is registered with the preset standard image based on the non-rigid registration method, the method for segmenting the image target may further include: determining an affine transformation relationship between the target image and the preset standard image based on a rigid registration method.
Specifically, when the affine transformation relationship between the target image and the preset standard image is determined based on the rigid body registration method, the target image and the preset standard image may be input into a rigid body registration model which is constructed in advance, so as to obtain the affine transformation relationship.
The rigid registration model is used for extracting a spatial feature relationship between a target image and a preset standard image, and parameters of affine transformation relationships (scaling, rotation and translation) are set as output of the rigid registration model, so that the robustness of subsequent non-rigid registration is improved, and the convergence speed is accelerated.
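As a rough illustration of what the "parameters of affine transformation relationships (scaling, rotation and translation)" could look like once assembled, the sketch below builds a homogeneous affine matrix from such parameters; the single rotation axis and the scale-then-rotate composition order are simplifying assumptions, not details given in this application:

```python
import numpy as np

def affine_from_params(scale, rot_z, translation):
    """Build a 4x4 homogeneous affine matrix from per-axis scaling
    factors, a rotation angle about the z-axis (radians), and a
    translation vector -- the kind of parameters a rigid registration
    model could output (simplified: one rotation axis only)."""
    c, s = np.cos(rot_z), np.sin(rot_z)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    affine = np.eye(4)
    affine[:3, :3] = rot @ np.diag(scale)   # rotate after scaling
    affine[:3, 3] = translation
    return affine
```

With identity parameters the matrix is the identity, so downstream warping leaves the floating image unchanged.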
Correspondingly, when the target image is registered with the preset standard image based on the non-rigid registration method to obtain the registration result, the target image can be registered using the affine transformation relationship and the preset standard image based on the non-rigid registration method. In this way, the interference of most non-target content in the medical image can be filtered out, further improving segmentation accuracy. Taking the liver as the image target as an example, the combination of rigid and non-rigid registration effectively avoids the influence on liver segmentation of the large amount of fat and the other organs around the liver, such as the gallbladder and pancreas, thereby improving the accuracy of liver segmentation.
In some embodiments, as shown in fig. 4, the rigid registration model may be obtained by training through the following steps:
S401, performing rigid registration on the first sample information by using a pre-constructed rigid registration model, and determining a first loss function and a second loss function; the first sample information includes: a reference image containing a standard image target and a corresponding floating image.
The first loss function is used for constraining the model parameters so that the floating image after the model's affine transformation is close to the reference image in shape; the second loss function is used for constraining the model parameters to prevent the affine transformation relationship between the floating image and the reference image from overfitting.
Taking an abdominal CT image as the medical image and the liver as the image target, when selecting the reference image and the floating image, on the one hand, the minimum bounding box of the liver segmentation can be found based on the gold-standard segmentation, and the image within that bounding box is cropped from the abdominal CT image to serve as the reference image; on the other hand, the first segmentation result of the liver in the abdominal CT image can be determined, and the floating image can be cropped from the abdominal CT image according to the minimum bounding box of the first segmentation result. In this way, a large amount of first sample information can be acquired as training samples, providing a basis for subsequent training.
In practice, a ResNext3D network may be used as the backbone network of the rigid registration model. The spatial feature relationship between the reference image and the floating image can be extracted through the ResNext3D network, and the parameters of the affine transformation relationship are set as the output of the network. In addition, two loss functions, namely the first loss function and the second loss function, may be set. The first loss function records the magnitude of the morphological difference between the affine-transformed floating image and the reference image, so that constraining the model parameters through the first loss function makes the floating image after affine transformation as close as possible to the reference image. The second loss function is computed from the affine transformation parameters themselves (e.g., their norm and determinant), so that the model parameters can be constrained through the second loss function to prevent the affine transformation relationship between the floating image and the reference image from overfitting.
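A minimal sketch of the two losses described above; the mean-squared-difference form of the first loss and the identity-deviation plus log-determinant form of the second, as well as the weight, are assumptions, since the application does not give exact formulas:

```python
import numpy as np

def rigid_losses(reference, warped_floating, affine3x3, reg_weight=0.1):
    """First loss: morphological difference between the affine-warped
    floating image and the reference. Second loss: regularization on
    the affine parameters themselves (deviation from the identity and
    from unit determinant) to keep the transform from overfitting."""
    # First loss: mean squared intensity difference after warping.
    loss1 = np.mean((warped_floating - reference) ** 2)
    # Second loss: Frobenius-norm deviation from identity plus a
    # log-determinant term discouraging extreme expansion/shrinkage.
    dev = np.linalg.norm(affine3x3 - np.eye(3))
    det = abs(np.log(abs(np.linalg.det(affine3x3))))
    loss2 = reg_weight * (dev + det)
    return loss1, loss2
```

For a perfectly aligned pair under the identity transform both losses vanish, which is the fixed point the training drives toward.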
S402, parameter correction is carried out on the rigid body registration model according to the first loss function and the second loss function.
In implementation, by calculating the first loss function and the second loss function, the network parameters of the ResNext3D network can be updated through back-propagation, and the rigid registration model is finally obtained through continued training.
Similarly, in the non-rigid registration-based method, when the target image is registered by using the affine transformation relationship and the preset standard image to obtain the registration result, the affine transformation relationship, the preset standard image and the target image may be input into a pre-constructed non-rigid registration model to obtain the registration result.
The training method of the non-rigid body registration model can comprise the following steps: performing non-rigid registration on the second sample information by using a pre-constructed non-rigid registration model to determine a third loss function and a fourth loss function; wherein the second sample information includes: the reference image containing the standard image target, the floating image and the corresponding affine transformation relation. And according to the third loss function and the fourth loss function, parameter correction can be carried out on the non-rigid registration model. The third loss function is used for constraining the model parameters so that the floating image subjected to the non-rigid body deformation is close to the reference image in shape; the fourth loss function is used to constrain the model parameters to limit the degree of deformation of the deformation field to prevent overfitting.
In implementation, on the basis of the affine transformation relationship obtained from the rigid registration, the deformation relationship between the reference image and the floating image can be further explored. Specifically, a ResUnet network may be used as the backbone network of the non-rigid registration model. The ResUnet network is used to train a non-rigid deformation field, which records the movement direction and displacement of each pixel of the floating image; the output layer of the network is set to the deformation field, and two loss functions, namely the third loss function and the fourth loss function, may be set. The third loss function records the pixel difference between the reference image and the floating image after applying the deformation field, so that constraining the model parameters through the third loss function makes the non-rigidly deformed floating image as close as possible to the reference image. The fourth loss function measures the smoothness of the deformation field and its degree of expansion and contraction, so that the deformation of the field can be limited through the fourth loss function to prevent overfitting. Finally, by calculating the third and fourth loss functions, the network parameters can be updated through back-propagation, making the deformation field, and hence the trained non-rigid registration model, more accurate and improving registration accuracy.
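The third and fourth losses can likewise be sketched as below; the mean-squared forms and the finite-difference smoothness penalty are assumptions about the unspecified formulas, with the displacement field stored component-first as `(3, D, H, W)`:

```python
import numpy as np

def deformation_losses(reference, warped_floating, disp_field, smooth_weight=0.01):
    """Third loss: pixel difference between the reference and the
    floating image after the deformation field is applied. Fourth
    loss: smoothness of the displacement field, measured through its
    spatial finite-difference gradients, to limit deformation."""
    # Third loss: mean squared difference after warping.
    loss3 = np.mean((warped_floating - reference) ** 2)
    # Fourth loss: mean squared gradient of the displacement field
    # along every spatial axis (axis 0 holds the vector components).
    grads = [np.diff(disp_field, axis=ax) for ax in range(1, disp_field.ndim)]
    loss4 = smooth_weight * sum(np.mean(g ** 2) for g in grads)
    return loss3, loss4
```

A zero (or spatially constant) displacement field incurs no smoothness penalty, so only local variation of the field is discouraged, which matches the stated goal of limiting the degree of deformation rather than forbidding motion.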
It should be noted that the network structures of the rigid body registration model and the non-rigid body registration model may be various, and are not limited to the network structures mentioned in this application, and the application does not specifically limit the network structures of the two models.
Thus, after the registration result is obtained, the target image can be subjected to image target segmentation processing according to the registration result to obtain an image target segmentation result.
Fig. 5 is a flowchart illustrating a segmentation method for an image target according to another embodiment of the present application. The embodiment shown in fig. 5 is extended from the embodiment shown in fig. 2 of the present application; the differences between the two embodiments are emphasized below, and the same parts are not repeated.
As shown in fig. 5, in the embodiment of the present application, the step of performing image target segmentation processing on the target image according to the registration result to obtain the image target segmentation result may include at least the following implementation steps:
s2041, performing image target segmentation processing on the target image according to the registration result to obtain a second segmentation result.
When the image target segmentation is performed, in order to obtain a more refined segmentation result, a fine segmentation algorithm may be adopted, and the registration result is used to assist in performing fine segmentation, so as to improve the segmentation effect.
Specifically, a fine segmentation model may be constructed in advance, and thus, the obtained registration result and the target image are input into the fine segmentation model, so that the image target may be finely segmented, and a second segmentation result is obtained.
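One plausible way to feed "the obtained registration result and the target image" jointly into a fine segmentation model is channel-wise stacking; the normalization window and the channel layout here are assumptions, not details given in this application:

```python
import numpy as np

def build_fine_seg_input(target_image, registration_prior):
    """Stack the target image and the registered prior mask as a
    two-channel array, a common input layout for segmentation
    networks that consume an auxiliary spatial prior."""
    assert target_image.shape == registration_prior.shape
    # Normalize the image to [0, 1] over the liver window used earlier.
    img = (np.clip(target_image, -200, 300) + 200) / 500.0
    return np.stack([img, registration_prior.astype(np.float32)], axis=0)
```

The fine segmentation network then sees, voxel for voxel, both the raw intensities and where the registered standard image says the target should be.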
When a fine segmentation model is constructed, a segmentation network ResUnet can be used for training and testing to obtain a more accurate segmentation model.
As shown in fig. 6 and 7, the same abdominal CT image is used as the medical image. Fig. 6 is a schematic view of a liver segmentation result obtained by performing fine liver segmentation directly on the abdominal CT image, where a1 is the segmented liver region. Fig. 7 is a schematic view of a liver segmentation result obtained with the method of the present application, i.e., fine segmentation assisted by the registration result, where a2 is the segmented liver region. As can be seen from fig. 6, the liver region a1, obtained without the assistance of the image registration result, shows obvious protrusion (e.g., region b1 in fig. 6) and adhesion (e.g., region c1 in fig. 6). In fig. 7, the liver region a2, obtained with fine segmentation assisted by the registration result, clearly resolves these problems: region b2 in a2 (corresponding to region b1 in a1) eliminates the protrusion, and region c2 in a2 (corresponding to region c1 in a1) eliminates the adhesion. Therefore, the segmentation result obtained by performing liver segmentation according to the registration result is more accurate.
And S2042, screening the second segmentation result based on a preset screening rule to obtain an image target segmentation result.
Since the image target segmentation result usually appears as a complete connected domain, after the second segmentation result is obtained, connected domains in the second segmentation result that are not part of the image target can be removed based on the preset screening rule, finally yielding the image target segmentation result.
In implementation, taking the liver as the image target, after the abdominal CT image has been processed to obtain the second segmentation result, the liver can be regarded as the largest connected domain therein, so the preset screening rule may be set as: retain the largest connected domain in the second segmentation result. In this way, only the largest connected domain in the second segmentation result is retained as the final liver segmentation result, which can then be mapped back to the original medical image according to the position information of the first segmentation result.
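The preset screening rule of keeping the largest connected domain can be sketched as below; a library routine such as `scipy.ndimage.label` would be the usual choice, but a self-contained 6-connected flood fill is shown for clarity:

```python
import numpy as np
from collections import deque

def largest_connected_component(mask):
    """Keep only the largest 6-connected foreground component of a
    binary 3-D mask -- the screening rule described above."""
    mask = np.asarray(mask, dtype=bool)
    visited = np.zeros(mask.shape, dtype=bool)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    best = None
    for seed in zip(*np.nonzero(mask)):
        if visited[seed]:
            continue
        # Breadth-first flood fill of one 6-connected component.
        comp, queue = [], deque([seed])
        visited[seed] = True
        while queue:
            z, y, x = queue.popleft()
            comp.append((z, y, x))
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if (0 <= n[0] < mask.shape[0] and 0 <= n[1] < mask.shape[1]
                        and 0 <= n[2] < mask.shape[2]
                        and mask[n] and not visited[n]):
                    visited[n] = True
                    queue.append(n)
        if best is None or len(comp) > len(best):
            best = comp
    out = np.zeros(mask.shape, dtype=np.uint8)
    if best:
        out[tuple(np.array(best).T)] = 1
    return out
```

Smaller spurious components, e.g. mis-segmented fragments of neighboring organs, are discarded, leaving a single clean liver mask.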
It should be understood that, although the steps in the flowcharts of figs. 2, 3, 4 and 5 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict restriction on the execution order of these steps, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2, 3, 4 and 5 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily executed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Exemplary devices
Fig. 8 is a schematic structural diagram illustrating an image object segmentation apparatus according to an exemplary embodiment of the present application. As shown in fig. 8, the image object segmentation apparatus 800 may include: a first segmentation module 801, configured to perform image target segmentation processing on a medical image to obtain a first segmentation result; an obtaining module 802, configured to obtain a target image according to the first segmentation result and the medical image; a registration module 803, configured to perform registration processing on the target image and a preset standard image to obtain a registration result; and the second segmentation module 804 is configured to perform image target segmentation processing on the target image according to the registration result to obtain an image target segmentation result.
Optionally, when the target image and the preset standard image are registered to obtain a registration result, the registration module 803 may be specifically configured to: and registering the target image and a preset standard image based on a non-rigid body registration method to obtain a registration result.
Optionally, before registering the target image with the preset standard image based on the non-rigid body registration method, the registration module 803 may be specifically configured to: and determining an affine transformation relation between the target image and the preset standard image based on a rigid body registration method. Correspondingly, when the target image is registered with the preset standard image based on the non-rigid body registration method to obtain a registration result, the registration module 803 may be specifically configured to: and registering the target image by using an affine transformation relation and a preset standard image based on a non-rigid body registration method to obtain a registration result.
Optionally, when determining the affine transformation relationship between the target image and the preset standard image based on the rigid body registration method, the registration module 803 may be further configured to: and inputting the target image and the preset standard image into a rigid body registration model which is constructed in advance to obtain an affine transformation relation.
Optionally, the image target segmentation apparatus may further include a training module, where the training module is configured to train to obtain a rigid registration model by: performing rigid body registration on the first sample information by using a pre-constructed rigid body registration model to determine a first loss function and a second loss function; the first sample information includes: a reference image containing a standard image target and a corresponding floating image; according to the first loss function and the second loss function, parameter correction is carried out on the rigid body registration model; the first loss function is used for constraining model parameters so that the floating image subjected to model affine transformation is close to the reference image in shape; the second penalty function is used to constrain the model parameters to prevent the floating image from being overfitted to the affine transformation relationship of the reference image.
Optionally, when the target image is registered based on the non-rigid registration method by using the affine transformation relationship and the preset standard image to obtain the registration result, the registration module 803 may be specifically configured to: and inputting the affine transformation relation, the preset standard image and the target image into a pre-constructed non-rigid registration model to obtain a registration result.
Optionally, when performing image target segmentation processing on the target image according to the registration result to obtain an image target segmentation result, the second segmentation module 804 may be specifically configured to: performing image target segmentation processing on the target image according to the registration result to obtain a second segmentation result; and screening the second segmentation result to obtain an image target segmentation result based on a preset screening rule.
Optionally, when performing image target segmentation processing on the medical image to obtain a first segmentation result, the first segmentation module 801 may be specifically configured to: performing region cutting on the medical image based on a preset cutting size to obtain a cutting result of each region; performing segmentation processing on the cutting result to obtain a region segmentation result; and splicing the region segmentation results carrying the partial image targets to obtain a first segmentation result of the image targets.
It should be understood that, for a specific implementation of the image object segmentation apparatus provided in the embodiment of the present application, reference may be made to a specific implementation of the image object segmentation method described in any of the above embodiments, and details are not described here again.
Fig. 9 is a block diagram illustrating an electronic device 900 for performing a segmentation method for an image target according to an exemplary embodiment of the present application.
Referring to fig. 9, electronic device 900 includes a processing component 910 that further includes one or more processors, and memory resources, represented by memory 920, for storing instructions, such as applications, that are executable by processing component 910. The application programs stored in memory 920 may include one or more modules that each correspond to a set of instructions. Further, the processing component 910 is configured to execute instructions to perform the above-described image object segmentation method.
The electronic device 900 may also include a power component configured to perform power management for the electronic device 900, a wired or wireless network interface configured to connect the electronic device 900 to a network, and an input/output (I/O) interface. The electronic device 900 may operate based on an operating system stored in the memory 920, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
A non-transitory computer-readable storage medium has instructions stored thereon which, when executed by a processor of the electronic device 900, enable the electronic device 900 to perform a method for segmenting an image target. The image target segmentation method includes: performing image target segmentation processing on the medical image to obtain a first segmentation result; extracting a target image containing the image target from the medical image according to the first segmentation result; performing registration processing on the target image and a preset standard image to obtain a registration result, where the preset standard image is an image containing a standard image target; and performing image target segmentation processing on the target image according to the registration result to obtain an image target segmentation result.
All the above optional technical solutions can be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portion thereof that substantially contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in the description of the present application, the terms "first", "second", "third", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modifications, equivalents and the like that are within the spirit and principle of the present application should be included in the scope of the present application.

Claims (11)

1. A method of segmenting an image object, comprising:
performing image target segmentation processing on the medical image to obtain a first segmentation result;
extracting a target image containing the image target from the medical image according to the first segmentation result;
carrying out registration processing on the target image and a preset standard image to obtain a registration result; the preset standard image is an image containing a standard image target;
and performing image target segmentation processing on the target image according to the registration result to obtain an image target segmentation result.
2. The image target segmentation method according to claim 1, wherein the registering the target image with a preset standard image to obtain a registration result includes:
and registering the target image and a preset standard image based on a non-rigid body registration method to obtain a registration result.
3. The method for segmenting an image target according to claim 2, wherein before the non-rigid body registration based method for registering the target image with a preset standard image, the method further comprises:
determining an affine transformation relation between the target image and the preset standard image based on a rigid body registration method;
the registering the target image with a preset standard image based on the non-rigid body registering method to obtain the registering result, which comprises the following steps:
and registering the target image by using the affine transformation relation and the preset standard image based on a non-rigid body registration method to obtain a registration result.
4. The method for segmenting an image target according to claim 3, wherein the determining affine transformation relationship of the target image and the preset standard image based on the rigid body registration method comprises:
and inputting the target image and the preset standard image into a rigid body registration model which is constructed in advance to obtain the affine transformation relation.
5. The image target segmentation method of claim 4, wherein the rigid body registration model is trained by:
performing rigid body registration on the first sample information by using a pre-constructed rigid body registration model to determine a first loss function and a second loss function; the first sample information includes: a reference image containing a standard image target and a corresponding floating image;
according to the first loss function and the second loss function, parameter correction is carried out on the rigid body registration model;
wherein the first loss function is used for constraining model parameters so that the floating image after model affine transformation is close to the reference image in shape; the second loss function is used to constrain model parameters to prevent an affine transformation relationship of the floating image and the reference image from overfitting.
6. The image target segmentation method according to claim 3, wherein the registering the target image by using the affine transformation relation and the preset standard image based on the non-rigid body registration method to obtain the registration result comprises:
and inputting the affine transformation relation, the preset standard image and the target image into a pre-constructed non-rigid registration model to obtain the registration result.
7. The method for segmenting the image target according to claim 1, wherein the performing image target segmentation processing on the target image according to the registration result to obtain an image target segmentation result comprises:
performing image target segmentation processing on the target image according to the registration result to obtain a second segmentation result;
and screening the second segmentation result to obtain the image target segmentation result based on a preset screening rule.
8. The method for segmenting the image target according to claim 1, wherein the performing the image target segmentation process on the medical image to obtain the first segmentation result comprises:
performing region cutting on the medical image based on a preset cutting size to obtain a cutting result of each region;
carrying out segmentation processing on the cutting result to obtain a region segmentation result;
and splicing the region segmentation results carrying the partial image targets to obtain the first segmentation result of the image targets.
9. An apparatus for segmenting an image object, comprising:
the first segmentation module is used for carrying out image target segmentation processing on the medical image to obtain a first segmentation result;
the acquisition module is used for acquiring a target image according to the first segmentation result and the medical image;
the registration module is used for carrying out registration processing on the target image and a preset standard image to obtain a registration result;
and the second segmentation module is used for carrying out image target segmentation processing on the target image according to the registration result to obtain an image target segmentation result.
10. An electronic device, comprising:
a processor, and a memory coupled to the processor;
the memory is used for storing a computer program;
the processor is configured to invoke and execute the computer program in the memory to perform the method of any one of claims 1-8.
11. A storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, carries out the steps of the method of segmentation of an image object according to any one of claims 1 to 8.
CN202210421832.5A 2022-04-21 2022-04-21 Image object segmentation method and device, electronic equipment and storage medium Pending CN114708283A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210421832.5A CN114708283A (en) 2022-04-21 2022-04-21 Image object segmentation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210421832.5A CN114708283A (en) 2022-04-21 2022-04-21 Image object segmentation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114708283A true CN114708283A (en) 2022-07-05

Family

ID=82174603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210421832.5A Pending CN114708283A (en) 2022-04-21 2022-04-21 Image object segmentation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114708283A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116468741A (en) * 2023-06-09 2023-07-21 南京航空航天大学 Pancreatic cancer segmentation method based on 3D physical space domain and spiral decomposition space domain

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103854276A * 2012-12-04 2014-06-11 Toshiba Corporation Image registration device and method, image segmentation device and method, and medical imaging device
CN104091346A * 2014-07-24 2014-10-08 Southeast University Fully automatic method for computing coronary artery calcification scores from CT images
CN109378068A * 2018-08-21 2019-02-22 Shenzhen University Method and system for automatically evaluating nasopharyngeal carcinoma treatment efficacy
CN110706233A * 2019-09-30 2020-01-17 University of Science and Technology Beijing Retinal fundus image segmentation method and device
CN110838108A * 2019-10-30 2020-02-25 Tencent Technology (Shenzhen) Co., Ltd. Medical-image-based prediction model construction method, prediction method and device
CN111127466A * 2020-03-31 2020-05-08 Shanghai United Imaging Intelligence Co., Ltd. Medical image detection method, device, equipment and storage medium
CN112884775A * 2021-01-20 2021-06-01 Infervision Medical Technology Co., Ltd. Segmentation method, device, equipment and medium
CN113205538A * 2021-05-17 2021-08-03 Guangzhou University Blood vessel image segmentation method and device based on CRDNet
CN114187337A * 2021-12-07 2022-03-15 Infervision Medical Technology Co., Ltd. Image registration method, segmentation method, device, electronic equipment and storage medium
CN114332120A * 2021-12-24 2022-04-12 Shanghai SenseTime Intelligent Technology Co., Ltd. Image segmentation method, device, equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116468741A * 2023-06-09 2023-07-21 Nanjing University of Aeronautics and Astronautics Pancreatic cancer segmentation method based on 3D physical space domain and spiral decomposition space domain
CN116468741B * 2023-06-09 2023-09-22 Nanjing University of Aeronautics and Astronautics Pancreatic cancer segmentation method based on 3D physical space domain and spiral decomposition space domain

Similar Documents

Publication Publication Date Title
US8335359B2 (en) Systems, apparatus and processes for automated medical image segmentation
CN107886508B (en) Differential subtraction method and medical image processing method and system
US9218661B2 (en) Image analysis for specific objects
US8229188B2 (en) Systems, methods and apparatus automatic segmentation of liver in multiphase contrast-enhanced medical images
US20090074276A1 (en) Voxel Matching Technique for Removal of Artifacts in Medical Subtraction Images
US10497123B1 (en) Isolation of aneurysm and parent vessel in volumetric image data
EP2807630B1 (en) Processing and displaying a breast image
JP2002345807A (en) Method for extracting specified region of medical care image
CN111540025A (en) Predicting images for image processing
EP3722996A2 (en) Systems and methods for processing 3d anatomical volumes based on localization of 2d slices thereof
CN110738633B (en) Three-dimensional image processing method and related equipment for organism tissues
US20120078101A1 (en) Ultrasound system for displaying slice of object and method thereof
CN114708283A (en) Image object segmentation method and device, electronic equipment and storage medium
US10417764B2 (en) System and methods for diagnostic image analysis and image quality assessment
CN112862752A (en) Image processing display method, system electronic equipment and storage medium
CN111325758A (en) Lung image segmentation method and device and training method of image segmentation model
CN116051553A (en) Method and device for marking inside three-dimensional medical model
Huang et al. POST-IVUS: A perceptual organisation-aware selective transformer framework for intravascular ultrasound segmentation
Kalapos et al. Automated T1 and T2 mapping segmentation on cardiovascular magnetic resonance imaging using deep learning
CN113129297A (en) Automatic diameter measurement method and system based on multi-phase tumor images
CN112767314A (en) Medical image processing method, device, equipment and storage medium
Mesanovic et al. Application of lung segmentation algorithm to disease quantification from CT images
US20190251691A1 (en) Information processing apparatus and information processing method
KR101886936B1 (en) The method and apparatus for enhancing contrast of ultrasound image using probabilistic edge map
Mwawado Development of a fast and accurate method for the segmentation of diabetic foot ulcer images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20220705)