CN111553894A - Medical image segmentation model training method, medium and electronic device - Google Patents

Medical image segmentation model training method, medium and electronic device

Info

Publication number
CN111553894A
Authority
CN
China
Prior art keywords
segmentation
medical image
segmentation result
result
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010334466.0A
Other languages
Chinese (zh)
Inventor
张雯
房劬
赵夕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xingmai Information Technology Co ltd
Original Assignee
Shanghai Xingmai Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xingmai Information Technology Co ltd filed Critical Shanghai Xingmai Information Technology Co ltd
Priority to CN202010334466.0A priority Critical patent/CN111553894A/en
Publication of CN111553894A publication Critical patent/CN111553894A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/60 - Analysis of geometric attributes
    • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20092 - Interactive image processing based on input by user
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30101 - Blood vessel; Artery; Vein; Vascular
    • G06T 2207/30104 - Vascular flow; Blood flow; Perfusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a medical image segmentation model training method, a medium, and an electronic device. The medical image segmentation model training method comprises the following steps: acquiring a medical image to be segmented; performing a first segmentation on the medical image by using an AI medical image segmentation model to obtain a first segmentation result; performing a second segmentation on the first segmentation result according to a received segmentation instruction to obtain a second segmentation result, the second segmentation result being used for training the AI medical image segmentation model; and obtaining a difference value between the first segmentation result and the second segmentation result, wherein the difference value is used for obtaining the segmentation accuracy of the AI medical image segmentation model and/or for determining the segmentation workload of the second segmentation. With this method, the AI medical image segmentation model can be retrained during practical application.

Description

Medical image segmentation model training method, medium and electronic device
Technical Field
The invention belongs to the field of medical image processing, relates to an image segmentation model training method, and particularly relates to a medical image segmentation model training method, a medium, and an electronic device.
Background
Image segmentation is a key technique in image processing. As the first step of image processing and analysis, the accuracy of image segmentation directly affects the accuracy of subsequent operations such as feature extraction and target recognition. In existing schemes, to improve the efficiency of image segmentation, the industry generally adopts an AI (Artificial Intelligence) medical image segmentation model to segment medical images. Specifically, a large number of image segmentation examples are first used to train the AI medical image segmentation model, and the trained model is then applied to medical image segmentation. However, in practical applications, the inventors found that, limited by the complexity and diversity of medical images, the segmentation accuracy of the trained AI medical image segmentation model may not be high when it is applied to different scenes. For example, an AI medical image segmentation model trained on abdominal CT images may not be accurate when applied to coronary CT image segmentation, and existing schemes cannot retrain the AI medical image segmentation model in real time according to the actual application scenario.
Disclosure of Invention
In view of the foregoing drawbacks of the prior art, an object of the present invention is to provide a method, a medium, and an electronic device for training a medical image segmentation model, which are used to solve the problem that the AI medical image segmentation model cannot be trained in real time according to an actual application scenario in the prior art.
To achieve the above and other related objects, a first aspect of the present invention provides a medical image segmentation model training method. The medical image segmentation model training method comprises the following steps: acquiring a medical image to be segmented; performing first segmentation on the medical image by adopting an AI medical image segmentation model to obtain a first segmentation result; performing second segmentation on the first segmentation result according to the received segmentation instruction to obtain a second segmentation result; the second segmentation result is used for training the AI medical image segmentation model; obtaining a difference value between the first segmentation result and the second segmentation result, wherein the difference value between the first segmentation result and the second segmentation result is used for obtaining the segmentation accuracy of the AI medical image segmentation model and/or is used for determining the segmentation workload of the second segmentation.
In some embodiments of the first aspect, the medical image segmentation model training method further comprises: acquiring a segmentation work cost according to the segmentation workload of the second segmentation and a cost coefficient.
In some embodiments of the first aspect, a method of obtaining a difference between the first segmentation result and the second segmentation result comprises: obtaining a set of pixel differences between the first segmentation result and the second segmentation result; the set of pixel differences is the difference between the first segmentation result and the second segmentation result.
In some embodiments of the first aspect, a method of obtaining a difference between the first segmentation result and the second segmentation result comprises: the medical image is a 3-dimensional medical image; obtaining a set of voxel differences between the first segmentation result and the second segmentation result; the set of voxel differences is the difference between the first segmentation result and the second segmentation result.
In some embodiments of the first aspect, a method of obtaining a difference between the first segmentation result and the second segmentation result comprises: obtaining a corresponding difference value area according to the first segmentation result and the second segmentation result; obtaining one or more geometric features of the difference region as a difference between the first segmentation result and the second segmentation result.
In certain embodiments of the first aspect, the geometric characteristic of the difference region comprises one or more of a perimeter, a volume, or an area of the difference region.
In some embodiments of the first aspect, the segmentation instruction comprises: a brush instruction, a smearing instruction, an erasing instruction, a single-point tracking instruction, and/or a multi-point tracking instruction.
In some embodiments of the first aspect, the medical image segmentation model training method further comprises: displaying a corresponding instruction icon on the display screen so that the segmentation personnel can input the segmentation instruction.
A second aspect of the present invention provides a computer-readable storage medium having a computer program stored thereon; the program is executed by a processor to implement the medical image segmentation model training method of the invention.
A third aspect of the present invention provides an electronic device, comprising: a memory storing a computer program; a processor, communicatively connected to the memory, for executing the medical image segmentation model training method when the computer program is invoked; and a display, communicatively connected to the processor and the memory, for displaying a GUI (graphical user interface) related to the medical image segmentation model training method.
As described above, the medical image segmentation model training method, medium, and electronic device according to the present invention have the following advantages:
the medical image segmentation model training method utilizes an AI medical image segmentation model to obtain a first segmentation result, and performs second segmentation on the first segmentation result according to a received segmentation instruction to obtain a second segmentation result, wherein the second segmentation result is used for training the AI medical image segmentation model. Therefore, the medical image segmentation model training method can solve the problem that the AI medical image segmentation model cannot be trained in real time according to the actual application scene in the existing scheme, and achieves the technical effect of training the AI medical image segmentation model in real time according to the actual application scene so as to improve the segmentation accuracy.
Drawings
Fig. 1 is a flowchart illustrating a medical image segmentation model training method according to an embodiment of the present invention.
Fig. 2 is an exemplary diagram illustrating a difference value obtained by the medical image segmentation model training method according to an embodiment of the present invention.
Fig. 3A is a flowchart illustrating a step S14 of the method for training a medical image segmentation model according to an embodiment of the present invention.
Fig. 3B is a flowchart illustrating a step S14 of a further embodiment of the training method for medical image segmentation models according to the present invention.
FIG. 4 is a flowchart illustrating a method for training a medical image segmentation model according to another embodiment of the present invention.
Fig. 5A is a diagram illustrating an example of a medical image to be segmented obtained by the medical image segmentation model training method according to an embodiment of the present invention.
Fig. 5B is a diagram illustrating an example of a first segmentation result obtained by the medical image segmentation model training method according to an embodiment of the invention.
Fig. 5C is a diagram illustrating an example of a second segmentation result obtained by the medical image segmentation model training method according to an embodiment of the invention.
Fig. 5D is a diagram illustrating an example of the difference obtained by the medical image segmentation model training method according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Description of the element reference numerals
51 segmentation target
52 first segmentation result
53 second segmentation result
54 first difference
55 second difference
600 electronic device
610 memory
620 processor
630 display
S11-S14
S141a-S142a
S141b-S143b
S41-S44
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the drawings only show the components related to the present invention rather than being drawn according to the number, shape and size of the components in actual implementation, and the type, number and proportion of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
Image segmentation is a key technology in image processing and is often implemented by labeling a target object in an image. As the first step of image processing and analysis, the accuracy of image segmentation directly affects the accuracy of subsequent operations such as feature extraction and target recognition. In order to improve the efficiency of image segmentation, in some embodiments an AI medical image segmentation model is selected for segmenting the medical image. Specifically, a large number of image segmentation examples are first used to train the AI medical image segmentation model, and after training the model is directly applied to image segmentation in different scenes. However, limited by the complexity and diversity of medical images, the segmentation accuracy of the trained AI medical image segmentation model may not be high when applied to different scenes. For example, if an AI medical image segmentation model trained on abdominal CT images is directly applied to coronary CT image segmentation, large segmentation errors may result. It is therefore necessary to retrain the AI medical image segmentation model according to the actual application scenario; since the above solution does not provide for retraining the model in real time according to the actual application scenario, its segmentation accuracy remains limited.
In order to solve the problem, the invention provides a training method of a medical image segmentation model, which comprises the following steps: acquiring a medical image to be segmented; performing first segmentation on the medical image by adopting an AI medical image segmentation model to obtain a first segmentation result; performing second segmentation on the first segmentation result according to the received segmentation instruction to obtain a second segmentation result; the second segmentation result is used for training the AI medical image segmentation model; obtaining a difference value between the first segmentation result and the second segmentation result, wherein the difference value between the first segmentation result and the second segmentation result is used for obtaining the segmentation accuracy of the AI medical image segmentation model and/or is used for determining the segmentation workload of the second segmentation.
According to the training method of the medical image segmentation model, firstly, an AI medical image segmentation model is adopted to perform first segmentation on a medical image to obtain a first segmentation result, then, second segmentation is performed on the first segmentation result according to a received segmentation instruction to obtain a second segmentation result, and the second segmentation result is used for retraining the AI medical image segmentation model, so that the AI medical image segmentation model can be guaranteed to be suitable for different application scenes, and the accuracy of the first segmentation is improved.
Referring to fig. 1, in an embodiment of the present invention, the method for training a medical image segmentation model includes:
s11, acquiring the medical image to be segmented. Such as X-ray images, CT images, or magnetic resonance images, among others.
S12, performing a first segmentation on the medical image by using an AI medical image segmentation model to obtain a first segmentation result. The AI medical image segmentation model is a pre-trained model; the first segmentation is realized by inputting the medical image into the AI medical image segmentation model, which outputs the corresponding first segmentation result. The training of the AI medical image segmentation model may be implemented with the prior art and is not described here again.
S13, performing a second segmentation on the first segmentation result according to a received segmentation instruction to obtain a second segmentation result; the second segmentation result is used for training the AI medical image segmentation model. The second segmentation is a re-segmentation based on the first segmentation result and is usually performed by a professional segmentation person, so the second segmentation result can be regarded as a standard segmentation result; the training of the AI medical image segmentation model in step S13 is a retraining performed during actual application.
S14, obtaining a difference value between the first segmentation result and the second segmentation result, wherein the difference value between the first segmentation result and the second segmentation result is used for obtaining the segmentation accuracy of the AI medical image segmentation model and/or determining the segmentation workload of the second segmentation.
In this embodiment, the AI medical image segmentation model is retrained by using the second segmentation result, so that the adaptation degree of the AI medical image segmentation model to the current scene can be continuously improved, and the accuracy of the first segmentation is further improved. In particular, when the AI medical image segmentation model is applied to a new scene, retraining the AI medical image segmentation model through the second segmentation result can ensure that the AI medical image segmentation model is applicable to the new scene, thereby ensuring good first segmentation accuracy. The segmentation accuracy obtained in step S14 can visually reflect the quality of the training effect in step S13, which is beneficial for the user to further adjust the AI medical image segmentation model according to the training effect.
Furthermore, steps S11-S13 perform a first segmentation on the medical image by an AI medical image segmentation model, and perform a second segmentation on the first segmentation result by the received segmentation instruction and obtain a second segmentation result; wherein the second segmentation result may be regarded as a modification of the first segmentation result. On one hand, the first segmentation is carried out by adopting an AI medical image segmentation model, so that the image segmentation efficiency is ensured; on the other hand, performing the second segmentation according to the received segmentation instruction ensures the accuracy of the image segmentation. Therefore, the steps S11 to S13 can improve the efficiency of image segmentation while ensuring the segmentation accuracy.
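For orientation only, the loop formed by steps S11 to S14 can be sketched as follows in Python; the model interface (predict, fine_tune) and the correction callable are hypothetical placeholders, not part of the invention, standing in for whichever segmentation network and annotation tooling are actually used.

```python
def training_loop(model, incoming_images, get_corrections, compute_difference):
    """Illustrative outline of steps S11-S14 (not a prescribed implementation).

    model              -- pre-trained AI medical image segmentation model with
                          hypothetical predict()/fine_tune() methods
    incoming_images    -- iterable of medical images to be segmented (S11)
    get_corrections    -- callable applying the segmenter's instructions (S13)
    compute_difference -- callable returning the difference value (S14)
    """
    collected = []
    for image in incoming_images:
        first_result = model.predict(image)                    # S12: first segmentation
        second_result = get_corrections(image, first_result)   # S13: second segmentation
        difference = compute_difference(first_result, second_result)  # S14: difference value
        collected.append((image, second_result, difference))

    # Retrain on the corrected (standard) results so the model adapts to the current scene.
    model.fine_tune([(img, seg) for img, seg, _ in collected])
    return collected
```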
Also, in practical applications the second segmentation is often performed by a professional segmentation person. In order to measure the workload of the segmentation personnel, in one technical solution of this embodiment, the difference obtained in step S14 is used to determine the segmentation workload of the second segmentation, and the workload of the segmentation personnel is obtained from that segmentation workload. The workload of the segmentation personnel is related to the segmentation workload of the second segmentation; specifically, the higher the accuracy of the first segmentation result, the smaller the segmentation workload of the second segmentation and the smaller the workload of the segmentation personnel; conversely, the lower the accuracy of the first segmentation result, the larger the segmentation workload of the second segmentation and the larger the workload of the segmentation personnel.
In practical applications, organizations developing artificial intelligence diagnostic tools often need to segment a large number of medical images and obtain the corresponding segmented images, which can be used for training and/or validating artificial intelligence segmentation models. A development organization therefore usually employs professional segmentation personnel (such as doctors with medical image diagnosis experience) to perform segmentation labeling on a corresponding labeling platform. However, limited by the complexity of medical images and the diversity of image segmentation, existing labeling platforms have difficulty calculating the workload and cost of segmentation personnel in a reasonable manner. To address this problem, in an embodiment of the present invention, the medical image segmentation model training method further comprises: acquiring a segmentation work cost according to the segmentation workload of the second segmentation and a cost coefficient. As described above, the workload of the segmentation personnel can be obtained from the segmentation workload of the second segmentation, and the segmentation work cost can then be obtained from that workload and the cost coefficient. Therefore, when the medical image segmentation model training method is applied to a labeling platform, the workload and cost of segmentation personnel can be settled automatically, which simplifies the settlement process and reduces labor cost.
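As a concrete illustration of the settlement just described, the cost reduces to multiplying the measured workload by a cost coefficient; the following minimal Python sketch assumes a per-unit-of-workload rate, and the function and parameter names are illustrative rather than part of the invention.

```python
def segmentation_cost(workload: float, cost_coefficient: float) -> float:
    """Return the segmentation work cost for one image.

    workload         -- segmentation workload of the second segmentation,
                        e.g. the perimeter (or pixel count) of the difference region
    cost_coefficient -- cost per unit of workload agreed with the segmentation personnel
    """
    return workload * cost_coefficient

# Example: a difference region with a total perimeter of 340 pixels
# and a rate of 0.05 (currency units) per pixel of perimeter.
cost = segmentation_cost(workload=340.0, cost_coefficient=0.05)
print(f"settled cost: {cost:.2f}")  # settled cost: 17.00
```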
In an embodiment of the present invention, an implementation method for obtaining a difference between the first segmentation result and the second segmentation result includes: obtaining a set of pixel differences between the first segmentation result and the second segmentation result; the set of pixel differences is the difference between the first segmentation result and the second segmentation result.
Specifically, referring to fig. 2, the pixel difference set is the set of all pixels that belong to only one of the first segmentation result 21 and the second segmentation result 22, that is: C = (A − A∩B) ∪ (B − A∩B), where C denotes the pixel difference set, A denotes the set of all pixel points in the first segmentation result 21, and B denotes the set of all pixel points in the second segmentation result 22. A − A∩B represents the part over-labeled by the AI medical image segmentation model, which needs to be deleted in the second segmentation; B − A∩B represents the part missed by the AI medical image segmentation model, which needs to be supplemented in the second segmentation. The deletion and the supplementation constitute the work of the segmentation personnel during the second segmentation, so the difference reflects the workload of the segmentation personnel.
In this embodiment, the pixel difference set is selected as a difference between the first segmentation result and the second segmentation result, so that deletion work and supplement work performed by segmentation personnel in the second segmentation process can be clearly shown, and further, the segmentation workload of the second segmentation can be visually shown.
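By way of an illustrative sketch (one possible realization, not a prescribed one), the pixel difference set and its two parts can be computed directly from two binary masks with NumPy; the mask and function names are assumptions made for the example.

```python
import numpy as np

def pixel_difference_set(first_mask: np.ndarray, second_mask: np.ndarray):
    """Compute the difference between a first (model) and second (corrected) segmentation.

    Both masks are boolean arrays of the same shape, True inside the segmented region.
    Returns the over-labeled part (to delete), the missed part (to supplement),
    and their union, i.e. the pixel difference set C = (A - A∩B) ∪ (B - A∩B).
    """
    a, b = first_mask.astype(bool), second_mask.astype(bool)
    to_delete = a & ~b        # labeled by the model but removed in the second segmentation
    to_supplement = b & ~a    # missed by the model and added in the second segmentation
    difference = to_delete | to_supplement  # symmetric difference of A and B
    return to_delete, to_supplement, difference

# Example on a toy 5x5 image
a = np.zeros((5, 5), bool); a[1:4, 1:4] = True   # first segmentation result
b = np.zeros((5, 5), bool); b[2:5, 2:5] = True   # second segmentation result
_, _, diff = pixel_difference_set(a, b)
print(int(diff.sum()))  # number of differing pixels, a simple workload measure
```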
In an embodiment of the invention, the medical image is a 3-dimensional medical image. One implementation method of obtaining a difference between the first segmentation result and the second segmentation result includes: obtaining a set of voxel differences between the first segmentation result and the second segmentation result; the set of voxel differences is the difference between the first segmentation result and the second segmentation result. Wherein the 3-dimensional medical image may be a medical image composed of multi-slice CT images. The voxel is short for volume element, and the 3-dimensional medical image can be regarded as being composed of a plurality of voxel points. The set of voxel differences comprises all voxel points belonging only to the second segmentation result and not to the first segmentation result and all voxel points belonging only to the first segmentation result and not to the second segmentation result.
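Because the mask operations in the previous sketch are dimension-agnostic, the voxel difference set can be obtained the same way by stacking per-slice masks into volumes; the example below is again illustrative only.

```python
import numpy as np

# Illustrative 3-D example: per-slice boolean masks stacked into volumes.
first_slices  = [np.zeros((4, 4), bool) for _ in range(3)]
second_slices = [np.zeros((4, 4), bool) for _ in range(3)]
first_slices[1][1:3, 1:3] = True     # region labeled by the model on slice 1
second_slices[1][1:4, 1:4] = True    # corrected region on slice 1 is larger

first_volume = np.stack(first_slices)    # shape: (slices, height, width)
second_volume = np.stack(second_slices)

voxel_difference = first_volume ^ second_volume   # voxel symmetric difference
print(int(voxel_difference.sum()))                # voxels touched in the second segmentation
```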
Referring to fig. 3A, in an embodiment of the present invention, an implementation method for obtaining a difference between the first segmentation result and the second segmentation result includes:
S141a, obtaining a corresponding difference region according to the first segmentation result and the second segmentation result; the difference region may be obtained from the pixel difference set.
S142a, obtaining one or more geometric features of the difference region as the difference between the first segmentation result and the second segmentation result. Preferably, the geometric features of the difference region include one or more of the perimeter, volume, or area of the difference region.
In an embodiment of the present invention, for a single-slice CT image, the implementation method for obtaining the geometric features of the difference region includes: acquiring all pixel points in the difference region; and obtaining a corresponding minimum convex polygon according to all the pixel points in the difference region, wherein the geometric characteristic of the minimum convex polygon is the corresponding geometric characteristic of the difference region. For example, the perimeter of the minimum convex polygon is the perimeter of the difference region, and the area of the minimum convex polygon is the area of the difference region. The implementation method for obtaining the corresponding minimum convex polygon according to the pixel points can be implemented by using the prior art, and details are not repeated here.
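One possible realization of this geometric-feature step (a sketch under the assumption that SciPy is available, not the invention's mandated algorithm) uses the convex hull of the difference pixels; for 2-dimensional point sets, ConvexHull.area is the hull perimeter and ConvexHull.volume is the enclosed area.

```python
import numpy as np
from scipy.spatial import ConvexHull

def difference_region_features(difference_mask: np.ndarray):
    """Perimeter and area of the minimum convex polygon enclosing the difference pixels.

    difference_mask -- 2-D boolean mask of the difference region on one CT slice.
    Returns (perimeter, area); (0.0, 0.0) if there are fewer than 3 difference pixels.
    """
    points = np.argwhere(difference_mask)   # (row, col) coordinates of difference pixels
    if len(points) < 3:                     # a convex hull needs at least 3 points
        return 0.0, 0.0
    hull = ConvexHull(points)
    return hull.area, hull.volume           # in 2-D: area = perimeter, volume = area

# Example: a 6x6 square difference region
mask = np.zeros((10, 10), bool)
mask[2:8, 2:8] = True
perimeter, area = difference_region_features(mask)
print(round(perimeter, 1), round(area, 1))
```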
Referring to fig. 3B, in an embodiment of the invention, the medical image is a 3-dimensional medical image composed of 2 or more layers of images. In this embodiment, an implementation method for obtaining a difference between the first segmentation result and the second segmentation result includes:
S141b, for each layer (slice) image, obtaining the difference region corresponding to that layer image according to the first segmentation result and the second segmentation result of the layer image;
S142b, obtaining one or more geometric features of the difference region corresponding to the layer image as the difference corresponding to that layer image;
S143b, combining the differences corresponding to the respective layer images to obtain the difference between the first segmentation result and the second segmentation result.
According to the above description, the accuracy of the AI medical image segmentation model for performing the first segmentation on the 3-dimensional medical image can be obtained by the training method for the medical image segmentation model according to the embodiment.
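Continuing the illustrative sketches above, steps S141b to S143b can be realized by applying the per-slice geometric feature to every layer and summing the results; difference_region_features is the hypothetical helper sketched earlier, and summing perimeters is only one possible way of combining the per-layer differences.

```python
import numpy as np

def volume_difference(first_volume: np.ndarray, second_volume: np.ndarray) -> float:
    """Combine per-slice differences of a 3-D image into a single difference value.

    Both volumes are boolean arrays of shape (slices, height, width). The per-slice
    geometric feature used here is the convex-hull perimeter of the difference region
    (see difference_region_features above); the combined difference is their sum.
    """
    total = 0.0
    for first_slice, second_slice in zip(first_volume, second_volume):
        slice_difference = first_slice ^ second_slice                  # S141b: per-slice difference region
        perimeter, _ = difference_region_features(slice_difference)    # S142b: geometric feature
        total += perimeter                                             # S143b: combine across slices
    return total
```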
In an embodiment of the present invention, the segmentation instruction includes: a brush instruction, a smearing instruction, an erasing instruction, a single-point tracking instruction, and/or a multi-point tracking instruction. The brush instruction is used for drawing the contour of a specific area in the medical image to obtain the contour line of that area; the smearing instruction is used for smearing over a specific part or lesion in the medical image to obtain the corresponding region; the erasing instruction is used for clearing erroneous labels in the first segmentation result; the single-point tracking instruction selects, from one designated pixel point, all pixel points similar to it; the multi-point tracking instruction selects, from several designated pixel points, all pixel points similar to them.
Specifically, in the single-point tracking instruction or the multi-point tracking instruction, the pixel points similar to the designated pixel point(s) may be selected according to gray values. For example, in the single-point tracking instruction, if the gray value of the designated pixel point is a, all pixel points with gray values between 0.9a and 1.1a may be selected as similar pixel points; in the multi-point tracking instruction, if the minimum gray value among the designated pixel points is b and the maximum gray value is c, all pixel points with gray values between b and c may be selected as similar pixel points.
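A minimal sketch of this gray-value selection rule is given below; the plus/minus 10% tolerance follows the example above, and whether an additional connectivity constraint is applied in practice is left open here.

```python
import numpy as np

def single_point_track(image: np.ndarray, seed: tuple[int, int]) -> np.ndarray:
    """Select all pixels whose gray value lies within 0.9a..1.1a of the seed value a."""
    a = float(image[seed])
    low, high = sorted((0.9 * a, 1.1 * a))
    return (image >= low) & (image <= high)

def multi_point_track(image: np.ndarray, seeds: list[tuple[int, int]]) -> np.ndarray:
    """Select all pixels whose gray value lies between the min and max of the seed values."""
    values = [float(image[s]) for s in seeds]
    return (image >= min(values)) & (image <= max(values))

# Example on a toy gradient image
img = np.arange(100, dtype=float).reshape(10, 10)
mask_single = single_point_track(img, (5, 5))          # pixels within +/-10% of gray value 55
mask_multi = multi_point_track(img, [(2, 0), (7, 9)])  # pixels with gray values 20..79
print(int(mask_single.sum()), int(mask_multi.sum()))
```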
In this embodiment, the segmentation personnel can correct the first segmentation result and obtain the second segmentation result by using the brush instruction, the smearing instruction, the erasing instruction, the single-point tracking instruction and/or the multi-point tracking instruction.
In an embodiment of the invention, the medical image segmentation model training method further includes: displaying corresponding instruction icons on the display screen so that the segmentation personnel can input the segmentation instruction. For example, a brush tool, a smearing tool, an erasing tool, a single-point tracking tool, and/or a multi-point tracking tool may be displayed on the display screen. When a segmentation person selects the brush tool and moves it, a brush instruction is input, that is, a contour is drawn in the medical image; when the smearing tool is selected and moved, a smearing instruction is input, that is, a specific region of the medical image is smeared over so that it can be segmented; when the erasing tool is selected and moved, an erasing instruction is input; and when the single-point tracking tool or the multi-point tracking tool is selected, the corresponding single-point or multi-point tracking instruction is input. Displaying the corresponding instruction icons in this way makes it convenient for the segmentation personnel to input the corresponding instructions.
Referring to fig. 4, in an embodiment of the present invention, the method for training a medical image segmentation model includes:
and S41, acquiring a first segmentation result. Specifically, step S41 automatically segments the medical image by using the artificial intelligent image segmentation model to obtain and display a corresponding first segmentation result.
And S42, acquiring a second segmentation result. Specifically, the second segmentation result is obtained by modifying the first segmentation result by a segmentation person, and the modification of the first segmentation result by the segmentation person is realized by a segmentation instruction.
And S43, calculating the difference value of the first segmentation result and the second segmentation result as the work load calculation index of the segmentation personnel.
And S44, calculating the division operation cost of the division personnel according to the workload calculation index and the cost coefficient of the division personnel.
The difference between the first segmentation result and the second segmentation result may be obtained as a voxel difference set or a pixel difference set, or as the sum of the perimeters of the per-layer pixel difference sets. With the medical image segmentation model training method, the perimeters of the regions covered by the various tools used by the segmentation personnel can be counted automatically, and the workload of different segmentation personnel can be settled according to a cost per unit perimeter.
In an embodiment of the present invention, the brain CT image shown in fig. 5A is selected as the medical image to be segmented, and an inflammation region in the brain CT image is selected as the segmentation target 51. In this embodiment, the segmentation of the medical image is performed by labeling the segmentation target 51.
Referring to fig. 5B, a first segmentation result 52 obtained by the AI medical image segmentation model in this embodiment is shown; referring to fig. 5C, a second segmentation result 53 obtained according to the segmentation instruction in this embodiment is shown. Fig. 5D shows the difference between the first segmentation result 52 and the second segmentation result 53 in this embodiment, where the first difference 54 represents an over-labeled part of the first segmentation, whose label should be deleted in the second segmentation, and the second difference 55 represents a missed part of the first segmentation, whose label should be added in the second segmentation. The segmentation workload of the second segmentation can therefore be obtained from the first difference 54 and the second difference 55; in practical applications, the sum of the perimeters of the first difference 54 and the second difference 55 can be taken as the workload of the segmentation person on this layer of the CT image.
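Tying the example of fig. 5D back to the earlier sketches, the per-slice workload described here is simply the sum of the perimeters of the two difference regions; difference_region_features is the hypothetical helper sketched earlier, so this block is illustrative rather than normative.

```python
import numpy as np

def slice_workload(first_mask: np.ndarray, second_mask: np.ndarray) -> float:
    """Workload on one CT slice: sum of the perimeters of the over-labeled part
    (first difference 54) and the missed part (second difference 55).

    Relies on the hypothetical difference_region_features helper sketched earlier.
    """
    over_labeled = first_mask & ~second_mask    # first difference: to be deleted
    missed = second_mask & ~first_mask          # second difference: to be supplemented
    perimeter_over, _ = difference_region_features(over_labeled)
    perimeter_missed, _ = difference_region_features(missed)
    return perimeter_over + perimeter_missed
```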
Based on the above description of the medical image segmentation model training method, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the medical image segmentation model training method of the present invention.
Based on the description of the medical image segmentation model training method, the invention further provides electronic equipment. Referring to fig. 6, in an embodiment of the invention, the electronic device 600 includes: a memory 610 storing a computer program; a processor 620, communicatively coupled to the memory 610, for executing the medical image segmentation model training method of the present invention when the computer program is invoked; a display 630, communicatively coupled to the processor 620 and the memory 610, for displaying a GUI interactive interface associated with the medical image segmentation model training method.
The protection scope of the medical image segmentation model training method according to the present invention is not limited to the execution sequence of the steps listed in this embodiment, and all the schemes of adding, subtracting, and replacing steps in the prior art according to the principles of the present invention are included in the protection scope of the present invention.
The training method of the medical image segmentation model can obtain the accuracy of the AI medical image segmentation model for first segmentation according to the difference value of the first segmentation result and the second segmentation result, so that the detection of the segmentation accuracy of the AI medical image segmentation model is realized;
the training method of the medical image segmentation model comprises the steps of firstly obtaining a first segmentation result by using an AI medical image segmentation model, and correcting the first segmentation result according to a received segmentation instruction to obtain a second segmentation result, so that the segmentation efficiency is guaranteed, and the segmentation accuracy is improved;
the second segmentation result obtained in the medical image segmentation model training method can be used for training the AI medical image segmentation model, and continuous optimization and perfection of the AI medical image segmentation model in practical application are facilitated, so that the accuracy of the first segmentation result is improved, and the workload of segmentation personnel is reduced;
the medical image segmentation model training method can realize quantitative evaluation on the workload and the cost of segmentation personnel according to the difference value of the first segmentation result and the second segmentation result.
In conclusion, the present invention effectively overcomes various disadvantages of the prior art and has high industrial utilization value.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention be covered by the claims of the present invention.

Claims (10)

1. A training method for a medical image segmentation model is characterized by comprising the following steps:
acquiring a medical image to be segmented;
performing first segmentation on the medical image by adopting an AI medical image segmentation model to obtain a first segmentation result;
performing second segmentation on the first segmentation result according to the received segmentation instruction to obtain a second segmentation result; the second segmentation result is used for training the AI medical image segmentation model;
obtaining a difference value between the first segmentation result and the second segmentation result, wherein the difference value between the first segmentation result and the second segmentation result is used for obtaining the segmentation accuracy of the AI medical image segmentation model and/or is used for determining the segmentation workload of the second segmentation.
2. The medical image segmentation model training method according to claim 1, further comprising: acquiring a segmentation work cost according to the segmentation workload of the second segmentation and a cost coefficient.
3. The method for training a medical image segmentation model according to claim 1, wherein one implementation method for obtaining a difference value between the first segmentation result and the second segmentation result comprises:
obtaining a set of pixel differences between the first segmentation result and the second segmentation result; the set of pixel differences is the difference between the first segmentation result and the second segmentation result.
4. The method for training a medical image segmentation model according to claim 1, wherein one implementation method for obtaining a difference value between the first segmentation result and the second segmentation result comprises:
the medical image is a 3-dimensional medical image;
obtaining a set of voxel differences between the first segmentation result and the second segmentation result; the set of voxel differences is the difference between the first segmentation result and the second segmentation result.
5. The method for training a medical image segmentation model according to claim 1, wherein one implementation method for obtaining a difference value between the first segmentation result and the second segmentation result comprises:
obtaining a corresponding difference value area according to the first segmentation result and the second segmentation result;
obtaining one or more geometric features of the difference region as a difference between the first segmentation result and the second segmentation result.
6. The medical image segmentation model training method according to claim 5, characterized in that: the geometric characteristic of the difference region includes one or more of a perimeter, a volume, or an area of the difference region.
7. The medical image segmentation model training method according to claim 1, wherein the segmentation instruction comprises: a brush instruction, a paint instruction, an erase instruction, a single point trace instruction, and/or a multi point trace instruction.
8. The medical image segmentation model training method according to claim 1, further comprising: displaying a corresponding instruction icon on the display screen so that the segmentation personnel can input the segmentation instruction.
9. A computer-readable storage medium having stored thereon a computer program, characterized in that: the program, when executed by a processor, implements a method for training a medical image segmentation model according to any one of claims 1 to 8.
10. An electronic device, characterized in that the electronic device comprises:
a memory storing a computer program;
a processor, communicatively coupled to the memory, for executing the medical image segmentation model training method of any one of claims 1 to 8 when the computer program is invoked;
and the display is in communication connection with the processor and the memory and is used for displaying a relevant GUI (graphical user interface) of the medical image segmentation model training method.
CN202010334466.0A 2020-04-24 2020-04-24 Medical image segmentation model training method, medium and electronic device Pending CN111553894A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010334466.0A CN111553894A (en) 2020-04-24 2020-04-24 Medical image segmentation model training method, medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010334466.0A CN111553894A (en) 2020-04-24 2020-04-24 Medical image segmentation model training method, medium and electronic device

Publications (1)

Publication Number Publication Date
CN111553894A true CN111553894A (en) 2020-08-18

Family

ID=72007669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010334466.0A Pending CN111553894A (en) 2020-04-24 2020-04-24 Medical image segmentation model training method, medium and electronic device

Country Status (1)

Country Link
CN (1) CN111553894A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990331A (en) * 2021-03-26 2021-06-18 共达地创新技术(深圳)有限公司 Image processing method, electronic device, and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100088134A1 (en) * 2008-10-02 2010-04-08 Certusview Technologies, Llc Methods and apparatus for analyzing locate and marking operations with respect to historical information
CN107194608A (en) * 2017-06-13 2017-09-22 复旦大学 A kind of mass-rent towards disabled person community marks Task Assigned Policy
CN107273891A (en) * 2017-06-08 2017-10-20 深圳市唯特视科技有限公司 A kind of target category detection method based on click supervised training
CN107480677A (en) * 2017-08-07 2017-12-15 北京深睿博联科技有限责任公司 The method and device of area-of-interest in a kind of identification three-dimensional CT image
CN107492135A (en) * 2017-08-21 2017-12-19 维沃移动通信有限公司 A kind of image segmentation mask method, device and computer-readable recording medium
CN107748771A (en) * 2017-10-11 2018-03-02 恩泊泰(天津)科技有限公司 Drive test displaying CalmCar big datas mark platform and method
CN108369642A (en) * 2015-12-18 2018-08-03 加利福尼亚大学董事会 Acute disease feature is explained and quantified according to head computer tomography
CN108932724A (en) * 2018-05-31 2018-12-04 杭州晓图科技有限公司 A kind of system automatic auditing method based on multi-person synergy image labeling
CN109190631A (en) * 2018-08-31 2019-01-11 阿里巴巴集团控股有限公司 The target object mask method and device of picture
CN109445948A (en) * 2018-11-15 2019-03-08 济南浪潮高新科技投资发展有限公司 A kind of data mark crowdsourcing plateform system and crowdsourcing data mask method based on intelligent contract
CN109697460A (en) * 2018-12-05 2019-04-30 华中科技大学 Object detection model training method, target object detection method
CN109740622A (en) * 2018-11-20 2019-05-10 众安信息技术服务有限公司 Image labeling task crowdsourcing method and system based on the logical card award method of block chain
CN110941684A (en) * 2018-09-21 2020-03-31 高德软件有限公司 Production method of map data, related device and system
CN110993064A (en) * 2019-11-05 2020-04-10 北京邮电大学 Deep learning-oriented medical image labeling method and device


Similar Documents

Publication Publication Date Title
RU2739713C1 (en) Training annotation of objects in an image
US11880977B2 (en) Interactive image matting using neural networks
US8929635B2 (en) Method and system for tooth segmentation in dental images
CN111369542B (en) Vessel marking method, image processing system, and storage medium
US8970581B2 (en) System and method for interactive contouring for 3D medical images
US20120051606A1 (en) Automated System for Anatomical Vessel Characteristic Determination
CN112885453A (en) Method and system for identifying pathological changes in subsequent medical images
JPH10187936A (en) Image processor
JP2009545052A (en) Interactive segmentation of images with a single scribble
US10672122B2 (en) Optimizing user interactions in segmentation
US7539333B2 (en) System and method for processing human body image
US11683438B2 (en) Systems and methods to semi-automatically segment a 3D medical image using a real-time edge-aware brush
US8050469B2 (en) Automated measurement of objects using deformable models
EP2353141B1 (en) One-click correction of tumor segmentation results
US10497127B2 (en) Model-based segmentation of an anatomical structure
CN112150571A (en) Image motion artifact eliminating method, device, equipment and storage medium
CN106952264B (en) Method and device for cutting three-dimensional medical target
CN113889238B (en) Image identification method and device, electronic equipment and storage medium
CN115601811A (en) Facial acne detection method and device
CN111553894A (en) Medical image segmentation model training method, medium and electronic device
CN112053769B (en) Three-dimensional medical image labeling method and device and related product
EP2734147B1 (en) Method for segmentation of dental images
Lu et al. Improved 3D live-wire method with application to 3D CT chest image analysis
CN112530554A (en) Scanning positioning method and device, storage medium and electronic equipment
CN112270643B (en) Three-dimensional imaging data stitching method and device, electronic equipment and storage medium

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200818)