CN112070777B - Method and device for organ-at-risk segmentation under multiple scenes based on incremental learning - Google Patents


Info

Publication number
CN112070777B
CN112070777B (application CN202011249376.8A)
Authority
CN
China
Prior art keywords
organ
data
slices
training
subsets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011249376.8A
Other languages
Chinese (zh)
Other versions
CN112070777A (en)
Inventor
张子健
周蓉蓉
程婷婷
王姝婷
梁瞻
金泽夫
刘归
王一帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiangya Hospital of Central South University
Original Assignee
Xiangya Hospital of Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiangya Hospital of Central South University filed Critical Xiangya Hospital of Central South University
Priority to CN202011249376.8A priority Critical patent/CN112070777B/en
Publication of CN112070777A publication Critical patent/CN112070777A/en
Application granted granted Critical
Publication of CN112070777B publication Critical patent/CN112070777B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and equipment for organ-at-risk segmentation under multiple scenes based on incremental learning.

Description

Method and device for organ-at-risk segmentation under multiple scenes based on incremental learning
Technical Field
The invention relates to the technical field of medical information processing, in particular to a method and equipment for organ-at-risk segmentation under multiple scenes based on incremental learning.
Background
With the rise of big data and improvements in computing power, deep learning has achieved great success in computer vision: new models and methods are proposed every year, and the accuracy of image detection and segmentation keeps improving. Its applications have also expanded from natural images to medical images, but the medical field poses greater challenges than natural images: data are scarce, annotation is expensive, and the scenarios are more complex.
One task frequently encountered in medical imaging is organ segmentation under multiple scenarios, where each scenario corresponds to a series of segmentation tasks. Because the data differ substantially across scenarios, feeding them into one model without any processing makes the model hard to converge and yields poor final segmentation; yet training a separate deep learning model for the data of each scenario greatly increases cost, and the relations among the data of different scenarios cannot then be exploited for transfer learning.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art. Therefore, the invention provides a method and equipment for organ-at-risk segmentation under multiple scenes based on incremental learning.
The invention provides a method for organ-at-risk segmentation under multiple scenes based on incremental learning, which comprises the following steps:
selecting a plurality of data sets, wherein each data set comprises a plurality of positive slices containing the organs to be segmented and a plurality of negative slices not containing them; dividing each data set into a plurality of data subsets in one-to-one correspondence with the organs to be segmented, each data subset comprising a plurality of positive slices of its corresponding organ and a plurality of negative slices not containing that organ, the number of positive slices in each data subset being larger than the number of negative slices; and preprocessing the positive slices of each data subset;
constructing a segmentation Model based on U-Net; selecting any one data set and training on one of its data subsets to obtain Model0, the optimal segmentation model for the corresponding organ; training on the next data subset of that data set, starting from Model0, to obtain Model1, the optimal segmentation model for its corresponding organ; and iterating in this way until all i data subsets of the data set have been trained, yielding the optimal segmentation model Modeli. Then, starting from Modeli, each data subset of the next data set is iteratively trained by the same method, until every data subset of every data set has been trained and the final optimal segmentation model Modelk is obtained.
According to the embodiment of the invention, at least the following technical effects are achieved:
by using incremental learning, the method effectively transfers features learned on old tasks to new tasks, improving segmentation on each new task while preserving segmentation performance on organs from old scenarios; a single model thus effectively solves the multi-organ segmentation problem across different scenarios.
According to some embodiments of the present invention, in the process of training the next optimal segmentation model through the corresponding data subset based on the previous optimal segmentation model, the method further includes the steps of:
and obtaining, on the data subset already used to train the previous optimal segmentation model, the K-L divergence between the previous and next optimal segmentation models during training, and applying this K-L divergence to the weights being updated while training the next optimal segmentation model.
According to some embodiments of the present invention, before training by the corresponding data subset based on the previous optimal segmentation model, the method further comprises the steps of:
dividing all slices in the corresponding data subset into two partial subsets, and applying an image transformation in the grey domain to all slices of the first partial subset;
and training the previous optimal segmentation model on the second partial subset together with the grey-transformed first partial subset, with the K-L divergence between the outputs for the first and second partial subsets applied to its loss function, until the corresponding next optimal segmentation model is obtained.
According to some embodiments of the invention, before the corresponding data subsets are input into the model, the method further comprises the step of: cropping the corresponding organ regions of all positive slices in the corresponding data subset.
According to some embodiments of the invention, the pre-processing of the positive sections of each of the data subsets comprises the steps of:
marking the body mask of each slice and setting the grey values outside the body to 0;
applying contrast adjustment, gamma adjustment, and CLAHE enhancement to the slices in the grey domain;
applying translation, rotation, flipping, and grid-distortion transforms to the slices;
equalizing the slices' grey histograms and normalizing the slices to a fixed size.
In a second aspect of the present invention, an organ-at-risk segmentation apparatus under multiple scenarios based on incremental learning is provided, including: at least one control processor and a memory for communicative connection with the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform a method of organ-at-risk segmentation in an incremental learning based multi-scenario as described in the first aspect of the present invention.
In a third aspect of the present invention, a computer-readable storage medium is provided, which stores computer-executable instructions for causing a computer to perform the method for organ-at-risk segmentation under multiple scenarios based on incremental learning according to the first aspect of the present invention.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flowchart of an organ-at-risk segmentation method in multiple scenarios based on incremental learning according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of the Model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a segmentation training strategy for an organ in each data subset according to an embodiment of the present invention;
fig. 4 is a diagram of the prediction results of the final optimal segmentation model trained on the external-beam and afterloading data sets according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an organ-at-risk segmentation apparatus in multiple scenarios based on incremental learning according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
Referring to fig. 1 to 3, an embodiment of the present invention provides a method for organ-at-risk segmentation under multiple scenarios based on incremental learning, including the following steps:
s100, selecting a plurality of data sets, wherein each data set comprises a plurality of positive slices containing a plurality of organs to be segmented and a plurality of negative slices not containing the organs to be segmented, a plurality of data subsets which are in one-to-one correspondence with the organs to be segmented are divided from each data set, each data subset comprises a plurality of positive slices of the corresponding organs to be segmented and a plurality of negative slices not containing the organs to be segmented, the number of the positive slices in each data subset is larger than that of the negative slices, and the positive slices of each data subset are preprocessed.
In this step, the data sets satisfy one of the following relationships: they are all images of CT sequences; they are images of the same region in different modalities (e.g., abdominal MR and CT images); or they are the same region under different scenarios (e.g., an external-beam data set and an afterloading data set, where the external-beam data set consists of MR images acquired externally during external-beam radiotherapy for cervical cancer, and the afterloading data set consists of MR images acquired with an intravaginal applicator during afterloading brachytherapy for cervical cancer).
Positive slices are slices containing the corresponding organ to be segmented; negative slices are slices that do not. Making positive slices outnumber negative slices in each data subset lets the current model focus on the current task, giving it a clear point of attention and ensuring convergence; meanwhile, including a certain number of negative samples in each data subset gives the model some generalization ability during training.
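The positive/negative composition of a data subset can be sketched as follows; `build_subset` is a hypothetical helper, and the 200-slice total and 90% positive ratio mirror the example values given later in the description (the slice identifiers are purely illustrative):

```python
import random

def build_subset(positive_slices, negative_slices, total=200, pos_ratio=0.9, seed=0):
    """Assemble one organ-specific data subset: mostly positive slices of the
    target organ (so the model focuses on the current task) plus a smaller
    share of negative slices (for generalization)."""
    rng = random.Random(seed)
    n_pos = int(total * pos_ratio)   # e.g. 180 positive slices
    n_neg = total - n_pos            # e.g. 20 negative slices
    subset = rng.sample(positive_slices, n_pos) + rng.sample(negative_slices, n_neg)
    rng.shuffle(subset)
    return subset

# toy identifiers standing in for actual slice files
pos = [f"pos_{i}" for i in range(500)]
neg = [f"neg_{i}" for i in range(300)]
s0 = build_subset(pos, neg)
```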
As an alternative embodiment, in order to increase the generalization ability of the model, the positive slices in each data subset are preprocessed, and the preprocessing step includes:
marking the body mask of each slice and setting the grey values outside the body to 0, to prevent information outside the body from interfering with the model's learning of the organ (since the organ to be segmented cannot appear outside the body mask, this guides the network to concentrate on features within the body);
applying contrast adjustment, gamma adjustment, and CLAHE enhancement to the slices in the grey domain;
applying translation, rotation, flipping, and grid-distortion transforms to the slices;
equalizing the slices' grey histograms and normalizing the slices to a fixed size.
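A minimal NumPy sketch of the grey-domain part of this preprocessing (body masking, gamma adjustment, histogram equalization); the gamma value is an assumption, and CLAHE plus the geometric augmentations are omitted for brevity:

```python
import numpy as np

def preprocess_slice(img, body_mask, gamma=0.8):
    """Zero out grey values outside the body mask, apply gamma adjustment,
    then global histogram equalization. All values end up in [0, 1]."""
    img = img.astype(np.float32) / 255.0
    img = np.where(body_mask, img, 0.0)   # grey outside the body set to 0
    img = img ** gamma                    # gamma adjustment
    # histogram equalization via the cumulative distribution of grey levels
    flat = (img * 255).astype(np.uint8)
    hist = np.bincount(flat.ravel(), minlength=256)
    cdf = hist.cumsum() / hist.sum()
    return cdf[flat]

slice_img = np.random.default_rng(0).integers(0, 256, (64, 64))
mask = np.zeros((64, 64), bool); mask[8:56, 8:56] = True
out = preprocess_slice(slice_img, mask)
```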
S200, constructing a segmentation Model based on U-Net; selecting any one data set and training on one of its data subsets to obtain Model0, the optimal segmentation model for the corresponding organ; training on the next data subset of that data set, starting from Model0, to obtain Model1, the optimal segmentation model for its corresponding organ; and iterating in this way until all i data subsets of the data set have been trained, yielding the optimal segmentation model Modeli. Then, starting from Modeli, each data subset of the next data set is iteratively trained by the same method, until every data subset of every data set has been trained and the final optimal segmentation model Modelk is obtained.
For ease of understanding, optionally take the data sets of two different scenarios to be an external-beam data set and an afterloading data set. The external-beam data set mainly contains negative slices and positive slices of the rectum and sigmoid colon; the afterloading data set mainly contains negative slices and positive slices of the rectum, bladder, and sigmoid colon. The afterloading data set is divided into three data subsets, S0 (rectum), S1 (bladder), and S2 (sigmoid colon); the external-beam data set is divided into two data subsets, S3 (rectum) and S4 (sigmoid colon). The total number of slices per data subset is 200 (200 is an example, not a limitation); balancing the current task against model generalization, this embodiment makes positive slices of the organ to be segmented account for 90% of each data subset (again an example, not a limitation).
(1) Firstly, a Model based on U-Net is constructed, and the structure of the Model is shown in FIG. 2.
(2) And training the Model through the data subset S0 to obtain a Model0 with the optimal segmentation performance (namely, the minimum value of the current loss function) under the organ to be segmented, wherein the structure of the Model0 is the same as that of the Model.
(3) Based on the Model0, training is carried out through the data subset S1, and a Model1 with the optimal segmentation performance under the organ to be segmented is obtained. The structure of Model1 is the same as that of Model 0.
Similarly, the data subsets S2 to S4 are processed in the same manner as steps (2) to (3) above, until the final optimal segmentation model for organ segmentation is obtained.
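The sequential schedule of steps (1)–(3) can be sketched as a skeleton loop: one model, trained subset by subset, each stage initialized from the previous stage's best weights. `train_one_stage` is a hypothetical stand-in for the actual U-Net optimization; here it merely records which subsets have shaped the weights:

```python
def train_one_stage(weights, subset):
    # placeholder: in practice, fine-tune the U-Net on `subset` until the
    # loss is minimal and return the best weights
    return weights + [subset["name"]]

subsets = [{"name": n} for n in ("S0", "S1", "S2", "S3", "S4")]
weights = []      # Model: randomly initialized U-Net
history = []
for s in subsets:  # produces Model0, Model1, ..., Modelk in turn
    weights = train_one_stage(weights, s)
    history.append(list(weights))
```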
By using incremental learning, the method effectively transfers features learned on old tasks to new tasks, improving segmentation on each new task while preserving segmentation performance on organs from old scenarios; a single model thus effectively solves the multi-organ segmentation problem across different scenarios.
As an optional implementation manner, in the process of training the next optimal segmentation model through the corresponding data subset based on the previous optimal segmentation model, the method further includes the steps of:
and obtaining, on the data subset used to train the previous optimal segmentation model, the K-L divergence between the previous and next optimal segmentation models during training, and updating the weights through this K-L divergence while training the next optimal segmentation model.
For ease of understanding, take Model0 and Model1 as examples:
The generation of the optimal segmentation Model1 consists of two parts. One part trains the model on the data of subset S1, with training initialized from the weights of Model0. The other part feeds the data of S0 into both the current model and Model0 and computes the K-L divergence between the two outputs (the K-L term of the loss function described later). The weights of the model are adjusted using the outputs of both parts (see equation (7) of the loss function below) until the optimal segmentation Model1 is obtained; the data of the two parts are mixed within the same input batch of the network, the model is trained and its weights adjusted on both, and Model1 finally results.
Similarly, each of the other models is handled in the same way, so that each next optimal segmentation model maintains its prediction performance on the data subsets used to train the previous optimal segmentation models.
As an optional implementation manner, before training by the corresponding data subset based on the previous optimal segmentation model, the method further includes the steps of:
dividing all slices in the corresponding data subset into two partial subsets, and applying an image transformation in the grey domain to all slices of the first partial subset;
and training the previous optimal segmentation model on the second partial subset together with the grey-transformed first partial subset, with the K-L divergence between the outputs for the first and second partial subsets applied to its loss function, until the corresponding next optimal segmentation model is obtained.
For ease of understanding, take the data subset S1 as an example: S1 is divided into two halves. The first half undergoes an image transformation in the grey domain, giving model input A; the second half is left unchanged, giving input B. A and B are fed into the model simultaneously, and during training the K-L divergence between the outputs for A and B is applied to the current loss function (see equations (7) and (8) of the loss function later). Thus, starting from Model0, the Model1 trained on S1 focuses as much as possible on the core features of the image, improving its noise immunity, robustness, and generalization.
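The consistency penalty between inputs A and B can be sketched in NumPy; the logits here are hypothetical stand-ins for the network outputs on the grey-transformed half (A) and the untouched half (B) — a transform-invariant model produces nearly identical probability maps, so the penalty stays small:

```python
import numpy as np

def binary_kl(p, q, eps=1e-8):
    """Pointwise K-L divergence between two per-pixel foreground-probability
    maps, summing foreground and background contributions so every term is
    non-negative."""
    p = np.clip(p, eps, 1.0 - eps)
    q = np.clip(q, eps, 1.0 - eps)
    return float(np.sum(p * np.log(p / q) + (1.0 - p) * np.log((1.0 - p) / (1.0 - q))))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
logits_b = rng.normal(size=(4, 4))                     # output for input B
logits_a = logits_b + 0.01 * rng.normal(size=(4, 4))   # near-identical output for A
consistency_penalty = binary_kl(sigmoid(logits_a), sigmoid(logits_b))
```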
As an alternative embodiment, before the corresponding data subset is input into the model, the method further comprises the step of: cropping the corresponding organ regions of all positive slices in the corresponding data subset. This guides the training of the corresponding optimal segmentation model to attend more to the local information of the organ to be segmented.
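A minimal sketch of this cropping step; the helper name and the margin size are assumptions for illustration:

```python
import numpy as np

def crop_to_organ(img, organ_mask, margin=8):
    """Crop a positive slice to the bounding box of the labelled organ
    region plus a margin, so training concentrates on local detail."""
    ys, xs = np.nonzero(organ_mask)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, img.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, img.shape[1])
    return img[y0:y1, x0:x1]

img = np.arange(64 * 64).reshape(64, 64)
mask = np.zeros((64, 64), bool); mask[20:30, 24:40] = True
patch = crop_to_organ(img, mask)
```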
The loss functions used by the model in this embodiment are as follows:
Loss function based on Focal Loss:

FL(p_t) = -α_t · (1 - p_t)^γ · log(p_t)    (1)

p_t = p if y = 1, and p_t = 1 - p otherwise    (2)

where p denotes the predicted probability that a sample belongs to class 1, y = 1 denotes pixel points inside the organ to be segmented on the slice, α_t denotes the class-balancing weight, and γ denotes the focusing parameter; in this embodiment α and γ are set to 0.25 and 2, respectively.
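Equations (1)–(2) can be checked numerically; a plain-NumPy sketch with the stated settings α = 0.25, γ = 2 (a confidently predicted batch should incur far less loss than an uncertain one):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-8):
    """Focal loss: FL = -alpha_t * (1 - p_t)^gamma * log(p_t), where p_t is
    p for foreground pixels (y=1) and 1-p for background pixels (y=0)."""
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

y = np.array([1, 1, 0, 0])
confident = focal_loss(np.array([0.9, 0.95, 0.1, 0.05]), y)
uncertain = focal_loss(np.array([0.6, 0.55, 0.45, 0.5]), y)
```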
Dice loss weighted per individual organ:
The Dice loss is defined as follows:

L_Dice = 1 - (2·|X ∩ Y| + smooth) / (|X| + |Y| + smooth)    (3)

smooth is set to 1 in this embodiment, which prevents the abnormal case of a zero denominator when the DSC is computed on negative slices.
The channel_dice loss is defined as follows:

L_channel_dice = Σ_c w_c · L_Dice^(c)    (4)

where L_Dice^(c) denotes the Dice loss computed per organ on each of the 3 output channels, and the per-channel weight w_c is computed by equation (5), whose image is not recoverable from this text.    (5)
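A sketch of equations (3)–(4); the per-channel weights used here are illustrative assumptions, since equation (5) defining w_c is not recoverable from the source:

```python
import numpy as np

def dice_loss(pred, target, smooth=1.0):
    """Dice loss with smooth=1, so the denominator never reaches 0 on
    negative slices (empty target masks)."""
    inter = np.sum(pred * target)
    dsc = (2.0 * inter + smooth) / (np.sum(pred) + np.sum(target) + smooth)
    return 1.0 - dsc

def channel_dice_loss(pred, target, weights):
    """Per-organ weighted Dice over the output channels (the channel_dice
    term; the weights are assumed)."""
    return sum(w * dice_loss(pred[c], target[c]) for c, w in enumerate(weights))

empty = np.zeros((8, 8))
perfect = dice_loss(np.ones((8, 8)), np.ones((8, 8)))
negative_slice = dice_loss(empty, empty)   # no division by zero thanks to smooth
three_channel = np.stack([np.ones((4, 4)), np.zeros((4, 4)), np.ones((4, 4))])
weighted = channel_dice_loss(three_channel, three_channel, [0.3, 0.3, 0.4])
```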
the loss function based on the K-L divergence is defined as follows:
Figure 970575DEST_PATH_IMAGE014
(6)
in the iterative training of the organ, the training is carried out,
Figure 383102DEST_PATH_IMAGE015
for reference image at corresponding pixel point
Figure 348784DEST_PATH_IMAGE016
The probability value of the prediction is determined,
Figure 530366DEST_PATH_IMAGE017
for matched images at corresponding pixel points
Figure 720039DEST_PATH_IMAGE016
The predicted probability value of (2).
In summary, the loss used by the model during training is:

L = L_FL + λ · L_channel_dice + μ₁ · L_KL_prev + μ₂ · L_KL_aug    (7)

where L_KL_prev is the K-L term of equation (6), computed between the current and previous models on the previous data subset, and L_KL_aug is defined as:

L_KL_aug = Σ_i p_A(x_i) · log( p_A(x_i) / p_B(x_i) )    (8)

i.e., the K-L loss between the outputs for differently transformed versions (A and B) of the same data subset. In this embodiment λ is set to 0.5, and μ₁ and μ₂ are set to 0.1 and 0.2, respectively.
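A self-contained sketch of the combined objective with the stated weights (0.5 on the Dice term, 0.1 on the old-task K-L term, 0.2 on the transform-consistency K-L term); the exact composition of the terms is an assumption reconstructed from the surrounding text, and the probability maps are toy stand-ins:

```python
import numpy as np

def focal(p, y, alpha=0.25, gamma=2.0, eps=1e-8):
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)
    a_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-a_t * (1 - p_t) ** gamma * np.log(p_t)))

def dice(p, y, smooth=1.0):
    return 1 - (2 * np.sum(p * y) + smooth) / (np.sum(p) + np.sum(y) + smooth)

def kl(p, q, eps=1e-8):
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return float(np.mean(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))))

def total_loss(pred, target, old_on_old, new_on_old, out_a, out_b,
               lam=0.5, mu1=0.1, mu2=0.2):
    """Sketch of equation (7): focal + weighted Dice + the two K-L terms."""
    return (focal(pred, target)
            + lam * dice(pred, target)
            + mu1 * kl(old_on_old, new_on_old)   # preserve the old task
            + mu2 * kl(out_a, out_b))            # transform consistency

rng = np.random.default_rng(0)
y = (rng.random((8, 8)) > 0.5).astype(float)
p = y * 0.9 + 0.05        # near-perfect prediction (0.95 / 0.05)
loss = total_loss(p, y, p, p, p, p)
```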
The prediction results of the final optimal segmentation model trained on the external-beam and afterloading data sets are shown in fig. 4, where the red, blue, and green (dark) contours are the gold-standard annotations drawn by physicians and the light contours are the model's predictions.
Referring to fig. 5, based on the above system embodiment and method embodiment, the present invention provides an apparatus for organ-at-risk segmentation under multiple scenarios based on incremental learning, which includes: one or more control processors and memory, one control processor being exemplified in fig. 5. The control processor and the memory may be connected by a bus or other means, as exemplified by the bus connection in fig. 5.
The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the organ-at-risk segmentation apparatus in the incremental learning based multi-scenario in the embodiments of the present invention. The control processor executes the method for organ-at-risk segmentation in multi-scenarios based on incremental learning described in the above method embodiments by executing non-transitory software programs, instructions, and modules stored in the memory.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located from the control processor, and the remote memory may be networked to the organ-at-risk segmentation facility in the incremental learning-based multi-scenario. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory and, when executed by the one or more control processors, perform the method for organ-at-risk segmentation in multiple scenarios based on incremental learning in the above-described method embodiments.
In an embodiment of the present invention, a computer-readable storage medium is provided, and the computer-readable storage medium stores computer-executable instructions, which are executed by one or more control processors, for example, by one of the control processors in fig. 5, and can cause the one or more control processors to execute the organ-at-risk segmentation method in the incremental learning-based multi-scenario in the method embodiment.
Through the above description of the embodiments, those skilled in the art can clearly understand that the embodiments can be implemented by software plus a general hardware platform. Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program that can be executed by associated hardware, and the computer program may be stored in a computer readable storage medium, and when executed, may include the processes of the above embodiments of the methods. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (7)

1. A method for organ-at-risk segmentation under multiple scenes based on incremental learning is characterized by comprising the following steps:
selecting a plurality of data sets under different scenarios that share at least one common organ to be segmented, wherein each data set comprises a plurality of positive slices containing the organs to be segmented under the corresponding scenario and a plurality of negative slices not containing them; dividing each data set into a plurality of data subsets in one-to-one correspondence with the organs to be segmented, each data subset comprising a plurality of positive slices of its corresponding organ under the corresponding scenario and a plurality of negative slices not containing that organ, the number of positive slices in each data subset being larger than the number of negative slices; and preprocessing the positive slices of each data subset;
constructing a segmentation Model based on U-Net; selecting any one data set and training on one of its data subsets to obtain Model0, the optimal segmentation model for the corresponding organ; training on the next data subset of that data set, starting from Model0, to obtain Model1, the optimal segmentation model for its corresponding organ; and iterating in this way until all i data subsets of the data set have been trained, yielding the optimal segmentation model Modeli; then, starting from Modeli, iteratively training on each data subset of the next data set by the same method, until every data subset of every data set has been trained and the final optimal segmentation model Modelk is obtained.
2. The method for organ-at-risk segmentation under multiple scenarios based on incremental learning of claim 1, wherein in the process of training the next optimal segmentation model through the corresponding data subsets based on the previous optimal segmentation model, the method further comprises the steps of:
and obtaining, on the data subset already used to train the previous optimal segmentation model, the K-L divergence between the previous and next optimal segmentation models during training, and applying this K-L divergence to the weights being updated while training the next optimal segmentation model.
3. The method for organ-at-risk segmentation under multiple scenes based on incremental learning of claim 1, wherein before training on the corresponding data subset based on the previous optimal segmentation model, the method further comprises the following steps:
dividing all slices in the corresponding data subset into two partial subsets, and applying a gray-domain image transformation to all slices in the first partial subset;
and training the previous optimal segmentation model on the second partial subset together with the gray-transformed first partial subset, wherein the K-L divergence between the first partial subset and the second partial subset acts on the loss function of the previous optimal segmentation model, until the corresponding next optimal segmentation model is obtained by training.
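The two-subset gray-domain scheme of claim 3 might look like the following sketch. Assumptions not stated in the patent: intensities lie in [0, 1], the gray-domain transformation is a simple gamma adjustment, and the split is a random half/half partition; all function names are invented for illustration.

```python
import numpy as np

def gray_transform(slices, gamma=0.8):
    """Gray-domain image transformation: a gamma adjustment applied
    to the first partial subset (intensities assumed in [0, 1])."""
    return np.power(slices, gamma)

def split_and_transform(slices, rng):
    """Divide all slices of a data subset into two partial subsets and
    apply the gray-domain transformation to the first half only."""
    idx = rng.permutation(len(slices))
    half = len(slices) // 2
    first = gray_transform(slices[idx[:half]])   # transformed partial subset
    second = slices[idx[half:]]                  # untouched partial subset
    return first, second
```

A K-L term between the model's predictions on the two partial subsets could then be added to the loss, as in the sketch for claim 2.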
4. The method for organ-at-risk segmentation under multiple scenes based on incremental learning of claim 3, wherein before the corresponding data subsets are input into the model, the method further comprises the following step: cropping the corresponding organ regions of all positive slices in the corresponding data subset.
5. The method for organ-at-risk segmentation under multiple scenes based on incremental learning of claim 1, wherein the preprocessing of the positive slices of each data subset comprises the following steps:
marking the body mask of each slice and setting the gray value outside the body to 0;
performing contrast adjustment, gamma adjustment and CLAHE enhancement on the slices in the gray domain;
performing translation, rotation, flipping and grid-distortion processing on the slices;
and performing gray-histogram equalization on the slices and normalizing the slices to a fixed size.
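The preprocessing steps of claim 5 can be sketched in plain NumPy. This is illustrative only: a real pipeline would presumably use library implementations (e.g. CLAHE, which is omitted here), intensities are assumed in [0, 1], and all function names are invented.

```python
import numpy as np

def mask_body(slice_, body_mask):
    """Set the gray value outside the body mask to 0."""
    return np.where(body_mask, slice_, 0.0)

def adjust_gamma(slice_, gamma=1.2):
    """Gamma adjustment in the gray domain (intensities in [0, 1])."""
    return np.power(slice_, gamma)

def equalize_histogram(slice_, bins=256):
    """Gray-histogram equalization of a slice in [0, 1]: map each
    intensity through the cumulative distribution of the histogram."""
    hist, edges = np.histogram(slice_, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    return np.interp(slice_, edges[:-1], cdf)

def resize_nearest(slice_, size):
    """Normalize the slice to a fixed size via nearest-neighbor sampling."""
    h, w = slice_.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return slice_[rows[:, None], cols]
```

Geometric augmentations (translation, rotation, flipping, grid distortion) would typically come from an augmentation library rather than hand-rolled code, so they are not reproduced here.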
6. An apparatus for organ-at-risk segmentation under multiple scenes based on incremental learning, comprising: at least one control processor and a memory communicatively connected with the at least one control processor; wherein the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the method for organ-at-risk segmentation under multiple scenes based on incremental learning of any one of claims 1 to 5.
7. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method for organ-at-risk segmentation under multiple scenes based on incremental learning of any one of claims 1 to 5.
CN202011249376.8A 2020-11-10 2020-11-10 Method and device for organ-at-risk segmentation under multiple scenes based on incremental learning Active CN112070777B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011249376.8A CN112070777B (en) 2020-11-10 2020-11-10 Method and device for organ-at-risk segmentation under multiple scenes based on incremental learning

Publications (2)

Publication Number Publication Date
CN112070777A CN112070777A (en) 2020-12-11
CN112070777B true CN112070777B (en) 2021-10-08

Family

ID=73655824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011249376.8A Active CN112070777B (en) 2020-11-10 2020-11-10 Method and device for organ-at-risk segmentation under multiple scenes based on incremental learning

Country Status (1)

Country Link
CN (1) CN112070777B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116205289B (en) * 2023-05-05 2023-07-04 海杰亚(北京)医疗器械有限公司 Animal organ segmentation model training method, segmentation method and related products
CN117197625B (en) * 2023-08-29 2024-04-05 珠江水利委员会珠江水利科学研究院 Remote sensing image space-spectrum fusion method, system, equipment and medium based on correlation analysis
CN117912640B (en) * 2024-03-20 2024-06-25 合肥工业大学 Domain increment learning-based depressive disorder detection model training method and electronic equipment

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
WO2012130251A1 (en) * 2011-03-28 2012-10-04 Al-Romimah Abdalslam Ahmed Abdalgaleel Image understanding based on fuzzy pulse - coupled neural networks
CN108447052A (en) * 2018-03-15 2018-08-24 深圳市唯特视科技有限公司 A kind of symmetry brain tumor dividing method based on neural network
CN109558810B (en) * 2018-11-12 2023-01-20 北京工业大学 Target person identification method based on part segmentation and fusion
EP3660785A1 (en) * 2018-11-30 2020-06-03 Laralab UG Method and system for providing an at least 3-dimensional medical image segmentation of a structure of an internal organ
CN111814813A (en) * 2019-04-10 2020-10-23 北京市商汤科技开发有限公司 Neural network training and image classification method and device
CN110210560B (en) * 2019-05-31 2021-11-30 北京市商汤科技开发有限公司 Incremental training method, classification method and device, equipment and medium of classification network
CN110647985A (en) * 2019-08-02 2020-01-03 杭州电子科技大学 Crowdsourcing data labeling method based on artificial intelligence model library
CN111080639A (en) * 2019-12-30 2020-04-28 四川希氏异构医疗科技有限公司 Multi-scene digestive tract endoscope image identification method and system based on artificial intelligence
CN111724371B (en) * 2020-06-19 2023-05-23 联想(北京)有限公司 Data processing method and device and electronic equipment


Similar Documents

Publication Publication Date Title
CN112070777B (en) Method and device for organ-at-risk segmentation under multiple scenes based on incremental learning
CN109949255B (en) Image reconstruction method and device
CN111950453B (en) Random shape text recognition method based on selective attention mechanism
US20220188999A1 (en) Image enhancement method and apparatus
EP4075374A1 (en) Image processing method and apparatus, and image processing system
CN109712165B (en) Similar foreground image set segmentation method based on convolutional neural network
CN110706214B (en) Three-dimensional U-Net brain tumor segmentation method fusing condition randomness and residual error
CN109816666B (en) Symmetrical full convolution neural network model construction method, fundus image blood vessel segmentation device, computer equipment and storage medium
CN112614072B (en) Image restoration method and device, image restoration equipment and storage medium
CN112991493A (en) Gray level image coloring method based on VAE-GAN and mixed density network
Liu et al. Very lightweight photo retouching network with conditional sequential modulation
CN112508827B (en) Deep learning-based multi-scene fusion endangered organ segmentation method
US11138693B2 (en) Attention-driven image manipulation
CN116188509A (en) High-efficiency three-dimensional image segmentation method
Zhu et al. Learning dual transformation networks for image contrast enhancement
CN113052768A (en) Method for processing image, terminal and computer readable storage medium
Ko et al. Learning lightweight low-light enhancement network using pseudo well-exposed images
CN114693898B (en) Pancreas and tumor three-dimensional image segmentation system and method
CN115688234A (en) Building layout generation method, device and medium based on conditional convolution
Li et al. Zero-referenced low-light image enhancement with adaptive filter network
CN111898465B (en) Method and device for acquiring face recognition model
CN112686906B (en) Image segmentation method and system based on uniform distribution migration guidance
Yuan et al. A plug-and-play image enhancement model for end-to-end object detection in low-light condition
CN117094882B (en) Lossless digital embroidery image style migration method, system, equipment and medium
CN117275075B (en) Face shielding detection method, system, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant