WO2021218215A1 - Image detection method and related model training method, related apparatuses and device

Image detection method and related model training method, related apparatuses and device

Info

Publication number
WO2021218215A1
Authority
WO
WIPO (PCT)
Prior art keywords
detection model
image
organ
medical image
original
Application number
PCT/CN2020/140325
Other languages
English (en)
Chinese (zh)
Inventor
黄锐
胡志强
张少霆
李鸿升
Original Assignee
上海商汤智能科技有限公司
Application filed by 上海商汤智能科技有限公司
Priority to JP2021576932A (publication JP2022538137A)
Priority to KR1020217043241A (publication KR20220016213A)
Publication of WO2021218215A1

Classifications

    • G06T 7/0012 Biomedical image inspection (under G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 2207/10081 Computed x-ray tomography [CT] (image acquisition modality: tomographic images)
    • G06T 2207/10088 Magnetic resonance imaging [MRI] (image acquisition modality: tomographic images)
    • G06T 2207/20081 Training; Learning (special algorithmic details)
    • G06T 2207/20084 Artificial neural networks [ANN] (special algorithmic details)

Definitions

  • The present disclosure relates to the field of artificial intelligence technology, and in particular to an image detection method, a related model training method, and related devices and equipment.
  • Medical images such as CT (computed tomography) and MRI (magnetic resonance imaging) scans have important clinical significance.
  • Multi-organ detection on medical images such as CT and MRI determines the region corresponding to each organ on the image, so training an image detection model suitable for multi-organ detection has high application value.
  • However, model training relies on large labeled data sets, and obtaining a large number of high-quality multi-organ annotations is very time-consuming and labor-intensive; usually only experienced radiologists are able to annotate such data.
  • As a result, existing image detection models often suffer from low accuracy when performing multi-organ detection. In view of this, how to improve detection accuracy in multi-organ detection has become an urgent problem to be solved.
  • the present disclosure provides an image detection method, a training method of related models, and related devices and equipment.
  • In a first aspect, an embodiment of the present disclosure provides a method for training an image detection model, including: obtaining a sample medical image, where the actual region of at least one unlabeled organ is pseudo-labeled in the sample medical image; detecting the sample medical image using an original detection model to obtain a first detection result, where the first detection result includes a first predicted region of the unlabeled organ; and detecting the sample medical image using an image detection model to obtain a second detection result, where the second detection result includes a second predicted region of the unlabeled organ.
  • The network parameters of the image detection model are determined based on the network parameters of the original detection model, and the differences between the first predicted region and, respectively, the actual region and the second predicted region are used to adjust the network parameters of the original detection model.
  • Because the actual region of at least one unlabeled organ is pseudo-labeled in the sample medical image, there is no need to manually annotate multiple organs in the sample medical image.
  • The original detection model detects the sample medical image to obtain a first detection result containing the first predicted region of the unlabeled organ, and the image detection model detects the sample medical image to obtain a second detection result containing the second predicted region of the unlabeled organ; the differences between the first predicted region and both the actual region and the second predicted region are then used to adjust the network parameters of the original detection model.
  • In some embodiments, the original detection model includes a first original detection model and a second original detection model, and the image detection model includes a first image detection model corresponding to the first original detection model and a second image detection model corresponding to the second original detection model.
  • Detecting the sample medical image with the original detection model then means performing the detection step with both the first original detection model and the second original detection model to obtain their respective first detection results, and detecting the sample medical image with the image detection model means performing the detection step with both the first image detection model and the second image detection model to obtain their respective second detection results.
  • Adjusting the network parameters accordingly means: using the differences between the first predicted region of the first original detection model and, respectively, the actual region and the second predicted region of the second image detection model to adjust the network parameters of the first original detection model; and using the differences between the first predicted region of the second original detection model and, respectively, the actual region and the second predicted region of the first image detection model to adjust the network parameters of the second original detection model.
  • In this way, the first image detection model corresponding to the first original detection model supervises the training of the second original detection model, and the second image detection model corresponding to the second original detection model supervises the training of the first original detection model. This cross supervision constrains the cumulative error that the network parameters would otherwise accumulate from the pseudo-labeled actual regions over multiple training iterations, and improves the accuracy of the image detection model.
  • In some embodiments, adjusting the network parameters of the original detection model includes: using the difference between the first predicted region and the actual region to determine a first loss value of the original detection model; using the difference between the first predicted region and the second predicted region to determine a second loss value of the original detection model; and using the first loss value and the second loss value to adjust the network parameters of the original detection model.
  • Because the first loss value is determined from the difference between the first predicted region and the actual region, while the second loss value is determined from the difference between the first predicted region and the second predicted region, the loss of the original detection model is measured along two dimensions. This helps improve the accuracy of the loss calculation and, in turn, the accuracy of the network parameters of the original detection model and of the image detection model.
  • In some embodiments, determining the first loss value includes at least one of the following: processing the first predicted region and the actual region with a focal loss function to obtain a focal first loss value; and processing the first predicted region and the actual region with a set-similarity (Dice) loss function to obtain a set-similarity first loss value.
  • In some embodiments, determining the second loss value includes: processing the first predicted region and the second predicted region with a consistency loss function to obtain the second loss value.
  • In some embodiments, using the first loss value and the second loss value to adjust the network parameters of the original detection model includes: weighting the first loss value and the second loss value to obtain a weighted loss value, and using the weighted loss value to adjust the network parameters of the original detection model.
  • The focal loss increases the model's focus on difficult samples, which helps improve the accuracy of the image detection model; the set-similarity (Dice) loss makes the model fit the pseudo-labeled actual regions, which also helps improve accuracy; and the consistency loss improves the agreement between the predictions of the original detection model and the image detection model.
  • Weighting the first loss value and the second loss value to obtain the weighted loss value, and using the weighted loss value to adjust the network parameters of the original detection model, balances the importance of each loss term during training, thereby improving the accuracy of the network parameters and of the image detection model.
  • In some embodiments, the sample medical image also contains the actual region of a labeled organ, the first detection result also includes a first predicted region of the labeled organ, and the second detection result also includes a second predicted region of the labeled organ.
  • In this case, determining the first loss value includes using the differences between the first predicted regions and the actual regions of both the unlabeled organ and the labeled organ, while determining the second loss value includes using only the difference between the first predicted region of the unlabeled organ and its corresponding second predicted region.
  • Comprehensively considering the differences between the first predicted regions and the actual regions when determining the first loss value, while considering only the unlabeled organs when determining the second loss value, improves the robustness of the consistency constraint between the original detection model and the image detection model, and thus the accuracy of the image detection model.
  • In some embodiments, after adjusting the network parameters of the original detection model, the method further includes: updating the network parameters of the image detection model using the network parameters adjusted during the current training iteration and several previous training iterations.
  • This further constrains the cumulative error that the network parameters accumulate from the pseudo-labeled actual regions over multiple training iterations, and improves the accuracy of the image detection model.
  • In some embodiments, the update includes: computing the average of the network parameters adjusted by the original detection model during the current training iteration and several previous training iterations, and setting the network parameters of the image detection model to the average of the corresponding network parameters of the original detection model.
  • In some embodiments, acquiring the sample medical image includes: acquiring a medical image to be pseudo-labeled, in which at least one unlabeled organ exists; detecting the medical image to be pseudo-labeled with the single-organ detection model corresponding to each unlabeled organ to obtain the organ prediction region of each unlabeled organ; pseudo-labeling the organ prediction region of each unlabeled organ as the actual region of that organ; and using the pseudo-labeled medical image as the sample medical image.
  • Using single-organ detection models in this way avoids the workload of manually annotating multiple organs, which helps reduce the labor cost of training an image detection model for multi-organ detection and improves training efficiency.
  • In some embodiments, the medical image to be pseudo-labeled includes at least one labeled organ, and before the single-organ detection models are applied, the method further includes: using the medical image to be pseudo-labeled to train the single-organ detection model corresponding to the labeled organ.
  • Training the single-organ detection model on medical images that include at least one labeled organ improves the accuracy of the single-organ detection model, which helps improve the accuracy of the subsequent pseudo-labels and, in turn, of the subsequently trained image detection model.
  • In some embodiments, acquiring a medical image to be pseudo-labeled includes: acquiring a three-dimensional medical image, preprocessing the three-dimensional medical image, and cropping the preprocessed three-dimensional medical image to obtain at least one two-dimensional medical image to be pseudo-labeled.
  • Cropping the preprocessed three-dimensional medical image into two-dimensional medical images helps obtain medical images suitable for model training and improves the accuracy of subsequent image detection model training.
  • In some embodiments, the preprocessing includes at least one of the following: adjusting the voxel resolution of the three-dimensional medical image to a preset resolution; normalizing the voxel values of the three-dimensional medical image into a preset range using a preset window value; and adding Gaussian noise to at least part of the voxels of the three-dimensional medical image.
  • Adjusting the voxel resolution to a preset resolution facilitates subsequent model prediction; normalizing voxel values into a preset range with a preset window value helps the model extract accurate features; and adding Gaussian noise to at least part of the voxels achieves data augmentation, increases data diversity, and improves the accuracy of subsequent model training.
  • In a second aspect, embodiments of the present disclosure provide an image detection method, including: acquiring a medical image to be detected, where the medical image to be detected contains multiple organs; and detecting the medical image to be detected using an image detection model to obtain the predicted regions of the multiple organs, where the image detection model is trained by the training method of the first aspect.
  • In this way, detection accuracy in multi-organ detection can be improved.
  • In a third aspect, an embodiment of the present disclosure provides a training device for an image detection model, including an image acquisition module, a first detection module, a second detection module, and a parameter adjustment module.
  • The image acquisition module is configured to acquire a sample medical image in which the actual region of at least one unlabeled organ is pseudo-labeled; the first detection module is configured to detect the sample medical image with the original detection model to obtain a first detection result including the first predicted region of the unlabeled organ; the second detection module is configured to detect the sample medical image with the image detection model, whose network parameters are determined based on those of the original detection model, to obtain a second detection result including the second predicted region of the unlabeled organ; and the parameter adjustment module is configured to adjust the network parameters of the original detection model using the differences between the first predicted region and, respectively, the actual region and the second predicted region.
  • In a fourth aspect, an embodiment of the present disclosure provides an image detection device, including an image acquisition module and an image detection module. The image acquisition module is configured to acquire a medical image to be detected, where the medical image to be detected contains multiple organs; the image detection module is configured to detect the medical image to be detected using an image detection model to obtain the predicted regions of the multiple organs, where the image detection model is trained by the above training device for an image detection model.
  • In a fifth aspect, embodiments of the present disclosure provide an electronic device including a memory and a processor coupled to each other, where the processor is configured to execute program instructions stored in the memory to implement the training method of the first aspect or the image detection method of the second aspect.
  • In a sixth aspect, embodiments of the present disclosure provide a computer-readable storage medium on which program instructions are stored; when the program instructions are executed by a processor, the training method of the first aspect or the image detection method of the second aspect is implemented.
  • Embodiments of the present disclosure also provide a computer program including computer-readable code; when the code runs in an electronic device, the processor in the electronic device executes the training method of the above first aspect or the image detection method of the second aspect.
  • In the above solution, a sample medical image in which the actual region of at least one unlabeled organ is pseudo-labeled is acquired, so there is no need to manually annotate multiple organs in the sample medical image; the original detection model detects the sample medical image, and the differences between the first predicted region and, respectively, the actual region and the second predicted region are used to adjust the network parameters of the original detection model.
  • Because the network parameters of the image detection model are determined based on the network parameters of the original detection model, the image detection model can supervise the training of the original detection model, constraining the cumulative error that the network parameters accumulate from the pseudo-labeled actual regions over multiple training iterations and improving the accuracy of the image detection model; the image detection model can then accurately supervise the training of the original detection model, and the original detection model can accurately adjust its network parameters during training. Detection accuracy in multi-organ detection is thereby improved.
  • FIG. 1 is a schematic flowchart of an embodiment of a training method for an image detection model provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of an embodiment of step S11 in FIG. 1;
  • FIG. 3 is a schematic flowchart of another embodiment of a training method for an image detection model provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of an embodiment of the training process of an image detection model provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic flowchart of an embodiment of an image detection method provided by an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of the framework of an embodiment of an image detection model training apparatus provided by an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of a framework of an embodiment of an image detection device provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a framework of an embodiment of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic framework diagram of an embodiment of a computer-readable storage medium provided by an embodiment of the present disclosure.
  • The terms "system" and "network" in this document are often used interchangeably.
  • The term "and/or" in this document merely describes an association relationship between associated objects, indicating that three relationships are possible; for example, A and/or B can mean: A exists alone, A and B exist at the same time, or B exists alone.
  • The character "/" in this document generally indicates that the associated objects before and after it are in an "or" relationship.
  • "Multiple" in this document means two or more.
  • Please refer to FIG. 1, which is a schematic flowchart of an embodiment of a method for training an image detection model provided by an embodiment of the present disclosure. The method may include the following steps:
  • Step S11: Obtain a sample medical image, where the actual region of at least one unlabeled organ is pseudo-labeled in the sample medical image.
  • The sample medical images may include CT images and MR images; they can be obtained by scanning the abdomen, chest, head, etc., and can be set according to the actual application, which is not limited here.
  • When the abdomen is scanned, the organs in the sample medical image may include the kidney, spleen, liver, pancreas, etc.; when the chest is scanned, they may include the heart, lung lobes, thyroid, etc.; and when the head is scanned, they may include the brain stem, cerebellum, diencephalon, and telencephalon.
  • In some embodiments, the actual region of the unlabeled organ may be detected using a single-organ detection model corresponding to that unlabeled organ.
  • For example, when the unlabeled organs include at least one of the kidney, spleen, liver, and pancreas, the single-organ detection model corresponding to the kidney can detect the sample medical image to obtain the organ prediction region of the kidney, the single-organ detection model corresponding to the spleen can detect the sample medical image to obtain the organ prediction region of the spleen, and likewise for the liver and the pancreas, so that the organ prediction regions of the kidney, spleen, liver, and pancreas are pseudo-labeled in the sample medical image.
  • Here, pseudo-labeling refers to taking the organ prediction regions of unlabeled organs detected by the single-organ detection models as their actual regions.
  • When the unlabeled organs are other organs, the same reasoning applies, and the examples are not enumerated one by one here.
  • The single-organ detection model for an unlabeled organ is trained using a single-organ data set in which the actual region of that organ is annotated.
  • For example, the single-organ detection model corresponding to the kidney is trained on a kidney data set annotated with the actual region of the kidney, and the single-organ detection model corresponding to the spleen is trained on a spleen data set annotated with the actual region of the spleen.
  • Step S12: Use the original detection model to detect the sample medical image to obtain a first detection result, where the first detection result includes a first predicted region of an unlabeled organ.
  • The original detection model may be any one of Mask R-CNN (Mask Region-based Convolutional Neural Network), FCN (Fully Convolutional Network), or PSP-Net (Pyramid Scene Parsing Network).
  • The original detection model may also be SegNet, U-Net, etc., and can be chosen according to the actual situation, which is not limited here.
  • Detecting the sample medical image with the original detection model yields the first detection result containing the first predicted region of each unlabeled organ.
  • For example, when the sample medical image is obtained by scanning the abdomen and the unlabeled organs include the kidney, spleen, and pancreas, detecting the sample medical image with the original detection model yields the first predicted region of the kidney, the first predicted region of the spleen, and the first predicted region of the pancreas; other scenarios can be deduced by analogy and are not enumerated one by one here.
  • Step S13: Use the image detection model to detect the sample medical image to obtain a second detection result, where the second detection result includes a second predicted region of the unlabeled organ.
  • The network structure of the original detection model and that of its corresponding image detection model may be the same.
  • For example, when the original detection model is Mask R-CNN, the corresponding image detection model may also be Mask R-CNN; when the original detection model is FCN, the corresponding image detection model may also be FCN; when the original detection model is PSP-Net, the corresponding image detection model may also be PSP-Net; and when the original detection model is another network, the analogy applies, with no examples given one by one here.
  • The network parameters of the image detection model may be determined based on the network parameters of the original detection model; in some embodiments, they may be obtained from the network parameters adjusted by the original detection model over multiple training iterations.
  • For example, during the k-th training iteration, the network parameters of the image detection model can be obtained from the network parameters adjusted by the original detection model from the (k-n)-th to the (k-1)-th training iteration; or, during the (k+1)-th training iteration, from the network parameters adjusted by the original detection model from the (k+1-n)-th to the k-th training iteration, and so on.
  • The number n of training iterations involved can be set according to actual conditions, for example 5, 10, or 15, which is not limited here.
  • Detecting the sample medical image with the image detection model yields the second detection result containing the second predicted region of each unlabeled organ.
  • For example, when the unlabeled organs include the kidney, spleen, and pancreas, detecting the sample medical image with the image detection model yields the second predicted region of the kidney, the second predicted region of the spleen, and the second predicted region of the pancreas; other scenarios can be deduced by analogy and are not enumerated one by one here.
  • Steps S12 and S13 may be performed in sequence, for example step S12 first and then step S13, or step S13 first and then step S12; they may also be performed at the same time, as set according to the actual application, which is not limited here.
  • Step S14: Use the differences between the first predicted region and, respectively, the actual region and the second predicted region to adjust the network parameters of the original detection model.
  • In some embodiments, the difference between the first predicted region and the actual region can be used to determine the first loss value of the original detection model.
  • In order to increase the model's focus on difficult samples, the focal loss function can be used to process the first predicted region and the actual region to obtain a focal first loss value; and, in order to make the model fit the pseudo-labeled actual regions, the Dice loss function can be used to process the first predicted region and the actual region to obtain a Dice first loss value.
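  • By way of illustration only (not part of the original disclosure), a minimal PyTorch sketch of the focal and Dice losses described above; the function names, gamma=2, and the epsilon values are assumptions, and the inputs are assumed to be per-organ probability maps with 0/1 masks:

```python
import torch

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    # pred: predicted probability map in [0, 1]; target: 0/1 mask of the (pseudo-)actual region
    pred = pred.clamp(eps, 1 - eps)
    # the modulating factors (1 - p)^gamma and p^gamma down-weight easy voxels,
    # increasing the focus on difficult samples
    loss = -target * (1 - pred) ** gamma * pred.log() \
           - (1 - target) * pred ** gamma * (1 - pred).log()
    return loss.mean()

def dice_loss(pred, target, eps=1e-6):
    # set-similarity (Dice) loss: drives the prediction to overlap the pseudo-labeled region
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```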
  • In some embodiments, the difference between the first predicted region and the second predicted region can also be used to determine the second loss value of the original detection model; for example, a consistency loss function can be used to process the first predicted region and the second predicted region to obtain the second loss value.
  • The consistency loss function may be a cross-entropy loss function, which can be set according to the actual application and is not limited here.
  • The first loss value and the second loss value can then be used to adjust the network parameters of the original detection model.
  • In some embodiments, the first loss value and the second loss value can be weighted to obtain a weighted loss value, and the weighted loss value can be used to adjust the network parameters of the original detection model.
  • The weights of the first loss value and the second loss value can be set according to the actual situation; for example, both can be set to 0.5, or the weight of the first loss value can be set to 0.6 and that of the second loss value to 0.4, which is not limited here.
  • When the first loss value includes both the focal first loss value and the Dice first loss value, the focal first loss value, the Dice first loss value, and the second loss value can be weighted together to obtain the weighted loss value used to adjust the network parameters of the original detection model.
  • In some embodiments, Stochastic Gradient Descent (SGD), Batch Gradient Descent (BGD), Mini-Batch Gradient Descent (MBGD), etc. can be used with the weighted loss value to adjust the network parameters of the original detection model.
  • Batch gradient descent uses all samples for each parameter update; stochastic gradient descent uses one sample per update; and mini-batch gradient descent uses a batch of samples per update, which will not be elaborated here.
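  • Continuing the illustration (a sketch under stated assumptions, not the patent's reference implementation), one possible shape of the weighted update, reusing the focal_loss and dice_loss sketched above; the tiny stand-in networks and the 0.5/0.5 weights are example choices:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in networks (illustrative only; the disclosure allows e.g. FCN, PSP-Net, Mask R-CNN)
original_model = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid())
image_model = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid())
optimizer = torch.optim.SGD(original_model.parameters(), lr=0.01)

def train_step(sample_image, pseudo_label):
    first_pred = original_model(sample_image)        # first detection result
    with torch.no_grad():
        second_pred = image_model(sample_image)      # second detection result
    # first loss value: first predicted region vs. (pseudo-)actual region
    loss_first = focal_loss(first_pred, pseudo_label) + dice_loss(first_pred, pseudo_label)
    # second loss value: consistency between first and second predicted regions
    loss_second = F.binary_cross_entropy(first_pred, second_pred)
    loss = 0.5 * loss_first + 0.5 * loss_second      # weighted loss value
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```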
  • In some embodiments, the sample medical image may also include the actual region of a labeled organ; correspondingly, the first detection result may also include the first predicted region of the labeled organ, and the second detection result may also include the second predicted region of the labeled organ.
  • For example, when the unlabeled organs include the kidney, spleen, and pancreas and the labeled organ includes the liver, detecting the sample medical image with the original detection model yields the first predicted regions of the kidney, spleen, and pancreas (unlabeled) and of the liver (labeled), and detecting the sample medical image with the corresponding image detection model yields the second predicted regions of the kidney, spleen, and pancreas (unlabeled) and of the liver (labeled).
  • In this case, the differences between the first predicted regions and the actual regions of both the unlabeled and the labeled organs can be used to determine the first loss value of the original detection model, while only the differences between the first predicted regions of the unlabeled organs and their corresponding second predicted regions are used to determine the second loss value.
  • Continuing the example, with the kidney, spleen, and pancreas unlabeled and the liver labeled, the first loss value of the original detection model is determined from the difference between the first predicted region of the kidney and its pseudo-labeled actual region, the difference between the first predicted region of the spleen and its pseudo-labeled actual region, the difference between the first predicted region of the pancreas and its pseudo-labeled actual region, and the difference between the first predicted region of the liver and its genuinely labeled actual region.
  • The first loss value may include at least one of the focal first loss value and the Dice first loss value, as in the previous steps, which will not be repeated here.
  • The second loss value of the original detection model is determined from the difference between the first and second predicted regions of the kidney, the difference between the first and second predicted regions of the spleen, and the difference between the first and second predicted regions of the pancreas; it can be calculated with the cross-entropy loss function as in the foregoing steps.
  • Thus, the differences between the first predicted regions and the actual regions are considered comprehensively when determining the first loss value, while only the unlabeled organs are considered when determining the second loss value; this improves the robustness of the consistency constraint between the original detection model and the image detection model, and in turn the accuracy of the image detection model.
  • In some embodiments, the network parameters of the image detection model need not be updated at every training iteration; instead, they can be updated after every preset number of iterations (for example, every 2 or 3 iterations) using the network parameters adjusted during the current and several previous iterations, which is not limited here.
  • For example, the network parameters of the image detection model may be left unchanged during the k-th iteration and updated using the network parameters adjusted by the original detection model from the (k+i-n)-th to the (k+i)-th iteration, where i can be set to an integer not less than 1 according to the actual situation, for example 1, 2, or 3, which is not limited here.
  • In some embodiments, when updating the network parameters of the image detection model, the average of the network parameters adjusted by the original detection model during the current and several previous training iterations can be computed, and the network parameters of the image detection model can be set to the average of the corresponding network parameters of the original detection model.
  • Here, the average of a network parameter refers to the average of the values taken by that same parameter (for example, a particular weight or bias of a particular neuron) after adjustment across the multiple training iterations; the average of each weight (or bias) of each neuron is computed and used to update the corresponding weight (or bias) in the image detection model.
  • For example, if the current iteration is the k-th, the average of the network parameters adjusted by the original detection model during this iteration and the previous n-1 iterations can be computed, where n can be set according to the actual application, for example 5, 10, or 15, which is not limited here.
  • The network parameters of the image detection model are then updated to the average of the parameters adjusted from the (k-n+1)-th to the k-th training iteration, which helps quickly constrain the error accumulated over multiple training iterations and improves the accuracy of the image detection model.
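  • A minimal sketch of this windowed parameter averaging (an illustration, not the patent's reference implementation), assuming PyTorch models whose state dicts contain only floating-point tensors and a deque of per-iteration snapshots:

```python
from collections import deque
import torch

n = 5                                   # window size, e.g. 5, 10, or 15
history = deque(maxlen=n)               # snapshots of the original model's parameters

def record_iteration(original_model):
    # call once per training iteration, after the parameter adjustment
    history.append({k: v.detach().clone() for k, v in original_model.state_dict().items()})

def update_image_model(image_model):
    # set each parameter of the image detection model to the average of the
    # corresponding parameter over the last n training iterations
    avg = {k: torch.stack([snap[k] for snap in history]).mean(dim=0)
           for k in history[0]}
    image_model.load_state_dict(avg)
```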
  • In some embodiments, a preset training end condition can also be set; if it is not met, step S12 and the subsequent steps are re-executed to continue adjusting the network parameters of the original detection model.
  • The preset training end condition may include either of the following: the current number of training iterations reaches a preset threshold (e.g., 500 or 1000 iterations), or the loss value of the original detection model falls below a preset loss threshold, which is not limited here.
  • After training, the image detection model can detect a medical image to be tested and directly obtain the regions corresponding to multiple organs in it, eliminating the need to run separate single-organ detection operations on the image and thus reducing the amount of detection computation.
  • In the above solution, a sample medical image in which the actual region of at least one unlabeled organ is pseudo-labeled is acquired, so there is no need to manually annotate multiple organs in the sample medical image; the original detection model detects the sample medical image, and the differences between the first predicted region and, respectively, the actual region and the second predicted region are used to adjust the network parameters of the original detection model.
  • Because the network parameters of the image detection model are determined from the network parameters of the original detection model, the image detection model can supervise the training of the original detection model, constraining the cumulative error that the network parameters accumulate from the pseudo-labeled actual regions over multiple training iterations and improving the accuracy of the image detection model; the image detection model can then accurately supervise the training of the original detection model, and the original detection model can accurately adjust its network parameters during training, so that detection accuracy is improved in multi-organ detection.
  • Please refer to FIG. 2, which is a schematic flowchart of an embodiment of step S11 in FIG. 1, i.e., of obtaining a sample medical image, and includes the following steps:
  • Step S111: Obtain a medical image to be pseudo-labeled, where at least one unlabeled organ exists in the medical image to be pseudo-labeled.
  • The medical image to be pseudo-labeled can be obtained by scanning the abdomen, in which case the unlabeled organs may include the kidney, spleen, pancreas, etc.; it can also be obtained by scanning other parts such as the chest or head, as in the relevant steps of the foregoing embodiment, which is not limited here.
  • In some embodiments, the acquired original medical image can be a three-dimensional medical image, for example a three-dimensional CT image or a three-dimensional MR image, which is not limited here; the three-dimensional medical image can be preprocessed and the preprocessed three-dimensional medical image cropped to obtain at least one medical image to be pseudo-labeled.
  • The cropping may be a center crop of the preprocessed three-dimensional medical image, which is not limited here; cropping can be performed along planes of the three-dimensional medical image, slicing along the dimension perpendicular to those planes, to obtain two-dimensional medical images to be pseudo-labeled.
  • The size of the medical image to be pseudo-labeled can be set according to the actual situation, for example 352*352, which is not limited here.
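  • By way of illustration, a minimal NumPy sketch of the slicing and center cropping described above, assuming a (depth, height, width) volume whose in-plane size is at least 352*352; the layout and function name are assumptions:

```python
import numpy as np

def center_crop_slices(volume, size=352):
    # volume: preprocessed 3-D medical image, shape (depth, height, width)
    d, h, w = volume.shape
    top, left = (h - size) // 2, (w - size) // 2
    # slice along the depth axis; each plane becomes a 2-D image to be pseudo-labeled
    return [volume[z, top:top + size, left:left + size] for z in range(d)]
```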
  • In some embodiments, the preprocessing may include adjusting the voxel resolution of the three-dimensional medical image to a preset resolution; a voxel is the smallest unit into which the three-dimensional medical image is divided in three-dimensional space.
  • The preset resolution can be 1*1*3 mm, or can be set to another resolution according to the actual situation, for example 1*1*4 mm or 2*2*3 mm, which is not limited here; adjusting the voxel resolution to a preset resolution facilitates subsequent model prediction.
  • In some embodiments, the preprocessing may also include normalizing the voxel values of the three-dimensional medical image into a preset range using a preset window value; the meaning of the voxel value depends on the type of three-dimensional medical image. For example, for a three-dimensional CT image, the voxel value can be an HU (Hounsfield unit) value.
  • The preset window value can be set according to the body part shown in the three-dimensional medical image; for an abdominal image, it can be set to the range -125 to 275, and other parts can be set according to the actual situation, with no examples given one by one here.
  • The preset range can be set according to the actual application, for example 0 to 1. Still taking a three-dimensional CT image with a window of -125 to 275 as an example, voxels with values less than or equal to -125 can be reset to 0, voxels with values greater than or equal to 275 can be reset to 1, and voxels with values between -125 and 275 can be mapped linearly to values between 0 and 1. This helps enhance the contrast between different organs in the image and improves the accuracy of feature extraction by the model.
  • In some embodiments, the preprocessing may also include adding Gaussian noise to at least part of the voxels of the three-dimensional medical image; the affected voxels can be set according to the actual application, for example 1/3, 1/2, or all of the voxels of the three-dimensional medical image, which is not limited here.
  • By adding Gaussian noise to at least part of the voxels, two-dimensional medical images to be pseudo-labeled can subsequently be cropped from both the noisy and the noise-free three-dimensional medical image, which helps achieve data augmentation, increases data diversity, and improves the accuracy of subsequent model training.
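  • The three preprocessing operations can be sketched as follows (illustrative NumPy/SciPy code; the target spacing of 1*1*3 mm, the -125 to 275 window, and the noise fraction and scale are assumed example values taken from or beyond the text):

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess(volume, spacing, target_spacing=(1.0, 1.0, 3.0),
               window=(-125.0, 275.0), noise_fraction=0.5, noise_std=0.01):
    # 1) resample the voxel resolution to the preset resolution
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    volume = zoom(volume.astype(np.float32), factors, order=1)
    # 2) normalize voxel values (e.g. HU) into [0, 1] using the preset window
    lo, hi = window
    volume = np.clip(volume, lo, hi)
    volume = (volume - lo) / (hi - lo)
    # 3) add Gaussian noise to part of the voxels (data augmentation)
    mask = np.random.rand(*volume.shape) < noise_fraction
    volume = volume + mask * np.random.normal(0.0, noise_std, volume.shape)
    return np.clip(volume, 0.0, 1.0)
```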
  • Step S112: Use the single-organ detection model corresponding to each unlabeled organ to detect the medical image to be pseudo-labeled, obtaining the organ prediction region of each unlabeled organ.
  • In some embodiments, the single-organ detection model corresponding to each unlabeled organ may be trained on a single-organ data set annotated with that organ; for example, the single-organ detection model corresponding to the kidney may be trained on a single-organ data set annotated with kidneys, the one corresponding to the spleen on a data set annotated with spleens, and so on for the other organs, with no examples given one by one here.
  • In some embodiments, the medical image to be pseudo-labeled may also include at least one labeled organ, and the medical images to be pseudo-labeled that include the labeled organ may be used to train the single-organ detection model corresponding to that labeled organ, yielding the corresponding single-organ detection model. For example, if the medical image to be pseudo-labeled includes a labeled liver, the medical images to be pseudo-labeled that include the labeled liver can be used to train the single-organ detection model corresponding to the liver; other cases can be deduced by analogy and are not enumerated one by one here.
  • The single-organ detection model can be any of Mask R-CNN (Mask Region-based Convolutional Neural Network), FCN (Fully Convolutional Network), or PSP-Net (Pyramid Scene Parsing Network), and may also be SegNet, U-Net, etc., set according to the actual conditions, which is not limited here.
  • By detecting the medical image to be pseudo-labeled, the organ prediction region of each unlabeled organ can be obtained.
  • Taking a medical image obtained by scanning the abdomen as an example, with the kidney, spleen, and pancreas unlabeled: detecting the medical image to be pseudo-labeled with the single-organ detection model corresponding to the kidney yields the organ prediction region of the kidney, and likewise for the spleen and the pancreas.
  • The detection steps of the single-organ detection models corresponding to the kidney, spleen, and pancreas may be performed in parallel, with the resulting single-organ prediction regions finally pseudo-labeled onto the medical image together; or the single-organ detection models corresponding to the unlabeled organs may perform their detection steps in sequence, pseudo-labeling each organ prediction region as it is obtained, so that the final medical image to be pseudo-labeled includes the single-organ prediction regions of the kidney, spleen, and pancreas. This can be set according to the actual situation and is not limited here.
  • Step S113: Pseudo-label the organ prediction region of each unlabeled organ as the actual region of that unlabeled organ, and use the pseudo-labeled medical image as the sample medical image.
  • In the above solution, the organ prediction region of each unlabeled organ is pseudo-labeled as its actual region and the pseudo-labeled medical image is used as the sample medical image, so the single-organ detection models eliminate the workload of manual multi-organ annotation, which helps reduce the labor cost of training an image detection model for multi-organ detection and improves training efficiency.
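  • A possible sketch of this pseudo-labeling step, assuming one trained single-organ PyTorch model per unlabeled organ, probability-map outputs, and an assumed 0.5 binarization threshold (none of these details are fixed by the disclosure):

```python
import torch

def pseudo_label(image, single_organ_models, threshold=0.5):
    # image: medical image to be pseudo-labeled, shape (1, 1, H, W)
    # single_organ_models: dict mapping organ name -> trained single-organ detection model
    label = {}
    with torch.no_grad():
        for organ, model in single_organ_models.items():
            prob = model(image)                       # organ prediction region (probabilities)
            # take the predicted region as the organ's "actual" region
            label[organ] = (prob > threshold).float()
    return label  # pseudo-labels used as the actual regions of the unlabeled organs
```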
  • Please refer to FIG. 3, which is a schematic flowchart of another embodiment of a training method for an image detection model provided by an embodiment of the present disclosure. The method may include the following steps:
  • Step S31: Obtain a sample medical image, where the actual region of at least one unlabeled organ is pseudo-labeled in the sample medical image.
  • For this step, refer to the related steps in the foregoing embodiment.
  • Step S32: Use the first original detection model and the second original detection model, respectively, to perform the step of detecting the sample medical image to obtain the first detection results.
  • In this embodiment, the original detection model includes a first original detection model and a second original detection model.
  • Each of the first and second original detection models can be any of Mask R-CNN (Mask Region-based Convolutional Neural Network), FCN (Fully Convolutional Network), or PSP-Net (Pyramid Scene Parsing Network), and may also be SegNet, U-Net, etc., set according to the actual situation, which is not limited here.
  • The first detection result produced by the first original detection model may include the first predicted region of the unlabeled organ, or may include both the first predicted region of the unlabeled organ and the first predicted region of the labeled organ; the same holds for the first detection result produced by the second original detection model.
  • Please refer to FIG. 4, which is a schematic diagram of an embodiment of the training process of the image detection model. The first original detection model is denoted net1 and the second original detection model is denoted net2.
  • The first original detection model net1 detects the sample medical image to obtain the first detection result corresponding to net1, and the second original detection model net2 detects the sample medical image to obtain the first detection result corresponding to net2.
  • Step S33: Use the first image detection model and the second image detection model, respectively, to perform the step of detecting the sample medical image to obtain the second detection results.
  • In this embodiment, the image detection model includes a first image detection model corresponding to the first original detection model and a second image detection model corresponding to the second original detection model; for their network structures and network parameters, refer to the relevant steps in the foregoing embodiment, which will not be repeated here.
  • The second detection result produced by the first image detection model may include the second predicted region of the unlabeled organ, or may include both the second predicted region of the unlabeled organ and the second predicted region of the labeled organ; the same holds for the second detection result produced by the second image detection model.
  • As shown in FIG. 4, the first image detection model corresponding to the first original detection model net1 is denoted EMA net1, and the second image detection model corresponding to the second original detection model net2 is denoted EMA net2.
  • The first image detection model EMA net1 detects the sample medical image to obtain the second detection result corresponding to EMA net1, and the second image detection model EMA net2 detects the sample medical image to obtain the second detection result corresponding to EMA net2.
  • Steps S32 and S33 may be performed in sequence, for example step S32 first and then step S33, or step S33 first and then step S32; they may also be performed at the same time, as set according to the actual application, which is not limited here.
  • Step S34: Use the differences between the first predicted region of the first original detection model and, respectively, the actual region and the second predicted region of the second image detection model to adjust the network parameters of the first original detection model.
  • In some embodiments, the difference between the first predicted region of the first original detection model and the pseudo-labeled actual region can be used to determine the first loss value of the first original detection model, and the difference between the first predicted region of the first original detection model and the second predicted region of the second image detection model can be used to determine the second loss value of the first original detection model, so that the first and second loss values are used to adjust the network parameters of the first original detection model.
  • For the calculation of the first and second loss values, refer to the relevant steps of the foregoing embodiment, which will not be repeated here.
  • In the calculation of the second loss value, only the first and second predicted regions of the unlabeled organs may be considered, so as to improve the robustness of the consistency constraint between the first original detection model and the second image detection model, and in turn the accuracy of the image detection model.
  • Step S35: use the differences between the first prediction area of the second original detection model and, respectively, the actual area and the second prediction area of the first image detection model to adjust the network parameters of the second original detection model.
  • the difference between the first prediction area of the second original detection model and the pseudo-labeled actual area can be used to determine the first loss value of the second original detection model, and the difference between the first prediction area of the second original detection model and the second prediction area of the first image detection model can be used to determine the second loss value of the second original detection model, so that the first loss value and the second loss value are used to adjust the network parameters of the second original detection model.
  • for the calculation of the first loss value and the second loss value, reference may be made to the relevant steps in the foregoing embodiments, which will not be repeated here; similarly, in the process of calculating the second loss value, only the first prediction area and the second prediction area of the unlabeled organ may be used, so as to improve the robustness of the consistency constraint between the second original detection model and the first image detection model, which can in turn improve the accuracy of the image detection model.
  • steps S34 and S35 may be performed in a sequential order, for example, step S34 is performed first and then step S35, or step S35 is performed first and then step S34.
  • the above step S34 and step S35 can also be performed at the same time, which can be set according to the actual application and is not limited here; a sketch of this cross-supervised update follows.
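  • Continuing the sketch above, the cross pairing of steps S34 and S35 (net1 supervised by the pseudo labels and by EMAnet2; net2 by the pseudo labels and by EMAnet1) might look as follows; the cross-entropy and mean-squared-error terms are simple stand-ins for the first and second loss values detailed further below:

```python
import torch.nn.functional as F

opt1 = torch.optim.SGD(net1.parameters(), lr=1e-3)
opt2 = torch.optim.SGD(net2.parameters(), lr=1e-3)

# Pseudo-labeled actual regions: one organ class index per pixel.
pseudo = torch.randint(0, 5, (2, 64, 64))

# Step S34: net1's first loss vs. the pseudo labels, second loss vs. EMAnet2.
loss1 = F.cross_entropy(out1, pseudo) \
        + F.mse_loss(out1.softmax(1), ema_out2.softmax(1))
# Step S35: net2's first loss vs. the pseudo labels, second loss vs. EMAnet1.
loss2 = F.cross_entropy(out2, pseudo) \
        + F.mse_loss(out2.softmax(1), ema_out1.softmax(1))

opt1.zero_grad(); loss1.backward(); opt1.step()
opt2.zero_grad(); loss2.backward(); opt2.step()
```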
  • Step S36: update the network parameters of the first image detection model by using the network parameters adjusted by the first original detection model during the current training and several previous trainings.
  • the average value of the network parameters adjusted by the first original detection model during the current training and several previous trainings can be counted, and the network parameters of the first image detection model can be updated to the corresponding average value of the network parameters of the first original detection model.
  • Step S37: update the network parameters of the second image detection model by using the network parameters adjusted by the second original detection model during the current training and several previous trainings.
  • the average value of the network parameters adjusted by the second original detection model during the current training and several previous trainings can be counted, and the network parameters of the second image detection model can be updated to the corresponding average value of the network parameters of the second original detection model.
  • steps S36 and S37 can be performed in a sequential order, for example, step S36 is performed first and then step S37, or step S37 is performed first and then step S36.
  • the above step S36 and step S37 can also be performed at the same time, which can be set according to the actual application and is not limited here; a sketch of this parameter-averaging update follows.
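  • The averaging described above can be maintained in a streaming way with an exponential moving average, which is also what the EMA naming suggests; a sketch under that assumption (the decay value is illustrative):

```python
@torch.no_grad()
def update_ema(ema_model, model, decay=0.99):
    # Move each image-detection-model parameter toward the current
    # original-model parameter, so it tracks a running average over
    # the current and previous training iterations.
    for e, p in zip(ema_model.parameters(), model.parameters()):
        e.mul_(decay).add_(p, alpha=1.0 - decay)

# Steps S36 and S37, in either order or concurrently:
update_ema(ema_net1, net1)
update_ema(ema_net2, net2)
```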
  • if the preset training end condition is not met, the above step S32 and subsequent steps can be re-executed to continue to adjust the network parameters of the first original detection model and the second original detection model, and to update the network parameters of the first image detection model corresponding to the first original detection model and of the second image detection model corresponding to the second original detection model.
  • the preset training end conditions may include any one of the following: the current number of training iterations reaches a preset threshold (e.g., 500 or 1,000 iterations), or the loss values of the first original detection model and the second original detection model are less than a preset loss threshold; this is not limited here. A sketch of the resulting training loop follows.
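  • Pulling the sketches above into one loop with the two end conditions (the thresholds are illustrative):

```python
max_iters, loss_eps = 1000, 1e-3  # illustrative end-condition thresholds

for it in range(max_iters):
    out1, out2 = net1(sample), net2(sample)                     # step S32
    with torch.no_grad():                                       # step S33
        ema_out1, ema_out2 = ema_net1(sample), ema_net2(sample)
    loss1 = F.cross_entropy(out1, pseudo) \
            + F.mse_loss(out1.softmax(1), ema_out2.softmax(1))  # step S34
    loss2 = F.cross_entropy(out2, pseudo) \
            + F.mse_loss(out2.softmax(1), ema_out1.softmax(1))  # step S35
    opt1.zero_grad(); loss1.backward(); opt1.step()
    opt2.zero_grad(); loss2.backward(); opt2.step()
    update_ema(ema_net1, net1)                                  # step S36
    update_ema(ema_net2, net2)                                  # step S37
    if loss1.item() < loss_eps and loss2.item() < loss_eps:
        break  # preset training end condition reached
```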
  • after training ends, either one of the first image detection model and the second image detection model can be used as the network model for subsequent image detection, so that the regions corresponding to the multiple organs in a medical image to be detected can be obtained directly, eliminating the need to detect the medical image to be detected separately with multiple single-organ detection models and thus reducing the amount of detection computation.
  • In the above solution, the original detection model is set to include the first original detection model and the second original detection model, and the image detection model is set to include the first image detection model corresponding to the first original detection model and the second image detection model corresponding to the second original detection model; the first original detection model and the second original detection model are respectively used to perform the step of detecting the sample medical image to obtain the first detection result, and the first image detection model and the second image detection model are respectively used to perform the step of detecting the sample medical image to obtain the second detection result; the differences between the first prediction area of the first original detection model and, respectively, the actual area and the second prediction area of the second image detection model are then used to adjust the network parameters of the first original detection model, and the differences between the first prediction area of the second original detection model and, respectively, the actual area and the second prediction area of the first image detection model are used to adjust the network parameters of the second original detection model. The first image detection model corresponding to the first original detection model can thus be used to supervise the training of the second original detection model, and the second image detection model corresponding to the second original detection model can be used to supervise the training of the first original detection model, which can further constrain the cumulative error of the network parameters caused by the pseudo-labeled actual regions over multiple training iterations and improve the accuracy of the image detection model.
  • FIG. 5 is a schematic flowchart of an embodiment of an image detection method provided by an embodiment of the present disclosure; the method may include the following steps:
  • Step S51: obtain a medical image to be detected, where the medical image to be detected contains multiple organs.
  • the medical images to be detected may include CT images and MR images, which are not limited here.
  • the medical image to be detected can be obtained by scanning the abdomen, chest, head, etc., and can be set according to actual application conditions, which is not limited here.
  • for example, when the abdomen is scanned, the organs in the medical image to be detected may include the kidney, spleen, liver, pancreas, etc.; when the chest is scanned, the organs in the medical image to be detected may include the heart, lung lobes, thyroid, etc.; and when the head is scanned, the organs in the medical image to be detected may include the brain stem, cerebellum, diencephalon, and telencephalon.
  • Step S52: use the image detection model to detect the medical image to be detected to obtain the predicted regions of the multiple organs.
  • the image detection model is obtained by training with the steps in any of the above-mentioned image detection model training method embodiments; reference may be made to the relevant steps in the foregoing embodiments, which will not be repeated here.
  • by using the image detection model to detect the medical image to be detected, the predicted regions of the multiple organs can be obtained directly, avoiding the operation of detecting the medical image to be detected separately with multiple single-organ detection models and reducing the amount of detection computation.
  • In this way, the image detection model trained with the steps of any of the above-mentioned image detection model training method embodiments detects the medical image to be detected to obtain the predicted regions of multiple organs, which can improve the detection accuracy in the process of multi-organ detection. An inference sketch follows.
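  • A sketch of steps S51 and S52, reusing one trained image detection model from the training sketches above (either EMAnet1 or EMAnet2 would do); taking the argmax over class probabilities yields a per-pixel organ map in a single pass:

```python
@torch.no_grad()
def detect_organs(model, image):
    # One forward pass yields the predicted regions of all organs at
    # once, instead of one pass per single-organ detection model.
    probs = model(image).softmax(1)  # (B, num_classes, H, W)
    return probs.argmax(1)           # per-pixel organ labels, (B, H, W)

to_detect = torch.randn(1, 1, 64, 64)  # medical image to be detected
organ_map = detect_organs(ema_net1, to_detect)
```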
  • FIG. 6 is a schematic diagram of an embodiment of an image detection model training apparatus provided by an embodiment of the present disclosure.
  • the training device 60 for the image detection model includes an image acquisition module 61, a first detection module 62, a second detection module 63, and a parameter adjustment module 64.
  • the image acquisition module 61 is configured to acquire a sample medical image, wherein at least one actual region of an unlabeled organ is pseudo-labeled on the sample medical image; the first detection module 62 is configured to use the original detection model to detect the sample medical image to obtain a first detection result, where the first detection result includes the first predicted region of the unlabeled organ; the second detection module 63 is configured to use the image detection model to detect the sample medical image to obtain a second detection result, wherein the second detection result includes a second predicted region of the unlabeled organ, and the network parameters of the image detection model are determined by using the network parameters of the original detection model; and the parameter adjustment module 64 is configured to adjust the network parameters of the original detection model by using the differences between the first prediction area and, respectively, the actual area and the second prediction area.
  • In this way, the sample medical image is acquired with at least one actual region of an unlabeled organ pseudo-labeled on it, so there is no need to actually label multiple organs in the sample medical image; the original detection model is used to detect the sample medical image to obtain the first detection result, the image detection model is used to detect the sample medical image to obtain the second detection result, and the differences between the first predicted area and, respectively, the actual area and the second predicted area are used to adjust the network parameters of the original detection model. Since the network parameters of the image detection model are determined by the network parameters of the original detection model, the image detection model can supervise the training of the original detection model, which can constrain the cumulative error of the network parameters caused by the pseudo-labeled actual regions over multiple training iterations and improve the accuracy of the image detection model, so that the image detection model can accurately supervise the training of the original detection model and the original detection model can accurately adjust its network parameters during training. Therefore, the detection accuracy of the image detection model can be improved in the process of multi-organ detection.
  • the original detection model includes a first original detection model and a second original detection model
  • the image detection model includes a first image detection model corresponding to the first original detection model and a second image detection model corresponding to the second original detection model.
  • the first detection module 62 is further configured to respectively use the first original detection model and the second original detection model to perform the step of detecting the sample medical image to obtain the first detection result; the second detection module 63 is further configured to respectively use the first image detection model and the second image detection model to perform the step of detecting the sample medical image to obtain the second detection result; the parameter adjustment module 64 is further configured to use the differences between the first prediction area of the first original detection model and, respectively, the actual area and the second prediction area of the second image detection model to adjust the network parameters of the first original detection model, and to use the differences between the first prediction area of the second original detection model and, respectively, the actual area and the second prediction area of the first image detection model to adjust the network parameters of the second original detection model.
  • In this way, the original detection model is set to include the first original detection model and the second original detection model, and the image detection model is set to include the first image detection model corresponding to the first original detection model and the second image detection model corresponding to the second original detection model; the first original detection model and the second original detection model are respectively used to perform the step of detecting the sample medical image to obtain the first detection result, and the first image detection model and the second image detection model are respectively used to perform the step of detecting the sample medical image to obtain the second detection result; the differences between the first prediction area of the first original detection model and, respectively, the actual area and the second prediction area of the second image detection model are used to adjust the network parameters of the first original detection model, and the differences between the first prediction area of the second original detection model and, respectively, the actual area and the second prediction area of the first image detection model are used to adjust the network parameters of the second original detection model; hence the first image detection model corresponding to the first original detection model can be used to supervise the training of the second original detection model, and the second image detection model corresponding to the second original detection model can be used to supervise the training of the first original detection model, which can further constrain the cumulative error of the network parameters caused by the pseudo-labeled actual regions over multiple training iterations and improve the accuracy of the image detection model.
  • the parameter adjustment module 64 includes a first loss determination sub-module configured to use the difference between the first prediction area and the actual area to determine the first loss value of the original detection model, a second loss determination sub-module configured to use the difference between the first prediction area and the second prediction area to determine the second loss value of the original detection model, and a parameter adjustment sub-module configured to use the first loss value and the second loss value to adjust the network parameters of the original detection model.
  • In this way, the difference between the first prediction area and the actual area is used to determine the first loss value of the original detection model, the difference between the first prediction area and the second prediction area is used to determine the second loss value of the original detection model, and the first loss value and the second loss value are used to adjust the network parameters of the original detection model. The loss of the original detection model is thus measured along two dimensions, namely the difference between the first prediction area predicted by the original detection model and the pseudo-labeled actual area, and the difference between the first prediction area and the second prediction area predicted by the corresponding image detection model, which is conducive to improving the accuracy of the loss calculation, and in turn helps improve the accuracy of the network parameters of the original detection model and of the image detection model.
  • the first loss determination sub-module includes a focal loss determination unit configured to process the first prediction area and the actual area using a focal loss function to obtain a first focal loss value, and a set similarity loss determination unit configured to process the first prediction area and the actual area using a set similarity loss function to obtain a first set similarity loss value; the second loss determination sub-module is further configured to process the first prediction area and the second prediction area using a consistency loss function to obtain the second loss value; the parameter adjustment sub-module includes a weighting processing unit configured to perform weighting processing on the first loss value and the second loss value to obtain a weighted loss value, and a parameter adjustment unit configured to adjust the network parameters of the original detection model by using the weighted loss value.
  • Processing with the focal loss function can increase the model's focus on difficult samples, which helps improve the accuracy of the image detection model; obtaining the first set similarity loss value can make the model fit the pseudo-labeled actual area, which also helps improve accuracy; using the consistency loss function to process the first prediction area and the second prediction area to obtain the second loss value can improve the prediction consistency between the original detection model and the image detection model, further improving the accuracy of the image detection model; and weighting the first loss value and the second loss value to obtain the weighted loss value, which is then used to adjust the network parameters of the original detection model, can balance the importance of each loss value in the training process, improving the accuracy of the network parameters and, in turn, of the image detection model. A sketch of these loss terms follows.
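  • The passage above does not fix exact formulas; the sketch below uses the standard focal loss, the Dice coefficient as a common instance of a set similarity loss, a mean-squared-error consistency term restricted to unlabeled-organ channels as described in the surrounding text, and illustrative weights:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0):
    # Multi-class focal loss: down-weights easy pixels so the model
    # focuses on difficult samples.
    logp = F.log_softmax(logits, dim=1)
    logp_t = logp.gather(1, target.unsqueeze(1)).squeeze(1)
    p_t = logp_t.exp()
    return (-((1 - p_t) ** gamma) * logp_t).mean()

def dice_loss(logits, target, num_classes, eps=1e-6):
    # Set similarity (here: Dice) loss against the pseudo-labeled regions.
    probs = logits.softmax(1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    union = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    return (1 - (2 * inter + eps) / (union + eps)).mean()

def consistency_loss(logits, other_logits, unlabeled_idx):
    # Second loss value: compare the two models' predictions only on
    # the channels of unlabeled organs.
    p, q = logits.softmax(1), other_logits.softmax(1)
    return F.mse_loss(p[:, unlabeled_idx], q[:, unlabeled_idx])

def weighted_loss(logits, other_logits, pseudo, unlabeled_idx, num_classes,
                  w_focal=1.0, w_dice=1.0, w_cons=0.1):
    # Weighted combination of the first and second loss values; the
    # weights are illustrative, not values from the present disclosure.
    first = w_focal * focal_loss(logits, pseudo) \
            + w_dice * dice_loss(logits, pseudo, num_classes)
    second = w_cons * consistency_loss(logits, other_logits, unlabeled_idx)
    return first + second
```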
  • the sample medical image further includes the actual region of the labeled organ
  • the first detection result further includes the first prediction region of the labeled organ
  • the second detection result further includes the second prediction region of the labeled organ.
  • the first loss determination sub-module is further configured to determine the first loss value of the original detection model by using the differences between the first predicted regions and the actual regions of both the unlabeled organ and the labeled organ; the second loss determination sub-module is further configured to determine the second loss value of the original detection model by using only the difference between the first prediction area of the unlabeled organ and the corresponding second prediction area.
  • In this way, when the second detection result also includes the second prediction region of the labeled organ, the differences between the first prediction areas and the actual areas of both labeled and unlabeled organs are comprehensively considered in determining the first loss value, while only the difference between the first prediction region of the unlabeled organ and the corresponding second prediction region is considered in determining the second loss value of the original detection model, which can improve the robustness of the consistency constraint between the original detection model and the image detection model and thus improve the accuracy of the image detection model.
  • the training device 60 of the image detection model further includes a parameter update module configured to update the network parameters of the image detection model by using the network parameters adjusted by the original detection model during the current training and several previous trainings.
  • In this way, the network parameters of the image detection model are updated by using the network parameters adjusted by the original detection model in the current training and several previous trainings, which can further constrain the cumulative error of the network parameters caused by the pseudo-labeled actual regions over multiple training iterations and improve the accuracy of the image detection model.
  • the parameter update module includes a statistics sub-module configured to count the average value of the network parameters adjusted by the original detection model during the current training and several previous trainings, and an update sub-module configured to update the network parameters of the image detection model to the corresponding average value of the network parameters of the original detection model.
  • In this way, the average value of the network parameters adjusted by the original detection model during the current and previous trainings is counted, and the network parameters of the image detection model are updated to the corresponding average value of the network parameters of the original detection model.
  • the image acquisition module 61 includes an image acquisition sub-module configured to acquire a medical image to be pseudo-labeled, wherein at least one unlabeled organ exists in the medical image to be pseudo-labeled; a single-organ detection sub-module configured to use the single-organ detection model corresponding to each unlabeled organ to detect the medical image to be pseudo-labeled, so as to obtain the organ prediction area of each unlabeled organ; and a pseudo-labeling sub-module configured to pseudo-label the organ prediction region of the unlabeled organ as the actual region of the unlabeled organ, and to use the pseudo-labeled medical image as the sample medical image.
  • In this way, the organ prediction area of each unlabeled organ is pseudo-labeled as the actual area of that unlabeled organ, and the medical image after pseudo-labeling is used as the sample medical image; using the single-organ detection models eliminates the workload of manually labeling organs, which helps reduce the labor cost of training an image detection model for multi-organ detection and improves training efficiency. A sketch of this pseudo-labeling step follows.
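  • A sketch of this pseudo-labeling step, assuming one binary single-organ detection model per unlabeled organ; the organ ids, threshold, and model handles are illustrative, and overlapping predictions are simply overwritten by later organs for brevity:

```python
import torch

@torch.no_grad()
def pseudo_label(image, single_organ_models, thresh=0.5):
    # single_organ_models: {organ_id: model}, where each model outputs a
    # (B, 1, H, W) foreground logit map for its own organ.
    label = torch.zeros(image.shape[0], *image.shape[2:], dtype=torch.long)
    for organ_id, model in single_organ_models.items():
        prob = model(image).sigmoid().squeeze(1)
        # Pseudo-label the organ prediction region as its actual region.
        label[prob > thresh] = organ_id
    return label  # multi-organ pseudo label mask for the sample image
```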
  • the medical image to be pseudo-labeled includes at least one labeled organ, and the image acquisition module 61 further includes a single-organ training sub-module configured to use the medical image to be pseudo-labeled to train the single-organ detection model corresponding to the labeled organ in the medical image to be pseudo-labeled.
  • In this way, since the medical image to be pseudo-labeled includes at least one labeled organ, training the single-organ detection model corresponding to that labeled organ with the medical image to be pseudo-labeled can improve the accuracy of the single-organ detection model, which helps improve the accuracy of the subsequent pseudo-labeling and, in turn, of the subsequently trained image detection model.
  • the image acquisition sub-module includes a three-dimensional image acquisition unit configured to acquire a three-dimensional medical image, a preprocessing unit configured to preprocess the three-dimensional medical image, and an image cropping unit configured to crop the preprocessed three-dimensional medical image to obtain at least one two-dimensional medical image to be pseudo-labeled.
  • In this way, cropping the preprocessed three-dimensional medical image into at least one two-dimensional medical image to be pseudo-labeled helps obtain medical images suitable for model training, which can improve the accuracy of subsequent image detection model training.
  • the preprocessing unit is further configured to perform at least one of the following: adjust the voxel resolution of the three-dimensional medical image to a preset resolution; use a preset window value to normalize the voxel values of the three-dimensional medical image to a preset range; add Gaussian noise to at least some of the voxels of the three-dimensional medical image.
  • Adjusting the voxel resolution of the three-dimensional medical image to a preset resolution can facilitate subsequent model prediction processing; using a preset window value to normalize the voxel values of the three-dimensional medical image to a preset range can help the model extract accurate features; and adding Gaussian noise to at least some of the voxels of the three-dimensional medical image can help achieve data augmentation, increase data diversity, and improve the accuracy of subsequent model training. A preprocessing sketch follows.
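  • A sketch of these three preprocessing options, assuming the three-dimensional medical image arrives as a NumPy volume with a known voxel spacing; the preset resolution, window values, and noise scale are illustrative:

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess(volume, spacing, target_spacing=(1.0, 1.0, 1.0),
               window=(-200.0, 300.0), noise_std=0.01):
    # 1) Adjust the voxel resolution to the preset resolution.
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    volume = zoom(volume, factors, order=1)
    # 2) Normalize the voxel values into a preset range with the window.
    lo, hi = window
    volume = np.clip(volume, lo, hi)
    volume = (volume - lo) / (hi - lo)  # now within [0, 1]
    # 3) Add Gaussian noise to the voxels for data augmentation.
    volume = volume + np.random.normal(0.0, noise_std, volume.shape)
    return volume.astype(np.float32)

# Two-dimensional slices to be pseudo-labeled can then be cropped from
# the volume, e.g. axial slices: [vol[i] for i in range(vol.shape[0])]
```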
  • FIG. 7 is a schematic diagram of a framework of an embodiment of an image detection device provided by an embodiment of the present disclosure.
  • the image detection device 70 includes an image acquisition module 71 and an image detection module 72.
  • the image acquisition module 71 is configured to acquire a medical image to be detected, wherein the medical image to be detected contains multiple organs;
  • the image detection module 72 is configured to use the image detection model to detect the medical image to be detected to obtain the predicted regions of the multiple organs, wherein the image detection model is trained by the image detection model training apparatus in any of the above-mentioned embodiments.
  • In this way, the image detection model trained by the image detection model training apparatus of any of the above embodiments is used to detect the medical image to be detected to obtain the predicted regions of multiple organs, which can improve the detection accuracy in the process of multi-organ detection.
  • FIG. 8 is a schematic diagram of a framework of an embodiment of an electronic device provided by an embodiment of the present disclosure.
  • the electronic device 80 includes a memory 81 and a processor 82 that are coupled to each other.
  • the processor 82 is configured to execute program instructions stored in the memory 81 to implement the steps of any of the foregoing image detection model training method embodiments, or to implement the steps of any of the foregoing image detection method embodiments.
  • the electronic device 80 may include but is not limited to: a microcomputer and a server.
  • the electronic device 80 may also include mobile devices such as a notebook computer and a tablet computer, which are not limited herein.
  • the processor 82 is configured to control itself and the memory 81 to implement the steps of any of the foregoing image detection model training method embodiments, or implement the steps of any of the foregoing image detection method embodiments.
  • the processor 82 may also be referred to as a CPU (Central Processing Unit, central processing unit).
  • the processor 82 may be an integrated circuit chip with signal processing capability.
  • the processor 82 may also be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the processor 82 may also be implemented jointly by multiple integrated circuit chips.
  • the above solution can improve the accuracy of detection in the process of multi-organ detection.
  • FIG. 9 is a schematic framework diagram of an embodiment of a computer-readable storage medium provided by an embodiment of the present disclosure.
  • the computer-readable storage medium 90 stores program instructions 901 that can be executed by the processor.
  • the program instructions 901 are configured to implement the steps of any of the foregoing image detection model training method embodiments, or the steps of any of the foregoing image detection method embodiments.
  • the above solution can improve the accuracy of detection in the process of multi-organ detection.
  • the computer program product of the image detection model training method or of the image detection method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the instructions included in the program code can be configured to execute the steps of the image detection model training method or the image detection method described in the above method embodiments; reference may be made to the above method embodiments, which will not be repeated here.
  • the embodiments of the present disclosure also provide a computer program, which, when executed by a processor, implements any one of the methods in the foregoing embodiments.
  • the computer program product can be implemented by hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium.
  • the computer program product is embodied as a software product, such as a software development kit (SDK) and so on.
  • the disclosed method and device may be implemented in other ways.
  • the device implementations described above are only illustrative; for example, the division of modules or units is only a logical function division, and there may be other divisions in actual implementation; for example, units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present disclosure.
  • the aforementioned storage media include: USB flash drives, mobile hard disks, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), magnetic disks, optical disks, and other media that can store program code.
  • A sample medical image is obtained, where at least one actual region of an unlabeled organ is pseudo-labeled on the sample medical image; the original detection model is used to detect the sample medical image to obtain a first detection result including the first predicted region of the unlabeled organ; the image detection model is used to detect the sample medical image to obtain a second detection result including the second prediction region of the unlabeled organ, where the network parameters of the image detection model are determined based on the network parameters of the original detection model; and the differences between the first prediction region and, respectively, the actual region and the second prediction region are used to adjust the network parameters of the original detection model. In this way, the detection accuracy can be improved in the process of multi-organ detection.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

An image detection method, a related model training method, related apparatuses, and a device are disclosed. The image detection model training method comprises: acquiring a sample medical image, wherein an actual region of at least one unlabeled organ is pseudo-labeled on the sample medical image; using an original detection model to detect the sample medical image so as to obtain a first detection result comprising a first prediction region of the unlabeled organ; using the image detection model to detect the sample medical image so as to obtain a second detection result comprising a second prediction region of the unlabeled organ, wherein the network parameters of the image detection model are determined on the basis of the network parameters of the original detection model; and adjusting the network parameters of the original detection model by using a difference between the first prediction region and the actual region and a difference between the first prediction region and the second prediction region, respectively.
PCT/CN2020/140325 2020-04-30 2020-12-28 Image detection method and related model training method, related apparatuses and device WO2021218215A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021576932A JP2022538137A (ja) 2020-04-30 2020-12-28 Image detection method, related model training method, and related apparatus and device
KR1020217043241A KR20220016213A (ko) 2020-04-30 2020-12-28 Image detection method, related model training method, and related apparatus and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010362766.X 2020-04-30
CN202010362766.XA CN111539947B (zh) Image detection method, related model training method, and related apparatus and device

Publications (1)

Publication Number Publication Date
WO2021218215A1 true WO2021218215A1 (fr) 2021-11-04

Family

ID=71967825

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/140325 WO2021218215A1 (fr) 2020-04-30 2020-12-28 Image detection method and related model training method, related apparatuses and device

Country Status (5)

Country Link
JP (1) JP2022538137A (fr)
KR (1) KR20220016213A (fr)
CN (1) CN111539947B (fr)
TW (1) TW202145249A (fr)
WO (1) WO2021218215A1 (fr)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539947B (zh) * 2020-04-30 2024-03-29 Shanghai SenseTime Intelligent Technology Co., Ltd. Image detection method, related model training method, and related apparatus and device
CN112132206A (zh) * 2020-09-18 2020-12-25 Qingdao SenseTime Technology Co., Ltd. Image recognition method, related model training method, and related apparatus and device
CN113850179A (zh) * 2020-10-27 2021-12-28 Shenzhen SenseTime Technology Co., Ltd. Image detection method and related model training method, apparatus, device and medium
CN112200802B (zh) * 2020-10-30 2022-04-26 Shanghai SenseTime Intelligent Technology Co., Ltd. Training method for image detection model and related apparatus, device and storage medium
CN112669293A (zh) * 2020-12-31 2021-04-16 Shanghai SenseTime Intelligent Technology Co., Ltd. Image detection method, detection model training method, and related apparatus and device
CN112785573A (zh) * 2021-01-22 2021-05-11 Shanghai SenseTime Intelligent Technology Co., Ltd. Image processing method and related apparatus and device
CN112749801A (zh) * 2021-01-22 2021-05-04 Shanghai SenseTime Intelligent Technology Co., Ltd. Neural network training and image processing method and apparatus
CN114049344A (zh) * 2021-11-23 2022-02-15 Shanghai SenseTime Intelligent Technology Co., Ltd. Image segmentation method, training method for its model, and related apparatus and electronic device
CN114429459A (zh) * 2022-01-24 2022-05-03 Shanghai SenseTime Intelligent Technology Co., Ltd. Training method for target detection model and corresponding detection method
CN114155365B (zh) * 2022-02-07 2022-06-14 Hangzhou Innovation Institute, Beihang University Model training method, image processing method and related apparatus

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018033154A1 (fr) * 2016-08-19 2018-02-22 Beijing SenseTime Technology Development Co., Ltd. Gesture control method, device and electronic apparatus
CN108229267B (zh) * 2016-12-29 2020-10-16 Beijing SenseTime Technology Development Co., Ltd. Object attribute detection, neural network training and region detection methods and apparatuses
JP6931579B2 (ja) * 2017-09-20 2021-09-08 SCREEN Holdings Co., Ltd. Live cell detection method, program and recording medium
EP3474192A1 (fr) * 2017-10-19 2019-04-24 Koninklijke Philips N.V. Data classification
JP7325414B2 (ja) * 2017-11-20 2023-08-14 Koninklijke Philips N.V. Training of a first neural network model and a second neural network model
JP7066385B2 (ja) * 2017-11-28 2022-05-13 Canon Inc. Information processing method, information processing apparatus, information processing system and program
CN109086656B (zh) * 2018-06-06 2023-04-18 Ping An Technology (Shenzhen) Co., Ltd. Airport foreign object detection method, apparatus, computer device and storage medium
CN109523526B (zh) * 2018-11-08 2021-10-22 Tencent Technology (Shenzhen) Co., Ltd. Tissue nodule detection and model training method, apparatus, device and ***
CN110148142B (zh) * 2019-05-27 2023-04-18 Tencent Technology (Shenzhen) Co., Ltd. Training method, apparatus, device and storage medium for image segmentation model
JP2021039748A (ja) * 2019-08-30 2021-03-11 Canon Inc. Information processing apparatus, information processing method, information processing system and program
CN111028206A (zh) * 2019-11-21 2020-04-17 Wanda Information Co., Ltd. Deep-learning-based automatic *** cancer detection and classification ***
CN111062390A (zh) * 2019-12-18 2020-04-24 Beijing Infervision Technology Co., Ltd. Region-of-interest labeling method, apparatus, device and storage medium
CN110969245B (zh) * 2020-02-28 2020-07-24 Beijing Shenrui Bolian Technology Co., Ltd. Target detection model training method and apparatus for medical images

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090116737A1 (en) * 2007-10-30 2009-05-07 Siemens Corporate Research, Inc. Machine Learning For Tissue Labeling Segmentation
CN109166107A (zh) * 2018-04-28 2019-01-08 Beijing SenseTime Technology Development Co., Ltd. Medical image segmentation method and apparatus, electronic device and storage medium
CN109658419A (zh) * 2018-11-15 2019-04-19 Zhejiang University Method for segmenting small organs in medical images
CN110097557A (zh) * 2019-01-31 2019-08-06 Winning Health Technology Group Co., Ltd. Automatic medical image segmentation method and *** based on 3D-UNet
CN110188829A (zh) * 2019-05-31 2019-08-30 Beijing SenseTime Technology Development Co., Ltd. Neural network training method, target recognition method and related products
CN111539947A (zh) * 2020-04-30 2020-08-14 Shanghai SenseTime Intelligent Technology Co., Ltd. Image detection method, related model training method, and related apparatus and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114391828A (zh) * 2022-03-01 2022-04-26 Zhengzhou University Positive psychological nursing intervention *** for stroke patients
CN117041531A (zh) * 2023-09-04 2023-11-10 Wuxi Weikai Technology Co., Ltd. Mobile phone camera focus detection method and *** based on image quality assessment
CN117041531B (zh) * 2023-09-04 2024-03-15 Wuxi Weikai Technology Co., Ltd. Mobile phone camera focus detection method and *** based on image quality assessment

Also Published As

Publication number Publication date
KR20220016213A (ko) 2022-02-08
CN111539947A (zh) 2020-08-14
CN111539947B (zh) 2024-03-29
JP2022538137A (ja) 2022-08-31
TW202145249A (zh) 2021-12-01

Similar Documents

Publication Publication Date Title
WO2021218215A1 (fr) Image detection method and related model training method, related apparatuses and device
CN109584254B (zh) Cardiac left ventricle segmentation method based on a deep fully convolutional neural network
Bi et al. Automatic liver lesion detection using cascaded deep residual networks
US11941807B2 (en) Artificial intelligence-based medical image processing method and medical device, and storage medium
WO2021128825A1 (fr) Three-dimensional target detection method, method and device for training a three-dimensional target detection model, apparatus, and storage medium
CN110363760B (zh) Computer *** for recognizing medical images
CN109102490A (zh) Automatic image registration quality assessment
US9142030B2 (en) Systems, methods and computer readable storage media storing instructions for automatically segmenting images of a region of interest
CN109215014B (zh) Training method, apparatus, device and storage medium for CT image prediction model
US20220335600A1 (en) Method, device, and storage medium for lesion segmentation and recist diameter prediction via click-driven attention and dual-path connection
CN109949280B (zh) Image processing method, apparatus, device, storage medium and growth-development evaluation ***
Yang et al. A deep learning segmentation approach in free‐breathing real‐time cardiac magnetic resonance imaging
CN112767504A (zh) *** and method for image reconstruction
KR102328198B1 (ko) Method and apparatus for measuring organ volume using an artificial neural network
EP3973508A1 (fr) Sampling latent variables to generate multiple segmentations of an image
CN116130090A (zh) Ejection fraction measurement method and apparatus, electronic device and storage medium
CN111724371B (zh) Data processing method and apparatus, and electronic device
CN113284145A (zh) Image processing method and apparatus, computer-readable storage medium and electronic device
CN115862119B (zh) Face age estimation method and apparatus based on an attention mechanism
CN114787816A (zh) Data augmentation for machine learning methods
CN115496703A (zh) Pneumonia region detection method and ***
US20240177839A1 (en) Image annotation systems and methods
TWI778670B (zh) Pneumonia region detection method and system
CN114581438B (zh) MRI image classification method, apparatus, electronic device and storage medium
TWI771141B (zh) Method for establishing brain extraction and coil shimming correction models for brain images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20933616

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021576932

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20217043241

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20933616

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20933616

Country of ref document: EP

Kind code of ref document: A1