TW202145249A - Image detection method and training method of related model, electronic device and computer-readable storage medium - Google Patents

Image detection method and training method of related model, electronic device and computer-readable storage medium

Info

Publication number
TW202145249A
Authority
TW
Taiwan
Prior art keywords
detection model
image
organ
medical image
original
Prior art date
Application number
TW110109420A
Other languages
Chinese (zh)
Inventor
黃銳
胡志强
張少霆
李鴻升
Original Assignee
大陸商上海商湯智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 大陸商上海商湯智能科技有限公司
Publication of TW202145249A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present disclosure provides an image detection method, a training method for a related model, an electronic device, and a computer-readable storage medium. The training method for the image detection model includes: acquiring a sample medical image in which the actual region of at least one unlabeled organ is pseudo-labeled; detecting the sample medical image with an original detection model to obtain a first detection result that includes a first predicted region of the unlabeled organ; detecting the sample medical image with an image detection model to obtain a second detection result that includes a second predicted region of the unlabeled organ, where the network parameters of the image detection model are determined based on the network parameters of the original detection model; and adjusting the network parameters of the original detection model based on the differences between the first predicted region and the actual region and between the first predicted region and the second predicted region.

Description

Image detection method, training method of related model, electronic device and computer-readable storage medium

The present disclosure relates to the field of artificial intelligence, and in particular to an image detection method, a training method for a related model, an electronic device, and a computer-readable storage medium.

Medical images such as CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) scans are of great clinical significance. Performing multi-organ detection on CT, MRI, and other medical images to determine the region corresponding to each organ has a wide range of applications in clinical practice, such as computer-aided diagnosis and radiotherapy planning. Training an image detection model suitable for multi-organ detection therefore has high application value.

Model training currently relies on large annotated datasets. In medical imaging, however, obtaining a large number of high-quality multi-organ annotations is very time-consuming and labor-intensive, and usually only experienced radiologists are qualified to annotate the data. Constrained by this, existing image detection models often suffer from low accuracy when performing multi-organ detection. How to improve detection accuracy in multi-organ detection has therefore become an urgent problem.

The present disclosure provides an image detection method, a training method for a related model, an electronic device, and a computer-readable storage medium.

In a first aspect, an embodiment of the present disclosure provides a training method for an image detection model, including: acquiring a sample medical image in which the actual region of at least one unlabeled organ is pseudo-labeled; detecting the sample medical image with an original detection model to obtain a first detection result, where the first detection result includes a first predicted region of the unlabeled organ; detecting the sample medical image with an image detection model to obtain a second detection result, where the second detection result includes a second predicted region of the unlabeled organ and the network parameters of the image detection model are determined based on the network parameters of the original detection model; and adjusting the network parameters of the original detection model based on the differences between the first predicted region and the actual region and between the first predicted region and the second predicted region.

Because the sample medical image pseudo-labels the actual region of at least one unlabeled organ, no genuine multi-organ annotation of the sample medical image is required. The original detection model detects the sample medical image to obtain a first detection result containing the first predicted region of the unlabeled organ, the image detection model detects the sample medical image to obtain a second detection result containing the second predicted region of the unlabeled organ, and the differences between the first predicted region and the actual region and between the first predicted region and the second predicted region are used to adjust the network parameters of the original detection model. Since the network parameters of the image detection model are determined based on the network parameters of the original detection model, the image detection model supervises the training of the original detection model. This constrains the cumulative error that the pseudo-labeled actual regions would otherwise introduce into the network parameters over repeated training iterations and improves the accuracy of the image detection model, which in turn allows the image detection model to accurately supervise the training of the original detection model and allows the original detection model to accurately adjust its network parameters during training. As a result, detection accuracy can be improved in multi-organ detection.

In some embodiments, the original detection model includes a first original detection model and a second original detection model, and the image detection model includes a first image detection model corresponding to the first original detection model and a second image detection model corresponding to the second original detection model. Detecting the sample medical image with the original detection model to obtain the first detection result includes: performing the step of detecting the sample medical image to obtain the first detection result with the first original detection model and with the second original detection model, respectively. Detecting the sample medical image with the image detection model to obtain the second detection result includes: performing the step of detecting the sample medical image to obtain the second detection result with the first image detection model and with the second image detection model, respectively. Adjusting the network parameters of the original detection model based on the differences between the first predicted region and the actual region and between the first predicted region and the second predicted region includes: adjusting the network parameters of the first original detection model based on the differences between the first predicted region of the first original detection model and the actual region and between that first predicted region and the second predicted region of the second image detection model; and adjusting the network parameters of the second original detection model based on the differences between the first predicted region of the second original detection model and the actual region and between that first predicted region and the second predicted region of the first image detection model.

Therefore, the original detection model is configured to include a first original detection model and a second original detection model, and the image detection model is configured to include a first image detection model corresponding to the first original detection model and a second image detection model corresponding to the second original detection model. The step of detecting the sample medical image to obtain the first detection result is performed with each original detection model, and the step of detecting the sample medical image to obtain the second detection result is performed with each image detection model. The network parameters of the first original detection model are adjusted based on the differences between its first predicted region and the actual region and between that region and the second predicted region of the second image detection model, and the network parameters of the second original detection model are adjusted based on the differences between its first predicted region and the actual region and between that region and the second predicted region of the first image detection model. In this way, the first image detection model corresponding to the first original detection model supervises the training of the second original detection model, and the second image detection model corresponding to the second original detection model supervises the training of the first original detection model, which further constrains the cumulative error introduced into the network parameters by the pseudo-labeled actual regions over repeated training iterations and improves the accuracy of the image detection model. A sketch of this cross-supervision arrangement is given below.
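
The following is a minimal sketch of the cross-supervision arrangement, assuming a PyTorch-style setup in which model_a and model_b are the two original detection models, ema_a and ema_b are the corresponding image detection models, and the loss helpers are placeholders defined elsewhere; it only illustrates which prediction is compared with which target, not the disclosure's exact implementation.

```python
import torch

def cross_supervised_step(model_a, model_b, ema_a, ema_b,
                          image, pseudo_label,
                          supervised_loss, consistency_loss,
                          optimizer_a, optimizer_b):
    """One illustrative training step for two original detection models,
    each supervised by the other's image detection (averaged) model."""
    pred_a = model_a(image)            # first predicted regions of model A
    pred_b = model_b(image)            # first predicted regions of model B
    with torch.no_grad():              # image detection models only provide targets
        ema_pred_a = ema_a(image)      # second predicted regions of image model A
        ema_pred_b = ema_b(image)      # second predicted regions of image model B

    # Model A: compare with the pseudo-labeled actual regions and with image model B.
    loss_a = supervised_loss(pred_a, pseudo_label) + consistency_loss(pred_a, ema_pred_b)
    # Model B: compare with the pseudo-labeled actual regions and with image model A.
    loss_b = supervised_loss(pred_b, pseudo_label) + consistency_loss(pred_b, ema_pred_a)

    optimizer_a.zero_grad()
    loss_a.backward()
    optimizer_a.step()

    optimizer_b.zero_grad()
    loss_b.backward()
    optimizer_b.step()
    return loss_a.item(), loss_b.item()
```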

In some embodiments, adjusting the network parameters of the original detection model based on the differences between the first predicted region and the actual region and between the first predicted region and the second predicted region includes: determining a first loss value of the original detection model based on the difference between the first predicted region and the actual region; determining a second loss value of the original detection model based on the difference between the first predicted region and the second predicted region; and adjusting the network parameters of the original detection model based on the first loss value and the second loss value.

Therefore, the first loss value of the original detection model is determined from the difference between the first predicted region and the actual region, the second loss value is determined from the difference between the first predicted region and the second predicted region, and the network parameters of the original detection model are adjusted using both loss values. The loss of the original detection model is thus measured along two dimensions: the difference between the first predicted region and the pseudo-labeled actual region, and the difference between the first predicted region and the second predicted region predicted by the corresponding image detection model. This helps to improve the accuracy of the loss calculation, and thereby the accuracy of the network parameters of the original detection model and of the image detection model.

In some embodiments, determining the first loss value of the original detection model based on the difference between the first predicted region and the actual region includes at least one of: processing the first predicted region and the actual region with a focal loss function to obtain a focal first loss value; and processing the first predicted region and the actual region with a set-similarity (Dice) loss function to obtain a set-similarity first loss value.

In some embodiments, determining the second loss value of the original detection model based on the difference between the first predicted region and the second predicted region includes: processing the first predicted region and the second predicted region with a consistency loss function to obtain the second loss value.

In some embodiments, adjusting the network parameters of the original detection model based on the first loss value and the second loss value includes: weighting the first loss value and the second loss value to obtain a weighted loss value; and adjusting the network parameters of the original detection model based on the weighted loss value.

Therefore, processing the first predicted region and the actual region with a focal loss function to obtain the focal first loss value makes the model pay more attention to hard samples, which helps to improve the accuracy of the image detection model. Processing the first predicted region and the actual region with a set-similarity loss function to obtain the set-similarity first loss value makes the model fit the pseudo-labeled actual regions, which also helps to improve the accuracy of the image detection model. Processing the first predicted region and the second predicted region with a consistency loss function to obtain the second loss value improves the consistency between the predictions of the original detection model and the image detection model, again helping to improve the accuracy of the image detection model. Weighting the first loss value and the second loss value to obtain a weighted loss value and adjusting the network parameters of the original detection model with that weighted loss value balances the importance of the individual loss values during training, improving the accuracy of the network parameters and thereby of the image detection model.

In some embodiments, the sample medical image further contains the actual region of a labeled organ, the first detection result further includes a first predicted region of the labeled organ, and the second detection result further includes a second predicted region of the labeled organ. Determining the first loss value of the original detection model based on the difference between the first predicted region and the actual region includes: determining the first loss value based on the differences between the first predicted regions and the actual regions of both the unlabeled organ and the labeled organ. Determining the second loss value of the original detection model based on the difference between the first predicted region and the second predicted region includes: determining the second loss value based on the difference between the first predicted region of the unlabeled organ and the corresponding second predicted region.

Therefore, the actual region of the labeled organ is provided in the sample medical image, the first detection result further includes the first predicted region of the labeled organ, and the second detection result further includes the second predicted region of the labeled organ. When determining the first loss value of the original detection model, the differences between the first predicted regions and the actual regions of both labeled and unlabeled organs are considered, whereas when determining the second loss value, only the difference between the first predicted region of the unlabeled organ and the corresponding second predicted region is considered. This improves the robustness of the consistency constraint between the original detection model and the image detection model and thereby the accuracy of the image detection model.

In some embodiments, after the network parameters of the original detection model are adjusted based on the differences between the first predicted region and the actual region and between the first predicted region and the second predicted region, the method further includes: updating the network parameters of the image detection model using the network parameters adjusted in the current training iteration and in several previous training iterations.

Therefore, updating the network parameters of the image detection model using the network parameters of the original detection model adjusted in the current and several previous training iterations further constrains the cumulative error introduced into the network parameters by the pseudo-labeled actual regions over repeated training iterations and improves the accuracy of the image detection model.

In some embodiments, updating the network parameters of the image detection model using the network parameters adjusted in the current training iteration and in several previous training iterations includes: computing the average of the network parameters adjusted by the original detection model in the current and several previous training iterations; and updating the network parameters of the image detection model to the average of the network parameters of the corresponding original detection model.

Therefore, computing the average of the network parameters adjusted by the original detection model in the current and several previous training iterations and updating the network parameters of the image detection model to this average helps to quickly constrain the cumulative error produced over repeated training iterations and improves the accuracy of the image detection model.

In some embodiments, acquiring the sample medical image includes: acquiring a medical image to be pseudo-labeled, in which at least one organ is unlabeled; detecting the medical image to be pseudo-labeled with a single-organ detection model corresponding to each unlabeled organ to obtain an organ prediction region for each unlabeled organ; and pseudo-labeling the organ prediction region of each unlabeled organ as the actual region of that organ, taking the pseudo-labeled medical image as the sample medical image.

Therefore, by acquiring a medical image to be pseudo-labeled that contains at least one unlabeled organ, detecting it with the single-organ detection model corresponding to each unlabeled organ to obtain an organ prediction region for each unlabeled organ, pseudo-labeling each organ prediction region as the actual region of the corresponding unlabeled organ, and using the pseudo-labeled image as the sample medical image, the single-organ detection models eliminate the workload of manually annotating multiple organs. This helps to reduce the labor cost of training an image detection model for multi-organ detection and improves training efficiency.

In some embodiments, the medical image to be pseudo-labeled includes at least one labeled organ. Before detecting the medical image to be pseudo-labeled with the single-organ detection model corresponding to each unlabeled organ, the method further includes: training, with the medical image to be pseudo-labeled, the single-organ detection model corresponding to the labeled organ in that image.

Therefore, including at least one labeled organ in the medical image to be pseudo-labeled and using that image to train the single-organ detection model corresponding to the labeled organ improves the accuracy of the single-organ detection model, which helps to improve the accuracy of the subsequent pseudo-labeling and thereby of the subsequently trained image detection model.

In some embodiments, acquiring the medical image to be pseudo-labeled includes: acquiring a three-dimensional medical image and preprocessing it; and cropping the preprocessed three-dimensional medical image to obtain at least one two-dimensional medical image to be pseudo-labeled.

Therefore, acquiring a three-dimensional medical image, preprocessing it, and cropping the preprocessed image into at least one two-dimensional medical image to be pseudo-labeled helps to obtain medical images that satisfy the requirements of model training, which in turn helps to improve the accuracy of the subsequent training of the image detection model.

In some embodiments, preprocessing the three-dimensional medical image includes at least one of: adjusting the voxel resolution of the three-dimensional medical image to a preset resolution; normalizing the voxel values of the three-dimensional medical image to a preset range using a preset window value; and adding Gaussian noise to at least some voxels of the three-dimensional medical image.

Therefore, adjusting the voxel resolution of the three-dimensional medical image to a preset resolution facilitates subsequent model prediction; normalizing the voxel values to a preset range with a preset window value helps the model extract accurate features; and adding Gaussian noise to at least some voxels augments the data, increases data diversity, and improves the accuracy of subsequent model training. A sketch of such preprocessing is given below.
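
The following is a minimal sketch of this kind of preprocessing, assuming NumPy arrays and an isotropic target spacing; the spacing, window bounds, and noise level are illustrative values chosen for the example, not parameters taken from the disclosure.

```python
import numpy as np
from scipy.ndimage import zoom  # assumed available for resampling

def preprocess_volume(volume, spacing, target_spacing=(1.0, 1.0, 1.0),
                      window=(-200.0, 300.0), noise_std=0.01, add_noise=True):
    """Resample to a preset resolution, window-normalize to [0, 1],
    and optionally add Gaussian noise for data augmentation."""
    # 1. Adjust voxel resolution to the preset spacing.
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    volume = zoom(volume.astype(np.float32), factors, order=1)

    # 2. Clip to the preset window value and normalize to [0, 1].
    low, high = window
    volume = np.clip(volume, low, high)
    volume = (volume - low) / (high - low)

    # 3. Add Gaussian noise to the voxels (data augmentation).
    if add_noise:
        volume = volume + np.random.normal(0.0, noise_std, volume.shape)
    return volume
```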

In a second aspect, an embodiment of the present disclosure provides an image detection method, including: acquiring a medical image to be detected, where the medical image to be detected contains multiple organs; and detecting the medical image to be detected with an image detection model to obtain predicted regions of the multiple organs, where the image detection model is trained with the training method of the first aspect.

Therefore, detecting the medical image to be detected with the image detection model trained as described in the first aspect yields predicted regions for multiple organs and improves detection accuracy in multi-organ detection.

In a third aspect, an embodiment of the present disclosure provides a training apparatus for an image detection model, including an image acquisition module, a first detection module, a second detection module, and a parameter adjustment module. The image acquisition module is configured to acquire a sample medical image in which the actual region of at least one unlabeled organ is pseudo-labeled; the first detection module is configured to detect the sample medical image with an original detection model to obtain a first detection result, where the first detection result includes a first predicted region of the unlabeled organ; the second detection module is configured to detect the sample medical image with an image detection model to obtain a second detection result, where the second detection result includes a second predicted region of the unlabeled organ and the network parameters of the image detection model are determined based on the network parameters of the original detection model; and the parameter adjustment module is configured to adjust the network parameters of the original detection model based on the differences between the first predicted region and the actual region and between the first predicted region and the second predicted region.

In a fourth aspect, an embodiment of the present disclosure provides an image detection apparatus, including an image acquisition module and an image detection module. The image acquisition module is configured to acquire a medical image to be detected, where the medical image to be detected contains multiple organs; the image detection module is configured to detect the medical image to be detected with an image detection model to obtain predicted regions of the multiple organs, where the image detection model is trained with the training apparatus of the third aspect.

In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including a memory and a processor coupled to each other, where the processor is configured to execute program instructions stored in the memory to implement the training method for an image detection model of the first aspect or the image detection method of the second aspect.

In a sixth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing program instructions that, when executed by a processor, implement the training method for an image detection model of the first aspect or the image detection method of the second aspect.

In a seventh aspect, an embodiment of the present disclosure further provides a computer program including computer-readable code. When the computer-readable code runs on an electronic device, a processor of the electronic device performs the training method for an image detection model of the first aspect or the image detection method of the second aspect.

In the above solution, a sample medical image is acquired in which the actual region of at least one unlabeled organ is pseudo-labeled, so no genuine multi-organ annotation of the sample medical image is required. The original detection model detects the sample medical image to obtain a first detection result containing the first predicted region of the unlabeled organ, the image detection model detects the sample medical image to obtain a second detection result containing the second predicted region of the unlabeled organ, and the differences between the first predicted region and the actual region and between the first predicted region and the second predicted region are used to adjust the network parameters of the original detection model. Since the network parameters of the image detection model are determined based on the network parameters of the original detection model, the image detection model supervises the training of the original detection model, which constrains the cumulative error introduced into the network parameters by the pseudo-labeled actual regions over repeated training iterations and improves the accuracy of the image detection model. The image detection model can therefore accurately supervise the training of the original detection model, and the original detection model can accurately adjust its network parameters during training, so detection accuracy can be improved in multi-organ detection.

The solutions of the embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.

In the following description, details such as specific system structures, interfaces, and technologies are set forth for the purpose of illustration rather than limitation, in order to provide a thorough understanding of the embodiments of the present disclosure.

The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between related objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that both A and B exist, or that B exists alone. The character "/" herein generally indicates that the objects before and after it are in an "or" relationship. In addition, "multiple" herein means two or more than two.

Please refer to FIG. 1, which is a schematic flowchart of an embodiment of a training method for an image detection model provided by an embodiment of the present disclosure. The method may include the following steps.

Step S11: Acquire a sample medical image, where the sample medical image pseudo-labels the actual region of at least one unlabeled organ.

The sample medical image may include a CT image or an MR image, which is not limited here. In a possible implementation scenario, the sample medical image may be obtained by scanning the abdomen, chest, head, or another body part, which may be set according to the actual application and is not limited here. For example, when the abdomen is scanned, the organs in the sample medical image may include the kidney, spleen, liver, pancreas, and so on; when the chest is scanned, the organs may include the heart, lung lobes, thyroid, and so on; and when the head is scanned, the organs may include the brainstem, cerebellum, diencephalon, telencephalon, and so on.

In a possible implementation scenario, the actual region of an unlabeled organ may be obtained by detection with the single-organ detection model corresponding to that organ. For example, if the sample medical image is obtained by scanning the abdomen, the unlabeled organs may include at least one of the kidney, spleen, liver, and pancreas. The sample medical image can then be detected with the single-organ detection model corresponding to the kidney to obtain the organ prediction region of the kidney, with the single-organ detection model corresponding to the spleen to obtain the organ prediction region of the spleen, with the single-organ detection model corresponding to the liver to obtain the organ prediction region of the liver, and with the single-organ detection model corresponding to the pancreas to obtain the organ prediction region of the pancreas. The organ prediction regions of the kidney, spleen, liver, and pancreas are then pseudo-labeled in the sample medical image, yielding pseudo-labeled actual regions for these unlabeled organs. In the embodiments of the present disclosure, pseudo-labeling refers to the process of taking the organ prediction region of an unlabeled organ detected by a single-organ detection model as its actual region. When the unlabeled organs are other organs, the same applies by analogy, and examples are not given one by one here. In a possible implementation scenario, the single-organ detection model for an unlabeled organ is trained with a single-organ dataset in which the actual region of that organ is annotated; for example, the single-organ detection model for the kidney is trained with a kidney dataset annotated with the actual region of the kidney, the single-organ detection model for the spleen is trained with a spleen dataset annotated with the actual region of the spleen, and so on, and examples are not given one by one here.
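
As a rough illustration of this pseudo-labeling step, the sketch below assumes each single-organ detection model is a callable that returns a binary mask for its organ and that pseudo-labels are stored as an integer label map; the model container and label indices are hypothetical conveniences, not part of the disclosure.

```python
import numpy as np

def pseudo_label_image(image, single_organ_models):
    """Build a pseudo-label map by running one single-organ detector per
    unlabeled organ and treating each predicted region as its actual region.

    single_organ_models: dict mapping organ name -> callable(image) -> bool mask.
    Returns an integer map where 0 is background and i marks the i-th organ.
    """
    label_map = np.zeros(image.shape, dtype=np.int64)
    for index, (organ, model) in enumerate(single_organ_models.items(), start=1):
        mask = model(image)          # organ prediction region for this organ
        label_map[mask] = index      # pseudo-label it as the actual region
    return label_map
```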

Step S12: Detect the sample medical image with the original detection model to obtain a first detection result, where the first detection result includes a first predicted region of the unlabeled organ.

The original detection model may be any one of Mask R-CNN (Mask Region-based Convolutional Neural Network), FCN (Fully Convolutional Network), and PSP-Net (Pyramid Scene Parsing Network). In addition, the original detection model may also be set-net, U-net, or the like, and may be set according to the actual situation, which is not limited here.

Detecting the sample medical image with the original detection model yields a first detection result containing the first predicted region of each unlabeled organ. For example, if the sample medical image is obtained by scanning the abdomen and the unlabeled organs include the kidney, spleen, and pancreas, detecting the sample medical image with the original detection model yields a first predicted region for the kidney, a first predicted region for the spleen, and a first predicted region for the pancreas; other scenarios follow by analogy and are not listed one by one here.

Step S13: Detect the sample medical image with the image detection model to obtain a second detection result, where the second detection result includes a second predicted region of the unlabeled organ.

The network structure of the original detection model and the network structure of the corresponding image detection model may be the same. For example, when the original detection model is Mask R-CNN, the corresponding image detection model may also be Mask R-CNN; when the original detection model is an FCN, the corresponding image detection model may also be an FCN; when the original detection model is PSP-Net, the corresponding image detection model may also be PSP-Net; and when the original detection model is another network, the same applies by analogy, and examples are not given one by one here.

The network parameters of the image detection model may be determined based on the network parameters of the original detection model; for example, they may be obtained from the network parameters adjusted by the original detection model over multiple training iterations. For example, in the k-th training iteration, the network parameters of the image detection model may be obtained from the network parameters adjusted by the original detection model in the (k-n)-th to (k-1)-th training iterations; in the (k+1)-th training iteration, they may be obtained from the network parameters adjusted by the original detection model in the (k+1-n)-th to k-th training iterations, and so on, and examples are not given one by one here. The number of training iterations involved (that is, n) may be set according to the actual situation, for example 5, 10, or 15, which is not limited here.

Detecting the sample medical image with the image detection model yields a second detection result containing the second predicted region of each unlabeled organ. Again taking the sample medical image obtained by scanning the abdomen as an example, with the unlabeled organs including the kidney, spleen, and pancreas, detecting the sample medical image with the image detection model yields a second predicted region for the kidney, a second predicted region for the spleen, and a second predicted region for the pancreas; other scenarios follow by analogy and are not listed one by one here.

In a possible implementation scenario, step S12 and step S13 may be performed sequentially, for example step S12 first and then step S13, or step S13 first and then step S12. In another possible implementation scenario, step S12 and step S13 may also be performed simultaneously, which may be set according to the actual application and is not limited here.

Step S14: Adjust the network parameters of the original detection model based on the differences between the first predicted region and the actual region and between the first predicted region and the second predicted region.

The difference between the first predicted region and the actual region may be used to determine a first loss value of the original detection model. For example, to make the model pay more attention to hard samples, the first predicted region and the actual region may be processed with a focal loss function to obtain a focal first loss value; or, to make the model fit the pseudo-labeled actual regions, the first predicted region and the actual region may be processed with a set-similarity (Dice) loss function to obtain a set-similarity first loss value.
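
The following is a minimal sketch of these two supervised terms, assuming binary masks and PyTorch tensors; the gamma, alpha, and smoothing values are illustrative defaults, not parameters stated in the disclosure.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, alpha=0.25, gamma=2.0):
    """Focal loss: down-weights easy examples so hard samples get more attention."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = torch.exp(-bce)  # probability assigned to the true class
    return (alpha * (1.0 - p_t) ** gamma * bce).mean()

def dice_loss(logits, target, smooth=1.0):
    """Set-similarity (Dice) loss: encourages the prediction to overlap the
    pseudo-labeled actual region."""
    prob = torch.sigmoid(logits)
    intersection = (prob * target).sum()
    return 1.0 - (2.0 * intersection + smooth) / (prob.sum() + target.sum() + smooth)
```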

The difference between the first predicted region and the second predicted region may also be used to determine a second loss value of the original detection model. For example, to improve the consistency between the predictions of the original detection model and the image detection model, the first predicted region and the second predicted region may be processed with a consistency loss function to obtain the second loss value. In a possible implementation scenario, the consistency loss function may be a cross-entropy loss function, which may be set according to the actual application and is not limited here.
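
Continuing the sketch above, the consistency term below treats the image detection model's prediction as a soft target under a cross-entropy-style penalty; using soft binary cross entropy here is one reasonable reading, not the disclosure's mandated form.

```python
import torch
import torch.nn.functional as F

def consistency_loss(student_logits, teacher_logits):
    """Cross-entropy between the original model's prediction (first predicted
    region) and the image detection model's prediction (second predicted region)."""
    teacher_prob = torch.sigmoid(teacher_logits).detach()  # no gradient to the image detection model
    return F.binary_cross_entropy_with_logits(student_logits, teacher_prob)
```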

The network parameters of the original detection model may then be adjusted with the first loss value and the second loss value. For example, to balance the importance of each loss value during training, the first loss value and the second loss value may be weighted to obtain a weighted loss value, and the network parameters of the original detection model may be adjusted with the weighted loss value. The weights of the first loss value and the second loss value may be set according to the actual situation; for example, both may be set to 0.5, or the weight of the first loss value may be set to 0.6 and the weight of the second loss value to 0.4, which is not limited here. In addition, when the first loss value includes a focal first loss value and a set-similarity first loss value, the focal first loss value, the set-similarity first loss value, and the second loss value may be weighted to obtain the weighted loss value, which is then used to adjust the network parameters of the original detection model. In a possible implementation scenario, the network parameters of the original detection model may be adjusted with the weighted loss value using Stochastic Gradient Descent (SGD), Batch Gradient Descent (BGD), Mini-Batch Gradient Descent (MBGD), or a similar method; batch gradient descent uses all samples to update the parameters in each iteration, stochastic gradient descent uses a single sample, and mini-batch gradient descent uses a batch of samples, which will not be described further here.
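
Putting the pieces together, the sketch below weights the three loss terms and performs one gradient-descent update; the weights are illustrative values in the spirit of the examples above, and the helper functions are the hedged sketches defined earlier.

```python
import torch

def training_step(original_model, image_detection_model, image, pseudo_label,
                  optimizer, w_focal=0.25, w_dice=0.25, w_consistency=0.5):
    """One illustrative parameter update for the original detection model."""
    logits = original_model(image)                    # first predicted regions
    with torch.no_grad():
        ema_logits = image_detection_model(image)     # second predicted regions

    # Weighted combination of the focal, set-similarity, and consistency losses.
    loss = (w_focal * focal_loss(logits, pseudo_label)
            + w_dice * dice_loss(logits, pseudo_label)
            + w_consistency * consistency_loss(logits, ema_logits))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()   # e.g. torch.optim.SGD on the original model's parameters
    return loss.item()
```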

In one implementation scenario, the sample medical image may further include the actual region of a labeled organ, the first detection result may further include a first predicted region of the labeled organ, and the second detection result may further include a second predicted region of the labeled organ. Again taking the sample medical image obtained by scanning the abdomen as an example, with the unlabeled organs including the kidney, spleen, and pancreas and the labeled organ being the liver, detecting the sample medical image with the original detection model yields first predicted regions for the unlabeled kidney, spleen, and pancreas and for the labeled liver, and detecting it with the corresponding image detection model yields second predicted regions for the unlabeled kidney, spleen, and pancreas and for the labeled liver. The first loss value of the original detection model may therefore be determined from the differences between the first predicted regions and the actual regions of both the unlabeled organs and the labeled organ, and the second loss value may be determined from the differences between the first predicted regions of the unlabeled organs and their corresponding second predicted regions. Continuing the example, the first loss value may be determined from the difference between the first predicted region of the unlabeled kidney and its pseudo-labeled actual region, the difference between the first predicted region of the unlabeled spleen and its pseudo-labeled actual region, the difference between the first predicted region of the unlabeled pancreas and its pseudo-labeled actual region, and the difference between the first predicted region of the labeled liver and its genuinely annotated actual region; the first loss value may include at least one of the focal first loss value and the set-similarity first loss value, as described in the preceding steps and not repeated here. In addition, the second loss value may be determined from the differences between the first and second predicted regions of the unlabeled kidney, the unlabeled spleen, and the unlabeled pancreas; the second loss value may be computed with a cross-entropy loss function, as described in the preceding steps and not repeated here. Thus, when determining the first loss value of the original detection model, the differences between the first predicted regions and the actual regions are considered for all organs, whereas when determining the second loss value, only the differences between the first predicted regions of the unlabeled organs and their corresponding second predicted regions are considered. This improves the robustness of the consistency constraint between the original detection model and the image detection model and thereby the accuracy of the image detection model.
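
A small sketch of this split is shown below, assuming per-organ prediction channels and a boolean list marking which organs carry genuine labels; the channel layout is a hypothetical convention for the example, and the loss helpers are the earlier sketches.

```python
def combined_losses(logits, ema_logits, target, is_labeled_organ):
    """First loss over all organ channels; consistency (second) loss only over
    channels whose organs are pseudo-labeled (unlabeled organs)."""
    first_loss = focal_loss(logits, target) + dice_loss(logits, target)

    unlabeled = [i for i, labeled in enumerate(is_labeled_organ) if not labeled]
    second_loss = consistency_loss(logits[:, unlabeled], ema_logits[:, unlabeled])
    return first_loss, second_loss
```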

In another implementation scenario, after the network parameters of the original detection model are adjusted, the network parameters of the image detection model may also be updated with the network parameters adjusted in the current training iteration and in several previous training iterations, so as to further constrain the cumulative error introduced into the network parameters by the pseudo-labeled actual regions over repeated training iterations and improve the accuracy of the image detection model. Alternatively, after the network parameters of the original detection model are adjusted, the network parameters of the image detection model may be left unchanged as needed and updated only after a preset number of training iterations (for example, 2 or 3), using the network parameters adjusted in the current and several previous training iterations, which is not limited here. For example, the network parameters of the image detection model may not be updated during the k-th training iteration, and during the (k+i)-th training iteration they may be updated using the network parameters adjusted by the original detection model in the (k+i-n)-th to (k+i)-th training iterations, where i may be set to an integer not less than 1 according to the actual situation, such as 1, 2, or 3, which is not limited here.

In a possible implementation scenario, when updating the network parameters of the image detection model, the average of the network parameters adjusted by the original detection model in the current and several previous training iterations may be computed, and the network parameters of the image detection model may then be updated to the average of the network parameters of the corresponding original detection model. In the embodiments of the present disclosure, the average of a network parameter refers to the average of the values of the same network parameter, for example the average of the values of a particular weight (or bias) of the same neuron after adjustment over multiple training iterations; the average of each weight (or bias) of each neuron over multiple training iterations can thus be computed and used to update the corresponding weight (or bias) of the corresponding neuron in the image detection model. For example, if the current training iteration is the k-th, the average of the network parameters adjusted by the original detection model in the current and the previous n-1 training iterations may be computed, where the value of n may be set according to the actual application, for example 5, 10, or 15, which is not limited here. Therefore, in the (k+1)-th training iteration, the network parameters of the image detection model are the average of the network parameters adjusted in the (k-n+1)-th to k-th training iterations, which helps to quickly constrain the cumulative error produced over repeated training iterations and improves the accuracy of the image detection model.
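
The following is a minimal sketch of this parameter-averaging update, which keeps the last n adjusted parameter snapshots of the original detection model and copies their element-wise mean into the image detection model; the snapshot buffer is an assumed bookkeeping choice, not something specified by the disclosure.

```python
from collections import deque
import copy
import torch

class AveragedModelUpdater:
    """Update the image detection model's parameters to the average of the
    original detection model's parameters over the last n training iterations."""

    def __init__(self, n=10):
        self.history = deque(maxlen=n)   # last n parameter snapshots

    def record(self, original_model):
        self.history.append(copy.deepcopy(original_model.state_dict()))

    def update(self, image_detection_model):
        averaged = {}
        for name in self.history[0]:
            stacked = torch.stack([snap[name].float() for snap in self.history])
            averaged[name] = stacked.mean(dim=0)
        image_detection_model.load_state_dict(averaged)
```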

In yet another implementation scenario, a preset training end condition may also be set. If the preset training end condition is not satisfied, the above step S12 and the subsequent steps may be executed again to continue adjusting the network parameters of the original detection model. In a possible implementation scenario, the preset training end condition may include either of the following: the current number of trainings reaches a preset threshold (for example, 500 trainings, 1000 trainings, and so on), or the loss value of the original detection model is less than a preset loss threshold, which is not limited herein. In another possible implementation scenario, after training ends, the image detection model may be used to detect a medical image to be detected, so that the regions corresponding to multiple organs in the medical image to be detected can be obtained directly, which eliminates the need to detect the medical image separately with multiple single-organ detection models and therefore reduces the amount of detection computation.
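A minimal sketch of the outer loop with such an end condition is given below; step_fn and update_fn are hypothetical callbacks standing in for one adjustment of the original detection model and one refresh of the image detection model, and the thresholds are placeholder values rather than values fixed by the disclosure.

```python
def train_until_done(step_fn, update_fn, max_trainings=1000, loss_threshold=1e-3):
    # step_fn(i) performs the i-th adjustment of the original detection model and
    # returns its loss value; update_fn(i) refreshes the image detection model.
    for training_idx in range(max_trainings):         # preset number-of-trainings threshold
        loss = step_fn(training_idx)
        update_fn(training_idx)
        if loss < loss_threshold:                     # preset loss threshold reached
            break
    return training_idx + 1                           # number of trainings performed
```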

In the above solution, a sample medical image is acquired, and the sample medical image is pseudo-labeled with the actual region of at least one unlabeled organ, so there is no need to truly label multiple organs in the sample medical image. The original detection model is used to detect the sample medical image to obtain a first detection result containing the first prediction region of the unlabeled organ, and the image detection model is used to detect the sample medical image to obtain a second detection result containing the second prediction region of the unlabeled organ. The differences between the first prediction region and, respectively, the actual region and the second prediction region are then used to adjust the network parameters of the original detection model, while the network parameters of the image detection model are determined from the network parameters of the original detection model. The image detection model can therefore supervise the training of the original detection model, which constrains the cumulative error that the pseudo-labeled actual regions introduce into the network parameters over multiple trainings and improves the accuracy of the image detection model. In turn, the image detection model can accurately supervise the training of the original detection model, so that the original detection model can accurately adjust its network parameters during training. Consequently, the detection accuracy of the image detection model in multi-organ detection can be improved.

Please refer to FIG. 2. FIG. 2 is a schematic flowchart of an embodiment of step S11 in FIG. 1, that is, a schematic flowchart of an embodiment of acquiring a sample medical image, which includes the following steps.

Step S111: Acquire a medical image to be pseudo-labeled, where the medical image to be pseudo-labeled contains at least one unlabeled organ.

The medical image to be pseudo-labeled may be obtained by scanning the abdomen, in which case the unlabeled organs in the medical image to be pseudo-labeled may include the kidney, the spleen, the pancreas, and so on. The medical image to be pseudo-labeled may also be obtained by scanning other parts, for example, the chest or the head; reference may be made to the relevant steps in the foregoing embodiments, which is not limited herein.

In an implementation scenario, the acquired original medical image may be a three-dimensional medical image, for example, a three-dimensional CT image or a three-dimensional MR image, which is not limited herein. In this case, the three-dimensional medical image may be preprocessed, and the preprocessed three-dimensional medical image may be cropped to obtain at least one medical image to be pseudo-labeled. The cropping may be a center crop of the preprocessed three-dimensional medical image, which is not limited herein. For example, the three-dimensional medical image may be cut along a certain plane, in the dimension perpendicular to that plane, to obtain two-dimensional medical images to be pseudo-labeled. The size of the medical image to be pseudo-labeled may be set according to the actual situation, for example, 352*352, which is not limited herein.
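For illustration, the following NumPy sketch slices a preprocessed volume along one axis and center-crops every slice to the preset size; the (depth, height, width) axis order and the absence of padding for inputs smaller than the crop are assumptions made here for simplicity.

```python
import numpy as np


def center_crop_slices(volume: np.ndarray, crop_hw=(352, 352)) -> list:
    # Cut a preprocessed (depth, height, width) volume into 2D slices along the
    # depth axis and center-crop each slice to the preset size, e.g. 352*352.
    depth, height, width = volume.shape
    crop_h, crop_w = crop_hw
    top = max((height - crop_h) // 2, 0)
    left = max((width - crop_w) // 2, 0)
    return [volume[d, top:top + crop_h, left:left + crop_w] for d in range(depth)]
```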

In a possible implementation scenario, the preprocessing may include adjusting the voxel resolution of the three-dimensional medical image to a preset resolution. A voxel is the smallest unit into which the three-dimensional medical image is divided in three-dimensional space. The preset resolution may be 1*1*3 mm, or it may be set to another resolution according to the actual situation, for example, 1*1*4 mm, 2*2*3 mm, and so on, which is not limited herein. Adjusting the voxel resolution of the three-dimensional medical image to a preset resolution facilitates the subsequent model prediction processing.
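One possible way to perform this resampling is sketched below using scipy's zoom function; the choice of library, the linear interpolation order, and the axis/spacing convention are assumptions rather than requirements of the disclosure.

```python
import numpy as np
from scipy.ndimage import zoom


def resample_to_spacing(volume: np.ndarray,
                        current_spacing=(1.0, 1.0, 1.0),
                        target_spacing=(1.0, 1.0, 3.0)) -> np.ndarray:
    # Rescale the volume so that its voxel spacing matches the preset
    # resolution, e.g. 1*1*3 mm; linear interpolation (order=1) is used here.
    factors = [current / target
               for current, target in zip(current_spacing, target_spacing)]
    return zoom(volume, zoom=factors, order=1)
```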

In another possible implementation scenario, the preprocessing may further include normalizing the voxel values of the three-dimensional medical image to a preset range using a preset window. The voxel values may take different forms depending on the type of three-dimensional medical image; for example, for a three-dimensional CT image, the voxel value may be an Hu (Hounsfield unit) value. The preset window may be set according to the body part corresponding to the three-dimensional medical image. Still taking a three-dimensional CT image as an example, for an abdominal CT the preset window may be set to -125 to 275, and other parts may be set according to the actual situation, which will not be enumerated here one by one. The preset range may be set according to the actual application, for example, 0 to 1. Still taking a three-dimensional CT image as an example, for an abdominal CT with a preset window of -125 to 275 and a preset range of 0 to 1, voxels with values less than or equal to -125 may be uniformly reset to 0, voxels with values greater than or equal to 275 may be uniformly reset to 1, and voxels with values between -125 and 275 may be rescaled to values between 0 and 1. This helps to enhance the contrast between different organs in the image, which in turn helps the model extract accurate features.
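The windowing described above can be written compactly as a clip followed by a linear rescale, as in the sketch below. The abdominal window of -125 to 275 and the target range of 0 to 1 follow the example in the text, while the float32 cast is an implementation assumption.

```python
import numpy as np


def window_normalize(volume: np.ndarray,
                     window=(-125.0, 275.0),
                     out_range=(0.0, 1.0)) -> np.ndarray:
    # Clip CT voxel values (Hounsfield units) to the preset window and rescale
    # them linearly to the preset range, e.g. [-125, 275] -> [0, 1].
    low, high = window
    out_low, out_high = out_range
    clipped = np.clip(volume.astype(np.float32), low, high)
    return (clipped - low) / (high - low) * (out_high - out_low) + out_low
```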

In yet another possible implementation scenario, the preprocessing may further include adding Gaussian noise to at least some of the voxels of the three-dimensional medical image. The proportion of voxels may be set according to the actual application, for example, one third of the voxels of the three-dimensional medical image, one half of the voxels, or all of the voxels, which is not limited herein. By adding Gaussian noise to at least some of the voxels, the two-dimensional medical images to be pseudo-labeled that are subsequently cropped from the noisy three-dimensional medical image differ from those cropped from the three-dimensional medical image without Gaussian noise, which helps to achieve data augmentation, improves data diversity, and improves the accuracy of subsequent model training.
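A simple sketch of this augmentation is shown below; the fraction of voxels, the noise standard deviation, and the assumption that the volume has already been normalized are illustrative choices rather than values fixed by the disclosure.

```python
import numpy as np


def add_gaussian_noise(volume: np.ndarray, fraction=0.5, sigma=0.05,
                       rng=None) -> np.ndarray:
    # Add zero-mean Gaussian noise to a randomly chosen fraction of the voxels
    # (for example half of them) of an already normalized volume.
    rng = np.random.default_rng() if rng is None else rng
    noisy = volume.astype(np.float32).copy()
    mask = rng.random(volume.shape) < fraction        # voxels that receive noise
    noisy[mask] += rng.normal(0.0, sigma, size=int(mask.sum()))
    return noisy
```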

Step S112: Detect the medical image to be pseudo-labeled with the single-organ detection model corresponding to each unlabeled organ, so as to obtain the organ prediction region of each unlabeled organ.

In an implementation scenario, the single-organ detection model corresponding to each unlabeled organ may be trained on a single-organ dataset in which that organ is labeled. For example, the single-organ detection model corresponding to the kidney may be trained on a single-organ dataset labeled with the kidney, and the single-organ detection model corresponding to the spleen may be trained on a single-organ dataset labeled with the spleen; other organs can be treated analogously and are not enumerated here one by one.

In another implementation scenario, the medical image to be pseudo-labeled may further include at least one labeled organ. In that case, the medical image to be pseudo-labeled that includes the labeled organ may be used to train the single-organ detection model corresponding to that labeled organ, thereby obtaining the corresponding single-organ detection model. For example, if the medical image to be pseudo-labeled includes a labeled liver, the medical image to be pseudo-labeled that includes the labeled liver may be used to train the single-organ detection model corresponding to the liver, thereby obtaining the single-organ detection model corresponding to the liver; other organs can be treated analogously and are not enumerated here one by one.

In addition, the single-organ detection model may include any one of Mask R-CNN (Mask Region with Convolutional Neural Network), FCN (Fully Convolutional Network), and PSP-Net (Pyramid Scene Parsing Network); alternatively, the single-organ detection model may also be set-net, U-Net, or the like, and may be set according to the actual situation, which is not limited herein.

By detecting the medical image to be pseudo-labeled with the single-organ detection model corresponding to each unlabeled organ, the organ prediction region of each unlabeled organ can be obtained. Take as an example a medical image to be pseudo-labeled that is obtained by scanning the abdomen, with the unlabeled organs including the kidney, the spleen, and the pancreas: detecting the medical image with the single-organ detection model corresponding to the kidney yields the organ prediction region of the kidney, detecting it with the single-organ detection model corresponding to the spleen yields the organ prediction region of the spleen, and detecting it with the single-organ detection model corresponding to the pancreas yields the organ prediction region of the pancreas. The above steps of detecting the medical image to be pseudo-labeled with the single-organ detection model corresponding to each unlabeled organ may be performed simultaneously, and the organ prediction region of each unlabeled organ is then pseudo-labeled on the medical image at the end, which improves the efficiency of pseudo-labeling; for example, the detections with the kidney, spleen, and pancreas single-organ detection models may be performed at the same time, and the single-organ prediction regions of the kidney, the spleen, and the pancreas are then pseudo-labeled on the medical image to be pseudo-labeled in a unified manner. Alternatively, the steps of detecting the medical image with the single-organ detection model corresponding to each unlabeled organ may be performed in sequence, in which case there is no need to pseudo-label the organ prediction region of each unlabeled organ on the medical image afterwards; for example, the detections with the kidney, spleen, and pancreas single-organ detection models may be performed one after another, and the resulting medical image to be pseudo-labeled already contains the single-organ prediction regions of the kidney, the spleen, and the pancreas. This may be set according to the actual situation and is not limited herein.
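The pseudo-labeling step can be pictured with the hypothetical sketch below, which assumes each single-organ detection model exposes a predict method returning a boolean mask for its organ; the integer class indices and the rule that later organs overwrite earlier ones in overlapping pixels are assumptions made for illustration only.

```python
import numpy as np


def pseudo_label(image: np.ndarray, single_organ_models: dict) -> np.ndarray:
    # single_organ_models maps an integer class index (e.g. 1: kidney,
    # 2: spleen, 3: pancreas) to a model whose predict(image) returns a
    # boolean organ mask of the same spatial size as the image.
    label_map = np.zeros(image.shape[:2], dtype=np.int64)   # 0 = background
    for class_index, model in single_organ_models.items():
        organ_mask = model.predict(image)                    # organ prediction region
        label_map[organ_mask] = class_index                  # pseudo-labeled actual region
    return label_map
```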

Step S113: Pseudo-label the organ prediction region of the unlabeled organ as the actual region of the unlabeled organ, and use the pseudo-labeled medical image as the sample medical image.

After the organ prediction region of each unlabeled organ is obtained, the organ prediction region of the unlabeled organ can be pseudo-labeled as the actual region of the unlabeled organ, and the pseudo-labeled medical image to be pseudo-labeled can be used as the sample medical image.

Different from the foregoing embodiments, a medical image to be pseudo-labeled that contains at least one unlabeled organ is acquired, the medical image is detected with the single-organ detection model corresponding to each unlabeled organ to obtain the organ prediction region of each unlabeled organ, the organ prediction region of the unlabeled organ is pseudo-labeled as the actual region of the unlabeled organ, and the pseudo-labeled medical image is used as the sample medical image. The single-organ detection models thus eliminate the workload of manually labeling multiple organs, which helps to reduce the labor cost of training an image detection model for multi-organ detection and improves the training efficiency.

Please refer to FIG. 3. FIG. 3 is a schematic flowchart of another embodiment of the training method of an image detection model provided by an embodiment of the present disclosure, which may include the following steps.

Step S31: Acquire a sample medical image, where the sample medical image is pseudo-labeled with the actual region of at least one unlabeled organ.

For step S31, reference may be made to the relevant steps in the foregoing embodiments.

Step S32: Perform, with the first original detection model and the second original detection model respectively, the step of detecting the sample medical image to obtain a first detection result.

The original detection model may include a first original detection model and a second original detection model. The first original detection model may include any one of Mask R-CNN (Mask Region with Convolutional Neural Network), FCN (Fully Convolutional Network), and PSP-Net (Pyramid Scene Parsing Network); in addition, the first original detection model may also be set-net, U-Net, or the like, and may be set according to the actual situation, which is not limited herein. Likewise, the second original detection model may include any one of Mask R-CNN, FCN, and PSP-Net, or may also be set-net, U-Net, or the like, and may be set according to the actual situation, which is not limited herein.

For the step of detecting the sample medical image with the first original detection model and the second original detection model to obtain the first detection result, reference may be made to the relevant steps in the foregoing embodiments, which will not be repeated here. In an implementation scenario, the first detection result obtained by the first original detection model may include the first prediction region of the unlabeled organ, or it may further include the first prediction region of a labeled organ. In another implementation scenario, the first detection result obtained by the second original detection model may include the first prediction region of the unlabeled organ, or it may further include the first prediction region of a labeled organ.

Please refer to FIG. 4, which is a schematic diagram of an embodiment of the training process of the image detection model. As shown in FIG. 4, for ease of description, the first original detection model is denoted net1 and the second original detection model is denoted net2. The first original detection model net1 detects the sample medical image to obtain the first detection result corresponding to net1, and the second original detection model net2 detects the sample medical image to obtain the first detection result corresponding to net2.

Step S33: Perform, with the first image detection model and the second image detection model respectively, the step of detecting the sample medical image to obtain a second detection result.

The image detection model may include a first image detection model corresponding to the first original detection model and a second image detection model corresponding to the second original detection model. For the network structures and network parameters of the first image detection model and the second image detection model, reference may be made to the relevant steps in the foregoing embodiments, which will not be repeated here.

For the step of detecting the sample medical image with the first image detection model and the second image detection model to obtain the second detection result, reference may be made to the relevant steps in the foregoing embodiments, which will not be repeated here. In an implementation scenario, the second detection result obtained by the first image detection model may include the second prediction region of the unlabeled organ, or it may further include the second prediction region of a labeled organ. In another implementation scenario, the second detection result obtained by the second image detection model may include the second prediction region of the unlabeled organ, or it may further include the second prediction region of a labeled organ.

Referring again to FIG. 4, for ease of description, the first image detection model corresponding to the first original detection model net1 is denoted EMA net1, and the second image detection model corresponding to the second original detection model net2 is denoted EMA net2. As shown in FIG. 4, the first image detection model EMA net1 detects the sample medical image to obtain the second detection result corresponding to EMA net1, and the second image detection model EMA net2 detects the sample medical image to obtain the second detection result corresponding to EMA net2.

In an implementation scenario, the above step S32 and step S33 may be performed sequentially, for example, step S32 first and then step S33, or step S33 first and then step S32. In another implementation scenario, step S32 and step S33 may also be performed simultaneously, which may be set according to the actual application and is not limited herein.

Step S34: Adjust the network parameters of the first original detection model by using the differences between the first prediction region of the first original detection model and, respectively, the actual region and the second prediction region of the second image detection model.

The difference between the first prediction region of the first original detection model and the pseudo-labeled actual region may be used to determine a first loss value of the first original detection model, and the difference between the first prediction region of the first original detection model and the second prediction region of the second image detection model may be used to determine a second loss value of the first original detection model; the first loss value and the second loss value are then used to adjust the network parameters of the first original detection model. For the calculation of the first loss value and the second loss value, reference may be made to the relevant steps in the foregoing embodiments, which will not be repeated here. In a possible implementation scenario, when the second loss value is calculated, only the first prediction region and the second prediction region of the unlabeled organ may be considered, which improves the robustness of the consistency constraint between the first original detection model and the second image detection model and thereby improves the accuracy of the image detection model.
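As a rough, hypothetical sketch of this step, the code below performs one adjustment of net1 using a supervised loss against the pseudo-labeled actual region and a consistency loss against EMA net2's prediction. The cross-entropy and MSE terms and the consistency weight are stand-ins for the loss functions discussed elsewhere in this disclosure, not a statement of the exact losses used.

```python
import torch
import torch.nn.functional as F


def adjust_net1(net1, ema_net2, optimizer, sample_image, pseudo_label,
                consistency_weight=1.0):
    # One adjustment of the first original detection model (net1), supervised by
    # the pseudo-labeled actual regions and by the second image detection model
    # (EMA net2); sample_image is (N, C_in, H, W), pseudo_label is (N, H, W).
    logits1 = net1(sample_image)                         # first detection result
    with torch.no_grad():
        ema_logits2 = ema_net2(sample_image)             # second detection result

    # First loss value: first prediction region vs. pseudo-labeled actual region.
    first_loss = F.cross_entropy(logits1, pseudo_label)
    # Second loss value: consistency with EMA net2's second prediction region.
    second_loss = F.mse_loss(logits1.softmax(dim=1), ema_logits2.softmax(dim=1))

    loss = first_loss + consistency_weight * second_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The symmetric adjustment of net2 against EMA net1 (step S35 below) follows the same pattern with the roles of the two model pairs exchanged.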

Step S35: Adjust the network parameters of the second original detection model by using the differences between the first prediction region of the second original detection model and, respectively, the actual region and the second prediction region of the first image detection model.

Similarly, the difference between the first prediction region of the second original detection model and the pseudo-labeled actual region may be used to determine a first loss value of the second original detection model, and the difference between the first prediction region of the second original detection model and the second prediction region of the first image detection model may be used to determine a second loss value of the second original detection model; the first loss value and the second loss value are then used to adjust the network parameters of the second original detection model. For the calculation of the first loss value and the second loss value, reference may be made to the relevant steps in the foregoing embodiments, which will not be repeated here. In a possible implementation scenario, when the second loss value is calculated, only the first prediction region and the second prediction region of the unlabeled organ may be considered, which improves the robustness of the consistency constraint between the second original detection model and the first image detection model and thereby improves the accuracy of the image detection model.

In an implementation scenario, the above step S34 and step S35 may be performed sequentially, for example, step S34 first and then step S35, or step S35 first and then step S34. In another implementation scenario, step S34 and step S35 may also be performed simultaneously, which may be set according to the actual application and is not limited herein.

Step S36: Update the network parameters of the first image detection model by using the network parameters of the first original detection model adjusted in the current training and in several previous trainings.

The average of the network parameters adjusted by the first original detection model in the current training and in several previous trainings may be computed, and the network parameters of the first image detection model may be updated to the averages of the corresponding network parameters of the first original detection model. Reference may be made to the relevant steps in the foregoing embodiments, which will not be repeated here.

Referring again to FIG. 4, the average of the network parameters adjusted by the first original detection model net1 in the current training and in several previous trainings may be computed, and the network parameters of the first image detection model EMA net1 may be updated to the averages of the network parameters of the first original detection model net1.

Step S37: Update the network parameters of the second image detection model by using the network parameters of the second original detection model adjusted in the current training and in several previous trainings.

The average of the network parameters adjusted by the second original detection model in the current training and in several previous trainings may be computed, and the network parameters of the second image detection model may be updated to the averages of the corresponding network parameters of the second original detection model. Reference may be made to the relevant steps in the foregoing embodiments, which will not be repeated here.

Referring again to FIG. 4, the average of the network parameters adjusted by the second original detection model net2 in the current training and in several previous trainings may be computed, and the network parameters of the second image detection model EMA net2 may be updated to the averages of the network parameters of the second original detection model net2.

In an implementation scenario, the above step S36 and step S37 may be performed sequentially, for example, step S36 first and then step S37, or step S37 first and then step S36. In another implementation scenario, step S36 and step S37 may also be performed simultaneously, which may be set according to the actual application and is not limited herein.

In an implementation scenario, after the network parameters of the first image detection model and the second image detection model are updated, if the preset training end condition is not satisfied, the above step S32 and the subsequent steps may be executed again to continue adjusting the network parameters of the first original detection model and the second original detection model, and to continue updating the network parameters of the first image detection model corresponding to the first original detection model and of the second image detection model corresponding to the second original detection model. In a possible implementation scenario, the preset training end condition may include either of the following: the current number of trainings reaches a preset threshold (for example, 500 trainings, 1000 trainings, and so on), or the loss values of the first original detection model and the second original detection model are less than a preset loss threshold, which is not limited herein. In another possible implementation scenario, after training ends, either the first image detection model or the second image detection model may be used as the network model for subsequent image detection, so that the regions corresponding to multiple organs in the medical image to be detected can be obtained directly, which eliminates the need to detect the medical image separately with multiple single-organ detection models and therefore reduces the amount of detection computation.

Different from the foregoing embodiments, the original detection model is set to include a first original detection model and a second original detection model, and the image detection model is set to include a first image detection model corresponding to the first original detection model and a second image detection model corresponding to the second original detection model. The step of detecting the sample medical image to obtain the first detection result is performed with the first original detection model and the second original detection model respectively, and the step of detecting the sample medical image to obtain the second detection result is performed with the first image detection model and the second image detection model respectively. The network parameters of the first original detection model are adjusted by using the differences between the first prediction region of the first original detection model and, respectively, the actual region and the second prediction region of the second image detection model, and the network parameters of the second original detection model are adjusted by using the differences between the first prediction region of the second original detection model and, respectively, the actual region and the second prediction region of the first image detection model. In this way, the first image detection model corresponding to the first original detection model can supervise the training of the second original detection model, and the second image detection model corresponding to the second original detection model can supervise the training of the first original detection model, which further constrains the cumulative error that the pseudo-labeled actual regions introduce into the network parameters over multiple trainings and improves the accuracy of the image detection model.

Please refer to FIG. 5. FIG. 5 is a schematic flowchart of an embodiment of an image detection method provided by an embodiment of the present disclosure, which may include the following steps.

Step S51: Acquire a medical image to be detected, where the medical image to be detected contains multiple organs.

The medical image to be detected may include a CT image or an MR image, which is not limited herein. In a possible implementation scenario, the medical image to be detected may be obtained by scanning the abdomen, the chest, the head, or another part, which may be set according to the actual application and is not limited herein. For example, when the abdomen is scanned, the organs in the medical image to be detected may include the kidney, the spleen, the liver, the pancreas, and so on; when the chest is scanned, the organs in the medical image to be detected may include the heart, the lung lobes, the thyroid, and so on; when the head is scanned, the organs in the medical image to be detected may include the brainstem, the cerebellum, the diencephalon, the telencephalon, and so on.

Step S52: Detect the medical image to be detected with the image detection model to obtain the prediction regions of the multiple organs.

The image detection model is obtained by training with the steps in any of the above embodiments of the training method of an image detection model; reference may be made to the relevant steps in the foregoing embodiments, which will not be repeated here. By detecting the medical image to be detected with the image detection model, the prediction regions of the multiple organs can be obtained directly, which eliminates the need to detect the medical image separately with multiple single-organ detection models and therefore reduces the amount of detection computation.
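A minimal inference sketch is given below. It assumes, purely for illustration, a PyTorch-style image detection model that takes a single preprocessed 2D slice and outputs one channel per organ class; this interface is an assumption rather than a statement of the disclosed architecture.

```python
import numpy as np
import torch


@torch.no_grad()
def detect_organs(image_model: torch.nn.Module, slice_2d: np.ndarray) -> np.ndarray:
    # Detect the multiple organs in one preprocessed 2D slice of the medical
    # image to be detected and return a per-pixel organ label map.
    image_model.eval()
    x = torch.from_numpy(slice_2d).float()[None, None]      # shape (1, 1, H, W)
    logits = image_model(x)                                  # shape (1, C, H, W)
    return logits.argmax(dim=1).squeeze(0).cpu().numpy()     # organ index per pixel
```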

In the above solution, the image detection model trained with the steps in any of the above embodiments of the training method of an image detection model is used to detect the medical image to be detected to obtain the prediction regions of multiple organs, which improves the detection accuracy in the process of multi-organ detection.

Please refer to FIG. 6. FIG. 6 is a schematic diagram of a framework of an embodiment of a training apparatus for an image detection model provided by an embodiment of the present disclosure. The training apparatus 60 for an image detection model includes an image acquisition module 61, a first detection module 62, a second detection module 63, and a parameter adjustment module 64. The image acquisition module 61 is configured to acquire a sample medical image, where the sample medical image is pseudo-labeled with the actual region of at least one unlabeled organ. The first detection module 62 is configured to detect the sample medical image with the original detection model to obtain a first detection result, where the first detection result includes the first prediction region of the unlabeled organ. The second detection module 63 is configured to detect the sample medical image with the image detection model to obtain a second detection result, where the second detection result includes the second prediction region of the unlabeled organ, and the network parameters of the image detection model are determined from the network parameters of the original detection model. The parameter adjustment module 64 is configured to adjust the network parameters of the original detection model by using the differences between the first prediction region and, respectively, the actual region and the second prediction region.

In the above solution, a sample medical image is acquired, and the sample medical image is pseudo-labeled with the actual region of at least one unlabeled organ, so there is no need to truly label multiple organs in the sample medical image. The original detection model is used to detect the sample medical image to obtain a first detection result containing the first prediction region of the unlabeled organ, and the image detection model is used to detect the sample medical image to obtain a second detection result containing the second prediction region of the unlabeled organ. The differences between the first prediction region and, respectively, the actual region and the second prediction region are then used to adjust the network parameters of the original detection model, while the network parameters of the image detection model are determined from the network parameters of the original detection model. The image detection model can therefore supervise the training of the original detection model, which constrains the cumulative error that the pseudo-labeled actual regions introduce into the network parameters over multiple trainings and improves the accuracy of the image detection model. In turn, the image detection model can accurately supervise the training of the original detection model, so that the original detection model can accurately adjust its network parameters during training. Consequently, the detection accuracy of the image detection model in multi-organ detection can be improved.

In some embodiments, the original detection model includes a first original detection model and a second original detection model, and the image detection model includes a first image detection model corresponding to the first original detection model and a second image detection model corresponding to the second original detection model. The first detection module 62 is further configured to perform, with the first original detection model and the second original detection model respectively, the step of detecting the sample medical image to obtain the first detection result. The second detection module 63 is further configured to perform, with the first image detection model and the second image detection model respectively, the step of detecting the sample medical image to obtain the second detection result. The parameter adjustment module 64 is further configured to adjust the network parameters of the first original detection model by using the differences between the first prediction region of the first original detection model and, respectively, the actual region and the second prediction region of the second image detection model, and is further configured to adjust the network parameters of the second original detection model by using the differences between the first prediction region of the second original detection model and, respectively, the actual region and the second prediction region of the first image detection model.

Different from the foregoing embodiments, the original detection model is set to include a first original detection model and a second original detection model, and the image detection model is set to include a first image detection model corresponding to the first original detection model and a second image detection model corresponding to the second original detection model. The step of detecting the sample medical image to obtain the first detection result is performed with the first original detection model and the second original detection model respectively, and the step of detecting the sample medical image to obtain the second detection result is performed with the first image detection model and the second image detection model respectively. The network parameters of the first original detection model are adjusted by using the differences between the first prediction region of the first original detection model and, respectively, the actual region and the second prediction region of the second image detection model, and the network parameters of the second original detection model are adjusted by using the differences between the first prediction region of the second original detection model and, respectively, the actual region and the second prediction region of the first image detection model. In this way, the first image detection model corresponding to the first original detection model can supervise the training of the second original detection model, and the second image detection model corresponding to the second original detection model can supervise the training of the first original detection model, which further constrains the cumulative error that the pseudo-labeled actual regions introduce into the network parameters over multiple trainings and improves the accuracy of the image detection model.

In some embodiments, the parameter adjustment module 64 includes a first loss determination sub-module configured to determine a first loss value of the original detection model by using the difference between the first prediction region and the actual region, a second loss determination sub-module configured to determine a second loss value of the original detection model by using the difference between the first prediction region and the second prediction region, and a parameter adjustment sub-module configured to adjust the network parameters of the original detection model by using the first loss value and the second loss value.

Different from the foregoing embodiments, the first loss value of the original detection model is determined from the difference between the first prediction region and the actual region, the second loss value of the original detection model is determined from the difference between the first prediction region and the second prediction region, and the first loss value and the second loss value are used to adjust the network parameters of the original detection model. The loss of the original detection model is thus measured along two dimensions: the difference between the first prediction region predicted by the original detection model and the pseudo-labeled actual region, and the difference between the first prediction region and the second prediction region predicted by the corresponding image detection model. This helps to improve the accuracy of the loss calculation, which in turn helps to improve the accuracy of the network parameters of the original detection model and thereby the accuracy of the image detection model.

In some embodiments, the first loss determination sub-module includes a focal loss determination unit configured to process the first prediction region and the actual region with a focal loss function to obtain a focal first loss value, and a set-similarity loss determination unit configured to process the first prediction region and the actual region with a set-similarity loss function to obtain a set-similarity first loss value. The second loss determination sub-module is further configured to process the first prediction region and the second prediction region with a consistency loss function to obtain the second loss value. The parameter adjustment sub-module includes a weighting unit configured to weight the first loss value and the second loss value to obtain a weighted loss value, and a parameter adjustment unit configured to adjust the network parameters of the original detection model by using the weighted loss value.

Different from the foregoing embodiments, processing the first prediction region and the actual region with the focal loss function to obtain the focal first loss value enables the model to pay more attention to hard samples, which helps to improve the accuracy of the image detection model; processing the first prediction region and the actual region with the set-similarity loss function to obtain the set-similarity first loss value enables the model to fit the pseudo-labeled actual region, which helps to improve the accuracy of the image detection model; processing the first prediction region and the second prediction region with the consistency loss function to obtain the second loss value improves the consistency between the predictions of the original detection model and of the image detection model, which helps to improve the accuracy of the image detection model; and weighting the first loss value and the second loss value to obtain a weighted loss value, with which the network parameters of the original detection model are adjusted, balances the importance of each loss value during training, which improves the accuracy of the network parameters and thereby the accuracy of the image detection model.
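One way these three terms could be combined is sketched below; the particular focal formulation (gamma of 2), the Dice form used here for the set-similarity loss, the MSE consistency term, and the weights are illustrative assumptions rather than the specific functions fixed by the disclosure.

```python
import torch
import torch.nn.functional as F


def weighted_loss(logits, ema_logits, target_onehot,
                  focal_gamma=2.0, weights=(1.0, 1.0, 0.5)):
    # logits / ema_logits: (N, C, H, W) outputs of the original and image
    # detection models; target_onehot: (N, C, H, W) pseudo-labeled actual regions.
    prob = logits.softmax(dim=1)
    ema_prob = ema_logits.softmax(dim=1)

    # Focal first loss value: down-weight easy pixels so hard samples matter more.
    cross_entropy = -(target_onehot * torch.log(prob.clamp_min(1e-6)))
    focal = ((1.0 - prob) ** focal_gamma * cross_entropy).sum(dim=1).mean()

    # Set-similarity first loss value, written here as a Dice-style loss.
    intersection = (prob * target_onehot).sum(dim=(2, 3))
    union = prob.sum(dim=(2, 3)) + target_onehot.sum(dim=(2, 3))
    set_similarity = (1.0 - (2.0 * intersection + 1e-6) / (union + 1e-6)).mean()

    # Second loss value: consistency between the two models' predictions.
    consistency = F.mse_loss(prob, ema_prob)

    w_focal, w_dice, w_cons = weights
    return w_focal * focal + w_dice * set_similarity + w_cons * consistency
```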

In some embodiments, the sample medical image further contains the actual region of a labeled organ, the first detection result further includes the first prediction region of the labeled organ, and the second detection result further includes the second prediction region of the labeled organ. The first loss determination sub-module is further configured to determine the first loss value of the original detection model by using the differences between the first prediction regions and the actual regions of both the unlabeled organ and the labeled organ, and the second loss determination sub-module is further configured to determine the second loss value of the original detection model by using the difference between the first prediction region of the unlabeled organ and the corresponding second prediction region.

Different from the foregoing embodiments, the actual region of a labeled organ is provided in the sample medical image, the first detection result further includes the first prediction region of the labeled organ, and the second detection result further includes the second prediction region of the labeled organ. When the first loss value of the original detection model is determined, the differences between the first prediction regions and the actual regions are all taken into account, whereas when the second loss value of the original detection model is determined, only the difference between the first prediction region of the unlabeled organ and the corresponding second prediction region is considered. This improves the robustness of the consistency constraint between the original detection model and the image detection model and thereby improves the accuracy of the image detection model.

In some embodiments, the training apparatus 60 for an image detection model further includes a parameter update module configured to update the network parameters of the image detection model by using the network parameters adjusted in the current training and in several previous trainings.

Different from the foregoing embodiments, updating the network parameters of the image detection model with the network parameters adjusted by the original detection model in the current training and in several previous trainings further constrains the cumulative error that the pseudo-labeled actual regions introduce into the network parameters over multiple trainings, and improves the accuracy of the image detection model.

In some embodiments, the parameter update module includes a statistics sub-module configured to compute the average of the network parameters adjusted by the original detection model in the current training and in several previous trainings, and an update sub-module configured to update the network parameters of the image detection model to the averages of the corresponding network parameters of the original detection model.

Different from the foregoing embodiments, computing the average of the network parameters adjusted by the original detection model in the current training and in several previous trainings, and updating the network parameters of the image detection model to the averages of the corresponding network parameters of the original detection model, helps to quickly constrain the cumulative error generated over multiple trainings and improves the accuracy of the image detection model.

In some embodiments, the image acquisition module 61 includes an image acquisition sub-module configured to acquire a medical image to be pseudo-labeled, where the medical image to be pseudo-labeled contains at least one unlabeled organ; a single-organ detection sub-module configured to detect the medical image to be pseudo-labeled with the single-organ detection model corresponding to each unlabeled organ to obtain the organ prediction region of each unlabeled organ; and a pseudo-labeling sub-module configured to pseudo-label the organ prediction region of the unlabeled organ as the actual region of the unlabeled organ and use the pseudo-labeled medical image as the sample medical image.

Different from the foregoing embodiments, a medical image to be pseudo-labeled that contains at least one unlabeled organ is acquired, the medical image is detected with the single-organ detection model corresponding to each unlabeled organ to obtain the organ predicted region of each unlabeled organ, the organ predicted region of the unlabeled organ is pseudo-labeled as its actual region, and the pseudo-labeled image is taken as the sample medical image. In this way, the single-organ detection models eliminate the workload of manually labeling multiple organs, which helps reduce the labor cost of training an image detection model for multi-organ detection and improves training efficiency.
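As a non-limiting illustration, such a pseudo-labeling step could be organized as below; the dictionary-based sample format, the `single_organ_models` mapping and the 0.5 binarization threshold are assumptions of the sketch rather than requirements of the embodiments:

```python
import numpy as np


def pseudo_label(image: np.ndarray,
                 labels: dict,
                 single_organ_models: dict,
                 threshold: float = 0.5) -> dict:
    """Build one sample medical image from a medical image to be pseudo-labeled.

    image: 2D medical image (H, W).
    labels: {organ_name: binary mask} for organs already labeled manually.
    single_organ_models: {organ_name: model}, where model(image) returns a
        probability map for that single organ.
    """
    sample = {"image": image, "regions": dict(labels), "pseudo": []}

    for organ, model in single_organ_models.items():
        if organ in labels:          # already labeled, keep the manual label
            continue
        prob = model(image)          # organ predicted region (probabilities)
        region = (prob >= threshold).astype(np.uint8)
        # Pseudo-label the predicted region as the organ's "actual" region.
        sample["regions"][organ] = region
        sample["pseudo"].append(organ)

    return sample
```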

In some embodiments, the medical image to be pseudo-labeled includes at least one labeled organ, and the image acquisition module 61 further includes a single-organ training sub-module configured to train, by using the medical image to be pseudo-labeled, the single-organ detection model corresponding to the labeled organ in the medical image to be pseudo-labeled.

Different from the foregoing embodiments, the medical image to be pseudo-labeled includes at least one labeled organ, and the single-organ detection model corresponding to that labeled organ is trained with the medical image to be pseudo-labeled. This improves the accuracy of the single-organ detection model, which in turn helps improve the accuracy of the subsequent pseudo-labeling and, further, the accuracy of the subsequently trained image detection model.

In some embodiments, the image acquisition sub-module includes a three-dimensional image acquisition unit configured to acquire a three-dimensional medical image, a preprocessing unit configured to preprocess the three-dimensional medical image, and an image cropping unit configured to crop the preprocessed three-dimensional medical image to obtain at least one two-dimensional medical image to be pseudo-labeled.

Different from the foregoing embodiments, a three-dimensional medical image is acquired and preprocessed, and the preprocessed three-dimensional medical image is cropped to obtain at least one two-dimensional medical image to be pseudo-labeled. This helps produce medical images suitable for model training and thereby helps improve the accuracy of the subsequent training of the image detection model.
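For example, as a sketch only, with the preprocessed scan held as a NumPy voxel array, the cropping into two-dimensional images to be pseudo-labeled might take slices along one axis and center-crop each slice; the slicing axis and the 256-pixel crop size are assumed values:

```python
import numpy as np


def crop_to_2d(volume: np.ndarray, size: int = 256, axis: int = 0) -> list:
    """Split a preprocessed 3D medical image (D, H, W) into 2D slices and
    center-crop each slice to at most size x size pixels."""
    slices = []
    for i in range(volume.shape[axis]):
        sl = np.take(volume, i, axis=axis)          # one 2D slice
        h, w = sl.shape
        top, left = max((h - size) // 2, 0), max((w - size) // 2, 0)
        slices.append(sl[top:top + size, left:left + size])
    return slices
```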

In some embodiments, the preprocessing unit is further configured to perform at least one of the following: adjusting the voxel resolution of the three-dimensional medical image to a preset resolution; normalizing the voxel values of the three-dimensional medical image to a preset range by using a preset window value; and adding Gaussian noise to at least some voxels of the three-dimensional medical image.

Different from the foregoing embodiments, adjusting the voxel resolution of the three-dimensional medical image to a preset resolution facilitates subsequent model prediction; normalizing the voxel values of the three-dimensional medical image to a preset range with a preset window value helps the model extract accurate features; and adding Gaussian noise to at least some voxels of the three-dimensional medical image enables data augmentation, increases data diversity, and improves the accuracy of subsequent model training.
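A minimal sketch of these three preprocessing operations is given below, assuming the scan is a NumPy array with a known voxel spacing; the 1 mm target spacing, the [-125, 275] window, the noise standard deviation and the fraction of voxels perturbed are illustrative assumptions only:

```python
import numpy as np
from scipy.ndimage import zoom


def resample(volume: np.ndarray, spacing, target=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Adjust the voxel resolution of the 3D medical image to a preset one."""
    factors = [s / t for s, t in zip(spacing, target)]
    return zoom(volume, factors, order=1)


def window_normalize(volume: np.ndarray, low=-125.0, high=275.0) -> np.ndarray:
    """Normalize voxel values into [0, 1] using a preset window."""
    volume = np.clip(volume, low, high)
    return (volume - low) / (high - low)


def add_gaussian_noise(volume: np.ndarray, std=0.01, fraction=0.5, rng=None) -> np.ndarray:
    """Add Gaussian noise to part of the voxels (data augmentation)."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = volume.copy()
    mask = rng.random(volume.shape) < fraction
    noisy[mask] += rng.normal(0.0, std, size=int(mask.sum()))
    return noisy
```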

Please refer to FIG. 7, which is a schematic diagram of the framework of an embodiment of an image detection device provided by an embodiment of the present invention. The image detection device 70 includes an image acquisition module 71 and an image detection module 72. The image acquisition module 71 is configured to acquire a medical image to be detected, where the medical image to be detected contains a plurality of organs. The image detection module 72 is configured to detect the medical image to be detected with an image detection model to obtain predicted regions of the plurality of organs, where the image detection model is trained by the training device for an image detection model in any of the foregoing training device embodiments.

In the above solution, the medical image to be detected is detected with an image detection model trained by the training device for an image detection model in any of the foregoing training device embodiments, and the predicted regions of the plurality of organs are obtained, which improves detection accuracy in multi-organ detection.

Please refer to FIG. 8, which is a schematic diagram of the framework of an embodiment of an electronic device provided by an embodiment of the present invention. The electronic device 80 includes a memory 81 and a processor 82 coupled to each other, and the processor 82 is configured to execute program instructions stored in the memory 81 to implement the steps of any of the foregoing embodiments of the training method for an image detection model, or the steps of any of the foregoing embodiments of the image detection method. In one possible implementation scenario, the electronic device 80 may include, but is not limited to, a microcomputer or a server; in addition, the electronic device 80 may also include a mobile device such as a notebook computer or a tablet computer, which is not limited here.

The processor 82 is configured to control itself and the memory 81 to implement the steps of any of the foregoing embodiments of the training method for an image detection model, or the steps of any of the foregoing embodiments of the image detection method. The processor 82 may also be referred to as a CPU (Central Processing Unit). The processor 82 may be an integrated circuit chip with signal processing capability. The processor 82 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. In addition, the processor 82 may be implemented jointly by integrated circuit chips.

The above solution improves detection accuracy in multi-organ detection.

Please refer to FIG. 9, which is a schematic diagram of the framework of an embodiment of a computer-readable storage medium provided by an embodiment of the present invention. The computer-readable storage medium 90 stores program instructions 901 executable by a processor, and the program instructions 901 are configured to implement the steps of any of the foregoing embodiments of the training method for an image detection model, or the steps of any of the foregoing embodiments of the image detection method.

The above solution improves detection accuracy in multi-organ detection.

A computer program product of the training method for an image detection model or of the image detection method provided by the embodiments of the present invention includes a computer-readable storage medium storing program code, and the instructions included in the program code may be configured to execute the steps of the training method for an image detection model or of the image detection method described in the foregoing method embodiments; reference may be made to the foregoing method embodiments, and details are not repeated here.

An embodiment of the present invention further provides a computer program that, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product may be implemented by hardware, software, or a combination thereof. In one optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).

In the several embodiments provided by the present invention, it should be understood that the disclosed methods and devices may be implemented in other ways. For example, the device implementations described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; for instance, units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.

Units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this implementation.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or some of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Industrial Applicability
In the embodiments of the present invention, a sample medical image is acquired, and the sample medical image is pseudo-labeled with an actual region of at least one unlabeled organ; the sample medical image is detected with an original detection model to obtain a first detection result including a first predicted region of the unlabeled organ; the sample medical image is detected with an image detection model to obtain a second detection result including a second predicted region of the unlabeled organ, where the network parameters of the image detection model are determined based on the network parameters of the original detection model; and the network parameters of the original detection model are adjusted by using the differences between the first predicted region and the actual region and between the first predicted region and the second predicted region. In this way, detection accuracy can be improved in multi-organ detection.

60: Training device for image detection model
61: Image acquisition module
62: First detection module
63: Second detection module
64: Parameter adjustment module
70: Image detection device
71: Image acquisition module
72: Image detection module
80: Electronic device
81: Memory
82: Processor
90: Computer-readable storage medium
901: Program instructions
S11~S14, S111~S113, S31~S37, S51~S52: Steps

FIG. 1 is a schematic flowchart of an embodiment of a training method for an image detection model provided by an embodiment of the present invention;
FIG. 2 is a schematic flowchart of an embodiment of step S11 in FIG. 1;
FIG. 3 is a schematic flowchart of another embodiment of a training method for an image detection model provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of an embodiment of a training process of an image detection model provided by an embodiment of the present invention;
FIG. 5 is a schematic flowchart of an embodiment of an image detection method provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of the framework of an embodiment of a training device for an image detection model provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of the framework of an embodiment of an image detection device provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of the framework of an embodiment of an electronic device provided by an embodiment of the present invention;
FIG. 9 is a schematic diagram of the framework of an embodiment of a computer-readable storage medium provided by an embodiment of the present invention.

S11~S14: Steps

Claims (16)

1. A training method for an image detection model, comprising: acquiring a sample medical image, wherein the sample medical image is pseudo-labeled with an actual region of at least one unlabeled organ; detecting the sample medical image by using an original detection model to obtain a first detection result, wherein the first detection result comprises a first predicted region of the unlabeled organ; detecting the sample medical image by using the image detection model to obtain a second detection result, wherein the second detection result comprises a second predicted region of the unlabeled organ, and network parameters of the image detection model are determined based on network parameters of the original detection model; and adjusting the network parameters of the original detection model by using differences between the first predicted region and the actual region and between the first predicted region and the second predicted region.

2. The training method according to claim 1, wherein the original detection model comprises a first original detection model and a second original detection model, and the image detection model comprises a first image detection model corresponding to the first original detection model and a second image detection model corresponding to the second original detection model; the detecting the sample medical image by using the original detection model to obtain the first detection result comprises: performing, by using the first original detection model and the second original detection model respectively, the step of detecting the sample medical image to obtain the first detection result; the detecting the sample medical image by using the image detection model to obtain the second detection result comprises: performing, by using the first image detection model and the second image detection model respectively, the step of detecting the sample medical image to obtain the second detection result; and the adjusting the network parameters of the original detection model comprises: adjusting network parameters of the first original detection model by using differences between the first predicted region of the first original detection model and the actual region and between that first predicted region and the second predicted region of the second image detection model; and adjusting network parameters of the second original detection model by using differences between the first predicted region of the second original detection model and the actual region and between that first predicted region and the second predicted region of the first image detection model.

3. The training method according to claim 1 or 2, wherein the adjusting the network parameters of the original detection model by using the differences between the first predicted region and the actual region and between the first predicted region and the second predicted region comprises: determining a first loss value of the original detection model by using the difference between the first predicted region and the actual region; determining a second loss value of the original detection model by using the difference between the first predicted region and the second predicted region; and adjusting the network parameters of the original detection model by using the first loss value and the second loss value.

4. The training method according to claim 3, wherein the determining the first loss value of the original detection model by using the difference between the first predicted region and the actual region comprises at least one of the following: processing the first predicted region and the actual region by using a focal loss function to obtain a focal first loss value; and processing the first predicted region and the actual region by using a set similarity loss function to obtain a set similarity first loss value.

5. The training method according to claim 3, wherein the determining the second loss value of the original detection model by using the difference between the first predicted region and the second predicted region comprises: processing the first predicted region and the second predicted region by using a consistency loss function to obtain the second loss value.

6. The training method according to claim 3, wherein the adjusting the network parameters of the original detection model by using the first loss value and the second loss value comprises: weighting the first loss value and the second loss value to obtain a weighted loss value; and adjusting the network parameters of the original detection model by using the weighted loss value.

7. The training method according to claim 3, wherein the sample medical image further contains an actual region of a labeled organ, the first detection result further comprises a first predicted region of the labeled organ, and the second detection result further comprises a second predicted region of the labeled organ; the determining the first loss value comprises: determining the first loss value of the original detection model by using the differences between the first predicted regions of the unlabeled organ and of the labeled organ and the corresponding actual regions; and the determining the second loss value comprises: determining the second loss value of the original detection model by using the difference between the first predicted region of the unlabeled organ and the corresponding second predicted region.

8. The training method according to claim 1 or 2, wherein after the adjusting the network parameters of the original detection model, the method further comprises: updating the network parameters of the image detection model by using the network parameters adjusted in the current training and in several previous trainings.

9. The training method according to claim 8, wherein the updating the network parameters of the image detection model comprises: computing an average of the network parameters adjusted by the original detection model in the current training and in several previous trainings; and updating the network parameters of the image detection model to the average of the corresponding network parameters of the original detection model.

10. The training method according to claim 1 or 2, wherein the acquiring the sample medical image comprises: acquiring a medical image to be pseudo-labeled, wherein the medical image to be pseudo-labeled contains at least one unlabeled organ; detecting the medical image to be pseudo-labeled by using a single-organ detection model corresponding to each unlabeled organ respectively, so as to obtain an organ predicted region of each unlabeled organ; and pseudo-labeling the organ predicted region of the unlabeled organ as the actual region of the unlabeled organ, and taking the pseudo-labeled medical image as the sample medical image.

11. The training method according to claim 10, wherein the medical image to be pseudo-labeled contains at least one labeled organ, and before the detecting the medical image to be pseudo-labeled by using the single-organ detection model corresponding to each unlabeled organ, the method further comprises: training, by using the medical image to be pseudo-labeled, the single-organ detection model corresponding to the labeled organ in the medical image to be pseudo-labeled.

12. The training method according to claim 10, wherein the acquiring the medical image to be pseudo-labeled comprises: acquiring a three-dimensional medical image and preprocessing the three-dimensional medical image; and cropping the preprocessed three-dimensional medical image to obtain at least one two-dimensional medical image to be pseudo-labeled.

13. The training method according to claim 12, wherein the preprocessing the three-dimensional medical image comprises at least one of the following: adjusting a voxel resolution of the three-dimensional medical image to a preset resolution; normalizing voxel values of the three-dimensional medical image to a preset range by using a preset window value; and adding Gaussian noise to at least some voxels of the three-dimensional medical image.

14. An image detection method, comprising: acquiring a medical image to be detected, wherein the medical image to be detected contains a plurality of organs; and detecting the medical image to be detected by using an image detection model to obtain predicted regions of the plurality of organs, wherein the image detection model is trained by using the training method for an image detection model according to any one of claims 1 to 13.

15. An electronic device, comprising a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the training method for an image detection model according to any one of claims 1 to 13, or to implement the image detection method according to claim 14.

16. A computer-readable storage medium having program instructions stored thereon, wherein the program instructions, when executed by a processor, implement the training method for an image detection model according to any one of claims 1 to 13, or implement the image detection method according to claim 14.
TW110109420A 2020-04-30 2021-03-16 Image detection method and training method of related model, electronic device and computer-readable storage medium TW202145249A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010362766.X 2020-04-30
CN202010362766.XA CN111539947B (en) 2020-04-30 2020-04-30 Image detection method, related model training method, related device and equipment

Publications (1)

Publication Number Publication Date
TW202145249A true TW202145249A (en) 2021-12-01

Family

ID=71967825

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110109420A TW202145249A (en) 2020-04-30 2021-03-16 Image detection method and training method of related model, electronic device and computer-readable storage medium

Country Status (5)

Country Link
JP (1) JP2022538137A (en)
KR (1) KR20220016213A (en)
CN (1) CN111539947B (en)
TW (1) TW202145249A (en)
WO (1) WO2021218215A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539947B (en) * 2020-04-30 2024-03-29 上海商汤智能科技有限公司 Image detection method, related model training method, related device and equipment
CN112132206A (en) * 2020-09-18 2020-12-25 青岛商汤科技有限公司 Image recognition method, training method of related model, related device and equipment
CN113850179A (en) * 2020-10-27 2021-12-28 深圳市商汤科技有限公司 Image detection method, and training method, device, equipment and medium of related model
CN112200802B (en) * 2020-10-30 2022-04-26 上海商汤智能科技有限公司 Training method of image detection model, related device, equipment and storage medium
CN112669293A (en) * 2020-12-31 2021-04-16 上海商汤智能科技有限公司 Image detection method, training method of detection model, related device and equipment
CN112785573A (en) * 2021-01-22 2021-05-11 上海商汤智能科技有限公司 Image processing method and related device and equipment
CN112749801A (en) * 2021-01-22 2021-05-04 上海商汤智能科技有限公司 Neural network training and image processing method and device
CN114049344A (en) * 2021-11-23 2022-02-15 上海商汤智能科技有限公司 Image segmentation method, training method of model thereof, related device and electronic equipment
CN114429459A (en) * 2022-01-24 2022-05-03 上海商汤智能科技有限公司 Training method of target detection model and corresponding detection method
CN114155365B (en) * 2022-02-07 2022-06-14 北京航空航天大学杭州创新研究院 Model training method, image processing method and related device
CN114391828B (en) * 2022-03-01 2023-06-06 郑州大学 Active psychological care intervention system for cerebral apoplexy patient
CN117041531B (en) * 2023-09-04 2024-03-15 无锡维凯科技有限公司 Mobile phone camera focusing detection method and system based on image quality evaluation

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8170330B2 (en) * 2007-10-30 2012-05-01 Siemens Aktiengesellschaft Machine learning for tissue labeling segmentation
WO2018033154A1 (en) * 2016-08-19 2018-02-22 北京市商汤科技开发有限公司 Gesture control method, device, and electronic apparatus
CN108229267B (en) * 2016-12-29 2020-10-16 北京市商汤科技开发有限公司 Object attribute detection, neural network training and region detection method and device
JP6931579B2 (en) * 2017-09-20 2021-09-08 株式会社Screenホールディングス Live cell detection methods, programs and recording media
EP3474192A1 (en) * 2017-10-19 2019-04-24 Koninklijke Philips N.V. Classifying data
JP7325414B2 (en) * 2017-11-20 2023-08-14 コーニンクレッカ フィリップス エヌ ヴェ Training a First Neural Network Model and a Second Neural Network Model
JP7066385B2 (en) * 2017-11-28 2022-05-13 キヤノン株式会社 Information processing methods, information processing equipment, information processing systems and programs
CN109166107A (en) * 2018-04-28 2019-01-08 北京市商汤科技开发有限公司 A kind of medical image cutting method and device, electronic equipment and storage medium
CN109086656B (en) * 2018-06-06 2023-04-18 平安科技(深圳)有限公司 Airport foreign matter detection method, device, computer equipment and storage medium
CN109523526B (en) * 2018-11-08 2021-10-22 腾讯科技(深圳)有限公司 Tissue nodule detection and model training method, device, equipment and system thereof
CN109658419B (en) * 2018-11-15 2020-06-19 浙江大学 Method for segmenting small organs in medical image
CN110097557B (en) * 2019-01-31 2021-02-12 卫宁健康科技集团股份有限公司 Medical image automatic segmentation method and system based on 3D-UNet
CN110148142B (en) * 2019-05-27 2023-04-18 腾讯科技(深圳)有限公司 Training method, device and equipment of image segmentation model and storage medium
CN110188829B (en) * 2019-05-31 2022-01-28 北京市商汤科技开发有限公司 Neural network training method, target recognition method and related products
JP2021039748A (en) * 2019-08-30 2021-03-11 キヤノン株式会社 Information processor, information processing method, information processing system, and program
CN111028206A (en) * 2019-11-21 2020-04-17 万达信息股份有限公司 Prostate cancer automatic detection and classification system based on deep learning
CN111062390A (en) * 2019-12-18 2020-04-24 北京推想科技有限公司 Region-of-interest labeling method, device, equipment and storage medium
CN110969245B (en) * 2020-02-28 2020-07-24 北京深睿博联科技有限责任公司 Target detection model training method and device for medical image
CN111539947B (en) * 2020-04-30 2024-03-29 上海商汤智能科技有限公司 Image detection method, related model training method, related device and equipment

Also Published As

Publication number Publication date
KR20220016213A (en) 2022-02-08
CN111539947A (en) 2020-08-14
WO2021218215A1 (en) 2021-11-04
CN111539947B (en) 2024-03-29
JP2022538137A (en) 2022-08-31
