CN111724371B - Data processing method and device and electronic equipment

Data processing method and device and electronic equipment

Info

Publication number
CN111724371B
Authority
CN
China
Prior art keywords
labeling
loss
value
data set
pixel point
Prior art date
Legal status
Active
Application number
CN202010565168.2A
Other languages
Chinese (zh)
Other versions
CN111724371A (en)
Inventor
张耀
田疆
张杨
贺志强
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202010565168.2A
Publication of CN111724371A
Application granted
Publication of CN111724371B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30048 Heart; Cardiac
    • G06T 2207/30056 Liver; Hepatic
    • G06T 2207/30061 Lung
    • G06T 2207/30092 Stomach; Gastric

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a data processing method, a data processing device and an electronic device. The method comprises the following steps: a training image corresponding to a plurality of labeling data sets is obtained, where each labeling data set contains a labeling value corresponding to each pixel point of the training image, and the background area pixel points and the pixel points of different target objects are labeled by different labeling data sets respectively. On this basis, after the training image is input into a pre-constructed object segmentation model, test data sets respectively corresponding to the background area pixel points and to the pixel points of the different target objects can be obtained, each set containing a test value for each pixel point. Current loss data between the labeling data sets and the test data sets is then obtained, and model parameters in the object segmentation model are adjusted according to the loss data.

Description

Data processing method and device and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a data processing method, a data processing device, and an electronic device.
Background
Disease diagnosis based on medical imaging, e.g., computed tomography (CT) and magnetic resonance imaging (MRI), is of great importance in clinical decisions. Accurately segmenting the various organs in a medical image can provide necessary auxiliary information for diagnosis and treatment by doctors.
At present, a machine-learning-based medical image segmentation algorithm is generally adopted to segment the organs in a medical image.
However, in this approach a separate model is usually trained for the segmentation of each organ, and a large number of training samples must be labeled for each model. Segmentation of multiple organs in a medical image by a single model therefore cannot be achieved.
Disclosure of Invention
In view of this, the present application provides a data processing method, apparatus and electronic device, including:
a data processing method, comprising:
obtaining a training image, wherein the training image corresponds to N+1 labeling data sets, N is a positive integer greater than or equal to 2, each labeling data set comprises Q labeling values, each labeling value corresponds to one pixel point of the training image to which the labeling value belongs, a labeling value in the 1st labeling data set that is greater than a labeling threshold characterizes the pixel point to which it belongs as a background area pixel point, a labeling value in the i-th labeling data set that is greater than the labeling threshold characterizes the pixel point to which it belongs as belonging to the i-th class target object, and i is a positive integer greater than or equal to 2 and less than or equal to N+1;
inputting the training image into a pre-constructed object segmentation model to obtain a test result output by the object segmentation model, wherein the test result comprises N+1 test data sets, each test data set comprises Q test values, each test value corresponds to one pixel point of the training image to which the test value belongs, each test value in the 1st test data set characterizes the probability that the pixel point to which it belongs is a background area pixel point, and each test value in the i-th test data set characterizes the probability that the pixel point to which it belongs is a pixel point of the i-th class target object;
obtaining current loss data between the N+1 labeling data sets and the N+1 test data sets, wherein the current loss data comprises a first component corresponding to a first loss value and second components corresponding to N second loss values respectively, the first loss value is the loss value between the 1st labeling data set and the 1st test data set, and the i-th second loss value is the loss value between the i-th labeling data set and the i-th test data set;
wherein, in the case that no pixel point labeled as the i-th class target object exists in the training image, the i-th second component is 0; in the case that, for any class of target object, no pixel point labeled as that class exists in the training image, the first component is 0;
and adjusting model parameters in the object segmentation model at least according to the current loss data, wherein the object segmentation model is used for segmenting the image areas corresponding to the N classes of target objects in a target image.
In the above method, preferably, obtaining the current loss data between the N+1 labeling data sets and the N+1 test data sets includes:
obtaining a first loss value between the 1 st labeling data set and the 1 st testing data set by using a preset loss function;
respectively obtaining second loss values between the ith marked data set and the ith test data set by using the loss function;
multiplying the first loss value by a first coefficient to obtain a first component;
multiplying each second loss value by a second coefficient to obtain a second component corresponding to each second loss value;
summing the first component and the second component to obtain current loss data;
wherein the first coefficient is 0 when labeled pixel points are absent for some class of target object in the training image (the first coefficient is 1 only when all classes are labeled), and the i-th second coefficient is 0 when no pixel point labeled as the i-th class target object exists in the training image.
In the above method, preferably, the adjusting the model parameters in the object segmentation model at least according to the current loss data includes:
comparing the current loss data with previous loss data corresponding to a previous frame image input into the object segmentation model before the training image to obtain loss variation;
and adjusting model parameters in the object segmentation model when the loss variation satisfies an adjustment condition.
In the above method, preferably, the adjusting condition includes: the loss variation is greater than or equal to a loss threshold.
In the above method, preferably, the adjusting the model parameters in the object segmentation model includes:
and adjusting model parameters in the object segmentation model based on a gradient descent algorithm.
A data processing apparatus comprising:
an image obtaining unit, used for obtaining a training image, wherein the training image corresponds to N+1 labeling data sets, N is a positive integer greater than or equal to 2, each labeling data set comprises Q labeling values, each labeling value corresponds to one pixel point of the training image to which it belongs, a labeling value in the 1st labeling data set that is greater than a labeling threshold characterizes the pixel point to which it belongs as a background area pixel point, a labeling value in the i-th labeling data set that is greater than the labeling threshold characterizes the pixel point to which it belongs as belonging to the i-th class target object, and i is a positive integer greater than or equal to 2 and less than or equal to N+1;
an image testing unit, used for inputting the training image into a pre-constructed object segmentation model to obtain a test result output by the object segmentation model, wherein the test result comprises N+1 test data sets, each test data set comprises Q test values, each test value corresponds to one pixel point of the training image to which it belongs, each test value in the 1st test data set characterizes the probability that the pixel point to which it belongs is a background area pixel point, and each test value in the i-th test data set characterizes the probability that the pixel point to which it belongs is a pixel point of the i-th class target object;
a loss obtaining unit, configured to obtain current loss data between the N+1 labeling data sets and the N+1 test data sets, wherein the current loss data comprises a first component corresponding to a first loss value and second components corresponding to N second loss values respectively, the first loss value is the loss value between the 1st labeling data set and the 1st test data set, and the i-th second loss value is the loss value between the i-th labeling data set and the i-th test data set;
wherein, in the case that no pixel point labeled as the i-th class target object exists in the training image, the i-th second component is 0; in the case that, for any class of target object, no pixel point labeled as that class exists in the training image, the first component is 0;
and a parameter adjustment unit, used for adjusting model parameters in the object segmentation model at least according to the current loss data, the object segmentation model being used for segmenting the image areas corresponding to the N classes of target objects in a target image.
The above apparatus, preferably, wherein:
the loss obtaining unit is specifically configured to: obtaining a first loss value between the 1 st labeling data set and the 1 st testing data set by using a preset loss function; respectively obtaining second loss values between the ith marked data set and the ith test data set by using the loss function; multiplying the first loss value by a first coefficient to obtain a first component; multiplying each second loss value by a second coefficient to obtain a second component corresponding to each second loss value; summing the first component and the second component to obtain loss data;
wherein the first coefficient is 0 when labeled pixel points are absent for some class of target object in the training image (the first coefficient is 1 only when all classes are labeled), and the i-th second coefficient is 0 when no pixel point labeled as the i-th class target object exists in the training image.
The above apparatus, preferably, wherein:
the parameter adjusting unit is specifically configured to: comparing the current loss data with previous loss data corresponding to a previous frame image input into the object segmentation model before the training image to obtain loss variation; and adjusting model parameters in the object segmentation model when the loss variation satisfies an adjustment condition.
The above device, preferably, the adjustment condition includes: the loss variation is greater than or equal to a loss threshold.
An electronic device, comprising:
a memory for storing an application program and data generated by the operation of the application program;
a processor for executing the application program to realize:
obtaining a training image, wherein the training image corresponds to N+1 labeling data sets, N is a positive integer greater than or equal to 2, each labeling data set comprises Q labeling values, each labeling value corresponds to one pixel point of the training image to which the labeling value belongs, a labeling value in the 1st labeling data set that is greater than a labeling threshold characterizes the pixel point to which it belongs as a background area pixel point, a labeling value in the i-th labeling data set that is greater than the labeling threshold characterizes the pixel point to which it belongs as belonging to the i-th class target object, and i is a positive integer greater than or equal to 2 and less than or equal to N+1;
inputting the training image into a pre-constructed object segmentation model to obtain a test result output by the object segmentation model, wherein the test result comprises N+1 test data sets, each test data set comprises Q test values, each test value corresponds to one pixel point of the training image to which the test value belongs, each test value in the 1st test data set characterizes the probability that the pixel point to which it belongs is a background area pixel point, and each test value in the i-th test data set characterizes the probability that the pixel point to which it belongs is a pixel point of the i-th class target object;
obtaining current loss data between the N+1 labeling data sets and the N+1 test data sets, wherein the current loss data comprises a first component corresponding to a first loss value and second components corresponding to N second loss values respectively, the first loss value is the loss value between the 1st labeling data set and the 1st test data set, and the i-th second loss value is the loss value between the i-th labeling data set and the i-th test data set;
wherein, in the case that no pixel point labeled as the i-th class target object exists in the training image, the i-th second component is 0; in the case that, for any class of target object, no pixel point labeled as that class exists in the training image, the first component is 0;
and adjusting model parameters in the object segmentation model at least according to the current loss data, wherein the object segmentation model is used for segmenting the image areas corresponding to the N classes of target objects in a target image.
According to the technical scheme, after a training image corresponding to a plurality of labeling data sets is obtained, since each labeling data set contains a labeling value corresponding to each pixel point of the training image, and the background area pixel points and the pixel points of different target objects are labeled by different labeling data sets respectively, test data sets respectively corresponding to the background area pixel points and to the pixel points of the different target objects can be obtained after the training image is input into the pre-constructed object segmentation model, each set containing a test value for each pixel point. On this basis, current loss data between the labeling data sets and the test data sets is obtained, and model parameters in the object segmentation model are adjusted according to the loss data. The obtained current loss data contains both a first component corresponding to the first loss value for the background area pixel points and second components corresponding to the second loss values for the different target objects respectively; when no pixel point labeled as a certain class of target object exists in the training image, the corresponding second component is 0, and when labeled pixel points are absent for some class of target object, the first component is 0. Therefore, when the loss data is calculated, the losses between the labeling data and the test data are calculated separately for the background area and for each class of target object, which avoids the situation in which, when labeling data of some class of target object is absent from the training image, the pixels of that object are treated as background and the loss calculation becomes erroneous. A training image lacking labeling data of some classes of target objects can thus still be used to accurately train the object segmentation model, and the finally trained model can segment all classes of target objects; that is, even if labeling data of some classes of target objects is absent from the training image, training errors caused by the missing labels are avoided, so that the trained model can accurately segment the various target objects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a data processing method according to a first embodiment of the present disclosure;
FIGS. 2 and 3 are respectively exemplary diagrams of embodiments of the present application;
FIG. 4 is a partial flow chart of a data processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a data processing apparatus according to a second embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to a third embodiment of the present application;
fig. 7 is a schematic diagram of training a segmentation model of a CT image in the present application.
Detailed Description
At present, medical image segmentation algorithms based on machine learning achieve very good results. The inventors of the present application found through study that most of these algorithms are directed at a specific organ, with a separate model for each organ, which adds significant computational overhead. In addition, machine-learning-based algorithms rely on a large amount of labeling data, while existing data sets often only carry labels for specific organs; for multi-organ segmentation such data is incompletely labeled and cannot be used directly.
Through further research, the inventors of the present application propose a technical scheme that realizes multi-object (organ) segmentation using partially labeled labeling data, as follows:
Firstly, a training image corresponding to a plurality of labeling data sets is obtained. Since each labeling data set contains a labeling value corresponding to each pixel point of the training image, and different labeling data sets are used to label the background area pixel points and the pixel points of different target objects respectively, test data sets corresponding to the background area pixel points and to the pixel points of the different target objects respectively can be obtained after the training image is input into a pre-constructed object segmentation model, each set containing a test value for each pixel point. On this basis, current loss data between the labeling data sets and the test data sets is obtained, and model parameters in the object segmentation model are adjusted according to the loss data. The obtained current loss data contains a first loss value for the background area pixel points and second loss values for the different target objects respectively; the corresponding second loss value is 0 when no pixel point labeled as a certain target object exists in the training image, and the first loss value is 0 when the training image lacks labeled pixel points for one or more of the target objects.
Therefore, according to this technical scheme, when the loss data is calculated, the losses between the labeling data and the test data are calculated separately for the background area and for each class of target object. This avoids the situation in which, when labeling data of some class of target object is absent from the training image, the pixels of that object are treated as background and the loss calculation becomes erroneous. A training image lacking labeling data of some classes of target objects can thus still be used to accurately train the object segmentation model, and the finally trained model can segment all classes of target objects; even if labeling data of some classes of target objects is absent from the training image, training errors caused by the missing labels are avoided, so that the trained model can accurately segment the various target objects.
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Fig. 1 is a flowchart of an implementation of a data processing method according to a first embodiment of the present application. The method is suitable for an electronic device capable of processing image data, such as a computer or a server. The technical scheme in this embodiment is mainly used for training an object segmentation model with partially labeled labeling data, so that the trained model can realize multi-object (organ) segmentation.
Specifically, the method in this embodiment may include the following steps:
step 101: a training image is obtained.
The training image may be a medical image such as a CT or MRI image, or another sample image with labeling data that needs to be segmented for multiple classes of target objects. The training image corresponds to N+1 labeling data sets, where N is a positive integer greater than or equal to 2 and depends on the number of classes of target objects to be segmented. Referring to fig. 2, one training image corresponds to N+1 labeling data sets: the 1st labeling data set corresponds to the background region, and the remaining N labeling data sets each correspond to one class of target object.
For example, if the object segmentation model to be trained in this embodiment is to implement image segmentation of 6 internal organs, such as segmenting the heart, lung, stomach, gall bladder, liver and pancreas in one image, and the training images include labeling data of the 6 organs, N may be set to 6; if the object segmentation model to be trained only needs to realize image segmentation of 3 organs, for example the liver, gall bladder and pancreas, and the images contain labeling data of the 3 organs, N may be set to 3, and so on.
It should be noted that, each labeling data set includes Q labeling values, and each labeling value corresponds to one pixel point of the associated training image. Referring to fig. 2, a training image has Q pixels, and for a background area and each target object, there is one set of labeling data, where each set of labeling data includes labeling values corresponding to the Q pixels.
For the background region, each labeling value in the corresponding labeling data set characterizes a confidence value that the pixel point to which it belongs is a background region pixel point; on this basis, a labeling value in the 1st of the N+1 labeling data sets that is greater than a labeling threshold characterizes the pixel point to which it belongs as a background region pixel point.
For any one of the N classes of target objects, each labeling value in the corresponding labeling data set characterizes a confidence value that the pixel point to which it belongs is a pixel point of that target object; on this basis, a labeling value in the i-th of the N+1 labeling data sets that is greater than the labeling threshold characterizes the pixel point to which it belongs as a pixel point of the corresponding target object, where i is a positive integer greater than or equal to 2 and less than or equal to N+1.
As shown in fig. 2, a training image has Q pixel points and each pixel point has N+1 labeling values; the 1st labeling value of each pixel point characterizes the confidence that the pixel point is a background region pixel point, and the remaining labeling values characterize the confidence that it is a pixel point of the corresponding class of target object.
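To make the data layout concrete, the following is a minimal sketch, in Python/NumPy with illustrative names not taken from the patent, of how the N+1 labeling data sets could be built, assuming labels arrive as an integer map in which 0 marks background or unlabeled pixels and k in 1..N marks pixels of the k-th class target object. Row 0 below plays the role of the patent's 1st labeling data set, and row i that of its (i+1)-th set:

```python
import numpy as np

def build_labeling_sets(label_map: np.ndarray, num_classes: int) -> np.ndarray:
    """Expand an H x W integer label map into N+1 labeling data sets of Q values.

    Returns an (N+1, Q) array: row 0 is the background set, row i (1 <= i <= N)
    the set for the i-th class target object. A value of 1.0 stands in for a
    labeling value above the labeling threshold.
    """
    q = label_map.size
    flat = label_map.reshape(q)
    sets = np.zeros((num_classes + 1, q), dtype=np.float32)
    for i in range(1, num_classes + 1):
        sets[i] = (flat == i).astype(np.float32)  # labeling values of object i
    sets[0] = (flat == 0).astype(np.float32)      # labeling values of background
    return sets
```

Note that in a partially labeled image the value 0 also covers pixels of unlabeled organs, which is exactly why the loss components described later must be masked.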
Step 102: and inputting the training image into a pre-constructed object segmentation model to obtain a test result output by the object segmentation model.
The test result comprises N+1 test data sets, each corresponding to one labeling data set. Each test data set contains Q test values; each test value corresponds to one pixel point of the training image and thus to the N+1 labeling values of that pixel point. Each test value in the 1st test data set characterizes the probability that the pixel point to which it belongs is a background area pixel point, and each test value in the i-th test data set characterizes the probability that the pixel point to which it belongs is a pixel point of the i-th class target object.
As shown in fig. 3, the segmentation processing of the training image by the object segmentation model yields N+1 test data sets, i.e., N+1 sets each formed by the test values of the Q pixel points, where each test value characterizes the probability that the pixel point to which it belongs is a background area pixel point or a pixel point of the corresponding target object.
For example, if the object segmentation model trained with the training image is to segment 6 organs such as the heart, lung, stomach, gall bladder, liver and pancreas, then one frame of training image corresponds to 7 labeling data sets and 7 test data sets:
wherein each labeling value in the 1st labeling data set characterizes whether the pixel point corresponding to it is a background region pixel point, and each labeling value in the 2nd to 7th labeling data sets characterizes whether the corresponding pixel point is a pixel point of the corresponding organ; for example, each labeling value in the 2nd labeling data set characterizes whether the corresponding pixel point is a heart pixel point, each labeling value in the 3rd labeling data set characterizes whether it is a lung pixel point, each labeling value in the 4th labeling data set characterizes whether it is a stomach pixel point, and so on;
and each test value in the 1st test data set characterizes whether the pixel point corresponding to it is a background area pixel point, while each test value in the 2nd to 7th test data sets characterizes whether the corresponding pixel point is a pixel point of the corresponding organ; for example, each test value in the 2nd test data set characterizes whether the corresponding pixel point is a heart pixel point, each test value in the 3rd test data set characterizes whether it is a lung pixel point, each test value in the 4th test data set characterizes whether it is a stomach pixel point, and so on.
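The patent only requires each test value to be a probability; a common way to obtain such values, shown here as an assumption rather than as the patent's prescribed output head, is a softmax over N+1 output channels of the model:

```python
import numpy as np

def logits_to_test_sets(logits: np.ndarray) -> np.ndarray:
    """logits: (N+1, Q) raw model scores, one row per test data set.
    Returns (N+1, Q) probabilities; each pixel's N+1 test values sum to 1."""
    z = logits - logits.max(axis=0, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)
```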
Step 103: current loss data between the n+1 labeled data sets and the n+1 test data sets is obtained.
The current loss data comprises a first component corresponding to a first loss value and second components corresponding to the N second loss values respectively, wherein the first loss value is the loss value between the 1st labeling data set and the 1st test data set, and the i-th of the N second loss values is the loss value between the i-th labeling data set and the i-th test data set.
For example, in the case that the object segmentation model trained with the training image needs to segment 6 organs such as the heart, lung, stomach, gall bladder, liver and pancreas, one frame of training image corresponds to 7 labeling data sets and 7 test data sets; accordingly, in this embodiment: a first loss value between the 1st labeling data set and the 1st test data set is obtained, and a first component representing the training loss on the background region is obtained based on it; a second loss value between the 2nd labeling data set and the 2nd test data set is obtained, and a second component corresponding to the 1st class target object, e.g. the heart, is obtained based on it; a second loss value between the 3rd labeling data set and the 3rd test data set is obtained, and a second component corresponding to the 2nd class target object, e.g. the lung, is obtained based on it; a second loss value between the 4th labeling data set and the 4th test data set is obtained, and a second component corresponding to the 3rd class target object, e.g. the stomach, is obtained based on it; and so on.
In the case that no pixel point labeled as the i-th class target object exists in the training image, that is, the labeling values of the i-th class target object's pixel points are absent from the training image, the i-th second component is 0. For example, if the object segmentation model trained with the training image needs to segment 6 target objects but the training image only contains labeling values for the pixel points of 5 of them, the second component corresponding to the unlabeled target object is 0. Moreover, in the case that labeled pixel points are absent for some class of target object, the first component is 0. For example, if the object segmentation model needs to segment 6 target objects but the training image lacks the labeling values of the pixel points of one or more of the 6 classes, the first component for the background area is 0.
Based on this, in this embodiment, loss data calculation is performed on the background area and various target objects at the same time, so as to obtain corresponding loss values, and current loss data is formed.
Step 104: model parameters in the object segmentation model are adjusted based at least on the current loss data.
When the model parameters in the object segmentation model are adjusted in this embodiment, the parameters may be increased or decreased based on the principle of decreasing the loss data, so that the loss data produced by the object segmentation model on the next training image decreases. With repeated training over multiple frames of training images, the loss data of the object segmentation model can be reduced to a minimum and no longer change; at that point, the training of the object segmentation model is completed.
It should be noted that the trained object segmentation model is used for segmenting the image region corresponding to the N types of target objects in the target image. The target image may be an image such as a CT image or an MRI image, which needs to be segmented in an image area where multiple types of target objects are located.
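The patent does not spell out the inference step; one natural reading, sketched here as an assumption, is to take the most probable of the N+1 test data sets at each pixel, yielding 0 for background and i for the i-th class target object:

```python
import numpy as np

def segment(test_sets: np.ndarray, height: int, width: int) -> np.ndarray:
    """test_sets: (N+1, Q) probabilities output by the object segmentation model.
    Returns an H x W map: 0 = background, i = i-th class target object."""
    return test_sets.argmax(axis=0).reshape(height, width)
```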
As can be seen from the foregoing technical solution, in the data processing method provided in the first embodiment of the present application, after a training image corresponding to a plurality of labeling data sets is obtained, since each labeling data set contains a labeling value corresponding to each pixel point of the training image, and the background area pixel points and the pixel points of different target objects are labeled by different labeling data sets respectively, test data sets respectively corresponding to the background area pixel points and to the pixel points of the different target objects can be obtained after the training image is input into the pre-constructed object segmentation model, each set containing a test value for each pixel point. On this basis, current loss data between the labeling data sets and the test data sets is obtained, and model parameters in the object segmentation model are adjusted according to the loss data. The obtained current loss data contains both a first component corresponding to the first loss value for the background area pixel points and second components corresponding to the second loss values for the different target objects respectively; when no pixel point labeled as a certain class of target object exists in the training image, the corresponding second component is 0, and when labeled pixel points are absent for some class of target object, the first component is 0. Therefore, when the loss data is calculated, the losses between the labeling data and the test data are calculated separately for the background area and for each class of target object, which avoids the situation in which, when labeling data of some class of target object is absent from the training image, the pixels of that object are treated as background and the loss calculation becomes erroneous. A training image lacking labeling data of some classes of target objects can thus still be used to accurately train the object segmentation model, and the finally trained model can segment all classes of target objects; that is, even if labeling data of some classes of target objects is absent from the training image, training errors caused by the missing labels are avoided, so that the trained model can accurately segment the various target objects.
In one implementation, when obtaining the current loss data between the N+1 labeling data sets and the N+1 test data sets in step 103, the current loss data may be obtained as shown in fig. 4:
step 401: and obtaining a first loss value between the 1 st marked data set and the 1 st test data set by using a preset loss function.
The loss function may be a loss function constructed based on a regression algorithm or a multi-classification algorithm. Based on the loss function, in this embodiment loss calculation is performed on the labeling data and the test data corresponding to the background area pixel points, i.e., a first loss value between the labeling values of the 1st labeling data set and the test values of the 1st test data set is obtained. The first loss value represents, for the background area pixel points, the difference between the test values obtained after the segmentation test of the object segmentation model and the labels: the larger the first loss value, the worse the segmentation effect of the object segmentation model on the background area of the training image; the smaller the first loss value, the better that effect.
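The concrete loss function is left open here; as one illustrative stand-in (an assumption, not the patent's prescribed choice), a per-set Dice loss between one labeling data set and the matching test data set could look like:

```python
import numpy as np

def dice_loss(test_set: np.ndarray, label_set: np.ndarray, eps: float = 1e-6) -> float:
    """Loss between one test data set and one labeling data set (Q values each).
    0 when they coincide, approaching 1 as the overlap vanishes."""
    inter = (test_set * label_set).sum()
    return 1.0 - (2.0 * inter + eps) / (test_set.sum() + label_set.sum() + eps)
```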
Step 402: Second loss values between the i-th labeling data sets and the i-th test data sets are respectively obtained by using the loss function.
That is, in this embodiment, loss calculation is performed for the labeling data and the test data corresponding to each class of target object: for the 2nd to (N+1)-th labeling data sets and the 2nd to (N+1)-th test data sets respectively corresponding to the classes of target objects, each labeling data set and the test data set corresponding to the same class of target object are processed with the loss function to obtain a second loss value. The second loss value represents, for the pixel points of the corresponding class of target object (such as the liver or the heart), the difference between the test values obtained after the segmentation test and the labels: the larger the second loss value, the worse the segmentation effect of the object segmentation model on that class of target object in the training image; the smaller the second loss value, the better that effect.
Step 403: The first loss value is multiplied by a first coefficient to obtain the first component, and each second loss value is multiplied by a second coefficient to obtain the corresponding second component.
The first coefficient is 0 when labeled pixel points are absent for some class of target object in the training image; this avoids the inaccurate loss calculation that would result from treating the pixel points of the unlabeled class as background area pixel points. When pixel points labeled as every class of target object exist in the training image, the first coefficient is 1, and the corresponding first component equals the first loss value. The i-th second coefficient is 0 when no pixel point labeled as the i-th class target object exists in the training image; setting this coefficient to 0 avoids the inaccurate loss calculation that the missing labeling values of the i-th class target object would otherwise cause, and makes the second component corresponding to the i-th class target object 0. When pixel points labeled as the i-th class target object exist in the training image, the corresponding second coefficient is 1, and the corresponding second component equals the second loss value of the i-th class target object.
For example, when the object segmentation model trained with the training image is to segment 6 organs such as the heart, lung, stomach, gall bladder, liver and pancreas, after the test and loss calculation are performed on one frame of training image, 1 first loss value is obtained for the background area and 6 second loss values are obtained, corresponding to the heart, lung, stomach, gall bladder, liver and pancreas respectively. If no pixel point labeled as the heart exists in the training image, the first coefficient is set to 0, so that the first component is 0; when labeling values of all the organs exist in the training image, the first coefficient is 1, and the corresponding first component is the first loss value. If no pixel point labeled as the lung exists in the training image, the second coefficient corresponding to the lung is set to 0, so that the second component corresponding to the lung is 0; if pixel points labeled as the liver exist in the training image, the second coefficient corresponding to the liver is 1, and the corresponding second component is the second loss value for the liver.
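The coefficient rule just described can be sketched as follows (Python/NumPy, names illustrative): m_i is 1 only if the i-th class target object has at least one labeled pixel point in this training image, and m_0 is 1 only if every class does:

```python
import numpy as np

def coefficients(label_sets: np.ndarray) -> np.ndarray:
    """label_sets: (N+1, Q) labeling data sets; returns [m_0, m_1, ..., m_N]."""
    m = np.zeros(label_sets.shape[0], dtype=np.float32)
    present = label_sets[1:].any(axis=1)        # per object: any labeled pixel?
    m[1:] = present.astype(np.float32)          # m_i = 1 iff object i is labeled
    m[0] = 1.0 if present.all() else 0.0        # m_0 = 1 iff all objects labeled
    return m
```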
Step 404: the first component and the second component are summed to obtain current loss data.
In a specific implementation, the current loss data in this embodiment may refer to the following formula (1):
$$\mathrm{loss}(S,P) = m_0\,\mathrm{loss}(s_0,p_0) + \sum_{i=1}^{C} m_i\,\mathrm{loss}(s_i,p_i) \qquad (1)$$
wherein S denotes the N+1 test data sets and P the N+1 labeling data sets; s_0 is the 1st test data set and p_0 the 1st labeling data set; s_i is the i-th test data set and p_i the i-th labeling data set; C is the number of classes of target objects; m_0 is the first coefficient corresponding to the 1st test data set and m_i the second coefficient corresponding to the i-th test data set; loss(S, P) is the current loss data; loss(s_0, p_0) is the first loss value and m_0 loss(s_0, p_0) the first component; loss(s_i, p_i) is the second loss value corresponding to the i-th class target object and m_i loss(s_i, p_i) the corresponding second component. When labeling values of the i-th class target object exist in the training image, m_i is 1, otherwise m_i is 0; when labeling values of all classes of target objects exist in the training image, m_0 is 1, otherwise m_0 is 0. On this basis, the current loss data is obtained by summing these components.
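Assembling formula (1) from the illustrative helpers sketched above (dice_loss and coefficients are assumptions introduced earlier, not names from the patent):

```python
import numpy as np

def current_loss(test_sets: np.ndarray, label_sets: np.ndarray) -> float:
    """test_sets, label_sets: (N+1, Q) arrays; computes loss(S, P) of eq. (1)."""
    m = coefficients(label_sets)                            # [m_0, ..., m_N]
    total = m[0] * dice_loss(test_sets[0], label_sets[0])   # first component
    for i in range(1, label_sets.shape[0]):                 # N second components
        total += m[i] * dice_loss(test_sets[i], label_sets[i])
    return float(total)
```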
In one implementation, when the model parameters in the object segmentation model are adjusted according to at least the current loss data in step 104, this may be achieved as follows:
First, the current loss data is compared with previous loss data corresponding to a previous frame image input into the object segmentation model before the training image, to obtain a loss variation. Specifically, the difference between the previous loss data and the current loss data may be used as the loss variation amount.
Then it is judged whether the loss variation meets an adjustment condition; if so, the model parameters in the object segmentation model are adjusted.
The adjustment condition may be that the loss variation is greater than or equal to a loss threshold: when the current loss data increases or decreases relative to the previous loss data and the change exceeds the loss threshold, the loss variation is considered to meet the adjustment condition, and the model parameters in the object segmentation model are adjusted. If the current loss data is unchanged relative to the previous loss data, or the change is small enough to fall below the loss threshold, the loss variation is considered not to meet the adjustment condition; the test data of the object segmentation model on the training image is then considered very close to the corresponding labeling data, i.e., the test accuracy of the object segmentation model is already high.
Specifically, when the model parameters in the object segmentation model are adjusted in this embodiment, they may be adjusted based on a gradient descent algorithm. For example, based on a gradient descent algorithm, model parameters in the object segmentation model are increased or decreased by back-propagating gradients, so that the loss data produced when the object segmentation model tests the next training image decreases, until the loss data is reduced to a minimum and no longer changes, i.e., the training of the object segmentation model is completed.
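Putting the adjustment condition and gradient descent together, a minimal PyTorch-flavored training loop might look as follows; `model`, a differentiable `loss_fn` implementing formula (1), and the frame iterator are assumed to exist, so this shows the control flow only, not the patent's exact procedure:

```python
import torch

def train(model, loss_fn, frames, lr=1e-3, loss_threshold=1e-4):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    prev_loss = None
    for image, label_sets in frames:             # one frame of training data at a time
        test_sets = model(image)                 # N+1 test data sets (probabilities)
        loss = loss_fn(test_sets, label_sets)    # current loss data, formula (1)
        change = None if prev_loss is None else abs(prev_loss - loss.item())
        if change is None or change >= loss_threshold:  # adjustment condition met
            optimizer.zero_grad()
            loss.backward()                      # back-propagate gradients
            optimizer.step()                     # increase/decrease model parameters
        prev_loss = loss.item()
    return model
```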
Referring to fig. 5, a schematic structural diagram of a data processing apparatus according to a second embodiment of the present application is shown; the apparatus may be configured in an electronic device capable of image data processing, such as a computer or a server. The technical scheme in this embodiment is mainly used for training the object segmentation model with partially labeled labeling data, so that the trained model can realize multi-object (organ) segmentation.
Specifically, the apparatus in this embodiment may include the following structure:
An image obtaining unit 501, configured to obtain a training image, wherein the training image corresponds to N+1 labeling data sets, N is a positive integer greater than or equal to 2, each labeling data set comprises Q labeling values, each labeling value corresponds to one pixel point of the training image to which it belongs, a labeling value in the 1st labeling data set that is greater than a labeling threshold characterizes the pixel point to which it belongs as a background area pixel point, a labeling value in the i-th labeling data set that is greater than the labeling threshold characterizes the pixel point to which it belongs as belonging to the i-th class target object, and i is a positive integer greater than or equal to 2 and less than or equal to N+1;
an image testing unit 502, configured to input the training image into a pre-constructed object segmentation model to obtain a test result output by the object segmentation model, wherein the test result comprises N+1 test data sets, each test data set comprises Q test values, each test value corresponds to one pixel point of the training image to which it belongs, each test value in the 1st test data set characterizes the probability that the pixel point to which it belongs is a background area pixel point, and each test value in the i-th test data set characterizes the probability that the pixel point to which it belongs is a pixel point of the i-th class target object;
a loss obtaining unit 503, configured to obtain current loss data between the N+1 labeling data sets and the N+1 test data sets, wherein the current loss data comprises a first component corresponding to a first loss value and second components corresponding to N second loss values respectively, the first loss value is the loss value between the 1st labeling data set and the 1st test data set, and the i-th second loss value is the loss value between the i-th labeling data set and the i-th test data set;
wherein, in the case that no pixel point labeled as the i-th class target object exists in the training image, the i-th second component is 0; in the case that, for any class of target object, no pixel point labeled as that class exists in the training image, the first component is 0;
and a parameter adjustment unit 504, configured to adjust model parameters in the object segmentation model at least according to the current loss data, the object segmentation model being used for segmenting the image areas corresponding to the N classes of target objects in a target image.
As can be seen from the above technical solution, in the data processing apparatus provided in the second embodiment of the present application, after a training image corresponding to a plurality of labeling data sets is obtained, since each labeling data set contains a labeling value corresponding to each pixel point of the training image, and the background area pixel points and the pixel points of different target objects are labeled by different labeling data sets respectively, test data sets respectively corresponding to the background area pixel points and to the pixel points of the different target objects can be obtained after the training image is input into the pre-constructed object segmentation model, each set containing a test value for each pixel point. On this basis, current loss data between the labeling data sets and the test data sets is obtained, and model parameters in the object segmentation model are adjusted according to the loss data. The obtained current loss data contains both a first component corresponding to the first loss value for the background area pixel points and second components corresponding to the second loss values for the different target objects respectively; when no pixel point labeled as a certain class of target object exists in the training image, the corresponding second component is 0, and when labeled pixel points are absent for some class of target object, the first component is 0. Therefore, in this embodiment, when the loss data is calculated, the losses between the labeling data and the test data are calculated separately for the background area and for each class of target object, which avoids the situation in which, when labeling data of some class of target object is absent from the training image, the pixels of that object are treated as background and the loss calculation becomes erroneous. A training image lacking labeling data of some classes of target objects can thus still be used to accurately train the object segmentation model, and the finally trained model can segment all classes of target objects; that is, even if labeling data of some classes of target objects is absent from the training image, training errors caused by the missing labels are avoided, so that the trained model can accurately segment the various target objects.
In one implementation, the loss obtaining unit 503 is specifically configured to: obtaining a first loss value between the 1 st labeling data set and the 1 st testing data set by using a preset loss function; respectively obtaining second loss values between the ith marked data set and the ith test data set by using the loss function; multiplying the first loss value by a first coefficient to obtain a first component; multiplying each second loss value by a second coefficient to obtain a second component corresponding to each second loss value; summing the first component and the second component to obtain loss data;
wherein the first coefficient is 0 when labeled pixel points are absent for some class of target object in the training image (the first coefficient is 1 only when all classes are labeled), and the i-th second coefficient is 0 when no pixel point labeled as the i-th class target object exists in the training image.
In one implementation, the parameter adjustment unit 504 is specifically configured to: comparing the current loss data with previous loss data corresponding to a previous frame image input into the object segmentation model before the training image to obtain loss variation; and adjusting model parameters in the object segmentation model when the loss variation satisfies an adjustment condition.
Optionally, the adjusting conditions include: the loss variation is greater than or equal to a loss threshold.
It should be noted that, the specific implementation of each unit in this embodiment may refer to the corresponding content in the foregoing, which is not described in detail herein.
Referring to fig. 6, a schematic structural diagram of an electronic device according to a third embodiment of the present application is shown; the electronic device may be one capable of image data processing, such as a computer or a server. The technical scheme in this embodiment is mainly used for training the object segmentation model with partially labeled labeling data, so that the trained model can realize multi-object (organ) segmentation.
Specifically, the electronic device in this embodiment may include the following structure:
a memory 601 for storing an application program and data generated by the running of the application program;
a processor 602, configured to execute the application program to implement:
obtaining training images, wherein the training images correspond to N+1 labeling data sets, N is a positive integer greater than or equal to 2, each labeling data set comprises Q labeling values, each labeling value corresponds to one pixel point of the training image to which the labeling value belongs, the pixel point to which the labeling value belongs is characterized as belonging to a background area pixel point under the condition that each labeling value in the 1 st labeling data set is greater than a labeling threshold value, the pixel point to which the labeling value belongs is characterized as belonging to an ith target object under the condition that each labeling value in the ith labeling data set is greater than the labeling threshold value, and i is a positive integer greater than or equal to 2 and less than or equal to N+1;
Inputting the training image into a pre-constructed object segmentation model to obtain a test result output by the object segmentation model, wherein the test result comprises N+1 test data sets, each test data set comprises Q test values, each test value corresponds to one pixel point of the training image to which the test value belongs, each test value in the 1 st test data set represents the probability that the pixel point to which the test value belongs to a background area pixel point, and each test value in the i th test data set represents the probability that the pixel point to which the test value belongs to an i-th target object;
obtaining current loss data between the N+1 labeling data sets and the N+1 test data sets, wherein the current loss data comprises a first component corresponding to a first loss value and second components corresponding to N second loss values, the first loss value is the loss value between the 1st labeling data set and the 1st test data set, and the i-th second loss value is the loss value between the i-th labeling data set and the i-th test data set;
wherein, in the case that no pixel point in the training image is marked as the i-th type target object, the i-th second component is 0; and in the case that, for any one of the N types of target objects, no pixel point in the training image is marked as that type, the first component is 0;
and adjusting model parameters in the object segmentation model at least according to the current loss data, the object segmentation model being used for segmenting the image areas corresponding to the N types of target objects in a target image.
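The following is a minimal sketch, assuming a PyTorch implementation, of how the N+1 labeling data sets and N+1 test data sets just enumerated could be represented; the stand-in convolutional model, the image size and the random labels are all assumptions, since the patent does not specify a network architecture or framework:

```python
import torch
import torch.nn.functional as F

N, H, W = 3, 64, 64                        # N = 3 target object types, Q = H * W pixels

# Hypothetical partially labeled mask: value i marks the i-th type target
# object, 0 marks background/unlabeled pixel points.
label_mask = torch.randint(0, N + 1, (H, W))
label_sets = torch.stack([(label_mask == i).float().flatten()
                          for i in range(N + 1)])   # (N+1, Q) labeling values in {0, 1}

# Stand-in for the pre-constructed object segmentation model.
model = torch.nn.Conv2d(1, N + 1, kernel_size=3, padding=1)
training_image = torch.randn(1, 1, H, W)

logits = model(training_image)             # (1, N+1, H, W) score maps
probs = F.softmax(logits, dim=1)           # per-pixel probabilities over N+1 classes
test_sets = probs[0].flatten(1)            # (N+1, Q) test values
```

Each row of label_sets plays the role of one labeling data set and each row of test_sets one test data set, with one value per pixel point.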
As can be seen from the above technical solution, in the electronic device provided in the third embodiment of the present application, after a training image corresponding to a plurality of labeling data sets is obtained, each labeling data set includes a labeling value for each pixel point of the training image, and background area pixel points and the pixel points of different target objects are labeled with different labeling data sets. After the training image is input into the pre-constructed object segmentation model, test data sets respectively corresponding to background area pixel points and to the pixel points of the different target objects are obtained, each containing a test value for every pixel point. Current loss data between the labeling data sets and the test data sets can then be obtained, and the model parameters in the object segmentation model adjusted according to this loss data. The current loss data includes both a first component corresponding to the first loss value of the background area pixel points and second components corresponding to the second loss values of the different target objects; when the training image contains no pixel points labeled as a certain type of target object, the corresponding component in the current loss data is 0. Therefore, when calculating the loss data, this embodiment computes the loss between the labeling data and the test data separately for the background area and for each type of target object, which avoids mistaken loss calculations in which the pixels of an unlabeled target object class are treated as background. Training images that lack labeling data for some target object classes can thus still be used to accurately train the object segmentation model, and the finally trained model can segment all target object classes; that is, even if the labeling data of some classes is absent from a training image, no training error is introduced by the missing labels, so the trained model can accurately segment the various target objects.
Taking the training of a segmentation model that segments organs from CT images as an example, the technical scheme of the present application is illustrated below:
In the technical scheme of the present application, multi-organ segmentation is realized using partially labeled data; as shown in fig. 7, the training images on the left each contain labeling data for only one of kidney, liver and gall bladder. The technical scheme mainly comprises: adaptively training the segmentation model according to the organ classes that are labeled in the current training image. In other words, the training target is adaptively determined according to the labeled classes in the current training image, and the model parameters in the segmentation model are trained accordingly, so that multi-target segmentation is realized with partially labeled data. On this basis, multiple targets can be segmented with only one segmentation model, which saves computation cost; meanwhile, no additional complicated data labeling is needed, which greatly improves the efficiency of realizing the algorithm. The specific implementation scheme is as follows:
(1) First, define (create) a segmentation model M(D; θ), where θ denotes the model parameters, D is the input image (i.e., the training image), L is the label (labeling data set) corresponding to D, and C is the number of classes to be segmented;
(2) Then define the score maps S = {s0, ..., si, ..., sC} (i.e., the test data sets) generated when M tests D, where si is the score map of category i (s0 corresponds to the background), and each element of si is the probability that the corresponding pixel point belongs to category i, such as the probability that the point belongs to the heart or the lung;
(3) Meanwhile, define the one-hot encoding form of the labels, P = {p0, ..., pi, ..., pC}, where pi is the label map of category i (p0 corresponds to the background area pixel points); an element of 1 in pi indicates that the corresponding pixel point belongs to category i, and an element of 0 indicates that it does not.
(4) Define a loss function loss(A, B), where A denotes a score map (test data set) and B denotes the annotated one-hot encoding (labeling data set).
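The patent leaves loss(A, B) abstract. Purely as an example, a Dice-style loss, which is common in segmentation work, could be used; the patent does not prescribe Dice, cross-entropy, or any other specific function:

```python
import torch

def dice_loss(score_map, one_hot_label, eps=1e-6):
    # One possible instantiation of loss(A, B): A is a score map si, B is the
    # one-hot label map pi; eps guards against empty maps.
    a, b = score_map.flatten(), one_hot_label.flatten()
    intersection = (a * b).sum()
    return 1.0 - (2.0 * intersection + eps) / (a.sum() + b.sum() + eps)
```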
Based on the above definitions, the technical scheme of the present application specifically comprises the following steps:
1. Input D into the segmentation model M to obtain S;
2. Calculate the loss between S and L, wherein:
For partially labeled data, the classes that are unlabeled in the input image would otherwise be treated as if they were labeled as background; therefore, when the loss is calculated, loss(s0, p0) and loss(si, pi) (where i > 0) are considered separately, as follows:
For loss(si, pi), where i > 0, a flag vector V = {mi | i = 1, ..., C} is introduced in the present application: mi = 1 when the input image has labels of class i; otherwise, mi = 0;
For loss(s0, p0), a flag variable m0 is introduced in the present application: m0 = 1 when labels of all C classes exist in the input image; otherwise, m0 = 0;
Based on this, the total loss (loss data) is as shown in formula (1):

loss_total = m0 · loss(s0, p0) + Σ_{i=1}^{C} mi · loss(si, pi)    (1)
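A small sketch of how the flag vector V and the flag variable m0 could be derived from a partially labeled mask; deriving them from the class values present in the mask is an assumption, since the patent only defines what values the flags take:

```python
import torch

C = 3                                            # number of classes to be segmented
label_mask = torch.randint(0, C + 1, (64, 64))   # hypothetical partial labels

labeled_classes = torch.unique(label_mask).tolist()
m = [1.0 if i in labeled_classes else 0.0 for i in range(1, C + 1)]  # flag vector V
m0 = 1.0 if all(m) else 0.0                      # flag variable for the background term
```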
3. To minimize the loss, update the model parameters θ by computing gradients; in the present application the gradients are obtained through back propagation;
4. Iterate steps 1-3 until the loss no longer decreases, resulting in the final model M for region segmentation of the CT image to be segmented; as shown in fig. 7, the organ image areas of the kidneys, liver and gall bladder in a CT image can then be segmented simultaneously.
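Tying steps 1-4 together, the following end-to-end sketch assumes PyTorch and reuses the illustrative Dice loss above; the model choice, optimizer, learning rate and the single random iteration are all assumptions rather than details prescribed by the patent:

```python
import torch
import torch.nn.functional as F

C, H, W = 3, 64, 64
model = torch.nn.Conv2d(1, C + 1, kernel_size=3, padding=1)  # stand-in for M(D; theta)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def dice_loss(a, b, eps=1e-6):
    inter = (a.flatten() * b.flatten()).sum()
    return 1.0 - (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def train_step(image, label_mask):
    probs = F.softmax(model(image), dim=1)[0]                # S: (C+1, H, W) score maps
    labeled = torch.unique(label_mask).tolist()
    m = [1.0 if i in labeled else 0.0 for i in range(1, C + 1)]
    m0 = 1.0 if all(m) else 0.0
    # Formula (1): background term and per-class terms, masked by the flags.
    loss = m0 * dice_loss(probs[0], (label_mask == 0).float())
    for i in range(1, C + 1):
        loss = loss + m[i - 1] * dice_loss(probs[i], (label_mask == i).float())
    optimizer.zero_grad()
    loss.backward()                                          # back-propagate gradients
    optimizer.step()                                         # update theta
    return loss.item()

# One illustrative iteration on random data; real training would loop over
# images until the loss no longer decreases.
image = torch.randn(1, 1, H, W)
partial_labels = torch.randint(0, C + 1, (H, W))
print(train_step(image, partial_labels))
```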
In the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in the same embodiment, its description is relatively brief, and the relevant points can be found in the description of the method section.
Those of skill would further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both; to clearly illustrate this interchangeability of hardware and software, the various illustrative units and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A data processing method, comprising:
obtaining a training image, wherein the training image corresponds to N+1 labeling data sets, N is a positive integer greater than or equal to 2, each labeling data set comprises Q labeling values, and each labeling value corresponds to one pixel point of the training image; a labeling value in the 1st labeling data set that is greater than a labeling threshold value characterizes the pixel point to which it corresponds as a background area pixel point, and a labeling value in the i-th labeling data set that is greater than the labeling threshold value characterizes the pixel point to which it corresponds as belonging to the i-th type target object, where i is a positive integer greater than or equal to 2 and less than or equal to N+1;
inputting the training image into a pre-constructed object segmentation model to obtain a test result output by the object segmentation model, wherein the test result comprises N+1 test data sets, each test data set comprises Q test values, and each test value corresponds to one pixel point of the training image; each test value in the 1st test data set represents the probability that its pixel point is a background area pixel point, and each test value in the i-th test data set represents the probability that its pixel point belongs to the i-th type target object;
obtaining current loss data between the N+1 labeling data sets and the N+1 test data sets, wherein the current loss data comprises a first component corresponding to a first loss value and second components corresponding to N second loss values, the first loss value is the loss value between the 1st labeling data set and the 1st test data set, and the i-th second loss value is the loss value between the i-th labeling data set and the i-th test data set;
wherein, in the case that no pixel point in the training image is marked as the i-th type target object, the i-th second component is 0; and in the case that, for any one of the N types of target objects, no pixel point in the training image is marked as that type, the first component is 0;
and adjusting model parameters in the object segmentation model at least according to the current loss data, the object segmentation model being used for segmenting the image areas corresponding to the N types of target objects in a target image.
2. The method of claim 1, wherein obtaining current loss data between the N+1 labeling data sets and the N+1 test data sets comprises:
obtaining a first loss value between the 1st labeling data set and the 1st test data set by using a preset loss function;
respectively obtaining second loss values between the i-th labeling data set and the i-th test data set by using the loss function;
multiplying the first loss value by a first coefficient to obtain the first component;
multiplying each second loss value by a second coefficient to obtain the second component corresponding to that second loss value;
summing the first component and the second components to obtain the current loss data;
wherein the first coefficient is 0 when, for any one of the N types of target objects, the training image contains no pixel point labeled as that type, and the i-th second coefficient is 0 when the training image contains no pixel point labeled as the i-th type target object.
3. The method of claim 1, wherein adjusting model parameters in the object segmentation model at least according to the current loss data comprises:
comparing the current loss data with previous loss data corresponding to a previous frame image input into the object segmentation model before the training image, to obtain a loss variation;
and adjusting the model parameters in the object segmentation model when the loss variation satisfies an adjustment condition.
4. The method according to claim 3, wherein the adjustment condition comprises: the loss variation is greater than or equal to a loss threshold.
5. The method according to claim 3, wherein adjusting the model parameters in the object segmentation model comprises:
adjusting the model parameters in the object segmentation model based on a gradient descent algorithm.
6. A data processing apparatus comprising:
an image obtaining unit, configured to obtain a training image, wherein the training image corresponds to N+1 labeling data sets, N is a positive integer greater than or equal to 2, each labeling data set comprises Q labeling values, and each labeling value corresponds to one pixel point of the training image; a labeling value in the 1st labeling data set that is greater than a labeling threshold value characterizes the pixel point to which it corresponds as a background area pixel point, and a labeling value in the i-th labeling data set that is greater than the labeling threshold value characterizes the pixel point to which it corresponds as belonging to the i-th type target object, where i is a positive integer greater than or equal to 2 and less than or equal to N+1;
an image testing unit, configured to input the training image into a pre-constructed object segmentation model to obtain a test result output by the object segmentation model, wherein the test result comprises N+1 test data sets, each test data set comprises Q test values, and each test value corresponds to one pixel point of the training image; each test value in the 1st test data set represents the probability that its pixel point is a background area pixel point, and each test value in the i-th test data set represents the probability that its pixel point belongs to the i-th type target object;
a loss obtaining unit, configured to obtain current loss data between the N+1 labeling data sets and the N+1 test data sets, wherein the current loss data comprises a first component corresponding to a first loss value and second components corresponding to N second loss values, the first loss value is the loss value between the 1st labeling data set and the 1st test data set, and the i-th second loss value is the loss value between the i-th labeling data set and the i-th test data set;
wherein, in the case that no pixel point in the training image is marked as the i-th type target object, the i-th second component is 0; and in the case that, for any one of the N types of target objects, no pixel point in the training image is marked as that type, the first component is 0;
and a parameter adjustment unit, configured to adjust model parameters in the object segmentation model at least according to the current loss data, the object segmentation model being used for segmenting the image areas corresponding to the N types of target objects in a target image.
7. The apparatus of claim 6, wherein:
the loss obtaining unit is specifically configured to: obtain a first loss value between the 1st labeling data set and the 1st test data set by using a preset loss function; respectively obtain second loss values between the i-th labeling data set and the i-th test data set by using the loss function; multiply the first loss value by a first coefficient to obtain the first component; multiply each second loss value by a second coefficient to obtain the second component corresponding to that second loss value; and sum the first component and the second components to obtain the current loss data;
wherein the first coefficient is 0 when, for any one of the N types of target objects, the training image contains no pixel point labeled as that type, and the i-th second coefficient is 0 when the training image contains no pixel point labeled as the i-th type target object.
8. The apparatus of claim 6, wherein:
the parameter adjustment unit is specifically configured to: compare the current loss data with the previous loss data corresponding to the previous frame image input into the object segmentation model before the training image, to obtain a loss variation; and adjust the model parameters in the object segmentation model when the loss variation satisfies an adjustment condition.
9. The apparatus of claim 8, wherein the adjustment condition comprises: the loss variation is greater than or equal to a loss threshold.
10. An electronic device, comprising:
a memory for storing an application program and data generated by the operation of the application program;
a processor for executing the application program to implement:
obtaining a training image, wherein the training image corresponds to N+1 labeling data sets, N is a positive integer greater than or equal to 2, each labeling data set comprises Q labeling values, and each labeling value corresponds to one pixel point of the training image; a labeling value in the 1st labeling data set that is greater than a labeling threshold value characterizes the pixel point to which it corresponds as a background area pixel point, and a labeling value in the i-th labeling data set that is greater than the labeling threshold value characterizes the pixel point to which it corresponds as belonging to the i-th type target object, where i is a positive integer greater than or equal to 2 and less than or equal to N+1;
inputting the training image into a pre-constructed object segmentation model to obtain a test result output by the object segmentation model, wherein the test result comprises N+1 test data sets, each test data set comprises Q test values, and each test value corresponds to one pixel point of the training image; each test value in the 1st test data set represents the probability that its pixel point is a background area pixel point, and each test value in the i-th test data set represents the probability that its pixel point belongs to the i-th type target object;
obtaining current loss data between the N+1 labeling data sets and the N+1 test data sets, wherein the current loss data comprises a first component corresponding to a first loss value and second components corresponding to N second loss values, the first loss value is the loss value between the 1st labeling data set and the 1st test data set, and the i-th second loss value is the loss value between the i-th labeling data set and the i-th test data set;
wherein, in the case that no pixel point in the training image is marked as the i-th type target object, the i-th second component is 0; and in the case that, for any one of the N types of target objects, no pixel point in the training image is marked as that type, the first component is 0;
and adjusting model parameters in the object segmentation model at least according to the current loss data, the object segmentation model being used for segmenting the image areas corresponding to the N types of target objects in a target image.
CN202010565168.2A 2020-06-19 2020-06-19 Data processing method and device and electronic equipment Active CN111724371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010565168.2A CN111724371B (en) 2020-06-19 2020-06-19 Data processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111724371A CN111724371A (en) 2020-09-29
CN111724371B true CN111724371B (en) 2023-05-23

Family

ID=72567667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010565168.2A Active CN111724371B (en) 2020-06-19 2020-06-19 Data processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111724371B (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant