CN112102315B - Medical image processing method, medical image processing device, computer equipment and storage medium - Google Patents

Medical image processing method, medical image processing device, computer equipment and storage medium Download PDF

Info

Publication number
CN112102315B
CN112102315B CN202011199755.0A
Authority
CN
China
Prior art keywords
dimensional medical
medical image
original
dimensional
fused
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011199755.0A
Other languages
Chinese (zh)
Other versions
CN112102315A (en)
Inventor
李悦翔 (Li Yuexiang)
陈嘉伟 (Chen Jiawei)
魏东 (Wei Dong)
马锴 (Ma Kai)
郑冶枫 (Zheng Yefeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011199755.0A priority Critical patent/CN112102315B/en
Publication of CN112102315A publication Critical patent/CN112102315A/en
Application granted granted Critical
Publication of CN112102315B publication Critical patent/CN112102315B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of artificial intelligence, and in particular to a medical image processing method, a medical image processing device, a computer device and a storage medium. The method comprises the following steps: acquiring at least two original three-dimensional medical images corresponding to a target task; performing overlapping fusion processing on the at least two original three-dimensional medical images to obtain a fused three-dimensional medical image; performing at least one interpolation process on the fused three-dimensional medical image, based on the voxel information of each voxel in the fused three-dimensional medical image, to obtain a reconstructed three-dimensional medical image; and determining a target category label corresponding to the reconstructed three-dimensional medical image based on the category labels to which the original three-dimensional medical images respectively belong. The reconstructed three-dimensional medical image and the corresponding target category label are used to form training data for training a medical image classification model on the target task. By adopting the method, the generalization of the medical image classification model can be improved.

Description

Medical image processing method, medical image processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a medical image processing method and apparatus, a computer device, and a storage medium.
Background
With the development of Artificial Intelligence (AI), AI is increasingly widely used in the medical field; for example, normal medical images and abnormal medical images can be classified based on a machine learning model. However, in some emergency situations, or situations where the number of samples is particularly small, the medical data provided by hospitals often lacks certain categories. For example, at the outbreak of novel coronavirus pneumonia (COVID-19), a hospital could only provide a small number of abnormal medical images taken from patients with COVID-19. Such category-deficient medical data makes training a machine learning model very difficult, so that the trained model lacks generalization. A medical image processing method capable of improving the generalization of machine learning models is therefore urgently needed.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a medical image processing method, an apparatus, a computer device and a storage medium capable of improving machine learning model generalization.
A method of medical image processing, the method comprising:
acquiring at least two original three-dimensional medical images corresponding to a target task;
performing overlapping fusion processing on the at least two original three-dimensional medical images to obtain a fused three-dimensional medical image;
performing at least one interpolation processing on the fused three-dimensional medical image based on the voxel information of each voxel in the fused three-dimensional medical image to obtain a reconstructed three-dimensional medical image;
determining a target class label corresponding to the reconstructed three-dimensional medical image based on the class label to which each original three-dimensional medical image belongs; the reconstructed three-dimensional medical image and the corresponding target class label are used for forming training data for performing target task training on the medical image classification model.
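The patent gives no reference implementation of the fusion step above; as a rough sketch, the overlapping fusion can be read as a weighted voxel-wise blend of two co-registered volumes of equal shape. The weighting scheme below is an assumption, not prescribed by the claims:

```python
import numpy as np

def overlap_fuse(vol_a, vol_b, weight=0.5):
    """Voxel-wise weighted blend of two 3-D volumes of equal shape.

    One reading of the patent's 'overlapping fusion processing'; the
    exact weighting scheme is an assumption made for illustration.
    """
    if vol_a.shape != vol_b.shape:
        raise ValueError("volumes must share a shape (resample first)")
    return weight * vol_a + (1.0 - weight) * vol_b

# Two toy 4-slice "CT" volumes of shape (slices, H, W)
a = np.zeros((4, 8, 8))
b = np.ones((4, 8, 8))
fused = overlap_fuse(a, b, weight=0.3)
print(fused.shape, float(fused[0, 0, 0]))  # (4, 8, 8) 0.7
```

The shape check mirrors the resampling requirement of the claims: volumes are only fused after they contain the same number of two-dimensional images.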
A medical image processing apparatus, the apparatus comprising:
the overlapping fusion module is used for acquiring at least two original three-dimensional medical images corresponding to the target task; performing overlapping fusion processing on the at least two original three-dimensional medical images to obtain a fused three-dimensional medical image;
the reconstruction module is used for performing at least one interpolation processing on the fused three-dimensional medical image based on the voxel information of each voxel in the fused three-dimensional medical image to obtain a reconstructed three-dimensional medical image;
the label determining module is used for determining a target class label corresponding to the reconstructed three-dimensional medical image based on the class label to which each original three-dimensional medical image belongs; the reconstructed three-dimensional medical image and the corresponding target class label are used for forming training data for performing target task training on the medical image classification model.
In one embodiment, the overlapping fusion module is further configured to obtain a target task and determine a target portion corresponding to the target task; acquiring original three-dimensional medical images of the target part of different biological objects, wherein the number of the acquired original three-dimensional medical images is at least two; the data sources corresponding to the at least two original three-dimensional medical images are different, and/or the category labels corresponding to the at least two original three-dimensional medical images are different.
In one embodiment, the overlap fusion module further comprises a resampling module for determining the number of original two-dimensional medical images included in each of the at least two original three-dimensional medical images; when the number is inconsistent, resampling the at least two original three-dimensional medical images to enable the at least two original three-dimensional medical images to comprise the same number of original two-dimensional medical images; and performing overlapping fusion processing on at least two original three-dimensional medical images after resampling processing to obtain a fused three-dimensional medical image.
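A minimal sketch of the resampling step above, using nearest-slice selection along the slice axis; the patent does not fix the resampling kernel, so nearest selection is an assumption:

```python
import numpy as np

def resample_slices(vol, target_slices):
    """Resample a (slices, H, W) volume along the slice axis by
    nearest-slice selection, so all inputs reach `target_slices`."""
    n = vol.shape[0]
    idx = np.round(np.linspace(0, n - 1, target_slices)).astype(int)
    return vol[idx]

# Two volumes with inconsistent slice counts (6 vs 3)
a = np.arange(6 * 4 * 4, dtype=float).reshape(6, 4, 4)
b = np.arange(3 * 4 * 4, dtype=float).reshape(3, 4, 4)
target = max(a.shape[0], b.shape[0])
a2, b2 = resample_slices(a, target), resample_slices(b, target)
print(a2.shape, b2.shape)  # (6, 4, 4) (6, 4, 4)
```

After resampling, both volumes contain the same number of two-dimensional images and can be fused voxel-wise.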
In one embodiment, the reconstruction module further includes a first interpolation processing module, configured to perform a first interpolation process on the fused three-dimensional medical image based on voxel information of each voxel in the fused three-dimensional medical image, so as to obtain an intermediate three-dimensional medical image; when the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is smaller than a preset number, performing a second interpolation process on the intermediate three-dimensional medical image to obtain a reconstructed three-dimensional medical image, the reconstructed three-dimensional medical image comprising the preset number of reconstructed two-dimensional medical images; and when the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is equal to the preset number, taking the intermediate three-dimensional medical image as the reconstructed three-dimensional medical image.
In one embodiment, the first interpolation processing module is further configured to obtain preset compression parameters, and based on the compression parameters, perform grouping and division on each fused two-dimensional medical image in the fused three-dimensional medical images to obtain at least one fused two-dimensional medical image group; performing trilinear interpolation processing on each group of fused two-dimensional medical image groups respectively based on voxel information of each voxel in the fused three-dimensional medical image to obtain intermediate two-dimensional medical images respectively corresponding to each fused two-dimensional medical image group; and constructing a corresponding intermediate three-dimensional medical image based on each intermediate two-dimensional medical image.
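A simplified sketch of the grouping-and-interpolation step above: slices are grouped according to a compression parameter and each group is reduced to one intermediate slice. For axis-aligned, evenly spaced interpolation points, trilinear interpolation at a group's mid-plane collapses to a mean along the slice axis, which is the simplification used here:

```python
import numpy as np

def compress_groups(vol, group_size):
    """Divide the slices of a (slices, H, W) volume into consecutive
    groups of `group_size` (the compression parameter) and interpolate
    one intermediate slice per group. The per-group mean is a
    simplification of trilinear interpolation for evenly spaced,
    axis-aligned interpolation points."""
    n = vol.shape[0] - vol.shape[0] % group_size   # drop remainder slices
    groups = vol[:n].reshape(-1, group_size, *vol.shape[1:])
    return groups.mean(axis=1)

# 8 slices whose voxel values equal their slice index
vol = np.stack([np.full((4, 4), float(i)) for i in range(8)])
mid = compress_groups(vol, group_size=2)
print(mid.shape, float(mid[0, 0, 0]))  # (4, 4, 4) 0.5
```

Each intermediate slice blends the voxel information of its whole group, so the compressed volume keeps information from every fused slice.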
In one embodiment, the first interpolation processing module is further configured to determine a target space region formed by the current fused two-dimensional medical image group and voxel information of each voxel in the target space region; performing trilinear interpolation processing on the target space region based on the voxel information of each voxel to obtain interpolation three-dimensional information of an interpolation point; and determining a corresponding intermediate two-dimensional medical image based on the interpolation three-dimensional information of the interpolation point.
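The trilinear interpolation itself, applied to one target space region (here reduced to a unit voxel cell with eight corner values), can be sketched as follows; the unit-cell coordinates are an illustrative normalization:

```python
import numpy as np

def trilinear(corners, x, y, z):
    """Trilinear interpolation inside a unit voxel cell.

    `corners[i, j, k]` holds the voxel value at corner (i, j, k);
    (x, y, z) in [0, 1]^3 is the interpolation point within the
    target space region.
    """
    c = np.asarray(corners, dtype=float)
    c = c[0] * (1 - x) + c[1] * x          # collapse the x axis
    c = c[0] * (1 - y) + c[1] * y          # collapse the y axis
    return c[0] * (1 - z) + c[1] * z       # collapse the z axis

cell = np.arange(8).reshape(2, 2, 2)       # corner values 0..7
print(trilinear(cell, 0.5, 0.5, 0.5))      # 3.5 (cell centre = corner mean)
```

At a cell corner the result equals that corner's voxel value, and at the cell centre it equals the mean of all eight corners, which is the consistency property the interpolation-point step relies on.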
In one embodiment, the first interpolation processing module is further configured to determine a layer relationship between the intermediate two-dimensional medical images according to interpolation three-dimensional information of interpolation points corresponding to the intermediate two-dimensional medical images; and sequencing the intermediate two-dimensional medical images according to the layer relation to obtain corresponding intermediate three-dimensional medical images.
In one embodiment, the reconstruction module further includes a second interpolation processing module for determining a number difference between the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image and the preset number; performing nearest interpolation processing on the intermediate three-dimensional medical image to obtain the intermediate two-dimensional medical image to be supplemented with the quantity difference; and supplementing the intermediate two-dimensional medical image to be supplemented into the intermediate three-dimensional medical image to obtain a corresponding reconstructed three-dimensional medical image.
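A sketch of the nearest-interpolation supplement step above: the difference between the current and preset slice counts is filled by duplicating the nearest existing slices. Nearest-neighbour selection along the slice axis is an assumption consistent with the "nearest interpolation processing" wording:

```python
import numpy as np

def pad_to_count(vol, target_slices):
    """Supplement a (slices, H, W) volume up to `target_slices` slices
    by nearest-neighbour interpolation along the slice axis."""
    n = vol.shape[0]
    if n >= target_slices:
        return vol
    idx = np.round(np.linspace(0, n - 1, target_slices)).astype(int)
    return vol[idx]

# 3 intermediate slices, preset number of 5: supplement 2 slices
vol = np.stack([np.full((2, 2), float(i)) for i in range(3)])
out = pad_to_count(vol, 5)
print(out.shape[0])  # 5
```

The supplemented slices are copies of their nearest neighbours, so the layer ordering of the intermediate volume is preserved in the reconstructed volume.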
In one embodiment, the object class labels include a first object class label and a second object class label; the label determining module further comprises a hard label module for determining the category label to which each original three-dimensional medical image belongs; when the category label to which each original three-dimensional medical image belongs comprises a preset category label, setting a target category label corresponding to the reconstructed three-dimensional medical image as a first target category label; and when the category label to which each original three-dimensional medical image belongs does not comprise a preset category label, setting the target category label corresponding to the reconstructed three-dimensional medical image as a second target category label.
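A minimal sketch of the hard-label rule above; the label names ("COVID-19" as the preset category, "normal" otherwise) are illustrative and not prescribed by the patent:

```python
def hard_label(source_labels, preset_label="COVID-19"):
    """If any source image carries the preset category label, the
    reconstructed image receives the first target class label;
    otherwise it receives the second. Label strings are illustrative.
    """
    return preset_label if preset_label in source_labels else "normal"

print(hard_label(["normal", "COVID-19"]))  # COVID-19
print(hard_label(["normal", "normal"]))    # normal
```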
In one embodiment, the label determining module further includes a soft label module, configured to determine a category label to which each of the original three-dimensional medical images belongs, and a weight value corresponding to each category label; and performing label fusion processing on the category labels to which the original three-dimensional medical images belong respectively based on the weight values corresponding to the category labels respectively to obtain target category labels corresponding to the reconstructed three-dimensional medical images.
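The soft-label fusion described above resembles mixup-style label mixing; a sketch under that reading, where the two-class one-hot encoding and the weight values are illustrative:

```python
import numpy as np

def soft_label(onehots, weights):
    """Weighted fusion of per-image one-hot class labels into one soft
    target label for the reconstructed image."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalise weights
    return (w[:, None] * np.asarray(onehots, float)).sum(axis=0)

# 'normal' = [1, 0], 'abnormal' = [0, 1]; fusion weights 0.3 / 0.7
t = soft_label([[1, 0], [0, 1]], [0.3, 0.7])
print(t.tolist())  # [0.3, 0.7]
```

A soft target of this form lets the classification model learn from a reconstructed image that carries image information from both source categories, instead of forcing a single hard class.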
In one embodiment, the medical image processing apparatus is further configured to input the original three-dimensional medical image and the reconstructed three-dimensional medical image as sample images to a medical image classification model to be trained, and output a prediction classification result corresponding to the input sample image; determining a target loss function according to a prediction classification result corresponding to the input sample image and a class label to which the input sample image belongs; and training the medical image classification model to be trained through the target loss function until a training stopping condition is reached.
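The training step above can be illustrated with a deliberately tiny stand-in for the classification model; the linear-softmax model, data shapes, separable toy labels and hyperparameters are all illustrative assumptions, not the patent's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in for the medical image classification model: a single
# linear layer with softmax, trained by gradient descent on the
# cross-entropy between predictions and (possibly soft) target labels.
X = rng.normal(size=(16, 8))                  # 16 flattened sample "volumes"
labels = np.array([0] * 8 + [1] * 8)
X[:, 0] += np.where(labels == 1, 2.0, -2.0)   # make the toy classes separable
Y = np.eye(2)[labels]                         # one-hot (or soft) targets
W = np.zeros((8, 2))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

loss = None
for step in range(1000):
    P = softmax(X @ W)
    loss = -(Y * np.log(P + 1e-9)).sum(axis=1).mean()   # cross-entropy loss
    W -= 0.5 * (X.T @ (P - Y)) / len(X)                 # gradient step

print(float(loss) < 0.4)  # loss well below the ln(2) starting point
```

Because the cross-entropy target can be a soft label, the same loop accepts the weighted labels of reconstructed images alongside the hard labels of original images.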
In one embodiment, the medical image processing apparatus is further configured to acquire a trained medical image classification model and acquire a to-be-processed three-dimensional medical image corresponding to the target portion; inputting the three-dimensional medical image to be processed into the trained medical image classification model, and classifying the three-dimensional medical image to be processed through the trained medical image classification model to obtain a class label corresponding to the three-dimensional medical image to be processed.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring at least two original three-dimensional medical images corresponding to a target task;
performing overlapping fusion processing on the at least two original three-dimensional medical images to obtain a fused three-dimensional medical image;
performing at least one interpolation processing on the fused three-dimensional medical image based on the voxel information of each voxel in the fused three-dimensional medical image to obtain a reconstructed three-dimensional medical image;
determining a target class label corresponding to the reconstructed three-dimensional medical image based on the class label to which each original three-dimensional medical image belongs; the reconstructed three-dimensional medical image and the corresponding target class label are used for forming training data for performing target task training on the medical image classification model.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring at least two original three-dimensional medical images corresponding to a target task;
performing overlapping fusion processing on the at least two original three-dimensional medical images to obtain a fused three-dimensional medical image;
performing at least one interpolation processing on the fused three-dimensional medical image based on the voxel information of each voxel in the fused three-dimensional medical image to obtain a reconstructed three-dimensional medical image;
determining a target class label corresponding to the reconstructed three-dimensional medical image based on the class label to which each original three-dimensional medical image belongs; the reconstructed three-dimensional medical image and the corresponding target class label are used for forming training data for performing target task training on the medical image classification model.
According to the medical image processing method, the medical image processing device, the computer device and the storage medium, at least two original three-dimensional medical images corresponding to the target task are obtained, and overlapping fusion processing can be performed on them to obtain a fused three-dimensional medical image. Based on the voxel information of each voxel in the fused three-dimensional medical image, the fused image can then be interpolated at least once to obtain a reconstructed three-dimensional medical image that fuses the image information of each original three-dimensional medical image. A target class label corresponding to the reconstructed three-dimensional medical image is determined from the class labels to which the original three-dimensional medical images respectively belong. Missing training data can therefore be compensated for by the reconstructed three-dimensional medical images and their target class labels, yielding complete training data, and a medical image classification model trained on this complete training data has good generalization.
In addition, because the fused three-dimensional medical image is subjected to at least one interpolation processing based on the voxel information of each voxel in the fused three-dimensional medical image to obtain the reconstructed three-dimensional medical image, the reconstructed three-dimensional medical image not only contains the pixel information of each fused two-dimensional medical image, but also contains the layer information of each fused two-dimensional medical image, so that the generated reconstructed three-dimensional medical image is more accurate.
Drawings
FIG. 1 is a diagram of an exemplary medical image processing system;
FIG. 2 is a flow diagram illustrating a method for medical image processing according to one embodiment;
FIG. 3 is a schematic diagram of a three-dimensional medical image in one embodiment;
FIG. 4 is a process diagram illustrating an overlapping fusion process performed on an original three-dimensional medical image according to an embodiment;
FIG. 5 is a flow diagram illustrating a double interpolation process in one embodiment;
FIG. 6 is a schematic diagram of a tri-linear interpolation process based on voxel information in one embodiment;
FIG. 7 is a flow diagram illustrating a method for medical image processing according to one embodiment;
FIG. 8 is a block diagram of an embodiment of a medical image processing apparatus;
FIG. 9 is a block diagram showing the construction of a medical image processing apparatus according to another embodiment;
FIG. 10 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is a diagram illustrating an application environment of a medical image processing method according to an embodiment. Referring to fig. 1, the medical image processing method is applied to a medical image processing system. The medical image processing system includes a terminal 102 and a server 104. The terminal 102 and the server 104 are connected via a network. The terminal 102 may be a desktop terminal or a mobile terminal, and the mobile terminal may be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster comprised of multiple servers. The terminal 102 and the server 104 can be used independently to execute the medical image processing method provided in the embodiment of the present application. The terminal 102 and the server 104 may also be cooperatively used to execute the medical image processing method provided in the embodiment of the present application.
It is also noted that the present application relates to the field of Artificial Intelligence (AI) technology. AI is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence research covers the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning and decision-making.
The artificial intelligence technology is a comprehensive subject and relates to the field of extensive technology, namely the technology of a hardware level and the technology of a software level. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
The present application relates specifically to Machine Learning (ML) techniques in the field of artificial intelligence. Machine learning is a multi-disciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory and other disciplines. It specializes in studying how a computer can simulate or realize human learning behaviour so as to acquire new knowledge or skills, and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning and learning from instruction.
It should be understood that the use of "first," "second," and similar terms in the present disclosure are not intended to indicate any order, quantity, or importance, but rather are used to distinguish one element from another. The singular forms "a," "an," or "the" and similar referents do not denote a limitation of quantity, but rather denote the presence of at least one, unless the context clearly dictates otherwise.
In an embodiment, as shown in fig. 2, a medical image processing method is provided, which is described by taking an example that the method is applied to a computer device in fig. 1 (the computer device may be specifically the terminal or the server in fig. 1), and includes the following steps:
step S202, at least two original three-dimensional medical images corresponding to the target task are obtained.
A three-dimensional medical image is a three-dimensional tissue image with spatial position information, obtained in a non-invasive manner from a target portion of a biological object. Specifically, the three-dimensional tissue image may be obtained based on techniques such as CT (computed tomography), MR (magnetic resonance) and PET (positron emission tomography). A three-dimensional medical image comprises at least one two-dimensional medical image. Taking CT as an example, a CT device can continuously sample the target portion of the biological object at a certain thickness to obtain two-dimensional CT slices of each layer, and the two-dimensional CT slices of the layers are then integrated to obtain the corresponding three-dimensional medical data. The thickness in this application refers to the thickness of the scanning layer, i.e. the interval between two adjacent two-dimensional CT slices.
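As a concrete illustration of the slice/volume relationship described above (the array shapes and thickness value are arbitrary):

```python
import numpy as np

# A 3-D medical volume, as described above, is a stack of 2-D CT slices
# acquired at a fixed scanning-layer thickness (the inter-slice spacing).
slices = [np.zeros((4, 4)) for _ in range(5)]   # five toy 4x4 CT slices
volume = np.stack(slices, axis=0)               # shape: (num_slices, H, W)
slice_thickness_mm = 5.0                        # illustrative spacing only
print(volume.shape)  # (5, 4, 4)
```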
Specifically, when the target task is obtained, the computer device may directly extract the corresponding original three-dimensional medical image from the target task, or may use the target task as an index to pull the corresponding original three-dimensional medical image from other computer devices. For example, when the target task is an abnormal classification task for a lung, the computer device acquires at least two original three-dimensional medical images acquired for the lung; for another example, when the target task is an abnormal classification task for a brain, the computer device acquires at least two original three-dimensional medical images acquired for the brain.
In one embodiment, a research and development staff may acquire a plurality of three-dimensional medical images from a hospital or a network in advance, and use the acquired three-dimensional medical images as original three-dimensional medical images, so that the computer device may perform medical image processing on the original three-dimensional medical images to obtain reconstructed three-dimensional medical images.
In one embodiment, acquiring at least two original three-dimensional medical images corresponding to a target task comprises: acquiring a target task and determining a target part corresponding to the target task; acquiring original three-dimensional medical images of target parts of different biological objects, wherein the number of the acquired original three-dimensional medical images is at least two; the data sources corresponding to the at least two original three-dimensional medical images are different, and/or the category labels corresponding to the at least two original three-dimensional medical images are different.
Specifically, when the target task is obtained, the computer device may perform task analysis on the target task to obtain a corresponding target portion, and pull at least two original three-dimensional medical images corresponding to the target portion and the target task from a preset original three-dimensional medical image library. The data sources corresponding to the at least two original three-dimensional medical images are different, and/or the category labels corresponding to the at least two original three-dimensional medical images are different.
In one embodiment, the computer device may, according to a preset training task, determine to acquire at least two original three-dimensional medical images from different data sources, or to acquire at least two original three-dimensional medical images with different category labels from a plurality of original three-dimensional medical images. The training task is the task to be completed when training the machine learning model. With the development of machine learning, machine learning models are being applied more and more widely in the medical field; for example, normal medical images and abnormal medical images can be distinguished by a medical image classification model. Sometimes, limited by practical circumstances, the medical data provided by hospitals is deficient in certain data categories; for example, a hospital may be able to provide only normal three-dimensional medical images and a small number of abnormal three-dimensional medical images. Such category-deficient medical data makes training a machine learning model very difficult, resulting in a trained model that lacks generalization. Therefore, in this case, the computer device may use the medical image processing method of the present application to complement the missing data categories based on the original three-dimensional medical images, and then train the machine learning model on the complemented data to improve its generalization.
Furthermore, in other scenarios, three-dimensional medical images may follow different distributions because of differences in image capturing devices and captured objects. When the test three-dimensional medical images (test data) and the training three-dimensional medical images (training data) do not belong to the same distribution, that is, when the data source providing the training data is too homogeneous, the model parameters fitted on the training data may not suit the data encountered when the machine learning model is used, degrading its performance. In this case, the computer device may use the medical image processing method to enrich the data sources on the basis of the original three-dimensional medical images, and then train the machine learning model on three-dimensional medical images provided by a plurality of data sources to improve its generalization.
For example, when the target task is a lung abnormality classification task, the computer device may pull at least two original three-dimensional medical images corresponding to the existing category labels and captured for the lung from a preset original three-dimensional medical image library, where the category labels corresponding to the at least two pulled original three-dimensional medical images are different. For another example, the computer device may pull at least two original three-dimensional medical images taken for the lung from different data sources from a predetermined library of original three-dimensional medical images.
The original three-dimensional medical image library may include original three-dimensional medical images with different category labels and original three-dimensional medical images acquired from different data sources. For example, when a three-dimensional medical image is acquired for a normal lung, the corresponding category label may be "normal"; when it is acquired for a lung with COVID-19 pneumonia, the corresponding category label may be "COVID-19"; and when it is acquired for a lung with community-acquired pneumonia, the corresponding category label may be "community pneumonia".
In one embodiment, the computer device acquires at least two original three-dimensional medical images shot aiming at a target part, and the category labels corresponding to the at least two original three-dimensional medical images are different. In another embodiment, the computer device obtains at least two original three-dimensional medical images captured for the target portion, and the data sources of the at least two original three-dimensional medical images are different. In other embodiments, the computer device optionally pulls at least two original three-dimensional medical images from the library of original three-dimensional medical images.
In one embodiment, the computer device may correspondingly display the target task confirmation interface, so that the user may input the target task in the target task confirmation interface, and the computer device may acquire the original three-dimensional medical image corresponding to the target task.
In the above embodiment, because the acquired original three-dimensional medical images have different corresponding data sources and/or different corresponding category labels, a specific task can be completed based on the acquired specific original three-dimensional medical image, thereby satisfying the user requirements.
Step S204: performing overlapping fusion processing on the at least two original three-dimensional medical images to obtain a fused three-dimensional medical image.
Specifically, when the at least two original three-dimensional medical images are obtained, the computer device counts the number of original two-dimensional medical images contained in each original three-dimensional medical image and, from the counting result, determines the total number of original two-dimensional medical images contained in the at least two original three-dimensional medical images. Further, the computer device creates an empty reference three-dimensional medical image, determines the layer corresponding to each original two-dimensional medical image in each original three-dimensional medical image, and inserts the original two-dimensional medical images of the at least two original three-dimensional medical images into the reference three-dimensional medical image in an overlapping manner according to their respective layers, obtaining a fused three-dimensional medical image. Here, a layer is the identification information marking the acquisition section of a slice. As shown in FIG. 3, when the image capturing device (e.g., a CT device) continuously samples a target portion of a biomedical object, a two-dimensional medical image is obtained for each layer: the layer corresponding to the two-dimensional medical image (CT slice) acquired first by the device can be set as the first layer, the layer corresponding to the one acquired second as the second layer, and so on, until the two-dimensional medical image of the last layer is obtained; synthesizing the two-dimensional medical images of all layers yields the corresponding three-dimensional medical image. FIG. 3 is a schematic diagram of a three-dimensional medical image in one embodiment.
For example, when two original three-dimensional medical images A and B are obtained, the computer device creates an empty three-dimensional medical image F, takes the original two-dimensional medical image of the first layer of A as the original two-dimensional medical image of the first layer of F, takes the original two-dimensional medical image of the first layer of B as the original two-dimensional medical image of the second layer of F, takes the original two-dimensional medical image of the second layer of A as the original two-dimensional medical image of the third layer of F, and so on, until the original two-dimensional medical images of the last layers of A and B are inserted into F, obtaining the fused three-dimensional medical image.
In one embodiment, in order to represent the relative importance of the original three-dimensional medical images, a weight corresponding to each original three-dimensional medical image may be obtained, and the overlapping fusion processing may be performed on the original three-dimensional medical images based on the weights to obtain the corresponding fused three-dimensional medical image. For example, when two original three-dimensional medical images A and B are obtained, with A having weight w_A and B having weight w_B, the computer device multiplies each original two-dimensional medical image in A by w_A, multiplies each original two-dimensional medical image in B by w_B, and performs the overlapping fusion processing on w_A·A and w_B·B. The weight values can be set freely as required.
For example, the step of performing the overlapping fusion processing on the original three-dimensional medical images based on the weights may be implemented based on the following pseudocode, where A and B are the two original three-dimensional medical images of depth Z, w_A and w_B are their weights, z is the loop counter, and F is the fused three-dimensional medical image of depth 2Z:

    F ← Init(X, Y, 2Z)
    z ← 1
    Repeat
        If mod(z, 2) = 1
            F(X, Y, z) ← w_A · A(X, Y, ⌈z/2⌉)
        Else
            F(X, Y, z) ← w_B · B(X, Y, z/2)
        z ← z + 1
    Until z > 2Z
wherein F represents the fused three-dimensional medical image and X, Y, Z represent the three-dimensional spatial coordinates. Init(X, Y, 2Z) represents the creation of a space of the corresponding volume: it initializes the fused three-dimensional medical image, creating an empty fused three-dimensional medical image with a data thickness of 2Z. mod() represents the remainder function. z ← 1 represents assigning 1 to the loop counter z. The Repeat ... Until z > 2Z block represents the loop of the overlapping fusion processing: when the remainder of z divided by 2 is 1, the original two-dimensional medical image of the corresponding layer of A is multiplied by the weight w_A and inserted into F; when the remainder is not 1, the original two-dimensional medical image of the corresponding layer of B is multiplied by the weight w_B and inserted into F; the loop ends when z > 2Z. In this way, the overlapping fusion processing of the two original three-dimensional medical images is realized at the operational level, and the corresponding fused three-dimensional medical image is obtained.
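Under the same assumptions (volumes as NumPy arrays of shape (Z, H, W); the names w_a and w_b are illustrative), a runnable rendering of the Repeat/Until loop might look like this, with a separate loop counter z so that the fixed depth Z is not overwritten:

```python
import numpy as np

def weighted_overlap_fuse(a, b, w_a, w_b):
    """Mirror of the Repeat/Until loop: F has thickness 2Z; odd slices take
    w_a * A, even slices take w_b * B."""
    Z, h, w = a.shape
    f = np.zeros((2 * Z, h, w))          # Init: empty volume of thickness 2Z
    z = 1
    while z <= 2 * Z:                    # "Until z > 2Z"
        if z % 2 == 1:                   # mod(z, 2) = 1 -> slice from A
            f[z - 1] = w_a * a[(z - 1) // 2]
        else:                            # otherwise -> slice from B
            f[z - 1] = w_b * b[z // 2 - 1]
        z += 1
    return f

a = np.ones((2, 1, 1))
b = np.ones((2, 1, 1))
print(weighted_overlap_fuse(a, b, 0.5, 2.0).ravel().tolist())  # [0.5, 2.0, 0.5, 2.0]
```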
In one embodiment, when the at least two original three-dimensional medical images are obtained, the computer device determines the layer corresponding to each original two-dimensional medical image in the original three-dimensional medical images, determines a reference three-dimensional medical image from the at least two original three-dimensional medical images, and inserts each original two-dimensional medical image of the remaining original three-dimensional medical images into the reference three-dimensional medical image based on its layer, obtaining a fused three-dimensional medical image. For example, referring to FIG. 4, when two original three-dimensional medical images A and B as shown in FIG. 4 are obtained, the computer device takes A as the reference, inserts the original two-dimensional medical image of the first layer of B between the original two-dimensional medical images of the first and second layers of A, inserts the original two-dimensional medical image of the second layer of B between the original two-dimensional medical images of the second and third layers of A, and so on, until the original two-dimensional medical image of the last layer of B is inserted into A, obtaining the fused three-dimensional medical image. FIG. 4 is a diagram illustrating a process of performing the overlapping fusion processing on original three-dimensional medical images in one embodiment.
Similarly, when three original three-dimensional medical images A, B, and C are obtained, the computer device takes A as the reference, inserts the original two-dimensional medical images of the first layers of B and C between the original two-dimensional medical images of the first and second layers of A, and so on, until the original two-dimensional medical images of the last layers of B and C are inserted into A, obtaining the fused three-dimensional medical image.
In one embodiment, when an even number of original three-dimensional medical images is obtained, the computer device may perform the overlapping fusion processing on every two original three-dimensional medical images according to the above method, perform further overlapping fusion processing on the fused results, and iterate until a single fused three-dimensional medical image is obtained. For example, when four original three-dimensional medical images are obtained, the computer device performs the overlapping fusion processing on every two of them to obtain two intermediate fused three-dimensional medical images, and then performs further overlapping fusion processing on these two in the above manner to obtain the final fused three-dimensional medical image.
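A sketch of this iterated pairwise fusion, under the assumption that all volumes are equally shaped NumPy arrays and that the two-volume step is the slice interleave described earlier:

```python
import numpy as np

def fuse_two(a, b):
    """Interleave two equally shaped volumes (odd layers from a, even from b)."""
    fused = np.empty((a.shape[0] * 2,) + a.shape[1:], dtype=a.dtype)
    fused[0::2], fused[1::2] = a, b
    return fused

def fuse_iteratively(volumes):
    """Fuse an even number of volumes pairwise, then fuse the intermediate
    results pairwise, and iterate until one volume remains."""
    while len(volumes) > 1:
        volumes = [fuse_two(volumes[i], volumes[i + 1])
                   for i in range(0, len(volumes), 2)]
    return volumes[0]

vols = [np.full((1, 1, 1), k) for k in range(4)]  # four one-slice volumes 0..3
print(fuse_iteratively(vols).ravel().tolist())    # [0, 2, 1, 3]
```

Note that iterated pairwise interleaving yields a different slice order than a direct four-way round-robin would, as the toy output shows.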
In one embodiment, before the overlapping fusion processing is performed on the at least two original three-dimensional medical images, the image size of each original two-dimensional medical image included in each original three-dimensional medical image may be subjected to size processing so that the image sizes of each original two-dimensional medical image are the same.
Step S206: performing at least one interpolation process on the fused three-dimensional medical image based on the voxel information of each voxel in the fused three-dimensional medical image to obtain a reconstructed three-dimensional medical image.
The voxel is a volume element, which is a minimum unit constituting a three-dimensional space region, and may be specifically a corner of each two-dimensional medical image in the three-dimensional space region. The voxel information is information related to a voxel, and the voxel information may include a coordinate value of the corresponding voxel and a pixel value of the corresponding voxel.
Specifically, when the fused three-dimensional medical image is obtained, the computer device re-determines the respective corresponding layer of each fused two-dimensional medical image according to the arrangement sequence of each fused two-dimensional medical image in the fused three-dimensional medical image, and determines the target space region corresponding to the fused three-dimensional medical image and the voxel information of the corresponding voxel in the target space region based on the re-determined respective layer of each fused two-dimensional medical image. It is easily understood that when a two-dimensional medical image is located in an original three-dimensional medical image, the two-dimensional medical image is referred to as an original two-dimensional medical image, and when the two-dimensional medical images are fused to obtain a fused three-dimensional medical image, the two-dimensional medical image is referred to as a fused two-dimensional medical image.
For example, referring to fig. 4, when the fused three-dimensional medical images are obtained, the computer device sets the layer of the fused two-dimensional medical images in the first order as the first layer, sets the layer of the fused two-dimensional medical images in the second order as the second layer, and so on until the last layer, and establishes a three-dimensional coordinate system with the first layer medical image as a reference, determines the Z-axis coordinate with the layer corresponding to each fused two-dimensional medical image, and determines the X-axis coordinate and the Y-axis coordinate with the size corresponding to each fused two-dimensional medical image. Further, after determining the coordinate information of each fused two-dimensional medical image in the three-dimensional coordinate system, the computer device may determine, based on the coordinate information, a target space region occupied by the fused three-dimensional medical image, and determine coordinate values of corner points of each fused two-dimensional medical image and pixel values of the corner points in the target space region, that is, determine voxel information of each voxel.
Further, the computer device performs at least one interpolation process on the fused three-dimensional medical image based on the voxel information of each voxel in the fused three-dimensional medical image, and fuses the image information in the original three-dimensional medical image based on the interpolation process, so as to obtain at least one reconstructed two-dimensional medical image. And the computer equipment determines the Z-axis coordinate of each reconstructed two-dimensional medical image, and sequences each reconstructed two-dimensional medical image according to the Z-axis coordinate to obtain a reconstructed three-dimensional medical image. Illustratively, the computer device performs a trilinear interpolation process on every two fused two-dimensional medical images in the fused three-dimensional medical images according to the arrangement sequence of the fused two-dimensional medical images in the reconstructed three-dimensional medical images based on the voxel information of each voxel, so as to fuse the image information of every two fused two-dimensional medical images into one two-dimensional medical image to obtain a reconstructed two-dimensional medical image, and the computer device synthesizes the reconstructed two-dimensional medical images to obtain a reconstructed three-dimensional medical image. Wherein each two fused two-dimensional medical images are derived from different original three-dimensional medical images.
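As a simplified illustration of this merging step (the patent describes trilinear interpolation over the voxel information; the sketch below interpolates only along the slice axis, blending each adjacent pair of fused slices at an assumed blend weight t, and the function name is illustrative):

```python
import numpy as np

def reconstruct(fused, t=0.5):
    """Merge each adjacent pair of fused slices (one from each original volume)
    into one reconstructed slice by linear interpolation along Z.

    t = 0.5 gives the midpoint, weighting both source volumes equally.
    """
    pairs_a = fused[0::2].astype(float)  # slices that came from the first volume
    pairs_b = fused[1::2].astype(float)  # slices that came from the second volume
    return (1 - t) * pairs_a + t * pairs_b

fused = np.array([[[0.0]], [[2.0]], [[4.0]], [[6.0]]])  # four fused slices
print(reconstruct(fused).ravel().tolist())  # [1.0, 5.0]
```

The reconstructed volume thus has half the slice count of the fused volume, with each reconstructed slice carrying image information from both original volumes.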
In one embodiment, the computer device may determine the Z-axis coordinate of each of the fused three-dimensional medical images in the three-dimensional coordinate system according to the slice plane corresponding to each of the two-dimensional medical images in the fused three-dimensional medical image. For example, when the layer corresponding to the current fused two-dimensional medical image is the first layer, the Z-axis coordinate of the current fused two-dimensional medical image may be set to 0; when the layer corresponding to the current fused two-dimensional medical image is the second layer, the Z-axis coordinate of the current fused two-dimensional medical image may be set to 1, and so on.
In one embodiment, the computer device determines a target spatial region corresponding to the fused three-dimensional medical image based on the top surface and the bottom surface by using the fused two-dimensional medical image of the first layer as the bottom surface of the target spatial region and using the fused two-dimensional medical image of the last layer as the top surface of the target spatial region.
In one embodiment, the computer device determines corner points of each fused two-dimensional medical image in the fused three-dimensional medical image and sets the corner points as voxels corresponding to the fused three-dimensional medical image. The corner points are corner points of the fused two-dimensional medical image, for example, when the fused two-dimensional medical image is a rectangle, the corresponding corner points are vertices on four corners.
Step S208, determining a target class label corresponding to the reconstructed three-dimensional medical image based on the class labels to which the original three-dimensional medical images belong respectively; the reconstructed three-dimensional medical image and the corresponding target class label are used for forming training data for performing target task training on the medical image classification model.
Specifically, the computer device may determine classification labels to which the original three-dimensional medical images belong, and determine a target category label corresponding to the reconstructed three-dimensional medical image based on the classification labels to which the original three-dimensional medical images belong, so that the computer device may perform target task training on the medical image classification model using the reconstructed three-dimensional medical image and the corresponding target category label as training data, so that the trained medical image classification model may classify the medical images and has generalization.
In one embodiment, the target category labels include a first target category label and a second target category label, and determining the target category label corresponding to the reconstructed three-dimensional medical image based on the category labels to which the original three-dimensional medical images respectively belong includes: determining the category label to which each original three-dimensional medical image belongs; when the category labels to which the original three-dimensional medical images belong include a preset category label, setting the target category label corresponding to the reconstructed three-dimensional medical image as the first target category label; and when the category labels to which the original three-dimensional medical images belong do not include the preset category label, setting the target category label corresponding to the reconstructed three-dimensional medical image as the second target category label.
The first target category label or the second target category label may be the same as the preset category label, or the first target category label, the second target category label, and the preset category label may all differ from one another. For example, when the medical image classification model is a binary classification model that distinguishes normal medical images from abnormal medical images, and the training data for the model carry only two category labels (a "normal" label for normal medical images and an "abnormal" label for abnormal ones), the first or second target category label may coincide with the preset category label, with each of the preset, first, and second target category labels being one of "normal" or "abnormal". When the medical image classification model is a binary classification model that distinguishes COVID-19 medical images taken of COVID-19 lungs from medical images taken of non-COVID-19 lungs, and the training data carry more than two category labels (a "normal" label for normal medical images, a "COVID-19" label for COVID-19 medical images, and a "community pneumonia" label for community-acquired pneumonia medical images), the first target category label, the second target category label, and the preset category label may differ from one another: the preset category label may be "COVID-19", the first target category label "positive", and the second target category label "negative".
Specifically, the computer device determines category labels to which the original three-dimensional medical images belong respectively, acquires preset category labels, and judges whether the category labels to which the original three-dimensional medical images belong respectively include the preset category labels. When the category label to which each original three-dimensional medical image belongs comprises a preset category label, the computer equipment sets a target category label corresponding to the reconstructed three-dimensional medical image as a first target category label; when the category label to which each original three-dimensional medical image belongs does not include a preset category label, the computer device sets the target category label corresponding to the reconstructed three-dimensional medical image as a second target category label.
For example, when the category labels to which the two obtained original three-dimensional medical images belong are "community pneumonia" and "normal", and the preset category label is "COVID-19", the computer device determines that the category labels do not include the preset category label and sets the target category label of the reconstructed three-dimensional medical image to the second target category label, "negative". For another example, when the category labels to which the two obtained original three-dimensional medical images belong are "COVID-19" and "normal", and the preset category label is "COVID-19", the computer device determines that the category labels include the preset category label and sets the target category label of the reconstructed three-dimensional medical image to the first target category label, "positive".
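The label-assignment rule of this embodiment can be stated compactly; the label strings below mirror the example and are otherwise illustrative:

```python
def target_label(source_labels, preset="COVID-19",
                 positive="positive", negative="negative"):
    """Assign the reconstructed image the first target category label when any
    source image carries the preset label, otherwise the second."""
    return positive if preset in source_labels else negative

print(target_label(["community pneumonia", "normal"]))  # negative
print(target_label(["COVID-19", "normal"]))             # positive
```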
In this embodiment, whenever the category labels to which the original three-dimensional medical images belong include the preset category label, the target category label of the reconstructed three-dimensional medical image is set to the first target category label. The method therefore tends to assign the first target category label to reconstructed three-dimensional medical images, greatly increasing the amount of medical data carrying the first target category label. This supplements the medical data of the first target category that the data source lacks, making the medical image classification model trained on the supplemented data more sensitive.
In a specific application scenario, such as a sudden-outbreak disease scenario like the outbreak of COVID-19 pneumonia, a hospital can provide only a small number of original three-dimensional medical images taken of COVID-19 lungs because the number of patients is still small. In this case, research and development staff may acquire a large number of original three-dimensional medical images taken of normal lungs, set the category label of the original three-dimensional medical images taken of COVID-19 lungs to "positive", and set the category label of those taken of normal lungs to "negative". The computer device processes at least two original three-dimensional medical images according to the medical image processing method to obtain a reconstructed three-dimensional medical image and, when the category labels of the original three-dimensional medical images include a "positive" label, determines that the target category label of the reconstructed three-dimensional medical image is "positive". Iterating in this way yields a plurality of reconstructed three-dimensional medical images with their respective target category labels, and the medical classification model is then trained on the original three-dimensional medical images, the reconstructed three-dimensional medical images, and the category labels of both.
Because the target category label of a reconstructed three-dimensional medical image is set to "positive" whenever the category labels to which the original three-dimensional medical images belong include a "positive" label, the method tends to assign the "positive" label to reconstructed three-dimensional medical images. This increases the number of three-dimensional medical images of COVID-19 lungs in the training data, makes up for the missing data category, and gives the medical image classification model trained on the enlarged training data good generalization.
It should be noted that the medical image processing method mentioned in the embodiments of the present application is also applicable to other sudden-outbreak disease scenarios, scenarios lacking training data, and rare-disease scenarios. The embodiments of the present application need only be executed with the original three-dimensional medical images of the corresponding scene to expand the training data set, thereby improving the classification performance of the corresponding medical image classification model.
In the medical image processing method, at least two original three-dimensional medical images corresponding to the target task are obtained, and overlapping and fusing processing can be performed on the at least two original three-dimensional medical images to obtain a fused three-dimensional medical image; by determining the fused three-dimensional medical image, the fused three-dimensional medical image can be interpolated at least once based on the voxel information of each voxel in the fused three-dimensional medical image to obtain a reconstructed three-dimensional medical image fused with the image information of each original three-dimensional medical image, and a target class label corresponding to the reconstructed three-dimensional medical image is determined based on the class label to which each original three-dimensional medical image belongs, so that missing training data can be compensated based on the reconstructed three-dimensional medical image and the corresponding target class label to obtain complete training data, and the medical image classification model obtained based on the complete training data training has good generalization.
In addition, because the fused three-dimensional medical image is subjected to at least one interpolation processing based on the voxel information of each voxel in the fused three-dimensional medical image to obtain the reconstructed three-dimensional medical image, the reconstructed three-dimensional medical image not only contains the pixel information of each fused two-dimensional medical image, but also contains the layer information of each fused two-dimensional medical image, so that the generated reconstructed three-dimensional medical image is more accurate.
In one embodiment, performing an overlapping fusion process on at least two original three-dimensional medical images to obtain a fused three-dimensional medical image includes: determining the number of original two-dimensional medical images included in each of at least two original three-dimensional medical images; when the number of the original three-dimensional medical images is inconsistent, resampling the at least two original three-dimensional medical images to enable the at least two original three-dimensional medical images to comprise the same number of original two-dimensional medical images; and performing overlapping fusion processing on at least two original three-dimensional medical images after resampling processing to obtain a fused three-dimensional medical image.
Specifically, when the at least two original three-dimensional medical images are acquired, the computer device determines the number of original two-dimensional medical images included in each original three-dimensional medical image and judges whether these numbers are consistent. When it determines that the numbers of original two-dimensional medical images included in the original three-dimensional medical images are inconsistent, the computer device resamples the at least two original three-dimensional medical images so that they all include the same number of original two-dimensional medical images. Further, the computer device performs the overlapping fusion processing on the at least two resampled original three-dimensional medical images to obtain a fused three-dimensional medical image.
In one embodiment, when it is determined that the number of the original two-dimensional medical images included in each of the original three-dimensional medical images is not consistent, the computer device performs interpolation processing and/or sampling processing on at least two original three-dimensional medical images by using a method of interlayer sampling, so that the at least two original three-dimensional medical images each include the same number of original two-dimensional medical images. The interpolation processing method includes, but is not limited to, a trilinear interpolation processing method and a nearest neighbor interpolation processing method. For example, when two original three-dimensional medical images are obtained and the number of two-dimensional medical images included in a first original three-dimensional medical image is twice that of a second original three-dimensional medical image, the computer device performs interlayer sampling on the first original three-dimensional medical image, so that the number of two-dimensional medical images included in the first original three-dimensional medical image after interlayer sampling is consistent with that of the second original three-dimensional medical image. For another example, when two original three-dimensional medical images are obtained and the number of two-dimensional medical images included in the first original three-dimensional medical image is twice that of the second original three-dimensional medical image, the computer device may perform nearest neighbor interpolation processing on the second original three-dimensional medical image to expand the number of images included in the second original three-dimensional medical image, so that the number of two-dimensional medical images included in the first original three-dimensional medical image is consistent with that of the second original three-dimensional medical image.
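As an illustrative sketch of the interlayer resampling described above (not part of the patent text), the following Python snippet assumes volumes stored as numpy arrays of shape (slices, height, width) and equalizes slice counts by nearest-neighbor selection along the slice axis:

```python
import numpy as np

def resample_slices(volume, target_slices):
    """Nearest-neighbour resampling along the slice (layer) axis so that
    volumes with mismatched slice counts can be equalized before fusion."""
    n = volume.shape[0]
    # Map each target index to the nearest source slice index.
    idx = np.round(np.linspace(0, n - 1, target_slices)).astype(int)
    return volume[idx]

# Two hypothetical volumes of shape (slices, height, width):
a = np.random.rand(16, 4, 4)
b = np.random.rand(8, 4, 4)
b_up = resample_slices(b, 16)   # interpolation: expand 8 -> 16 slices
a_down = resample_slices(a, 8)  # interlayer sampling: 16 -> 8 slices
```

Either the larger volume can be downsampled or the smaller one expanded; both paths end with equal slice counts, as the embodiment requires.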
In the above embodiment, by performing resampling processing on the at least two original three-dimensional medical images, the at least two original three-dimensional medical images can all include the same number of original two-dimensional medical images, so that the computer device can perform overlapping fusion processing on original three-dimensional medical images with the same number of layers, and the processing efficiency of the overlapping fusion processing is improved.
In one embodiment, performing at least one interpolation processing on the fused three-dimensional medical image based on the voxel information of each voxel in the fused three-dimensional medical image to obtain a reconstructed three-dimensional medical image includes: performing first interpolation processing on the fused three-dimensional medical image based on the voxel information of each voxel in the fused three-dimensional medical image to obtain an intermediate three-dimensional medical image; when the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is smaller than a preset number, performing second interpolation processing on the intermediate three-dimensional medical image to obtain a reconstructed three-dimensional medical image, the reconstructed three-dimensional medical image including the preset number of reconstructed two-dimensional medical images; and when the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is equal to the preset number, setting the intermediate three-dimensional medical image as the reconstructed three-dimensional medical image.
Specifically, the research and development staff may preset the number of the reconstructed two-dimensional medical images included in the reconstructed three-dimensional medical image to be generated, so that the computer device takes the number preset by the research and development staff as the preset number. Further, the computer device performs first interpolation processing on the fused three-dimensional medical image based on voxel information of each voxel in the fused three-dimensional medical image to obtain at least one intermediate two-dimensional medical image, and combines each intermediate two-dimensional medical image to obtain a corresponding intermediate three-dimensional medical image. The computer device counts the number of the intermediate two-dimensional medical images included in the intermediate three-dimensional medical image, and judges whether the number of the intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is consistent with a preset number. When the number of the intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is smaller than the preset number, the computer equipment performs second interpolation processing on the intermediate three-dimensional medical image to obtain a reconstructed three-dimensional medical image, and the reconstructed three-dimensional medical image includes the preset number of reconstructed two-dimensional medical images. When the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is equal to a preset number, the computer device sets the intermediate three-dimensional medical image as a reconstructed three-dimensional medical image.
The first interpolation process and the second interpolation process may be the same or different, and for example, both the first interpolation process and the second interpolation process may be trilinear interpolation processes, or the first interpolation process may be a trilinear interpolation process and the second interpolation process may be a nearest neighbor interpolation process.
To better describe the present embodiment, the following description takes the case in which the first interpolation processing is trilinear interpolation processing and the second interpolation processing is nearest neighbor interpolation processing. Referring to FIG. 5, when the preset number is Z and the original three-dimensional medical images V1 and V2 are subjected to overlapping fusion, the number of fused two-dimensional medical images contained in the fused three-dimensional medical image is 2Z. The computer device performs trilinear interpolation processing on the 2Z fused two-dimensional medical images, so that the data thickness of the fused three-dimensional medical image is compressed from 2Z to Z/λ, and an intermediate three-dimensional medical image with a data thickness of Z/λ is obtained, where λ is a preset compression parameter. Further, when Z/λ is smaller than Z, in order to unify the data scale, the computer device expands the data thickness of the intermediate three-dimensional medical image to Z by nearest neighbor interpolation, so as to obtain the reconstructed three-dimensional medical image, in which the number of reconstructed two-dimensional medical images is Z. FIG. 5 is a flow chart illustrating the two-pass interpolation processing in one embodiment.
For example, the first interpolation processing and the second interpolation processing may be realized as follows (K is an iteration counter initialized to 1):

Repeat
    V(X, Y, Z) ← Resize(V, Z/K, trilinear)
    K ← K + 1
Until K > λ
V(X, Y, Z) ← Resize(V, Z, nearest)

where V represents the fused three-dimensional medical image; X, Y and Z represent the three-dimensional space coordinates; "trilinear" and "nearest" represent trilinear interpolation processing and nearest neighbor interpolation processing, respectively; λ represents the compression parameter; and Resize(V, S, type) represents that the data thickness of V is changed to S by the method type. The loop "Repeat … Until K > λ" compresses the fused three-dimensional medical image with data thickness 2Z to Z/λ based on trilinear interpolation processing, obtaining an intermediate three-dimensional medical image with data thickness Z/λ; the final step "Resize(V, Z, nearest)" expands the data thickness of the intermediate three-dimensional medical image to Z.
In this embodiment, based on the first interpolation processing, the image information contained in each original two-dimensional medical image can be fused to obtain intermediate two-dimensional medical images containing new image information; based on the second interpolation processing, the number of generated reconstructed two-dimensional medical images can be freely controlled, so that a reconstructed three-dimensional medical image containing the preset number of reconstructed two-dimensional medical images is obtained. The two interpolation processes complement each other, so that a reconstructed three-dimensional medical image meeting the requirements is obtained.
In one embodiment, performing a first interpolation process on the fused three-dimensional medical image based on voxel information of each voxel in the fused three-dimensional medical image to obtain an intermediate three-dimensional medical image includes: acquiring preset compression parameters, and grouping and dividing each fused two-dimensional medical image in the fused three-dimensional medical images based on the compression parameters to obtain at least one fused two-dimensional medical image group; performing trilinear interpolation processing on each group of fused two-dimensional medical image groups respectively based on voxel information of each voxel in the fused three-dimensional medical image to obtain intermediate two-dimensional medical images respectively corresponding to each fused two-dimensional medical image group; and forming corresponding intermediate three-dimensional medical images based on the intermediate two-dimensional medical images.
Specifically, in order to better fuse the image information in the Z-axis direction of each original three-dimensional medical image, a developer may set the compression parameter in advance. The compression parameter represents the compression multiple of the fused three-dimensional medical image. The larger the compression parameter, the more the fused three-dimensional medical image is compressed, that is, each intermediate two-dimensional medical image fuses the image information of more original two-dimensional medical images; conversely, the smaller the compression parameter, the less the fused three-dimensional medical image is compressed. However, when the compression parameter is too large, the fused three-dimensional medical image is compressed to a greater extent, and the image information of the original three-dimensional medical image is lost more seriously. Therefore, the compression parameter needs to be set reasonably, so that each fused two-dimensional medical image can fuse the image information of more original two-dimensional medical images while keeping the image information loss rate of the original three-dimensional medical image within a reasonable range.
Further, when the compression parameter is obtained, the computer device determines the compression multiple of the fused three-dimensional medical image according to the compression parameter, and divides the fused two-dimensional medical images in the fused three-dimensional medical image into groups according to the compression multiple to obtain at least one fused two-dimensional medical image group. For example, when the compression parameter λ is 1, the computer device determines based on the compression parameter that the data thickness 2Z of the fused three-dimensional medical image needs to be halved, and therefore divides every two fused two-dimensional medical images into one fused two-dimensional medical image group. For another example, when the compression parameter λ is 2, the computer device determines based on the compression parameter that the data thickness 2Z of the fused three-dimensional medical image needs to be compressed to one quarter, and therefore divides every four fused two-dimensional medical images into one fused two-dimensional medical image group.
Further, the computer device performs trilinear interpolation processing on each group of fused two-dimensional medical images based on the voxel information of each voxel in the fused three-dimensional medical image, obtains intermediate two-dimensional medical images corresponding one to one to the fused two-dimensional medical image groups, and forms the corresponding intermediate three-dimensional medical image based on the intermediate two-dimensional medical images. Continuing the above example, when the computer device divides every two fused two-dimensional medical images in the fused three-dimensional medical image into one fused two-dimensional medical image group, it performs trilinear interpolation processing on each group to obtain intermediate two-dimensional medical images corresponding one to one to the groups, and thereby obtains an intermediate three-dimensional medical image including Z intermediate two-dimensional medical images, halving the data thickness 2Z of the fused three-dimensional medical image.
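A minimal Python sketch of the grouping step (illustrative only), assuming the fused volume is a numpy array of 2Z slices and that each group holds 2·λ consecutive slices — the group size implied by compressing 2Z slices into Z/λ groups:

```python
import numpy as np

def group_slices(fused, lam):
    """Divide the fused slices into groups of 2*lam consecutive slices;
    each group is later collapsed into one intermediate slice,
    so 2Z slices yield Z/lam groups."""
    group_size = 2 * lam
    n = fused.shape[0]
    assert n % group_size == 0, "slice count must be divisible by the group size"
    return [fused[i:i + group_size] for i in range(0, n, group_size)]

fused = np.random.rand(16, 4, 4)      # 2Z = 16
groups_l1 = group_slices(fused, 1)    # λ=1: groups of two  -> 8 groups
groups_l2 = group_slices(fused, 2)    # λ=2: groups of four -> 4 groups
```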
In one embodiment, the fused three-dimensional medical images may be divided according to the layer information of each fused two-dimensional medical image in the fused three-dimensional medical images, and each two fused two-dimensional medical images are divided into a fused two-dimensional medical image group, for example, the fused two-dimensional medical image of the first layer and the fused two-dimensional medical image of the second layer are divided into a fused two-dimensional medical image group, the fused two-dimensional medical image of the third layer and the fused two-dimensional medical image of the fourth layer are divided into a fused two-dimensional medical image group, and so on. Any two fused two-dimensional medical images can also be divided into a fused medical image group, for example, the fused two-dimensional medical image of the first layer and the fused two-dimensional medical image of the last layer are divided into a fused two-dimensional medical image group, and the fused two-dimensional medical image of the fourth layer and the fused two-dimensional medical image of the eighth layer are divided into a fused two-dimensional medical image group. The present embodiment is not limited thereto.
In the above embodiment, by presetting the compression parameters, the compression degree of the fused three-dimensional medical image can be freely controlled based on the compression parameters, so that the intermediate two-dimensional medical image fused with the image information of the reasonable original two-dimensional medical image is obtained.
In one embodiment, performing trilinear interpolation processing on each group of fused two-dimensional medical images based on the voxel information of each voxel in the fused three-dimensional medical image to obtain the intermediate two-dimensional medical image corresponding to each fused two-dimensional medical image group includes: determining a target space region formed by the current fused two-dimensional medical image group and the voxel information of each voxel in the target space region; performing trilinear interpolation processing on the target space region based on the voxel information of each voxel to obtain interpolated three-dimensional information of interpolation points; and determining the corresponding intermediate two-dimensional medical image based on the interpolated three-dimensional information of the interpolation points.
Specifically, when determining the current fused two-dimensional medical image group, the computer device may determine coordinate information of each fused two-dimensional medical image in the current fused two-dimensional medical image group in the three-dimensional coordinate system, and determine a target space region occupied by the current fused two-dimensional medical image group according to the coordinate information of each fused two-dimensional medical image in the three-dimensional coordinate system. For convenience of description, the target space region occupied by the fused three-dimensional medical image is referred to as a first target space region, and the target space region occupied by the fused two-dimensional medical image group is referred to as a second target space region. Further, the computer device determines coordinate values of corner points and pixel values of the corner points of each fused two-dimensional medical image in the second target space region according to coordinate information of each fused two-dimensional medical image in the current fused two-dimensional medical image group in the three-dimensional coordinate system, and sets the coordinate values of the corner points and the pixel values of the corner points of each fused two-dimensional medical image as voxel information of each voxel in the second target space region.
Further, the computer device performs trilinear interpolation processing on the second target space region based on the voxel information of each voxel to obtain interpolated three-dimensional information of an interpolated point, and determines an intermediate two-dimensional medical image corresponding to the currently fused two-dimensional medical image group based on the interpolated three-dimensional information of the interpolated point.
In one embodiment, referring to FIG. 6, the current fused two-dimensional medical image group includes two fused two-dimensional medical images. C000(x0, y0, z0), C100(x1, y0, z0), C110(x1, y1, z0) and C010(x0, y1, z0) are the corner coordinates of the four corners of one fused two-dimensional medical image, and C001(x0, y0, z1), C101(x1, y0, z1), C111(x1, y1, z1) and C011(x0, y1, z1) are the corner coordinates of the four corners of the other fused two-dimensional medical image, so the corresponding second target space region is the cuboid formed by the vertices C000, C100, C110, C010, C001, C101, C111 and C011. C000, C100, C110, C010, C001, C101, C111 and C011 are the voxels in the second target space region, and the coordinate values and pixel values corresponding to them are the voxel information of each voxel. FIG. 6 is a schematic diagram of trilinear interpolation processing based on voxel information in one embodiment.
Further, the computer device sets:

xd = (x − x0) / (x1 − x0)
yd = (y − y0) / (y1 − y0)
zd = (z − z0) / (z1 − z0)

where x, y and z are respectively the X coordinate value, Y coordinate value and Z coordinate value of the interpolation point in the three-dimensional coordinate system.
Further, the computer device performs linear interpolation along the X axis to obtain C00, C01, C10 and C11:

C00 = C000·(1 − xd) + C100·xd
C01 = C001·(1 − xd) + C101·xd
C10 = C010·(1 − xd) + C110·xd
C11 = C011·(1 − xd) + C111·xd

where C000, C100, C001, C101, C010, C110, C011 and C111 in the formulas are the pixel values corresponding to the voxels of the same names.
Further, the computer device performs linear interpolation along the Y axis to obtain C0 and C1:

C0 = C00·(1 − yd) + C10·yd
C1 = C01·(1 − yd) + C11·yd
Further, when C0 and C1 are obtained, the computer device performs linear interpolation along the Z axis based on C0 and C1 to obtain the three-dimensional information of the interpolation point C:

C = C0·(1 − zd) + C1·zd
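The three interpolation stages can be collected into one function; the following is a direct transcription of the formulas, with corner pixel values supplied in a dictionary (the function name and data layout are illustrative, not from the patent):

```python
def trilinear_point(c, x, y, z, x0, x1, y0, y1, z0, z1):
    """Interpolate the pixel value at (x, y, z) from the eight corner voxels;
    c maps corner names 'C000'..'C111' to their pixel values."""
    xd = (x - x0) / (x1 - x0)
    yd = (y - y0) / (y1 - y0)
    zd = (z - z0) / (z1 - z0)
    # linear interpolation along the X axis
    c00 = c["C000"] * (1 - xd) + c["C100"] * xd
    c01 = c["C001"] * (1 - xd) + c["C101"] * xd
    c10 = c["C010"] * (1 - xd) + c["C110"] * xd
    c11 = c["C011"] * (1 - xd) + c["C111"] * xd
    # linear interpolation along the Y axis
    c0 = c00 * (1 - yd) + c10 * yd
    c1 = c01 * (1 - yd) + c11 * yd
    # linear interpolation along the Z axis
    return c0 * (1 - zd) + c1 * zd

# Sanity check: corner values of f(x, y, z) = x + y + z on the unit cube;
# trilinear interpolation reproduces a linear function exactly.
corners = {"C000": 0.0, "C100": 1.0, "C010": 1.0, "C001": 1.0,
           "C110": 2.0, "C101": 2.0, "C011": 2.0, "C111": 3.0}
center = trilinear_point(corners, 0.5, 0.5, 0.5, 0, 1, 0, 1, 0, 1)  # -> 1.5
```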
When the z coordinate of the interpolation point is fixed, interpolated three-dimensional information of at least one interpolation point located on the same plane can be obtained, so that the computer device can determine the corresponding intermediate two-dimensional medical image based on the interpolated three-dimensional information of each interpolation point. The interpolated three-dimensional information includes the three-dimensional coordinates of the interpolation point and the pixel value of the interpolation point. For example, when the z coordinate is fixed at (z0 + z1)/2, the intermediate two-dimensional medical image located at the layer (z0 + z1)/2 is obtained.
It is easy to understand that, when the number of fused two-dimensional medical images included in the fused two-dimensional medical image group is greater than 2, the computer device may perform trilinear interpolation processing on every two fused two-dimensional medical images, and then perform further interpolation processing on the resulting images in the above manner until one intermediate two-dimensional medical image is obtained. For example, when the fused two-dimensional medical image group includes four fused two-dimensional medical images, the computer device performs trilinear interpolation processing on every two of them to obtain two processed two-dimensional medical images, and further performs trilinear interpolation processing on the two processed two-dimensional medical images to obtain the final intermediate two-dimensional medical image.
In the above embodiment, each group of fused two-dimensional medical images is subjected to trilinear interpolation processing, so that a corresponding number of intermediate two-dimensional medical images can be obtained, and an intermediate three-dimensional medical image can be obtained based on them. In addition, because the fused two-dimensional medical image group is subjected to trilinear interpolation processing, information fusion can be performed on the Z-axis information of each fused two-dimensional medical image, so that the generated intermediate two-dimensional medical image contains not only the pixel information of each fused two-dimensional medical image but also the Z-axis information of each fused two-dimensional medical image, and the generated intermediate two-dimensional medical image is more accurate.
In one embodiment, constructing the respective intermediate three-dimensional medical image based on the respective intermediate two-dimensional medical images comprises: determining the layer relation between the intermediate two-dimensional medical images according to the interpolation three-dimensional information of the interpolation points corresponding to the intermediate two-dimensional medical images; and sequencing the intermediate two-dimensional medical images according to the layer relation to obtain corresponding intermediate three-dimensional medical images.
Specifically, the computer device determines the Z-axis coordinates of the corresponding interpolation points according to the interpolated three-dimensional information of the interpolation points corresponding to each intermediate two-dimensional medical image; that is, the computer device determines the interpolation points included in each intermediate two-dimensional medical image and determines the Z-axis coordinate of each interpolation point. Further, the computer device determines the layer relation between the intermediate two-dimensional medical images according to the Z-axis coordinates of the interpolation points contained in them, and sorts the intermediate two-dimensional medical images according to the layer relation to obtain the corresponding intermediate three-dimensional medical image. For example, when it is set that the larger the Z-axis coordinate, the higher the layer corresponding to the intermediate two-dimensional medical image, the computer device sorts the intermediate two-dimensional medical images in ascending order of Z-axis coordinate to obtain the corresponding intermediate three-dimensional medical image.
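A toy Python illustration of the sorting step, with hypothetical slice names paired to the Z coordinates of their interpolation points:

```python
# (z coordinate of interpolation points, hypothetical slice name)
slices = [(0.75, "D"), (0.25, "B"), (0.50, "C"), (0.00, "A")]
# larger Z coordinate -> higher layer: sort ascending to stack the volume
ordered = [name for z, name in sorted(slices, key=lambda s: s[0])]
```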
In the embodiment, because the intermediate two-dimensional medical images are correspondingly sequenced based on the interpolated three-dimensional information, the finally obtained intermediate three-dimensional medical images can be closer to the real three-dimensional medical images, and the medical image classification model trained based on the more real three-dimensional medical images is more accurate.
In one embodiment, when the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is smaller than the preset number, performing the second interpolation processing on the intermediate three-dimensional medical image to obtain the reconstructed three-dimensional medical image includes: determining the number difference between the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image and the preset number; performing nearest neighbor interpolation processing on the intermediate three-dimensional medical image to obtain the number difference of intermediate two-dimensional medical images to be supplemented; and supplementing the intermediate two-dimensional medical images to be supplemented into the intermediate three-dimensional medical image to obtain the corresponding reconstructed three-dimensional medical image.
Specifically, when generating the intermediate three-dimensional medical image, the computer device counts the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image, and determines the number difference between that number and the preset number. The computer device then performs nearest neighbor interpolation processing on the intermediate three-dimensional medical image to obtain the number difference of intermediate two-dimensional medical images to be supplemented, and supplements the intermediate two-dimensional medical images to be supplemented into the intermediate three-dimensional medical image to obtain the corresponding reconstructed three-dimensional medical image. The reconstructed three-dimensional medical image includes the preset number of reconstructed two-dimensional medical images.
In one embodiment, the computer device randomly extracts the number difference of intermediate two-dimensional medical images from the intermediate three-dimensional medical image, performs nearest neighbor interpolation processing on them to obtain the number difference of intermediate two-dimensional medical images to be supplemented, and supplements them into the intermediate three-dimensional medical image to obtain the reconstructed three-dimensional medical image.
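An illustrative numpy sketch (not the patent's literal method) that makes up the number difference by nearest-neighbor index mapping along the slice axis, which inserts duplicated slices while preserving layer order:

```python
import numpy as np

def supplement_slices(volume, preset):
    """Expand a (slices, H, W) volume to `preset` slices by nearest-neighbour
    interpolation, making up the number difference with duplicated slices."""
    diff = preset - volume.shape[0]
    if diff <= 0:
        return volume  # already at or above the preset number
    idx = np.round(np.linspace(0, volume.shape[0] - 1, preset)).astype(int)
    return volume[idx]  # expands to `preset` slices, layer order preserved

intermediate = np.random.rand(6, 4, 4)
reconstructed = supplement_slices(intermediate, 8)  # number difference = 2
```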
In the above embodiment, by determining the number difference between the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image and the preset number, nearest neighbor interpolation processing can be performed on the intermediate three-dimensional medical image based on the number difference, so that the number of reconstructed two-dimensional medical images in the reconstructed three-dimensional medical image can be freely controlled, and the finally generated reconstructed three-dimensional medical image meets the preset requirement.
In one embodiment, determining the object class label corresponding to the reconstructed three-dimensional medical image based on the class labels to which the original three-dimensional medical images respectively belong includes: determining the class label to which each original three-dimensional medical image belongs respectively and the weight value corresponding to each class label respectively; and performing label fusion processing on the class labels to which the original three-dimensional medical images belong respectively based on the weight values corresponding to the class labels respectively to obtain target class labels corresponding to the reconstructed three-dimensional medical images.
Specifically, research personnel may set corresponding weight values according to the importance degree of each category label, so that when the target category label to which the reconstructed three-dimensional medical image belongs needs to be determined, the computer device can determine the category label to which each original three-dimensional medical image belongs, and perform label fusion processing on the category labels to which the original three-dimensional medical images respectively belong based on the weight values respectively corresponding to the category labels, to obtain the target category label corresponding to the reconstructed three-dimensional medical image. For example, when the category label to which the original three-dimensional medical image V1 belongs is y1 with corresponding weight w1, and the category label to which the original three-dimensional medical image V2 belongs is y2 with corresponding weight w2, the target category label corresponding to the reconstructed three-dimensional medical image is y = w1·y1 + w2·y2.
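The weighted label fusion reduces to a weighted sum; a minimal numpy sketch with hypothetical one-hot labels and weights:

```python
import numpy as np

# hypothetical one-hot category labels of the two original volumes
y1 = np.array([1.0, 0.0])   # label of V1
y2 = np.array([0.0, 1.0])   # label of V2
w1, w2 = 0.7, 0.3           # preset weights reflecting label importance

# weighted label fusion: target category label of the reconstructed volume
y_target = w1 * y1 + w2 * y2
```

The resulting soft label encodes how similar the reconstructed volume is to each original volume.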
In this embodiment, since the target class labels are obtained by performing weighted summation on the class labels to which the original three-dimensional medical images belong, the target class labels corresponding to the reconstructed three-dimensional medical images can better describe the similarity between the fused three-dimensional medical images and the original three-dimensional medical images, so that the specificity of the medical image classification model obtained by training based on the reconstructed three-dimensional medical images and the corresponding target class labels is improved.
In one embodiment, after obtaining the reconstructed three-dimensional medical image, the medical image classification model may be further trained for a target task based on the reconstructed three-dimensional medical image and the corresponding target class label. The training step of the target task of the medical image classification model comprises the following steps: respectively taking the original three-dimensional medical image and the reconstructed three-dimensional medical image as sample images to be input into a medical image classification model to be trained, and outputting a prediction classification result corresponding to the input sample images; determining a target loss function according to a prediction classification result corresponding to the input sample image and a class label to which the input sample image belongs; and training the medical image classification model to be trained through the target loss function until the training stopping condition is reached.
Specifically, the computer device inputs the original three-dimensional medical images and the corresponding reconstructed three-dimensional medical images as sample images into the medical image classification model to be trained. The model extracts image features from each sample image, classifies the sample image based on those features, and outputs a prediction classification result. The model then obtains the class label to which the sample image belongs, determines the degree of difference between the prediction classification result and that class label, determines the target loss function from this degree of difference, and is trained with the target loss function until a preset training stop condition is reached, yielding the trained medical image classification model. The training stop condition is the condition under which training terminates; it may be, for example, reaching a preset number of iterations, the classification performance of the model reaching a preset index, or a preset training time elapsing. The training stop condition can be set freely as required; for example, training may stop when the degree of difference between the prediction classification result and the class label of the sample image falls below a preset value.
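The target loss function can be sketched as a cross-entropy between the model's predicted distribution and the sample's class label; the patent does not fix a concrete loss, so this numpy version is illustrative only. Because the soft target labels of reconstructed images are distributions, the same expression handles hard (one-hot) and soft labels:

```python
import numpy as np

def softmax(logits):
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

def target_loss(logits, labels):
    """Mean cross-entropy between predictions and class labels.
    `labels` may be one-hot (original images) or soft (reconstructed images)."""
    probs = softmax(np.asarray(logits, dtype=float))
    return float(-np.mean(np.sum(labels * np.log(probs + 1e-12), axis=-1)))

# a confident correct prediction incurs a lower loss than a confident wrong one
good = target_loss([[4.0, -4.0]], np.array([[1.0, 0.0]]))
bad = target_loss([[-4.0, 4.0]], np.array([[1.0, 0.0]]))
print(good < bad)  # True
```

In a training loop this loss would be minimized by gradient descent until the stop condition (iteration count, performance index, or loss threshold) is met.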
In the above embodiment, the corresponding target loss function is determined, so that the medical image classification model trained on that loss function is more accurate. In addition, because the reconstructed three-dimensional medical images compensate for the data categories missing from the original three-dimensional medical images, the medical image classification model trained on both the original and the reconstructed three-dimensional medical images generalizes well.
In one embodiment, the medical image processing method further includes: acquiring a trained medical image classification model, and acquiring a to-be-processed three-dimensional medical image corresponding to a target part; inputting the three-dimensional medical image to be processed into the trained medical image classification model, and classifying the three-dimensional medical image to be processed through the trained medical image classification model to obtain the class label corresponding to the three-dimensional medical image to be processed.
Specifically, the computer device obtains the trained medical image classification model, inputs the to-be-processed three-dimensional medical image corresponding to the target part into the trained medical image classification model, classifies the input three-dimensional medical image through the trained medical image classification model, and outputs the class label corresponding to the to-be-processed three-dimensional medical image. For example, when the target task is a lung abnormality classification task, the computer device inputs a to-be-processed three-dimensional medical image taken of the lungs into the medical image classification model, and the model outputs the class label "normal" or "abnormal" for that three-dimensional medical image.
In this embodiment, by acquiring the trained medical image classification model, the three-dimensional medical image to be processed can be classified with that model, which improves the efficiency of processing three-dimensional medical images.
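A minimal inference wrapper over a trained classifier might look as follows. The model, the preprocessing, and the label set are all placeholders; the "normal"/"abnormal" labels come from the lung example above:

```python
import numpy as np

LABELS = ["normal", "abnormal"]  # illustrative label set from the lung example

def classify_volume(model, volume):
    """Feed a preprocessed 3D volume to a trained classification model and
    map the highest-scoring class index to its label string."""
    logits = model(volume)
    return LABELS[int(np.argmax(logits))]

# stand-in "model": flags a volume whose mean intensity exceeds a threshold
dummy_model = lambda v: np.array([1.0, -1.0]) if v.mean() < 0.5 else np.array([-1.0, 1.0])
print(classify_volume(dummy_model, np.zeros((4, 8, 8))))  # normal
```

A real deployment would replace `dummy_model` with the trained medical image classification model and add the same resampling/normalization used at training time.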
In another embodiment, as shown in fig. 7, the medical image processing method provided by the present application includes the following steps:
S702, acquiring a target task and determining a target part corresponding to the target task.
S704, acquiring original three-dimensional medical images of target parts of different biological objects, wherein the number of the acquired original three-dimensional medical images is at least two; the data sources corresponding to the at least two original three-dimensional medical images are different, and/or the category labels corresponding to the at least two original three-dimensional medical images are different.
S706, determining the number of original two-dimensional medical images included in each original three-dimensional medical image in the at least two original three-dimensional medical images, and when the number of the original two-dimensional medical images is inconsistent, performing resampling processing on the at least two original three-dimensional medical images to enable the at least two original three-dimensional medical images to include the same number of original two-dimensional medical images.
And S708, overlapping and fusing the at least two original three-dimensional medical images after resampling processing to obtain a fused three-dimensional medical image.
S710, acquiring preset compression parameters, and grouping and dividing each fused two-dimensional medical image in the fused three-dimensional medical images based on the compression parameters to obtain at least one fused two-dimensional medical image group.
And S712, determining a target space region formed by the current fusion two-dimensional medical image group and voxel information of each voxel in the target space region.
And S714, performing trilinear interpolation processing on the target space region based on the voxel information of each voxel to obtain interpolation three-dimensional information of an interpolation point, and determining a corresponding middle two-dimensional medical image based on the interpolation three-dimensional information of the interpolation point.
And S716, determining the layer relation among the intermediate two-dimensional medical images according to the interpolation three-dimensional information of the interpolation points corresponding to the intermediate two-dimensional medical images, and sequencing the intermediate two-dimensional medical images according to the layer relation to obtain the corresponding intermediate three-dimensional medical images.
And S718, determining the quantity difference between the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image and the preset number, and performing nearest interpolation processing on the intermediate three-dimensional medical image to obtain that number of intermediate two-dimensional medical images to be supplemented.
And S720, supplementing the intermediate two-dimensional medical image to be supplemented into the intermediate three-dimensional medical image to obtain a corresponding reconstructed three-dimensional medical image.
And S722, determining the category label to which each original three-dimensional medical image belongs.
And S724, when the category label to which each original three-dimensional medical image belongs comprises a preset category label, setting the target category label corresponding to the reconstructed three-dimensional medical image as a first target category label.
S726, when the category label to which each original three-dimensional medical image belongs does not include the preset category label, setting the target category label corresponding to the reconstructed three-dimensional medical image as the second target category label.
And S728, inputting the original three-dimensional medical image and the reconstructed three-dimensional medical image into a medical image classification model to be trained as sample images respectively, and outputting a prediction classification result corresponding to the input sample images.
And S730, determining a target loss function according to the prediction classification result corresponding to the input sample image and the class label to which the input sample image belongs, and training the medical image classification model to be trained through the target loss function until a training stop condition is reached.
By acquiring at least two original three-dimensional medical images corresponding to the target task, overlapping fusion processing can be performed on them to obtain a fused three-dimensional medical image. The fused three-dimensional medical image can then be interpolated at least once based on the voxel information of each voxel to obtain a reconstructed three-dimensional medical image that fuses the image information of each original three-dimensional medical image, and a target class label corresponding to the reconstructed three-dimensional medical image is determined from the class labels to which the original three-dimensional medical images belong. Missing training data can thus be compensated for by the reconstructed three-dimensional medical image and its target class label, so that complete training data are obtained, and the medical image classification model trained on the complete training data generalizes well.
In addition, because the fused three-dimensional medical image is subjected to at least one interpolation processing based on the voxel information of each voxel in the fused three-dimensional medical image to obtain the reconstructed three-dimensional medical image, the reconstructed three-dimensional medical image not only contains the pixel information of each fused two-dimensional medical image, but also contains the layer information of each fused two-dimensional medical image, so that the generated reconstructed three-dimensional medical image is more accurate.
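The core of steps S708 through S716 can be sketched in numpy as a voxel-wise weighted overlay followed by interpolation along the slice axis. Only the depth component of the trilinear interpolation is shown (the in-plane resolution is kept unchanged), and the 0.5 fusion weight is an assumption:

```python
import numpy as np

def overlap_fuse(vol_a, vol_b, weight=0.5):
    """S708: voxel-wise weighted overlay of two equally shaped volumes."""
    return weight * vol_a + (1.0 - weight) * vol_b

def resample_depth(vol, new_depth):
    """S710-S716: linear interpolation along the slice axis, producing
    `new_depth` intermediate 2D images ordered by their layer positions."""
    depth = vol.shape[0]
    pos = np.linspace(0.0, depth - 1.0, new_depth)   # new slice positions
    lo = np.floor(pos).astype(int)                   # slice below each position
    hi = np.minimum(lo + 1, depth - 1)               # slice above each position
    frac = (pos - lo)[:, None, None]                 # fractional offsets
    return (1.0 - frac) * vol[lo] + frac * vol[hi]

a = np.random.rand(8, 16, 16)
b = np.random.rand(8, 16, 16)
fused = overlap_fuse(a, b)
intermediate = resample_depth(fused, 4)   # compressed to 4 intermediate slices
print(intermediate.shape)  # (4, 16, 16)
```

The nearest interpolation of S718/S720 would then restore the preset slice count from this intermediate stack.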
It should be understood that although the steps in the flowcharts of figs. 2 and 7 are shown in the sequence indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 2 and 7 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; nor is their execution order necessarily sequential, as they may be performed in turn or in alternation with other steps or with sub-steps or stages of other steps.
The application also provides an application scene, and the application scene applies the medical image processing method. Specifically, the application of the medical image processing method in the application scenario is as follows:
the researchers collected 1800 original three-dimensional medical images from three hospitals (600 taken of lungs with novel coronavirus pneumonia, 600 taken of lungs with community-acquired pneumonia, and 600 taken of normal lungs) and used these 1800 images as an original three-dimensional medical image library. Further, the computer device pulls at least two original three-dimensional medical images corresponding to the classification task from the library, where the category labels of the at least two images differ, and performs overlapping fusion processing on them to obtain a fused three-dimensional medical image. Further, the computer device determines the category labels to which the at least two original three-dimensional medical images respectively belong, and determines the target category label corresponding to the reconstructed three-dimensional medical image from those category labels. These steps are repeated until a preset number of reconstructed three-dimensional medical images is obtained. Further, the computer device uses the original three-dimensional medical images and their class labels, together with the reconstructed three-dimensional medical images and their class labels, as training data and training labels, and trains the medical image classification model until the training stop condition is reached, so that the trained medical image classification model can classify three-dimensional medical images taken of the lungs.
In order to test the classification performance of the trained medical image classification model, the research and development staff obtained, from a fourth hospital, 300 original three-dimensional medical images taken of lungs with novel coronavirus pneumonia, 150 taken of lungs with community-acquired pneumonia, and 150 taken of normal lungs as test data, tested the trained medical image classification model on these data, and further improved the model according to the test results.
The application also provides another application scenario, in which the medical image processing method is applied to expand the training data and thereby improve the classification performance of the medical image classification model. For some diseases, an outbreak may be limited to certain geographical regions, so the distribution of patients varies between regions. For example, patients with a disease may be concentrated in region A, while region B has only sporadic patients. The number of original three-dimensional medical images belonging to the abnormal category that the computer device can acquire from region B is then very limited. In this case, the computer device may process the original three-dimensional medical images of region B belonging to the normal category together with those belonging to the abnormal category using the medical image processing method of the embodiments of the present application, obtaining reconstructed three-dimensional medical images with corresponding target category labels that compensate for the missing data of region B. Training the medical image classification model on the training data expanded in this way yields a model with better classification performance.
In another application scenario, three-dimensional medical images may differ depending on the image acquisition device. When the data source providing the training data is too limited, a machine learning model may be fitted to model parameters that do not transfer well at inference time, degrading the model's performance. Therefore, when original three-dimensional medical images captured by two imaging devices (device A and device B) are acquired, the developer may set the category label of the original three-dimensional medical images captured by device A to "a" with corresponding weight w1, and the category label of the original three-dimensional medical images captured by device B to "b" with corresponding weight w2. The computer device can then, according to the medical image processing method, generate reconstructed image data of the mixed classification category w1·a + w2·b (i.e., reconstructed from the original three-dimensional medical images captured by device A and device B). The generated reconstructed image data enriches the data sources, thereby improving the generalization of the medical image classification model.
In another application scenario, for some diseases there may be few abnormal cases, owing to the form of the disease or the difficulty of data collection. When training the medical image classification model on multi-source data, the medical image processing method of the embodiments of the present application can then be used to generate training data for the under-represented categories. This alleviates the problem that, when categories are missing from multi-source data, classification models tend to identify samples by identifying their data source, and it ensures that the medical image classification model really learns useful information from the data. The model thus sees more varied training samples during the training phase, which improves its performance (i.e., generalization ability) on unknown data sources.
In one embodiment, as shown in fig. 8, there is provided a medical image processing apparatus 800, which may be a part of a computer device using software modules or hardware modules, or a combination of the two modules, the apparatus specifically including: an overlap fusion module 802, a reconstruction module 804, and a label determination module 806, wherein:
an overlap fusion module 802, configured to obtain at least two original three-dimensional medical images corresponding to a target task; performing overlapping fusion processing on at least two original three-dimensional medical images to obtain a fused three-dimensional medical image;
the reconstruction module 804 is configured to perform at least one interpolation process on the fused three-dimensional medical image based on voxel information of each voxel in the fused three-dimensional medical image to obtain a reconstructed three-dimensional medical image;
a tag determining module 806, configured to determine a target class tag corresponding to the reconstructed three-dimensional medical image based on the class tags to which the original three-dimensional medical images respectively belong; the reconstructed three-dimensional medical image and the corresponding target class label are used for forming training data for performing target task training on the medical image classification model.
In one embodiment, as shown in fig. 9, the overlap fusion module 802 is further configured to obtain a target task and determine a target part corresponding to the target task; and to acquire original three-dimensional medical images of the target part of different biological objects, the number of acquired original three-dimensional medical images being at least two; the data sources corresponding to the at least two original three-dimensional medical images are different, and/or the category labels corresponding to the at least two original three-dimensional medical images are different.
In one embodiment, the overlap fusion module 802 further comprises a resampling module 8021 for determining the number of original two-dimensional medical images included in each of the at least two original three-dimensional medical images; when the numbers of original two-dimensional medical images are inconsistent, resampling the at least two original three-dimensional medical images so that they include the same number of original two-dimensional medical images; and performing overlapping fusion processing on the at least two resampled original three-dimensional medical images to obtain a fused three-dimensional medical image.
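The resampling performed by module 8021 can be sketched as follows. Resampling every volume to the largest slice count by nearest indexing is one possible choice; the patent fixes neither the interpolation method nor the common count:

```python
import numpy as np

def equalize_slice_counts(volumes):
    """Resample every volume along the slice axis so that all volumes
    contain the same number of original 2D medical images."""
    target = max(vol.shape[0] for vol in volumes)  # assumed common count
    resampled = []
    for vol in volumes:
        # nearest-index positions of the target slices in the source stack
        idx = np.round(np.linspace(0, vol.shape[0] - 1, target)).astype(int)
        resampled.append(vol[idx])
    return resampled

vols = [np.random.rand(6, 8, 8), np.random.rand(10, 8, 8)]
print([v.shape[0] for v in equalize_slice_counts(vols)])  # [10, 10]
```

After this step the volumes can be overlaid voxel for voxel, which is what the subsequent fusion requires.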
In an embodiment, the reconstruction module 804 further includes a first interpolation processing module 8041, configured to perform first interpolation processing on the fused three-dimensional medical image based on voxel information of each voxel in the fused three-dimensional medical image to obtain an intermediate three-dimensional medical image; when the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is smaller than the preset number, perform second interpolation processing on the intermediate three-dimensional medical image to obtain a reconstructed three-dimensional medical image, the reconstructed three-dimensional medical image comprising the preset number of reconstructed two-dimensional medical images; and when the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is equal to the preset number, set the intermediate three-dimensional medical image as the reconstructed three-dimensional medical image.
In an embodiment, the first interpolation processing module 8041 is further configured to obtain preset compression parameters, and based on the compression parameters, perform grouping and dividing on each fused two-dimensional medical image in the fused three-dimensional medical images to obtain at least one fused two-dimensional medical image group; performing trilinear interpolation processing on each group of fused two-dimensional medical image groups respectively based on voxel information of each voxel in the fused three-dimensional medical image to obtain intermediate two-dimensional medical images respectively corresponding to each fused two-dimensional medical image group; and forming corresponding intermediate three-dimensional medical images based on the intermediate two-dimensional medical images.
In one embodiment, the first interpolation processing module 8041 is further configured to determine a target space region formed by the current fused two-dimensional medical image group and voxel information of each voxel in the target space region; performing trilinear interpolation processing on the target space region based on the voxel information of each voxel to obtain interpolation three-dimensional information of an interpolation point; and determining a corresponding intermediate two-dimensional medical image based on the interpolation three-dimensional information of the interpolation point.
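Trilinear interpolation at an interpolation point within the target space region can be sketched in pure numpy: the value is a weighted combination of the eight voxels surrounding the point, with weights given by the fractional offsets along each axis:

```python
import numpy as np

def trilinear_sample(vol, z, y, x):
    """Interpolate voxel grid `vol` at the fractional point (z, y, x)
    from the voxel information of the 8 surrounding voxels."""
    z0, y0, x0 = int(np.floor(z)), int(np.floor(y)), int(np.floor(x))
    z1 = min(z0 + 1, vol.shape[0] - 1)
    y1 = min(y0 + 1, vol.shape[1] - 1)
    x1 = min(x0 + 1, vol.shape[2] - 1)
    dz, dy, dx = z - z0, y - y0, x - x0
    value = 0.0
    for zi, wz in ((z0, 1 - dz), (z1, dz)):
        for yi, wy in ((y0, 1 - dy), (y1, dy)):
            for xi, wx in ((x0, 1 - dx), (x1, dx)):
                value += wz * wy * wx * vol[zi, yi, xi]
    return value

# trilinear interpolation reproduces a linear intensity field exactly
grid = np.fromfunction(lambda z, y, x: z + y + x, (4, 4, 4))
print(trilinear_sample(grid, 0.5, 0.5, 0.5))  # 1.5
```

Evaluating this at a plane of interpolation points with fixed fractional z yields one intermediate two-dimensional medical image for the group.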
In one embodiment, the first interpolation processing module 8041 is further configured to determine a layer relationship between the intermediate two-dimensional medical images according to the interpolation three-dimensional information of the interpolation points corresponding to the intermediate two-dimensional medical images; and sequencing the intermediate two-dimensional medical images according to the layer relation to obtain corresponding intermediate three-dimensional medical images.
In one embodiment, the reconstruction module 804 further includes a second interpolation processing module 8042, configured to determine the quantity difference between the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image and the preset number; perform nearest interpolation processing on the intermediate three-dimensional medical image to obtain that number of intermediate two-dimensional medical images to be supplemented; and supplement the intermediate two-dimensional medical images to be supplemented into the intermediate three-dimensional medical image to obtain the corresponding reconstructed three-dimensional medical image.
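The second (nearest) interpolation step can be sketched as follows; rebuilding the full stack by nearest indexing is an assumption about how the supplemented slices are merged back in:

```python
import numpy as np

def supplement_to_preset(intermediate, preset_count):
    """Make up the quantity difference between the intermediate 2D image
    count and the preset count by nearest interpolation along the slice axis."""
    count = intermediate.shape[0]
    if count >= preset_count:
        return intermediate  # nothing to supplement
    # nearest existing slice for each of the preset positions
    idx = np.round(np.linspace(0, count - 1, preset_count)).astype(int)
    return intermediate[idx]

stack = np.random.rand(4, 8, 8)
print(supplement_to_preset(stack, 6).shape)  # (6, 8, 8)
```

Nearest interpolation here only duplicates existing slices, so the supplemented images carry no fabricated intensities, which keeps the second pass cheap compared with the trilinear first pass.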
In one embodiment, the object class labels include a first object class label and a second object class label; the tag determining module 806 further includes a hard tag module 8061, configured to determine category tags to which the original three-dimensional medical images belong, respectively; when the category label to which each original three-dimensional medical image belongs comprises a preset category label, setting a target category label corresponding to the reconstructed three-dimensional medical image as a first target category label; and when the category label to which each original three-dimensional medical image belongs does not comprise a preset category label, setting the target category label corresponding to the reconstructed three-dimensional medical image as a second target category label.
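The hard-label rule of module 8061 can be sketched as follows (the concrete label strings are placeholders taken from the lung example, not values fixed by the patent):

```python
def hard_target_label(source_labels, preset_label="abnormal",
                      first_target="abnormal", second_target="normal"):
    """If any original image's class label matches the preset class label,
    the reconstructed image receives the first target label; otherwise
    it receives the second target label."""
    return first_target if preset_label in source_labels else second_target

print(hard_target_label(["normal", "abnormal"]))  # abnormal
print(hard_target_label(["normal", "normal"]))    # normal
```

Unlike the soft-label module below it, this rule collapses every mixture into one of two hard categories, which suits binary target tasks.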
In one embodiment, the label determining module 806 further includes a soft label module 8062, configured to determine a category label to which each original three-dimensional medical image belongs and a weight value corresponding to each category label; and performing label fusion processing on the class labels to which the original three-dimensional medical images belong respectively based on the weight values corresponding to the class labels respectively to obtain target class labels corresponding to the reconstructed three-dimensional medical images.
In one embodiment, the medical image processing apparatus 800 is further configured to input the original three-dimensional medical image and the reconstructed three-dimensional medical image as sample images to a medical image classification model to be trained, and output a prediction classification result corresponding to the input sample image; determining a target loss function according to a prediction classification result corresponding to the input sample image and a class label to which the input sample image belongs; and training the medical image classification model to be trained through the target loss function until the training stopping condition is reached.
In one embodiment, the medical image processing apparatus 800 is further configured to acquire a trained medical image classification model and acquire a to-be-processed three-dimensional medical image corresponding to the target portion; inputting the three-dimensional medical image to be processed into the trained medical image classification model, and classifying the three-dimensional medical image to be processed through the trained medical image classification model to obtain the class label corresponding to the three-dimensional medical image to be processed.
For specific limitations of the medical image processing apparatus, reference may be made to the above limitations of the medical image processing method, which are not described herein again. The modules in the medical image processing apparatus can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing medical image processing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a medical image processing method.
Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps in the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical storage, or the like. Volatile Memory can include Random Access Memory (RAM) or external cache Memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they are not to be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (25)

1. A method of medical image processing, the method comprising:
acquiring at least two original three-dimensional medical images corresponding to a target task;
performing overlapping fusion processing on the at least two original three-dimensional medical images to obtain a fused three-dimensional medical image;
performing first interpolation processing on the fused three-dimensional medical image based on voxel information of each voxel in the fused three-dimensional medical image to obtain an intermediate three-dimensional medical image;
when the number of the intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is smaller than a preset number, performing second interpolation processing on the intermediate three-dimensional medical image to obtain a reconstructed three-dimensional medical image; the reconstructed three-dimensional medical images comprise the preset number of reconstructed two-dimensional medical images;
when the number of the intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is equal to the preset number, setting the intermediate three-dimensional medical image as a reconstructed three-dimensional medical image;
determining a target class label corresponding to the reconstructed three-dimensional medical image based on the class label to which each original three-dimensional medical image belongs; the reconstructed three-dimensional medical image and the corresponding target class label are used for forming training data for performing target task training on the medical image classification model.
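The claim does not fix a particular fusion operator for the overlapping-fusion step; one common reading is a voxel-wise weighted average of the aligned volumes. A minimal sketch under that assumption (the uniform default weights and the requirement of equally shaped volumes are illustrative choices, not part of the claim):

```python
import numpy as np

def overlap_fuse(volumes, weights=None):
    """Fuse aligned 3-D volumes by a voxel-wise weighted average.

    volumes: list of equally shaped (slices, H, W) arrays.
    weights: optional per-volume mixing weights; defaults to uniform.
    """
    stack = np.stack([np.asarray(v, dtype=np.float64) for v in volumes])
    if weights is None:
        weights = np.full(len(volumes), 1.0 / len(volumes))
    weights = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1, 1)
    # Broadcast each weight over its whole volume, then sum over volumes.
    return (stack * weights).sum(axis=0)
```

Fusing two volumes with weights 0.8/0.2, for instance, biases the result toward the first source image; the patent itself leaves the mixing scheme open.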
2. The method of claim 1, wherein said acquiring at least two original three-dimensional medical images corresponding to a target task comprises:
acquiring a target task and determining a target part corresponding to the target task;
acquiring original three-dimensional medical images of the target part of different biological objects, wherein the number of the acquired original three-dimensional medical images is at least two; the data sources corresponding to the at least two original three-dimensional medical images are different, and/or the category labels corresponding to the at least two original three-dimensional medical images are different.
3. The method according to claim 1, wherein said performing an overlapping fusion process on said at least two original three-dimensional medical images to obtain a fused three-dimensional medical image comprises:
determining the number of original two-dimensional medical images included in each of the at least two original three-dimensional medical images;
when the numbers are inconsistent, resampling the at least two original three-dimensional medical images so that the at least two original three-dimensional medical images comprise the same number of original two-dimensional medical images;
and performing overlapping fusion processing on at least two original three-dimensional medical images after resampling processing to obtain a fused three-dimensional medical image.
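Claim 3 requires resampling so that every source volume has the same slice count before fusion, but does not name the resampling method. A sketch assuming linear interpolation along the slice axis (the choice of linear interpolation and the (slices, H, W) layout are assumptions for illustration):

```python
import numpy as np

def resample_slices(volume, target_slices):
    """Linearly resample a (slices, H, W) volume to target_slices along axis 0."""
    volume = np.asarray(volume, dtype=np.float64)
    n = volume.shape[0]
    if n == target_slices:
        return volume
    old = np.linspace(0.0, 1.0, n)
    new = np.linspace(0.0, 1.0, target_slices)
    # Interpolate each pixel position independently along the slice axis.
    flat = volume.reshape(n, -1)
    out = np.empty((target_slices, flat.shape[1]))
    for j in range(flat.shape[1]):
        out[:, j] = np.interp(new, old, flat[:, j])
    return out.reshape((target_slices,) + volume.shape[1:])
```

Because the endpoints of the old and new sampling grids coincide, the first and last slices of the source volume are preserved exactly.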
4. The method according to claim 1, wherein the three-dimensional medical image is a three-dimensional tissue image having spatial position information acquired non-invasively with respect to a target portion of a biological object.
5. The method according to claim 1, wherein the performing a first interpolation process on the fused three-dimensional medical image based on voxel information of each voxel in the fused three-dimensional medical image to obtain an intermediate three-dimensional medical image comprises:
acquiring preset compression parameters, and grouping each fused two-dimensional medical image in the fused three-dimensional medical image based on the compression parameters to obtain at least one fused two-dimensional medical image group;
performing trilinear interpolation processing on each group of fused two-dimensional medical image groups respectively based on voxel information of each voxel in the fused three-dimensional medical image to obtain intermediate two-dimensional medical images respectively corresponding to each fused two-dimensional medical image group;
and constructing a corresponding intermediate three-dimensional medical image based on each intermediate two-dimensional medical image.
6. The method according to claim 5, wherein the performing trilinear interpolation processing on each group of fused two-dimensional medical image groups respectively based on voxel information of each voxel in the fused three-dimensional medical image to obtain an intermediate two-dimensional medical image corresponding to each fused two-dimensional medical image group respectively comprises:
determining a target space region formed by the current fusion two-dimensional medical image group and voxel information of each voxel in the target space region;
performing trilinear interpolation processing on the target space region based on the voxel information of each voxel to obtain interpolation three-dimensional information of an interpolation point;
and determining a corresponding intermediate two-dimensional medical image based on the interpolation three-dimensional information of the interpolation point.
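Claims 5 and 6 collapse each group of fused slices into one intermediate slice via trilinear interpolation over the group's target space region. The claims do not state where the interpolation points lie; the sketch below assumes they sit on the group's mid-plane. Both `trilinear_sample` and the mid-plane choice are illustrative, not the claimed implementation:

```python
import numpy as np

def trilinear_sample(volume, z, y, x):
    """Trilinear interpolation of a (D, H, W) volume at a fractional point."""
    v = np.asarray(volume, dtype=np.float64)
    z0, y0, x0 = int(np.floor(z)), int(np.floor(y)), int(np.floor(x))
    z1 = min(z0 + 1, v.shape[0] - 1)
    y1 = min(y0 + 1, v.shape[1] - 1)
    x1 = min(x0 + 1, v.shape[2] - 1)
    dz, dy, dx = z - z0, y - y0, x - x0
    # Interpolate along x, then y, then z.
    c00 = v[z0, y0, x0] * (1 - dx) + v[z0, y0, x1] * dx
    c01 = v[z0, y1, x0] * (1 - dx) + v[z0, y1, x1] * dx
    c10 = v[z1, y0, x0] * (1 - dx) + v[z1, y0, x1] * dx
    c11 = v[z1, y1, x0] * (1 - dx) + v[z1, y1, x1] * dx
    c0 = c00 * (1 - dy) + c01 * dy
    c1 = c10 * (1 - dy) + c11 * dy
    return c0 * (1 - dz) + c1 * dz

def compress_group(group):
    """Collapse a (k, H, W) slice group into a single (H, W) intermediate
    slice by sampling the group's mid-plane (an assumed choice)."""
    g = np.asarray(group, dtype=np.float64)
    k, h, w = g.shape
    mid = (k - 1) / 2.0
    return np.array([[trilinear_sample(g, mid, i, j) for j in range(w)]
                     for i in range(h)])
```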
7. The method of claim 6, wherein said constructing respective intermediate three-dimensional medical images based on each of said intermediate two-dimensional medical images comprises:
determining the layer relation between the intermediate two-dimensional medical images according to the interpolation three-dimensional information of the interpolation points corresponding to the intermediate two-dimensional medical images;
and sequencing the intermediate two-dimensional medical images according to the layer relation to obtain corresponding intermediate three-dimensional medical images.
8. The method according to claim 1, wherein when the number of the intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is smaller than a preset number, performing a second interpolation process on the intermediate three-dimensional medical image to obtain a reconstructed three-dimensional medical image includes:
determining a number difference between the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image and the preset number;
performing nearest-neighbor interpolation processing on the intermediate three-dimensional medical image to obtain a number of intermediate two-dimensional medical images to be supplemented equal to the number difference;
and supplementing the intermediate two-dimensional medical image to be supplemented into the intermediate three-dimensional medical image to obtain a corresponding reconstructed three-dimensional medical image.
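Claim 8 fills the gap between the intermediate slice count and the preset number by nearest interpolation. One simple realization (the uniform spacing of target indices is an assumption) is to nearest-neighbor resample along the slice axis, which duplicates existing slices rather than synthesizing new intensity values:

```python
import numpy as np

def pad_to_preset(volume, preset):
    """Nearest-neighbor resample a (slices, H, W) volume up to `preset` slices."""
    volume = np.asarray(volume)
    n = volume.shape[0]
    if n >= preset:
        return volume
    # Index of the nearest source slice for each target slice.
    idx = np.rint(np.linspace(0, n - 1, preset)).astype(int)
    return volume[idx]
```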
9. The method of claim 1, wherein the object class labels comprise a first object class label and a second object class label;
the determining a target category label corresponding to the reconstructed three-dimensional medical image based on the category label to which each of the original three-dimensional medical images belongs includes:
determining a category label to which each original three-dimensional medical image belongs;
when the category label to which each original three-dimensional medical image belongs comprises a preset category label, setting a target category label corresponding to the reconstructed three-dimensional medical image as a first target category label;
and when the category label to which each original three-dimensional medical image belongs does not comprise a preset category label, setting the target category label corresponding to the reconstructed three-dimensional medical image as a second target category label.
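Claim 9's hard-label rule reduces to a membership test: if any source volume carries the preset category label, the reconstructed volume receives the first target label, otherwise the second. A minimal sketch (the label strings are placeholders, not terms from the patent):

```python
def hard_target_label(source_labels, preset_label,
                      first_label="positive", second_label="negative"):
    """If any source volume carries the preset label, assign the first
    target label to the reconstructed volume; otherwise the second."""
    return first_label if preset_label in source_labels else second_label
```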
10. The method according to claim 1, wherein the determining a target class label corresponding to the reconstructed three-dimensional medical image based on the class label to which each of the original three-dimensional medical images belongs includes:
determining the class label to which each original three-dimensional medical image belongs respectively and the weight value corresponding to each class label respectively;
and performing label fusion processing on the category labels to which the original three-dimensional medical images belong respectively based on the weight values corresponding to the category labels respectively to obtain target category labels corresponding to the reconstructed three-dimensional medical images.
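Claim 10's soft-label variant fuses the source labels by their weights. A natural reading, sketched below, is a weighted sum of one-hot label vectors, yielding a soft target distribution over classes (the one-hot encoding and normalized weights are assumptions; the claim only requires weight-based label fusion):

```python
import numpy as np

def soft_target_label(labels, weights, num_classes):
    """Weighted fusion of per-volume class labels into one soft target.

    labels: integer class index for each source volume.
    weights: per-volume weight values, assumed to sum to 1.
    """
    labels = np.asarray(labels)
    weights = np.asarray(weights, dtype=np.float64)
    one_hot = np.eye(num_classes)[labels]          # (n_volumes, num_classes)
    return (one_hot * weights[:, None]).sum(axis=0)
```

With labels [0, 1] and weights [0.7, 0.3] this yields the target distribution [0.7, 0.3], so the fused sample carries graded evidence from both source images.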
11. The method according to any one of claims 1 to 10, further comprising:
inputting the original three-dimensional medical image and the reconstructed three-dimensional medical image respectively as sample images into a medical image classification model to be trained, and outputting a prediction classification result corresponding to the input sample image;
determining a target loss function according to a prediction classification result corresponding to the input sample image and a class label to which the input sample image belongs;
and training the medical image classification model to be trained through the target loss function until a training stopping condition is reached.
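The claim leaves the target loss function open. For classification against the soft targets of claim 10, a standard candidate (an assumption, not the claimed loss) is softmax cross-entropy between the model's logits and the target distribution:

```python
import numpy as np

def cross_entropy(logits, target):
    """Softmax cross-entropy between a logit vector and a (possibly soft)
    target distribution; one candidate for the claimed target loss."""
    z = logits - np.max(logits)                    # numerical stability
    log_probs = z - np.log(np.sum(np.exp(z)))
    return float(-np.sum(np.asarray(target) * log_probs))
```

During training this would be averaged over the batch of original and reconstructed sample images and minimized until the stopping condition is reached.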
12. The method of claim 11, further comprising:
acquiring a trained medical image classification model, and acquiring a to-be-processed three-dimensional medical image corresponding to a target part;
inputting the three-dimensional medical image to be processed into the trained medical image classification model, and classifying the three-dimensional medical image to be processed through the trained medical image classification model to obtain a class label corresponding to the three-dimensional medical image to be processed.
13. A medical image processing apparatus, characterized in that the apparatus comprises:
the overlapping fusion module is used for acquiring at least two original three-dimensional medical images corresponding to the target task; performing overlapping fusion processing on the at least two original three-dimensional medical images to obtain a fused three-dimensional medical image;
the reconstruction module is used for performing first interpolation processing on the fused three-dimensional medical image based on voxel information of each voxel in the fused three-dimensional medical image to obtain an intermediate three-dimensional medical image; when the number of the intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is smaller than the preset number, performing second interpolation processing on the intermediate three-dimensional medical image to obtain a reconstructed three-dimensional medical image; the reconstructed three-dimensional medical images comprise the preset number of reconstructed two-dimensional medical images; when the number of the intermediate two-dimensional medical images included in the intermediate three-dimensional medical image is equal to the preset number, setting the intermediate three-dimensional medical image as a reconstructed three-dimensional medical image;
the label determining module is used for determining a target class label corresponding to the reconstructed three-dimensional medical image based on the class label to which each original three-dimensional medical image belongs; the reconstructed three-dimensional medical image and the corresponding target class label are used for forming training data for performing target task training on the medical image classification model.
14. The apparatus of claim 13, wherein the overlap-and-merge module is further configured to obtain a target task and determine a target portion corresponding to the target task; acquiring original three-dimensional medical images of the target part of different biological objects, wherein the number of the acquired original three-dimensional medical images is at least two; the data sources corresponding to the at least two original three-dimensional medical images are different, and/or the category labels corresponding to the at least two original three-dimensional medical images are different.
15. The apparatus of claim 13, wherein the overlapping fusion module further comprises a resampling module for determining the number of original two-dimensional medical images each of the at least two original three-dimensional medical images comprises; when the numbers are inconsistent, resampling the at least two original three-dimensional medical images so that the at least two original three-dimensional medical images comprise the same number of original two-dimensional medical images; and performing overlapping fusion processing on the at least two original three-dimensional medical images after resampling processing to obtain a fused three-dimensional medical image.
16. The apparatus according to claim 13, wherein the reconstruction module includes a first interpolation processing module, and the first interpolation processing module is further configured to obtain preset compression parameters, and group each fused two-dimensional medical image in the fused three-dimensional medical image based on the compression parameters to obtain at least one fused two-dimensional medical image group; perform trilinear interpolation processing on each group of fused two-dimensional medical image groups respectively based on voxel information of each voxel in the fused three-dimensional medical image to obtain intermediate two-dimensional medical images respectively corresponding to each fused two-dimensional medical image group; and construct a corresponding intermediate three-dimensional medical image based on each intermediate two-dimensional medical image.
17. The apparatus of claim 16, wherein the first interpolation processing module is further configured to determine a target space region formed by the currently fused two-dimensional medical image group and voxel information of each voxel in the target space region; performing trilinear interpolation processing on the target space region based on the voxel information of each voxel to obtain interpolation three-dimensional information of an interpolation point; and determining a corresponding intermediate two-dimensional medical image based on the interpolation three-dimensional information of the interpolation point.
18. The apparatus according to claim 17, wherein the first interpolation processing module is further configured to determine a layer relationship between the intermediate two-dimensional medical images according to the interpolated three-dimensional information of the interpolation points corresponding to the intermediate two-dimensional medical images; and sequencing the intermediate two-dimensional medical images according to the layer relation to obtain corresponding intermediate three-dimensional medical images.
19. The apparatus of claim 13, wherein the reconstruction module further comprises a second interpolation processing module for determining a number difference between the number of intermediate two-dimensional medical images included in the intermediate three-dimensional medical image and the preset number; performing nearest-neighbor interpolation processing on the intermediate three-dimensional medical image to obtain a number of intermediate two-dimensional medical images to be supplemented equal to the number difference; and supplementing the intermediate two-dimensional medical images to be supplemented into the intermediate three-dimensional medical image to obtain a corresponding reconstructed three-dimensional medical image.
20. The apparatus of claim 13, wherein the object class labels comprise a first object class label and a second object class label; the label determining module further comprises a hard label module for determining the category label to which each original three-dimensional medical image belongs; when the category label to which each original three-dimensional medical image belongs comprises a preset category label, setting a target category label corresponding to the reconstructed three-dimensional medical image as a first target category label; and when the category label to which each original three-dimensional medical image belongs does not comprise a preset category label, setting the target category label corresponding to the reconstructed three-dimensional medical image as a second target category label.
21. The apparatus according to claim 13, wherein the label determining module further comprises a soft label module, configured to determine a category label to which each of the original three-dimensional medical images belongs and a weight value corresponding to each category label; and performing label fusion processing on the category labels to which the original three-dimensional medical images belong respectively based on the weight values corresponding to the category labels respectively to obtain target category labels corresponding to the reconstructed three-dimensional medical images.
22. The apparatus according to any one of claims 13 to 21, wherein the medical image processing apparatus is further configured to input the original three-dimensional medical image and the reconstructed three-dimensional medical image as sample images to a medical image classification model to be trained, and output a prediction classification result corresponding to the input sample image; determining a target loss function according to a prediction classification result corresponding to the input sample image and a class label to which the input sample image belongs; and training the medical image classification model to be trained through the target loss function until a training stopping condition is reached.
23. The apparatus according to claim 22, wherein the medical image processing apparatus is further configured to acquire a trained medical image classification model and acquire a three-dimensional medical image to be processed corresponding to the target portion; inputting the three-dimensional medical image to be processed into the trained medical image classification model, and classifying the three-dimensional medical image to be processed through the trained medical image classification model to obtain a class label corresponding to the three-dimensional medical image to be processed.
24. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 12.
25. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 12.
CN202011199755.0A 2020-11-02 2020-11-02 Medical image processing method, medical image processing device, computer equipment and storage medium Active CN112102315B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011199755.0A CN112102315B (en) 2020-11-02 2020-11-02 Medical image processing method, medical image processing device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112102315A CN112102315A (en) 2020-12-18
CN112102315B true CN112102315B (en) 2021-02-19

Family

ID=73784448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011199755.0A Active CN112102315B (en) 2020-11-02 2020-11-02 Medical image processing method, medical image processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112102315B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052831B (en) * 2021-04-14 2024-04-23 清华大学 Brain medical image anomaly detection method, device, equipment and storage medium
CN114529772B (en) * 2022-04-19 2022-07-15 广东唯仁医疗科技有限公司 OCT three-dimensional image classification method, system, computer device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146830A (en) * 2018-07-17 2019-01-04 北京旷视科技有限公司 For generating the method, apparatus, system and storage medium of training data
CN110251231A (en) * 2019-06-13 2019-09-20 艾瑞迈迪科技石家庄有限公司 The method and device that ultrasonic three-dimensional is rebuild
CN110910467A (en) * 2019-12-03 2020-03-24 浙江啄云智能科技有限公司 X-ray image sample generation method, system and application
CN111242905A (en) * 2020-01-06 2020-06-05 科大讯飞(苏州)科技有限公司 Method and equipment for generating X-ray sample image and storage device
CN111553436A (en) * 2020-04-30 2020-08-18 上海鹰瞳医疗科技有限公司 Training data generation method, model training method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6045355B2 (en) * 2013-01-11 2016-12-14 オリンパス株式会社 Image processing apparatus, microscope system, and image processing program
CN110245757B (en) * 2019-06-14 2022-04-01 上海商汤智能科技有限公司 Image sample processing method and device, electronic equipment and storage medium
CN111292396B (en) * 2020-01-16 2023-08-29 武汉轻工大学 Image sample set generation method, device, apparatus and storage medium
CN111539957B (en) * 2020-07-07 2023-04-18 浙江啄云智能科技有限公司 Image sample generation method, system and detection method for target detection



Similar Documents

Publication Publication Date Title
US11887311B2 (en) Method and apparatus for segmenting a medical image, and storage medium
CN111598998B (en) Three-dimensional virtual model reconstruction method, three-dimensional virtual model reconstruction device, computer equipment and storage medium
CN111429460B (en) Image segmentation method, image segmentation model training method, device and storage medium
CN109978037B (en) Image processing method, model training method, device and storage medium
CN108735279B (en) Virtual reality upper limb rehabilitation training system for stroke in brain and control method
CN111597946B (en) Processing method of image generator, image generation method and device
CN112102315B (en) Medical image processing method, medical image processing device, computer equipment and storage medium
CN114283151A (en) Image processing method, device, equipment and storage medium for medical image
CN111242948B (en) Image processing method, image processing device, model training method, model training device, image processing equipment and storage medium
CN111429421A (en) Model generation method, medical image segmentation method, device, equipment and medium
CN111667459B (en) Medical sign detection method, system, terminal and storage medium based on 3D variable convolution and time sequence feature fusion
CN111754396A (en) Face image processing method and device, computer equipment and storage medium
US20230326173A1 (en) Image processing method and apparatus, and computer-readable storage medium
CN116721210A (en) Real-time efficient three-dimensional reconstruction method and device based on neurosigned distance field
Zhao et al. Occupancy planes for single-view rgb-d human reconstruction
CN116993948B (en) Face three-dimensional reconstruction method, system and intelligent terminal
CN114494543A (en) Action generation method and related device, electronic equipment and storage medium
CN112906675B (en) Method and system for detecting non-supervision human body key points in fixed scene
Chen et al. Prior-knowledge-based self-attention network for 3D human pose estimation
CN114972026A (en) Image processing method and storage medium
CN112418399B (en) Method and device for training gesture estimation model and method and device for gesture estimation
JP7178016B2 (en) Image processing device and its image processing method
CN112101371B (en) Data processing method and device, electronic equipment and computer storage medium
Zhou et al. Automatic segmentation algorithm of femur and tibia based on Vnet-C network
CN111598904B (en) Image segmentation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40034941

Country of ref document: HK