CN111210441A - Tumor prediction method and device, cloud platform and computer-readable storage medium - Google Patents


Info

Publication number
CN111210441A
Authority
CN
China
Prior art keywords
features
data
tumor
target
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010001251.7A
Other languages
Chinese (zh)
Inventor
邓胡川
赵安江
高杰临
丁瑞鹏
谢庆国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Raycan Technology Co Ltd
Original Assignee
Raycan Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Raycan Technology Co Ltd filed Critical Raycan Technology Co Ltd
Priority to CN202010001251.7A priority Critical patent/CN111210441A/en
Publication of CN111210441A publication Critical patent/CN111210441A/en
Priority to PCT/CN2020/132372 priority patent/WO2021135774A1/en
Pending legal-status Critical Current

Classifications

    • G06T 7/11 — Region-based segmentation (G Physics › G06 Computing; calculating or counting › G06T Image data processing or generation, in general › G06T 7/00 Image analysis › G06T 7/10 Segmentation; edge detection)
    • G06N 20/00 — Machine learning (G Physics › G06 Computing; calculating or counting › G06N Computing arrangements based on specific computational models)
    • G06T 2207/10072 — Tomographic images (G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/10 Image acquisition modality)
    • G06T 2207/20081 — Training; Learning (G06T 2207/00 › G06T 2207/20 Special algorithmic details)
    • G06T 2207/30096 — Tumor; Lesion (G06T 2207/00 › G06T 2207/30 Subject of image; context of image processing › G06T 2207/30004 Biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present application discloses a tumor prediction method, a tumor prediction device, a cloud platform and a computer-readable storage medium. The tumor prediction method is executed on the cloud platform and comprises the following steps: calling an acquired target prediction model to segment an acquired target image of a target patient so as to obtain a segmented image containing a tumor region; extracting high-dimensional features and depth features from the segmented image; screening the high-dimensional features and the depth features according to preset conditions; and calling the target prediction model to fuse the screened depth features and high-dimensional features into fusion features and to predict the tumor classification of the target patient from the fusion features. The technical solution provided by the embodiments of the present application can improve the accuracy of tumor classification prediction and efficiently assist doctors in diagnosis.

Description

Tumor prediction method and device, cloud platform and computer-readable storage medium
Technical Field
The present application relates to the field of medical data processing technologies, and in particular to a tumor prediction method and apparatus, a cloud platform, and a computer-readable storage medium.
Background
Tumor tissue differs to varying degrees from the normal tissue in which it originates, both in cell morphology and in tissue structure; this difference is called heterogeneity. The degree of heterogeneity can be expressed by the degree of differentiation and maturity of the tumor tissue: low heterogeneity indicates a high degree of differentiation and a low degree of malignancy, whereas poorly differentiated tumor tissue is highly malignant.
Malignant tumors are divided into early, middle and late stages: most early-stage malignant tumors can be cured, while treatment of middle- and late-stage tumors mainly relieves pain and prolongs life, so the classification and prediction of tumors are particularly important. At present, tumor classification prediction mainly uses radiomics, which extracts a large number of quantitative image features from a region of interest in medical images such as computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) images, screens and analyzes these features with machine learning methods, selects the features most relevant to the clinical problem, builds a model from the selected features, and uses the model to diagnose and predict the clinical phenotype of the tumor.
In implementing the present application, the inventors found that the prior art has at least the following problems:
(1) Existing radiomics workflows spread the processing steps over multiple systems or software packages: the tumor region is segmented in image processing software such as 3D Slicer or MaZda, the result is imported into software such as MATLAB or Python to train a model, and finally software such as SPSS or R is used to plot the area under the curve (AUC), survival curves, and the like. This requires configuring the complex environments of several software packages, and the multi-system processing makes data collection and organization burdensome and prone to data loss, incomplete data information, and inability to share data.
(2) Existing radiomics methods are generally tested on imaging data from a single data center, so the resulting tumor classification prediction models have low repeatability, generality and interference resistance and are difficult to apply widely.
(3) Existing radiomics methods basically target a single tumor type (for example, kidney cancer or lung cancer): the image is manually delineated and segmented layer by layer, high-dimensional features such as gray-intensity features, three-dimensional shape features, texture features and wavelet features are extracted from the delineated region, and the analysis is then carried out on these features. However, because different tumors differ greatly, extracting the same features cannot sufficiently express the deep information and hidden information of different tumor regions, so existing radiomics methods are not universal across tumor types.
(4) In existing radiomics, accurate and fast segmentation of tumors remains a great challenge. As regards accuracy, manual segmentation by a physician is still the gold standard, which depends heavily on the physician's skill and experience and has low reproducibility. Traditional manual segmentation is performed by professional imaging physicians, who cannot process patient image data in large batches, making the process unavoidably time-consuming and labor-intensive. Even with semi-automatic segmentation, the physician must label the target region and the background region on multiple images for each patient; although this reduces the physician's workload, it is still time-consuming and labor-intensive.
Disclosure of Invention
An embodiment of the present application provides a tumor prediction method, a tumor prediction apparatus, a cloud platform, and a computer-readable storage medium, so as to solve at least one of the problems in the prior art.
In order to solve the above technical problem, an embodiment of the present application provides a tumor prediction method, which may be executed on a cloud platform and may include:
calling the acquired target prediction model to segment the acquired target image of the target patient so as to obtain a segmented image containing a tumor region;
extracting high-dimensional features and depth features from the obtained segmented image;
screening the high-dimensional features and the depth features according to preset conditions;
and calling the target prediction model to fuse the screened depth features and the screened high-dimensional features to obtain fusion features, and predicting the tumor classification of the target patient according to the fusion features.
Optionally, the target prediction model is obtained by:
the target prediction model is obtained from an external device or locally.
Optionally, locally obtaining the target prediction model comprises:
training and verifying a pre-constructed machine learning model by using the acquired sample image data, wherein the sample image data comprises training data and verification data and is matched with the target image;
and determining the machine learning model which achieves the optimal training effect and passes the verification as the target prediction model.
Optionally, before training the machine learning model, the tumor prediction method comprises:
selecting the pre-stored sample image data from a local database; or
The sample image data is obtained by processing the received patient data.
Optionally, obtaining the sample image data by processing the received patient data comprises:
performing format parsing on the received patient data;
and selecting the sample image data from the analyzed patient data according to a preset standard.
Optionally, the preset criteria include whether the patient data is complete, whether it has been clinically confirmed, and whether it meets clinical indicators.
Optionally, the screening the depth features and the high-dimensional features extracted from the target image according to a preset condition includes:
and screening the high-dimensional features and the depth features by utilizing a sparse representation algorithm, a lasso algorithm, a Fisher discriminant method, a feature selection algorithm based on maximum correlation-minimum redundancy or a feature selection algorithm based on conditional mutual information to screen out the high-dimensional features and the depth features meeting the preset conditions.
Optionally, the target image comprises a CT image, an MRI image, a PET image, a US image, a SPECT image and/or a PET/CT image.
Optionally, the target prediction model comprises an AlexNet model or a VGGNet model.
The embodiment of the present application further provides a tumor prediction apparatus, which may be disposed on a cloud platform, and may include:
a segmentation unit configured to invoke the acquired target prediction model to segment the acquired target image of the target patient to obtain a segmented image containing a tumor region;
an extraction unit configured to extract high-dimensional features and depth features from the obtained segmented image;
a screening unit configured to screen the high-dimensional features and the depth features according to a preset condition;
a fusion unit configured to invoke the target prediction model to fuse the screened depth features and the high-dimensional features to obtain fusion features;
a prediction unit configured to predict a tumor classification of the target patient according to the fused features.
Optionally, the tumor prediction apparatus further comprises:
an acquisition unit configured to acquire the target prediction model by: training and verifying a pre-constructed machine learning model by using the acquired sample image data, wherein the sample image data comprises training data and verification data and is matched with the target image, and determining the machine learning model which achieves the optimal training effect and passes the verification as the target prediction model.
An embodiment of the present application further provides a cloud platform comprising the above tumor prediction device.
Optionally, the cloud platform further comprises:
a data management device configured to manage user rights and received user data, the user data including patient data and user account information.
Optionally, the cloud platform further comprises one or more of the following:
a resource monitoring device configured to monitor usage of the resource and a performance parameter of the network according to the received monitoring instruction;
a visualization processing device configured to display the received user data, the processing results output by the tumor prediction device, and the constructed nomogram and/or survival graph;
a data storage device configured to store various data output by the data management device and the tumor prediction device;
a control device configured to operate the tumor prediction device, the data management device, the resource monitoring device, the visualization processing device, and the data storage device.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed, can implement the above tumor prediction method.
According to the technical solutions provided in the embodiments of the present application, the tumor classification of the target patient is predicted by calling a target prediction model on a cloud platform rather than across multiple systems or software packages, which simplifies the operating environment for tumor classification prediction and can improve its accuracy. In addition, the embodiments extract not only high-dimensional features but also depth features from the segmented image; by accounting for differences in the features to be extracted from images of different tumors or different imaging devices, tumor heterogeneity is fully characterized and the deep information and hidden information of different tumor regions can be fully expressed, so the method is generally applicable. Furthermore, the tumor prediction method provided in the embodiments segments the image automatically, which improves segmentation speed and accuracy and saves labor and time.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a diagram of an application environment of a tumor prediction method in an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method of tumor prediction provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a tumor prediction apparatus provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a cloud platform provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments explain only some, not all, of the embodiments of the present application and are not intended to limit the scope of the present application or the claims. All other embodiments that can be derived by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present application.
It will be understood that when an element is referred to as being "disposed on" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected/coupled" to another element, it can be directly connected/coupled to the other element or intervening elements may also be present. The term "connected/coupled" as used herein may include electrical and/or mechanical physical connections/couplings. The term "comprises/comprising" as used herein refers to the presence of features, steps or elements, but does not preclude the presence or addition of one or more other features, steps or elements. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
In addition, in the description of the present application, the terms "first", "second", "third", and the like are used for descriptive purposes only and to distinguish similar objects, and there is no order of precedence between the two, and no indication or implication of relative importance is to be inferred. In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified.
Fig. 1 is an application environment diagram of a tumor prediction method in an embodiment. Referring to fig. 1, the method may be applied to a cloud platform. The cloud platform includes a terminal 100 and a server 200 connected through a network. The method may be executed in the terminal 100 or the server 200, for example, the terminal 100 may directly acquire patient data including image data of a target patient from a medical device, and execute the above method on the terminal side; alternatively, the terminal 100 may also transmit the patient data to the server 200 after acquiring the patient data of the target patient, so that the server 200 acquires the patient data of the target patient and performs the above-described method. The terminal 100 may specifically be a desktop terminal (e.g., a desktop computer) or a mobile terminal (e.g., a notebook computer or a tablet computer), and the like. The server 200 may be implemented as a stand-alone server or as a server cluster comprising a plurality of servers.
Fig. 2 is a tumor prediction method provided in an embodiment of the present application, which may be executed on a cloud platform and may include the following steps:
s1: and obtaining a target prediction model.
The target prediction model may be any neural network model used to predict a tumor classification, for example an AlexNet model or a VGGNet model. The AlexNet model mainly comprises an 8-layer structure of 5 convolutional layers and 3 fully connected layers; the VGGNet model may have a 16-layer structure (13 convolutional layers and 3 fully connected layers, interleaved with 5 pooling layers) or a 19-layer structure, but is not limited thereto.
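As a hedged illustration (a sketch, not the patent's implementation), the snippet below instantiates torchvision's stock AlexNet, whose layer counts match the 5-convolutional/3-fully-connected structure described above, and swaps its final fully connected layer for a two-class tumor prediction head:

```python
# A minimal sketch assuming PyTorch/torchvision; not the patent's own code.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=None)      # 5 convolutional + 3 fully connected layers
model.classifier[6] = nn.Linear(4096, 2)  # two-class head: benign vs. malignant (assumed labels)

x = torch.randn(1, 3, 224, 224)           # dummy 224x224 input standing in for a target image
logits = model(x)                         # shape: (1, 2)
```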
Upon receiving an instruction indicating that a tumor prediction is to be performed, the target prediction model may be obtained from an external device or locally. Here, the external device may refer to a device outside the cloud platform, in which case "locally" refers to the cloud platform; alternatively, the external device may refer to a device on the cloud platform other than the tumor prediction device, in which case "locally" refers to the tumor prediction device.
Locally obtaining the target prediction model may include:
(1) the sample image data is selected from a local database as previously stored or obtained by processing the received patient data.
The sample image data may include single-modality image data such as CT (computed tomography), MRI (magnetic resonance imaging), PET (positron emission tomography), US (ultrasound) and SPECT (single-photon emission computed tomography) data from a plurality of medical institutions, as well as multi-modality image data such as PET/CT data. The sample image data can be divided into training data, used to train the machine learning model, and verification data, used to verify the training result of the machine learning model. The ratio between the two is typically 7:3 or 8:2.
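A minimal sketch of this 7:3 / 8:2 division, assuming scikit-learn; `images` and `labels` are random stand-ins for the sample image data and their tumor labels:

```python
import numpy as np
from sklearn.model_selection import train_test_split

images = np.random.rand(100, 64, 64)    # hypothetical stack of 100 image slices
labels = np.random.randint(0, 2, 100)   # hypothetical benign/malignant labels

train_imgs, val_imgs, train_labels, val_labels = train_test_split(
    images, labels,
    test_size=0.2,       # 8:2 split; use 0.3 for a 7:3 split
    stratify=labels,     # keep class proportions in both subsets
    random_state=42,
)
```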
The patient data may include image data of different types of patients and patient case information, such as gender, age, height, weight, and the like.
In one embodiment, a large amount of corresponding image data may be selected from the local database as sample image data according to the received instruction, with most of the image data randomly assigned as training data and the rest as verification data according to a preset ratio.
In another embodiment, the received patient data may be format-parsed according to the received instruction; for example, different types of image data may be parsed into the DICOM format. Corresponding image data can then be selected from the parsed patient data as sample image data according to preset criteria, such as whether the patient data is complete, whether it has been clinically confirmed, and whether it meets clinical indicators. For example, if the clinical data of a patient or the patient's case information is incomplete, that patient's image data is not selected as sample image data. It can also be determined whether the image data has been confirmed by clinical means; for example, if a patient's malignant tumor is confirmed by biopsy, that patient's image data may be selected as sample image data. Likewise, image data judged by a doctor to be too small or abnormal is not selected, and image data can be selected according to the doctor's actual research requirements in combination with clinical indicators.
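As a sketch of this parsing-and-screening step, assuming pydicom; the completeness tags checked here are illustrative, not the patent's actual criteria:

```python
import pydicom

def screen_patient_file(path):
    ds = pydicom.dcmread(path)                        # parse the file into a DICOM dataset
    required = ("PatientSex", "PatientAge", "Modality")
    if not all(hasattr(ds, tag) for tag in required):
        return None                                   # incomplete case information: exclude
    return ds.pixel_array                             # keep the pixel data as a sample
```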
(2) And training and verifying a pre-constructed machine learning model by using the acquired sample image data, and determining the machine learning model which achieves the optimal training effect and passes the verification as a target prediction model.
After the sample image data are acquired, the machine learning model may be trained using the training data. Specifically, the segmentation model within the machine learning model can be trained according to the received user instruction to separate the tumor region from the background region in the sample image data; high-dimensional features and depth features are then extracted from the tumor region and screened; the screened high-dimensional features and depth features are fused to obtain fusion features; finally, the fusion features and the tumor classification labels (for example, 1 for a benign tumor and 0 for a malignant tumor) are processed with a machine learning algorithm such as a support vector machine, LASSO logistic regression or a random forest, so that the features highly related to tumor benignity or malignancy are selected from the fusion features. At this point the training effect can be considered optimal, and the network parameters of the machine learning model are determined.
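A hedged sketch of this final selection step using one of the named algorithms (a random forest); the fused feature matrix, labels and the cut of 10 kept features are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

fused = np.random.rand(200, 50)          # hypothetical fused feature matrix (200 patients)
y = np.random.randint(0, 2, 200)         # 1 = benign, 0 = malignant (labels as above)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(fused, y)
top = np.argsort(rf.feature_importances_)[::-1][:10]  # indices of the 10 most related features
```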
After the training effect of the machine learning model is determined to be optimal, 5-fold or 10-fold cross-validation can be performed on the trained model using the verification data, and the corresponding accuracy, precision and recall are calculated. When these metrics reach the corresponding preset thresholds, the trained machine learning model can be determined as the target prediction model.
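An illustrative 5-fold cross-validation computing the three metrics named above, assuming scikit-learn and random stand-in data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X = np.random.rand(200, 50)             # hypothetical fused features of the sample data
y = np.random.randint(0, 2, 200)        # hypothetical tumor labels

scores = cross_validate(RandomForestClassifier(random_state=0), X, y,
                        cv=5, scoring=("accuracy", "precision", "recall"))
for metric in ("accuracy", "precision", "recall"):
    print(metric, scores[f"test_{metric}"].mean())    # compare against preset thresholds
```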
S2: calling the acquired target prediction model to segment the acquired target image of the target patient to obtain a segmented image containing the tumor region.
The target image may be an image obtained by scanning the target patient with a medical imaging apparatus and contains the region where the patient's tumor is located (i.e., the tumor region). The target image may include a CT image, an MRI image, a PET image, a US image, a SPECT image and/or a PET/CT image. The sample image data is matched to the target image in type and/or content; for example, both are CT images of a patient, or both are lung images.
After the target prediction model is acquired and the target image of the target patient is received, the model may be called to segment the target image into tumor region and background region, extracting the tumor region so as to obtain a segmented image containing it.
S3: extracting high-dimensional features and depth features from the obtained segmented image.
High-dimensional features are features of relatively high dimensionality and may include at least one of histogram features, three-dimensional shape features, texture features, and filtering features. Histogram features describe the image gray levels and may include the maximum, minimum, median, mean, span (maximum minus minimum), variance, standard deviation, mean absolute deviation and/or root mean square. Three-dimensional shape features may include volume, surface area, compactness, sphericity, spherical asymmetry and/or surface-area-to-volume ratio. Texture features describe the relative positions of the gray levels in the image and may include features constructed from the spatial distribution of tumor voxel intensities, such as the gray level dependence matrix (GLDM), gray level co-occurrence matrix (GLCM), gray level run length matrix (GLRLM), gray level size zone matrix (GLSZM) and/or neighborhood gray tone difference matrix (NGTDM). Filtering features can be obtained as follows: the image texture information is decomposed by wavelet transform into high-frequency and low-frequency sampled images, i.e., high- and low-pass filtering is applied along the X, Y and Z directions to obtain sub-bands in different directions, and histogram, three-dimensional shape and texture features are computed for each sub-band. The filtering features help suppress noise mixed into the image and improve image clarity.
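A minimal sketch of two of these feature families — first-order histogram statistics and one GLCM texture property — assuming NumPy and scikit-image (whose graycomatrix/graycoprops functions compute GLCM features); the ROI here is random stand-in data:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19 spelling

roi = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # hypothetical tumor ROI

hist_feats = {                                   # first-order histogram features
    "mean": roi.mean(),
    "median": np.median(roi),
    "span": int(roi.max()) - int(roi.min()),
    "std": roi.std(),
}
glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256, normed=True)
contrast = graycoprops(glcm, "contrast")[0, 0]   # one GLCM texture feature
```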
Depth features refer to hidden image information that is invisible to the naked eye and not captured by common features; they are extracted by calling a deep neural network and can be used to predict the tumor classification.
After obtaining a segmented image of the target patient, high-dimensional features and depth features may be extracted from the segmented image.
For methods of extracting high-dimensional features from an image, reference may be made to the prior art; they are not described here in detail.
The depth features may be extracted from the target image by invoking the target prediction model.
In one embodiment, when the target prediction model is an AlexNet model, the first convolutional layer may be called to convolve the input target image, apply local response normalization and pooling, and output the extracted feature map; the second convolutional layer then convolves the feature map output by the first, applies local response normalization and max pooling, and outputs a corresponding feature map; the third and fourth convolutional layers are called in turn to perform convolution and output corresponding feature maps; the fifth convolutional layer directly max-pools the feature map output by the fourth; finally, the three fully connected layers are called for classification, extracting and outputting the depth features.
In one embodiment, when the target prediction model is a VGGNet model, the convolutional layers in the VGGNet model may be called to convolve the input target image, the pooling layers are then called to perform max pooling, and finally the fully connected layers are called for classification, extracting and outputting the depth features.
The number, size and stride of the convolution kernels of network layers of the same type may be the same or different and are not limited here. Each network layer processes the feature map extracted by the previous layer connected to it and outputs its own feature map to the next connected layer.
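A hedged sketch of such depth-feature extraction, assuming torchvision's stock VGG16 and taking the activation after the second fully connected layer as the depth-feature vector (the patent does not fix which layer's output is read):

```python
import torch
from torchvision import models

vgg = models.vgg16(weights=None)
extractor = torch.nn.Sequential(
    vgg.features,               # convolutional and max-pooling stages
    vgg.avgpool,
    torch.nn.Flatten(),
    *list(vgg.classifier[:5]),  # up to the ReLU after the second fully connected layer
)
with torch.no_grad():
    depth_feats = extractor(torch.randn(1, 3, 224, 224))  # shape: (1, 4096)
```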
S4: screening the extracted high-dimensional features and depth features according to preset conditions.
The preset conditions may be set based on empirical data or actual requirements and indicate that the screened features have the greatest impact on the prediction target. This impact may be measured in terms of relevance or by other metrics.
After the high-dimensional features and depth features are extracted from the target image, they can be screened using an algorithm such as a sparse representation algorithm, a lasso algorithm, the Fisher discriminant method, a feature selection algorithm based on maximum relevance-minimum redundancy, or a feature selection algorithm based on conditional mutual information, so as to screen out the features meeting the preset conditions, i.e., the features with the greatest impact on the prediction target.
The main idea of sparse-representation-based algorithms is that natural signals can be sparsely represented by a dictionary. In general, the model of the algorithm can be expressed as follows:

\hat{\alpha} = \arg\min_{\alpha} \|y - D\alpha\|_2^2 + \mu \|\alpha\|_1

where y is the classification label of the sparse representation set (e.g., benign/malignant, metastatic/non-metastatic); D = [d_1, d_2, \ldots, d_i, \ldots, d_k] is the sparse representation set consisting of the high-dimensional features and depth features, d_i denoting one such feature; \alpha is the sparse representation coefficient vector and \hat{\alpha} is its estimate, which contains some non-zero elements; and \mu is a regularization parameter greater than 0 that balances fidelity against sparsity. Solving the above equation yields \alpha, and the features whose coefficients are non-zero are taken as the screened high-dimensional features and depth features.
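A sketch of this screening, assuming scikit-learn's Lasso as the l1-regularized solver, with alpha playing the role of the regularization parameter μ; the feature matrix and labels are random stand-ins:

```python
import numpy as np
from sklearn.linear_model import Lasso

D = np.random.rand(200, 80)         # rows: samples; columns: candidate features
y = np.random.randint(0, 2, 200)    # classification labels

lasso = Lasso(alpha=0.01).fit(D, y)
kept = np.flatnonzero(lasso.coef_)  # features with non-zero coefficients are retained
```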
The Fisher discriminant method is a qualitative classification method based mainly on the idea of projection: the eigenvectors corresponding to the largest eigenvalues are obtained from the high-dimensional features and depth features, and the image data are projected into the space spanned by these eigenvectors, in which the high-dimensional and depth features that minimize the within-class distance and maximize the between-class distance are screened out.
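As an illustrative stand-in for this projection idea (not necessarily the patent's exact procedure), scikit-learn's LinearDiscriminantAnalysis computes the Fisher projection that pulls same-class samples together and pushes different classes apart:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.random.rand(200, 80)         # hypothetical high-dimensional + depth features
y = np.random.randint(0, 2, 200)

projected = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)  # Fisher projection
```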
For a related description of other algorithms, reference may be made to the prior art, which is not described in any further detail herein.
By screening the high-dimensional features and depth features, redundant features and features of low relevance can be effectively eliminated, which improves the accuracy of the prediction result.
S5: calling the target prediction model to fuse the screened high-dimensional features and depth features to obtain fusion features, and predicting the tumor classification of the target patient according to the fusion features.
After the high-dimensional features and depth features meeting the preset conditions are screened out, the target prediction model can be called to fuse the screened features to obtain fusion features.
For how a machine learning model performs such fusion processing, reference may be made to the related art.
After the fusion features are obtained, they can be matched against preset tumor classification labels, and the tumor classification of the target patient is predicted from the matching result. For example, when the fusion features match the benign tumor label, the patient's tumor can be predicted to be benign; when they match the malignant tumor label, the tumor can be predicted to be malignant.
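A hedged end-to-end sketch of this step, under the assumptions that fusion is a simple concatenation of the screened feature vectors and that a support vector machine stands in for the trained target prediction model:

```python
import numpy as np
from sklearn.svm import SVC

X_train = np.random.rand(200, 32)       # fused features of training patients (stand-in)
y_train = np.random.randint(0, 2, 200)  # 1 = benign, 0 = malignant (assumed labels)
model = SVC().fit(X_train, y_train)

fused = np.concatenate([np.random.rand(12), np.random.rand(20)])  # target patient's features
label = model.predict(fused.reshape(1, -1))[0]
print("benign" if label == 1 else "malignant")
```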
It should be noted that the matching relationship between the fusion feature and the tumor classification label may be determined when the machine learning model is trained.
As can be seen from the above description, the embodiments of the present application predict the tumor classification of the target patient by calling the target prediction model on a cloud platform instead of across multiple systems or software packages, which simplifies the operating environment for tumor classification prediction, prevents data loss and incomplete data, enables data sharing, and improves data processing efficiency. Moreover, because the sample image data come from a plurality of imaging devices or medical institutions, the resulting target prediction model has strong repeatability, generality and interference resistance and can be widely used. In addition, the embodiments extract not only the high-dimensional features but also the depth features of the segmented image; by accounting for differences in the features to be extracted from images of different tumors or different imaging devices, tumor heterogeneity is fully characterized and the deep information and hidden information of different tumor regions are fully expressed, so the method is generally applicable. Finally, the tumor prediction method provided in the embodiments segments images automatically, improving segmentation speed and accuracy and saving labor and time.
As shown in fig. 3, an embodiment of the present application further provides a tumor prediction apparatus 300, which may be disposed on a cloud platform and may include:
an obtaining unit 310, which may be configured to obtain a target prediction model for tumor prediction;
a segmentation unit 320, which may be configured to invoke the acquired target prediction model to segment the acquired target image of the target patient to obtain a segmented image containing the tumor region;
an extraction unit 330, which may be configured to extract high-dimensional features and depth features from the resulting segmented image;
a filtering unit 340, which may be configured to filter the high-dimensional features and the depth features according to a preset condition;
a fusion unit 350, which may be configured to invoke the target prediction model to fuse the screened depth features and the high-dimensional features to obtain fusion features;
a prediction unit 360, which may be configured to predict a tumor classification of the target patient based on the fused features.
In an embodiment, the obtaining unit 310 may be specifically configured to train and verify a pre-constructed machine learning model with the acquired sample image data and to determine the machine learning model that achieves the optimal training effect and passes verification as the target prediction model.
For a detailed description of the above units, reference may be made to the description relating to the above method embodiments, which are not described in any further detail herein.
With the tumor prediction apparatus provided in the embodiments of the present application, fully automatic segmentation of the tumor region can be realized, the accuracy of tumor classification prediction can be improved, and doctors can be effectively assisted in diagnosis.
An embodiment of the present application further provides a cloud platform, which may include the tumor prediction apparatus 300 of Fig. 3 and may further include a data management apparatus 100 configured to manage user rights and user data, the user data including patient data and user account information. Specifically, the data management apparatus 100 may manage user rights according to account authorization information. For example, a user account may authorize N image centers or medical institutions to upload data but authorize only M of them to operate on the data; the N authorized centers can then upload data to the account, while only the M centers can operate on the data, where N and M are positive integers greater than 1 and N is greater than M. The data management apparatus 100 may also manage account information registered by users, screen the patient data uploaded by users to retain the data meeting preset requirements, and send the patient data meeting preset requirements (e.g., a preset size and a preset format) to the data storage apparatus for storage.
In addition, the cloud platform may further include one or more of a resource monitoring apparatus 200, a visualization processing apparatus 400, a data storage apparatus 500, and a control apparatus 600.
The resource monitoring apparatus 200 may be configured to monitor resource usage and network performance parameters, including CPU, memory, GPU, concurrency, bandwidth and packet loss rate, according to the received monitoring instruction, and to perform corresponding scheduling according to the resource usage.
The visualization processing apparatus 400 may display corresponding data according to a received instruction (a user instruction or a preset script instruction). For example, it may display the received user data and the processing results output by the tumor prediction apparatus 300 (including image segmentation results and/or tumor classification prediction results), and may combine the radiomics signature obtained by linearly combining the screened high-dimensional and depth features with their feature coefficients, together with clinical indicators (e.g., age, sex, gene mutation), to construct a personalized, visualized nomogram and/or survival graph, effectively assisting the doctor in medical diagnosis.
With regard to the specific form of nomogram and survival diagram, reference may be made to the prior art and no further description is made here.
The data storage apparatus 500 may be used to store various data output by the data management apparatus 100 and/or the tumor prediction apparatus 300. The data storage apparatus 500 may host a MySQL database for storing high-dimensional features, depth features, system dynamic information, DICOM file storage paths, system usage records, and the like. It supports cloud storage and real-time viewing of raw data and analysis results, as well as cross-region, multi-center data sharing.
The control device 600 may be used to control the operation of the data management device 100, the resource monitoring device 200, the lesion prediction device 300, the visualization processing device 400, and the data storage device 500.
By utilizing the cloud platform, efficient management of image data, accurate prediction of tumor classification and real-time sharing of data can be realized.
In one embodiment, the present application further provides a computer-readable storage medium, in which a computer program is stored, and the computer program can implement the corresponding functions described in the above method embodiments when executed. The computer program may also be run on a terminal or server as shown in figure 1.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium; when the program is executed, the processes of the above method embodiments can be included. Any reference to memory, storage media, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The systems, devices, apparatuses and units set forth in the above embodiments may be implemented by semiconductor chips, computer chips and/or physical entities, or by products having certain functions. For convenience of description, the above devices are described as divided into units by function. Of course, when implementing the present application, the functions of the units may be implemented in the same chip or in multiple chips.
Although the present application provides the method steps described in the above embodiments or flowcharts, the method may include more or fewer steps based on conventional or non-inventive effort. Where no necessary causal relationship logically exists between steps, the order of executing the steps is not limited to that provided in the embodiments of the present application.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments can refer to each other, and each embodiment focuses on its differences from the others. In addition, the technical features of the above embodiments may be combined arbitrarily; for brevity, not all possible combinations are described, but any combination that contains no contradiction should be considered within the scope of this specification.
The embodiments described above are provided to enable those skilled in the art to understand and use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles described herein may be applied to other embodiments without inventive effort. Therefore, the present application is not limited to the above embodiments; improvements and modifications made by those skilled in the art based on this disclosure shall fall within the protection scope of the present application.

Claims (15)

1. A tumor prediction method, wherein the tumor prediction method is performed on a cloud platform and comprises:
calling the acquired target prediction model to segment the acquired target image of the target patient so as to obtain a segmented image containing a tumor region;
extracting high-dimensional features and depth features from the obtained segmented image;
screening the high-dimensional features and the depth features according to preset conditions;
and calling the target prediction model to fuse the screened depth features and the screened high-dimensional features to obtain fusion features, and predicting the tumor classification of the target patient according to the fusion features.
2. The method of claim 1, wherein the object prediction model is obtained by:
the target prediction model is obtained from an external device or locally.
3. The method of claim 2, wherein locally obtaining the target predictive model comprises:
training and verifying a pre-constructed machine learning model by using the acquired sample image data, wherein the sample image data comprises training data and verification data and is matched with the target image;
and determining the machine learning model which achieves the optimal training effect and passes the verification as the target prediction model.
4. The method of claim 3, wherein prior to training the machine learning model, the method of tumor prediction comprises:
selecting the pre-stored sample image data from a local database; or
The sample image data is obtained by processing the received patient data.
5. The method of claim 4, wherein obtaining the sample image data by processing the received patient data comprises:
performing format parsing on the received patient data;
and selecting the sample image data from the analyzed patient data according to a preset standard.
6. The method of claim 5, wherein the preset criteria include whether the patient data is complete, whether it has been clinically confirmed, and whether it meets clinical indicators.
7. The method of claim 1, wherein the screening the depth features and the high-dimensional features extracted from the target image according to a preset condition comprises:
and screening the high-dimensional features and the depth features by utilizing a sparse representation algorithm, a lasso algorithm, a Fisher discriminant method, a feature selection algorithm based on maximum correlation-minimum redundancy or a feature selection algorithm based on conditional mutual information to screen out the high-dimensional features and the depth features meeting the preset conditions.
8. The tumor prediction method according to claim 1, wherein the target image comprises a CT image, an MRI image, a PET image, a US image, a SPECT image and/or a PET/CT image.
9. The method of claim 1, wherein the target prediction model comprises an AlexNet model or a VGGNet model.
10. A tumor prediction apparatus disposed on a cloud platform, comprising:
a segmentation unit configured to invoke the acquired target prediction model to segment the acquired target image of the target patient to obtain a segmented image containing a tumor region;
an extraction unit configured to extract high-dimensional features and depth features from the obtained segmented image;
a screening unit configured to screen the high-dimensional features and the depth features according to a preset condition;
a fusion unit configured to invoke the target prediction model to fuse the screened depth features and the high-dimensional features to obtain fusion features;
a prediction unit configured to predict a tumor classification of the target patient according to the fused features.
11. The tumor prediction apparatus of claim 10, further comprising:
an acquisition unit configured to acquire the target prediction model by: training and verifying a pre-constructed machine learning model by using the acquired sample image data, wherein the sample image data comprises training data and verification data and is matched with the target image, and determining the machine learning model which achieves the optimal training effect and passes the verification as the target prediction model.
12. A cloud platform comprising the tumor prediction apparatus of any one of claims 10-11.
13. The cloud platform of claim 12, wherein the cloud platform further comprises:
a data management device configured to manage user rights and received user data, the user data including patient data and user account information.
14. The cloud platform of claim 13, wherein the cloud platform further comprises one or more of:
a resource monitoring device configured to monitor usage of the resource and a performance parameter of the network according to the received monitoring instruction;
a visualization processing device configured to display the received user data, the processing results output by the tumor prediction device, and the constructed nomogram and/or survival graph;
a data storage device configured to store various data output by the data management device and the tumor prediction device;
a control device configured to operate the tumor prediction device, the data management device, the resource monitoring device, the visualization processing device, and the data storage device.
15. A computer-readable storage medium, characterized in that it stores a computer program which, when executed, is capable of implementing a tumor prediction method according to any one of claims 1 to 9.
CN202010001251.7A 2020-01-02 2020-01-02 Tumor prediction method and device, cloud platform and computer-readable storage medium Pending CN111210441A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010001251.7A CN111210441A (en) 2020-01-02 2020-01-02 Tumor prediction method and device, cloud platform and computer-readable storage medium
PCT/CN2020/132372 WO2021135774A1 (en) 2020-01-02 2020-11-27 Tumor prediction method and device, cloud platform, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010001251.7A CN111210441A (en) 2020-01-02 2020-01-02 Tumor prediction method and device, cloud platform and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN111210441A (en) 2020-05-29

Family

ID=70788310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010001251.7A Pending CN111210441A (en) 2020-01-02 2020-01-02 Tumor prediction method and device, cloud platform and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN111210441A (en)
WO (1) WO2021135774A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4184842B2 (en) * 2003-03-19 2008-11-19 富士フイルム株式会社 Image discrimination device, method and program
US20080267499A1 (en) * 2007-04-30 2008-10-30 General Electric Company Method and system for automatic detection of objects in an image
CN106780448B (en) * 2016-12-05 2018-07-17 清华大学 A kind of pernicious categorizing system of ultrasonic Benign Thyroid Nodules based on transfer learning and Fusion Features
CN108596247A (en) * 2018-04-23 2018-09-28 南方医科大学 A method of fusion radiation group and depth convolution feature carry out image classification
CN110399902B (en) * 2019-06-27 2021-08-06 华南师范大学 Method for extracting melanoma texture features
CN110533683B (en) * 2019-08-30 2022-04-29 东南大学 Image omics analysis method fusing traditional features and depth features
CN111210441A (en) * 2020-01-02 2020-05-29 苏州瑞派宁科技有限公司 Tumor prediction method and device, cloud platform and computer-readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190370969A1 (en) * 2018-05-30 2019-12-05 Siemens Healthcare Gmbh Methods for generating synthetic training data and for training deep learning algorithms for tumor lesion characterization, method and system for tumor lesion characterization, computer program and electronically readable storage medium
CN109146848A (en) * 2018-07-23 2019-01-04 东北大学 A kind of area of computer aided frame of reference and method merging multi-modal galactophore image
CN109934832A (en) * 2019-03-25 2019-06-25 北京理工大学 Liver neoplasm dividing method and device based on deep learning
CN110264454A (en) * 2019-06-19 2019-09-20 四川智动木牛智能科技有限公司 Cervical cancer tissues pathological image diagnostic method based on more hidden layer condition random fields

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021135774A1 (en) * 2020-01-02 2021-07-08 苏州瑞派宁科技有限公司 Tumor prediction method and device, cloud platform, and computer-readable storage medium
CN112309576A (en) * 2020-09-22 2021-02-02 江南大学 Colorectal cancer survival period prediction method based on deep learning CT (computed tomography) image omics
CN112837324A (en) * 2021-01-21 2021-05-25 山东中医药大学附属医院 Automatic tumor image region segmentation system and method based on improved level set
CN113744801A (en) * 2021-09-09 2021-12-03 首都医科大学附属北京天坛医院 Method, device and system for determining tumor type, electronic equipment and storage medium
CN113744801B (en) * 2021-09-09 2023-05-26 首都医科大学附属北京天坛医院 Tumor category determining method, device and system, electronic equipment and storage medium
CN115100130A (en) * 2022-06-16 2022-09-23 慧影医疗科技(北京)股份有限公司 Image processing method, device and equipment based on MRI (magnetic resonance imaging) image omics and storage medium
CN115631370A (en) * 2022-10-09 2023-01-20 北京医准智能科技有限公司 Identification method and device of MRI (magnetic resonance imaging) sequence category based on convolutional neural network
CN117253584A (en) * 2023-02-14 2023-12-19 南雄市民望医疗有限公司 Hemodialysis component detection-based dialysis time prediction system

Also Published As

Publication number Publication date
WO2021135774A1 (en) 2021-07-08

Similar Documents

Publication Publication Date Title
Scapicchio et al. A deep look into radiomics
CN111210441A (en) Tumor prediction method and device, cloud platform and computer-readable storage medium
Arunkumar et al. Fully automatic model‐based segmentation and classification approach for MRI brain tumor using artificial neural networks
Thawani et al. Radiomics and radiogenomics in lung cancer: a review for the clinician
US20210210177A1 (en) System and method for fusing clinical and image features for computer-aided diagnosis
CN112768072B (en) Cancer clinical index evaluation system constructed based on imaging omics qualitative algorithm
WO2010115885A1 (en) Predictive classifier score for cancer patient outcome
Mehmood et al. An efficient computerized decision support system for the analysis and 3D visualization of brain tumor
WO2023020366A1 (en) Medical image information computing method and apparatus, edge computing device, and storage medium
CN114926477A (en) Brain tumor multi-modal MRI (magnetic resonance imaging) image segmentation method based on deep learning
Li et al. A review of radiomics and genomics applications in cancers: the way towards precision medicine
CN115440383B (en) System for predicting curative effect of PD-1/PD-L1 monoclonal antibody of advanced cancer patient
Kang et al. Fully automated MRI segmentation and volumetric measurement of intracranial meningioma using deep learning
Elayaraja et al. An efficient approach for detection and classification of cancer regions in cervical images using optimization based CNN classification approach
Hou et al. 1D CNN-based intracranial aneurysms detection in 3D TOF-MRA
Ye et al. Prediction of placenta accreta spectrum by combining deep learning and radiomics using T2WI: a multicenter study
Park et al. Unsupervised anomaly detection with generative adversarial networks in mammography
OK et al. Mammogram pectoral muscle removal and classification using histo-sigmoid based ROI clustering and SDNN
Yildirim et al. Detection and classification of glioma, meningioma, pituitary tumor, and normal in brain magnetic resonance imaging using deep learning-based hybrid model
Giannini et al. Specificity improvement of a CAD system for multiparametric MR prostate cancer using texture features and artificial neural networks
US20220375077A1 (en) Method for generating models to automatically classify medical or veterinary images derived from original images into at least one class of interest
CN115274119A (en) Construction method of immunotherapy prediction model fusing multi-image mathematical characteristics
CN112329876A (en) Colorectal cancer prognosis prediction method and device based on image omics
Liu et al. The predictive accuracy of CT radiomics combined with machine learning in predicting the invasiveness of small nodular lung adenocarcinoma
Lyu et al. Machine learning-based CT radiomics model to discriminate the primary and secondary intracranial hemorrhage

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200529)