CN108537773A - Intelligent auxiliary discrimination method for pancreatic cancer and pancreatic inflammatory diseases - Google Patents

Intelligent auxiliary discrimination method for pancreatic cancer and pancreatic inflammatory diseases

Info

Publication number
CN108537773A
CN108537773A (application CN201810141703.4A); granted as CN108537773B
Authority
CN
China
Prior art keywords
image
pancreas
fusion
feature
classification network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810141703.4A
Other languages
Chinese (zh)
Other versions
CN108537773B (en)
Inventor
杨晓冬
程超
左长京
张玉全
刘兆邦
孙高峰
潘桂霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Institute of Biomedical Engineering and Technology of CAS
Original Assignee
Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority to CN201810141703.4A
Publication of CN108537773A
Application granted
Publication of CN108537773B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2148 Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Abstract

The invention discloses an intelligent auxiliary discrimination method for distinguishing pancreatic cancer from pancreatic inflammatory diseases. The method comprises: reading pancreatic medical image data and performing a normalization operation to obtain normalized images; denoising, registering, and fusing the normalized images to obtain a multi-modal fusion image; selecting a region of interest in the image that shows the pancreatic structure clearly, mapping it onto the other images, and saving the region of interest in a natural image format recognizable by the subsequent classification networks; extracting, classifying, and fusing features of the multi-modal fusion image according to the selected region of interest, and establishing a basic classification network model on the fused features; and discriminating among the classification results of the basic classification networks to obtain the final classification result. The present invention has strong universality: it is suitable not only for clinical practice but also for scientific research in the fields of pancreatic cancer and pancreatitis.

Description

Intelligent auxiliary discrimination method for pancreatic cancer and pancreatic inflammatory diseases
Technical field
The present invention relates to the technical field of intelligent auxiliary diagnosis, and in particular to an intelligent auxiliary discrimination method for distinguishing pancreatic cancer from pancreatic inflammatory diseases.
Background technology
Pancreatic cancer (PC) is a common malignant tumor of the digestive system. Among malignant tumors in China, its incidence ranks seventh and its mortality sixth, and its three-year survival rate is below 5%. Early symptoms of pancreatic cancer are usually inconspicuous; by the time abdominal pain, jaundice, or obvious weight loss appears, the disease is often already at an advanced stage. Diagnosis of pancreatic cancer is complicated by the close similarity of its clinical manifestations to those of other pancreatic inflammatory diseases such as chronic pancreatitis (CP), which also present with abdominal pain, indigestion, anorexia, nausea and vomiting, weight loss, and obstructive jaundice, and by the considerable overlap with pancreatic inflammatory diseases on conventional imaging. A definitive preoperative diagnosis of pancreatic cancer is therefore difficult, and accurately distinguishing pancreatic cancer from pancreatic inflammatory diseases is especially challenging.
It is well known that imaging examinations play a key role in the diagnosis of pancreatic diseases, but such examinations can only provide the most intuitive images. How much information is extracted from them depends on the display quality of the examination and on the radiologist's own skill and experience. Because of the limits of human visual resolution and occasional human negligence, much of the information contained in an image, such as sub-visual low-level features capable of discriminating pathological tissue, is inevitably missed under the traditional film-reading workflow. Physicians therefore need an advanced auxiliary technique that integrates the information from the various examinations and processes multi-modality images, so as to improve the detection rate of lesions such as tumors, calcification, inflammation, and fibrosis. This is computer-aided diagnosis (CAD) technology. CAD can recognize diagnostic information that the human eye cannot, acting as a second pair of eyes for the physician, improving the accuracy of pancreatic cancer diagnosis, and playing an increasingly important role in the diagnostic process.
Imaging examinations include multi-detector computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, endoscopic ultrasound (EUS), PET, and others, but each modality has limitations. CT is insensitive to small tumors with diameters below 2 cm and to iso-dense lesions, and it falls short in the differential diagnosis of pancreatic head cancer versus chronic pancreatitis, because phenomena such as calcification, pancreatic duct dilation, the appearance of a local mass, the double-duct sign, pancreatic duct obstruction, peripancreatic fat infiltration, and peripancreatic vein involvement occur in both diseases. Patients with metallic foreign bodies in the body, such as intravascular stents, cannot undergo MRI, and the diagnostic value of MRI for pancreatic diseases remains controversial. Ultrasound gives suboptimal retroperitoneal images in patients with abundant intestinal gas and in obese patients. EUS is an invasive imaging technique that causes patient discomfort, and its performance in differentiating chronic pancreatitis from pancreatic cancer is unsatisfactory, especially in patients with pancreatic cancer superimposed on chronic pancreatitis; it has been reported that 22-36% of chronic pancreatitis cases are misdiagnosed as pancreatic cancer. PET is essentially functional imaging that reflects specific metabolic processes, but inflammatory foci, especially autoimmune chronic pancreatitis, can also show high 18F-FDG uptake similar to that of pancreatic cancer.
As described above, no single examination technique can reliably diagnose pancreatic disease. Intelligent auxiliary discrimination systems and methods for pancreatic cancer and pancreatitis based on radiomics therefore have high application value for clinical research. The present invention deeply mines the sub-visual low-level image information of medical images of multiple modalities and, using low-level image features capable of discriminating lesions, achieves classification and discrimination of pancreatic cancer and pancreatic inflammatory diseases from medical images. The present invention can also be applied to scientific research on pancreatic cancer and pancreatic inflammatory diseases.
At present, intelligent auxiliary diagnosis of pancreatic diseases using image processing techniques, both at home and abroad, concentrates mainly on the following work:
In 2001, Norton et al. proposed a self-learning artificial neural network to analyze EUS images and distinguish malignant tumors from pancreatitis. In 2008, Das et al. performed texture analysis on pancreatic EUS images using image analysis software, applied principal component analysis (PCA) for dimensionality reduction, and built a neural-network-based pancreatic cancer prediction model. In 2013, Zhu et al. used image processing techniques to extract texture features from regions of interest in pancreatic EUS images, selected a preferred feature combination using a between-class distance algorithm together with sequential forward search (SFS), and built a support vector machine (SVM) prediction model. As early as 2008, Cai Zheyuan et al. had proposed a similar algorithm that first selects features by between-class distance and then further optimizes them by sequential forward search; Cai Zheyuan et al. later improved the texture feature extraction by selecting multifractal dimension features based on M-band wavelet transforms, and the classification model built on this basis outperformed the previously proposed methods in both runtime and classification accuracy. Wu Yihao combined the computer diagnosis results of fuzzy classification with seed implantation, extending the overall classification system so that it can identify not only pancreatic cancer versus non-cancer but also pancreatic cancer versus pancreatitis. In 2015, Zhu et al. introduced a new lesion descriptor, the local ternary pattern variance, to improve the performance of the classification model.
In 2016, Hanania et al. used gray-level co-occurrence matrices to classify the malignancy grade of intraductal papillary mucinous neoplasms. Also in 2016, Chakraborty et al. used texture analysis of contrast-enhanced CT images for survival prediction in pancreatic ductal adenocarcinoma patients receiving neoadjuvant chemotherapy; they extracted 169 standard texture features from the lesion region, including gray-level co-occurrence matrix, run-length matrix, local binary pattern, fractal dimension, and first-order statistical features, and built a prediction model based on a naive Bayes classifier. In 2017, Gazit et al. classified intraductal papillary mucinous neoplasms and pancreatic cystic tumors from contrast-enhanced CT images; they hand-designed a new feature representing the solid component of the tumor and, combining it with 255 standard texture features, built a classification model based on AdaBoost.
In 1993, Du-Yih Tsai et al. proposed a subtle-abnormality detection method for CT pancreas images. It is a simple cascaded-filter detection scheme: the first step applies a gray-level logarithmic operation to enhance low-gray-level edges, the second shifts gray levels to remove blurred regions, and the final step enhances detail contours with a logarithmic operation. In 2013, Zhao Chao et al. proposed, based on pancreatic CT images, a quantum-genetic-algorithm-optimized support vector machine classification method for pancreatic cancer detection.
Analysis of the above studies reveals that current intelligent auxiliary discrimination systems for pancreatic diseases have the following shortcomings: (1) they require fine segmentation of the pancreas or the lesion area, which demands a deep professional background and rich clinical experience from the physician, is time-consuming and labor-intensive, and inevitably introduces segmentation errors; (2) features are extracted by hand-crafted design, so the extracted features have poor representational and generalization ability, and researchers must study the problem domain in depth to design better-adapted features; (3) the above studies all work on single-modality images and ignore the performance gains that other modality images might bring.
Summary of the invention
In view of the shortcomings and deficiencies of the prior art, the present invention, based on radiomics and deep learning, provides an intelligent auxiliary discrimination method for distinguishing pancreatic cancer from pancreatic inflammatory diseases.
The present invention provides an intelligent auxiliary discrimination method for pancreatic cancer and pancreatic inflammatory diseases, comprising the following steps:
1) reading pancreatic medical image data and performing a normalization operation to obtain normalized images;
2) denoising, registering, and fusing the normalized images to obtain a multi-modal fusion image;
3) selecting a region of interest in the image that shows the pancreatic structure clearly and mapping it onto the other images, while saving the region of interest in a natural image format recognizable by the subsequent classification networks;
4) extracting, classifying, and fusing features of the multi-modality images or the fusion image according to the selected region of interest, and establishing a basic classification network model on the fused features;
5) discriminating among the classification results of the basic classification networks to obtain the final classification result.
Preferably, the pancreatic medical image data in step 1) come from a PACS system and medical imaging devices.
Preferably, the image fusion in step 2) uses pixel-level image fusion techniques, including spatial-domain and transform-domain algorithms.
Preferably, the region of interest in step 3) is a rectangle containing all pancreatic tissue of the affected area, and the natural image format is .png or .bmp.
Preferably, the feature extraction, classification, fusion, and establishment of the basic classification network model in step 4) proceed as follows:
1) building a dedicated deep pyramid convolutional neural network for the multi-modal fusion image, whose structure uses a series of spatial pyramid pooling layers before the fully connected layers so that the input image may be of arbitrary size;
2) feeding the data into the dedicated deep pyramid convolutional neural network for the multi-modal fusion image and extracting the features output by the fully connected layer to generate feature maps;
3) fusing the above features based on a bilinear fusion function, i.e., taking the outer product of the corresponding position elements of the two feature maps and summing, to obtain a fused feature map whose channel count is the square of the original channel count; this is expressed as

y_bil = f_bil(x_a, x_b)

where y_bil denotes the fused feature map, x_a and x_b denote the feature maps, x_a, x_b ∈ R^{H×W×D}, and H, W, D denote the height, width, and number of channels of a feature map, respectively;
4) applying a convolutional fusion function to the fused feature map for dimension reduction, i.e., convolving the output of the bilinear fusion function with a filter f while introducing a bias b, to obtain the reduced fused feature map; this is expressed as

y_conv = y_bil * f + b;

where y_conv is the output of the convolutional fusion function, f ∈ R^{1×1×2D×D}, and b ∈ R^D;
5) training a classification model on the reduced fused feature map, i.e., establishing the basic classification network model, where the classification technique either combines weak classification models into a strong classification model or trains a kernel-based support vector machine.
Preferably, the discrimination in step 5) proceeds as follows:
1) each established basic classification network model is trained on the training data, and its classification error rate is computed;
2) the coefficient of each basic classification network model is computed from its classification error rate;
3) the class labels of the basic classification network models are unified, each model's predicted probability for each class label on the example to be tested is computed, outlier points are removed, weighted voting is performed on the remaining predicted probabilities, and the final classification result is obtained.
Preferably, the classification error rate is computed as follows: suppose there are M basic classification network models, denoted C_m, m ∈ {1, 2, ..., M}, and the training data set is T = {(y_1, x_1), (y_2, x_2), ..., (y_N, x_N)}, where y_i ∈ Y = {-1, +1}; the classification error rate e_m of the m-th classification model on the training data set is

e_m = (1/N) Σ_{i=1}^{N} I(C_m(x_i) ≠ y_i).

The coefficient of each basic classification network model is α_m, computed as

α_m = (1/2) ln((1 - e_m) / e_m).
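A minimal sketch of the error-rate and coefficient computation follows. The α_m formula is a reconstruction in the standard boosting form, consistent with the weighted-voting description here rather than a verbatim copy of the patent's equation, and the function names are ours:

```python
import math

def error_rate(preds, labels):
    """Fraction of training examples a basic classification model gets
    wrong; predictions and labels are in {-1, +1}."""
    return sum(p != y for p, y in zip(preds, labels)) / len(labels)

def model_coefficient(e_m):
    """Coefficient alpha_m = 0.5 * ln((1 - e_m) / e_m): models with a
    lower training error rate receive a larger voting weight."""
    return 0.5 * math.log((1.0 - e_m) / e_m)

# One wrong prediction out of five -> e_m = 0.2, alpha_m ~ 0.693
e = error_rate([1, -1, 1, 1, -1], [1, -1, -1, 1, -1])
alpha = model_coefficient(e)
```

With e_m = 0.5 the coefficient is zero (a coin-flip model gets no vote), and it grows without bound as e_m approaches zero.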
Preferably, the class labels of the basic classification network models are unified to {-1, 1} by a unifying function A_m(x) that maps the output of the m-th model onto [-1, 1].
The predicted probabilities P_m are computed as follows, where Label is the class label:

P_m(Label = 1) = (A_m(x) + 1)/2
P_m(Label = -1) = 1 - (A_m(x) + 1)/2.
Preferably, the final classification result is obtained as follows: the predicted probability PL_m of each of the M basic classification network models on a given class label is computed, and the maximum predicted probability of each basic classification network model is expressed as

P_mmax = max[P_m(Label = 1), P_m(Label = -1)].

The sums PL_m + P_mmax are computed and sorted, the basic classification network models corresponding to the maximum and the minimum are removed, weighted voting over the remaining M - 2 basic classification network models is realized by building a linear combination f(x), and the final classification result is obtained as

C(x) = sign(f(x));

where the linear combination is f(x) = Σ_m α_m A_m(x), summed over the remaining M - 2 models.
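Since the exact expression for PL_m is not preserved in the source, the sketch below assumes PL_m is the probability the model assigns to the positive label; everything else follows the described procedure (sum PL_m + P_mmax, drop the max- and min-keyed models, weighted vote with coefficients α_m). Names and the example numbers are illustrative:

```python
def ensemble_discriminate(scores, alphas):
    """Fuse the outputs of M basic classification networks.

    `scores` are the unified outputs A_m(x) in [-1, 1]; `alphas` the
    model coefficients. Each model's positive-class probability
    P_m(1) = (A_m + 1)/2 and its confidence P_mmax = max(P_m(1), P_m(-1))
    are summed, the models with the largest and smallest sums are
    dropped as outliers, and the rest vote via f(x) = sum alpha_m * A_m(x),
    C(x) = sign(f(x)). Taking PL_m = P_m(1) is our assumption.
    """
    keyed = []
    for a_m, alpha in zip(scores, alphas):
        p_pos = (a_m + 1.0) / 2.0
        p_max = max(p_pos, 1.0 - p_pos)
        keyed.append((p_pos + p_max, a_m, alpha))
    keyed.sort(key=lambda t: t[0])
    kept = keyed[1:-1]  # remove min- and max-keyed models
    f_x = sum(alpha * a_m for _, a_m, alpha in kept)
    return 1 if f_x >= 0 else -1

# Four models: one confident outlier voting -1, three voting +1.
label = ensemble_discriminate([0.8, 0.6, -0.9, 0.4], [0.5, 0.4, 0.7, 0.3])
```

In this example the dissenting model has the smallest key and the most confident agreeing model the largest, so both are dropped and the remaining two vote +1.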
Elaboration of the present invention and its advantages: (1) the intelligent auxiliary discrimination method of the present invention allows the region of interest to be selected by manually drawing a rectangle. Unlike other human organs such as the lung or breast, a lesion in one region of the pancreas often changes the overall structure of the organ; pancreatic head cancer, for example, causes enlargement of the pancreatic head and often also atrophy of the pancreatic tail. The present invention therefore differs from CAD systems for other diseases, which rely solely on fine segmentation of the lesion area: an experienced radiologist selects the region of interest in the single-modality or fusion image that shows the pancreatic structure clearly, and the region of interest is a rectangle containing all pancreatic tissue involved by the lesion. Constructing such a region of interest is simpler than manual fine segmentation and avoids the uncertainty introduced by immature automatic pancreas segmentation techniques;
(2) in the present invention, the final discrimination result draws on features from three sources: the individual features of the multi-modality images, the features of the multi-modal fusion image, and the top-level fused features of the multi-modality images. The individual features of each modality capture the different human-body characteristics collected by different imaging devices. The multi-modal fusion image merges the images of different modalities into one image, so that the features of the different modalities can be trained jointly during the feature extraction and classification stage. The top-level fused features combine the extracted top-level features of the images of the different modalities and feed them into a classification model, jointly training one strong classification model on the top-level features of all modality images and thereby making better use of each modality's top-level features;
(3) the present invention proposes a formula for obtaining the final discrimination result by fusing classification results: it first removes outlier classification results and then performs weighted voting on the remaining classification results to obtain the final result. The weight of each classification network's result takes into account both its classification error rate in the training stage and its confidence in classifying the given example;
(4) in the present invention, spatial pyramid pooling is introduced into the feature extraction network, so that input images need not be resized to a uniform size and can be fed into the classification network at arbitrary sizes, avoiding both the loss of useful information and the introduction of redundant information;
(5) the present invention has strong universality: under the physician's selection it can jointly analyze medical images of multiple modalities, or analyze the medical images of a single modality alone. It is suitable for clinical practice and can also be used for scientific research in the fields of pancreatic cancer and pancreatic inflammatory diseases.
Description of the drawings
Fig. 1 is the flow chart of the intelligent auxiliary discrimination method of the present invention;
In Fig. 1, the dashed-line flow is optional: it is executed only when images of two or more modalities are acquired; otherwise only the solid-line flow can be executed, i.e., classification and discrimination are performed on a single-modality image;
Fig. 2 is an example of a PET/CT image fusion scheme;
Fig. 3 is an example of the construction of a deep pyramid-pooling convolutional neural network;
In Fig. 3, DCNN denotes a deep convolutional neural network.
Detailed description of the embodiments
The intelligent auxiliary discrimination method for pancreatic cancer medical images is elaborated below with reference to the accompanying drawings, so that those skilled in the art can implement it by referring to the description.
Embodiment 1
The invention discloses an intelligent auxiliary discrimination method for pancreatic cancer and pancreatic inflammatory diseases; the specific steps are as follows:
1) reading multi-modality images and performing gray-level normalization;
2) image preprocessing: denoising and registering the normalized images obtained in step 1) to obtain quality-enhanced multi-modality images with a unified sampling interval, and then performing image fusion;
3) from the multi-modality images and the fusion image obtained in step 2), an experienced radiologist draws a rectangle containing the region of interest in the single-modality or fusion image that shows the pancreatic structure clearly, i.e., selects a rectangular target area; this region of interest is then mapped onto the images of the other modalities, and the region of interest is saved in a natural image format recognizable by the subsequent classification networks, such as .png or .bmp;
4) building deep pyramid-pooling convolutional neural networks to extract and classify the multi-modality and fusion image features according to the region of interest obtained in step 3); meanwhile, fusing the multi-modal features extracted by these networks and establishing a classification model on the fused features;
5) discriminating among the classification results of the basic classification networks in step 4): combining their classification error rates on the training data with their performance on the specific example, removing outlier classification results, performing weighted voting on the remaining classification results, and obtaining the final classification result and its confidence.
Step 1) acquires the image data and normalizes it. Sources of the image data include, but are not limited to, PACS systems and medical imaging devices; specific medical imaging devices include, but are not limited to, CT scanners, PET/CT scanners, SPECT scanners, MRI, ultrasound, X-ray, angiography, fluorescence photography, and microphotography. The collected data are normalized; this normalization operation includes, but is not limited to, clipping and compressing the gray-level range of the medical images to enhance the useful details in the images.
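As a rough sketch of the gray-range clipping and compression described for step 1): pixel values are clipped to a window [lo, hi] and rescaled to [0, 255]. The window bounds are assumed parameters (the patent does not fix them), and the function name is ours:

```python
def normalize_gray(image, lo, hi, out_max=255):
    """Clip pixel values of a 2-D image (nested lists) to the window
    [lo, hi] and linearly rescale them to integers in [0, out_max].

    A per-modality window (e.g. a CT soft-tissue window) would be
    chosen in practice; the bounds used below are illustrative only.
    """
    scale = out_max / float(hi - lo)
    return [[round((min(max(v, lo), hi) - lo) * scale) for v in row]
            for row in image]

ct_slice = [[-1000, 40], [80, 400]]          # toy Hounsfield-unit values
norm = normalize_gray(ct_slice, lo=-100, hi=200)
```

Values outside the window saturate at 0 or 255, which is the "clipping" part; the rescale is the "compression" of the remaining gray range.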
Step 2) obtains the denoised and registered images of the different modalities and performs image fusion, comprising the following steps:
S2-1, denoising the normalized images obtained in step 1); denoising methods include, but are not limited to, mean filtering, median filtering, adaptive median filtering, frequency-domain filtering, and combinations of the above filtering methods;
S2-2, performing image registration by registering the image with lower spatial resolution onto the image with higher spatial resolution to obtain a unified sampling interval; for example, in a PET/CT scan the PET image is registered onto the CT image using a simple scaling transform; registration methods include, but are not limited to, feature-based techniques and correlation techniques based on mutual information;
S2-3, fusing the registered images; the pixel-level image fusion techniques used include, but are not limited to, spatial-domain algorithms based on logical filtering, gray-level moments, and contrast modulation, and transform-domain algorithms based on pyramid-decomposition fusion and wavelet transforms; the number of channels can be adjusted according to actual demands such as the number of input modalities and the input-layer requirements of the deep convolutional neural network.
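The simplest spatial-domain instance of the pixel-level fusion listed in S2-3 is weighted averaging of two registered, equally sized images. The weight is an assumed parameter; real systems might instead use contrast modulation or wavelet-domain rules as the text notes:

```python
def fuse_weighted(img_a, img_b, w=0.5):
    """Pixel-level spatial-domain fusion of two registered gray images
    (2-D nested lists of equal size) by weighted averaging, with `w`
    the share given to img_a."""
    return [[w * a + (1.0 - w) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

pet = [[10.0, 20.0], [30.0, 40.0]]   # toy registered PET slice
ct = [[50.0, 60.0], [70.0, 80.0]]    # toy CT slice
fused = fuse_weighted(pet, ct, w=0.5)
```

Per S2-3, a multi-channel fusion image could equally be formed by stacking modalities as channels rather than averaging them.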
Step 3) has the physician select a region of interest in the single-modality or fusion image that shows the pancreatic structure clearly and maps the region of interest onto the images of the other modalities, comprising the following steps:
S3-1, an experienced radiologist examines the multi-modality images and the fusion image and selects the image that shows the pancreatic structure most clearly;
S3-2, the radiologist extracts a region of interest from the image selected in S3-1; this region is a rectangle covering all pancreatic tissue involved by the lesion, and constructing it is simpler than manual fine segmentation; the region of interest is then mapped onto the images of the other modalities, with the pancreas boundary 5-10 pixels from the nearest edge of the rectangular region of interest;
S3-3, the region of interest in each modality and in the fusion image is saved in a natural image format recognizable by the subsequent classification networks, such as .png or .bmp.
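A minimal sketch of the rectangular ROI extraction: after registration all modalities share one coordinate system, so the same bounding box can be cropped from every modality image; the 5-pixel default margin follows the 5-10 pixel gap specified in S3-2, and the function name and clamping behavior are ours:

```python
def crop_roi(image, top, left, bottom, right, margin=5):
    """Crop a rectangular ROI from a 2-D image (nested lists), padding
    the pancreas bounding box [top:bottom, left:right) by `margin`
    pixels on each side and clamping to the image bounds. The same box
    can be reused unchanged on every registered modality image."""
    h, w = len(image), len(image[0])
    t, l = max(0, top - margin), max(0, left - margin)
    b, r = min(h, bottom + margin), min(w, right + margin)
    return [row[l:r] for row in image[t:b]]

img = [[c for c in range(20)] for _ in range(20)]   # toy 20x20 slice
roi = crop_roi(img, top=8, left=8, bottom=12, right=12, margin=5)
```

Saving the crop as .png/.bmp would then use any ordinary image library; that I/O step is omitted here.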
Step 4) builds a dedicated deep pyramid-pooling convolutional neural network classification model for each modality and for the fusion image, each network having the property of accepting inputs of arbitrary size; it then fuses the multi-modal features extracted by these classification models and builds one classification model on the fused features. It specifically comprises the following steps:
S4-1, building a dedicated deep pyramid convolutional neural network classification model for each modality image and for the fusion image; the network structure uses a series of spatial pyramid pooling layers before the fully connected layers so that the input image may be of arbitrary size; in addition, the network includes other structures such as shortcut (residual) connections and network-in-network modules to accelerate training and improve performance; the optimization algorithm uses methods such as stochastic gradient descent, Adam, Nadam, Adagrad, Adadelta, and RMSprop; the classification layer uses softmax or a linear SVM as the activation function; and the depth and width of the network are tuned by experimental methods such as grid search so that each classification model reaches the highest accuracy, i.e., the network extracts the discriminative features in the image to the greatest extent possible;
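The spatial pyramid pooling that lets each network accept arbitrary input sizes can be sketched for a single channel as follows. The pyramid levels (1, 2) are an assumed configuration; the text only requires that pyramid pooling precede the fully connected layers:

```python
def spatial_pyramid_pool(feat, levels=(1, 2)):
    """Max-pool a 2-D feature map (nested lists) into an n x n grid of
    bins for each pyramid level n and concatenate the bin maxima,
    yielding a fixed-length vector (here 1*1 + 2*2 = 5 values)
    regardless of the input height and width."""
    h, w = len(feat), len(feat[0])
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                r0, r1 = i * h // n, max((i + 1) * h // n, i * h // n + 1)
                c0, c1 = j * w // n, max((j + 1) * w // n, j * w // n + 1)
                out.append(max(feat[r][c] for r in range(r0, r1)
                               for c in range(c0, c1)))
    return out

# Two different input sizes, same output length:
v1 = spatial_pyramid_pool([[1, 2], [3, 4]])
v2 = spatial_pyramid_pool([[i + j for j in range(5)] for i in range(7)])
```

Because the output length depends only on the pyramid levels (and, per channel, on the channel count), the fully connected layers that follow see a fixed-size input whatever the ROI size, which is exactly the property S4-1 relies on.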
S4-2. Input the data again into each trained modality-specific deep pyramid convolutional neural network described above, and extract the features output by the fully connected layer immediately before the classification layer (the second-to-last layer);
S4-3. The features of the modality images are fused, first using a bilinear fusion function. Bilinear fusion takes the outer product of the elements at corresponding positions of the two feature maps and sums over all positions; the channel number of the fused feature map is the square of the channel number of the original feature maps. It is expressed as
y_bil = Σ_{h=1..H} Σ_{w=1..W} x_a(h, w) ⊗ x_b(h, w),
where x_a and x_b denote the feature maps of different modality images and y_bil denotes the fused feature map; x_a, x_b ∈ R^(H×W×D), where H, W and D denote the height, width and channel number of the feature maps respectively;
S4-4. The feature map obtained in the above step has excessively high dimensionality. The present invention therefore further uses a convolution fusion function y_conv = f_conv(x_a, x_b) to reduce the dimension of the fused feature map: the result of the bilinear fusion function is convolved with a filter f, and a bias b is introduced, expressed as
yconv=ybil*f+b
where f ∈ R^(1×1×2D×D) and b ∈ R^D. The top-level features of the multi-modality images are thereby effectively fused;
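The two fusion functions of S4-3 and S4-4 can be sketched as follows. The bilinear part sums outer products over spatial positions (D² output channels); the convolution part is written here as a 1×1 convolution over the 2D stacked channels, matching the stated filter shape f ∈ R^(1×1×2D×D). This is one interpretation, since the patent text is ambiguous about exactly what the filter is applied to.

```python
import numpy as np

def bilinear_fuse(xa, xb):
    """y_bil: outer product of the D-dim feature vectors at each spatial
    position, summed over positions -> a D*D-channel fusion descriptor."""
    return np.einsum('hwi,hwj->ij', xa, xb).reshape(-1)

def conv_fuse(xa, xb, f, b):
    """y_conv: a 1x1 convolution with filter f (2D -> D channels) plus
    bias b over the channel-wise concatenation of the two feature maps."""
    stacked = np.concatenate([xa, xb], axis=-1)  # H x W x 2D
    return stacked @ f + b                       # H x W x D

d = 3
xa = np.random.rand(4, 4, d)
xb = np.random.rand(4, 4, d)
y_bil = bilinear_fuse(xa, xb)                                    # length d*d
y_conv = conv_fuse(xa, xb, np.random.rand(2 * d, d), np.zeros(d))
```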
S4-5. Train a classification model on the fused features obtained in S4-4 until it reaches the highest accuracy. The classification method may include, but is not limited to, training a strong classifier assembled from weak classifiers, such as an AdaBoost model or a random forest model, or training a kernel method such as a support vector machine.
Step 5) adjudicates the classification results of the basic classification models of step 4): combining their classification error rates in the training stage with their performance on the specific case, outlying classification results are removed and a weighted vote is taken over the remaining results. It comprises the following steps:
S5-1. Compute the classification error rate of each basic classification model on the training-stage test data. Suppose there are M basic classification models, denoted C_m, m = {1, 2, ..., M}, and the training data set is T = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)}, where x_i ∈ X ⊆ R^n and y_i ∈ Y = {-1, +1}. The classification error rate of the m-th classification model on the training data set is
e_m = (1/N) Σ_{i=1..N} I(C_m(x_i) ≠ y_i),
where I(·) is the indicator function;
S5-2. Compute the coefficient α_m of each of the M classification models from its classification error rate:
α_m = (1/2) ln((1 − e_m) / e_m);
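Steps S5-1 and S5-2 are the standard AdaBoost bookkeeping; a minimal sketch, using the unweighted misclassification fraction and illustrative labels:

```python
import math

def model_weight(y_true, y_pred):
    """e_m: fraction of training examples misclassified by model C_m;
    alpha_m = 0.5 * ln((1 - e_m) / e_m), large when the model is accurate."""
    n = len(y_true)
    e = sum(t != p for t, p in zip(y_true, y_pred)) / n
    return e, 0.5 * math.log((1 - e) / e)

# one wrong prediction out of four -> e_m = 0.25, alpha_m = 0.5 * ln 3
e_m, alpha_m = model_weight([1, 1, -1, -1], [1, -1, -1, -1])
```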
S5-3. Unify the class labels of the different classification models, obtain each model's predicted probability for the test case on each class label, remove the two judgement deviation points, and compute the weighted sum of the remaining models' discrimination probabilities to obtain the final diagnostic opinion and its certainty.
In one implementation example, after softmax prediction the class labels are unified to {−1, 1} and the predicted probability on each class label is computed. For softmax, the output value is itself the predicted probability; for classification models such as SVM and AdaBoost, the predicted probability for an example x is computed by first mapping the class label of each basic classification network model into {−1, 1} with a unifying function A_m(x). The predicted probability P_m is then computed as follows, where Label is the class label:
Pm(Label=1)=(Am(x)+1)/2
Pm(Label=-1)=1- (Am(x)+1)/2。
The final classification result is obtained as follows. For each of the M basic classification network models a prediction probability PL_m is computed, and the maximum predicted probability of each basic classification network model is expressed as
Pmmax = max[Pm(Label=1), Pm(Label=-1)].
The sums PL_m + P_mmax are computed and sorted; the basic classification network models corresponding to the maximum and the minimum values are removed, and a linear combination
f(x) = Σ_m α_m C_m(x)
is built over the remaining M − 2 basic classification network models, giving the final classification result C(x) = sign(f(x)). The linear combination f(x) realises the weighted vote of the M − 2 basic classification models: the coefficient α_m before C_m(x) indicates the importance of the m-th model C_m(x) (the coefficients do not necessarily sum to 1), the sign of f(x) determines the class of the example x, and the absolute value of f(x) represents the certainty of the classification.
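The adjudication of step 5) can be sketched as follows. The most- and least-confident models (the two "judgement deviation points") are dropped, and the rest take an α-weighted vote. Ranking by the maximum class probability here stands in for the PL_m + P_mmax criterion, whose exact expression is not reproduced in this code.

```python
import numpy as np

def ensemble_decide(probs, alphas):
    """probs: (M, 2) per-model probabilities for labels (+1, -1);
    alphas: (M,) model coefficients alpha_m.

    Drop the most- and least-confident model, weight the signed margins
    of the rest by alpha_m, and return the class sign(f) together with
    the certainty |f|.
    """
    conf = probs.max(axis=1)
    keep = np.argsort(conf)[1:-1]          # drop min- and max-confidence
    margin = probs[:, 0] - probs[:, 1]     # in [-1, 1]; sign = label
    f = float(np.sum(alphas[keep] * margin[keep]))
    return int(np.sign(f)), abs(f)

# four models; the first (too confident) and third (least confident)
# are removed, leaving a weighted vote over the remaining two
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.6, 0.4], [0.3, 0.7]])
label, certainty = ensemble_decide(probs, np.ones(4))
```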
Embodiment 2
The images in Fig. 2 were provided by Shanghai Changhai Hospital; PET/CT is taken as an example to illustrate the image fusion process. First, the registered PET image a) and CT image b) are fused into a single pseudo-colour image c), which is converted into a grey-level image d) by grey-scale transformation, so that the information of both modalities is fused together for subsequent processing; subsequent processing may also be performed directly on the pseudo-colour image. The pseudo-colour image is constructed by using CT images at two different HU value ranges as two of its channels, with the PET image as the third channel.
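The fusion of Embodiment 2 can be sketched as follows; the two HU window ranges are illustrative values, since the patent does not specify them.

```python
import numpy as np

def pet_ct_pseudo(ct_hu, pet, windows=((-100.0, 200.0), (200.0, 1000.0))):
    """Pseudo-colour fusion: two CT windows (in HU) become channels 0-1,
    the normalised PET uptake becomes channel 2; averaging the channels
    gives the grey-level fused image used for subsequent processing."""
    chans = [np.clip((ct_hu - lo) / (hi - lo), 0.0, 1.0) for lo, hi in windows]
    chans.append((pet - pet.min()) / (np.ptp(pet) + 1e-8))
    pseudo = np.stack(chans, axis=-1)   # H x W x 3 pseudo-colour image c)
    grey = pseudo.mean(axis=-1)         # grey-level image d)
    return pseudo, grey

ct = np.random.uniform(-1000, 1500, (64, 64))
pet = np.random.uniform(0, 10, (64, 64))
pseudo, grey = pet_ct_pseudo(ct, pet)
```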
Embodiment 3
An example of building the deep pyramid-pooling convolutional neural network. Although the specific network structure of the deep pyramid-pooling convolutional neural network differs for each modality image, every network comprises the following four parts:
1) Input of images of arbitrary size. The input image may be preprocessed by mean-centring, standardization, ZCA whitening, etc., so that training converges more easily and the training process is accelerated;
2) The deep convolutional neural network (DCNN) itself, comprising convolutional layers, pooling layers, BN layers, short (skip) connections, etc. Its depth, width, optimization algorithm, activation function, learning rate, and so on are tuned so that it has the best feature-extraction capability;
3) The pyramid pooling layer. By introducing a pyramid pooling layer, the feature maps of different sizes produced by the convolutional network in 2) are unified to an identical size for the fully connected layer, which is what allows the network to handle input images of different sizes;
4) The classification layer, which classifies the features extracted from the input image, thereby converting the disease-diagnosis problem into a feature-classification problem. Its activation functions include, but are not limited to, softmax and a linear SVM.
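Among the preprocessing options listed in part 1), ZCA whitening is the least standard; a minimal sketch follows (the epsilon regularizer and the test data are illustrative, not from the patent):

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """Zero-centre X (n_samples x n_features) and multiply by
    W = U diag(1/sqrt(s + eps)) U^T from the eigendecomposition of the
    covariance, giving approximately identity covariance while staying
    as close as possible to the original feature space."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / X.shape[0]
    s, U = np.linalg.eigh(cov)
    W = U @ np.diag(1.0 / np.sqrt(s + eps)) @ U.T
    return Xc @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) * np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Xw = zca_whiten(X)
cov_w = Xw.T @ Xw / Xw.shape[0]   # should be close to the identity
```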
Although embodiments of the present invention have been disclosed above, the invention is not limited to the applications listed in the description and the embodiments; it can be applied to various fields suited to it. Additional modifications can easily be realised by those skilled in the art, and therefore, without departing from the general concept defined by the claims and their equivalent scope, the invention is not limited to the specific details and embodiments shown and described here.

Claims (9)

1. A method for intelligent auxiliary identification of pancreatic cancer and pancreatic inflammatory disease, characterized by comprising the following steps:
1) reading pancreas medical image data and performing a normalization operation to obtain normalized images;
2) performing denoising, registration and image fusion on the normalized images to obtain a multi-modal fusion image;
3) selecting a region of interest in the image in which the pancreatic structure is shown most clearly, mapping it onto the other images, and saving the region of interest in a natural-image format recognizable by the subsequent classification network;
4) extracting, classifying and fusing the features of the multi-modal fusion image according to the selected region of interest, and establishing basic classification network models for the fused features;
5) adjudicating the classification results of the basic classification networks to obtain the final classification result.
2. The method according to claim 1, characterized in that the pancreas medical image data in step 1) is obtained from a PACS system and medical imaging devices.
3. The method according to claim 1, characterized in that the image fusion in step 2) uses pixel-level image fusion techniques, including spatial-domain algorithms and transform-domain algorithms.
4. The method according to claim 1, characterized in that the region of interest in step 3) is a rectangle containing the whole pancreatic tissue of the affected area, and the natural-image format is .png or .bmp.
5. The method according to claim 1, characterized in that the feature extraction, classification and fusion and the establishment of the basic classification network models in step 4) proceed as follows:
1) building a dedicated deep pyramid convolutional neural network for the multi-modal fusion image, the structure of which uses a series of pyramid pooling layers before the fully connected layers so that the input image may be of arbitrary size;
2) inputting data into the dedicated deep pyramid convolutional neural network for the multi-modal fusion image and extracting the features output by the fully connected layer to generate feature maps;
3) fusing the above features based on a bilinear fusion function, i.e., taking the outer product of the elements at corresponding positions of the two feature maps and summing over all positions to obtain a fused feature map whose channel number is the square of the channel number of the original feature maps, expressed as
y_bil = Σ_{h=1..H} Σ_{w=1..W} x_a(h, w) ⊗ x_b(h, w),
where y_bil denotes the fused feature map, x_a and x_b denote the feature maps, x_a, x_b ∈ R^(H×W×D), and H, W and D denote the height, width and channel number of the feature maps respectively;
4) using a convolution fusion function to reduce the dimension of the fused feature map and obtain a dimension-reduced fused feature map, i.e., convolving the result of the bilinear fusion function with a filter f while introducing a bias b, expressed as
yconv=ybil* f+b,
where y_conv is the convolution fusion result, f ∈ R^(1×1×2D×D), and b ∈ R^D;
5) training a classification model on the dimension-reduced fused feature map, i.e., establishing a basic classification network model, the classification method used being either assembling weak classifiers into a strong classifier or training a kernel method such as a support vector machine.
6. The method according to claim 1, characterized in that the adjudication in step 5) proceeds as follows:
1) training each established basic classification network model with training data and computing its classification error rate;
2) computing the coefficient of each basic classification network model from its classification error rate;
3) unifying the class labels of the basic classification network models, obtaining each model's predicted probability for the case under test on each class label, taking a weighted vote over the remaining predicted probabilities after removing the deviation points, and obtaining the final classification result.
7. The method according to claim 6, characterized in that the classification error rate is computed as follows: suppose there are M basic classification network models, denoted C_m, m = {1, 2, ..., M}, and the training data set is T = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)}, where x_i ∈ X ⊆ R^n and y_i ∈ Y = {-1, +1}; the classification error rate of the m-th classification model on the training data set is
e_m = (1/N) Σ_{i=1..N} I(C_m(x_i) ≠ y_i),
and the coefficient of each basic classification network model is
α_m = (1/2) ln((1 − e_m) / e_m).
8. The method according to claim 7, characterized in that the class labels of the basic classification network models are unified into {−1, 1} by a unifying function A_m(x), and the prediction probability P_m is computed as
Pm(Label=1)=(Am(x)+1)/2
Pm(Label=-1)=1- (Am(x)+1)/2,
Wherein, Label is class label.
9. The method according to claim 8, characterized in that the final classification result is obtained as follows: for each of the M basic classification network models a prediction probability PL_m is computed, and
the maximum predicted probability of each basic classification network model is expressed as
Pmmax=max [Pm(Label=1), Pm(Label=-1)],
the sums PL_m + P_mmax are computed and sorted, the basic classification network models corresponding to the maximum and the minimum values are removed, a weighted vote is realised by building a linear combination f(x) over the remaining M − 2 basic classification network models, and the final classification result C(x) = sign(f(x)) is obtained;
wherein the linear combination is f(x) = Σ_m α_m C_m(x), summed over the M − 2 remaining basic classification network models.
CN201810141703.4A 2018-02-11 2018-02-11 Method for intelligently assisting in identifying pancreatic cancer and pancreatic inflammatory diseases Active CN108537773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810141703.4A CN108537773B (en) 2018-02-11 2018-02-11 Method for intelligently assisting in identifying pancreatic cancer and pancreatic inflammatory diseases

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810141703.4A CN108537773B (en) 2018-02-11 2018-02-11 Method for intelligently assisting in identifying pancreatic cancer and pancreatic inflammatory diseases

Publications (2)

Publication Number Publication Date
CN108537773A true CN108537773A (en) 2018-09-14
CN108537773B CN108537773B (en) 2022-06-17

Family

ID=63485999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810141703.4A Active CN108537773B (en) 2018-02-11 2018-02-11 Method for intelligently assisting in identifying pancreatic cancer and pancreatic inflammatory diseases

Country Status (1)

Country Link
CN (1) CN108537773B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976436A (en) * 2010-10-14 2011-02-16 西北工业大学 Pixel-level multi-focus image fusion method based on correction of differential image
CN105956532A (en) * 2016-04-25 2016-09-21 大连理工大学 Traffic scene classification method based on multi-scale convolution neural network
CN106682435A (en) * 2016-12-31 2017-05-17 西安百利信息科技有限公司 System and method for automatically detecting lesions in medical image through multi-model fusion
CN107291822A (en) * 2017-05-24 2017-10-24 北京邮电大学 The problem of based on deep learning disaggregated model training method, sorting technique and device
CN107403201A (en) * 2017-08-11 2017-11-28 强深智能医疗科技(昆山)有限公司 Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method
CN107492097A (en) * 2017-08-07 2017-12-19 北京深睿博联科技有限责任公司 A kind of method and device for identifying MRI image area-of-interest
US20180032846A1 (en) * 2016-08-01 2018-02-01 Nvidia Corporation Fusing multilayer and multimodal deep neural networks for video classification


Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110909755B (en) * 2018-09-17 2023-05-30 阿里巴巴集团控股有限公司 Object feature processing method and device
CN110909755A (en) * 2018-09-17 2020-03-24 阿里巴巴集团控股有限公司 Object feature processing method and device
CN109559296A (en) * 2018-10-08 2019-04-02 广州市本真网络科技有限公司 Medical image registration method and system based on full convolutional neural networks and mutual information
CN109544512B (en) * 2018-10-26 2020-09-18 浙江大学 Multi-mode-based embryo pregnancy result prediction device
CN109544512A (en) * 2018-10-26 2019-03-29 浙江大学 It is a kind of based on multi-modal embryo's pregnancy outcome prediction meanss
CN109544517A (en) * 2018-11-06 2019-03-29 中山大学附属第一医院 Method and system for multi-modal ultrasound radiomics analysis based on deep learning
CN109273084A (en) * 2018-11-06 2019-01-25 中山大学附属第一医院 Method and system for feature modeling based on multi-modal ultrasound radiomics
CN109273084B (en) * 2018-11-06 2021-06-22 中山大学附属第一医院 Method and system based on multi-mode ultrasound omics feature modeling
CN111243711A (en) * 2018-11-29 2020-06-05 皇家飞利浦有限公司 Feature identification in medical imaging
CN111243711B (en) * 2018-11-29 2024-02-20 皇家飞利浦有限公司 Feature recognition in medical imaging
CN109815965A (en) * 2019-02-13 2019-05-28 腾讯科技(深圳)有限公司 A kind of image filtering method, device and storage medium
CN109815965B (en) * 2019-02-13 2021-07-06 腾讯科技(深圳)有限公司 Image filtering method and device and storage medium
CN109998599A (en) * 2019-03-07 2019-07-12 华中科技大学 A kind of light based on AI technology/sound double-mode imaging fundus oculi disease diagnostic system
CN109949288A (en) * 2019-03-15 2019-06-28 上海联影智能医疗科技有限公司 Tumor type determines system, method and storage medium
CN110188788A (en) * 2019-04-15 2019-08-30 浙江工业大学 The classification method of cystic Tumor of Pancreas CT image based on radiation group feature
CN110349662A (en) * 2019-05-23 2019-10-18 复旦大学 The outliers across image collection that result is accidentally surveyed for filtering pulmonary masses find method and system
CN110349662B (en) * 2019-05-23 2023-01-13 复旦大学 Cross-image set outlier sample discovery method and system for filtering lung mass misdetection results
CN110619639A (en) * 2019-08-26 2019-12-27 苏州同调医学科技有限公司 Method for segmenting radiotherapy image by combining deep neural network and probability map model
CN110909672A (en) * 2019-11-21 2020-03-24 江苏德劭信息科技有限公司 Smoking action recognition method based on double-current convolutional neural network and SVM
CN111667486A (en) * 2020-04-29 2020-09-15 杭州深睿博联科技有限公司 Multi-mode fusion pancreas segmentation method and system based on deep learning
CN111667486B (en) * 2020-04-29 2023-11-17 杭州深睿博联科技有限公司 Multi-modal fusion pancreas segmentation method and system based on deep learning
CN111798410A (en) * 2020-06-01 2020-10-20 深圳市第二人民医院(深圳市转化医学研究院) Cancer cell pathological grading method, device, equipment and medium based on deep learning model
CN111680687B (en) * 2020-06-09 2022-05-10 江西理工大学 Depth fusion classification method applied to mammary X-ray image anomaly identification
CN111680687A (en) * 2020-06-09 2020-09-18 江西理工大学 Depth fusion model applied to mammary X-ray image anomaly identification and classification method thereof
CN111833332A (en) * 2020-07-15 2020-10-27 中国医学科学院肿瘤医院深圳医院 Generation method and identification method of energy spectrum CT identification model of bone metastasis tumor and bone island
CN112070809A (en) * 2020-07-22 2020-12-11 中国科学院苏州生物医学工程技术研究所 Accurate diagnosis system of pancreatic cancer based on two time formation of image of PET/CT
CN112070809B (en) * 2020-07-22 2024-01-26 中国科学院苏州生物医学工程技术研究所 Pancreatic cancer accurate diagnosis system based on PET/CT double-time imaging
CN112419306A (en) * 2020-12-11 2021-02-26 长春工业大学 Lung nodule detection method based on NAS-FPN
CN112419306B (en) * 2020-12-11 2024-03-15 长春工业大学 NAS-FPN-based lung nodule detection method
CN112951426A (en) * 2021-03-15 2021-06-11 山东大学齐鲁医院 Construction method and evaluation system of pancreatic ductal adenoma inflammatory infiltration degree judgment model
CN113066110A (en) * 2021-05-06 2021-07-02 北京爱康宜诚医疗器材有限公司 Method and device for selecting marking points in pelvis registration
CN113449770A (en) * 2021-05-18 2021-09-28 科大讯飞股份有限公司 Image detection method, electronic device and storage device
CN113449770B (en) * 2021-05-18 2024-02-13 科大讯飞股份有限公司 Image detection method, electronic device and storage device
CN115240854A (en) * 2022-07-29 2022-10-25 中国医学科学院北京协和医院 Method and system for processing pancreatitis prognosis data
CN115240854B (en) * 2022-07-29 2023-10-03 中国医学科学院北京协和医院 Pancreatitis prognosis data processing method and system

Also Published As

Publication number Publication date
CN108537773B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN108537773A Method for intelligently assisting in identifying pancreatic cancer and pancreatic inflammatory diseases
Sharif et al. A comprehensive review on multi-organs tumor detection based on machine learning
Adegun et al. Deep learning-based system for automatic melanoma detection
Qureshi et al. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends
Hambarde et al. Prostate lesion segmentation in MR images using radiomics based deeply supervised U-Net
CN106372390A (en) Deep convolutional neural network-based lung cancer preventing self-service health cloud service system
Huang et al. Semantic segmentation of breast ultrasound image with fuzzy deep learning network and breast anatomy constraints
Fan et al. Lung nodule detection based on 3D convolutional neural networks
Xu et al. A review of medical image detection for cancers in digestive system based on artificial intelligence
Wang et al. A dual-mode deep transfer learning (D2TL) system for breast cancer detection using contrast enhanced digital mammograms
Nayan et al. A deep learning approach for brain tumor detection using magnetic resonance imaging
Li et al. A novel deep learning framework based mask-guided attention mechanism for distant metastasis prediction of lung cancer
Sharma et al. A survey on cancer detection via convolutional neural networks: current challenges and future directions
Liu et al. Automated classification of cervical Lymph-Node-Level from ultrasound using depthwise separable convolutional swin transformer
CN109635866B (en) Method of processing an intestinal image
Affane et al. Robust deep 3-d architectures based on vascular patterns for liver vessel segmentation
Li et al. Gleason grading of prostate cancer based on improved AlexNet
Zhang et al. ASE-Net: A tumor segmentation method based on image pseudo enhancement and adaptive-scale attention supervision module
Serpa-Andrade et al. An approach based on Fourier descriptors and decision trees to perform presumptive diagnosis of esophagitis for educational purposes
Chen et al. Research related to the diagnosis of prostate cancer based on machine learning medical images: A review
Amritha et al. Liver tumor segmentation and classification using deep learning
Sallam et al. Skin Lesions Recognition System Using Various Pre-trained Models
Mu et al. Channel context and dual-domain attention based U-Net for skin lesion attributes segmentation
Rahman et al. Automated melanoma recognition in dermoscopic images based on extreme learning machine (ELM)
Bagherieh et al. Mass detection in lung CT images using region growing segmentation and decision making based on fuzzy systems

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant