CN107451615A - Thyroid papillary carcinoma Ultrasound Image Recognition Method and system based on Faster RCNN - Google Patents
- Publication number
- CN107451615A (application CN201710647846.8A)
- Authority
- CN
- China
- Prior art keywords
- layer
- training
- faster rcnn
- ultrasonoscopy
- networks
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/032—Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- General Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- Radiology & Medical Imaging (AREA)
- Heart & Thoracic Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Pathology (AREA)
- Veterinary Medicine (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
Abstract
Disclosed are a thyroid papillary carcinoma ultrasound image recognition method and system based on Faster RCNN. The method includes: obtaining training samples comprising ultrasound images and corresponding diagnostic results, where each ultrasound image is obtained by ultrasonic imaging of a patient's thyroid papillary carcinoma lesion area; training a ZF-network-based Faster RCNN to be trained with the training samples to obtain a corresponding trained model, where the Faster RCNN to be trained is the Faster RCNN obtained by connecting the fourth and fifth layers of the shared convolutional layers; and, when an ultrasound image to be detected is acquired, inputting the ultrasound image to be detected into the trained model to obtain the detection result output by the trained model. Because the Faster RCNN used in the present invention is obtained by connecting the fourth and fifth shared convolutional layers, the features output by the fourth and fifth layers are merged into a single feature stream, which effectively improves the precision of image recognition.
Description
Technical field
The present invention relates to the field of image recognition, and in particular to a thyroid papillary carcinoma ultrasound image recognition method and system based on Faster RCNN.
Background technology
Image recognition is a popular research topic. With the development of digital image technology and the needs of practical applications, a new challenge has emerged: the required output is no longer a complete processed image, but a classification decision reached by segmenting the processed image, extracting and describing effective features, and then classifying them. This is image recognition, an emerging technical discipline that has grown up over the past twenty years. Its main content is the study and description of the classification of objects or processes, and its purpose is to develop machine vision systems that automatically process image information and replace humans in the tasks of classification and recognition.
Applications to medical ultrasound images, however, remain relatively few. Automatic classification of medical ultrasound images is essentially an image similarity comparison problem, also called a pattern recognition problem. Before the appearance of CNNs (Convolutional Neural Networks), the main pattern recognition methods for images included SIFT (Scale-Invariant Feature Transform), BOW (Bag of Words, a document representation commonly used in information retrieval), SVM (Support Vector Machine), and K-means-type algorithms. In histopathology, medical ultrasonic imaging systems are affected by their surrounding environment and imaging mechanism, and ultrasound medical images are disturbed by various noise sources during generation and transmission, so the quality of the collected medical ultrasound images is poor.
Pathological cells in cancer ultrasound images are difficult to identify: traditional image recognition methods have low recognition accuracy, long training times, and require many network parameters. At present, the CNN has proved to be one of the better structures in deep learning, and it is also well suited to image feature extraction. However, to obtain an accurate model, a CNN usually needs to be trained on a very large number of images; the famous ImageNet, for example, was built from 15 million photos. In the medical field it is almost impossible to obtain so many images. Moreover, the features of histopathology images are more complex than those of natural images, these images often have only one channel, and pathological features are often very similar to, or barely distinguishable from, the surrounding tissue features, sometimes to the point that even specialist attending physicians have difficulty telling them apart. For these reasons, CNN applications in medical imaging are not yet widespread. As for the SIFT, BOW, SVM and similar pattern recognition algorithms: for incomplete gland features, Y. Toki and T. Tanaka identified prostate cancer using image features extracted with SIFT; for the color and texture features of breast cancer tissue biopsy specimens, S. I. Niwas and Palanisamy diagnosed breast cancer with a least-squares support vector machine classification algorithm. These algorithms rely on limited manually annotated resources and can only perform pattern matching against a limited set of features. Once the features change (for example through distortion, flipping, illumination variation, or corruption), the performance of these algorithms deteriorates and their applicability is weak.
CNNs are now used more and more widely. For example, Angel Cruz-Roa and Ajay Basavanhally used CNN techniques to automatically segment pathological images of invasive breast carcinoma and ultimately generate a cancer feature map. Michiel Kallenberg and Kersten Petersen combined supervised and unsupervised learning to achieve breast density segmentation and mammary risk assessment.
However, the recognition accuracy of CNN-based analysis and evaluation of pathological images is still not high, so how to improve the recognition accuracy of CNN-based analysis and evaluation of pathological images is an urgent problem for those skilled in the art.
Summary of the invention
In view of this, it is an object of the present invention to provide a thyroid papillary carcinoma ultrasound image automatic recognition method and system based on Faster RCNN (Faster Region-based Convolutional Neural Network), which can improve the recognition accuracy of pathological images. The concrete scheme is as follows:
In one aspect, the present invention provides a thyroid papillary carcinoma ultrasound image recognition method based on Faster RCNN, including:
obtaining training samples comprising ultrasound images and corresponding diagnostic results, where each ultrasound image is obtained by ultrasonic imaging of a patient's thyroid papillary carcinoma lesion area;
training the ZF-network (ZFNet)-based Faster RCNN to be trained with the training samples to obtain a corresponding trained model, where the Faster RCNN to be trained is the Faster RCNN obtained by connecting the fourth and fifth layers of the shared convolutional layers;
when an ultrasound image to be detected is acquired, inputting the ultrasound image to be detected into the trained model to obtain the detection result output by the trained model.
Preferably, the process of training the ZF-network-based Faster RCNN to be trained with the training samples further includes:
before the fourth and fifth layers are connected, normalizing the ROI-pooled feature vectors corresponding to the fourth and fifth layers.
Preferably, the process of training the ZF-network-based Faster RCNN to be trained with the training samples further includes:
optimizing and adjusting the network parameters of the Faster RCNN currently being trained by stochastic gradient descent;
wherein the learning rate of the stochastic gradient descent is 0.001.
Preferably, before the process of training the ZF-network-based Faster RCNN to be trained with the training samples, the method further includes:
removing from the ultrasound images the target image regions unrelated to the thyroid papillary carcinoma lesion area;
wherein the target image regions include blank areas and/or text areas and/or tissue regions surrounding the thyroid.
Preferably, the network framework of the Faster RCNN to be trained is the python-version Faster RCNN network framework.
Preferably, in any of the above methods, the process of obtaining training samples comprising ultrasound images and corresponding diagnostic results includes:
obtaining training samples of ultrasound images of N image sizes and corresponding diagnostic results;
wherein N is an integer greater than 1.
In another aspect, the present invention also provides a thyroid papillary carcinoma ultrasound image recognition system based on Faster RCNN, including:
a sample acquisition module for obtaining training samples comprising ultrasound images and corresponding diagnostic results, where each ultrasound image is obtained by ultrasonic imaging of a patient's thyroid papillary carcinoma lesion area;
a network training module for training the ZF-network-based Faster RCNN to be trained with the training samples to obtain a corresponding trained model, where the Faster RCNN to be trained is the Faster RCNN obtained by connecting the fourth and fifth layers of the shared convolutional layers;
an image detection module for, when an ultrasound image to be detected is acquired, inputting the ultrasound image to be detected into the trained model and obtaining the detection result output by the trained model.
Preferably, the system further includes:
a normalization module for normalizing the ROI (Region Of Interest)-pooled feature vectors corresponding to the fourth and fifth layers before the two layers are connected.
Preferably, the system further includes:
a rejection module for removing from the ultrasound images the target image regions unrelated to the thyroid papillary carcinoma lesion area;
wherein the target image regions include blank areas and/or text areas and/or tissue regions surrounding the thyroid.
Preferably, in any of the above systems, the sample acquisition module includes:
a multi-size sample acquisition submodule for obtaining training samples of ultrasound images of N image sizes and corresponding diagnostic results;
wherein N is an integer greater than 1.
Compared with the prior art, the above technical solution has the following advantages:
The Faster RCNN-based thyroid papillary carcinoma ultrasound image automatic recognition method provided by the present invention obtains training samples comprising ultrasound images and corresponding diagnostic results, where each ultrasound image is obtained by ultrasonic imaging of a patient's thyroid papillary carcinoma lesion area; trains the ZF-network-based Faster RCNN to be trained with the training samples to obtain a corresponding trained model, where the Faster RCNN to be trained is the Faster RCNN obtained by connecting the fourth and fifth layers of the shared convolutional layers; and, when an ultrasound image to be detected is acquired, inputs the ultrasound image to be detected into the trained model to obtain the detection result output by the trained model.
It can be seen that the present invention trains the model with a Faster RCNN, and that the Faster RCNN used in the present invention is obtained by connecting the fourth and fifth layers of the shared convolutional layers. This merges the features output by the fourth and fifth layers into a single feature stream and, compared with the prior art in which the fourth and fifth shared convolutional layers are not connected, effectively improves the precision of image recognition.
The present invention also provides a Faster RCNN-based thyroid papillary carcinoma ultrasound image recognition system capable of running the above method; it likewise improves the precision of image recognition and thereby improves operating efficiency.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative work.
Fig. 1 is a flow chart of the Faster RCNN-based thyroid papillary carcinoma ultrasound image recognition method provided by the first specific embodiment of the present invention.
Fig. 2 is a flow chart of the Faster RCNN-based thyroid papillary carcinoma ultrasound image recognition method provided by the second specific embodiment of the present invention.
Fig. 3 is an effect diagram of the Faster RCNN-based thyroid papillary carcinoma ultrasound image recognition method provided by a specific embodiment of the present invention.
Fig. 4 is a schematic diagram of the composition of the Faster RCNN-based thyroid papillary carcinoma ultrasound image recognition system provided by a specific embodiment of the present invention.
Embodiments
The core of the present invention is to provide a Faster RCNN-based thyroid papillary carcinoma ultrasound image recognition method in which the Faster RCNN is obtained by connecting the fourth and fifth layers of the shared convolutional layers. This merges the features output by the fourth and fifth layers into a single feature stream and, compared with the prior art in which the fourth and fifth shared convolutional layers are not connected, effectively improves the precision of image recognition.
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative work fall within the scope of protection of the present invention.
As shown in Fig. 1, which is the flow chart of the Faster RCNN-based thyroid papillary carcinoma ultrasound image recognition method provided by the first specific embodiment of the present invention, the method includes:
Step S101: obtaining training samples comprising ultrasound images and corresponding diagnostic results, where each ultrasound image is obtained by ultrasonic imaging of a patient's thyroid papillary carcinoma lesion area.
In the first embodiment of the present invention, training samples must first be obtained. Since the method provided by the embodiment of the present invention is mainly used for thyroid papillary carcinoma ultrasound image recognition, the obtained training samples consist of images obtained by ultrasonic imaging of patients' thyroid papillary carcinoma lesion areas, together with the diagnostic results corresponding to those images.
It is worth noting that the training samples comprising ultrasound images and corresponding diagnostic results may be ready-made annotated data, or the practitioner may independently collect images obtained by ultrasonic imaging of patients' thyroid papillary carcinoma lesion areas together with their corresponding diagnostic results and annotate them, as long as the training samples contain the training objects that need to be trained on.
When collecting samples independently, professionals, for example experienced chief physicians, can be engaged to annotate the acquired images obtained by ultrasonic imaging of patients' thyroid papillary carcinoma lesion areas. To ensure the accuracy of the annotations and reduce individual subjective error, other professionals can review the annotated images so as to obtain more accurate annotation results.
Further, so that the annotated content can easily be seen, annotation can use rectangular boxes whose shortest side exceeds 3 mm, with each rectangular box completely enclosing the symptom feature region in the image.
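As a sketch of the shortest-side constraint just described, the check below converts a box's pixel dimensions to millimetres. The pixel-spacing calibration parameter and the function name are assumptions for illustration; the patent only states the 3 mm lower bound on the short side.

```python
def box_min_side_ok(x1, y1, x2, y2, mm_per_pixel, min_side_mm=3.0):
    """Check that the shorter side of an annotation box exceeds min_side_mm.

    mm_per_pixel is an assumed calibration value (physical size of one
    pixel in the ultrasound image); the text only fixes the 3 mm bound.
    """
    short_side_px = min(abs(x2 - x1), abs(y2 - y1))
    return short_side_px * mm_per_pixel > min_side_mm

# A 40x25-pixel box at 0.1 mm/pixel has a 2.5 mm short side: rejected.
print(box_min_side_ok(0, 0, 40, 25, mm_per_pixel=0.1))  # False
print(box_min_side_ok(0, 0, 40, 45, mm_per_pixel=0.1))  # True
```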
Further, to clearly record the annotations of different symptoms, different letters can be used to represent different symptoms. For example, during annotation each cancerous region is labeled "c" to denote the cancer class; if its boundary is unclear, the region is additionally labeled "b"; if it is irregular in shape, additionally labeled "x"; if its echo is heterogeneous, additionally labeled "h"; and if there are calcified regions or bright spots, additionally labeled "q". This helps the judgment at final detection: when an image is detected, as long as a region outputs one or more labels from {c, b, x, h, q}, the region is judged to be a cancer feature region.
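The decision rule above — a region is judged a cancer feature region as soon as the detector outputs one or more labels from {c, b, x, h, q} for it — can be sketched as:

```python
# Label set from the annotation scheme described in the text:
# c = cancer, b = blurred boundary, x = irregular shape,
# h = heterogeneous echo, q = calcification / bright spots.
CANCER_LABELS = {"c", "b", "x", "h", "q"}

def is_cancer_region(predicted_labels):
    """Judge a region as a cancer feature region if one or more of its
    predicted labels belongs to the annotation label set."""
    return bool(CANCER_LABELS & set(predicted_labels))

print(is_cancer_region(["x", "h"]))  # True: irregular shape + heterogeneous echo
print(is_cancer_region([]))          # False: no label output for the region
```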
Step S102: training the ZF-network-based Faster RCNN to be trained with the training samples to obtain a corresponding trained model, where the Faster RCNN to be trained is the Faster RCNN obtained by connecting the fourth and fifth layers of the shared convolutional layers.
The code of existing neural networks is open source, which makes it easy to experiment, for example to modify the Faster RCNN in various ways. The shared convolutional layers of the network can be transformed by changing the code. When a ZF network is adopted, there are five shared convolutional layers, and the output of the fifth layer normally serves as the input to the next stage. In order for part of the fourth layer's output to also feed the stage after the fifth layer, the output result of the fourth layer can be connected with the output result of the fifth layer: a scale factor is adopted, and the combined output results of the fourth and fifth layers serve together as the input of the stage that originally followed the fifth layer.
In this way the network is obtained by connecting its fourth and fifth shared convolutional layers and normalizing, so that feature extraction combines deep-layer and shallow-layer learning, which effectively improves the precision of image recognition.
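A minimal numpy sketch of the merge described above, under the assumption (true for ZFNet) that conv4 and conv5 outputs share a 13x13 spatial size, so the merge reduces to a scaled channel-wise concatenation. The scale factors stand in for the learned scaling mentioned in the text and are illustrative values.

```python
import numpy as np

def merge_conv4_conv5(feat4, feat5, scale4=1.0, scale5=1.0):
    """Merge the outputs of shared conv layers 4 and 5 into one feature stream.

    feat4, feat5: (channels, height, width) arrays. In ZFNet, conv4 and
    conv5 share the same spatial size, so the merge is a channel-wise
    concatenation; the per-layer scale factors are illustrative stand-ins
    for the learned scaling described in the patent.
    """
    assert feat4.shape[1:] == feat5.shape[1:], "spatial sizes must match"
    return np.concatenate([scale4 * feat4, scale5 * feat5], axis=0)

feat4 = np.ones((384, 13, 13))   # ZFNet conv4: 384 channels, 13x13
feat5 = np.ones((256, 13, 13))   # ZFNet conv5: 256 channels, 13x13
merged = merge_conv4_conv5(feat4, feat5)
print(merged.shape)  # (640, 13, 13): a single stream feeding the next stage
```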
Step S103: when an ultrasound image to be detected is acquired, inputting the ultrasound image to be detected into the trained model to obtain the detection result output by the trained model.
After the modified Faster RCNN has been trained on the samples described in step S101, it becomes a trained model capable of detecting images. An image to be detected can then be input into the trained model, and the trained model outputs the detection result. This step operates in the same way as with an unmodified Faster RCNN; refer to the prior art, which is not repeated here.
The above is the first embodiment of the Faster RCNN-based thyroid papillary carcinoma ultrasound image automatic recognition method provided by the present invention. In this embodiment: training samples comprising ultrasound images and corresponding diagnostic results are obtained, where each ultrasound image is obtained by ultrasonic imaging of a patient's thyroid papillary carcinoma lesion area; the ZF-network-based Faster RCNN to be trained is trained with the training samples to obtain a corresponding trained model, where the Faster RCNN to be trained is the Faster RCNN obtained by connecting the fourth and fifth layers of the shared convolutional layers; and, when an ultrasound image to be detected is acquired, the ultrasound image to be detected is input into the trained model to obtain the detection result output by the trained model.
In this first embodiment the model is trained with a Faster RCNN, and the Faster RCNN used in the embodiment of the present invention is obtained by connecting the fourth and fifth shared convolutional layers. The features output by the fourth and fifth layers are thereby merged into a single feature stream, which, compared with the prior art in which the fourth and fifth shared convolutional layers are not connected, effectively improves the precision of image recognition.
As shown in Fig. 2, which is the flow chart of the Faster RCNN-based thyroid papillary carcinoma ultrasound image recognition method provided by the second specific embodiment of the present invention, the method includes:
Step S201: obtaining training samples of ultrasound images of N image sizes and corresponding diagnostic results;
wherein N is an integer greater than 1 and each ultrasound image is obtained by ultrasonic imaging of a patient's thyroid papillary carcinoma lesion area.
It has been found through research that the sizes of cancer features in ultrasound images vary greatly. By inputting images of multiple sizes (here, multiple sizes specifically means more than one size of feature region in the images) and training on training samples that comprise ultrasound images of multiple image sizes with their corresponding diagnostic results, the Faster RCNN can learn features over different size ranges. This adds robustness, reduces the influence of down-sampling on the feature representation, and improves the efficiency of extracting the primitive features of the images.
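The multi-size input can be sketched with a nearest-neighbour rescale producing N versions of one image. The particular scales and the resampling method are assumptions for illustration, since the text only requires more than one image size.

```python
import numpy as np

def rescale_nearest(img, out_h, out_w):
    """Nearest-neighbour rescale of a 2-D (single-channel) ultrasound image."""
    rows = np.arange(out_h) * img.shape[0] // out_h
    cols = np.arange(out_w) * img.shape[1] // out_w
    return img[np.ix_(rows, cols)]

def multi_size_samples(img, scales=(0.5, 1.0, 1.5)):
    """Produce N = len(scales) sizes of the same image for training."""
    return [rescale_nearest(img, int(img.shape[0] * s), int(img.shape[1] * s))
            for s in scales]

img = np.zeros((200, 300))
sizes = [v.shape for v in multi_size_samples(img)]
print(sizes)  # [(100, 150), (200, 300), (300, 450)]
```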
Further, to make better use of the cancer pathology annotations on these images, the images can be pre-processed, because ordinary images contain large amounts of non-target tissue and structure. First, blank margins and text information such as the LOGO around the image can be removed by cropping; then PhotoShop or other image software can be used to remove the background tissue surrounding the thyroid, producing images that contain only the recognition target.
Further, so that the annotated images obtained by ultrasonic imaging of patients' thyroid papillary carcinoma lesion areas conform to the xml file format required for Faster RCNN training, the VOC2007 data set can be used. For example, in one specific embodiment, the Faster RCNN uses the ZF network and is pre-trained with the VOC2007 database; target bounding boxes are drawn first, using the packaged opencv dynamic library, and files in xml format are generated. The xml files are saved into Annotations, the training sample images are saved into JPEGImages, and the corresponding files of the VOC2007 database are overwritten. The training samples then conform to the xml format required for Faster RCNN training.
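A sketch of generating one VOC2007-style xml annotation as described above, using Python's standard library rather than the packaged opencv dynamic library mentioned in the text; the file name, label, and box coordinates are illustrative.

```python
import xml.etree.ElementTree as ET

def make_voc_annotation(filename, width, height, boxes):
    """Build a minimal VOC2007-style annotation tree.

    boxes: list of (label, xmin, ymin, xmax, ymax). Only the elements
    Faster RCNN training typically reads are emitted; a full VOC file
    carries more fields (pose, truncated, difficult, ...).
    """
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = filename
    size = ET.SubElement(ann, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "1"  # ultrasound images: one channel
    for label, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(ann, "object")
        ET.SubElement(obj, "name").text = label
        bnd = ET.SubElement(obj, "bndbox")
        for tag, v in zip(("xmin", "ymin", "xmax", "ymax"),
                          (xmin, ymin, xmax, ymax)):
            ET.SubElement(bnd, tag).text = str(v)
    return ann

tree = make_voc_annotation("000001.jpg", 512, 512, [("c", 120, 80, 260, 210)])
xml_text = ET.tostring(tree, encoding="unicode")
print("<name>c</name>" in xml_text)  # True
```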
Training samples comprising ultrasound images and corresponding diagnostic results are thus obtained, where each ultrasound image is obtained by ultrasonic imaging of a patient's thyroid papillary carcinoma lesion area.
Step S202: training the ZF-network-based Faster RCNN to be trained with the training samples to obtain a corresponding trained model, where the Faster RCNN to be trained is the Faster RCNN obtained by normalizing and then connecting the fourth and fifth layers of the shared convolutional layers.
In the first embodiment, the fourth and fifth shared convolutional layers are connected. However, when connecting at the fourth and fifth shared convolutional layers, in order to expand the depth features that the object defines over multiple convolutional layers, the two feature vectors must be combined for ROI pooling so as to reduce their dimensionality. In fact, the size, number, and pixel values of the features differ from layer to layer, with deeper layers generally having smaller values. Connecting convolutional layers of different depths in the ZF network may therefore lead to poor performance, because the difference in scale between convolutional layers of different depths has too great an influence on the subsequent weights.
On the other hand, one feasible solution to this problem is to normalize the ROI-pooled feature vectors corresponding to the fourth and fifth layers before proceeding. In this way, in the second embodiment of the present invention, the Faster RCNN can learn the value of the scale factor for each layer.
Each vector is normalized by applying the L2 norm (the square root of the sum of the squares of the vector's elements); the normalization is performed at each pixel of the feature vector. After normalization, a scale is applied to each vector, as follows:
x̂ = x / ||x||_2, with ||x||_2 = (Σ_{i=1}^{d} |x_i|^2)^{1/2}
where x denotes the original pixel vector, x̂ denotes the normalized pixel vector, and d denotes the number of channels in each ROI-pooled feature vector.
A scale factor γ_i is then applied to the ROI-pooled feature vector of each channel:
y_i = γ_i x̂_i
During training, the scale factor γ_i is continuously updated; with back-propagation of the input x, it is computed by the chain rule:
∂ℓ/∂γ_i = Σ (∂ℓ/∂y_i) x̂_i
where y = [y_1, y_2, ..., y_d]^T, and the scale factor is applied so that the combined output results of the fourth and fifth layers serve together as the input of the stage that originally followed the fifth layer.
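The per-channel L2 normalization with a learned scale factor γ described above can be sketched in numpy; this is a sketch, not the patent's implementation, and the feature shape and initial γ value are illustrative.

```python
import numpy as np

def l2norm_scale(feat, gamma, eps=1e-12):
    """L2-normalize a (channels, h, w) ROI-pooled feature map at each pixel,
    then apply a per-channel scale factor gamma.

    At every spatial position the d-dimensional channel vector x is replaced
    by x / ||x||_2; channel i is then multiplied by gamma[i] (learned during
    training; here initialized with an illustrative constant).
    """
    norm = np.sqrt((feat ** 2).sum(axis=0, keepdims=True)) + eps
    x_hat = feat / norm
    return gamma[:, None, None] * x_hat

feat = np.random.default_rng(0).normal(size=(256, 6, 6))
gamma = np.full(256, 10.0)  # illustrative initial scale value
y = l2norm_scale(feat, gamma)
# Every pixel's channel vector now has norm equal to the scale (10.0 here).
print(np.allclose(np.sqrt((y ** 2).sum(axis=0)), 10.0))  # True
```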
Further, in order to use GPU (Graphics Processing Unit) acceleration, the python-version Faster RCNN network framework for deep learning can be used (Python is an object-oriented interpreted programming language). In one embodiment, the hardware environment used is: Ubuntu 14.04, 64-bit; CPU: Xeon(R) E5-1630 × 4; memory: 64 GB; GPU: Quadro K2200. With such a configuration the Faster RCNN runs, trains, and produces test results relatively quickly, and the python-version Faster RCNN framework can make full use of the performance of the Quadro K2200 GPU and improve efficiency.
Further, when the obtained training samples are used to train the Faster RCNN, in order to adapt the Faster RCNN to the detection of carcinoma images, stochastic gradient descent can be used to train the network parameters of the Faster RCNN. At the start of training, the learning rate of the stochastic gradient descent can be set to 0.001 (other suitable values are of course also possible), so that fine-tuning proceeds smoothly without disturbing the initial parameters.
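The stochastic gradient descent update with the 0.001 learning rate named above can be sketched as follows; the toy quadratic objective is an illustrative stand-in for the real Faster RCNN loss.

```python
def sgd_step(params, grads, lr=0.001):
    """One stochastic gradient descent update: w <- w - lr * grad."""
    return [w - lr * g for w, g in zip(params, grads)]

# Toy example: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = [0.0]
for _ in range(5000):
    w = sgd_step(w, [2.0 * (w[0] - 3.0)])
print(round(w[0], 3))  # 3.0 — the small learning rate converges smoothly
```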
Step S203: when an ultrasound image to be detected is acquired, inputting the ultrasound image to be detected into the trained model to obtain the detection result output by the trained model.
After the Faster RCNN has been trained, target detection can begin. Before starting target detection, the Faster RCNN can be pre-trained on a large data set, because the number of images in the training samples is small. For example, in one specific embodiment, the carcinoma images used for training carry annotated cancer features and the carcinoma images used for testing carry 805 annotated cancer features; compared with the 15 million photos used to train ImageNet mentioned in the background section, this quantity is far too small. To compensate for the possible effects of too small a sample, the method of pre-training the Faster RCNN is used, but this pre-training uses only image-level annotation: whole images are simply labeled, without bounding-box annotation of the specific features on the images. Of course, it is also possible not to pre-train the Faster RCNN on a large data set and to start detecting images directly according to the prior-art flow, which is not repeated in this embodiment.
To make the test results more convincing, a comparative experiment is used in one specific embodiment of the invention to show that the scheme provided by the present invention has higher recognition capability.
Four training groups can be established, namely Training 1, Training 2, Training 3 and Training 4, with the following parameters:
Training 1: the original Faster RCNN network; iteration counts set to 40000, 20000, 40000, 20000; learning rate set to 0.001.
Training 2: the original Faster RCNN network, but with multi-size input of the sample images used for training; iteration counts set to 40000, 20000, 40000, 20000; learning rate set to 0.001.
Training 3: the 4th and 5th layers of the shared convolutional layers of the Faster RCNN network are connected and normalized before being used for training; iteration counts set to 40000, 20000, 40000, 20000; learning rate set to 0.001.
Training 4: the 4th and 5th layers of the shared convolutional layers of the Faster RCNN network are connected, and multi-size input is used for training; iteration counts set to 40000, 20000, 40000, 20000; learning rate set to 0.001.
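The four ablation settings differ only in two switches, which can be captured as configuration records. This is a sketch; the field names are illustrative, and the four-stage iteration counts correspond to the alternating training stages of Faster RCNN.

```python
# Shared hyper-parameters of all four training groups (from the embodiment).
COMMON = {"iterations": [40000, 20000, 40000, 20000], "learning_rate": 0.001}

# The two improvements under test: multi-size input and conv4/conv5 fusion.
configs = {
    "training_1": {"multi_scale_input": False, "fuse_conv4_conv5": False},
    "training_2": {"multi_scale_input": True,  "fuse_conv4_conv5": False},
    "training_3": {"multi_scale_input": False, "fuse_conv4_conv5": True},
    "training_4": {"multi_scale_input": True,  "fuse_conv4_conv5": True},
}
for cfg in configs.values():
    cfg.update(COMMON)
```

Holding the iteration counts and learning rate fixed across all four groups is what makes the mAP differences in Table 1 attributable to the two switches alone.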
The mAP (mean Average Precision) of the model obtained from each training is shown in Table 1:
Table 1

| Training 1 | Training 2 | Training 3 | Training 4
---|---|---|---|---
mAP | 0.618 | 0.652 | 0.696 | 0.738
Test results and analysis:
Ultrasound image samples of thyroid papillary carcinoma were input into the trained thyroid papillary carcinoma ultrasound image recognition models; the test results of the model obtained after each training are shown in Table 2 below:
Table 2

| TP | TPR | FN | FNR
---|---|---|---|---
Ground Truth | 805 | | |
Model 1 | 604 | 0.750 | 201 | 0.250
Model 2 | 678 | 0.842 | 127 | 0.158
Model 3 | 618 | 0.768 | 187 | 0.232
Model 4 | 715 | 0.888 | 90 | 0.112
In Table 2:
TP: the number of true positives; here, the number of thyroid papillary carcinoma features correctly identified;
TPR: the true-positive rate, i.e. the recognition rate of true positives;
FN: the number of missed detections (no correct match found); here, the number of thyroid papillary carcinoma features not identified, i.e. false negatives;
FNR: the false-negative rate.
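Since every ground-truth feature is either detected (TP) or missed (FN), the two rates follow directly from the counts in Table 2. A short check, using the Model 4 row:

```python
def rates(tp, fn):
    """TPR and FNR from true-positive and false-negative counts,
    where the ground-truth total is TP + FN."""
    total = tp + fn
    return tp / total, fn / total

# Model 4 from Table 2: 715 features detected, 90 missed, 805 ground truth.
tpr, fnr = rates(715, 90)
print(round(tpr, 3), round(fnr, 3))  # 0.888 0.112
```

The same computation reproduces every TPR/FNR pair in Table 2 from its TP/FN columns.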
As can be seen from Table 2, Model 4 has the best detection performance of the four models, reaching 88.8%. It uses both multi-size input and the connection of the 4th and 5th convolutional layers of the ZF network, and achieves a good recognition result on thyroid papillary carcinoma ultrasound images.
Model 1 is the unimproved Faster RCNN, whose TPR reaches 75%. With multi-size input (Model 2), the TPR reaches 84.2%, an improvement of 9.2 percentage points. It can thus be seen that multi-size input of the samples effectively extracts cancer features and considerably improves recognition accuracy. In addition, with the layer-connection method (Model 3), the TPR reaches 76.8%, nearly 2 percentage points higher than the original model; connecting the 4th and 5th shared convolutional layers is therefore effective for the extraction of cancer features. Since both training the Faster RCNN system with multi-size labeled ultrasound images of thyroid papillary carcinoma and connecting the 4th and 5th shared convolutional layers improve the extraction of cancer features, the two approaches are combined to obtain Model 4, whose TPR reaches 88.8%, nearly 14 percentage points higher than Model 1. The application of the combined method is thus very effective. Fig. 3 shows the result of detection using Model 4.
As shown in Fig. 4, which is a schematic diagram of the composition of a thyroid papillary carcinoma ultrasound image recognition system based on Faster RCNN provided by a specific embodiment of the invention.
The present invention also provides a thyroid papillary carcinoma ultrasound image recognition system based on Faster RCNN, including: a sample acquisition module 401, for obtaining training samples including ultrasound images and corresponding diagnostic results, wherein the ultrasound images are images obtained by ultrasonic imaging of the thyroid papillary carcinoma affected area of a patient; a network training module 402, for training the ZF-network-based Faster RCNN network to be trained using the training samples to obtain a corresponding trained model, wherein the Faster RCNN network to be trained is a Faster RCNN network obtained by connecting the 4th and 5th shared convolutional layers; and an image detection module 403, for inputting an ultrasound image to be detected into the trained model when such an image is acquired, and obtaining the detection result output by the trained model.
The system may include a normalization module, for normalizing the ROI-pooled feature vectors corresponding to the 4th and 5th layers before the two layers are connected.
The system may also include a rejection module, for removing from the ultrasound image target regions unrelated to the thyroid papillary carcinoma affected area, wherein the target regions include blank regions and/or text regions and/or parathyroid tissue regions.
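The rejection module amounts to cropping the scan region out of the raw frame before training or detection, discarding blank margins and on-screen annotation text. A minimal sketch (pure Python on a nested-list "image"; the ROI coordinates are illustrative, and a real system would locate them per scanner layout):

```python
def reject_irrelevant(image, roi):
    """Keep only the region of interest of a frame.
    image: nested list [rows][cols]; roi: (top, bottom, left, right)."""
    top, bottom, left, right = roi
    return [row[left:right] for row in image[top:bottom]]

frame = [[0] * 10 for _ in range(8)]        # toy 8x10 "ultrasound frame"
cropped = reject_irrelevant(frame, (1, 7, 2, 9))
print(len(cropped), len(cropped[0]))        # 6 7
```

Removing these regions keeps the network from spending capacity on pixels that can never contain a cancer feature.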
The sample acquisition module may include a multi-size sample acquisition submodule, for obtaining training samples of ultrasound images of N image sizes and corresponding diagnostic results, wherein N is an integer greater than 1.
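The multi-size submodule can be sketched as generating N target sizes per sample. The scale factors below are an assumption for illustration; the patent only requires N > 1 distinct image sizes.

```python
def multi_scale_sizes(base_hw, scales=(0.5, 1.0, 1.5)):
    """Return N = len(scales) target (height, width) pairs for one sample."""
    h, w = base_hw
    return [(int(h * s), int(w * s)) for s in scales]

sizes = multi_scale_sizes((600, 800))
print(sizes)  # [(300, 400), (600, 800), (900, 1200)]
```

Each resized copy of an image (with its labels scaled accordingly) then enters the training set as a separate sample, which is what lets the network see cancer features at more than one scale.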
Finally, it should be noted that relational terms herein such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or device comprising that element.
The thyroid papillary carcinoma ultrasound image recognition method and system based on Faster RCNN provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and embodiments of the invention; the above description of the embodiments is intended only to help in understanding the method of the invention and its core idea. Meanwhile, for those skilled in the art, changes may be made in the specific embodiments and the scope of application according to the idea of the invention. In summary, the contents of this specification should not be construed as limiting the invention.
Claims (10)
- 1. A thyroid papillary carcinoma ultrasound image recognition method based on Faster RCNN, characterized by comprising: obtaining training samples including ultrasound images and corresponding diagnostic results, wherein the ultrasound images are images obtained by ultrasonic imaging of the thyroid papillary carcinoma affected area of a patient; training the ZF-network-based Faster RCNN network to be trained using the training samples to obtain a corresponding trained model, wherein the Faster RCNN network to be trained is a Faster RCNN network obtained by connecting the 4th and 5th shared convolutional layers; and, when an ultrasound image to be detected is acquired, inputting the ultrasound image to be detected into the trained model and obtaining the detection result output by the trained model.
- 2. The method according to claim 1, characterized in that the process of training the ZF-network-based Faster RCNN network to be trained using the training samples further comprises: before the 4th and 5th layers are connected, normalizing the ROI-pooled feature vectors corresponding to the 4th and 5th layers.
- 3. The method according to claim 1, characterized in that the process of training the ZF-network-based Faster RCNN network to be trained using the training samples further comprises: optimizing the network parameters of the Faster RCNN network currently being trained using stochastic gradient descent, wherein the learning rate corresponding to the stochastic gradient descent method is 0.001.
- 4. The method according to claim 1, characterized in that before the process of training the ZF-network-based Faster RCNN network to be trained using the training samples, the method further comprises: removing from the ultrasound image target regions unrelated to the thyroid papillary carcinoma affected area, wherein the target regions include blank regions and/or text regions and/or parathyroid tissue regions.
- 5. The method according to claim 1, characterized in that the network framework of the Faster RCNN network to be trained is the Python version of the Faster RCNN network framework.
- 6. The method according to any one of claims 1 to 5, characterized in that the process of obtaining training samples including ultrasound images and corresponding diagnostic results comprises: obtaining training samples of ultrasound images of N image sizes and corresponding diagnostic results, wherein N is an integer greater than 1.
- 7. A thyroid papillary carcinoma ultrasound image recognition system based on Faster RCNN, characterized by comprising: a sample acquisition module, for obtaining training samples including ultrasound images and corresponding diagnostic results, wherein the ultrasound images are images obtained by ultrasonic imaging of the thyroid papillary carcinoma affected area of a patient; a network training module, for training the ZF-network-based Faster RCNN network to be trained using the training samples to obtain a corresponding trained model, wherein the Faster RCNN network to be trained is a Faster RCNN network obtained by connecting the 4th and 5th shared convolutional layers; and an image detection module, for inputting an ultrasound image to be detected into the trained model when such an image is acquired, and obtaining the detection result output by the trained model.
- 8. The system according to claim 7, characterized by further comprising: a normalization module, for normalizing the ROI-pooled feature vectors corresponding to the 4th and 5th layers before the 4th and 5th layers are connected.
- 9. The system according to claim 7, characterized by further comprising: a rejection module, for removing from the ultrasound image target regions unrelated to the thyroid papillary carcinoma affected area, wherein the target regions include blank regions and/or text regions and/or parathyroid tissue regions.
- 10. The system according to any one of claims 7 to 9, characterized in that the sample acquisition module comprises: a multi-size sample acquisition submodule, for obtaining training samples of ultrasound images of N image sizes and corresponding diagnostic results, wherein N is an integer greater than 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710647846.8A CN107451615A (en) | 2017-08-01 | 2017-08-01 | Thyroid papillary carcinoma Ultrasound Image Recognition Method and system based on Faster RCNN |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710647846.8A CN107451615A (en) | 2017-08-01 | 2017-08-01 | Thyroid papillary carcinoma Ultrasound Image Recognition Method and system based on Faster RCNN |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107451615A true CN107451615A (en) | 2017-12-08 |
Family
ID=60490746
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710647846.8A Pending CN107451615A (en) | 2017-08-01 | 2017-08-01 | Thyroid papillary carcinoma Ultrasound Image Recognition Method and system based on Faster RCNN |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107451615A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106339680A (en) * | 2016-08-25 | 2017-01-18 | 北京小米移动软件有限公司 | Human face key point positioning method and device |
CN106650655A (en) * | 2016-12-16 | 2017-05-10 | 北京工业大学 | Action detection model based on convolutional neural network |
CN106951928A (en) * | 2017-04-05 | 2017-07-14 | 广东工业大学 | The Ultrasound Image Recognition Method and device of a kind of thyroid papillary carcinoma |
- 2017-08-01 CN CN201710647846.8A patent/CN107451615A/en active Pending
Non-Patent Citations (1)
Title |
---|
XUDONG SUN et al.: "Face Detection using Deep Learning: An Improved Faster RCNN Approach", arXiv *
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107945181A (en) * | 2017-12-30 | 2018-04-20 | 北京羽医甘蓝信息技术有限公司 | Treating method and apparatus for breast cancer Lymph Node Metastasis pathological image |
CN108388841A (en) * | 2018-01-30 | 2018-08-10 | 浙江大学 | Cervical biopsy area recognizing method and device based on multiple features deep neural network |
CN108460758A (en) * | 2018-02-09 | 2018-08-28 | 河南工业大学 | The construction method of Lung neoplasm detection model |
CN108734694A (en) * | 2018-04-09 | 2018-11-02 | 华南农业大学 | Thyroid tumors ultrasonoscopy automatic identifying method based on faster r-cnn |
CN108520518A (en) * | 2018-04-10 | 2018-09-11 | 复旦大学附属肿瘤医院 | A kind of thyroid tumors Ultrasound Image Recognition Method and its device |
CN112368781A (en) * | 2018-04-11 | 2021-02-12 | 帕伊医疗成像有限公司 | Method and system for assessing vascular occlusion based on machine learning |
CN108846828A (en) * | 2018-05-04 | 2018-11-20 | 上海交通大学 | A kind of pathological image target-region locating method and system based on deep learning |
CN110517757A (en) * | 2018-05-21 | 2019-11-29 | 美国西门子医疗***股份有限公司 | The medical ultrasound image of tuning |
CN110517757B (en) * | 2018-05-21 | 2023-08-04 | 美国西门子医疗***股份有限公司 | Tuned medical ultrasound imaging |
CN108814565A (en) * | 2018-07-04 | 2018-11-16 | 重庆邮电大学 | A kind of intelligent Chinese medicine health detection dressing table based on multi-sensor information fusion and deep learning |
US10993653B1 (en) | 2018-07-13 | 2021-05-04 | Johnson Thomas | Machine learning based non-invasive diagnosis of thyroid disease |
CN109124764A (en) * | 2018-09-29 | 2019-01-04 | 上海联影医疗科技有限公司 | Guide device of performing the operation and surgery systems |
CN109124764B (en) * | 2018-09-29 | 2020-07-14 | 上海联影医疗科技有限公司 | Surgical guide device and surgical system |
CN109620293B (en) * | 2018-11-30 | 2020-07-07 | 腾讯科技(深圳)有限公司 | Image recognition method and device and storage medium |
CN109620293A (en) * | 2018-11-30 | 2019-04-16 | 腾讯科技(深圳)有限公司 | A kind of image-recognizing method, device and storage medium |
CN110033042A (en) * | 2019-04-15 | 2019-07-19 | 青岛大学 | A kind of carcinoma of the rectum ring week incisxal edge MRI image automatic identifying method and system based on deep neural network |
CN110245716A (en) * | 2019-06-20 | 2019-09-17 | 杭州睿琪软件有限公司 | Sample labeling auditing method and device |
CN110245716B (en) * | 2019-06-20 | 2021-05-14 | 杭州睿琪软件有限公司 | Sample labeling auditing method and device |
CN110490892A (en) * | 2019-07-03 | 2019-11-22 | 中山大学 | A kind of Thyroid ultrasound image tubercle automatic positioning recognition methods based on USFaster R-CNN |
CN110338842A (en) * | 2019-07-11 | 2019-10-18 | 北京市朝阳区妇幼保健院 | A kind of image optimization method of newborn's lungs ultrasonic image-forming system |
CN112991166A (en) * | 2019-12-16 | 2021-06-18 | 无锡祥生医疗科技股份有限公司 | Intelligent auxiliary guiding method, ultrasonic equipment and storage medium |
CN111062953A (en) * | 2019-12-17 | 2020-04-24 | 北京化工大学 | Method for identifying parathyroid hyperplasia in ultrasonic image |
CN111311553A (en) * | 2020-01-21 | 2020-06-19 | 长沙理工大学 | Mammary tumor identification method and device based on region of interest and storage medium |
CN111260641A (en) * | 2020-01-21 | 2020-06-09 | 珠海威泓医疗科技有限公司 | Palm ultrasonic imaging system and method based on artificial intelligence |
CN111612752A (en) * | 2020-05-15 | 2020-09-01 | 江苏省人民医院(南京医科大学第一附属医院) | Ultrasonic image thyroid nodule intelligent detection system based on fast-RCNN |
CN112102236A (en) * | 2020-08-07 | 2020-12-18 | 东南大学 | Polycrystalline subfissure detection method based on two deep stages |
CN112380900A (en) * | 2020-10-10 | 2021-02-19 | 深圳视见医疗科技有限公司 | Deep learning-based cervical fluid-based cell digital image classification method and system |
CN112446862A (en) * | 2020-11-25 | 2021-03-05 | 北京医准智能科技有限公司 | Dynamic breast ultrasound video full-focus real-time detection and segmentation device and system based on artificial intelligence and image processing method |
CN112446862B (en) * | 2020-11-25 | 2021-08-10 | 北京医准智能科技有限公司 | Dynamic breast ultrasound video full-focus real-time detection and segmentation device and system based on artificial intelligence and image processing method |
CN112614123A (en) * | 2020-12-29 | 2021-04-06 | 深圳开立生物医疗科技股份有限公司 | Ultrasonic image identification method and related device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107451615A (en) | Thyroid papillary carcinoma Ultrasound Image Recognition Method and system based on Faster RCNN | |
Huang et al. | Fast and fully-automated detection and segmentation of pulmonary nodules in thoracic CT scans using deep convolutional neural networks | |
CN106056595B (en) | Based on the pernicious assistant diagnosis system of depth convolutional neural networks automatic identification Benign Thyroid Nodules | |
CN107748900B (en) | Mammary gland tumor classification device and storage medium based on discriminative convolutional neural network | |
Dharmawan et al. | A new hybrid algorithm for retinal vessels segmentation on fundus images | |
CN108257135A (en) | The assistant diagnosis system of medical image features is understood based on deep learning method | |
CN106780448A (en) | A kind of pernicious sorting technique of ultrasonic Benign Thyroid Nodules based on transfer learning Yu Fusion Features | |
US20230005140A1 (en) | Automated detection of tumors based on image processing | |
CN105913086A (en) | Computer-aided mammary gland diagnosing method by means of characteristic weight adaptive selection | |
Patel | Predicting invasive ductal carcinoma using a reinforcement sample learning strategy using deep learning | |
CN114332572B (en) | Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network | |
WO2020066257A1 (en) | Classification device, classification method, program, and information recording medium | |
CN110570419A (en) | Method and device for acquiring characteristic information and storage medium | |
Puch et al. | Global planar convolutions for improved context aggregation in brain tumor segmentation | |
CN114494215A (en) | Transformer-based thyroid nodule detection method | |
Aslam et al. | Liver-tumor detection using CNN ResUNet | |
Nayan et al. | A deep learning approach for brain tumor detection using magnetic resonance imaging | |
Hassan et al. | A dilated residual hierarchically fashioned segmentation framework for extracting Gleason tissues and grading prostate cancer from whole slide images | |
Gómez-Flores et al. | Gray-to-color image conversion in the classification of breast lesions on ultrasound using pre-trained deep neural networks | |
CN113379691B (en) | Breast lesion deep learning segmentation method based on prior guidance | |
Amini | Head circumference measurement with deep learning approach based on multi-scale ultrasound images | |
de Brito Silva et al. | Classification of breast masses in mammograms using geometric and topological feature maps and shape distribution | |
CN115049898A (en) | Automatic grading method for lumbar intervertebral disc degeneration based on region block characteristic enhancement and inhibition | |
Anas et al. | Advancing Breast Cancer Detection: Enhancing YOLOv5 Network for Accurate Classification in Mammogram Images | |
Badlani et al. | Melanoma Detection Using Convolutional Neural Networks and Group Normalization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20171208