CN108319977A - Method and device for cervical biopsy region recognition based on a channel-information multi-modal network - Google Patents


Info

Publication number
CN108319977A
CN108319977A (application CN201810092566.XA); granted as CN108319977B
Authority
CN
China
Prior art keywords
image
network
cervix
feature
region recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810092566.XA
Other languages
Chinese (zh)
Other versions
CN108319977B (en)
Inventor
吴健
应兴德
陈婷婷
马鑫军
吕卫国
袁春女
姚晔俪
王新宇
吴边
陈为
吴福理
吴朝晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN201810092566.XA
Publication of CN108319977A
Application granted
Publication of CN108319977B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G06V2201/032 Recognition of patterns in medical or anatomical images of protuberances, polyps, nodules, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a cervical biopsy region recognition method and device based on a channel-information multi-modal network. The device comprises: an image acquisition unit, which acquires saline, acetic acid, and iodine images of the cervix; a data processing unit containing a trained cervical biopsy region recognition model, which analyzes the saline, acetic acid, and iodine images and outputs the probability that the cervix contains a biopsy region; and a display unit, which obtains and shows this probability label. The cervical biopsy region recognition model comprises: a detection network layer, which extracts the feature map and location information of the cervix region from each of the saline, acetic acid, and iodine images; and a feature-combination prediction network layer, which splices the 3 feature maps and their location information along the channel dimension, then performs feature fusion and recognition to output the probability that a biopsy region exists. The device can help a physician judge accurately whether a patient's cervix requires a further biopsy.

Description

Method and device for cervical biopsy region recognition based on a channel-information multi-modal network
Technical field
The present invention relates to the field of medical image processing, and in particular to a cervical biopsy region recognition method and device based on a channel-information multi-modal network.
Background art
Cervical carcinoma is the most common gynecologic malignant tumor. The peak incidence age of carcinoma in situ is 30-35 years and that of invasive carcinoma is 45-55 years; in recent years the disease has tended to occur at younger ages.
Screening for cervical lesions proceeds in three steps: (1) cervical cytology, most commonly a conventional smear; (2) colposcopy, performed when the cytology result is abnormal, to observe changes in cervical epithelial color, vasculature, and so on; (3) cervical tissue biopsy: if the colposcopy result is questionable, the physician takes a small amount of cervical tissue from the suspicious lesion under colposcopic guidance for biopsy, and the biopsy result becomes the final diagnosis of the cervical lesion.
Concretely, in colposcopy, after the cervix is exposed, 0.9% physiological saline, 3%-5% acetic acid solution, and compound iodine solution are applied to the cervical surface in turn. From the captured cervical images, the examiner checks whether an abnormal area (a region requiring biopsy) exists at the squamocolumnar junction or in the columnar epithelium. This information guides the choice of precise biopsy sites, avoids blind biopsy, and improves the biopsy positive rate and diagnostic accuracy.
Colposcopy is a detection method based on subjective experience: judging abnormal areas and their extent depends on the physician's accumulated experience and intuition, and the accuracy of that judgment directly determines the biopsy positive rate and diagnostic accuracy. With the growth of medical informatics and big data, large numbers of colposcopy results have been accumulated and preserved as image data. Machine learning and deep learning methods have already found many applications on colposcopic images, including detection of the cervical os, detection of acetowhite regions, and prediction of cervical lesion grade. Although these methods provide some assistance, they cannot fundamentally help the physician reach a more accurate judgment. Moreover, most of them use only colposcopic cervical images taken under the effect of 3%-5% acetic acid solution, which is inconsistent with the common medical practice of judging whether a biopsy region exists from the combined feature changes across the 0.9% saline, 3%-5% acetic acid, and compound iodine images. How to make reasonable use of medical images and medical practice, and design an auxiliary device for cervical biopsy region recognition that addresses these problems and fundamentally helps the physician judge more accurately, is therefore an urgent problem to be solved.
Summary of the invention
The present invention provides a cervical biopsy region recognition device based on a channel-information multi-modal network. It acquires saline, acetic acid, and iodine images of the patient's cervix, extracts and fuses the features of the images, and outputs the probability that the cervix contains a biopsy region, fundamentally assisting the physician in judging more accurately whether the patient's cervix needs a biopsy.
The present invention provides the following technical solutions:
A cervical biopsy region recognition device based on a channel-information multi-modal network, comprising:
An image acquisition unit, which acquires saline, acetic acid, and iodine images of the cervix and sends them to the data processing unit;
A data processing unit containing a trained cervical biopsy region recognition model, which analyzes the saline, acetic acid, and iodine images and outputs the probability that the cervix contains a biopsy region;
The cervical biopsy region recognition model comprises:
A detection network layer, comprising 3 independent detection sub-networks used respectively to extract the feature map and location information of the cervix region from the saline, acetic acid, and iodine images;
A feature-combination prediction network layer, which splices the 3 feature maps and location information extracted by the detection network layer along the channel dimension, then performs feature fusion and recognition and outputs the probability that a biopsy region exists;
A display unit, which obtains and displays the probability label.
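As a concrete illustration of the channel-dimension splicing described above, the following NumPy sketch stacks three per-modality feature maps along the channel axis. All shapes and variable names are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

# Hypothetical feature maps from the three detection sub-networks,
# each shaped (channels, height, width) for one cervix region.
saline_feat = np.random.rand(256, 7, 7)
acetic_feat = np.random.rand(256, 7, 7)
iodine_feat = np.random.rand(256, 7, 7)

# Splice the three maps together on the channel dimension (axis 0),
# as the feature-combination layer does before fusion.
combined = np.concatenate([saline_feat, acetic_feat, iodine_feat], axis=0)
print(combined.shape)  # (768, 7, 7)
```

The spatial layout of each map is preserved; only the channel count triples, so the subsequent fusion layers can learn cross-modality interactions per spatial location.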
The cervical biopsy region recognition device of the present invention acquires saline, acetic acid, and iodine images of the cervix through the image acquisition unit; the detection network layer of the cervical biopsy region recognition model extracts the location information and feature map of the cervix region from each image; and the feature-combination prediction network layer combines and recognizes the 3 sets of location information and feature maps to obtain the probability that the cervix contains a biopsy region, which is shown by the display unit. This assists the physician in judging accurately whether a biopsy region exists in the patient's cervix.
In the cervical biopsy region recognition device, the feature map of the cervix region is first extracted by the detection network and then recognized, making the recognition more targeted and the output more accurate.
The saline image of the cervix is the cervical image taken after applying physiological saline; the acetic acid image is the image taken after applying saline and then 3%-5% acetic acid solution; the iodine image is the image taken after applying saline, 3%-5% acetic acid solution, and then compound iodine solution.
If the cervix contains a region requiring biopsy, abnormal vessels are easier to observe under saline; under 3%-5% acetic acid solution the region shows features such as dense acetowhitening and mosaic vessels; and under compound iodine solution it shows features such as bright orange-yellow, mustard yellow, and mottled staining. Even when these features are present, it cannot be concluded that the cervix necessarily has a lesion; the physician still needs to perform a further biopsy.
The detection sub-network comprises a sequentially connected feature extraction network, cervix region proposal network, and cervix detection network, with channel selection modules added to the feature extraction network.
The feature extraction network extracts the feature map of the saline, acetic acid, or iodine image and performs channel screening on it.
The feature extraction network is a ResNet50 model augmented with channel selection modules. It comprises a sequentially connected convolutional layer, a max pooling layer, and several convolution groups; each convolution group consists of several residual units; each residual unit contains several convolutional layers, and within each unit the feature map entering the first convolutional layer also flows directly past the last convolutional layer and is added to that layer's output to form the unit's output.
A channel selection module is added after the last residual unit of each convolution group.
The channel selection module performs a channel selection operation on the input feature map: it computes a weight for each channel of the map, multiplies the weights with the map, adds the product back to the map, and outputs the channel-screened feature map. The weights range from 0 to 1.
The purpose of this construction is to screen the channels of the feature map output by each convolution group and prevent an excess of redundant channel features. The channel selection operation is equivalent to multiplying the activation values of the input feature map by (1 + weight), where each weight lies between 0 and 1.
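The effect of multiplying by (1 + weight) rather than by the weight alone can be illustrated numerically. In this toy sketch, the single scalar activation and the fixed weight of 0.5 are both assumptions made purely for illustration:

```python
x = 1.0        # a representative activation value (illustrative)
w = 0.5        # a learned channel weight in (0, 1) (illustrative)

plain, residual = x, x
for _ in range(6):            # six stacked channel-selection modules
    plain *= w                # x -> w * x: shrinks geometrically
    residual *= (1.0 + w)     # x -> x + w * x: magnitude preserved

print(plain)     # 0.015625
print(residual)  # 11.390625
```

Plain weighting drives the activation toward zero after a few modules, while the residual (1 + weight) form keeps it usable for the layers that follow.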
The cervix region proposal network is an RPN, composed of one convolutional layer with a 3 × 3 filter and stride 1 and two parallel convolutional layers with 1 × 1 filters and stride 1.
The cervix region proposal network is used to obtain the location information of the cervix region from the feature map.
The cervix detection network module consists of an ROIPooling layer and two parallel fully connected layers.
According to the location information of the cervix region, the cervix detection network performs a crop operation on the feature map to obtain the feature map of the cervix region and its location information.
Preferably, the cervix detection network is followed by a channel selection module.
The channel selection modules screen the channels of the cervix-region feature maps output by the 3 cervix detection networks; the screened feature maps then enter the feature-combination prediction network layer.
The training method of the cervical biopsy region recognition model is:
(1) Acquire saline, acetic acid, and iodine images of the cervix; preprocess them, mark the cervix region, identify whether the cervix contains a biopsy region, and build the training set;
The preprocessing method is: apply z-score standardization and data augmentation to the saline, acetic acid, and iodine images;
The saline, acetic acid, and iodine images of the same cervix form one group of data, i.e. one training sample; each group of images is labeled for the presence of a biopsy region;
Specifically, identification and labeling mean: identify and mark whether dense acetowhitening or mosaic-vessel features exist in the acetic acid image, and whether bright orange-yellow, mustard-yellow, or mottled-staining features exist in the iodine image.
Preferably, the ratio of samples with biopsy regions to normal-cervix samples in the training set is 0.8-1.2 : 1;
(2) Train the cervical biopsy region recognition model with the training set, including:
(2-1) Pre-train the detection network layer:
Input the saline, acetic acid, and iodine images of the training set into their respective detection sub-networks, train until the loss function converges, and save the model parameters of the detection sub-networks;
(2-2) Train the full cervical biopsy region recognition model:
Load the detection sub-network parameters obtained in step (2-1) into the cervical biopsy region recognition model;
Input the saline, acetic acid, and iodine images of the training set into their respective detection sub-networks, pass the results through the feature-combination prediction network layer to output the probability that a biopsy region exists, and train until the cervical biopsy region recognition model converges;
Save the trained model parameters of the cervical biopsy region recognition model.
The present invention also provides a method for cervical biopsy region recognition using the above cervical biopsy region recognition device, comprising the following steps:
(1) Acquire saline, acetic acid, and iodine images of the cervix with the image acquisition unit and input them into the cervical biopsy region recognition model in the data processing unit;
(2) The cervical biopsy region recognition model analyzes the saline, acetic acid, and iodine images, outputs the probability that the cervix contains a biopsy region, and shows it on the display unit.
Compared with the prior art, the beneficial effects of the present invention are:
The cervical biopsy region recognition device of the present invention is built on the medical practice of judging, from the changes in cervical image features after physiological saline, 3%-5% acetic acid solution, and compound iodine solution are applied, whether the cervix needs a further biopsy. It learns a model from a large number of colposcopic cervical images and uses that model to recognize cervical biopsy regions, fundamentally assisting the physician in judging more accurately whether a further biopsy of the cervix is needed.
The cervical biopsy region recognition device of the present invention performs the classification task with a detection approach: the detection network helps locate the cervical ROI precisely, the image features of the three colposcopy stages are fused, and the channel selection modules eliminate redundant channel features while retaining the most effective and salient ones. All of these contribute greatly to the accuracy of the final classification result.
Description of the drawings
Fig. 1 is a schematic workflow of the cervical biopsy region recognition device of the present invention;
Fig. 2 is a schematic flow diagram of pre-training a detection sub-network;
Fig. 3 is a schematic structural diagram of the channel selection module;
Fig. 4 is a schematic structural diagram of the feature-combination prediction network layer.
Detailed description of embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be noted that the embodiments described below are intended to facilitate understanding of the present invention and do not limit it in any way.
The cervical biopsy region recognition device of the present invention comprises:
An image acquisition unit, which acquires saline, acetic acid, and iodine images of the cervix and sends them to the data processing unit;
A data processing unit containing a trained cervical biopsy region recognition model, which analyzes the saline, acetic acid, and iodine images and outputs the probability that the cervix contains a biopsy region;
A display unit, which obtains and shows the probability label.
Based on the probability label output by the cervical biopsy region recognition device, combined with the patient's saline, acetic acid, and iodine images, the physician judges comprehensively whether the patient's cervix needs a further biopsy, and in turn whether a lesion exists.
The workflow of the cervical biopsy region recognition device of the present invention is shown in Fig. 1.
The image acquisition unit is a colposcope. Clinically, when performing colposcopy on a patient, the physician applies physiological saline, 3%-5% acetic acid solution, and compound iodine solution to the cervix in turn, and determines whether a biopsy region exists by observing the changes of features at the squamocolumnar junction and in the columnar epithelium.
The features of each stage therefore play a crucial role in the final judgment of lesion grade, and extracting the effective and important feature information from the three stages of images is the top priority. For example, features such as dense acetowhitening and mosaicism under 3%-5% acetic acid solution, and bright orange-yellow or mottled staining under compound iodine solution, are all important evidence for the physician's judgment of the biopsy region. After features are extracted from the image of each colposcopy stage, the features of all stages must be combined to predict the final cervical biopsy region. Using the image features of more stages is therefore the key to accurately predicting whether the cervix contains a biopsy region.
To extract the effective features of each stage accurately, the cervical biopsy region recognition model in the data processing unit of the present invention first extracts preliminary features from each stage's image with a residual convolutional neural network, then extracts the features of the cervix region in each image with a detection method, and finally extracts further features from the obtained cervix-region features with another residual convolutional network. Detection lets us extract image data selectively: only the features of the most important region, the cervix itself, are attended to, while the surrounding area is deliberately ignored. This strongly supervised attention mechanism lets the network focus on the region that most deserves attention and selectively learn the features of the important area.
In essence, the feature extraction network learns, through extensive training, to map a 3 × M × M three-channel RGB image to a C × m × m three-dimensional tensor with many more channels, where C > 3 and M > m. Because the number of channels multiplies in this process, many channels with redundant feature information are inevitably produced.
The cervical biopsy region recognition model of the present invention maps the high-resolution images to multi-channel three-dimensional tensors through three independent feature extraction networks with channel selection modules. Before the features of the three stages are combined, a channel selection module is added after the last residual unit that produces each class of feature map, assigning a weight to each channel of the final feature map. By learning these weights, the convolutional neural network freely selects among the many channels: effective feature information is retained, while redundant or invalid information is suppressed. Finally, after the feature maps of the three stages are effectively combined, a classification network produces the final prediction of whether the cervix contains a biopsy region.
In the description of the present invention, every convolutional layer refers to a convolutional layer followed by a batch normalization layer and a ReLU activation layer, and every fully connected layer refers to a fully connected layer followed by a ReLU activation layer; this is not repeated below.
The construction and training of the cervical biopsy region recognition model comprise the following steps:
Step 1: Construction of the training set
Extract the saline, acetic acid, and iodine images acquired during colposcopy. The saline, acetic acid, and iodine images of the same case form one group of data, i.e. one training sample. For each group, a physician marks the location of the cervix region and whether the cervix contains a biopsy region.
Specifically: identify and mark whether dense acetowhitening or mosaic-vessel features exist in the acetic acid image, and whether bright orange-yellow, mustard-yellow, or mottled-staining features exist in the iodine image.
The ratio of samples with biopsy regions to normal-cervix samples is 1:1. All samples are divided into three data sets: a training set (1373 samples), a test set (394 samples), and a validation set (192 samples).
To make the image data easier to train on, all images undergo z-score (zero-mean) standardization, i.e. subtracting the mean and dividing by the standard deviation. To prevent network overfitting and to enrich the training samples to some extent, random data augmentation is also applied before the images enter the network.
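The z-score standardization and a minimal random augmentation step might look as follows in NumPy. The image size, the flip-only augmentation, and the seed are illustrative assumptions; the patent does not specify which augmentation operations it uses:

```python
import numpy as np

def zscore(img):
    """Zero-mean, unit-variance normalization of one image."""
    return (img - img.mean()) / img.std()

def augment(img, rng):
    """Minimal illustrative augmentation: random horizontal flip."""
    return img[:, ::-1] if rng.random() < 0.5 else img

rng = np.random.default_rng(0)
image = rng.uniform(0, 255, size=(64, 64))   # stand-in for a cervical image
norm = augment(zscore(image), rng)
print(round(norm.mean(), 6), round(norm.std(), 6))  # approximately 0.0 and 1.0
```

In practice the same normalization statistics would be applied per channel of the RGB colposcopic images, and richer augmentations (crops, rotations, color jitter) are common.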
Step 2: Construction and joint training of the detection network layer
The detection network layer comprises 3 independent detection sub-networks. As shown in Fig. 2, each detection sub-network comprises a sequentially connected feature extraction network, cervix region proposal network, and cervix detection network, with channel selection modules added to the feature extraction network.
The feature extraction network is a ResNet50 model augmented with channel selection modules. It comprises, in sequence, a convolutional layer with a 7 × 7 filter and stride 2, a max pooling layer with a 3 × 3 pooling filter and stride 2, and 4 convolution groups.
The 4 convolution groups consist of 3, 4, 6, and 3 residual units respectively.
Each residual unit consists of 3 convolutional layers with filter sizes of 3 × 3, 1 × 1, and 3 × 3, all with stride 1 (except the first convolutional layer of the first residual unit of each convolution group, whose stride is 2). Within each residual unit, the feature map entering the first convolutional layer also flows directly past the third convolutional layer and is added to the output of the third convolutional layer to form the unit's output.
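The skip connection of a residual unit can be sketched as follows. This NumPy toy uses 1 × 1 convolutions only (implemented as per-pixel channel mixing) in place of the larger filters in the text, purely to keep the example short; all shapes and weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1x1(x, w):
    """1x1 convolution: a per-pixel linear map over channels."""
    return np.einsum('oc,chw->ohw', w, x)

def relu(x):
    return np.maximum(x, 0.0)

def residual_unit(x, w1, w2, w3):
    """Three stacked convolutions plus the identity skip connection:
    the input bypasses the convolutions and is added to their output."""
    y = relu(conv1x1(x, w1))
    y = relu(conv1x1(y, w2))
    y = conv1x1(y, w3)
    return relu(y + x)   # skip connection: output = F(x) + x

C, H, W = 8, 4, 4
x = rng.standard_normal((C, H, W))
w1, w2, w3 = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
out = residual_unit(x, w1, w2, w3)
print(out.shape)  # (8, 4, 4)
```

The identity path is what lets gradients flow directly through many stacked units, which is the point of the residual design the patent builds on.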
A channel selection module is added after the last residual unit of each convolution group. Its structure is shown in Fig. 3: the feature map output by the last residual unit of each convolution group passes in turn through a global pooling layer, a 1 × 1 convolutional layer, a ReLU activation layer, another 1 × 1 convolutional layer, and a sigmoid activation layer to produce a weight for each channel of the feature map; the weights are multiplied with the feature map, the product is added back to the feature map, and the channel-screened feature map is output.
The purpose of this construction is to screen the channels of the feature map output by each convolution group and prevent an excess of redundant channel features. The channel selection module we use adds the branch output back to the feature map on the main path, so that each activation value on the feature map is multiplied by (1 + weight), where the weight lies between 0 and 1. Without this branch, every activation would be multiplied by the weight alone; this could still screen channel information, but after several such modules the activations would become very small and impair the inference of the final result.
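A minimal NumPy sketch of the whole channel selection module, under the assumption of a reduction ratio of 4 between the two 1 × 1 convolutions (the patent does not state the hidden width), might be:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_selection(x, w_down, w_up):
    """Channel-selection module: global pooling, two 1x1 convolutions
    with ReLU between, sigmoid weights, then x + w * x."""
    squeeze = x.mean(axis=(1, 2))               # global average pool -> (C,)
    hidden = np.maximum(w_down @ squeeze, 0.0)  # 1x1 conv (C -> C/r) + ReLU
    weights = sigmoid(w_up @ hidden)            # 1x1 conv (C/r -> C) + sigmoid
    scaled = weights[:, None, None] * x         # per-channel re-weighting
    return x + scaled                           # branch added back: (1 + w) * x

rng = np.random.default_rng(2)
C, r = 16, 4                                    # channels, reduction ratio (assumed)
x = rng.standard_normal((C, 8, 8))
w_down = rng.standard_normal((C // r, C)) * 0.1
w_up = rng.standard_normal((C, C // r)) * 0.1
out = channel_selection(x, w_down, w_up)
print(out.shape)  # (16, 8, 8)
```

Because each weight lies in (0, 1), every output activation keeps the sign of its input with a magnitude between 1x and 2x of it, which is what keeps activations from vanishing across stacked modules.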
The cervix region proposal network consists mainly of one convolutional layer with a 3 × 3 filter and stride 1 and two parallel convolutional layers with 1 × 1 filters and stride 1.
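Using the standard Faster R-CNN RPN convention as an assumed reference (k anchors per location, 2k objectness scores, 4k box offsets; the patent does not give these head sizes), the two parallel 1 × 1 heads could be sketched as:

```python
import numpy as np

rng = np.random.default_rng(3)
C, H, W, k = 32, 10, 10, 9               # channels, map size, anchors/location (assumed)

feat = rng.standard_normal((C, H, W))    # output of the shared 3x3 conv (assumed)
w_cls = rng.standard_normal((2 * k, C))  # 1x1 conv head: objectness per anchor
w_reg = rng.standard_normal((4 * k, C))  # 1x1 conv head: box offsets per anchor

# A 1x1 convolution is a channel-wise linear map applied at every location.
cls_scores = np.einsum('oc,chw->ohw', w_cls, feat)  # (2k, H, W)
box_deltas = np.einsum('oc,chw->ohw', w_reg, feat)  # (4k, H, W)
print(cls_scores.shape, box_deltas.shape)  # (18, 10, 10) (36, 10, 10)
```

One head scores whether a cervix region exists at each anchor; the other regresses the anchor toward the region's location, matching the two outputs described for the proposal network.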
The cervix detection network consists mainly of 1 ROIPooling layer and two parallel fully connected layers.
The cervix detection network is followed by a channel selection module, which screens the channels of the cervix-region feature map output by the detection network.
Physiological saline image, acetic acid image and the iodine image that 3 detection sub-network networks are respectively adopted in training set are trained, Here we only by taking the training of acetic acid image as an example.
Acetic acid image is input in feature extraction network first and obtains high-dimensional feature figure, later distinguishes this feature figure It is input in uterine neck face referral networks and ROIPooling layers.In the referral networks of region, two parallel convolutional layer difference Export uterine neck face location information that may be present and on the position possibility existing for uterine neck face in ROIPooling layers, By the error for being compared both predictive information and true tag, feature extraction network and uterine neck can be optimized Face referral networks.
The ROIPooling layers of feature exported in feature extraction network according to the location information of uterine neck face referral networks output On figure carry out Crop operations, obtain may the characteristic pattern containing uterine neck face and location information (being combined referred to as ROI), pay attention to Here it will be divided into two paths, a paths lead to feature and prediction network, another paths is combined then to continue to vent to uterine neck face Detect the full articulamentum of network.
After passing through the fully connected layers of the cervix detection network, the ROI yields both the probability that the ROI contains a biopsy region and the position offset between the ROI and the true cervix region. By comparing these two outputs with the ground truth, the errors used to optimize the cervix detection network and the feature extraction network are obtained (note that the biopsy-region probability obtained here is not the final result; it serves only to optimize the detection sub-network).
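The patent only states that a position offset is regressed; a common parameterization, shown here purely as an assumption, is the Faster R-CNN (tx, ty, tw, th) form:

```python
import numpy as np

def box_offsets(roi_box, true_box):
    """Position offset between an ROI and the true cervix-region box.

    The (tx, ty, tw, th) parameterization below is the common
    Faster R-CNN convention, used here as an assumption.
    Boxes are (x1, y1, x2, y2).
    """
    def center_form(b):
        w, h = b[2] - b[0], b[3] - b[1]
        return b[0] + w / 2.0, b[1] + h / 2.0, w, h

    px, py, pw, ph = center_form(roi_box)
    gx, gy, gw, gh = center_form(true_box)
    # Translation normalized by ROI size, scale as a log ratio.
    return np.array([(gx - px) / pw, (gy - py) / ph,
                     np.log(gw / pw), np.log(gh / ph)])
```

Normalizing by the ROI size makes the regression target scale-invariant, which is why this form is widely used for box refinement.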
Before the ROI enters the joint feature prediction network layer, it first passes through a channel selection module for channel screening.
During training, the model is trained on the training set; training is judged complete once the loss and accuracy curves settle. Throughout training, the validation set is used to monitor the model's performance.
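Judging that training is complete once the loss curve settles can be sketched as a simple plateau test; the window size and tolerance below are illustrative choices, not values from the patent:

```python
def has_converged(losses, window=5, tol=1e-3):
    """Plateau test: the curve has 'settled' when the spread of the
    last few loss values falls below a tolerance. The window size and
    tolerance are illustrative choices, not values from the patent."""
    if len(losses) < window:
        return False
    recent = losses[-window:]
    return max(recent) - min(recent) < tol
```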
Once training of the detection network layer is complete, its model parameters are saved.
Step 3: Construction and training of the joint feature prediction network layer
The structure of the joint feature prediction network layer is shown in Figure 4; it comprises a feature combination network and a feature fusion network connected in sequence.
The feature combination network stacks the channel-screened ROIs obtained from the three detection sub-networks.
The feature fusion network consists of three fully connected layers and one cross-entropy activation layer.
To train the joint feature prediction network layer, the saved model parameters of the detection network layer are first loaded into the cervical biopsy region recognition model. The physiological saline, acetic-acid and iodine images of the training set are then fed into their respective detection sub-network layers, and the channel-screened ROIs they output are fed into the joint feature prediction network layer. After feature combination and feature fusion, the output is the final probability that a biopsy region exists; the error between this result and the ground truth is used to train the joint feature prediction network layer.
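A minimal sketch of this combine-then-fuse step, assuming ReLU between the fully connected layers and a softmax producing the cross-entropy-trained probability; the layer widths in the example are illustrative, only the count of three fully connected layers comes from the description:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def combine_and_fuse(roi_saline, roi_acetic, roi_iodine, fc_weights):
    """Stack the three modality ROIs on the channel dimension, then
    fuse them through fully connected layers into a biopsy-region
    probability. fc_weights: list of three FC weight matrices
    (sizes assumed for illustration)."""
    # Feature combination network: concatenate along the channel axis.
    x = np.concatenate([roi_saline, roi_acetic, roi_iodine], axis=0)  # (3C, H, W)
    x = x.ravel()
    # Feature fusion network: three FC layers, assumed ReLU in between.
    for w in fc_weights[:-1]:
        x = np.maximum(w @ x, 0.0)
    logits = fc_weights[-1] @ x      # 2 classes: biopsy region yes / no
    return softmax(logits)
```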
Once training of the joint feature prediction network layer is complete, its model parameters are saved.
At this point, training of the cervical biopsy region recognition model is complete.
For a new patient, a colposcope is used to acquire images of the cervix under physiological saline, 3%-5% acetic acid solution, and compound iodine solution, respectively. The data processing unit feeds these images into the cervical biopsy region recognition model, which outputs the probability label that the patient's cervix contains a biopsy region and shows it on the display unit. Based on this probability label, together with the patient's saline, acetic-acid and iodine images, the physician judges whether a further biopsy is needed, and hence whether the patient's cervix shows a lesion.
The embodiments described above explain the technical scheme and advantageous effects of the present invention in detail. It should be understood that the above is only a specific embodiment of the present invention and is not intended to restrict the invention; any modification, supplement or equivalent replacement made within the spirit of the present invention shall fall within the protection scope of the present invention.

Claims (9)

1. A cervical biopsy region recognition device based on a channel-information multi-modal network, characterized by comprising:
an image acquisition unit, which acquires a physiological saline image, an acetic-acid image and an iodine image of the cervix and sends them to a data processing unit;
the data processing unit, comprising a trained cervical biopsy region recognition model; the cervical biopsy region recognition model analyzes the physiological saline image, the acetic-acid image and the iodine image and outputs a probability label that the cervix contains a biopsy region;
wherein the cervical biopsy region recognition model comprises:
a detection network layer comprising three independent detection sub-networks, used respectively to extract the feature map and the location information of the cervix region from the physiological saline image, the acetic-acid image and the iodine image;
a joint feature prediction network layer, which concatenates the three feature maps and location information extracted by the detection network layer along the channel dimension, then performs feature fusion and recognition, and outputs the probability label that the cervix contains a biopsy region;
and a display unit, which obtains and displays the probability label.
2. The cervical biopsy region recognition device based on a channel-information multi-modal network according to claim 1, characterized in that each detection sub-network comprises, connected in sequence:
a feature extraction network with channel selection modules, used to extract the feature map of the physiological saline image, the acetic-acid image or the iodine image and to perform channel screening on the feature map;
a cervix-region proposal network, used to obtain the location information of the cervix region in the feature map;
and a cervix detection network, which performs a crop operation on the feature map according to the location information of the cervix region, obtaining the feature map of the cervix region and its location information.
3. The cervical biopsy region recognition device based on a channel-information multi-modal network according to claim 2, characterized in that the feature extraction network comprises a convolutional layer, a max pooling layer and several convolution groups connected in sequence;
each convolution group is composed of several residual units;
a channel selection module is added after the last residual unit of each convolution group.
4. The cervical biopsy region recognition device based on a channel-information multi-modal network according to claim 3, characterized in that the channel selection module performs a channel selection operation on the input feature map to obtain a weight for each channel of the feature map, multiplies the feature map by the weights, adds the product back to the feature map, and outputs the channel-screened feature map; the weights range from 0 to 1.
5. The cervical biopsy region recognition device based on a channel-information multi-modal network according to claim 2, characterized in that the cervix detection network is composed of an ROIPooling layer and two parallel fully connected layers.
6. The cervical biopsy region recognition device based on a channel-information multi-modal network according to claim 5, characterized in that the cervix detection network is followed by the channel selection module.
7. The cervical biopsy region recognition device based on a channel-information multi-modal network according to any one of claims 1 to 6, characterized in that the training method of the cervical biopsy region recognition model comprises:
(1) acquiring physiological saline, acetic-acid and iodine images of the cervix, preprocessing them, annotating the cervix region, labeling whether the cervix contains a biopsy region, and building a training set;
(2) training the cervical biopsy region recognition model with the training set, comprising:
(2-1) training the detection network layer:
feeding the physiological saline, acetic-acid and iodine images of the training set into their respective detection sub-networks, training until the detection sub-networks converge, and saving the model parameters of the detection sub-networks;
(2-2) training the cervical biopsy region recognition model:
loading the detection sub-network model parameters obtained in step (2-1) into the cervical biopsy region recognition model;
feeding the physiological saline, acetic-acid and iodine images of the training set into their respective detection sub-networks and then through the joint feature prediction network layer, outputting the probability label that a biopsy region exists, and training until the cervical biopsy region recognition model converges;
saving the model parameters of the trained cervical biopsy region recognition model.
8. The cervical biopsy region recognition device based on a channel-information multi-modal network according to claim 7, characterized in that in step (1) the preprocessing method is: performing z-score standardization and data augmentation on the physiological saline, acetic-acid and iodine images.
9. A cervical biopsy region recognition method based on a channel-information multi-modal network, characterized by comprising the following steps:
(1) acquiring physiological saline, acetic-acid and iodine images of the cervix with an image acquisition unit, and inputting them into the cervical biopsy region recognition model in a data processing unit;
(2) analyzing the physiological saline, acetic-acid and iodine images with the cervical biopsy region recognition model, outputting the probability label that the cervix contains a biopsy region, and displaying it on a display unit.
CN201810092566.XA 2018-01-30 2018-01-30 Cervical biopsy region identification method and device based on channel information multi-mode network Active CN108319977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810092566.XA CN108319977B (en) 2018-01-30 2018-01-30 Cervical biopsy region identification method and device based on channel information multi-mode network

Publications (2)

Publication Number Publication Date
CN108319977A true CN108319977A (en) 2018-07-24
CN108319977B CN108319977B (en) 2020-11-10

Family

ID=62887779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810092566.XA Active CN108319977B (en) 2018-01-30 2018-01-30 Cervical biopsy region identification method and device based on channel information multi-mode network

Country Status (1)

Country Link
CN (1) CN108319977B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102282569A (en) * 2008-10-10 2011-12-14 国际科学技术医疗***有限责任公司 Methods for tissue classification in cervical imagery
CN103750810A (en) * 2013-12-30 2014-04-30 深圳市理邦精密仪器股份有限公司 Method and device for performing characteristic analysis for images acquired by electronic colposcope
CN104881631A (en) * 2015-04-16 2015-09-02 广西师范大学 Multi-characteristic integrated cervical cell image characteristic extraction and identification method, and cervical cell characteristic identification device
US20160093048A1 (en) * 2014-09-25 2016-03-31 Siemens Healthcare Gmbh Deep similarity learning for multimodal medical images
CN106529446A (en) * 2016-10-27 2017-03-22 桂林电子科技大学 Vehicle type identification method and system based on multi-block deep convolutional neural network
CN106780475A (en) * 2016-12-27 2017-05-31 北京市计算中心 A kind of image processing method and device based on histopathologic slide's image organizational region
CN107066583A (en) * 2017-04-14 2017-08-18 华侨大学 A kind of picture and text cross-module state sensibility classification method merged based on compact bilinearity

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhe Guo et al., "Medical Image Segmentation Based on Multi-Modal", arXiv *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112602097A (en) * 2018-08-31 2021-04-02 奥林巴斯株式会社 Data processing system and data processing method
CN109376594A (en) * 2018-09-11 2019-02-22 百度在线网络技术(北京)有限公司 Visual perception method, apparatus, equipment and medium based on automatic driving vehicle
US11120275B2 (en) 2018-09-11 2021-09-14 Baidu Online Network Technology (Beijing) Co., Ltd. Visual perception method, apparatus, device, and medium based on an autonomous vehicle
CN109363632A (en) * 2018-09-26 2019-02-22 北京三医智慧科技有限公司 The deciphering method of pulse profile data and the solution read apparatus of pulse profile data
CN109544512A (en) * 2018-10-26 2019-03-29 浙江大学 It is a kind of based on multi-modal embryo's pregnancy outcome prediction meanss
CN109543719A (en) * 2018-10-30 2019-03-29 浙江大学 Uterine neck atypia lesion diagnostic model and device based on multi-modal attention model
CN109859159A (en) * 2018-11-28 2019-06-07 浙江大学 A kind of cervical lesions region segmentation method and device based on multi-modal segmentation network
CN109859159B (en) * 2018-11-28 2020-10-13 浙江大学 Cervical lesion region segmentation method and device based on multi-mode segmentation network
CN111369567A (en) * 2018-12-26 2020-07-03 腾讯科技(深圳)有限公司 Method and device for segmenting target object in three-dimensional image and electronic equipment

Also Published As

Publication number Publication date
CN108319977B (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN108319977A (en) Cervical biopsy area recognizing method based on the multi-modal network of channel information and device
CN108388841A (en) Cervical biopsy area recognizing method and device based on multiple features deep neural network
CN106056595B (en) Based on the pernicious assistant diagnosis system of depth convolutional neural networks automatic identification Benign Thyroid Nodules
CN112070772B (en) Blood leukocyte image segmentation method based on UNet++ and ResNet
CN109191457A (en) A kind of pathological image quality validation recognition methods
CN109543719A (en) Uterine neck atypia lesion diagnostic model and device based on multi-modal attention model
US11200668B2 (en) Methods and devices for grading a tumor
WO2023284341A1 (en) Deep learning-based context-sensitive detection method for urine formed element
CN109636805A (en) A kind of uterine neck image lesion region segmenting device and method based on classification priori
CN107564580A (en) Gastroscope visual aids processing system and method based on integrated study
CN117975453A (en) Image analysis method, device, program and method for manufacturing deep learning algorithm after learning
CN110097559A (en) Eye fundus image focal area mask method based on deep learning
CN110032985A (en) A kind of automatic detection recognition method of haemocyte
CN107330263A (en) A kind of method of area of computer aided breast invasive ductal carcinoma histological grading
CN110084237A (en) Detection model construction method, detection method and the device of Lung neoplasm
CN110097545A (en) Eye fundus image generation method based on deep learning
CN111951221A (en) Glomerular cell image identification method based on deep neural network
CN108038519A (en) A kind of uterine neck image processing method and device based on dense feature pyramid network
CN102282569A (en) Methods for tissue classification in cervical imagery
CN110265119A (en) Bone age assessment and prediction of height model, its system and its prediction technique
CN110189293A (en) Cell image processing method, device, storage medium and computer equipment
CN111986148B (en) Quick Gleason scoring system for digital pathology image of prostate
CN109671062A (en) Ultrasound image detection method, device, electronic equipment and readable storage medium storing program for executing
CN108596174A (en) A kind of lesion localization method of skin disease image
CN113902669A (en) Method and system for reading urine exfoliative cell fluid-based smear

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant