CN109815965A - Image filtering method, device and storage medium - Google Patents
Image filtering method, device and storage medium
- Publication number
- CN109815965A CN109815965A CN201910112755.3A CN201910112755A CN109815965A CN 109815965 A CN109815965 A CN 109815965A CN 201910112755 A CN201910112755 A CN 201910112755A CN 109815965 A CN109815965 A CN 109815965A
- Authority
- CN
- China
- Prior art keywords
- convolution
- feature
- image
- network
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present application disclose an image filtering method, device, and storage medium. An embodiment obtains tissue images of multiple modalities of a target tissue and determines a quality-control network model to be used, the model comprising a sequentially connected cross-channel convolution sub-network, convolution sub-network, and classification sub-network. The tissue-image features of the multiple modalities are fused based on the cross-channel convolution sub-network to obtain a first fused feature; dense feature extraction is performed on the first fused feature based on the convolution sub-network to obtain a second fused feature; the second fused feature is classified based on the classification sub-network to obtain a quality-recognition result for the tissue images; and the tissue images of the multiple modalities are filtered according to the quality-recognition result to obtain filtered tissue images. This scheme can improve the efficiency and accuracy of image filtering.
Description
Technical field
This application relates to the field of image recognition, and in particular to an image filtering method, device, and storage medium.
Background technique
Medical imaging technology obtains images of the internal tissues of a human body, or of part of a human body, in a non-invasive way. Medical images are used for medical treatment or medical research. For example, medical images may include optical coherence tomography (OCT) images. OCT images offer higher clarity than other examination methods when observing fundus structures, and are therefore effective for diagnosing macular holes, central serous chorioretinopathy, cystoid macular edema, and the like.
The current common way of filtering medical images is manual selection, in which images that do not meet requirements are screened out by hand; however, the efficiency and accuracy of this method are limited.
Because current medical-image filtering depends on manual filtering, for example requiring images whose quality does not meet requirements to be screened out by hand, the efficiency and accuracy of image filtering are low.
Summary of the invention
In view of this, embodiments of the present application provide an image filtering method, device, and storage medium that can improve the efficiency and accuracy of image filtering.
In a first aspect, an embodiment of the present application provides an image filtering method, comprising:
obtaining tissue images of multiple modalities of a target tissue;
determining a quality-control network model to be used, the quality-control network model comprising a sequentially connected cross-channel convolution sub-network, convolution sub-network, and classification sub-network;
fusing the tissue-image features of the multiple modalities based on the cross-channel convolution sub-network to obtain a first fused feature;
performing dense feature extraction on the first fused feature based on the convolution sub-network to obtain a second fused feature;
classifying the second fused feature based on the classification sub-network to obtain a quality-recognition result for the tissue images;
filtering the tissue images of the multiple modalities according to the quality-recognition result to obtain filtered tissue images.
In a second aspect, an embodiment of the present application provides an image filtering device, comprising:
an obtaining module, configured to obtain tissue images of multiple modalities of a target tissue;
a determining module, configured to determine a quality-control network model to be used, the quality-control network model comprising a sequentially connected cross-channel convolution sub-network, convolution sub-network, and classification sub-network;
a cross-channel convolution module, configured to fuse the tissue-image features of the multiple modalities based on the cross-channel convolution sub-network to obtain a first fused feature;
a convolution module, configured to perform dense feature extraction on the first fused feature based on the convolution sub-network to obtain a second fused feature;
a classification module, configured to classify the second fused feature based on the classification sub-network to obtain a quality-recognition result for the tissue images;
a filtering module, configured to filter the tissue images of the multiple modalities according to the quality-recognition result to obtain filtered tissue images.
In a third aspect, an embodiment of the present application provides a storage medium having a computer program stored thereon; when the computer program runs on a computer, it causes the computer to execute the image filtering method provided by any embodiment of the present application.
An embodiment of the present application obtains tissue images of multiple modalities of a target tissue and determines a quality-control network model to be used, the model comprising a sequentially connected cross-channel convolution sub-network, convolution sub-network, and classification sub-network. The tissue-image features of the multiple modalities are fused based on the cross-channel convolution sub-network to obtain a first fused feature; dense feature extraction is performed on the first fused feature based on the convolution sub-network to obtain a second fused feature; the second fused feature is classified based on the classification sub-network to obtain a quality-recognition result for the tissue images; and the tissue images of the multiple modalities are filtered according to the quality-recognition result to obtain filtered tissue images. In this way, the efficiency and accuracy of image filtering are improved.
Detailed description of the invention
In order to describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is the application scenarios schematic diagram of image filtering method provided by the embodiments of the present application.
Fig. 2 is a first flow schematic diagram of the image filtering method provided by an embodiment of the present application.
Fig. 3 is a second flow schematic diagram of the image filtering method provided by an embodiment of the present application.
Fig. 4 is a third flow schematic diagram of the image filtering method provided by an embodiment of the present application.
Fig. 5 is a schematic diagram of a fundus image and an OCT image provided by an embodiment of the present application.
Fig. 6 is Inception A structural schematic diagram provided by the embodiments of the present application.
Fig. 7 is Inception B structure schematic diagram provided by the embodiments of the present application.
Fig. 8 is a schematic diagram of the Inception C structure provided by an embodiment of the present application.
Fig. 9 is Inception V4 schematic network structure provided by the embodiments of the present application.
Figure 10 is stem structural schematic diagram provided by the embodiments of the present application.
Figure 11 is reduction structural schematic diagram provided by the embodiments of the present application.
Figure 12 is Dense Net schematic network structure provided by the embodiments of the present application.
Figure 13 is a schematic diagram of the hard example sampler provided by an embodiment of the present application.
Figure 14 is Quality Control Network model structure schematic diagram provided by the embodiments of the present application.
Figure 15 is across channel convolution schematic diagram provided by the embodiments of the present application.
Figure 16 is intensive connection schematic diagram provided by the embodiments of the present application.
Figure 17 is a schematic diagram of the Inception A structure with added shortcut connections provided by an embodiment of the present application.
Figure 18 is a schematic diagram of the Inception B structure with added shortcut connections provided by an embodiment of the present application.
Figure 19 is a schematic diagram of the Inception C structure with added shortcut connections provided by an embodiment of the present application.
Figure 20 is a schematic diagram of the image scan-type recognition network structure provided by an embodiment of the present application.
Figure 21 is experimental result schematic diagram provided by the embodiments of the present application.
Figure 22 is the first structure diagram of image filtering device provided by the embodiments of the present application.
Figure 23 is the second structural schematic diagram of image filtering device provided by the embodiments of the present application.
Figure 24 is network equipment schematic diagram provided by the embodiments of the present application.
Specific embodiment
Please refer to the drawings, in which identical reference numerals represent identical components. The principles of the present application are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of the application and should not be regarded as limiting other specific embodiments not detailed herein.
In the following description, specific embodiments of the application are described with reference to steps and symbols executed by one or more computers, unless otherwise stated. These steps and operations are therefore referred to as computer-executed; computer execution as used herein includes operations by a computer processing unit on electronic signals representing data in a structured form. The operations transform the data, or maintain them at locations in the computer's memory system, which reconfigures or otherwise changes the operation of the computer in a manner well known to those skilled in the art. The data structures in which the data are maintained are physical locations of the memory with particular properties defined by the data format. However, although the principles of the application are described in these terms, this is not meant as a limitation; those skilled in the art will appreciate that the various steps and operations described below may also be implemented in hardware.
The term "module" as used herein may be regarded as a software object executed on a computing system. The different components, modules, engines, and services described herein may be regarded as implementation objects on the computing system. The device and methods described herein may be implemented in software, or alternatively in hardware, both of which fall within the protection scope of the application.
Term " first ", " second " and " third " in the application etc. are for distinguishing different objects, rather than for retouching
State particular order.In addition, term " includes " and " having " and their any deformations, it is intended that cover and non-exclusive include.
Such as contain series of steps or module process, method, system, product or equipment be not limited to listed step or
Module, but some embodiments further include the steps that not listing or module or some embodiments further include for these processes,
Method, product or equipment intrinsic other steps or module.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. Appearances of the phrase at various places in the specification do not necessarily all refer to the same embodiment, nor to separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
An embodiment of the present application provides an image filtering method. The execution subject of the image filtering method may be the image filtering device provided by the embodiments of the present application, or a network device integrated with that device, where the device may be implemented in hardware or software. The network device may be a smartphone, tablet computer, palmtop computer, notebook computer, desktop computer, or similar equipment.
Referring to Fig. 1, Fig. 1 is a schematic diagram of an application scenario of the image filtering method provided by an embodiment of the present application. Taking the image filtering device integrated in a network device as an example: the network device can obtain tissue images of multiple modalities of a target tissue; determine a quality-control network model to be used, the model comprising a sequentially connected cross-channel convolution sub-network, convolution sub-network, and classification sub-network; fuse the tissue-image features of the multiple modalities based on the cross-channel convolution sub-network to obtain a first fused feature; perform dense feature extraction on the first fused feature based on the convolution sub-network to obtain a second fused feature; classify the second fused feature based on the classification sub-network to obtain a quality-recognition result for the tissue images; and filter the tissue images of the multiple modalities according to the quality-recognition result to obtain filtered tissue images.
Referring to Fig. 2, Fig. 2 is a flow schematic diagram of the image filtering method provided by an embodiment of the present application. The detailed procedure of the image filtering method may be as follows:
201. Obtain tissue images of multiple modalities of a target tissue.
The target tissue may be a tissue of a living body: for example, a certain tissue of the living body, a part of such a tissue, or a combination of several tissues. For example, the target tissue may be an eye, a blood vessel, subcutaneous tissue, and so on. A living body is an independent individual that has a form of life and can respond to environmental stimuli, for example a human or an animal.
The tissue images of multiple modalities of the target tissue may be images of the target tissue acquired by different imaging technologies. For example, the tissue images of multiple modalities may include an OCT image formed using near-infrared light and the principle of optical interference, and a medical image formed through optical lenses, and so on.
In one embodiment, for example, when the target tissue is an eye, the tissue images of multiple modalities of the target tissue may be an OCT image formed using near-infrared light and the principle of optical interference, and a fundus image obtained by a fundus camera, and so on.
In practical applications, there are many ways to obtain the tissue images of multiple modalities of the target tissue. For example, when the tissue images are an OCT image and a fundus image, the OCT image can be acquired by an optical coherence tomography scanner and the fundus image by a fundus camera; alternatively, the OCT image and fundus image can be obtained from local storage, or downloaded over a network, and so on.
As shown in Fig. 5, the fundus image is obtained by a fundus camera or similar device; through the fundus image, diseases of the vitreous body, retina, choroid, and optic nerve can be examined. Many systemic diseases, such as hypertension, kidney disease, diabetes, eclampsia, sarcoidosis, certain blood diseases, and central nervous system diseases, can cause fundus lesions, which may even be the main reason a patient seeks medical care, so examining the fundus can provide important diagnostic data.
As shown in Fig. 5, the OCT image is produced by optical coherence tomography, an imaging technique that has developed rapidly over the last decade. Using the basic principle of a weak-coherence-light interferometer, it detects the back-reflected or back-scattered signals of incident weak coherent light from different depth levels of biological tissue; through scanning, two-dimensional or three-dimensional structural images of biological tissue can be obtained.
202. Determine a quality-control network model to be used.
(1) Determination of the quality-control network model.
The quality-control network model is a network model that can recognize the quality of tissue images of the target tissue. For example, as shown in Fig. 14, the quality-control network model may include a sequentially connected cross-channel convolution sub-network, convolution sub-network, and classification sub-network. The convolution sub-network includes at least one convolution module and a parallel dimension-reduction structure; the classification sub-network may include an average pooling layer, a classifier, and so on. The quality-control network model may be a network model improved from the Inception V4 network model, and so on.
The existing Inception V4 and DenseNet network models each improve network performance along a different dimension: Inception V4 attends to network width but not network depth, while DenseNet attends to network depth but does not take network width into account. Therefore, the idea of dense connections in the DenseNet network model can be added to the Inception V4 network model, yielding a network model that takes both network width and network depth into account.
Moreover, both the Inception V4 and DenseNet network models consider only a single modality (image) as input. Since tissue images may include multiple modalities, for example an OCT image and a fundus image, a network model of a single structure has no way to handle the tissue images of multiple modalities effectively at the same time, and the lack of information from certain modalities is likely to make the network model classify part of the data inaccurately. Therefore, a cross-channel convolution sub-network can be added to the quality-control network model to fuse the information of multiple modalities, allowing the network device to process multi-modal input simultaneously.
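The cross-channel fusion idea above can be sketched in miniature: stack the per-modality feature maps along the channel axis and mix them with a 1 × 1 convolution, which at each pixel is simply a weighted sum across channels. This is a minimal illustration, not the patent's actual sub-network (Fig. 15); the toy maps, fixed weights, and `fuse_1x1` helper are invented for the example.

```python
# Sketch of cross-channel fusion: a 1 x 1 convolution over stacked modality
# channels is a per-pixel weighted sum across those channels.
def fuse_1x1(channels, weights):
    """channels: list of 2-D feature maps (lists of rows), one per modality;
    weights: one scalar per channel (a 1 x 1 kernel). Returns one fused map."""
    h, w = len(channels[0]), len(channels[0][0])
    return [[sum(wt * ch[y][x] for wt, ch in zip(weights, channels))
             for x in range(w)] for y in range(h)]

oct_feat = [[1, 2], [3, 4]]         # toy OCT-modality feature map
fundus_feat = [[10, 20], [30, 40]]  # toy fundus-modality feature map
fused = fuse_1x1([oct_feat, fundus_feat], [0.5, 0.25])
print(fused)  # [[3.0, 6.0], [9.0, 12.0]]
```

In a real network the 1 × 1 kernel weights are learned; the point is that every output pixel blends information from both modalities.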
As shown in Fig. 9, the Inception V4 network model is a network model that decomposes an n × n convolution kernel into two convolutions, 1 × n and n × 1. For example, a 3 × 3 convolution is equivalent to first performing a 1 × 3 convolution and then a 3 × 1 convolution, which costs less than a single 3 × 3 convolution. Fig. 10 is a schematic diagram of the stem structure in the Inception V4 network model. The parallel dimension-reduction structure (Reduction) in the Inception V4 network model, shown in Fig. 11, can be used in place of a single pooling layer, and can also add sample information about targets of different sizes to the network.
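The cost saving claimed for the 1 × n / n × 1 factorization can be checked by counting weights. The channel count below is illustrative, not taken from the patent.

```python
# Weight counts for a k x k convolution vs. its 1 x k followed by k x 1
# factorization (biases ignored).
def conv_weights(kh, kw, c_in, c_out):
    """Number of weights in a single kh x kw convolution layer."""
    return kh * kw * c_in * c_out

def factorized_weights(k, c_in, c_out):
    """1 x k conv (c_in -> c_out) followed by k x 1 conv (c_out -> c_out)."""
    return conv_weights(1, k, c_in, c_out) + conv_weights(k, 1, c_out, c_out)

c = 64  # illustrative channel count
full = conv_weights(3, 3, c, c)      # 3*3*64*64 = 36864
split = factorized_weights(3, c, c)  # (1*3 + 3*1)*64*64 = 24576
print(full, split)  # the factorized form uses 2/3 of the weights here
```

The saving grows with k: for a 7 × 7 kernel the factorized pair needs 14/49 of the weights of the full convolution at equal channel counts.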
The Inception V4 network model improves the network by widening it. The basic unit (convolution submodule) of the Inception V4 network model contains four parallel convolution branches. As shown in Figs. 6, 7, and 8, the basic units of Inception V4 include the Inception A unit, Inception B unit, Inception C unit, and so on. The detail branches of the Inception A unit (the second and third convolution branches from the left) contain fewer convolutional layers and smaller convolution kernels, and attend to the detailed parts of the image; the global branches (the first and fourth convolution branches from the left) contain pooling layers and more convolutional layers, and attend to the global parts of the image. Finally, the outputs of the four branches are concatenated and passed to the next layer, increasing the diversity of the features. Although the Inception B and Inception C units use connection patterns different from the Inception A unit, they too essentially optimize the network model by widening the network.
The DenseNet network model is a convolutional neural network model with dense connections. As shown in Fig. 12, DenseNet improves network performance by increasing network depth. In DenseNet there is a direct connection between any two layers; that is, the input of each layer includes the outputs of all preceding layers, which may, for example, be the union (concatenation) of all preceding outputs, or their element-wise sum, and so on. A layer's learned feature maps are likewise passed directly to all subsequent layers as input. Dense connections alleviate the vanishing-gradient problem, strengthen feature propagation, and encourage feature reuse, greatly reducing the number of parameters.
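The dense-connection rule described above (each layer's input is the concatenation of all earlier outputs) can be sketched with toy layers operating on lists. Real DenseNet layers are convolutions; the layer functions and tags here are invented for illustration.

```python
# Minimal sketch of DenseNet-style dense connectivity: each layer receives
# the concatenation of the block input and every earlier layer's output.
def dense_block(x, layers):
    """x: initial feature list; layers: callables mapping a feature list
    to that layer's own output (also a feature list)."""
    features = [x]
    for layer in layers:
        concatenated = [f for fs in features for f in fs]  # concat all prior outputs
        features.append(layer(concatenated))
    return [f for fs in features for f in fs]

# Toy layers: each just records how many features it saw, standing in
# for learned feature maps.
layers = [lambda fs, i=i: [f"layer{i}:{len(fs)}"] for i in range(3)]
out = dense_block(["in0", "in1"], layers)
print(out)  # ['in0', 'in1', 'layer0:2', 'layer1:3', 'layer2:4']
```

Layer i sees the block input plus all i previous layer outputs, which is exactly the "input of each layer includes the outputs of all preceding layers" property in the text.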
(2) Training of the quality-control network model.
The image filtering method may also include training the quality-control network model.
In one embodiment, specifically, the image filtering method may further include:
obtaining a training set, the training set including multiple training samples;
sampling the training set according to the sampling probabilities of the sample images to obtain a target training sample;
training the quality-control network model based on the target training sample to obtain a trained quality-control network model;
obtaining, based on the trained quality-control network model, the loss function values corresponding to the sample images in the target training sample;
updating the sampling probabilities of the sample images in the target training sample according to the loss function values, and returning to the step of sampling the training set according to the sampling probabilities of the sample images, until a sampling termination condition is met.
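The loop listed above can be sketched as plain control flow. This is a hedged outline, not the patent's implementation: `train_step` and `sample_loss` are caller-supplied stand-ins for real model training and per-sample loss lookup, and the boosting rule is illustrative.

```python
import random

def train_with_boosting(train_set, n_per_batch, n_rounds, train_step, sample_loss):
    """Sketch of the listed steps: sample by probability, train, read back
    per-sample losses, raise the probabilities of high-loss (hard) samples,
    repeat until the round budget (the termination condition here) is met."""
    probs = {s: 1.0 / len(train_set) for s in train_set}   # uniform init
    for _ in range(n_rounds):
        batch = random.choices(list(train_set),
                               weights=[probs[s] for s in train_set],
                               k=n_per_batch)              # sample the training set
        train_step(batch)                                  # train on the sample
        losses = {s: sample_loss(s) for s in batch}        # per-sample losses
        total = sum(losses.values()) or 1.0
        for s, l in losses.items():                        # hard samples get
            probs[s] += l / total                          # sampled more often
        norm = sum(probs.values())
        probs = {s: p / norm for s, p in probs.items()}    # renormalize
    return probs
```

With `n_rounds = 0` the probabilities stay uniform; with training rounds, high-loss samples accumulate probability mass, which is the hard-example-boosting behaviour described later in the text.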
(a) Obtain a training set.
The training set includes multiple training samples, and both the training set and each training sample contain a number of sample images. For example, the training set includes B training samples; a training sample is a subset of the training set; the training set contains M sample images and a training sample contains N sample images.
The training set includes multiple sample images; a sample image is a tissue sample image of the multiple modalities of the target tissue, with annotated quality information. For example, images of unacceptable quality may include images that are blurred, cropped, or misaligned due to human factors, or images affected by non-human factors such as refractive-media opacity or high myopia. The sample images may be multiple fundus images and OCT images with annotated quality information; the quantity ratio of fundus images to OCT images may be 1:1, with fundus images and OCT images in one-to-one correspondence, and so on.
In one embodiment, the quantity ratio of fundus images to OCT images may also be another ratio, but it must be guaranteed that every OCT image corresponds to one fundus image.
In practical applications, the training set can be obtained in several ways; for example, sample images can be collected by medical instruments to compose the training set, or the training set can be obtained through a network, a database, local storage, and so on. Specifically, 4476 sample images can be obtained as the training set, accounting for 70% of the total images, and 1919 tissue images of target tissues can be obtained as the test set, accounting for 30% of the total images; the sample images in the test set do not include annotated quality information, and the total images comprise the test set and the training set.
The sample images in the training set can also be preprocessed to eliminate irrelevant information in the images, restore useful real information, enhance the detectability of relevant information, and simplify the data to the greatest extent, thereby improving training accuracy.
In one embodiment, specifically, the image filtering method may further include:
adjusting the size of the sample images to obtain size-adjusted sample images;
adjusting the pixels of the size-adjusted sample images to obtain pixel-adjusted sample images;
performing data augmentation on the pixel-adjusted sample images to obtain augmented sample images;
using the augmented sample images as the training sample images.
In one embodiment, for example, the size of a fundus image in the sample images may be 496 × 496, and the size of an OCT image may be 496 × 496, 496 × 768, or 496 × 1024, and so on. The input images of the quality-control network model may be a 496 × 496 fundus image and a 496 × 768 OCT image. Since the OCT images in the sample images have more than one size, the sizes of the OCT images need to be adjusted. For example, when the image width is less than 768, a zero-padding (black border) operation is applied to both sides of the image; when the image width is greater than 768, a symmetric cropping operation is applied to both sides, so that the width of every OCT image is 768, unifying the image sizes.
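The width unification just described can be sketched directly: pad both sides with zeros (a black border) when the image is narrower than 768 pixels, crop symmetrically when it is wider. Images here are toy lists of pixel rows, standing in for real image arrays.

```python
# Unify OCT image width to a target (768 in the text): zero-pad both sides
# if narrower, crop symmetrically if wider, leave unchanged if equal.
def unify_width(image, target=768):
    """image: list of rows, each row a list of pixel values."""
    width = len(image[0])
    if width < target:
        pad_left = (target - width) // 2
        pad_right = target - width - pad_left
        return [[0] * pad_left + row + [0] * pad_right for row in image]
    if width > target:
        crop = (width - target) // 2          # symmetric crop
        return [row[crop:crop + target] for row in image]
    return image
```

A 496-wide image gains 136 zero pixels on each side; a 1024-wide image loses 128 pixels from each side, matching the two OCT sizes cited above.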
The sample images with unified sizes can then be standardized; for example, image standardization can be performed by subtracting the image mean and dividing by the image variance. After that, random rotations of -30° to +30°, random horizontal flips, random elastic deformations, or random speckle (spot) noise additions can be applied to the images, increasing the amount of sample-image data and improving the generalization ability of the model; the added noise data can also improve the robustness of the network model.
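The standardization step can be sketched as follows. The text says to divide by the image variance, so that is what the sketch does; note that dividing by the standard deviation is the more common convention, and the flat-image guard is an added assumption.

```python
# Per-image standardization as described: subtract the image mean, divide by
# the image variance. Pixels are a flat list for simplicity.
def standardize(pixels):
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    var = var or 1.0                     # guard against a flat (zero-variance) image
    return [(p - mean) / var for p in pixels]

print(standardize([0, 2]))  # [-1.0, 1.0]
```

After standardization the pixel values are centered on zero, which typically stabilizes network training.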
(b) Sample the training set according to the sampling probabilities of the sample images to obtain a target training sample.
The sampling probability of a sample image is the probability that the image is drawn when sampling from the training set; for example, when the training set contains M sample images, the sampling probability of each sample image is 1/M.
In practical applications, a target training sample containing a fixed number of sample images can be obtained by sampling from the training set according to the sampling probabilities of the sample images. For example, when the training set contains M sample images, the sampling probability of each is 1/M; N sample images can be drawn to obtain a target training sample containing N sample images.
In one embodiment, the step of sampling the training set according to the sampling probabilities of the sample images to obtain the target training sample can also be implemented by a sampler.
A sampler is a tool used in deep-learning training; it can sample from the training set and feed the resulting target training sample into the network model for training.
A conventional deep-learning sampler first initializes all the sample images in the training set so that each has the same probability of being sampled; for example, if the training set contains M sample images, each can be sampled with probability 1/M to form target training samples for network model training. This is sampling without replacement. After all the sample images in the training set have been sampled, one round of network model training is completed. However, this sampling method cannot repeatedly train on hard samples, and it causes over-learning of easy samples, limiting the improvement of network model performance.
In one embodiment, since a common sampler cannot adjust the sampling probabilities of hard samples, a Hard Example Boosting Sampler can be used to sample the training set. For example, as shown in Fig. 13, the hard example boosting sampler can initialize the sample images in the training set so that each has the same probability of being sampled, and then sample the training set to obtain the target training sample.
(c) Train the quality-control network model based on the target training sample to obtain a trained quality-control network model.
In practical applications, after the target training sample has been sampled from the training set, the quality-control network model can be trained based on the target training sample to obtain the trained quality-control network model.
(d) Obtain, based on the trained quality-control network model, the loss function values corresponding to the sample images in the target training sample.
The loss function is a machine-learning concept: every machine-learning algorithm maximizes or minimizes a function, called the objective function, and the functions being minimized are generally called loss functions. A loss function measures the quality of a model's predictive ability according to its prediction results.
In practical applications, after the target training sample has been used to train the quality-control network model, the loss value corresponding to each sample image in the target training sample can be obtained. If the loss value corresponding to a sample image is large, the image is considered a hard sample.
(e) it is updated according to sampled probability of the loss function to sample image in target training sample, returns and execute root
The step of training set is sampled according to the sampled probability of sample image, until meeting sampling termination condition.
Here, the sampling termination condition is the condition under which sampling can end and the trained quality control network model is obtained. For example, when sampling without replacement, the condition may be that all images in the training set have been used for training; when sampling with replacement, the condition may be that the number of sample images used for training has reached the number of sample images in the training set.
In practical applications, the sampling probability of each sample image in the target training sample can be updated according to its obtained loss function value, and the sampling step is then repeated until the sampling termination condition is met.
To sample hard examples repeatedly, the hard examples in the target training sample can be identified from the loss function values of the sample images and then resampled, which improves the accuracy of the trained network model.
Specifically, the step of "updating the sampling probability of each sample image in the target training sample according to the loss function" may include:
sorting the sample images in the target training sample by a preset rule according to their loss function values, to obtain an ordering of the sample images in the target training sample;
updating the sampling probabilities of the sample images in the target training sample according to that ordering.
In practical applications, the sample images in the target training sample can be sorted by a preset rule according to their loss function values, yielding an ordering of the sample images, and their sampling probabilities can then be updated. For example, the sample images can be sorted by loss function value in ascending order; images with large loss values are regarded as hard examples, and the sampling probability of the sample image with the largest loss value can then be doubled. Since sampling is done with replacement, a hard example thereby acquires a larger sampling probability, and hence a larger probability of being sampled and trained on repeatedly.
In one embodiment, for example, suppose the training set includes B training samples and contains M sample images. A training sample can be drawn from the training set with the hard-example sampler as the target training sample, each sample image having sampling probability 1/M, and the target training sample containing N sample images. After training on this sample, the loss function values corresponding to the N sample images in the target training sample are obtained; the sample images are sorted by the size of their loss function values, and the sampling probability of the sample image with the largest loss value is raised from 1/M to 2/M, increasing the sampling probability of that hard example.
In one embodiment, for example, at the next sampling round the target training sample may again contain the sample image whose sampling probability was adjusted to 2/M; if its loss function value is still the largest, its sampling probability can be adjusted again, from 2/M to 4/M, further increasing the sampling probability of that hard example.
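The probability-doubling scheme described above can be sketched in pure Python. This is a minimal illustration under stated assumptions, not the patented implementation: the use of `random.choices` for weighted sampling with replacement, and doubling only the single hardest example per round, follow the worked example in the text.

```python
import random

class HardExampleBoostingSampler:
    """Samples with replacement; the weight of the hardest example doubles each round."""

    def __init__(self, num_images, batch_size):
        # Initialize so that every sample image is drawn with the same probability (1/M).
        self.weights = [1.0] * num_images
        self.batch_size = batch_size

    def sample(self):
        # Draw N indices with replacement, proportionally to the current weights.
        return random.choices(range(len(self.weights)),
                              weights=self.weights, k=self.batch_size)

    def update(self, indices, losses):
        # Double the sampling weight of the image with the largest loss value,
        # so that this hard example is more likely to be drawn again.
        hardest = indices[losses.index(max(losses))]
        self.weights[hardest] *= 2.0

sampler = HardExampleBoostingSampler(num_images=5, batch_size=3)
batch = sampler.sample()
sampler.update(batch, losses=[0.1, 0.9, 0.3])
```

Because the weights are relative, doubling a weight from 1 to 2 has the same effect as raising the probability from 1/M to 2/M, and a second doubling corresponds to 4/M, as in the example above.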
In one embodiment, for example, the parameters of the quality control network model can be initialized from an Inception V4 network model pre-trained on the ImageNet dataset, while the newly added convolutional layers can be initialized from a Gaussian distribution with mean 0 and variance 0.01.
Here, ImageNet is a dataset used in the deep-learning image domain, supporting research work such as image classification, localization, and detection.
In one embodiment, for example, the convolutional layer parameters w and bias parameters b of the quality control network model can be solved with Adam-based gradient descent, with the learning rate decayed by 90% every 20K iterations.
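The decay schedule mentioned above can be written as a small step function; the resulting rate would be handed to an Adam optimizer that updates w and b. The base learning rate below is an assumption for illustration, as the text does not state one.

```python
def learning_rate(step, base_lr=1e-3, decay=0.9, every=20_000):
    """Step decay: multiply the learning rate by 90% every 20K iterations."""
    return base_lr * decay ** (step // every)

# e.g. pass learning_rate(step) to the Adam update of the convolutional
# layer parameters w and bias parameters b at each iteration.
```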
203. Fuse the tissue-image features of the multiple modalities based on the cross-channel convolution sub-network to obtain a first fused feature.
Here, the cross-channel convolution sub-network can fuse the features of multiple images to extract a fused feature, thereby improving the accuracy of the network model.
In practical applications, for example, an OCT image and a fundus image can be fed into the quality control network model for quality recognition, yielding quality information about the OCT image and the fundus image. Through the quality control network model, clinically unusable data can be rejected, for example images that are blurred, cropped, or misaligned due to human error during shooting, and images affected by non-human factors such as refractive media opacity or high myopia.
Here, as shown in Figure 14, to improve the accuracy of the quality recognition results that the quality control network model produces for tissue images, quality recognition can be performed with a network model comprising a cross-channel convolution sub-network, a convolution sub-network, and a classification sub-network.
In practical applications, for example, the quality control network model may include a cross-channel convolution sub-network, a convolution sub-network, and a classification sub-network connected in sequence. The fundus image and the OCT image can be fed into the quality control network model, and the cross-channel convolution sub-network fuses the features of the fundus image with the features of the OCT image to obtain the first fused feature.
Here, as shown in Figure 15, to process the fundus image and the OCT image effectively at the same time, the cross-channel convolution sub-network can be used to fuse the features of the fundus image and the OCT image, which increases feature diversity and thereby improves the accuracy of the network model.
In one embodiment, specifically, the step of "fusing the tissue-image features of the multiple modalities based on the cross-channel convolution sub-network to obtain the first fused feature" may include:
extracting the tissue-image features of the multiple modalities based on the feature extraction unit;
reshaping the tissue-image features of the multiple modalities based on the modality reshaping unit to obtain reshaped features;
fusing the reshaped features based on the feature fusion unit to obtain the first fused feature.
Here, the cross-channel convolution sub-network includes a feature extraction unit, a modality reshaping unit, and a feature fusion unit. The feature extraction unit is a unit that can extract image features and may, for example, include a stem network structure. The modality reshaping unit is a unit that can reshape image features. The feature fusion unit is a unit that can fuse image features; for example, the feature fusion unit can fuse image features through a three-dimensional convolution.
In practical applications, for example, the factors that degrade image quality cannot all be observed on the OCT image alone; some can only be partially observed on the fundus image. Both the OCT image and the fundus image therefore need to be checked, so the cross-channel convolution sub-network is introduced to extract features from both the OCT image and the fundus image, increasing feature richness.
In practical applications, for example, as shown in Figure 15, based on the cross-channel convolution sub-network, the feature extraction unit can first extract the features of the OCT image and the features of the fundus image, yielding a 2 × W × H × C feature map; the modality reshaping unit then reshapes this feature map into C × 2 × H × W; finally, the feature fusion unit fuses the bimodal information together using a 2 × 1 × 1 three-dimensional convolution to obtain the first fused feature, completing the cross-channel convolution.
Here, three-dimensional convolution is a convolution mode in which a three-dimensional filter convolves and merges the features of multiple channels, yielding a fused feature.
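Because the 2 × 1 × 1 kernel spans only the modality axis of the reshaped C × 2 × H × W feature map, the fusion at each spatial position reduces to a learned weighted sum of the two modality values. The following framework-free sketch shows that arithmetic for a single channel plane; the weight values are assumptions for illustration.

```python
def cross_channel_fuse(feat_a, feat_b, w=(0.5, 0.5), bias=0.0):
    """Fuse two modality feature planes (H x W nested lists) as a 2x1x1
    convolution would: the same two weights combine the modalities at
    every spatial position."""
    return [[w[0] * a + w[1] * b + bias for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(feat_a, feat_b)]

# Hypothetical feature planes for one channel of the C x 2 x H x W map.
oct_feat    = [[1.0, 2.0], [3.0, 4.0]]   # modality 0 (OCT)
fundus_feat = [[5.0, 6.0], [7.0, 8.0]]   # modality 1 (fundus)
fused = cross_channel_fuse(oct_feat, fundus_feat)
```

In a real network the pair of weights (and bias) per output channel would be learned; here they are fixed only to make the collapse of the modality dimension concrete.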
204. Densely extract features from the first fused feature based on the convolution sub-network to obtain a second fused feature.
The convolution sub-network can densely extract features from image features; it may include at least one convolution module, and a convolution module may include multiple densely connected convolution sub-modules.
In one embodiment, the convolution sub-network may include at least one convolution module, or just a single convolution module; for example, the convolution sub-network may include only the first convolution module, and so on.
Here, dense feature extraction means extracting the input features multiple times in a densified manner. For example, as shown in Figure 12, feature extraction can follow the idea of DenseNet: by connecting the current convolution sub-module to all preceding convolution sub-modules in the convolution module, the input features pass through multiple rounds of feature extraction, diversifying the extracted features and improving the accuracy of the network model.
In practical applications, for example, the first fused feature can be fed into the convolution sub-network for dense feature extraction, yielding the second fused feature.
Here, to improve the accuracy of feature extraction, at least one convolution module in the convolution sub-network can be used to extract the features.
In one embodiment, specifically, the step of "densely extracting features from the first fused feature based on the convolution sub-network to obtain the second fused feature" may include:
determining the first fused feature as the current input feature of the current convolution module;
densely extracting features from the current input feature based on the convolution sub-modules in the current convolution module;
taking the extracted features as the current input feature, and selecting a remaining convolution module in the convolution sub-network as the current convolution module;
returning to the step of densely extracting features from the current input feature based on the convolution sub-modules in the current convolution module, until a first feature-extraction termination condition is met, to obtain the second fused feature.
Here, the first feature-extraction termination condition is the condition for deciding whether feature extraction in the convolution sub-network has finished; for example, the condition may be that all convolution modules in the convolution sub-network have performed feature extraction.
In one embodiment, the convolution sub-network may include multiple convolution modules, and each convolution module may include multiple convolution sub-modules. The first fused feature produced by the cross-channel convolution sub-network can be determined as the current input feature of the current convolution module; the convolution sub-modules in the current convolution module then densely extract features from the current input feature; the extracted features become the new current input feature, a remaining convolution module in the convolution sub-network is selected as the current convolution module, and the dense-extraction step is repeated until the first feature-extraction termination condition is met, yielding the second fused feature.
In one embodiment, a convolution module may include multiple convolution sub-modules or just one; for example, a convolution module may include only the first convolution sub-module, and so on.
For example, as shown in Figure 14, the convolution sub-network may include a first convolution module, a second convolution module, and a third convolution module. A convolution module may include multiple convolution sub-modules; for example, the first convolution module may include multiple first convolution sub-modules.
The first convolution module can be determined as the current convolution module, and the first fused feature produced by the cross-channel convolution sub-network as the current input feature of the first convolution module. The convolution sub-modules in the first convolution module then densely extract features from the current input feature; the extracted features become the new current input feature, the second convolution module is selected as the current convolution module, and the dense-extraction step is repeated until the first, second, and third convolution modules in the convolution sub-network have all finished feature extraction, yielding the second fused feature.
For example, the convolution sub-network may include a first convolution module, a second convolution module, and a third convolution module, the three convolution modules connected in series. The first fused feature produced by the cross-channel convolution sub-network is determined as the current input feature of the first convolution module, whose convolution sub-modules densely extract features from it. The convolution sub-modules in the second convolution module then densely extract features from the features extracted by the first convolution module, and the convolution sub-modules in the third convolution module densely extract features from the features extracted by the second convolution module; finally, the features extracted by the third convolution module can be taken as the second fused feature.
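The serial hand-off between the three modules, where each module's output becomes the next module's current input feature, can be sketched as a simple fold; the stand-in module functions below are assumptions purely for illustration.

```python
from functools import reduce

def run_convolution_subnetwork(first_fused_feature, modules):
    """Pass the feature through serially connected convolution modules:
    each module's output becomes the next module's current input feature."""
    return reduce(lambda feature, module: module(feature),
                  modules, first_fused_feature)

# Stand-ins for the first, second and third convolution modules.
modules = [lambda f: f + 1, lambda f: f * 2, lambda f: f - 3]
second_fused_feature = run_convolution_subnetwork(10, modules)
```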
Here, to improve the propagation efficiency of information and gradients in the network model, so that every layer can take its gradient directly from the loss function and receive the input signal, which makes deeper network models trainable, the DenseNet idea can be added to the quality control network model, improving network performance through feature reuse: any two layers can be connected directly, that is to say, the input of each layer includes the outputs of all preceding layers.
In one embodiment, specifically, the step of "densely extracting features from the current input feature based on the convolution sub-modules in the current convolution module" may include:
determining the current input feature of the current convolution module as the target input feature of the current convolution sub-module;
extracting features from the target input feature based on the current convolution sub-module to obtain a target extracted feature;
fusing, based on the current convolution sub-module, the target extracted feature with the history extracted features, where the history extracted features are the features extracted by all history convolution sub-modules preceding the current convolution sub-module in the current convolution module;
taking the fused feature as the target input feature, and selecting a remaining convolution sub-module in the current convolution module as the current convolution sub-module;
returning to the step of extracting features from the target input feature based on the current convolution sub-module to obtain a target extracted feature, until a second feature-extraction termination condition is met.
Here, the target input feature is the feature from which the current convolution sub-module needs to extract. The history extracted features are the features extracted by all history convolution sub-modules preceding the current convolution sub-module in the current convolution module. A convolution module includes multiple densely connected convolution sub-modules.
Here, the second feature-extraction termination condition is the condition for deciding whether feature extraction in the current convolution module has finished; for example, the condition may be that all convolution sub-modules in the current convolution module have performed feature extraction. For instance, when the current convolution module is the first convolution module, the condition may be that all first convolution sub-modules in the first convolution module have performed feature extraction.
In practical applications, to increase the reuse rate of features, all convolution sub-modules within the same convolution module can be densely connected, reusing every feature path between sub-modules of the same modality. The current input feature of the current convolution module is determined as the target input feature of the current convolution sub-module; the current convolution sub-module extracts features from the target input feature to obtain the target extracted feature, and then fuses the target extracted feature with the history extracted features; the fused feature becomes the new target input feature, a remaining convolution sub-module in the convolution module is selected as the current convolution sub-module, and the extraction step is repeated until the second feature-extraction termination condition is met.
For example, as shown in Figure 16, to increase the reuse rate of features, the convolution sub-modules can be densely connected, reusing every feature path between sub-modules of the same modality. The first convolution module may include multiple first convolution sub-modules, all connected by dense connections, that is to say, each first convolution sub-module is connected to every first convolution sub-module before it.
The current input feature of the first convolution module can be determined as the target input feature of the current first convolution sub-module, which extracts features from it to obtain the target extracted feature; based on the current first convolution sub-module, the target extracted feature is then fused with the history extracted features; the fused feature becomes the new target input feature, a remaining first convolution sub-module in the first convolution module is selected as the current first convolution sub-module, and the extraction step is repeated until all first convolution sub-modules in the first convolution module have performed feature extraction.
For example, a convolution module includes four convolution sub-modules: a first convolution sub-module, a second convolution sub-module, a third convolution sub-module, and a fourth convolution sub-module, densely connected and arranged in that order within the convolution module. The current input feature fed into the convolution module is determined as the target input feature of the first convolution sub-module.
Because the four convolution sub-modules in the convolution module are densely connected, the feature extracted by the first convolution sub-module can be fed directly into the second, third, and fourth convolution sub-modules.
The input of the second convolution sub-module is the feature extracted by the first convolution sub-module, and the feature extracted by the second convolution sub-module can be fed directly into the third and fourth convolution sub-modules.
The input of the third convolution sub-module comprises the features extracted by the first and second convolution sub-modules, and the feature extracted by the third convolution sub-module can be fed directly into the fourth convolution sub-module.
The input of the fourth convolution sub-module comprises the features extracted by the first, second, and third convolution sub-modules, and the feature extracted by the fourth convolution sub-module can be taken as the feature extracted by the convolution module.
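The dense-connection pattern just described, where each sub-module receives the input plus the outputs of all preceding sub-modules, can be sketched as follows. Fusing the history by summation is an assumption to keep the example scalar; DenseNet-style channel concatenation has the same connectivity structure.

```python
def dense_module(input_feature, submodules, fuse=lambda feats: sum(feats)):
    """Each convolution sub-module's target input is the fusion of the module
    input with the outputs of ALL preceding sub-modules (dense connection)."""
    history = [input_feature]
    for submodule in submodules:
        target_input = fuse(history)        # fuse history into the target input
        history.append(submodule(target_input))
    return history[-1]                      # last sub-module's output = module output

# Four hypothetical sub-modules, as in the four-sub-module example above.
subs = [lambda f: f * 2] * 4
module_output = dense_module(1.0, subs)
```

With these stand-ins the fourth sub-module sees the sum of the input and the three earlier outputs, mirroring how the fourth convolution sub-module's input comprises the features of the first three.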
Here, to enrich the features, the original features can be reused through shortcut connections, improving the accuracy of the network model. In one embodiment, specifically, the step of "extracting features from the target input feature based on the current convolution sub-module to obtain the target extracted feature" may include:
extracting features from the target input feature separately through the multiple convolution branches to obtain multiple convolution output features;
fusing the target input feature with the multiple convolution output features to obtain the target extracted feature.
Here, a convolution sub-module includes multiple convolution branches. A convolution branch may include several convolutional layers, through which feature extraction is realized.
In practical applications, features can be extracted from the target input feature separately by the multiple convolution branches to obtain multiple convolution output features; the target input feature and the multiple convolution output features are then fused to obtain the target extracted feature.
For example, as shown in Figure 17, Figure 18, and Figure 19, to increase feature diversity, a shortcut connection can be added to each of the Inception A, Inception B, and Inception C units, the basic units (convolution sub-modules) of the Inception V4 network model, so that the original input feature is reused through the shortcut. The fusion layer (Filter Concat) then merges the convolution output features of the multiple convolution branches with the target input feature carried by the shortcut connection, yielding the target extracted feature.
In one embodiment, the basic units (convolution sub-modules) of the Inception V4 network model are not limited to the Inception A, Inception B, and Inception C units; they may include only one or more of those units, or other basic units as well, and the order of the basic units can also be rearranged according to the actual situation, and so on.
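The shortcut scheme above, branches plus a reused input merged by Filter Concat, can be sketched as follows. The stand-in branch functions are assumptions; in the real sub-module each branch is a stack of convolutional layers and the fusion concatenates along the channel axis.

```python
def submodule_with_shortcut(target_input, branches):
    """Run each convolution branch on the target input, then fuse the branch
    outputs with the original input carried by the shortcut connection
    (the Filter Concat step, modeled here as list concatenation)."""
    branch_outputs = [branch(target_input) for branch in branches]
    return [target_input] + branch_outputs   # shortcut + convolution outputs

# Three hypothetical branches standing in for the parallel Inception paths.
branches = [lambda f: f + 1, lambda f: f * 2, lambda f: -f]
target_extracted = submodule_with_shortcut(3, branches)
```

The returned list keeps the original input as its first element, which is exactly what lets later layers re-read the unmodified feature, the point of the shortcut connection.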
205. Classify the second fused feature based on the classification sub-network to obtain the quality recognition result of the tissue images.
The classification sub-network can classify image features and may include a fully connected layer, a pooling layer, and so on; for example, the classification sub-network may include an Average Pooling layer, a SoftMax layer, etc.
Here, the quality recognition result can be a result indicating whether a tissue image is clinically usable. For example, clinically unusable images may include images that are blurred, cropped, or misaligned due to human error during shooting, or images affected by non-human factors such as refractive media opacity or high myopia, and so on.
In practical applications, for example, the classification sub-network can classify the second fused feature to obtain the quality recognition results of the fundus image and the OCT image, such as probabilities related to image quality information.
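The SoftMax layer mentioned above turns the raw class scores into the quality probabilities; a minimal standalone version, with hypothetical scores for a usable/unusable decision, looks like this:

```python
import math

def softmax(logits):
    """Convert the classification sub-network's raw scores into probabilities."""
    m = max(logits)                           # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for the classes "clinically usable" / "clinically unusable".
probs = softmax([2.0, 0.5])
```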
206. Filter the tissue images of the multiple modalities according to the quality recognition result to obtain filtered tissue images.
In practical applications, after the quality recognition result of the tissue images is obtained, the tissue images can be filtered according to the quality recognition result to obtain the filtered tissue images. For example, clinically unusable images, such as images blurred, cropped, or misaligned due to human error, or images affected by non-human factors such as refractive media opacity or high myopia, can be deleted by filtering according to the quality recognition result. The images remaining after deletion are the filtered tissue images, which may include clinically usable fundus images and clinically usable OCT images.
In one embodiment, the method may also include filtering out images whose scanning mode does not meet the requirements.
As shown in Figure 3, the image filtering method may also include the following process:
301. Identify the scan type of the filtered tissue images.
Here, the scan type of a filtered tissue image can be the scan type used when the tissue image was captured. For example, when the filtered tissue image is a clinically usable fundus image, its scan type may be whether the green line of the fundus image passes through the macula, and so on.
In one embodiment, to improve the accuracy of the obtained scan type of the filtered tissue images, a network model can be used to obtain the scan type.
Specifically, the step of "identifying the scan type of the filtered tissue images" may include:
identifying the filtered tissue images based on an image scan type identification network model to obtain the scan type of the filtered tissue images.
(1) Identify the filtered tissue images based on the image scan type identification network model to obtain the scan type of the filtered tissue images.
Here, the image scan type identification network model is a network model that can obtain the scan type of the filtered tissue images; for example, the image scan type identification network model can be a network model obtained by improving the Inception V4 network model.
In practical applications, for example, the clinically usable fundus images among the filtered tissue images can be fed into the image scan type identification network model to obtain the scan type of the clinically usable fundus images. Through the image scan type identification network model, fundus images whose scanning does not meet the requirements can be rejected, including circular scans of the optic disc and line scans that cover only the optic disc but not the macular area. The OCT images corresponding to the non-compliant fundus images can then be rejected, and the remaining OCT images whose scanning meets the requirements are the target images.
Here, to improve the accuracy with which the image scan type identification network model obtains the scan type of the filtered tissue images, a network model comprising a convolution sub-network and a classification sub-network can be used for scan type identification.
In practical applications, for example, as shown in Figure 20, the image scan type identification network model may include a convolution sub-network and a classification sub-network connected in sequence. The filtered tissue images can be fed into the image scan type identification network model; the convolution sub-network extracts features from the image features, and the classification sub-network then classifies the extracted features to obtain the scan type of the filtered tissue images.
In one embodiment, the DenseNet idea can also be added to the image scan type identification network model, improving network performance through feature reuse.
Here, the convolution sub-network includes at least one convolution module; a convolution module includes multiple densely connected convolution sub-modules; a convolution sub-module includes multiple convolution branches.
In practical applications, for example, to increase the reuse rate of features, the convolution sub-modules can be densely connected, reusing every feature path between sub-modules of the same modality, and the features extracted by the history convolution sub-modules preceding the current convolution sub-module are fused.
Here, to enrich the features, shortcut connections can also be added so that the original features are reused, improving the accuracy of the network model.
In practical applications, for example, to increase feature diversity, a shortcut connection can be added to each of the Inception A, Inception B, and Inception C units in the basic units (convolution sub-modules) of the Inception V4 network model, reusing the original features through the shortcut connection. The outputs of the multiple convolution branches of a convolution sub-module and the feature carried by the shortcut connection are fused; the fusion layer (Filter Concat) then merges the input feature with the outputs of the multiple convolution branches.
Here, the structures of the dense connections and shortcut connections are the same as in the quality control network model and are not described again.
302. Filter the filtered tissue images according to the scan type to obtain target images.
In practical applications, after the scan type of the filtered tissue images is obtained, the filtered tissue images can be filtered according to the scan type to obtain the target images. For example, when the filtered tissue images are clinically usable fundus images, the images whose scan type does not meet the requirements, including circular scans of the optic disc and line scans that cover only the optic disc but not the macular area, can be deleted by filtering according to the scan type, together with the OCT images corresponding to those non-compliant fundus images; the remaining OCT images whose scan type meets the requirements are the target images.
(2) Training of the image scan type identification network model.
Here, the method may also include a step of training the image scan type identification network model.
In practical applications, the training set can be obtained in several ways; for example, sample images can be acquired by a medical instrument to compose the training set, or the training set can be obtained through a network, a database, local storage, and so on.
In one embodiment, for example, the fundus images among the sample images in the training set may have a size of 496 × 496 and can be standardized; data augmentation can then be applied to the images to increase the amount of training data and improve the generalization ability of the model, and noise data can also be added to improve the robustness of the model.
In practical applications, a target training sample containing a certain number of sample images can be obtained by sampling from the training set.
In one embodiment, for example, sampling from the training set to obtain the target training sample can also be implemented by a sampler.
In one embodiment, since a common sampler cannot handle hard examples through adjustment of the sampling probability, a hard example boosting sampler (Hard Example Boosting Sampler) can be used to sample the training set. In practice, the sampler can initialize all sample images so that every sample image has the same probability of being sampled. For example, if the training set includes M sample images, every sample image can be sampled with a probability of 1/M, and a target training sample containing N sample images is obtained for network model training. After training on the sample, the loss functions corresponding to the N sample images in the target training sample can be obtained; a sample image with a large loss function value can be regarded as a hard example. The sample images in the target training sample can then be sorted from largest to smallest by loss function value, and the sampling probability 1/M of the sample image with the largest loss function value is doubled, so that at the next sampling round that sample image has a larger probability of being drawn by the sampler. Hard examples can therefore be sampled repeatedly; this sampling is sampling with replacement.
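The sampler behaviour described above can be sketched in Python. This is an illustrative sketch only: the class and method names are hypothetical, and doubling an unnormalized per-image weight is one way of realizing the doubled sampling probability (1/M to 2/M) stated above.

```python
import random

class HardExampleBoostingSampler:
    """Sketch of the hard example boosting sampler: every image starts
    with equal weight; after each round the weight (hence sampling
    probability) of the highest-loss image is doubled; sampling is
    done with replacement so hard examples can recur."""

    def __init__(self, num_images):
        # Uniform initialization: each image sampled with probability 1/M.
        self.weights = [1.0] * num_images

    def sample(self, n):
        # Sampling with replacement according to the current weights.
        return random.choices(range(len(self.weights)), weights=self.weights, k=n)

    def update(self, batch_indices, losses):
        # Double the weight of the batch sample with the largest loss.
        hardest = batch_indices[max(range(len(losses)), key=lambda i: losses[i])]
        self.weights[hardest] *= 2.0

    def probability(self, index):
        return self.weights[index] / sum(self.weights)
```

For example, with four images the initial probability of each is 0.25; after one update that reports image 1 as having the largest loss, its weight is doubled and its sampling probability rises accordingly.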
In practical applications, the training sample can be used to train the image scan type identification network model. For example, the training sample can be fed into the image scan type identification network model for training, yielding a trained image scan type identification network model, which is then used as the image scan type identification network model.
In one embodiment, for example, the parameters of the image scan type identification network model can use the parameters of an Inception V4 network model pre-trained on the ImageNet dataset, and newly added convolutional layers can be initialized with a Gaussian distribution with a variance of 0.01 and a mean of 0.
In one embodiment, for example, an Adam-based gradient descent method can be used to solve for the convolutional layer parameters w and bias parameters b of the image scan type identification network model, with the learning rate decayed by a factor of 0.9 every 20K iterations.
As can be seen from the above, in the embodiment of the present application, tissue images of multiple modalities of a target tissue are obtained, and a quality control network model to be used is determined, the quality control network model including a sequentially connected cross-channel convolution sub-network, convolution sub-network and classification sub-network. The tissue image features of the multiple modalities are fused based on the cross-channel convolution sub-network to obtain a first fusion feature; dense feature extraction is performed on the first fusion feature based on the convolution sub-network to obtain a second fusion feature; the second fusion feature is classified based on the classification sub-network to obtain a quality identification result of the tissue images; and the tissue images of the multiple modalities are filtered according to the quality identification result to obtain filtered tissue images. By adding the idea of dense connection to the network model, this scheme takes both network width and network depth into account, and increases the richness and reusability of features within the same modality. Through cross-channel convolution, the features of different modalities can be enriched by combining the information in the multi-modality images. The novel sampler increases the sampling rate of hard examples and realizes repeated sampling of hard examples, thereby improving image filtering efficiency and accuracy.
The method described in the above embodiment is illustrated in further detail below by way of example.
Referring to Fig. 4, in this embodiment, the description takes the case where the image filtering apparatus is specifically integrated in a network device as an example.
401: The network device obtains tissue images of multiple modalities of a target tissue.
In practical applications, the tissue images of multiple modalities of the target tissue can be obtained in a variety of ways. For example, when the tissue images of the multiple modalities of the target tissue are OCT images and fundus images, the OCT images can be acquired by an optical coherence tomography scanner or the like, and the fundus images can be acquired by a fundus camera or the like; the OCT images and fundus images can also be obtained from local storage, downloaded over a network, and so on.
402: The network device determines a quality control network model to be used.
(1) Determination of the quality control network model.
As shown in Fig. 14, the quality control network model may include a sequentially connected cross-channel convolution sub-network, convolution sub-network and classification sub-network. The convolution sub-network includes at least one convolution module, and the classification sub-network may include an average pooling layer, a classifier, and so on. The quality control network model can be a network model obtained by improving the Inception V4 network model: the idea of dense connection from the DenseNet network model can be added to the Inception V4 network model, so as to obtain a network model that takes both network width and network depth into account.
Moreover, both the Inception V4 network model and the DenseNet network model only consider the case where a single modality (image) is used as input. Since the tissue images may include multiple modalities, a network model with a single-input structure cannot effectively process tissue images of multiple modalities at the same time, and the lack of information from certain modalities may cause the network model to classify some data inaccurately. Therefore, a cross-channel convolution sub-network can be added to the quality control network model to fuse the information of the multiple modalities, so that the network device can process multi-modal input simultaneously.
(2) Training of the quality control network model.
The image filtering method may also include training of the quality control network model.
(a) Obtaining a training set.
In practical applications, the training set can be obtained in several ways. For example, sample images can be acquired by a medical instrument to compose the training set, or the training set can be obtained from a network, a database, local storage, and so on. Specifically, 4476 sample images can be obtained as the training set, accounting for 70% of the total images, and 1919 tissue images of target tissues can be obtained as the test set, accounting for 30% of the total images; the sample images in the test set do not include annotated quality information, and the total images include the test set and the training set.
The sample images in the training set can also be pre-processed, to eliminate irrelevant information in the images, restore useful real information, enhance the detectability of relevant information, and simplify the data as much as possible, thereby improving training accuracy.
In one embodiment, for example, the size of the fundus images among the sample images may be 496 × 496, and the size of the OCT images may be 496 × 496, 496 × 768, 496 × 1024, and so on. The input images of the quality control network model can be 496 × 496 fundus images and 496 × 768 OCT images. Since the OCT images among the sample images have more than one size, the sizes of the OCT images need to be adjusted. For example, when the image width is less than 768, a zero-padding (black border) operation is applied to both sides of the image; when the image width is greater than 768, a symmetric cropping operation is applied to both sides of the image, so that the width of all OCT images becomes 768, thereby unifying the image sizes.
The sample images of the unified size can then be standardized; for example, image standardization can be performed by subtracting the image mean and dividing by the image variance. The images can then be subjected to a random rotation of -30° to +30°, random horizontal flipping, random elastic deformation, or the addition of random speckle (spot) noise, thereby increasing the amount of sample image data and improving the generalization ability of the model; noise data can also be added to improve the robustness of the network model.
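The standardization step can be sketched as follows. Note that, following the wording above, the sketch divides by the image variance rather than by the standard deviation; the function name is hypothetical.

```python
from statistics import mean, pvariance

def standardize(pixels):
    """Standardize pixel values as described above: subtract the
    image mean, then divide by the image variance."""
    mu = mean(pixels)
    var = pvariance(pixels, mu)   # population variance of the image
    return [(p - mu) / var for p in pixels]
```

For example, the pixel values `[0, 2, 4]` have mean 2 and variance 8/3, so they standardize to approximately `[-0.75, 0.0, 0.75]`.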
(b) Sampling the training set according to the sampling probabilities of the sample images to obtain a target training sample.
In practical applications, a target training sample containing a certain number of sample images can be obtained by sampling from the training set according to the sampling probabilities of the sample images. For example, when the training set includes M sample images and the sampling probability of each sample image is 1/M, N sample images can be drawn, yielding a target training sample containing N sample images.
In one embodiment, for example, the step of sampling the training set according to the sampling probabilities of the sample images to obtain the target training sample can also be implemented by a sampler.
In one embodiment, a hard example boosting sampler (Hard Example Boosting Sampler) can be used to sample the training set. For example, the hard example boosting sampler can initialize the sample images in the training set so that every training sample image has the same probability of being sampled, and then sample the training set to obtain the target training sample.
(c) Training the quality control network model based on the target training sample to obtain a trained quality control network model.
In practical applications, after the target training sample is sampled from the training set, the quality control network model can be trained based on the target training sample to obtain the trained quality control network model.
(d) Obtaining, based on the trained quality control network model, the loss functions corresponding to the sample images in the target training sample.
In practical applications, after the quality control network model is trained with the target training sample, the loss function corresponding to each sample image in the target training sample can be obtained. A sample image whose corresponding loss function is large is regarded as a hard example.
(e) Updating the sampling probabilities of the sample images in the target training sample according to the loss functions, and returning to the step of sampling the training set according to the sampling probabilities of the sample images, until a sampling termination condition is met.
In practical applications, the sampling probabilities of the sample images in the target training sample can be updated according to the obtained loss functions, and the sampling step is then continued until the sampling termination condition is met.
In order to realize repeated sampling of hard examples, the hard examples in the target training sample can be identified according to the loss functions of the sample images, and the hard examples are then sampled repeatedly, so as to improve the accuracy of training the network model.
In practical applications, the sample images in the target training sample can be sorted according to the loss functions by a preset rule to obtain an ordering of the sample images in the target training sample, and the sampling probabilities of the sample images in the target training sample are updated. For example, the sample images in the target training sample can be sorted by loss function in ascending order; a sample image with a large loss function can be regarded as a hard example. The sampling probability of the sample image with the largest loss function in the target training sample can then be doubled. Since the sampling is sampling with replacement, a hard example has a larger sampling probability, and thus a larger probability of being sampled and trained on multiple times.
In one embodiment, for example, the parameters of the quality control network model can use the parameters of an Inception V4 network model pre-trained on the ImageNet dataset, and newly added convolutional layers can be initialized with a Gaussian distribution with a variance of 0.01 and a mean of 0.
In one embodiment, for example, an Adam-based gradient descent method can be used to solve for the convolutional layer parameters w and bias parameters b of the quality control network model, and the learning rate can be decayed by a factor of 0.9 every 20K iterations.
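The initialization and learning rate schedule described above can be sketched as follows. The base learning rate value, and the reading of the decay as multiplying the learning rate by 0.9 after every 20K iterations, are assumptions of this sketch.

```python
import random

def init_new_conv_weights(n, variance=0.01, mean=0.0):
    """Initialize n newly added convolution weights from a Gaussian
    with mean 0 and variance 0.01 (standard deviation 0.1)."""
    std = variance ** 0.5
    return [random.gauss(mean, std) for _ in range(n)]

def learning_rate(step, base_lr, decay=0.9, every=20_000):
    """Step-decay schedule: multiply the learning rate by 0.9 after
    every 20K iterations."""
    return base_lr * decay ** (step // every)
```

For example, with an assumed base learning rate of 1e-3, the rate drops to 9e-4 at iteration 20K and to 8.1e-4 at iteration 40K.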
403: The network device fuses the tissue image features of the multiple modalities based on the cross-channel convolution sub-network to obtain a first fusion feature.
In practical applications, for example, the OCT images and fundus images can be input into the quality control network model for quality identification to obtain quality information about the OCT images and fundus images. Through the quality control network model, clinically unusable data can be rejected, for example images that are blurred, cropped or misaligned due to human factors in shooting, as well as images affected by non-human factors such as refractive media opacity and high myopia.
As shown in Fig. 14, in order to improve the accuracy of the quality identification results that the quality control network model obtains for the tissue images, quality identification can be performed using a network model that includes a cross-channel convolution sub-network, a convolution sub-network and a classification sub-network.
In practical applications, for example, the quality control network model may include a sequentially connected cross-channel convolution sub-network, convolution sub-network and classification sub-network. The fundus images and OCT images can be input into the quality control network model, and the features of the fundus images and the features of the OCT images are fused by the cross-channel convolution sub-network to obtain the first fusion feature.
As shown in Fig. 15, in order to effectively process fundus images and OCT images at the same time, the cross-channel convolution sub-network can be used to fuse the features of the fundus images and OCT images, thereby improving feature diversity and hence the accuracy of the network model.
In practical applications, for example, as shown in Fig. 15, based on the cross-channel convolution sub-network, the features of the OCT image and the features of the fundus image can be extracted by a feature extraction unit to obtain a 2 × W × H × C feature image; the feature image is then reshaped by a modality reshaping unit to obtain a C × 2 × H × W feature image; finally, a feature integration unit applies a 2 × 1 × 1 three-dimensional convolution to bring the reshaped bimodal information together and obtain the first fusion feature, thereby completing the cross-channel convolution.
404: The network device performs dense feature extraction on the first fusion feature based on the convolution sub-network to obtain a second fusion feature.
In practical applications, for example, the first fusion feature can be input into the convolution sub-network to obtain the second fusion feature.
As shown in Fig. 14, the convolution sub-network may include a first convolution module, a second convolution module and a third convolution module. A convolution module may include multiple convolution submodules; for example, the first convolution module may include multiple first convolution submodules. The first fusion feature obtained through the cross-channel convolution sub-network is determined as the current input feature of the first convolution module, and feature extraction is then performed on the current input feature based on the convolution submodules in the first convolution module. The features extracted by the first convolution module are then further extracted based on the convolution submodules in the second convolution module, and the features extracted by the second convolution module are further extracted based on the convolution submodules in the third convolution module. Finally, the features extracted by the third convolution module can be used as the second fusion feature.
For example, in order to increase the reuse rate of features, all convolution submodules in the same convolution module can be densely connected, so that each feature path between submodules of the same modality is reused. The first convolution module may include multiple first convolution submodules, all of which are connected by dense connections: each first convolution submodule is connected to all preceding first convolution submodules.
For example, a convolution module includes four convolution submodules, namely a first convolution submodule, a second convolution submodule, a third convolution submodule and a fourth convolution submodule. The four convolution submodules in the convolution module are connected by dense connections. For example, the convolution submodules are arranged in the convolution module in the order of the first convolution submodule, the second convolution submodule, the third convolution submodule and the fourth convolution submodule. The current input feature input into the convolution module is determined as the target input feature of the first convolution submodule.
The four convolution submodules in the convolution module are connected by dense connections; that is, the features extracted by the first convolution submodule can be directly input into the second convolution submodule, the third convolution submodule and the fourth convolution submodule, respectively.
The input of the second convolution submodule is the features extracted by the first convolution submodule, and the features extracted by the second convolution submodule can be directly input into the third convolution submodule and the fourth convolution submodule, respectively.
The input of the third convolution submodule is the features extracted by the first convolution submodule and the second convolution submodule, and the features extracted by the third convolution submodule can be directly input into the fourth convolution submodule.
The input of the fourth convolution submodule is the features extracted by the first convolution submodule, the second convolution submodule and the third convolution submodule, and the features extracted by the fourth convolution submodule can be used as the features extracted by the convolution module.
In order to increase feature diversity, a shortcut connection can be added to the Inception A, Inception B and Inception C units among the basic units (convolution submodules) of the Inception V4 network model, and the original input feature is reused through the shortcut connection. The convolution output features produced by the multiple convolution branches are merged with the target input feature obtained through the shortcut connection: the fusion layer (Filter Concat) merges the target input feature with the multiple convolution output features to obtain the target extracted feature.
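The shortcut connection and fusion layer (Filter Concat) can be sketched as follows, with features represented as lists of channel labels and list concatenation standing in for Filter Concat; the function name is hypothetical.

```python
def inception_submodule_with_shortcut(x, branches):
    """Sketch of an Inception-style convolution submodule with a
    shortcut connection: the fusion layer (Filter Concat)
    concatenates the original target input feature, carried unchanged
    over the shortcut, with the outputs of all convolution branches."""
    outputs = [branch(x) for branch in branches]    # multi-branch conv outputs
    return x + [f for out in outputs for f in out]  # shortcut reuses x, then concat
```

The original input channels thus reappear verbatim in the submodule output, which is how the shortcut connection reuses the original feature.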
405: The network device classifies the second fusion feature based on the classification sub-network to obtain a quality identification result for the tissue images.
In practical applications, for example, the second fusion feature is input into the classification sub-network to obtain the quality identification results of the fundus images and OCT images, which can be, for example, probabilities related to the image quality information.
406: The network device filters the tissue images of the multiple modalities according to the quality identification result to obtain filtered tissue images.
In practical applications, after the quality identification results of the tissue images are obtained, the tissue images can be filtered according to the quality identification results to obtain the filtered tissue images. For example, clinically unusable images, such as images that are blurred, cropped or misaligned due to human factors in shooting, and images affected by non-human factors such as refractive media opacity and high myopia, can be filtered out and deleted according to the quality identification results. The images remaining after deletion are the filtered tissue images, which may include clinically usable fundus images and clinically usable OCT images.
407: The network device identifies the scan types of the filtered tissue images.
In one embodiment, in order to improve the accuracy of the obtained scan types of the filtered tissue images, a network model can be used to obtain the scan types.
(1) Identifying the filtered tissue images based on the image scan type identification network model to obtain the scan types of the filtered tissue images.
The image scan type identification network model is a network model that can obtain the scan types of the filtered tissue images; for example, the image scan type identification network model can be a network model obtained by improving the Inception V4 network model.
In practical applications, for example, the clinically usable fundus images among the filtered tissue images can be input into the image scan type identification network model to obtain the scan types of the clinically usable fundus images. Through the image scan type identification network model, fundus images whose scan types do not meet the requirements can be rejected, including circular scans of the optic papilla and line scans that cover only the optic papilla but not the macular area, and the OCT images corresponding to the fundus images whose scan types do not meet the requirements are deleted. The remaining OCT images whose scan types meet the requirements are the target images.
In order to improve the accuracy with which the image scan type identification network model obtains the scan types of the filtered tissue images, scan type identification can be performed using a network model that includes a convolution sub-network and a classification sub-network.
In practical applications, for example, as shown in Fig. 20, the image scan type identification network model may include a sequentially connected convolution sub-network and classification sub-network. The filtered tissue images can be input into the image scan type identification network model; the convolution sub-network performs feature extraction on the image features, and the classification sub-network then classifies the extracted features to obtain the scan types of the filtered tissue images.
In one embodiment, the idea of DenseNet can also be added to the image scan type identification network model, to improve network performance from the perspective of feature reuse.
The convolution sub-network includes at least one convolution module. A convolution module includes multiple densely connected convolution submodules, and a convolution submodule includes multiple convolution branches.
In practical applications, for example, in order to increase the reuse rate of features, the convolution submodules can be densely connected, so that each feature path between submodules of the same modality is reused and the features extracted by the historical convolution submodules preceding the current convolution submodule are merged.
In addition, to enrich the features, shortcut connections can be added so that the original features can be reused, improving the accuracy of the network model.
In practical applications, for example, in order to increase feature diversity, a shortcut connection can be added to the Inception A, Inception B and Inception C units among the basic units (convolution submodules) of the Inception V4 network model, and the original feature is reused through the shortcut connection. The features output by the multiple convolution branches of a convolution submodule and the feature obtained through the shortcut connection are merged: a fusion layer (Filter Concat) merges the input feature with the features output by the multiple convolution branches.
408: The network device filters the filtered tissue images according to scan type to obtain target images.
In practical applications, after the scan types of the filtered tissue images are obtained, the filtered tissue images can be filtered according to scan type to obtain the target images. For example, when the filtered tissue images are clinically usable fundus images, images whose scan type does not meet the requirements, including circular scans of the optic papilla and line scans that cover only the optic papilla but not the macular area, can be filtered out and deleted according to scan type, and the OCT images corresponding to the fundus images whose scan type does not meet the requirements are also deleted. The remaining OCT images whose scan type meets the requirements are the target images.
(2) Training of the image scan type identification network model.
The image filtering method may also include a step of training the image scan type identification network model.
In practical applications, the training set can be obtained in several ways. For example, sample images can be acquired by a medical instrument to compose the training set, or the training set can be obtained from a network, a database, local storage, and so on.
In one embodiment, for example, the size of the fundus images among the sample images in the training set may be 496 × 496. The fundus images can be standardized, and data enhancement can then be applied to the images to increase the amount of training data and improve the generalization ability of the model; noise data can also be added to improve the robustness of the model.
In practical applications, a target training sample containing a certain number of sample images can be obtained by sampling from the training set.
In one embodiment, for example, sampling from the training set to obtain the target training sample can also be implemented by a sampler.
In one embodiment, since a common sampler cannot handle hard examples through adjustment of the sampling probability, a hard example boosting sampler (Hard Example Boosting Sampler) can also be used to sample the training set.
In practical applications, the sampler can initialize all sample images so that every sample image has the same probability of being sampled. For example, if the training set includes M sample images, a training sample can be obtained from the training set as the target training sample based on the hard example boosting sampler, with every sample image having a sampling probability of 1/M; the target training sample includes N sample images. After training on the sample, the loss functions corresponding to the N sample images in the target training sample are obtained, the sample images in the target training sample are sorted by the size of the loss function, and the sampling probability of the sample image with the largest loss function is adjusted from 1/M to 2/M, so as to increase the sampling probability of that hard example.
In one embodiment, for example, if at the next sampling the target training sample again includes the sample image whose sampling probability was adjusted to 2/M, and the loss function of that sample image is again the largest, the sampling probability of that sample image can be further adjusted from 2/M to 4/M, so as to increase the sampling probability of that hard example.
In practical applications, the training sample can be used to train the image scan type identification network model. For example, the training sample can be fed into the image scan type identification network model for training, yielding a trained image scan type identification network model, which is then used as the image scan type identification network model.
In one embodiment, for example, the parameters of the image scan type identification network model can use the parameters of an Inception V4 network model pre-trained on the ImageNet dataset, and newly added convolutional layers can be initialized with a Gaussian distribution with a variance of 0.01 and a mean of 0.
In one embodiment, for example, an Adam-based gradient descent method can be used to solve for the convolutional layer parameters w and bias parameters b of the image scan type identification network model, with the learning rate decayed by a factor of 0.9 every 20K iterations.
This image filtering method can effectively improve the accuracy of the quality control network model and the image scan type identification network model, can be applied to assist in the cleaning of image data and the construction of image databases, and can serve as a quality control module for subsequent algorithm development.
As shown in Fig. 21, 100 clinical images were randomly selected and their actual test distribution obtained according to the image filtering method. The scan types include four classes: line scans through the macula, line scans through the optic papilla, scans that miss the macula, and circular scans of the optic papilla. The OCT image quality types include two classes: clinically usable and poor quality. The experimental results were 74 line scans through the macula, 8 line scans through the optic papilla, 13 scans that missed the macula and 5 circular scans of the optic papilla; 78 images were of the clinically usable type and 22 of the poor-quality type. The accuracy of the image scan type identification network model is therefore 99%, and the accuracy of the quality control network model is 99%.
The following table shows that, under the premise of maintaining the accuracy for optic papilla and macula scans, the image scan type identification network model can greatly improve the accuracy of discriminating unqualified scan types.
As can be seen from the above, in this embodiment of the present application, the network device obtains tissue images of multiple modalities of a target tissue and determines a quality control network model to be used, the quality control network model including a cross-channel convolution sub-network, a convolution sub-network, and a classification sub-network connected in sequence. The features of the tissue images of the multiple modalities are fused based on the cross-channel convolution sub-network to obtain a first fused feature; dense feature extraction is performed on the first fused feature based on the convolution sub-network to obtain a second fused feature; the second fused feature is classified based on the classification sub-network to obtain a quality identification result for the tissue images; and the tissue images of the multiple modalities are filtered according to the quality identification result to obtain filtered tissue images. By introducing the idea of dense connections into the network model, this scheme balances the width and depth of the network and increases the richness and reusability of features within each modality. Through cross-channel convolution, the features of different modalities are enriched by combining information across the multi-modality images. A novel sampler increases the sampling rate of difficult samples, so that difficult samples are sampled repeatedly, thereby improving image filtering efficiency and accuracy.
To better implement the above method, an embodiment of the present application further provides an image filtering apparatus, which may be specifically integrated in a network device, such as a terminal or a server. For example, as shown in FIG. 22, the image filtering apparatus may include an obtaining module 221, a determining module 222, a cross-channel convolution module 223, a convolution module 224, a classification module 225, and a filtering module 226, as follows:
an obtaining module 221, configured to obtain tissue images of multiple modalities of a target tissue;
a determining module 222, configured to determine a quality control network model to be used, the quality control network model including a cross-channel convolution sub-network, a convolution sub-network, and a classification sub-network connected in sequence;
a cross-channel convolution module 223, configured to fuse the features of the tissue images of the multiple modalities based on the cross-channel convolution sub-network to obtain a first fused feature;
a convolution module 224, configured to perform dense feature extraction on the first fused feature based on the convolution sub-network to obtain a second fused feature;
a classification module 225, configured to classify the second fused feature based on the classification sub-network to obtain a quality identification result for the tissue images; and
a filtering module 226, configured to filter the tissue images of the multiple modalities according to the quality identification result to obtain filtered tissue images.
In an embodiment, the cross-channel convolution module 223 may be specifically configured to:
extract the features of the tissue images of the multiple modalities based on a feature extraction unit;
perform feature reformation on the features of the tissue images of the multiple modalities based on a modality reformation unit to obtain reformed features; and
fuse the reformed features based on a feature fusion unit to obtain the first fused feature.
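The three steps above (extract, reform, fuse) can be sketched roughly in NumPy as follows. This is a minimal illustration under assumptions: the exact reformation rule and the fusion weights are left open by the embodiment, so the channel interleaving and the 1x1-style mixing matrix below are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: feature extraction — stand-in features for M modalities,
# each with C channels over an H x W spatial grid.
M, C, H, W = 3, 4, 8, 8
features = [rng.standard_normal((C, H, W)) for _ in range(M)]

# Step 2: modality reformation — stack and interleave the channels so
# that same-index channels of different modalities sit next to each other.
stacked = np.stack(features)                        # (M, C, H, W)
reformed = stacked.transpose(1, 0, 2, 3).reshape(M * C, H, W)

# Step 3: feature fusion — a 1x1 "cross-channel" convolution amounts to a
# learned mixing matrix applied across the channel axis.
mix = rng.standard_normal((C, M * C))               # assumed learned weights
first_fused = np.tensordot(mix, reformed, axes=1)   # (C, H, W)
```

After the interleave, every output channel of the fusion mixes information from all modalities, which is the point of the cross-channel convolution.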
In an embodiment, the convolution module 224 may be specifically configured to:
determine the first fused feature as a current input feature of a current convolution module;
perform dense feature extraction on the current input feature based on the convolution submodules in the current convolution module;
use the extracted feature as the current input feature, and select a remaining convolution module in the convolution sub-network as the current convolution module; and
return to the step of performing dense feature extraction on the current input feature based on the convolution submodules in the current convolution module, until a first feature extraction termination condition is met, to obtain the second fused feature.
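Leaving aside the learned kernels, the dense-connection behaviour described above — each submodule receiving the fusion of the module input and the features of all preceding submodules, in the DenseNet style — can be sketched as follows. The `conv` function is a hypothetical placeholder for a real convolution, and the submodule count and growth rate are assumed values:

```python
import numpy as np

def conv(x, out_channels, rng):
    """Placeholder for a convolution: a random channel mixing that keeps
    the spatial size and produces `out_channels` output channels."""
    w = rng.standard_normal((out_channels, x.shape[0]))
    return np.tensordot(w, x, axes=1)

def dense_module(x, num_submodules=3, growth=2, rng=None):
    """One convolution module: every submodule sees the concatenation of
    the module input and all previously extracted (history) features."""
    if rng is None:
        rng = np.random.default_rng(0)
    feats = [x]
    for _ in range(num_submodules):
        inp = np.concatenate(feats)            # fuse history features
        feats.append(conv(inp, growth, rng))   # target extracted feature
    return np.concatenate(feats)               # module output

x = np.ones((4, 8, 8))
out = dense_module(x)
# channels: 4 (input) + 3 submodules x growth 2 = 10
```

Because every intermediate feature is reused by all later submodules, width grows without a matching growth in depth, which is the trade-off the description credits to dense connections.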
In an embodiment, referring to FIG. 23, the image filtering apparatus may further include:
a scan type obtaining module 227, configured to identify a scan type of the filtered tissue images; and
a target image obtaining module 228, configured to filter the filtered tissue images according to the scan type to obtain a target image.
In an embodiment, the scan type obtaining module 227 may be specifically configured to:
identify the filtered tissue images based on an image scan type identification network model to obtain the scan type of the filtered tissue images.
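As a toy illustration of this second filtering stage, the module keeps only images whose predicted scan type is acceptable. The scan-type labels, the `classify` callable, and the set of "wanted" types below are all assumptions for illustration:

```python
# Hypothetical clinically wanted scan types (labels are assumed names).
WANTED = {"macula_line", "disc_circle"}

def filter_by_scan_type(images, classify):
    """Keep only images whose predicted scan type is in WANTED.
    `classify` stands in for the scan type identification network."""
    return [img for img in images if classify(img) in WANTED]

# Stand-in predictions from the scan type identification model.
preds = {"a": "macula_line", "b": "disc_line", "c": "disc_circle"}
target = filter_by_scan_type(["a", "b", "c"], preds.get)
# → ["a", "c"]
```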
During specific implementation, the foregoing units may be implemented as independent entities, or may be combined arbitrarily and implemented as one or several entities. For specific implementations of the foregoing units, reference may be made to the foregoing method embodiments, and details are not described herein again.
As can be seen from the above, in this embodiment of the present application, the obtaining module 221 obtains tissue images of multiple modalities of a target tissue; the determining module 222 determines a quality control network model to be used, the quality control network model including a cross-channel convolution sub-network, a convolution sub-network, and a classification sub-network connected in sequence; the cross-channel convolution module 223 fuses the features of the tissue images of the multiple modalities based on the cross-channel convolution sub-network to obtain a first fused feature; the convolution module 224 performs dense feature extraction on the first fused feature based on the convolution sub-network to obtain a second fused feature; the classification module 225 classifies the second fused feature based on the classification sub-network to obtain a quality identification result for the tissue images; and the filtering module 226 filters the tissue images of the multiple modalities according to the quality identification result to obtain filtered tissue images. By introducing the idea of dense connections into the network model, this scheme balances the width and depth of the network and increases the richness and reusability of features within each modality. Through cross-channel convolution, the features of different modalities are enriched by combining information across the multi-modality images. A novel sampler increases the sampling rate of difficult samples, so that difficult samples are sampled repeatedly, thereby improving image filtering efficiency and accuracy.
An embodiment of the present application further provides a network device, which may be a device such as a server or a terminal, and which integrates any image filtering apparatus provided by the embodiments of the present application. As shown in FIG. 24, FIG. 24 is a schematic structural diagram of the network device provided by this embodiment of the present application. Specifically:
The network device may include components such as a processor 241 having one or more processing cores, a memory 242 having one or more computer-readable storage media, a power supply 243, and an input unit 244. Those skilled in the art will understand that the network device structure shown in FIG. 24 does not constitute a limitation on the network device; the network device may include more or fewer components than shown, combine certain components, or have a different component arrangement, wherein:
The processor 241 is the control center of the network device and connects the various parts of the entire network device through various interfaces and lines. By running or executing the software programs and/or modules stored in the memory 242 and invoking the data stored in the memory 242, the processor 241 performs the various functions of the network device and processes data, thereby monitoring the network device as a whole. Optionally, the processor 241 may include one or more processing cores. Preferably, the processor 241 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It is to be understood that the modem processor may alternatively not be integrated into the processor 241.
The memory 242 may be configured to store software programs and modules, and the processor 241 runs the software programs and modules stored in the memory 242 to perform various functional applications and data processing. The memory 242 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like, and the data storage area may store data created according to use of the network device, and the like. In addition, the memory 242 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device. Correspondingly, the memory 242 may further include a memory controller to provide the processor 241 with access to the memory 242.
The network device further includes the power supply 243 for supplying power to the components. Preferably, the power supply 243 may be logically connected to the processor 241 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power supply 243 may further include one or more direct-current or alternating-current power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other components.
The network device may further include the input unit 244, which may be configured to receive input digit or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
Although not shown, the network device may further include a display unit and the like, and details are not described herein again. Specifically, in this embodiment, the processor 241 in the network device loads executable files corresponding to the processes of one or more application programs into the memory 242 according to the following instructions, and the processor 241 runs the application programs stored in the memory 242 to implement various functions as follows:
obtaining tissue images of multiple modalities of a target tissue; determining a quality control network model to be used, the quality control network model including a cross-channel convolution sub-network, a convolution sub-network, and a classification sub-network connected in sequence; fusing the features of the tissue images of the multiple modalities based on the cross-channel convolution sub-network to obtain a first fused feature; performing dense feature extraction on the first fused feature based on the convolution sub-network to obtain a second fused feature; classifying the second fused feature based on the classification sub-network to obtain a quality identification result for the tissue images; and filtering the tissue images of the multiple modalities according to the quality identification result to obtain filtered tissue images.
The processor 241 may also run the application programs stored in the memory 242 to implement the following functions:
obtaining tissue images of multiple modalities of a target tissue; determining a quality control network model to be used, the quality control network model including a cross-channel convolution sub-network, a convolution sub-network, and a classification sub-network connected in sequence; fusing the features of the tissue images of the multiple modalities based on the cross-channel convolution sub-network to obtain a first fused feature; performing dense feature extraction on the first fused feature based on the convolution sub-network to obtain a second fused feature; classifying the second fused feature based on the classification sub-network to obtain a quality identification result for the tissue images; and filtering the tissue images of the multiple modalities according to the quality identification result to obtain filtered tissue images.
For specific implementations of the foregoing operations, reference may be made to the foregoing embodiments, and details are not described herein again.
As can be seen from the above, in this embodiment of the present application, tissue images of multiple modalities of a target tissue are obtained, and a quality control network model to be used is determined, the quality control network model including a cross-channel convolution sub-network, a convolution sub-network, and a classification sub-network connected in sequence. The features of the tissue images of the multiple modalities are fused based on the cross-channel convolution sub-network to obtain a first fused feature; dense feature extraction is performed on the first fused feature based on the convolution sub-network to obtain a second fused feature; the second fused feature is classified based on the classification sub-network to obtain a quality identification result for the tissue images; and the tissue images of the multiple modalities are filtered according to the quality identification result to obtain filtered tissue images. By introducing the idea of dense connections into the network model, this scheme balances the width and depth of the network and increases the richness and reusability of features within each modality. Through cross-channel convolution, the features of different modalities are enriched by combining information across the multi-modality images. A novel sampler increases the sampling rate of difficult samples, so that difficult samples are sampled repeatedly, thereby improving image filtering efficiency and accuracy.
A person of ordinary skill in the art will understand that all or some of the steps of the various methods in the foregoing embodiments may be completed by instructions, or by instructions controlling relevant hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a storage medium storing a plurality of instructions, which can be loaded by a processor to perform the steps of any image filtering method provided by the embodiments of the present application. For example, the instructions may perform the following steps:
obtaining tissue images of multiple modalities of a target tissue; determining a quality control network model to be used, the quality control network model including a cross-channel convolution sub-network, a convolution sub-network, and a classification sub-network connected in sequence; fusing the features of the tissue images of the multiple modalities based on the cross-channel convolution sub-network to obtain a first fused feature; performing dense feature extraction on the first fused feature based on the convolution sub-network to obtain a second fused feature; classifying the second fused feature based on the classification sub-network to obtain a quality identification result for the tissue images; and filtering the tissue images of the multiple modalities according to the quality identification result to obtain filtered tissue images.
For specific implementations of the foregoing operations, reference may be made to the foregoing embodiments, and details are not described herein again.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Because the instructions stored in the storage medium can perform the steps of any image filtering method provided by the embodiments of the present application, they can achieve the beneficial effects achievable by any image filtering method provided by the embodiments of the present application. For details, reference is made to the foregoing embodiments, and details are not described herein again.
The image filtering method, apparatus, and storage medium provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the descriptions of the foregoing embodiments are merely intended to help understand the method of the present application and its core idea. Meanwhile, a person skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In conclusion, the content of this specification should not be construed as a limitation on the present application.
Claims (12)
1. An image filtering method, comprising:
obtaining tissue images of multiple modalities of a target tissue;
determining a quality control network model to be used, the quality control network model comprising a cross-channel convolution sub-network, a convolution sub-network, and a classification sub-network connected in sequence;
fusing features of the tissue images of the multiple modalities based on the cross-channel convolution sub-network to obtain a first fused feature;
performing dense feature extraction on the first fused feature based on the convolution sub-network to obtain a second fused feature;
classifying the second fused feature based on the classification sub-network to obtain a quality identification result of the tissue images; and
filtering the tissue images of the multiple modalities according to the quality identification result to obtain filtered tissue images.
2. The image filtering method according to claim 1, wherein the convolution sub-network comprises at least one convolution module, and the convolution module comprises at least one densely connected convolution submodule; and
the performing dense feature extraction on the first fused feature based on the convolution sub-network to obtain a second fused feature comprises:
determining the first fused feature as a current input feature of a current convolution module;
performing dense feature extraction on the current input feature based on the convolution submodules in the current convolution module;
using the extracted feature as the current input feature, and selecting a remaining convolution module in the convolution sub-network as the current convolution module; and
returning to the step of performing dense feature extraction on the current input feature based on the convolution submodules in the current convolution module, until a first feature extraction termination condition is met, to obtain the second fused feature.
3. The image filtering method according to claim 2, wherein the performing dense feature extraction on the current input feature based on the convolution submodules in the current convolution module comprises:
determining the current input feature of the current convolution module as a target input feature of a current convolution submodule;
performing feature extraction on the target input feature based on the current convolution submodule to obtain a target extracted feature;
fusing, based on the current convolution submodule, the target extracted feature with history extracted features, wherein the history extracted features are features extracted by all history convolution submodules preceding the current convolution submodule in the current convolution module;
using the fused feature as the target input feature, and selecting a remaining convolution submodule in the current convolution module as the current convolution submodule; and
returning to the step of performing feature extraction on the target input feature based on the current convolution submodule to obtain a target extracted feature, until a second feature extraction termination condition is met.
4. The image filtering method according to claim 3, wherein the convolution submodule comprises multiple convolution branches; and
the performing feature extraction on the target input feature based on the current convolution submodule to obtain a target extracted feature comprises:
performing feature extraction on the target input feature separately based on the multiple convolution branches to obtain multiple convolution output features; and
fusing the target input feature with the multiple convolution output features to obtain the target extracted feature.
5. The image filtering method according to claim 1, wherein the cross-channel convolution sub-network comprises a feature extraction unit, a modality reformation unit, and a feature fusion unit; and
the fusing features of the tissue images of the multiple modalities based on the cross-channel convolution sub-network to obtain a first fused feature comprises:
extracting the features of the tissue images of the multiple modalities based on the feature extraction unit;
performing feature reformation on the features of the tissue images of the multiple modalities based on the modality reformation unit to obtain reformed features; and
fusing the reformed features based on the feature fusion unit to obtain the first fused feature.
6. The image filtering method according to claim 1, further comprising:
identifying a scan type of the filtered tissue images; and
filtering the filtered tissue images according to the scan type to obtain a target image.
7. The image filtering method according to claim 6, wherein an image scan type identification network model comprises a convolution sub-network and a classification sub-network connected in sequence, the convolution sub-network comprises at least one convolution module, and the convolution module comprises multiple densely connected convolution submodules; and
the identifying a scan type of the filtered tissue images comprises:
identifying the filtered tissue images based on the image scan type identification network model to obtain the scan type of the filtered tissue images.
8. The image filtering method according to claim 1, further comprising:
obtaining a training set, the training set comprising multiple training samples;
sampling the training set according to sampling probabilities of sample images to obtain a target training sample;
training a quality control network model based on the target training sample to obtain a trained quality control network model;
obtaining, based on the trained quality control network model, a loss function corresponding to the sample images in the target training sample; and
updating the sampling probabilities of the sample images in the target training sample according to the loss function, and returning to the step of sampling the training set according to the sampling probabilities of the sample images, until a sampling termination condition is met.
9. The image filtering method according to claim 8, wherein the updating the sampling probabilities of the sample images in the target training sample according to the loss function comprises:
sorting the sample images in the target training sample according to the loss function by a preset rule to obtain an order of the sample images in the target training sample; and
updating the sampling probabilities of the sample images in the target training sample according to the order of the sample images in the target training sample.
10. An image filtering apparatus, comprising:
an obtaining module, configured to obtain tissue images of multiple modalities of a target tissue;
a determining module, configured to determine a quality control network model to be used, the quality control network model comprising a cross-channel convolution sub-network, a convolution sub-network, and a classification sub-network connected in sequence;
a cross-channel convolution module, configured to fuse features of the tissue images of the multiple modalities based on the cross-channel convolution sub-network to obtain a first fused feature;
a convolution module, configured to perform dense feature extraction on the first fused feature based on the convolution sub-network to obtain a second fused feature;
a classification module, configured to classify the second fused feature based on the classification sub-network to obtain a quality identification result of the tissue images; and
a filtering module, configured to filter the tissue images of the multiple modalities according to the quality identification result to obtain filtered tissue images.
11. A network device, comprising a memory and a processor, the memory storing instructions, and the processor loading the instructions to perform the steps of the method according to any one of claims 1 to 10.
12. A storage medium storing a computer program, wherein when the computer program runs on a computer, the computer is caused to perform the image filtering method according to any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910112755.3A CN109815965B (en) | 2019-02-13 | 2019-02-13 | Image filtering method and device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109815965A true CN109815965A (en) | 2019-05-28 |
CN109815965B CN109815965B (en) | 2021-07-06 |
Family
ID=66606526
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910112755.3A Active CN109815965B (en) | 2019-02-13 | 2019-02-13 | Image filtering method and device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109815965B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110232719A (en) * | 2019-06-21 | 2019-09-13 | 腾讯科技(深圳)有限公司 | A kind of classification method of medical image, model training method and server |
CN110458829A (en) * | 2019-08-13 | 2019-11-15 | 腾讯医疗健康(深圳)有限公司 | Image quality control method, device, equipment and storage medium based on artificial intelligence |
CN110503636A (en) * | 2019-08-06 | 2019-11-26 | 腾讯医疗健康(深圳)有限公司 | Parameter regulation means, lesion prediction technique, parameter adjustment controls and electronic equipment |
CN110533106A (en) * | 2019-08-30 | 2019-12-03 | 腾讯科技(深圳)有限公司 | Image classification processing method, device and storage medium |
CN111506760A (en) * | 2020-03-30 | 2020-08-07 | 杭州电子科技大学 | Depth integration measurement image retrieval method based on difficult perception |
CN112690809A (en) * | 2020-02-04 | 2021-04-23 | 首都医科大学附属北京友谊医院 | Method, device, server and storage medium for determining equipment abnormality reason |
CN113269279A (en) * | 2021-07-16 | 2021-08-17 | 腾讯科技(深圳)有限公司 | Multimedia content classification method and related device |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103093087A (en) * | 2013-01-05 | 2013-05-08 | 电子科技大学 | Multimodal brain network feature fusion method based on multi-task learning |
US20130129174A1 (en) * | 2011-11-23 | 2013-05-23 | Sasa Grbic | Method and System for Model-Based Fusion of Computed Tomography and Non-Contrasted C-Arm Computed Tomography |
US20130182148A1 (en) * | 2007-02-28 | 2013-07-18 | National University Of Ireland | Separating Directional Lighting Variability in Statistical Face Modelling Based on Texture Space Decomposition |
CN106203497A (en) * | 2016-07-01 | 2016-12-07 | 浙江工业大学 | A kind of finger vena area-of-interest method for screening images based on image quality evaluation |
CN106909905A (en) * | 2017-03-02 | 2017-06-30 | 中科视拓(北京)科技有限公司 | A kind of multi-modal face identification method based on deep learning |
CN107184224A (en) * | 2017-05-18 | 2017-09-22 | 太原理工大学 | A kind of Lung neoplasm diagnostic method based on bimodal extreme learning machine |
CN107610123A (en) * | 2017-10-11 | 2018-01-19 | 中共中央办公厅电子科技学院 | A kind of image aesthetic quality evaluation method based on depth convolutional neural networks |
US9934436B2 (en) * | 2014-05-30 | 2018-04-03 | Leidos Innovations Technology, Inc. | System and method for 3D iris recognition |
CN108182456A (en) * | 2018-01-23 | 2018-06-19 | 哈工大机器人(合肥)国际创新研究院 | A kind of target detection model and its training method based on deep learning |
CN108449596A (en) * | 2018-04-17 | 2018-08-24 | 福州大学 | A kind of 3D stereo image quality appraisal procedures of fusion aesthetics and comfort level |
CN108537773A (en) * | 2018-02-11 | 2018-09-14 | 中国科学院苏州生物医学工程技术研究所 | Intelligence auxiliary mirror method for distinguishing is carried out for cancer of pancreas and pancreas inflammatory disease |
CN109063728A (en) * | 2018-06-20 | 2018-12-21 | 燕山大学 | A kind of fire image deep learning mode identification method |
CN109285149A (en) * | 2018-09-04 | 2019-01-29 | 杭州比智科技有限公司 | Appraisal procedure, device and the calculating equipment of quality of human face image |
Non-Patent Citations (2)
Title |
---|
Saskia Camps et al.: "Quality Assessment of Transperineal Ultrasound Images of the Male Pelvic Region Using Deep Learning", 2018 IEEE International Ultrasonics Symposium (IUS) * |
Wu Lixiu et al.: "No-Reference Mixed Distortion Image Quality Assessment Based on Convolutional Neural Networks", Optical Technique * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020253773A1 (en) * | 2019-06-21 | 2020-12-24 | 腾讯科技(深圳)有限公司 | Medical image classification method, model training method, computing device and storage medium |
CN110428475A (en) * | 2019-06-21 | 2019-11-08 | 腾讯科技(深圳)有限公司 | A kind of classification method of medical image, model training method and server |
US11954852B2 (en) | 2019-06-21 | 2024-04-09 | Tencent Technology (Shenzhen) Company Limited | Medical image classification method, model training method, computing device, and storage medium |
CN110232719A (en) * | 2019-06-21 | 2019-09-13 | 腾讯科技(深圳)有限公司 | A kind of classification method of medical image, model training method and server |
CN110503636A (en) * | 2019-08-06 | 2019-11-26 | 腾讯医疗健康(深圳)有限公司 | Parameter regulation means, lesion prediction technique, parameter adjustment controls and electronic equipment |
CN110503636B (en) * | 2019-08-06 | 2024-01-26 | 腾讯医疗健康(深圳)有限公司 | Parameter adjustment method, focus prediction method, parameter adjustment device and electronic equipment |
CN110458829B (en) * | 2019-08-13 | 2024-01-30 | 腾讯医疗健康(深圳)有限公司 | Image quality control method, device, equipment and storage medium based on artificial intelligence |
CN110458829A (en) * | 2019-08-13 | 2019-11-15 | 腾讯医疗健康(深圳)有限公司 | Image quality control method, device, equipment and storage medium based on artificial intelligence |
CN110533106A (en) * | 2019-08-30 | 2019-12-03 | 腾讯科技(深圳)有限公司 | Image classification processing method, device and storage medium |
CN112690809A (en) * | 2020-02-04 | 2021-04-23 | 首都医科大学附属北京友谊医院 | Method, device, server and storage medium for determining equipment abnormality reason |
CN112690809B (en) * | 2020-02-04 | 2021-09-24 | 首都医科大学附属北京友谊医院 | Method, device, server and storage medium for determining equipment abnormality reason |
CN111506760A (en) * | 2020-03-30 | 2020-08-07 | 杭州电子科技大学 | Depth integration measurement image retrieval method based on difficult perception |
CN111506760B (en) * | 2020-03-30 | 2021-04-20 | 杭州电子科技大学 | Depth integration measurement image retrieval method based on difficult perception |
CN113269279A (en) * | 2021-07-16 | 2021-08-17 | 腾讯科技(深圳)有限公司 | Multimedia content classification method and related device |
CN113269279B (en) * | 2021-07-16 | 2021-10-15 | 腾讯科技(深圳)有限公司 | Multimedia content classification method and related device |
Also Published As
Publication number | Publication date |
---|---|
CN109815965B (en) | 2021-07-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109815965A (en) | A kind of image filtering method, device and storage medium | |
Bilal et al. | Diabetic retinopathy detection and classification using mixed models for a disease grading database | |
Dundar et al. | Computerized classification of intraductal breast lesions using histopathological images | |
Xiong et al. | An approach to evaluate blurriness in retinal images with vitreous opacity for cataract diagnosis | |
Kauppi | Eye fundus image analysis for automatic detection of diabetic retinopathy | |
CN108717554A (en) | A kind of thyroid tumors histopathologic slide image classification method and its device | |
CN109190540A (en) | Biopsy regions prediction technique, image-recognizing method, device and storage medium | |
Liu et al. | Computerized macular pathology diagnosis in spectral domain optical coherence tomography scans based on multiscale texture and shape features | |
CN107330449A (en) | A kind of BDR sign detection method and device | |
CN110363226A (en) | Ophthalmology disease classifying identification method, device and medium based on random forest | |
CN110349147A (en) | Training method, the lesion recognition methods of fundus flavimaculatus area, device and the equipment of model | |
CN110097545A (en) | Eye fundus image generation method based on deep learning | |
CN108596895A (en) | Eye fundus image detection method based on machine learning, apparatus and system | |
CN110188613A (en) | Image classification method and equipment | |
CN105868561A (en) | Health monitoring method and device | |
CN113158821A (en) | Multimodal eye detection data processing method and device and terminal equipment | |
CN109919932A (en) | The recognition methods of target object and device | |
Zheng et al. | Research on an intelligent lightweight‐assisted pterygium diagnosis model based on anterior segment images | |
Li et al. | Automated quality assessment and image selection of ultra-widefield fluorescein angiography images through deep learning | |
CN110033861A (en) | Suitable for the blood vessel of OCTA image and the quantitative analysis method of macula lutea avascular area and system | |
Panda et al. | A detailed systematic review on retinal image segmentation methods | |
CN109003659A (en) | Stomach Helicobacter pylori infects pathological diagnosis and supports system and method | |
CN111951950B (en) | Three-dimensional data medical classification system based on deep learning | |
CN108573267A (en) | A kind of method and device of liver organization textural classification | |
Iqbal et al. | Automatic diagnosis of diabetic retinopathy using fundus images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||