CN114399635A - Image two-classification ensemble learning method based on feature definition and deep learning - Google Patents


Info

Publication number
CN114399635A (application CN202210299753.1A)
Authority
CN
China
Prior art keywords: image, classification, ROI, deep learning
Prior art date
Legal status
Pending
Application number
CN202210299753.1A
Other languages
Chinese (zh)
Inventor
曾庆超
韩峰涛
庹华
袁顺宁
王利利
张立炀
李亚楠
Current Assignee
Rokae Inc
Original Assignee
Rokae Inc
Priority date
Filing date
Publication date
Application filed by Rokae Inc filed Critical Rokae Inc
Priority to CN202210299753.1A priority Critical patent/CN114399635A/en
Publication of CN114399635A publication Critical patent/CN114399635A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks


Abstract

The invention provides an image two-classification ensemble learning method based on feature definition and deep learning, which comprises the following steps: step S1, segmenting a region of interest (ROI) in the target image; step S2, classifying the ROI obtained in step S1 based on feature definition: first extracting and screening features of the ROI, then feeding the resulting effective features into machine learning classifiers for training and classification; step S3, classifying the image ROI segmented in step S1 based on deep learning, and predicting the probability of each class; step S4, integrating, for each image ROI, the feature-definition-based classification probabilities from step S2 with the deep-learning classification probabilities from step S3.

Description

Image two-classification ensemble learning method based on feature definition and deep learning
Technical Field
The invention relates to the technical field of image classification, and in particular to an image two-classification ensemble learning method based on feature definition and deep learning.
Background
In the field of image classification, most current mainstream algorithms rely on a single classifier: either a machine learning algorithm built on feature definition or a deep learning algorithm. In the industrial and medical fields, the target is usually segmented first and classification is then performed on the segmented target. Machine learning classification pipelines use hand-crafted feature extraction followed by a classifier, while deep learning mostly adopts an integrated segmentation-and-classification framework, commonly the YOLO series or the Mask R-CNN neural network. A single image classification algorithm places high demands on image quality and sample balance; for highly complex medical images, or for the imbalanced data sets typical of industrial defect detection, a single classification algorithm tends to show poor robustness or low accuracy.
In the field of ensemble learning for image classification, existing ensemble algorithms likewise combine either multiple feature-definition-based machine learning classifiers or multiple deep learning classification models, but not both. The current mainstream algorithms for object classification are as follows:
First, image classification based on traditional feature definition. The target object is segmented from the target image with a traditional segmentation algorithm or a deep learning algorithm; morphological, statistical and texture features of the object are then extracted, fused and screened to obtain an effective feature set; finally, a suitable machine learning classifier is selected to classify the images.
Second, classification methods based on deep learning, which benefit from the development of CNNs. Compared with traditional manual feature extraction, convolutional feature extraction greatly simplifies the feature-engineering work. Deep learning extracts features through designed convolution modules; the feature matrix obtained by convolution plays the same role as the morphological, statistical and texture features of the traditional approach, and is then flattened into a one-dimensional vector connected to a fully connected layer for classification output. Different neural network classification models differ only in their feature-extraction convolution modules: for example, ResNet differs from the VGG series in being deeper and in adding shortcut connections between modules, which makes gradients less prone to vanishing during backpropagation, while the Inception series widens the network relative to the other two. In summary, compared with the traditional feature-definition approach, deep learning extracts features automatically, and various convolution module designs can give the extracted features stronger discriminative power.
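The pipeline this paragraph describes — convolutional feature extraction, flattening into a one-dimensional vector, then a fully connected layer producing class probabilities — can be sketched in plain NumPy. The kernel and weights here are random stand-ins for learned parameters, purely for illustration:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Single-channel 2-D convolution with valid padding: the basic feature-extraction op."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
roi = rng.random((8, 8))                 # toy ROI image
kernel = rng.standard_normal((3, 3))     # one filter (random stand-in for a learned one)
feat = conv2d_valid(roi, kernel)         # "feature matrix" obtained by convolution
vec = feat.ravel()                       # stretched into a one-dimensional vector
W = rng.standard_normal((2, vec.size))   # fully connected layer for two classes
probs = softmax(W @ vec)                 # class probabilities
print(probs.shape)
```

A real network stacks many such modules and learns `kernel` and `W` by backpropagation; the data flow, however, is exactly this.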
Third, image classification algorithms based on ensemble learning. Most existing ensemble methods build on a single paradigm: feature-definition-based ensembles construct several machine learning classifiers, such as SVMs, Bayesian classifiers and decision trees, and combine them by voting or weighting; ensembles in the deep learning field adopt a similar strategy, improving classification accuracy by integrating multiple classification models.
The main drawbacks of the above technical solutions are as follows. Both kinds of single classifier can achieve high classification accuracy on simple scenes and balanced data sets, but the classification effect degrades on complex scenes and imbalanced samples. Many fields present exactly these conditions, for example the classification of nuclear magnetic resonance or ultrasonic tumor images in medicine, and the imbalanced data sets of product quality classification in industrial quality inspection; there a single classification algorithm is limited in robustness and accuracy.
Existing ensemble learning algorithms consider only ensembles of feature-definition-based machine learning classifiers, or only ensembles of deep learning models. Although various combination schemes and parameter-tuning methods can improve accuracy and robustness, machine learning and deep learning each have their own strengths and weaknesses, and the gain achievable by ensembling within a single paradigm is limited.
Disclosure of Invention
The object of the present invention is to solve at least one of the technical drawbacks mentioned.
Therefore, the invention aims to provide an image two-classification ensemble learning method based on feature definition and deep learning.
In order to achieve the above object, an embodiment of the present invention provides an image two-class ensemble learning method based on feature definition and deep learning, including the following steps:
step S1, segmenting a region of interest (ROI) in the target image;
step S2, classifying the ROI obtained in step S1 based on feature definition: first extracting and screening features of the ROI, then feeding the resulting effective features into machine learning classifiers for training and classification;
step S3, classifying the image ROI segmented in step S1 based on deep learning, and predicting the probability of each class;
step S4, integrating, for each image ROI, the feature-definition-based classification probabilities from step S2 with the deep-learning classification probabilities from step S3, in one of the following forms:
(1) weighted integration of results
$$p = \sum_{i=1}^{n} \alpha_i\, p_i + \sum_{j=1}^{m} \beta_j\, q_j$$

where $p$ is the final classification probability of the image ROI, $p_1, p_2, \dots, p_n$ and $q_1, q_2, \dots, q_m$ are respectively the classification probabilities output for the image ROI by the $n$ feature-definition-based machine learning classifiers and by the $m$ deep learning models, and $\alpha_1, \alpha_2, \dots, \alpha_n$ and $\beta_1, \beta_2, \dots, \beta_m$ are respectively the coefficients on the output probabilities of the $n$ machine learning classifiers and on the classification probabilities output by the $m$ deep learning classification models, defined as:

$$\alpha_i = \frac{c_i}{\sum_{k=1}^{n} c_k + \sum_{l=1}^{m} d_l}$$

$$\beta_j = \frac{d_j}{\sum_{k=1}^{n} c_k + \sum_{l=1}^{m} d_l}$$

where $c_1, c_2, \dots, c_n$ and $d_1, d_2, \dots, d_m$ are the numbers of samples classified correctly, on the same sample population, by the $n$ machine learning classifiers and the $m$ deep learning classification models; the final prediction probability $p$ of the image ROI is then obtained by this weighted integration;
(2) integration of voting modes
Selecting n machine learning classifiers and m deep learning classification models; when judging which class a certain image ROI belongs to, the class chosen by the majority of the classifiers is taken as the result, and the largest probability that any classifier assigns to that class is taken as its final classification probability p.
Further, in the step S1, the region of interest ROI is segmented by using a deep learning algorithm or an image segmentation algorithm.
Further, in step S2, the feature extraction and screening of the ROI comprises the following steps: performing a wavelet transform on the ROI over the two spatial dimensions, each dimension yielding a low-frequency band L and a high-frequency band H, so that four sub-bands are produced in total, low-low LL, low-high LH, high-low HL and high-high HH; statistical features and texture features are then extracted from the original image and from each of the four sub-bands.
Further, the statistical features include: pixel distribution kurtosis, pixel skewness, pixel standard deviation, pixel variance, pixel energy value, pixel root mean square, and pixel entropy.
Further, the texture features include a gray level co-occurrence matrix, a gray level run length matrix, a gray level region size matrix and a neighborhood gray level difference matrix.
Further, in step S3, m deep learning classification models are selected, and the classification probability $q_1, q_2, \dots, q_m$ of each model is obtained as follows: the ROI regions obtained in step S1 are divided into a training set, a test set and a verification set, the division ratio being chosen according to the size of each class in the data set; the training set and the verification set are then fed into the deep learning networks for training, and finally each model's classification probability for the image ROI is output.
According to the image two-classification ensemble learning method based on feature definition and deep learning of the embodiments of the present invention, feature-definition-based machine learning and deep learning are combined to predict the class of an image, and two specific modes of integrating the results are provided: weighted integration and voting integration.
The method combines the respective advantages of feature-definition-based machine learning and of deep learning in deciding and analyzing image classes, further improving the stability and accuracy of the result. The proposed ensemble learning image classification method allows the number and kind of classification models to be chosen freely, and can effectively improve performance over the base classifiers. The result-integration part can use either a weighted mode or a voting mode, selected to suit the specific data set.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flowchart of an image two-class ensemble learning method based on feature definition and deep learning according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an image two-classification ensemble learning method based on feature definition and deep learning according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The invention provides an image two-classification ensemble learning method based on feature definition and deep learning. The method trains several base classification models, then performs weighted integration of their output classification probabilities to obtain the final classification probability; it applies to the common two-class case and to multi-class cases.
As shown in fig. 1 and fig. 2, the image two-class ensemble learning method based on feature definition and deep learning according to the embodiment of the present invention includes the following steps:
in step S1, a region of interest ROI in the target image is segmented.
In step S1, the region of interest ROI is segmented by using a deep learning algorithm or an image segmentation algorithm.
Specifically, the segmentation may use a deep learning method or a conventional image segmentation algorithm. With deep learning, the ROI can be obtained by the YOLO series or by Mask R-CNN. For simpler images a conventional segmentation algorithm may be used, for example finding the outline of the image ROI and then performing hole filling to obtain the ROI region.
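For the conventional route on simple images, the following NumPy sketch segments an ROI by thresholding and cropping the foreground bounding box — a simplified stand-in for the contour-plus-hole-filling procedure, with the threshold value an assumption:

```python
import numpy as np

def extract_roi(image, thresh):
    """Binarize the image and crop the bounding box of the foreground as the ROI."""
    mask = image > thresh
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("no foreground found")
    roi = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return roi, mask

img = np.zeros((10, 10))
img[3:7, 2:6] = 1.0          # bright square standing in for the target object
roi, mask = extract_roi(img, 0.5)
print(roi.shape)             # (4, 4)
```

A real pipeline would replace the threshold with a learned or adaptive segmenter and fill interior holes before cropping.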
Step S2, classifying the ROI region acquired in step S1 based on feature definition, including: firstly, feature extraction and screening are carried out on the ROI, and then the obtained effective features are sent to a machine learning classifier for training and classification.
Specifically, feature extraction and screening are performed on the ROI, and the resulting effective features are fed into machine learning classifiers for training and classification. The method selects n machine learning classifiers and trains and predicts with each of them, obtaining the image ROI classification probabilities $p_1, p_2, \dots, p_n$.
The features of an image divide into morphological, texture and statistical features; suitable features can be selected for classification, and since different fields may care about different features, a feasible feature extraction scheme is given here.
First, a wavelet transform is applied to the ROI obtained in step S1. The images classified by this method are all 2D, so the wavelet transform is applied over the two spatial dimensions; each dimension yields a low-frequency band L and a high-frequency band H, producing four sub-bands in total, low-low LL, low-high LH, high-low HL and high-high HH. Statistical and texture features are then extracted from the original image and from each of the four sub-bands.
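The sub-band decomposition can be sketched with a one-level 2-D Haar transform; the patent does not name the wavelet family, so Haar is an assumption chosen for brevity:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar wavelet transform returning the LL, LH, HL, HH sub-bands.
    Assumes even height and width."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # low-pass over rows
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # high-pass over rows
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # low-high
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # high-low
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # high-high
    return LL, LH, HL, HH

roi = np.arange(16, dtype=float).reshape(4, 4)   # toy ROI with a constant gradient
LL, LH, HL, HH = haar_dwt2(roi)
print(LL.shape)   # (2, 2)
```

Each sub-band is half the size of the input in each dimension; for this constant-gradient toy image the diagonal detail HH is identically zero.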
In an embodiment of the invention, the statistical features include: pixel distribution kurtosis, pixel skewness, pixel standard deviation, pixel variance, pixel energy value, pixel root mean square, and pixel entropy.
Statistical features divide into first-order and higher-order statistics; this method computes first-order statistics, i.e. intensity features, which describe the distribution of pixel gray levels and quantify, by statistical means, how the pixel values of the image are distributed. The features are defined as follows, where $X = \{X_1, \dots, X_N\}$ denotes the $N$ pixel values of the ROI and $p$ is the pixel distribution histogram of the ROI:

(1) Pixel distribution Kurtosis

$$\mathrm{kurtosis} = \frac{\frac{1}{N}\sum_{i=1}^{N}(X_i - \bar{X})^4}{\left(\frac{1}{N}\sum_{i=1}^{N}(X_i - \bar{X})^2\right)^{2}} \qquad (1)$$

where the mean is $\bar{X} = \frac{1}{N}\sum_{i=1}^{N} X_i$.

(2) Pixel Skewness

$$\mathrm{skewness} = \frac{\frac{1}{N}\sum_{i=1}^{N}(X_i - \bar{X})^3}{\left(\frac{1}{N}\sum_{i=1}^{N}(X_i - \bar{X})^2\right)^{3/2}} \qquad (2)$$

(3) Pixel Standard Deviation

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(X_i - \bar{X})^2} \qquad (3)$$

(4) Pixel Variance

$$\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(X_i - \bar{X})^2 \qquad (4)$$

(5) Pixel Energy

$$\mathrm{energy} = \sum_{i=1}^{N} X_i^2 \qquad (5)$$

(6) Root Mean Square

$$\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} X_i^2} \qquad (6)$$

(7) Pixel Entropy

$$\mathrm{entropy} = -\sum_{k} p(k)\,\log_2 p(k) \qquad (7)$$
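The first-order features (1)–(7) can be computed directly from the flattened ROI pixels. A NumPy sketch; the 16-bin histogram used for the entropy is an assumed discretization, since the patent does not fix a bin count:

```python
import numpy as np

def first_order_features(x, bins=16):
    """First-order intensity features of an ROI pixel array (flattened)."""
    x = np.asarray(x, dtype=float).ravel()
    mu = x.mean()
    var = x.var()                                      # Eq. (4)
    std = x.std()                                      # Eq. (3)
    kurt = np.mean((x - mu) ** 4) / var ** 2           # Eq. (1)
    skew = np.mean((x - mu) ** 3) / std ** 3           # Eq. (2)
    energy = np.sum(x ** 2)                            # Eq. (5)
    rms = np.sqrt(np.mean(x ** 2))                     # Eq. (6)
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                       # drop empty bins (log2 undefined)
    entropy = -np.sum(p * np.log2(p))                  # Eq. (7)
    return dict(kurtosis=kurt, skewness=skew, std=std,
                variance=var, energy=energy, rms=rms, entropy=entropy)

feats = first_order_features(np.random.default_rng(1).random(100))
print(sorted(feats))
```

These seven values, computed on the original image and each wavelet sub-band, contribute to the statistical feature set described above.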
In addition, the texture features include a gray level co-occurrence matrix, a gray level run length matrix, a gray level region size matrix, and a domain gray level difference matrix.
Specifically, in addition to the above features, the mean of all pixel gray values with the mean absolute deviation, the median of all pixel gray values with its mean absolute deviation, and the interquartile range can be extracted; the interquartile range, computed as the 75th-percentile gray value minus the 25th-percentile gray value, is a robust estimate of the dispersion of the data. In all, 31 statistical features are obtained.
Texture features are also an important basis for classification. The method uses a gray level co-occurrence matrix, a gray level run length matrix, a gray level region size matrix and a neighborhood gray level difference matrix, and derives statistics from these four texture matrices to describe the texture features. The details are as follows:
The gray level co-occurrence matrix records how many times a pair of pixels in a given positional relationship occurs in the image matrix, and so expresses the direction and magnitude of gray-level change between neighboring pixels. From the image's gray level co-occurrence matrix, 22 statistical features are computed, such as autocorrelation, contrast, two correlation measures, cluster prominence and cluster shade.
The gray level run length matrix records the number of times the same gray value occurs consecutively along a line; that count is the run length. It reflects how the image gray level varies along a given direction. The method computes 13 run-length statistics of the matrix, including short run emphasis, long run emphasis, gray level non-uniformity, run length non-uniformity, run percentage, low gray level run emphasis, high gray level run emphasis, short run low gray level emphasis, short run high gray level emphasis and gray level variance.
The gray level region size matrix describes the bivariate conditional probability density of the image gray levels. A gray level region is defined as a connected set of pixels sharing the same gray level, so the matrix is especially effective at characterizing texture homogeneity and aperiodic or blob-like textures. The method computes 13 statistics of the matrix, including small region emphasis, large region emphasis, gray level non-uniformity, region size non-uniformity, region percentage, low gray level region emphasis, small region high gray level emphasis and gray level variance.
The neighborhood gray level difference matrix captures the spatial variation of gray levels in a digital image: within a specified distance, the gray level difference between each pixel and the pixels in its neighborhood is accumulated. Five statistics of this matrix are computed: coarseness, contrast, busyness, complexity and texture strength.
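Of the four texture matrices, the gray level co-occurrence matrix is the simplest to illustrate. This NumPy sketch builds it for a single horizontal offset and computes the contrast statistic, one of the 22 mentioned above; the offset and the quantization to four gray levels are assumptions for the toy example:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray level co-occurrence matrix for one pixel offset, normalized to probabilities."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1   # count the (pixel, neighbor) pair
    return g / g.sum()

def glcm_contrast(g):
    """Contrast: expected squared gray-level difference of co-occurring pairs."""
    i, j = np.indices(g.shape)
    return float(np.sum(g * (i - j) ** 2))

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])      # toy image quantized to 4 gray levels
g = glcm(img, levels=4)
print(round(glcm_contrast(g), 4))
```

In practice the matrix is accumulated over several directions and distances, and the other statistics (correlation, cluster prominence, etc.) are read off the same normalized matrix.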
After the 84 statistical and texture features are defined, they are computed for the original ROI image and for each of its four wavelet sub-bands, giving 420 features in total. Since only a few features in this initial feature set are decisive, the decisive ones must be found by some rule, reducing redundancy and improving classification accuracy. A sequential forward feature selection algorithm is used here to search for effective features. Finally, a suitable machine learning classifier, such as an SVM or a Bayesian classifier, is selected for classification and outputs the class probabilities.
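Sequential forward selection greedily grows the feature subset, adding at each step the feature that most improves a score. A minimal sketch under assumptions: the scorer (training accuracy of a nearest-centroid rule) and the synthetic data are stand-ins for the patent's actual classifier and 420-dimensional feature set:

```python
import numpy as np

def nearest_centroid_accuracy(X, y, idx):
    """Toy scorer: training accuracy of a nearest-centroid rule on the feature subset idx."""
    Xs = X[:, idx]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

def sequential_forward_selection(X, y, k):
    """Greedily add the feature that most improves the score until k features are chosen."""
    chosen, remaining = [], list(range(X.shape[1]))
    while len(chosen) < k:
        best_score, best_f = max((nearest_centroid_accuracy(X, y, chosen + [f]), f)
                                 for f in remaining)
        chosen.append(best_f)
        remaining.remove(best_f)
    return chosen

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.standard_normal((100, 6))
X[:, 2] += 3 * y            # feature 2 is made strongly discriminative
sel = sequential_forward_selection(X, y, 2)
print(sel[0])               # 2 — the informative feature is picked first
```

With a held-out validation score instead of training accuracy, the same loop serves as a realistic filter on the 420 candidate features.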
In summary, step S2 is to classify the ROI region based on feature definition, and finally output the belonging probability of each category.
In step S3, the image ROI region segmented in step S1 is classified based on deep learning, and probability information to which each class belongs is predicted.
Specifically, the image ROI segmented in step S1 is classified based on deep learning, and the probability of each class is predicted. This patent selects m deep learning classification models and obtains the classification probability $q_1, q_2, \dots, q_m$ of each model.
In this step, m deep learning classification models are adopted; for example, ResNet, DenseNet and Inception may be selected, or other combinations. The ROI regions obtained in step S1 are then divided into a training set, a test set and a verification set; the division ratio may vary with the size of each class in the data set, a typical ratio being 3:1:1. The training set and verification set are fed into the deep learning networks for training, and finally the classification probabilities of ResNet, DenseNet and Inception for the image ROI are output.
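The 3:1:1 split can be done by shuffling sample indices; a minimal sketch, with the random seed an arbitrary assumption:

```python
import numpy as np

def split_311(n_samples, seed=0):
    """Shuffle indices and split them 3:1:1 into train / test / validation sets."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train = (3 * n_samples) // 5
    n_test = n_samples // 5
    return idx[:n_train], idx[n_train:n_train + n_test], idx[n_train + n_test:]

train, test, val = split_311(100)
print(len(train), len(test), len(val))   # 60 20 20
```

For the imbalanced data sets the patent targets, the same split would normally be applied per class (stratified) so each subset keeps the class proportions.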
Step S4, integrating, for each image ROI, the feature-definition-based classification probabilities from step S2 with the deep-learning classification probabilities from step S3, in one of the following forms:
(1) weighted integration of results
The weighted-integration probability is given by equation (8):

$$p = \sum_{i=1}^{n} \alpha_i\, p_i + \sum_{j=1}^{m} \beta_j\, q_j \qquad (8)$$

where $p$ is the final classification probability of the image ROI, $p_1, p_2, \dots, p_n$ and $q_1, q_2, \dots, q_m$ are respectively the classification probabilities output for the image ROI by the $n$ feature-definition-based machine learning classifiers and by the $m$ deep learning models, and $\alpha_1, \alpha_2, \dots, \alpha_n$ and $\beta_1, \beta_2, \dots, \beta_m$ are respectively the coefficients on the output probabilities of the $n$ machine learning classifiers and on the classification probabilities output by the $m$ deep learning classification models, defined as:

$$\alpha_i = \frac{c_i}{\sum_{k=1}^{n} c_k + \sum_{l=1}^{m} d_l} \qquad (9)$$

$$\beta_j = \frac{d_j}{\sum_{k=1}^{n} c_k + \sum_{l=1}^{m} d_l} \qquad (10)$$

where $c_1, c_2, \dots, c_n$ and $d_1, d_2, \dots, d_m$ are the numbers of samples classified correctly, on the same sample population, by the $n$ machine learning classifiers and the $m$ deep learning classification models; the final prediction probability $p$ of the image ROI is then obtained by this weighted integration.
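A NumPy sketch of equations (8)–(10): each model's weight is its correct-classification count on the shared sample population, normalized by the total. The per-model probabilities and correct counts below are invented toy numbers:

```python
import numpy as np

def accuracy_weights(correct_counts):
    """Eqs. (9)/(10): weight = model's correct count over the total correct count."""
    c = np.asarray(correct_counts, dtype=float)
    return c / c.sum()

def weighted_ensemble(probs, correct_counts):
    """Eq. (8): probs is an (n+m, n_classes) array of per-model class probabilities
    for one ROI; returns the weighted final probability vector."""
    return accuracy_weights(correct_counts) @ np.asarray(probs)

# three feature-based classifiers followed by two deep models (toy numbers)
probs = [[0.8, 0.2], [0.6, 0.4], [0.7, 0.3], [0.9, 0.1], [0.4, 0.6]]
correct = [90, 80, 85, 95, 70]   # correct classifications on the same sample set
p = weighted_ensemble(probs, correct)
print(p.argmax())   # 0
```

Because the weights sum to one and each model's probabilities sum to one, the fused vector is itself a valid probability distribution.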
(2) Integration of voting modes
The voting mode follows the principle that the minority obeys the majority: n machine learning classifiers and m deep learning classification models are selected (with m + n odd), and when judging which class a certain image ROI belongs to, the class chosen by the majority of the classifiers is taken as the result; the largest probability that any classifier assigns to that class is then taken as the final classification probability p of the image ROI.
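A sketch of the voting mode. Reading "the maximum one of all the classifiers" as the largest probability that the models voting for the winning class assigned to it is an interpretation, since the sentence is ambiguous; the toy probabilities are invented:

```python
import numpy as np

def vote_ensemble(probs):
    """Majority vote over per-model class probabilities (n + m models, odd total);
    the final probability is the largest that any winning-side model gave the class."""
    probs = np.asarray(probs)
    votes = probs.argmax(axis=1)                  # each model's predicted class
    winner = np.bincount(votes).argmax()          # majority class
    p = probs[votes == winner, winner].max()      # final classification probability
    return winner, p

# five models total, e.g. three feature-based classifiers and two deep models
probs = [[0.8, 0.2], [0.3, 0.7], [0.6, 0.4], [0.9, 0.1], [0.45, 0.55]]
cls, p = vote_ensemble(probs)
print(cls, p)   # 0 0.9
```

Keeping m + n odd, as the text requires, guarantees the two-class vote cannot tie.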
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (6)

1. An image two-classification ensemble learning method based on feature definition and deep learning is characterized by comprising the following steps:
step S1, segmenting a region of interest (ROI) in the target image;
step S2, classifying the ROI obtained in step S1 based on feature definition: first extracting and screening features of the ROI, then feeding the resulting effective features into machine learning classifiers for training and classification;
step S3, classifying the image ROI segmented in step S1 based on deep learning, and predicting the probability of each class;
step S4, integrating, for each image ROI, the feature-definition-based classification probabilities from step S2 with the deep-learning classification probabilities from step S3, in one of the following forms:
(1) weighted integration of results
p = a1·p1 + a2·p2 + ... + an·pn + b1·q1 + b2·q2 + ... + bm·qm
where p is the final classification probability of the image ROI region; p1, p2, ..., pn and q1, q2, ..., qm are respectively the classification probabilities output for the image ROI by the n feature-definition-based machine learning classifiers and by the m deep learning classification models; a1, a2, ..., an and b1, b2, ..., bm are respectively the coefficients of the output probabilities of the n feature-definition-based machine learning classifiers and the coefficients of the classification probabilities output by the m deep learning classification models, defined as follows:
ai = ci / (c1 + c2 + ... + cn + d1 + d2 + ... + dm), i = 1, 2, ..., n
bj = dj / (c1 + c2 + ... + cn + d1 + d2 + ... + dm), j = 1, 2, ..., m
where c1, c2, ..., cn and d1, d2, ..., dm are respectively the numbers of correctly classified samples of the n machine learning classifiers and of the m deep learning classification models on the same sample population; the final prediction probability p of the image ROI is then obtained by this weighted integration;
(2) integration of voting modes
Select n machine learning classifiers and m deep learning classification models; when judging which class a certain image ROI belongs to, take the class voted for by the majority of the classifiers as the result, and take the largest probability assigned to that class among all the classifiers as its final classification probability p.
2. The image two-classification ensemble learning method based on feature definition and deep learning of claim 1, wherein in the step S1, a region of interest ROI is segmented using a deep learning algorithm or an image segmentation algorithm.
3. The image two-classification ensemble learning method based on feature definition and deep learning of claim 1, wherein in the step S2, the feature extraction and screening for the ROI region includes the following steps: performing a wavelet transformation on the ROI region along its two spatial dimensions, each dimension producing a low frequency band L and a high frequency band H, so that four sub-bands are generated in total: low-low LL, low-high LH, high-low HL and high-high HH; then extracting statistical features and texture features from the original image and from each of the four sub-bands.
4. The image two-classification ensemble learning method based on feature definition and deep learning according to claim 3, wherein the statistical features include: pixel distribution kurtosis, pixel skewness, pixel standard deviation, pixel variance, pixel energy value, pixel root mean square, and pixel entropy.
5. The image two-classification ensemble learning method according to claim 3, wherein the texture features include a gray level co-occurrence matrix, a gray level run length matrix, a gray level size zone matrix, and a neighborhood gray level difference matrix.
6. The image two-classification ensemble learning method based on feature definition and deep learning of claim 1, wherein in the step S3, m deep learning classification models are selected and the classification probability q1, q2, ..., qm of each model is obtained as follows: the ROI regions obtained in the step S1 are divided into a training set, a test set and a validation set, with the split ratio chosen according to the size of each class in the dataset; the training set and the validation set are then fed into a deep learning network for training, and finally the classification probability of each model for the image ROI is output.
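As an illustrative sketch only (not part of the claims), the two integration forms described in claim 1 can be expressed in Python as follows. The function names, the 0.5 voting threshold for the binary case, and passing the per-classifier correct-sample counts directly as arguments are assumptions made for this sketch:

```python
def weighted_integration(ml_probs, dl_probs, ml_correct, dl_correct):
    """Form (1): each classifier's output probability is weighted by its
    share of correctly classified samples on a common sample population."""
    total = sum(ml_correct) + sum(dl_correct)
    a = [c / total for c in ml_correct]  # coefficients a1..an for the n ML classifiers
    b = [d / total for d in dl_correct]  # coefficients b1..bm for the m DL models
    return (sum(ai * pi for ai, pi in zip(a, ml_probs))
            + sum(bj * qj for bj, qj in zip(b, dl_probs)))

def voting_integration(ml_probs, dl_probs, threshold=0.5):
    """Form (2): the class chosen by the majority of classifiers wins, and
    the final probability p is the largest probability any single
    classifier assigned to that class."""
    probs = list(ml_probs) + list(dl_probs)
    votes = [1 if p >= threshold else 0 for p in probs]
    majority = 1 if sum(votes) > len(votes) / 2 else 0
    # per-classifier probability of the majority class
    class_probs = [p if majority == 1 else 1 - p for p in probs]
    return majority, max(class_probs)

# e.g. two ML classifiers (80 and 60 correct samples) and one DL model (100 correct):
p = weighted_integration([0.8, 0.6], [0.9], [80, 60], [100])
print(voting_integration([0.8, 0.4], [0.7]))  # → (1, 0.8)
```

Note that the weighting coefficients sum to 1 by construction, so the integrated value p stays a valid probability whenever each classifier's output does.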
CN202210299753.1A 2022-03-25 2022-03-25 Image two-classification ensemble learning method based on feature definition and deep learning Pending CN114399635A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210299753.1A CN114399635A (en) 2022-03-25 2022-03-25 Image two-classification ensemble learning method based on feature definition and deep learning


Publications (1)

Publication Number Publication Date
CN114399635A (en) 2022-04-26

Family

ID=81234713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210299753.1A Pending CN114399635A (en) 2022-03-25 2022-03-25 Image two-classification ensemble learning method based on feature definition and deep learning

Country Status (1)

Country Link
CN (1) CN114399635A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650830A (en) * 2017-01-06 2017-05-10 西北工业大学 Deep model and shallow model decision fusion-based pulmonary nodule CT image automatic classification method
CN111598871A (en) * 2020-05-15 2020-08-28 安徽医学高等专科学校 Multi-feature fusion auxiliary lung vitreous nodule detection system and medium
WO2020224406A1 (en) * 2019-05-08 2020-11-12 腾讯科技(深圳)有限公司 Image classification method, computer readable storage medium, and computer device
CN112926397A (en) * 2021-01-28 2021-06-08 中国石油大学(华东) SAR image sea ice type classification method based on two-round voting strategy integrated learning
CN113065430A (en) * 2021-03-22 2021-07-02 天津大学 Leukocyte classification method based on fusion of deep learning features and artificial extraction features
CN113408603A (en) * 2021-06-15 2021-09-17 西安理工大学 Coronary artery stenosis degree identification method based on multi-classifier fusion


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082418A (en) * 2022-07-14 2022-09-20 山东聊城富锋汽车部件有限公司 Precise identification method for automobile parts
CN115082418B (en) * 2022-07-14 2022-11-04 山东聊城富锋汽车部件有限公司 Precise identification method for automobile parts

Similar Documents

Publication Publication Date Title
CN107016677B (en) Cloud picture segmentation method based on FCN and CNN
CN110728192B (en) High-resolution remote sensing image classification method based on novel characteristic pyramid depth network
CN110110596B (en) Hyperspectral image feature extraction, classification model construction and classification method
CN111460912B (en) Dense crowd counting algorithm based on cascade high-resolution convolution neural network
CN110796667B (en) Color image segmentation method based on improved wavelet clustering
Bai et al. NHL Pathological Image Classification Based on Hierarchical Local Information and GoogLeNet‐Based Representations
CN109035196B (en) Saliency-based image local blur detection method
CN111028327A (en) Three-dimensional point cloud processing method, device and equipment
CN111986125A (en) Method for multi-target task instance segmentation
CN106157330B (en) Visual tracking method based on target joint appearance model
CN111027590B (en) Breast cancer data classification method combining deep network features and machine learning model
CN111986126B (en) Multi-target detection method based on improved VGG16 network
CN109726649B (en) Remote sensing image cloud detection method and system and electronic equipment
CN116402825B (en) Bearing fault infrared diagnosis method, system, electronic equipment and storage medium
CN112434172A (en) Pathological image prognosis feature weight calculation method and system
CN111339924A (en) Polarized SAR image classification method based on superpixel and full convolution network
CN113033602A (en) Image clustering method based on tensor low-rank sparse representation
CN114399635A (en) Image two-classification ensemble learning method based on feature definition and deep learning
CN116310466A (en) Small sample image classification method based on local irrelevant area screening graph neural network
Mourchid et al. An image segmentation algorithm based on community detection
CN112966748A (en) Polarized SAR image classification method based on edge perception double-branch FCN
CN111539966A (en) Colorimetric sensor array image segmentation method based on fuzzy c-means clustering
Rathore et al. A novel approach for ensemble clustering of colon biopsy images
CN114624715A (en) Radar echo extrapolation method based on self-attention space-time neural network model
Balachandran et al. Mass characterization in mammograms using an optimal ensemble classifier

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20220426