CN111652238A - Multi-model integration method and system - Google Patents

Multi-model integration method and system

Info

Publication number
CN111652238A
CN111652238A
Authority
CN
China
Prior art keywords
image
model
feature
original
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910302474.4A
Other languages
Chinese (zh)
Other versions
CN111652238B (en)
Inventor
吴英平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Re Sr Information Technology Co ltd
Original Assignee
Shanghai Re Sr Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Re Sr Information Technology Co ltd filed Critical Shanghai Re Sr Information Technology Co ltd
Priority to CN201910302474.4A priority Critical patent/CN111652238B/en
Publication of CN111652238A publication Critical patent/CN111652238A/en
Application granted granted Critical
Publication of CN111652238B publication Critical patent/CN111652238B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of machine learning and discloses a multi-model integration method comprising the following steps: extracting the salient feature region of each image in an original image data set to generate a corresponding new image, and reconstructing all the new images into a plurality of feature image sets; performing model training on the original image data set to generate a corresponding original classification model; performing model training on each feature image set to generate a plurality of corresponding feature classification models; and integrating the original classification model and the plurality of feature classification models according to a preset model integration algorithm to obtain the final class prediction result. Correspondingly, the invention also discloses a multi-model integration system. The method reduces the influence of homogeneous models on the integration method and improves the overall accuracy of the model.

Description

Multi-model integration method and system
Technical Field
The invention relates to the technical field of machine learning, in particular to a multi-model integration method and system.
Background
Ensemble learning is a class of machine-learning algorithms in which multiple learners are trained and then combined for use. When an ensemble method is applied to deep learning, a final prediction can be obtained by combining the predictions of several neural networks. For example, in the network snapshot ensemble method, several sets of intermediate network parameters are saved as weight snapshots while the model is trained; after training, the ensemble is built from the saved models, which share the same structure but have different weights, and this ensemble can improve classification performance on the test set. Generally, integrating neural networks of different structures is a good approach, because differentiated models tend to make mistakes on different training samples, so the ensemble gains more benefit. For example, patent application publication No. CN109325516A, "An image-classification-oriented ensemble learning method and device", includes the following steps: dividing the image classification data set into a training set and a validation set, and using them to construct a plurality of image classification models as the models of the base layer; splitting the validation set into P validation subsets, validation subset 1 through validation subset P, and performing the following operation layer by layer on top of the base layer until an ensemble model with P layers is obtained: integrating the models of the previous layer with validation subset x to obtain the x-th-layer ensemble model; and finally integrating the prediction results output by the P-th layer of the ensemble model to obtain the final prediction output.
However, the network snapshot ensemble method and several other ensemble methods use models trained on the same data set with the same network structure. Although the saved base models have different network parameters, they inevitably share homogeneous properties to some extent, which weakens the advantage that an ensemble method derives from the differentiation among its models.
Therefore, to solve the above technical problem, it is necessary to provide a multi-model integration solution that reduces the influence of homogeneous models on the integration method.
Disclosure of Invention
The invention aims to provide a multi-model integration method and system that integrate a plurality of differentiated models trained on data sets with different image-content distributions, thereby reducing the influence of homogeneous models on the integration method and improving the overall accuracy of the model.
In order to achieve the above object, the present invention provides a multi-model integration method, including: extracting the salient feature region of each image in an original image data set to generate a corresponding new image, and reconstructing all the new images into a plurality of feature image sets; performing model training on the original image data set to generate a corresponding original classification model; performing model training on each feature image set to generate a plurality of corresponding feature classification models; and integrating the original classification model and the plurality of feature classification models according to a preset model integration algorithm to obtain the final class prediction result. By integrating a plurality of differentiated models trained on different image-content distributions, the influence of homogeneous models on the integration method is reduced and the overall accuracy is improved.
Optionally, the salient feature region extraction in step S1 includes: performing saliency filtering on each image in the original image data set; normalizing each image to obtain a normalized image; making a mask corresponding to each image; and multiplying each image by its corresponding mask to obtain the salient region in the image. The step of making the mask corresponding to each image includes: presetting a pixel threshold; setting each pixel of the normalized image whose value is greater than the pixel threshold to 1; and setting each pixel of the normalized image whose value is less than the pixel threshold to 0. Through this salient feature region extraction, the salient content of the images in the original image data set is extracted to form a plurality of feature image sets, each of which has a different image-content distribution than the original image data set.
Optionally, step S2 includes: performing model training on the original image data set according to a preset deep convolutional neural network model to generate the corresponding original classification model. Step S3 includes: performing model training on each feature image set according to the same preset deep convolutional neural network model to generate a feature classification model corresponding to each feature image set, thereby obtaining a plurality of feature classification models. By training the corresponding classification models separately on image data sets with different image content, the homogeneity between the classification models can be weakened.
Optionally, step S4 includes: performing model integration on the original classification model and the plurality of feature classification models with a preset weighted average integration algorithm; acquiring, for each image, the average confidence of each category; and obtaining the category information of the image from the average confidences of all its categories. This improves the overall classification accuracy.
Optionally, the preset weighted average integration algorithm is expressed as:

$$V_l^{ensemble} = \frac{1}{N}\sum_{n=1}^{N} V_l^n$$

where N represents the total number of classification models (the original classification model together with the feature classification models), the weight is set to 1/N, the total number of classification categories is set to L, $V_l^n$ represents the confidence with which the n-th classification model identifies the image as class l, and $V_l^{ensemble}$ is the average confidence that the image is identified as class l.
To achieve the above object, the present invention also provides a multi-model integration system, comprising: an extraction module for extracting the salient feature region of each image in an original image data set to generate a corresponding new image and reconstructing all the new images into a plurality of feature image sets; a first training module for performing model training on the original image data set to generate a corresponding original classification model; a second training module for performing model training on each feature image set to generate a plurality of corresponding feature classification models; and an integration module for integrating the original classification model and the plurality of feature classification models according to a preset model integration algorithm to obtain the final class prediction result. By integrating a plurality of differentiated models trained on different image-content distributions, the influence of homogeneous models on the integration method is reduced and the overall accuracy is improved.
Optionally, the first training module is specifically configured to perform model training on the original image data set according to a preset deep convolutional neural network model, and generate a corresponding original classification model;
the second training module is specifically configured to perform model training on each feature image set according to the same deep convolutional neural network model as that in the first training module, generate a feature classification model corresponding to each feature image set, and acquire a plurality of feature classification models.
Optionally, the integration module specifically includes: a calculation unit for performing model integration on the original classification model and the plurality of feature classification models with a preset weighted average integration algorithm; a confidence unit for acquiring the average confidence of each category corresponding to each image; and a category unit for obtaining the category information of each image from the average confidences of all its categories.
Compared with the prior art, the multi-model integration method and system have the following beneficial effects: salient feature regions are extracted from the images in the original data set to generate new data sets whose image-content distributions differ from the original; model training is performed separately on the new data sets and the original data set to construct a plurality of differentiated models; and these differentiated models are integrated, so that images can be classified from the perspective of different image contents. This reduces the homogeneity among models trained on the same data set, benefits the integration method, and improves the overall classification accuracy of the model.
Drawings
FIG. 1 is a flow diagram illustrating a multi-model integration method according to an embodiment of the invention.
Fig. 2 is a block diagram of the components of a multi-model integration system according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the specific embodiments shown in the drawings. In the drawings, structurally or functionally similar elements are represented by like reference numerals throughout the several views. The size and thickness of each component shown in the drawings are illustrated arbitrarily, and the present invention is not limited to them; thicknesses may be exaggerated in places to improve clarity.
As shown in fig. 1, in an embodiment of the present invention, a multi-model integration method includes:
s1, extracting a salient feature region of each image in an original image data set to generate a corresponding new image, and reconstructing all the new images into a plurality of feature image sets;
s2, performing model training on the original image data set to generate a corresponding original classification model;
s3, performing model training on each feature image set to generate a plurality of corresponding feature classification models;
and S4, integrating the original classification model and the plurality of feature classification models according to a preset model integration algorithm to obtain a final class prediction result.
Step S1 is: extracting the salient feature region of each image in an original image data set to generate a corresponding new image, and reconstructing all the new images into a plurality of feature image sets. Each feature image set has a different image-content distribution than the original image data set. The salient feature region is the region of an image that attracts the viewer's interest and expresses the image content; it is used to simulate the visual attention mechanism, suppressing useless information so that the information that needs attention can be analyzed effectively. The salient feature region extraction includes the following steps: performing saliency filtering on each image in the original image data set; normalizing each image so that its pixel values lie in the interval [0, 1], obtaining a normalized image; and making a mask corresponding to each image, i.e. turning the salient area of the image white and the non-salient area black. The mask is made as follows: a pixel threshold is preset; each pixel of the normalized image whose value is greater than the threshold is set to 1, displayed as white; and each pixel whose value is less than the threshold is set to 0, displayed as black. Each image is then multiplied by its corresponding mask to obtain the salient region of the image. A new image is generated from the salient region of each image, and all the new images form a plurality of feature image sets.
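The extraction step above can be expressed as a minimal sketch. The saliency filter below (per-pixel distance from the mean image colour) and the threshold of 0.5 are stand-in assumptions, since the patent does not fix a specific filter or threshold value:

```python
import numpy as np

def extract_salient_region(image, threshold=0.5):
    """Saliency filtering, normalization to [0, 1], mask making, and masking.
    The saliency filter is a simple stand-in; the patent leaves it unspecified."""
    img = image.astype(np.float64)
    # Stand-in saliency filter: per-pixel distance from the mean colour.
    saliency = np.linalg.norm(img - img.mean(axis=(0, 1)), axis=-1)
    # Normalization: bring the saliency map into the interval [0, 1].
    rng_span = saliency.max() - saliency.min()
    saliency = (saliency - saliency.min()) / (rng_span + 1e-12)
    # Mask making: pixels above the preset threshold become 1 (white),
    # the rest become 0 (black).
    mask = (saliency > threshold).astype(np.float64)
    # Multiply the image by its mask to keep only the salient region.
    return image * mask[..., None]

new_image = extract_salient_region(np.random.rand(8, 8, 3), threshold=0.5)
print(new_image.shape)  # (8, 8, 3)
```

Running this over every image in the original data set yields the new images that are then grouped into the feature image sets.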
Step S2 is: performing model training on the original image data set to generate a corresponding original classification model. According to a specific embodiment of the present invention, model training is performed on the original image data set according to a preset deep convolutional neural network model to generate the corresponding original classification model.
Step S3 is: performing model training on each feature image set to generate a plurality of corresponding feature classification models. According to a specific embodiment of the present invention, model training is performed on each feature image set according to the preset deep convolutional neural network model, generating a feature classification model corresponding to each feature image set and thus obtaining a plurality of feature classification models. In steps S2 and S3, the preset deep convolutional neural network models have the same structure. By training the corresponding classification models separately on image data sets with different image content, the homogeneity between the classification models is weakened, more differentiated classification models are obtained, images can be classified from different content perspectives, and the adverse effect of homogeneous models on the integration method is reduced.
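The training scheme of steps S2 and S3, one common model structure trained on several data sets, can be sketched as follows. The softmax classifier is a hypothetical stand-in for the unspecified preset deep convolutional neural network, and the synthetic `feature_sets` stand in for the salient-region data sets:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_classifier(X, y, n_classes=2, epochs=200, lr=0.5):
    """Train one softmax classifier on (X, y). The same function (same
    model structure) is reused for every data set, as in steps S2 and S3."""
    W = np.zeros((X.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (p - onehot) / len(X)  # cross-entropy gradient step
    return W

# Synthetic "original" data set, plus two feature sets derived from it (in
# the patent these come from salient-region extraction, so their content
# distribution differs from the original's).
X = rng.normal(size=(100, 6))
y = (X[:, 0] > 0).astype(int)
feature_sets = [X * m for m in ([1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1])]

original_model = train_classifier(X, y)
feature_models = [train_classifier(Xf, y) for Xf in feature_sets]
print(len(feature_models))  # 2
```

Because each model sees a differently distributed version of the data, the resulting classifiers are differentiated even though their structure is identical.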
Step S4 is: integrating the original classification model and the plurality of feature classification models according to a preset model integration algorithm to obtain the final class prediction result. According to a specific embodiment of the invention, a preset weighted average integration algorithm is adopted to perform model integration on the original classification model and the plurality of feature classification models; the average confidence of each category is acquired for each image; and the category information of the image is obtained from the average confidences of all its categories. The weighted average algorithm adds weights on top of a direct average to adjust the relative importance of the different model outputs. The preset weighted average integration algorithm is expressed as:

$$V_l^{ensemble} = \frac{1}{N}\sum_{n=1}^{N} V_l^n$$

where N represents the total number of classification models (the original classification model together with the feature classification models), the weight is set to 1/N, the total number of classification categories is set to L, $V_l^n$ represents the confidence with which the n-th classification model identifies the image as class l, and $V_l^{ensemble}$ is the average confidence that the image is identified as class l. According to the above formula, the corresponding $V_l^{ensemble}$ is obtained for each category, giving the average confidences of all categories for the identified image; these average confidences are then compared to obtain the category information of the image, for example by setting the category with the maximum average confidence as the category of the image.
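The weighted average integration above can be sketched as follows, using the default weight 1/N and returning the class with the maximum average confidence; the model confidences here are made-up numbers for illustration:

```python
import numpy as np

def ensemble_predict(confidences, weights=None):
    """Weighted-average integration: `confidences` is an (N, L) array whose
    row n holds V_l^n, the n-th model's confidence for each of the L classes.
    With the default weight 1/N this computes V_l^ensemble = (1/N) * sum_n V_l^n
    and returns the class with the maximum average confidence."""
    confidences = np.asarray(confidences, dtype=float)
    N = confidences.shape[0]
    w = np.full(N, 1.0 / N) if weights is None else np.asarray(weights, float)
    v_ensemble = w @ confidences  # average confidence per class l
    return v_ensemble, int(np.argmax(v_ensemble))

# Three models (original + two feature models), L = 3 classes.
conf = [[0.6, 0.3, 0.1],
        [0.2, 0.5, 0.3],
        [0.5, 0.4, 0.1]]
avg, label = ensemble_predict(conf)
print(label)  # 0  (the class with the highest average confidence)
```

Passing a non-uniform `weights` vector recovers the general weighted form, in which the importance of each model's output can be adjusted.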
According to the above technical solution, salient feature regions are extracted from the images in the original data set to generate a new data set whose image-content distribution differs from that of the original data set; model training is performed separately on the new data set and the original data set to construct a plurality of differentiated models; and these differentiated models are integrated, so that images can be classified from the perspective of different image contents. This reduces the homogeneity among models trained on the same data set, benefits the integration method, and improves the overall classification accuracy.
In another embodiment, as shown in fig. 2, the present invention further provides a multi-model integration system, which includes:
the extraction module 20 is configured to perform salient feature region extraction on each image in an original image data set, generate a corresponding new image, and reconstruct all the new images into a plurality of feature image sets;
a first training module 21, configured to perform model training on the original image data set to generate a corresponding original classification model;
the second training module 22 is configured to perform model training on each feature image set to generate a plurality of corresponding feature classification models;
the integration module 23 is configured to integrate the original classification model and the plurality of feature classification models according to a preset model integration algorithm to obtain a final class prediction result.
The extraction module 20 is configured to perform salient feature region extraction on each image in an original image data set, generate a corresponding new image, and reconstruct all the new images into a plurality of feature image sets. The salient feature region extraction includes the following steps: performing saliency filtering on each image in the original image data set; normalizing each image to obtain a normalized image; making a mask corresponding to each image; and multiplying each image by its corresponding mask to obtain the salient region in the image. New images are generated from the acquired salient regions, and all the new images form a plurality of feature image sets.
The first training module 21 is configured to perform model training on the original image data set to generate a corresponding original classification model. According to an embodiment of the present invention, the first training module 21 is specifically configured to perform model training on the original image data set according to a preset deep convolutional neural network model, so as to generate a corresponding original classification model.
The second training module 22 is configured to perform model training on each feature image set to generate a plurality of corresponding feature classification models. According to a specific embodiment of the present invention, the second training module 22 performs model training on each feature image set according to the same deep convolutional neural network model as in the first training module, generating a feature classification model corresponding to each feature image set and thus obtaining a plurality of feature classification models. By training the corresponding classification models separately on image data sets with different image content, the homogeneity between the classification models is weakened, images can be classified from different content perspectives, and the adverse effect of homogeneous models on the integration method is reduced.
The integration module 23 is configured to integrate the original classification model and the plurality of feature classification models according to a preset model integration algorithm to obtain the final class prediction result. According to an embodiment of the present invention, the integration module 23 specifically includes a calculation unit, a confidence unit, and a category unit. The calculation unit performs model integration on the original classification model and the plurality of feature classification models with a preset weighted average integration algorithm; the weighted average algorithm adds weights on top of a direct average to adjust the relative importance of the different model outputs. The confidence unit acquires the average confidence of each category corresponding to each image. The category unit obtains the category information of the image by comparing the average confidences of all its categories, for example by setting the category with the maximum average confidence as the category of the image.
According to the above technical solution, salient feature regions are extracted from the images in the original data set to generate a new data set whose image-content distribution differs from that of the original data set; model training is performed separately on the new data set and the original data set to construct a plurality of differentiated models; and these differentiated models are integrated, so that images can be classified from the perspective of different image contents. This reduces the homogeneity among models trained on the same data set, benefits the integration method, and improves the overall classification accuracy.
While the invention has been described in detail in the foregoing with reference to the drawings and examples, such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments. In the claims, the word "comprising" does not exclude other elements or steps, and "a" or "an" should be understood to mean at least one. Any reference signs in the claims shall not be construed as limiting the scope. Other variations to the above-described embodiments can be understood and effected by those skilled in the art, without inventive effort, from a study of the drawings, the description, and the appended claims, and such variations still fall within the scope of the invention as claimed.

Claims (10)

1. A method of multi-model integration, the method comprising:
s1, extracting a salient feature region of each image in an original image data set to generate a corresponding new image, and reconstructing all the new images into a plurality of feature image sets;
s2, performing model training on the original image data set to generate a corresponding original classification model;
s3, performing model training on each feature image set to generate a plurality of corresponding feature classification models;
and S4, integrating the original classification model and the plurality of feature classification models according to a preset model integration algorithm to obtain a final class prediction result.
2. The multi-model integration method according to claim 1, wherein the step of salient feature region extraction in step S1 comprises:
performing saliency filtering processing on each image in the original image dataset;
carrying out normalization processing on each image to obtain a normalized image;
making a mask corresponding to each image;
and multiplying each image by its corresponding mask to obtain the salient region in each image.
3. The multi-model integration method of claim 2, wherein the step of fabricating a mask corresponding to each image comprises:
presetting a pixel threshold;
setting a corresponding pixel in the normalized image having a pixel value greater than the pixel threshold value to 1;
setting the corresponding pixel in the normalized image having a pixel value less than the pixel threshold value to 0.
4. The multi-model integration method of claim 2, wherein the step S2 includes: performing model training on the original image data set according to a preset deep convolutional neural network model to generate a corresponding original classification model.
5. The multi-model integration method of claim 4, wherein the step S3 includes: and performing model training on each feature image set according to the preset deep convolutional neural network model to generate a feature classification model corresponding to each feature image set, and acquiring a plurality of feature classification models.
6. The multi-model integration method of claim 1, wherein the step S4 includes: performing model integration on the original classification model and the plurality of feature classification models by adopting a preset weighted average integration algorithm;
acquiring the average confidence of each category corresponding to each image;
and obtaining the category information corresponding to the image according to the average confidence of all categories corresponding to each image.
7. The multi-model integration method of claim 6, wherein the expression of the predetermined weighted-average integration algorithm is:
$V_l^{ensemble} = \frac{1}{N}\sum_{n=1}^{N} V_l^n$
wherein:
N represents the total number of classification models, the total classification models comprising the original classification model and the feature classification models;
the weight of each model is set to 1/N;
L is the total number of classification categories, and $V_l^n$ represents the confidence that the image is identified as class l by the nth classification model;
$V_l^{ensemble}$ is the average confidence that the image is identified as class l.
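The equal-weight averaging of claim 7 (N models weighted 1/N, per-class confidences $V_l^n$) can be sketched as follows; the function names are illustrative and the final-category step corresponds to claim 6's "category information" being the class with the highest average confidence:

```python
import numpy as np

def ensemble_confidences(per_model_confidences):
    """per_model_confidences: N sequences of length L, where entry l of the
    nth sequence is V_l^n, the confidence that the image belongs to class l
    according to the nth classification model.
    Returns V_l^ensemble = (1/N) * sum over n of V_l^n, for each class l."""
    V = np.stack(per_model_confidences)   # shape (N, L)
    return V.mean(axis=0)                 # equal weights 1/N

def predict_category(per_model_confidences):
    """Final category: the class with the highest average confidence."""
    return int(np.argmax(ensemble_confidences(per_model_confidences)))

# Three models (e.g. one original model + two feature models), two classes:
avg = ensemble_confidences([[0.6, 0.4], [0.2, 0.8], [0.9, 0.1]])
# avg is [0.5666..., 0.4333...]; predict_category picks class 0
```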
8. A multi-model integration system, the system comprising:
the extraction module is used for extracting a salient feature region of each image in an original image data set, generating a corresponding new image and reconstructing all the new images into a plurality of feature image sets;
the first training module is used for carrying out model training on the original image data set to generate a corresponding original classification model;
the second training module is used for carrying out model training on each feature image set to generate a plurality of corresponding feature classification models;
and the integration module is used for integrating the original classification model and the plurality of feature classification models according to a preset model integration algorithm to obtain a final class prediction result.
9. The multi-model integration system of claim 8,
the first training module is specifically used for performing model training on the original image data set according to a preset deep convolutional neural network model to generate a corresponding original classification model;
the second training module is specifically configured to perform model training on each feature image set according to the same deep convolutional neural network model as that in the first training module, generate a feature classification model corresponding to each feature image set, and acquire a plurality of feature classification models.
10. The multi-model integration system of claim 8, wherein the integration module specifically comprises:
the calculation unit is used for performing model integration on the original classification model and the plurality of feature classification models by adopting a preset weighted average integration algorithm;
the confidence coefficient unit is used for acquiring the average confidence coefficient of each category corresponding to each image;
and the category unit is used for obtaining category information corresponding to the image according to the average confidence of all categories corresponding to each image.
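The four modules of claims 8 to 10 can be wired together as in the following sketch. The class name, the injected callables, and the idea of abstracting the preset deep CNN behind a `train_model` function are all illustrative assumptions, not the patent's implementation:

```python
import numpy as np

class MultiModelIntegrationSystem:
    """Illustrative wiring of the extraction, training, and integration modules."""

    def __init__(self, extract_feature_sets, train_model):
        # extraction module: maps the original image set to several feature image sets
        self.extract_feature_sets = extract_feature_sets
        # shared training routine (standing in for the preset deep CNN)
        self.train_model = train_model
        self.models = []

    def fit(self, original_images, labels):
        # first training module: train the original classification model
        self.models = [self.train_model(original_images, labels)]
        # second training module: one feature classification model per feature image set
        for feature_set in self.extract_feature_sets(original_images):
            self.models.append(self.train_model(feature_set, labels))
        return self

    def predict(self, image):
        # integration module: equal-weight average of per-model confidences,
        # then the class with the highest average confidence
        confidences = np.stack([model(image) for model in self.models])
        return int(np.argmax(confidences.mean(axis=0)))
```

For instance, injecting a stub `train_model` that returns a fixed-confidence classifier is enough to exercise the module wiring without any real CNN training.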
CN201910302474.4A 2019-04-16 2019-04-16 Multi-model integration method and system Active CN111652238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910302474.4A CN111652238B (en) 2019-04-16 2019-04-16 Multi-model integration method and system

Publications (2)

Publication Number Publication Date
CN111652238A true CN111652238A (en) 2020-09-11
CN111652238B CN111652238B (en) 2023-06-02

Family

ID=72342477


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111950A (en) * 2021-04-19 2021-07-13 中国农业科学院农业资源与农业区划研究所 Wheat rust classification method based on ensemble learning
CN113379686A (en) * 2021-05-26 2021-09-10 广东炬森智能装备有限公司 PCB defect detection method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110081073A1 (en) * 2009-10-06 2011-04-07 Wright State University Methods And Logic For Autonomous Generation Of Ensemble Classifiers, And Systems Incorporating Ensemble Classifiers
CN106683104A (en) * 2017-01-06 2017-05-17 西北工业大学 Prostate magnetic resonance image segmentation method based on integrated depth convolution neural network
CN107180426A (en) * 2017-06-06 2017-09-19 西北工业大学 Area of computer aided Lung neoplasm sorting technique based on transportable multiple-model integration
CN107958219A (en) * 2017-12-06 2018-04-24 电子科技大学 Image scene classification method based on multi-model and Analysis On Multi-scale Features
US20180189949A1 (en) * 2016-12-30 2018-07-05 Skinio, Llc Skin Abnormality Monitoring Systems and Methods
CN108921092A (en) * 2018-07-02 2018-11-30 浙江工业大学 A kind of melanoma classification method based on convolutional neural networks model Two-level ensemble

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhang Xiaonan; Zhong Xing; Zhu Ruifei; Gao Fang; Zhang Zuosheng; Bao Songze; Li Zhuqiang: "Remote sensing image scene classification based on an ensemble convolutional neural network" *
Jiang Jie; Xiong Changzhen: "A fine-grained classification algorithm using data augmentation and multi-model ensembling" *
Huang Haoran: "MNIST handwritten digit recognition based on ensemble learning" *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant