CN112819032A - Multi-model-based slice feature classification method, device, equipment and medium - Google Patents

Multi-model-based slice feature classification method, device, equipment and medium

Info

Publication number
CN112819032A
CN112819032A
Authority
CN
China
Prior art keywords
image
feature
slice
model
image block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110033842.7A
Other languages
Chinese (zh)
Other versions
CN112819032B (en)
Inventor
谢春梅
李风仪
王佳平
侯晓帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN202110033842.7A (granted as CN112819032B)
Publication of CN112819032A
Application granted
Publication of CN112819032B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/243 - Classification techniques relating to the number of classes
    • G06F 18/24323 - Tree-organised classifiers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30096 - Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of artificial intelligence and is applied to the field of intelligent medical treatment, thereby promoting the construction of smart cities. It discloses a multi-model-based slice feature classification method, device, equipment and medium. The method comprises: acquiring a human tissue slice image and cutting it into a plurality of image blocks; acquiring a preset combination model that comprises a plurality of feature recognition models; inputting each image block into the preset combination model and acquiring the image feature vectors obtained after the feature recognition models perform feature recognition on each image block; performing feature fusion on the image feature vectors corresponding to the same image block to obtain a fusion feature vector for that image block; and inputting each fusion feature vector into a preset classification model to determine the slice classification feature corresponding to the human tissue slice image. The invention improves the accuracy of feature recognition and the classification precision, thereby making the classification result more comprehensive.

Description

Multi-model-based slice feature classification method, device, equipment and medium
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a multi-model-based slice feature classification method, device, equipment and medium.
Background
With the development of science and technology, the demands placed on medical technology keep rising, especially for the analysis of symptom characteristics by doctors, such as the feature analysis of various types of radiological slices.
In the prior art, feature analysis of such slices is mostly performed by first staining and scanning the slices and then having doctors observe and analyze them with the naked eye.
Disclosure of Invention
The embodiments of the invention provide a multi-model-based slice feature classification method, device, equipment and medium, aiming to solve the problem of low feature classification accuracy.
A slice feature classification method based on multiple models comprises the following steps:
acquiring a human tissue slice image, and cutting the human tissue slice image into a plurality of image blocks;
acquiring a preset combination model, wherein the preset combination model comprises a plurality of feature recognition models;
inputting each image block into the preset combination model, and acquiring an image feature vector obtained after a plurality of feature recognition models perform feature recognition on each image block; each feature recognition model obtains an image feature vector for each image block;
performing feature fusion on each image feature vector corresponding to the same image block to obtain a fusion feature vector corresponding to the image block;
and inputting each fusion feature vector into a preset classification model, and determining the section classification features corresponding to the human tissue section images.
A slice feature classification device based on multiple models comprises:
the image acquisition module is used for acquiring a human tissue slice image and cutting the human tissue slice image into a plurality of image blocks;
the model acquisition module is used for acquiring a preset combination model, and the preset combination model comprises a plurality of feature recognition models;
the feature recognition module is used for inputting each image block into the preset combination model and acquiring the image feature vectors obtained after the feature recognition models perform feature recognition on each image block; each feature recognition model obtains an image feature vector for each image block;
the feature fusion module is used for performing feature fusion on each image feature vector corresponding to the same image block to obtain a fusion feature vector corresponding to the image block;
and the feature classification module is used for inputting each fusion feature vector into a preset classification model and determining the slice classification features corresponding to the human tissue slice images.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the multi-model based slice feature classification method when executing the computer program.
A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, implements the above-mentioned multi-model based slice feature classification method.
According to the multi-model-based slice feature classification method, device, equipment and medium, the human tissue slice image is obtained and cut into a plurality of image blocks; acquiring a preset combination model, wherein the preset combination model comprises a plurality of feature recognition models; inputting each image block into the preset combination model, and acquiring an image feature vector obtained after a plurality of feature recognition models perform feature recognition on each image block; each feature recognition model obtains an image feature vector for each image block; performing feature fusion on each image feature vector corresponding to the same image block to obtain a fusion feature vector corresponding to the image block; and inputting each fusion feature vector into a preset classification model, and determining the section classification features corresponding to the human tissue section images.
According to the invention, feature recognition is performed on the image blocks of the gastric cancer tissue image by a combination model comprising a plurality of feature recognition models, combining the advantages of each feature recognition model and thereby improving the accuracy of slice feature recognition. Meanwhile, the features obtained by the feature recognition models are fused, and the high-dimensional fusion feature vector is analyzed by a preset classification model (a random forest model is preferred in this embodiment), so that the finally obtained gastric cancer classification features are more comprehensive and the classification precision is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic diagram of an application environment of a multi-model based slice feature classification method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a multi-model based slice feature classification method according to an embodiment of the present invention;
FIG. 3 is a flowchart of step S50 in the multi-model based slice feature classification method according to an embodiment of the present invention;
FIG. 4 is a flowchart of step S502 of the multi-model based slice feature classification method according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a multi-model based slice feature classification apparatus according to an embodiment of the present invention;
FIG. 6 is a functional block diagram of a feature classification module in the multi-model based slice feature classification apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic block diagram of a classification feature determination unit in the multi-model-based slice feature classification apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The multi-model-based slice feature classification method provided by the embodiments of the invention can be applied to the application environment shown in fig. 1. Specifically, the method is applied to a multi-model-based slice feature classification system that includes a client and a server as shown in fig. 1; the client and the server communicate through a network to solve the problem of low feature classification accuracy. The client, also called the user side, refers to a program that corresponds to the server and provides local services to the user. The client may be installed on, but is not limited to, personal computers, laptops, smartphones, tablets, and portable wearable devices. The server may be implemented as a stand-alone server or as a server cluster consisting of a plurality of servers.
In an embodiment, as shown in fig. 2, a multi-model-based slice feature classification method is provided, which is described by taking the example that the method is applied to the server in fig. 1, and includes the following steps:
s10: the method comprises the steps of obtaining a human tissue slice image, and cutting the human tissue slice image into a plurality of image blocks.
Illustratively, the human tissue slice image may be a gastric cancer tissue image or the like. Generally, in order to show the details of the tissue, the human tissue slice image is an image with a resolution of 20000 × 20000 or more. The human tissue slice image may include positive regions and negative regions: an image block corresponding to a negative region is a negative image block, and an image block corresponding to a positive region is a positive image block. Further, positive image blocks include tumor image blocks and cancerous image blocks, and the positive regions are obtained by manual labeling in advance.
Further, because the human tissue slice image is too large to be input into the model directly, it needs to be cut into a plurality of smaller image blocks. Preferably, the size of each image block is set to 512 × 512; that is, each image block obtained by cutting the human tissue slice image is 512 × 512. In order to obtain more image blocks, the overlap between adjacent image blocks is set to 256 pixels during cutting; i.e., each pair of adjacent image blocks shares an overlapping region of 256 pixels.
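As a minimal sketch of the tiling scheme described above (the helper name is hypothetical; the 512 × 512 block size and 256-pixel overlap are the patent's preferred values, equivalent to a sliding window with stride 256):

```python
def tile_coords(height, width, tile=512, overlap=256):
    """Top-left corners of overlapping tiles covering an image.

    Illustrative helper for the patent's cutting step: 512x512
    blocks whose neighbours overlap by 256 pixels, i.e. a sliding
    window with stride tile - overlap = 256. Assumes the image is
    at least tile x tile pixels, as slice images here always are.
    """
    stride = tile - overlap
    ys = list(range(0, height - tile + 1, stride))
    xs = list(range(0, width - tile + 1, stride))
    # Shift the last row/column inward so the bottom and right edges
    # are covered even when the image size is not stride-aligned.
    if ys[-1] != height - tile:
        ys.append(height - tile)
    if xs[-1] != width - tile:
        xs.append(width - tile)
    return [(y, x) for y in ys for x in xs]
```

For a 20000 × 20000 slice this yields several thousand overlapping blocks, roughly four times as many as non-overlapping tiling would produce.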
In one embodiment, before acquiring the human tissue slice image, the method further comprises:
randomly selecting any one or more preprocessing methods among rotation, flipping, cropping and illumination variation;
randomly selecting a human tissue slice image from the human tissue image set, and preprocessing the randomly selected human tissue slice image with the randomly selected preprocessing methods.
The human tissue image set comprises at least one human tissue slice image. It can be understood that, because of differences in staining, slide preparation and scanning instruments, the human tissue slice images may differ in brightness, color saturation and so on. Therefore, to improve the classification accuracy of the subsequent models, one or more of the rotation, flipping, cropping and illumination-variation preprocessing methods are selected at random, and the human tissue slice image randomly selected from the human tissue image set is preprocessed with the selected methods.
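The random selection of preprocessing methods can be sketched as follows (the identity placeholders stand in for real image transforms, which would typically be applied with a library such as PIL or OpenCV; the function and dictionary names are illustrative assumptions):

```python
import random

# Placeholder augmentation ops for the four preprocessing methods
# named in the patent. Each placeholder returns the image unchanged;
# a real pipeline would rotate, flip, crop, or jitter the pixels.
AUGMENTATIONS = {
    "rotate":   lambda img: img,  # placeholder: rotate by a random angle
    "flip":     lambda img: img,  # placeholder: horizontal/vertical flip
    "crop":     lambda img: img,  # placeholder: random crop and resize
    "lighting": lambda img: img,  # placeholder: brightness/colour change
}

def random_preprocess(img, rng=random):
    """Apply a randomly chosen non-empty subset of the augmentations,
    mirroring 'any one or more preprocessing methods' in the text."""
    names = rng.sample(list(AUGMENTATIONS),
                       k=rng.randint(1, len(AUGMENTATIONS)))
    for name in names:
        img = AUGMENTATIONS[name](img)
    return img, names
```

Passing an explicit `random.Random(seed)` as `rng` makes the selection reproducible, which is useful when debugging a training run.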
S20: and acquiring a preset combination model, wherein the preset combination model comprises a plurality of feature recognition models.
The feature recognition models may be, for example, the Inception-v4 model, the Dual-Path-Net model, the ResNet model, the DenseNet model, the SENet model and the like. Preferably, because the network structures of the ResNet, DenseNet and SENet models are simple, their numbers of layers are small, and these three models recognize image features quickly, the ResNet, DenseNet and SENet models are selected as the feature recognition models in this embodiment.
S30: inputting each image block into the preset combination model, and acquiring an image feature vector obtained after a plurality of feature recognition models perform feature recognition on each image block; and each feature recognition model obtains an image feature vector aiming at each image block.
Specifically, after a human tissue slice image is obtained and cut into a plurality of image blocks, each image block is input into a preset combination model, and an image feature vector after feature recognition is performed on each image block by a plurality of feature recognition models in the preset combination model is obtained.
In one embodiment, step S30 includes:
s301: and respectively carrying out feature recognition on each image block through the ResNet model, the DenseNet model and the SENet model to obtain an initial feature vector corresponding to each image block.
Specifically, feature recognition is performed on each image block by the ResNet, DenseNet and SENet models respectively to obtain the initial feature vectors corresponding to each image block; it can be understood that each of the ResNet, DenseNet and SENet models derives one initial feature vector for each image block.
S302: and converting the characteristic length corresponding to each initial characteristic vector into the preset length through a full connection layer in the ResNet model, the DenseNet model and the SENet model to obtain the image characteristic vector corresponding to each image block.
It should be noted that, in order to provide sufficient image features to the preset classification model in the subsequent step S50, the output feature lengths of the fully connected layers in the ResNet, DenseNet and SENet models are all set to a uniform preset length; preferably, the preset length is 1028.
Specifically, after feature recognition is performed on each image block by the ResNet, DenseNet and SENet models to obtain the corresponding initial feature vectors, each initial feature vector passes through the fully connected layer of its model, its feature length is converted into the preset length, and the converted vector is recorded as the image feature vector corresponding to that image block.
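A toy illustration of converting a backbone's native feature length to the uniform preset length of 1028 through a fully connected layer (the weights here are random stand-ins rather than trained parameters, and `in_dim=64` is an arbitrary example of a backbone's native output length):

```python
import random

PRESET_LEN = 1028  # uniform output length of each model's final FC layer

def fc_layer(in_dim, out_dim=PRESET_LEN, rng=random.Random(0)):
    """Return a toy fully connected layer mapping an in_dim vector to
    an out_dim vector. In the patent, the weights would be those
    learned by each backbone's final fully connected layer; here they
    are small random values, purely to show the length conversion."""
    w = [[rng.uniform(-0.01, 0.01) for _ in range(in_dim)]
         for _ in range(out_dim)]
    def apply(v):
        # Plain matrix-vector product: one output per weight row.
        return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]
    return apply

# Backbones may emit different native lengths, but the FC layer
# converts all of them to the same preset length.
to_preset = fc_layer(in_dim=64)
feature = to_preset([0.5] * 64)
```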
S40: and performing feature fusion on each image feature vector corresponding to the same image block to obtain a fusion feature vector corresponding to the image block.
Specifically, after each image block is input into the preset combination model and the image feature vectors produced by the feature recognition models for each image block are obtained, feature fusion is performed on the image feature vectors corresponding to the same image block to obtain the fusion feature vector for that image block. It can be understood that, for one image block, after the image feature vectors identified by the ResNet, DenseNet and SENet models are fused, the corresponding fusion feature vector has 3 × 1028 = 3084 dimensions.
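The fusion step here is concatenation, which can be sketched as follows (assuming, as above, three models each emitting a 1028-length vector; the function name is illustrative):

```python
def fuse_features(vectors):
    """Concatenate the per-model image feature vectors of one image
    block into a single fusion feature vector. With three models each
    emitting a 1028-length vector, the fused vector has
    3 * 1028 = 3084 dimensions, matching the patent's figures."""
    fused = []
    for v in vectors:
        fused.extend(v)
    return fused

fused = fuse_features([[0.0] * 1028 for _ in range(3)])
```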
S50: and inputting each fusion feature vector into a preset classification model, and determining the section classification features corresponding to the human tissue section images.
Each slice classification feature corresponds to one classification. The slice classification feature may be, for example, a negative slice feature (corresponding to the negative classification) or a positive slice feature; further, positive slice features include tumor slice features (corresponding to the tumor classification) and cancerous slice features (corresponding to the cancer classification).
Preferably, a random forest model is selected as the preset classification model. The random forest model is an ensemble learning model based on a plurality of decision trees; it takes high-dimensional features as input and outputs categories. For this reason, as described above, the output feature lengths of the fully connected layers in the ResNet, DenseNet and SENet models are all set to 1028, and the resulting fusion feature vector has 3084 dimensions.
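The ensemble-voting idea behind the random forest can be sketched as follows (the stump "trees" and their threshold dimensions are purely illustrative; a real implementation would use a trained random forest such as scikit-learn's `RandomForestClassifier` over the 3084-dimensional fusion vectors):

```python
def forest_predict(trees, feature_vector):
    """Ensemble vote: each 'tree' is a callable mapping a fused
    feature vector to a class label; the forest returns the label
    with the most votes, as a random forest does at inference."""
    votes = [tree(feature_vector) for tree in trees]
    return max(set(votes), key=votes.count)

# Toy stump trees, each keyed on a single dimension of the
# 3084-dimensional fused vector (dimensions chosen arbitrarily).
trees = [
    lambda v: "negative" if v[0] < 0.5 else "cancer",
    lambda v: "negative" if v[10] < 0.5 else "tumor",
    lambda v: "cancer" if v[20] > 0.5 else "negative",
]
label = forest_predict(trees, [0.0] * 3084)
```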
Specifically, as shown in fig. 3, step S50 includes the following steps:
s501: and identifying and classifying the fusion feature vectors through the preset classification model, and determining the image block category corresponding to each image block.
The image block category is the category to which each image block belongs; each image block has one corresponding image block category, and the image block categories comprise negative image blocks, tumor image blocks and cancerous image blocks.
Specifically, after feature fusion is performed on each image feature vector corresponding to the same image block to obtain a fusion feature vector corresponding to the image block, each fusion feature vector is input into a preset classification model, and each fusion feature vector is identified and classified through the preset classification model to determine an image block category corresponding to each image block.
S502: and determining the slice classification characteristics corresponding to the human tissue slice images according to the ratio of the number of the image blocks corresponding to each image block category to the total number of all the image blocks.
Specifically, after the fusion feature vectors are identified and classified through the preset classification model and the image block category of each image block is determined, the ratio of the number of image blocks in each image block category to the total number of image blocks in the human tissue slice image is computed, and the slice classification feature corresponding to the human tissue slice image is determined from these ratios.
Further, as shown in fig. 4, step S502 includes:
s5021: and recording the number corresponding to the negative image blocks as a first number.
S5022: and when the first ratio of the first number to the total number is greater than or equal to a preset ratio threshold, determining that the section classification feature corresponding to the human tissue section image is a negative section feature.
S5023: and when the first ratio of the first quantity to the total quantity is smaller than a preset ratio threshold value, determining that the section classification characteristic corresponding to the human tissue section image is a positive section characteristic.
The preset ratio threshold is set according to historical experience and is preferably 85%.
Specifically, after identifying and classifying each fusion feature vector through the preset classification model and determining the image block category corresponding to each image block, counting the number of the image block categories which are negative image blocks and recording the number as a first number; and determining the ratio of the first number to the total number of all image blocks in the human tissue slice image, and determining the slice classification characteristic corresponding to the human tissue slice image as a negative slice characteristic when the ratio is greater than or equal to a preset ratio threshold.
Further, when the ratio of the first number to the total number of all image blocks in the human tissue slice image is smaller than a preset ratio threshold, determining that the slice classification feature corresponding to the human tissue slice image is a positive slice feature.
In a specific embodiment, after step S5023, that is, after determining that the slice classification feature corresponding to the human tissue slice image is a positive slice feature because the first ratio of the first number to the total number is smaller than the preset ratio threshold, the method further comprises:
s5024: and detecting whether the image block type corresponding to each image block simultaneously comprises the tumor image block and the cancerous image block.
Specifically, when the first ratio of the first number to the total number is smaller than the preset ratio threshold and the slice classification feature corresponding to the human tissue slice image is determined to be a positive slice feature, it is further necessary to determine whether the positive slice feature is a tumor slice feature or a cancerous slice feature; therefore, whether the image block categories corresponding to the image blocks simultaneously include tumor image blocks and cancerous image blocks is detected.
S5025: if the tumor image blocks and the cancerous image blocks are contained at the same time, recording the number corresponding to the tumor image blocks as a second number; and recording the number corresponding to the cancerous image patch as a third number.
S5026: determining the positive slice feature as a neoplastic slice feature when the second number is greater than the third number.
S5027: determining the positive slice features as cancerous slice features when the second number is less than or equal to the third number.
Specifically, after detecting whether the image block categories corresponding to the image blocks simultaneously include tumor image blocks and cancerous image blocks: if both are included, the number of tumor image blocks is recorded as a second number and the number of cancerous image blocks as a third number; when the second number is greater than the third number, the positive slice feature is determined to be a tumor slice feature; when the second number is less than or equal to the third number, the positive slice feature is determined to be a cancerous slice feature.
Further, after detecting whether the image block categories corresponding to the image blocks simultaneously include tumor image blocks and cancerous image blocks, when the image block categories do not include tumor image blocks, that is, the current image block categories only contain negative image blocks and cancerous image blocks, and the number of negative image blocks is smaller than the number of cancerous image blocks, the positive slice feature is determined to be a cancerous slice feature.
Further, after detecting whether the image block categories corresponding to the image blocks simultaneously include tumor image blocks and cancerous image blocks, when the image block categories do not include cancerous image blocks, that is, the current image block categories only contain negative image blocks and tumor image blocks, and the number of negative image blocks is smaller than the number of tumor image blocks, the positive slice feature is determined to be a tumor slice feature.
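The decision logic of steps S5021-S5027 can be sketched as a single function (the label strings and function name are illustrative assumptions; the 85% default is the patent's preferred preset ratio threshold):

```python
def classify_slice(block_labels, negative_threshold=0.85):
    """Slice-level decision from per-block labels, following steps
    S5021-S5027: labels are 'negative', 'tumor' or 'cancer'.

    If the share of negative blocks reaches the threshold, the slice
    is negative; otherwise it is positive, and the tie between tumor
    and cancer is broken by block counts, with ties (second number
    less than or equal to the third) going to the cancerous class.
    """
    total = len(block_labels)
    n_negative = block_labels.count("negative")
    if n_negative / total >= negative_threshold:
        return "negative slice feature"
    n_tumor = block_labels.count("tumor")
    n_cancer = block_labels.count("cancer")
    if n_tumor > n_cancer:
        return "tumor slice feature"
    return "cancer slice feature"
```

Note that the two single-class fallbacks described above emerge for free: when only cancerous (or only tumor) positive blocks are present, the count comparison picks that class directly.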
In this embodiment, feature recognition is performed on the image blocks of the human tissue slice image by a combination model comprising a plurality of feature recognition models, combining the advantages of each feature recognition model and thereby improving the accuracy of feature recognition. The features obtained by each model are fused, and the high-dimensional fusion feature vectors are input into a preset classification model (preferably a random forest model in this embodiment), so that the obtained slice classification features are more comprehensive, the classification precision is improved, and the recognition bias of a single model is avoided (for example, if a model is trained to recognize only one correct classification result, it may overfit and then recognize only one kind of slice, such as only negative slices).
In another embodiment, in order to ensure the privacy and security of the human tissue slice image and the preset combination model in the above embodiments, the human tissue slice image and the preset combination model may be stored in a blockchain. The blockchain is an encrypted, chained transaction storage structure formed of blocks.
For example, the header of each block may include the hash values of all transactions in the block as well as the hash values of all transactions in the previous block, achieving tamper resistance and forgery resistance of the transactions based on these hash values. Newly generated transactions, after being filled into a block and passing the consensus of the nodes in the blockchain network, are appended to the end of the blockchain to form chain growth.
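A minimal sketch of the hash-linked storage described above (the field names and the use of SHA-256 are illustrative assumptions; the patent does not fix a particular hash algorithm or block layout):

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Build a toy block whose header carries the hash of its own
    transactions and the hash of the previous block, so tampering
    with any stored transaction breaks every later link."""
    tx_hash = hashlib.sha256(
        json.dumps(transactions, sort_keys=True).encode()).hexdigest()
    header = {"tx_hash": tx_hash, "prev_hash": prev_hash}
    block_hash = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {"header": header, "hash": block_hash,
            "transactions": transactions}

# Chain two blocks: e.g. a slice-image digest, then a model digest.
genesis = make_block(["store slice image digest"], prev_hash="0" * 64)
block2 = make_block(["store model digest"], prev_hash=genesis["hash"])
```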
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In an embodiment, a multi-model-based slice feature classification device is provided, and the multi-model-based slice feature classification device corresponds to the multi-model-based slice feature classification method in the above embodiments one to one. As shown in fig. 5, the multi-model based slice feature classification apparatus includes an image acquisition module 10, a model acquisition module 20, a feature recognition module 30, a feature fusion module 40, and a feature classification module 50. The functional modules are explained in detail as follows:
the image acquisition module 10 is configured to acquire a human tissue slice image and cut the human tissue slice image into a plurality of image blocks.
The model obtaining module 20 is configured to obtain a preset combination model, where the preset combination model includes a plurality of feature recognition models.
The feature recognition module 30 is configured to input each image block into the preset combination model and obtain the image feature vectors produced when the feature recognition models perform feature recognition on each image block; each feature recognition model obtains an image feature vector for each image block.
And the feature fusion module 40 is configured to perform feature fusion on each image feature vector corresponding to the same image block to obtain a fusion feature vector corresponding to the image block.
And the feature classification module 50 is configured to input each of the fusion feature vectors into a preset classification model, and determine a slice classification feature corresponding to the human tissue slice image.
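The five modules above form a pipeline: cut the slice image into blocks, run each block through several recognition models, fuse the per-model features, and classify. A minimal sketch of that data flow, using NumPy stand-ins (`model_a`/`model_b`/`model_c`) in place of the trained feature recognition models and a toy threshold classifier — all names, the block size, and the threshold are illustrative assumptions, not the patent's actual models:

```python
import numpy as np

def cut_into_blocks(image, block=4):
    """Image acquisition: split an HxW slice image into non-overlapping blocks."""
    h, w = image.shape[:2]
    return [image[r:r + block, c:c + block]
            for r in range(0, h, block) for c in range(0, w, block)]

# Stand-ins for the feature recognition models in the preset combination model.
def model_a(block): return np.array([block.mean()])
def model_b(block): return np.array([block.std()])
def model_c(block): return np.array([block.max()])

def classify_slice(image, models, classifier):
    blocks = cut_into_blocks(image)
    # Feature fusion: concatenate each model's vector for the same image block.
    fused = [np.concatenate([m(b) for b in [blk] for m in models]) for blk in blocks]
    block_labels = [classifier(v) for v in fused]
    return blocks, block_labels

image = np.arange(64, dtype=float).reshape(8, 8)
blocks, labels = classify_slice(
    image, [model_a, model_b, model_c],
    classifier=lambda v: "negative" if v[0] < 32 else "positive")
assert len(blocks) == 4
assert labels == ["negative", "negative", "positive", "positive"]
```

In the patent the classifier operates on the fused vector of all three backbones; here the toy classifier only looks at the first fused component, purely to keep the sketch runnable.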
Preferably, the feature recognition module 30 comprises the following units:
The feature recognition unit is configured to perform feature recognition on each image block through the ResNet model, the DenseNet model and the SENet model, respectively, to obtain an initial feature vector corresponding to each image block.
The feature length conversion unit is configured to convert the feature length corresponding to each initial feature vector into the preset length through a full connection layer in each of the ResNet model, the DenseNet model and the SENet model, to obtain the image feature vector corresponding to each image block.
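The length-conversion step can be illustrated as follows. The backbone output lengths (512/1024/2048) and the preset length of 8 are assumed values for demonstration only, and the random-weight `full_connection` stands in for a trained full connection layer:

```python
import numpy as np

rng = np.random.default_rng(0)
PRESET_LEN = 8  # assumed value for the preset feature length

def full_connection(in_len, out_len):
    """Stand-in fully connected layer: out = W @ x (W would be learned in training)."""
    W = rng.standard_normal((out_len, in_len))
    return lambda x: W @ x

# The three backbones emit initial feature vectors of different lengths...
initial = {"ResNet": rng.standard_normal(512),
           "DenseNet": rng.standard_normal(1024),
           "SENet": rng.standard_normal(2048)}

# ...and each model's final full connection layer maps them to the preset length,
# so that the per-model vectors for one image block can later be fused.
fc = {name: full_connection(v.shape[0], PRESET_LEN) for name, v in initial.items()}
image_features = {name: fc[name](v) for name, v in initial.items()}
assert all(f.shape == (PRESET_LEN,) for f in image_features.values())
```

Projecting every backbone to one common length is what makes the subsequent feature fusion across heterogeneous models well defined.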
Preferably, as shown in fig. 6, the feature classification module 50 includes the following units:
an identifying and classifying unit 501, configured to identify and classify each of the fusion feature vectors through the preset classification model, and determine an image block category corresponding to each of the image blocks;
a classification feature determining unit 502, configured to determine a slice classification feature corresponding to the human tissue slice image according to a ratio of the number of image blocks corresponding to each of the image block categories to the total number of all image blocks.
Preferably, as shown in fig. 7, the classification feature determination unit 502 includes the following sub-units:
The first data recording subunit 5021 is configured to record the number corresponding to the negative image blocks as a first number.
The first feature determination subunit 5022 is configured to determine that the slice classification feature corresponding to the human tissue slice image is a negative slice feature when a first ratio of the first number to the total number is greater than or equal to a preset ratio threshold.
The second feature determination subunit 5023 is configured to determine that the slice classification feature corresponding to the human tissue slice image is a positive slice feature when the first ratio of the first number to the total number is less than the preset ratio threshold.
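A sketch of this ratio test, where the `"negative"` label string and the 0.9 threshold are assumed placeholders for the actual image block category and the preset ratio threshold:

```python
from collections import Counter

def slice_level_label(block_labels, ratio_threshold=0.9):
    """Decide negative vs. positive from the share of negative image blocks.

    ratio_threshold is an assumed value for the preset ratio threshold.
    """
    counts = Counter(block_labels)
    first_number = counts["negative"]               # count of negative image blocks
    first_ratio = first_number / len(block_labels)  # first number / total number
    return "negative" if first_ratio >= ratio_threshold else "positive"

# 9 of 10 blocks negative: ratio 0.9 >= 0.9, so the whole slice is negative.
assert slice_level_label(["negative"] * 9 + ["cancerous"]) == "negative"
# Only half negative: ratio 0.5 < 0.9, so the slice is positive.
assert slice_level_label(["negative"] * 5 + ["neoplastic"] * 5) == "positive"
```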
Preferably, the classification feature determination unit 502 further comprises the following sub-units:
The image block detection subunit is configured to detect whether the image block categories corresponding to the image blocks include both the neoplastic image block and the cancerous image block.
The second data recording subunit is configured to, when both the neoplastic image block and the cancerous image block are included, record the number corresponding to the neoplastic image blocks as a second number, and record the number corresponding to the cancerous image blocks as a third number.
The third feature determination subunit is configured to determine the positive slice feature as a neoplastic slice feature when the second number is greater than the third number.
The fourth feature determination subunit is configured to determine the positive slice feature as a cancerous slice feature when the second number is less than or equal to the third number.
Preferably, the classification feature determination unit 502 further comprises the following sub-units:
The fifth feature determination subunit is configured to determine the positive slice feature as the cancerous slice feature when the image block categories do not include the neoplastic image block.
The sixth feature determination subunit is configured to determine the positive slice feature as a neoplastic slice feature when the image block categories do not include the cancerous image block.
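Taken together, the subunits above amount to the following decision logic for a positive slice (matching claims 5 and 6). The label strings and function name are illustrative assumptions:

```python
from collections import Counter

def positive_subtype(block_labels):
    """Refine a positive slice into a neoplastic or cancerous slice feature."""
    counts = Counter(block_labels)
    second_number = counts["neoplastic"]  # count of neoplastic image blocks
    third_number = counts["cancerous"]    # count of cancerous image blocks
    if second_number and third_number:    # both categories present: compare counts
        return "neoplastic" if second_number > third_number else "cancerous"
    if second_number == 0:                # no neoplastic blocks -> cancerous slice
        return "cancerous"
    return "neoplastic"                   # no cancerous blocks -> neoplastic slice

assert positive_subtype(["neoplastic"] * 3 + ["cancerous"] * 2) == "neoplastic"
assert positive_subtype(["neoplastic", "cancerous", "cancerous"]) == "cancerous"
assert positive_subtype(["cancerous", "negative"]) == "cancerous"
```

Note that a tie (second number equal to third number) is resolved toward the cancerous slice feature, i.e. the more severe finding, as the fourth feature determination subunit specifies.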
Preferably, the multi-model based slice feature classification device further comprises the following modules:
The preprocessing selection module is used for randomly selecting any one or more preprocessing methods among rotation, flipping, cropping and illumination change.
The image preprocessing module is used for randomly selecting a human tissue slice image from the human tissue image set and preprocessing the randomly selected human tissue slice image with the randomly selected preprocessing methods.
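A sketch of this random preprocessing, with the four methods (rotation, flipping, cropping, illumination change) implemented as simple NumPy transforms; the specific rotation angles, crop margin, and brightness range are assumptions for illustration:

```python
import random
import numpy as np

AUGMENTATIONS = {
    "rotation": lambda img: np.rot90(img, k=random.choice([1, 2, 3])),
    "flipping": lambda img: np.flip(img, axis=random.choice([0, 1])),
    "cropping": lambda img: img[1:-1, 1:-1],
    "illumination": lambda img: np.clip(img * random.uniform(0.8, 1.2), 0, 255),
}

def random_preprocess(image_set):
    """Pick a random slice image, then apply 1..4 randomly chosen methods to it."""
    image = random.choice(image_set)
    chosen = random.sample(list(AUGMENTATIONS), k=random.randint(1, len(AUGMENTATIONS)))
    for name in chosen:
        image = AUGMENTATIONS[name](image)
    return image, chosen

random.seed(0)
image_set = [np.full((6, 6), 100.0), np.full((6, 6), 50.0)]
out, applied = random_preprocess(image_set)
assert 1 <= len(applied) <= 4
```

Randomizing both the image choice and the method combination is what gives the training set the variability that augmentation is meant to provide.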
For the specific definition of the multi-model based slice feature classification apparatus, reference may be made to the above definition of the multi-model based slice feature classification method, which is not repeated here. The modules in the multi-model based slice feature classification apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules can be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can call them and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing the data used in the multi-model based slice feature classification method in the above embodiments. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a multi-model based slice feature classification method.
In one embodiment, a computer device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the multi-model based slice feature classification method in the above embodiments is implemented.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the multi-model based slice feature classification method in the above-described embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division of the functional units and modules described above is illustrated; in practical applications, the above functions may be distributed among different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A slice feature classification method based on multiple models is characterized by comprising the following steps:
acquiring a human tissue slice image, and cutting the human tissue slice image into a plurality of image blocks;
acquiring a preset combination model, wherein the preset combination model comprises a plurality of feature recognition models;
inputting each image block into the preset combination model, and acquiring an image feature vector obtained after a plurality of feature recognition models perform feature recognition on each image block; each feature recognition model obtains an image feature vector for each image block;
performing feature fusion on each image feature vector corresponding to the same image block to obtain a fusion feature vector corresponding to the image block;
and inputting each fusion feature vector into a preset classification model, and determining the section classification features corresponding to the human tissue section images.
2. The multi-model based slice feature classification method of claim 1, wherein the feature recognition model comprises: ResNet model, DenseNet model and SENet model;
wherein the feature lengths of the full connection layers in the feature recognition models are set to a preset length;
the inputting of each image block into the preset combination model, and obtaining an image feature vector obtained by performing feature recognition on each image block by the plurality of feature recognition models, includes:
respectively carrying out feature recognition on each image block through the ResNet model, the DenseNet model and the SENet model to obtain an initial feature vector corresponding to each image block;
and converting the feature length corresponding to each initial feature vector into the preset length through the full connection layer in each of the ResNet model, the DenseNet model and the SENet model, to obtain the image feature vector corresponding to each image block.
3. The method for classifying slice features based on multiple models according to claim 1, wherein the step of inputting each fused feature vector into a preset classification model and determining slice classification features corresponding to the human tissue slice image comprises:
identifying and classifying the fusion feature vectors through the preset classification model, and determining image block categories corresponding to the image blocks;
and determining the slice classification characteristics corresponding to the human tissue slice images according to the ratio of the number of the image blocks corresponding to each image block category to the total number of all the image blocks.
4. The multi-model based slice feature classification method of claim 3, wherein the image block categories include negative image blocks;
determining slice classification characteristics corresponding to the human tissue slice images according to the ratio of the number of the image blocks corresponding to each image block category to the total number of all the image blocks, wherein the slice classification characteristics comprise:
recording the number corresponding to the negative image blocks as a first number;
when a first ratio of the first number to the total number is greater than or equal to a preset ratio threshold, determining that the slice classification feature corresponding to the human tissue slice image is a negative slice feature;
and when the first ratio of the first quantity to the total quantity is smaller than a preset ratio threshold value, determining that the section classification characteristic corresponding to the human tissue section image is a positive section characteristic.
5. The multi-model based slice feature classification method of claim 4, wherein the image patch categories further include neoplastic image patches and cancerous image patches;
when the first ratio of the first quantity to the total quantity is smaller than a preset ratio threshold, after determining that the section classification feature corresponding to the human tissue section image is a positive section feature, the method includes:
detecting whether the image block type corresponding to each image block simultaneously comprises the neoplastic image block and the cancerous image block;
if the tumor image blocks and the cancerous image blocks are contained at the same time, recording the number corresponding to the tumor image blocks as a second number; recording the number corresponding to the cancerous image block as a third number;
determining the positive slice features as neoplastic slice features when the second number is greater than the third number;
determining the positive slice features as cancerous slice features when the second number is less than or equal to the third number.
6. The method for multi-model-based slice feature classification as claimed in claim 5, wherein after detecting whether the image block category corresponding to each of the image blocks includes the cancerous image block and the neoplastic image block at the same time, further comprising:
determining the positive slice feature as the cancerous slice feature when the image patch category does not include the neoplastic image patch;
and when the image block category does not comprise the cancerous image block, determining the positive slice feature as a neoplastic slice feature.
7. The multi-model based slice feature classification method of claim 1, characterized in that before the acquiring a human tissue slice image, the method further comprises:
randomly selecting any one or more preprocessing methods among rotation, flipping, cropping and illumination change;
randomly selecting a human tissue slice image from a human tissue image set, and preprocessing the randomly selected human tissue slice image with the randomly selected preprocessing methods.
8. A slice feature classification device based on multiple models is characterized by comprising:
the image acquisition module is used for acquiring a human tissue slice image and cutting the human tissue slice image into a plurality of image blocks;
the model acquisition module is used for acquiring a preset combination model, and the preset combination model comprises a plurality of feature recognition models;
the characteristic identification module is used for inputting each image block into the preset combination model and acquiring an image characteristic vector obtained after the characteristic identification of each image block is carried out by a plurality of characteristic identification models; each feature recognition model obtains an image feature vector for each image block;
the feature fusion module is used for performing feature fusion on each image feature vector corresponding to the same image block to obtain a fusion feature vector corresponding to the image block;
and the feature classification module is used for inputting each fusion feature vector into a preset classification model and determining the slice classification features corresponding to the human tissue slice images.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the multi-model based slice feature classification method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, implements the multi-model based slice feature classification method according to any one of claims 1 to 7.
CN202110033842.7A 2021-01-11 2021-01-11 Multi-model-based slice feature classification method, device, equipment and medium Active CN112819032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110033842.7A CN112819032B (en) 2021-01-11 2021-01-11 Multi-model-based slice feature classification method, device, equipment and medium


Publications (2)

Publication Number Publication Date
CN112819032A true CN112819032A (en) 2021-05-18
CN112819032B CN112819032B (en) 2023-10-27

Family

ID=75868737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110033842.7A Active CN112819032B (en) 2021-01-11 2021-01-11 Multi-model-based slice feature classification method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN112819032B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033032A (en) * 2019-03-29 2019-07-19 中国科学院西安光学精密机械研究所 A kind of histotomy classification method based on micro- high light spectrum image-forming technology
CN111291789A (en) * 2020-01-19 2020-06-16 华东交通大学 Breast cancer image identification method and system based on multi-stage multi-feature deep fusion
CN111402268A (en) * 2020-03-16 2020-07-10 苏州科技大学 Method for segmenting liver and focus thereof in medical image
CN111583210A (en) * 2020-04-29 2020-08-25 北京小白世纪网络科技有限公司 Automatic breast cancer image identification method based on convolutional neural network model integration
US20200349697A1 (en) * 2019-05-02 2020-11-05 Curacloud Corporation Method and system for intracerebral hemorrhage detection and segmentation based on a multi-task fully convolutional network
CN111915613A (en) * 2020-08-11 2020-11-10 华侨大学 Image instance segmentation method, device, equipment and storage medium


Also Published As

Publication number Publication date
CN112819032B (en) 2023-10-27

Similar Documents

Publication Publication Date Title
CN111767707B (en) Method, device, equipment and storage medium for detecting Leideogue cases
CN111950656B (en) Image recognition model generation method and device, computer equipment and storage medium
JP2021532434A (en) Face feature extraction model Training method, face feature extraction method, device, equipment and storage medium
CN111860147A (en) Pedestrian re-identification model optimization processing method and device and computer equipment
CN110473172B (en) Medical image anatomical centerline determination method, computer device and storage medium
CN111832581B (en) Lung feature recognition method and device, computer equipment and storage medium
CN113705685B (en) Disease feature recognition model training, disease feature recognition method, device and equipment
CN110335248B (en) Medical image focus detection method, device, computer equipment and storage medium
CN112434556A (en) Pet nose print recognition method and device, computer equipment and storage medium
CN111275102A (en) Multi-certificate type synchronous detection method and device, computer equipment and storage medium
CN112017745A (en) Decision information recommendation method, decision information recommendation device, medicine information recommendation method, medicine information recommendation device, equipment and medium
CN114511547A (en) Pathological section image quality control method, device, equipment and storage medium
CN112016311A (en) Entity identification method, device, equipment and medium based on deep learning model
CN111783062A (en) Verification code identification method and device, computer equipment and storage medium
CN111767192A (en) Service data detection method, device, equipment and medium based on artificial intelligence
CN113283388B (en) Training method, device, equipment and storage medium of living body face detection model
CN110580507A (en) city texture classification and identification method
CN111460419B (en) Internet of things artificial intelligence face verification method and Internet of things cloud server
CN111738182B (en) Identity verification method, device, terminal and storage medium based on image recognition
CN109325448A (en) Face identification method, device and computer equipment
CN112819032A (en) Multi-model-based slice feature classification method, device, equipment and medium
CN113705270B (en) Method, device, equipment and storage medium for identifying two-dimensional code positioning code area
CN111428553B (en) Face pigment spot recognition method and device, computer equipment and storage medium
CN110597874B (en) Data analysis model creation method and device, computer equipment and storage medium
Kanwal et al. Evaluation method, dataset size or dataset content: how to evaluate algorithms for image matching?

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant