CN113724276B - Polyp image segmentation method and device - Google Patents

Polyp image segmentation method and device

Info

Publication number
CN113724276B
Authority
CN
China
Prior art keywords
polyp
image
feature
features
shallow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110889919.0A
Other languages
Chinese (zh)
Other versions
CN113724276A (en)
Inventor
李镇
魏军
胡译文
周少华
崔曙光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chinese University of Hong Kong Shenzhen
Original Assignee
Chinese University of Hong Kong Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chinese University of Hong Kong Shenzhen filed Critical Chinese University of Hong Kong Shenzhen
Priority to CN202110889919.0A priority Critical patent/CN113724276B/en
Publication of CN113724276A publication Critical patent/CN113724276A/en
Application granted granted Critical
Publication of CN113724276B publication Critical patent/CN113724276B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a polyp image segmentation method and device, comprising the following steps: acquiring a polyp image to be input; selecting a reference image with a color different from that of the polyp image from a preset training set, and exchanging the colors of the reference image and the polyp image; extracting shallow features and deep features from the color-exchanged polyp image, suppressing background noise of the shallow features by using a shallow attention model, and fusing the shallow features and the deep features; and performing predictive response value re-equalization processing on the fused features by adopting a probability correction strategy model to obtain a polyp feature image with clear edges. The invention can accurately and efficiently segment the polyp region from the image and generalizes well in a variety of complex real-world scenes.

Description

Polyp image segmentation method and device
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a polyp image segmentation method and device.
Background
Polyps, particularly multiple polyps, are prone to becoming cancerous, so early screening and treatment of polyps is highly desirable. As a computer vision task, polyp segmentation can automatically segment polyp regions in images or videos, greatly reducing the workload of doctors; establishing an accurate polyp segmentation model is therefore of great significance for clinical medical diagnosis.
At present, PraNet, which is based on a parallel reverse-attention network, is the most common prior art. PraNet first extracts features of different semantic levels from a polyp image using a Res2Net neural network, and then aggregates the high-semantic-level features with a parallel decoder to obtain the global context information of the image; because high-semantic-level features lose too much detail information, the polyp segmentation result obtained in this way is relatively coarse. To further mine polyp boundary cues, PraNet utilizes a reverse attention module to construct the relationship between polyp regions and polyp boundaries. By continually refining polyp regions and polyp boundaries against each other, PraNet can obtain more accurate polyp segmentation predictions.
Although PraNet can achieve relatively accurate results, it suffers from two important drawbacks: (1) poor segmentation of small polyp targets: small polyps lose too much information in high-semantic-level features, which is difficult to recover directly; in addition, the boundary labels of small polyps carry larger errors, which strongly affect the final segmentation result; (2) disregard of the color bias present in the dataset: in general, polyp images acquired under different conditions differ greatly in color, which interferes with the training of the polyp segmentation model; especially when training images are few, the model easily overfits to polyp colors, so that its generalization ability in practical application scenarios is noticeably poor.
Disclosure of Invention
To solve the above technical problem, an embodiment of the present invention provides a method for segmenting a polyp image, including:
acquiring a polyp image to be input;
selecting a reference image with a color different from that of the polyp image from a preset training set, and exchanging the colors of the reference image and the polyp image;
extracting shallow features and deep features from the color-exchanged polyp image, suppressing background noise of the shallow features by using a shallow attention model, and fusing the shallow features and the deep features;
and performing predictive response value re-equalization processing on the fused features by adopting a probability correction strategy model to obtain a polyp feature image with clear edges.
Further, the exchanging of the colors of the reference image and the polyp image includes:
Converting the colors of the polyp image X1 and the reference image X2 from an RGB color space to a LAB color space to obtain color values L1 and L2 of the polyp image X1 and the reference image X2 in the LAB color space;
calculating the mean value and standard deviation of the polyp image X1 in the LAB color space and the mean value and standard deviation of the reference image X2 in the LAB color space;
And obtaining the color value of the polyp image Y1 in the RGB color space and the color value of the reference image Y2 in the RGB color space by using a preset color conversion formula.
Further, the suppressing of the background noise of the shallow features by using the shallow attention model includes:
upsampling the deep features by bilinear interpolation so that the resolution of the sampled deep features is the same as that of the shallow features;
Selecting elements larger than 0 from the sampled deep features, and determining the elements as an attention map of the shallow features to obtain deep features to be fused;
and multiplying the deep features to be fused with the shallow features element by element to obtain shallow features with background noise suppressed.
Further, the fusing the shallow features and the deep features includes:
extracting the first feature, second feature and third feature at the last three scales obtained when the background-noise-suppressed shallow features are processed by a convolutional neural network;
fusing the first feature and the second feature to obtain a first fused feature;
fusing the second feature and the third feature to obtain a second fused feature;
and concatenating the first fusion feature and the second fusion feature along the channel dimension to obtain the final fusion feature.
Further, the performing of predictive response value re-equalization processing on the fused features by adopting the probability correction strategy model to obtain a polyp feature image with clear edges includes the following steps:
Counting the number of pixels with the feature response value larger than 0 in the fused polyp feature image to obtain a first pixel value;
counting the number of pixels with the feature response value smaller than 0 in the fused polyp feature image to obtain a second pixel value;
And normalizing the first pixel value and the second pixel value, dividing a characteristic response value which is larger than 0 in the polyp characteristic image by the normalized first pixel value, and dividing a characteristic response value which is smaller than 0 in the polyp characteristic image by the normalized second pixel value to obtain the corrected polyp characteristic image.
A segmentation apparatus for polyp images, comprising:
the acquisition module is used for acquiring a polyp image to be input;
the processing module is used for selecting a reference image with a color different from that of the polyp image from a preset training set and exchanging the colors of the reference image and the polyp image;
the processing module is further used for extracting shallow features and deep features from the color-exchanged polyp image, suppressing background noise of the shallow features by using a shallow attention model, and fusing the shallow features and the deep features;
and the execution module is used for carrying out predictive response value re-equalization processing on the fused features by adopting a probability correction strategy model to obtain polyp feature images with clear edges.
Further, the processing module includes:
A first processing sub-module, configured to convert colors of the polyp image X1 and the reference image X2 from an RGB color space to a LAB color space, and obtain color values L1 and L2 of the polyp image X1 and the reference image X2 in the LAB color space;
a second processing sub-module, configured to calculate a mean value and a standard deviation of a channel of the polyp image X1 in the LAB color space and a mean value and a standard deviation of a channel of the reference image X2 in the LAB color space;
And the third processing submodule is used for obtaining the color value of the polyp image Y1 in the RGB color space and the color value of the reference image Y2 in the RGB color space by utilizing a preset color conversion formula.
Further, the processing module includes:
a fourth processing sub-module, configured to upsample the deep features by bilinear interpolation so that the resolution of the sampled deep features is the same as that of the shallow features;
a first acquisition sub-module, configured to select the elements larger than 0 from the sampled deep features as the attention map of the shallow features, obtaining the deep features to be fused;
And the first execution submodule is used for multiplying the deep features to be fused with the shallow features element by element to obtain shallow features with background noise suppressed.
Further, the processing module includes:
a second acquisition sub-module, configured to extract the first feature, second feature and third feature at the last three scales obtained when the background-noise-suppressed shallow features are processed by a convolutional neural network;
a fifth processing sub-module, configured to fuse the first feature and the second feature to obtain a first fused feature;
A sixth processing sub-module, configured to fuse the second feature and the third feature to obtain a second fused feature;
and a second execution sub-module, configured to concatenate the first fusion feature and the second fusion feature along the channel dimension to obtain the final fusion feature.
Further, the execution module includes:
The third acquisition sub-module is used for counting the number of pixels with the feature response value larger than 0 in the fused polyp feature image to obtain a first pixel value;
a fourth obtaining sub-module, configured to count the number of pixels in the fused polyp feature image, where the feature response value is less than 0, to obtain a second pixel value;
And the third execution sub-module is used for carrying out normalization processing on the first pixel value and the second pixel value, dividing a characteristic response value which is larger than 0 in the polyp characteristic image by the normalized first pixel value, and dividing a characteristic response value which is smaller than 0 in the polyp characteristic image by the normalized second pixel value.
The embodiment of the invention has the beneficial effects that:
(1) Aiming at the problem of inaccurate segmentation of small polyp targets, the Shallow Attention Module (SAM) of the invention strengthens the model's ability to extract and utilize the shallow features of the neural network, because shallow features preserve more detail for small polyps. Unlike traditional methods that fuse multiple features directly by addition or concatenation, the SAM uses the deep features as guidance and removes the background noise in the shallow features through an attention mechanism, greatly improving the usability of the shallow features. In addition, the foreground and background pixel distribution of small polyp images is unbalanced; to address this, a Probability Correction Strategy (PCS) dynamically and adaptively corrects the response values according to the prediction result at the model inference stage, thereby sharpening the edges of the segmentation target and reducing the influence of the foreground-background distribution imbalance.
(2) In response to the dataset color bias problem, the present invention proposes a Color Exchange (CE) operation to eliminate the impact of color bias on model training. With CE, the colors of different images can be migrated to one another, and the same image can take on different color appearances, decoupling image color from image content so that the model can concentrate on the image content itself during training without being disturbed by color. Extensive quantitative and qualitative experiments show that the proposed SANet model can accurately and efficiently segment the polyp region from an image and generalizes well in a variety of complex real-world scenes.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a polyp image segmentation method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of effect comparison provided by the embodiment of the invention;
FIG. 3 is a schematic diagram illustrating another comparison of effects provided by the embodiment of the present invention;
fig. 4 is a schematic structural diagram of a polyp image segmentation apparatus according to an embodiment of the present invention;
Fig. 5 is a basic structural block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In order to enable those skilled in the art to better understand the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings.
Some of the flows described in the specification, the claims and the foregoing figures include a plurality of operations occurring in a particular order; it should be understood, however, that the operations may be performed out of the stated order or in parallel. Sequence numbers such as 101 and 102 merely distinguish the various operations and do not by themselves represent any execution order. In addition, the flows may include more or fewer operations, and those operations may be performed sequentially or in parallel. It should be noted that the terms "first" and "second" herein are used to distinguish different messages, devices, modules, etc.; they do not represent a sequence, nor do they require that the "first" and the "second" be of different types.
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
Referring to fig. 1, fig. 1 shows a method for segmenting a polyp image according to an embodiment of the present invention, including:
S1100, acquiring a polyp image to be input;
S1200, selecting a reference image with a color different from that of the polyp image from a preset training set, and exchanging the colors of the reference image and the polyp image;
S1300, extracting shallow features and deep features from the color-exchanged polyp image, suppressing background noise of the shallow features by using a shallow attention model, and fusing the shallow features and the deep features;
S1400, performing predictive response value re-equalization processing on the fused features by adopting a probability correction strategy model to obtain a polyp feature image with clear edges.
The Shallow Attention Module (SAM) of the present invention can enhance the model's ability to extract and utilize the shallow features of the neural network, as shallow features preserve more detail for small polyps. Unlike traditional methods that fuse multiple features directly by addition or concatenation, the SAM uses the deep features as guidance and removes the background noise in the shallow features through an attention mechanism, greatly improving the usability of the shallow features. In addition, the foreground and background pixel distribution of small polyp images is unbalanced; to address this, a Probability Correction Strategy (PCS) dynamically and adaptively corrects the response values according to the prediction result at the model inference stage, thereby sharpening the edges of the segmentation target and reducing the influence of the foreground-background distribution imbalance. Furthermore, in response to the dataset color bias problem, the present invention proposes a Color Exchange (CE) operation to eliminate the impact of color bias on model training. With CE, the colors of different images can be migrated to one another, and the same image can take on different color appearances, decoupling image color from image content so that the model can concentrate on the image content itself during training without being disturbed by color. Extensive quantitative and qualitative experiments show that the proposed SANet model can accurately and efficiently segment the polyp region from an image and generalizes well in a variety of complex real-world scenes.
The embodiment of the invention comprises three models, CE, SAM and PCS: the CE is used in the data augmentation stage to migrate the colors of different images onto the input image; the SAM is used in the feature fusion stage to bring the potential of shallow features into full play; and the PCS is used in the model inference stage to fine-tune the prediction result.
In particular, the color exchange operation acts directly on the input image, so that the same input image can exhibit different color styles during model training, as shown in fig. 2. Specifically, for any input image, an image with a different color is randomly selected from the training set as a reference, and its color is migrated to the input image. Because each reference image is selected at random, the same input image can present different color styles while its label remains unchanged; the model can therefore focus on the image content during training without being influenced by image color. Exchanging the colors of the reference image and the polyp image includes:
step one, converting the colors of the polyp image X1 and the reference image X2 from the RGB color space to the LAB color space, obtaining the color values L1 and L2 of the polyp image X1 and the reference image X2 in the LAB color space;
step two, calculating the mean value and standard deviation of the polyp image X1 in the LAB color space and the mean value and standard deviation of the reference image X2 in the LAB color space;
step three, obtaining the color value of the polyp image Y1 in the RGB color space and the color value of the reference image Y2 in the RGB color space by using a preset color conversion formula; a code sketch of these three steps is given below.
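As an illustration of steps one to three, the following Python sketch performs the color exchange with a Reinhard-style LAB statistics transfer. The patent does not disclose its exact preset color conversion formula, so the transfer equation and the helper name color_exchange here are assumptions.

```python
import cv2
import numpy as np

def color_exchange(x1, x2):
    """Swap the LAB color statistics of two RGB images (a Reinhard-style transfer).

    x1: polyp image, uint8 RGB array of shape (H, W, 3)
    x2: reference image, uint8 RGB array of shape (H', W', 3)
    Returns (y1, y2): x1 wearing x2's color style, and vice versa.
    """
    # Step one: convert both images from RGB to the LAB color space.
    l1 = cv2.cvtColor(x1, cv2.COLOR_RGB2LAB).astype(np.float32)
    l2 = cv2.cvtColor(x2, cv2.COLOR_RGB2LAB).astype(np.float32)

    # Step two: per-channel mean and standard deviation in LAB space.
    m1, s1 = l1.mean(axis=(0, 1)), l1.std(axis=(0, 1)) + 1e-6
    m2, s2 = l2.mean(axis=(0, 1)), l2.std(axis=(0, 1)) + 1e-6

    # Step three (assumed transfer formula): normalize by the source
    # statistics, re-scale by the other image's, and convert back to RGB.
    t1 = (l1 - m1) / s1 * s2 + m2     # x1 takes on x2's color statistics
    t2 = (l2 - m2) / s2 * s1 + m1     # x2 takes on x1's color statistics
    y1 = cv2.cvtColor(np.clip(t1, 0, 255).astype(np.uint8), cv2.COLOR_LAB2RGB)
    y2 = cv2.cvtColor(np.clip(t2, 0, 255).astype(np.uint8), cv2.COLOR_LAB2RGB)
    return y1, y2
```

During training, y1 (the polyp image wearing the reference image's color) would be fed to the network while its segmentation label stays unchanged, which is what decouples color from content.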
In the embodiment of the invention, small polyp images suffer a serious information loss during feature downsampling, so making full use of the shallow features, which contain abundant detail, is significant for small polyp target segmentation; however, owing to the limited receptive field, these features contain a large amount of background noise. Therefore, the invention provides the SAM, which uses the deep features to suppress the background noise of the shallow features, fully improving the usability of the shallow features and the model's segmentation of small polyp targets. Specifically, suppressing the background noise of the shallow features using the shallow attention model includes:
step one, upsampling the deep features by bilinear interpolation so that the resolution of the sampled deep features is the same as that of the shallow features;
step two, selecting the elements larger than 0 from the sampled deep features as the attention map of the shallow features, obtaining the deep features to be fused;
step three, multiplying the deep features to be fused with the shallow features element by element to obtain shallow features with the background noise suppressed.
In one embodiment of the present invention, the deep feature f_deep is upsampled to the same resolution as the shallow feature f_shallow by bilinear interpolation; the elements smaller than 0 are set to 0 to obtain the attention map, i.e. A = max(up(f_deep), 0); A and f_shallow are then multiplied element by element to suppress the background noise, i.e. f'_shallow = A ⊙ f_shallow.
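A minimal PyTorch sketch of this operation follows; the function name shallow_attention and the assumption that both feature maps share the same channel width are illustrative, not taken from the patent.

```python
import torch
import torch.nn.functional as F

def shallow_attention(f_shallow: torch.Tensor, f_deep: torch.Tensor) -> torch.Tensor:
    """Suppress background noise in shallow features using deeper features.

    f_shallow: (B, C, H, W) shallow feature map
    f_deep:    (B, C, h, w) deeper, lower-resolution feature map
    """
    # Step one: bilinear upsampling to the shallow feature's resolution.
    up = F.interpolate(f_deep, size=f_shallow.shape[2:],
                       mode="bilinear", align_corners=False)
    # Step two: keep only positive responses as the attention map, A = max(up, 0).
    attn = torch.relu(up)
    # Step three: element-wise multiplication gates out background noise.
    return f_shallow * attn
```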
In the embodiment of the invention, the SAM can effectively fuse the deep layer and the shallow layer features together. Wherein fusing the shallow features and the deep features comprises:
extracting the first feature, second feature and third feature at the last three scales obtained when the background-noise-suppressed shallow features are processed by a convolutional neural network;
fusing the first feature and the second feature to obtain a first fused feature;
fusing the second feature and the third feature to obtain a second fused feature;
and concatenating the first fusion feature and the second fusion feature along the channel dimension to obtain the final fusion feature.
In one embodiment of the present invention, in the SANet model, the features output by stage3, stage4 and stage5 of Res2Net (denoted f3, f4 and f5, respectively) are fused to reduce the model computation. Based on the SAM, f3 and f4 are fused to obtain f34, and f4 and f5 are fused to obtain f45, so as to take full advantage of the features of each scale; f34 and f45 are then concatenated along the channel dimension to obtain the final fusion feature.
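The sketch below outlines this three-scale fusion; a compact restatement of the shallow-attention helper is included so it runs standalone. The assumption that f3, f4 and f5 have already been projected to a common channel width is mine — the patent does not specify the projection layers.

```python
import torch
import torch.nn.functional as F

def sam(f_shallow, f_deep):
    # Shallow attention: upsample the deep features, keep the positive
    # responses as an attention map, and gate the shallow features with it.
    up = F.interpolate(f_deep, size=f_shallow.shape[2:],
                       mode="bilinear", align_corners=False)
    return f_shallow * torch.relu(up)

def fuse_features(f3, f4, f5):
    f34 = sam(f3, f4)                    # first fusion feature, at f3's resolution
    f45 = sam(f4, f5)                    # second fusion feature, at f4's resolution
    f45 = F.interpolate(f45, size=f3.shape[2:],
                        mode="bilinear", align_corners=False)
    return torch.cat([f34, f45], dim=1)  # concatenate along the channel dimension

# Toy shapes: stage3/4/5 outputs assumed projected to 64 channels each.
f3 = torch.randn(1, 64, 44, 44)
f4 = torch.randn(1, 64, 22, 22)
f5 = torch.randn(1, 64, 11, 11)
print(fuse_features(f3, f4, f5).shape)   # torch.Size([1, 128, 44, 44])
```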
In the embodiment of the invention, small polyp images have a seriously unbalanced foreground-background pixel distribution. The negative samples (background pixels) dominate the model training process, and this prior bias makes the model more prone to assigning lower response values (logits) to the positive samples (foreground pixels), resulting in poor segmentation of the target edges. In order to correct this imbalance, the invention uses the PCS to re-balance the predicted response values at the model inference stage; performing predictive response value re-equalization processing on the fused features by adopting the probability correction strategy model to obtain a polyp feature image with clear edges comprises the following steps:
Counting the number of pixels with the feature response value larger than 0 in the fused polyp feature image to obtain a first pixel value;
counting the number of pixels with the feature response value smaller than 0 in the fused polyp feature image to obtain a second pixel value;
And normalizing the first pixel value and the second pixel value, dividing a characteristic response value which is larger than 0 in the polyp characteristic image by the normalized first pixel value, and dividing a characteristic response value which is smaller than 0 in the polyp characteristic image by the normalized second pixel value to obtain the corrected polyp characteristic image.
In one embodiment of the invention, the number of pixels whose response value is greater than 0 (logit > 0) is counted to obtain N_pos; the number of pixels whose response value is smaller than 0 (logit < 0) is counted to obtain N_neg; the two counts are normalized, i.e. w_pos = N_pos/(N_pos + N_neg) and w_neg = N_neg/(N_pos + N_neg); each response value with logit > 0 is then divided by w_pos and each response value with logit < 0 is divided by w_neg, finally obtaining the corrected polyp feature image. After the PCS, the bias that the unbalanced numbers of positive and negative samples impose on the prediction is eliminated, and the target edge portion obtains a clearer prediction result, as shown in fig. 2, which shows some details of the results obtained with the PCS.
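The following sketch implements this correction at inference time, assuming logit is the single-channel fused response map before the sigmoid; the variable names and the small epsilon guard are illustrative additions.

```python
import torch

def probability_correction(logit: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Re-balance positive and negative response values by their pixel counts."""
    n_pos = (logit > 0).sum().float()    # first pixel value: count of logit > 0
    n_neg = (logit < 0).sum().float()    # second pixel value: count of logit < 0
    total = n_pos + n_neg + eps
    w_pos, w_neg = n_pos / total, n_neg / total   # normalized counts
    out = logit.clone()
    out[logit > 0] = logit[logit > 0] / (w_pos + eps)   # boost the rarer foreground
    out[logit < 0] = logit[logit < 0] / (w_neg + eps)
    return out
```

Since both weights are positive, the sign of every logit is unchanged; the correction only stretches the responses of the rarer class, which is what sharpens the target edges.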
TABLE 1 quantitative results of different models on datasets
In one embodiment of the present invention, table 1 shows the quantitative results of the different models on the 5 datasets Kvasir, CVC-ClinicDB, CVC-ColonDB, endoScene, ETIS, etc., and it can be seen that the present invention achieves the highest score on all datasets. Fig. 3 shows the results of qualitative experiments of different algorithms on specific images, it can be seen that the present invention can obtain more complete and clear polyp regions than previous models. By combining the experiments, the method can better remove the deviation and the background noise existing in the data set, thereby having excellent performance on polyp segmentation.
As shown in fig. 4, in order to solve the above problem, an embodiment of the present invention further provides a polyp image segmentation apparatus, comprising an acquisition module 2100, a processing module 2200 and an execution module 2300, wherein the acquisition module 2100 is used for acquiring a polyp image to be input; the processing module 2200 is used for selecting a reference image with a color different from that of the polyp image from a preset training set and exchanging the colors of the reference image and the polyp image; the processing module 2200 is further used for extracting shallow features and deep features from the color-exchanged polyp image, suppressing background noise of the shallow features by using a shallow attention model, and fusing the shallow features and the deep features; and the execution module 2300 is used for performing predictive response value re-equalization processing on the fused features by adopting a probability correction strategy model to obtain a polyp feature image with clear edges.
In some embodiments, the processing module comprises: a first processing sub-module, configured to convert colors of the polyp image X1 and the reference image X2 from an RGB color space to a LAB color space, and obtain color values L1 and L2 of the polyp image X1 and the reference image X2 in the LAB color space; a second processing sub-module, configured to calculate a mean value and a standard deviation of a channel of the polyp image X1 in the LAB color space and a mean value and a standard deviation of a channel of the reference image X2 in the LAB color space; and the third processing submodule is used for obtaining the color value of the polyp image Y1 in the RGB color space and the color value of the reference image Y2 in the RGB color space by utilizing a preset color conversion formula.
In some embodiments, the processing module comprises: a fourth processing sub-module, configured to upsample the deep features by bilinear interpolation so that the resolution of the sampled deep features is the same as that of the shallow features; a first acquisition sub-module, configured to select the elements larger than 0 from the sampled deep features as the attention map of the shallow features, obtaining the deep features to be fused; and a first execution sub-module, configured to multiply the deep features to be fused with the shallow features element by element to obtain shallow features with the background noise suppressed.
In some embodiments, the processing module comprises: a second acquisition sub-module, configured to extract the first feature, second feature and third feature at the last three scales obtained when the background-noise-suppressed shallow features are processed by a convolutional neural network; a fifth processing sub-module, configured to fuse the first feature and the second feature to obtain a first fused feature; a sixth processing sub-module, configured to fuse the second feature and the third feature to obtain a second fused feature; and a second execution sub-module, configured to concatenate the first fused feature and the second fused feature along the channel dimension to obtain the final fused feature.
In some embodiments, the execution module comprises: the third acquisition sub-module is used for counting the number of pixels with the feature response value larger than 0 in the fused polyp feature image to obtain a first pixel value; a fourth obtaining sub-module, configured to count the number of pixels in the fused polyp feature image, where the feature response value is less than 0, to obtain a second pixel value; and the third execution sub-module is used for carrying out normalization processing on the first pixel value and the second pixel value, dividing a characteristic response value which is larger than 0 in the polyp characteristic image by the normalized first pixel value, and dividing a characteristic response value which is smaller than 0 in the polyp characteristic image by the normalized second pixel value.
In order to solve the technical problems, the embodiment of the invention also provides computer equipment. Referring specifically to fig. 5, fig. 5 is a basic structural block diagram of a computer device according to the present embodiment.
Fig. 5 schematically shows the internal structure of the computer device. As shown in fig. 5, the computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected by a system bus. The non-volatile storage medium of the computer device stores an operating system, a database, and computer readable instructions; the database may store a sequence of control information, and the computer readable instructions, when executed by the processor, cause the processor to implement an image processing method. The processor of the computer device provides the computing and control capabilities that support the operation of the entire device. The memory of the computer device may store computer readable instructions that, when executed by the processor, cause the processor to perform the image processing method. The network interface of the computer device is used for communicating with a connected terminal. It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of some of the structures associated with the present inventive arrangements and does not limit the computer devices to which the present inventive arrangements may be applied; a particular computer device may include more or fewer components than shown, combine some of the components, or have a different arrangement of components.
The processor in this embodiment is configured to execute the specific contents of the acquisition module 2100, the processing module 2200 and the execution module 2300 in fig. 4, and the memory stores the program codes and the various types of data required for executing these modules. The network interface is used for data transmission with the user terminal or the server. The memory in this embodiment stores the program codes and data necessary for executing all the sub-modules of the image processing method, and the server can call these program codes and data to execute the functions of all the sub-modules.
According to the computer device provided by the embodiment of the invention, when the above polyp image segmentation method is executed, the color exchange operation decouples image color from image content, the shallow attention model suppresses the background noise of the shallow features, and the probability correction strategy re-balances the predicted response values, so that the polyp region can be segmented from the image accurately and efficiently.
The present invention also provides a storage medium storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the image processing method of any of the embodiments described above.
Those skilled in the art will appreciate that all or part of the methods of the above embodiments may be implemented by a computer program stored in a computer-readable storage medium; when executed, the program may comprise the flows of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (ROM), or a random access memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
The foregoing is only a partial embodiment of the present invention, and it should be noted that it will be apparent to those skilled in the art that modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.

Claims (6)

1. A method of segmenting a polyp image, comprising:
Acquiring a polyp image to be input;
selecting a reference image with a color different from that of the polyp image from a preset training set, and exchanging the colors of the reference image and the polyp image;
extracting shallow features and deep features from the color-exchanged polyp image, suppressing background noise of the shallow features by using a shallow attention model, and fusing the shallow features and the deep features;
Carrying out predictive response value re-equalization processing on the fused features by adopting a probability correction strategy model to obtain polyp feature images with clear edges;
Wherein the suppressing the background noise of the shallow features using the shallow attention model comprises:
upsampling the deep features by bilinear interpolation so that the resolution of the sampled deep features is the same as that of the shallow features;
Selecting elements larger than 0 from the sampled deep features, and determining the elements as an attention map of the shallow features to obtain deep features to be fused;
multiplying the deep features to be fused with the shallow features element by element to obtain shallow features with background noise suppressed;
And carrying out predictive response value re-equalization processing on the fused features by adopting a probability correction strategy model to obtain polyp feature images with clear edges, wherein the method comprises the following steps:
Counting the number of pixels with the feature response value larger than 0 in the fused polyp feature image to obtain a first pixel value;
counting the number of pixels with the feature response value smaller than 0 in the fused polyp feature image to obtain a second pixel value;
And normalizing the first pixel value and the second pixel value, dividing a characteristic response value which is larger than 0 in the polyp characteristic image by the normalized first pixel value, and dividing a characteristic response value which is smaller than 0 in the polyp characteristic image by the normalized second pixel value to obtain the corrected polyp characteristic image.
2. The segmentation method as set forth in claim 1, wherein the exchanging of the colors of the reference image and the polyp image comprises:
Converting the colors of the polyp image X1 and the reference image X2 from an RGB color space to a LAB color space to obtain color values L1 and L2 of the polyp image X1 and the reference image X2 in the LAB color space;
calculating the mean value μ1 and standard deviation σ1 of the channels of the polyp image X1 in the LAB color space, and the mean value μ2 and standard deviation σ2 of the channels of the reference image X2 in the LAB color space;
And obtaining the color value of the polyp image Y1 in the RGB color space and the color value of the reference image Y2 in the RGB color space by using a preset color conversion formula.
3. The segmentation method as set forth in claim 1, wherein the fusing the shallow features and the deep features comprises:
extracting the first feature, second feature and third feature at the last three scales obtained when the background-noise-suppressed shallow features are processed by a convolutional neural network;
fusing the first feature and the second feature to obtain a first fused feature;
fusing the second feature and the third feature to obtain a second fused feature;
and concatenating the first fusion feature and the second fusion feature along the channel dimension to obtain the final fusion feature.
4. A polyp image segmentation apparatus, comprising:
the acquisition module is used for acquiring a polyp image to be input;
the processing module is used for selecting a reference image with a color different from that of the polyp image from a preset training set and exchanging the colors of the reference image and the polyp image;
the processing module is further used for extracting shallow features and deep features from the color-exchanged polyp image, suppressing background noise of the shallow features by using a shallow attention model, and fusing the shallow features and the deep features;
the execution module is used for carrying out predictive response value re-equalization processing on the fused features by adopting a probability correction strategy model to obtain polyp feature images with clear edges;
wherein the processing module comprises:
a fourth processing sub-module, configured to upsample the deep features by bilinear interpolation so that the resolution of the sampled deep features is the same as that of the shallow features;
a first acquisition sub-module, configured to select the elements larger than 0 from the sampled deep features as the attention map of the shallow features, obtaining the deep features to be fused;
the first execution submodule is used for multiplying the deep layer features to be fused with the shallow layer features element by element to obtain shallow layer features with background noise suppressed;
the execution module comprises:
The third acquisition sub-module is used for counting the number of pixels with the feature response value larger than 0 in the fused polyp feature image to obtain a first pixel value;
a fourth obtaining sub-module, configured to count the number of pixels in the fused polyp feature image, where the feature response value is less than 0, to obtain a second pixel value;
And the third execution sub-module is used for carrying out normalization processing on the first pixel value and the second pixel value, dividing a characteristic response value which is larger than 0 in the polyp characteristic image by the normalized first pixel value, and dividing a characteristic response value which is smaller than 0 in the polyp characteristic image by the normalized second pixel value.
5. The segmentation apparatus of claim 4, wherein the processing module comprises:
A first processing sub-module, configured to convert colors of the polyp image X1 and the reference image X2 from an RGB color space to a LAB color space, and obtain color values L1 and L2 of the polyp image X1 and the reference image X2 in the LAB color space;
a second processing sub-module, configured to calculate the mean value μ1 and standard deviation σ1 of the channels of the polyp image X1 in the LAB color space, and the mean value μ2 and standard deviation σ2 of the channels of the reference image X2 in the LAB color space;
And the third processing submodule is used for obtaining the color value of the polyp image Y1 in the RGB color space and the color value of the reference image Y2 in the RGB color space by utilizing a preset color conversion formula.
6. The segmentation apparatus of claim 4, wherein the processing module comprises:
a second acquisition sub-module, configured to extract the first feature, second feature and third feature at the last three scales obtained when the background-noise-suppressed shallow features are processed by a convolutional neural network;
a fifth processing sub-module, configured to fuse the first feature and the second feature to obtain a first fused feature;
A sixth processing sub-module, configured to fuse the second feature and the third feature to obtain a second fused feature;
and a second execution sub-module, configured to concatenate the first fusion feature and the second fusion feature along the channel dimension to obtain the final fusion feature.
CN202110889919.0A 2021-08-04 2021-08-04 Polyp image segmentation method and device Active CN113724276B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110889919.0A CN113724276B (en) 2021-08-04 2021-08-04 Polyp image segmentation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110889919.0A CN113724276B (en) 2021-08-04 2021-08-04 Polyp image segmentation method and device

Publications (2)

Publication Number Publication Date
CN113724276A CN113724276A (en) 2021-11-30
CN113724276B true CN113724276B (en) 2024-05-28

Family

ID=78674791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110889919.0A Active CN113724276B (en) 2021-08-04 2021-08-04 Polyp image segmentation method and device

Country Status (1)

Country Link
CN (1) CN113724276B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972155B (en) * 2021-12-30 2023-04-07 昆明理工大学 Polyp image segmentation method based on context information and reverse attention
CN116935051B (en) * 2023-07-20 2024-06-14 深圳大学 Polyp segmentation network method, system, electronic equipment and storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105430295A (en) * 2015-10-30 2016-03-23 努比亚技术有限公司 Device and method for image processing
WO2018224442A1 (en) * 2017-06-05 2018-12-13 Siemens Aktiengesellschaft Method and apparatus for analysing an image
CN109934789A (en) * 2019-03-26 2019-06-25 湖南国科微电子股份有限公司 Image de-noising method, device and electronic equipment
CN110852335A (en) * 2019-11-19 2020-02-28 燕山大学 Target tracking system based on multi-color feature fusion and depth network
CN111383214A (en) * 2020-03-10 2020-07-07 苏州慧维智能医疗科技有限公司 Real-time endoscope enteroscope polyp detection system
CN111768425A (en) * 2020-07-23 2020-10-13 腾讯科技(深圳)有限公司 Image processing method, device and equipment
CN111986204A (en) * 2020-07-23 2020-11-24 中山大学 Polyp segmentation method and device and storage medium
CN112001861A (en) * 2020-08-18 2020-11-27 香港中文大学(深圳) Image processing method and apparatus, computer device, and storage medium
CN112330688A (en) * 2020-11-02 2021-02-05 腾讯科技(深圳)有限公司 Image processing method and device based on artificial intelligence and computer equipment
CN112489061A (en) * 2020-12-09 2021-03-12 浙江工业大学 Deep learning intestinal polyp segmentation method based on multi-scale information and parallel attention mechanism
CN112669197A (en) * 2019-10-16 2021-04-16 顺丰科技有限公司 Image processing method, image processing device, mobile terminal and storage medium
CN112950461A (en) * 2021-03-27 2021-06-11 刘文平 Global and superpixel segmentation fused color migration method
CN113012150A (en) * 2021-04-14 2021-06-22 南京农业大学 Feature-fused high-density rice field unmanned aerial vehicle image rice ear counting method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160350B (en) * 2019-12-23 2023-05-16 Oppo广东移动通信有限公司 Portrait segmentation method, model training method, device, medium and electronic equipment

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105430295A (en) * 2015-10-30 2016-03-23 努比亚技术有限公司 Device and method for image processing
WO2018224442A1 (en) * 2017-06-05 2018-12-13 Siemens Aktiengesellschaft Method and apparatus for analysing an image
CN109934789A (en) * 2019-03-26 2019-06-25 湖南国科微电子股份有限公司 Image de-noising method, device and electronic equipment
CN112669197A (en) * 2019-10-16 2021-04-16 顺丰科技有限公司 Image processing method, image processing device, mobile terminal and storage medium
CN110852335A (en) * 2019-11-19 2020-02-28 燕山大学 Target tracking system based on multi-color feature fusion and depth network
CN111383214A (en) * 2020-03-10 2020-07-07 苏州慧维智能医疗科技有限公司 Real-time endoscope enteroscope polyp detection system
CN111986204A (en) * 2020-07-23 2020-11-24 中山大学 Polyp segmentation method and device and storage medium
CN111768425A (en) * 2020-07-23 2020-10-13 腾讯科技(深圳)有限公司 Image processing method, device and equipment
CN112001861A (en) * 2020-08-18 2020-11-27 香港中文大学(深圳) Image processing method and apparatus, computer device, and storage medium
CN112330688A (en) * 2020-11-02 2021-02-05 腾讯科技(深圳)有限公司 Image processing method and device based on artificial intelligence and computer equipment
CN112489061A (en) * 2020-12-09 2021-03-12 浙江工业大学 Deep learning intestinal polyp segmentation method based on multi-scale information and parallel attention mechanism
CN112950461A (en) * 2021-03-27 2021-06-11 刘文平 Global and superpixel segmentation fused color migration method
CN113012150A (en) * 2021-04-14 2021-06-22 南京农业大学 Feature-fused high-density rice field unmanned aerial vehicle image rice ear counting method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automatized colon polyp segmentation via contour region analysis; Alain Sánchez-González; Computers in Biology and Medicine; full text *
Research on intelligent detection and recognition of small-intestine lesions based on wireless capsule endoscopy images; Liu Shichen; China Master's Theses Full-text Database, Information Science and Technology; full text *

Also Published As

Publication number Publication date
CN113724276A (en) 2021-11-30

Similar Documents

Publication Publication Date Title
CN113724276B (en) Polyp image segmentation method and device
US20200210756A1 (en) 3D Refinement Module for Combining 3D Feature Maps
US20240029272A1 (en) Matting network training method and matting method
CN110378913B (en) Image segmentation method, device, equipment and storage medium
CN115661144A (en) Self-adaptive medical image segmentation method based on deformable U-Net
CN110866938B (en) Full-automatic video moving object segmentation method
CN113554665A (en) Blood vessel segmentation method and device
CN113763371B (en) Pathological image cell nucleus segmentation method and device
Chen et al. BPFINet: Boundary-aware progressive feature integration network for salient object detection
CN112800955A (en) Remote sensing image rotating target detection method and system based on weighted bidirectional feature pyramid
CN114283164A (en) Breast cancer pathological section image segmentation prediction system based on UNet3+
CN112465800A (en) Instance segmentation method for correcting classification errors by using classification attention module
CN115546570A (en) Blood vessel image segmentation method and system based on three-dimensional depth network
Chen et al. Adaptive fusion network for RGB-D salient object detection
CN114565768A (en) Image segmentation method and device
CN113283434B (en) Image semantic segmentation method and system based on segmentation network optimization
CN117437423A (en) Weak supervision medical image segmentation method and device based on SAM collaborative learning and cross-layer feature aggregation enhancement
CN113269764A (en) Automatic segmentation method and system for intracranial aneurysm, sample processing method and model training method
CN110782463B (en) Method and device for determining division mode, display method and equipment and storage medium
CN115546149B (en) Liver segmentation method and device, electronic equipment and storage medium
CN115272201A (en) Method, system, apparatus, and medium for enhancing generalization of polyp segmentation model
CN105516735A (en) Representation frame acquisition method and representation frame acquisition apparatus
CN113963166B (en) Training method and device of feature extraction model and electronic equipment
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
CN112750124B (en) Model generation method, image segmentation method, model generation device, image segmentation device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant