CN117456282A - Gastric atrophy typing detection method and system for digestive endoscopy - Google Patents

Gastric atrophy typing detection method and system for digestive endoscopy

Info

Publication number
CN117456282A
CN117456282A (application number CN202311739542.6A)
Authority
CN
China
Prior art keywords
gastric
atrophy
network
area detection
withering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311739542.6A
Other languages
Chinese (zh)
Other versions
CN117456282B (en)
Inventor
林煜
许妙星
胡延兴
钟晓泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Lingying Yunnuo Medical Technology Co ltd
Original Assignee
Suzhou Lingying Yunnuo Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Lingying Yunnuo Medical Technology Co ltd filed Critical Suzhou Lingying Yunnuo Medical Technology Co ltd
Priority to CN202311739542.6A priority Critical patent/CN117456282B/en
Publication of CN117456282A publication Critical patent/CN117456282A/en
Application granted granted Critical
Publication of CN117456282B publication Critical patent/CN117456282B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning: classification, e.g. of video objects
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning: neural networks
    • G06V 10/26 - Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06N 3/0464 - Convolutional networks [CNN, ConvNet]
    • G06N 3/048 - Activation functions
    • G06N 3/08 - Learning methods
    • G06T 7/0012 - Biomedical image inspection
    • G06T 7/11 - Region-based segmentation
    • G06T 2207/10068 - Endoscopic image
    • G06T 2207/10116 - X-ray image
    • G06T 2207/30061 - Lung
    • G06T 2207/30092 - Stomach; Gastric

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of computers and relates to digestive endoscopy in medicine, in particular to a gastric atrophy typing detection method and system for a digestive endoscope: a stomach image to be typed detected by the digestive endoscope is acquired and input into an atrophy area detection network, a core area detection network and a digestive tract part detection network respectively; the image is processed through the atrophy area detection network, the core area detection network and the digestive tract part detection network, which output an atrophy area detection result, a core area detection result and a digestive tract part detection result; and the gastric atrophy typing result is decided according to these detection results. Because the mask_head is generated dynamically, the method is easy to deploy, abandons the idea of box alignment, can handle irregular targets and is suitable for segmenting atrophy areas; the generated instance mask has a high resolution, up to 1/2 of the original image size, and is therefore finer; the network structure is lightweight and the computation time is short.

Description

Gastric atrophy typing detection method and system for digestive endoscopy
Technical Field
The invention belongs to the technical field of computers and relates to digestive endoscopy in medicine, in particular to a gastric atrophy typing detection method and system for a digestive endoscope.
Background
The detection of atrophic areas plays a major role in gastroscopy, as follows:
Diagnosing stomach diseases: atrophy of the gastric mucosa is often accompanied by gastric diseases such as chronic gastritis, gastric ulcer and gastric cancer. By detecting atrophic areas, these diseases can be found and diagnosed early.
Evaluating disease extent and prognosis: the extent and scope of atrophy are generally related to the severity and prognosis of gastric disease. For patients with gastric disease, detecting atrophic areas therefore makes it possible to assess the extent and prognosis of the disease and to guide the formulation of treatment regimens.
Guiding the scope of gastroscopy: if the position and range of the atrophic area are determined in advance, the examination range of the gastroscope can be guided more accurately, avoiding omissions or excessive examination.
Providing a basis for pathological examination: sampling and pathological examination of the atrophic area can determine the type and degree of the lesion, providing a basis for diagnosis and treatment.
However, in the prior art the traditional ROI scheme for detecting atrophic areas is difficult to deploy; it follows the idea of box alignment, lacks the ability to handle irregular targets and is therefore unsuitable for segmenting atrophic areas; moreover, the network model is highly redundant and the computation time is long.
Disclosure of Invention
According to a first aspect of the invention, the invention claims a gastric atrophy typing detection method for a digestive endoscope, comprising:
acquiring a stomach image to be typed detected by a digestive endoscope, and inputting the stomach image to be typed into an atrophy area detection network, a core area detection network and a digestive tract part detection network respectively;
processing through the atrophy area detection network, the core area detection network and the digestive tract part detection network, and outputting an atrophy area detection result, a core area detection result and a digestive tract part detection result;
and deciding the gastric atrophy typing result according to the atrophy area detection result, the core area detection result and the digestive tract part detection result.
Further, acquiring the stomach image to be typed detected by the digestive endoscope and inputting the stomach image to be typed into the atrophy area detection network, the core area detection network and the digestive tract part detection network respectively further includes: before the stomach image to be typed is input into the atrophy area detection network, the core area detection network and the digestive tract part detection network respectively, atrophy area detection preprocessing, core area detection preprocessing and digestive tract part detection preprocessing are performed on the stomach image to be typed, and the preprocessed images are then input into the corresponding detection networks.
Further, in the core area detection network and the digestive tract part detection network: the core region comprises at least the antrum-pylorus, the gastric angle, the cardia and the greater curvature;
the digestive tract part includes at least: the lesser curvature of the cardia, the greater curvature of the cardia, the anterior wall of the cardia, the posterior wall of the cardia, the fundus, the anterior superior aspect of the gastric body, the posterior superior aspect of the gastric body, the anterior medial aspect of the gastric body, the posterior medial aspect of the gastric body, the lesser curvature of the gastric body, the anterior inferior aspect of the gastric body, the posterior inferior aspect of the gastric body, the inferior aspect of the lesser curvature of the gastric body, the gastric angle, the anterior aspect of the gastric angle, the posterior aspect of the gastric angle, the anterior aspect of the antrum, the posterior aspect of the antrum, the greater curvature of the antrum, the lesser curvature side of the antrum, the antrum and the pylorus.
Further, the atrophy area detection network comprises five parts, namely a backbone network, a neck, a box_head, a mask_branch and a mask_head; the network structure used by the backbone network is ResNet-50, the basic convolution structure in the backbone network is expressed as [kernel size, number of output channels] × number of repetitions, and each convolution in the backbone network carries a ReLU activation function;
the neck part adopts an FPN feature pyramid structure;
the box_head completes the classification of candidate boxes and the detection of target positions from the feature map; it comprises two branches, each branch being a 1×1 convolution with 256 output channels, which complete category prediction and position prediction respectively.
Further, the stomach image to be typed is input into the backbone network to obtain a first feature map, a second feature map and a third feature map;
the first feature map, the second feature map and the third feature map are input into the neck for multi-scale feature mapping to obtain a fourth feature map, a fifth feature map, a sixth feature map, a seventh feature map and an eighth feature map;
the fourth feature map, the fifth feature map, the sixth feature map, the seventh feature map and the eighth feature map are respectively input into the box_head to obtain N corresponding instances with their categories and position information;
the fourth feature map is input into the mask_branch, which performs 4 consecutive 3×3 convolution layer operations; the number of convolution channels of the following fifth layer of the mask_branch is 8, and the output of the mask_branch is Fmask, with the same size as the fourth feature map;
the 8-channel Fmask is concatenated with a 2-channel relative coordinate map to obtain a 10-channel Rmask, wherein the relative coordinate map gives the position of each point of Fmask relative to the position (x, y) of the current instance, taking (x, y) as the centre coordinate (0, 0) and calculating the relative coordinate values of the other points;
N mask_heads are dynamically generated based on the number N of valid instances detected by the box_head;
the box features encoding position information in the box_head are used to generate the filter combination θ required by the mask_head;
and the upsampled Rmask is passed through the filter combination θ in sequence to obtain a binary image representing the mask segmentation result of the corresponding instance; for an input image containing N candidate instances, N groups of mask_heads are generated, and finally N instance segmentation results are obtained.
Further, using the box features encoding position information in the box_head to generate the filter combination θ required by the mask_head further includes:
the length of the filter is fixed at 169: a 169-dimensional vector is obtained from the regression features in the box_head by convolution and pooling operations, and is then reorganized into the network weights and biases of three 1×1 convolution layers;
the specific filter combination θ is constituted as follows:
calling the three 1×1 convolutions mconv1, mconv2 and mconv3, the 169 parameters of the dynamic mask_head are reorganized as follows:
weight = (8+2)×8 + 8×8 + 8×1, the terms corresponding to mconv1, mconv2 and mconv3 respectively, summing to 152;
bias = 8 + 8 + 1, the terms corresponding to mconv1, mconv2 and mconv3 respectively, summing to 17;
with 152 + 17 = 169 parameters in total, the filter combination θ is completed.
Further, the digestive tract part detection network further includes:
inputting the preprocessed stomach image to be typed into a MobileNetV2 neural network, where, in the MobileNetV2 architecture table, the first column is the input dimension, the second column is the operation, the third column t is the expansion factor, the fourth column c is the number of output feature channels, the fifth column n is the number of repetitions of the bottleneck structure, and the sixth column s is the stride; when there are several bottlenecks, s applies only to the first bottleneck and the following strides are all 1; k is the number of output categories;
according to the MobileNetV2 neural network, each image finally outputs a vector of length 38 as confidence values, representing the probability with which the network predicts each category; the confidence values sum to 1, and the category with the maximum confidence is selected as the part recognition result for the image.
Further, deciding the gastric atrophy typing result according to the atrophy area detection result, the core area detection result and the digestive tract part detection result further comprises:
the atrophy area detection result gives the atrophy area condition, and when an atrophy area exists, mask1 is output;
the core area detection result gives the core part condition, and when a core part exists, mask2 is output;
the digestive tract part detection result gives the image class;
the gastric atrophy typing result includes at least: whether the gastric antrum area has atrophy, whether the gastric angle area intersects the atrophy area, whether the lesser curvature of the middle gastric body has atrophy, whether the cardia intersects the atrophy area, whether the anterior wall of the lower gastric body has atrophy, and whether the greater curvature of the stomach intersects the atrophy area;
whether the gastric atrophy typing result involves an intersection with the atrophy area is decided according to the atrophy area detection result and the core area detection result;
and whether atrophy exists in the gastric atrophy typing result is decided according to the atrophy area detection result and the digestive tract part detection result.
According to a second aspect of the invention, the invention claims a gastric atrophy typing detection system for a digestive endoscope, comprising:
an acquisition module, which acquires the stomach image to be typed detected by the digestive endoscope and inputs it into the atrophy area detection network, the core area detection network and the digestive tract part detection network respectively;
a network processing module, which performs processing through the atrophy area detection network, the core area detection network and the digestive tract part detection network and outputs the atrophy area detection result, the core area detection result and the digestive tract part detection result;
a typing decision module, which decides the gastric atrophy typing result according to the atrophy area detection result, the core area detection result and the digestive tract part detection result;
the gastric atrophy typing detection system for a digestive endoscope is used to perform the above-described gastric atrophy typing detection method for a digestive endoscope.
According to a third aspect of the invention, the invention claims a gastric atrophy typing detection system for a digestive endoscope, comprising a processor and a memory, the memory having stored thereon a computer-readable program executable by the processor; when executing the computer-readable program, the processor implements the steps of the above-described gastric atrophy typing detection method for a digestive endoscope.
The invention belongs to the field of computers and relates to digestive endoscopy in medicine, in particular to a gastric atrophy typing detection method and system for a digestive endoscope: a stomach image to be typed detected by the digestive endoscope is acquired and input into an atrophy area detection network, a core area detection network and a digestive tract part detection network respectively; the image is processed through the atrophy area detection network, the core area detection network and the digestive tract part detection network, which output an atrophy area detection result, a core area detection result and a digestive tract part detection result; and the gastric atrophy typing result is decided according to these detection results. The dynamically generated mask_head is easy to deploy, abandons the idea of box alignment, can handle irregular targets and is suitable for segmenting atrophy areas; the generated instance mask has a high resolution, up to 1/2 of the original image size, and is therefore finer; the network structure is lightweight and the computation time is short.
Drawings
FIG. 1 is a schematic diagram of prior art atrophy boundary detection;
FIG. 2 is a schematic diagram of prior art atrophy area detection;
FIG. 3 is a flow chart of a gastric atrophy typing detection method for a digestive endoscope according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the backbone network structure of a gastric atrophy typing detection method for a digestive endoscope according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the atrophy area detection network of a gastric atrophy typing detection method for a digestive endoscope according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the output of the atrophy area detection network of a gastric atrophy typing detection method for a digestive endoscope according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a first output result of the core area detection network of a gastric atrophy typing detection method for a digestive endoscope according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a second output result of the core area detection network of a gastric atrophy typing detection method for a digestive endoscope according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a third output result of the core area detection network of a gastric atrophy typing detection method for a digestive endoscope according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a fourth output result of the core area detection network of a gastric atrophy typing detection method for a digestive endoscope according to an embodiment of the present invention;
FIG. 11 is a flow chart of the gastric atrophy typing decision of a gastric atrophy typing detection method for a digestive endoscope according to an embodiment of the present invention;
FIG. 12 is a block diagram of a gastric atrophy typing detection system for a digestive endoscope according to an embodiment of the present invention;
FIG. 13 is a block diagram of a gastric atrophy typing detection system for a digestive endoscope according to an embodiment of the present invention.
Detailed Description
The prior art related to this scheme is instance segmentation and image classification in deep learning.
Instance segmentation is an image segmentation technique in computer vision that aims at pixel-level segmentation and classification of each target object in an image. It can not only segment the different objects in an image but also identify their boundaries and shapes. Instance segmentation therefore plays a very important role in application scenarios such as vehicle and pedestrian detection in automatic driving, tumor segmentation in medical image analysis, and part detection in industrial production.
Instance segmentation methods are typically based on deep-learning convolutional neural networks (CNNs) plus segmentation heads, enabling end-to-end learning and processing of the image to achieve semantic understanding and interpretation of each pixel. Instance segmentation algorithms currently in common use include Mask R-CNN, YOLACT, Panoptic FPN, QueryInst and the like.
Image classification is a fundamental task in the field of computer vision whose purpose is to assign an input image to one of several categories. In image classification, deep learning models are typically used for feature extraction and classification. Common deep learning models include CNN, ResNet, Inception-v3 and the like.
The degree of gastric mucosal atrophy is graded mainly for diagnosing and evaluating chronic gastritis. The typing method divides gastric mucosal atrophy into six grades, C1, C2, C3, O1, O2 and O3; different grades correspond to different degrees of atrophy and different clinical significance, specifically as follows:
Type C1: mild gastric mucosal atrophy, localized to the antrum and the gastric angle, commonly associated with Helicobacter pylori infection and mild gastritis.
Type C2: moderate gastric mucosal atrophy; the atrophic area covers the whole antrum and the gastric angle, often accompanied by symptoms such as reduced gastric acid secretion.
Type C3: severe gastric mucosal atrophy; the atrophic area extends to the gastric body, often accompanied by serious consequences such as reduced gastric acid secretion and gastric cancer.
Type O1: mild diffuse gastric mucosal atrophy, with uniform atrophy over the entire gastric mucosa, often associated with autoimmune gastritis.
Type O2: moderate diffuse gastric mucosal atrophy, with irregular structures appearing on the surface of the gastric mucosa, often accompanied by diseases such as autoimmune gastritis.
Type O3: severe diffuse gastric mucosal atrophy, with irregular folds on the surface of the gastric mucosa, often accompanied by diseases such as severe gastritis and gastric cancer.
By grading the degree of gastric mucosal atrophy, the patient's condition can be assessed more accurately, treatment regimens can be formulated and the prognosis can be predicted. Typing therefore plays an important role in the diagnosis and treatment of chronic gastritis.
Recognizing atrophic gastritis under endoscopy first requires recognizing normal gastric mucosa, atrophic mucosa and their border. For example, the image below contains normal mucosa and atrophic mucosa, as well as the boundary of atrophy, i.e., the F line; it should be noted that the F line is further divided into an f line and an F line. The cell composition inside the f line (oral side) is mainly fundic glands, while the cell composition outside the F line (anal side) is mainly pyloric glands. The region between the f line and the F line is called the intermediate zone (transitional zone). Our labeling mainly concerns the F line, and the area inside the wire frame in fig. 2 is the atrophy area to be detected.
According to a first embodiment of the present invention, referring to fig. 3, the present invention claims a gastric atrophy typing detection method for a digestive endoscope, comprising:
acquiring a stomach image to be typed detected by a digestive endoscope, and inputting the stomach image to be typed into an atrophy area detection network, a core area detection network and a digestive tract part detection network respectively;
processing through the atrophy area detection network, the core area detection network and the digestive tract part detection network, and outputting an atrophy area detection result, a core area detection result and a digestive tract part detection result;
and deciding the gastric atrophy typing result according to the atrophy area detection result, the core area detection result and the digestive tract part detection result.
Further, acquiring the stomach image to be typed detected by the digestive endoscope and inputting the stomach image to be typed into the atrophy area detection network, the core area detection network and the digestive tract part detection network respectively further includes:
before the stomach image to be typed is input into the atrophy area detection network, the core area detection network and the digestive tract part detection network respectively, atrophy area detection preprocessing, core area detection preprocessing and digestive tract part detection preprocessing are performed on the stomach image to be typed, and the preprocessed images are then input into the corresponding detection networks.
In this embodiment, the atrophy area detection preprocessing includes image cropping, scaling and normalization. Image cropping removes the black background outside the endoscope display area in the report image; scaling means scaling the cropped picture to a uniform size, 480x480 in this implementation; the normalization adopts mean-variance normalization, with mean parameters [123.68, 116.78, 103.94] and standard deviation parameters [58.40, 57.12, 57.38];
the core area detection preprocessing includes image cropping, scaling and normalization. Image cropping removes the black background outside the endoscope display area in the report image; scaling means scaling the cropped picture to a uniform size, 480x480 in this implementation; the normalization adopts mean-variance normalization, with mean parameters [123.68, 116.78, 103.94] and standard deviation parameters [58.40, 57.12, 57.38];
the digestive tract part detection preprocessing includes image cropping, scaling and normalization. Image cropping removes the black background outside the endoscope display area in the report image; scaling means scaling the cropped picture to a uniform size, 224x224 in this implementation; the normalization adopts mean-variance normalization, with mean parameters [123.68, 116.78, 103.94] and standard deviation parameters [58.40, 57.12, 57.38].
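As a minimal illustrative sketch (not part of the original disclosure), the three preprocessing steps described above could be implemented as follows; the crop-box argument and the function name are assumptions, while the target sizes and the mean/standard-deviation parameters are taken from this embodiment.

```python
import numpy as np
import cv2

# Mean/std parameters given in the embodiment (channel order assumed to match the image).
MEAN = np.array([123.68, 116.78, 103.94], dtype=np.float32)
STD = np.array([58.40, 57.12, 57.38], dtype=np.float32)

def preprocess(image, crop_box, size):
    """Crop away the black border, resize to the network input size,
    and apply mean-variance normalization.

    image:    HxWx3 uint8 endoscope frame
    crop_box: (x, y, w, h) of the endoscope display area (assumed known)
    size:     480 for the two segmentation networks, 224 for the site classifier
    """
    x, y, w, h = crop_box
    cropped = image[y:y + h, x:x + w]
    resized = cv2.resize(cropped, (size, size), interpolation=cv2.INTER_LINEAR)
    return (resized.astype(np.float32) - MEAN) / STD
```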
Further, in the core area detection network and the digestive tract part detection network:
the core region comprises at least the antrum-pylorus, the gastric angle, the cardia and the greater curvature;
the digestive tract part includes at least: the lesser curvature of the cardia, the greater curvature of the cardia, the anterior wall of the cardia, the posterior wall of the cardia, the fundus, the anterior superior aspect of the gastric body, the posterior superior aspect of the gastric body, the anterior medial aspect of the gastric body, the posterior medial aspect of the gastric body, the lesser curvature of the gastric body, the anterior inferior aspect of the gastric body, the posterior inferior aspect of the gastric body, the inferior aspect of the lesser curvature of the gastric body, the gastric angle, the anterior aspect of the gastric angle, the posterior aspect of the gastric angle, the anterior aspect of the antrum, the posterior aspect of the antrum, the greater curvature of the antrum, the lesser curvature side of the antrum, the antrum and the pylorus.
Further, processing through the atrophy area detection network, the core area detection network and the digestive tract part detection network and outputting the atrophy area detection result, the core area detection result and the digestive tract part detection result further includes:
the atrophy area detection network comprises five parts, namely a backbone network, a neck, a box_head, a mask_branch and a mask_head;
referring to fig. 4, the network structure used in the backbone network is ResNet-50; the Conv2-Conv5 basic convolution structures in the backbone network are expressed as [kernel size, number of output channels] × number of repetitions, and the convolutions in the backbone network all carry ReLU activation functions;
the neck part adopts an FPN feature pyramid structure;
the box_head completes the classification of candidate boxes and the detection of target positions from the feature map; it comprises two branches, each branch being a 1×1 convolution with 256 output channels, which complete category prediction and position prediction respectively.
In this embodiment each pixel in the image is assigned to a different object instance, which facilitates the subsequent scoring of a single atrophic area. The network framework comprises five main parts, namely the backbone network, the neck, the box_head, the mask_branch and the mask_head, and the overall framework is shown in fig. 5:
further, inputting the stomach image to be typed into a backbone network to obtain a first characteristic image, a second characteristic image and a third characteristic image;
inputting the first feature map, the second feature map and the third feature map into a neg for multi-scale feature mapping to obtain a fourth feature map, a fifth feature map, a sixth feature map, a seventh feature map and an eighth feature map;
respectively inputting the fourth feature map, the fifth feature map, the sixth feature map, the seventh feature map and the eighth feature map into a box_head to obtain N corresponding examples, categories and position information;
inputting the fourth feature map into a mask_branch, executing 4 continuous 3*3 convolution layer operations, wherein the number of convolution channels of the mask_branch next to the layer 5 is 8, and the output of the mask_branch is Fmask, which is the same as the size of the fourth feature map;
splicing the Fmask of the 8 channels with the relative coordinate graph of the 2 channels to obtain Rmask of the 10 channels, wherein the relative coordinate graph is the relative position coordinate of the Fmask relative to the position (x, y) of the current example, taking (x, y) as the center coordinate (0, 0), and calculating the relative coordinate values of other points;
dynamically generating N mask_head based on the number N of the effective instances detected by the box_head;
utilizing the box characteristic of the coded position information in the box_head to generate a filter combination theta required to be used in the mask_head;
and sequentially passing the upsampled Rmask through a filter combination theta to obtain a binary image which represents mask segmentation results of corresponding examples, wherein N groups of mask_head are generated for an input image comprising N candidate examples, and finally N example segmentation results are obtained.
The feature extraction flow in this embodiment can be expressed as follows: with a 480×480 image as input, multi-scale feature maps are obtained through the successive stages of the backbone network, a feature extraction network common in the industry. Specifically:
(1) The C3 feature map size is 60x60; this stage focuses more on low-level features of the image, such as endoscopic image brightness, edges and texture.
(2) The C4 feature map size is 30x30; it still focuses on lower-level feature representations of the image, but gradually approaches the target area, for example highlighting clearly visible gastric mucosa in the stomach.
(3) The C5 feature map size is 15x15 and focuses on high-level semantic features of the image; that is, higher similarity weights are given through the correlations between pixels of the atrophic mucosa, which are highlighted. At this stage the target atrophy area has already been roughly segmented.
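The multi-scale feature extraction above could be sketched as follows, under the assumption that C3, C4 and C5 correspond to the stride-8/16/32 stages (layer2, layer3, layer4) of a standard torchvision ResNet-50; the framework choice is an assumption, not stated in the text.

```python
import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor

# ResNet-50 backbone; layer2/3/4 outputs play the roles of the C3/C4/C5
# feature maps described above (strides 8, 16 and 32).
backbone = create_feature_extractor(
    resnet50(weights=None),
    return_nodes={"layer2": "C3", "layer3": "C4", "layer4": "C5"},
)

x = torch.randn(1, 3, 480, 480)   # a preprocessed 480x480 gastric image
feats = backbone(x)
# feats["C3"]: 1x512x60x60, feats["C4"]: 1x1024x30x30, feats["C5"]: 1x2048x15x15
```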
The neck part uses an FPN, i.e., a feature pyramid network.
So that small atrophic areas in the stomach can be identified effectively, the FPN feature pyramid is used in the neck. The aim is to combine the high-level semantic dependencies of C5 with the low-level visual representations of boundaries, brightness and so on from the bottom layers. The algorithm uses feature maps of five scales {P3, P4, P5, P6, P7}, and finally pixel-by-pixel regression is performed on all five scales.
The specific flow is as follows:
1. The feature map P5 generated from C5 is upsampled and then connected residually with the 1×1-convolution-refined C4, enriching the semantic relevance among pixels and generating a new feature P4.
2. P4 is upsampled and, as in step 1, integrated with the color and brightness features of the C3 feature map, generating a new feature P3.
3. At the same time, P5 is passed through two successive stride-2 convolutions (equivalent to downsampling) to obtain P6 (8x8) and P7 (4x4) respectively, attempting to capture more global information.
In the feature maps generated by the FPN, the atrophic region is distinctly highlighted with a blurred boundary guide, which is a significant advantage for the subsequent segmentation of the atrophic areas.
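A minimal sketch of this neck follows, assuming standard FPN lateral 1×1 convolutions, nearest-neighbour upsampling and 3×3 stride-2 convolutions for P6/P7; the exact kernel sizes and padding are assumptions not spelled out in the text.

```python
import torch.nn as nn
import torch.nn.functional as F

class FPNNeck(nn.Module):
    """P5 from C5; P4/P3 by upsampling and adding the 1x1-refined C4/C3;
    P6 and P7 by two successive stride-2 convolutions on P5."""
    def __init__(self, in_channels=(512, 1024, 2048), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.down6 = nn.Conv2d(out_channels, out_channels, 3, stride=2, padding=1)
        self.down7 = nn.Conv2d(out_channels, out_channels, 3, stride=2, padding=1)

    def forward(self, c3, c4, c5):
        p5 = self.lateral[2](c5)                                         # 15x15
        p4 = self.lateral[1](c4) + F.interpolate(p5, scale_factor=2)     # 30x30
        p3 = self.lateral[0](c3) + F.interpolate(p4, scale_factor=2)     # 60x60
        p6 = self.down6(p5)                                              # 8x8
        p7 = self.down7(p6)                                              # 4x4
        return p3, p4, p5, p6, p7
```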
Next, the feature maps are input into the box_head. The box_head completes the tasks of classifying candidate boxes and detecting target positions from the feature map. It contains two branches, each branch being a 1×1 convolution with 256 output channels, completing class prediction and position prediction respectively. As shown in fig. 4, C in the classification branch represents the number of categories, HxWxC represents the probability p(x, y) that each point belongs to each of the C categories, and in the regression branch HxWx4 the 4 represents the distances (l, t, r, b) of the box centred on each point. H and W represent the height and width of the feature map at that scale. Note that the box_heads on different scales share weights. An instance is considered to exist only if p(x, y) is greater than 0.05. Through this step, N instances can be obtained, together with the corresponding category and position information.
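A sketch of the box_head as described (two 1×1 convolution branches with 256 channels, shared across scales) follows; the final prediction layers that produce the HxWxC class probabilities and HxWx4 distances (l, t, r, b) are an assumed detail.

```python
import torch
import torch.nn as nn

class BoxHead(nn.Module):
    """Weight-shared box head: a 1x1 conv branch for classification and one
    for regression. num_classes would be 1 for the atrophy area network and
    4 for the core area network (an assumption based on the class lists)."""
    def __init__(self, in_channels=256, num_classes=1):
        super().__init__()
        self.cls_branch = nn.Sequential(nn.Conv2d(in_channels, 256, 1), nn.ReLU(inplace=True))
        self.reg_branch = nn.Sequential(nn.Conv2d(in_channels, 256, 1), nn.ReLU(inplace=True))
        self.cls_pred = nn.Conv2d(256, num_classes, 1)
        self.box_pred = nn.Conv2d(256, 4, 1)

    def forward(self, feat):
        cls_prob = torch.sigmoid(self.cls_pred(self.cls_branch(feat)))  # B x C x H x W, p(x, y)
        box_dist = self.box_pred(self.reg_branch(feat))                 # B x 4 x H x W, (l, t, r, b)
        return cls_prob, box_dist

# Locations with p(x, y) > 0.05 are kept as the N candidate instances.
```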
The input to the mask_branch is the P3 feature map of the FPN, where the resolution of P3 is one eighth of the original image, i.e., 60x60. This is followed by 4 successive 3x3 convolution layer operations whose output channels are all 128. To cut down the number of parameters, the number of convolution channels of the following fifth layer is 8, so the output of the mask_branch is Fmask of size Hmask x Wmask x Cmask, where Cmask is 8 and Hmask and Wmask are the same as the P3 scale.
For the number N of valid instances detected by the box_head, N mask_heads are dynamically generated.
Fmask (8 channels) is then concatenated with the relative coordinate map (2 channels) to yield Rmask (10 channels). The relative coordinate map gives the position of each point of Fmask relative to the position (x, y) of the current instance: taking (x, y) as the centre coordinate (0, 0), the relative coordinates of the other points are calculated. To ensure a higher-quality gastric mucosal atrophy segmentation area, Rmask is upsampled by a factor of 4, restoring the pixels to a clearer scale.
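The mask_branch and the relative coordinate map could be sketched as follows; whether the fifth (8-channel) layer is also 3×3, and how the instance centre (x, y) is mapped onto the P3 grid, are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskBranch(nn.Module):
    """4 consecutive 3x3 convs (128 channels) followed by a fifth layer with
    8 output channels, producing Fmask at the P3 resolution (60x60)."""
    def __init__(self, in_channels=256):
        super().__init__()
        layers, c = [], in_channels
        for _ in range(4):
            layers += [nn.Conv2d(c, 128, 3, padding=1), nn.ReLU(inplace=True)]
            c = 128
        layers += [nn.Conv2d(128, 8, 3, padding=1)]   # fifth layer, 8 channels
        self.body = nn.Sequential(*layers)

    def forward(self, p3):
        return self.body(p3)                          # B x 8 x 60 x 60

def relative_coord_map(h, w, cx, cy):
    """2-channel map of (x - cx, y - cy) so the instance centre is (0, 0)."""
    ys = torch.arange(h, dtype=torch.float32)
    xs = torch.arange(w, dtype=torch.float32)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([gx - cx, gy - cy], dim=0)     # 2 x H x W

# Rmask: concatenate Fmask (8 ch) with the relative coordinate map (2 ch),
# then upsample 4x before applying the dynamically generated filters, e.g.:
#   rmask = torch.cat([fmask, rel.unsqueeze(0)], dim=1)          # 1 x 10 x 60 x 60
#   rmask_up = F.interpolate(rmask, scale_factor=4, mode="bilinear", align_corners=False)
```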
The box features encoding position information in the box_head are used to generate the filters θ required by the mask_head. The length of the filter is fixed at 169: the regression features (HxWx256) in the box_head are passed through convolution and pooling operations to obtain a 169-dimensional vector, which is then reorganized into the network weights and biases of three 1×1 convolution layers.
The specific filter combination θ is constituted as follows:
calling these three 1×1 convolutions mconv1, mconv2 and mconv3, the 169 parameters of the dynamic mask_head are reorganized as follows:
weight = (8+2)×8 + 8×8 + 8×1, the terms corresponding to mconv1, mconv2 and mconv3 respectively, summing to 152;
bias = 8 + 8 + 1, the terms corresponding to mconv1, mconv2 and mconv3 respectively, summing to 17;
with 152 + 17 = 169 parameters in total, the filter combination θ is complete.
Finally, the upsampled Rmask is passed through the filter combination θ in sequence to obtain a binary image representing the mask segmentation result of the corresponding instance. For the N candidate instances of an input image there are therefore N sets of mask_heads, finally yielding N instance segmentation results.
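A sketch of how one dynamically generated mask_head could apply the 169-parameter filter combination θ as three 1×1 convolutions follows; the exact ordering of weights and biases inside the vector, the intermediate ReLUs and the final thresholding are common dynamic-convolution conventions and are assumptions here.

```python
import torch
import torch.nn.functional as F

def apply_dynamic_mask_head(rmask_up, theta):
    """rmask_up: 1 x 10 x H x W upsampled Rmask for one instance.
    theta: 169-element vector predicted from the box features.
    Split follows weight = (8+2)*8 + 8*8 + 8*1 = 152 and bias = 8 + 8 + 1 = 17
    (all weights first, then all biases; the internal ordering is assumed)."""
    w1 = theta[0:80].reshape(8, 10, 1, 1)       # mconv1: 10 -> 8 channels
    w2 = theta[80:144].reshape(8, 8, 1, 1)      # mconv2:  8 -> 8 channels
    w3 = theta[144:152].reshape(1, 8, 1, 1)     # mconv3:  8 -> 1 channel
    b1, b2, b3 = theta[152:160], theta[160:168], theta[168:169]

    x = F.relu(F.conv2d(rmask_up, w1, b1))
    x = F.relu(F.conv2d(x, w2, b2))
    x = F.conv2d(x, w3, b3)
    return (torch.sigmoid(x) > 0.5).float()     # binary instance mask
```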
Further, using the box features encoding position information in the box_head to generate the filter combination θ required by the mask_head further includes:
the length of the filter is fixed at 169: a 169-dimensional vector is obtained from the regression features in the box_head by convolution and pooling operations and is then reorganized into the network weights and biases of three 1×1 convolution layers;
the specific filter combination θ is constituted as follows:
calling the three 1×1 convolutions mconv1, mconv2 and mconv3, the 169 parameters of the dynamic mask_head are reorganized as follows:
weight = (8+2)×8 + 8×8 + 8×1, the terms corresponding to mconv1, mconv2 and mconv3 respectively, summing to 152;
bias = 8 + 8 + 1, the terms corresponding to mconv1, mconv2 and mconv3 respectively, summing to 17;
with 152 + 17 = 169 parameters in total, the filter combination θ is completed.
The output result of the atrophy area detection network is shown in fig. 6: the polygon frame represents the specific boundary of the atrophy area, and the rectangular frame is its circumscribed rectangle.
Further, in this embodiment the technical details of the atrophy area detection network and the core area detection network are the same; they differ only in their targets. The target of the atrophy area detection network is the atrophy area of the stomach, while the target of the core area detection network is the 4 core parts of the stomach, the core parts being the parts of the stomach with the highest recognizability.
Referring to figs. 7-10, the core area detection results for class 1 (antrum-pylorus), class 2 (gastric angle), class 3 (cardia) and class 4 (greater curvature) are shown.
Further, the digestive tract part detection network further includes:
inputting the preprocessed stomach image to be typed into a MobileNetV2 neural network, where, in the MobileNetV2 architecture table, the first column is the input dimension, the second column is the operation, the third column t is the expansion factor, the fourth column c is the number of output feature channels, the fifth column n is the number of repetitions of the bottleneck structure, and the sixth column s is the stride; when there are several bottlenecks, s applies only to the first bottleneck and the following strides are all 1; k is the number of output categories;
according to the MobileNetV2 neural network, each image finally outputs a vector of length 38 as confidence values, representing the probability with which the network predicts each category; the confidence values sum to 1, and the category with the maximum confidence is selected as the part recognition result for the image.
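A sketch of this digestive tract part classifier follows, assuming the standard torchvision MobileNetV2 with its classification head sized to the k = 38 categories mentioned above; the framework and helper name are assumptions.

```python
import torch
from torchvision.models import mobilenet_v2

# MobileNetV2 with k = 38 output categories (the digestive tract parts
# listed above plus any further classes used by the embodiment).
model = mobilenet_v2(weights=None, num_classes=38)
model.eval()

def classify_part(image_tensor):
    """image_tensor: 1 x 3 x 224 x 224 preprocessed gastric image."""
    with torch.no_grad():
        logits = model(image_tensor)
        conf = torch.softmax(logits, dim=1)   # 38 confidences summing to 1
    return int(conf.argmax(dim=1)), conf      # predicted part index and confidences
```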
Further, referring to fig. 11, deciding the gastric atrophy typing result according to the atrophy area detection result, the core area detection result and the digestive tract part detection result further includes:
the atrophy area detection result gives the atrophy area condition, and when an atrophy area exists, mask1 is output;
the core area detection result gives the core part condition, and when a core part exists, mask2 is output;
the digestive tract part detection result gives the image class;
the gastric atrophy typing result includes at least: whether the gastric antrum area has atrophy, whether the gastric angle area intersects the atrophy area, whether the lesser curvature of the middle gastric body has atrophy, whether the cardia intersects the atrophy area, whether the anterior wall of the lower gastric body has atrophy, and whether the greater curvature of the stomach intersects the atrophy area;
whether the gastric atrophy typing result involves an intersection with the atrophy area is decided according to the atrophy area detection result and the core area detection result;
and whether atrophy exists in the gastric atrophy typing result is decided according to the atrophy area detection result and the digestive tract part detection result.
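The decision step could be sketched as follows; how the intermediate judgments are mapped onto a final C1-C3/O1-O3 grade is not detailed above, so only the mask-intersection and atrophy-presence decisions are illustrated, and all names are assumptions.

```python
import numpy as np

def decide_typing(mask1, mask2, part_label):
    """mask1: binary atrophy-area mask or None (atrophy area detection network)
    mask2: binary core-part mask or None (core area detection network)
    part_label: digestive tract part predicted by the classifier"""
    atrophy_present = mask1 is not None                 # combined with part_label downstream
    intersects_core_part = (
        mask1 is not None and mask2 is not None
        and bool(np.logical_and(mask1 > 0, mask2 > 0).any())
    )
    return {
        "part": part_label,
        "atrophy_present": atrophy_present,
        "atrophy_intersects_core_part": intersects_core_part,
    }
```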
According to a second embodiment of the present invention, referring to fig. 12, the present invention claims a gastric atrophy typing detection system for a digestive endoscope, comprising:
an acquisition module, which acquires the stomach image to be typed detected by the digestive endoscope and inputs it into the atrophy area detection network, the core area detection network and the digestive tract part detection network respectively;
a network processing module, which performs processing through the atrophy area detection network, the core area detection network and the digestive tract part detection network and outputs the atrophy area detection result, the core area detection result and the digestive tract part detection result;
a typing decision module, which decides the gastric atrophy typing result according to the atrophy area detection result, the core area detection result and the digestive tract part detection result;
the gastric atrophy typing detection system for a digestive endoscope is used to perform the above-described gastric atrophy typing detection method for a digestive endoscope.
According to a third embodiment of the present invention, referring to fig. 13, the present invention claims a gastric atrophy typing detection system for a digestive endoscope, comprising a processor and a memory, the memory having stored thereon a computer-readable program executable by the processor; when executing the computer-readable program, the processor performs the steps of the above-described gastric atrophy typing detection method for a digestive endoscope.
Those skilled in the art will appreciate that various modifications and improvements can be made to the disclosure. For example, the various devices or components described above may be implemented in hardware, or may be implemented in software, firmware, or a combination of some or all of the three.
A flowchart is used in this disclosure to describe the steps of a method according to an embodiment of the present disclosure. It should be understood that the preceding or following steps are not necessarily performed in the exact order shown. Rather, the various steps may be processed in reverse order or simultaneously. Also, other operations may be added to these processes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the methods described above may be implemented by a computer program to instruct related hardware, and the program may be stored in a computer readable storage medium, such as a read only memory, a magnetic disk, or an optical disk. Alternatively, all or part of the steps of the above embodiments may be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiment may be implemented in the form of hardware, or may be implemented in the form of a software functional module. The present disclosure is not limited to any specific form of combination of hardware and software.
Unless defined otherwise, all terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present disclosure and is not to be construed as limiting thereof. Although a few exemplary embodiments of this disclosure have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this disclosure. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the claims. It is to be understood that the foregoing is illustrative of the present disclosure and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The disclosure is defined by the claims and their equivalents.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. A gastric atrophy typing detection method for a digestive endoscope, comprising:
acquiring a stomach image to be typed detected by a digestive endoscope, and inputting the stomach image to be typed into an atrophy area detection network, a core area detection network and a digestive tract part detection network respectively;
processing through the atrophy area detection network, the core area detection network and the digestive tract part detection network, and outputting an atrophy area detection result, a core area detection result and a digestive tract part detection result;
and deciding the gastric atrophy typing result according to the atrophy area detection result, the core area detection result and the digestive tract part detection result.
2. The gastric atrophy typing detection method for a digestive endoscope according to claim 1, wherein acquiring the stomach image to be typed detected by the digestive endoscope and inputting the stomach image to be typed into the atrophy area detection network, the core area detection network and the digestive tract part detection network respectively further comprises:
before the stomach image to be typed is input into the atrophy area detection network, the core area detection network and the digestive tract part detection network respectively, performing atrophy area detection preprocessing, core area detection preprocessing and digestive tract part detection preprocessing on the stomach image to be typed, and then inputting the preprocessed images into the corresponding detection networks.
3. The gastric atrophy typing detection method for a digestive endoscope according to claim 1, wherein in the core area detection network and the digestive tract part detection network:
the core region comprises at least the antrum-pylorus, the gastric angle, the cardia and the greater curvature;
the digestive tract part includes at least: the lesser curvature of the cardia, the greater curvature of the cardia, the anterior wall of the cardia, the posterior wall of the cardia, the fundus, the anterior superior aspect of the gastric body, the posterior superior aspect of the gastric body, the anterior medial aspect of the gastric body, the posterior medial aspect of the gastric body, the lesser curvature of the gastric body, the anterior inferior aspect of the gastric body, the posterior inferior aspect of the gastric body, the inferior aspect of the lesser curvature of the gastric body, the gastric angle, the anterior aspect of the gastric angle, the posterior aspect of the gastric angle, the anterior aspect of the antrum, the posterior aspect of the antrum, the greater curvature of the antrum, the lesser curvature side of the antrum, the antrum and the pylorus.
4. The gastric atrophy typing detection method for a digestive endoscope according to claim 1, wherein processing through the atrophy area detection network, the core area detection network and the digestive tract part detection network and outputting the atrophy area detection result, the core area detection result and the digestive tract part detection result further comprises:
the atrophy area detection network comprises a backbone network, a neck, a box_head, a mask_branch and a mask_head;
the network structure used by the backbone network is ResNet-50, the basic convolution structure in the backbone network is expressed as [kernel size, number of output channels] × number of repetitions, and each convolution in the backbone network carries a ReLU activation function;
the neck part adopts an FPN feature pyramid structure;
the box_head completes the classification of candidate boxes and the detection of target positions from the feature map; it comprises two branches, each branch being a 1×1 convolution with 256 output channels, which complete category prediction and position prediction respectively.
5. The gastric atrophy typing detection method for a digestive endoscope according to claim 4, further comprising:
inputting the stomach image to be typed into the backbone network to obtain a first feature map, a second feature map and a third feature map;
inputting the first feature map, the second feature map and the third feature map into the neck for multi-scale feature mapping to obtain a fourth feature map, a fifth feature map, a sixth feature map, a seventh feature map and an eighth feature map;
respectively inputting the fourth feature map, the fifth feature map, the sixth feature map, the seventh feature map and the eighth feature map into the box_head to obtain N corresponding instances with their categories and position information;
inputting the fourth feature map into the mask_branch and performing 4 consecutive 3×3 convolution layer operations, wherein the number of convolution channels of the following fifth layer of the mask_branch is 8, and the output of the mask_branch is Fmask, with the same size as the fourth feature map;
concatenating the 8-channel Fmask with a 2-channel relative coordinate map to obtain a 10-channel Rmask, wherein the relative coordinate map gives the position of each point of Fmask relative to the position (x, y) of the current instance, taking (x, y) as the centre coordinate (0, 0) and calculating the relative coordinate values of the other points;
dynamically generating N mask_heads based on the number N of valid instances detected by the box_head;
using the box features encoding position information in the box_head to generate the filter combination θ required by the mask_head;
and passing the upsampled Rmask through the filter combination θ in sequence to obtain a binary image representing the mask segmentation result of the corresponding instance, wherein N groups of mask_heads are generated for an input image containing N candidate instances, and N instance segmentation results are finally obtained.
6. The method for detecting gastric atrophy typing of a digestive endoscope according to claim 5, wherein the generating of the filter combination θ required by the mask_head using the box features that encode position information in the box_head further comprises:
the length of the filter is fixed to 169; a 169-dimensional vector is obtained by applying convolution and pooling operations to the regression features in the box_head, and is then reorganized into the network weights and biases of three 1×1 convolution layers;
the filter combination θ is specifically constituted as follows:
the three 1×1 convolutions are denoted mconv1, mconv2 and mconv3, and the 169 parameters of the dynamic mask_head are reorganized as follows:
weight = (8+2)×8 + 8×8 + 8×1, the three terms corresponding to mconv1, mconv2 and mconv3 respectively, summing to 152;
bias = 8 + 8 + 1, the three terms corresponding to mconv1, mconv2 and mconv3 respectively, summing to 17;
the filter combination θ is completed.
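For illustration, a minimal sketch of reorganising the 169-dimensional vector into the weights and biases of mconv1, mconv2 and mconv3 is given below, using the channel layout 10→8, 8→8 and 8→1 implied by weight = (8+2)×8 + 8×8 + 8×1 = 152 and bias = 8 + 8 + 1 = 17. The splitting order and the function name are assumptions.

```python
import torch

def split_theta(params: torch.Tensor):
    """Split a length-169 parameter vector into weights and biases for three 1x1 convolutions."""
    assert params.numel() == 169
    shapes = [(8, 10), (8, 8), (1, 8)]   # (out_channels, in_channels) for mconv1, mconv2, mconv3
    weights, biases, idx = [], [], 0
    for out_c, in_c in shapes:           # weight blocks: 80 + 64 + 8 = 152 values
        n = out_c * in_c
        weights.append(params[idx:idx + n].reshape(out_c, in_c, 1, 1))
        idx += n
    for out_c, _ in shapes:              # bias blocks: 8 + 8 + 1 = 17 values
        biases.append(params[idx:idx + out_c])
        idx += out_c
    assert idx == 169
    return weights, biases

weights, biases = split_theta(torch.randn(169))  # usable with the dynamic mask_head sketched above
```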
7. The method for detecting gastric atrophy typing of a digestive endoscope according to any one of claims 4 to 6, wherein the processing by the digestive tract part detection network further comprises:
inputting the preprocessed stomach image to be typed into a MobileNetV2 neural network, wherein in the MobileNetV2 structure table the first column is the input dimension, the second column is the operation, the third column t is the expansion factor, the fourth column c is the number of output feature channels, the fifth column n is the number of repetitions of the bottleneck structure, the sixth column s is the stride size (when there are several bottlenecks, s applies only to the first bottleneck and the following ones use a stride of 1), and k is the number of output categories;
according to the MobileNetV2 neural network, each image finally yields a vector of length 38 as the confidence scores, representing the probability predicted by the network for each category; the confidence scores sum to 1, and the category with the highest confidence is selected as the part recognition result of the image.
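For illustration, a minimal sketch of the part recognition step in claim 7 is given below: a MobileNetV2 classifier with 38 output categories, a softmax producing confidences that sum to 1, and an argmax selecting the recognised part. Using torchvision's stock MobileNetV2 and a 224×224 input are assumptions for illustration; the patent relies on its own bottleneck configuration table.

```python
import torch
from torchvision.models import mobilenet_v2

model = mobilenet_v2(num_classes=38)   # k = 38 output categories
model.eval()

image = torch.randn(1, 3, 224, 224)    # placeholder for a preprocessed stomach image
with torch.no_grad():
    logits = model(image)
    confidences = torch.softmax(logits, dim=1)      # length-38 vector, sums to 1
    part_class = confidences.argmax(dim=1).item()   # part recognition result for this image
```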
8. The method according to claim 1, wherein the deciding of the gastric atrophy typing result based on the atrophy area detection result, the core area detection result and the digestive tract part detection result further comprises:
outputting an atrophy area condition according to the atrophy area detection result, and outputting mask1 when an atrophy area exists;
outputting a core position condition according to the core area detection result, and outputting mask2 when a core position exists;
outputting an image category according to the digestive tract part detection result;
the gastric atrophy typing result comprises at least: whether atrophy exists in the gastric antrum area, whether the gastric angle area intersects with the atrophy, whether the lesser curvature of the middle gastric body intersects with the atrophy, whether the cardia intersects with the atrophy, and whether the anterior wall of the lower gastric body and the greater curvature of the stomach intersect with the atrophy;
deciding whether intersection with the atrophy exists in the gastric atrophy typing result according to the atrophy area detection result and the core area detection result;
deciding whether atrophy exists in the gastric atrophy typing result according to the atrophy area detection result and the digestive tract part detection result.
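For illustration, a minimal sketch of the typing decision in claim 8 is given below: mask1 (atrophy area), mask2 (core position) and the recognised part category are combined, with overlap between the two masks read as 'intersects with the atrophy' and a non-empty mask1 on the recognised part read as 'atrophy exists'. The thresholding and the output dictionary format are assumptions.

```python
import numpy as np

def decide_typing(mask1, mask2, part_name):
    """Combine atrophy mask, core-position mask and part category into a typing decision."""
    has_atrophy = mask1 is not None and bool((mask1 > 0).any())
    intersects = (
        has_atrophy and mask2 is not None
        and bool(np.logical_and(mask1 > 0, mask2 > 0).any())
    )
    return {"part": part_name, "atrophy_present": has_atrophy, "intersects_atrophy": intersects}

# Example: an antrum image whose atrophy mask overlaps the core-position mask
m1 = np.zeros((256, 256), dtype=np.uint8); m1[50:100, 50:100] = 1
m2 = np.zeros((256, 256), dtype=np.uint8); m2[80:120, 80:120] = 1
print(decide_typing(m1, m2, "gastric antrum"))
```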
9. A gastric atrophy typing detection system for a digestive endoscope, comprising:
an acquisition module, which acquires a stomach image to be typed captured by the digestive endoscope and inputs the stomach image to be typed into an atrophy area detection network, a core area detection network and a digestive tract part detection network respectively;
a network processing module, which performs processing by the atrophy area detection network, the core area detection network and the digestive tract part detection network and outputs an atrophy area detection result, a core area detection result and a digestive tract part detection result;
a typing decision module, which decides the gastric atrophy typing result according to the atrophy area detection result, the core area detection result and the digestive tract part detection result;
wherein the gastric atrophy typing detection system for a digestive endoscope is configured to execute the gastric atrophy typing detection method for a digestive endoscope according to any one of claims 2 to 8.
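For illustration, a minimal sketch of the three-module composition of claim 9 is given below: acquisition, network processing by the three detection networks, and the typing decision. The class and method names are illustrative, and the network objects and decision function are assumed to exist (for example, along the lines of the sketches above).

```python
class GastricAtrophyTypingSystem:
    """Sketch of the acquisition / network processing / typing decision pipeline."""
    def __init__(self, atrophy_net, core_net, part_net, decide_fn):
        self.atrophy_net = atrophy_net   # atrophy area detection network
        self.core_net = core_net         # core area detection network
        self.part_net = part_net         # digestive tract part detection network
        self.decide_fn = decide_fn       # typing decision logic

    def run(self, image):
        mask1 = self.atrophy_net(image)             # atrophy area detection result
        mask2 = self.core_net(image)                # core area detection result
        part = self.part_net(image)                 # digestive tract part detection result
        return self.decide_fn(mask1, mask2, part)   # gastric atrophy typing result
```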
10. A gastric atrophy typing detection system for a digestive endoscope, comprising: a processor and a memory; the memory stores a computer readable program executable by the processor; and the processor, when executing the computer readable program, implements the steps of the gastric atrophy typing detection method for a digestive endoscope according to any one of claims 1 to 8.
CN202311739542.6A 2023-12-18 2023-12-18 Gastric withering parting detection method and system for digestive endoscopy Active CN117456282B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311739542.6A CN117456282B (en) 2023-12-18 2023-12-18 Gastric withering parting detection method and system for digestive endoscopy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311739542.6A CN117456282B (en) 2023-12-18 2023-12-18 Gastric withering parting detection method and system for digestive endoscopy

Publications (2)

Publication Number Publication Date
CN117456282A true CN117456282A (en) 2024-01-26
CN117456282B CN117456282B (en) 2024-03-19

Family

ID=89591235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311739542.6A Active CN117456282B (en) 2023-12-18 2023-12-18 Gastric withering parting detection method and system for digestive endoscopy

Country Status (1)

Country Link
CN (1) CN117456282B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544526A (en) * 2018-11-15 2019-03-29 首都医科大学附属北京友谊医院 A kind of atrophic gastritis image identification system, device and method
US20190304129A1 (en) * 2018-03-27 2019-10-03 Siemens Medical Solutions Usa, Inc. Image-based guidance for navigating tubular networks
CN113538344A (en) * 2021-06-28 2021-10-22 河北省中医院 Image recognition system, device and medium for distinguishing atrophic gastritis and gastric cancer
CN113610847A (en) * 2021-10-08 2021-11-05 武汉楚精灵医疗科技有限公司 Method and system for evaluating stomach markers in white light mode

Also Published As

Publication number Publication date
CN117456282B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
CN111325739B (en) Method and device for detecting lung focus and training method of image detection model
Guo et al. Giana polyp segmentation with fully convolutional dilation neural networks
US11367181B2 (en) Systems and methods for ossification center detection and bone age assessment
US8396271B2 (en) Image processing apparatus, image processing program recording medium, and image processing method
CN108062525B (en) Deep learning hand detection method based on hand region prediction
US8670622B2 (en) Image processing apparatus, image processing method, and computer-readable recording medium
CN105979847B (en) Endoscopic images diagnosis aid system
KR102332088B1 (en) Apparatus and method for polyp segmentation in colonoscopy images through polyp boundary aware using detailed upsampling encoder-decoder networks
US9773185B2 (en) Image processing apparatus, image processing method, and computer readable recording device
US8620042B2 (en) Image processing apparatus, image processing method, and computer-readable recording medium
WO2019142243A1 (en) Image diagnosis support system and image diagnosis support method
Souaidi et al. A new automated polyp detection network MP-FSSD in WCE and colonoscopy images based fusion single shot multibox detector and transfer learning
CN112884788B (en) Cup optic disk segmentation method and imaging method based on rich context network
CN113223005B (en) Thyroid nodule automatic segmentation and grading intelligent system
CN111754531A (en) Image instance segmentation method and device
CN114140651A (en) Stomach focus recognition model training method and stomach focus recognition method
CN114372951A (en) Nasopharyngeal carcinoma positioning and segmenting method and system based on image segmentation convolutional neural network
Hassan et al. SEADNet: Deep learning driven segmentation and extraction of macular fluids in 3D retinal OCT scans
CN117152433A (en) Medical image segmentation method based on multi-scale cross-layer attention fusion network
Nader et al. Automatic teeth segmentation on panoramic X-rays using deep neural networks
CN113989236A (en) Gastroscope image intelligent target detection system and method
CN117456282B (en) Gastric withering parting detection method and system for digestive endoscopy
CN116934722A (en) Small intestine micro-target detection method based on self-correction coordinate attention
CN113177912A (en) Stomach polyp detection method and device based on deep learning
CN112766332A (en) Medical image detection model training method, medical image detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant