CN116563216A - Endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition - Google Patents


Info

Publication number: CN116563216A (application CN202310336070.3A; granted publication CN116563216B)
Authority: CN (China)
Prior art keywords: standard site, image, standard, module, real
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 薛林雁, 杨昆, 常世龙, 王尉丞, 刘爽, 刘琨, 孙宇锋, 李开元
Current Assignee / Original Assignee: Hebei University
History: application filed by Hebei University; priority to CN202310336070.3A; publication of CN116563216A; application granted; publication of CN116563216B

Classifications

    • G06T 7/0012 Image analysis; inspection of images; biomedical image inspection
    • G06N 3/0442 Neural networks; recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N 3/08 Neural networks; learning methods
    • G06V 10/44 Extraction of image or video features; local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/80 Pattern recognition or machine learning; fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/82 Pattern recognition or machine learning using neural networks
    • G06T 2207/10016 Image acquisition modality: video; image sequence
    • G06T 2207/10024 Image acquisition modality: colour image
    • G06T 2207/10068 Image acquisition modality: endoscopic image
    • G06T 2207/20221 Image combination: image fusion; image merging
    • Y02T 10/40 Engine management systems (climate change mitigation technologies related to transportation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition. The system comprises a data collection and preprocessing module, a standard site real-time classification model construction module, a model training and verification optimization module, and a standard site scanning visualization module. The data collection and preprocessing module collects EUS video data sets and preprocesses them; the standard site real-time classification model construction module identifies whether the current frame of the input data is a standard site image and, if so, determines its spatial position; the model training and verification optimization module optimizes the parameters of the standard site real-time classification model; the standard site scanning visualization module performs statistics, analysis and visualization on the output of the standard site real-time classification model so as to ensure continuous scanning of all EUS standard sites. The invention can be used for intelligent navigation and scanning control optimization of EUS procedures in clinical settings or physician training.

Description

Endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition
Technical Field
The invention relates to the technical field of deep learning, in particular to an endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition.
Background
Endoscopic ultrasound (EUS) performs close-range, real-time endoscopic (optical) and ultrasound scanning within the gastrointestinal tract. It can clearly display the size and position of a tumor, its relationship to blood vessels, and the presence or absence of lymph node metastasis, and it involves no radiation; it is therefore an imaging examination with high sensitivity and good safety. However, EUS requires the operator to master both endoscope manipulation and ultrasound image interpretation, the procedure is difficult, and the technique typically requires at least 12 months of training to perform accurately.
EUS-guided diagnosis and intervention are strongly affected by the physician's subjective judgment and clinical experience, and examination results differ between operators of different skill levels. To ensure the completeness of the examination as far as possible and to avoid missed findings, station-by-station continuous scanning has become the standard EUS examination procedure. In this workflow, each standard site is determined by quickly locating the guide landmark that represents it, and a complete endoscopic ultrasound scan is performed following the spatial sequence of the standard sites. However, the anatomy shown in EUS imaging is complex, spatial information is difficult to judge, and even subtle changes in the angle of the ultrasound probe lead to significant differences in the image. Interpretation of EUS images is therefore very difficult, and it is hard for an operator to correctly understand the anatomical structures under EUS during continuous scanning and dynamic observation; scanning blind spots arise easily, and some sites are missed.
Existing research discloses an artificial-intelligence-based pancreatic endoscopic ultrasound navigation method and system, in which a trained convolutional neural network identifies multiple sites in acquired and screened pancreatic EUS images. Other research discloses a medical image processing method, apparatus, computer device and storage medium that scans identified medical images by continuously tracking a dynamic sequence of pancreatic medical image frames. These existing approaches perform site identification and classification only on ultrasound video frames, whereas the endoscopic images obtained simultaneously during an EUS examination provide important information about the structure of the upper digestive tract; this information has a definite spatial correspondence with the anatomical structures in the ultrasound images and therefore supplies richer detail for identifying standard sites. In addition, existing studies are limited in the real-time performance and accuracy with which they classify EUS video frames. Integrating the EUS optical image and the ultrasound image can therefore provide multidimensional features and improve the accuracy of standard site identification.
Disclosure of Invention
The invention aims to provide an endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition, in order to solve the problems that image interpretation during dynamic EUS scanning is difficult and that missed sites make EUS scanning quality hard to guarantee.
The invention is realized as follows. An endoscope ultrasonic scanning control optimization system based on standard site intelligent recognition comprises:
the data collection and preprocessing module is used for collecting multi-modal EUS video data (comprising EUS optical video and EUS ultrasound video), establishing an EUS video data set, and preprocessing the data set, including anonymization, data cleaning, data set labeling and data enhancement;
the standard site real-time classification model construction module is used for constructing, according to the real-time classification requirements of EUS standard sites, an algorithm model comprising a standard site image real-time screening model and a standard site spatial position real-time classification model; the algorithm model judges whether the current frame of the input real-time multi-modal EUS video data is a standard site image and classifies the spatial position of each frame determined to be a standard site image;
the standard site image real-time screening model comprises a feature extraction module, a feature weighted interaction module and a standard site real-time screening module; the feature extraction module comprises two lightweight feature extraction modules and two LSTM (long short-term memory) network modules; one lightweight feature extraction module is connected to one LSTM network module: the lightweight module extracts the features of each frame of the optical sequence, and the LSTM module models the correlation of the features of 10 consecutive frames to obtain the overall features of the EUS optical video; the other lightweight feature extraction module is connected to the other LSTM network module: the lightweight module extracts the features of each frame of the ultrasound sequence, and the LSTM module processes the correlation of the features of 10 consecutive frames to obtain the overall features of the EUS ultrasound video; the feature weighted interaction module comprises a bimodal sequence feature attention mechanism module, which acquires the image sequence features of the two modalities and adds them channel by channel to realize weighted fusion of the sequence features of the two modalities; the standard site real-time screening module comprises a fully connected neural network and a softmax function, identifies the last frame of the input image sequence as a standard site image or a non-standard site image, and outputs standard site images to the subsequent standard site spatial position real-time classification model;
the standard site image real-time screening model extracts optical video frames and ultrasound video frames from the preprocessed optical/ultrasound bimodal real-time video and inputs them into the corresponding lightweight feature extraction modules to obtain the optical image features and ultrasound image features of each current frame; starting from the 10th frame, the optical image features and ultrasound image features of the current frame i and its 9 preceding frames are packed to obtain an optical image feature sub-sequence and an ultrasound image feature sub-sequence; the two feature sub-sequences are input into their respective LSTM network modules to extract the correlation between the feature vectors of the 10 adjacent optical and ultrasound frames in each sub-sequence and to obtain the overall optical video features and overall ultrasound video features of those 10 frames; the overall optical video features and overall ultrasound video features of the current frame and its 9 preceding frames are added channel by channel by the bimodal sequence feature attention mechanism module to realize weighted interactive feature fusion and generate an optical/ultrasound bimodal feature vector; the fully connected neural network in the standard site real-time screening module judges whether the last frame in the optical and ultrasound image feature sub-sequences is a standard site image, and if so, the optical image and ultrasound image of that frame are input into the standard site spatial position real-time classification model; frames i+1, i+2, ... are processed in turn until the last frame, with i ≥ 10;
the standard site spatial position real-time classification model comprises an encoder, a multi-modal image feature fusion module, a decoder and a classifier; the encoder comprises a color space conversion module and two dense convolution layers; the multi-modal image feature fusion module fuses the optical-modality and ultrasound-modality features extracted by the encoder, and further fuses the U- and V-channel features of the standard site optical video frame with the preliminarily fused multi-modal features through a global addition transformation to generate the final fused image features; the decoder comprises a convolution layer and a channel attention module and refines the more discriminative features so as to improve the fusion result of the previous stage and realize multi-modal feature fusion; the classifier comprises a fully connected neural network and a softmax function, classifies the spatial position of the input standard site video frame, and outputs the standard site spatial position classification result;
the standard site spatial position real-time classification model first uses the color space conversion module to convert the optical image of the standard site from the RGB color space to the YUV color space, and then uses a dense convolution layer to extract the Y-channel features, obtaining a feature vector; for the ultrasound image of the standard site, the other dense convolution layer is used to obtain its feature vector; the feature vector combining the optical and ultrasound features is obtained by the multi-modal image feature fusion module, and the fused features are further refined by the convolution layer and channel attention module in the decoder; finally, the spatial position of the standard site image frame is classified by the fully connected neural network in the classifier;
the model training and verification optimization module is used for optimizing the parameters of the models constructed by the standard site real-time classification model construction module, so as to improve the recognition performance while testing and verifying the performance of the standard site spatial position real-time classification model, thereby obtaining the final optimized model;
the standard site scanning visualization module is used for performing statistics and analysis on the output of the standard site spatial position real-time classification model, and for counting and displaying in real time the standard sites already examined, the standard site currently being scanned and any standard sites that have been missed. Specifically: a virtual anatomical map of the upper digestive tract is generated and displayed, showing the positions and numbers of the 10 standard sites (for example in a white font); according to the statistics and analysis of the standard site spatial position real-time classification model, the numbers of the examined standard sites are displayed in a first color (for example green), the number of the current standard site is displayed in a second color (for example yellow) together with operating notes for that site, and the numbers of missed standard sites are displayed in a flashing third color (for example red) to remind the operator that a station jump has occurred during the examination (i.e. that a standard site has been missed) and that the operator must return to that standard site and scan it again; if no station jump occurs during scanning, the number of the next standard site to be scanned is displayed in a fourth color (for example blue) after the current standard site number is lit;
the first color, the second color, the third color, and the fourth color are four different colors;
the 10 standard sites are, in order: standard site 1, hepatic hilum and gastroesophageal junction → standard site 2, abdominal aorta station → standard site 3, pancreatic neck and pancreatic body station → standard site 4, pancreatic tail station → standard site 5, splenic hilum station → standard site 6, portal vein confluence triangle station → standard site 7, pancreatic head station from the descending duodenum → standard site 8, portal vein station → standard site 9, pancreatic head station from the stomach → standard site 10, duodenal bulb.
Preferably, the lightweight feature extraction module is used for extracting the image features of optical video frames and ultrasound video frames in real time, specifically as follows: the residual between the current frame image and the previous frame image is computed; a residual gate (gate 1) is set according to this residual, and a current-frame importance gate (gate 2) is set according to the predicted classification of the previous frame: if the previous frame was predicted to be a standard site image, the current frame is judged important (i.e. not negligible), otherwise it is judged unimportant (i.e. negligible). When the residual is below the set threshold, or is above the threshold but the current frame is unimportant, gate 1 directly reuses the features of the previous frame as the features of the current frame; when the residual is greater than or equal to the threshold and the current frame is important (i.e. not negligible), gate 2 routes the residual frame through the dense convolution layer to extract residual-frame image features, which are superimposed on the features of the previous frame to form the features of the current frame.
Preferably, the multi-modal image feature fusion module performs the following operations: first, a base/detail feature decomposition module extracts the base-layer and detail-layer features of the Y-channel image and of the original ultrasound image; then a detail feature fusion module fuses the detail-layer features of the Y-channel image and the original ultrasound image, and a base feature fusion module fuses their base-layer features; the fused detail-layer and base-layer features are added to obtain the preliminary fused image features; finally, the preliminary fused image features are fused with the U-channel and V-channel image features to obtain the final fused image features.
The endoscope ultrasonic scanning control optimization method based on standard site intelligent recognition provided by the invention uses the above system and specifically comprises the following steps:
S1, using the data collection and preprocessing module to collect video data of the EUS optical and ultrasound modalities and establish an EUS video data set; performing preprocessing operations on the EUS video data set, including anonymization, data cleaning, data set labeling and data enhancement;
S2, constructing the standard site image real-time screening model, and using its lightweight feature extraction modules and LSTM (long short-term memory) network modules to extract video sequence features of the optical and ultrasound modalities, specifically: the lightweight feature extraction modules extract the image features of each video frame of the optical video and the ultrasound video in temporal order, and the LSTM network modules take sequences of 10 consecutive optical and ultrasound image features as input and extract the correlation features of these continuous feature sequences, obtaining the overall optical video features and overall ultrasound video features; the bimodal sequence feature attention mechanism module in the model adds the optical and ultrasound sequence features channel by channel to realize weighted fusion of the two modalities, after which standard site images are screened in real time by the fully connected neural network and softmax function in the standard site real-time screening module;
S3, constructing the standard site spatial position real-time classification model; for the optical-modality and ultrasound-modality standard site images screened out by the standard site image real-time screening model, using the encoder in this model to extract the features of the two modalities, specifically: the color space conversion module and one dense convolution layer in the encoder extract the Y-channel features of the optical modality, and the other dense convolution layer extracts the features of the ultrasound-modality image; the multi-modal image feature fusion module in the standard site spatial position real-time classification model then performs a preliminary multi-modal fusion of the Y-channel features and the ultrasound image features, and further fuses the preliminary result with the U-channel and V-channel features of the optical modality to obtain the final fusion result; the fused features are then refined by the convolution layer and channel attention module in the decoder; finally, the spatial position of the standard site image frame is classified in real time by the fully connected neural network in the classifier;
S4, model training: training the standard site image real-time screening model and the standard site spatial position real-time classification model by gradient descent;
S5, using metrics including classification accuracy, precision, recall, F1 score, area under the receiver operating characteristic curve and the confusion matrix to internally evaluate and optimize the performance of the standard site image real-time screening model and the standard site spatial position real-time classification model on an internal test set;
S6, collecting EUS examination video data from different hospitals as test samples, verifying the standard site spatial position real-time classification model to determine performance indicators including its site classification accuracy, recall, precision and ROC curve, analyzing the differences in classification performance on data from the different hospitals, and optimizing the model;
S7, using the standard site scanning visualization module to generate and display a virtual anatomical map of the upper digestive tract showing the positions and numbers of the 10 standard sites (for example in a white font); then, according to the statistics and analysis of the standard site spatial position real-time classification model, displaying the standard site numbers by category, specifically: scanned standard site numbers are displayed in a first color (e.g. green); the currently identified standard site number is displayed in a second color (e.g. yellow) together with operating notes for the current site; missed standard site numbers are displayed in a flashing third color (e.g. red) to remind the operator that a station jump has occurred during the examination (i.e. a standard site has been missed) and that the operator must return to that standard site and examine it again; if no station jump occurs during scanning, the next standard site to be scanned is displayed in a fourth color (e.g. blue) after the current standard site number is lit;
the 10 standard sites are, in order: standard site 1, hepatic hilum and gastroesophageal junction → standard site 2, abdominal aorta station → standard site 3, pancreatic neck and pancreatic body station → standard site 4, pancreatic tail station → standard site 5, splenic hilum station → standard site 6, portal vein confluence triangle station → standard site 7, pancreatic head station from the descending duodenum → standard site 8, portal vein station → standard site 9, pancreatic head station from the stomach → standard site 10, duodenal bulb.
The beneficial effects of the invention are as follows:
The invention judges in real time whether the current frame of the input EUS optical/ultrasound multi-modal video data is a standard site, and counts and displays in real time the sites already examined and the site currently being scanned; it presents operating notes for the current standard site and prompts the next site to be scanned, and if the operator jumps over a station during the examination (i.e. misses a site), it prompts the physician to return to the previous standard site and examine it again. This effectively helps the EUS operator ensure full coverage of the EUS scan, avoids missing any site, and improves EUS scanning quality.
Drawings
FIG. 1 is a block diagram of an endoscopic ultrasound scanning control optimization system based on standard site intelligent recognition in an embodiment of the invention.
FIG. 2 is a block diagram of a standard site real-time classification model construction module according to an embodiment of the invention.
Fig. 3 is a block diagram of a lightweight feature extraction module according to an embodiment of the invention.
Fig. 4 is a block diagram of a multi-modal image feature fusion module according to an embodiment of the invention.
Fig. 5 is a schematic diagram of an upper gastrointestinal EUS specification scan in accordance with an embodiment of the present invention.
FIG. 6 is a flow chart of a standard site scanning visualization module of an embodiment of the present invention.
Fig. 7 is a visual illustration of an upper gastrointestinal EUS dynamic scanning procedure in accordance with an embodiment of the present invention.
Detailed Description
As shown in FIG. 1, the endoscope ultrasonic scanning control optimization system based on standard site intelligent recognition provided by the invention comprises four components: a data collection and preprocessing module, a standard site real-time classification model construction module, a model training and verification optimization module, and a standard site scanning visualization module.
The data collection and preprocessing module is used for collecting EUS optical/ultrasound multi-modal video data, establishing an EUS video data set, and performing preprocessing operations on it, including anonymization, data cleaning, data set labeling, data set division and data enhancement.
Specifically, the data collection and preprocessing includes the following steps:
(1) Data collection: imaging data and medical history data of patients undergoing EUS at multiple centers (i.e. multiple hospitals) are collected retrospectively, including diagnosis and treatment data such as pathology data and clinical data.
Inclusion criteria: patients who signed EUS informed consent before the procedure, and patients with biliary-pancreatic disease with clear EUS indications.
Exclusion criteria: a history of sphincterotomy; a history of pancreaticobiliary duct disease; severe cardiopulmonary disease or mental illness; coagulation dysfunction; missing clinical medical record data, test results or endoscopy data; a history of anesthetic allergy.
Anonymization: image information anonymization is performed on all images.
(2) Data cleaning: to improve the feature extraction capability and recognition results of the model, endoscopic images containing reflections, foam, blurring and the like, and ultrasound images containing the liver, kidney, spleen and the like, are removed.
(3) Data set labeling: two experienced endoscopists divide the cleaned data into standard site images and non-standard site images, and then divide the standard site images into the different standard sites according to anatomical structure, i.e. standard site images are labeled site 1, site 2, ..., site 10 (see the description of the 10 standard sites below).
(4) Data set division: the labeled data set of one center is divided into a training set and an internal test set at a ratio of 4:1, and the data of the remaining centers are divided into several external verification sets, one per center.
(5) Data enhancement: to improve the generalization capability of the model, the training set is augmented using techniques such as rotation, cropping, scaling and motion blur (a minimal implementation sketch of steps (4) and (5) follows this list).
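The following is a minimal sketch, assuming a PyTorch/torchvision pipeline, of how steps (4) and (5) could be implemented; the 4:1 split ratio and the augmentation operations follow the text, while the function names, parameter values and the use of Gaussian blur as a stand-in for dynamic blurring are illustrative assumptions.

```python
import random
from torchvision import transforms

def split_train_test(samples, train_ratio=0.8, seed=42):
    """Split one center's labeled samples 4:1 into a training set and an internal test set."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

# Training-set augmentation: rotation, cropping, scaling and blur, as listed in step (5).
train_augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # stand-in for dynamic blurring
    transforms.ToTensor(),
])
```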
The standard site real-time classification model construction module is used for constructing, according to the real-time classification requirements of EUS standard sites, an algorithm model comprising a standard site image real-time screening model and a standard site spatial position real-time classification model; the algorithm model judges whether the current frame of the input EUS video data is a standard site image and classifies the spatial position of each frame determined to be a standard site image.
Specifically, as shown in fig. 2, the standard site image real-time screening model includes three components, namely a feature extraction module, a feature weighted interaction module and a standard site real-time screening module.
Feature extraction module: shallow features of each ultrasound and optical video frame are extracted by the lightweight feature extraction modules. To improve the real-time performance of the task, lightweight feature extraction is realized on the basis of the residual idea. As shown in fig. 3, the residual features between two adjacent frames are computed with a global subtraction transformation, a gate module then selects between different ways of extracting features from the residual frame, and finally the extracted residual-frame features are fused with the features of the previous frame by a global addition transformation to obtain the image features of the current frame.
With reference to fig. 3, lightweight residual-based feature extraction is performed on both the optical video and the ultrasound video as follows: the residual between the current frame image and the previous frame image is computed; a residual gate (gate 1) is set according to this residual, and a current-frame importance gate (gate 2) is set according to the predicted classification of the previous frame: if the previous frame was predicted to be a standard site image, the current frame is judged important (i.e. not negligible), otherwise it is judged unimportant (i.e. negligible). When the residual is below the set threshold, or is above the threshold but the current frame is unimportant, gate 1 directly reuses the features of the previous frame as the features of the current frame; when the residual is greater than or equal to the threshold and the current frame is important (i.e. not negligible), gate 2 routes the residual frame through the dense convolution layer to extract residual-frame image features, which are superimposed on the features of the previous frame to form the features of the current frame.
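The PyTorch sketch below illustrates one possible reading of this gated residual extractor; the dense block, the threshold value and the tensor shapes are assumptions made for illustration and are not the patented implementation.

```python
import torch
import torch.nn as nn

class GatedResidualExtractor(nn.Module):
    """Lightweight per-frame feature extraction with a residual gate (gate 1)
    and a current-frame importance gate (gate 2), as sketched from the text above."""

    def __init__(self, in_ch=3, feat_ch=64, residual_threshold=0.05):
        super().__init__()
        self.threshold = residual_threshold
        # Stand-in for the dense convolution layer that encodes the residual frame.
        self.dense = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, frame, prev_frame, prev_feat, prev_was_standard_site):
        residual = frame - prev_frame                # global subtraction transformation
        magnitude = residual.abs().mean()            # scalar residual strength
        if magnitude < self.threshold or not prev_was_standard_site:
            return prev_feat                         # gate 1: reuse previous-frame features
        residual_feat = self.dense(residual)         # gate 2: encode only the residual frame
        return prev_feat + residual_feat             # global addition transformation
```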
Then the image features of the most recent ten frames are packed into a feature sequence; the two groups of feature sequence vectors are processed by the LSTM network modules, which capture the temporal correlation within each group of feature vectors so that image features are extracted more accurately, and a video sequence feature map is generated for each of the two modalities.
Feature weighted interaction module: the generated video sequence feature maps of the two modalities are integrated by the bimodal sequence feature attention mechanism module and added channel by channel, realizing weighted fusion of the sequence features of the two modalities and improving the positioning accuracy of the standard sites.
Standard site real-time screening module: based on a fully connected layer and a softmax function, and for the sake of real-time performance, only the last frame of the input image sequence is identified as a standard site or non-standard site image; standard site images are output to the subsequent standard site spatial position real-time classification model.
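The following sketch shows one way the screening stage could be assembled: a per-modality LSTM over a 10-frame feature sub-sequence, channel-wise weighted addition of the two modalities, and a fully connected layer with softmax that labels the last frame. All dimensions and the learnable-weight form of the attention are illustrative assumptions.

```python
import torch
import torch.nn as nn

class StandardSiteScreeningHead(nn.Module):
    """Bimodal screening head: LSTMs over optical and ultrasound feature
    sub-sequences, channel-wise weighted fusion, and a standard/non-standard
    site classification of the last frame."""

    def __init__(self, feat_dim=256, hidden_dim=256):
        super().__init__()
        self.optical_lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.ultrasound_lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Learnable per-channel weights standing in for the bimodal attention module.
        self.optical_weight = nn.Parameter(torch.ones(hidden_dim))
        self.ultrasound_weight = nn.Parameter(torch.ones(hidden_dim))
        self.classifier = nn.Linear(hidden_dim, 2)   # standard vs. non-standard site

    def forward(self, optical_seq, ultrasound_seq):
        # optical_seq, ultrasound_seq: (batch, 10, feat_dim) frame-feature sub-sequences.
        _, (opt_h, _) = self.optical_lstm(optical_seq)
        _, (us_h, _) = self.ultrasound_lstm(ultrasound_seq)
        opt_total, us_total = opt_h[-1], us_h[-1]    # overall video features of the 10 frames
        fused = self.optical_weight * opt_total + self.ultrasound_weight * us_total
        return torch.softmax(self.classifier(fused), dim=-1)
```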
Specifically, as shown in fig. 2, the standard site spatial position real-time classification model comprises four components: an encoder, a feature fusion module, a decoder and a classifier.
Encoder: to address the difficulty of color distortion when fusing an ultrasound image with an optical image, the optical image of the standard site is first converted from the RGB color space to the YUV color space by color space conversion, and a dense convolution layer is then used to extract the Y-channel features, giving a feature vector; for the ultrasound image of the standard site, a dense convolution layer is used to obtain its feature vector.
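A minimal sketch of the RGB-to-YUV conversion step is given below; the patent does not state which YUV variant is used, so the BT.601 coefficients are an assumption.

```python
import torch

def rgb_to_yuv(rgb: torch.Tensor) -> torch.Tensor:
    """Convert an RGB tensor (..., 3, H, W) with values in [0, 1] to YUV (BT.601 assumed).
    The Y channel feeds the dense convolution layer; U and V are kept for the later
    global-addition fusion stage."""
    r, g, b = rgb.unbind(dim=-3)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return torch.stack((y, u, v), dim=-3)
```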
Feature fusion module: the features of the two modalities extracted by the encoder are fused. As shown in fig. 4, a filter-based base/detail feature decomposition splits each source image into a base layer and a detail layer; given the different characteristics of the base-layer and detail-layer information, the fused multi-modal base-layer and detail-layer features are obtained with a fusion strategy based on decomposition coefficients and a pulse-coupled neural network; the preliminary fused image features are generated by combining the multi-modal base-layer and detail-layer features through a global addition transformation; finally, the U- and V-channel features of the endoscopic standard site image are fused with the preliminarily fused multi-modal features through a further global addition transformation to generate the final fused image features.
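The sketch below is a deliberately simplified illustration of this base/detail decomposition and global-addition fusion, assuming a box-filter split; the pulse-coupled-neural-network weighting is abstracted into a plain average and all shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def base_detail_split(img: torch.Tensor, k: int = 31):
    """Filter-based decomposition into a smoothed base layer and a detail layer."""
    base = F.avg_pool2d(img, kernel_size=k, stride=1, padding=k // 2)
    return base, img - base

def fuse_y_with_ultrasound(y_feat, us_feat, u_feat, v_feat):
    """Preliminary fusion of Y-channel and ultrasound features, followed by global
    addition of the U/V-channel features (PCNN-based weighting replaced by a mean)."""
    y_base, y_detail = base_detail_split(y_feat)
    us_base, us_detail = base_detail_split(us_feat)
    fused_base = 0.5 * (y_base + us_base)          # base-layer fusion placeholder
    fused_detail = 0.5 * (y_detail + us_detail)    # detail-layer fusion placeholder
    preliminary = fused_base + fused_detail        # global addition transformation
    return preliminary + u_feat + v_feat           # final fused image features
```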
Decoder: the fusion result of the previous stage is improved by refining the more discriminative features with a convolution layer and a channel attention module, so as to realize efficient multi-modal feature fusion.
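The patent does not specify the internal form of the channel attention module; the following sketch uses a squeeze-and-excitation style block as one plausible instantiation.

```python
import torch.nn as nn

class DecoderBlock(nn.Module):
    """Convolution followed by squeeze-and-excitation style channel attention,
    refining the fused multi-modal features before classification."""

    def __init__(self, channels=64, reduction=8):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                  nn.ReLU(inplace=True))
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        x = self.conv(x)
        return x * self.attn(x)    # channel-wise reweighting of the fused features
```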
Classifier: based on a fully connected layer and a softmax function, the spatial position of the input standard site video frame is classified and the standard site spatial position classification result is output.
Specifically, the standard site image real-time screening model and the standard site spatial position real-time classification model can be written in Python (version 3.11.1), with the open-source PyTorch library (version 1.6.1) as the back end of the algorithm.
The model training and verification optimization module is used for optimizing the parameters of the standard site image real-time screening model and the standard site spatial position real-time classification model, so as to improve the recognition performance while testing and verifying the performance of the standard site spatial position real-time classification model.
Specifically, in the training of the standard site image real-time screening model, transfer learning is used: the pre-trained weights of the EUS standard site model are loaded and the model is then trained iteratively, with the following hyperparameters: 200 training epochs, mini-batch size 16, initial learning rate 0.01 decayed to one tenth every 30 epochs, a cross-entropy loss function, and stochastic gradient descent as the optimizer. The corresponding hyperparameters for training the standard site spatial position real-time classification model are: 100 epochs, mini-batch size 32, initial learning rate 0.005 decayed to one tenth every 30 epochs, a cross-entropy loss function, and stochastic gradient descent as the optimizer. Batch normalization and dropout are both used to minimize the risk of overfitting.
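A sketch of how these hyperparameters could be wired up in PyTorch follows; the momentum value and the placeholder model objects are assumptions, while the remaining settings follow the values stated above.

```python
import torch
import torch.nn as nn

def make_training_setup(model, lr):
    """Cross-entropy loss, SGD and tenfold learning-rate decay every 30 epochs,
    matching the hyperparameters stated above (momentum value is an assumption)."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
    return criterion, optimizer, scheduler

# Placeholders standing in for the two models described in the text.
screening_model = nn.Linear(256, 2)   # screening model: 200 epochs, mini-batch 16, lr 0.01
position_model = nn.Linear(256, 10)   # spatial position model: 100 epochs, mini-batch 32, lr 0.005

screening_setup = make_training_setup(screening_model, lr=0.01)
position_setup = make_training_setup(position_model, lr=0.005)
```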
Specifically, model verification comprises internal evaluation and optimization and external verification and optimization, to ensure the robustness of the model.
Internal evaluation and optimization: to evaluate the performance of the EUS standard site real-time classification model, classification accuracy, recall, F1 score, the area under the receiver operating characteristic curve and the confusion matrix are used to evaluate the model on the internal test set, and the training hyperparameters are adjusted to obtain the best-performing model.
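A sketch of this internal evaluation using scikit-learn is shown below; the metric set follows the text, while macro averaging and per-class probability scores for the 10-site classifier are assumptions, and the input arrays are placeholders.

```python
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

def evaluate_internal(y_true, y_pred, y_score):
    """Compute the evaluation metrics named in the text on the internal test set.
    y_score is assumed to hold per-class probabilities of shape (n_samples, n_classes)."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "f1": f1_score(y_true, y_pred, average="macro"),
        "auc": roc_auc_score(y_true, y_score, multi_class="ovr"),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
    }
```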
External verification and optimization: EUS examination video data from other hospitals are collected retrospectively as test samples; indicators such as the model's site classification accuracy, recall, precision and ROC curve are verified; the differences in classification performance on data from different centers are analyzed; and the training hyperparameters are adjusted to obtain the best-performing model.
Specifically, to avoid missed detection during EUS scanning, standard station scanning is the workflow commonly used in clinical practice. The specific information and scanning sequence of the 10 standard stations of an upper gastrointestinal EUS scan are shown in fig. 5, as follows: site 1, hepatic hilum and gastroesophageal junction → site 2, abdominal aorta station (with the abdominal aorta as guide) → site 3, pancreatic neck and pancreatic body station (with the splenic artery and splenic vein as guides) → site 4, pancreatic tail station (with the kidney and spleen as guides) → site 5, splenic hilum station (with the splenic vein and spleen as guides) → site 6, portal vein confluence triangle station (with the portal vein confluence triangle as guide) → site 7, pancreatic head station from the descending duodenum (with the superior mesenteric artery and superior mesenteric vein as guides) → site 8, portal vein station (with the superior mesenteric vein and splenic vein as guides) → site 9, pancreatic head station from the stomach (with the portal vein and pancreatic duct as guides) → site 10, duodenal bulb (with the common bile duct and pancreatic duct as guides), completing the scan of the entire upper digestive tract.
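For illustration, the scanning sequence above can be held in a simple lookup structure such as the one below; the English site names follow the text and the data structure itself is an implementation assumption.

```python
# Standard EUS scanning stations of the upper digestive tract:
# site number -> (station name, guide marks).
STANDARD_SITES = {
    1: ("hepatic hilum and gastroesophageal junction", None),
    2: ("abdominal aorta station", "abdominal aorta"),
    3: ("pancreatic neck and body station", "splenic artery and splenic vein"),
    4: ("pancreatic tail station", "kidney and spleen"),
    5: ("splenic hilum station", "splenic vein and spleen"),
    6: ("portal vein confluence triangle station", "portal vein confluence triangle"),
    7: ("pancreatic head station, descending duodenum", "superior mesenteric artery and vein"),
    8: ("portal vein station", "superior mesenteric vein and splenic vein"),
    9: ("pancreatic head station, transgastric", "portal vein and pancreatic duct"),
    10: ("duodenal bulb", "common bile duct and pancreatic duct"),
}
```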
Specifically, as shown in fig. 6, the operating flow of the standard site scanning visualization module is as follows: it is first judged whether the current frame is a standard site image; if so, the site number of the current frame is read and displayed, and if not, the next frame is judged. After the site number of the current frame has been read and displayed, it is judged whether a station has been jumped (i.e. whether a site has been missed); if so, the operator is prompted to return to the last scanned site and examine it again, and if not, operating notes for the current site are presented and the next site to be scanned is prompted.
The results of the standard site real-time classification model are analyzed statistically, and the sites already examined and the current site are counted and displayed in real time; the display effect is shown schematically in fig. 7. First, a virtual anatomical map of the upper digestive tract is generated and displayed, and the positions and numbers of the 10 standard sites are shown in a white font; then, according to the statistics and analysis of the standard site spatial position real-time classification model, the standard site numbers are displayed by category, specifically: scanned standard site numbers are displayed in green; the currently identified standard site number is displayed in yellow, together with the operating notes for the current site; missed standard site numbers are displayed in flashing red, reminding the operator that a station jump has occurred during the examination (i.e. a standard site has been missed) and that the operator must return to that site and examine it again; if no station jump occurs during scanning, the next standard site to be scanned is displayed in blue after the current site number is lit. Fig. 7 shows an example in which the current site number is 6, the next site number is 7, and the missed site number is 5.
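A minimal sketch of the station-jump bookkeeping behind figs. 6 and 7 is given below, assuming the sites are visited in the numerical order 1 to 10; the function structure and names are illustrative only.

```python
def update_scan_state(visited, current_site):
    """Track examined sites and detect station jumps in the 1..10 sequence.
    Returns the missed site numbers (flashed in red) and the next site to scan (shown in blue)."""
    expected = max(visited) + 1 if visited else 1
    missed = [s for s in range(expected, current_site) if s not in visited]
    visited.add(current_site)
    next_site = current_site + 1 if current_site < 10 else None
    return missed, next_site

# Example matching fig. 7: sites 1-4 examined, the model now reports site 6, so site 5 was jumped.
visited_sites = {1, 2, 3, 4}
missed, next_site = update_scan_state(visited_sites, current_site=6)
print(missed, next_site)   # [5] 7
```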
In summary, the invention judges in real time whether the current frame of the input EUS optical/ultrasound multi-modal video data is a standard site, and counts and displays in real time the sites already examined and the site currently being scanned. It presents operating notes for the current site, prompts the next site to be scanned, and, if the operator jumps over a station during the examination (i.e. misses a site), prompts the physician to return to the previous site and examine it again. This effectively helps the EUS operator ensure full coverage of the EUS scan, avoids missed sites and improves EUS scanning quality.
It will be appreciated by those skilled in the art that modifications and variations can be made in the above description without departing from the spirit of the invention, and such modifications and variations should be considered to be within the scope of the appended claims.

Claims (4)

1. An endoscope ultrasonic scanning control optimization system based on standard site intelligent recognition is characterized by comprising a data collection and preprocessing module, a standard site real-time classification model construction module, a model training and verification optimization module and a standard site scanning visualization module;
the data collection and preprocessing module is used for acquiring the real-time video data of the two modalities, optical and ultrasound, collected by the optical sensor and the ultrasound probe sensor during an endoscopic ultrasound examination, and for preprocessing the acquired data;
the standard site real-time classification model construction module is used for constructing a standard site image real-time screening model and a standard site spatial position real-time classification model according to the real-time classification requirements of the standard sites;
the standard site image real-time screening model comprises a feature extraction module, a feature weighted interaction module and a standard site real-time screening module; the feature extraction module comprises two lightweight feature extraction modules and two LSTM (long short-term memory) network modules; the feature weighted interaction module comprises a bimodal sequence feature attention mechanism module; the standard site real-time screening module comprises a fully connected neural network and a softmax function;
the standard site image real-time screening model extracts optical video frames and ultrasound video frames from the preprocessed optical/ultrasound bimodal real-time video and inputs them into the lightweight feature extraction modules to obtain the optical image features and ultrasound image features of each current frame; starting from the 10th frame, the optical image features and ultrasound image features of the current frame i and its 9 preceding frames are packed to obtain an optical image feature sub-sequence and an ultrasound image feature sub-sequence; the two feature sub-sequences are input into their respective LSTM network modules to extract the correlation between the feature vectors of the 10 adjacent optical and ultrasound frames in each sub-sequence and to obtain the overall optical video features and overall ultrasound video features of those 10 frames; the overall optical video features and overall ultrasound video features of the current frame and its 9 preceding frames are added channel by channel by the bimodal sequence feature attention mechanism module to realize weighted interactive feature fusion and generate an optical/ultrasound bimodal feature vector; the fully connected neural network in the standard site real-time screening module judges whether the last frame in the optical and ultrasound image feature sub-sequences is a standard site image, and if so, the optical image and ultrasound image of that frame are input into the standard site spatial position real-time classification model; frames i+1, i+2, ... are processed in turn until the last frame, with i ≥ 10;
the standard site space position real-time classification model comprises an encoder, a multi-mode image feature fusion module, a decoder and a classifier; the encoder comprises a color space conversion module and two dense convolution layers; the decoder includes a convolutional layer and a channel attention module, and the classifier includes a fully-connected neural network and a softmax function;
the standard site spatial position real-time classification model first uses the color space conversion module to convert the optical image of the standard site from the RGB color space to the YUV color space, and then uses a dense convolution layer to extract the Y-channel features, obtaining a feature vector; for the ultrasound image of the standard site, the other dense convolution layer is used to obtain its feature vector; the feature vector combining the optical and ultrasound features is obtained by the multi-modal image feature fusion module, and the fused features are further refined by the convolution layer and channel attention module in the decoder; finally, the spatial position of the standard site image frame is classified by the fully connected neural network in the classifier;
the model training and verifying optimization module is used for optimizing and verifying parameters of the standard site image real-time screening model and the standard site space position real-time classification model;
the standard site scanning visualization module is used for performing statistics and analysis on the output of the standard site spatial position real-time classification model, and for counting and displaying in real time the standard sites already examined, the current standard site and any missed standard sites, specifically: a virtual anatomical map of the upper digestive tract is generated and displayed, showing the positions and numbers of the 10 standard sites; according to the statistics and analysis of the standard site spatial position real-time classification model, the standard site numbers are displayed by category: examined standard site numbers are displayed in a first color, the current standard site number is displayed in a second color together with operating notes for the current site, and missed standard site numbers are displayed in a flashing third color to remind the operator that a station jump has occurred during the examination and that the operator must return to that standard site and scan it again; if no station jump occurs during scanning, the number of the next standard site to be scanned is displayed in a fourth color after the current standard site number is lit;
the first color, the second color, the third color, and the fourth color are four different colors;
the 10 standard sites are, in order: standard site 1, hepatic hilum and gastroesophageal junction → standard site 2, abdominal aorta station → standard site 3, pancreatic neck and pancreatic body station → standard site 4, pancreatic tail station → standard site 5, splenic hilum station → standard site 6, portal vein confluence triangle station → standard site 7, pancreatic head station from the descending duodenum → standard site 8, portal vein station → standard site 9, pancreatic head station from the stomach → standard site 10, duodenal bulb.
2. The endoscope ultrasonic scanning control optimization system based on standard site intelligent recognition according to claim 1, wherein the lightweight feature extraction module is used for extracting the image features of optical video frames and ultrasound video frames in real time, specifically: the residual between the current frame image and the previous frame image is computed; a residual gate (gate 1) is set according to this residual, and a current-frame importance gate (gate 2) is set according to the predicted classification of the previous frame: if the previous frame was predicted to be a standard site image, the current frame is judged important, otherwise it is judged unimportant; when the residual is below the set threshold, or is above the threshold but the current frame is unimportant, gate 1 directly reuses the features of the previous frame as the features of the current frame; when the residual is greater than or equal to the threshold and the current frame is important, gate 2 routes the residual frame through the dense convolution layer to extract residual-frame image features, which are superimposed on the features of the previous frame to form the features of the current frame.
3. The endoscope ultrasonic scanning control optimization system based on standard site intelligent recognition according to claim 1, wherein the multi-modal image feature fusion module performs the following operations: first, a base/detail feature decomposition module extracts the base-layer and detail-layer features of the Y-channel image and of the original ultrasound image; then a detail feature fusion module fuses the detail-layer features of the Y-channel image and the original ultrasound image, and a base feature fusion module fuses their base-layer features; the fused detail-layer and base-layer features are added to obtain the preliminary fused image features; finally, the preliminary fused image features are fused with the U-channel and V-channel image features to obtain the final fused image features.
4. An endoscope ultrasonic scanning control optimization method based on standard site intelligent recognition, characterized in that it adopts the system of claim 1 and specifically comprises the following steps:
s1, collecting optical and ultrasonic bimodal video data with the data collection and preprocessing module, and establishing an EUS video data set; performing preprocessing operations on the EUS video data set, including anonymization, data cleaning, data set labeling and data augmentation (see the preprocessing sketch after the claims);
s2, constructing the standard site image real-time screening model, and extracting the video sequence features of the optical modality and of the ultrasonic modality with the lightweight feature extraction module and the LSTM (long short-term memory) network module in the standard site image real-time screening model, specifically: the lightweight feature extraction module extracts, in temporal order, the image features corresponding to each video frame of the optical video and of the ultrasonic video; the LSTM module takes the feature sequences of 10 consecutive optical and ultrasonic frames as input and extracts the correlation features of the continuous image feature sequences to obtain the overall optical video features and the overall ultrasonic video features; the dual-modality sequence feature attention module in the standard site image real-time screening model adds the sequence features of the optical and ultrasonic modalities channel by channel to realize a weighted fusion of the two modalities' sequence features, after which standard site images are screened in real time through the fully connected neural network and softmax function in the standard site real-time screening module (see the screening-model sketch after the claims);
s3, constructing the standard site spatial position real-time classification model; for the optical-modality and ultrasonic-modality standard site images screened out by the standard site image real-time screening model, features of the two modalities' images are extracted by the encoder of the standard site spatial position real-time classification model, specifically: the Y-channel features of the optical modality are extracted by the color space conversion module and a dense convolution layer in the encoder, and the features of the ultrasonic-modality image are extracted by another dense convolution layer in the encoder; the multi-modal image feature fusion module of the standard site spatial position real-time classification model then performs a preliminary feature-level fusion of the Y-channel features and the ultrasonic image features, and further fuses the preliminary result with the U-channel and V-channel features of the optical modality to obtain the final fusion result; the fused features are then extracted from the final fusion result through the convolution layer and channel attention module of the decoder; finally, the positions of the standard site image frames are classified through the fully connected neural network of the classifier (see the decoder and classifier sketch after the claims);
s4, training the models: the standard site image real-time screening model and the standard site spatial position real-time classification model are trained by gradient descent (see the training-step sketch after the claims);
s5, performing internal evaluation and optimization of the standard site image real-time screening model and the standard site spatial position real-time classification model on an internal test set, using metrics including classification accuracy, precision, recall, F1 score, area under the receiver operating characteristic (ROC) curve and the confusion matrix (see the evaluation sketch after the claims);
s6, collecting EUS (endoscopic ultrasonography) examination video data from different hospitals as external test samples, validating the standard site spatial position real-time classification model to determine performance indicators including the accuracy, recall, precision and ROC curve of its site classification, analyzing the differences in classification performance across the different hospitals' data, and optimizing the model accordingly;
s7, using the standard site scanning visualization module to perform statistics and analysis on the output of the standard site spatial position real-time classification model and to display, in real time, the standard sites already checked, the standard site currently being scanned, and any standard sites that have been missed, specifically: generating and displaying a virtual anatomical map of the upper digestive tract on which the positions and numbers of the 10 standard sites are shown; according to the statistics and analysis of the standard site spatial position real-time classification results, the numbers of standard sites already checked are shown in a first color; the number of the current standard site is shown in a second color, and the operation notes for the current site are displayed at the same time; the numbers of any missed standard sites are shown in a flashing third color to remind the operator that a station jump has occurred during the examination and that the missed site must be revisited; and if no station jump has occurred during scanning, once the current standard site number is lit, the number of the next standard site to be scanned is shown in a fourth color; the first color, the second color, the third color, and the fourth color are four different colors;
the 10 standard sites are, in order: standard site 1, hepatic portal and gastroesophageal junction station → standard site 2, abdominal aorta station → standard site 3, pancreatic neck and pancreatic body station → standard site 4, pancreatic tail station → standard site 5, splenic hilum station → standard site 6, portal vein confluence triangle station → standard site 7, duodenal bulb pancreatic head station → standard site 8, portal vein station → standard site 9, gastric head station → standard site 10, duodenal bulb station.
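A minimal sketch of the preprocessing in step S1, assuming a header-strip crop for anonymisation and a standard torchvision augmentation pipeline; the actual crop region and augmentation choices are not specified by the claim.

```python
import torchvision.transforms as T
from PIL import Image

def anonymize(frame: Image.Image, header_height: int = 40) -> Image.Image:
    """Crop the header strip where scanners typically burn in patient information (assumed region)."""
    w, h = frame.size
    return frame.crop((0, header_height, w, h))

augment = T.Compose([
    T.Resize((256, 256)),
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.RandomRotation(degrees=10),
    T.ToTensor(),
])

frame = Image.new("RGB", (512, 512))     # stand-in for one extracted video frame
tensor = augment(anonymize(frame))       # -> torch.Tensor of shape (3, 256, 256)
```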
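A minimal sketch of the screening model in step S2, assuming 256-dimensional per-frame features, one LSTM per modality with the last hidden state used as the overall video feature, and learnable per-channel weights for the dual-modality attention fusion; none of these specifics appears in the claim.

```python
import torch
import torch.nn as nn

class StandardSiteScreening(nn.Module):
    def __init__(self, feat_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.optical_lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.ultrasound_lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.channel_weights = nn.Parameter(torch.ones(hidden))   # per-channel attention weights
        self.classifier = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, optical_seq, ultrasound_seq):
        # each input: (batch, 10, feat_dim) - features of 10 consecutive frames
        _, (opt_h, _) = self.optical_lstm(optical_seq)
        _, (us_h, _) = self.ultrasound_lstm(ultrasound_seq)
        opt_feat, us_feat = opt_h[-1], us_h[-1]                    # overall video features
        w = torch.sigmoid(self.channel_weights)
        fused = w * opt_feat + (1.0 - w) * us_feat                 # channel-by-channel weighted fusion
        return torch.softmax(self.classifier(fused), dim=-1)      # P(standard site) per 10-frame clip

model = StandardSiteScreening()
probs = model(torch.rand(2, 10, 256), torch.rand(2, 10, 256))      # -> shape (2, 2)
```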
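A minimal sketch of the decoder side of step S3, assuming a squeeze-and-excitation style block for the channel attention module and a single convolution before the fully connected classifier over the 10 standard sites; the claim names the components but not their internal design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # global average pool -> one weight per channel
        return x * w[:, :, None, None]

class SiteClassifier(nn.Module):
    def __init__(self, in_channels: int = 32, n_sites: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 64, 3, padding=1)
        self.attn = ChannelAttention(64)
        self.head = nn.Linear(64, n_sites)

    def forward(self, fused):
        x = self.attn(torch.relu(self.conv(fused)))
        return self.head(x.mean(dim=(2, 3)))       # logits over the 10 standard sites

logits = SiteClassifier()(torch.rand(2, 32, 64, 64))   # -> shape (2, 10)
```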
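A minimal sketch of one gradient-descent training step for step S4; the SGD settings, the stand-in classifier and the cross-entropy loss are assumptions, since the claim only requires training by gradient descent.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 64 * 64, 10))   # stand-in site classifier
optimiser = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

features = torch.rand(8, 32, 64, 64)       # fused features for one mini-batch
labels = torch.randint(0, 10, (8,))        # standard-site labels 0-9

optimiser.zero_grad()
loss = criterion(model(features), labels)  # compare predictions with labels
loss.backward()                            # compute gradients
optimiser.step()                           # one gradient-descent update
```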
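A minimal sketch of the internal evaluation in step S5 using scikit-learn implementations of the listed metrics; the arrays shown are toy stand-ins, not experimental results.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

y_true = np.array([0, 1, 2, 1, 0, 2])     # true standard-site labels (toy data)
y_pred = np.array([0, 1, 1, 1, 0, 2])     # model predictions (toy data)
y_prob = np.eye(3)[y_pred]                # stand-in per-class probabilities

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1       :", f1_score(y_true, y_pred, average="macro"))
print("AUC      :", roc_auc_score(y_true, y_prob, multi_class="ovr"))
print(confusion_matrix(y_true, y_pred))
```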
CN202310336070.3A 2023-03-31 2023-03-31 Endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition Active CN116563216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310336070.3A CN116563216B (en) 2023-03-31 2023-03-31 Endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition

Publications (2)

Publication Number Publication Date
CN116563216A true CN116563216A (en) 2023-08-08
CN116563216B CN116563216B (en) 2024-02-20

Family

ID=87485180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310336070.3A Active CN116563216B (en) 2023-03-31 2023-03-31 Endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition

Country Status (1)

Country Link
CN (1) CN116563216B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111415564A (en) * 2020-03-02 2020-07-14 武汉大学 Pancreatic ultrasonic endoscopy navigation method and system based on artificial intelligence
CN112086197A (en) * 2020-09-04 2020-12-15 厦门大学附属翔安医院 Mammary nodule detection method and system based on ultrasonic medicine
WO2023018343A1 (en) * 2021-08-09 2023-02-16 Digestaid - Artificial Intelligence Development, Lda. Automatic detection and differentiation of pancreatic cystic lesions in endoscopic ultrasonography
CN114372531A (en) * 2022-01-11 2022-04-19 北京航空航天大学 Pancreatic cancer pathological image classification method based on self-attention feature fusion
CN114913173A (en) * 2022-07-15 2022-08-16 天津御锦人工智能医疗科技有限公司 Endoscope auxiliary inspection system, method and device and storage medium
CN115299996A (en) * 2022-08-16 2022-11-08 苏州大学附属第二医院 Ultrasonic probe for endoscope, spine endoscope assembly and ultrasonic equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
QINGYUAN TAN ET AL.: "Ultrafast endoscopic ultrasonography with circular array", IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 39, no. 6, XP011790867, DOI: 10.1109/TMI.2019.2963290 *
SUN LIQI: "Analysis of the risk level of pancreatic cystic neoplasms assessed by cross-sectional imaging and endoscopic ultrasonography", China Doctoral Dissertations Electronic Journal Network *
YANG KUN ET AL.: "Polyp detection and classification method based on improved Faster R-CNN", Journal of Hebei University (Natural Science Edition), vol. 43, no. 1 *
LIANG MENGMENG; ZHOU TAO; XIA YONG; ZHANG FEIFEI; YANG JIAN: "Multimodal lung tumor image recognition based on randomized fusion and CNN", Journal of Nanjing University (Natural Science), no. 04 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218419A (en) * 2023-09-12 2023-12-12 河北大学 Evaluation system and evaluation method for pancreatic and biliary tumor parting and grading stage
CN117218419B (en) * 2023-09-12 2024-04-12 河北大学 Evaluation system and evaluation method for pancreatic and biliary tumor parting and grading stage

Also Published As

Publication number Publication date
CN116563216B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
CN109670510B (en) Deep learning-based gastroscope biopsy pathological data screening system
Igarashi et al. Anatomical classification of upper gastrointestinal organs under various image capture conditions using AlexNet
CN109544526B (en) Image recognition system, device and method for chronic atrophic gastritis
CN109523535B (en) Pretreatment method of lesion image
CN113015476A (en) System and method for generating and displaying studies of in vivo image flow
Everson et al. Virtual chromoendoscopy by using optical enhancement improves the detection of Barrett’s esophagus–associated neoplasia
Ghosh et al. Effective deep learning for semantic segmentation based bleeding zone detection in capsule endoscopy images
CN111227864A (en) Method and apparatus for lesion detection using ultrasound image using computer vision
CN109671053A (en) A kind of gastric cancer image identification system, device and its application
CN105979847A (en) Endoscopic image diagnosis support system
KR102531400B1 (en) Artificial intelligence-based colonoscopy diagnosis supporting system and method
CN116563216B (en) Endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition
CN111079901A (en) Acute stroke lesion segmentation method based on small sample learning
CN109670530A (en) A kind of construction method of atrophic gastritis image recognition model and its application
CN112334990A (en) Automatic cervical cancer diagnosis system
CN112801958A (en) Ultrasonic endoscope, artificial intelligence auxiliary identification method, system, terminal and medium
WO2024012080A1 (en) Endoscope auxiliary examination system, method, apparatus, and storage medium
CN110974179A (en) Auxiliary diagnosis system for stomach precancer under electronic staining endoscope based on deep learning
Xu et al. Upper gastrointestinal anatomy detection with multi‐task convolutional neural networks
CN116152185A (en) Gastric cancer pathological diagnosis system based on deep learning
KR102536369B1 (en) Artificial intelligence-based gastroscopy diagnosis supporting system and method
KR20230097646A (en) Artificial intelligence-based gastroscopy diagnosis supporting system and method to improve gastro polyp and cancer detection rate
Vania et al. Recent advances in applying machine learning and deep learning to detect upper gastrointestinal tract lesions
Ciaccio et al. Recommendations to quantify villous atrophy in video capsule endoscopy images of celiac disease patients
CN116993699A (en) Medical image segmentation method and system under eye movement auxiliary training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant