CN116999092A - Difficult airway assessment method and device based on ultrasonic image recognition technology - Google Patents


Info

Publication number: CN116999092A
Authority: CN (China)
Prior art keywords: image, airway, difficult airway, ultrasonic, difficult
Legal status: Pending (the status is an assumption by Google, not a legal conclusion)
Application number: CN202311045516.3A
Original language: Chinese (zh)
Inventors: 夏明�, 金晨昱, 姬宁宁, 裴蓓, 曹爽, 姜虹
Current and original assignee: Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application CN202311045516.3A filed 2023-08-18 by Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; published as CN116999092A on 2023-11-07; legal status: pending.

Classifications

    • A61B 8/12 — Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters
    • A61B 8/0833 — Detecting organic movements or changes involving detecting or locating foreign bodies or organic structures
    • A61B 8/085 — Locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A61B 8/5215 — Data or image processing for ultrasonic diagnosis, involving processing of medical diagnostic data
    • G06T 7/0012 — Biomedical image inspection
    • G06V 10/764 — Image or video recognition using pattern recognition or machine learning, using classification
    • G06V 10/766 — Image or video recognition using pattern recognition or machine learning, using regression
    • G06V 10/82 — Image or video recognition using pattern recognition or machine learning, using neural networks
    • G06T 2207/10132 — Image acquisition modality: ultrasound image
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30004 — Biomedical image processing
    • G06V 2201/03 — Recognition of patterns in medical or anatomical images


Abstract

The invention discloses a difficult airway assessment method based on ultrasound image recognition technology, which comprises: obtaining ultrasound images of the mandible, trachea and jugular notch; extracting features from the ultrasound images with a ResNet network model; and analyzing the features with a constructed difficult airway prediction model to obtain a difficult airway assessment result. During model training, with Cormack-Lehane grade labels as ground truth, the ultrasound features are used as input to a logistic regression model for classification, and a focal loss function is used to train and optimize the model, so that difficult airways in clinical anesthesia can be accurately flagged in advance and anesthesia risk is reduced.

Description

Difficult airway assessment method and device based on ultrasonic image recognition technology
Technical Field
The invention belongs to the technical field of clinical medicine, and particularly relates to a difficult airway assessment method and device based on an ultrasonic image recognition technology.
Background
In clinical anesthesia, maintaining airway patency is a precondition for ensuring systemic oxygen supply to perioperative patients and a solid guarantee of their prognosis. Because the causes of difficult airways are complex and diverse, no single evaluation method currently serves as a gold standard for difficult airway assessment, and a rapid, accurate intelligent early-warning system is urgently needed to optimize difficult airway assessment pathways and reduce clinical accidents.
Although high-resolution ultrasound can accurately delineate the anatomy of superficial tissue and is recommended by difficult airway guidelines for preoperative assessment of a patient's airway, the procedure is cumbersome because interpreting the ultrasound images requires considerable expertise. With the rapid development of artificial intelligence, methods that evaluate difficult airways with AI algorithms are attracting growing attention; an intelligent difficult airway evaluation model built from head and neck ultrasound images combined with artificial intelligence would greatly aid clinical preoperative airway evaluation.
Disclosure of Invention
The invention aims to provide a difficult airway assessment method and device based on ultrasound image recognition technology that can accurately provide early warning of difficult airways in clinical anesthesia, especially difficult laryngoscopic exposure.
In order to solve the problems, the technical scheme of the invention is as follows:
a difficult airway assessment method based on ultrasound image recognition technology, comprising:
acquiring an ultrasound image reflecting airway anatomy and function, inputting the ultrasound image into a trained difficult airway prediction model for analysis, and automatically outputting a difficult airway assessment result;
wherein the difficult airway prediction model comprises a feature extractor and a classifier; the feature extractor uses a deep learning neural network to extract features from the ultrasound image and outputs a 512×1 image feature vector; according to the characteristic indicators of the difficult airway, different weights are assigned to the extracted image features;
the classifier scores difficult airway severity on the image features through a logistic regression model based on Cormack-Lehane grade labels.
According to an embodiment of the present invention, the acquiring the ultrasound image reflecting the anatomy and function of the airway further comprises:
acquiring a plurality of ultrasound images, and using a multi-factor logistic regression algorithm to screen out the ultrasound images with P < 0.05 as the ultrasound images reflecting airway anatomy and function; the screened images comprise a mandibular ultrasound image, a tracheal ultrasound image and a jugular notch ultrasound image.
According to one embodiment of the invention, the feature extractor uses a ResNet-18 network to perform feature extraction on the ultrasound image.
According to one embodiment of the invention, the difficult airway prediction model is trained with a focal loss function, given by:
L = -α(1 - p)^γ log(p)
where α and γ are hyper-parameters and p is the predicted probability of a difficult airway.
According to an embodiment of the present invention, assigning different weights to the extracted image features according to the characteristic indicators of the difficult airway further comprises:
drawing a class activation heat map for the difficult airway, comparing the RGB value of each pixel in the class activation heat map of the extracted image features, and assigning different weights to image features of different colors.
A difficult airway assessment device based on ultrasound image recognition technology, comprising:
an image acquisition module for acquiring an ultrasound image reflecting airway anatomy and function;
a difficult airway prediction module for inputting the ultrasound image into a trained difficult airway prediction model for analysis and automatically outputting a difficult airway assessment result;
wherein the difficult airway prediction model comprises a feature extractor and a classifier; the feature extractor uses a deep learning neural network to extract features from the ultrasound image and outputs a 512×1 image feature vector; according to the characteristic indicators of the difficult airway, different weights are assigned to the extracted image features;
the classifier scores difficult airway severity on the image features through a logistic regression model based on Cormack-Lehane grade labels.
According to an embodiment of the invention, the image acquisition module comprises an image screening unit that uses a multi-factor logistic regression algorithm to screen out, from a plurality of ultrasound images, those with P < 0.05 as the ultrasound images reflecting airway anatomy and function; the screened images comprise a mandibular ultrasound image, a tracheal ultrasound image and a jugular notch ultrasound image.
According to an embodiment of the invention, the feature extractor is a module for extracting ultrasonic image features by using a ResNet-18 network model.
By adopting the technical scheme, the invention has the following advantages and positive effects compared with the prior art:
1) According to the difficult airway assessment method based on ultrasound image recognition technology disclosed by the embodiment of the invention, ultrasound images of the mandible, trachea and jugular notch are obtained, features are extracted from the ultrasound images with a ResNet network model, and the features are analyzed by the constructed difficult airway prediction model to obtain the difficult airway assessment result. During model training, with Cormack-Lehane grade labels as ground truth, the ultrasound features are used as input to a logistic regression model for classification, and a focal loss function is used to train and optimize the model, so that difficult airways in clinical anesthesia can be accurately flagged in advance and anesthesia risk is reduced.
2) According to the difficult airway assessment method based on ultrasound image recognition technology, the difficult airway is predicted from ultrasound images; compared with images of body surface structures, ultrasound images can depict subcutaneous tissue (such as subcutaneous fat, muscle, the intraoral space, and structures beneath the skin overlying the trachea), thereby reflecting the airway more directly and truly, which favors difficult airway prediction and improves its accuracy.
Drawings
FIG. 1 is a schematic view of an ultrasound image in accordance with an embodiment of the present invention;
FIG. 2 is a block diagram of a difficult airway prediction module according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a ResNet-18 deep learning neural network in an embodiment of the present invention;
FIG. 4 is a heat map of an ultrasound image in an embodiment of the invention;
FIG. 5 is a diagram illustrating a comparison of performance of a difficult airway prediction model according to an embodiment of the present invention.
Detailed Description
The invention provides a difficult airway assessment method and device based on an ultrasonic image recognition technology, which are further described in detail below with reference to the accompanying drawings and specific embodiments. Advantages and features of the invention will become more apparent from the following description and from the claims.
According to the 2022 American Society of Anesthesiologists practice guidelines for difficult airway management, a difficult airway is a clinical situation in which a physician trained in anesthesiology faces anticipated or unanticipated difficulty or failure in airway management, including one or more of: difficult mask ventilation, difficult airway exposure, difficult supraglottic airway ventilation, difficult or failed tracheal intubation, difficult or failed extubation, and difficult or failed invasive airway establishment. A relatively accurate preoperative assessment of difficult airways provides valuable information for airway management, helping the physician prepare an intubation protocol and appropriate intubation tools in advance.
This embodiment provides a difficult airway assessment method based on ultrasound image recognition technology, in which ultrasound images that reflect airway anatomy and function are the subject of the assessment. Because ultrasound images can depict subcutaneous tissue (such as subcutaneous fat, muscle, the intraoral space, and structures beneath the skin overlying the trachea), they reflect the airway more directly and truly than images of body surface structures and are more favorable for difficult airway prediction.
The ultrasound images initially acquired in this embodiment to reflect airway anatomy and function (see FIG. 1) include the mandibular sagittal plane A, cricoid plane B, tracheal sagittal plane C, thyroid isthmus plane D, jugular notch plane E, hyoid bone plane F, and epiglottis plane G. To predict difficult airways more effectively, this embodiment screens the ultrasound images with a multi-factor logistic regression algorithm to identify the key factors relevant to difficult airway prediction.
The screening method through the multi-factor logistic regression algorithm comprises the following steps:
converting the indicators in the ultrasound images into dummy variables and performing single-factor logistic regression analysis on each;
performing multi-factor logistic regression analysis, by backward stepwise regression, on the variables with P < 0.05 in the single-factor analysis to obtain the key factors relevant to difficult airway prediction.
The key factors obtained by this screening are the mandible, trachea and jugular notch. From these, the three ultrasound images meeting the screening criteria are determined: the mandibular sagittal plane A, the tracheal sagittal plane C and the jugular notch plane E.
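The screening described above can be sketched in code. The patent specifies single-factor logistic regression followed by backward stepwise multi-factor selection at P < 0.05; the sketch below shows only the univariate stage, using a likelihood-ratio test for the p-value (the patent does not state which test statistic is used, so this is an assumption), with hypothetical indicator names.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression

def univariate_pvalue(X, y, col):
    """P-value for one candidate indicator via a likelihood-ratio test:
    single-predictor logistic model versus the intercept-only null model."""
    x = X[:, [col]]
    model = LogisticRegression(C=1e9, max_iter=1000).fit(x, y)  # ~unpenalised
    p = np.clip(model.predict_proba(x)[:, 1], 1e-12, 1 - 1e-12)
    ll_full = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    p0 = np.clip(y.mean(), 1e-12, 1 - 1e-12)
    ll_null = np.sum(y * np.log(p0) + (1 - y) * np.log(1 - p0))
    return chi2.sf(2 * (ll_full - ll_null), df=1)  # chi-square, 1 dof

def screen_predictors(X, y, names, alpha=0.05):
    """Keep only the indicators whose univariate P-value is below alpha."""
    return [n for i, n in enumerate(names) if univariate_pvalue(X, y, i) < alpha]
```

A full reproduction would pass the surviving variables into a multi-factor model and eliminate them backward, one at a time, until all remaining p-values are below 0.05.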
In the embodiment, the three ultrasonic images are input into a trained difficult airway prediction model for analysis, so that an evaluation result of the difficult airway is automatically given.
Please refer to FIG. 2 for the framework of the difficult airway prediction model. The model comprises an input layer, a feature extraction layer, an attention weight layer, a fully connected layer and an output layer. An ultrasound image enters the feature extraction layer from the input layer, where a ResNet network model extracts features and produces a 512×1 image feature vector. Specifically, referring to FIG. 3, the ResNet model is a ResNet-18 deep learning neural network: the ultrasound image first passes through a 7×7 convolution and then a max-pooling layer, which keeps the maximum value of each convolved image region; it then passes through four multi-channel stages of 3×3 convolutions and an average-pooling layer, which computes the mean of each image region, and the 512×1 image feature vector is output.
The ResNet-18 deep learning neural network is trained and then serves as a feature extractor of a difficult airway prediction model.
After feature extraction, the image features are input into the attention weight layer, which assigns different weights to different image features. In this embodiment, a class activation heat map for the difficult airway is drawn, the RGB value of each pixel in the heat map of the extracted image features is compared, and image features of different colors are given different weights.
Referring to FIG. 4, the heat map is drawn using Grad-CAM++. A class activation heat map is a two-dimensional image created by computing the importance of each region. The red and yellow regions represent areas the model deems important for image classification; these regions are given larger weights and the other regions smaller weights, with weight values ranging from 0 to 1. The heat map can serve as a basis for difficult airway prediction.
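The color-to-weight step above can be sketched as follows. The patent only states that per-pixel RGB values are compared and that warm (red/yellow) regions receive larger weights in [0, 1]; the concrete "warmth" score below is an illustrative assumption, not the patent's formula.

```python
import numpy as np

def heatmap_to_weights(rgb):
    """Turn a jet-style class-activation heat map (H x W x 3, 0-255 RGB)
    into per-pixel weights in [0, 1]: warm colours (red/yellow) score high,
    cool colours (blue) score low.  The colour-to-weight mapping here is
    an illustrative assumption."""
    rgb = np.asarray(rgb, dtype=np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    warmth = r + 0.5 * g - b                  # red/yellow high, blue low
    span = np.ptp(warmth)                     # min-max normalise to [0, 1]
    return (warmth - warmth.min()) / (span + 1e-12)
```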
The fully connected layer serves as the classifier and outputs a difficult airway severity score. The classifier, built on a logistic regression model, is trained to score difficult airway severity using Cormack-Lehane grade labels (grades I-II: non-difficult airway; grades III-IV: difficult airway).
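The classifier stage can be sketched with scikit-learn: a logistic regression fitted on 512-dimensional feature vectors against binarised Cormack-Lehane labels, whose predicted probability serves as the severity score. The synthetic features and their hidden signal below are assumptions for the demo, not patent data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Stand-ins for 512-dim features from the extractor; labels follow the
# binarised Cormack-Lehane grades (0 = grade I-II, 1 = grade III-IV).
features = rng.normal(size=(200, 512))
labels = (features[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(features, labels)
severity = clf.predict_proba(features)[:, 1]   # per-image difficult-airway score
```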
The difficult airway prediction model must be trained before deployment. This embodiment uses a focal loss function to optimize the parameters of the neural network: the focal loss mitigates both the imbalance in data volume between classes and the imbalance between easy and hard samples. It is adopted here to address the difference in data volume between the non-difficult airway class and the difficult airway class. The difficult airway class, with less data, is the positive sample class; the non-difficult airway class, with more data, is the negative sample class; and the value of α controls the contribution of positive and negative samples to the total loss. The focal loss function is as follows:
L = -α(1 - p)^γ log(p)
where the coefficient α adjusts the weight of positive versus negative samples, γ adjusts the weight of hard-to-classify samples, and p is the predicted probability of a difficult airway.
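The loss above can be written out directly. The α/γ defaults below are common choices from the focal loss literature, not values stated in the patent, and the mirrored negative-sample term follows the standard class-balanced formulation.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Per-sample focal loss: L = -alpha * (1 - p)**gamma * log(p) for a
    positive (difficult-airway) sample, with the mirrored term for a
    negative sample.  alpha balances the rare positive class against the
    common negative class; gamma down-weights easy, well-classified
    samples.  The default alpha/gamma are common choices, not values
    stated in the patent."""
    p = np.clip(np.asarray(p, dtype=np.float64), 1e-12, 1 - 1e-12)
    pos = -alpha * (1.0 - p) ** gamma * np.log(p)
    neg = -(1.0 - alpha) * p ** gamma * np.log(1.0 - p)
    return np.where(np.asarray(y) == 1, pos, neg)
```

With α = 1 and γ = 0 this reduces to ordinary cross-entropy, which is a convenient sanity check.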
For model training, the data set is randomly divided into a training set, a validation set and a test set at a 6:2:2 ratio, and the batch size is 64. The learning rate follows a warmup cosine decay strategy: it is increased gradually during warm-up and then decays according to a cosine annealing schedule. The maximum number of training epochs is set to 50. The optimizer is AdamW, a variant of the Adam optimizer that adds weight decay regularization to help prevent overfitting. To ensure fair comparison, all models were trained and tested on the same data set, keeping the training and evaluation procedures consistent and allowing reliable comparison of model performance.
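The warmup cosine decay schedule above can be sketched as a plain function of the step count; the base learning rate and warm-up length below are illustrative assumptions, since the patent does not state them.

```python
import math

def warmup_cosine_lr(step, total_steps, base_lr=1e-3, warmup_steps=500):
    """Learning rate that rises linearly over the warm-up steps and then
    follows cosine annealing down to zero, matching the schedule described
    above.  base_lr and warmup_steps are illustrative assumptions."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps          # linear warm-up
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))  # cosine decay
```

In a PyTorch training loop this would typically be attached to a `torch.optim.AdamW` optimizer through `torch.optim.lr_scheduler.LambdaLR`.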
To demonstrate the performance of the difficult airway prediction model of this embodiment, several other difficult airway assessment methods are compared against it. Predictive performance is expressed as AUC (area under the curve); see FIG. 5. In the figure, Combine: the model evaluation method of this embodiment; TMD: thyromental distance; MMT: modified Mallampati test; ULBT: upper lip bite test; NC: neck circumference; IIG: inter-incisor gap; EGRI: El-Ganzouri risk index. The model in this embodiment achieves an AUC of 0.833, a sensitivity of 1.000, a specificity of 0.632 and an accuracy of 0.650, significantly outperforming the other methods.
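The reported metrics can be computed as follows. This is a generic sketch of the evaluation, not the patent's code; the 0.5 decision threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true, scores, threshold=0.5):
    """AUC plus sensitivity, specificity and accuracy at a fixed decision
    threshold -- the metrics used in the model comparison above."""
    y_pred = (np.asarray(scores) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "auc": roc_auc_score(y_true, scores),
        "sensitivity": tp / (tp + fn),    # true-positive rate
        "specificity": tn / (tn + fp),    # true-negative rate
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```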
In practical application, ultrasound images can be acquired with the patient placed in a fixed, quiet consulting room and lying on a flat trolley; each patient uses a rigid shoulder pillow to raise the shoulders into a neck-hyperextended supine position, and an ultrasound system with its probe (such as a SonoSite SII color ultrasound diagnostic system) is used for data acquisition. The probe is placed on the patient's head, neck and mandibular regions for scanning, and the ultrasound image under each designated section is stored. The resulting ultrasound image data is stored in a secure database.
The collected ultrasonic images are input into a trained difficult airway prediction model for analysis before operation, and the severity score of the difficult airway is obtained, so that a doctor can prepare an intubation scheme and a proper intubation tool in advance.
Based on the same conception, the present embodiment also provides a difficult airway assessment device based on the ultrasonic image recognition technology, comprising:
an image acquisition module for acquiring an ultrasound image reflecting airway anatomy and function;
a difficult airway prediction module for inputting the ultrasound image into a trained difficult airway prediction model for analysis and automatically outputting a difficult airway assessment result;
wherein the difficult airway prediction model comprises a feature extractor and a classifier; the feature extractor uses a deep learning neural network to extract features from the ultrasound image and outputs a 512×1 image feature vector; according to the characteristic indicators of the difficult airway, different weights are assigned to the extracted image features;
the classifier scores the severity of the difficult airways for the image features by logistic regression model based on the label of the Cormack-Lehane hierarchy.
The image acquisition module comprises an image screening unit, which uses a multi-factor logistic regression algorithm to screen out, from a plurality of ultrasound images, those with P < 0.05 as the ultrasound images reflecting airway anatomy and function; the screened images comprise a mandibular ultrasound image, a tracheal ultrasound image and a jugular notch ultrasound image.
The feature extractor is a module for extracting ultrasonic image features by adopting a ResNet-18 network model.
The functions and embodiments of the image acquisition module and the difficult airway prediction module have been described above in the difficult airway assessment method based on ultrasound image recognition technology and are not repeated here.
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited to the above embodiments; changes that fall within the scope of the appended claims and their equivalents remain within the scope of the invention.

Claims (8)

1. A difficult airway assessment method based on ultrasound image recognition technology, comprising:
acquiring an ultrasound image reflecting airway anatomy and function, inputting the ultrasound image into a trained difficult airway prediction model for analysis, and automatically outputting a difficult airway assessment result;
wherein the difficult airway prediction model comprises a feature extractor and a classifier; the feature extractor uses a deep learning neural network to extract features from the ultrasound image and outputs a 512×1 image feature vector; according to the characteristic indicators of the difficult airway, different weights are assigned to the extracted image features;
the classifier scores difficult airway severity on the image features through a logistic regression model based on Cormack-Lehane grade labels.
2. The method for difficult airway assessment based on ultrasound image recognition techniques of claim 1, wherein the acquiring an ultrasound image reflecting airway anatomy and function further comprises:
acquiring a plurality of ultrasound images, and using a multi-factor logistic regression algorithm to screen out the ultrasound images with P < 0.05 as the ultrasound images reflecting airway anatomy and function; the screened images comprise a mandibular ultrasound image, a tracheal ultrasound image and a jugular notch ultrasound image.
3. The difficult airway assessment method based on ultrasound image recognition technology of claim 1, wherein the feature extractor uses a ResNet-18 network for feature extraction of the ultrasound images.
4. The difficult airway assessment method based on ultrasound image recognition technology of claim 1, wherein the difficult airway prediction model is trained using a focal loss function having the formula:
L = -α(1 - p)^γ log(p)
where α and γ are hyper-parameters and p is the predicted probability of a difficult airway.
5. The difficult airway assessment method based on ultrasound image recognition technology of claim 1, wherein assigning different weights to the extracted image features according to the characteristic indicators of the difficult airway further comprises:
drawing a class activation heat map for the difficult airway, comparing the RGB value of each pixel in the class activation heat map of the extracted image features, and assigning different weights to image features of different colors.
6. A difficult airway assessment device based on ultrasound image recognition technology, comprising:
an image acquisition module for acquiring an ultrasound image reflecting airway anatomy and function;
a difficult airway prediction module for inputting the ultrasound image into a trained difficult airway prediction model for analysis and automatically outputting a difficult airway assessment result;
wherein the difficult airway prediction model comprises a feature extractor and a classifier; the feature extractor uses a deep learning neural network to extract features from the ultrasound image and outputs a 512×1 image feature vector; according to the characteristic indicators of the difficult airway, different weights are assigned to the extracted image features;
the classifier scores difficult airway severity on the image features through a logistic regression model based on Cormack-Lehane grade labels.
7. The difficult airway assessment device based on ultrasound image recognition technology of claim 6, wherein the image acquisition module comprises an image screening unit that uses a multi-factor logistic regression algorithm to screen out, from a plurality of ultrasound images, those with P < 0.05 as the ultrasound images reflecting airway anatomy and function; the screened images comprise a mandibular ultrasound image, a tracheal ultrasound image and a jugular notch ultrasound image.
8. The difficult airway assessment device based on ultrasound image recognition technology of claim 6, wherein said feature extractor is a module that extracts ultrasound image features using a ResNet-18 network model.
CN202311045516.3A 2023-08-18 2023-08-18 Difficult airway assessment method and device based on ultrasonic image recognition technology Pending CN116999092A (en)

Publications (1)

Publication Number Publication Date
CN116999092A 2023-11-07

Family

ID=88563511


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016122536A1 (en) * 2015-01-29 2016-08-04 University Of Maryland, Baltimore Ultrasound localization of obstruction for obstructive sleep apnea
US20170189634A1 (en) * 2014-05-15 2017-07-06 Barrett J. Larson System, Method, and Device For Airway Assessment and Endotracheal Intubation
CN109788933A (en) * 2016-09-19 2019-05-21 威斯康星校友研究基金会 Utilize the system and method for ultrasonic monitoring tracheae interior air-flow
CN111862085A (en) * 2020-08-03 2020-10-30 徐州市肿瘤医院 Method and system for predicting latent N2 lymph node metastasis of peripheral NSCLC
CN113069080A (en) * 2021-03-22 2021-07-06 上海交通大学医学院附属第九人民医院 Difficult airway assessment method and device based on artificial intelligence
CN113229853A (en) * 2021-06-25 2021-08-10 苏州市立医院 Method for monitoring aspiration of airway
CN113936663A (en) * 2021-12-03 2022-01-14 上海交通大学 Method for detecting difficult airway, electronic device and storage medium thereof
CN114938974A (en) * 2022-06-06 2022-08-26 复旦大学附属中山医院 Method and system for predicting pancreatic fistula risk based on ultrasonic elasticity technology
CN116108365A (en) * 2023-01-16 2023-05-12 南京航空航天大学 Deep learning-based real-time online early warning method for non-starting state of supersonic air inlet
CN116259410A (en) * 2023-02-13 2023-06-13 兰州大学第一医院 Liver cancer occurrence risk prediction model and construction method of network calculator thereof

Similar Documents

Publication Publication Date Title
Ran et al. Cataract detection and grading based on combination of deep convolutional neural network and random forests
CN106033540B Automatic analysis method and system for vaginal micro-ecology morphology
CN112365464B (en) GAN-based medical image lesion area weak supervision positioning method
JP7303531B2 (en) Method and apparatus for evaluating difficult airways based on artificial intelligence
CN112088394A (en) Computerized classification of biological tissue
TWI810498B (en) Liver Tumor Intelligent Analysis Device
Tobias et al. CNN-based deep learning model for chest X-ray health classification using tensorflow
CN111862075A (en) Lung image analysis system and method based on deep learning
CN111430025B (en) Disease diagnosis model training method based on medical image data augmentation
CN113706491A (en) Meniscus injury grading method based on mixed attention weak supervision transfer learning
Tang et al. CNN-based qualitative detection of bone mineral density via diagnostic CT slices for osteoporosis screening
CN111368708A (en) Burn and scald image rapid grading identification method and system based on artificial intelligence
CN112862749A (en) Automatic identification method for bone age image after digital processing
Hu et al. ACCV: automatic classification algorithm of cataract video based on deep learning
CN113397485A (en) Scoliosis screening method based on deep learning
CN115349828A (en) Neonate pain assessment system based on computer deep learning
Zhang et al. Detection and quantification of intracerebral and intraventricular hemorrhage from computed tomography images with adaptive thresholding and case-based reasoning
CN117322865B Temporomandibular joint disc displacement MRI examination and diagnosis system based on deep learning
Chiwariro et al. Comparative analysis of deep learning convolutional neural networks based on transfer learning for pneumonia detection
CN113469987A (en) Dental X-ray image lesion area positioning system based on deep learning
CN113052227A (en) Pulmonary tuberculosis identification method based on SE-ResNet
CN113011514A (en) Intracranial hemorrhage sub-type classification algorithm applied to CT image based on bilinear pooling
CN116999092A (en) Difficult airway assessment method and device based on ultrasonic image recognition technology
Li et al. Covid-19 detection in chest radiograph based on yolo v5
CN113936775A (en) Fetal heart ultrasonic standard tangent plane extraction method based on human-in-loop intelligent auxiliary navigation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination