CN114145844A - Laparoscopic surgery artificial intelligence cloud auxiliary system based on deep learning algorithm - Google Patents
- Publication number
- CN114145844A (application CN202210123800.7A)
- Authority
- CN
- China
- Prior art keywords
- data
- model
- deep learning
- learning algorithm
- artificial intelligence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- Theoretical Computer Science (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- Physics & Mathematics (AREA)
- Heart & Thoracic Surgery (AREA)
- Veterinary Medicine (AREA)
- Robotics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Biophysics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Instructional Devices (AREA)
Abstract
The invention relates to artificial intelligence technology and discloses a laparoscopic surgery artificial intelligence cloud auxiliary system based on a deep learning algorithm, which comprises the following steps: S1, data preparation: representative images are selected from a large number of collected laparoscopic surgery videos as training data, frames are intercepted, and labels are added to them; S2, the labeled data are trained to obtain a deep learning model that detects key surgical elements in real time; S3, the detection performance of the model is analyzed, labels are added again for missed and false detections, and training is repeatedly reinforced; S4, once the detection precision of the model reaches the desired range, surgery is performed under the guidance of the model, data are stored on a cloud server in real time, and the cloud data are analyzed to iteratively optimize the system. By carrying cloud computing technology, the system achieves continuous optimization and upgrading of its functions through real-time transmission and unified storage and analysis of data.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an artificial intelligence cloud auxiliary system for laparoscopic surgery based on a deep learning algorithm.
Background
Minimally invasive surgery has developed for 30 years and has reached a plateau at the technical level. Laparoscopic surgery, once an emerging technology, is gradually becoming the new "traditional surgery". Surgery is now taking on a new pattern of multidisciplinary, multi-technology combination; high-quality large-scale clinical research is continuously advancing, and new scientific and technological concepts are upgraded and iterated continuously. Against this background, the center of gravity of minimally invasive surgery is increasingly oriented toward practical problems, and its development is directed toward digital surgery, high-tech surgery, and related fields.
With the aid of an AI medical auxiliary system, the problem of unbalanced urban and rural medical resources can be alleviated to a certain extent.
Disclosure of Invention
The invention aims to upgrade traditional laparoscopic equipment into digital laparoscopic equipment with an artificial intelligence cloud auxiliary system. It provides a laparoscopic surgery artificial intelligence cloud auxiliary system based on a deep learning algorithm, addressing the drawbacks that existing laparoscopic surgery depends excessively on the personal judgment of the surgeon, that the equipment runs in isolation, and that advanced technologies such as cloud computing and the Internet of Things are not employed.
The aim of the invention is realized by the following technical scheme: an artificial intelligence cloud auxiliary system for laparoscopic surgery based on a deep learning algorithm comprises the following steps:
Step 1: data preparation. Representative images are selected from a large number of collected laparoscopic surgery videos as training data; frames are intercepted and labels are added to them.
Step 2: the labeled data are trained to obtain a deep learning model that detects key surgical elements in real time.
Step 3: the detection performance of the model is analyzed; labels are added again for missed and false detections, and training is repeatedly reinforced.
Step 4: once the detection precision of the model reaches the desired range, surgery is performed under the guidance of the model; data are stored on a cloud server in real time, and the cloud data are analyzed to iteratively optimize the system.
The representative images include:
videos that are not damaged, videos without abnormalities in the surgical procedure, and videos of patients without congenital organ anomalies.
The label includes:
organ area labels, recommended incision area labels, and danger warning area labels.
The step of adding the label comprises the following steps:
a1, the representative images are divided into three stages: before, during, and after lesion excision;
a2, 5,000 surgical videos are selected for each stage, and 10 frames are intercepted from each video;
a3, organ area labels, recommended incision area labels, and danger warning area labels are added to the 50,000 frames of each of the three stages.
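The three label types above can be represented, for illustration, as a simple per-frame annotation record. The field names and box format below are hypothetical, not specified by the patent:

```python
from dataclasses import dataclass, field

@dataclass
class FrameAnnotation:
    # Hypothetical record for one intercepted frame; field names are
    # illustrative, not taken from the patent.
    video_id: str
    stage: str  # "before", "during", or "after" lesion excision
    organ_boxes: list = field(default_factory=list)     # (organ, x, y, w, h)
    incision_boxes: list = field(default_factory=list)  # recommended incision areas
    danger_boxes: list = field(default_factory=list)    # danger warning areas

ann = FrameAnnotation("case_0001", "before")
ann.organ_boxes.append(("gallbladder", 0.41, 0.37, 0.20, 0.15))
```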
The training step comprises:
setting the file path; preprocessing the data; installing the YOLOR algorithm dependencies; preparing pre-trained weights for the YOLOR algorithm; training a deliberately overfitted model; observing the validation-set loss to find the optimal number of training epochs; and reshuffling the data and training the final model for that optimal number of epochs.
The surgical key elements include:
real-time positions of organs, recommended incision positions, and prediction and warning of dangerous situations.
The deep learning algorithm model comprises:
an image classification model, an image segmentation model, a target detection model, and a key point detection model.
The cloud data includes:
surgical video images, organ positions detected by the system in real time, recommended incision positions given by the system in real time, dangerous-situation predictions and warnings made by the system, and a Boolean pair recording whether a surgical accident occurred and whether the system had warned in advance.
The data preprocessing step comprises:
graying, geometric transformation, and image enhancement.
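As a minimal sketch of the "graying" step (a real pipeline would typically use OpenCV's `cv2.cvtColor`), one pixel's luminance can be computed with the common ITU-R BT.601 weights, which are an assumption here rather than something the patent specifies:

```python
def to_gray(rgb):
    # Weighted luminance; the 0.299/0.587/0.114 coefficients are the
    # standard BT.601 convention, assumed for illustration.
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b
```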
The Boolean value combination comprises:
b1, no accident occurred during the operation, and the system issued no danger warning;
b2, no accident occurred during the operation, and the system had issued a danger warning;
b3, an accident occurred during the operation, and the system issued no danger warning;
b4, an accident occurred during the operation, and the system had issued a danger warning.
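The four combinations above reduce to a lookup over the (accident, warned) Boolean pair; a minimal sketch, with illustrative names:

```python
def classify_outcome(accident: bool, warned: bool) -> str:
    # Map the Boolean pair to the four cases b1-b4 listed above.
    table = {
        (False, False): "b1",  # no accident, no warning
        (False, True):  "b2",  # no accident, warning issued
        (True,  False): "b3",  # accident, no warning (missed)
        (True,  True):  "b4",  # accident, warning issued
    }
    return table[(accident, warned)]
```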
The application has the following beneficial effects:
1. Through a large amount of clinical surgical data and the organ area labels, recommended incision area labels, and danger warning area labels completed under expert guidance, the invention trains a deep learning model that detects key surgical elements in real time, assisting doctors in making more accurate intraoperative judgments and issuing predictions and warnings before danger occurs, thereby reducing the occurrence of laparoscopic surgery accidents.
2. The invention uploads surgical data, the system's real-time detection data, real-time recommendation data, danger warning data, and whether the system warned before an accident to the cloud server, and continuously improves the auxiliary effect of the system by continuously analyzing missed and false detections.
3. When laparoscopic surgery is performed with the assistance of the invention, expert-level incision guidance and advance danger warnings can be obtained in real time.
4. The invention can be used for medical education, helping young surgeons obtain expert guidance early in their careers and develop good operating habits.
5. The invention can be embedded in innovative medical instruments and, by means of continuously accumulated cloud data, promote the development of novel digital surgical instruments.
Drawings
FIG. 1 is a schematic overall flow chart of the present invention.
Fig. 2 is a schematic diagram of a data annotation process.
FIG. 3 is a schematic diagram of a model training process.
FIG. 4 is a diagram of a model structure of the YOLOR algorithm.
FIG. 5 is a comparison of the YOLOR algorithm with other target detection algorithms.
Fig. 6 is a cloud function diagram of the system.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the detailed description of the embodiments of the present application provided below in connection with the appended drawings is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application. The invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, the invention relates to a laparoscopic surgery artificial intelligence cloud auxiliary system based on a deep learning algorithm, and a specific data preparation and data annotation manner thereof is as shown in fig. 2:
More than 5,000 laparoscopic surgery videos are collected; damaged videos, videos with procedural abnormalities caused by human factors, and videos of patients with congenital organ ectopia are removed. The remaining videos are then divided into three classes, early stage, middle stage, and later stage, according to the surgical stage. Subsequently, 5,000 samples are randomly selected from the early-stage videos, 5,000 from the middle-stage videos, and likewise 5,000 from the later-stage videos. Next, 10 frames are intercepted from each sample, yielding 50,000 frames per stage and 150,000 frames in total. Finally, under expert guidance, organ area labels, recommended incision area labels, and danger warning area labels are added to the frames.
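The sampling scheme above (5,000 videos per stage, 10 frames per video) can be sketched as follows. Decoding frames would require a video library such as OpenCV, which this sketch omits; even spacing of the intercepted frames is an assumption, since the patent does not state how frames are chosen within a video:

```python
import random

def sample_frame_indices(n_frames: int, k: int = 10):
    # Evenly spaced indices for intercepting k frames from a video
    # of n_frames frames (spacing strategy is assumed).
    step = n_frames / k
    return [int(i * step) for i in range(k)]

def sample_stage_videos(video_ids, n: int = 5000, seed: int = 0):
    # Randomly select n videos for one surgical stage.
    rng = random.Random(seed)
    ids = list(video_ids)
    return rng.sample(ids, min(n, len(ids)))
```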
As shown in fig. 3, the specific training mode is as follows:
setting the file path; sequentially applying graying, geometric transformation, and image enhancement to the data; installing the YOLOR algorithm dependencies; preparing pre-trained weights for the YOLOR algorithm; training a deliberately overfitted model; observing the validation-set loss to find the optimal number of training epochs; and reshuffling the data and training the final model for that optimal number of epochs.
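The "overfit first, then read off the best epoch from the validation loss" procedure reduces to selecting the epoch with minimum validation loss; a minimal sketch:

```python
def best_epoch(val_losses):
    # 1-based index of the epoch with the lowest validation loss,
    # i.e. the "optimal training round number" read off the
    # deliberately overfitted run.
    return min(range(len(val_losses)), key=val_losses.__getitem__) + 1
```

The final model is then retrained from scratch on the reshuffled data for exactly that many epochs.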
As shown in fig. 4, the YOLOR algorithm introduces the concepts of explicit knowledge and implicit knowledge: by expanding the error term of the objective function into different configurations, the model can learn deeper details of the data and thereby obtain better prediction performance.
The objective function of conventional network training is

    y = f_θ(x) + ε,    minimize ε

where x is the observation, θ is the set of parameters of the neural network, f_θ denotes the operation of the network, ε is the error term, and y is the target of the given task. Training minimizes ε so that f_θ(x) approaches y as closely as possible. However, because the error term ε of the conventional objective function is not subdivided in any way, the model is limited to one type of target detection at a time. For example, in an autonomous-driving application the algorithm can detect people and vehicles in real time, but it cannot simultaneously distinguish men from women, or domestically produced vehicles from imported ones. Such information often exceeds the scope of the explicit representation, yet humans recognize it easily by virtue of the subconscious; this is what is called implicit knowledge.
Experiences learned subconsciously are encoded and stored in the human brain. Using these rich experiences as a large database, humans can efficiently process data, even data never seen before or differing only very slightly from what has been seen.
YOLOR's innovative objective function is

    y = f_θ(x) + ε_ex(x) + ε_im(z),    minimize ε_ex(x) + ε_im(z)

By modeling the error with explicit knowledge and implicit knowledge separately, richer combined error terms are generated, and minimizing them guides the training of a multi-purpose network. Here ε_ex and ε_im model the errors associated with the observation x and with the subconscious code z respectively, and g_Φ is a task-specific operation that combines or selects information from explicit knowledge and implicit knowledge.
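The combining role of g_Φ can be illustrated with a toy elementwise fusion of an explicit feature vector with an implicit representation. In YOLOR itself both g_Φ and the implicit representation z are learned, and addition and multiplication are only two of the combination variants, so this is purely schematic:

```python
def combine(explicit, implicit, op: str = "add"):
    # Toy stand-in for the task-specific operation g_Phi fusing
    # f_theta(x) with an implicit representation z, elementwise.
    if op == "add":
        return [e + i for e, i in zip(explicit, implicit)]
    if op == "mul":
        return [e * i for e, i in zip(explicit, implicit)]
    raise ValueError(f"unknown op: {op}")
```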
As shown in fig. 5, at the same target detection accuracy, YOLOR achieves an inference speed up to 88% higher than that of YOLOv4.
After the detection precision of the model has been raised to the desired range by means of the training data and the YOLOR algorithm, the model is deployed and surgery is performed under its guidance.
As shown in fig. 6, all data from the surgical process are uploaded to the cloud, including the surgical video images, the organ positions detected by the system in real time, the recommended incision positions given by the system in real time, the dangerous-situation predictions and warnings made by the system, and the Boolean pair recording whether a surgical accident occurred and whether the system had warned in advance.
Next, the system automatically classifies the cloud data into the following four cases: no accident occurred and no danger warning was issued; no accident occurred and a danger warning had been issued; an accident occurred and no danger warning was issued; an accident occurred and a danger warning had been issued.
The system is then optimized and upgraded in a targeted way according to the different Boolean combinations:
Data for which no accident occurred and no danger warning was issued are converted directly into data assets and stored for later scientific research and teaching.
For data where no accident occurred but a danger warning had been issued, the warning accuracy is analyzed and handled accordingly: if the warning was accurate, the data are converted directly into data assets and stored; if it was a false alarm, the cause is identified, the system is optimized and upgraded, and the false-alarm rate is continuously reduced through iteration.
For data where an accident occurred but the system issued no advance warning, the surgical process is reviewed, the cause of the missed warning is identified, the system is optimized and upgraded, and the miss rate is continuously reduced through iteration.
Data for which an accident occurred and a danger warning had been issued in advance are converted directly into data assets and stored for later scientific research and teaching.
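The four-way handling above can be sketched as a dispatch over the Boolean pair; the action names are illustrative, not from the patent:

```python
def route_cloud_record(accident: bool, warned: bool) -> str:
    # Dispatch a cloud record to the follow-up action described above.
    if accident and not warned:
        return "analyze_missed_warning"   # find cause, reduce miss rate
    if not accident and warned:
        return "audit_warning_accuracy"   # true alarm: archive; false alarm: fix
    return "archive_as_data_asset"        # store for research and teaching
```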
Each module in the system of the invention implements the corresponding step of the method of the invention, including the corresponding operations of that step.
The foregoing is illustrative of the preferred embodiments of this invention, and it is to be understood that the invention is not limited to the precise form disclosed herein and that various other combinations, modifications, and environments may be resorted to, falling within the scope of the concept as disclosed herein, either as described above or as apparent to those skilled in the relevant art. And that modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A laparoscopic surgery artificial intelligence cloud auxiliary system based on a deep learning algorithm, characterized in that the system comprises the following auxiliary method:
Step 1: data preparation. Representative images are selected from a large number of collected laparoscopic surgery videos as training data; frames are intercepted and labels are added to them.
Step 2: the labeled data are trained to obtain a deep learning model that detects key surgical elements in real time.
Step 3: the detection performance of the model is analyzed; labels are added again for missed and false detections, and training is repeatedly reinforced.
Step 4: once the detection precision of the model reaches the desired range, surgery is performed under the guidance of the model; data are stored on a cloud server in real time, and the cloud data are analyzed to iteratively optimize the system.
2. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm is characterized in that: the representative images include:
videos that are not damaged, videos without abnormalities in the surgical procedure, and videos of patients without congenital organ anomalies.
3. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm is characterized in that: the label includes:
organ area labels, recommended incision area labels, and danger warning area labels.
4. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm is characterized in that: the step of adding the label comprises the following steps:
a1, the representative images are divided into three stages: before, during, and after lesion excision;
a2, 5,000 surgical videos are selected for each stage, and 10 frames are intercepted from each video;
a3, organ area labels, recommended incision area labels, and danger warning area labels are added to the 50,000 frames of each of the three stages.
5. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm is characterized in that: the training step comprises:
setting the file path; preprocessing the data; installing the YOLOR algorithm dependencies; preparing pre-trained weights for the YOLOR algorithm; training a deliberately overfitted model; observing the validation-set loss to find the optimal number of training epochs; and reshuffling the data and training the final model for that optimal number of epochs.
6. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm is characterized in that: the surgical key elements include:
real-time positions of organs, recommended incision positions, and prediction and warning of dangerous situations.
7. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm is characterized in that: the deep learning algorithm model comprises:
an image classification model, an image segmentation model, a target detection model, and a key point detection model.
8. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm is characterized in that: the cloud data includes:
surgical video images, organ positions detected by the system in real time, recommended incision positions given by the system in real time, dangerous-situation predictions and warnings made by the system, and a Boolean pair recording whether a surgical accident occurred and whether the system had warned in advance.
9. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm is characterized in that: the data preprocessing step comprises:
graying, geometric transformation, and image enhancement.
10. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm is characterized in that: the Boolean value combination comprises:
b1, no accident occurred during the operation, and the system issued no danger warning;
b2, no accident occurred during the operation, and the system had issued a danger warning;
b3, an accident occurred during the operation, and the system issued no danger warning;
b4, an accident occurred during the operation, and the system had issued a danger warning.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210123800.7A CN114145844B (en) | 2022-02-10 | 2022-02-10 | Laparoscopic surgery artificial intelligence cloud auxiliary system based on deep learning algorithm |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210123800.7A CN114145844B (en) | 2022-02-10 | 2022-02-10 | Laparoscopic surgery artificial intelligence cloud auxiliary system based on deep learning algorithm |
Publications (2)
Publication Number | Publication Date
---|---
CN114145844A | 2022-03-08
CN114145844B | 2022-06-10
Family
ID=80450317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210123800.7A Active CN114145844B (en) | 2022-02-10 | 2022-02-10 | Laparoscopic surgery artificial intelligence cloud auxiliary system based on deep learning algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114145844B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114601560A (en) * | 2022-05-11 | 2022-06-10 | 中国科学院深圳先进技术研究院 | Minimally invasive surgery assisting method, device, equipment and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109754007A (en) * | 2018-12-27 | 2019-05-14 | 武汉唐济科技有限公司 | Peplos intelligent measurement and method for early warning and system in operation on prostate |
WO2020096889A1 (en) * | 2018-11-05 | 2020-05-14 | Medivators Inc. | Assessing endoscope channel damage using artificial intelligence video analysis |
CN111709941A (en) * | 2020-06-24 | 2020-09-25 | 上海迪影科技有限公司 | Lightweight automatic deep learning system and method for pathological image |
CN111798439A (en) * | 2020-07-11 | 2020-10-20 | 大连东软教育科技集团有限公司 | Medical image quality interpretation method and system for online and offline fusion and storage medium |
CN112614573A (en) * | 2021-01-27 | 2021-04-06 | 北京小白世纪网络科技有限公司 | Deep learning model training method and device based on pathological image labeling tool |
CN112932663A (en) * | 2021-03-02 | 2021-06-11 | 成都与睿创新科技有限公司 | Intelligent auxiliary method and system for improving safety of laparoscopic cholecystectomy |
CN112966772A (en) * | 2021-03-23 | 2021-06-15 | 之江实验室 | Multi-person online image semi-automatic labeling method and system |
WO2021194872A1 (en) * | 2020-03-21 | 2021-09-30 | Smart Medical Systems Ltd. | Artificial intelligence detection system for mechanically-enhanced topography |
CN113813053A (en) * | 2021-09-18 | 2021-12-21 | 长春理工大学 | Operation process analysis method based on laparoscope endoscopic image |
- 2022-02-10: application CN202210123800.7A filed; granted as CN114145844B (status: active)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114601560A (en) * | 2022-05-11 | 2022-06-10 | 中国科学院深圳先进技术研究院 | Minimally invasive surgery assisting method, device, equipment and storage medium |
CN114601560B (en) * | 2022-05-11 | 2022-08-19 | 中国科学院深圳先进技术研究院 | Minimally invasive surgery assisting method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114145844B (en) | 2022-06-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||