CN111080700A - Medical instrument image detection method and device - Google Patents

Medical instrument image detection method and device

Info

Publication number
CN111080700A
CN111080700A CN201911288468.4A
Authority
CN
China
Prior art keywords
image
target
medical image
network
medical
Prior art date
Legal status
Pending
Application number
CN201911288468.4A
Other languages
Chinese (zh)
Inventor
刘市祺
侯增广
谢晓亮
边桂彬
周小虎
周彦捷
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201911288468.4A
Publication of CN111080700A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to a medical instrument image detection method and device. In order to solve the problem in the prior art that it is difficult to accurately identify a medical instrument in a medical image, the invention provides a medical instrument image detection method, which comprises: acquiring, based on a pre-acquired original medical image, an enhanced medical image corresponding to the original medical image through a preset image enhancement model; extracting image features of the enhanced medical image through a feature extraction network in a preset target detection model; acquiring a plurality of target mark frames through a mark positioning network in the target detection model based on the image features of the enhanced medical image; and acquiring the position and the category of the medical instrument in the enhanced medical image through a target detection network of the target detection model based on the plurality of target mark frames. The method of the invention can accurately identify the medical instrument in the medical image.

Description

Medical instrument image detection method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a medical instrument image detection method and device.
Background
The minimally invasive interventional surgery navigation system is an emerging system for treating cardiovascular disease, integrating technologies from many fields such as computer science, artificial intelligence, automatic control, image processing, multi-modal fusion, target segmentation, three-dimensional imaging, virtual-reality training and clinical treatment. The system uses medical images of various modalities to assist doctors in puncturing the radial or femoral artery and delivering interventional surgical instruments to the site of the vascular stenosis for treatment; this approach can improve surgical quality, reduce surgical trauma and reduce the pain of patients.
Although this system has made good clinical progress, minimally invasive interventional treatment of coronary occlusion still faces various difficulties in practical applications. For example, there is currently no complete surgical planning method for certain interventional treatments, and it is difficult for the doctor to accurately deliver the interventional instrument to the lesion according to a preset surgical plan and carry out the operation. Instrument detection in vascular interventional surgery is an important part of realizing a vascular navigation system, but it faces the following difficulties: (1) the signal-to-noise ratio of the X-ray contrast image is low, making the interventional instrument difficult to identify; (2) some structures in X-ray images (such as organs and bones) resemble catheters and guide wires, which increases the detection difficulty; (3) the heartbeat during the operation causes large changes in the position and shape of the interventional instrument at any moment, so that the instrument motion exhibits nonlinear characteristics.
Therefore, how to propose a solution to the problems of the prior art is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
In order to solve the above problems in the prior art, that is, to solve the problem in the prior art that it is difficult to accurately identify a medical instrument in a medical image, a first aspect of the present invention provides a medical instrument image detection method, including:
acquiring an enhanced medical image corresponding to an original medical image through a preset image enhancement model based on the pre-acquired original medical image; the image enhancement model is constructed on the basis of a neural network, trained through a first preset training set and used for enhancing image features;
extracting image features of the enhanced medical image through a feature extraction network in a preset target detection model based on the enhanced medical image;
acquiring a plurality of target mark frames through a mark positioning network in the target detection model based on the image characteristics of the enhanced medical image; wherein the target marking frame is a marking frame corresponding to a medical instrument in the enhanced medical image;
acquiring the position and the category of a medical instrument in the enhanced medical image through a target detection network of the target detection model based on a plurality of target mark frames;
the target detection model is constructed based on a neural network, trained through a second preset training set and used for determining the position and the type of the medical instrument in the image.
In a possible implementation manner, after the step of "extracting image features of the enhanced medical image through a feature extraction network in a preset target detection model", and before the step of "acquiring a plurality of target marker frames through a marker positioning network in the target detection model", the method further includes:
respectively marking a first marking frame, a second marking frame and a third marking frame of the medical instrument in the training medical image through a marking positioning network to be trained in the target detection model based on a preset acquired training medical image; wherein the first marker frame corresponds to an end position of the medical instrument, the second marker frame corresponds to a torso position of the medical instrument, and the third marker frame corresponds to an overall position of the medical instrument;
and according to the first mark frame, the second mark frame and the third mark frame, clustering the sizes and the scales of the first mark frame, the second mark frame and the third mark frame through a clustering algorithm so as to train the mark positioning network.
In one possible implementation, the original medical image includes three image channels, and the method of acquiring an enhanced medical image corresponding to the original medical image based on a pre-acquired original medical image through a preset image enhancement model includes:
based on a pre-acquired original medical image, carrying out histogram equalization on any image channel in the original medical image through the image enhancement model;
and randomly selecting an image channel filter from the image enhancement model, and carrying out image filtering on the residual image channels in the original medical image through the selected image channel filter so as to obtain an enhanced medical image corresponding to the original medical image.
In one possible implementation, the method of obtaining the position and the category of the medical instrument in the enhanced medical image through the target detection network of the target detection model based on a plurality of the target mark frames includes:
based on the plurality of target mark frames, acquiring the probability that each target mark frame belongs to the region of the medical instrument through a classifier of the target detection network;
performing coordinate regression on the target marking frame with the probability of belonging to the region where the medical instrument is located being greater than a first preset threshold, and obtaining a score corresponding to the target marking frame after the coordinate regression through a non-maximum suppression algorithm;
and acquiring the position of the medical instrument in the enhanced medical image through a preset position prediction module in the target detection network based on the position corresponding to the target mark frame with the highest score, wherein the position prediction module is preset in the target detection network and is used for predicting the position of the target object.
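As a hedged illustration of this post-processing chain (probability thresholding, coordinate regression, non-maximum suppression, selection of the best box), the following Python sketch uses torchvision's NMS; the threshold values and the standard box-delta decoding shown here are assumptions rather than the patent's exact formulation.

```python
import torch
from torchvision.ops import nms

def select_instrument_box(boxes: torch.Tensor,
                          probs: torch.Tensor,
                          deltas: torch.Tensor,
                          prob_thresh: float = 0.5,
                          iou_thresh: float = 0.3) -> torch.Tensor:
    """Post-process candidate marker frames.

    boxes  : (N, 4) candidate boxes in (x1, y1, x2, y2) format
    probs  : (N,) classifier probability that the box covers the instrument
    deltas : (N, 4) regressed coordinate offsets (dx, dy, dw, dh)
    Returns the single highest-scoring box after regression and NMS.
    """
    keep = probs > prob_thresh                      # first preset threshold
    boxes, probs, deltas = boxes[keep], probs[keep], deltas[keep]
    if boxes.numel() == 0:
        return boxes                                # nothing passed the threshold

    # coordinate regression (standard box-delta decoding, an assumption here)
    w, h = boxes[:, 2] - boxes[:, 0], boxes[:, 3] - boxes[:, 1]
    cx, cy = boxes[:, 0] + 0.5 * w, boxes[:, 1] + 0.5 * h
    cx = cx + deltas[:, 0] * w
    cy = cy + deltas[:, 1] * h
    w = w * torch.exp(deltas[:, 2])
    h = h * torch.exp(deltas[:, 3])
    refined = torch.stack([cx - 0.5 * w, cy - 0.5 * h,
                           cx + 0.5 * w, cy + 0.5 * h], dim=1)

    kept = nms(refined, probs, iou_thresh)          # non-maximum suppression
    return refined[kept[0]]                         # highest-scoring box
```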
In one possible implementation, after the step of acquiring a plurality of target marker frames through the marker localization network in the target detection model, before the step of acquiring the position and the category of the medical instrument in the enhanced medical image through the target detection network of the target detection model, the method further comprises:
respectively acquiring the image characteristics of the marked image and the image characteristics of the background image through a characteristic extraction network of the target detection model based on the marked image and the background image which are acquired in advance;
acquiring a plurality of target losses corresponding to the target detection network based on the acquired image characteristics of the marked image and the background image and a preset image interesting region;
and training the weight parameters of the target detection network through a back propagation algorithm based on the target loss which is greater than a second preset threshold value in the obtained target losses.
Another aspect of the present invention also provides a medical instrument image detection apparatus, including:
the system comprises a first module, a second module and a third module, wherein the first module is used for acquiring an enhanced medical image corresponding to an original medical image through a preset image enhancement model based on the original medical image acquired in advance; the image enhancement model is constructed on the basis of a neural network, trained through a first preset training set and used for enhancing image features;
the second module is used for extracting the image characteristics of the enhanced medical image through a characteristic extraction network in a preset target detection model based on the enhanced medical image;
a third module, configured to obtain a plurality of target marker frames through a marker positioning network in the target detection model based on image features of the enhanced medical image, where the target marker frames are marker frames corresponding to medical instruments in the enhanced medical image;
a fourth module, configured to obtain, based on the plurality of target marker frames, a position and a category of a medical instrument in the enhanced medical image through a target detection network of the target detection model;
the target detection model is constructed based on a neural network, trained through a second preset training set and used for determining the position and the type of the medical instrument in the image.
In one possible implementation manner, the apparatus further includes a first training module, and the first training module is configured to:
respectively marking a first marking frame, a second marking frame and a third marking frame of the medical instrument in the training medical image through a marking positioning network to be trained in the target detection model based on a preset acquired training medical image; wherein the first marker frame corresponds to an end position of the medical instrument, the second marker frame corresponds to a torso position of the medical instrument, and the third marker frame corresponds to an overall position of the medical instrument;
and according to the first mark frame, the second mark frame and the third mark frame, clustering the sizes and the scales of the first mark frame, the second mark frame and the third mark frame through a clustering algorithm so as to train the mark positioning network.
In one possible implementation, the raw medical image includes three image channels, and the first module is further configured to:
based on a pre-acquired original medical image, carrying out histogram equalization on any image channel in the original medical image through the image enhancement model;
and randomly selecting an image channel filter from the image enhancement model, and carrying out image filtering on the residual image channels in the original medical image through the selected image channel filter so as to obtain an enhanced medical image corresponding to the original medical image.
In one possible implementation manner, the fourth module is further configured to:
based on the plurality of target mark frames, acquiring the probability that each target mark frame belongs to the region of the medical instrument through a classifier of the target detection network;
performing coordinate regression on the target marking frame with the probability of belonging to the region where the medical instrument is located being greater than a first preset threshold, and obtaining a score corresponding to the target marking frame after the coordinate regression through a non-maximum suppression algorithm;
and acquiring the position of the medical instrument in the enhanced medical image through a preset position prediction module in the target detection network based on the position corresponding to the target mark frame with the highest score, wherein the position prediction module is preset in the target detection network and is used for predicting the position of the target object.
In one possible implementation manner, the apparatus further includes a second training module, and the second training module is configured to:
respectively acquiring the image characteristics of the marked image and the image characteristics of the background image through a characteristic extraction network of the target detection model based on the marked image and the background image which are acquired in advance;
acquiring a plurality of target losses corresponding to the target detection network based on the acquired image characteristics of the marked image and the background image and a preset image interesting region;
and training the weight parameters of the target detection network through a back propagation algorithm based on the target loss which is greater than a second preset threshold value in the obtained target losses.
The medical instrument image detection method provided by the invention is based on a pre-acquired original medical image, and an enhanced medical image corresponding to the original medical image is acquired through a preset image enhancement model; extracting image features of the enhanced medical image through a feature extraction network in a preset target detection model based on the enhanced medical image; acquiring a plurality of target mark frames through a mark positioning network in a target detection model based on the image characteristics of the enhanced medical image; based on the plurality of target marker boxes, the location and the category of the medical instrument in the enhanced medical image are obtained through a target detection network of the target detection model.
According to the medical instrument image detection method, the original medical image is converted into an enhanced medical image, so that the resolution of the original medical image is improved while its information is retained, which improves the accuracy and robustness of the target detection network; on the basis of the image features of the enhanced medical image, a plurality of target mark frames are acquired through the mark positioning network, which not only captures the image information of the medical instrument accurately but also reasonably reduces background information, improving the identification accuracy while reducing the amount of computation; and the position and the category of the medical instrument in the enhanced medical image are acquired through the pre-trained target detection network, which can accurately locate the medical instrument, distinguish its type, and address the problem of separating the medical instrument from human tissue.
Drawings
FIG. 1 is a schematic flow chart of a medical device image detection method of the present invention;
FIG. 2 is a schematic structural diagram of the medical instrument image detection device of the present invention.
Detailed Description
In order to make the embodiments, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the embodiments are some, but not all embodiments of the present invention. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and are not intended to limit the scope of the present invention.
With the development of technology, deep learning has shown great advantages in image detection and recognition, in both detection accuracy and detection speed. Convolutional networks are particularly well suited to images, offering advantages in computation speed and learning ability.
In terms of catheter detection and tracking, Yatziv et al propose an improved fast marching method that incorporates clinical knowledge and limits the search space: the catheter tip is first detected by a cascaded detector and then tracked with an improved geodesic framework (using fast marching algorithms on weighted geodesic distances). The algorithm mainly addresses the interaction between other medical instruments and the catheter during the operation, but it requires manual initialization and is not suitable for cases where the catheter tip is occluded.
Ma et al propose a fast blob detection method, which mainly includes a fast blob detection algorithm, shape-based searching and model-based detection. A catheter model is extracted by the detection method and used as the input of the tracking method, so that multiple catheters can be detected simultaneously in real time. However, the catheter tip localization is left to the user, and the algorithm assumes that the catheter shape is fixed, without considering cases where the catheter deforms or the C-arm is at an extreme angle.
Wu et al propose a catheter segmentation algorithm, which extracts the initial position of the catheter by combining a fast blob detection algorithm with patch analysis, then detects and tracks catheter segments in a constrained search space based on a Speeded-Up Robust Features (SURF) detector and the Fast-FD algorithm, and finally integrates and smooths the catheter segments into a complete catheter using a Kalman-filter-based growing method and a hierarchical graph model. The method can detect and track automatically, but the algorithm is only suitable for catheters with small curvature.
Ambrosini et al propose a catheter detection algorithm based on Hidden Markov Models (HMM), which obtains a 3-D vessel tree from 3-D rotational angiography images and tracks the catheter tip in the 3-D vessel tree, using its localization in the 2-D image and the state transition probability distribution of the HMM to obtain its 3-D position. The algorithm obtains the 3-D position of the catheter primarily from the angiographic image, but some of its parameters still need to be determined.
Fazlali et al use a multi-scale top-hat transformation to enhance each frame and compute a vessel distribution map per frame; using a guided filter and treating the catheter structure as a valley-like structure, they detect the catheter in the first frame with a ridge detection algorithm and the Hough transform, and then fit the catheter in the remaining frames with a second-order polynomial. The algorithm is an automatic detection and tracking algorithm with high detection accuracy.
Hoffmann et al use dual viewing angles to achieve 3-D tracking of guide wires: a graph search algorithm detects the catheter in the images of each viewing angle, and the detection results at the two viewing angles are then used for 3-D reconstruction of the catheter. The method can recover the 3-D structure of the catheter, but it requires images from two viewing angles, applies only to EP catheters, needs one seed point to be marked manually in each frame, and requires manual correction of the detection result when two catheters overlap. For guide wire detection and tracking, Baert proposes an energy-minimization guide wire tracking algorithm based on a B-spline curve, but the algorithm must enforce curve smoothness and needs an additional penalty term to constrain changes in curve length.
Slabaugh et al propose a phase-consistency algorithm: the guide wire is fitted with a B-spline curve model, the motion of the control points is evolved according to their phase consistency to obtain new control point positions, and the guide wire is refitted with the updated control points to achieve tracking; however, the control points need to be reinitialized whenever guide wire tracking fails.
Pauly et al use machine learning to learn a motion distribution model of the guide wire from its motion and deformation. The method is robust to contrast changes in the image and to partial occlusion by complex backgrounds and interventional instruments, but the algorithm does not constrain the motion region of the guide wire between adjacent frames.
Heibel et al propose an algorithm based on Markov Random Fields (MRF), which uses discrete points as guide wire control points and transforms the guide wire tracking problem into a labeling problem, optimizing the curve formed by the discrete points with a maximum a posteriori method. However, due to the low signal-to-noise ratio of X-ray images, the method easily misses parts of the guide wire, and the complexity of recovering the guide wire from its starting point is high.
Chang et al fit the guide wire using a B-spline model and a region-based probabilistic algorithm; whereas traditional B-spline curve fitting relies primarily on control points, this method relies primarily on the knots of the curve.
Wang et al divide the guide wire into three sections, detect each section separately with a detector, and then combine the three sections into a complete guide wire using a Bayesian network. The method can handle arbitrary nonlinear motion of the guide wire, but it requires manually annotated training data for tracking and is not suitable for C-arms with different parameters.
Wang et al propose a method that extracts features with LBP and improves a cascade classifier to solve the guide wire detection problem, but both its speed and accuracy need improvement. Wang et al also use a deep learning approach that combines the Zeiler and Fergus model (ZF) with a region proposal network (RPN) to detect the location of the guide wire. The method mainly comprises three steps: (a) highlighting the region of interest using methods such as a B-spline model and Hessian filtering; (b) correctly labeling the data; and (c) detection with the combined network. The method still has several problems: the low quality of medical images leads to low detection accuracy, the simple network structure is prone to overfitting, and the entire guide wire exposed outside the catheter cannot be detected in real time.
Vandini et al propose what is currently the best guide wire segmentation method, which uses robust SEGlet features to improve curve fitting for guide wire segmentation, but it still suffers from problems such as tracking loss and relatively slow speed.
Referring to fig. 1, fig. 1 schematically illustrates a flow chart of the medical instrument image detection method of the present application.
The medical instrument image detection method provided by the invention comprises the following steps:
step S101: based on a pre-acquired original medical image, acquiring an enhanced medical image corresponding to the original medical image through a preset image enhancement model.
In one possible implementation, the image enhancement model is constructed based on a neural network, trained by a first preset training set, and used to enhance image features.
For convenience of description, the medical instrument is described by taking an interventional surgical instrument as an example, wherein the interventional surgical instrument includes a guide wire, and the type of the surgical instrument is not limited in the present application.
In practical application, a real-time video sequence in the whole operation process can be extracted from an X-ray video sequence in the minimally invasive interventional operation process, so that a video sequence with an interventional operation instrument is obtained for data calibration. Wherein each frame of image in the video sequence can be taken as an original medical image.
It should be understood that, unlike a natural image, a medical image is a grayscale image composed of three channels with identical values. To prevent the low-resolution medical image from causing the network to overfit and thus fail to reach its full performance, the original medical image can be converted into an enhanced medical image, thereby improving the detection accuracy of the medical instrument in the medical image.
In one possible implementation, the method of acquiring, based on a pre-acquired original medical image, an enhanced medical image corresponding to the original medical image through a preset image enhancement model may include:
based on a pre-acquired original medical image, carrying out histogram equalization on any image channel in the original medical image through the image enhancement model;
and randomly selecting an image channel filter from the image enhancement model, and carrying out image filtering on the residual image channels in the original medical image through the selected image channel filter so as to obtain an enhanced medical image corresponding to the original medical image.
Specifically, histogram equalization can be performed on one image channel of the original medical image, so that this channel largely retains the information of the original medical image while making the target information clearer; then, for each of the other two channels of the original medical image, one of a Gaussian filter, a mean filter, a median filter and a Laplacian filter is randomly selected, and the remaining two image channels are filtered with the selected filters, so that the distribution of the remaining two image channels becomes more complex.
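As an illustration only (the patent does not name any software library), the following Python sketch shows one way such channel-wise enhancement could be implemented with OpenCV and NumPy; the kernel sizes are placeholder assumptions.

```python
import random
import cv2
import numpy as np

def enhance_medical_image(gray: np.ndarray) -> np.ndarray:
    """Turn a single-channel (uint8) X-ray frame into a pseudo-color enhanced image.

    Channel 0: histogram equalization (retains the original information and
               makes the target clearer).
    Channels 1-2: a filter picked at random from Gaussian / mean / median /
                  Laplacian, so their distributions differ from channel 0.
    """
    eq = cv2.equalizeHist(gray)

    filters = [
        lambda img: cv2.GaussianBlur(img, (5, 5), 0),
        lambda img: cv2.blur(img, (5, 5)),                                   # mean filter
        lambda img: cv2.medianBlur(img, 5),
        lambda img: cv2.convertScaleAbs(cv2.Laplacian(img, cv2.CV_16S, ksize=3)),
    ]
    ch1 = random.choice(filters)(gray)
    ch2 = random.choice(filters)(gray)

    return cv2.merge([eq, ch1, ch2])   # 3-channel enhanced medical image
```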
Step S102: and extracting the image features of the enhanced medical image through a feature extraction network in a preset target detection model based on the enhanced medical image.
In one possible implementation, in order to extract rich image features from the enhanced medical image, the image features of the enhanced medical image may be extracted through a feature extraction network in a preset target detection model. Specifically, the feature extraction network is an improvement on the Faster R-CNN structure: the VGG16 network in the Faster R-CNN structure is replaced with a ResNet-101 network, whose "shortcut" connections alleviate the problems of vanishing gradients and overfitting during image feature extraction. As the network deepens, the accuracy of image classification improves greatly, and ResNet-101 as the feature extraction network can extract good deep features, thereby improving the accuracy of subsequent classification and regression.
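For illustration, a minimal PyTorch/torchvision sketch of a Faster R-CNN detector with a ResNet-101 backbone is given below; the framework choice, the anchor sizes, the class count and the input size are assumptions not specified in the patent.

```python
import torch
import torch.nn as nn
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.anchor_utils import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

# Feature extraction network: the convolutional part of ResNet-101, whose
# residual "shortcut" connections ease training of the deeper network.
resnet = torchvision.models.resnet101()
backbone = nn.Sequential(*list(resnet.children())[:-2])   # drop avgpool + fc
backbone.out_channels = 2048                               # required by FasterRCNN

# Tall, thin anchors roughly matching the h:w ratios 5:1, 3:1, 2:1 used in the
# patent's experiments (the pixel sizes here are placeholders).
anchor_generator = AnchorGenerator(sizes=((64, 128),),
                                   aspect_ratios=((5.0, 3.0, 2.0),))
roi_pooler = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)

# Classes: background, guide-wire tip, guide-wire body (an assumed labeling).
model = FasterRCNN(backbone, num_classes=3,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])[0]   # dict: boxes, labels, scores
```

In evaluation mode such a model returns, for each input image, a dictionary of boxes, labels and scores, corresponding to the positions and categories described in step S104.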
Step S103: and acquiring a plurality of target mark frames through a mark positioning network in the target detection model based on the image characteristics of the enhanced medical image.
In one possible implementation, the target marking frame is a marking frame corresponding to a medical instrument in the enhanced medical image.
It should be noted that the conventional way for a mark positioning network to acquire the target mark frame is to detect only the tip of the guide wire of the interventional surgical instrument, because when the entire guide wire exposed outside the catheter is marked, the mark frame may contain too much background information; blood vessel and bone contours themselves are somewhat similar to the guide wire, and inaccurate marking leads to reduced detection performance. During the operation, however, the part of the guide wire outside the catheter has a certain influence on the motion of the guide wire tip, so detecting this part of the guide wire is also necessary.
In order to solve the problem that the target cannot be accurately marked, in one possible implementation, after the step of "extracting the image features of the enhanced medical image through a feature extraction network in a preset target detection model", and before the step of "acquiring a plurality of target mark frames through a mark positioning network in the target detection model", the method further includes:
respectively marking a first marking frame, a second marking frame and a third marking frame of the medical instrument in the training medical image through a marking positioning network to be trained in the target detection model based on a preset acquired training medical image; wherein the first marker frame corresponds to an end position of the medical instrument, the second marker frame corresponds to a torso position of the medical instrument, and the third marker frame corresponds to an overall position of the medical instrument;
and according to the first mark frame, the second mark frame and the third mark frame, clustering the sizes and the scales of the first mark frame, the second mark frame and the third mark frame through a clustering algorithm so as to train the mark positioning network.
Specifically, a first mark frame, a second mark frame and a third mark frame of the medical instrument in a training medical image can be marked through the mark positioning network to be trained in the target detection model. The first mark frame corresponds to the end position of the medical instrument; because this frame is small, it contains essentially no background information. The second mark frame corresponds to the trunk position of the medical instrument; it mainly covers the trunk of the guide wire, contains a small amount of background information, and allows the mark positioning network to learn more guide wire features. The third mark frame corresponds to the overall position of the medical instrument, but this frame may contain a large amount of background information. By increasing the proportion of the first and second mark frames, the marking modes that introduce excessive background information are reduced, and the accuracy of the detection network is improved. In practical applications, in order to ensure that the mark positioning network learns enough image features while limiting the introduction of background information, the proportions of the first, second and third mark frames can be allocated reasonably. In one possible implementation, a ratio of 6:3:1 between the first, second and third mark frames gives the best training effect for the mark positioning network.
This approach has the advantage that the sizes of the mark frames can be clustered with a K-means method to select better experimental parameters for the network. In the experiments, the mark frames use 2 scales [2, 4] and 3 aspect ratios h:w of [5:1, 3:1, 2:1]. The method of this patent learns good features and has good robustness; when the number of proposal frames per image is reduced to 30, the accuracy of the detection network is hardly affected, so the number of proposal frames per image is set to 30, which increases the speed of the detection network.
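The clustering step can be illustrated with a small self-contained sketch; the patent only names K-means, so the plain Euclidean distance over (width, height) pairs and the synthetic box sizes below are assumptions.

```python
import numpy as np

def kmeans_boxes(whs: np.ndarray, k: int = 6, iters: int = 100, seed: int = 0) -> np.ndarray:
    """Plain k-means over (width, height) pairs of the labeled mark frames.

    whs: array of shape (N, 2) holding the boxes' widths and heights.
    Returns k cluster centers, i.e. representative anchor sizes/scales.
    """
    rng = np.random.default_rng(seed)
    centers = whs[rng.choice(len(whs), size=k, replace=False)]
    for _ in range(iters):
        # assign every box to its nearest center (Euclidean distance)
        d = np.linalg.norm(whs[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([
            whs[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers

# Example with synthetic tall, thin boxes typical of a guide wire.
boxes_wh = np.array([[12, 60], [10, 50], [30, 90], [8, 40], [25, 75], [14, 70]], float)
print(kmeans_boxes(boxes_wh, k=2))
```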
Step S104: and acquiring the position and the category of the medical instrument in the enhanced medical image through a target detection network of the target detection model based on the plurality of target mark frames.
The target detection model is constructed based on a neural network, trained through a second preset training set and used for determining the position and the type of the medical instrument in the image.
In one possible implementation, after the step of acquiring a plurality of target marker frames through the marker localization network in the target detection model, before the step of acquiring the position and the category of the medical instrument in the enhanced medical image through the target detection network of the target detection model, the method further comprises:
respectively acquiring the image characteristics of the marked image and the image characteristics of the background image through a characteristic extraction network of the target detection model based on the marked image and the background image which are acquired in advance;
acquiring a plurality of target losses corresponding to the target detection network based on the acquired image characteristics of the marked image and the background image and a preset image interesting region;
and training the weight parameters of the target detection network through a back propagation algorithm based on the target loss which is greater than a second preset threshold value in the obtained target losses.
It can be understood that, because of the severe sample imbalance between the labeled data and the background data in the training set, the number of background samples (negative samples) is much larger than the number of positive samples. This easily causes the target detection network to blindly classify samples as the dominant negative class, so that the loss function can still reach a good value while the detection result is poor. To address this problem, this patent uses OHEM (online hard example mining) to overcome the sample imbalance and improve the mAP of target detection. The main idea of OHEM is to select only the samples with larger losses for back propagation when training the network weights. Because most convolution operations are shared during the forward pass, computing the losses of all ROIs (regions of interest) adds little extra computation, and only a small number of ROIs are selected for updating the model during back propagation, so the time consumption changes little.
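A minimal sketch of this idea, keeping only the ROI losses above the preset threshold for back propagation, is shown below; the fall-back to all ROIs when none exceeds the threshold is an added assumption.

```python
import torch

def ohem_loss(per_roi_loss: torch.Tensor, loss_thresh: float) -> torch.Tensor:
    """Online Hard Example Mining over per-ROI losses.

    per_roi_loss: (N,) unreduced loss of every region of interest from the
                  shared forward pass.
    Only the hard examples (losses above the second preset threshold) contribute
    to the loss that is backpropagated; easy negatives are ignored, which
    counters the foreground/background imbalance.
    """
    hard = per_roi_loss[per_roi_loss > loss_thresh]
    if hard.numel() == 0:          # nothing above the threshold: fall back to all ROIs
        hard = per_roi_loss
    return hard.mean()

# usage sketch: per-ROI classification + regression losses, no reduction
losses = torch.rand(2000)          # stand-in for real per-ROI losses
loss = ohem_loss(losses, loss_thresh=0.8)
# loss.backward()                  # in training, only the hard examples drive the update
```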
Specifically, a plurality of target losses corresponding to the target detection network may be obtained based on the acquired image features of the annotated image and the background image and the preset image region of interest. The target loss of the target detection network comprises a classification part and a regression part. The classification loss function is defined as shown in the following formula (1):

Formula (1):

L_cls(p, u) = -log p_u

wherein L_cls(p, u) represents the classification loss, p represents the score, p_i represents the score of the i-th class output by the classification network, and u represents the class label.

The regression loss function is defined as shown in the following formula (2):

Formula (2):

L_reg(t, t*) = Σ_{i ∈ {x, y, w, h}} smooth_L1(t_i - t_i*)

smooth_L1(x) = 0.5 x², if |x| < 1; |x| - 0.5, otherwise

wherein L_reg(t, t*) represents the regression loss, t_i represents the position predicted by the regressor, t_i* represents the true position, {x, y, w, h} denote the abscissa, the ordinate, the width and the height respectively, and N represents the number of categories.

The joint loss is defined as:

Loss = L_cls(p, u) + α · (1 / N_reg) · Σ L_reg(t, t*)

wherein Loss denotes the joint loss, N_reg denotes the number of categories, and α denotes the balance weight.

α is used to balance the effect of the two loss functions; with the default setting of 10, the two terms in the equation are weighted approximately equally after normalization.
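For illustration, the joint loss can be written compactly with PyTorch built-ins as in the sketch below; treating N_reg as the number of regressed boxes and applying the regression term to all ROIs (rather than foreground ROIs only) are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def joint_loss(scores: torch.Tensor, labels: torch.Tensor,
               pred_boxes: torch.Tensor, true_boxes: torch.Tensor,
               alpha: float = 10.0) -> torch.Tensor:
    """Joint detection loss: classification (cross-entropy, i.e. -log p_u on
    softmax scores) plus alpha-weighted smooth-L1 regression over (x, y, w, h)."""
    l_cls = F.cross_entropy(scores, labels)
    n_reg = pred_boxes.shape[0]                       # assumption: number of regressed boxes
    l_reg = F.smooth_l1_loss(pred_boxes, true_boxes, reduction="sum") / n_reg
    return l_cls + alpha * l_reg

# toy usage
scores = torch.randn(8, 3)                            # 8 ROIs, 3 classes
labels = torch.randint(0, 3, (8,))
pred = torch.randn(8, 4)
gt = torch.randn(8, 4)
print(joint_loss(scores, labels, pred, gt))
```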
After the position and the category of the medical instrument in the enhanced medical image are acquired through the target detection network of the target detection model, in order to obtain a faster segmentation speed, a Gaussian filter can be used to smooth the image and filter out noise; then the gradient magnitude and direction of each pixel in the image are computed, and non-maximum suppression is applied to eliminate spurious responses from edge detection. A double-threshold detection method is applied to determine real and potential edges, isolated weak edges are suppressed, and the final edge detection is completed. The edge detection method may include Canny edge detection.
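A brief sketch of this segmentation step with OpenCV is given below; the Gaussian kernel and the two Canny thresholds are placeholder values, not the experimentally tuned parameters mentioned later in this patent.

```python
import cv2

def segment_instrument_edges(gray, low: int = 50, high: int = 150):
    """Post-detection segmentation step.

    Gaussian smoothing removes noise; cv2.Canny then performs gradient
    computation, non-maximum suppression and double-threshold hysteresis
    internally, returning a binary edge map of the instrument region.
    """
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)
    return cv2.Canny(blurred, low, high)
```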
The method of the invention has at least the following four technical effects:
(1) In terms of parameters, the number of proposal frames is reduced and a reasonable proposal frame size is selected to improve the operation speed and accuracy; meanwhile, structure merging and memory-saving methods are adopted in the computation, so that the operation speed remains sufficiently competitive. The method realizes guide wire positioning and tracking in the interventional surgical robot.
(2) In terms of annotation, the best-performing labeling scheme is obtained through the hybrid labeling method.
(3) In terms of detection accuracy, by synthesizing a color image, designing a deeper feature extraction backbone network, designing a convolutional prediction structure and adding an OHEM structure, the problems of low detection accuracy, serious overfitting and sample imbalance are solved. Meanwhile, the image enhancement method is found to greatly reduce the amount of training data required.
(4) In terms of operation speed, the speed is improved by combining structural optimization of the computation with a reduced number of proposal frames. In terms of segmentation, better segmentation parameters are obtained through experimental analysis. For guide wire tracking, the method of the present application achieves better results than current methods.
Compared with other segmentation methods, the present method achieves better results in segmentation accuracy, false tracking rate, loss rate, F1 score, segmentation speed and other aspects, whereas other methods suffer from loss of sequence tracking. As shown in Table 1, the method provided by this patent can accurately track the guide wire in all 22 groups of sequences without tracking loss, and therefore has better robustness. A Canny edge detection method whose parameters are selected through experiments is adopted, which improves the segmentation precision; at the same time, the Canny algorithm is simple to compute, so the segmentation speed is higher.
TABLE 1 (the comparison table is reproduced as an image in the original publication)
Another aspect of the present application also provides a medical instrument image detection apparatus, including:
the first module 1 is used for acquiring an enhanced medical image corresponding to an original medical image through a preset image enhancement model, based on the original medical image acquired in advance; the image enhancement model is constructed on the basis of a neural network, trained through a first preset training set and used for enhancing image features;
the second module 2 is used for extracting the image characteristics of the enhanced medical image through a characteristic extraction network in a preset target detection model based on the enhanced medical image;
a third module 3, configured to obtain a plurality of target marker frames through a marker positioning network in the target detection model based on image features of the enhanced medical image; wherein the target marking frame is a marking frame corresponding to a medical instrument in the enhanced medical image;
a fourth module 4, configured to obtain, based on the plurality of target marker frames, a position and a category of a medical instrument in the enhanced medical image through a target detection network of the target detection model;
the target detection model is constructed based on a neural network, trained through a second preset training set and used for determining the position and the type of the medical instrument in the image.
In one possible implementation manner, the apparatus further includes a first training module, and the first training module is configured to:
respectively marking a first marking frame, a second marking frame and a third marking frame of the medical instrument in the training medical image through a marking positioning network to be trained in the target detection model based on a preset acquired training medical image; wherein the first marker frame corresponds to an end position of the medical instrument, the second marker frame corresponds to a torso position of the medical instrument, and the third marker frame corresponds to an overall position of the medical instrument;
and according to the first mark frame, the second mark frame and the third mark frame, clustering the sizes and the scales of the first mark frame, the second mark frame and the third mark frame through a clustering algorithm so as to train the mark positioning network.
In one possible implementation, the original medical image includes three image channels, and the first module 1 is further configured to:
based on a pre-acquired original medical image, carrying out histogram equalization on any image channel in the original medical image through the image enhancement model;
and randomly selecting an image channel filter from the image enhancement model, and carrying out image filtering on the residual image channels in the original medical image through the selected image channel filter so as to obtain an enhanced medical image corresponding to the original medical image.
In a possible implementation manner, the fourth module 4 is further configured to:
based on the plurality of target mark frames, acquiring the probability that each target mark frame belongs to the region of the medical instrument through a classifier of the target detection network;
performing coordinate regression on the target marking frame with the probability of belonging to the region where the medical instrument is located being greater than a first preset threshold, and obtaining a score corresponding to the target marking frame after the coordinate regression through a non-maximum suppression algorithm;
and acquiring the position of the medical instrument in the enhanced medical image through a preset position prediction module in the target detection network based on the position corresponding to the target mark frame with the highest score, wherein the position prediction module is preset in the target detection network and is used for predicting the position of the target object.
In one possible implementation manner, the apparatus further includes a second training module, and the second training module is configured to:
respectively acquiring the image characteristics of the marked image and the image characteristics of the background image through a characteristic extraction network of the target detection model based on the marked image and the background image which are acquired in advance;
acquiring a plurality of target losses corresponding to the target detection network based on the acquired image characteristics of the marked image and the background image and a preset image interesting region;
and training the weight parameters of the target detection network through a back propagation algorithm based on the target loss which is greater than a second preset threshold value in the obtained target losses.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In summary, the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A medical instrument image detection method, characterized in that the method comprises:
acquiring an enhanced medical image corresponding to an original medical image through a preset image enhancement model based on the pre-acquired original medical image; the image enhancement model is constructed on the basis of a neural network, trained through a first preset training set and used for enhancing image features;
extracting image features of the enhanced medical image through a feature extraction network in a preset target detection model based on the enhanced medical image;
acquiring a plurality of target mark frames through a mark positioning network in the target detection model based on the image characteristics of the enhanced medical image; wherein the target marking frame is a marking frame corresponding to a medical instrument in the enhanced medical image;
acquiring the position and the category of a medical instrument in the enhanced medical image through a target detection network of the target detection model based on a plurality of target mark frames;
the target detection model is constructed based on a neural network, trained through a second preset training set and used for determining the position and the type of the medical instrument in the image.
2. The method according to claim 1, wherein after the step of extracting image features of the enhanced medical image through a feature extraction network in a preset object detection model, before the step of acquiring a plurality of object marker frames through a marker localization network in the object detection model, the method further comprises:
respectively marking a first marking frame, a second marking frame and a third marking frame of the medical instrument in the training medical image through a marking positioning network to be trained in the target detection model based on a preset acquired training medical image; wherein the first marker frame corresponds to an end position of the medical instrument, the second marker frame corresponds to a torso position of the medical instrument, and the third marker frame corresponds to an overall position of the medical instrument;
and according to the first mark frame, the second mark frame and the third mark frame, clustering the sizes and the scales of the first mark frame, the second mark frame and the third mark frame through a clustering algorithm so as to train the mark positioning network.
3. The method according to claim 1, wherein the original medical image comprises three image channels, and the method comprises the following steps of acquiring an enhanced medical image corresponding to the original medical image through a preset image enhancement model based on a pre-acquired original medical image:
based on a pre-acquired original medical image, carrying out histogram equalization on any image channel in the original medical image through the image enhancement model;
and randomly selecting an image channel filter from the image enhancement model, and carrying out image filtering on the residual image channels in the original medical image through the selected image channel filter so as to obtain an enhanced medical image corresponding to the original medical image.
4. The method according to claim 1, wherein the method of obtaining the position and the category of the medical instrument in the enhanced medical image through the object detection network of the object detection model based on a plurality of the object labeling boxes comprises:
based on the plurality of target mark frames, acquiring the probability that each target mark frame belongs to the region of the medical instrument through a classifier of the target detection network;
performing coordinate regression on the target marking frame with the probability of belonging to the region where the medical instrument is located being greater than a first preset threshold, and obtaining a score corresponding to the target marking frame after the coordinate regression through a non-maximum suppression algorithm;
and acquiring the position of the medical instrument in the enhanced medical image through a preset position prediction module in the target detection network based on the position corresponding to the target mark frame with the highest score, wherein the position prediction module is preset in the target detection network and is used for predicting the position of the target object.
5. The method according to claim 1, wherein after the step of acquiring a plurality of target marker frames through the marker localization network in the target detection model, and before the step of acquiring the position and the category of the medical instrument in the enhanced medical image through the target detection network of the target detection model, the method further comprises:
based on a pre-acquired marked image and a pre-acquired background image, respectively acquiring image features of the marked image and image features of the background image through the feature extraction network of the target detection model;
acquiring a plurality of target losses corresponding to the target detection network based on the acquired image features of the marked image and the background image and a preset region of interest of the image;
and training the weight parameters of the target detection network through a back-propagation algorithm based on those of the obtained target losses that are greater than a second preset threshold.
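A hedged sketch of the hard-example training step in claim 5 (mirrored in claim 10): per-region losses are computed and only those above a preset threshold are back-propagated. The `detector` interface, the use of cross-entropy as the loss and the threshold value are assumptions made for the example, not part of the claim.

```python
import torch
import torch.nn.functional as F

def hard_example_training_step(detector, optimizer, roi_features, labels,
                               loss_threshold=0.7):
    """One training step that back-propagates only losses above a threshold.

    detector: callable mapping per-ROI features to class logits (assumed interface).
    roi_features: (N, D) tensor of region-of-interest features.
    labels: (N,) tensor of class indices (instrument vs. background).
    """
    logits = detector(roi_features)                                   # (N, num_classes)
    per_roi_loss = F.cross_entropy(logits, labels, reduction="none")  # (N,)

    # Keep only the "hard" regions whose loss exceeds the preset threshold.
    hard = per_roi_loss > loss_threshold
    if hard.any():
        loss = per_roi_loss[hard].mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return per_roi_loss.detach()
```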
6. A medical instrument image detection apparatus, characterized in that the apparatus comprises:
a first module, configured to acquire, based on a pre-acquired original medical image, an enhanced medical image corresponding to the original medical image through a preset image enhancement model; wherein the image enhancement model is constructed based on a neural network, trained through a first preset training set and used for enhancing image features;
a second module, configured to extract, based on the enhanced medical image, image features of the enhanced medical image through a feature extraction network in a preset target detection model;
a third module, configured to obtain, based on the image features of the enhanced medical image, a plurality of target marker frames through a marker localization network in the target detection model; wherein each target marker frame is a marker frame corresponding to a medical instrument in the enhanced medical image;
a fourth module, configured to obtain, based on the plurality of target marker frames, a position and a category of a medical instrument in the enhanced medical image through a target detection network of the target detection model;
the target detection model is constructed based on a neural network, trained through a second preset training set and used for determining the position and the category of the medical instrument in the image.
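As a non-authoritative sketch of how the four claimed modules could be wired together, assuming generic callable components (enhancer, backbone, marker_net, detection_head) that are illustrative interfaces rather than part of the claim:

```python
class MedicalInstrumentDetector:
    """Illustrative wiring of the four modules of the apparatus claim."""

    def __init__(self, enhancer, backbone, marker_net, detection_head):
        self.enhancer = enhancer              # first module: image enhancement model
        self.backbone = backbone              # second module: feature extraction network
        self.marker_net = marker_net          # third module: marker localization network
        self.detection_head = detection_head  # fourth module: target detection network

    def detect(self, original_image):
        enhanced = self.enhancer(original_image)      # enhanced medical image
        features = self.backbone(enhanced)            # image features
        marker_frames = self.marker_net(features)     # candidate target marker frames
        position, category = self.detection_head(features, marker_frames)
        return position, category
```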
7. The apparatus of claim 6, further comprising a first training module configured to:
mark, based on a pre-acquired training medical image, a first marker frame, a second marker frame and a third marker frame of the medical instrument in the training medical image respectively through a marker localization network to be trained in the target detection model; wherein the first marker frame corresponds to a tip position of the medical instrument, the second marker frame corresponds to a body position of the medical instrument, and the third marker frame corresponds to an overall position of the medical instrument;
and cluster the sizes and scales of the first marker frame, the second marker frame and the third marker frame through a clustering algorithm, so as to train the marker localization network.
8. The apparatus of claim 6, wherein the original medical image comprises three image channels, and the first module is further configured to:
perform, based on the pre-acquired original medical image, histogram equalization on any one image channel of the original medical image through the image enhancement model;
and randomly select an image channel filter from the image enhancement model, and perform image filtering on the remaining image channels of the original medical image through the selected image channel filter, so as to obtain the enhanced medical image corresponding to the original medical image.
9. The apparatus of claim 6, wherein the fourth module is further configured to:
acquire, based on the plurality of target marker frames and through a classifier of the target detection network, the probability that each target marker frame belongs to the region where the medical instrument is located;
perform coordinate regression on each target marker frame whose probability of belonging to the region where the medical instrument is located is greater than a first preset threshold, and obtain, through a non-maximum suppression algorithm, a score corresponding to each target marker frame after the coordinate regression;
and acquire, based on the position corresponding to the target marker frame with the highest score, the position of the medical instrument in the enhanced medical image through a position prediction module, wherein the position prediction module is preset in the target detection network and is used for predicting the position of the target object.
10. The apparatus of claim 6, further comprising a second training module configured to:
acquire, based on a pre-acquired marked image and a pre-acquired background image, image features of the marked image and image features of the background image respectively through the feature extraction network of the target detection model;
acquire a plurality of target losses corresponding to the target detection network based on the acquired image features of the marked image and the background image and a preset region of interest of the image;
and train the weight parameters of the target detection network through a back-propagation algorithm based on those of the obtained target losses that are greater than a second preset threshold.
CN201911288468.4A 2019-12-11 2019-12-11 Medical instrument image detection method and device Pending CN111080700A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911288468.4A CN111080700A (en) 2019-12-11 2019-12-11 Medical instrument image detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911288468.4A CN111080700A (en) 2019-12-11 2019-12-11 Medical instrument image detection method and device

Publications (1)

Publication Number Publication Date
CN111080700A true CN111080700A (en) 2020-04-28

Family

ID=70314802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911288468.4A Pending CN111080700A (en) 2019-12-11 2019-12-11 Medical instrument image detection method and device

Country Status (1)

Country Link
CN (1) CN111080700A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598224A (en) * 2018-11-27 2019-04-09 微医云(杭州)控股有限公司 Recommend white blood cell detection method in the Sections of Bone Marrow of convolutional neural networks based on region
CN109978882A (en) * 2019-04-09 2019-07-05 中康龙马(北京)医疗健康科技有限公司 A kind of medical imaging object detection method based on multi-modal fusion
CN110503112A (en) * 2019-08-27 2019-11-26 电子科技大学 A kind of small target deteection of Enhanced feature study and recognition methods

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘市祺, 孙晓波, 谢晓亮, 侯增广: "Guidewire Tracking Based on Region Proposal Network and Residual Structure", Pattern Recognition and Artificial Intelligence, vol. 32, no. 1, pages 1-2 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113679402B (en) * 2020-05-18 2024-05-24 西门子(深圳)磁共振有限公司 Image presentation method and system in interventional therapy, imaging system and storage medium
CN113679402A (en) * 2020-05-18 2021-11-23 西门子(深圳)磁共振有限公司 Image presentation method and system in interventional therapy, imaging system and storage medium
CN111968115A (en) * 2020-09-09 2020-11-20 山东大学第二医院 Method and system for detecting orthopedic consumables based on rasterization image processing method
CN111968114B (en) * 2020-09-09 2021-04-09 山东大学第二医院 Orthopedics consumable detection method and system based on cascade deep learning method
CN111968115B (en) * 2020-09-09 2021-05-04 山东大学第二医院 Method and system for detecting orthopedic consumables based on rasterization image processing method
CN111968114A (en) * 2020-09-09 2020-11-20 山东大学第二医院 Orthopedics consumable detection method and system based on cascade deep learning method
CN113239786A (en) * 2021-05-11 2021-08-10 重庆市地理信息和遥感应用中心 Remote sensing image country villa identification method based on reinforcement learning and feature transformation
CN113239786B (en) * 2021-05-11 2022-09-30 重庆市地理信息和遥感应用中心 Remote sensing image country villa identification method based on reinforcement learning and feature transformation
CN114387332A (en) * 2022-01-17 2022-04-22 江苏省特种设备安全监督检验研究院 Pipeline thickness measuring method and device
CN114387332B (en) * 2022-01-17 2022-11-08 江苏省特种设备安全监督检验研究院 Pipeline thickness measuring method and device
CN115294426B (en) * 2022-10-08 2022-12-06 深圳市益心达医学新技术有限公司 Method, device and equipment for tracking interventional medical equipment and storage medium
CN115294426A (en) * 2022-10-08 2022-11-04 深圳市益心达医学新技术有限公司 Method, device and equipment for tracking interventional medical equipment and storage medium
CN115790503A (en) * 2023-01-29 2023-03-14 张家港市欧凯医疗器械有限公司 Pipeline detection method and system for J-shaped guide pipe curvature machining
CN116758077A (en) * 2023-08-18 2023-09-15 山东航宇游艇发展有限公司 Online detection method and system for surface flatness of surfboard
CN116758077B (en) * 2023-08-18 2023-10-20 山东航宇游艇发展有限公司 Online detection method and system for surface flatness of surfboard

Similar Documents

Publication Publication Date Title
CN111080700A (en) Medical instrument image detection method and device
US10614573B2 (en) Method for automatically recognizing liver tumor types in ultrasound images
CN105279759B (en) The abdominal cavity aortic aneurysm outline dividing method constrained with reference to context information arrowband
Liu et al. Automatic whole heart segmentation using a two-stage u-net framework and an adaptive threshold window
US9002078B2 (en) Method and system for shape-constrained aortic valve landmark detection
EP3307169B1 (en) Real-time collimation and roi-filter positioning in x-ray imaging via automatic detection of the landmarks of interest
CN107545584A (en) The method, apparatus and its system of area-of-interest are positioned in medical image
WO2005055137A2 (en) Vessel segmentation using vesselness and edgeness
CN111798451A (en) 3D guide wire tracking method and device based on blood vessel 3D/2D matching
Zhu et al. Automatic segmentation of the left atrium from MR images via variational region growing with a moments-based shape prior
US9730609B2 (en) Method and system for aortic valve calcification evaluation
CN113192069B (en) Semantic segmentation method and device for tree structure in three-dimensional tomographic image
Wu et al. Fast catheter segmentation from echocardiographic sequences based on segmentation from corresponding X-ray fluoroscopy for cardiac catheterization interventions
CN111932554A (en) Pulmonary blood vessel segmentation method, device and storage medium
CN116503607B (en) CT image segmentation method and system based on deep learning
CN111681254A (en) Catheter detection method and system for vascular aneurysm interventional operation navigation system
CN111080676B (en) Method for tracking endoscope image sequence feature points through online classification
Hao et al. Magnetic resonance image segmentation based on multi-scale convolutional neural network
Yang et al. Efficient catheter segmentation in 3D cardiac ultrasound using slice-based FCN with deep supervision and f-score loss
CN114494364A (en) Liver three-dimensional ultrasonic and CT image registration initialization method and device and electronic equipment
CN112950734A (en) Coronary artery reconstruction method, device, electronic equipment and storage medium
CN116883462A (en) Medical image registration method based on LOFTR network model and improved particle swarm optimization
Yan et al. Segmentation of pulmonary parenchyma from pulmonary CT based on ResU-Net++ model
Yang et al. Automated catheter localization in volumetric ultrasound using 3D patch-wise U-Net with focal loss
CN112885435A (en) Method, device and system for determining image target area

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination