CN111967530A - Fluorescence area identification method of medical fluorescence imaging system - Google Patents


Info

Publication number
CN111967530A
Authority
CN
China
Prior art keywords
fluorescence
area
medical
imaging system
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010886093.8A
Other languages
Chinese (zh)
Inventor
蔡惠明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Nuoyuan Medical Devices Co Ltd
Original Assignee
Nanjing Nuoyuan Medical Devices Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Nuoyuan Medical Devices Co Ltd filed Critical Nanjing Nuoyuan Medical Devices Co Ltd
Priority to CN202010886093.8A priority Critical patent/CN111967530A/en
Publication of CN111967530A publication Critical patent/CN111967530A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00004Operational features of endoscopes characterised by electronic signal processing
    • A61B1/00009Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/043Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances for fluorescence imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Optics & Photonics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention discloses a fluorescence area identification method for a medical fluorescence imaging system, belonging to the technical field of medical fluorescence imaging. The method comprises the following steps: S1, constructing a medical fluorescence imaging scene with added artificial interference factors, and collecting fluorescence-area images and visible-light-area images from the experimental scene as a training data set Q1; S2, performing labeling preprocessing on the fluorescence-area and visible-light-area images in the training data set Q1; S3, constructing a fluorescence area prediction network and a loss function from the training data set Q1 samples and the labeled data.

Description

Fluorescence area identification method of medical fluorescence imaging system
Technical Field
The invention relates to the technical field of medical fluorescence imaging, in particular to a fluorescence area identification method of a medical fluorescence imaging system.
Background
An ICG (indocyanine green)-based medical fluorescence endoscope system can realize tumor tracing and lymphatic or blood-vessel angiography by injecting an ICG contrast agent into the human body in advance. The excellent fluorescence properties of ICG greatly improve clinical diagnosis and treatment. A typical medical fluorescence imaging device outputs two images: one of the fluorescence area at a wavelength of about 815 nm, and one of the visible-light area in the 400-700 nm range.
In existing fluorescence imaging systems, interference from external reflective objects or stray light makes the gray-scale characteristics of a reflective area very close to those of a real fluorescence area. The image fusion module of the imaging system therefore easily confuses reflection interference with real fluorescence imaging, so that false fluorescence areas appear in the fused image and degrade the final fluorescence imaging effect.
Disclosure of Invention
The present invention is directed to a method for identifying a fluorescence region of a medical fluorescence imaging system, so as to solve the problems mentioned in the background art.
To achieve the above object, the invention provides the following technical solution, in which the method comprises the following steps:
s1, making a medical fluorescence imaging scene added with artificial interference factors, and collecting fluorescence area images and visible area images in an experimental scene as a training data set Q1;
s2, respectively carrying out labeling pretreatment on the images of the fluorescent region and the visible region in the training data set Q1;
s3, constructing a fluorescence area prediction network and a loss function through a training data set Q1 sample and labeled data;
and S4, training the fluorescence identification prediction network according to the preprocessed training data set Q1 and the loss function to obtain the trained fluorescence area prediction network.
Further, the adding of the artificial interference factor in S1 includes the following steps:
simulating reflection interference from highly reflective objects in the experimental scene; and adjusting the shooting angle of the medical fluorescence imaging system so that the reflection interference and a normal fluorescence area are imaged simultaneously.
Further, the label preprocessing in S2 includes the following steps:
manually judging the visible-light and non-visible-light region types in the training data set Q1; and marking the interference areas and real areas in both the visible-light and fluorescence areas.
Further, the loss function in S3 adopts the following formula:
L(X, Z) = Σ_{r,c} w(X_{r,c}) Σ_p Z_{r,c,p} · log(X_{r,c,p});
where r and c index the image height and width, w(X_{r,c}) is the sparsity weight of the fluorescence-image gray information, X_{r,c} is the probability-distribution gray information of the pixel at (r, c), p is a quantized fluorescence-threshold gray value, Z_{r,c,p} is the target probability that the gray value at (r, c) equals p, and X_{r,c,p} is the predicted probability that the gray value at (r, c) equals p.
Further, the fluorescence area prediction network is an end-to-end deep learning network, and the fluorescence area prediction network comprises a feature dictionary editing layer and a classifier training layer.
Further, the feature dictionary editing layer is used for learning residual coding, and feature extraction, dictionary learning and residual coding are all realized by adopting a single CNN model.
Further, the CNN model employs a 50-layer pre-trained deep residual network (ResNet-50).
Further, the identification and prediction of the fluorescence area prediction network trained in S4 on the unknown fluorescence area image includes the following steps:
storing the pixels of the real fluorescence area predicted by the network, setting the pixels of non-fluorescence areas to 0, and outputting the final fluorescence image.
Compared with the prior art, the invention has the following beneficial effects: the fluorescence-area extraction method for a fluorescence imaging system effectively improves the fluorescence imaging effect, reduces interference from reflections and stray light, and resolves the problem that a reflective area whose gray level is close to that of a real fluorescence area cannot be quickly distinguished.
Drawings
FIG. 1 is a flow chart of the identification method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the present invention provides a method for identifying a fluorescence region of a medical fluorescence imaging system, including the following steps:
s1, making a medical fluorescence imaging scene added with artificial interference factors, and collecting fluorescence area images and visible area images in an experimental scene as a training data set Q1;
s2, respectively carrying out labeling pretreatment on the images of the fluorescent region and the visible region in the training data set Q1;
s3, constructing a fluorescence area prediction network and a loss function through a training data set Q1 sample and labeled data;
and S4, training the fluorescence identification prediction network according to the preprocessed training data set Q1 and the loss function to obtain the trained fluorescence area prediction network.
It should be noted that in this fluorescence area identification method, a medical fluorescence imaging experimental scene is first built, artificial interference factors are added to it, and the fluorescence-area and visible-light-area images collected in the scene form the training data set Q1. Next, the visible-light and non-visible-light regions of the images in Q1 are judged manually, and the images are labeled to mark the interference areas and the real areas. A fluorescence area prediction network and a loss function are then constructed from the training data Q1 and the labeled data. Unlike the conventional approach in this field, which uses a single type of training sample, the method uses both visible-light and fluorescence-area images as training samples, tailored to the characteristics of a fluorescence imaging system, which better improves the identification effect. Finally, the fluorescence identification prediction network is trained with the two types of preprocessed training data and the loss function to obtain the trained fluorescence area prediction network.
Wherein, the adding of the artificial interference factor in the step S1 comprises the following steps:
simulating reflection interference from highly reflective objects in the experimental scene; and adjusting the shooting angle of the medical fluorescence imaging system so that the reflection interference and a normal fluorescence area are imaged simultaneously. This artificial interference setup better reproduces the simultaneous imaging of a reflective area and a normal fluorescence area.
Preferably, the label preprocessing in S2 includes the following steps:
manually judging the visible-light and non-visible-light region types in the training data set Q1, and marking the interference areas and real areas in the visible-light and fluorescence areas. When labeling the training data set Q1, the interference areas and the real areas can be marked separately in the visible-light image regions and the fluorescence image regions.
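As a concrete illustration of this labeling step, the sketch below bundles one manually annotated image pair from Q1 into a training record. The function name `make_sample` and the label encoding (0 = background, 1 = real fluorescence, 2 = interference) are illustrative assumptions, not part of the patent.

```python
import numpy as np

def make_sample(visible_img, fluor_img, interference_mask, real_mask):
    """Bundle one manually labeled pair from training set Q1.

    visible_img       -- (H, W, 3) visible-light image
    fluor_img         -- (H, W)    fluorescence image
    interference_mask -- (H, W)    bool mask of annotated interference areas
    real_mask         -- (H, W)    bool mask of annotated real fluorescence areas
    """
    assert interference_mask.shape == real_mask.shape == fluor_img.shape[:2]
    assert not np.any(interference_mask & real_mask), "regions must not overlap"
    # Encode the annotation as a single label map:
    # 0 = background, 1 = real fluorescence, 2 = interference (assumed scheme).
    label = np.where(real_mask, 1, np.where(interference_mask, 2, 0))
    return {
        "visible": visible_img,
        "fluorescence": fluor_img,
        "label": label.astype(np.uint8),
    }
```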
Further, the loss function in S3 adopts the following formula:
L(X, Z) = Σ_{r,c} w(X_{r,c}) Σ_p Z_{r,c,p} · log(X_{r,c,p});
where r and c index the image height and width, w(X_{r,c}) is the sparsity weight of the fluorescence-image gray information, X_{r,c} is the probability-distribution gray information of the pixel at (r, c), p is a quantized fluorescence-threshold gray value, Z_{r,c,p} is the target probability that the gray value at (r, c) equals p, and X_{r,c,p} is the predicted probability that the gray value at (r, c) equals p.
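The loss function above can be sketched directly in NumPy, under the assumption that X and Z are stored as (H, W, P) arrays of per-pixel probability distributions over P quantized gray levels. Note that the published formula omits the minus sign of a conventional cross-entropy, so a training loop would typically minimize the negation of this quantity; the function name and array layout are assumptions for illustration.

```python
import numpy as np

def fluorescence_loss(pred, target, weights):
    """L(X, Z) = sum_{r,c} w(X_{r,c}) * sum_p Z_{r,c,p} * log(X_{r,c,p}).

    pred    -- X: (H, W, P) predicted probabilities over P gray bins
    target  -- Z: (H, W, P) target probability distribution
    weights -- w: (H, W)    per-pixel sparsity weights of the gray information
    """
    eps = 1e-12                                           # guard against log(0)
    inner = np.sum(target * np.log(pred + eps), axis=-1)  # inner sum over p
    return float(np.sum(weights * inner))                 # outer sum over r, c
```

When the prediction matches a one-hot target exactly, the inner term is log(1) = 0 per pixel and the loss is 0, which is its maximum here (it is non-positive for proper probabilities).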
The fluorescence area prediction network is an end-to-end deep learning network comprising a feature dictionary editing layer and a classifier training layer. Through end-to-end deep learning, a single neural network can replace the multiple stages of a conventional data-processing and learning pipeline. The feature dictionary editing layer may preferably adopt a robust residual encoder (Fisher Vector), which offers good texture discriminability; the classifier training layer may preferably employ an SVM (support vector machine).
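The patent does not give the internals of the feature dictionary editing layer. The sketch below shows one plausible residual-encoding scheme in the spirit of Fisher-Vector-style encoders: CNN descriptors are soft-assigned to learned dictionary codewords and the residuals are aggregated per codeword. The function name, the softmax assignment rule, and the smoothing factors are all assumptions for illustration, not the patented design.

```python
import numpy as np

def residual_encode(features, codewords, smoothing):
    """Aggregate soft-assigned residuals of descriptors against a dictionary.

    features  -- (N, D) descriptors from the CNN backbone
    codewords -- (K, D) learned dictionary
    smoothing -- (K,)   per-codeword smoothing factors
    returns   -- (K, D) aggregated residual vectors (the encoding)
    """
    # Residuals r_ik = x_i - c_k for every descriptor/codeword pair.
    r = features[:, None, :] - codewords[None, :, :]   # (N, K, D)
    # Soft assignment: softmax over negated, scaled squared residual norms.
    sq = np.sum(r ** 2, axis=-1)                       # (N, K)
    logits = -smoothing[None, :] * sq
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    a = np.exp(logits)
    a /= a.sum(axis=1, keepdims=True)                  # (N, K), rows sum to 1
    # Weighted aggregation of residuals per codeword.
    return np.einsum('nk,nkd->kd', a, r)
```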
In a further embodiment, the feature dictionary editing layer is used for learning residual coding, and the feature extraction, the dictionary learning and the residual coding are all realized by using a single CNN model.
Then, the CNN model adopts a 50-layer pre-trained deep residual network; the weight file of a ResNet-50 deep learning model can be used as the pre-trained model to improve learning efficiency.
Next, the identification and prediction of the unknown fluorescence region image by the fluorescence region prediction network trained in S4 includes the following steps:
storing the pixels of the real fluorescence area predicted by the network, setting the pixels of non-fluorescence areas to 0, and outputting the final fluorescence image. Zeroing the non-fluorescence pixels facilitates outputting a fluorescence image that contains only the fluorescence areas.
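This final masking step is straightforward; a minimal sketch, assuming the network's prediction arrives as a binary mask of the same height and width as the fluorescence image (the function name is illustrative):

```python
import numpy as np

def apply_fluorescence_mask(fluor_img, pred_mask):
    """Keep pixels predicted as real fluorescence; zero all other pixels.

    fluor_img -- (H, W) fluorescence image
    pred_mask -- (H, W) predicted mask, nonzero where fluorescence is real
    """
    return np.where(pred_mask.astype(bool), fluor_img, 0)
```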
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (8)

1. A fluorescence area identification method of a medical fluorescence imaging system is characterized by comprising the following steps:
s1, making a medical fluorescence imaging scene added with artificial interference factors, and collecting fluorescence area images and visible area images in an experimental scene as a training data set Q1;
s2, respectively carrying out labeling pretreatment on the images of the fluorescent region and the visible region in the training data set Q1;
s3, constructing a fluorescence area prediction network and a loss function through a training data set Q1 sample and labeled data;
and S4, training the fluorescence identification prediction network according to the preprocessed training data set Q1 and the loss function to obtain the trained fluorescence area prediction network.
2. The fluorescence area identification method of medical fluorescence imaging system according to claim 1, wherein the adding of the human interference factor in S1 comprises the following steps:
simulating reflection interference from highly reflective objects in the experimental scene; and adjusting the shooting angle of the medical fluorescence imaging system so that the reflection interference and a normal fluorescence area are imaged simultaneously.
3. The fluorescence area identification method of medical fluorescence imaging system according to claim 1, wherein the labeling preprocessing in S2 comprises the following steps:
manually judging the visible-light and non-visible-light region types in the training data set Q1; and marking the interference areas and real areas in both the visible-light and fluorescence areas.
4. The fluorescence area identification method of medical fluorescence imaging system according to claim 1, wherein the loss function in S3 adopts the following formula:
L(X, Z) = Σ_{r,c} w(X_{r,c}) Σ_p Z_{r,c,p} · log(X_{r,c,p});
where r and c index the image height and width, w(X_{r,c}) is the sparsity weight of the fluorescence-image gray information, X_{r,c} is the probability-distribution gray information of the pixel at (r, c), p is a quantized fluorescence-threshold gray value, Z_{r,c,p} is the target probability that the gray value at (r, c) equals p, and X_{r,c,p} is the predicted probability that the gray value at (r, c) equals p.
5. The fluorescence area recognition method of claim 1, wherein the fluorescence area prediction network is an end-to-end deep learning network, and the fluorescence area prediction network comprises a feature dictionary editing layer and a classifier training layer.
6. The fluorescence region identification method of a medical fluorescence imaging system according to claim 5, wherein the feature dictionary editing layer is used for learning residual coding, and the feature extraction, the dictionary learning and the residual coding are all realized by using a single CNN model.
7. The fluorescence region identification method of a medical fluorescence imaging system of claim 6, wherein the CNN model adopts a 50-layer pre-trained deep residual network.
8. The fluorescence region identification method of medical fluorescence imaging system of claim 1, wherein the identification and prediction of the unknown fluorescence region image by the fluorescence region prediction network trained in S4 includes the following steps:
storing the pixels of the real fluorescence area predicted by the network, setting the pixels of non-fluorescence areas to 0, and outputting the final fluorescence image.
CN202010886093.8A 2020-08-28 2020-08-28 Fluorescence area identification method of medical fluorescence imaging system Pending CN111967530A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010886093.8A CN111967530A (en) 2020-08-28 2020-08-28 Fluorescence area identification method of medical fluorescence imaging system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010886093.8A CN111967530A (en) 2020-08-28 2020-08-28 Fluorescence area identification method of medical fluorescence imaging system

Publications (1)

Publication Number Publication Date
CN111967530A true CN111967530A (en) 2020-11-20

Family

ID=73400590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010886093.8A Pending CN111967530A (en) 2020-08-28 2020-08-28 Fluorescence area identification method of medical fluorescence imaging system

Country Status (1)

Country Link
CN (1) CN111967530A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102525411A (en) * 2010-12-09 2012-07-04 深圳大学 Fluorescent endoscopic imaging method and system
CN104730050A (en) * 2015-03-04 2015-06-24 深圳市金准生物医学工程有限公司 Immune complex fluorescence detection method based on image self-adaption division
CN110097552A (en) * 2018-06-21 2019-08-06 北京大学 A kind of automatic division method of mouse prefrontal lobe neuron two-photon fluorescence image
CN111368669A (en) * 2020-02-26 2020-07-03 福建师范大学 Nonlinear optical image recognition method based on deep learning and feature enhancement
CN111402306A (en) * 2020-03-13 2020-07-10 中国人民解放军32801部队 Low-light-level/infrared image color fusion method and system based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
胡贝贝 (Hu Beibei): "Research on information extraction and quantitative analysis of positive signals of chromatography chips", China Master's Theses Full-text Database, Information Science and Technology *

Similar Documents

Publication Publication Date Title
JP7496389B2 (en) Image analysis method, device, program, and method for manufacturing trained deep learning algorithm
JP7076698B2 (en) Image analysis method, image analysis device, program, learned deep learning algorithm manufacturing method and learned deep learning algorithm
CN109255322B (en) A kind of human face in-vivo detection method and device
US9317761B2 (en) Method and an apparatus for determining vein patterns from a colour image
CN107591200B (en) Bone age mark identification and evaluation method and system based on deep learning and image omics
Shen et al. Domain-invariant interpretable fundus image quality assessment
CN108596046A (en) A kind of cell detection method of counting and system based on deep learning
Rana et al. Computational histological staining and destaining of prostate core biopsy RGB images with generative adversarial neural networks
CN110211087B (en) Sharable semiautomatic marking method for diabetic fundus lesions
CN110084794A (en) A kind of cutaneum carcinoma image identification method based on attention convolutional neural networks
Khan et al. Unsupervised identification of malaria parasites using computer vision
CN110490860A (en) Diabetic retinopathy recognition methods, device and electronic equipment
CN108710910A (en) A kind of target identification method and system based on convolutional neural networks
CN112184699A (en) Aquatic product health detection method, terminal device and storage medium
CN108596256B (en) Object recognition classifier construction method based on RGB-D
CN111653365A (en) Nasopharyngeal carcinoma auxiliary diagnosis model construction and auxiliary diagnosis method and system
CN112232977A (en) Aquatic product cultivation evaluation method, terminal device and storage medium
CN109255775A (en) A kind of gastrointestinal epithelial crypts structure based on optical fiber microendoscopic image quantifies analysis method and system automatically
CN114399465B (en) Benign and malignant ulcer identification method and system
CN107346419A (en) Iris identification method, electronic installation and computer-readable recording medium
CN110110606A (en) The fusion method of visible light neural network based and infrared face image
CN111967530A (en) Fluorescence area identification method of medical fluorescence imaging system
CN111612749A (en) Lung image-based focus detection method and device
Paul et al. A review on computational methods based on machine learning and deep learning techniques for malaria detection
Sharma et al. Solving image processing critical problems using machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination