CN108354578B - Capsule endoscope positioning system - Google Patents


Info

Publication number
CN108354578B (application CN201810210665.3A)
Authority
CN
China
Prior art keywords
capsule endoscope
picture
network model
digestive tract
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810210665.3A
Other languages
Chinese (zh)
Other versions
CN108354578A (en)
Inventor
袁建
白家莲
梁东
Current Assignee
Chongqing Jinshan Medical Technology Research Institute Co Ltd
Original Assignee
Chongqing Jinshan Medical Appliance Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Jinshan Medical Appliance Co Ltd filed Critical Chongqing Jinshan Medical Appliance Co Ltd
Priority to CN201810210665.3A priority Critical patent/CN108354578B/en
Publication of CN108354578A publication Critical patent/CN108354578A/en
Application granted granted Critical
Publication of CN108354578B publication Critical patent/CN108354578B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/041 Capsule endoscopes for imaging
    • A61B1/00006 Operational features of endoscopes characterised by electronic signal processing of control signals
    • A61B1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B1/00158 Holding or positioning arrangements using magnetic field
    • A61B1/045 Control of endoscopes combined with photographic or television appliances
    • A61B1/0661 Endoscope light sources
    • A61B1/273 Instruments for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Signal Processing (AREA)
  • Gastroenterology & Hepatology (AREA)
  • Endoscopes (AREA)

Abstract

The invention discloses a capsule endoscope positioning system comprising: an acquisition module configured to acquire a digestive tract picture collected by a capsule endoscope, the picture brightness of that picture, and the lens parameters, at the moment of collection, of the lens in the capsule endoscope that collects the picture; a positioning module configured to input the digestive tract picture into a pre-trained deep network model to obtain the digestive tract position that the model outputs for the picture, and to determine, based on a predetermined correspondence, the distance between the lens and the digestive tract mucosa at collection time that corresponds to the picture brightness and the lens parameters; and an output module configured to output pose information for the moment the capsule endoscope collected the picture, the pose information comprising the digestive tract position corresponding to the picture and the distance between the lens and the digestive tract mucosa. Accurate in-vivo positioning of the capsule endoscope is thereby realized.

Description

Capsule endoscope positioning system
Technical Field
The invention relates to the technical field of medical instruments, in particular to a capsule endoscope positioning system.
Background
Compared with the traditional medical endoscope, the capsule endoscope is simple to operate, non-invasive, painless, carries no risk of cross-infection, and does not interfere with the patient's normal activities; it is of particularly high diagnostic value for the examination of small-intestine diseases.
The movement of the capsule endoscope in the body falls into two modes. In the passive mode, the capsule endoscope advances with the peristalsis of the digestive tract; its movement is uncontrollable and random, so certain anatomical positions may be over-examined or missed. In the active mode, the capsule endoscope moves forwards, moves backwards, pitches or rolls under the control of an external magnetic field; its movement is controllable, so the digestive tract or a lesion site can be examined more effectively and comprehensively.
For an active capsule endoscope (hereinafter simply "capsule endoscope"), automatic positioning of its anatomical location in the body is essential to the examination process. Automatic positioning and analysis give the operator a judgment of the capsule's anatomical position, allow a lesion site to be observed from different directions and at different scales, and help the operator plan a more comprehensive examination route that avoids missed inspections. At present, the digestive tract position is generally recognized by medical personnel from the image data returned by the capsule endoscope; this is easily influenced by the subjective factors of the personnel and cannot achieve accurate in-vivo positioning of the capsule endoscope.
In summary, how to provide a technical solution capable of accurately positioning a capsule endoscope in the body is an urgent problem for those skilled in the art.
Disclosure of Invention
The invention aims to provide a capsule endoscope positioning system to realize accurate positioning of a capsule endoscope in a body.
In order to achieve the above purpose, the invention provides the following technical scheme:
a capsule endoscopic positioning system, comprising:
an acquisition module to: acquiring a digestive tract picture acquired by a capsule endoscope, the picture brightness of the digestive tract picture and lens parameters of a lens for realizing picture acquisition in the capsule endoscope when the digestive tract picture is acquired;
a positioning module to: inputting the alimentary canal picture into a depth network model trained in advance to obtain an alimentary canal position which is output by the depth network model and corresponds to the alimentary canal picture; determining the distance between the lens and the alimentary tract mucous membrane when acquiring the alimentary tract picture corresponding to the picture brightness and the lens parameters based on the predetermined corresponding relation;
an output module to: and outputting pose information when the capsule endoscope collects the alimentary canal picture, wherein the pose information comprises the position of the alimentary canal corresponding to the alimentary canal picture and the distance between the lens and the alimentary canal mucous membrane.
Preferably, the system further comprises:
a pose calculation module configured to: detect the acceleration of the capsule endoscope at the moment it collects the digestive tract picture; judge whether the external preset magnetic field at the capsule endoscope's digestive tract position satisfies a preset condition; if so, detect the magnetic induction at that position and substitute the magnetic induction and the acceleration together into a preset formula to calculate the attitude angle of the capsule endoscope; if not, detect the angular velocities of the capsule endoscope around three preset axes and integrate each angular velocity to calculate the attitude angle;
and add the attitude angle of the capsule endoscope to the pose information.
Preferably, the pose calculation module includes:
a first calculation unit configured to: calculate the pitch angle pitch of the capsule endoscope by a first preset formula (equation image not reproduced in this text);
calculate the roll angle roll of the capsule endoscope by a second preset formula (equation image not reproduced in this text);
and calculate the yaw angle yaw of the capsule endoscope by the formula
yaw = ξ + θ
where ξ denotes the deflection angle in the horizontal direction when the external preset magnetic field satisfies the preset condition, and θ is the angle, in the horizontal direction, between two vectors defined by further equation images that are likewise not reproduced. The quantities appearing in these formulas are: the x-axis, y-axis and z-axis base vectors of the capsule endoscope; the acceleration of the capsule endoscope when the digestive tract picture is collected; the magnetic induction at the digestive tract position of the capsule endoscope; and the projection of the magnetic induction vector onto the x-y plane of the capsule endoscope. The signs of roll and θ are resolved by scalars A ∈ R defined in the omitted equations: when A > 0, roll = |roll| and θ = |θ|; when A ≤ 0, roll = -|roll| and θ = -|θ|.
Preferably, the pose calculation module includes:
a second calculation unit configured to: calculate the rotation angle of the capsule endoscope around the preset x axis at time t by the formula
α(t) = α₀ + ∫₀ᵗ ωx dt;
calculate the rotation angle around the preset y axis at time t by the formula
β(t) = β₀ + ∫₀ᵗ ωy dt;
and calculate the rotation angle around the preset z axis at time t by the formula
γ(t) = γ₀ + ∫₀ᵗ ωz dt.
The attitude angle of the capsule endoscope can then be represented by a rotation matrix (matrix image not reproduced in this text),
where α₀, β₀ and γ₀ are respectively the initial rotation angles of the capsule endoscope around the preset x, y and z axes, given by the integral constant terms; ωx, ωy and ωz are respectively the angular velocities of the capsule endoscope around the three preset axes; and the integral constant terms are the attitude angles calculated at the moment when the external preset magnetic field changes from satisfying the preset condition to not satisfying it.
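The integration performed by the second calculation unit can be sketched numerically. The following is a minimal illustration only, not the patent's implementation; the fixed sampling period, the trapezoidal rule and the function name are assumptions:

```python
def integrate_angle(omega_samples, dt, angle0):
    """Accumulate a rotation angle from angular-velocity samples.

    Implements alpha(t) = alpha0 + integral of omega dt using the
    trapezoidal rule; omega_samples are in rad/s at a fixed period dt.
    """
    angle = angle0
    for w_prev, w_next in zip(omega_samples, omega_samples[1:]):
        angle += 0.5 * (w_prev + w_next) * dt
    return angle

# A constant 0.1 rad/s held for 1 s from an initial angle of 0 rad
# accumulates roughly 0.1 rad.
samples = [0.1] * 11  # 11 samples at 0.1 s spacing span 1.0 s
print(integrate_angle(samples, 0.1, 0.0))
```

The same routine would be run once per axis (ωx, ωy, ωz) to obtain α(t), β(t) and γ(t).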
Preferably, the system further comprises:
a model training module configured to: acquire a training set and a test set, each comprising digestive tract pictures and labels indicating the digestive tract position corresponding to each picture;
select a deep network model based on a deep learning framework as the current deep network model; train it with the training set; test the trained model with the test set to obtain its recognition precision data; judge whether the recognition precision data meet a preset precision requirement; if so, take the trained model as the final, fully trained deep network model; if not, adjust the trained model, take the adjusted model as the current deep network model, and return to the step of training it with the training set.
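The train-test-adjust loop of the model training module can be sketched as follows. This is a schematic only: the stand-in model object and the callback names (train_fn, eval_fn, adjust_fn) are hypothetical, and a real implementation would use a deep learning framework:

```python
def train_until_accurate(train_set, test_set, train_fn, eval_fn,
                         adjust_fn, target_accuracy, max_rounds=10):
    """Train a model, test it, and keep adjusting until the
    recognition accuracy meets the preset requirement.

    train_fn(model, train_set) -> trained model
    eval_fn(model, test_set)   -> accuracy in [0, 1]
    adjust_fn(model)           -> adjusted model (e.g. new hyperparameters)
    """
    model = {}  # stand-in for a deep network model
    for _ in range(max_rounds):
        model = train_fn(model, train_set)
        if eval_fn(model, test_set) >= target_accuracy:
            return model            # precision requirement met
        model = adjust_fn(model)    # adjust, then retrain
    raise RuntimeError("recognition accuracy target not reached")
```

The loop mirrors the module description: training, testing, a precision check, and a return to the training step after adjustment.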
Preferably, the system further comprises:
a preprocessing module configured to: after the training set and the test set are acquired, treat the digestive tract pictures in the two sets whose digestive tract position is unknown as unknown pictures, and use a perceptual hash algorithm to eliminate those unknown pictures whose similarity exceeds a preset threshold;
and apply rotation by preset angles and image enhancement to the digestive tract pictures contained in the training and test sets.
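The de-duplication step can be illustrated with an average hash, one simple member of the perceptual-hash family (the patent does not specify the exact variant). The sketch assumes 8×8 grayscale thumbnails are already available as flat pixel lists:

```python
def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale thumbnail (row-major list)."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def similarity(h1, h2):
    """Fraction of matching bits between two 64-bit hashes."""
    return 1.0 - bin(h1 ^ h2).count("1") / 64.0

def deduplicate(thumbnails, threshold=0.9):
    """Keep only thumbnails whose hash similarity to every
    already-kept thumbnail stays at or below the preset threshold."""
    kept, hashes = [], []
    for idx, pix in enumerate(thumbnails):
        h = average_hash(pix)
        if all(similarity(h, k) <= threshold for k in hashes):
            kept.append(idx)
            hashes.append(h)
    return kept
```

A production pipeline would first downscale each digestive tract picture to the 8×8 thumbnail; that resizing step is omitted here.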
Preferably, the model training module includes:
a first training unit configured to: combine the digestive tract pictures in the training set into several sub-training sets, no two of which contain exactly the same pictures; train the deep network model separately on each sub-training set to obtain several candidate deep network models; test each candidate with the test set to obtain its recognition precision data; and select the candidate whose recognition precision data indicate the highest precision as the deep network model trained with the training set.
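The selection among models trained on different sub-training sets can be sketched as follows; the callback names train_fn and eval_fn are stand-ins for the actual training and testing procedures:

```python
def select_best_model(sub_training_sets, test_set, train_fn, eval_fn):
    """Train one model per sub-training set and keep the one whose
    recognition accuracy on the shared test set is highest."""
    best_model, best_acc = None, -1.0
    for subset in sub_training_sets:
        model = train_fn(subset)
        acc = eval_fn(model, test_set)
        if acc > best_acc:
            best_model, best_acc = model, acc
    return best_model, best_acc
```

All candidates are scored against the same test set, so the comparison between them is fair even though their training data differ.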
Preferably, the model training module includes:
a second training unit configured to: test the deep network model with the test set and, from the test results, calculate the identification accuracy rate and the positive prediction rate included in the recognition precision data according to the following formulas:
identification accuracy rate = (number of test-set pictures of a given digestive tract position that are automatically identified correctly / total number of test-set pictures of that digestive tract position) × 100%;
positive prediction rate = (number of test-set pictures of a given digestive tract position that are automatically identified correctly / total number of test-set pictures automatically identified as that digestive tract position) × 100%.
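Given true and predicted position labels for the test set, the two formulas above correspond to per-class recall and precision. A sketch (the label strings are hypothetical):

```python
def position_metrics(true_labels, predicted_labels, position):
    """Identification accuracy rate (recall) and positive prediction
    rate (precision) for one digestive-tract position, in percent."""
    correct = sum(1 for t, p in zip(true_labels, predicted_labels)
                  if t == position and p == position)
    total_true = sum(1 for t in true_labels if t == position)
    total_pred = sum(1 for p in predicted_labels if p == position)
    recall = 100.0 * correct / total_true if total_true else 0.0
    precision = 100.0 * correct / total_pred if total_pred else 0.0
    return recall, precision
```

Both rates are reported per digestive tract position, matching the per-position formulation in the claims.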
Preferably, the system further comprises:
a discrimination module configured to: before the pose information for the moment of picture collection is output, judge whether the digestive tract position belongs to the unknown class; if so, output the most recently output pose information; if not, instruct the output module to output the pose information for the moment the capsule endoscope collected the digestive tract picture.
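The discrimination module's fallback behaviour can be sketched as a small stateful wrapper; the class name and the "unknown" label value here are illustrative stand-ins:

```python
class PoseOutput:
    """Hold back pose updates when the digestive-tract position is
    classified as unknown, re-emitting the most recent valid pose."""

    def __init__(self):
        self.last_pose = None

    def emit(self, position, pose):
        if position == "unknown":
            return self.last_pose      # repeat the last output pose
        self.last_pose = pose
        return pose
```

This keeps the displayed pose stable while the model is unsure, instead of showing an unreliable position.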
Preferably, the output module includes:
a display unit configured to: set a pre-drawn simulated capsule endoscope to the pose corresponding to the pose information at the moment the capsule endoscope collected the digestive tract picture, and display the simulated capsule endoscope.
The invention provides a capsule endoscope positioning system comprising: an acquisition module configured to acquire a digestive tract picture collected by a capsule endoscope, the picture brightness of that picture, and the lens parameters of the picture-collecting lens at collection time; a positioning module configured to input the digestive tract picture into a pre-trained deep network model to obtain the digestive tract position output for the picture, and to determine, based on a predetermined correspondence, the distance between the lens and the digestive tract mucosa corresponding to the picture brightness and the lens parameters; and an output module configured to output pose information comprising that digestive tract position and that distance. In the technical scheme provided by the invention, the deep network model recognizes the digestive tract position corresponding to the picture, while the picture brightness and the lens parameters at collection time determine the corresponding lens-to-mucosa distance, so that the capsule endoscope is accurately positioned in the body; the resulting pose information is output and provides good guidance for the operator in planning further examination routes.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for positioning a capsule endoscope according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a three-axis coordinate system of a capsule endoscope and detected acceleration and magnetic field vectors in a capsule endoscope positioning method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a capsule endoscope positioning system according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a hardware system corresponding to the capsule endoscope positioning method and system provided by the embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of a method for positioning a capsule endoscope according to an embodiment of the present invention is shown, which may include:
s11: acquiring a digestive tract picture acquired by the capsule endoscope, the picture brightness of the digestive tract picture and lens parameters of a lens for realizing picture acquisition in the capsule endoscope when the digestive tract picture is acquired.
The lens parameters may include the gain and exposure time of the lens. The capsule endoscope in this application specifically refers to a magnetically controlled capsule endoscope; the corresponding digestive tract picture is obtained through the capsule endoscope's picture collection inside the human body. The execution subject of the capsule endoscope positioning method provided by the embodiment of the invention may be a corresponding capsule endoscope positioning system.
S12: inputting the alimentary canal picture into a depth network model trained in advance to obtain an alimentary canal position which is output by the depth network model and corresponds to the alimentary canal picture; and determining the distance between the lens and the alimentary tract mucosa when acquiring the alimentary tract picture corresponding to the picture brightness and the lens parameters based on the predetermined corresponding relation.
The digestive tract picture collected by the capsule endoscope is input into the pre-trained deep network model, which outputs the digestive tract position corresponding to the picture. In other words, the picture whose position needs to be identified is fed to the trained model, and the model's output realizes the recognition of the picture's digestive tract position.
The distance between the lens of the capsule endoscope and the digestive tract mucosa is calculated from the picture information, using the correspondence among picture brightness, lens parameters and lens-to-mucosa distance at the same moment. This correspondence is obtained experimentally; this application takes the gain and exposure time of the lens as the lens parameters by way of example. The correspondence may be acquired as follows: an in-vivo environment is simulated experimentally; within the effective shooting distance of the lens, the distance between a photographed object and the capsule endoscope is varied in fixed steps; at each fixed distance, the gain and exposure time of the lens are adjusted; and by calculating the brightness of the pictures collected by the capsule endoscope, a correspondence table is obtained between different imaging distances (the imaging distance being the distance between the lens and the digestive tract mucosa) and the gain, exposure time and picture brightness of the lens.
In a specific implementation, this experimentally obtained correspondence table is stored in the positioning system that realizes the technical scheme of the embodiment. By acquiring the picture brightness of a digestive tract picture and combining it with the gain and exposure time of the lens, the distance between the lens and the photographed object (here, the digestive tract mucosa) is read from the table.
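Reading the distance out of the experimentally obtained correspondence table can be sketched as a nearest-operating-point lookup. The table values below are invented for illustration; the real table comes from the calibration experiment described above, and interpolation could replace the nearest-neighbour choice:

```python
def nearest_distance(calibration, gain, exposure_ms, brightness):
    """Look up the lens-to-mucosa imaging distance from a calibration
    table of (gain, exposure_ms, brightness, distance_mm) rows, using
    the closest recorded operating point."""
    def dist2(row):
        g, e, b, _ = row
        return (g - gain) ** 2 + (e - exposure_ms) ** 2 + (b - brightness) ** 2
    return min(calibration, key=dist2)[3]

# Hypothetical calibration rows: closer mucosa gives a brighter picture
# at lower gain/exposure settings.
table = [
    (1.0, 2.0, 180.0, 5.0),
    (2.0, 4.0, 120.0, 15.0),
    (4.0, 8.0, 60.0, 30.0),
]
```

In practice the three axes would be normalized before computing the squared distance, since gain, exposure and brightness have different scales; that step is omitted for brevity.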
S13: and outputting pose information when the capsule endoscope collects the alimentary canal picture, wherein the pose information comprises the position of the alimentary canal corresponding to the alimentary canal picture and the distance between the lens and the alimentary canal mucous membrane.
The pose information may be shown on a corresponding display unit so that a worker can intuitively and quickly obtain the pose of the capsule endoscope from what is displayed; of course, other arrangements may be made according to actual needs, all within the protection scope of the invention. In addition, all items of pose information in this embodiment correspond to the same digestive tract picture, i.e. to the pose of the capsule endoscope at the same moment.
In the technical scheme provided by the invention, the deep network model recognizes the digestive tract position corresponding to the picture, and the picture brightness and lens parameters at collection time determine the corresponding lens-to-mucosa distance, so that the capsule endoscope is accurately positioned in the body; the resulting pose information is output and provides good guidance for the operator in planning further examination routes.
The capsule endoscope positioning method provided by the embodiment of the invention may further comprise:
detecting the acceleration of the capsule endoscope at the moment it collects the digestive tract picture;
judging whether the external preset magnetic field at the capsule endoscope's digestive tract position satisfies a preset condition; if so, detecting the magnetic induction at that position and substituting the magnetic induction and the acceleration together into a preset formula to calculate the attitude angle of the capsule endoscope; if not, detecting the angular velocities of the capsule endoscope around three preset axes and integrating each angular velocity to calculate the attitude angle;
and adding the attitude angle of the capsule endoscope to the pose information.
For the purpose of describing these steps, the acceleration detection is regarded as the first step, the attitude-angle calculation as the second step, and the attitude-angle addition as the third step; that is, each short paragraph above corresponds to one step.
In the first step, the acceleration of the capsule endoscope at the moment of picture collection is detected. Acceleration is a vector with both magnitude and direction; specifically, the accelerations along the capsule endoscope's three axes may be detected by a three-axis acceleration sensor arranged inside it. The three axes form a three-dimensional coordinate system established with the capsule endoscope as reference; the X, Y and Z axes may be assigned arbitrarily on the capsule without affecting the accurate determination of its attitude. For convenience, this embodiment takes the axial direction of the capsule endoscope as the Z axis and uses a right-handed coordinate system. After the three-axis acceleration sensor detects the acceleration along each axis, the three components can be combined, and the resultant is taken as the acceleration of the capsule endoscope.
In the second step, the external preset magnetic field must satisfy a certain condition for magnetically assisted measurement: it is generally required to be a horizontally directed magnetic field, or a horizontal field whose magnetic induction lines are more than 95% parallel, and during operation the field changes in stages. For the auxiliary measurement of the spatial magnetic-field vector, it must therefore first be judged whether the external preset magnetic field at the capsule endoscope's digestive tract position satisfies the preset condition. If it does, the attitude angle can be measured with the spatial magnetic-field vector as an aid: the magnetic induction at the capsule's position is detected, fused with the acceleration measured when the corresponding digestive tract picture was collected, and substituted into the preset formula, so that the attitude angle is measured accurately. If it does not, the magnetic-field vector cannot assist the measurement; but since the capsule's rotation angles around the three preset axes change synchronously with its attitude angle, the angular velocities around the three axes can be detected and each integrated to obtain the rotation angles, again yielding an accurate attitude angle.
Therefore, according to the technical scheme, when the external preset magnetic field meets the preset condition, data fusion can be carried out through the acceleration of the capsule endoscope and the magnetic induction intensity of the position of the digestive tract, and the attitude angle of the capsule endoscope is accurately calculated; and when the external preset magnetic field does not meet the preset condition, data fusion can be carried out through the acceleration of the capsule endoscope and the angular speed of the capsule endoscope rotating around the preset three axes, and the attitude angle of the capsule endoscope is accurately calculated. In summary, the technical solution provided by this embodiment can always accurately realize the measurement of the full attitude of the capsule endoscope under the environment of the external preset magnetic field variation.
The capsule endoscope positioning method provided by the embodiment of the invention can calculate the attitude angle of the capsule endoscope by simultaneously substituting the magnetic induction intensity and the acceleration into a preset formula, and can comprise the following steps:

by the formula

pitch = arccos( (g · ez) / (|g| · |ez|) )

calculating the pitch angle pitch of the capsule endoscope;

by the formula

roll = arccos( (gxy · ex) / (|gxy| · |ex|) )

calculating the roll angle roll of the capsule endoscope;

by the formula

yaw = ξ + θ

calculating the yaw angle yaw of the capsule endoscope, where

θ = arccos( (n1 · n2) / (|n1| · |n2|) ).

Here ξ represents the deflection angle in the horizontal direction when the external preset magnetic field satisfies the preset condition; θ is the angle in the horizontal direction between n1 and n2; ex = [1 0 0] is the x-axis basis vector of the capsule endoscope, ey = [0 1 0] is its y-axis basis vector, and ez = [0 0 1] is its z-axis basis vector; g = [gx gy gz] is the acceleration when the capsule endoscope acquires the digestive tract picture; m = [mx my mz] is the magnetic induction intensity at the digestive tract position of the capsule endoscope; gxy = [gx gy 0] is the projection of g on the x-y plane of the capsule endoscope; n1 = g × m is the normal vector of the plane formed by g and m; and n2 = g × ez is the normal vector of the plane formed by g and ez. When the sign coefficient A ∈ R satisfies A > 0, roll = |roll|, and when A ≤ 0, roll = −|roll|; likewise, when A > 0, θ = |θ|, and when A ≤ 0, θ = −|θ|.
When the magnetic induction intensity at the digestive tract position of the capsule endoscope is detected, it can be detected by a magnetic field sensor arranged in the capsule endoscope. Like the acceleration of the capsule endoscope, the magnetic induction intensity is a vector with magnitude and direction, and since the external preset magnetic field changes in steps during normal operation, the magnetic induction intensity also changes over time. When the external preset magnetic field satisfies the preset condition and forms a horizontally oriented magnetic field, the horizontal deflection angle of the magnetic field direction relative to a preset standard axis (generally a horizontal coordinate axis of a ground coordinate system established with the human body as the center) is known; let this angle be ξ. Meanwhile, when the magnetic induction intensity and the acceleration are substituted together into the preset formula to calculate the attitude angle of the capsule endoscope, a three-axis coordinate system of the capsule endoscope can first be constructed. In this embodiment, the axial direction of the capsule endoscope is taken as the Z-axis direction, and the cross-sectional plane as the X-Y plane. Thus the x-axis basis vector of the capsule endoscope is ex = [1 0 0], its y-axis basis vector is ey = [0 1 0], and its z-axis basis vector is ez = [0 0 1]. At the same time, the detected acceleration can be denoted g = [gx gy gz], and the magnetic induction intensity at the digestive tract position of the capsule endoscope m = [mx my mz]; the corresponding schematic diagram is shown in fig. 2. The attitude of the capsule endoscope is determined by its attitude angles, which mainly comprise a pitch angle, a roll angle and a yaw angle, denoted pitch, roll and yaw respectively.
The pitch angle pitch can be solved from the included angle between the acceleration of the capsule endoscope and the Z axis of the capsule endoscope's own coordinate system, so the pitch angle can be calculated by the formula:

pitch = arccos( (g · ez) / (|g| · |ez|) ),

where g is the acceleration vector and ez = [0 0 1] is the z-axis basis vector.
The roll angle is calculated on the same principle. First, the projection vector of the acceleration of the capsule endoscope on the x-y plane of its own coordinate system can be set as gxy = [gx gy 0]; the roll angle roll can then be calculated by the formula:

roll = arccos( (gxy · ex) / (|gxy| · |ex|) ),

where ex = [1 0 0] is the x-axis basis vector of the capsule endoscope.
Here, when the sign coefficient A ∈ R satisfies A > 0, roll = |roll|, and when A ≤ 0, roll = −|roll|.
Calculating the yaw angle yaw requires data fusion of the acceleration and the magnetic induction intensity, and also uses the deflection angle ξ of the external preset magnetic field in the horizontal direction.
First, let θ be the horizontal included angle between the magnetic induction intensity and the Z-axis direction of the capsule endoscope's coordinate system. Let n1 = g × m be the normal vector of the plane formed by the acceleration vector g and the magnetic induction vector m, and n2 = g × ez the normal vector of the plane formed by g and the z-axis basis vector ez. Thus:

θ = arccos( (n1 · n2) / (|n1| · |n2|) )
where, when the coefficient A ∈ R satisfies A > 0, θ = |θ|, and when A ≤ 0, θ = −|θ|.
After θ is calculated, it is added to ξ to obtain the yaw angle yaw, that is, yaw = ξ + θ.
Therefore, the pitch angle pitch, the roll angle roll and the yaw angle yaw of the capsule endoscope are calculated, the attitude angle of the capsule endoscope can be smoothly obtained, and the full-attitude analysis of the capsule endoscope is realized.
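The pitch, roll and θ calculations above can be sketched numerically as follows. This is an illustrative implementation under the assumptions of this description (z-axis basis [0 0 1], x-axis basis [1 0 0], n1 = g × m, n2 = g × ez); the sign coefficient A and the field deflection angle ξ are taken as given inputs, and all names are hypothetical:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def attitude(g, m, xi):
    """Pitch/roll/yaw (radians) from accelerometer vector g,
    magnetometer vector m, and the known horizontal deflection
    angle xi of the external preset magnetic field."""
    ez = [0.0, 0.0, 1.0]
    ex = [1.0, 0.0, 0.0]
    pitch = math.acos(dot(g, ez) / (norm(g) * norm(ez)))  # angle between g and z axis
    g_xy = [g[0], g[1], 0.0]                              # projection of g on x-y plane
    roll = math.acos(dot(g_xy, ex) / (norm(g_xy) * norm(ex)))
    n1 = cross(g, m)                                      # normal of (g, m) plane
    n2 = cross(g, ez)                                     # normal of (g, ez) plane
    theta = math.acos(dot(n1, n2) / (norm(n1) * norm(n2)))
    yaw = xi + theta
    return pitch, roll, yaw

p, r, y = attitude([0.0, 1.0, 1.0], [1.0, 0.0, 0.0], 0.0)
```

The sign corrections driven by the coefficient A are omitted here, since arccos alone only yields the magnitudes of the angles.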
The capsule endoscope positioning method provided by the embodiment of the invention integrates each angular velocity to calculate the attitude angle of the capsule endoscope, and can comprise the following steps:

by the formula

α(t) = ∫₀ᵗ ωx dt + α₀

calculating the rotation angle of the capsule endoscope around the preset x axis at time t;

by the formula

β(t) = ∫₀ᵗ ωy dt + β₀

calculating the rotation angle of the capsule endoscope around the preset y axis at time t;

by the formula

γ(t) = ∫₀ᵗ ωz dt + γ₀

calculating the rotation angle of the capsule endoscope around the preset z axis at time t;

the attitude angle of the capsule endoscope can then be represented by the roll-pitch-yaw rotation matrix R_rpy(φ, θ, ψ) = Rz(ψ)·Ry(θ)·Rx(φ);

where α₀, β₀ and γ₀ are respectively the initial rotation angles of the capsule endoscope around the preset x, y and z axes in the integral constant term; ωx, ωy and ωz are respectively the angular velocities of the capsule endoscope rotating around its three preset axes; and the integral constant term is the attitude angle of the capsule endoscope calculated at the moment when the external preset magnetic field changes from satisfying the preset condition to not satisfying it.
In particular, when the angular velocities are integrated to calculate the attitude angle of the capsule endoscope, the method may further include: correcting the integration formulas by taking, as the integral constant term, the attitude angle of the capsule endoscope calculated at the moment when the external preset magnetic field changes from satisfying the preset condition to not satisfying it.
When the external preset magnetic field does not satisfy the preset condition, the angular velocities of the capsule endoscope around its three preset axes need to be detected, and each angular velocity is integrated separately to determine the attitude angle. Specifically, the angular velocities can be detected by an angular velocity sensor arranged in the capsule endoscope. In the MEMS field, an angular velocity sensor introduces error into the system because it measures angular velocity rather than the angle itself. The angle is obtained by a definite integral of the angular velocity, and during integration, measurement errors, sampling errors and similar factors introduce an error term. Over a short time this has little influence on the accuracy of attitude determination and can be neglected, but when the attitude angle is determined by continuous angular velocity integration over a long time, the accumulated system error grows steadily. For this reason, a correction step is added in this embodiment.
When the external preset magnetic field suddenly changes and no longer satisfies the preset condition, the attitude angle recorded at the moment of that change is the last group of error-free data. This error-free data is used as the integral constant term, i.e. as the correction parameter for subsequent angular velocity integration, thereby eliminating the accumulated system error that would otherwise build up when solving the attitude angle by long-term angular velocity integration.
Specifically, when the angular velocities are respectively integrated to calculate the attitude angle of the capsule endoscope, the rotation angle of the capsule endoscope around the preset x axis at time t can first be calculated by the formula:

α(t) = ∫₀ᵗ ωx dt + α₀,

the rotation angle around the preset y axis at time t by the formula:

β(t) = ∫₀ᵗ ωy dt + β₀,

and the rotation angle around the preset z axis at time t by the formula:

γ(t) = ∫₀ᵗ ωz dt + γ₀.

Here ωx, ωy and ωz are respectively the angular velocities of the capsule endoscope rotating around its three preset axes; t is the time counted from the instant when the external preset magnetic field changes from satisfying the preset condition to not satisfying it; and α₀, β₀ and γ₀ are respectively the initial rotation angles around the preset x, y and z axes in the integral constant term. Each initial rotation angle comes from the last recorded group of data without accumulated error, and can be obtained by converting the pitch angle pitch, roll angle roll and yaw angle yaw in that group of data.

The attitude angle of the capsule endoscope can then be solved from the above three formulas for the rotation angles around the three preset axes. For convenience of discussion, this embodiment uses the roll-pitch-yaw representation from dynamics and expresses the attitude angle of the capsule endoscope by the rotation matrix R_rpy(φ, θ, ψ) = Rz(ψ)·Ry(θ)·Rx(φ), i.e.:

R_rpy(φ, θ, ψ) =
[ cosψ·cosθ    cosψ·sinθ·sinφ − sinψ·cosφ    cosψ·sinθ·cosφ + sinψ·sinφ ]
[ sinψ·cosθ    sinψ·sinθ·sinφ + cosψ·cosφ    sinψ·sinθ·cosφ − cosψ·sinφ ]
[ −sinθ        cosθ·sinφ                     cosθ·cosφ                  ]
and the general expressions of the pitch angle pitch, the roll angle roll and the yaw angle yaw can be obtained by operating the data in the matrix, and are not described herein again.
In addition, the angular velocity can be detected by a gyroscope disposed in the capsule endoscope.
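As an illustrative sketch (the sample rate and gyro readings are hypothetical), the rotation angle about one axis can be obtained by numerically integrating sampled angular velocity, with the last error-free attitude angle supplied as the integration constant:

```python
def integrate_angle(omega_samples, dt, angle0):
    """Trapezoidal integration of angular-velocity samples (rad/s)
    taken every dt seconds, starting from the initial angle angle0
    (rad) taken from the last error-free attitude measurement."""
    angle = angle0
    for w_prev, w_next in zip(omega_samples, omega_samples[1:]):
        angle += 0.5 * (w_prev + w_next) * dt
    return angle

# Hypothetical gyro readings: constant 0.1 rad/s for 10 s at 1 Hz,
# starting from an initial angle of 0.5 rad recorded just before the
# magnetic field stopped satisfying the preset condition.
alpha_t = integrate_angle([0.1] * 11, 1.0, 0.5)
```

Resetting angle0 whenever a magnetically aided attitude fix is available is exactly the correction step described above: it prevents the integration error from accumulating indefinitely.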
The capsule endoscope positioning method provided by the embodiment of the invention is used for training a depth network model, and comprises the following steps:
acquiring a training set and a testing set, wherein the training set and the testing set comprise digestive tract pictures and marks representing positions of digestive tracts corresponding to the digestive tract pictures;
selecting a deep network model based on a deep learning framework as the current deep network model; training the current deep network model with the training set; testing the trained model with the test set to obtain its identification precision data; and judging whether the identification precision data meets the preset precision requirement. If it does, the trained model is determined to be the fully trained deep network model; if it does not, the model obtained after adjusting the trained model is taken as the current deep network model, and the step of training the deep network model with the training set is executed again.
Data set preparation comprises acquiring a training set and a test set: the training set is used to train the deep network model, and the test set is used to test it. The digestive tract pictures (referred to simply as pictures in this application) contained in the test set and the training set can be obtained by having medical personnel from a gastroenterology endoscopy room manually mark complete pictures collected by the capsule endoscope. The marked digestive tract positions can include, but are not limited to, the esophagus, cardia, gastric fundus, gastric corpus, gastric angle, gastric antrum, pylorus, duodenum and the like, with different labels adopted for different anatomical positions; the marking principle is that the anatomical position must be identifiable by the human eye from the information in a single picture. The digestive tract pictures contained in the training set and the test set are different, which helps improve the recognition precision of the trained deep network model. For example, the digestive tract pictures of N patients can be used as the data set, with the pictures of the i-th patient as the test set and the pictures of the remaining N−1 patients as the training set (1 ≤ i ≤ N).
In the present application, a deep network model is used to recognize the digestive tract position; the choice of model and the setting of its parameters can be made in advance according to actual needs. The deep network model can use Alexnet, Resnet, Googlenet, VGG and other models based on the CNN feedforward convolutional neural network. In the embodiments of this application, comparison after training and testing the above models found that, for capsule endoscopy pictures (i.e. the digestive tract anatomical pictures in this application), the Alexnet network model has relatively high recognition accuracy and positive prediction rate. Therefore, the Alexnet network model is preferred in this application to realize the corresponding functions; that is, selecting a deep network model based on a deep learning framework as the current deep network model can include: selecting an Alexnet network model based on a deep learning framework as the current deep network model.
Specifically, the deep network model selected in this application is preferably based on the Alexnet network model, which has 8 basic layers: 5 convolutional layers and 3 fully-connected layers. The output layer after the last fully-connected layer is a precision layer with a loss-function output, and the number of outputs of the output layer equals the number of digestive tract positions. The first convolutional layer (conv1) and the second convolutional layer (conv2) are each followed by a normalization layer (norm); each convolutional layer and fully-connected layer (FC) is followed by a RELU operation, i.e. an activation function used to introduce non-linearity; and a pooling layer follows norm1, norm2 and conv5. The preset precision requirement is set in advance according to actual needs. If the identification precision data meets the preset precision requirement, the recognition precision of the corresponding deep network model meets the requirement and training is determined to be complete; otherwise the deep network model is adjusted and trained again, thereby guaranteeing that the finally trained deep network model has high recognition precision.
In the technical scheme disclosed by the embodiment of the invention, a training set and a test set containing digestive tract pictures and corresponding digestive tract position marks are obtained, a deep network model is trained by using the training set, the deep network model is tested by using the test set to obtain identification precision data representing the identification precision of the deep network model, the deep network model is adjusted after the identification precision corresponding to the identification precision data does not meet the requirement, the step of training the deep network model by using the training set is returned until the identification precision data corresponding to the deep network model meets the requirement, and therefore, the identification precision of the deep network model is ensured to be higher.
The capsule endoscope positioning method provided by the embodiment of the invention can further comprise the following steps of after the training set and the test set are obtained:
determining the digestive tract pictures with unknown digestive tract positions in the training set and the test set as unknown pictures, and eliminating the pictures with similarity greater than a preset threshold value in the unknown pictures by using a perceptual hash algorithm;
and performing preset angle rotation processing and image enhancement processing on digestive tract pictures contained in the training set and the test set.
The digestive tract pictures contained in the training set and the test set also include pictures marked as an unknown class. These are the residual pictures left after the pictures from which the digestive tract position can be distinguished have been taken from the complete set of pictures acquired by the capsule endoscope; for example, pictures in which the anatomy is blocked by bubbles or other digestive tract contents and cannot be distinguished from a single image, or close-up pictures of the digestive tract mucosa that are difficult to distinguish, can be marked as the unknown class. Specifically, the process of excluding similar pictures using the perceptual hash algorithm may be as follows:
s1, reducing each of the two adjacent pictures to 8 × 8 size;
s2, if a reduced picture is a color picture, converting it to grayscale, specifically to 64 gray levels;
s3, calculating the average gray value of all pixels of the 8 × 8 picture;
s4, comparing the gray value of each pixel with the average gray value: pixels greater than or equal to the average are recorded as 1, and pixels smaller than the average as 0, yielding the hash value of the corresponding picture. The hash values of the two pictures are then compared bit by bit, and the number of unequal bits is recorded. If that number reaches a threshold set according to actual needs, the two pictures are dissimilar; if it does not, the two pictures are similar, the first picture is excluded, and the second picture is kept and compared with the next picture. This process is repeated until all pictures have been traversed.
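A minimal sketch of this average-hash comparison, assuming the pictures have already been reduced to 8 × 8 grayscale arrays (the resizing and grayscale-conversion steps s1–s2 are omitted; the threshold value is hypothetical):

```python
def ahash(pixels):
    """Average hash of an 8x8 grayscale picture given as a flat
    list of 64 gray values: 1 where pixel >= mean, else 0."""
    mean = sum(pixels) / len(pixels)
    return [1 if p >= mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of bit positions where the two hash values differ."""
    return sum(a != b for a, b in zip(h1, h2))

def similar(pix1, pix2, threshold=10):
    """Two pictures are similar when their hashes differ in fewer
    bits than the threshold (set according to actual needs)."""
    return hamming(ahash(pix1), ahash(pix2)) < threshold
```

In the de-duplication pass, `similar` would be applied to each adjacent pair, dropping the first picture of any similar pair.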
In addition, the pictures contained in the training set and the test set may be used directly, but in order to make the final deep network model give orientation-independent (isotropic) recognition results, in this embodiment all pictures in the data set may undergo rotation by a certain angle and image enhancement processing. The rotation angle can be set according to actual needs, and the rotation and image enhancement processing follow the same principles as the corresponding techniques in the prior art, which are not repeated here. Of course, it is also possible to apply only the preset-angle rotation, or only the image enhancement processing, to the pictures in the training set and test set; all such variants are within the protection scope of the present invention.
The capsule endoscope positioning method provided by the embodiment of the invention utilizes a training set to train a depth network model, and can comprise the following steps:
combining the digestive tract pictures contained in the training set into a plurality of sub-training sets, the pictures in each sub-training set not being identical; training the deep network model with each of the sub-training sets to obtain a plurality of corresponding deep network models; testing each of these models with the test set to obtain its identification precision data; and selecting the model whose identification precision data indicates the highest recognition precision as the deep network model trained with the training set.
The digestive tract images contained in the training set can be combined into a plurality of sub-training sets, for example, when the training set contains n patient images, the images of other patients except the jth patient can be used as the sub-training sets, and j takes values from 1 to n respectively, so that n sub-training sets can be obtained. And obtaining corresponding deep network models through a plurality of sub-training sets and selecting the optimal model to realize the subsequent steps, thereby further improving the precision of the deep network models obtained by training.
In addition, a plurality of sub-training sets and sub-test sets can be formed from the data set. For example, when the data set contains pictures of N patients, the pictures of all patients except the i-th can be used as a sub-training set and the pictures of the i-th patient as the corresponding sub-test set; letting i take values from 1 to N yields N sub-training sets and N sub-test sets. The deep network model is tested with the corresponding sub-test set after being trained with each sub-training set, and finally the model with the highest recognition precision is selected as the deep network model trained with the training set, to be used in the subsequent steps. In this way the data set is used to the greatest extent for training, improving the precision of the deep network model.
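This leave-one-patient-out scheme can be sketched as follows (patient identifiers and file names are hypothetical):

```python
def leave_one_patient_out(patients):
    """Yield (sub_training_set, sub_test_set) pairs: for each patient,
    that patient's pictures form the sub-test set and all other
    patients' pictures form the sub-training set."""
    for i, held_out in enumerate(patients):
        train = [p for j, p in enumerate(patients) if j != i]
        yield train, held_out

# Hypothetical per-patient picture collections
patients = [["p1_a.jpg"], ["p2_a.jpg", "p2_b.jpg"], ["p3_a.jpg"]]
splits = list(leave_one_patient_out(patients))
```

Splitting by patient rather than by picture keeps all pictures of one patient on the same side of the split, which is what prevents the test set from leaking into training.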
The capsule endoscope positioning method provided by the embodiment of the invention utilizes the test set to test the depth network model to obtain the identification precision data of the depth network model, and can comprise the following steps:
testing the deep network model with the test set and, based on the test results, calculating the identification accuracy rate and positive prediction rate included in the identification precision data of the deep network model according to the following formulas:

identification accuracy rate = (number of digestive tract pictures at a certain digestive tract position correctly identified automatically in the test set / total number of digestive tract pictures at the corresponding digestive tract position in the test set) × 100%;

positive prediction rate = (number of digestive tract pictures at a certain digestive tract position correctly identified automatically in the test set / total number of digestive tract pictures automatically identified as the corresponding digestive tract position in the test set) × 100%.
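These two ratios correspond to per-position recall and precision; a small sketch with hypothetical position labels:

```python
def recognition_metrics(true_labels, predicted_labels, position):
    """Per-position identification accuracy rate (recall) and
    positive prediction rate (precision), both as percentages."""
    correct = sum(t == position and p == position
                  for t, p in zip(true_labels, predicted_labels))
    actual = sum(t == position for t in true_labels)       # marked as this position
    predicted = sum(p == position for p in predicted_labels)  # identified as this position
    accuracy_rate = 100.0 * correct / actual if actual else 0.0
    positive_prediction = 100.0 * correct / predicted if predicted else 0.0
    return accuracy_rate, positive_prediction

# Hypothetical manual marks vs. model outputs
truth = ["antrum", "antrum", "pylorus", "antrum"]
preds = ["antrum", "pylorus", "pylorus", "antrum"]
acc, ppv = recognition_metrics(truth, preds, "antrum")
```

Both numbers are needed: a model can have a high accuracy rate for a position while still over-predicting it, which the positive prediction rate exposes.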
Testing the deep network model with the test set may specifically include: taking the pictures of the test set as input to the deep network model and obtaining the digestive tract positions it outputs for them. If the digestive tract position marked on a picture is consistent with the position output by the model, the picture is determined to be correctly identified; otherwise it is not. From this, the following can be counted: the number of pictures at a certain digestive tract position correctly identified automatically in the test set; the total number of pictures at the corresponding position in the test set; and the total number of pictures automatically identified as the corresponding position in the test set. The identification accuracy rate and positive prediction rate of the deep network model are then calculated from these counts. If both reach the corresponding values in the preset precision requirement, the recognition precision of the model meets the requirement; otherwise it does not, training is not complete, and the subsequent training step continues.
In addition, the step of inputting the alimentary canal picture acquired by the capsule endoscope into the trained deep network model to obtain the alimentary canal position corresponding to the alimentary canal picture output by the deep network model may include:
inputting the digestive tract picture acquired by the capsule endoscope into the trained deep network model, obtaining the probabilities, output by the probability-producing softmax layer contained in the model, that the acquired picture corresponds to the different digestive tract positions, and determining the position with the greatest probability as the digestive tract position corresponding to the acquired picture.
In this application, the output layer of the deep network model can adopt a softmax layer producing probabilities: for a digestive tract picture acquired by the capsule endoscope, the model outputs the probability of each digestive tract position, and the position with the greatest probability is taken as the position corresponding to the picture, thereby ensuring recognition precision.
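A minimal sketch of this probability-and-argmax step (the raw scores and position names are hypothetical):

```python
import math

def softmax(logits):
    """Convert raw output scores into probabilities that sum to 1."""
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def most_probable_position(logits, positions):
    """Digestive tract position with the greatest softmax probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return positions[best], probs[best]

positions = ["esophagus", "cardia", "antrum", "pylorus"]
pos, p = most_probable_position([0.1, 2.3, 0.4, 1.1], positions)
```

In the actual pipeline the softmax layer is part of the network itself; this standalone version only illustrates how the final position is selected from the per-position probabilities.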
In addition, in the technical scheme provided by the embodiment of the present invention, the pictures acquired by the capsule endoscope may be input to the deep network model either as in-memory picture data or as a picture storage path, and the pictures contained in the training set may be stored on the hard disk in the form of a training list for retrieval during training.
In the above technical solution, training the deep network model by using the training set may include: and training the deep network model on the GPU by using the training set.
In this embodiment, to speed up the deep network model, the training process is preferably implemented on a GPU platform with a high-performance computing card; of course, it can also be implemented on other hardware platforms according to the requirements of the actual application, which is within the protection scope of the present invention.
In addition, the deep learning framework on which the deep network model is based can be Caffe, Caffe2, Tensorflow, Theano, Torch, CNTK and the like, set according to actual needs; all are within the protection scope of the invention. In this document, an Alexnet network model under the Caffe framework is preferably used as the deep network learning model, and the corresponding command-line statements are invoked under the Caffe framework to train it.
In the above technical solution, determining that the deep network model obtained after adjusting the trained deep network model is the deep network model may include: and determining the depth network model obtained after the trained current depth network model is adjusted as the current depth network model, wherein the adjustment comprises the adjustment of the network hyper-parameters and the number of layers of the current depth network model.
It should be noted that adjusting the deep network model may mean adjusting the network hyper-parameters and the number of layers of the network model, which can specifically be achieved by changing the contents of the corresponding solver configuration file. In this way the precision of the trained deep network model is improved. The adjustment in this embodiment may be a fine-tuning process.
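For illustration, a hypothetical Caffe solver configuration of the kind whose fields would be edited during such adjustment might look as follows; the file names and values are examples only, not the patent's actual settings:

```
net: "train_val.prototxt"      # network definition (layers, hyper-parameters)
base_lr: 0.01                  # initial learning rate
lr_policy: "step"              # drop the learning rate in steps
gamma: 0.1
stepsize: 10000
momentum: 0.9
weight_decay: 0.0005
max_iter: 45000
snapshot: 5000
snapshot_prefix: "snapshots/alexnet_gi"
solver_mode: GPU               # train on the GPU platform
```

Hyper-parameter adjustment then amounts to editing fields such as `base_lr` or `stepsize` and re-running training; layer-count changes are made in the referenced network definition file.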
Before outputting pose information when the capsule endoscope collects images of the alimentary tract, the capsule endoscope positioning method provided by the embodiment of the invention can further comprise the following steps:
judging whether the digestive tract position is unknown; if it is, outputting the most recently output pose information at that moment, and if it is not, instructing the output module to output the pose information at the time the capsule endoscope acquired the digestive tract picture.
Judging that the digestive tract position is unknown means judging that it belongs to the category whose real position cannot be determined manually. In that case, the pose information from the most recent moment at which the determined digestive tract position was not unknown (i.e. the pose information output most recently before the current moment) is considered to be the pose information of the capsule endoscope at the moment the picture was acquired, and is output, so that external staff can know the position and posture of the capsule endoscope.
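This fallback can be sketched as a small cache of the most recently output pose (class and field names are hypothetical):

```python
class PoseOutput:
    """Outputs the pose for each acquired picture; when the digestive
    tract position is judged unknown, re-outputs the most recently
    output pose instead."""
    def __init__(self):
        self.last_pose = None

    def output(self, position, pose):
        if position == "unknown":
            return self.last_pose       # fall back to last known pose
        self.last_pose = pose
        return pose

out = PoseOutput()
out.output("antrum", (10.0, 20.0, 30.0))    # known position: pose is cached
fallback = out.output("unknown", None)       # unknown: last pose re-output
```

The cache is only updated for pictures with a determined position, so a run of unknown-class pictures keeps reporting the last trustworthy pose.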
The capsule endoscope positioning method provided by the embodiment of the invention can output the pose information when the capsule endoscope collects the alimentary canal picture, and comprises the following steps:
setting a pre-drawn simulated capsule endoscope to the pose corresponding to the pose information at the time the capsule endoscope captured the digestive tract picture, and displaying the simulated capsule endoscope.
The display can be realized by the display unit: the display unit sets a pre-drawn simulated capsule endoscope to the pose corresponding to the pose information at the time the capsule endoscope captured the digestive tract picture and displays it, so that staff can intuitively and quickly grasp the true pose of the capsule endoscope through the simulated one. It should be noted that the simulated capsule endoscope can be drawn at a 1:1 ratio to the real capsule endoscope, and its displayed size can be enlarged or reduced under staff control for easier viewing. In addition, the display unit may include a touch screen; when the simulated capsule endoscope is shown on the touch screen, staff can control its size, angle and so on through the touch screen.
An embodiment of the present invention further provides a capsule endoscope positioning system, as shown in fig. 2, which may include:
an obtaining module 11, configured to: acquiring a digestive tract picture acquired by a capsule endoscope, the picture brightness of the digestive tract picture and lens parameters of a lens for realizing picture acquisition in the capsule endoscope when the digestive tract picture is acquired;
a positioning module 12, configured to: input the digestive tract picture into a pre-trained deep network model to obtain the digestive tract position, output by the deep network model, corresponding to the digestive tract picture; and determine, based on a predetermined correspondence, the distance between the lens and the digestive tract mucosa at the time the digestive tract picture was captured, corresponding to the picture brightness and the lens parameters;
an output module 13, configured to: output pose information for the time the capsule endoscope captured the digestive tract picture, the pose information including the digestive tract position corresponding to the digestive tract picture and the distance between the lens and the digestive tract mucosa.
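The brightness-to-distance lookup performed by the positioning module can be illustrated with a small table search over the pre-measured correspondence of (gain, exposure time, picture brightness) to imaging distance. This is a hedged sketch: the calibration entries, units, and distance metric are made up for illustration and are not the patented calibration procedure.

```python
calibration = [
    # (gain, exposure_us, brightness, distance_mm) measured in a simulated in-vivo environment
    (1.0, 200, 180.0, 5.0),
    (1.0, 400, 150.0, 10.0),
    (2.0, 400, 120.0, 15.0),
    (4.0, 800,  90.0, 20.0),
]

def estimate_distance(gain, exposure_us, brightness):
    """Return the distance of the calibration entry whose lens settings
    and measured picture brightness are closest to the current capture."""
    def score(entry):
        g, e, b, _ = entry
        # crude weighted squared distance between capture conditions
        return (g - gain) ** 2 + ((e - exposure_us) / 100.0) ** 2 + (b - brightness) ** 2
    return min(calibration, key=score)[3]
```

A denser calibration table (fixed-step distances across the lens's effective shooting range, as the text describes) would make the nearest-entry estimate correspondingly finer.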
The capsule endoscope positioning system provided by the embodiment of the invention can further comprise:
a pose calculation module, configured to: detect the acceleration of the capsule endoscope at the time it captures the digestive tract picture; judge whether the external preset magnetic field at the digestive tract position where the capsule endoscope is located satisfies a preset condition; if so, detect the magnetic induction intensity at that digestive tract position and substitute the magnetic induction intensity and the acceleration into preset formulas to calculate the attitude angle of the capsule endoscope; if not, detect the angular velocities of the capsule endoscope rotating around its three preset axes and integrate each angular velocity to calculate the attitude angle of the capsule endoscope; and add the attitude angle of the capsule endoscope to the pose information.
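The branching described above, a closed-form attitude from accelerometer plus magnetometer when the external field is usable, and gyroscope integration otherwise, can be sketched as follows. The field threshold, the simple tilt/heading formulas, and all names are assumptions for illustration, not the patent's preset formulas.

```python
import math

FIELD_MIN_UT = 5.0  # hypothetical magnitude threshold for a "usable" external field

def update_attitude(accel, mag, gyro_rates, dt, prev_angles):
    """Return (roll, pitch, yaw)-style angles in radians."""
    if math.sqrt(sum(m * m for m in mag)) > FIELD_MIN_UT:
        # field usable: closed-form tilt from gravity, crude heading from field
        ax, ay, az = accel
        pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
        roll = math.atan2(ay, az)
        yaw = math.atan2(mag[1], mag[0])  # illustrative heading only
        return (roll, pitch, yaw)
    # field unusable: integrate angular rates (angle += rate * dt per axis)
    return tuple(a + w * dt for a, w in zip(prev_angles, gyro_rates))
```

The integration branch is drift-prone, which is why the text resets its integral constant term from the last magnetically computed attitude.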
In the capsule endoscope positioning system provided by the embodiment of the present invention, the pose calculation module may include:
a first calculation unit, configured to: calculate the pitch angle pitch of the capsule endoscope by a first preset formula [formula image]; calculate the roll angle roll of the capsule endoscope by a second preset formula [formula image]; and calculate the yaw angle yaw of the capsule endoscope by the formula
yaw = ξ + θ,
with θ given by [formula image]. Here ξ represents the deflection angle in the horizontal direction determined when the external preset magnetic field satisfies the preset condition, and θ is the angle in the horizontal direction between the two vectors shown in the formula images. The remaining symbols, each defined by a formula image, are: the x-axis, y-axis and z-axis base vectors of the capsule endoscope; the acceleration of the capsule endoscope when it captures the digestive tract picture; the magnetic induction intensity at the digestive tract position where the capsule endoscope is located; and the projection of the magnetic induction vector onto the x-y plane of the capsule endoscope. The signs of roll and θ are resolved by a scalar A ∈ R defined by a formula image: when A > 0, roll = |roll| and θ = |θ|; when A ≤ 0, roll = -|roll| and θ = -|θ|.
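The yaw computation described above involves projecting the measured magnetic field onto the capsule's x-y plane and measuring an angle between vectors. A hedged sketch of those two vector-algebra steps (standard formulas, not the patent's exact expressions; function names are illustrative):

```python
import math

def project_onto_plane(b, z_axis):
    """Remove from vector b its component along the (unit) plane normal z_axis."""
    dot = sum(bi * zi for bi, zi in zip(b, z_axis))
    return tuple(bi - dot * zi for bi, zi in zip(b, z_axis))

def angle_between(u, v):
    """Unsigned angle between two vectors, in radians."""
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))
```

The sign rule quoted in the text (the scalar A deciding whether θ keeps |θ| or -|θ|) would then be applied on top of this unsigned angle.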
In the capsule endoscope positioning system provided by the embodiment of the present invention, the pose calculation module may include:
a second calculation unit, configured to: calculate the rotation angle of the capsule endoscope around its preset x axis at moment t by the formula
α(t) = α₀ + ∫₀ᵗ ωx dt;
calculate the rotation angle of the capsule endoscope around its preset y axis at moment t by the formula
β(t) = β₀ + ∫₀ᵗ ωy dt;
and calculate the rotation angle of the capsule endoscope around its preset z axis at moment t by the formula
γ(t) = γ₀ + ∫₀ᵗ ωz dt.
The attitude angle of the capsule endoscope can then be represented by a rotation matrix [formula image], where α₀, β₀ and γ₀ are respectively the initial rotation angles of the capsule endoscope around the preset x, y and z axes, supplied by the integral constant term; ωx, ωy and ωz are respectively the angular velocities of the capsule endoscope rotating around its three preset axes; and the integral constant term is the attitude angle of the capsule endoscope calculated at the moment when the external preset magnetic field changes from satisfying the preset condition to not satisfying it.
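The integration branch above can be sketched numerically. This is a minimal rectangle-rule integrator under the assumption of evenly sampled angular rates; a real implementation would use the sensor's actual timestamps.

```python
def integrate_angle(angle0, rate_samples, dt):
    """angle(t) = angle0 + integral of the angular rate, approximated as
    a running sum of rate * dt over evenly spaced samples."""
    angle = angle0
    for w in rate_samples:
        angle += w * dt  # rectangle-rule step
    return angle
```

With `angle0` taken from the integral constant term (the last magnetically computed attitude), this reproduces each of the three per-axis formulas.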
The capsule endoscope positioning system provided by the embodiment of the invention can further comprise:
a model training module, configured to: acquire a training set and a test set, each comprising digestive tract pictures and labels indicating the digestive tract position corresponding to each picture; select a deep network model based on a deep learning framework as the current deep network model; train the current deep network model with the training set; test the trained deep network model with the test set to obtain its recognition precision data; judge whether the recognition precision data meets a preset precision requirement; if so, take the trained deep network model as the fully trained deep network model; if not, take the model obtained after adjusting the trained deep network model as the current deep network model and return to the step of training it with the training set.
The capsule endoscope positioning system provided by the embodiment of the invention can further comprise:
a pre-processing module to: after a training set and a test set are obtained, determining that digestive tract pictures corresponding to the unknown digestive tract positions in the training set and the test set are unknown pictures, and eliminating pictures with similarity greater than a preset threshold value in the unknown pictures by using a perceptual hash algorithm; and performing preset angle rotation processing and image enhancement processing on digestive tract pictures contained in the training set and the test set.
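The de-duplication step of the pre-processing module can be sketched with a simple average-hash over tiny grayscale images; this is a stand-in for a full perceptual hash (pHash), and the 0.9 similarity threshold is illustrative, not the patent's preset value.

```python
def average_hash(pixels):
    """Bit per pixel: 1 where the pixel is above the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def similarity(h1, h2):
    """Fraction of matching hash bits (1.0 = identical)."""
    same = sum(1 for a, b in zip(h1, h2) if a == b)
    return same / len(h1)

def deduplicate(images, threshold=0.9):
    """Drop images whose hash is too similar to an already kept one."""
    kept, hashes = [], []
    for img in images:
        h = average_hash(img)
        if all(similarity(h, k) <= threshold for k in hashes):
            kept.append(img)
            hashes.append(h)
    return kept
```

Applied to the "unknown" pictures, this removes near-duplicates before rotation and enhancement augmentations are generated.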
In the capsule endoscope positioning system provided by the embodiment of the invention, the model training module may include:
a first training unit to: combining the digestive tract pictures contained in the training set into a plurality of sub-training sets, wherein the digestive tract pictures contained in each sub-training set are not identical; and respectively training the deep network models by utilizing the plurality of sub-training sets to obtain a plurality of corresponding deep network models, respectively testing the plurality of deep network models by utilizing the test set to obtain the identification precision data of the corresponding deep network models, and selecting the deep network model with the highest identification precision indicated by the corresponding identification precision data as the deep network model after the deep network model is trained by utilizing the training set.
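The first training unit's strategy, training one model per sub-training set and keeping the one with the best test precision, can be sketched as below. The `train_fn` and `test_fn` callables are placeholders for framework-specific training and evaluation.

```python
def best_of_subsets(sub_training_sets, train_fn, test_fn):
    """Train one model per sub-training set and return the model whose
    test-set recognition precision is highest."""
    best_model, best_score = None, float("-inf")
    for subset in sub_training_sets:
        model = train_fn(subset)
        score = test_fn(model)
        if score > best_score:
            best_model, best_score = model, score
    return best_model
```

Because the sub-training sets may overlap but are not identical, this resembles a light bagging-style model selection.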
In the capsule endoscope positioning system provided by the embodiment of the invention, the model training module may include:
a second training unit to: testing the deep network model by using the test set, and calculating the identification accuracy and the positive prediction rate included in the identification accuracy data of the deep network model according to the following formulas based on the test result:
identification accuracy rate = (number of pictures of a given digestive tract position correctly and automatically identified in the test set / total number of pictures of that digestive tract position in the test set) × 100%;
positive prediction rate = (number of pictures of a given digestive tract position correctly and automatically identified in the test set / total number of test-set pictures automatically identified as that digestive tract position) × 100%.
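In standard terminology these two metrics are per-class recall and precision. A direct implementation over label lists (names and list representation are illustrative):

```python
def recall(true_labels, pred_labels, cls):
    """Identification accuracy rate: correct predictions of cls / actual pictures of cls."""
    actual = sum(1 for t in true_labels if t == cls)
    correct = sum(1 for t, p in zip(true_labels, pred_labels)
                  if t == cls and p == cls)
    return 100.0 * correct / actual if actual else 0.0

def precision(true_labels, pred_labels, cls):
    """Positive prediction rate: correct predictions of cls / all predictions of cls."""
    predicted = sum(1 for p in pred_labels if p == cls)
    correct = sum(1 for t, p in zip(true_labels, pred_labels)
                  if t == cls and p == cls)
    return 100.0 * correct / predicted if predicted else 0.0
```

Both are computed per digestive tract position, matching the "certain digestive tract position" wording above.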
The capsule endoscope positioning system provided by the embodiment of the invention can further comprise:
a discrimination module, configured to: before the pose information at the time the capsule endoscope captured the digestive tract picture is output, judge whether the digestive tract position is of the unknown class; if so, output the pose information most recently output before the current moment; if not, instruct the output module to output the pose information at the time the capsule endoscope captured the digestive tract picture.
In the capsule endoscope positioning system provided by the embodiment of the present invention, the output module may include:
a display unit, configured to: set a pre-drawn simulated capsule endoscope to the pose corresponding to the pose information at the time the capsule endoscope captured the digestive tract picture, and display the simulated capsule endoscope.
For a description of relevant parts in a capsule endoscope positioning system provided by an embodiment of the present invention, reference is made to detailed descriptions of corresponding parts in a capsule endoscope positioning method provided by an embodiment of the present invention, which are not repeated herein. In addition, parts of the technical solutions provided in the embodiments of the present invention that are consistent with the implementation principles of the corresponding technical solutions in the prior art are not described in detail, so as to avoid redundant description.
It should be noted that, as shown in fig. 4, a hardware system implementing the above technical solution provided by the embodiment of the present invention may include: the capsule endoscope comprises a capsule endoscope shell, an illumination unit, an image acquisition unit (comprising a lens), a microprocessor, a transceiving unit, an acceleration sensor, a magnetic field sensor, an internal magnet and a battery; the external magnetic field device comprises an external magnetic field generating device and an external magnetic field detecting device; the processor comprises a capsule space posture calculation unit, a capsule anatomical position recognition unit, a distance calculation unit and a display unit. The components of the above solution that are consistent with the implementation principle of the corresponding solution in the prior art are not specifically described herein.
The acceleration sensor is used for detecting the acceleration vector of the capsule endoscope; the magnetic field sensor is used for detecting, at the position of the capsule endoscope, the magnetic field vector generated by the external magnetic field generating device; the external magnetic field generating device is used for generating an external driving magnetic field that exerts pulling force and torque on the capsule endoscope so as to drive it to roll, rotate and tilt, thereby actively controlling the movement of the capsule endoscope in the body; and the external magnetic field detection device is used for detecting the external magnetic field. The capsule space attitude calculation unit calculates the attitude angle of the capsule endoscope by the preset formulas from the acceleration and magnetic field vectors obtained by the acceleration sensor and the magnetic field sensor; the capsule anatomical position recognition unit determines the digestive tract position from the digestive tract picture acquired by the capsule endoscope using the deep network model; the distance calculation unit calculates the distance between the lens of the capsule endoscope and the digestive tract mucosa from the picture information; and the display unit displays the pose information of the capsule endoscope on the display, thereby locating the capsule endoscope in the body and providing useful guidance for the operator in planning the subsequent examination route.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A capsule endoscope positioning system, comprising:
an acquisition module to: acquiring a digestive tract picture acquired by a capsule endoscope, the picture brightness of the digestive tract picture and lens parameters of a lens for realizing picture acquisition in the capsule endoscope when the digestive tract picture is acquired;
a positioning module to: inputting the alimentary canal picture into a depth network model trained in advance to obtain an alimentary canal position which is output by the depth network model and corresponds to the alimentary canal picture; determining the distance between the lens and the alimentary tract mucous membrane when acquiring the alimentary tract picture corresponding to the picture brightness and the lens parameters based on the predetermined corresponding relation; wherein, the lens parameters include the gain and the exposure time of the lens, and the predetermined corresponding relationship includes: simulating an in-vivo environment, changing the distance between a subject in the in-vivo environment and the capsule endoscope in a fixed step length within an effective shooting distance of a lens of the capsule endoscope, and under the condition that the distance between the subject and the capsule endoscope is fixed, adjusting the gain and exposure time of the lens of the capsule endoscope and calculating the brightness of a picture acquired by the capsule endoscope to obtain corresponding relations between different imaging distances and the gain, exposure time and picture brightness, wherein the imaging distance is the distance between the lens of the capsule endoscope and the subject;
an output module to: outputting pose information when the capsule endoscope collects the alimentary canal picture, wherein the pose information comprises the position of the alimentary canal corresponding to the alimentary canal picture and the distance between the lens and the alimentary canal mucous membrane;
the output module includes:
a display unit for: setting a pre-drawn pose of the simulated capsule endoscope to a pose corresponding to pose information of the capsule endoscope when the capsule endoscope collects the alimentary canal picture, and displaying the simulated capsule endoscope; the display unit comprises a touch screen, and a worker can control the size and the angle of the simulated capsule endoscope through the touch screen;
the system further comprises:
a model training module to: acquiring a training set and a testing set, wherein the training set and the testing set comprise digestive tract pictures and marks representing positions of digestive tracts corresponding to the digestive tract pictures; selecting a deep network model based on a deep learning framework as a current deep network model, training the deep network model by using the training set, testing the trained deep network model by using the test set to obtain recognition precision data of the deep network model, judging whether the recognition precision data meets the preset precision requirement, if so, determining the trained deep network model as the deep network model for completing training, if not, determining the deep network model obtained after adjusting the trained deep network model as the deep network model, and returning to the step of training the deep network model by using the training set;
the model training module comprises:
a first training unit to: combining the digestive tract pictures contained in the training set into a plurality of sub-training sets, wherein the digestive tract pictures contained in each sub-training set are not identical; respectively training the depth network models by utilizing the plurality of sub-training sets to obtain a plurality of corresponding depth network models, respectively testing the plurality of depth network models by utilizing the test set to obtain identification precision data of the corresponding depth network models, and selecting the depth network model with the highest identification precision indicated by the corresponding identification precision data as the depth network model after the depth network model is trained by utilizing the training set;
the system further comprises:
a discrimination module for: before outputting the pose information when the capsule endoscope collects the alimentary canal picture, judging whether the alimentary canal position is an unknown class, if so, outputting the pose information most recently output before the current moment, and if not, instructing the output module to execute the step of outputting the pose information when the capsule endoscope collects the alimentary canal picture.
2. The system of claim 1, further comprising:
a pose calculation module to: detecting the acceleration of the capsule endoscope when the capsule endoscope collects the alimentary canal picture; judging whether an external preset magnetic field at the position of the alimentary canal where the capsule endoscope is located meets a preset condition, if so, detecting the magnetic induction intensity at the position of the alimentary canal where the capsule endoscope is located, and substituting the magnetic induction intensity and the acceleration into preset formulas to calculate the attitude angle of the capsule endoscope; if not, detecting the angular velocities of the capsule endoscope rotating around its three preset axes, and integrating each angular velocity to calculate the attitude angle of the capsule endoscope; and adding the attitude angle of the capsule endoscope to the pose information.
3. The system of claim 2, wherein the pose computation module comprises:
a first calculation unit, configured to: calculate the pitch angle pitch of the capsule endoscope by a first preset formula [formula image]; calculate the roll angle roll of the capsule endoscope by a second preset formula [formula image]; and calculate the yaw angle yaw of the capsule endoscope by the formula
yaw = ξ + θ,
with θ given by [formula image], where ξ represents the deflection angle in the horizontal direction when the external preset magnetic field satisfies the preset condition, and θ is the angle in the horizontal direction between the two vectors shown in the formula images; the remaining symbols, each defined by a formula image, are the x-axis, y-axis and z-axis base vectors of the capsule endoscope, the acceleration of the capsule endoscope when it acquires the alimentary canal picture, the magnetic induction intensity at the alimentary canal position of the capsule endoscope, and the projection of the magnetic induction vector onto the x-y plane of the capsule endoscope; the signs of roll and θ are resolved by a scalar A ∈ R defined by a formula image: when A > 0, roll = |roll| and θ = |θ|; when A ≤ 0, roll = -|roll| and θ = -|θ|.
4. The system of claim 3, wherein the pose computation module comprises:
a second calculation unit, configured to: calculate the rotation angle of the capsule endoscope around its preset x axis at moment t by the formula
α(t) = α₀ + ∫₀ᵗ ωx dt;
calculate the rotation angle of the capsule endoscope around its preset y axis at moment t by the formula
β(t) = β₀ + ∫₀ᵗ ωy dt;
and calculate the rotation angle of the capsule endoscope around its preset z axis at moment t by the formula
γ(t) = γ₀ + ∫₀ᵗ ωz dt;
the attitude angle of the capsule endoscope can be represented by a rotation matrix [formula image], where α₀, β₀ and γ₀ are respectively the initial rotation angles of the capsule endoscope around the preset x, y and z axes, supplied by the integral constant term; ωx, ωy and ωz are respectively the angular velocities of the capsule endoscope rotating around its three preset axes; and the integral constant term is the attitude angle of the capsule endoscope calculated at the moment when the external preset magnetic field changes from satisfying the preset condition to not satisfying it.
5. The system of claim 1, further comprising:
a pre-processing module to: after the training set and the test set are obtained, determining that the digestive tract pictures with unknown digestive tract positions in the training set and the test set are unknown pictures, and eliminating pictures with similarity larger than a preset threshold value in the unknown pictures by using a perceptual hash algorithm; and carrying out preset angle rotation processing and image enhancement processing on the digestive tract pictures contained in the training set and the testing set.
6. The system of claim 1, wherein the model training module comprises:
a second training unit to: testing the deep network model by using the test set, and calculating the identification accuracy and the positive prediction rate included in the identification accuracy data of the deep network model according to the following formulas based on the test result:
identification accuracy rate = (number of pictures of a given digestive tract position correctly and automatically identified in the test set / total number of pictures of that digestive tract position in the test set) × 100%;
positive prediction rate = (number of pictures of a given digestive tract position correctly and automatically identified in the test set / total number of test-set pictures automatically identified as that digestive tract position) × 100%.
CN201810210665.3A 2018-03-14 2018-03-14 Capsule endoscope positioning system Active CN108354578B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810210665.3A CN108354578B (en) 2018-03-14 2018-03-14 Capsule endoscope positioning system


Publications (2)

Publication Number Publication Date
CN108354578A CN108354578A (en) 2018-08-03
CN108354578B true CN108354578B (en) 2020-10-30





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210927

Address after: 401120 1-1, 2-1, 3-1, building 5, No. 18, Cuiping Lane 2, Huixing street, Yubei District, Chongqing

Patentee after: Chongqing Jinshan Medical Technology Research Institute Co.,Ltd.

Address before: 401120 1 office buildings, Jinshan International Industrial City, 18 of Nei sang Road, Hui Xing street, Yubei District, Chongqing.

Patentee before: CHONGQING JINSHAN MEDICAL APPLIANCE Co.,Ltd.