WO2007129493A1 - Medical image observation support apparatus - Google Patents
Medical image observation support apparatus
- Publication number
- WO2007129493A1 (application PCT/JP2007/052894; JP2007052894W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- information
- organ
- anatomical
- luminal
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/46—Arrangements for interfacing with the operator or the patient
- A61B6/461—Displaying means of special interest
- A61B6/466—Displaying means of special interest adapted to display 3D data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
Definitions
- the present invention relates to a medical image observation support apparatus that supports observation of the appearance of a luminal organ.
- In helical continuous scanning (helical scan), a three-dimensional region of the subject is scanned by continuously feeding the subject in the body-axis direction while continuously rotating the X-ray irradiator/detector.
- A 3D image is created by stacking tomographic images of successive slices of the 3D region.
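Though the publication contains no code, the slice-stacking step can be sketched in Python; the array shapes and the `spacing_mm` parameter below are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def stack_slices(slices, spacing_mm):
    """Stack 2D tomographic slices (ordered along the body axis) into a
    3D volume. `spacing_mm` is a hypothetical slice-pitch parameter of
    the helical scan, carried alongside the voxel array."""
    volume = np.stack(slices, axis=0)  # shape: (num_slices, rows, cols)
    return volume, spacing_mm

# Example: 20 slices of 64x64 CT values
slices = [np.zeros((64, 64), dtype=np.int16) for _ in range(20)]
vol, pitch = stack_slices(slices, spacing_mm=1.25)
```

The slice pitch must be kept with the volume so that later structure measurements (lengths along the core line, organ radii) can be expressed in millimeters rather than voxels.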
- Such 3D images include, for example, 3D images of trunk blood vessel regions and of the bronchial region of the lung.
- A 3D image of a blood vessel region can be used to determine which part of the blood vessel should be ligated (tied off with a thread or the like), for example when determining the resection site in colorectal cancer surgery.
- To do so, it is necessary to extract the luminal region information of a desired organ, such as the bronchus, from the image data of the three-dimensional region.
- Patent Document 1: JP 2000-135215 A
- Patent Document 2: JP 2004-180940 A
- Patent Document 3: WO 2004/010857 pamphlet
- Patent Document 4: JP 2006-68351 A
- Patent Document 5: JP 2004-230086 A
- Patent Document 6: JP 2000-163555 A
- Patent Document 7: JP 2003-265408 A
- According to Non-Patent Document 1, it is possible to extract the luminal region information of a desired luminal organ from the three-dimensional image data of the subject.
- However, with a threshold extraction process that uses a single threshold, or a filter extraction process that uses an image enhancement filter common to the entire image, sufficient extraction accuracy cannot be obtained at, for example, the peripheral part of a luminal organ having a tree structure, because the same threshold value or enhancement filter is applied to every anatomical region.
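The point can be illustrated with a minimal sketch: per-region thresholds recover a peripheral lumen that a single global threshold misses. The region layout, threshold values, and CT numbers below are hypothetical.

```python
import numpy as np

def extract_lumen(volume, regions):
    """Extract a lumen mask using a different threshold per anatomical region.

    `regions` maps a region name to (slice_range, threshold) -- a
    hypothetical layout for illustration. A central airway tolerates a
    strict threshold; partial-volume-blurred peripheral branches need a
    looser one."""
    mask = np.zeros(volume.shape, dtype=bool)
    for name, (sl, thresh) in regions.items():
        mask[sl] |= volume[sl] < thresh  # air-filled lumen has low CT values
    return mask

volume = np.full((10, 8, 8), 100, dtype=np.int16)   # soft tissue background
volume[0:5, 2:4, 2:4] = -900    # central lumen, strongly air-like
volume[5:10, 3:5, 3:5] = -600   # peripheral lumen, partial-volume blurred
regions = {
    "central":    (slice(0, 5),  -800),
    "peripheral": (slice(5, 10), -500),
}
mask = extract_lumen(volume, regions)
```

A single global threshold of -800 would extract the central lumen but miss the -600 peripheral voxels entirely, which is exactly the failure mode the passage describes.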
- An endoscope apparatus has been proposed that has reduced-image generation means for generating a plurality of reduced images of a three-dimensional image at all branch points where a body cavity path in a subject branches, image rotation means for rotating the 3D image of each generated reduced image, and rotation-amount data storage means for storing, in association with the 3D image, the rotation-amount data obtained when the 3D image is rotated by the image rotation means.
- However, the image rotation means for rotating the three-dimensional image must be operated manually by the operator (the surgeon), and it is difficult to perform such operations while also operating the endoscope.
- Patent Document 3 proposes an endoscope apparatus comprising navigation-image generation means for generating a navigation image by adding reduced images of the three-dimensional image, taken at all branch points where the body cavity path in the subject branches, to a navigation image composed of the endoscopic image of the body cavity path captured by the endoscope and the three-dimensional image.
- However, since each reduced image is merely a shrunken 3D image at a branch point, the reduced images resemble one another and the operator may be confused.
- In Patent Document 4, a medical image processing method has been proposed in which core-line point sequence data of a tubular structure of a subject is obtained based on a plurality of volume data groups in the time-axis direction.
- Patent Document 5 proposes an image processing apparatus that sets a region of a desired tubular structure in stereoscopic image data of a subject and sets a core line of the tubular structure within the region set by the region setting means.
- However, in cases such as those described above, the core line cannot be set by that method as it stands.
- In Patent Document 6, a branch start point and a region-of-interest direction are designated in a region including the branch region that is the region of interest, and it is determined whether the concentration value of each point falls within a certain concentration range within the same region.
- Patent Document 7 proposes an endoscope guidance device that compares the actual endoscopic image with virtual endoscopic images stored in a database, determines the virtual endoscopic image whose degree of similarity to the endoscopic image is highest, and determines the position and posture of the distal end portion of the flexible endoscope from the information of the determined virtual endoscopic image.
- However, the comparison between the endoscopic image and the virtual endoscopic images is performed over the entire images, which may increase the time required for the processing.
- Moreover, the endoscope guidance device described in Patent Document 7 only superimposes the determined position and posture information of the endoscope tip on the MRI image or CT image, so it is difficult to say that the surgeon is given enough information to guide the endoscope.
- In Patent Document 2 there is a description that a bronchial route name is superimposed on a virtual endoscopic image; however, Patent Document 2 makes no mention of a method or means for specifically realizing this.
- The present invention has been made in view of the above points, and its purpose is to provide a medical image observation support device that can easily and appropriately support external observation of a hollow organ and grasping of the structure of the hollow organ.
- The invention according to claim 1 comprises: (a) volume region setting means for setting a volume region including a part of a hollow organ extending into the subject, based on the three-dimensional image data of the subject; (b) luminal organ region information calculating means for repeatedly calculating lumen region data, which is region information of a specific luminal organ in the volume region, based on three-dimensional image data representing the luminal organ in the volume region; (c) luminal organ structure information calculating means for calculating, for each set of lumen region data calculated by the luminal organ region information calculating means, lumen structure data that is the structure information of the luminal organ in the volume region; (d) virtual core line generating means for generating a virtual core line along the longitudinal direction of the luminal organ based on the lumen structure data; and (g) display means for displaying the virtual image — together constituting a medical image observation support device.
- That is, the volume region setting means sets a volume region including part of a hollow organ extending into the subject based on the three-dimensional image data of the subject; the luminal organ region information calculating means repeatedly calculates lumen region data, which is region information of a specific luminal organ in the volume region, based on the three-dimensional image data; the luminal organ structure information calculating means calculates, for each set of lumen region data, lumen structure data that is structural information of the luminal organ in the volume region; the virtual core line generating means generates a virtual core line along the longitudinal direction of the luminal organ based on the lumen structure data; and the virtual image generating means generates a virtual image of the luminal organ along the virtual core line.
- The virtual core line, the lumen region data, and the lumen structure data are thus obtained in reduced form.
- An observation position for generating the virtual image is determined so that the display area of the luminal organ on the display means has a desired size; the observation position is moved along the longitudinal direction of the luminal organ based on the virtual core line or the lumen structure data, and the virtual image is displayed by the display means.
- Thus, a virtual image reflecting the structure information of the hollow organ can be obtained from the three-dimensional image data, and any position of interest on the hollow organ can be reliably observed along the organ structure without complicated viewpoint-changing operations.
- Further, the observation position can be moved automatically along the longitudinal direction of the luminal organ and is calculated so that the display area of the luminal organ on the display means has the desired size; the display magnification of the luminal organ in the external image is therefore adjusted automatically, so that the observer can easily observe a very long luminal organ along its direction of travel.
- Here, the desired size is a size selected by the observer according to how the apparatus is being used: for example, when confirming the course of an entire blood vessel, the organ is displayed relatively small; when observing irregularities on the blood vessel surface, it is displayed relatively large.
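A minimal sketch of this behavior, under a pinhole-camera assumption: the camera distance is chosen per core-line point so that the organ's on-screen size stays near the desired value. All names and the focal-length parameter are illustrative, not from the disclosure.

```python
import numpy as np

def camera_distance(organ_radius_mm, desired_px, focal_px=500.0):
    """Distance from the core line at which an organ of the given radius
    spans roughly `desired_px` pixels under a pinhole model
    (`focal_px` is a hypothetical focal length in pixels)."""
    return focal_px * 2.0 * organ_radius_mm / desired_px

def viewpoints_along_core(core_line, radii, desired_px):
    """Yield (viewpoint, gaze_point) pairs that track the virtual core
    line, offsetting the camera perpendicular to the local travel
    direction so the displayed organ keeps a near-constant screen size."""
    core = np.asarray(core_line, dtype=float)
    for i, (p, r) in enumerate(zip(core, radii)):
        j = min(i + 1, len(core) - 1)
        tangent = core[j] - core[max(i - 1, 0)]
        n = np.linalg.norm(tangent)
        tangent = tangent / n if n > 0 else np.array([0.0, 0.0, 1.0])
        # pick any unit vector perpendicular to the tangent
        side = np.cross(tangent, np.array([0.0, 0.0, 1.0]))
        if np.linalg.norm(side) < 1e-6:
            side = np.cross(tangent, np.array([0.0, 1.0, 0.0]))
        side /= np.linalg.norm(side)
        d = camera_distance(r, desired_px)
        yield p + side * d, p  # camera position, gaze point on the core

core = [(0, 0, 0), (0, 0, 10), (0, 0, 20)]
vps = list(viewpoints_along_core(core, radii=[5.0, 5.0, 5.0], desired_px=100))
```

Choosing a smaller `desired_px` pulls the camera back (whole-vessel overview); a larger one brings it in close (surface irregularities), matching the two display cases described above.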
- Preferably, the medical image observation support device further includes (a) anatomical structure information storage means for storing anatomical structure information including at least anatomical name information, and (b) anatomical name associating means for associating the anatomical name information stored in the anatomical structure information storage means with the lumen structure data.
- In this way, anatomical structure information is associated with the lumen structure data, and the two can be used as a pair.
- Preferably, the medical image observation support device has image synthesizing means for displaying the anatomical name of the luminal organ on the virtual image displayed on the display means, based on the anatomical name association by the anatomical name associating means. With this configuration, the anatomical name of the luminal organ is displayed on the virtual image by the image synthesizing means, which facilitates observation of the luminal organ.
- Preferably, the virtual image generation means changes the image processing method based on the anatomical structure information or the lumen structure data. In this way, an appropriate image processing method can be selected, automatically or by the operator, for each part of the luminal organ, so that the lumen region data can be extracted with high accuracy.
- Preferably, the medical image observation support device includes: (a) endoscope position detecting means for detecting the relative position of the distal end portion of an endoscope actually inserted into the subject; and (b) first real image observation position estimating means for estimating the real image observation position, which is the position of the distal end portion of the endoscope within the luminal organ, by comparing the distal end position detected by the endoscope position detecting means with the lumen structure data. In this way, the relative position of the distal end portion of the endoscope is compared with the lumen structure data and the real image observation position is estimated, so the position of the endoscope tip corresponding to the observation position can be grasped more accurately.
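As an illustrative sketch (assuming registration between the position-sensor and image coordinate systems has already been performed, as in the transformation-matrix step described elsewhere in this publication), comparing a measured tip position with the lumen structure data can be modeled as projecting the tip onto the virtual core line:

```python
import numpy as np

def nearest_on_core(tip, core_line):
    """Project a measured endoscope-tip position onto the virtual core
    line (treated as a polyline) and return
    (closest_point, segment_index, distance). Illustrative only."""
    core = np.asarray(core_line, dtype=float)
    tip = np.asarray(tip, dtype=float)
    best = (None, -1, float("inf"))
    for i in range(len(core) - 1):
        a, b = core[i], core[i + 1]
        ab = b - a
        # clamp the projection parameter so the point stays on the segment
        t = np.clip(np.dot(tip - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        p = a + t * ab
        d = float(np.linalg.norm(tip - p))
        if d < best[2]:
            best = (p, i, d)
    return best

core = [(0, 0, 0), (0, 0, 10), (0, 5, 20)]
point, seg, dist = nearest_on_core((0, 1, 4), core)
```

The returned segment index identifies which branch of the lumen structure data the tip lies in, which is what lets the apparatus report the tip's position in anatomical terms.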
- Preferably, the medical image observation support device includes: (a) virtual image storage means for storing, among the plurality of virtual images generated by the virtual image generation means, each virtual image that includes a branching portion of the luminal organ, in association with the lumen structure data corresponding to that virtual image; and (b) second real image observation position estimating means that extracts, from a real endoscopic image actually captured by the endoscope inserted into the subject, the features corresponding to the lumen structure data appearing in that image, compares those features with the lumen structure data stored in the virtual image storage means, and estimates the observation position of the virtual image whose lumen structure data matches as a result of the comparison as the real image observation position.
- In this way, the second real image observation position estimating means extracts the features corresponding to the lumen structure data appearing in the real endoscopic image, compares them with the lumen structure data stored in the virtual image storage means, and estimates the observation position of the matching virtual image as the real image observation position. The distal end position of the endoscope can thus be estimated without directly detecting it, and because the real endoscopic image and the virtual image are collated based on features corresponding to the lumen structure data appearing on the images, highly accurate collation can be realized with a reduced processing load.
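A toy sketch of such feature-based collation, with a hypothetical feature layout (count, normalized positions, and brightness of the visible branch openings), might look like the following; the feature names and weights are assumptions for illustration only.

```python
def match_virtual_image(real_features, stored):
    """Pick the stored virtual image whose lumen-structure features best
    match those extracted from the real endoscopic image. Comparing a
    handful of features avoids collating entire images."""
    def score(a, b):
        if a["n_holes"] != b["n_holes"]:
            return float("inf")  # different branch counts cannot match
        pos = sum(
            ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
            for (xa, ya), (xb, yb) in zip(a["holes"], b["holes"])
        )
        # small weight on brightness so geometry dominates
        return pos + abs(a["brightness"] - b["brightness"]) * 0.01
    return min(stored, key=lambda name: score(real_features, stored[name]))

stored = {
    "right main bronchus": {"n_holes": 2,
                            "holes": [(0.3, 0.5), (0.7, 0.5)], "brightness": 80},
    "carina":              {"n_holes": 2,
                            "holes": [(0.4, 0.4), (0.6, 0.6)], "brightness": 60},
    "lobar branch":        {"n_holes": 3,
                            "holes": [(0.2, 0.5), (0.5, 0.5), (0.8, 0.5)],
                            "brightness": 70},
}
real = {"n_holes": 2, "holes": [(0.32, 0.52), (0.68, 0.5)], "brightness": 78}
best = match_virtual_image(real, stored)
```

Because only a few scalar features per image are compared, the cost per stored virtual image is constant, in contrast to the whole-image comparison criticized in the discussion of Patent Document 7.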
- Preferably, the image synthesizing means displays on the display means the real endoscopic image and the virtual image generated by the virtual image generating means corresponding to that real endoscopic image, in a manner allowing comparison. In this way, the real endoscopic image and the virtual image are displayed on the display means so as to be comparable.
- Preferably, the virtual image generation means generates the virtual image using, as the observation position, the real image observation position estimated by the first real image observation position estimating means. Since the virtual image is then generated from the observation position estimated to be the same as the real image observation position of the real endoscopic image, a virtual image viewed from that position is obtained.
- Preferably, the virtual image generation means generates the virtual image using, as the observation position, the real image observation position estimated by the second real image observation position estimating means. Since the virtual image is then generated from the observation position estimated to be the same as the real image observation position of the real endoscopic image, a virtual image viewed from that position is obtained.
- Preferably, the image synthesizing means displays the anatomical name of the luminal organ on the real endoscopic image displayed on the display means, based on the anatomical name association by the anatomical name associating means. In this way, the associated anatomical name is displayed on the luminal organ in the real endoscopic image, so even with endoscopic images it is possible to grasp which part of the luminal organ is displayed.
- Preferably, at least one of the number and positions of the luminal structures on the image and the brightness values within the luminal structures, which both the virtual image and the real endoscopic image have, is used as the feature on the image corresponding to the lumen structure data.
- In this way, since the real endoscopic image and the virtual image are collated based on at least one of the number of luminal structures, their positions, and the brightness within the tubular structures (features corresponding to the lumen structure data that both images have), it is not necessary to collate the entire images.
- Preferably, the medical image observation support apparatus has virtual image learning means for learning and correcting the contents stored in the virtual image storage means based on the result of the collation. Since the second real image observation position estimating means then uses contents that the virtual image learning means has corrected through learning, collation becomes more accurate as it is repeated.
- Preferably, the medical image observation support device has navigation means that displays, on the image, a route from the insertion site of the endoscope to a target site in the luminal organ, and that indicates, among the plurality of branch pipes opening at the branching portion of the luminal organ shown on the real endoscopic image displayed on the display means, the one branch pipe into which the endoscope is to be inserted. In this way, the navigation means indicates on the real endoscopic image which branch pipe the endoscope should enter.
- Preferably, the medical image observation support device has navigation means that displays, on the image, a route from the insertion site of the endoscope to a target site in the luminal organ; the navigation means automatically generates the route, and the plurality of anatomical names associated by the anatomical name associating means with each part of the luminal organ constituting the route are listed in order along the route from the insertion site to the target site.
- Thus, the path for inserting the endoscope to the target site in the hollow organ can be recognized in advance by anatomical name.
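As a sketch under an assumed parent-link representation of the branch tree (the data layout is an illustrative assumption, not the disclosed structure), listing the anatomical names along the route could look like:

```python
def route_names(names, parents, target):
    """Walk parent links from the target branch back to the insertion
    site and return the anatomical names in insertion order.

    `parents` maps branch id -> parent branch id (None at the insertion
    site); `names` maps branch id -> anatomical name."""
    path = []
    node = target
    while node is not None:
        path.append(names[node])
        node = parents[node]
    return list(reversed(path))  # insertion site first, target last

names = {0: "trachea", 1: "right main bronchus", 2: "right upper lobe bronchus"}
parents = {0: None, 1: 0, 2: 1}
route = route_names(names, parents, target=2)
```

Presenting the route as this ordered name list is what lets the operator anticipate, by anatomical name, each branch to be entered before reaching the target.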
- Preferably, the medical image observation support apparatus has extraluminal tissue extraction means for extracting, based on the three-dimensional image data, structure information of extraluminal tissue existing outside the luminal organ in the subject, and the virtual image generation means displays the virtual image of the luminal organ and the virtual image of the extraluminal tissue in the same image, at the same scale, while maintaining their actual positional relationship. In this way, the position and size of the extraluminal tissue existing outside the luminal organ can be grasped on the virtual image based on the three-dimensional image data.
- Preferably, the anatomical structure information storage means stores at least anatomical name information for the luminal organ and at least anatomical number information for the extraluminal tissue; the anatomical name associating means associates the stored anatomical name information with the hollow organ, and the stored anatomical number information with the extraluminal tissue structure information. In this way, the anatomical name associating means can associate an anatomical number with the extraluminal tissue in the same manner as it associates an anatomical name with the luminal organ.
- Preferably, the medical image observation support device has image synthesizing means for displaying the anatomical name of the luminal organ and the anatomical number of the extraluminal tissue on the virtual image displayed on the display means, based on the association of the anatomical name or the anatomical number by the anatomical name associating means. Since the image synthesizing means displays both on the virtual image, observation of the luminal organ is facilitated.
- Preferably, the virtual image generation means changes the image processing method based on at least one of the anatomical structure information, the lumen structure data, and the extraluminal tissue structure information. In this way, an appropriate image processing method can be applied, automatically or by the operator, to each part.
- Preferably, when the extraluminal tissue is set as a target site, the navigation means treats as the actual target site a site in the luminal organ adjacent to the extraluminal tissue into which the endoscope can be inserted. In this way, simply by setting the target extraluminal tissue as the target site, the operator receives assistance for inserting the endoscope up to the insertable site in the luminal organ adjacent to that extraluminal tissue.
- Preferably, the extraluminal tissue is a lymph node and the luminal organ is a blood vessel. In this way, the endoscope can be inserted through the blood vessel to the vicinity of the lymph node.
- FIG. 1 is a functional block diagram showing an outline of functions of a medical image observation support apparatus according to Embodiment 1 of the present invention.
- FIG. 2 is a block diagram showing a functional configuration of the information extraction unit in FIG.
- FIG. 3 is a diagram for explaining information stored in the anatomical information database of FIG. 1.
- FIG. 4 is a diagram illustrating an example of the relationship between each branch site of a hollow organ and the branch serial number assigned to each site using a model.
- FIG. 5 is a flowchart showing the flow of extraction processing of organ region information, organ structure information, and each branch part name by the information extraction unit of FIG.
- FIG. 6 is an example of a tomographic image based on CT image data as a three-dimensional image.
- FIG. 7 is an example of an MPR (multi-planar reconstruction) image as a three-dimensional image.
- FIG. 8 is a diagram for explaining VOI setting by a VOI setting unit.
- FIG. 9 is a diagram for explaining extraction of organ region information by an organ region information extraction unit.
- FIG. 10 is a diagram for explaining extraction of organ region information on the lower surface of the VOI.
- FIG. 11 is a diagram for explaining extraction of organ structure information by an organ structure information extraction unit.
- FIG. 13 is a diagram for explaining extraction of organ region information for an extended VOI.
- FIG. 14 is a diagram for explaining extraction of organ region information on the underside of an expanded VOI.
- FIG. 15 is a diagram for explaining further expansion of the VOI by the information extraction unit, and corresponds to FIG.
- FIG. 16 is a diagram for explaining extraction of organ region information for a further expanded VOI, and corresponds to FIG.
- FIG. 18 is a diagram for explaining extraction of organ structure information by the organ structure information extraction unit for the extended VOI, and corresponds to FIG.
- FIG. 19 is a diagram showing a state in which the VOI deviates from the luminal organ.
- FIG. 20 is a view showing the bottom surface of a VOI showing a state where the VOI is displaced from the luminal organ.
- FIG. 21 is a diagram for explaining VOI direction correction processing.
- FIG. 22 is a diagram for explaining VOI direction correction processing.
- FIG. 23 is a diagram showing an example of VOI when a branch of a hollow organ is detected.
- FIG. 24 is a diagram showing an example of VOI when a branch of a hollow organ is detected.
- FIG. 25 is a diagram for explaining setting of a child VOI by a VOI setting unit.
- FIG. 26 is a diagram for explaining the setting of grandchild VOI.
- FIG. 27 is a diagram showing an example of generated organ structure information.
- FIG. 28 is a diagram showing an example of organ structure information displayed on the display means.
- FIG. 29 is a diagram for explaining a concept in which organ structure information of a plurality of luminal organs is associated.
- FIG. 30 is a diagram in which a 3D image of an artery and organ structure information of the artery are superimposed and displayed.
- FIG. 31 is a diagram in which a three-dimensional image of a vein is further superimposed on FIG.
- FIG. 33 is a flowchart showing the flow of the luminal organ observation support process of the luminal organ observation support device of FIG. 1.
- FIG. 34 is a diagram showing an example of a mouse constituting the input unit.
- FIG. 35 is a diagram illustrating an example of a relationship between a gazing point, a viewpoint, and a line of sight.
- FIG. 37 is a diagram for explaining a plurality of viewpoint positions and a generated lumen appearance image.
- FIG. 39 is a diagram showing the structure of a device in Example 2.
- FIG. 41 is a diagram for explaining the transformation matrix calculation routine in FIG.
- FIG. 42 is a diagram for explaining the relationship between the measured endoscope tip position and the virtual core line.
- FIG. 43 is a diagram for explaining a measured endoscope tip position, a converted position, and its distance.
- ⁇ 45 It is a functional block diagram for explaining the outline of the functions of the medical image observation support device in Example 2.
- FIG. 46 is a flowchart explaining an outline of the operation corresponding to the virtual image storage unit in the medical image observation support device in the second embodiment.
- FIG. 47 is a flowchart for explaining the outline of the operation corresponding to the second actual image observation position estimation means 114 in the medical image observation support device in the second embodiment.
- FIG. 48 is a diagram for explaining the learning routine in FIG.
- FIG. 50 is a functional block diagram showing an outline of functions of the medical image observation apparatus in the fourth embodiment.
- FIG. 51 is a flowchart showing an outline of the operation of the medical image observation apparatus in Example 4.
- FIG. 52 is a diagram illustrating an example of display on the monitor 2 in the fourth embodiment.
- FIG. 53 is a diagram illustrating an example of display on the monitor 2 in the fourth embodiment.
- FIG. 54 is a diagram illustrating an example of display on the monitor 2 in the fourth embodiment.
- FIG. 55 is a diagram illustrating an example of display on the monitor 2 in the fourth embodiment.
- FIG. 56 is a diagram illustrating an outline of functions of the information extraction unit 12 in the fifth embodiment, and corresponds to FIG.
- FIG. 57 is a flowchart showing an outline of the operation of the medical image observation apparatus in Example 5.
- FIG. 58 is a diagram illustrating an example of display on the monitor 2 in the fifth embodiment.
- Anatomical structure information storage means (anatomical information database)
- Virtual image learning unit (virtual image learning means)
- Virtual image storage unit (virtual image storage means)
- First actual image observation position estimation unit (First actual image observation position estimation means)
- Second actual image observation position estimation unit (second actual image observation position estimation means)
- Navigation unit (navigation means)
- Extraluminal tissue extraction unit (extraluminal tissue extraction means)
- FIG. 1 is a functional block diagram showing an outline of the functional configuration of the medical image observation support apparatus
- FIG. 2 is a functional block diagram showing the configuration of the information extraction unit 12 of FIG. 1.
- FIG. 3 is a first diagram for explaining the anatomical information DB of FIG. 1.
- FIG. 4 is a second diagram for explaining the anatomical information DB of FIG. 1.
- FIG. 5 is a flowchart showing the flow of the extraction processing of organ region information, organ structure information, and branch site names by the information extraction unit of FIG. 1, and FIGS. 6 to 31 are diagrams explaining the processing of the flowchart of FIG. 5.
- FIG. 32 is a flowchart showing the flow of the luminal organ observation support processing in the medical image observation support apparatus of FIG. 1, and FIGS. 34 to 38 are diagrams each explaining the processing of the flowchart of FIG. 32.
- the medical image observation support device 1 of this embodiment is constituted by a so-called computer comprising a ROM that stores information in a readable manner, a RAM that reads and writes information as necessary, and a CPU (central processing unit) that processes this information, and it can control its operation in accordance with a pre-stored program. Based on CT image data, the computer 1 serving as the medical image observation support device extracts the region information and the structure information of a luminal organ designated via the input unit 3, which has a pointing device such as a mouse, generates a virtual image, and controls the viewpoint position and the line-of-sight direction.
- the viewpoint position, which is the observation position, moves along a virtual core line running along the longitudinal direction of the luminal organ, and the line-of-sight direction is calculated as the direction from the viewpoint position to the gazing point placed on the virtual core line.
- the gazing point indicates the point that becomes the center of observation. In the image, this gazing point is located at the center of the image; therefore, the direction from the viewpoint position to the gazing point becomes the line-of-sight direction.
- a state in which the gazing point is placed on the virtual core line and the viewpoint position and the line-of-sight direction are controlled accordingly is referred to as a lock-on state.
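The lock-on geometry described above can be sketched in code. This is an illustrative sketch, not the patent's implementation: the polyline representation of the virtual core line and the helper names are assumptions.

```python
import numpy as np

def point_on_core(core, s):
    """Point at arc length s along a polyline virtual core line (N x 3 array);
    used here as the gazing point placed on the core line."""
    core = np.asarray(core, dtype=float)
    seg = np.diff(core, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    s = np.clip(s, 0.0, cum[-1])
    i = min(np.searchsorted(cum, s, side="right") - 1, len(seg) - 1)
    t = (s - cum[i]) / seg_len[i]
    return core[i] + t * seg[i]

def line_of_sight(viewpoint, gazing_point):
    """Line-of-sight direction: unit vector from the viewpoint to the gazing point."""
    d = np.asarray(gazing_point, float) - np.asarray(viewpoint, float)
    return d / np.linalg.norm(d)
```

In the lock-on state, advancing `s` moves the gazing point along the core line, and the line of sight is recomputed so the gazing point stays at the image center.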
- the placement of the luminal organs is displayed, as a virtual image, on the monitor 2 serving as display means before or during image diagnosis, laparotomy, or endoscopic surgery.
- the medical image observation support device 1 includes a CT image data capturing unit 10, a CT image data storage unit 11, an information extraction unit 12 serving as volume region setting means, luminal organ region information calculating means, luminal organ structure information calculating means, virtual core line generating means, and anatomical name associating means, an anatomical information database (hereinafter abbreviated as DB) 13 as anatomical structure information storage means, a viewpoint position / line-of-sight direction setting unit 14 as observation position defining means, a luminal organ image generation unit 15 as virtual image generation means, an anatomical name information generation unit 16, a branch specification unit 17 as start point specification means, an image composition display unit 18 as image composition means, and a user interface (hereinafter abbreviated as I/F) control unit 19 as gazing point movement control means.
- the CT image data capturing unit 10 takes in three-dimensional image data generated by a known CT apparatus (not shown) that captures X-ray tomograms of a patient, via a portable storage medium such as an MO (Magneto-Optical disc) or a DVD (Digital Versatile Disc).
- the CT image data storage unit 11 stores the CT image data acquired by the CT image data acquisition unit 10.
- the information extraction unit 12 divides the luminal organ into VOIs (Volume Of Interest; hereinafter simply referred to as VOI), which are volume regions described later, and extracts the region information (lumen region data) and the structure information (lumen structure data) of the luminal organ within each VOI. In addition, the information extraction unit 12 associates the luminal organ region information and structure information with the anatomical information stored in the anatomical information DB 13. Based on the structure of the luminal organ connected to a VOI already associated with anatomical information, a new VOI is generated, and the region information and structure information of the luminal organ in the new VOI are extracted. The information extraction unit 12 also generates name assignment information for assigning names to the structure information in each VOI associated with anatomical information, and outputs it to the anatomical name information generation unit 16.
- the viewpoint position / line-of-sight direction setting unit 14 sets the gazing point on the virtual core line of the luminal organ based on the structure information of the luminal organ extracted by the information extraction unit 12, and sets the viewpoint position and line-of-sight direction for observing the appearance of the luminal organ.
- the observation position defining means is constituted by the viewpoint position / line-of-sight direction setting unit 14.
- the luminal organ image generation unit 15 performs image processing on the CT image data stored in the CT image data storage unit 11 based on the region information of the luminal organ extracted by the information extraction unit 12, and generates an appearance image of the virtual luminal organ from the viewpoint position and line-of-sight direction set by the viewpoint position / line-of-sight direction setting unit 14.
- the virtual image generation means is configured by the luminal organ image generation unit 15.
- the anatomical name information generation unit 16 generates character image data based on the name assignment information from the information extraction unit 12.
- the branch specifying unit 17 specifies a branch of the luminal organ having a tree structure (a structure with branches) with a mouse or the like of the input unit 3.
- the image composition display unit 18 synthesizes the virtual luminal organ image from the luminal organ image generation unit 15 and the character image data from the anatomical name information generation unit 16, and displays the synthesized image on the monitor 2.
- the user I/F control unit 19 controls the input of the input unit 3 according to the setting state of the viewpoint position / line-of-sight direction setting unit 14. Specifically, when the display method is in the lock-on state in the viewpoint position / line-of-sight direction setting unit 14, control is performed so that the input information of the mouse included in the input unit 3 is used only as vertical movement information for the image. Details will be described later. When the display method is not in the lock-on state, this control of the mouse input information of the input unit 3 is cancelled.
- the information extraction unit 12 includes an organ region information extraction unit 12a, an organ structure information extraction unit 12b, an image processing method extraction unit 12c, a part name assignment information extraction unit 12d, an extracted information association unit 12e, an extracted information storage unit 12f, and a VOI setting unit 12g.
- the luminal organ region information calculating means is configured by the organ region information extracting unit 12a
- the luminal organ structure information calculating unit is configured by the organ structure information extracting unit 12b.
- the volume area setting means is constituted by a VOI setting unit 12g.
- the anatomical information DB 13 is a database of anatomical information organized by the serial numbers of branches, in units of the branch structures of a hollow organ forming a tree structure. Specifically, the anatomical information includes, for each branch, items such as:
- branch link information, that is, the serial number of the branch to which the n-th branch is linked
- density value characteristics of the pixel data of the n-th branch (e.g., average value information, variance information)
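The per-branch records described above can be sketched as a small keyed database. This is an illustrative sketch only: the field names, the example values, and the choice of a dict keyed by serial number are assumptions, not the patent's data layout.

```python
from dataclasses import dataclass

@dataclass
class BranchRecord:
    serial: int          # serial number n of the branch
    parent_serial: int   # branch link information: serial number of the linked branch
    name: str            # anatomical name of the branch
    density_mean: float  # density value characteristics of the branch's pixel data
    density_var: float

# A toy anatomical-information database keyed by branch serial number
# (values are hypothetical, for illustration only).
db = {r.serial: r for r in [
    BranchRecord(1, 0, "trachea", -950.0, 40.0),
    BranchRecord(2, 1, "right main bronchus", -940.0, 45.0),
    BranchRecord(3, 1, "left main bronchus", -945.0, 42.0),
]}
```

Looking up a branch by its serial number then yields both its link information and its density characteristics in one record.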
- the information extraction unit 12 determines which branch structure the extracted VOI corresponds to, and correlates it with items (1) to (6) of the anatomical information DB 13 according to that branch structure, so that the region information and the structure information of the luminal organ are stored in the extracted information storage unit 12f.
- taking the bronchus 30 shown in FIG. 4 as an example, the extraction processing of organ region information, organ structure information, and each branch site name, which is one of the operations of the present embodiment, will be described using the flowchart of FIG. 5.
- FIG. 4 shows an anatomical model of bronchus 30.
- the bronchus 30 has a tree structure, and a serial number "n" is attached to each branch site ("1" to "20" in FIG. 4).
- the anatomical information DB 13 stores the above information (1) to (6) at least for each branch of the bronchus 30 based on the serial number “n”.
- the luminal organ is not limited to the bronchus 30; this embodiment can be applied to any luminal organ such as the esophagus, blood vessels, large intestine, small intestine, duodenum, stomach, bile duct, pancreatic duct, or lymphatic vessels.
- FIG. 5 is a flowchart for explaining a main part of the control operation of the medical image observation support apparatus 1.
- in step S1, corresponding to the branch specifying unit 17 (hereinafter, "step" is omitted), an input of a start point at which to start the extraction of the region information and structure information of a luminal organ is detected from the input unit 3 having a pointing device such as a mouse, based on CT image data displayed on the monitor 2 as a tomographic image (see FIG. 6, for example) or an MPR image (multi-section reconstructed image; see FIG. 7, for example).
- a VOI 33 having a cross section 32 including the start point 31 on the upper surface is set as shown in FIG.
- the size of the upper surface of the VOI 33 is set based on the radius of the cross section 32. For example, when the radius of the cross section 32 is r, the size of the upper surface of the VOI 33 is set to a 5r x 5r square.
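The VOI sizing rule above can be sketched as follows. This is a minimal sketch: the 5r default follows the text, but the choice of an axis-aligned box and the default depth (also 5r) are assumptions for illustration.

```python
def make_voi(center, r, depth=None, factor=5.0):
    """Axis-aligned VOI whose square upper surface, centred on the lumen centre,
    has side `factor * r` (5r by default), extending `depth` downward along z
    (depth defaults to the same side length -- an assumption, not from the text)."""
    side = factor * r
    depth = side if depth is None else depth
    cx, cy, cz = center
    # Returns (xmin, xmax, ymin, ymax, ztop, zbottom).
    return (cx - side / 2, cx + side / 2,
            cy - side / 2, cy + side / 2,
            cz, cz + depth)
```

As the branch narrows, r shrinks and the VOI shrinks with it, keeping the region of interest tight around the lumen.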
- organ region information 35 in the VOI 33 (including inner wall and outer wall information of the luminal organ) is extracted.
- the image processing method is changed based on the anatomical structure information stored in the anatomical information DB 13.
- organ region information 35 is also extracted for the lower cross section 34 of the bronchus 30 that is the lower surface of the VOI 33. Specifically, as shown in FIG. 10, the center of gravity 36 of the organ region information 35 in the lower cross section 34 is also calculated.
- in S8, corresponding to the information extraction unit 12, it is determined whether or not the extraction processing of S2 to S7 has been completed for the entire region of the bronchus 30. Specifically, for example, it is first determined whether or not the organ region information 35 of the lower cross section 34 of the bronchus 30 is present on the lower surface of the VOI 33, and from this it is determined whether the extraction over the entire region is complete.
- S9 is executed.
- whether or not a branch has been reached is determined by detecting the presence or absence of a branch. Specifically, for example, it is determined that a branch has been reached based on the detection of a plurality of pieces of organ region information 35 in the lower cross section 34 of the bronchus 30 on the lower surface of the VOI 33.
- the line segment connecting the center of gravity 36 and the center of gravity 36a is extracted as organ structure information 37a.
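The cross-section analysis above (centre-of-gravity extraction, and branch detection via multiple lumen regions on the VOI's lower surface) can be sketched with a simple connected-components pass. This is an illustrative sketch under assumed conventions (binary cross-section mask, 4-connectivity); it is not the patent's actual segmentation.

```python
import numpy as np
from collections import deque

def lumen_components(mask):
    """4-connected components of a binary cross-section mask. One component means
    the branch continues; two or more mean a bifurcation has been reached."""
    mask = np.asarray(mask, bool)
    seen = np.zeros_like(mask)
    comps = []
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        q, pix = deque([(sy, sx)]), []
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            pix.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                        and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        comps.append(np.array(pix, float))
    return comps

def centroid(pixels):
    """Centre of gravity of one lumen region (a candidate virtual core-line point)."""
    return pixels.mean(axis=0)
```

Connecting successive centroids yields the line segments that make up the organ structure information.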
- the organ region information 35a calculated in S5 and the organ structure information 37a extracted in S6 are stored in the extracted information storage unit 12f in association with the anatomical information (1) to (6) in the anatomical information DB 13.
- the anatomical information (1) to (6) of the branch parts of the bronchus 30 included in the child VOIs 33(1)a and 33(1)b connected to the VOI 33A is obtained from the anatomical information DB 13.
- the anatomical information (1) to (6) of the branch part extracted in S2 is stored in the extracted information storage unit 12f, and the anatomical names are associated with each other.
- using the traveling direction and length of the corresponding branch, calculated from the structure data of the luminal organ obtained by the information extraction unit 12, together with the anatomical information of the bronchial branches obtained so far (that is, the parent of the branch and the branches above it), the most similar branch among the bronchial branches stored in the anatomical information DB 13 is found, and the anatomical name is associated with that branch.
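The most-similar-branch search above can be sketched as a simple scoring over candidate branches. The scoring function (cosine similarity of travel directions minus a relative length penalty) is an illustrative assumption, not the patent's actual similarity criterion.

```python
import numpy as np

def match_branch(direction, length, candidates):
    """Return the name of the candidate branch most similar to a newly extracted
    branch. `candidates` is a list of (name, direction, length) tuples; the
    combined score here is an illustrative assumption."""
    direction = np.asarray(direction, float)
    direction = direction / np.linalg.norm(direction)
    best_name, best_score = None, -np.inf
    for name, d, l in candidates:
        d = np.asarray(d, float)
        cos = float(direction @ (d / np.linalg.norm(d)))  # direction similarity
        score = cos - abs(length - l) / max(length, l)    # penalise length mismatch
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Restricting `candidates` to the children of the already-named parent branch mirrors the text's use of the parent's anatomical information.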
- the size of the upper surfaces of the child VOIs 33(1)a and 33(1)b is set based on the radii of the branch cross sections 32A and 32B.
- the organ structure information 37A obtained before the setting of the child VOIs 33(1)a and 33(1)b extends from the start point 31 to the center of the lower cross section 34A of the VOI 33A.
- the centroids of the cross sections 32A and 32B of the child VOIs 33(1)a and 33(1)b are connected to it, and the result is stored in the extracted information storage unit 12f as organ structure information consisting of line segments continuous with the organ structure information 37A.
- step S8 corresponding to the information extraction unit 12, when it is determined that extraction of all organ regions is completed, the processing of this flowchart is terminated.
- the information extraction unit 12 stores the organ structure information 37A in the extraction information storage unit 12f corresponding to all the branch sites of the bronchus 30.
- FIG. 27 conceptually shows the above processing result, and the extracted information storage unit 12f stores the organ structure information 37A, the organ region information 35A, and the anatomical name of each branch part in association with each other.
- FIG. 28 shows an example of a display image in which the organ structure information 37A displayed on the actual monitor 2 and the anatomical name of each branch site generated by the anatomical name information generation unit 16 are associated with each other.
- the anatomical name information generation unit 16 generates character image data representing the anatomical names based on the name assignment information from the information extraction unit 12, and the image composition display unit 18 displays the anatomical name of each branch constituting the luminal organ on the virtual lumen appearance image data generated based on the organ region information 35A and the organ structure information 37A.
- a portion A surrounded by a broken-line square in the lower half of FIG. 28 is an enlarged view of the portion A of the image displayed on the monitor 2 shown in the upper half.
- the luminal organ is not limited to the bronchus 30; information for the esophagus, blood vessels, large intestine, small intestine, duodenum, stomach, bile duct, pancreatic duct, or lymphatic vessels can also be stored.
- in the extracted information storage unit 12f, as shown in FIG. 29, the bronchial organ structure information, the arterial organ structure information, and the venous organ structure information are stored in association with one another, together with each piece of organ structure information and each branch site name.
- FIG. 30 shows the organ structure information of the artery and the 3D image of the artery superimposed, and FIG. 31 shows the 3D image of the vein further superimposed on FIG. 30. Both are images displayed on the monitor 2.
- various information can be extracted from an artery as well as the bronchus.
- the luminal organ image generation unit 15 uses the organ region information and the organ structure information from the information extraction unit 12 and the CT image data from the CT image data storage unit 11 to generate a lumen appearance image 50, and displays it on the monitor 2. At this time, a pointer 51 is superimposed on the lumen appearance image 50 displayed on the monitor 2.
- once the pointer reaches the luminal organ on the lumen appearance screen, the pointer cannot be moved out of the luminal organ unless a large movement is given, and the pointer can easily be moved along the running direction of the luminal organ. As a result, the observer feels that the pointer is attracted to the hollow organ, and can easily select a point on the hollow organ as a starting point.
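The "attracted pointer" behaviour above can be sketched as a snap of the screen cursor onto the nearest projected core-line point. This is a minimal sketch: the snap radius, the sampled-point (rather than per-segment) nearest search, and 2D screen coordinates are all illustrative assumptions.

```python
import numpy as np

def snap_pointer(cursor, core, snap_radius=15.0):
    """If the cursor is within `snap_radius` (in pixels; an assumed threshold)
    of the projected core line, pull it onto the nearest core-line point, so
    the pointer feels attracted to the hollow organ."""
    cursor = np.asarray(cursor, float)
    core = np.asarray(core, float)
    d = np.linalg.norm(core - cursor, axis=1)
    i = int(np.argmin(d))
    if d[i] <= snap_radius:
        return core[i].copy(), True   # snapped onto the organ
    return cursor, False              # too far away: leave the cursor alone
```

Because successive core-line points are adjacent on screen, dragging the snapped cursor naturally follows the running direction of the organ.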
- the line of sight can be changed by a predetermined angle vertically and horizontally with respect to the line segment connecting the viewpoint and the gazing point.
- the viewpoint position that is the observation position and the gaze point are set so that the size of the blood vessel displayed on the monitor 2 is the size desired by the observer regardless of the region.
- the distance D between them is calculated, and the viewpoint 75 is determined based on this distance D.
- viewpoint position information, which is information related to the determined position of the viewpoint 75, and line-of-sight direction information, which is information related to the direction of the line of sight, are calculated.
- the size of the desired blood vessel on the screen is specified as an actual size (for example, "an actual size of 10 mm on the screen") or as a ratio to the width of the screen (for example, "10% of the screen width").
- the enlargement ratio is automatically calculated based on the blood vessel diameter obtained from the organ region information, and the distance D is calculated.
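The calculation of the distance D from the desired on-screen size can be sketched under a simple pinhole camera model. The model and the default field of view are assumptions for illustration; the patent does not specify this formula.

```python
import math

def viewpoint_distance(vessel_diameter, desired_fraction, fov_deg=60.0):
    """Distance D from the gazing point at which a vessel of the given diameter
    occupies `desired_fraction` of the screen width, assuming a pinhole camera
    with horizontal field of view `fov_deg` (both assumptions)."""
    # Width of the visible scene per unit of distance from the viewpoint.
    visible_width_per_unit = 2.0 * math.tan(math.radians(fov_deg) / 2.0)
    # Solve vessel_diameter / (D * visible_width_per_unit) == desired_fraction for D.
    return vessel_diameter / (desired_fraction * visible_width_per_unit)
```

With the vessel diameter taken from the organ region information, D can thus be recomputed at each position so the displayed vessel keeps the observer's desired size regardless of the region.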
- luminal appearance image data based on the viewpoint position information and the line-of-sight direction information calculated in S25 is generated. At this time, along with the lumen appearance image data, all the information based on the viewpoint position information and the line-of-sight direction information calculated in S25 may be output to the monitor 2.
- the enlargement/reduction may be performed interactively via an input device such as the keyboard or mouse constituting the input unit 3 so that the viewer obtains a desired size, and the distance D may be adjusted by this operation. In this case, while holding down a key on the keyboard (for example, the z key) and pressing the mouse button and moving the mouse left and right, the blood vessel appearance image is displayed while being enlarged or reduced in accordance with this operation, and the user can select a desired enlargement ratio. In other words, S24 to S26 may be repeated until a desired lumen appearance image is obtained.
- FIG. 37 shows images from a plurality of viewpoints 75 (observation positions) when the viewpoint 75 is moved, for example, as shown in FIG. 38 in the lock-on state.
- the gazing point is fixed on the core line, which is the organ structure information 37A. In this way, in S26 corresponding to the luminal organ image generation unit 15, a virtual lumen appearance image is generated so that, for example, the gazing point is at the image center.
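Rendering so that the gazing point lands at the image center is, in standard computer graphics terms, a look-at view transform. The construction below is the conventional one and is offered as an illustrative sketch; the patent does not specify its rendering formulation.

```python
import numpy as np

def look_at(viewpoint, gazing_point, up=(0.0, 0.0, 1.0)):
    """4x4 view matrix placing the gazing point on the optical axis (-z of the
    camera), so it projects to the image centre. Standard look-at construction;
    the `up` default is an assumption."""
    eye = np.asarray(viewpoint, float)
    f = np.asarray(gazing_point, float) - eye
    f /= np.linalg.norm(f)                    # forward axis = line of sight
    r = np.cross(f, np.asarray(up, float))
    r /= np.linalg.norm(r)                    # right axis
    u = np.cross(r, f)                        # true up axis
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = r, u, -f   # camera axes as matrix rows
    m[:3, 3] = -m[:3, :3] @ eye               # translate eye to the origin
    return m
```

Recomputing this matrix as the gazing point advances along the core line keeps the observed lumen centred in the lock-on state.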
- organ structure information is calculated after organ region information is extracted, it has been difficult to match the organ structure information with the core line of the luminal organ.
- the information extraction unit 12 extracts the organ structure information simultaneously with the organ region information, the organ structure information can be extracted so as to substantially coincide with the core line of the luminal organ.
- an appropriate image processing method can be selected based on the anatomical information DB 13 of each branch, it is possible to accurately extract the area and structure information of the luminal organ.
- the extracted organ structure information is used as the core line of the luminal organ, and the viewpoint position and the gazing point are set so that the displayed blood vessel has a desired size regardless of the region. The distance between them is calculated, and a virtual lumen appearance image is generated and displayed so that the point of interest is at the center of the image. Therefore, the observer can surely observe the luminal organ at a desired size without a complicated operation of changing the viewpoint position / gaze direction. Therefore, the observer can easily grasp the running state and shape change of the luminal organ before or during the image diagnosis, laparotomy, and endoscopic surgery. Therefore, it can be said that the present invention based on Example 1 facilitates the appearance observation of the luminal organ.
- the VOI setting unit 12g sets a volume region including a part of the luminal organ extending inside the subject, based on the three-dimensional image data of the subject; the organ region information extraction unit 12a repeatedly calculates, based on the 3D image data, organ region information 35, which is the region information of a specific luminal organ within the volume region; the organ structure information extraction unit 12b calculates, for each piece of organ region information 35, organ structure information 37, which is the structure information of the luminal organ within the volume region; and from the organ structure information, the organ structure information extraction unit 12b, serving as the virtual core line generation means, generates a virtual core line along the longitudinal direction of the luminal organ. The luminal organ image generation unit 15 generates a virtual image of the luminal organ along the virtual core line; the viewpoint position / line-of-sight direction setting unit 14 determines, based on at least one of the virtual core line, the organ region information 35, and the organ structure information 37, the observation position from which the virtual image is generated so that the display area of the luminal organ on the monitor 2 has a desired size; the observation position is moved along the longitudinal direction of the luminal organ based on the virtual core line or the lumen structure data; and the virtual image is displayed by the display means.
- a virtual image reflecting the structure information of the luminal organ can be obtained from the three-dimensional image data, and without complicated viewpoint-changing operations from arbitrary positions on the luminal organ, the observation position can be moved automatically along the longitudinal direction of the luminal organ, while the display area of the luminal organ on the display means is calculated so as to have a desired size. Since this automatically adjusts the display magnification of the luminal organ when the appearance image is displayed, it becomes easy for the observer to observe a very long luminal organ along its running direction.
- the anatomical structure information associated with the organ structure information 37 can be used as a pair.
- since the anatomical name of the luminal organ is displayed by the image composition display unit 18 on the virtual image shown on the monitor 2, observation of the organ becomes easy.
- the luminal organ image generation unit 15 changes the image processing method based on the anatomical structure information or the organ structure information 37; an appropriate image processing method can thus be selected for each part, automatically or by the operator, and the lumen region data can be extracted with high accuracy.
- FIG. 39 is a diagram for explaining the overview of a medical image observation support apparatus to which the present invention is applied.
- the computer 1 as the medical image observation support apparatus is provided with an endoscope position detection unit 106 as endoscope position detection means and a first actual image observation position estimation unit 112 as first actual image observation position estimation means.
- a position sensor 86 is attached to the distal end portion of the endoscope 84.
- the position sensor 86 for example, a magnetic position sensor is used.
- the endoscope 84 is connected to an endoscope apparatus 88, and the endoscope apparatus 88 performs processing such as outputting an image from a small video camera provided at the distal end of the endoscope 84.
- the magnetic field generating coil 90 generates a predetermined magnetic field based on an instruction from the position detection device 82 and causes the position sensor 86 to detect information related to the magnetic field.
- the position detection device 82 collects information on the magnetic field generated by the magnetic field generation coil 90 and information on the magnetic field detected by the position sensor 86 and sends the collected information to the endoscope position detection unit 106.
- the endoscope position detection unit 106 detects the relative positional relationship between the magnetic field generating coil 90 and the position sensor 86 based on the transmitted information. At this time, the endoscope position detection unit 106 can detect three degrees of freedom in the translation direction and three degrees of freedom in the rotation direction with respect to the position of the position sensor 86.
- the position sensor 86 is a microBIRD manufactured by Ascension Technology Corporation. Further, since the position sensor 86 is very small and is embedded in the distal end portion of the endoscope 84, the position detected by the position sensor 86 can be regarded as the position of the distal end portion of the endoscope as it is.
- the computer 1 as the medical image observation support device compares the tip position of the endoscope 84 detected by the position detection device 82 with the organ structure information 37 corresponding to the lumen structure data, and thereby estimates the position of the distal end portion of the endoscope 84 within the bronchus 30 as the luminal organ.
- FIG. 40 is a flowchart for explaining the main part of the control operation of the computer 1 in this embodiment, that is, the operation for estimating the actual image observation position.
- in S31 (hereinafter, "step" is omitted), the organ region information 35 and the organ structure information 37, which were extracted by the organ region information extraction unit 12a and the organ structure information extraction unit 12b and stored in the extracted information storage unit 12f, are read out.
- information about the position pj of the distal end portion of the endoscope 84, which is detected by the position detection device 82 and stored in, for example, the storage device 4, is acquired, including past detections.
- FIG. 42 shows a relationship between the virtual core line c and the position pj of the distal end portion of the endoscope 84 in the organ structure information 37 read out at this time.
- S33 to S35 correspond to the first actual image observation position estimation unit 112. Of these, in S33, the subroutine for calculating the transformation matrix T shown in FIG. 41 is executed.
- the transformation matrix T is a matrix that transforms the coordinate system of the position detected by the position detection device 82 into the coordinate system representing the three-dimensional image.
- the initial value of the transformation matrix T is determined.
- the coordinates of the position pj of the tip are transformed by the transformation matrix T, and the transformed position qj is calculated.
- a conversion error e is calculated.
- a distance dj between the converted position qj and the virtual core line c is calculated.
- specifically, the distance dj is the distance from the transformed position qj to the intersection of the virtual core line c and the perpendicular drawn from the tip position qj to the virtual core line c.
- FIG. 43 shows the situation at this time. Subsequently, a conversion error e is calculated based on the calculated distances dj.
- wj is a weight for the distance dj; it is set, for example, so that positions pj of the tip detected further in the past have a smaller effect on the error e.
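The equation for the conversion error e is not reproduced in this text. A plausible form, consistent with the description of wj as a per-detection weight on the distance dj (this reconstruction is an assumption, not the patent's stated formula), is the weighted sum

```latex
e = \sum_{j} w_j \, d_j^{2}
```

where dj is the distance from the transformed position qj to the virtual core line c, and wj is chosen so that older detections contribute less to e.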
- the routine is terminated with the value of T when S43 is executed as the transformation matrix T. On the other hand, if this determination is negative, S46 is executed.
- the value of T in S43 is updated by a predetermined amount, and S42 and subsequent steps are executed again.
- updating by a predetermined amount means changing the value of T so that the conversion error e decreases. That is, S42 to S44 and S46 are executed repeatedly.
- a transformation matrix T that minimizes the transformation error e is determined.
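The iterative search for a transform minimising the weighted distances to the virtual core line resembles ICP-style rigid registration. The sketch below is one way to realise S33 of FIG. 41 under assumptions the patent does not state (a rigid transform, nearest-point correspondences, and a weighted Kabsch solve); it is illustrative, not the patent's actual routine.

```python
import numpy as np

def nearest_on_polyline(core, p):
    """Closest point to p on the polyline `core` (N x 3 array of core-line points)."""
    best, best_d = None, np.inf
    for a, b in zip(core[:-1], core[1:]):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        q = a + t * ab
        d = np.linalg.norm(p - q)
        if d < best_d:
            best, best_d = q, d
    return best

def estimate_transform(points, core, weights=None, iters=30):
    """ICP-style estimate of a rigid transform (R, t) taking measured tip
    positions pj onto the virtual core line c, reducing the weighted
    point-to-core distances dj at each iteration."""
    points = np.asarray(points, float)
    core = np.asarray(core, float)
    w = np.ones(len(points)) if weights is None else np.asarray(weights, float)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        q = points @ R.T + t                               # converted positions qj
        targets = np.array([nearest_on_polyline(core, p) for p in q])
        # Weighted Kabsch alignment of the original points onto the core targets.
        mu_p = (w[:, None] * points).sum(0) / w.sum()
        mu_q = (w[:, None] * targets).sum(0) / w.sum()
        H = ((points - mu_p) * w[:, None]).T @ (targets - mu_q)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                                 # proper rotation only
        t = mu_q - R @ mu_p
    return R, t
```

Passing decaying `weights` (smaller for older detections) mirrors the role of wj in the error e.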
- the sufficient number of times is determined based on, for example, whether S33 to S35 corresponding to the first actual image observation position estimation unit 112 have been executed for all the endoscope positions detected by the endoscope position detection unit 106.
- the distal end position pj of the endoscope 84 is updated. Specifically, for example, newly detected position information is fetched, while the oldest detected position information is deleted by the number corresponding to the fetched position information.
- the position pj of the distal end portion of the endoscope 84 detected in S32 corresponding to the endoscope position detection unit 106 is transformed, by the transformation matrix T calculated in S33 to S35 corresponding to the first actual image observation position estimation unit 112, into a point on the coordinates of the three-dimensional image, that is, onto the structure of the hollow organ in the organ structure information 37. This is shown in Figure 44.
- the position pj of the distal end of the endoscope 84 can be converted into a point on the coordinates of the 3D image using the transformation matrix T, so it is possible to cope with the case where the position of the luminal organ in the subject differs between when the three-dimensional image was captured and when the endoscope is inserted.
- in the medical image observation support device 1 according to the present embodiment, the position detection unit 106 detects the relative position pj of the position sensor 86 provided at the distal end portion of the endoscope 84 actually inserted into the subject, and the first actual image observation position estimation unit 112 estimates, from the detected position, the observation position 75 as the actual image observation position, that is, the position of the distal end of the endoscope 84 in the luminal organ. Therefore, the actual image observation position corresponding to the position of the distal end portion of the endoscope can be grasped more accurately.
- the first actual image observation position estimation unit 112 estimates the actual image observation position based on the relative position of the distal end portion of the endoscope detected by the position detection unit 106 and the lumen structure data, so the position of the distal end portion of the endoscope corresponding to the actual image observation position can be grasped more accurately.
- FIG. 45 shows an outline of the functions of the computer 1 according to another embodiment of the present invention, that is, a medical image observation support device having a virtual image storage means and a second real image observation position estimation means.
- a virtual image storage unit 110 as the virtual image storage means includes a branching part feature information generation unit 92, an association unit 94, a virtual image learning unit 102, and the like, which will be described later.
- the second actual image observation position estimation unit 114 includes a feature extraction unit 96, an image collation unit 98, a position determination unit 100, and the like, which will be described later. That is, in FIG. 45, the portion surrounded by the broken line corresponds to the virtual image storage unit 110, and the portion surrounded by the alternate long and short dash line corresponds to the second actual image observation position estimation unit 114.
- the branch part feature information generation unit 92 as the branch part feature information generation means detects, based on the organ structure information 37 stored in the information extraction unit 12 (extraction information storage unit 12f), in particular the information on the virtual core line c, the portions where the luminal organ branches (referred to as "branch parts"). Then, based on the organ structure information 37 on each detected branch part, it generates information ("branch part feature information") on the features that appear in a virtual image including that branch part.
- the features appearing on the image are, for example, the number of hole images in which luminal organs continuing in the depth direction of the screen appear as holes, the positions of the hole images, and the brightness of the hole images.
- the brightness of a hole image is a feature based on the length of the luminal organ: for example, when the luminal organ continues straight for a long distance, the brightness is low, that is, the hole image appears dark. An example of this is shown in FIG. 49.
- in Fig. 49, for four different branch parts, the virtual image and the branch part feature information extracted from it are shown; the figure on the left represents the virtual image of the branch part, and the figure on the right represents its branch part feature information.
- the branch part feature information includes the number of holes, the positions of the holes in the image, and the brightness of the holes. Therefore, comparing case1 and case2 in Fig. 49, although the number of holes and their positions in the image are similar, the brightness of the holes is significantly different, so the two are distinguished.
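The comparison of branch part feature information described above can be sketched as follows. This is an illustrative sketch only: the data shape (a list of holes with image position and normalized brightness) and the tolerance values are assumptions, not values from the specification.

```python
def match_branch_features(real, virtual, pos_tol=15.0, bright_tol=0.2):
    """Return True if two sets of branch part feature information match.

    Each feature set is a list of holes, one per branch pipe visible as a
    dark hole: {'pos': (x, y) in image coordinates, 'brightness': 0..1}.
    """
    # Feature 1: the number of hole images must agree.
    if len(real) != len(virtual):
        return False
    # Features 2 and 3: every hole must have a counterpart whose image
    # position and brightness are both close (cf. case1 vs case2 in Fig. 49:
    # similar hole count and positions but clearly different brightness -> no match).
    unmatched = list(virtual)
    for hole in real:
        for cand in unmatched:
            dx = hole['pos'][0] - cand['pos'][0]
            dy = hole['pos'][1] - cand['pos'][1]
            if (dx * dx + dy * dy) ** 0.5 <= pos_tol and \
               abs(hole['brightness'] - cand['brightness']) <= bright_tol:
                unmatched.remove(cand)
                break
        else:
            return False
    return True
```

Two branch parts with similar hole counts and positions but very different hole brightness are thus kept distinct, as in the Fig. 49 example.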
- the associating unit 94 as the associating means associates, for the same branch part, the virtual image generated by the luminal organ image generation unit 15 with the branch part feature information generated by the branch part feature information generation unit 92. Further, the associated virtual image and branch part feature information are stored in the storage unit 4 as the storage means, and a virtual image database 104 with feature information is generated.
- the feature extraction unit 96 as the feature extraction means extracts, from the image captured by the video camera attached to the distal end of the endoscope 84 and processed by the endoscope device 88, features corresponding to the branch part feature information.
- the image matching unit 98 serving as the image matching means compares and collates the branch part feature information extracted from the real endoscopic image by the feature extraction unit 96 with the branch part feature information stored in the virtual image database 104 with feature information. It then selects the virtual image associated with the branch part feature information that matches the branch part feature information in the actual endoscopic image.
- the position determining unit 100 serving as the position determining means determines that the observation position (viewpoint) 75 of the virtual image selected by the image matching unit 98 is the position of the distal end portion of the endoscope at the time the real endoscopic image was captured.
- the virtual image learning unit 102 as the virtual image learning means compares the virtual image selected by the image matching unit 98 with the real endoscopic image, and corrects the virtual image so that the branch part feature information in the virtual image approaches the branch part feature information of the actual endoscopic image.
- FIGS. 46 to 48 are flowcharts showing an outline of the operation of the computer 1 as the medical image observation support device in the present embodiment. The flowchart of FIG. 46 corresponds to the operation of the virtual image storage unit 110, the flowchart of FIG. 47 corresponds to the operation of the second actual image observation position estimation unit 114, and the flowchart of FIG. 48 corresponds to the learning routine executed within the flowchart of FIG. 47.
- S51 and S52 correspond to the branching feature information generating unit 92.
- the organ structure information 37 stored in the extraction information storage unit 12f, in particular the information related to the virtual core line c, is read, and the branch parts where the virtual core line branches are identified.
- branch part feature information that would appear on a virtual image including the branch part is generated.
- the branch part feature information is at least one of the number of hole images in which the luminal organ continuing toward the back of the screen appears as a hole, the positions of the hole images, and the brightness of the hole images.
- a virtual image including the branching unit is generated.
- alternatively, a virtual image including the branch part may be selected from among the stored virtual images.
- in S55, it is determined whether or not the processing of S52 to S54 has been executed for all the branch parts on the virtual core line c. When it has been executed for all the branch parts, this determination is affirmative and this flowchart ends. On the other hand, if it has not been executed for all of them, the branch part of interest is changed in S56, that is, attention is shifted to a branch part, among those identified in S51, for which the processing of S52 to S54 has not yet been executed, and the processing of S52 to S54 is repeated for it.
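The flow of S51 to S56 — generate branch part feature information and an associated virtual image for every branch part, and accumulate the pairs — can be sketched as below. The data shapes and the two callbacks are hypothetical stand-ins for the branch part feature information generation unit 92, the luminal organ image generation unit 15, and the association unit 94.

```python
def build_feature_image_db(core_line_branches, gen_features, render_virtual_image):
    """Build the virtual image database 104 with feature information.

    core_line_branches: branch parts identified on the virtual core line c (S51).
    gen_features: callable(branch) -> branch part feature information (S52).
    render_virtual_image: callable(branch) -> virtual image containing the branch (S53).
    """
    db = []
    for branch in core_line_branches:          # S55/S56: repeat for every branch part
        features = gen_features(branch)        # S52: feature info for this branch
        image = render_virtual_image(branch)   # S53: virtual image of this branch
        db.append({'branch': branch,           # S54: store the associated pair
                   'features': features,
                   'virtual_image': image})
    return db
```

At matching time (FIG. 47), the features extracted from the real endoscopic image are compared against the `features` entries, and the associated `virtual_image` of the matching entry yields the estimated observation position.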
- the flowchart in FIG. 47 corresponds to the second actual image observation position estimation unit 114.
- S61 an actual endoscopic image captured by the video camera provided at the distal end portion of the endoscope 84 and processed by the endoscope device 88 is taken into the computer 1 as a medical image observation support device.
- in S62, features corresponding to the branch part feature information are extracted from the real endoscopic image captured in S61. Then, in subsequent S63, it is determined whether or not the features extracted in S62 correspond to a branching of the luminal organ. That is, if image features corresponding to a branching of the luminal organ appear in the actual endoscopic image, the determination in this step is affirmative, and S64 and subsequent steps are executed. On the other hand, if no image feature corresponding to a branching of the luminal organ appears in the actual endoscopic image — for example, when a feature exists on the image but does not correspond to a branching of the hollow organ, or when no feature detectable in S62 exists on the image — this flowchart is terminated.
- the position of the distal end portion of the endoscope 84 at the time of capturing the real endoscopic image captured in S61 is determined to be the observation position (viewpoint) 75 of the virtual image selected in S64.
- the virtual image corrected in S71 is stored in the virtual image database 104 with feature information in place of the virtual image so far.
- according to the present embodiment, the second real image observation position estimation unit 114 extracts the features corresponding to the branch part feature information that appear in the real endoscopic image, collates those features with the branch part feature information generated from the luminal structure data and stored in the virtual image database 104 with feature information, selects the virtual image associated with the branch part feature information that matches as a result of the collation, and estimates the observation position 75 of that virtual image as the actual image observation position. It is therefore possible to estimate the distal end position of the endoscope without detecting the actual distal end position of the endoscope, and since the actual endoscopic image and the virtual image are collated based on features corresponding to the lumen structure data appearing on the image, highly accurate collation can be realized while reducing the time required for collation.
- according to the present embodiment, the second real image observation position estimation unit 114 extracts features corresponding to the lumen structure data appearing in the real endoscopic image and compares them with the organ structure information 37 stored in the virtual image storage unit 110, and the observation position 75 of the virtual image corresponding to the organ structure information 37 that matched as a result of the comparison is estimated as the actual image observation position. It is therefore possible to estimate the tip position of the endoscope 84 without detecting it, and since the actual endoscopic image and the virtual image are collated based on features corresponding to the organ structure information 37 appearing on the image, highly accurate collation can be achieved while reducing the time required for collation.
- according to the present embodiment, in the second actual image observation position estimation unit 114, the virtual image learning unit 102 learns and corrects the content stored in the virtual image storage unit 110 based on the result of the collation, so more accurate collation can be performed each time the collation is repeated.
- FIG. 50 is a functional block diagram showing an outline of the functions of the computer 1 which is another embodiment of the present invention, that is, the medical image observation support device having the image synthesizing means and the navigation means.
- an image composition display unit 18 serving as the image composition means displays, on the monitor 2 as the display means, the real endoscopic image captured by the video camera attached to the distal end portion of the endoscope 84 and obtained through the endoscope device 88, together with the virtual image generated by the luminal organ image generation unit 15 as the virtual image generation means, so that the two can be compared.
- the luminal organ image generation unit 15 generates a virtual image using the actual image observation position estimated by the first actual image observation position estimation unit 112 or the second actual image observation position estimation unit 114 as the observation position.
- as a result, a virtual image from the same observation position as the actual endoscopic image, that is, a virtual image whose displayed content, display position, scale, and the like are substantially equal to those of the real image, is obtained.
- further, the image composition display unit 18 superimposes, for example as characters, the anatomical name associated by the anatomical name information generation unit 16 with the site of the luminal organ displayed in the virtual image or the real endoscopic image onto that image. This is shown in FIG.
- the navigation unit 116 as the navigation means includes a route generation unit 118 as the route generation means, an insertion guide unit 120 as the insertion guide means, and a route name display unit 122 as the route name display means.
- in the route generation unit 118, for example, when the operator specifies the target site set in the 3D image data, that is, specifies the site which the endoscope is to reach, the path of the luminal organ passed through to reach the target site is searched for. This route search is performed, for example, by accumulating, for each branch part, information on which branch the endoscope should be advanced into.
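The route search described above can be sketched as a breadth-first search over the branching structure of the luminal organ. The adjacency-map encoding of the organ structure information 37 and the part names below are hypothetical, for illustration only.

```python
from collections import deque

def search_route(adjacency, start, target):
    """Search for the path of the luminal organ from the start position to the
    target site. `adjacency` maps each part to the branch pipes (child parts)
    opening from it -- a hypothetical encoding of the organ structure
    information 37. Returns the list of parts passed through, or None if the
    target is unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path    # accumulated per-branch guidance: which pipe to take at each branch
        for nxt in adjacency.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None
```

For a bronchus-like tree, the returned list directly yields, at each branch part, the branch pipe into which the endoscope should be advanced next.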
- the insertion guide unit 120 displays on the image composition display unit 18, when the endoscope 84 reaches a point immediately before a branch part on the path of the hollow organ generated by the route generation unit 118, a display indicating which one of the plurality of branch pipes opening at the branch part the endoscope 84 should be inserted into. Specifically, first, it is detected that the endoscope 84 is positioned immediately before the branch part, either when the branch part feature information appears in the actual endoscopic image, or as a result of collating the position of the distal end portion of the endoscope 84 estimated as the real image observation position by the first real image observation position estimation unit 112 or the second real image observation position estimation unit 114 with the organ structure information.
- then, based on the generated route, it is determined into which branch pipe opening at the branch part the endoscope should be advanced, and a display indicating the branch pipe to be advanced into is superimposed on the real endoscopic image and/or the virtual image on the image composition display unit 18.
- FIG. 54 shows an example of this.
- FIG. 54 is an image showing a branch portion where branch pipes 124a, 124b, and 124c exist. This image may be the actual endoscopic image or the virtual image.
- when the insertion guide unit 120 determines, based on the route generated by the route generation unit 118, that the branch pipe into which the endoscope 84 is to be inserted is 124c, the insertion guide unit 120 causes the image composition display unit 18 to display the branch pipe 124c as the branch pipe into which the endoscope should be advanced.
- This display may be, for example, an arrow 126a in FIG. 54, a character display 126b, or a combination thereof.
- the route name display unit 122 reads from the extraction information storage unit 12f the anatomical names stored in association with the parts of the luminal organ constituting the route generated by the route generation unit 118. Thereby, the route generated by the route generation unit 118 can be grasped by anatomical names.
- the route name display unit 122 displays information on the route expressed by the anatomical names on the monitor 2 via the image composition display unit 18, according to the operation of the operator.
- FIG. 55 shows an example of the display on the monitor 2 at this time.
- a route name display unit 128 provided on the left half of the screen and an image display unit 130 provided on the right half constitute the screen.
- on the route name display unit 128, the information on the route expressed by anatomical names created by the route name display unit 122, that is, the anatomical names of the parts of the luminal organ constituting the route, are displayed listed in order along the route from the insertion site of the endoscope, or from the current position, to the target site.
- on the image display unit 130, a virtual image, generated by the luminal organ image generation unit 15, of a branch part existing on the route is displayed in reduced size.
- the virtual image of the branching unit may be displayed as shown in FIG. 55, or the real endoscopic image and the virtual image may be displayed in a comparable manner as shown in FIG.
- the insertion guide display by the insertion guide unit 120 and the real endoscope image or the virtual image may be superimposed and displayed, or the image display unit 130 may not be provided.
- FIG. 51 is a flowchart showing an outline of the operation of the computer 1 as the medical image observation support device in the present embodiment.
- 3D image data is acquired from the CT image data storage unit 11 (see FIG. 1).
- in S82, a target site, which is the site to which the endoscope 84 is to be inserted and which it is to reach, is set in the acquired three-dimensional image data. This target site is input by the operator via the input unit 3, for example.
- the organ region information 35, the organ structure information 37, and the like stored in the information extraction unit 12 are read. Further, in S84, the anatomical information associated with each part of the hollow organ whose organ region information 35 or organ structure information 37 was read in S83 is also read from the information extraction unit 12 (extraction information storage unit 12f).
- S85 and S86 correspond to the path generation unit 118.
- a start position for starting navigation is set.
- the start position may be set by the endoscope position detection unit 106, or may be estimated by the first actual image observation position estimation unit 112 or the second actual image observation position estimation unit 114 from the actual endoscopic image of the endoscope 84. Before the endoscope is inserted into the subject, the insertion position may be used as the start position.
- the route in the hollow organ from the start position set in S85 to the target site set in S82 is determined.
- when the structure of the luminal organ only branches, this route is uniquely determined by setting the start position and the target site.
- however, a plurality of routes may remain as candidates as a result of the search even when the start position and the target site are set. In such a case, one route may be determined from among the plurality of candidates based on, for example, which route is the shortest or which route has a structure into which the endoscope 84 is easy to insert.
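Choosing one route from a plurality of candidates can be sketched as below, here using only the shortest-length criterion. The per-part length callback is a hypothetical interface; the ease-of-insertion criterion also mentioned in the text is omitted for brevity.

```python
def choose_route(candidates, segment_length):
    """Pick one route from a plurality of candidate routes, here simply the
    one with the smallest total length. segment_length: callable(part) ->
    length of that part of the luminal organ (hypothetical interface).
    The other criterion in the text, ease of endoscope insertion, is omitted."""
    return min(candidates, key=lambda route: sum(segment_length(p) for p in route))
```

In practice the key function could combine length with a penalty for sharply curved or narrow branch pipes to reflect ease of insertion.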
- the route determined in S86 is represented by the anatomical name read out in S83.
- the anatomical name associated with each part of the hollow organ constituting the path determined in S86 is selected from the anatomical name read out in S83.
- the route expressed by the anatomical names, that is, the anatomical names associated with the parts of the hollow organ constituting the route, listed in order along the route from the start position to the target site, is displayed on the monitor 2.
- an actual endoscopic image captured by a video camera attached to the distal end portion of the endoscope 84 is captured via the endoscopic device 88.
- the position of the distal end of the endoscope, to which the video camera is attached, at the time the real endoscopic image was captured is estimated.
- an image made up of symbols and characters (e.g., arrow 126a and characters 126b in FIG. 54) is created.
- the generated image data are appropriately selected according to the operation of the operator, and the selected image data are synthesized and displayed on the monitor 2.
- for example, the real endoscopic image captured in S88 and the virtual image generated in S90 are displayed so as to be comparable, and the character image representing the anatomical name created in S91 can be superimposed on both the real endoscopic image and the virtual image as appropriate.
- alternatively, the three-dimensional image data captured in S81 may be displayed, or, as shown in FIG. 54, an image composed of symbols and the like created in S91 for indicating the branch pipe into which the endoscope should be inserted at the branch part, and characters representing the anatomical name of the observation site created in S91 (for example, 125 in FIG. 54), can be superimposed on the real endoscopic image.
- alternatively, the route expressed by enumerating anatomical names can be displayed as character information, and some of the image data can be displayed together.
- such a plurality of display forms can be appropriately switched according to the operation of the operator.
- according to the present embodiment, the image composition display unit 18 displays on the monitor 2 the real endoscopic image and the virtual image generated by the virtual image generation means so as to correspond to the real endoscopic image, so that the two can be compared.
- according to the above-described embodiment, the luminal organ image generation unit 15 generates the virtual image with the actual image observation position estimated by the first actual image observation position estimation unit 112 as the observation position 75, so a virtual image from an observation position estimated to be the same as the real image observation position of the real endoscopic image is obtained. Likewise, when the luminal organ image generation unit 15 generates the virtual image with the actual image observation position estimated by the second actual image observation position estimation unit 114 as the observation position 75, a virtual image from an observation position estimated to be the same as the real image observation position of the real endoscopic image is obtained.
- according to the present embodiment, the image composition display unit 18 displays, based on the anatomical name association by the information extraction unit 12, the anatomical name of the luminal organ on the actual endoscopic image displayed on the monitor 2, so it is possible to grasp which part of the luminal organ is displayed in the actual endoscopic image.
- according to the present embodiment, the navigation unit 116 displays a display indicating which one of the plurality of branch pipes opening at the branch part of the luminal organ shown in the actual endoscopic image displayed on the monitor 2 the endoscope 84 should be inserted into, so the operator can easily recognize, at each branch part of the hollow organ, the branch pipe into which the endoscope 84 should be inserted in order to reach the target site in the luminal organ.
- according to the present embodiment, the navigation unit 116 automatically generates the route and lists the anatomical names associated with the parts of the luminal organ constituting the route in order along the path from the insertion site to the target site, so the path from the insertion of the endoscope to the target site in the hollow organ can be recognized in advance by anatomical names.
- the computer 1 as the medical image observation support device has the same configuration as the configuration diagram of FIG.
- the information extraction unit 12 in FIG. 1 has a configuration as shown in FIG. 56 and is different from the information extraction unit 12 in FIG. 2 in that it has an extraluminal tissue extraction unit 12h.
- hereinafter, the functions that differ from those in FIGS. 1 and 2 will be described.
- the extraluminal tissue extraction unit 12h as the extraluminal tissue extraction means extracts the image of the extraluminal tissue existing outside the luminal organ by analyzing the three-dimensional image, and generates extraluminal tissue structure information 132, which is information on its size and position in the three-dimensional image.
- the extraction information association unit 12e associates the organ region information 35 and the organ structure information 37 with the anatomical information and stores them in the extraction information storage unit 12f; in addition, it associates the extraluminal tissue structure information 132 with an anatomical number, which will be described later, and stores it in the extraction information storage unit 12f.
- the anatomical information DB 13 also stores, for the extraluminal tissue, anatomical model information as anatomical structure information including an anatomical number, as shown in (1) to (6) below.
- the information extraction unit 12 determines the anatomical number of the extraluminal tissue extracted by the extraluminal tissue extraction unit 12h based on the information stored in the anatomical information DB 13, and stores the determined anatomical number in the extraction information storage unit 12f in association with the extraluminal tissue.
- the luminal organ image generation unit 15 as the virtual image generation means generates the virtual image of the luminal organ and, in addition, performs image processing on the CT image data stored in the CT image data storage unit 11 based on the structure information of the extraluminal tissue extracted by the extraluminal tissue extraction unit 12h, and generates, in the same image as the virtual image of the luminal organ, an image of the extraluminal tissue while maintaining the positional and size relationship between the luminal organ and the extraluminal tissue.
- the anatomical name information generation unit 16 generates character image data based on the name assignment information from the information extraction unit 12 and, in addition, generates character image data of the anatomical number associated with the extraluminal tissue.
- FIG. 57 is a flowchart showing an outline of the operation of the extraluminal tissue extraction unit 12h.
- in S101, as preprocessing, smoothing using a median filter is executed and, as masking processing, the background region outside the body surface is deleted, and regions in which the extraluminal tissue is determined not to exist inside the body are deleted based on a numerical value representing the structure of the tissue corresponding to each pixel in the three-dimensional image (for example, a CT value representing the degree of X-ray absorption of the tissue).
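The CT-value masking described above can be sketched minimally as follows. This is an illustrative Python sketch: the volume representation (nested lists of Hounsfield-like values) and the soft-tissue range of [-100, 200] HU are assumptions for illustration, not values from the specification.

```python
def mask_nonexistent_regions(volume, ct_min=-100, ct_max=200):
    """Masking sketch for S101: keep only voxels whose CT value (degree of
    X-ray absorption) lies in a range where the extraluminal tissue could
    exist, and set everything else (e.g., the air background outside the body
    surface, or bone) to None. `volume` is a nested list [plane][row][voxel]
    of Hounsfield-like values; the [-100, 200] window is an illustrative
    soft-tissue range, not taken from the specification."""
    return [[[v if ct_min <= v <= ct_max else None for v in row]
             for row in plane] for plane in volume]
```

A real implementation would operate on the 3D CT array after median smoothing and combine this mask with a body-surface mask.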
- in S102, regions corresponding to a block-like (massive) structure are extracted from the 3D image preprocessed in S101. In the present embodiment, lymph nodes are to be extracted from the above-mentioned three-dimensional CT image, and since lymph nodes appear as such massive structures, lymph node candidate regions can be extracted in this way.
- the region extracted as a candidate may then be modified. Specifically, for example, when the contrast between the extracted region and its outside is very low, a larger region than necessary may have been extracted. In such a case, processing is performed to delete the over-extracted part based on the possible size of the lymph node, thereby reducing the region.
- next, regions overlapping with the luminal organ are deleted from the regions that became candidates for the extraluminal tissue in S102. In the present embodiment, the blood vessel region, obtained from the organ region information of the blood vessel that is a luminal organ stored in the information extraction unit 12, is compared with the lymph node candidate regions, and where the two regions overlap, the overlapping lymph node candidate region is deleted. At this time, the blood vessel region may be extracted from the three-dimensional image.
- in S104, deletion of the extracted candidate regions is executed based on the size of the extraluminal tissue. In the present embodiment, candidate regions smaller than a predetermined threshold are deleted from the lymph node candidate regions: a candidate region smaller than the region on the image corresponding to the size of the lymph nodes to be extracted (for example, a radius of 2.5 mm or more) cannot be a candidate, so regions on the image smaller than a threshold (for example, a region volume) determined based on the size of the lymph nodes to be extracted are deleted.
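This size-based deletion can be sketched as follows. The volume threshold is derived here from the text's example of a 2.5 mm minimum radius by treating the lymph node as a sphere; that derivation and the candidate data shape are illustrative assumptions.

```python
import math

def delete_small_candidates(candidates, min_radius_mm=2.5):
    """S104 sketch: delete candidate regions smaller than the size of the
    lymph nodes to be extracted. Each candidate carries its measured volume
    in mm^3; a region smaller than a sphere of the minimum radius (2.5 mm in
    the text's example) cannot be a lymph node candidate and is deleted."""
    min_volume = (4.0 / 3.0) * math.pi * min_radius_mm ** 3
    return [c for c in candidates if c['volume_mm3'] >= min_volume]
```

With a 2.5 mm radius the cutoff volume is about 65 mm³, so tiny speckle regions surviving the earlier steps are removed cheaply before the shape check.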
- next, deletion of the extracted candidate regions is executed based on the shape of the extraluminal tissue. As a result, over-extracted regions that could not be deleted in the processing up to S104 are deleted. In the present embodiment, the lymph nodes to be extracted have an ellipsoidal shape, so candidate regions having shapes that are clearly not ellipsoidal are deleted.
- the shape is determined based on the sphericity DOS expressed by the following equation: DOS = S³ / (36πV²), where S is the surface area of the region and V is the volume of the region.
- This sphericity DOS is 1 when the area is spherical, and increases as it becomes non-spherical. Therefore, the sphericity DOS is calculated for each of the candidate areas, and when the value exceeds a predetermined threshold value s (for example, 6), the candidate area is deleted.
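This shape-based screening can be sketched as follows. The formula DOS = S³/(36πV²) is the classical sphericity measure that equals 1 for a sphere and grows as the region departs from a sphere, consistent with the description; the threshold 6 is the text's own example value.

```python
import math

def sphericity_dos(surface_area, volume):
    """DOS = S^3 / (36*pi*V^2): exactly 1 for a sphere, and larger as the
    region becomes non-spherical (classical sphericity measure, used here as
    a reconstruction consistent with the description)."""
    return surface_area ** 3 / (36.0 * math.pi * volume ** 2)

def delete_nonellipsoidal(candidates, threshold=6.0):
    """Sketch of the shape-based deletion: compute DOS for each candidate
    region and delete those exceeding the predetermined threshold s
    (6 in the text's example)."""
    return [c for c in candidates
            if sphericity_dos(c['surface_area'], c['volume']) <= threshold]
```

For a unit sphere (S = 4π, V = 4π/3) the DOS evaluates to exactly 1, while a thin sheet-like over-extraction with large surface area and small volume yields a DOS far above the threshold and is deleted.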
- the candidate regions remaining after the above processing are set as regions indicating lymph nodes, and information on the size of each region and its position in the 3D image is stored as the extraluminal tissue structure information.
- FIG. 58 is an example of a virtual image in which a lymph node that is an extraluminal tissue extracted by the extraluminal tissue extraction means 12h and a blood vessel that is the luminal organ are displayed.
- the luminal organ image generation unit 15 generates a virtual image in which the lymph node and the blood vessel are displayed on the same scale while maintaining their positional relationship in the three-dimensional image, and the anatomical name generation unit 16 superimposes the anatomical name of the blood vessel and the anatomical number of the lymph node on the virtual image.
- according to the present embodiment, the extraluminal tissue extraction unit 12h extracts, based on the three-dimensional image data, the extraluminal tissue structure information, which is information on the structure of the extraluminal tissue existing outside the luminal organ in the subject, and the luminal organ image generation unit 15 displays the virtual image of the luminal organ and the virtual image of the extraluminal tissue in the same image on the same scale while maintaining their actual positional relationship. Therefore, based on the three-dimensional image data, the position and size of the extraluminal tissue existing outside the luminal organ can be grasped in the virtual image.
- the anatomical information database 13 stores anatomical structure information including at least anatomical name information for the luminal organ and at least an anatomical number for the extraluminal tissue, and the information extraction unit 12 associates the anatomical name information stored in the anatomical information database 13 with the luminal structure data for the luminal organ, and associates the anatomical number stored in the anatomical information database 13 with the extraluminal tissue structure information for the extraluminal tissue. The information extraction unit 12 can therefore associate an anatomical name with the luminal organ and an anatomical number with the extraluminal tissue.
- the image composition display unit 18 displays, based on the association of the anatomical name or the anatomical number by the information extraction unit 12 (extraction information association unit 12e), the anatomical name of the luminal organ and the anatomical number of the extraluminal tissue on the virtual image displayed on the monitor 2, so observation of the luminal organ is facilitated.
- the luminal organ image generation unit 15 operates based on at least one of the anatomical structure information, the organ structure information 37, and the extraluminal tissue structure information. An appropriate image processing method can therefore be changed, automatically or by the operator, for each part of the luminal organ or for the extraluminal tissue, and organ region data or extraluminal tissue structure information can be extracted accordingly.
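A per-part lookup of image processing parameters, as described above, might be sketched like this. All names and parameter values are hypothetical illustrations, not values from the patent.

```python
# Hypothetical per-site image-processing settings keyed by anatomical name.
display_params = {
    "trachea":             {"opacity": 0.2, "color": (0.8, 0.8, 0.9)},
    "right main bronchus": {"opacity": 0.5, "color": (0.9, 0.7, 0.7)},
    "lymph node":          {"opacity": 1.0, "color": (0.3, 0.9, 0.3)},
}

def params_for(site_name, overrides=None):
    """Look up rendering parameters for one part of the luminal organ or
    an extraluminal tissue; the operator may override them per part."""
    params = dict(display_params.get(site_name,
                                     {"opacity": 0.3, "color": (0.7, 0.7, 0.7)}))
    if overrides:
        params.update(overrides)   # operator-specified change for this part
    return params
```

A default is returned for parts without an entry, and operator overrides are applied on a copy so the shared table is left untouched.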
- in the navigation unit 116, when the extraluminal tissue is set as a target site, a site in the luminal organ near the extraluminal tissue into which the endoscope can be inserted is set as the actual target site. The operator therefore only has to set the target extraluminal tissue as the target site to receive support for inserting the endoscope up to the insertable site in the luminal organ close to that extraluminal tissue.
- since the extraluminal tissue is a lymph node and the luminal organ is a blood vessel, the extraluminal tissue structure information of the lymph node can be extracted from the three-dimensional image.
- the position sensor 86 and the video camera provided at the distal end portion of the endoscope 84 are very small.
- the first actual image observation position estimation means uses the estimated position and orientation of the position sensor 86 directly as the observation position of the endoscope 84, but this may be corrected based on their actual positional relationship.
- the CT image data capturing unit 10 is not limited to capturing the three-dimensional image data via, for example, an MO device or a DVD device; it may capture 3D image data directly from a CT imaging device, for example by connecting to the CT imaging device via a network.
- the transformation matrix is calculated by focusing only on the coordinate system of the position detector 82 and the coordinate system of the three-dimensional image, but it may be calculated more precisely by defining the coordinate system of the position detector 82, the camera coordinate system, the coordinate system of the position sensor 86, the coordinate system in the 3D image, the actual coordinate system in which the subject exists, and so on.
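Composing per-coordinate-system transforms into a single matrix, as suggested above, can be sketched with homogeneous 4x4 matrices. The chain of coordinate systems and all rotation/translation values below are placeholders, not values from the patent.

```python
import numpy as np

def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical chain of coordinate systems:
# position sensor 86 -> position detector 82 -> 3D (CT) image.
T_detector_from_sensor = make_transform(np.eye(3), np.array([0.0, 10.0, 0.0]))
T_image_from_detector = make_transform(np.eye(3), np.array([5.0, 0.0, 0.0]))

# Composing the chain yields one matrix T that maps a sensor-space point
# into the 3D image coordinate system.
T_image_from_sensor = T_image_from_detector @ T_detector_from_sensor

p_sensor = np.array([1.0, 2.0, 3.0, 1.0])   # homogeneous point in sensor space
p_image = T_image_from_sensor @ p_sensor    # same point in image space
```

Introducing further intermediate systems (the camera coordinate system, the actual coordinate system of the subject) simply adds more factors to the matrix product.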
- in step S34, the iteration is repeated as many times as necessary and is completed when the execution condition is satisfied, but the present invention is not limited to this; for example, the latest pj may be used together with the latest, that is, the most recently calculated, transformation matrix T.
- the virtual image learning unit 102 is not an essential component, and a certain effect can be obtained even without it.
- in the navigation unit 116, the target site is set by the operator, for example.
- the present invention is not limited to this; for example, when the 3D image data is analyzed by separately provided image diagnosis means or the like and a lesion site is found in the 3D image data, the found lesion site may be designated as the target site. In this way, the detection of the lesion site and the generation of the path in the luminal organ leading to the lesion site are executed automatically based on the 3D image data.
- the navigation unit 116 displays the plan for inserting the endoscope on the monitor 2 serving as the display means.
- the present invention is not limited to this. In particular, since the present invention can express the route by the anatomical names of the luminal organ, voice guidance is also possible, for example by mechanically reading out the anatomical names.
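A sketch of how a route expressed by anatomical names could be turned into guidance strings for such voice output. The function name, the message wording, and the bronchial branch names are illustrative assumptions, not from the patent.

```python
def route_guidance(route):
    """Turn an ordered list of anatomical branch names into spoken
    guidance strings, one per transition along the insertion route."""
    messages = []
    for here, nxt in zip(route, route[1:]):
        messages.append(f"From the {here}, advance into the {nxt}.")
    return messages

# Hypothetical bronchial route expressed by anatomical names.
msgs = route_guidance(["trachea", "right main bronchus",
                       "right upper lobe bronchus"])
```

Each string could then be handed to any text-to-speech engine, one message per branching point along the route.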
- the display of the route name in S87 is not necessarily required, and may be executed in response to an operation by the operator.
- the image composition display unit 18 displays the real endoscopic image and the virtual image so as to be comparable as shown in FIG. 52, but the present invention is not limited to this.
- as shown in FIG. 53, the real endoscopic image and the virtual image may be displayed so as to be comparable, and the three-dimensional image may be further displayed.
- on this three-dimensional image, the observation position, or the observation position 75 of the virtual image, may be displayed in a superimposed manner.
- the plurality of display forms produced by the image composition display unit 18 can be switched as appropriate according to the operation of the operator, but the present invention is not limited to this; the display may also be switched automatically. Specifically, when the real endoscopic image becomes an image of a branching section, the navigation unit 116 performs guidance (that is, display 126a and display 126b), and the display may be switched automatically depending on the situation.
- the target site in the navigation unit 116 is in the luminal organ, but the present invention is not limited to this, and an extraluminal tissue may be the target site.
- the navigation means is configured to execute a route search using, as the target site, a site where the endoscope can be inserted in a luminal organ close to the set target site. In this way, even if the target site is set outside the luminal organ, the route to the insertable site of the luminal organ adjacent to the target site is searched, so the operator's endoscope insertion operation is supported.
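The substitution step above, replacing an extraluminal target with the nearest insertable luminal site, can be sketched as follows. The function name, the site records, and the coordinates are hypothetical illustrations.

```python
import math

def nearest_insertable_site(target, insertable_sites):
    """Among luminal-tree sites the endoscope can actually reach
    (e.g. branches wide enough for the endoscope), pick the one closest
    to the extraluminal target; that site becomes the navigation goal."""
    return min(insertable_sites,
               key=lambda s: math.dist(s["position"], target))

# Hypothetical lymph-node target outside the airway, with candidate sites.
target = (10.0, 0.0, 0.0)
sites = [
    {"name": "right main bronchus", "position": (2.0, 0.0, 0.0)},
    {"name": "rb1", "position": (9.0, 1.0, 0.0)},
]
goal = nearest_insertable_site(target, sites)
```

The route search then proceeds to `goal` exactly as it would for a target set inside the luminal organ.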
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- Animal Behavior & Ethology (AREA)
- Biophysics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Optics & Photonics (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Human Computer Interaction (AREA)
- Heart & Thoracic Surgery (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- High Energy & Nuclear Physics (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/226,922 US8199984B2 (en) | 2006-05-02 | 2007-02-17 | System that assists in observing a luminal organ using the structure of the luminal organ |
JP2008514405A JP4899068B2 (ja) | 2006-05-02 | 2007-02-17 | 医療画像観察支援装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006-128681 | 2006-05-02 | ||
JP2006128681 | 2006-05-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007129493A1 true WO2007129493A1 (ja) | 2007-11-15 |
Family
ID=38667603
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2007/052894 WO2007129493A1 (ja) | 2006-05-02 | 2007-02-17 | 医療画像観察支援装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US8199984B2 (ja) |
JP (1) | JP4899068B2 (ja) |
WO (1) | WO2007129493A1 (ja) |
Families Citing this family (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8107701B2 (en) * | 2006-03-29 | 2012-01-31 | Hitachi Medical Corporation | Medical image display system and medical image display program |
US20080086051A1 (en) * | 2006-09-20 | 2008-04-10 | Ethicon Endo-Surgery, Inc. | System, storage medium for a computer program, and method for displaying medical images |
US8672836B2 (en) * | 2007-01-31 | 2014-03-18 | The Penn State Research Foundation | Method and apparatus for continuous guidance of endoscopy |
US8155728B2 (en) * | 2007-08-22 | 2012-04-10 | Ethicon Endo-Surgery, Inc. | Medical system, method, and storage medium concerning a natural orifice transluminal medical procedure |
US8457718B2 (en) * | 2007-03-21 | 2013-06-04 | Ethicon Endo-Surgery, Inc. | Recognizing a real world fiducial in a patient image data |
US20080319307A1 (en) * | 2007-06-19 | 2008-12-25 | Ethicon Endo-Surgery, Inc. | Method for medical imaging using fluorescent nanoparticles |
US20080221434A1 (en) * | 2007-03-09 | 2008-09-11 | Voegele James W | Displaying an internal image of a body lumen of a patient |
US20080234544A1 (en) * | 2007-03-20 | 2008-09-25 | Ethicon Endo-Sugery, Inc. | Displaying images interior and exterior to a body lumen of a patient |
DE102007013566B4 (de) * | 2007-03-21 | 2017-02-23 | Siemens Healthcare Gmbh | Verfahren zur Bilddatenaufnahme und medizinische Modalität |
US8081810B2 (en) * | 2007-03-22 | 2011-12-20 | Ethicon Endo-Surgery, Inc. | Recognizing a real world fiducial in image data of a patient |
US8718363B2 (en) * | 2008-01-16 | 2014-05-06 | The Charles Stark Draper Laboratory, Inc. | Systems and methods for analyzing image data using adaptive neighborhooding |
US8737703B2 (en) * | 2008-01-16 | 2014-05-27 | The Charles Stark Draper Laboratory, Inc. | Systems and methods for detecting retinal abnormalities |
US8218846B2 (en) | 2008-05-15 | 2012-07-10 | Superdimension, Ltd. | Automatic pathway and waypoint generation and navigation method |
WO2010092919A1 (ja) * | 2009-02-13 | 2010-08-19 | 株式会社 日立メディコ | 医用画像表示方法、医用画像診断装置、及び医用画像表示装置 |
US10004387B2 (en) | 2009-03-26 | 2018-06-26 | Intuitive Surgical Operations, Inc. | Method and system for assisting an operator in endoscopic navigation |
US8337397B2 (en) | 2009-03-26 | 2012-12-25 | Intuitive Surgical Operations, Inc. | Method and system for providing visual guidance to an operator for steering a tip of an endoscopic device toward one or more landmarks in a patient |
GB2475722B (en) * | 2009-11-30 | 2011-11-02 | Mirada Medical | Measurement system for medical images |
JP2013517909A (ja) * | 2010-01-28 | 2013-05-20 | ザ ペン ステイト リサーチ ファンデーション | 気管支鏡検査法ガイダンスに適用される画像ベースのグローバル登録 |
JP5606832B2 (ja) * | 2010-03-05 | 2014-10-15 | 富士フイルム株式会社 | 画像診断支援装置、方法およびプログラム |
DE102010039289A1 (de) * | 2010-08-12 | 2012-02-16 | Leica Microsystems (Schweiz) Ag | Mikroskopsystem |
EP2719322A1 (en) | 2010-09-08 | 2014-04-16 | Covidien LP | Catheter with imaging assembly |
JP5844093B2 (ja) * | 2010-09-15 | 2016-01-13 | 株式会社東芝 | 医用画像処理装置及び医用画像処理方法 |
US9275452B2 (en) | 2011-03-15 | 2016-03-01 | The Trustees Of Columbia University In The City Of New York | Method and system for automatically determining compliance of cross sectional imaging scans with a predetermined protocol |
US8900131B2 (en) * | 2011-05-13 | 2014-12-02 | Intuitive Surgical Operations, Inc. | Medical system providing dynamic registration of a model of an anatomical structure for image-guided surgery |
JP5745947B2 (ja) * | 2011-06-20 | 2015-07-08 | 株式会社日立メディコ | 医用画像処理装置、医用画像処理方法 |
CN106562757B (zh) | 2012-08-14 | 2019-05-14 | 直观外科手术操作公司 | 用于多个视觉***的配准的***和方法 |
US9517184B2 (en) | 2012-09-07 | 2016-12-13 | Covidien Lp | Feeding tube with insufflation device and related methods therefor |
WO2014070396A1 (en) * | 2012-11-02 | 2014-05-08 | Covidien Lp | Catheter with imaging assembly and console with reference library and related methods therefor |
WO2014141968A1 (ja) * | 2013-03-12 | 2014-09-18 | オリンパスメディカルシステムズ株式会社 | 内視鏡システム |
US9639666B2 (en) | 2013-03-15 | 2017-05-02 | Covidien Lp | Pathway planning system and method |
CN104156935B (zh) | 2013-05-14 | 2018-01-02 | 东芝医疗***株式会社 | 图像分割装置、图像分割方法和医学图像设备 |
EP3012727B1 (en) * | 2013-06-19 | 2019-07-03 | Sony Corporation | Display control device, display control method, and program |
JP6304737B2 (ja) * | 2013-08-30 | 2018-04-04 | 国立大学法人名古屋大学 | 医用観察支援装置及び医用観察支援プログラム |
US10216762B2 (en) * | 2014-06-04 | 2019-02-26 | Panasonic Corporation | Control method and non-transitory computer-readable recording medium for comparing medical images |
US11188285B2 (en) | 2014-07-02 | 2021-11-30 | Covidien Lp | Intelligent display |
US9892506B2 (en) * | 2015-05-28 | 2018-02-13 | The Florida International University Board Of Trustees | Systems and methods for shape analysis using landmark-driven quasiconformal mapping |
JP6594133B2 (ja) * | 2015-09-16 | 2019-10-23 | 富士フイルム株式会社 | 内視鏡位置特定装置、内視鏡位置特定装置の作動方法および内視鏡位置特定プログラム |
JP6698699B2 (ja) * | 2015-12-25 | 2020-05-27 | オリンパス株式会社 | 画像処理装置、画像処理方法およびプログラム |
WO2018138828A1 (ja) * | 2017-01-26 | 2018-08-02 | オリンパス株式会社 | 画像処理装置、動作方法およびプログラム |
JP6702902B2 (ja) * | 2017-02-24 | 2020-06-03 | 富士フイルム株式会社 | マッピング画像表示制御装置および方法並びにプログラム |
DE102017214447B4 (de) * | 2017-08-18 | 2021-05-12 | Siemens Healthcare Gmbh | Planare Visualisierung von anatomischen Strukturen |
US10984585B2 (en) * | 2017-12-13 | 2021-04-20 | Covidien Lp | Systems, methods, and computer-readable media for automatic computed tomography to computed tomography registration |
CN112641514B (zh) * | 2020-12-17 | 2022-10-18 | 杭州堃博生物科技有限公司 | 一种微创介入导航***与方法 |
CN113476098B (zh) * | 2021-07-26 | 2022-06-24 | 首都医科大学附属北京儿童医院 | 一种封堵管的监测*** |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002345725A (ja) * | 2001-05-22 | 2002-12-03 | Olympus Optical Co Ltd | 内視鏡システム |
JP2003265408A (ja) * | 2002-03-19 | 2003-09-24 | Mitsubishi Electric Corp | 内視鏡誘導装置および方法 |
JP2004180940A (ja) * | 2002-12-03 | 2004-07-02 | Olympus Corp | 内視鏡装置 |
JP2004230086A (ja) * | 2003-01-31 | 2004-08-19 | Toshiba Corp | 画像処理装置、画像データ処理方法、及びプログラム |
JP2004283373A (ja) * | 2003-03-20 | 2004-10-14 | Toshiba Corp | 管腔状構造体の解析処理装置 |
JP2006042969A (ja) * | 2004-08-02 | 2006-02-16 | Hitachi Medical Corp | 医用画像表示装置 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5920319A (en) * | 1994-10-27 | 1999-07-06 | Wake Forest University | Automatic analysis in virtual endoscopy |
US5699799A (en) * | 1996-03-26 | 1997-12-23 | Siemens Corporate Research, Inc. | Automatic determination of the curved axis of a 3-D tube-shaped object in image volume |
US6343936B1 (en) * | 1996-09-16 | 2002-02-05 | The Research Foundation Of State University Of New York | System and method for performing a three-dimensional virtual examination, navigation and visualization |
US6346940B1 (en) * | 1997-02-27 | 2002-02-12 | Kabushiki Kaisha Toshiba | Virtualized endoscope system |
JP2000135215A (ja) | 1998-10-30 | 2000-05-16 | Ge Yokogawa Medical Systems Ltd | 管路案内方法および装置並びに放射線断層撮影装置 |
JP4087517B2 (ja) | 1998-11-25 | 2008-05-21 | 株式会社日立製作所 | 領域抽出方法 |
EP1466552B1 (en) | 2002-07-31 | 2007-01-03 | Olympus Corporation | Endoscope |
JP2006068351A (ja) | 2004-09-03 | 2006-03-16 | Toshiba Corp | 医用画像処理方法、医用画像処理プログラム及び医用画像処理装置 |
2007
- 2007-02-17 WO PCT/JP2007/052894 patent/WO2007129493A1/ja active Search and Examination
- 2007-02-17 US US12/226,922 patent/US8199984B2/en not_active Expired - Fee Related
- 2007-02-17 JP JP2008514405A patent/JP4899068B2/ja active Active
Cited By (52)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009153677A (ja) * | 2007-12-26 | 2009-07-16 | Konica Minolta Medical & Graphic Inc | 動態画像処理システム |
JP2009254600A (ja) * | 2008-04-17 | 2009-11-05 | Fujifilm Corp | 画像表示装置並びに画像表示制御方法およびプログラム |
JP2010082374A (ja) * | 2008-10-02 | 2010-04-15 | Toshiba Corp | 画像表示装置及び画像表示方法 |
US9214139B2 (en) | 2008-10-02 | 2015-12-15 | Kabushiki Kaisha Toshiba | Image display apparatus and image display method |
US20120027260A1 (en) * | 2009-04-03 | 2012-02-02 | Koninklijke Philips Electronics N.V. | Associating a sensor position with an image position |
JP2010253017A (ja) * | 2009-04-24 | 2010-11-11 | Toshiba Corp | 画像表示装置及び画像表示方法 |
JP2011030839A (ja) * | 2009-08-03 | 2011-02-17 | Nagoya Univ | 医用画像観察支援装置 |
JP2011045448A (ja) * | 2009-08-25 | 2011-03-10 | Toshiba Corp | 消化管画像表示装置及び消化管画像データ表示用制御プログラム |
JP2011050590A (ja) * | 2009-09-02 | 2011-03-17 | Toshiba Corp | 医用画像処理装置、及び医用画像処理プログラム |
JP2011120709A (ja) * | 2009-12-10 | 2011-06-23 | Toshiba Corp | 画像処理装置および画像処理プログラム |
US8102416B2 (en) | 2010-02-22 | 2012-01-24 | Olympus Medical Systems Corp. | Medical apparatus |
WO2011102012A1 (ja) * | 2010-02-22 | 2011-08-25 | オリンパスメディカルシステムズ株式会社 | 医療機器 |
WO2012014438A1 (ja) * | 2010-07-28 | 2012-02-02 | 富士フイルム株式会社 | 内視鏡観察を支援する装置および方法、並びに、プログラム |
WO2012046846A1 (ja) * | 2010-10-07 | 2012-04-12 | 株式会社 東芝 | 医用画像処理装置 |
JP2012096024A (ja) * | 2010-10-07 | 2012-05-24 | Toshiba Corp | 医用画像処理装置 |
JP2014512850A (ja) * | 2011-01-14 | 2014-05-29 | コーニンクレッカ フィリップス エヌ ヴェ | 気管支鏡検査の経路計画及び誘導に関するアリアドネ壁テーピング |
WO2012101888A1 (ja) * | 2011-01-24 | 2012-08-02 | オリンパスメディカルシステムズ株式会社 | 医療機器 |
JP5160699B2 (ja) * | 2011-01-24 | 2013-03-13 | オリンパスメディカルシステムズ株式会社 | 医療機器 |
CN103068294A (zh) * | 2011-01-24 | 2013-04-24 | 奥林巴斯医疗株式会社 | 医疗设备 |
JP2012252559A (ja) * | 2011-06-03 | 2012-12-20 | Sony Corp | 画像処理装置および方法、記録媒体並びにプログラム |
WO2013031637A1 (ja) * | 2011-08-31 | 2013-03-07 | テルモ株式会社 | 呼吸域用ナビゲーションシステム |
JP2015514492A (ja) * | 2012-04-19 | 2015-05-21 | コーニンクレッカ フィリップス エヌ ヴェ | 術前及び術中3d画像を用いて内視鏡を手動操作するガイダンスツール |
JP2014124384A (ja) * | 2012-12-27 | 2014-07-07 | Fujifilm Corp | 仮想内視鏡画像表示装置および方法並びにプログラム |
WO2014103238A1 (ja) * | 2012-12-27 | 2014-07-03 | 富士フイルム株式会社 | 仮想内視鏡画像表示装置および方法並びにプログラム |
US9619938B2 (en) | 2012-12-27 | 2017-04-11 | Fujifilm Corporation | Virtual endoscopic image display apparatus, method and program |
WO2014136576A1 (ja) | 2013-03-06 | 2014-09-12 | オリンパスメディカルシステムズ株式会社 | 内視鏡システム |
WO2014156378A1 (ja) | 2013-03-27 | 2014-10-02 | オリンパスメディカルシステムズ株式会社 | 内視鏡システム |
US9516993B2 (en) | 2013-03-27 | 2016-12-13 | Olympus Corporation | Endoscope system |
KR102191035B1 (ko) | 2013-07-03 | 2020-12-15 | 큐렉소 주식회사 | 수술용 내비게이션의 측정 방향 설정 시스템 및 방법 |
KR20150004538A (ko) * | 2013-07-03 | 2015-01-13 | 현대중공업 주식회사 | 수술용 내비게이션의 측정 방향 설정 시스템 및 방법 |
JP2017522912A (ja) * | 2014-07-02 | 2017-08-17 | コヴィディエン リミテッド パートナーシップ | 気管を検出するためのシステムおよび方法 |
US10918443B2 (en) | 2014-07-15 | 2021-02-16 | Olympus Corporation | Navigation system and operation method of navigation system |
WO2016009701A1 (ja) * | 2014-07-15 | 2016-01-21 | オリンパス株式会社 | ナビゲーションシステム、ナビゲーションシステムの作動方法 |
KR101834081B1 (ko) * | 2016-08-30 | 2018-03-02 | 오주영 | 3d 영상 디스플레이 방법 및 3d 영상을 이용한 방사선 촬영 가이드 시스템 및 방법 |
JP2018079010A (ja) * | 2016-11-15 | 2018-05-24 | 株式会社島津製作所 | X線透視装置及びx線透視方法 |
US10646190B2 (en) | 2016-11-22 | 2020-05-12 | Joo Young Oh | Radiography guide system and method |
WO2018220930A1 (ja) * | 2017-05-30 | 2018-12-06 | オリンパス株式会社 | 画像処理装置 |
JP2019010382A (ja) * | 2017-06-30 | 2019-01-24 | 富士フイルム株式会社 | 画像位置合わせ装置、方法およびプログラム |
WO2019130868A1 (ja) * | 2017-12-25 | 2019-07-04 | 富士フイルム株式会社 | 画像処理装置、プロセッサ装置、内視鏡システム、画像処理方法、及びプログラム |
JP7050817B2 (ja) | 2017-12-25 | 2022-04-08 | 富士フイルム株式会社 | 画像処理装置、プロセッサ装置、内視鏡システム、画像処理装置の動作方法及びプログラム |
JPWO2019130868A1 (ja) * | 2017-12-25 | 2020-12-10 | 富士フイルム株式会社 | 画像処理装置、プロセッサ装置、内視鏡システム、画像処理方法、及びプログラム |
JPWO2019163890A1 (ja) * | 2018-02-21 | 2021-02-04 | オリンパス株式会社 | 医療システムおよび医療システムの作動方法 |
US20200367733A1 (en) | 2018-02-21 | 2020-11-26 | Olympus Corporation | Medical system and medical system operating method |
JP6990292B2 (ja) | 2018-02-21 | 2022-01-12 | オリンパス株式会社 | 医療システムおよび医療システムの作動方法 |
WO2019163890A1 (ja) * | 2018-02-21 | 2019-08-29 | オリンパス株式会社 | 医療システムおよび医療システムの作動方法 |
US11800966B2 (en) | 2018-02-21 | 2023-10-31 | Olympus Corporation | Medical system and medical system operating method |
JP2020092816A (ja) * | 2018-12-12 | 2020-06-18 | キヤノンメディカルシステムズ株式会社 | 医用画像処理装置、x線ct装置及び医用画像処理方法 |
JP7164423B2 (ja) | 2018-12-12 | 2022-11-01 | キヤノンメディカルシステムズ株式会社 | 医用画像処理装置、x線ct装置及び医用画像処理方法 |
JP7443197B2 (ja) | 2020-08-25 | 2024-03-05 | キヤノンメディカルシステムズ株式会社 | 医用画像処理装置、システム及び方法 |
WO2023200320A1 (ko) * | 2022-04-15 | 2023-10-19 | 충남대학교산학협력단 | 내시경 이동 경로 가이드 장치, 시스템, 방법, 및 컴퓨터 판독 가능한 기록 매체 |
DE102023112313A1 (de) | 2022-05-12 | 2023-11-23 | Fujifilm Corporation | Informationsverarbeitungsvorrichtung, bronchoskopvorrichtung, informationsverarbeitungsverfahren und programm |
WO2024096084A1 (ja) * | 2022-11-04 | 2024-05-10 | 富士フイルム株式会社 | 医療支援装置、内視鏡、医療支援方法、及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
JP4899068B2 (ja) | 2012-03-21 |
JPWO2007129493A1 (ja) | 2009-09-17 |
US20090161927A1 (en) | 2009-06-25 |
US8199984B2 (en) | 2012-06-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4899068B2 (ja) | 医療画像観察支援装置 | |
JP4822142B2 (ja) | 内視鏡挿入支援システム及び内視鏡挿入支援方法 | |
JP6122875B2 (ja) | 血管ツリー画像内での見えない分岐部の検出 | |
CN106659373B (zh) | 用于在肺内部的工具导航的动态3d肺图谱视图 | |
JP3820244B2 (ja) | 挿入支援システム | |
JP4009639B2 (ja) | 内視鏡装置、内視鏡装置のナビゲーション方法、内視鏡画像の表示方法、及び内視鏡用画像表示プログラム | |
JP5918548B2 (ja) | 内視鏡画像診断支援装置およびその作動方法並びに内視鏡画像診断支援プログラム | |
CN103957832B (zh) | 血管树图像的内窥镜配准 | |
JP6080248B2 (ja) | 3次元画像表示装置および方法並びにプログラム | |
CN106030656A (zh) | 在微创搭桥外科手术期间对内乳动脉的空间可视化 | |
JP2006246941A (ja) | 画像処理装置及び管走行トラッキング方法 | |
CN111481292A (zh) | 手术装置及其使用方法 | |
WO2005041761A1 (ja) | 挿入支援システム | |
JP2010517632A (ja) | 内視鏡の継続的案内のためのシステム | |
WO2012014438A1 (ja) | 内視鏡観察を支援する装置および方法、並びに、プログラム | |
JP2014512931A (ja) | ユーザ操作されるオンザフライの経路プランニング | |
JP4323288B2 (ja) | 挿入支援システム | |
JP5670145B2 (ja) | 医用画像処理装置及び制御プログラム | |
JP4022192B2 (ja) | 挿入支援システム | |
JP5561578B2 (ja) | 医用画像観察支援装置 | |
JP4445792B2 (ja) | 挿入支援システム | |
CN116157088A (zh) | 用于规划和执行活检程序的***及相关方法 | |
JP5366713B2 (ja) | 消化管画像表示装置及び消化管画像データ表示用制御プログラム | |
US20230215059A1 (en) | Three-dimensional model reconstruction | |
JP4190454B2 (ja) | 挿入支援装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 07714424 Country of ref document: EP Kind code of ref document: A1 |
|
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 12226922 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2008514405 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 07714424 Country of ref document: EP Kind code of ref document: A1 |
|
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) |