WO2019146582A1 - Image capture device, image capture system, and image capture method - Google Patents

Image capture device, image capture system, and image capture method

Info

Publication number
WO2019146582A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
nerve
infrared light
imaging
sample
Prior art date
Application number
PCT/JP2019/001819
Other languages
French (fr)
Japanese (ja)
Inventor
Yuzuru IKEHARA (池原 譲)
Hiroki ISHIKAWA (石川 浩紀)
Original Assignee
National Institute of Advanced Industrial Science and Technology (国立研究開発法人産業技術総合研究所)
Nikon Corporation (株式会社ニコン)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Institute of Advanced Industrial Science and Technology and Nikon Corporation
Publication of WO2019146582A1 publication Critical patent/WO2019146582A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B10/00 Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107 Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/31 Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
    • G01N21/35 Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light
    • G01N21/359 Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light using near infrared light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present disclosure relates to an imaging device, an imaging system, and an imaging method.
  • Patent Document 1 discloses imaging a living tissue, extracting position information of blood vessels of the living body based on the captured image, and projecting an image in which the blood vessels are emphasized onto the living tissue.
  • In Patent Document 1, however, it is not possible to identify a nerve in the living tissue or to make the nerve easy to see.
  • According to one aspect, an imaging device includes a light detection unit that acquires a measurement result obtained by irradiating a sample with infrared light of a wavelength suitable for identifying a nerve, and a control unit that generates, from the measurement result, a nerve feature image specifying the position of the nerve in the sample.
  • According to another aspect, an imaging method includes acquiring, with a light detection unit, a measurement result obtained by irradiating a sample with infrared light of a wavelength suitable for identifying a nerve, and generating, with a control unit, a nerve feature image that specifies the position of the nerve in the sample from the measurement result.
  • According to another aspect, an imaging device includes a light detection unit that acquires a plurality of measurement results obtained by irradiating the sample with infrared light of a plurality of wavelengths suitable for identifying a nerve, and a control unit that generates, by a calculation using the plurality of measurement results, a nerve-enhanced image in which the position of the nerve is enhanced.
  • an imaging system is provided that includes the imaging device of the above aspect and a display device that displays a feature image of a nerve.
  • FIG. 3A is a diagram showing a screen example in which a visible light image is displayed on the monitor 1_51 and a background image on which an enhanced image is superimposed is displayed on the monitor 2_52 (two-monitor configuration) in the present embodiment. FIG. 3B is a diagram showing a screen example in which the visible light image and the background image with the superimposed enhanced image are displayed on a single monitor 50 (one-monitor, two-screen configuration) in the present embodiment.
  • FIG. 4 is a diagram showing a schematic configuration of the optical system 300 that can be employed in the imaging unit 30 (a configuration example using the mirror 33).
  • FIG. 8 is a flowchart for explaining the process of removing an error portion from the superimposed image generated in the imaging system (surgery support system) 1.
  • FIG. 12A is a visible light image, and FIG. 12B shows a superimposed image in which a nerve-enhanced image is superimposed on an infrared light image obtained by irradiating infrared light.
  • FIG. 14 shows an example of a wavelength setting GUI (Graphical User Interface) 1400 for setting the wavelength of the infrared light emitted from the infrared light source unit 20.
  • FIG. 15 shows a comparison between an example of a superimposed image generated by the nerve imaging processing according to the method of the second embodiment (FIG. 15A) and an example of a superimposed image generated by the nerve imaging processing according to the method of the first embodiment (FIG. 15B).
  • FIG. 16 shows a comparison of nerve imaging results obtained when the combination of wavelengths is changed within the 900 nm to 2500 nm range in the present embodiment (a nerve-enhanced image is generated from the spectral data of each wavelength and superimposed on a background image, a 1070 nm infrared light image).
  • the present embodiment may be implemented by software running on a general-purpose computer, or may be implemented by dedicated hardware or a combination of software and hardware.
  • In the present embodiment, imaging of nerves in a living tissue is performed.
  • The reason for performing nerve imaging in this way is as follows. When performing surgery (for example, laparoscopic surgery, prostate surgery, or large intestine surgery) on a specific part of a patient's living body, nerves important to the surgical site (the specific part), such as the somatic nervous system, the autonomic nervous system, peripheral nerves of the enteric nervous system, or the central nervous system, may be present there. If a nerve is injured, the patient's condition may change due to post-operative complications, so it is necessary to avoid damaging the nerves.
  • A doctor or the like identifies nerves based on anatomical knowledge, and normally takes care not to mistake a nerve for another site (for example, a blood vessel) and not to damage the nerve during the operation.
  • In the present embodiment, the nerve in the living tissue is directly identified (identified or estimated), the position of the nerve in the operative field is made easy to recognize (for example, the site identified as a nerve is imaged so as to be more easily visible), and a technique is disclosed for superimposing an image (e.g., an enhanced image) on the image of the living tissue.
  • Here, identification includes determining that a part of the sample corresponds to a nerve, and includes specifying the position of the nerve in the sample.
  • FIG. 1 is a diagram for explaining an example of an appearance configuration of an imaging system 1 according to the first embodiment.
  • the imaging system 1 is used, for example, for medical support such as pathological diagnosis support, clinical diagnosis support, observation support, and surgery support.
  • The imaging system 1 may also be referred to as a surgery support system, a surgical imaging system, or a medical support imaging system.
  • FIG. 1 will be described as an example of the imaging system 1.
  • The imaging system 1 includes, for example: a control device (control unit) 10 that controls the entire imaging system 1; an infrared light source unit 20 (reference numeral 20 is not shown in FIG. 1) including infrared light sources 21 and 22 that emit infrared light toward the living body 80 (hereinafter, the imaging region of the living body 80 may be referred to as a "sample"); an imaging unit (light detection unit) 30 that images the radiated light from the living body 80; an input device 40 used when an operator (for example, a doctor) inputs various data and instruction commands to the control device 10; a display device (display unit) 50, including a monitor 1_51 (first display) and a monitor 2_52 (second display), that displays images, characters, and the like captured by the imaging unit 30; and a surgical shadowless lamp 60 communicably connected to the control device 10.
  • the imaging system 1 can also be said to be an imaging device including at least the imaging unit 30.
  • The infrared light from each of the infrared light sources 21 and 22 of the infrared light source unit 20 may be configured, for example, to irradiate the sample with a uniform amount of light through a diffusion plate. For example, a diffusion plate can be attached to the emitting surface of each light source, or disposed in the optical path between the infrared light source unit 20 and the living body 80. Light is thereby emitted uniformly onto the sample, and illumination unevenness, shadows, and the like can be prevented from occurring in the imaging range of the imaging unit 30 (for example, the range corresponding to the surgical field).
  • the living body 80 is, for example, a patient lying on the operating table 90.
  • An image of the surgical site of the living body (patient) 80 is captured by the imaging unit 30 (the surgical site to be imaged is hereinafter also referred to as the imaging site).
  • FIG. 2 is a diagram showing an example of a functional block configuration of the imaging system 1.
  • The imaging system 1 includes the control device 10, the infrared light source unit 20, the imaging unit 30, the input device 40, the display device (display unit) 50, the surgical shadowless lamp 60, and a stage 70 (not shown in FIG. 1) that slides the imaging unit 30, for example, in a horizontal direction.
  • the control device 10 is, for example, a computer, and includes a control unit 101 including a processor and the like, and a storage unit 102 that stores various programs, parameters, imaging results, and the like.
  • The control unit 101 reads various programs, parameters, and the like from the storage unit 102, expands the read programs into an internal memory (not shown), and executes program processing according to instructions input from the input device 40 and the information processing sequences specified by the various programs.
  • The control unit 101 includes, for example: a light irradiation control unit 1011 that controls the infrared light irradiation of the infrared light source unit 20; an image generation unit 1012 that acquires the image data detected (captured) by the imaging unit 30 from the imaging unit 30 and generates a background image on which a nerve-enhanced image is superimposed; an enhanced image generation unit 1013 that generates the nerve-enhanced image from the image data acquired from the imaging unit 30; and an image correction unit 1014 that corrects errors in the image (errors in the enhanced image).
  • the storage unit 102 stores, for example, at least programs corresponding to the light irradiation control unit 1011, the image generation unit 1012, the enhanced image generation unit 1013, and the image correction unit 1014.
  • the stage device 70 can move the imaging device 30 relative to the sample.
  • The infrared light source unit 20 includes infrared light sources 21 and 22 that emit (radiate) infrared light in at least a part of a wavelength range of, for example, 900 nm to 2500 nm (or 800 nm to 3000 nm, 900 nm to 1650 nm, or 1000 nm to 1700 nm).
  • Although FIG. 1 illustrates an example in which the infrared light source unit 20 includes two light sources, three or more light sources may be included.
  • The infrared light source unit 20 may also split light of a wide wavelength band emitted (radiated) from the infrared light sources 21 and 22 with an optical system, and filter each of the separated beams with an optical filter disposed in the optical path.
  • When the imaging unit 30 described later includes a hyperspectral camera as an imaging device, the radiated (reflected) light from the sample is split (wavelength-resolved) inside the hyperspectral camera and the luminance value corresponding to each wavelength (spectral data) is detected, so the light need not be split by the above optical system. Also, when a hyperspectral camera is used, either the camera or the sample (subject) must be moved relative to the other, so the imaging system 1 of FIG. 1 further includes moving means (a stage apparatus), such as a stage, for this relative movement (not shown).
  • The infrared light source 21 and the infrared light source 22 of the infrared light source unit 20 may be configured to emit (radiate) infrared light of different wavelengths, with the light sources switched when irradiating the sample (the living body 80).
  • The wavelength of the light emitted (radiated) by each of the infrared light sources 21 and 22 included in the infrared light source unit 20 may be set by the operator, for example, via a GUI (Graphical User Interface).
  • The imaging unit 30 includes a first imaging device 31 (an imaging device having high detection sensitivity to light in the visible light region; a visible light camera) that captures a visible light image of the sample, and a second imaging device 32 (an imaging device having high detection sensitivity to light in the infrared region; an infrared light camera) that captures an infrared light image of the sample.
  • The second imaging device 32 can be, for example, a hyperspectral camera that splits (wavelength-resolves) the radiated (reflected) light from the sample, produced by infrared light in at least a part of the 900 nm to 2500 nm band emitted (radiated) from an infrared light source (the infrared light source 21 or 22), and detects the spectral data (luminance values) corresponding to each wavelength.
  • Alternatively, the second imaging device 32 may be a normal infrared light camera (an infrared camera that captures images without wavelength decomposition) that acquires an image by irradiating the sample with infrared light of a plurality of wavelengths and detecting the luminance (luminance values) of the light radiated from the sample.
  • As the infrared light of a plurality of wavelengths to be irradiated, for example, a plurality of types of light (infrared light) are selected from wavelengths of 900 nm to 2500 nm.
  • a silicon (Si) camera can be used as the first imaging device.
  • As the second imaging device, for example, an InGaAs (indium gallium arsenide) camera or hyperspectral camera can be used.
  • The optical axis of the first imaging device 31 and the optical axis of the second imaging device 32 need not be the same, as shown in FIG. 1. In this case, however, a stage 70 is required in order to make the imaging axis (field of view) of the first imaging device 31 coincide with the imaging axis (field of view) of the second imaging device 32. As described later with reference to FIG. 4, an optical system in which the optical axes of both imaging devices are coaxial may instead be provided in the imaging unit 30.
  • the input device 40 is a device including, for example, a keyboard, a mouse, a microphone, and a touch panel, and used when an operator (user) inputs an instruction or a parameter when causing the control device 10 to execute a predetermined process.
  • The control device 10 may also read instructions described according to a predetermined rule from a semiconductor memory such as a USB memory connected via an input port (not shown), and execute the various programs accordingly.
  • The display device 50 receives from the control device 10 the images generated by the control unit 101 (for example, the visible light image of the sample, the infrared light image of the sample, and the nerve-enhanced image), the images corrected by the control unit 101 (for example, the corrected nerve-enhanced image), a composite image in which the enhanced image is superimposed on the visible light image and/or the infrared light image, or a composite image in which the corrected enhanced image is superimposed on the visible light image and/or the infrared light image, and displays them on its display screen.
  • the display device 50 includes, for example, a monitor 1_51 and a monitor 2_52.
  • the monitor 1_51 displays, for example, a visible light image.
  • the monitor 2_52 displays, for example, an image obtained by superimposing the enhanced image on the visible light image and / or the infrared light image, or an image obtained by superimposing the corrected and enhanced image on the visible light image and / or the infrared light image on the display screen.
  • the control unit 101 has, for example, a normal observation mode in which the generated image is displayed on a display screen (eg, monitor 1_51, monitor 2_52), and an enhanced observation mode in which the synthesized image is displayed on the display screen.
  • The control unit 101 can control the display device 50 to switch between these two display modes, displaying the image of one target on the display screen or displaying the images of both targets simultaneously.
  • FIG. 3A is a diagram showing a screen example in which a visible light image is displayed on the monitor 1_51 and a background image on which a nerve-enhanced image is superimposed is displayed on the monitor 2_52 (example of a two-monitor configuration).
  • the image (background image) of the target on which the enhanced image is superimposed may be an infrared light image or a visible light image.
  • The visible light image displayed on the monitor 1_51 is, for example, a normal color image. Since the visible light image shows the living body region of the surgical field as it is, it may be difficult, for example, to distinguish a nerve from its surrounding tissue (e.g., blood vessels or streaks).
  • the image in which the nerve is enhanced is superimposed on the background image as the enhanced image.
  • the operator such as a doctor can easily visually check the position of the nerve in the living body part (morphological feature) of the surgical field.
  • The display color of the emphasized image (for example, a color distinct from its surroundings, such as green or white) can be changed as appropriate, for example, by the operator.
  • The surgical shadowless lamp 60 is a visible light source composed of a plurality of LED light sources or halogen light sources. It is very bright, for example up to 160,000 lux. While the shadowless lamp 60 is lit, a visible light image can be acquired by the first imaging device 31.
  • Since the surgical shadowless lamp 60 is configured not to emit light in the infrared wavelength region, an infrared light image may be acquired by the second imaging device 32 during either the lit period or the unlit period of the shadowless lamp 60.
  • While FIG. 3A shows a mode in which the visible light image is displayed on one of two monitors and the background image with the superimposed nerve-enhanced image on the other, as shown in FIG. 3B, the visible light image and the background image with the superimposed enhanced image may instead be displayed in a two-screen configuration on a single monitor 50 (for example, both images displayed at the same size, at least a portion of one image superimposed on the other, or one image displayed larger than the other).
  • In this case, the control device 10 controls the display positions of the plurality of screen areas DA (for example, the first screen area DA1 and the second screen area DA2) on the monitor 50. Further, the lighting and extinguishing of the surgical shadowless lamp 60 may also be controlled by the control device 10.
  • According to the imaging system (surgery support system) 1 of the present embodiment, the position of peripheral nerves present at a living body site in the surgical field (for example, the surgical target site of a patient) is specified and highlighted, so the operator (a doctor or the like) can check the position of the nerve and perform the operation or examination without damaging it.
  • FIGS. 4 and 5 are diagrams showing a schematic configuration of an optical system 300 that can be employed in the imaging unit 30 of the imaging system 1 according to the present embodiment.
  • the optical system 300 is an optical system in which the optical axis of the first imaging device 31 and the optical axis of the second imaging device 32 are the same (including substantially the same).
  • FIG. 4 shows a configuration example using the mirror 33
  • FIG. 5 shows a configuration example using the dichroic mirror 38.
  • In FIG. 4, the optical system 300 includes, for example, a mirror 33 and a mirror drive unit 34 that rotates the mirror 33 about an axis between a first position P1 and a second position P2 (for example, positions at an angle of 45 degrees to each other), whereby the optical axis of the first imaging device 31 and the optical axis of the second imaging device 32 are aligned.
  • The control unit 101 generates a position switching control signal such that the mirror 33 alternately takes the first position P1 and the second position P2, and supplies it to the mirror drive unit 34. For visible light imaging, the mirror drive unit 34 sets the mirror 33 to the first position P1 in response to the position switching control signal from the control unit 101.
  • both the visible light 35 and the infrared light 36 from the sample are incident on the first imaging device 31.
  • the first imaging device 31 detects light in the visible light wavelength region, and generates an image based on the detection data.
  • For infrared imaging, the mirror drive unit 34 sets the mirror 33 to the second position P2 in response to the position switching control signal from the control unit 101. In this case, both the visible light 35 and the infrared light 36 from the living body enter the second imaging device 32.
  • the second imaging device 32 detects light in the infrared wavelength region, and generates an image based on the detection data.
  • As shown in FIG. 5, the optical system 300 can instead be configured to include the dichroic mirror 38; with this configuration as well, the optical axis of the first imaging device 31 and the optical axis of the second imaging device 32 can be made to coincide.
  • the dichroic mirror 38 is an optical element (mirror) having the function of reflecting light of a specific wavelength and transmitting light of other wavelengths. For example, when a short pass dichroic mirror is used, the transmittance of light having a wavelength shorter than the cutoff wavelength is high, and the reflectance of light having a wavelength longer than the cutoff wavelength is high.
  • For example, when a short-pass dichroic mirror is adopted as the dichroic mirror 38 and the cutoff wavelength is set to 800 nm or 900 nm, visible light from the living body passes through the dichroic mirror 38 and enters the first imaging device 31.
  • The infrared light emitted (reflected) from the living body (for example, light of at least a part of the wavelengths or wavelength band from 900 nm to 2500 nm) is reflected by the dichroic mirror 38 and is incident on the second imaging device 32.
  • With the dichroic mirror 38, unlike in the optical system 300 shown in FIG. 4, there is no need to provide the mirror drive unit 34, so the configuration of the optical system 300 can be simplified.
  • In both configurations, the optical axis of the first imaging device 31 and the optical axis of the second imaging device 32 can be made identical, which has the effect of eliminating the need to align the images obtained from the two imaging devices. Therefore, when such an optical system 300 is employed, the stage 70 (see FIG. 2) is unnecessary.
  • FIG. 6 is a diagram showing a configuration in which alignment marks are arranged around the surgical field (eg, surgical cloth) according to the present embodiment.
  • The alignment mark in the present embodiment can employ, for example, a substance (such as a dye) that is recognizable by reflecting or absorbing the visible-light illumination and infrared-light illumination used for alignment more strongly than the periphery of the surgical field (e.g., a surgical cloth).
  • The alignment mark in the present embodiment may instead employ, for example, a light emitter such as an LED that emits infrared light (alignment light), a reflector that reflects light of a specific wavelength (alignment light), or the like.
  • The alignment marks 71 to 74 provided on the surgical cloth are used when the optical axes of the first imaging device 31 and the second imaging device 32 are not the same (that is, when the optical system 300 of FIG. 4 or FIG. 5 is not employed), in order to align the camera axis (imaging position, field of view) of the first imaging device 31 with the camera axis (imaging position, field of view) of the second imaging device 32.
  • When the first imaging device 31 captures a visible light image, the alignment marks 71 to 74 are also captured, and their position (coordinate) information is notified to the control unit 101.
  • When an infrared light image is to be captured, the control unit 101 generates a control signal for controlling the movement of the stage 70 so that the imaging positions (coordinate positions) of the alignment marks 71 to 74 as seen by the second imaging device 32 coincide (overlap) with the positions of the alignment marks 71 to 74 notified from the first imaging device 31.
  • Upon receiving the control signal from the control unit 101, the stage 70 moves the second imaging device 32 so that the camera axis (imaging position) of the second imaging device 32 coincides with the imaging position of the sample by the first imaging device 31. The second imaging device 32 then captures an infrared light image.
  • Alternatively, either the visible light image or the infrared light image may be converted so that the alignment marks 71 to 74 are at the same positions (coincide) on both images, as in the sketch below.
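  • As a non-limiting illustration of this mark-based conversion, the following sketch (Python; the function names and the assumption that the coordinates of the alignment marks 71 to 74 have already been detected in both images are illustrative, not part of the disclosure) estimates an affine transform from the mark coordinates and resamples the infrared light image into the coordinate frame of the visible light image:

```python
import numpy as np
from scipy import ndimage

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src_pts to dst_pts.

    src_pts, dst_pts: (N, 2) arrays of mark coordinates in (row, col)
    order, N >= 3 (e.g. the four alignment marks 71 to 74)."""
    src = np.asarray(src_pts, dtype=np.float64)
    A = np.hstack([src, np.ones((len(src), 1))])     # (N, 3)
    M, *_ = np.linalg.lstsq(A, np.asarray(dst_pts, dtype=np.float64), rcond=None)
    return M                                         # (3, 2)

def warp_ir_to_visible(ir_image, ir_marks, vis_marks):
    """Resample the infrared image so its marks land on the visible-image marks."""
    # ndimage.affine_transform maps output coordinates to input coordinates,
    # so estimate the transform from visible (output) to infrared (input).
    M = estimate_affine(vis_marks, ir_marks)
    return ndimage.affine_transform(ir_image, M[:2, :].T, offset=M[2, :], order=1)
```

  • Since the marks are fixed around the surgical field, such a transform could in principle be estimated once and reused for subsequent frames until the cameras or the field move.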
  • A nerve-enhanced image is then generated, superimposed on the visible light image captured by the first imaging device 31 or the infrared light image captured by the second imaging device 32, and displayed on the monitor 2_52 of the display device 50.
  • the imaging system 1 may be configured to include the zoom mechanism.
  • The enhanced image is superimposed on the visible light image or the infrared light image (collectively referred to as the background image). The second imaging device 32 alone can capture both the background image and the image (original image) from which the enhanced image is derived. Therefore, for example, when the background image is an infrared light image (when the enhanced image is superimposed on the infrared light image), there is no need to move the stage 70 to align the camera axis (imaging position) of the first imaging device 31 with the camera axis (imaging position) of the second imaging device 32.
  • FIG. 7 is a flowchart for explaining the contents of the nerve imaging processing (processing for generating a nerve-enhanced image of the sample, generating a superimposed image on a background image, and so on) in the imaging system (surgery support system) 1 of this embodiment. Each step is described below. Although each processing unit (light irradiation control unit 1011, image generation unit 1012, enhanced image generation unit 1013) is described as executing the processing of its step, each processing unit is a function included in the control unit 101, so the control unit 101 may equally be regarded as the operating subject.
  • Step 701: The light irradiation control unit 1011 controls the infrared light source unit 20 in response to an instruction input (input signal) from an operator (such as a doctor) using the input device 40, and irradiates the sample with infrared light in at least a part of a specific wavelength band (for example, 900 nm to 2500 nm). Here, the infrared light source unit 20 irradiates broad-band infrared light. The nerve imaging processing in the case where the infrared light source unit 20 is equipped with two or more light sources emitting (radiating) infrared light of different wavelengths will be described later in the second embodiment (see FIG. 13).
  • Step 702: The image generation unit 1012 controls the second imaging device 32 to acquire a hyperspectral image. Here, the second imaging device 32 is configured as a hyperspectral camera.
  • The second imaging device 32 detects the spectral data (luminance value) corresponding to each wavelength while splitting, into each wavelength, the radiated (reflected) light from the sample obtained by irradiating the sample with infrared light. The luminance value data acquired by the camera may be corrected using the luminance values of a correction reference object (for example, a white board or an object matched to the shape of the sample) acquired in advance, and the corrected luminance data may be used for the analysis.
  • the second imaging device 32 includes a light receiving sensor capable of acquiring a plurality of spectrum data in one shooting of one pixel of the image.
  • the second imaging device 32 detects spectral data obtained by irradiating the sample with light of N wavelength bands selected from a predetermined wavelength band (for example, a wavelength band of 900 nm or more and 2500 nm or less).
  • In this way, the second imaging device can acquire three-dimensional infrared reflection spectral data consisting of the surface position of the sample (corresponding to pixels) and the wavelength direction, as in the data-layout sketch below.
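  • As a non-limiting illustration of this data layout and of the reference correction described above, the following sketch (Python; the array shapes and the optional dark measurement are assumptions, not part of the disclosure) normalizes a raw (H, W, N) hyperspectral cube by a white-reference spectrum acquired in advance:

```python
import numpy as np

def correct_reflectance(cube, white_ref, dark_ref=None):
    """Normalize raw hyperspectral luminance values by a white reference.

    cube:      (H, W, N) raw luminance values; N wavelength bands sampled
               from, e.g., the 900-2500 nm range
    white_ref: (N,) or (H, W, N) luminance of a correction reference object
               (e.g. a white board) acquired in advance
    dark_ref:  optional dark measurement of matching shape (assumption)"""
    cube = cube.astype(np.float64)
    white = np.asarray(white_ref, dtype=np.float64)
    if dark_ref is not None:
        cube = cube - dark_ref
        white = white - dark_ref
    return cube / np.maximum(white, 1e-12)  # corrected (relative) luminance
```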
  • Step 703: The enhanced image generation unit 1013 reads the nerve teacher spectrum data from the storage unit 102. All the nerve teacher data corresponding to each wavelength stored in the storage unit 102 may be read, or only the teacher spectral data corresponding to the wavelengths of the infrared light used when acquiring the spectrum data in step 702 may be read.
  • The nerve teacher spectrum data are, for example, the spectral data (luminance values) corresponding to each wavelength (in the infrared region) obtained by irradiating a site known to be a nerve with infrared light of each wavelength, and are stored in advance in the storage unit 102.
  • Preferably, the spectral data of a nerve at a site corresponding to the sample actually being imaged are used as the teacher spectral data. For example, when the sample (the imaging site in the surgical field) is in the leg or the waist, the spectral data of nerves in the leg or the waist are used as the teacher spectrum data.
  • Step 704: The enhanced image generation unit 1013 calculates the SCM (Spectral Correlation Mapper) value of each pixel in the captured infrared light image of the sample.
  • Specifically, the enhanced image generation unit 1013 substitutes the spectral data values of the sample at each wavelength (the measured spectral values t_λ of the sample) and the nerve teacher spectral data values of the corresponding wavelengths read in step 703 (the reference spectral values r_λ) into equation (1) to calculate the SCM value.
  • The smaller the SCM value, the stronger the correlation between the sample spectral data and the nerve teacher spectral data. That is, the SCM value indicates the degree of correlation or similarity between the sample (e.g., the sample's spectral data) and the teacher (e.g., the nerve teacher spectral data).
  • In the SCM, the average value of each spectrum is incorporated into the calculation (the form presumed for equation (1) is reproduced below). This removes the effect of background noise that may occur when measuring the spectrum, so the correlation between the reference data (for example, teacher spectral data) and the measured data can be determined accurately.
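  • For reference, the Spectral Correlation Mapper is conventionally defined as the mean-centered, normalized correlation between the measured spectrum t_λ of a pixel and the reference (teacher) spectrum r_λ; equation (1) is presumably of this form:

```latex
R_{\mathrm{SCM}}
  = \frac{\sum_{\lambda}\left(t_{\lambda}-\bar{t}\right)\left(r_{\lambda}-\bar{r}\right)}
         {\sqrt{\sum_{\lambda}\left(t_{\lambda}-\bar{t}\right)^{2}}\,
          \sqrt{\sum_{\lambda}\left(r_{\lambda}-\bar{r}\right)^{2}}}
```

  • Here \bar{t} and \bar{r} are the means over the wavelengths used. Since a smaller SCM value is stated to indicate a stronger correlation, the quantity compared with the threshold is presumably a distance-like form of this correlation, such as arccos(R_SCM) or 1 - R_SCM.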
  • In the present embodiment, the correlation is determined using the SCM method, but the correlation may instead be determined using other multivariate analyses (e.g., the SAM (spectral angle mapper) method, MLR, PCR, PLS, etc.).
  • When using the spectral angle mapper (SAM) method to generate an enhanced image, the enhanced image generation unit 1013 uses the spectral data of the sample and the nerve teacher spectral data, as in the SCM. The enhanced image generation unit 1013 generates an enhanced image based on the spectral angle between the vector in multi-dimensional space formed by the sample's spectral data and the vector in multi-dimensional space formed by the teacher's spectral data.
  • the angle (spectral angle) between the two vectors indicates the degree of similarity of the spectrum, and the smaller the angle, the greater the similarity.
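  • As a non-limiting sketch of the per-pixel SCM and SAM calculations described above (Python; the array shapes and the angle form used for the SCM value are assumptions, not part of the disclosure):

```python
import numpy as np

def scm_map(cube, teacher):
    """Per-pixel Spectral Correlation Mapper against a nerve teacher spectrum.

    cube:    (H, W, N) corrected spectral data of the sample
    teacher: (N,) teacher spectrum measured at a site known to be a nerve
    Returns an (H, W) map in which, following the convention above,
    smaller values indicate a stronger correlation with the nerve."""
    t = cube - cube.mean(axis=2, keepdims=True)      # mean-center sample spectra
    r = teacher - teacher.mean()                     # mean-center teacher spectrum
    num = (t * r).sum(axis=2)
    den = np.sqrt((t ** 2).sum(axis=2) * (r ** 2).sum()) + 1e-12
    return np.arccos(np.clip(num / den, -1.0, 1.0))  # 0 = perfect correlation

def sam_map(cube, teacher):
    """Spectral Angle Mapper: angle between raw (not mean-centered) spectra."""
    num = (cube * teacher).sum(axis=2)
    den = np.linalg.norm(cube, axis=2) * np.linalg.norm(teacher) + 1e-12
    return np.arccos(np.clip(num / den, -1.0, 1.0))  # smaller angle = more similar
```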
  • Next, the enhanced image generation unit 1013 compares each SCM value calculated in step 704 with a preset threshold, and generates a nerve-enhanced image using the pixel data whose SCM values are equal to or less than the threshold. For example, the enhanced image generation unit 1013 can generate the enhanced image by assigning predetermined color data (for example, a color such as green) to the pixels whose SCM values are equal to or less than the threshold. The enhanced image generation unit 1013 can generate the enhanced image as a binarized image, a graded image (a single-color scale image), or the like.
  • The enhanced image generation unit 1013 may simply perform binarization processing in which a color such as green is assigned to each pixel whose SCM value is equal to or less than the threshold, or it may generate a graded image using, for example, gradation values consisting of 8 bits.
  • This enhanced image is an image obtained by extracting, based on the SCM value, the pixels whose spectral data have a strong correlation with the nerve teacher spectrum data from among the spectral data of each pixel obtained by irradiating the sample with infrared light of a predetermined wavelength band; that is, it is an image showing the parts of the sample that are very likely to be nerves.
  • the image generation unit 1012 obtains an enhanced image of a nerve from the enhanced image generation unit 1013, superimposes the separately acquired background image and the enhanced image while aligning them, and generates a superimposed image in which the nerve is enhanced.
  • the control unit 101 acquires a superimposed image of nerves from the image generation unit 1012 and instructs the display device 50 (eg, monitor 2_52) to display the image.
  • The display device 50 receives, from the control unit 101, the superimposed image generated by superimposing the enhanced image on the background image, and displays the superimposed image on its display screen.
  • Here, superimposition may include processing for aligning the background image and the enhanced image and, for pixels at the same position, replacing the pixels of the background image with the pixels of the enhanced image, as in the sketch below.
  • the image generation unit 1012 may use, for example, a visible light image acquired by the first imaging device 31 or an infrared light image acquired by the second imaging device 32 as the background image.
  • When an infrared light image is used as the background image, for example, an infrared light image of a specific wavelength (for example, 1070 nm) acquired by the second imaging device (hyperspectral camera) 32 can be used.
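  • As a non-limiting sketch of the thresholding and superimposition steps above (Python; the pixel-replacement form follows the description of superimposition, while the color value and threshold are assumptions):

```python
import numpy as np

def nerve_overlay(scm, background_rgb, threshold, color=(0, 255, 0)):
    """Superimpose a nerve-enhanced image on a background image.

    scm:            (H, W) SCM values (smaller = more nerve-like)
    background_rgb: (H, W, 3) uint8 visible or infrared background image
    threshold:      preset SCM threshold"""
    out = background_rgb.copy()
    mask = scm <= threshold  # pixels strongly correlated with the nerve teacher
    out[mask] = color        # replace background pixels with, e.g., green
    return out
```

  • A graded (non-binarized) variant could instead scale the enhancement color by an 8-bit gradation derived from the SCM value.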
  • FIG. 8 is a flowchart illustrating a process of removing an error portion (e.g., a non-target portion other than a nerve, such as a blood vessel or fat) from the superimposed image generated by the nerve imaging process (FIG. 7) according to the present embodiment.
  • The control unit 101 causes the image correction unit 1014 to execute shape analysis of the image, and removes, as errors, the image areas of portions other than those determined to be nerves from the enhanced image generated in the nerve imaging process (FIG. 7).
  • The shape analysis process is performed, for example, in the following steps. Although the image correction unit 1014 is described as executing the processing of each step, the image correction unit 1014 is a function included in the control unit 101, so the control unit 101 may equally be regarded as the operating subject.
  • Step 801: The image correction unit 1014 performs, for example, a Fourier transform on the generated enhanced image.
  • Step 802: The image correction unit 1014 performs filter processing (for example, using a morphological filter) on the result of the Fourier transform obtained in step 801, and extracts and removes from the generated enhanced image the images of portions whose shapes are not elongated (for example, round shapes). This is because a nerve portion usually has an elongated (linear) shape.
  • In this way, the image correction unit 1014 can remove portions that appear to be errors from the enhanced image of the superimposed image (a sketch of the morphological filtering follows).
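  • As a non-limiting sketch of the morphological-filter half of this shape analysis (Python; the disk radius is an assumption, and the Fourier-transform step of step 801 is omitted):

```python
import numpy as np
from scipy import ndimage

def remove_round_regions(mask, radius=5):
    """Keep elongated (nerve-like) regions of a candidate mask, drop round ones.

    mask: (H, W) boolean nerve-candidate mask from the enhanced image.
    A morphological opening with a disk preserves regions thicker than the
    disk (round blobs); removing those leaves the thin, elongated structures,
    matching the assumption that nerves have an elongated (linear) shape."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = x ** 2 + y ** 2 <= radius ** 2
    round_parts = ndimage.binary_opening(mask, structure=disk)
    return mask & ~round_parts
```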
  • the function of the error removal process may be set to be valid (ON) or invalid (OFF) by an instruction of an operator (such as a doctor).
  • Alternatively, an operator (such as a doctor) may designate a portion to be removed from the superimposed image (designation information) using, for example, the input device 40, and the control unit 101 may cause the image correction unit 1014 to delete the designated portion based on the designation information.
  • FIG. 9 is a photograph (image) showing an example of a superimposed image (background image + emphasized image of nerve) generated by the imaging process of nerve (for example, see FIG. 7) according to the present embodiment.
  • FIG. 9A is a photograph showing the facial part of a rat with the skin peeled off (note that the field of view and posture differ from those of FIGS. 9B and 9C).
  • FIG. 9B is a photograph showing an image in which a nerve-enhanced image, generated from infrared light images obtained by irradiating the rat's face with infrared light of continuous wavelengths in the 1290 nm to 1600 nm band (in practice, spectral data are acquired in 0.5 nm steps), is superimposed on an infrared light image (background image) captured at a wavelength of 1070 nm.
  • FIG. 9C is a photograph showing a visible image of a face portion of a rat with the same field of view and posture as FIG. 9B.
  • In Example 1, among the spectral data of the measured region (sampled image), green data are assigned to the top 10% of pixels whose luminance values are closest to the nerve teacher spectrum data (for example, the top 5%, 15%, or 20% may be used instead), i.e., the top 10% of the pixels whose SCM values are equal to or less than the threshold, to generate the enhanced image (see the sketch below).
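  • A non-limiting sketch of this top-fraction selection (Python; the names are illustrative):

```python
import numpy as np

def top_fraction_mask(scm, threshold, fraction=0.10):
    """Among the pixels whose SCM value is <= threshold, keep the given
    fraction (e.g. the top 10 %) with the smallest SCM values, i.e. those
    whose spectra are closest to the nerve teacher spectrum data."""
    candidates = scm <= threshold
    if not candidates.any():
        return candidates
    cutoff = np.quantile(scm[candidates], fraction)
    return candidates & (scm <= cutoff)
```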
  • Facial nerves, blood vessels, and subcutaneous fat are present in the facial part of the rat, and since both the facial nerves and the blood vessels appear whitish when viewed macroscopically, it may be difficult to judge the boundary between them.
  • In FIG. 9B, the portion corresponding to the facial nerve is visually highlighted in green, while the portions corresponding to the blood vessels and subcutaneous fat around the nerve region (the peripheral region of the nerve region) are not highlighted.
  • Since the operator (such as a doctor) can easily confirm the nerve portions in the sample by viewing, during the operation, a superimposed image including a nerve-enhanced image such as that of FIG. 9B displayed on the display device 50, the risk of nerve injury during surgery can be dramatically reduced.
  • FIG. 10 is a photograph (image) showing an image example of error removal by the error removal processing of the present embodiment.
  • FIG. 10A shows an image before error removal
  • FIG. 10B shows an image after error removal.
  • As shown in FIG. 10A, the synthesized superimposed image contains, in addition to the nerve portions, image regions other than the three mainly elongated shapes (the shapes of the nerve portions). These are removed as errors by the shape analysis described above. The image after this error removal process is shown in FIG. 10B: the portions determined to be errors in FIG. 10A have been removed, only the nerve-enhanced image remains in the superimposed image, and it can be seen that the accuracy (e.g., the identification accuracy) of the obtained nerve-enhanced image is improved.
  • FIG. 11 is a diagram showing a comparison result of spectral data of nerve and spectral data of other portions (eg, streaks).
  • FIG. 11A is a photograph (image) showing a portion believed to be a porcine sciatic nerve.
  • FIG. 11B is a graph showing spectral data (brightness values) with respect to each infrared light wavelength of two nerve parts (nerves 1 and 2) and a streak part of a pig.
  • The graph of FIG. 11B shows spectral data obtained by taking a portion believed to be the porcine sciatic nerve (the sample), irradiating it with infrared light in the 900 nm to 2500 nm wavelength range, capturing the reflected light from the sample with a hyperspectral camera, and plotting the spectral values (luminance values) corresponding to each wavelength.
  • The nerve parts (nerves 1 and 2) have spectral characteristics whose shapes differ from those of the streak part, while the shapes of the spectral characteristics of nerve 1 and nerve 2 are very similar to each other.
  • FIG. 12 is a photograph (image) showing a visible light image (FIG. 12A) of a portion considered to be the sciatic nerve of a pig according to the present embodiment, and a superimposed image (FIG. 12B) in which a nerve-enhanced image, generated by performing the SCM calculation using the spectrum data of FIG. 11, is superimposed on a background image, for example an infrared light image obtained by irradiating infrared light of wavelength 1070 nm. As shown in FIG. 12B, the nerve portion (nerve region) is highlighted separately from the streak portion (the peripheral region of the nerve region).
  • In this way, image analysis such as SCM is performed using the teacher spectrum (reference spectrum) of the nerve set in the infrared light region.
  • When infrared light of predetermined wavelengths is used for imaging a nerve, as in the present embodiment, differences arise in the optical characteristics (spectral characteristics, absorption characteristics, reflection (scattering) characteristics, etc.) between different substances (e.g., water and lipids, or water and hemoglobin), depending on the amount of water, the amount of lipid, the amount of hemoglobin in the blood, and so on.
  • The imaging system 1 positively utilizes this difference (the difference in optical characteristics), performing image analysis such as SCM using the teacher spectrum (reference spectrum) of the nerve set in the infrared light region.
  • This makes it possible to exclude other substances and selectively extract nerves to emphasize the nerves in the captured image.
  • With the device and method according to the present embodiment, it is possible to identify a nerve part in an unknown sample and highlight the nerve part in distinction from other parts.
  • Spectral data of a nerve can be extracted through an experiment as in Example 3, and the nerve spectral data extracted in this way can be used as the above-mentioned nerve teacher spectral data.
  • The first embodiment discloses that a hyperspectral camera is used as the second imaging device 32 and that spectrum data are acquired continuously in a specific wavelength region to specify a nerve.
  • In the second embodiment, infrared light of a plurality of types of wavelengths (e.g., a first wavelength, a second wavelength different from the first wavelength, and so on, with different wavelengths for each living tissue to be specified) is emitted, and a normal infrared camera (a camera that does not require wavelength resolution or scanning like a hyperspectral camera; an InGaAs sensor or the like) is used.
  • The configuration of the imaging system (surgery support system) 1 is the same as in the first embodiment, except that the infrared light source unit 20 has light sources for emitting (radiating) infrared light of a plurality of types of wavelengths and that a normal infrared light camera is used as the second imaging device 32, so the description thereof is omitted.
  • FIG. 13 is a flowchart for explaining the contents of the nerve imaging process according to the second embodiment.
  • Although each processing unit (light irradiation control unit 1011, image generation unit 1012, enhanced image generation unit 1013) is described as executing the processing of each step, each processing unit is a function included in the control unit 101, so the control unit 101 may equally be regarded as the operating subject.
  • The light irradiation control unit 1011 controls the infrared light source 21 or 22 of the infrared light source unit 20 that emits the wavelength λk so as to emit infrared light of the wavelength λk (for example, an individual wavelength such as 950 nm, 970 nm, 1000 nm, 1070 nm, 1100 nm, 1300 nm, 1450 nm, 1550 nm, or 1600 nm).
  • Here, the infrared light sources are switched to output infrared light of different wavelengths; however, for example, an infrared light source that emits infrared light of a predetermined wavelength band may be used, and infrared light of a specific wavelength may be extracted by filtering the infrared light emitted from that source.
  • The image generation unit 1012 controls the second imaging device 32 (for example, a normal infrared light camera) to detect (acquire) the spectral data (luminance values) for the wavelength λk from the infrared light emitted (reflected) from the sample when the sample is irradiated with infrared light of the wavelength λk.
  • Step 1304: The image generation unit 1012 stores the spectrum data acquired in step 1303 in the storage unit 102 (a sketch of this acquisition loop follows).
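  • As a non-limiting sketch of steps 1301 to 1304 (Python; light_source and camera are hypothetical interfaces standing in for the light irradiation control unit 1011 and the second imaging device 32, and the wavelength list follows the examples above):

```python
import numpy as np

WAVELENGTHS_NM = [950, 970, 1000, 1070, 1100, 1300, 1450, 1550, 1600]

def acquire_spectral_stack(light_source, camera, wavelengths=WAVELENGTHS_NM):
    """Capture one infrared frame per discrete wavelength and stack them
    into an (H, W, K) array analogous to a hyperspectral cube."""
    frames = []
    for wl in wavelengths:
        light_source.set_wavelength(wl)  # switch or filter the source to wl
        light_source.on()
        frames.append(camera.capture())  # (H, W) luminance at wavelength wl
        light_source.off()
    return np.stack(frames, axis=-1)     # stored per wavelength (step 1304)
```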
  • The enhanced image generation unit 1013 reads, from the storage unit 102, the nerve teacher spectrum data corresponding to at least the wavelength λk.
  • The enhanced image generation unit 1013 calculates the SCM (Spectral Correlation Mapper) value of each pixel in the captured infrared light image of the sample.
  • Specifically, the enhanced image generation unit 1013 substitutes the spectral data values of the sample at each wavelength (the measured spectral values t_λ of the sample) and the teacher spectral data values of the corresponding wavelengths read in step 1305 (the reference spectral values r_λ) into equation (1), as in the first embodiment, to calculate the SCM value.
  • The smaller the SCM value, the stronger the correlation between the sample spectral data and the nerve teacher spectral data.
  • the correlation is determined using the SCM method, but the correlation may be determined using multivariate analysis (eg, SAM, MLR, PCR, PLS, etc.).
  • Next, the enhanced image generation unit 1013 compares each SCM value calculated in step 1306 with a preset threshold, and generates a nerve-enhanced image using the pixel data whose SCM values are equal to or less than the threshold. For example, the enhanced image generation unit 1013 can generate the enhanced image by assigning predetermined color data (for example, a color such as green) to the pixels whose SCM values are equal to or less than the threshold. The enhanced image generation unit 1013 can generate the enhanced image as a binarized image, a graded image (a single-color scale image), or the like.
  • The enhanced image generation unit 1013 may simply perform binarization processing in which a color such as green is assigned to each pixel whose SCM value is equal to or less than the threshold, or it may generate a graded image using, for example, gradation values consisting of 8 bits.
  • The enhanced image is an image obtained by extracting the pixels whose spectral data have a strong correlation with the nerve teacher spectrum data from among the spectral data of each pixel obtained by irradiating the sample with infrared light of predetermined wavelengths; that is, it is an image showing the places that are very likely to be nerves.
  • the image generation unit 1012 acquires the enhanced image from the enhanced image generation unit 1013, superimposes the separately acquired background image and the enhanced image while aligning them, and generates a superimposed image.
  • the control unit 101 acquires a superimposed image from the image generation unit 1012 and instructs the display device 50 (for example, the monitor 2_52) to display the superimposed image.
  • The display device 50 receives, from the control unit 101, the superimposed image generated by superimposing the enhanced image on the background image, and displays the superimposed image on its display screen.
  • the image generation unit 1012 may use, for example, a visible light image acquired by the first imaging device 31 or an infrared light image acquired by the second imaging device 32 as the background image.
  • When an infrared light image is used as the background image, for example, an infrared light image of a specific wavelength (for example, 1070 nm) acquired by the second imaging device (infrared light camera) 32 can be used.
  • FIG. 14 is a view showing an example of a wavelength setting GUI (Graphical User Interface) 1400 for setting a wavelength emitted (emitted) from a light source in the infrared light source unit 20 according to the present embodiment.
  • The wavelength setting GUI 1400 includes, for example, a wavelength band setting area 1408 for setting the wavelength band of the infrared light to be irradiated, a save button 1409 for saving the setting result, and an end button 1410 for closing the wavelength setting GUI.
  • a parameter adjustment unit capable of adjusting “brightness / contrast” or “gamma value” may be displayed on the display device as a GUI.
  • the individual wavelength setting unit 1401 and the continuous wavelength setting unit 1402 are alternatively used.
  • one of the setting units can be selected by a radio button.
  • In the individual wavelength setting unit 1401, the wavelength value of each of the k infrared light sources 21 to 22 included in the infrared light source unit 20 can be input. Alternatively, the control unit 101 may automatically input the wavelength value of each infrared light source, or may automatically input the value of the wavelength band of the infrared light.
  • The control unit 101 reads the wavelength value of each light source set in the GUI 1400 and transmits, to a drive unit (not shown) of the infrared light source unit 20, the wavelength value of the light to be emitted (radiated) by each light source. The drive unit applies a voltage to the infrared light sources 21 and 22 under the control of the control unit 101, causing them to emit (radiate) light.
  • the control unit 101 transmits, for example, the timing at which the infrared light source 22 emits (radiates) light of each wavelength and the light emission (emission) time to a drive unit (not shown) of the infrared light source unit 20.
  • the infrared light source unit 20 is controlled so that light of a plurality of wavelengths is periodically emitted (radiated) from the infrared light source 22.
  • Although the imaging system 1 here is provided with a plurality of light sources each emitting infrared light of a single wavelength, which are switched to irradiate the sample, infrared light of a specific wavelength band may instead be emitted from a single light source, and infrared light of a desired wavelength may be selectively extracted by a filter to irradiate the sample.
  • The GUI 1400 may display the individual wavelength setting unit 1401 and the continuous wavelength setting unit 1402 simultaneously, or may display them separately based on the initial setting.
  • FIG. 15 is a photograph (image) comparing an example of a superimposed image generated by the nerve imaging processing of the device and method according to the second embodiment (FIG. 15A) with an example of a superimposed image generated by the device and method according to the first embodiment (FIG. 15B). Note that the error removal processing (see FIG. 8) is not performed on either image.
  • In FIG. 15A, infrared light of four wavelengths of 1000 nm, 1300 nm, 1450 nm, and 1600 nm is irradiated onto the face portion of the mouse, the reflected light is imaged with a normal infrared light camera (e.g., an InGaAs sensor), an enhanced image is generated based on the spectral data corresponding to each detected wavelength, and the image obtained by superimposing it on a background image obtained by irradiating 1070 nm infrared light is shown.
  • FIG. 15B shows an image obtained by irradiating infrared light in a wavelength band of 1290 nm to 1600 nm onto the face portion of the mouse, scanning the reflected light every 0.5 nm (continuous wavelengths) with a hyperspectral camera, generating an enhanced image based on the spectral data detected for each wavelength, and superimposing the enhanced image on a background image obtained by irradiating 1070 nm infrared light.
  • Thus, the imaging system 1 can easily perform nerve imaging (indicating the position and course of the nerve) not only with still images but also with moving images during surgery.
  • Moreover, the imaging system 1 can be realized as an inexpensive and user-friendly system by using a normal infrared light camera (an infrared light camera that can capture moving images and does not require wavelength resolution or scanning), which is cheaper than a hyperspectral camera.
  • FIG. 16 is a photograph (image) showing a comparison of nerve imaging results according to Example 5 (based on the method of the second embodiment) when the wavelength combination is changed within the wavelength range of 900 nm to 2500 nm (an enhanced image of the nerve is generated from the spectral data of each wavelength and superimposed on a background image (a 1070 nm infrared light image)). FIGS. 16A to 16C show the nerve imaging results when continuous wavelengths are used, and FIG. 16D shows the nerve imaging result when infrared light of plural discrete wavelengths (900 nm, 1000 nm, 1100 nm, 1150 nm, 1210 nm, 1300 nm, 1350 nm, 1400 nm, and 1650 nm) is used.
  • FIG. 16A shows the case where infrared light in the wavelength range of 1290 nm to 1600 nm is used, FIG. 16B the case of the wavelength range of 900 nm to 1600 nm, and FIG. 16C the case of the wavelength range of 900 nm to 2500 nm.
  • What can be said from FIGS. 16A to 16C is not that a wide wavelength band (wavelength range) of infrared light (that is, a large number of spectral data) necessarily gives a good nerve imaging result, but that a good nerve imaging result can be obtained even if the wavelength band of the infrared light is narrowed.
  • Rather, the results of FIG. 16A (wavelength range: 1290 nm to 1600 nm) are clearly better than the results of FIG. 16B (wavelength range: 900 nm to 1600 nm) or FIG. 16C (wavelength range: 900 nm to 2500 nm). Further, when this result is considered together with the experimental result shown in Example 3, the result of FIG. 16D may be lower in analysis accuracy than the result of FIG. 16A because the number of data is small, but its accuracy is almost the same as the results of FIG. 16B and FIG. 16C.
  • That is, if the wavelengths of the infrared light used for nerve imaging are appropriately selected, processing that can withstand practical use can be realized while improving the processing speed, even if the number of data is reduced.
  • Although the result of FIG. 15A uses fewer data than the results of FIGS. 16A to 16D, its analysis accuracy is better than the results of FIGS. 16B and 16C. Therefore, the result of FIG. 15A also shows that, if the wavelengths of the infrared light used for nerve imaging are appropriately selected, processing that can withstand practical use can be realized while improving the processing speed even if the number of data is reduced.
  • In the third embodiment, a process is disclosed in which a sample is irradiated with infrared light of a plurality of types of wavelengths, radiated (reflected) light from the sample is detected to acquire infrared light images of the sample (infrared light images corresponding to the plurality of wavelengths), and a nerve portion is identified by switching and displaying the plurality of infrared light images.
  • FIG. 17 is a view showing a configuration example of the imaging system (surgery support system) 1 according to the third embodiment.
  • The imaging system (surgery support system) 1 according to the third embodiment is the same as that of the first embodiment except that the infrared light source unit 20 has light sources that emit (radiate) infrared light of a plurality of types of wavelengths, that a normal infrared light camera is used as the second imaging device 32, and that the enhanced image generation unit 1013 and the image correction unit 1014 are not provided. Since no enhanced image is generated, the image generation unit 1012 does not generate the superimposed image described above.
  • FIG. 18 is a flowchart for explaining the contents of the nerve imaging process according to the third embodiment.
  • In the following, each processing unit (the light irradiation control unit 1011 and the image generation unit 1012) is described as executing the processing of each step; however, since each processing unit is a function included in the control unit 101, the control unit 101 may equally be regarded as the operation subject.
  • Step 1802: The light irradiation control unit 1011 controls the infrared light sources 21 to 22 of the infrared light source unit 20 so that the sample is irradiated with infrared light of wavelength λk (for example, any discrete wavelength among 950 nm, 1070 nm, 1300 nm, 1450 nm, and 1600 nm).
  • Here, infrared light of different wavelengths is output by switching the infrared light sources; alternatively, the infrared light from a given source may be filtered to extract infrared light of a specific wavelength.
  • Step 1803: The image generation unit 1012 controls the second imaging device 32 (for example, a normal infrared light camera) to image the infrared light emitted (reflected) from the sample irradiated with infrared light of wavelength λk, detects (acquires) the spectral data (luminance values) for the wavelength λk, and generates an infrared light image.
  • Step 1804: The image generation unit 1012 stores the infrared light image acquired in step 1803 in the storage unit 102. Accordingly, once the sample has been irradiated with infrared light of all the wavelengths, a plurality of infrared light images corresponding to the respective wavelengths have been generated.
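For illustration, steps 1802 to 1804 can be read as a simple per-wavelength acquisition loop. The sketch below assumes hypothetical `source` and `camera` objects (the patent names no API) and stores one infrared light image per wavelength.

```python
import numpy as np

def acquire_per_wavelength(source, camera, wavelengths_nm):
    """Steps 1802-1804 as a loop: irradiate the sample at each wavelength,
    capture one infrared light image, and store it keyed by wavelength."""
    images = {}
    for nm in wavelengths_nm:
        source.set_wavelength(nm)        # step 1802: irradiate with lambda_k
        frame = camera.capture()         # step 1803: image the reflected light
        images[nm] = np.asarray(frame)   # step 1804: store the per-wavelength image
    return images

# e.g. images = acquire_per_wavelength(source, camera, [950, 1070, 1300, 1450, 1600])
```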
  • Step 1805: The control unit 101 reads the infrared light images generated in steps 1801 to 1804 from the storage unit 102, transfers these images to the display device 50, and instructs the display device 50 to display the plurality of infrared light images (e.g., at least two infrared light images) while sequentially switching them based on a predetermined timing (e.g., a switching signal generated at each timing).
  • The display device 50 receives the plurality of images from the control unit 101 and displays them on the monitor 2_52 while switching them every predetermined time (for example, every 0.5 seconds, 1 second, 2 seconds, or 3 seconds).
  • Alternatively, the control unit 101 may cause the display device 50 to switch the images at any timing at which a switching signal is received from an input unit (screen switching device) such as a foot switch (not shown) operated by the operator (a doctor or the like).
  • The display device 50 also receives a visible light image of the same sample from the control unit 101 and displays it on the monitor 1_51.
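A minimal sketch of the step-1805 switching display is shown below, assuming a hypothetical `display.show(image)` call; in the system described here, the control unit 101 would instead emit switching signals to monitor 2_52.

```python
import itertools
import time

def switch_display(display, images_by_nm, interval_s=1.0, rounds=5):
    """Cycle the per-wavelength infrared light images on one screen,
    switching at a predetermined interval (or on an external signal)."""
    order = sorted(images_by_nm)  # e.g. [950, 1300]
    for nm in itertools.islice(itertools.cycle(order), rounds * len(order)):
        display.show(images_by_nm[nm])  # display the image for this wavelength
        time.sleep(interval_s)          # predetermined switching time
```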
  • As described above, according to the imaging system 1 of the third embodiment, displaying on the screen, while switching, a plurality of images obtained by irradiating infrared light of a specific plurality of wavelengths makes it easy to identify the location of the nerve.
  • In addition, since the nerve teacher spectral data is not used and no superimposed image is generated, the processing is simple and an inexpensive imaging system 1 can be realized.
  • FIG. 19 and FIG. 20 are photographs showing examples of nerve imaging results obtained based on the method of the third embodiment.
  • FIG. 19 shows, for the face portion of a mouse, an image from an ordinary digital camera (digital camera image), an infrared light image obtained by irradiating infrared light with a wavelength of 1300 nm, and a visible light image.
  • FIG. 20 shows, for the face portion of the mouse, an ordinary digital camera image, an infrared light image obtained by irradiating infrared light with a wavelength of 950 nm, and a visible light image.
  • In the digital camera image, the two white lines in the middle are nerves. On the other hand, the location of the nerve cannot be discerned in the visible light image of the face portion; although blood-vessel-like parts can be seen in the visible light image, it cannot be said that the blood vessels are clearly identifiable.
  • Comparing the 1300 nm infrared light image (FIG. 19) with the 950 nm infrared light image (FIG. 20), the amount of light absorption hardly differs between the nerve portion and the blood vessel portion in the 1300 nm infrared light image, whereas in the 950 nm infrared light image the blood vessel portion absorbs light strongly and the nerve portion absorbs less light. Thus, it can be seen that when the wavelengths of the irradiated infrared light differ, a large difference arises between the two infrared light images. Even with this difference, in both images the blood vessel portion and the nerve portion can be extracted, and the other portions can be distinguished from them (the blood vessel portion and the nerve portion).
  • From this, it can be seen that the imaging system 1 of the present embodiment can specify the positions of the blood vessel portion and the nerve portion, or specify the position of the nerve portion (or the blood vessel portion) out of the two. Furthermore, by referring to the infrared light image in the upper stage, the imaging system 1 can also confirm that there is a blood vessel near the nerve.
  • As described above, the imaging system 1 displays, while switching, a plurality of infrared light images acquired by irradiating infrared light of a plurality of wavelengths. For example, when the 950 nm infrared light image (FIG. 20) and the 1300 nm infrared light image (FIG. 19) are alternately displayed on one screen of the display device 50, there are parts that are difficult to distinguish in the relatively bright 950 nm infrared light image but easy to distinguish in the relatively dark 1300 nm infrared light image, and conversely, parts that are difficult to distinguish in the relatively dark 1300 nm infrared light image but easy to distinguish in the relatively bright 950 nm infrared light image. In this way, the imaging system 1 (for example, the control unit 101) makes it possible to distinguish between blood vessels and nerves by performing control to switch and display infrared light images of a plurality of wavelengths on the screen.
  • In the fourth embodiment, a process is disclosed in which a sample is irradiated with infrared light of a plurality of types of wavelengths, radiated (reflected) light from the sample is detected to acquire infrared light images of the sample (infrared light images corresponding to the plurality of wavelengths), and a nerve portion is identified by generating and displaying difference images of the plurality of infrared light images.
  • FIG. 21 is a flowchart for explaining the contents of the nerve imaging process according to the fourth embodiment.
  • In the following, each processing unit (the light irradiation control unit 1011 and the image generation unit 1012) is described as executing the processing of each step; however, since each processing unit is a function included in the control unit 101, the control unit 101 may equally be regarded as the operation subject.
  • Step 2102: The light irradiation control unit 1011 controls the infrared light sources 21 to 22 of the infrared light source unit 20 so that the sample is irradiated with infrared light of wavelength λk (for example, any discrete wavelength among 950 nm, 1070 nm, 1300 nm, 1450 nm, and 1600 nm).
  • Here, the imaging system 1 outputs infrared light of different wavelengths by switching the infrared light sources; alternatively, for example, an infrared light source that emits infrared light of a predetermined wavelength band may be used, and infrared light of a specific wavelength may be extracted by filtering the infrared light emitted from that source.
  • Step 2103: The image generation unit 1012 controls the second imaging device 32 (for example, a normal infrared light camera) to image the infrared light emitted (reflected) from the sample irradiated with infrared light of wavelength λk, detects (acquires) the spectral data (luminance values) for the wavelength λk, and generates an infrared light image.
  • Step 2104: The image generation unit 1012 stores the infrared light image acquired in step 2103 in the storage unit 102. Accordingly, once the sample has been irradiated with infrared light of all the wavelengths, a plurality of infrared light images corresponding to the respective wavelengths have been generated.
  • Step 2105: The control unit 101 reads the infrared light images generated in steps 2101 to 2104 from the storage unit 102 and generates difference images of the respective infrared light images. If the number of generated infrared light images is N (infrared light of N types of wavelengths is irradiated to obtain N infrared light images), (N × (N − 1)) difference images are generated.
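As a sketch of step 2105 (under the assumption that each image is a 2-D array of luminance values), signed, ordered differences reproduce the (N × (N − 1)) count stated above:

```python
import numpy as np
from itertools import permutations

def difference_images(images_by_nm):
    """Generate one signed difference image per ordered pair of wavelengths,
    so N input images yield N * (N - 1) difference images."""
    diffs = {}
    for a, b in permutations(sorted(images_by_nm), 2):
        img_a = images_by_nm[a].astype(np.int32)  # widen to avoid unsigned underflow
        img_b = images_by_nm[b].astype(np.int32)
        diffs[(a, b)] = img_a - img_b
    return diffs

# e.g. d = difference_images({950: img_950, 1300: img_1300})
# d[(1300, 950)] highlights parts that absorb differently at the two wavelengths.
```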
  • Step 2106: The control unit 101 transfers the difference images to the display device 50 and instructs the display device 50 to display them. For example, when there are a plurality of difference images, the control unit 101 may instruct switching display, or the operator (such as a doctor) may designate a difference image of particular wavelengths.
  • The display device 50 receives the difference image from the control unit 101 and displays it on the monitor 2_52.
  • The display device 50 also receives a visible light image of the same sample from the control unit 101 and displays it on the monitor 1_51.
  • As described above, the imaging system 1 according to the fourth embodiment generates a difference image (difference infrared light image) from a plurality of infrared light images obtained by irradiating infrared light of a specific plurality of wavelengths and displays it on the screen, whereby characteristic portions such as nerve portions in the sample can be highlighted and their locations easily identified. Also in the fourth embodiment, as in the third embodiment, since the nerve teacher spectral data is not used and no superimposed image is generated, the processing is simple and an inexpensive imaging system 1 can be realized.
  • FIG. 22 is a photograph showing an example of a nerve imaging result (the face portion of a mouse) obtained based on the method of the fourth embodiment according to an example.
  • FIG. 22 shows a difference image between the 1300 nm infrared light image (upper part of FIG. 19) and the 950 nm infrared light image (upper part of FIG. 20).
  • From FIG. 22, the imaging system 1 can selectively distinguish the nerve portion and the blood vessel portion from the other portions, and can also selectively distinguish the nerve portion (or the blood vessel portion) from the other of the two. As described above, by generating and displaying the difference between infrared light images of different wavelengths, the imaging system 1 can clearly distinguish and display the characteristic portions (the nerve portion and the blood vessel portion) and the other portions. In addition, the characteristic portions themselves differ in appearance on the difference image, so that, for example, the nerve portion and the blood vessel portion can be distinguished from each other. Therefore, an operator (such as a doctor) can work while checking the position of the nerve portion on the difference image during surgery, and can avoid the risk of damaging the nerve during the operation.
  • Although the above embodiments have been described in terms of the imaging system (surgery support system) 1, the functions of the imaging system 1 may be configured to be executed by another device (imaging device).
  • For example, an imaging device can be configured from the imaging unit (light detection unit) 30 and the control unit 101 (control device 10) that generates nerve feature data, that is, data specifying the position of a nerve using the detected infrared light (measurement result), such as an enhanced image of the nerve or information indicating the position of the nerve itself.
  • Here, the nerve feature data is data in a form that enables a nerve portion to be discriminated from sites other than the nerve portion (including blood vessels); for example, it includes the "enhanced image" or "infrared light image" representing the nerve portion used in the above-described embodiments and examples, or the positional information of the pixels constituting the enhanced image.
  • That is, an imaging apparatus is disclosed that includes a light detection unit (imaging unit) that acquires a measurement result (detection result) obtained by irradiating a sample with infrared light of a wavelength suitable for identifying a nerve (infrared light of a single wavelength or infrared light having a predetermined wavelength band), and a control unit that generates, from the measurement result, a nerve feature image specifying the position of the nerve in the sample.
  • The light detection unit may acquire a plurality of measurement results (detection results) obtained by irradiating the sample with infrared light of a plurality of types of wavelengths (a plurality of types of single-wavelength infrared light having mutually different wavelengths, or infrared light having a predetermined wavelength band), and the control unit may generate a plurality of nerve feature images from the plurality of measurement results.
  • The control unit may output a control signal for causing the display device to display the plurality of nerve feature images while switching them, or the control unit may generate a difference image of the plurality of nerve feature images and output a control signal for displaying the difference image on the display device.
  • The imaging device can also generate the nerve feature data using nerve teacher data indicating spectral data of nerves.
  • For example, the control unit reads, from the storage unit, nerve teacher data indicating spectral data of nerves in the infrared light region, calculates correlation values between the nerve teacher data and the plurality of measurement results, and generates the nerve feature image of the sample based on the correlation values. In this way, the position of the nerve can be accurately identified by determining, for the measurement data, the presence or absence of correlation with the nerve teacher data indicating the features of the nerve.
  • Further, the generated nerve feature image may be superimposed on a background image (for example, a visible light image or an infrared light image) representing the sample (an image of the surgical field of a living body) to generate a superimposed image.
  • By doing so, the imaging device can clearly show the position of the nerve in the sample as an image, and the operator (for example, a doctor) can easily confirm it.
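As an illustration of the superimposition (not the patent's specific rendering), a binary nerve feature mask can be tinted onto a grayscale background image; the color choice mirrors the changeable display color mentioned elsewhere in this disclosure.

```python
import numpy as np

def superimpose(background_gray, nerve_mask, color=(0, 255, 0)):
    """background_gray: HxW uint8 image; nerve_mask: HxW boolean array.
    Returns an HxWx3 color image with nerve pixels tinted (default green)."""
    rgb = np.stack([background_gray] * 3, axis=-1).astype(np.uint8)
    rgb[nerve_mask] = color  # overlay the nerve feature image on the background
    return rgb
```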
  • Further, the control unit calculates SCM (Spectral Correlation Mapper) values using the nerve teacher data and the plurality of measurement results. The control unit then determines that a measurement result whose SCM value is equal to or less than a predetermined threshold is highly correlated with the nerve teacher data, and generates the nerve feature image. Since the correlation with the nerve teacher data can be determined accurately by performing the calculation using the SCM in this way, the obtained nerve feature image accurately represents the position of the nerve.
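The exact SCM formula is not given in this passage; the sketch below assumes the commonly used Spectral Correlation Mapper (the arccosine of the mean-centered, i.e., Pearson, correlation), under which smaller values mean higher correlation, consistent with the "equal to or less than a predetermined threshold" criterion above. The threshold value is a placeholder.

```python
import numpy as np

def scm_value(spectrum, teacher):
    """Assumed SCM: angle (radians) between mean-centered spectra;
    0 means perfectly correlated with the nerve teacher data."""
    s = spectrum - spectrum.mean()
    t = teacher - teacher.mean()
    r = np.dot(s, t) / (np.linalg.norm(s) * np.linalg.norm(t))
    return np.arccos(np.clip(r, -1.0, 1.0))

def nerve_mask(cube, teacher, threshold=0.15):
    """cube: HxWxB spectral measurements; teacher: length-B nerve teacher spectrum.
    Pixels whose SCM value is <= threshold are marked as nerve."""
    h, w, b = cube.shape
    values = np.array([scm_value(px, teacher) for px in cube.reshape(-1, b)])
    return (values <= threshold).reshape(h, w)
```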
  • As the imaging unit, a hyperspectral camera that obtains spectral values of an imaging target by wavelength decomposition can be used.
  • Alternatively, as the imaging unit, an infrared camera that acquires spectral values of an imaging target for infrared light of a single wavelength without wavelength decomposition (for example, an InGaAs infrared camera sensitive to infrared light) can be used.
  • Here, the hyperspectral camera includes a camera that disperses infrared light in a predetermined wavelength band (for example, 800 nm to 2500 nm) with a spectroscope, receives light of N wavelengths, and can obtain an infrared light image for each of the N wavelengths. That is, the hyperspectral camera can acquire three-dimensional infrared spectral data consisting of the surface position (each pixel) and the wavelength direction within the predetermined wavelength band.
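For concreteness, the three-dimensional spectral data described above can be pictured as a cube indexed by pixel position and wavelength; the array sizes and band count below are arbitrary assumptions.

```python
import numpy as np

H, W, N = 256, 320, 64                        # assumed sensor size and band count
cube = np.zeros((H, W, N), dtype=np.float32)  # position (y, x) by wavelength

wavelengths = np.linspace(800.0, 2500.0, N)   # e.g. the 800 nm - 2500 nm band
pixel_spectrum = cube[120, 200, :]            # spectrum at one surface position
band_image = cube[:, :, 10]                   # infrared light image for one wavelength
```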
  • The control unit may perform shape analysis on the nerve feature image and, based on the result, determine whether a site other than the nerve (e.g., a blood vessel, fat, etc.) is included in the nerve feature image as noise. When a site other than the nerve is included as noise, the control unit removes the site other than the nerve (in this case, the noise) specified by the shape analysis of the image from the nerve feature image using a filter or the like. By doing so, the imaging device can generate a nerve feature image that is easier to view.
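The patent does not specify the noise-removal filter; as one hedged possibility, a connected-component shape test can keep elongated, nerve-like regions and drop compact blobs (for example, fat) from a binary nerve feature mask. The elongation threshold is an assumption.

```python
import numpy as np
from scipy import ndimage

def remove_non_nerve_noise(mask, min_elongation=3.0):
    """Keep only elongated connected components of the binary nerve mask;
    compact components are treated as non-nerve noise and removed."""
    labeled, n = ndimage.label(mask)
    cleaned = np.zeros_like(mask)
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labeled == i)
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        if max(h, w) / min(h, w) >= min_elongation:  # elongated -> likely nerve
            cleaned[labeled == i] = True
    return cleaned
```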
  • The functions of the present embodiment can also be realized by software program code.
  • In that case, a storage medium storing the program code is provided to a system or apparatus, and a computer (or a CPU or MPU) of the system or apparatus reads out the program code stored in the storage medium.
  • In this case, the program code itself read out from the storage medium implements the functions of the above-described embodiments, and the program code itself and the storage medium storing it constitute the present embodiment.
  • As a storage medium for supplying such program code, for example, a flexible disk, a CD-ROM, a DVD-ROM, a hard disk, an optical disk, a magneto-optical disk, a CD-R, a magnetic tape, a non-volatile memory card, a ROM, or the like is used.
  • Further, based on the instructions of the program code, an operating system (OS) or the like running on the computer may perform part or all of the actual processing, and the processing of the above-described embodiments may be realized by that processing. Furthermore, after the program code read out from the storage medium is written into memory on the computer, the CPU of the computer or the like may perform part or all of the actual processing based on the instructions of the program code, and the processing of the above-described embodiments may be realized by that processing.
  • In addition, the software program code that implements the functions of the embodiment may be stored in storage means such as a hard disk or memory of a system or apparatus, or in a storage medium such as a CD-RW or CD-R, and the computer (or CPU or MPU) of the system or apparatus may read out and execute the program code stored in the storage means or storage medium at the time of use.
  • 10 control device, 20 infrared light source unit, 21 first infrared light source, 22 second infrared light source, 30 imaging unit, 31 first imaging device, 32 second imaging device, 40 input device, 50 display device, 51 monitor 1, 52 monitor 2, 60 surgical shadow lamp, 80 living body, 101 control unit, 1011 light irradiation control unit, 1012 image generation unit, 1013 enhanced image generation unit, 1014 image correction unit, 102 storage unit

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Multimedia (AREA)
  • Robotics (AREA)
  • Optics & Photonics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)

Abstract

An image capable of identifying neural parts in a sample is generated. The image capture device according to this embodiment comprises a light detection unit for acquiring a measurement result obtained by irradiating a sample with infrared light of a wavelength suitable for identifying nerves, and a control unit for generating, from the measurement result, a neural feature image that identifies the position of nerves in the sample.

Description

Imaging device, imaging system, and imaging method
The present disclosure relates to an imaging device, an imaging system, and an imaging method.
For example, Patent Document 1 discloses imaging a living tissue, extracting positional information of blood vessels of the living body based on the captured image, and projecting an image in which the blood vessels are emphasized onto the living tissue. However, even with the technique of Patent Document 1, it is not possible to identify nerves in living tissue or to make the nerves easier to see.
JP 2006-102360 A
According to the present embodiment, an imaging device is provided that includes a light detection unit that acquires a measurement result obtained by irradiating a sample with infrared light of a wavelength suitable for identifying a nerve, and a control unit that generates, from the measurement result, a nerve feature image that specifies the position of the nerve in the sample.
According to the present embodiment, an imaging method is provided that includes a light detection unit acquiring a measurement result obtained by irradiating a sample with infrared light of a wavelength suitable for identifying a nerve, and a control unit generating, from the measurement result, a nerve feature image that specifies the position of the nerve in the sample.
According to the present embodiment, an imaging device is provided that includes a light detection unit that acquires a plurality of measurement results obtained by irradiating a sample with a plurality of infrared lights suitable for identifying a nerve, and a control unit that generates, by computation using the plurality of measurement results, an enhanced image of the nerve in which the position of the nerve in the sample is emphasized.
Further, according to the present embodiment, an imaging system is provided that includes the imaging device of the above aspect and a display device that displays the nerve feature image.
FIG. 1 is a diagram for explaining an example of the appearance configuration of the imaging system (surgery support system) 1 according to the first embodiment.
FIG. 2 is a diagram showing an example of the functional block configuration of the imaging system 1 in the present embodiment.
FIG. 3A is a diagram showing a screen example in which a visible light image is displayed on the monitor 1_51 and a screen example in which a background image on which an enhanced image is superimposed is displayed on the monitor 2_52 (two-monitor configuration) in the present embodiment.
FIG. 3B is a diagram showing a screen example in which a visible light image and a background image on which an enhanced image is superimposed are displayed on the monitor 50 (one-monitor, two-screen configuration) in the present embodiment.
FIG. 4 is a diagram showing a schematic configuration (a configuration example using a mirror 33) of an optical system 300 that can be employed in the imaging unit 30 of the imaging system 1 in the present embodiment.
FIG. 5 is a diagram showing a schematic configuration (a configuration example using a dichroic mirror 38) of the optical system 300 that can be employed in the imaging unit 30 of the imaging system 1 in the present embodiment.
FIG. 6 is a diagram showing a configuration in which alignment marks are arranged around the surgical field (e.g., on a surgical cloth) in the present embodiment.
FIG. 7 is a flowchart for explaining the contents of the nerve imaging processing (processing of generating an enhanced image of a nerve in a sample and displaying it superimposed on a background image) in the imaging system (surgery support system) 1 of the present embodiment.
FIG. 8 is a flowchart for explaining the processing of removing an error portion from the superimposed image generated by the nerve imaging processing (FIG. 7) in the present embodiment.
FIG. 9 is a photograph showing an example of a superimposed image (background image + nerve enhanced image) generated by the nerve imaging processing (for example, see FIG. 7) according to the present embodiment.
FIG. 10 is a photograph showing an image example of error removal in the present embodiment.
FIG. 11 is a diagram showing a comparison result between the spectral data of a nerve and the spectral data of other sites (for example, streaks) in the present embodiment.
FIG. 12 is a photograph showing a visible light image (FIG. 12A) of a site considered to be the sciatic nerve of a pig in the present embodiment, and an image (FIG. 12B) in which an enhanced image of the nerve generated by performing SCM calculation using the spectral data of FIG. 11B is superimposed on a background image (for example, an infrared light image obtained by irradiating infrared light with a wavelength of 1070 nm).
FIG. 13 is a flowchart for explaining the contents of the nerve imaging processing according to the second embodiment.
FIG. 14 is a diagram showing an example of a wavelength setting GUI (Graphical User Interface) 1400 for setting the wavelength of the infrared light emitted (radiated) from a light source in the infrared light source unit 20 in the present embodiment.
FIG. 15 is a photograph showing a comparison between an example (FIG. 15A) of a superimposed image generated by performing the nerve imaging processing according to the method of the second embodiment and an example (FIG. 15B) of a superimposed image generated by performing the nerve imaging processing according to the method of the first embodiment.
FIG. 16 is a photograph showing a comparison of nerve imaging results (results obtained by generating an enhanced image of the nerve from the spectral data of each wavelength and superimposing it on a background image (a 1070 nm infrared light image)) when the wavelength combination is changed within the wavelength range of 900 nm to 2500 nm in the present embodiment.
FIG. 17 is a diagram showing a configuration example of the imaging system (surgery support system) 1 according to the third embodiment.
FIG. 18 is a flowchart for explaining the contents of the nerve imaging processing according to the third embodiment.
FIG. 19 is a photograph showing, for the face portion of a mouse in the present embodiment, an ordinary digital camera image, an infrared light image obtained by irradiating infrared light with a wavelength of 1300 nm, and a visible light image.
FIG. 20 is a photograph showing, for the face portion of the mouse in the present embodiment, an ordinary digital camera image, an infrared light image obtained by irradiating infrared light with a wavelength of 950 nm, and a visible light image.
FIG. 21 is a flowchart for explaining the contents of the nerve imaging processing according to the fourth embodiment.
FIG. 22 is a photograph showing an example of a nerve imaging result (the face portion of a mouse) obtained based on the method of the fourth embodiment.
Hereinafter, the present embodiment will be described with reference to the attached drawings. In the attached drawings, functionally identical elements may be denoted by the same reference numbers. Although the attached drawings show embodiments and implementation examples in accordance with the principles of the present disclosure, these are for the purpose of understanding the present disclosure and are in no way to be used for a limiting interpretation of the present disclosure. The description in this specification is merely typical exemplification and does not limit the scope of the claims or the application examples of the present disclosure in any sense.
While this embodiment is described in sufficient detail for those skilled in the art to practice the present disclosure, other implementations and forms are also possible, and changes in configuration and structure and the replacement of various elements are possible without departing from the scope and spirit of the technical idea of the present disclosure. Therefore, the following description should not be interpreted as limited thereto.
Furthermore, as described later, the present embodiment may be implemented by software running on a general-purpose computer, by dedicated hardware, or by a combination of software and hardware.
A. First Embodiment
<On Nerve Imaging>
In the present embodiment, imaging of nerves in living tissue is performed. Nerve imaging is performed for the following reasons. For example, when performing surgery (e.g., laparoscopic surgery, prostate surgery, large intestine surgery, etc.) on a specific part of a patient's body, and nerves important to that surgical site (for example, peripheral nerves such as the somatic nervous system, autonomic nervous system, or enteric nervous system, or central nerves) are present, damaging a nerve may change the patient's condition through post-operative complications, so damaging nerves must be avoided. In this respect, doctors and the like identify nerves based on anatomical knowledge and normally do not mistake a nerve for another site (for example, a blood vessel) or injure it; however, in order to pay close attention so as not to injure a nerve during surgery, they may wish to confirm in advance whether a specific site is a nerve before starting a surgical operation. On the other hand, there has conventionally been no idea or technique for directly specifying the position of a nerve in the surgical field and making that position easy to confirm (for example, imaging a site identified as a nerve so that it is easier to see, or displaying a site identified as a nerve with emphasis).
The present embodiment discloses a technique for directly specifying (identifying, estimating) a nerve in living tissue and superimposing an image of the specified nerve portion (for example, an enhanced image) on the image of the living tissue, in order to meet the need to specify (identify, estimate) nerves during surgery, pathological examination, and the like. Note that identification includes determining that a part of the sample is equivalent to a nerve, and specification includes designating the position of the nerve in the sample.
(1) First Embodiment
<Configuration of Imaging System 1>
FIG. 1 is a diagram for explaining an example of the appearance configuration of the imaging system 1 according to the first embodiment. The imaging system 1 is used, for example, for medical support such as pathological diagnosis support, clinical diagnosis support, observation support, and surgery support. As shown in FIG. 1, in the present embodiment, a surgery support system (a surgical imaging system, a medical support imaging system) will be described as an example of the imaging system 1.
The imaging system 1 includes, for example, a control device (control unit) 10 that controls the entire imaging system 1; an infrared light source unit 20 (reference numeral 20 is not shown in FIG. 1) including infrared light sources 21 and 22 that emit infrared light with which a living body 80 is irradiated (hereinafter, the imaging site of the living body 80 may be referred to as a "sample"); an imaging unit (light detection unit) 30 that images the light radiated from the living body 80; an input device 40 used when an operator (for example, a doctor) inputs various data and instruction commands to the control device 10; a display device (display unit) 50 including a monitor 1_51 (first display) and a monitor 2_52 (second display), which displays, for example, images captured by the imaging unit 30 and characters; and a surgical shadow lamp 60 communicably connected to the control device 10. The imaging system 1 can also be regarded as an imaging device including at least the imaging unit 30. The infrared light from each of the infrared light sources 21 to 22 of the infrared light source unit 20 may be configured, for example, to irradiate the sample with a uniform amount of light through a diffusion plate. For example, a diffusion plate can be attached to the light-emitting surface of each light source, or a diffusion plate can be arranged in the optical path between the infrared light source unit 20 and the living body 80. As a result, the sample is evenly irradiated with light, and it is possible to prevent illumination unevenness, shadows, and the like from occurring in the imaging range of the imaging unit 30 (for example, the range corresponding to the surgical field).
The living body 80 is, for example, a patient lying on an operating table 90. For example, an image of the surgical site of the living body (patient) 80 is captured by the imaging unit 30. The surgical site (the site to be imaged: imaging site) can also be referred to as an affected area, a sample, or a target.
FIG. 2 is a diagram showing an example of the functional block configuration of the imaging system 1. As described above, the imaging system 1 includes the control device 10, the infrared light source unit 20, the imaging unit 30, the input device 40, the display device (display unit) 50, the surgical shadow lamp 60, and a stage 70 (not shown in FIG. 1) that slides the imaging unit 30, for example, in the horizontal direction. The control device 10 is configured by, for example, a computer, and includes a control unit 101 configured by a processor or the like, and a storage unit 102 that stores various programs, parameters, imaging results, and the like. The control unit 101 reads various programs and parameters from the storage unit 102, expands the read programs in an internal memory (not shown), and executes the processing of the programs in accordance with instructions input from the input device 40 and the information processing sequences specified by the programs. The control unit 101 includes, for example, a light irradiation control unit 1011 that controls the irradiation of infrared light by the infrared light source unit 20; an image generation unit 1012 that acquires, from the imaging unit 30, the image data detected (captured) by the imaging unit 30 and generates the background image on which the enhanced image of the nerve is superimposed; an enhanced image generation unit 1013 that generates the enhanced image of the nerve from the image data acquired from the imaging unit 30; and an image correction unit 1014 that corrects an error image (an error of the enhanced image) in the image on which the enhanced image is superimposed. The storage unit 102 stores, for example, at least the programs corresponding to the light irradiation control unit 1011, the image generation unit 1012, the enhanced image generation unit 1013, and the image correction unit 1014. Note that, for example, the stage device 70 can move the imaging device 30 relative to the sample.
The infrared light source unit 20 includes the infrared light sources 21 and 22, which emit (radiate) infrared light in at least part of a wavelength band of, for example, 900 nm to 2500 nm (or 800 nm to 3000 nm, 900 nm to 1650 nm, 1000 nm to 1700 nm, etc.). Although FIG. 1 shows an example in which the infrared light source unit 20 is configured by two light sources, three or more light sources may be included. The infrared light source unit 20 can also be configured to generate light of a desired wavelength by, for example, splitting the broadband light emitted (radiated) from the infrared light sources 21 and 22 with an optical system and filtering each of the split lights with an optical filter arranged in the optical path. When the imaging unit 30 described later includes a hyperspectral camera as an imaging device, the radiated (reflected) light from the sample is split (wavelength-resolved) within the hyperspectral camera to detect spectral data (luminance values corresponding to each wavelength), so the light does not have to be split by the above optical system. Further, when a hyperspectral camera is used, either the camera or the sample (subject) needs to be moved relatively by a stage or the like; therefore, although not shown, the imaging system 1 of FIG. 1 includes moving means (a stage device) such as a stage for relatively moving the camera or the sample.
For example, the infrared light source 21 and the infrared light source 22 of the infrared light source unit 20 may be configured to emit (radiate) infrared light of mutually different wavelengths to irradiate the sample (living body 80), and the light sources may be switched for use. The wavelengths of the light emitted (radiated) by the infrared light source 21 and the infrared light source 22 included in the infrared light source unit 20 may be set by the operator. In the second embodiment, an example of a GUI (Graphical User Interface) for wavelength setting is described (see FIG. 14).
The imaging unit 30 includes, for example, a first imaging device (an imaging device having high detection sensitivity to light in the visible light region; a visible light camera) that captures a visible light image of the sample, and a second imaging device (an imaging device having high detection sensitivity to light in the infrared light region; an infrared light camera) that captures an infrared light image of the sample. The second imaging device can be, for example, a hyperspectral camera that splits (wavelength-resolves) the light radiated (reflected) from the sample when it is irradiated with infrared light in at least part of the 900 nm to 2500 nm wavelength band emitted (radiated) from the infrared light sources (infrared light source 21 and infrared light source 22), and detects the spectral data (luminance values) corresponding to each wavelength. As another form, the second imaging device may be a normal infrared light camera (an infrared light camera that captures images without wavelength decomposition) that acquires images by irradiating the sample with infrared light of a plurality of wavelengths and detecting the luminance (luminance values) of the light radiated from the sample. In this case, the plurality of wavelengths of infrared light to be irradiated are selected, for example, as a plurality of types of light (infrared light) from wavelengths of 900 nm to 2500 nm. As the first imaging device, for example, a silicon (Si) camera can be used. As the second imaging device, for example, an InGaAs camera (hyperspectral camera) using InGaAs (indium gallium arsenide) for the sensor can be used. As shown in FIG. 2, the optical axis of the first imaging device 31 and the optical axis of the second imaging device 32 do not have to be the same. In this case, however, in order to make the imaging axis (imaging field of view) of the first imaging device 31 the same as the imaging axis (imaging field of view) of the second imaging device 32, it is necessary to provide the stage 70 that slides the imaging unit 30, for example, in the horizontal direction. As described later with reference to FIG. 4, an optical system that makes the optical axes of both imaging devices coaxial may be provided in the imaging unit 30.
The input device 40 is configured by, for example, a keyboard, a mouse, a microphone, a touch panel, and the like, and is a device used when the operator (user) inputs instructions, parameters, and the like for causing the control device 10 to execute predetermined processing. Further, for example, by simply inserting a semiconductor memory such as a USB memory into an input port (not shown) provided in the control device 10, the control unit 101 of the control device 10 may automatically read data and instructions (instructions described according to a predetermined rule) from the semiconductor memory and execute the various programs.
The display device 50 receives, from the control device 10, the images generated by the control unit 101 (for example, the visible light image of the sample, the infrared light image of the sample, and the enhanced image of the nerve) and the corrected image (corrected enhanced image) obtained by the control unit 101 correcting an image (for example, the enhanced image of the nerve), and displays on its display screen a generated image (a visible light image or infrared light image of the sample), an image (composite image) in which the enhanced image of the nerve is superimposed on the visible light image and/or the infrared light image, or an image (composite image) in which the corrected enhanced image is superimposed on the visible light image and/or the infrared light image. The display device 50 includes, for example, the monitor 1_51 and the monitor 2_52. The monitor 1_51 displays, for example, a visible light image. The monitor 2_52 displays, for example, an image in which the enhanced image is superimposed on the visible light image and/or the infrared light image, or an image in which the corrected enhanced image is superimposed on the visible light image and/or the infrared light image. Note that the control unit 101 has, for example, a normal observation mode in which the generated image is displayed on the display screen (e.g., monitor 1_51, monitor 2_52) and an enhanced observation mode in which the composite image is displayed on the display screen, and can control the display device 50 to switch between the two display modes to display the target image on the display screen, or to simultaneously display the two target images based on the two modes.
FIG. 3A is a diagram showing a screen example in which a visible light image is displayed on the monitor 1_51 and a screen example in which a background image on which the enhanced image of the nerve is superimposed is displayed on the monitor 2_52 (an example of a two-monitor configuration). The image (background image) on which the enhanced image is superimposed can be an infrared light image or a visible light image. The visible light image displayed on the monitor 1_51 is, for example, a normal color image. Since the visible light image is, for example, an image obtained by imaging the living body site of the surgical field as it is, it may be difficult to distinguish a nerve from its surrounding tissue (e.g., blood vessels or streaks). On the other hand, according to the composite image of the enhanced image and the background image displayed on the monitor 2_52 (e.g., an image processed by combining them in mutually different colors), the image in which the nerve is emphasized is superimposed on the background image as the enhanced image, so an operator such as a doctor can easily visually confirm the position of the nerve in the living body site (morphological features) of the surgical field. Note that the display color of the enhanced image (e.g., a color different from the others, such as green or white) can be changed as appropriate, for example, by the operator.
The surgical shadow lamp 60 is a visible light source configured by a plurality of LED light sources or halogen light sources. The surgical shadow lamp 60 is very bright, for example up to 160,000 lux. While the surgical shadow lamp 60 is lit, a visible light image can be acquired by the first imaging device 31. On the other hand, since the surgical shadow lamp 60 is configured so as not to emit light in the infrared wavelength region, an infrared light image can be acquired by the second imaging device 32 both while the surgical shadow lamp 60 is lit and while it is off.
Although FIG. 3A shows a mode in which a visible light image is displayed on one of the two monitors and a background image on which the enhanced image of the nerve is superimposed is displayed on the other monitor, as shown in FIG. 3B, the visible light image and the background image on which the enhanced image is superimposed may be displayed on a single monitor 50 in a two-screen configuration (for example, both images may be displayed at the same size, at least a part of one image may be superimposed on the other image, or one image may be displayed larger than the other). In this case, the control device 10 controls the display positions of the plurality of screen areas DA (e.g., the first screen area DA1 and the second screen area DA2) on the monitor 50. Further, for example, turning the surgical shadow lamp 60 on and off may be controlled by the control device 10.
As described above, according to the imaging system (surgery support system) 1 of the present embodiment, for example, the position of a peripheral nerve present in a living body part in the surgical field (for example, the surgical target site of a patient) can be specified and highlighted, so that the operator (a doctor or the like) can perform surgery, examinations, and the like without damaging the nerve while confirming its position. Hereinafter, further details of the operation and configuration of the imaging system (surgery support system) 1 according to the present embodiment will be described.
<Optical system for making the optical axes of the imaging devices identical>
FIGS. 4 and 5 are diagrams showing schematic configurations of an optical system 300 that can be employed in the imaging unit 30 of the imaging system 1 according to the present embodiment. The optical system 300 makes the optical axis of the first imaging device 31 and the optical axis of the second imaging device 32 identical (including substantially identical). FIG. 4 shows a configuration example using a mirror 33, and FIG. 5 shows a configuration example using a dichroic mirror 38.
In FIG. 4, the optical system 300 can include, for example, the mirror 33 and a mirror drive unit 34 that rotates the mirror 33 about an axis 37 between a first position P1 and a second position P2 (for example, the mirror 33 at position P1 and the mirror 33 at position P2 form an angle of 45 degrees), whereby the optical axis of the first imaging device 31 and the optical axis of the second imaging device 32 are aligned. When the optical system 300 configured as shown in FIG. 4 is used, the control unit 101 generates, for example, a position switching control signal that alternately sets the mirror 33 to the first position P1 and the second position P2, and supplies it to the mirror drive unit 34. When the first imaging device 31 is to receive visible light and generate a visible light image, the mirror drive unit 34 sets the mirror 33 to the first position P1 in response to the position switching control signal from the control unit 101. In this case, both the visible light 35 and the infrared light 36 from the sample are incident on the first imaging device 31, which detects light in the visible wavelength region and generates an image based on the detection data. On the other hand, when the second imaging device 32 is to receive infrared light and generate an infrared light image, the mirror drive unit 34 sets the mirror 33 to the second position P2 in response to the position switching control signal from the control unit 101. In this case, both the visible light 35 and the infrared light 36 from the sample are incident on the second imaging device 32, which detects light in the infrared wavelength region and generates an image based on the detection data.
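For illustration only, the following is a minimal Python sketch of the alternating capture sequence described above for FIG. 4; the classes and method names are hypothetical stand-ins for the mirror drive unit 34 and the two imaging devices, not an actual interface of the system.

```python
# Hypothetical sketch of the alternating capture sequence of FIG. 4.
# MirrorDriver and Camera are illustrative stand-ins, not real device APIs.

class MirrorDriver:
    def move_to(self, position):
        print(f"mirror 33 set to {position}")   # would rotate mirror 33 about axis 37

class Camera:
    def __init__(self, name):
        self.name = name
    def capture(self):
        return f"frame from {self.name}"        # would return image data

def capture_pair(mirror, visible_cam, infrared_cam):
    """Capture one visible/infrared frame pair along the shared optical axis."""
    mirror.move_to("P1")                        # light reaches the first imaging device 31
    visible = visible_cam.capture()
    mirror.move_to("P2")                        # light reaches the second imaging device 32
    infrared = infrared_cam.capture()
    return visible, infrared

pair = capture_pair(MirrorDriver(), Camera("device 31"), Camera("device 32"))
```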
In FIG. 5, the optical system 300 can include the dichroic mirror 38, and this configuration can also make the optical axis of the first imaging device 31 and the optical axis of the second imaging device 32 coincide. The dichroic mirror 38 is an optical element (mirror) that reflects light of specific wavelengths and transmits light of other wavelengths. For example, with a short-pass dichroic mirror, the transmittance for light of wavelengths shorter than the cutoff wavelength is high, and the reflectance for light of wavelengths longer than the cutoff wavelength is high. For example, if a short-pass dichroic mirror is adopted as the dichroic mirror 38 and the cutoff wavelength is set near 800 nm or 900 nm, visible light from the living body passes through the dichroic mirror 38 and enters the first imaging device 31, while infrared light radiated (reflected) from the living body (for example, light of at least some wavelengths or wavelength bands from 900 nm to 2500 nm) is reflected by the dichroic mirror 38 and enters the second imaging device 32. When the dichroic mirror 38 is used, unlike the optical system 300 shown in FIG. 4, there is no need to provide the mirror drive unit 34, so the configuration of the optical system 300 can be simplified.
If an optical system as described above is adopted, the optical axis of the first imaging device 31 and the optical axis of the second imaging device 32 can be made identical, which has the effect of eliminating the need to align the images obtained from the two imaging devices with each other. Therefore, when such an optical system 300 is adopted, the stage 70 (see FIG. 2) becomes unnecessary.
<Alignment marks>
FIG. 6 is a diagram showing a configuration in which alignment marks are arranged around the surgical field (for example, on a surgical cloth) according to the present embodiment. The alignment marks in the present embodiment can employ, for example, a substance (such as a dye) that makes the marks recognizable because, under visible alignment illumination and infrared alignment illumination, the marks reflect or absorb light more strongly than the surroundings of the surgical field (for example, the surgical cloth). Note that the alignment marks in the present embodiment can also employ, for example, a light emitter such as an LED that emits infrared light (alignment light), or a reflector that reflects light of a specific wavelength (alignment light).
The alignment marks 71 to 74 provided on the surgical cloth are used, for example, to align the camera axis (imaging position, imaging field of view) of the first imaging device 31 with the camera axis (imaging position, imaging field of view) of the second imaging device 32 when the optical axes of the two devices are not identical (that is, when the optical system 300 of FIG. 4 or FIG. 5 is not adopted). For example, when the first imaging device 31 captures a visible light image, it also captures the alignment marks 71 to 74 and notifies the control unit 101 of their position (coordinate) information. When an infrared light image is to be captured, the control unit 101 generates a control signal for controlling the movement of the stage 70 so that the imaging positions (coordinate positions) of the alignment marks 71 to 74 as seen by the second imaging device 32 become identical to (overlap) the positions of the alignment marks 71 to 74 notified by the first imaging device 31. The stage 70 receives this control signal from the control unit 101 and, in response, moves the second imaging device 32 so that its camera axis (imaging position) becomes identical to the imaging position of the sample by the first imaging device 31. The second imaging device 32 then captures the infrared light image. Furthermore, either the visible light image or the infrared light image may be transformed so that the positions of the alignment marks 71 to 74 coincide between the two images. Also, for example, a nerve enhanced image is generated based on this infrared light image, and the enhanced image is superimposed on the visible light image captured by the first imaging device 31 or the infrared light image captured by the second imaging device 32 and displayed on the monitor 2_52 of the display device 50.
By providing the alignment marks, images can be corrected not only for simple positional movement and rotation of each imaging device but also for image shifts caused by movement toward or away from the sample, such as by a zoom mechanism that changes the imaging magnification (for example, the lens magnification). The imaging system 1 may be configured to include such a zoom mechanism.
The enhanced image is superimposed on a visible light image or an infrared light image (collectively referred to as a background image). When it is superimposed on an infrared light image, the second imaging device 32 alone can capture both the background image and the image from which the enhanced image is generated (the source image). Therefore, for example, when the background image is an infrared light image (when the enhanced image is superimposed on an infrared light image), there is no need to move the stage 70 to align the camera axis (imaging position) of the first imaging device 31 with the camera axis (imaging position) of the second imaging device 32.
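For illustration, the following is a minimal Python/NumPy sketch of the registration role the marks play, under the assumption that the pixel coordinates of the marks 71 to 74 have already been detected in both images (mark detection itself is omitted, and the function name is an assumption). It estimates a least-squares affine transform that maps the marks as seen by one imaging device onto the marks as seen by the other; such a transform can then be used to resample one image onto the other so that the marks coincide.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine transform mapping src_pts onto dst_pts.

    src_pts, dst_pts: (N, 2) arrays of mark coordinates, N >= 3
    (e.g., marks 71 to 74 in the infrared and visible images).
    """
    n = len(src_pts)
    A = np.hstack([src_pts, np.ones((n, 1))])        # homogeneous coordinates, (N, 3)
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)  # solve A @ M ~ dst_pts
    return M.T                                       # (2, 3) affine matrix

# Example with synthetic marks: a pure translation of (5, -3) pixels.
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
dst = src + np.array([5.0, -3.0])
M = estimate_affine(src, dst)                        # recovers the translation
```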
<Contents of the nerve imaging processing>
FIG. 7 is a flowchart for explaining the contents of the nerve imaging processing (such as the processing of generating a nerve enhanced image in the sample and displaying the enhanced image superimposed on a background image) in the imaging system (surgery support system) 1 of the present embodiment. Each step is described below. Although each step is described as being executed by the corresponding processing unit (the light irradiation control unit 1011, the image generation unit 1012, or the enhanced image generation unit 1013), these processing units are functions included in the control unit 101, so the control unit 101 may be regarded as the operating entity.
(i) Step 701
The light irradiation control unit 1011, in response to an instruction input (input signal) from the operator (a doctor or the like) using the input device 40, controls the infrared light source unit 20 to irradiate the sample with infrared light of a specific wavelength band (for example, infrared light of at least a partial wavelength band from 900 nm to 2500 nm). Here, the case where the infrared light source unit 20 emits broadband infrared light is described as an example; the nerve imaging processing in the case where the infrared light source unit 20 includes a plurality of light sources that emit (radiate) infrared light of different wavelengths is described later in the second embodiment (see FIG. 13).
(ii) Step 702
The image generation unit 1012 controls the second imaging device 32 to acquire a hyperspectral image. When a hyperspectral image is to be acquired, the second imaging device 32 is configured, for example, as a hyperspectral camera. The second imaging device 32 detects the spectral data (luminance values) corresponding to each wavelength while dispersing, into its wavelength components, the light radiated (reflected) from the sample as a result of irradiating the sample with infrared light. Note that the luminance value data acquired by the camera may be corrected using luminance values of a correction reference object acquired in advance (for example, a white board or an object matching the shape of the sample), and the corrected data may be used for the analysis. Accordingly, the corrected luminance value data can be used as the luminance value (corrected luminance value) data for the analysis. As described above, the second imaging device 32 includes a light receiving sensor that can acquire a plurality of spectral data for one pixel of the image in a single shot. For example, the second imaging device 32 detects spectral data obtained by irradiating the sample with light of N wavelength bands selected from a predetermined wavelength band (for example, a wavelength band from 900 nm to 2500 nm). For example, the second imaging device can acquire three-dimensional infrared reflection spectral data consisting of the surface positions of the sample (corresponding to pixels) and the wavelength direction.
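As a minimal sketch of the reference-object correction mentioned above, assuming the acquired data form a cube of shape (height, width, wavelengths) and that a white-reference cube was captured in advance (a dark-frame term, common in practice, is omitted for brevity; names are illustrative):

```python
import numpy as np

def correct_reflectance(raw_cube, white_cube, eps=1e-6):
    """Normalize raw luminance values by a previously captured white reference.

    raw_cube, white_cube: arrays of shape (H, W, N) over the same N wavelengths.
    """
    return raw_cube / np.maximum(white_cube, eps)  # eps avoids division by zero

# Example with synthetic data: a 4x4 image with 3 wavelength bands.
raw = np.full((4, 4, 3), 50.0)
white = np.full((4, 4, 3), 200.0)
reflectance = correct_reflectance(raw, white)      # 0.25 everywhere
```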
(iii) Step 703
The enhanced image generation unit 1013 reads the nerve teacher spectral data from the storage unit 102. The teacher spectral data of all nerves stored in the storage unit 102 and corresponding to each wavelength may be read, or only the teacher spectral data corresponding to the wavelengths of the infrared light used when acquiring the spectral data in step 702 may be read.
Note that the nerve teacher spectral data are, for example, spectral data (luminance values) corresponding to each wavelength (in the infrared region) obtained by irradiating a site known to be a nerve with infrared light of each wavelength; these data are stored in the storage unit 102 in advance. As the teacher spectral data, it is preferable to use, for example, nerve spectral data of a site corresponding to the sample actually being imaged. For example, if the sample (the imaged site in the surgical field) is a leg or the lower back, nerve spectral data of the leg or the lower back should also be used as the teacher spectral data.
(iv) Step 704
The enhanced image generation unit 1013 calculates an SCM (Spectral Correlation Mapper) value for each pixel in the infrared light image of the captured sample. For example, the enhanced image generation unit 1013 substitutes the spectral data values of the sample at each wavelength (the sample spectral measurements t_λ) and the nerve teacher spectral data values at the corresponding wavelengths read in step 703 (the reference spectral measurements r_λ) into the following equation (1) to calculate the SCM value.
$$\mathrm{SCM} = \cos^{-1}\!\left(\frac{\sum_{\lambda}\left(t_{\lambda}-\bar{t}\right)\left(r_{\lambda}-\bar{r}\right)}{\sqrt{\sum_{\lambda}\left(t_{\lambda}-\bar{t}\right)^{2}}\,\sqrt{\sum_{\lambda}\left(r_{\lambda}-\bar{r}\right)^{2}}}\right) \qquad (1)$$

where $\bar{t}$ and $\bar{r}$ are the mean values of the sample spectrum and the reference spectrum over the wavelengths used.
The smaller the SCM value, the stronger the correlation between the sample spectral data and the nerve teacher spectral data. Thus, for example, the SCM value indicates the degree of correlation or similarity between the sample (for example, the sample spectral data) and the teacher (for example, the nerve teacher spectral data). Note that this SCM includes the mean value of each spectrum in the calculation. This is done to remove the influence of background noise that can occur when measuring the spectrum. Because the SCM removes the influence of such background noise, the correlation between the reference data (for example, the teacher spectral data) and the measured data can be determined accurately.
In the present embodiment, the correlation is determined using the SCM method, but the correlation may instead be determined using multivariate analysis (for example, SAM, MLR, PCR, or PLS). For example, when the spectral angle mapper method (SAM) is used to generate the enhanced image, the enhanced image generation unit 1013 generates the enhanced image using the sample spectral data and the nerve teacher spectral data, as with the SCM. For example, the enhanced image generation unit 1013 generates the enhanced image based on the spectral angle between a vector in a multidimensional space based on the sample spectral data and a vector in a multidimensional space based on the teacher spectral data. The angle between the two vectors (the spectral angle) indicates the degree of similarity between the spectra: the smaller the angle, the greater the similarity.
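A minimal NumPy sketch of the per-pixel computation of step 704 follows, assuming the sample spectra form a cube of shape (H, W, N) and the teacher spectrum is a length-N vector over the same wavelengths. In line with the description above, the means are subtracted and the result is expressed as an angle so that values near 0 indicate a strong match; the exact functional form of equation (1) is an assumption reconstructed from that description, and the names are illustrative.

```python
import numpy as np

def scm_map(cube, teacher, eps=1e-12):
    """Per-pixel SCM values; smaller values indicate more nerve-like spectra."""
    t = cube - cube.mean(axis=2, keepdims=True)    # mean-subtracted sample spectra
    r = teacher - teacher.mean()                   # mean-subtracted teacher spectrum
    num = (t * r).sum(axis=2)
    den = np.sqrt((t ** 2).sum(axis=2) * (r ** 2).sum())
    corr = np.clip(num / np.maximum(den, eps), -1.0, 1.0)
    return np.arccos(corr)                         # (H, W) map of SCM values

# Example: a pixel whose spectrum is a scaled copy of the teacher yields SCM ~ 0.
teacher = np.array([0.2, 0.5, 0.9, 0.4])
cube = np.broadcast_to(2.0 * teacher, (3, 3, 4)).copy()  # (H, W, N) = (3, 3, 4)
print(scm_map(cube, teacher))                      # approximately 0 everywhere
```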
(v) Step 705
The enhanced image generation unit 1013 compares each SCM value calculated in step 704 with a preset threshold and generates the nerve enhanced image using the pixel data whose SCM values are equal to or less than the threshold. For example, the enhanced image generation unit 1013 can generate the enhanced image by assigning predetermined color data (for example, a color such as green) to the pixels whose SCM values are equal to or less than the threshold. In doing so, the enhanced image generation unit 1013 can generate the enhanced image as a binarized image, a gradated image (a single-color scale image), or the like. For example, the enhanced image generation unit 1013 may simply perform binarization, assigning a color such as green to each pixel whose SCM value is equal to or less than the threshold; alternatively, using, for example, 8-bit gradation values, it can generate an even more intelligible nerve enhanced image by assigning a green luminance value of 255 (the brightest green) to the pixel whose SCM value is closest to 0 and a green luminance value of 0 to the pixel whose SCM value is closest to the threshold.
This enhanced image is an image obtained by extracting, based on the SCM values, the pixels whose spectral data correlate strongly with the nerve teacher spectral data from among the spectral data of each pixel obtained by irradiating the sample with infrared light of a predetermined wavelength band, and it indicates the locations in the sample that are very likely to be nerves.
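Continuing from the scm_map sketch above, step 705 can be illustrated as follows; the threshold value is an assumed input, and a nonzero threshold is required for the gradation.

```python
import numpy as np

def enhance(scm, threshold):
    """Threshold the SCM map, then gradate: SCM 0 -> green 255, threshold -> green 0."""
    mask = scm <= threshold
    green = np.zeros(scm.shape)
    green[mask] = 255.0 * (1.0 - scm[mask] / threshold)  # 8-bit gradation
    rgb = np.zeros(scm.shape + (3,), dtype=np.uint8)
    rgb[..., 1] = green.round().astype(np.uint8)         # write the G channel only
    return rgb, mask
```

For the simple binarized variant described first, the gradation line can be replaced by assigning a constant value of 255 inside the mask.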
(vi) Step 706
The image generation unit 1012 acquires the nerve enhanced image from the enhanced image generation unit 1013, aligns and overlays the separately acquired background image and the enhanced image, and generates a superimposed image in which the nerve is enhanced. The control unit 101 acquires the nerve superimposed image from the image generation unit 1012 and instructs the display device 50 (for example, the monitor 2_52) to display it. The display device 50 receives from the control unit 101 the superimposed image generated by superimposing the enhanced image on the background image, and displays it. Here, superimposition may be understood to include the processing of aligning the background image and the enhanced image and, for pixels at the same position, replacing the pixels of the background image with the pixels of the enhanced image.
As the background image, the image generation unit 1012 may use, for example, the visible light image acquired by the first imaging device 31 or the infrared light image acquired by the second imaging device 32. When an infrared light image is used as the background image, for example, an infrared light image of a specific wavelength (for example, 1070 nm) acquired by the second imaging device (hyperspectral camera) 32 can be used.
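The replacement-style superimposition described in step 706 can be sketched as follows, assuming the background image (visible or infrared, as an (H, W, 3) uint8 array) has already been registered to the enhanced image; an alpha blend would be an equally plausible design, but pixel replacement is shown because it is the behavior described above.

```python
import numpy as np

def superimpose(background, enhanced_rgb, mask):
    """Replace background pixels with enhanced-image pixels where mask is True."""
    out = background.copy()
    out[mask] = enhanced_rgb[mask]
    return out
```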
<Error removal processing>
FIG. 8 is a flowchart for explaining the processing of removing error portions (for example, non-target portions other than nerves, such as blood vessels and fat) from the superimposed image generated by the nerve imaging processing (FIG. 7) according to the present embodiment.
The control unit 101 causes the image correction unit 1014 to execute a shape analysis of the image and to remove, as errors, image portions (image regions) other than the portions determined to be nerves from the enhanced image generated by the nerve imaging processing (FIG. 7). The shape analysis processing is executed, for example, in the following steps. Although each step is described as being executed by the image correction unit 1014, the image correction unit 1014 is a function included in the control unit 101, so the control unit 101 may be regarded as the operating entity.
(i) Step 801
The image correction unit 1014 executes, for example, a Fourier transform on the generated enhanced image.
(ii) Step 802
The image correction unit 1014 executes filter processing (for example, using a morphological filter) on the result of the Fourier transform obtained in step 801, and extracts and removes from the generated enhanced image the regions whose shapes are not elongated, such as round shapes. This is because nerve portions usually have an elongated (linear) shape.
In this way, the image correction unit 1014 can remove portions considered to be errors from the enhanced image of the superimposed image. Note that the error removal function may be configured so that it can be enabled (ON) or disabled (OFF) by an instruction from the operator (a doctor or the like). Also, instead of the error removal processing, the operator (a doctor or the like) may use, for example, the input device 40 to designate the portions to be removed from the superimposed image (designation information), and the control unit 101 may cause the image correction unit 1014 to delete the designated portions based on the designated information.
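The Fourier transform and morphological filter of steps 801 and 802 are not specified in detail, so the following sketch substitutes a simpler, plainly different technique for illustration: connected components of the nerve mask whose bounding boxes are not elongated are treated as errors and removed. It assumes scipy is available and operates on the boolean mask from the earlier sketches.

```python
import numpy as np
from scipy import ndimage

def remove_non_elongated(mask, min_elongation=3.0):
    """Keep only connected regions whose bounding box is elongated (nerve-like)."""
    labels, _ = ndimage.label(mask)                 # label connected components
    keep = np.zeros_like(mask, dtype=bool)
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        h = sl[0].stop - sl[0].start                # bounding-box height
        w = sl[1].stop - sl[1].start                # bounding-box width
        if max(h, w) / max(min(h, w), 1) >= min_elongation:
            keep |= labels == i                     # elongated region: keep it
    return keep
```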
<Example 1: Example of nerve imaging>
FIG. 9 is a set of photographs (images) showing an example of a superimposed image (background image + nerve enhanced image) generated by the nerve imaging processing according to the present embodiment (see, for example, FIG. 7). FIG. 9A is a photograph showing the facial region of a rat with the skin removed (note that the field of view and posture differ from those of FIGS. 9B and 9C). FIG. 9B is a photograph showing an image in which a nerve enhanced image, generated from an infrared light image obtained by irradiating the rat's facial region with infrared light containing continuous wavelengths in the 1290 nm to 1600 nm band (continuous wavelengths: in practice, spectral data were acquired in 0.5 nm steps), is superimposed on an infrared light image (background image) captured at a wavelength of 1070 nm. FIG. 9C is a photograph showing a visible image of the rat's facial region with the same field of view and posture as FIG. 9B. In Example 1, the enhanced image was generated by assigning green data to the top 10% of pixels (for example, the top 5%, 15%, or 20% may also be used) whose luminance values are closest to the nerve teacher spectral data among the spectral data of the measured site (the imaged sample) (for example, the top 10% of pixels, among those whose SCM values are equal to or less than the threshold, whose luminance values are closest to the nerve teacher spectral data).
As shown in the visible light image of FIG. 9C, the facial nerve, blood vessels, and subcutaneous fat (submandibular gland tissue) are present in the rat's facial region; to the naked eye, both the facial nerve and the blood vessels appear whitish, and it can be difficult to judge where their boundaries lie. In FIG. 9B, on the other hand, the portion corresponding to the facial nerve (the nerve region) is visually highlighted in green, while the portions corresponding to the surrounding blood vessels and subcutaneous fat (the peripheral region of the nerve region) are not highlighted.
Therefore, for example, by viewing during surgery the superimposed image including the nerve enhanced image of FIG. 9B displayed on the display device 50, the operator (a doctor or the like) can easily confirm where nerves run in the sample, which can dramatically reduce the risk of damaging nerves during surgery.
<Example 2: Image example of error removal>
FIG. 10 is a set of photographs (images) showing an example of error removal by the error removal processing of the present embodiment. FIG. 10A shows the image before error removal, and FIG. 10B shows the image after error removal.
As shown in FIG. 10A, the composite superimposed image contains, besides the nerve portions, image regions other than the three main elongated shapes (the shapes of the nerve portions). When the error removal processing (shape analysis) described above is executed, the image regions with shapes other than these elongated (linear) shapes are removed. The image after this error removal processing is shown in FIG. 10B. In FIG. 10B, the portions judged to be errors in FIG. 10A have been removed and only the nerve enhanced image is displayed in the superimposed image, showing that the accuracy of the resulting nerve enhanced image (for example, the accuracy of the portions estimated to be nerves) is improved.
<Example 3: Example of nerve spectral data>
FIG. 11 is a diagram showing a comparison between the spectral data of nerves and the spectral data of another site (for example, a streak). FIG. 11A is a photograph (image) showing a site believed to be the sciatic nerve of a pig. FIG. 11B is a graph showing the spectral data (luminance values) at each infrared wavelength for two nerve portions of the pig (nerve 1 and nerve 2) and a streak portion.
The graph of FIG. 11B shows spectral data obtained by excising the site believed to be the pig's sciatic nerve (the sample), irradiating it with infrared light in the wavelength range from 900 nm to 2500 nm, imaging the reflected light from the sample with a hyperspectral camera, and plotting the spectral values (luminance values) corresponding to each wavelength. Referring to the graph of FIG. 11B, the nerve portions (nerve 1 and nerve 2) have spectral characteristics whose shapes differ from that of the streak portion, while the shapes of the spectral characteristics of nerve 1 and nerve 2 are very similar to each other. This shows that nerves can be distinguished from other sites (in Example 3, the streak) and that only the nerve spectrum can be extracted. Although the spectral value of the streak and the spectral value of a nerve may take the same value at some wavelengths, when the SCM value described above is computed by the control unit 101 of the present embodiment over a specific wavelength region, the resulting values differ at least enough to distinguish the two.
FIG. 12 shows photographs (images) of a visible light image of the site believed to be the pig's sciatic nerve (FIG. 12A) and a superimposed image (FIG. 12B) in which a nerve enhanced image, generated by performing the SCM calculation using the spectral data of FIG. 11B, is superimposed on a background image (for example, an infrared light image obtained by irradiation with infrared light of wavelength 1070 nm) according to the present example. As shown in FIG. 12B, the nerve portion (nerve region) is highlighted so as to be distinguished from the streak portion (the peripheral region of the nerve region).
As described above, in the present embodiment, image analysis such as SCM is performed using a nerve teacher spectrum (reference spectrum) set in the infrared region. Ordinarily, when visible light is used to image nerves, nerves and blood vessels have similar hues, so it is difficult to distinguish them by eye or from images. However, when infrared light of predetermined wavelengths is used to image nerves, as in the present embodiment, differences arise in the spectral characteristics (optical characteristics such as dispersion, absorption, and reflection (scattering) characteristics) between different substances (for example, water and lipids, water and hemoglobin, or lipids and hemoglobin), depending on the amount of water, the amount of lipids, the amount of hemoglobin in the blood, and so on. The imaging system 1 of the present embodiment actively exploits these differences (differences in optical characteristics) and performs image analysis such as SCM using the nerve teacher spectrum (reference spectrum) set in the infrared region. In this way, other substances can be excluded and the nerves can be selectively extracted and enhanced in the captured image. Furthermore, using the apparatus and method according to the present embodiment, the nerve portion in an unknown sample can be identified and highlighted so as to be distinguished from other sites. The operator (a doctor or the like) can therefore carry out a medical procedure (for example, surgery) while confirming the position of the nerve portion in the sample on the display device 50. In addition, nerve spectral data can be extracted through an experiment such as that of Example 3, and the nerve spectral data extracted in this way can be used as the nerve teacher spectral data described above.
(2) Second Embodiment
Whereas the first embodiment uses a hyperspectral camera as the second imaging device 32 and identifies nerves by continuously acquiring spectral data in a specific wavelength region, the second embodiment discloses a mode in which the sample is irradiated with infrared light of a plurality of wavelengths (for example, a first wavelength and a second wavelength different from the first, with different wavelengths for each type of living tissue to be identified), and the resulting reflected light is imaged with an ordinary infrared camera (a camera that, unlike a hyperspectral camera, requires no wavelength resolution or scanning, such as one with an InGaAs sensor) to acquire spectral data. The configuration of the imaging system (surgery support system) 1 is the same as in the first embodiment except that, for example, the infrared light source unit 20 has light sources that emit (radiate) infrared light of a plurality of wavelengths and an ordinary infrared camera is used as the second imaging device 32, so its description is omitted in the second embodiment.
<Contents of the nerve imaging processing>
FIG. 13 is a flowchart for explaining the contents of the nerve imaging processing according to the second embodiment.
(i) Step 1301
The control unit 101 controls the imaging system 1 so as to repeat the processing of steps 1302 to 1304 for infrared light of each wavelength λ_k (k = 1 to n). In the following steps, each processing unit (the light irradiation control unit 1011, the image generation unit 1012, or the enhanced image generation unit 1013) is described as executing the processing of the corresponding step, but these processing units are functions included in the control unit 101, so the control unit 101 may be regarded as the operating entity.
(ii) Step 1302
The light irradiation control unit 1011 controls the infrared light sources 21 to 22 of the infrared light source unit 20 that emit the wavelength λ_k, and irradiates the sample with infrared light of wavelength λ_k (for example, one of the individual wavelengths 950 nm, 970 nm, 1000 nm, 1070 nm, 1100 nm, 1300 nm, 1450 nm, 1550 nm, and 1600 nm). Here, infrared light of different wavelengths is output by switching between infrared light sources; however, for example, an infrared light source that emits infrared light of a predetermined wavelength band may instead be used, and infrared light of a specific wavelength may be extracted by filtering the infrared light emitted from that light source.
(iii) Step 1303
The image generation unit 1012 controls the second imaging device 32 (for example, an ordinary infrared camera) to image the infrared light radiated (reflected) from the sample as a result of irradiating the sample with infrared light of wavelength λ_k, and detects (acquires) the spectral data (luminance values) for that wavelength λ_k.
(iv) Step 1304
For example, the image generation unit 1012 stores the spectral data acquired in step 1303 in the storage unit 102.
(v) Step 1305
For example, the enhanced image generation unit 1013 reads from the storage unit 102 the nerve teacher spectral data corresponding to at least the wavelengths λ_k.
(vi) Step 1306
For example, the enhanced image generation unit 1013 calculates an SCM (Spectral Correlation Mapper) value for each pixel in the infrared light image of the captured sample. For example, the enhanced image generation unit 1013 substitutes the spectral data values of the sample at each wavelength (the sample spectral measurements t_λ) and the nerve teacher spectral data values at the corresponding wavelengths read in step 1305 (the reference spectral measurements r_λ) into equation (1), as in the first embodiment, to calculate the SCM value.
The smaller the SCM value, the stronger the correlation between the sample spectral data and the nerve teacher spectral data.
In the present embodiment, the correlation is determined using the SCM method, but the correlation may instead be determined using multivariate analysis (for example, SAM, MLR, PCR, or PLS).
(vii) Step 1307
The enhanced image generation unit 1013 compares each SCM value calculated in step 1306 with a preset threshold and generates the nerve enhanced image using the pixel data whose SCM values are equal to or less than the threshold. For example, the enhanced image generation unit 1013 can generate the enhanced image by assigning predetermined color data (for example, a color such as green) to the pixels whose SCM values are equal to or less than the threshold. In doing so, the enhanced image generation unit 1013 can generate the enhanced image as a binarized image, a gradated image (a single-color scale image), or the like. For example, the enhanced image generation unit 1013 may simply perform binarization, assigning a color such as green to each pixel whose SCM value is equal to or less than the threshold; alternatively, using, for example, 8-bit gradation values, it can generate an even more intelligible nerve enhanced image by assigning a green luminance value of 255 (the brightest green) to the pixel whose SCM value is closest to 0 and a green luminance value of 0 to the pixel whose SCM value is closest to the threshold.
This enhanced image is an image obtained by extracting the pixels whose spectral data correlate strongly with the nerve teacher spectral data from among the spectral data of each pixel obtained by irradiating the sample with infrared light of a predetermined wavelength band, and it indicates the locations in the sample that are very likely to be nerves.
(viii) Step 1308
The image generation unit 1012 acquires the enhanced image from the enhanced image generation unit 1013, aligns and overlays the separately acquired background image and the enhanced image, and generates a superimposed image. The control unit 101 acquires the superimposed image from the image generation unit 1012 and instructs the display device 50 (for example, the monitor 2_52) to display it. The display device 50 receives from the control unit 101 the superimposed image generated by superimposing the enhanced image on the background image, and displays it.
As the background image, the image generation unit 1012 may use, for example, the visible light image acquired by the first imaging device 31 or the infrared light image acquired by the second imaging device 32. When an infrared light image is used as the background image, for example, an infrared light image of a specific wavelength (for example, 1070 nm) acquired by the second imaging device (infrared camera) 32 can be used.
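For illustration, the acquisition loop of steps 1301 to 1304 can be sketched as follows; FakeSource and FakeCamera are hypothetical stand-ins for the infrared light source unit 20 and the ordinary infrared camera, and the wavelength list is one of the example sets given above. The stacked cube can then feed the same SCM pipeline as in the first embodiment.

```python
import numpy as np

class FakeSource:                     # stand-in for the infrared light source unit 20
    def emit(self, wavelength_nm): pass
    def off(self): pass

class FakeCamera:                     # stand-in for an ordinary infrared camera
    def capture(self):
        return np.zeros((4, 4))       # placeholder frame

def acquire_cube(source, camera, wavelengths_nm):
    frames = []
    for wl in wavelengths_nm:            # step 1301: loop over wavelengths
        source.emit(wl)                  # step 1302: irradiate at wavelength wl
        frames.append(camera.capture())  # step 1303: detect luminance values
        source.off()
    return np.stack(frames, axis=-1)     # step 1304: store as an (H, W, n) cube

cube = acquire_cube(FakeSource(), FakeCamera(), [1000, 1300, 1450, 1600])
```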
<Wavelength setting GUI>
FIG. 14 is a diagram showing an example of a wavelength setting GUI (Graphical User Interface) 1400 for setting the wavelengths emitted (radiated) from the light sources of the infrared light source unit 20 according to the present embodiment.
The wavelength setting GUI 1400 includes, for example: an individual wavelength setting section 1401 for choosing to set the wavelength of each light source individually; a continuous wavelength setting section 1402 for choosing to set the wavelength band of the infrared light to be irradiated (in this case, the spectral data are acquired, for example, at 0.5 nm wavelength intervals); individual wavelength value setting fields 1403 to 1407 for setting the wavelengths of the infrared light to be emitted from the k light sources (k = 1 to n); a wavelength band setting field 1408 for setting the wavelength band of the infrared light to be irradiated; a save button 1409 for saving the settings; and an end button 1410 for closing the wavelength setting GUI. In addition to these GUI components, a parameter adjustment section for adjusting, for example, brightness/contrast or the gamma value may be displayed on the display device as part of the GUI.
The individual wavelength setting section 1401 and the continuous wavelength setting section 1402 are used alternatively; for example, either one can be selected with a radio button. For example, the operator (a doctor or the like) can operate the input device 40 to select the individual wavelength setting section 1401 and then input the wavelength values of each of the k infrared light sources 21 to 22 included in the infrared light source unit 20. Alternatively, for example, when the operator (a doctor or the like) operates the input device 40 and selects the individual wavelength setting section 1401, the control unit 101 may automatically input the wavelength value of each infrared light source. Also, for example, the operator (a doctor or the like) can operate the input device 40 to select the continuous wavelength setting section 1402 and then input the value of the wavelength band of infrared light, out of the light emitted from the infrared light source unit 20, that is used for the SCM value calculation in the enhanced image generation unit 1013. Alternatively, for example, when the operator (a doctor or the like) operates the input device 40 and selects the continuous wavelength setting section 1402, the control unit 101 may automatically input the value of the infrared wavelength band.
The control unit 101, based on, for example, the light irradiation control unit 1011 (a program), reads the wavelength values of each light source set in the GUI 1400 and conveys to a drive unit (not shown) of the infrared light source unit 20 the voltage to be applied and the wavelength values of the light that each light source is to emit (radiate). Under the control of the control unit 101, the drive unit applies voltages to the infrared light sources 21 and 22 to cause them to emit (radiate) light. The control unit 101 also transmits, for example, the timing at which the infrared light source 22 emits (radiates) light of each wavelength and the emission (radiation) duration to the drive unit (not shown) of the infrared light source unit 20, and controls the infrared light source unit 20 so that light of a plurality of wavelengths is periodically emitted (radiated) from the infrared light source 22.
In the second embodiment, the imaging system 1 is provided with a plurality of light sources, each emitting infrared light of a single wavelength, which are switched to irradiate the sample; however, for example, a light source that emits infrared light of a specific wavelength band may be made to emit infrared light of that bandwidth, and infrared light of a desired wavelength may be selectively extracted by a filter and used to irradiate the sample. Further, the GUI 1400 described above may display the individual wavelength setting section 1401 and the continuous wavelength setting section 1402 simultaneously, or may display them separately based on an initial setting.
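The mutually exclusive choice between the individual and continuous settings of the GUI 1400 can be expressed as a small configuration object, as in the following sketch; the field names are assumptions for illustration, not part of the embodiment.

```python
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

@dataclass
class WavelengthSettings:
    individual_nm: Optional[Sequence[float]] = None  # per-source wavelengths (fields 1403-1407)
    band_nm: Optional[Tuple[float, float]] = None    # (start, stop) band (field 1408)

    def validate(self):
        # The two modes are used alternatively, like the radio buttons of GUI 1400.
        if (self.individual_nm is None) == (self.band_nm is None):
            raise ValueError("set exactly one of individual_nm or band_nm")

settings = WavelengthSettings(individual_nm=[950, 1070, 1300, 1450, 1600])
settings.validate()
```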
<Example 4: Example of nerve imaging>
FIG. 15 is a set of photographs (images) comparing an example of a superimposed image generated by performing the nerve imaging processing according to the apparatus and method of the second embodiment (FIG. 15A) with an example of a superimposed image generated by performing the nerve imaging processing according to the apparatus and method of the first embodiment (FIG. 15B). Note that the error removal processing (see FIG. 8) was not applied to either image.
The photograph of FIG. 15A shows an image obtained by irradiating a mouse's facial region with infrared light of four wavelengths (1000 nm, 1300 nm, 1450 nm, and 1600 nm), imaging the reflected light with an ordinary infrared camera (for example, one with an InGaAs sensor), generating an enhanced image based on the spectral data detected for each wavelength, and superimposing it on a background image obtained by irradiation with 1070 nm infrared light. The photograph of FIG. 15B, on the other hand, shows an image obtained by irradiating the mouse's facial region with infrared light in the wavelength band from 1290 nm to 1600 nm, scanning the reflected light with a hyperspectral camera in 0.5 nm steps (continuous wavelengths), generating an enhanced image based on the spectral data detected for each wavelength, and superimposing the enhanced image on a background image obtained by irradiation with 1070 nm infrared light.
Comparing the image of FIG. 15A with that of FIG. 15B, the accuracy of identifying the position of the nerve falls relatively as the number of wavelengths used to generate the enhanced image (also referred to as the number of spectral data or the number of analysis images) decreases. However, because reducing the number of wavelengths reduces the number of spectral data to be detected, the analysis (processing) speed in the control unit 101 can be relatively improved. The imaging system 1 can therefore easily perform not only nerve imaging with still images (identifying and indicating the position of nerves) but also nerve imaging with moving images during surgery. In this way, by using an ordinary infrared camera, which is cheaper than a hyperspectral camera (an infrared camera capable of capturing moving images without requiring wavelength resolution or scanning), the imaging system 1 can be realized as an inexpensive and easy-to-use system.
<Example 5: Comparison among continuous wavelengths, and between continuous and multiple wavelengths>
FIG. 16 is a set of photographs (images) comparing nerve imaging results according to Example 5 (based on the method of the second embodiment) when the combination of wavelengths is varied within the wavelength range from 900 nm to 2500 nm (results obtained by generating a nerve enhanced image from the spectral data of each wavelength and superimposing it on a background image (a 1070 nm infrared light image)). FIGS. 16A to 16C show nerve imaging results when continuous wavelengths are used, and FIG. 16D shows the result when infrared light of multiple wavelengths (900 nm, 1000 nm, 1100 nm, 1150 nm, 1210 nm, 1300 nm, 1350 nm, 1400 nm, and 1650 nm) is used. FIG. 16A shows the case using infrared light in the wavelength range from 1290 nm to 1600 nm, FIG. 16B the case from 900 nm to 1600 nm, and FIG. 16C the case from 900 nm to 2500 nm.
What FIGS. 16A to 16C show is that while a wide infrared wavelength band (that is, a large amount of spectral data) gives good nerve imaging results, good results can also be obtained with a narrower band. For example, the result of FIG. 16A (wavelength range: 1290 nm to 1600 nm) is clearly better than those of FIG. 16B (900 nm to 1600 nm) and FIG. 16C (900 nm to 2500 nm). Considering this result together with the experimental result shown in Example 3 (see FIG. 11B), good nerve imaging results are obtained when only the wavelength range in which the spectral characteristics of the nerve diverge from those of the other tissue is used. In the wavelength ranges of FIGS. 16B and 16C, there are portions where the spectral characteristics of the nerve overlap with those of other tissue (streaks in FIG. 11B). The larger this overlap, the more the ability to discriminate the nerve from other tissue (the nerve identification function) is considered to be degraded. Therefore, when acquiring spectral data over continuous wavelengths, it is necessary to select a wavelength band in which the spectral characteristics of the nerve and those of the other tissue do not overlap.
On the other hand, the result of FIG. 16D may be less accurate than that of FIG. 16A because of the smaller number of data, but its accuracy is roughly comparable to those of FIGS. 16B and 16C. Thus, if the infrared wavelengths used for nerve imaging are selected appropriately, practical processing can be realized while improving processing speed, even with fewer data. Note that the result of FIG. 15A uses fewer data than the results of FIGS. 16A to 16D, yet its analysis accuracy is better than those of FIGS. 16B and 16C. The result of FIG. 15A therefore also supports the conclusion that, with an appropriate choice of infrared wavelengths, practical processing can be achieved at higher speed even with fewer data.
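To illustrate the band-selection principle described above (use only ranges where the nerve spectrum diverges from other tissue), here is a minimal Python sketch. The divergence measure and the min_gap value are assumptions for illustration; the patent states only the principle, not a selection algorithm.

```python
import numpy as np

def select_divergent_bands(wavelengths, nerve_spec, other_spec, min_gap=0.05):
    """Pick contiguous wavelength bands where the nerve spectrum diverges
    from the spectra of other tissue (streaks, fat, etc.).

    wavelengths: 1-D array of sampled wavelengths (nm)
    nerve_spec / other_spec: mean reflectance spectra on the same grid
    min_gap: minimum normalized separation counted as "divergent"
             (an illustrative value, not taken from the patent)
    """
    gap = np.abs(nerve_spec - other_spec)
    gap = gap / (gap.max() + 1e-12)          # normalize to [0, 1]
    keep = gap >= min_gap

    # Collect contiguous runs of divergent wavelengths as candidate bands.
    bands, start = [], None
    for i, flag in enumerate(keep):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            bands.append((wavelengths[start], wavelengths[i - 1]))
            start = None
    if start is not None:
        bands.append((wavelengths[start], wavelengths[-1]))
    return bands  # e.g., [(1290.0, 1600.0)]
```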
(3) Third Embodiment
The third embodiment discloses a process for identifying a nerve portion by irradiating a sample with infrared light of a plurality of wavelengths, detecting the light radiated (reflected) from the sample to generate infrared images of the sample (infrared images corresponding to the respective wavelengths), and displaying the plurality of infrared images while switching among them.
<Configuration Example of Imaging System 1>
FIG. 17 is a diagram showing a configuration example of the imaging system (surgery support system) 1 according to the third embodiment. The imaging system 1 according to the third embodiment is identical to that of the first embodiment except that, for example, the infrared light source unit 20 has light sources that emit (radiate) infrared light of a plurality of wavelengths, the second imaging device 32 is an ordinary infrared camera, and the enhanced image generation unit 1013 and the image correction unit 1014 are not provided.
Since the imaging system 1 according to the third embodiment does not generate an enhanced image, the image generation unit 1012 does not generate the superimposed image described above.
<Contents of the Nerve Imaging Process>
FIG. 18 is a flowchart for explaining the contents of the nerve imaging process according to the third embodiment.
(i) Step 1801
The control unit 101 controls the imaging system 1 so as to repeat the processing of steps 1802 to 1804 for infrared light of wavelength λk (k = 1 to n). In the following steps, each processing unit (the light irradiation control unit 1011 and the image generation unit 1012) is described as executing the processing of its step; however, since each processing unit is a function included in the control unit 101, the control unit 101 may also be regarded as the acting subject.
(ii) Step 1802
The light irradiation control unit 1011 controls the infrared light sources 21 to 22 of the infrared light source unit 20, which emit light of wavelength λk, to irradiate the sample with infrared light of wavelength λk (for example, one of the individual wavelengths 950 nm, 1070 nm, 1300 nm, 1450 nm, and 1600 nm). Although infrared light of different wavelengths is output here by switching among infrared light sources, it is also possible, for example, to use an infrared light source that emits infrared light in a predetermined wavelength band and to extract infrared light of a specific wavelength by filtering the light emitted from that source.
(iii) Step 1803
The image generation unit 1012 controls the second imaging device 32 (for example, an ordinary infrared camera) to capture the infrared light radiated (reflected) from the sample when it is irradiated with infrared light of wavelength λk, detects (acquires) the spectral data (luminance values) for that wavelength λk, and generates an infrared image.
(iv) Step 1804
The image generation unit 1012 stores the infrared image acquired in step 1803 in the storage unit 102. Accordingly, once the sample has been irradiated with infrared light of all the wavelengths, a plurality of infrared images corresponding to the respective wavelengths have been generated.
(v) Step 1805
The control unit 101 reads the infrared images generated in steps 1801 to 1804 from the storage unit 102, transfers them to the display device 50, and instructs the display device 50 to display these images (e.g., at least two infrared images, or a plurality of infrared images) while switching among them sequentially based on predetermined timing (e.g., a switching signal generated at each timing). The display device 50 receives the plurality of images from the control unit 101 and displays them on the monitor 2_52 while switching at predetermined intervals (for example, every 0.5 seconds, 1 second, 2 seconds, or 3 seconds). The control unit 101 may also switch the image displayed on the display device 50 at an arbitrary timing upon receiving a switching signal from an input means (screen switching device) such as a foot switch (not shown) operated by an operator (a doctor or the like). The display device 50 also receives a visible light image of the same sample from the control unit 101 and displays it on the monitor 1_51.
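A minimal Python sketch of steps 1801 to 1805 follows. The light_source, camera, and monitor objects are hypothetical hardware wrappers introduced for illustration, since the patent describes the flow but defines no programming interfaces.

```python
import time

def acquire_per_wavelength_images(light_source, camera, wavelengths):
    """Steps 1801-1804: one infrared image per wavelength.
    `light_source` and `camera` are hypothetical hardware wrappers."""
    images = {}
    for wl in wavelengths:                 # e.g., [950, 1070, 1300, 1450, 1600]
        light_source.set_wavelength(wl)    # switch source (or filter) to wl
        images[wl] = camera.capture()      # 2-D luminance array for wl
    return images

def switch_display(monitor, images, interval_s=1.0, cycles=10):
    """Step 1805: cycle the per-wavelength images on one monitor at a fixed
    interval (0.5 to 3 s in the embodiment); a foot switch could instead
    trigger each advance."""
    order = sorted(images)
    for i in range(cycles * len(order)):
        monitor.show(images[order[i % len(order)]])
        time.sleep(interval_s)
```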
As described above, according to the imaging system 1 of the third embodiment, displaying a plurality of images obtained by irradiation with infrared light of specific wavelengths while switching among them on the screen makes it easier to identify the location of the nerve portion in the sample. Since the third embodiment uses no nerve teacher spectral data and generates no superimposed image, the processing is simple and an inexpensive imaging system 1 can be realized.
<Example 6: Example of Nerve Imaging>
FIGS. 19 and 20 are photographs showing examples of nerve imaging results obtained based on the method of the third embodiment. FIG. 19 shows, for the facial region of a mouse, an image from an ordinary digital camera, an infrared image obtained by irradiation with 1300 nm infrared light, and a visible light image obtained by irradiation with visible light. FIG. 20 shows, for the facial region of the mouse, an ordinary digital camera image, an infrared image obtained by irradiation with 950 nm infrared light, and a visible light image.
Referring to the visible light images (see FIGS. 19 and 20), in each of them it can be seen that the two white lines in the middle (see the upper visible light image) are probably nerves, but the locations of nerves cannot be made out in the other visible light images of the facial region (the middle and lower visible light images). In the visible light images, portions that appear to be blood vessels can be recognized, but it is difficult to say that the blood vessels can be clearly identified. On the other hand, comparing the 1300 nm infrared image (FIG. 19) with the 950 nm infrared image (FIG. 20), in the 1300 nm image the amounts of light absorption in the nerve portion and the blood vessel portion do not differ much, whereas in the 950 nm image the blood vessel portion absorbs light strongly and the nerve portion absorbs less. Thus, when the wavelength of the irradiated infrared light differs, a large difference arises between the two infrared images. Even so, both images can extract the blood vessel portion and the nerve portion and distinguish them from the other portions. That is, the imaging system 1 of this embodiment can specify the position of the blood vessel portion and the position of the nerve portion, or specify the position of the nerve portion (or the blood vessel portion) out of the two. Furthermore, referring to the upper infrared image, the imaging system 1 can also confirm that a blood vessel exists near the nerve.
In the third embodiment, the imaging system 1 displays the plurality of infrared images acquired by irradiation with infrared light of a plurality of wavelengths while switching among them. When the 950 nm infrared image (FIG. 20) and the 1300 nm infrared image are displayed alternately on one screen of the display device 50, there are regions that are hard to distinguish in the relatively bright 950 nm image but easy to distinguish in the relatively dark 1300 nm image, and conversely, regions that are hard to distinguish in the relatively dark 1300 nm image but easy to distinguish in the relatively bright 950 nm image. In this way, the imaging system 1 (e.g., the control unit 101) can also distinguish blood vessels from nerves by performing control to switch the infrared images of the plurality of wavelengths on the screen.
(4) Fourth Embodiment
The fourth embodiment discloses a process for identifying a nerve portion by irradiating a sample with infrared light of a plurality of wavelengths, detecting the light radiated (reflected) from the sample to generate infrared images of the sample (infrared images corresponding to the respective wavelengths), and further generating and displaying difference images of the plurality of infrared images.
<Configuration Example of Imaging System 1>
The processing according to the fourth embodiment can be realized using, for example, the imaging system (surgery support system) 1 of FIG. 17, which has the same configuration as that of the third embodiment. A detailed description is therefore omitted here.
<Contents of the Nerve Imaging Process>
FIG. 21 is a flowchart for explaining the contents of the nerve imaging process according to the fourth embodiment.
(i) Step 2101
The control unit 101 controls the imaging system 1 so as to repeat the processing of steps 2102 to 2104 for infrared light of wavelength λk (k = 1 to n). In the following steps, each processing unit (the light irradiation control unit 1011 and the image generation unit 1012) is described as executing the processing of its step; however, since each processing unit is a function included in the control unit 101, the control unit 101 may also be regarded as the acting subject.
(ii) Step 2102
The light irradiation control unit 1011 controls the infrared light sources 21 to 22 of the infrared light source unit 20, which emit light of wavelength λk, to irradiate the sample with infrared light of wavelength λk (for example, one of the individual wavelengths 950 nm, 1070 nm, 1300 nm, 1450 nm, and 1600 nm). Although the imaging system 1 here outputs infrared light of different wavelengths by switching among infrared light sources, it is also possible, for example, to use an infrared light source that emits infrared light in a predetermined wavelength band and to extract infrared light of a specific wavelength by filtering the light emitted from that source.
(iii) Step 2103
The image generation unit 1012 controls the second imaging device 32 (for example, an ordinary infrared camera) to capture the infrared light radiated (reflected) from the sample when it is irradiated with infrared light of wavelength λk, detects (acquires) the spectral data (luminance values) for that wavelength λk, and generates an infrared image.
(iv) Step 2104
For example, the image generation unit 1012 stores the infrared image acquired in step 2103 in the storage unit 102. Accordingly, once the sample has been irradiated with infrared light of all the wavelengths, a plurality of infrared images corresponding to the respective wavelengths have been generated.
(v) Step 2105
The control unit 101 reads the infrared images generated in steps 2101 to 2104 from the storage unit 102 and generates difference images between the infrared images. If N infrared images have been generated (by irradiating the sample with N wavelengths), N × (N − 1) difference images are generated (one for each ordered pair of images).
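As an illustration of step 2105, the following Python sketch produces one difference image per ordered pair of wavelengths, giving the N × (N − 1) images stated above. The 8-bit display normalization is an assumed choice, not specified by the patent.

```python
import numpy as np
from itertools import permutations

def difference_images(images):
    """Step 2105: difference images for every ordered pair of wavelengths.
    `images` maps wavelength (nm) -> 2-D array; for N wavelengths this yields
    N*(N-1) images, matching the count stated in the embodiment."""
    diffs = {}
    for wa, wb in permutations(sorted(images), 2):
        d = images[wa].astype(np.float64) - images[wb]
        # Normalize to 8-bit for display (an illustrative choice).
        d = (d - d.min()) / (d.max() - d.min() + 1e-12)
        diffs[(wa, wb)] = (255 * d).astype(np.uint8)
    return diffs

# Example: one of the ordered differences between the 1300 nm and 950 nm
# images corresponds to FIG. 22, where the nerve appears bright and the
# blood vessels dark.
```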
(vi) Step 2106
The control unit 101 transfers the difference images to the display device 50 and instructs it to display them. For example, when there are a plurality of difference images, the control unit 101 may instruct switching display, or an operator (a doctor or the like) may be allowed to specify the difference image of which wavelengths to display. The display device 50 receives a difference image from the control unit 101 and displays it on the monitor 2_52. The display device 50 also receives a visible light image of the same sample from the control unit 101 and displays it on the monitor 1_51.
As described above, the imaging system 1 generates difference images (difference infrared images) from the plurality of infrared images obtained by irradiation with infrared light of specific wavelengths and displays them on the screen, whereby characteristic portions such as the nerve portion in the sample can be displayed in an emphasized manner, making it easier to identify the location of the nerve portion. In the fourth embodiment, as in the third embodiment, no nerve teacher spectral data is used and no superimposed image is generated, so the processing is simple and an inexpensive imaging system 1 can be realized.
<Example 7: Example of Difference Image Display>
FIG. 22 is a photograph showing an example of a nerve imaging result (the facial region of a mouse) obtained based on the method of the fourth embodiment. FIG. 22 shows the difference image between the 1300 nm infrared image (upper part of FIG. 19) and the 950 nm infrared image (upper part of FIG. 20).
As can be seen from the difference image in FIG. 22, the nerve portion appears white and the blood vessel portion black. The imaging system 1 can therefore selectively distinguish the nerve portion and the blood vessel portion from the other portions, and can also selectively distinguish the nerve portion (or the blood vessel portion) between the two.
In this way, by generating and displaying differences between infrared images of different wavelengths, the imaging system 1 can clearly distinguish the characteristic portions (the nerve and blood vessel portions) from the other portions. Moreover, since even the characteristic portions appear differently from each other in the difference image, the nerve portion can be distinguished from the blood vessel portion, for example. An operator (a doctor or the like) can thus work while confirming the position of the nerve in the difference image during surgery, for example, and can avoid the risk of damaging the nerve intraoperatively.
(5) Other Embodiments
(i) In each of the embodiments described above, all processing from infrared light irradiation to superimposed image display is executed in a single imaging system (surgery support system) 1; however, at least some of the functions of the imaging system 1 may be configured to be executed by another apparatus (imaging device). For example, an imaging unit (light detection unit) 30 that detects the infrared light radiated (reflected) from the sample, and a control unit 101 (control device 10) that generates, from the detected infrared light (measurement results), nerve feature data specifying the position of the nerve (for example, an enhanced image of the nerve or information indicating the nerve position itself), can be configured as an imaging device. Here, the nerve feature data (nerve feature image) is, for example, data used in the above embodiments and examples that represents the nerve portion in a form enabling it to be distinguished from sites other than the nerve (including blood vessels), such as the "enhanced image" or the "infrared image", or data including the positional information of the pixels constituting the enhanced image.
(ii) This embodiment discloses, for example, an imaging device comprising: a light detection unit (imaging unit) that acquires measurement results (detection results) obtained by irradiating a sample with infrared light of a wavelength suitable for identifying a nerve (infrared light of a single wavelength or infrared light having a predetermined wavelength band); and a control unit that generates, from the measurement results, a nerve feature image specifying the position of the nerve in the sample. With such a configuration, an image in which the nerve in the sample (for example, a site of a living body exposed as the surgical field) is distinguished from other sites (for example, blood vessels or streaks) can be displayed on the screen of the display device. An operator (for example, a doctor) can therefore visually confirm the position of the nerve displayed on the display device and carry out work during surgery without damaging the nerve.
In this embodiment, the light detection unit (imaging unit) may acquire a plurality of measurement results (detection results) obtained by irradiating the sample with infrared light of a plurality of wavelengths (a plurality of single wavelengths different from one another, or infrared light having a predetermined wavelength band), and the control unit may generate a plurality of nerve feature images from the plurality of measurement results. In this case, the control unit may output a control signal for displaying the plurality of nerve feature images on the display device while switching among them, or may generate a difference image of the plurality of nerve feature images and output a control signal for displaying that difference image on the display device. In this way, the position of the nerve can be further emphasized, making it easier to distinguish the nerve from other sites.
In this embodiment, the imaging device can also generate the nerve feature data using nerve teacher data indicating the spectral data of nerves. For example, the control unit may read, from the storage unit, nerve teacher data indicating the spectral data of nerves in the infrared region, compute correlation values between the nerve teacher data and the plurality of measurement results, and generate the nerve feature image of the sample based on the correlation values. By determining in this way whether the measurement data correlate with the nerve teacher data representing the characteristics of nerves, the position of the nerve can be identified accurately. A superimposed image may also be generated by superimposing the generated nerve feature image on a background image (for example, a visible light image or an infrared image) representing the sample (an image of the surgical field of the living body). In this way, the imaging device can clearly show the position of the nerve in the sample as an image, so that an operator (for example, a doctor) can easily confirm the position of the nerve.
When determining whether a measurement result correlates with the nerve teacher data, the control unit computes an SCM (Spectral Correlation Mapper) value using the nerve teacher data and the plurality of measurement results. The control unit then judges that measurement results whose SCM value is equal to or less than a predetermined threshold are highly correlated with the nerve teacher data, and generates the nerve feature image from them. Since computation using the SCM makes it possible to judge the correlation with the nerve teacher data accurately, the resulting nerve feature image represents the position of the nerve accurately.
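For reference, one common formulation of the SCM is the spectral angle computed on mean-centered spectra (a Pearson-correlation-based angle), in which small values mean high similarity, consistent with the threshold condition above. The sketch below assumes this standard formulation and an illustrative threshold, since this passage does not spell out the formula or the threshold value.

```python
import numpy as np

def scm_map(cube, teacher):
    """Per-pixel SCM values for a spectral cube of shape (H, W, K) against a
    teacher spectrum of shape (K,). SCM here is the spectral angle between
    mean-centered spectra; small values mean high similarity."""
    px = cube - cube.mean(axis=-1, keepdims=True)
    t = teacher - teacher.mean()
    cos = (px * t).sum(-1) / (
        np.linalg.norm(px, axis=-1) * np.linalg.norm(t) + 1e-12
    )
    return np.arccos(np.clip(cos, -1.0, 1.0))  # radians, in [0, pi]

def nerve_mask(cube, teacher, threshold=0.15):
    """Pixels whose SCM value is at or below the threshold are treated as
    nerve; the threshold value here is illustrative, not from the patent."""
    return scm_map(cube, teacher) <= threshold
```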
For example, when the sample is irradiated with infrared light of a predetermined wavelength band to obtain measurement results, infrared light having at least a part of the 900 nm to 2500 nm wavelength band (for example, 1290 nm to 1600 nm) can be used. In this case, a hyperspectral camera that resolves wavelengths to acquire the spectral values of the imaging target can be used as the imaging unit. For example, when a plurality of measurement results are obtained by irradiating the sample with infrared light of a plurality of wavelengths, an infrared camera that acquires the spectral values of the imaging target for single-wavelength infrared light without wavelength resolution (e.g., an InGaAs infrared camera sensitive to infrared light) can be used as the imaging unit. Here, the hyperspectral camera includes a camera that receives light of N wavelengths by dispersing infrared light of a predetermined wavelength band (e.g., 800 nm to 2500 nm) with a spectroscope and obtains an infrared image for each of the N wavelengths. For example, the hyperspectral camera can acquire three-dimensional infrared spectral data consisting of surface position (each pixel) and the wavelength direction over the predetermined wavelength band.
Furthermore, for example, the control unit may perform shape analysis on the nerve feature image and, based on the result, judge whether sites other than nerves (e.g., blood vessels, fat, etc.) are included in the nerve feature image as noise. When sites other than nerves are included as noise, the control unit removes the sites other than the nerve identified by the shape analysis (in this case, the noise) from the nerve feature image using a filter or the like. In this way, the imaging device can generate a nerve feature image that is even easier to view.
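The "shape analysis" and "filter" are not detailed in this passage; one plausible reading is connected-component filtering on the binary nerve mask, sketched below with assumed size and elongation criteria (nerves tend to appear as thin, elongated structures, unlike fat blobs). Both criteria and their values are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def remove_non_nerve_noise(mask, min_area=50, min_elongation=3.0):
    """One plausible shape-analysis cleanup of a binary nerve mask:
    keep only connected components that are large enough and elongated."""
    labels, n = ndimage.label(mask)
    keep = np.zeros_like(mask, dtype=bool)
    for i in range(1, n + 1):
        comp = labels == i
        if comp.sum() < min_area:
            continue  # drop small speckle
        ys, xs = np.nonzero(comp)
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        # Elongation of the bounding box as a crude shape descriptor.
        if max(h, w) / max(min(h, w), 1) >= min_elongation:
            keep |= comp
    return keep
```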
(iii) The functions of this embodiment can also be realized by software program code. In that case, a storage medium on which the program code is recorded is provided to a system or apparatus, and a computer (or a CPU or MPU) of that system or apparatus reads out the program code stored in the storage medium. In this case, the program code itself read out from the storage medium realizes the functions of the embodiments described above, and the program code itself and the storage medium storing it constitute this embodiment. As the storage medium for supplying such program code, for example, a flexible disk, CD-ROM, DVD-ROM, hard disk, optical disk, magneto-optical disk, CD-R, magnetic tape, nonvolatile memory card, ROM, or the like is used.
Also, based on the instructions of the program code, an OS (operating system) or the like running on the computer may perform part or all of the actual processing, and the functions of the embodiments described above may be realized by that processing. Furthermore, after the program code read out from the storage medium is written into memory on the computer, the CPU or the like of the computer may perform part or all of the actual processing based on the instructions of the program code, and the functions of the embodiments described above may be realized by that processing.
Furthermore, the program code of the software realizing the functions of the embodiments may be distributed via a network and stored in storage means such as a hard disk or memory of a system or apparatus, or in a storage medium such as a CD-RW or CD-R, and at the time of use the computer (or CPU or MPU) of the system or apparatus may read out and execute the program code stored in that storage means or storage medium.
The processes and techniques described here are not inherently related to any particular apparatus and can be implemented by any suitable combination of components. Furthermore, various types of general-purpose devices can be used in accordance with the methods described here. It may also be beneficial to construct a dedicated apparatus to execute the steps of the methods described here. In addition, various inventions can be formed by appropriate combinations of the plurality of constituent elements disclosed in the embodiments. For example, some constituent elements may be deleted from all the constituent elements shown in an embodiment. Furthermore, constituent elements across different embodiments may be combined as appropriate.
Other implementations of the present disclosure will be apparent to those having ordinary skill in the art from consideration of this specification and the embodiments. The various aspects and/or components of the described embodiments can be used alone or in any combination.
1 imaging system
10 control device
20 infrared light source unit
21 first infrared light source
22 second infrared light source
30 imaging unit
31 first imaging device
32 second imaging device
40 input device
50 display device
51 monitor 1
52 monitor 2
60 surgical shadowless lamp
80 living body
101 control unit
1011 light irradiation control unit
1012 image generation unit
1013 enhanced image generation unit
1014 image correction unit
102 storage unit

Claims (35)

1. An imaging device comprising:
     a light detection unit that irradiates a sample with infrared light of a wavelength suitable for identifying a nerve and acquires a measurement result; and
     a control unit that generates, from the measurement result, an image of the nerve that specifies the position of the nerve in the sample.

2. The imaging device according to claim 1, wherein
     the control unit generates the image of the nerve so as to represent the position of the nerve in a manner that allows the nerve to be distinguished from sites other than the nerve.

3. The imaging device according to claim 1 or 2, wherein
     the control unit generates the image of the nerve so as to represent the position of the nerve in a manner that allows the nerve to be distinguished from blood vessels.

4. The imaging device according to any one of claims 1 to 3, wherein
     the light detection unit acquires a plurality of measurement results obtained by irradiating the sample with infrared light of a plurality of wavelengths suitable for identifying the nerve, and
     the control unit generates a plurality of images of the nerve from the plurality of measurement results.

5. The imaging device according to claim 4, wherein
     the control unit outputs a control signal for causing a display device to display the plurality of images of the nerve while switching among them.

6. The imaging device according to claim 4, wherein
     the control unit generates a difference image from the plurality of images of the nerve and outputs a control signal for causing a display device to display the difference image.

7. The imaging device according to claim 4, wherein
     the control unit computes a correlation value between nerve teacher data and the measurement results and generates the image of the nerve in the sample based on the correlation value.

8. The imaging device according to claim 7, wherein
     the control unit generates a superimposed image by superimposing the image of the nerve on a background image representing the sample.

9. The imaging device according to claim 8, wherein
     the light detection unit includes a first imaging device having high detection sensitivity to light in the visible region and a second imaging device having high detection sensitivity to light in the infrared region, and
     the control unit uses, as the background image, a visible light image of the sample captured by the first imaging device or an infrared image of the sample captured by the second imaging device.

10. The imaging device according to claim 9, wherein
     the light detection unit includes an optical system that makes the optical axis of the first imaging device identical to the optical axis of the second imaging device.

11. The imaging device according to claim 9, further comprising
     a stage that moves the first imaging device and the second imaging device relative to the sample, wherein
     the control unit controls the stage so that the imaging position of the sample by the first imaging device is identical to the imaging position of the sample by the second imaging device.

12. The imaging device according to any one of claims 7 to 11, wherein
     the control unit computes an SCM (Spectral Correlation Mapper) value using the nerve teacher data and the plurality of measurement results, and generates the image of the nerve using the measurement results whose SCM value is equal to or less than a predetermined threshold.

13. The imaging device according to claim 12, wherein
     the light detection unit includes a hyperspectral camera that acquires spectral values of an imaging target, and acquires, with the hyperspectral camera, a measurement result obtained by irradiating the sample with infrared light having a predetermined wavelength band, and
     the control unit computes the SCM value in the predetermined wavelength band.

14. The imaging device according to claim 13, wherein
     the light detection unit acquires, with the hyperspectral camera, a measurement result obtained by irradiating the sample with the infrared light having at least a part of the wavelength band from 900 nm to 2500 nm.

15. The imaging device according to claim 14, wherein
     the wavelength band is from 1290 nm to 1600 nm.

16. The imaging device according to claim 12, wherein
     the light detection unit includes an infrared camera that acquires spectral values of an imaging target for single-wavelength infrared light, and acquires, with the infrared camera, a plurality of measurement results obtained by irradiating the sample with a plurality of infrared lights of mutually different wavelengths, and
     the control unit computes the SCM value at the wavelengths of the plurality of infrared lights.

17. The imaging device according to any one of claims 1 to 16, wherein
     the control unit judges, based on the image of the nerve, whether a site other than the nerve is included in the image of the nerve and, if it is included, excludes the site other than the nerve from the image of the nerve.

18. An imaging system comprising:
     the imaging device according to any one of claims 1 to 17; and
     a display device that displays the image of the nerve.
19. An imaging method comprising:
     acquiring, by a light detection unit, a measurement result by irradiating a sample with infrared light of a wavelength suitable for identifying a nerve; and
     generating, by a control unit, from the measurement result, an image of the nerve that specifies the position of the nerve in the sample.

20. The imaging method according to claim 19, wherein
     the control unit generates the image of the nerve so as to represent the position of the nerve in a manner that allows the nerve to be distinguished from sites other than the nerve.

21. The imaging method according to claim 19 or 20, wherein
     the control unit generates the image of the nerve so as to represent the position of the nerve in a manner that allows the nerve to be distinguished from blood vessels.

22. The imaging method according to any one of claims 19 to 21, wherein
     acquiring the measurement result includes the light detection unit acquiring a plurality of measurement results obtained by irradiating the sample with infrared light of a plurality of wavelengths suitable for identifying the nerve, and
     generating the image of the nerve includes the control unit generating a plurality of images of the nerve from the plurality of measurement results.

23. The imaging method according to claim 22, further comprising
     the control unit outputting a control signal for causing a display device to display the plurality of images of the nerve while switching among them.

24. The imaging method according to claim 22, further comprising
     the control unit generating a difference image from the plurality of images of the nerve and outputting a control signal for causing a display device to display the difference image.

25. The imaging method according to claim 22, wherein
     generating the image of the nerve includes the control unit computing a correlation value between nerve teacher data of the nerve and the measurement results, and the control unit generating the image of the nerve in the sample based on the correlation value.

26. The imaging method according to claim 25, wherein
     generating the image of the nerve includes the control unit computing an SCM (Spectral Correlation Mapper) value using the nerve teacher data and the plurality of measurement results, and the control unit generating the image of the nerve using the measurement results whose SCM value is equal to or less than a predetermined threshold.

27. The imaging method according to claim 26, wherein
     the light detection unit acquires a measurement result obtained by irradiating the sample with infrared light having a predetermined wavelength band, using a hyperspectral camera that acquires spectral values of an imaging target, and
     the control unit computes the SCM value in the predetermined wavelength band.

28. The imaging method according to claim 27, wherein
     the light detection unit acquires, with the hyperspectral camera, a measurement result obtained by irradiating the sample with the infrared light having at least a part of the wavelength band from 900 nm to 2500 nm.

29. The imaging method according to claim 26, wherein
     the light detection unit acquires a plurality of measurement results obtained by irradiating the sample with infrared light of a plurality of wavelengths, using an infrared camera that acquires spectral values of an imaging target for single-wavelength infrared light, and
     the control unit computes the SCM value at the plurality of wavelengths.

30. The imaging method according to any one of claims 19 to 29, further comprising
     the control unit judging, based on the image of the nerve, whether a site other than the nerve is included in the image of the nerve and, if it is included, excluding the site other than the nerve from the image of the nerve.
31. An imaging device comprising:
     a light detection unit that irradiates a sample with a plurality of infrared lights suitable for identifying a nerve and acquires a plurality of measurement results; and
     a control unit that generates an enhanced image of the nerve in which the position of the nerve in the sample is emphasized by computation using the plurality of measurement results.

32. The imaging device according to claim 31, further comprising
     a storage unit that stores nerve teacher data indicating spectral data of nerves in the infrared region, wherein
     the control unit reads the nerve teacher data from the storage unit and compares the plurality of measurement results with the nerve teacher data to generate nerve feature data indicating the position of the nerve.

33. The imaging device according to claim 32, wherein
     the control unit performs an SCM computation using the plurality of measurement results and the nerve teacher data, and generates the nerve feature data based on the result of the SCM computation.

34. The imaging device according to claim 32 or 33, wherein
     the control unit uses an image of the sample as a background image and generates a superimposed image by superimposing, at the position of the nerve in the background image, the enhanced image of the nerve based on the nerve feature data.

35. An imaging system comprising:
     the imaging device according to any one of claims 31 to 34; and
     a display device that displays the feature image of the nerve.
PCT/JP2019/001819 2018-01-25 2019-01-22 Image capture device, image capture system, and image capture method WO2019146582A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-011011 2018-01-25
JP2018011011 2018-01-25

Publications (1)

Publication Number Publication Date
WO2019146582A1 true WO2019146582A1 (en) 2019-08-01

Family

ID=67394958

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/001819 WO2019146582A1 (en) 2018-01-25 2019-01-22 Image capture device, image capture system, and image capture method

Country Status (1)

Country Link
WO (1) WO2019146582A1 (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004154255A (en) * 2002-11-05 2004-06-03 Olympus Corp Observation device for operation
JP2012024283A (en) * 2010-07-22 2012-02-09 Fujifilm Corp Endoscope diagnostic apparatus
WO2014203901A1 (en) * 2013-06-19 2014-12-24 株式会社トプコン Ophthalmological imaging device and ophthalmological image display device
JP2017064405A (en) * 2015-09-29 2017-04-06 住友電気工業株式会社 Optical measuring device and optical measuring method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GIBBS, S. L. ET AL.: "Structure-activity relationship of nerve-highlighting fluorophores", PLOS ONE, vol. 8, no. 9, 9 September 2013 (2013-09-09), page e73493, XP055627112 *
WANG, C. ET AL.: "Longitudinal near-infrared imaging of myelination", THE JOURNAL OF NEUROSCIENCE, vol. 31, no. 7, 16 February 2011 (2011-02-16), pages 2382-2390, XP055627109 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112444493A (en) * 2020-10-13 2021-03-05 中科巨匠人工智能技术(广州)有限公司 Optical detection system and device based on artificial intelligence
CN112444493B (en) * 2020-10-13 2024-01-09 中科巨匠人工智能技术(广州)有限公司 Optical detection system and device based on artificial intelligence
JPWO2022158451A1 (en) * 2021-01-19 2022-07-28
WO2022158451A1 (en) * 2021-01-19 2022-07-28 アナウト株式会社 Computer program, method for generating learning model, and assistance apparatus
JP7457415B2 (en) 2021-01-19 2024-03-28 アナウト株式会社 Computer program, learning model generation method, and support device


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19743948

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19743948

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP