WO2017030276A1 - Medical image display device and medical image processing method - Google Patents

Medical image display device and medical image processing method

Info

Publication number
WO2017030276A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical image
image
medical
anatomical
region
Prior art date
Application number
PCT/KR2016/006087
Other languages
English (en)
Korean (ko)
Inventor
남우현
오지훈
박용섭
이재성
Original Assignee
삼성전자(주) (Samsung Electronics Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020160044817A external-priority patent/KR102522539B1/ko
Application filed by 삼성전자(주) (Samsung Electronics Co., Ltd.)
Priority to US15/753,051 priority Critical patent/US10682111B2/en
Priority to EP16837218.3A priority patent/EP3338625B1/fr
Publication of WO2017030276A1 publication Critical patent/WO2017030276A1/fr

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/05Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves 
    • A61B5/055Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves  involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves

Definitions

  • the present invention relates to a medical image display apparatus for displaying a screen including a medical image and a medical image processing method thereof.
  • the medical image display apparatus is a device for acquiring an internal structure of an object as an image.
  • the medical image display device is a non-invasive inspection device, and photographs and processes structural details, internal tissues, and fluid flow in the body and shows them to the user.
  • a user such as a doctor may diagnose a patient's health condition and a disease by using the medical image output from the medical image display apparatus.
  • Medical image display devices include magnetic resonance imaging (MRI) devices, computed tomography (CT) devices, X-ray devices, ultrasound devices, and the like.
  • The magnetic resonance imaging apparatus photographs a subject using a magnetic field, and is widely used for accurate disease diagnosis because it can show bones, disks, joints, nerves, ligaments, and the like three-dimensionally at a desired angle.
  • the magnetic resonance imaging apparatus acquires a magnetic resonance (MR) signal by using a high frequency multi-coil including RF coils, a permanent magnet, a gradient coil, and the like.
  • the magnetic resonance image is reconstructed by sampling the magnetic resonance signal (MR signal).
  • The computed tomography (CT) device can provide a cross-sectional image of the object and, compared to a general X-ray device, has the advantage that internal structures of the object (for example, organs such as the kidneys and lungs) are shown without overlap; it is therefore widely used for the precise diagnosis of disease.
  • The computed tomography apparatus radiates X-rays toward the object and detects the X-rays passing through the object. An image is then reconstructed from the detected X-rays.
  • the X-ray imaging apparatus images the inside of the object by radiating X-rays to the object and detecting X-rays passing through the object.
  • The ultrasound apparatus transmits an ultrasound signal to the object and receives the ultrasound signal reflected from the object to form a two-dimensional or three-dimensional ultrasound image of a region of interest within the object.
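  • As an editorial illustration (not part of the patent), the pulse-echo principle above can be sketched in Python: the depth of a reflecting tissue interface follows from the round-trip time of the echo and an assumed average speed of sound in soft tissue (about 1540 m/s).

```python
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, assumed average in soft tissue

def echo_depth_cm(echo_time_us):
    """Depth of a reflector in cm from the round-trip echo time.

    The pulse travels to the reflector and back, so d = c * t / 2.
    """
    t_seconds = echo_time_us * 1e-6
    return SPEED_OF_SOUND_TISSUE * t_seconds / 2.0 * 100.0

# A 65-microsecond round trip corresponds to a reflector about 5 cm deep.
depth = echo_depth_cm(65.0)
```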
  • the medical images acquired by the various medical image display apparatuses express the object in various ways according to the type and the photographing method of the medical image display apparatus.
  • The doctor reads the medical image to determine whether the patient's condition or health is abnormal. It is therefore desirable to provide a medical image display device that facilitates diagnosis by allowing a medical image suitable for diagnosing the patient to be selected and read.
  • According to one aspect, a medical image display apparatus includes: a display unit configured to display a first medical image of an object including at least one anatomical entity; and at least one processor configured to extract reference region information corresponding to the anatomical entity from at least one second medical image that is a reference image of the first medical image, to detect a region corresponding to the anatomical entity in the first medical image based on the extracted reference region information, and to control the display unit to display the detected region of the anatomical entity separately from regions that are not the anatomical entity.
  • The processor generates a third medical image, including display region information on the anatomical entity detected in the first medical image, by registering the first medical image with the second medical image, and controls the display unit to display, based on the display region information, the region of the anatomical entity detected in the third medical image separately from regions that are not the anatomical entity. Individual entities can thus be displayed in an image generated using medical image registration.
  • When there are a plurality of anatomical entities, the display unit may display the regions of the plurality of anatomical entities separately from one another. Entities that would otherwise be difficult to identify can thus be distinguished, which is convenient for diagnosis.
  • the plurality of anatomical entities may comprise at least one of blood vessels and lymph nodes.
  • the first medical image may be a non-contrast medical image.
  • the second medical image may be a contrast-enhanced medical image.
  • A contrast-enhanced medical image, in which individual entities are easy to distinguish by segmentation, can thus be used as the reference image.
  • The second medical image may be a medical image of the same object as the first medical image, captured at a different point in time.
  • the past history of the patient can be utilized for the present diagnosis.
  • The detected region of the anatomical entity may be displayed separately from regions that are not the anatomical entity by at least one of a color, a pattern, a pointer, a highlight, and an animation effect.
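  • As an editorial sketch of such a highlight display (the function name and blending scheme here are illustrative, not taken from the patent), a detected region can be tinted with a color while the rest of the grayscale image is left untouched:

```python
import numpy as np

def highlight_region(gray, mask, color=(255, 0, 0), alpha=0.5):
    """Blend a highlight color into the pixels covered by `mask`.

    gray:  2-D uint8 grayscale medical image.
    mask:  2-D boolean array marking the detected anatomical region.
    Returns an RGB uint8 image in which the masked region is tinted
    while the rest of the image is left unchanged.
    """
    rgb = np.stack([gray] * 3, axis=-1).astype(np.float32)
    tint = np.array(color, dtype=np.float32)
    rgb[mask] = (1 - alpha) * rgb[mask] + alpha * tint
    return rgb.astype(np.uint8)

# Example: highlight a 2x2 patch in a uniform 4x4 image.
img = np.full((4, 4), 100, dtype=np.uint8)
m = np.zeros((4, 4), dtype=bool)
m[1:3, 1:3] = True
out = highlight_region(img, m)
```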
  • Various distinguishing-display options are thus provided according to the user's preference.
  • The distinguishing display of the region of the anatomical entity may be activated or deactivated by user selection.
  • This provides the user with convenient control over the entity display function.
  • The apparatus may further include a user input unit configured to receive a user input, and the processor may control the display unit to adjust the level of the distinguishing display of the anatomical entity in response to the user input.
  • the processor may further detect the lesion extension region in the region of the anatomical entity, and control the display unit so that the lesion extension region detected in the region of the anatomical entity is identifiably displayed.
  • Information on the progress of the lesion is thus additionally provided, facilitating diagnosis of the lesion.
  • The processor may extract the reference region information corresponding to the anatomical entity by using the brightness values of the pixels of the second medical image. Necessary information can thus be obtained by efficiently utilizing the information of the pre-stored image.
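  • A minimal sketch of such brightness-based region extraction (an editorial illustration; the intensity window is an assumed, modality-specific value, not from the patent):

```python
import numpy as np

def extract_region_by_brightness(image, lo, hi):
    """Return a boolean mask of pixels whose brightness lies in [lo, hi].

    In a contrast-enhanced reference image, structures that take up
    contrast agent appear within a characteristic intensity band, so a
    simple window on pixel brightness can serve as reference-region
    information.
    """
    return (image >= lo) & (image <= hi)

# A toy 3x3 "contrast-enhanced" image: the bright center pixel stands in
# for an enhanced vessel.
ref = np.array([[10, 10, 10],
                [10, 200, 10],
                [10, 10, 10]])
mask = extract_region_by_brightness(ref, 150, 255)
```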
  • The processor may perform image registration using predetermined transformation model parameters such that the value of a similarity measure between the first medical image and the second medical image is maximized.
  • The processor may perform image registration using predetermined transformation model parameters such that the value of a cost function of the first medical image and the second medical image is minimized. The possibility of error in the registered image generated by the registration process is thus lowered.
  • The processor may map a coordinate system between the first medical image and the second medical image, and perform homogeneous registration on the coordinate-mapped first and second medical images while maintaining the image characteristics of the second medical image. A registered image in which lymph nodes and blood vessels are displayed separately is thus provided.
  • The processor may further perform heterogeneous registration on the first medical image and the second medical image on which the homogeneous registration has been performed, modifying the image characteristics of the second medical image so that it matches the first medical image more completely.
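  • The registration idea above, choosing transformation parameters that maximize a similarity measure (equivalently, minimize a cost), can be illustrated with a toy translation-only search (an editorial sketch using normalized cross-correlation; real registration would use richer transformation models and continuous optimization):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation; higher means more similar."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def register_translation(fixed, moving, max_shift=2):
    """Exhaustively search integer shifts of `moving` and keep the one
    that maximizes similarity with `fixed` (equivalently, minimizes a
    cost defined as the negative similarity)."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            s = ncc(fixed, shifted)
            if s > best:
                best, best_shift = s, (dy, dx)
    return best_shift

fixed = np.zeros((8, 8))
fixed[3:5, 3:5] = 1.0
moving = np.roll(np.roll(fixed, 1, axis=0), 2, axis=1)  # shifted copy
shift = register_translation(fixed, moving)  # shift that re-aligns it
```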
  • According to another aspect, a medical image processing method includes: displaying a first medical image of an object including at least one anatomical entity; extracting reference region information corresponding to the anatomical entity from at least one second medical image that is a reference image of the first medical image; detecting a region corresponding to the anatomical entity in the first medical image based on the extracted reference region information; and displaying the detected region of the anatomical entity separately from regions that are not the anatomical entity.
  • A distinguishing-display function is thus provided, using information extracted from the reference image, for an entity that could not otherwise be identified in the medical image.
  • The region of the anatomical entity detected in the third medical image generated based on the display region information may be displayed separately from regions that are not the anatomical entity. Individual entities can thus be displayed in an image generated using medical image registration.
  • When there are a plurality of anatomical entities, the displaying may include displaying the regions of the plurality of anatomical entities separately from one another. Entities that would otherwise be difficult to identify can thus be distinguished, which is convenient for diagnosis.
  • the plurality of anatomical entities may comprise at least one of blood vessels and lymph nodes.
  • the first medical image may be a non-contrast medical image.
  • the second medical image may be a contrast-enhanced medical image.
  • A contrast-enhanced medical image, in which individual entities are easy to distinguish by segmentation, can thus be used as the reference image.
  • The second medical image may be a medical image of the same object as the first medical image, captured at a different point in time.
  • the past history of the patient can be utilized for the present diagnosis.
  • the distinguishing display may be performed by distinguishing the detected anatomical region from the non-anatomical region by at least one of color, pattern, pointer, highlight, and animation effect.
  • Various distinguishing-display options are thus provided according to the user's preference.
  • The method may further include receiving a user selection for activating or deactivating the distinguishing display of the anatomical entity.
  • This provides the user with convenient control over the entity display function.
  • The method may further include receiving a user input for adjusting the level of the distinguishing display of the anatomical entity.
  • the method may further include detecting a lesion extension region in the region of the anatomical entity that is displayed separately, and causing the lesion extension region detected in the region of the anatomical entity to be distinguishably displayed.
  • Information on the progress of the lesion is thus additionally provided, facilitating diagnosis of the lesion.
  • The reference region information corresponding to the anatomical entity may be extracted using the brightness values of the pixels of the second medical image. Necessary information can thus be obtained by efficiently utilizing the information of the pre-stored image.
  • Image registration may be performed using predetermined transformation model parameters such that the value of a similarity measure between the first medical image and the second medical image is maximized.
  • Image registration may be performed using predetermined transformation model parameters such that the value of a cost function of the first medical image and the second medical image is minimized. The possibility of error in the registered image generated by the registration process is thus lowered.
  • The generating of the third medical image may include: mapping the coordinate systems of the first medical image and the second medical image; and performing homogeneous registration on the coordinate-mapped first and second medical images while maintaining the image characteristics of the second medical image.
  • a registration image is provided in which lymph nodes and blood vessels are separately displayed.
  • The generating of the third medical image may further include performing heterogeneous registration on the first medical image and the second medical image on which the homogeneous registration has been performed, modifying the image characteristics of the second medical image so that it matches the first medical image more completely. The user can thus be provided with quantified results of lesion expansion in lymph nodes.
  • Lymph node follow-up is thus possible even in patients with weak renal function, for whom active use of a contrast agent is burdensome.
  • The present embodiment can also be applied to non-contrast images for general examination, and can be used for early diagnosis of cancer-related conditions such as cancer metastasis.
  • FIG. 1 is a view for explaining a medical image display apparatus according to an embodiment of the present invention
  • FIG. 2 is a view schematically showing an MRI apparatus according to an embodiment of the present invention
  • FIG. 3 is a view showing a CT device according to an embodiment of the present invention.
  • FIG. 4 is a view schematically showing the configuration of the CT device of FIG. 3.
  • FIG. 5 is a diagram schematically illustrating a configuration of a communication unit that performs communication with an external device in a network system.
  • FIG. 6 is a diagram illustrating a system including a first medical device, a second medical device, and a medical image registration device according to an embodiment of the present invention.
  • FIG. 7 is a diagram conceptually illustrating lymph nodes and blood vessel distribution of a thoracic region.
  • FIG. 8 is a diagram illustrating contrast-enhanced CT images taken of a chest region.
  • FIG. 9 is a diagram illustrating non-contrast CT images taken of a chest region.
  • FIG. 10 is a block diagram showing the configuration of a medical image display apparatus according to an embodiment of the present invention.
  • FIG. 11 is a block diagram illustrating a configuration of an image processor of FIG. 10.
  • FIG. 12 is a view showing a first medical image according to an embodiment of the present invention.
  • FIG. 13 is a view showing a second medical image according to an embodiment of the present invention.
  • FIG. 14 is a diagram illustrating a second medical image from which an object is extracted
  • FIG. 15 is a diagram for explaining an image registration process according to the present embodiment.
  • FIG. 18 is a flowchart illustrating a registration process according to an embodiment of the present invention.
  • FIG. 19 is a view showing a third medical image according to an embodiment of the present invention.
  • FIG. 21 is an enlarged view of a portion of an object area in FIG. 20.
  • FIG. 22 is a diagram illustrating a screen displayed according to driving of an application having a medical diagnosis function in a medical image display apparatus according to an embodiment of the present invention.
  • FIGS. 23 to 26 illustrate various examples of using image registration for diagnosis in a medical image display apparatus according to an embodiment of the present invention.
  • FIG. 27 is a flowchart illustrating a medical image processing method according to an embodiment of the present invention.
  • The term "part" refers to software or a hardware component such as an FPGA or an ASIC, and a "part" performs certain roles. However, "part" is not limited to software or hardware.
  • A "part" may be configured to reside on an addressable storage medium and configured to execute on one or more processors.
  • Thus, as an example, a "part" includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
  • the functionality provided within the components and “parts” may be combined into a smaller number of components and “parts” or further separated into additional components and “parts”.
  • image refers to multi-dimensional data consisting of discrete image elements (e.g., pixels in a two-dimensional image and voxels in a three-dimensional image).
  • image may include a medical image of an object acquired by X-ray, CT, MRI, ultrasound, and other medical imaging systems.
  • an "object” may include a person or an animal, or a part of a person or an animal.
  • the subject may include organs such as the liver, heart, uterus, brain, breast, abdomen, or blood vessels.
  • The "object" may include a phantom. A phantom refers to a material having a volume with a density and effective atomic number very close to those of a living organism, and may include a spherical phantom having properties similar to the human body.
  • the "user” may be a doctor, a nurse, a clinical pathologist, a medical imaging expert, or the like, and may be a technician who repairs a medical device, but is not limited thereto.
  • FIG. 1 is a view for explaining a medical image display apparatus 100 according to an embodiment of the present invention.
  • the medical image display apparatus 100 may be a device for obtaining a medical image and displaying the medical image on the screen.
  • The medical image display apparatus 100 includes a magnetic resonance imaging apparatus (hereinafter, MRI apparatus) 101, a computed tomography apparatus (hereinafter, CT apparatus) 102, an X-ray imaging device (not shown), an angiography device (not shown), an ultrasonic device 103, and the like.
  • The MRI apparatus 101 is an apparatus that obtains an image of a tomographic region of an object by expressing, as contrast, the intensity of a magnetic resonance (MR) signal in response to a radio frequency (RF) signal applied in a magnetic field of a specific intensity.
  • The CT device 102 can provide a cross-sectional image of the object and can express internal structures of the object (for example, organs such as a kidney or a lung) without the overlap that occurs in a general X-ray imaging apparatus.
  • The CT device 102 can provide a relatively accurate cross-sectional image of an object by acquiring and processing several tens to hundreds of images per second, each with a slice thickness of 2 mm or less.
  • the X-ray imaging apparatus refers to a device that transmits X-rays through a human body to image an internal structure of the human body.
  • The angiography apparatus is a device that visualizes the blood vessels (arteries and veins) of a subject through X-rays after a contrast agent is injected through a thin tube, about 2 mm in diameter, called a catheter.
  • The ultrasound apparatus 103 refers to a device that transmits an ultrasound signal from the body surface of the object toward a predetermined part of the body and uses the information of the ultrasound signal reflected from tissues of the body (hereinafter also referred to as an ultrasound echo signal) to obtain an image of a soft-tissue cross-section or of blood flow.
  • the medical image display apparatus 100 may be implemented in various forms.
  • the medical image display apparatus 100 described herein may be implemented in the form of a mobile terminal as well as a fixed terminal.
  • a mobile terminal may be a smartphone, a smart pad, a tablet PC, a laptop computer, a PDA, and the like.
  • The medical image display apparatus 100 may exchange medical image data with a hospital server or another medical apparatus in the hospital, connected through a picture archiving and communication system (PACS).
  • the medical image display apparatus 100 may perform data communication with a server or the like according to a digital imaging and communications in medicine (DICOM) standard.
  • the medical image display apparatus 100 may include a touch screen.
  • the touch screen may be configured to detect not only the touch input position and the touched area but also the touch input pressure.
  • the touch screen may be configured to detect proximity touch as well as real-touch.
  • A real touch refers to a case in which the screen is actually touched by the user's body (e.g., a finger) or by a touch tool (e.g., a pointing device, a stylus, a haptic pen, or an electronic pen).
  • A proximity touch refers to a case in which the user's body or a touch tool does not actually touch the screen but approaches within a predetermined distance from the screen (for example, hovering within a detectable spacing of 30 mm or less).
  • the touch screen may be implemented by, for example, a resistive method, a capacitive method, an infrared method, or an acoustic wave method.
  • The medical image display apparatus 100 may detect a gesture input as a user's touch input to the medical image through the touch screen.
  • The touch input of a user described herein includes a tap, a click (touching harder than a tap), touch and hold, a double tap, a double click, drag, drag and drop, slide, flicking, panning, swipe, pinch, and the like.
  • Inputs such as drag, slide, flicking, and swipe may consist of a press in which a finger (or touch pen) contacts the touch screen, a movement over a certain distance, and a release from the touch screen, and include both straight and curved movements.
  • The various touch inputs above are included in the gesture input.
  • the medical image display apparatus 100 may provide some or all of the buttons for controlling the medical image in the form of a graphical user interface (GUI).
  • FIG. 2 is a diagram schematically showing an MRI apparatus 101 according to an embodiment of the present invention.
  • a magnetic resonance image refers to an image of an object acquired using the nuclear magnetic resonance principle.
  • The MRI apparatus 101 is an apparatus that obtains an image of a tomographic region of an object by expressing, as contrast, the intensity of a magnetic resonance (MR) signal in response to a radio frequency (RF) signal applied in a magnetic field of a specific intensity. For example, if a subject is placed in a strong magnetic field and irradiated with an RF signal that resonates only a particular atomic nucleus (e.g., a hydrogen nucleus), an MR signal is emitted from that nucleus, and the MRI apparatus 101 may obtain an MR image by receiving this MR signal.
  • the MR signal refers to an RF signal radiated from the object.
  • The magnitude of the MR signal may be determined by the concentration of a given atom (e.g., hydrogen) included in the subject, the relaxation time T1, the relaxation time T2, and blood flow.
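  • As an editorial aside, the dependence of the MR signal on proton density and the relaxation times can be illustrated with the textbook spin-echo signal model S = rho * (1 - exp(-TR/T1)) * exp(-TE/T2); the tissue values below are approximate literature figures, not taken from the patent.

```python
import math

def spin_echo_signal(rho, t1, t2, tr, te):
    """Textbook spin-echo signal: S = rho * (1 - exp(-TR/T1)) * exp(-TE/T2).

    rho: relative proton density; t1, t2, tr, te in milliseconds.
    Short TR with short TE emphasizes T1 differences between tissues.
    """
    return rho * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

# Approximate values for white matter vs. CSF at 1.5 T, with a
# T1-weighted protocol (short TR, short TE): white matter, having the
# shorter T1, recovers more signal and appears brighter than CSF.
wm  = spin_echo_signal(rho=0.7, t1=600.0,  t2=80.0,   tr=500.0, te=15.0)
csf = spin_echo_signal(rho=1.0, t1=4000.0, t2=2000.0, tr=500.0, te=15.0)
```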
  • The MRI apparatus 101 has characteristics that differ from those of other imaging apparatuses. Unlike imaging devices such as CT, in which image acquisition depends on the orientation of the detection hardware, the MRI apparatus 101 can acquire a 2D image or a 3D volume image oriented toward an arbitrary point. In addition, unlike CT, X-ray, PET, and SPECT, the MRI apparatus 101 does not expose the subject or the examiner to radiation, and can acquire images with high soft-tissue contrast; thus neurological images, intravascular images, musculoskeletal images, oncologic images, and other images in which a clear depiction of abnormal tissue is important can be obtained.
  • The MRI apparatus 101 may include a gantry 220, a signal transceiver 230, a monitor 240, a system controller 250, and an operating unit 260.
  • the gantry 220 blocks electromagnetic waves generated by the main magnet 222, the gradient coil 224, the RF coil 226, and the like from radiating to the outside.
  • a static magnetic field and a gradient magnetic field are formed in the bore in the gantry 220, and an RF signal is irradiated toward the object 210.
  • the main magnet 222, the gradient coil 224 and the RF coil 226 may be disposed along a predetermined direction of the gantry 220.
  • the predetermined direction may include a coaxial cylindrical direction or the like.
  • the object 210 may be positioned on a table 228 that can be inserted into the cylinder along the horizontal axis of the cylinder.
  • the main magnet 222 generates a static magnetic field for aligning the directions of the magnetic dipole moments of the atomic nuclei included in the object 210 in a predetermined direction. As the magnetic field generated by the main magnet becomes stronger and more uniform, a relatively precise and accurate MR image of the object 210 may be obtained.
  • the gradient coil 224 includes X, Y, and Z coils that generate gradient magnetic fields in the X-, Y-, and Z-axis directions that are perpendicular to each other.
  • the gradient coil 224 may induce resonance frequencies differently for each part of the object 210 to provide location information of each part of the object 210.
  • the RF coil 226 may radiate an RF signal to the patient and receive an MR signal emitted from the patient.
  • the RF coil 226 may transmit, toward the precessing atomic nuclei of the patient, an RF signal having the same frequency as the precession frequency, then stop transmitting the RF signal and receive the MR signal emitted from the patient.
  • the RF coil 226 generates an electromagnetic signal, for example, an RF signal having a radio frequency corresponding to the type of atomic nucleus, in order to transition the nucleus from a low energy state to a high energy state.
  • when the electromagnetic signal generated by the RF coil 226 is applied to a certain atomic nucleus, the nucleus may transition from a low energy state to a high energy state. Thereafter, when the electromagnetic wave generated by the RF coil 226 disappears, the nucleus to which the electromagnetic wave was applied may radiate an electromagnetic wave having a Larmor frequency while transitioning from the high energy state back to the low energy state.
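The Larmor frequency mentioned above is proportional to the static field strength: f = γ̄·B0, where γ̄ ≈ 42.577 MHz/T for a hydrogen nucleus. The values below are standard physics constants, not figures from this disclosure; a minimal sketch:

```python
# Larmor frequency f = gamma_bar * B0, with gamma_bar for 1H ~ 42.577 MHz/T.
GAMMA_BAR_MHZ_PER_T = 42.577  # gyromagnetic ratio of hydrogen divided by 2*pi

def larmor_mhz(b0_tesla):
    """Resonance (Larmor) frequency in MHz for a hydrogen nucleus at field B0."""
    return GAMMA_BAR_MHZ_PER_T * b0_tesla

print(larmor_mhz(1.5))  # ~63.87 MHz at 1.5 T
print(larmor_mhz(3.0))  # ~127.73 MHz at 3 T
```

This is the frequency at which the RF transmitter (236, described below) must drive the RF coil for the nuclei to resonate.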
  • the RF coil 226 may receive an electromagnetic wave signal radiated from atomic nuclei inside the object 210.
  • the RF coil 226 may be implemented as a single RF transmission/reception coil having both a function of generating an electromagnetic wave having a radio frequency corresponding to the type of atomic nucleus and a function of receiving the electromagnetic wave radiated from the atomic nucleus. Alternatively, it may be implemented as a transmitting RF coil having the generating function and a receiving RF coil having the receiving function.
  • the RF coil 226 may be fixed to the gantry 220 or may be removable.
  • the detachable RF coil 226 may include RF coils for particular portions of the object, such as a head RF coil, a chest RF coil, a leg RF coil, a neck RF coil, a shoulder RF coil, a wrist RF coil, and an ankle RF coil.
  • the RF coil 226 may communicate with an external device in a wired and / or wireless manner, and may also perform dual tune communication according to a communication frequency band.
  • the RF coil 226 may be a birdcage coil, a surface coil, or a transverse electromagnetic (TEM) coil according to the structure of the coil.
  • the RF coil 226 may include a transmission-only coil, a reception-only coil, and a transmission / reception combined coil according to an RF signal transmission / reception method.
  • the RF coil 226 may include RF coils of various channels, such as 16 channels, 32 channels, 72 channels, and 144 channels.
  • hereinafter, an example will be described in which the RF coil 226 is a radio-frequency multi-coil including N coils corresponding to first through Nth channels among the plurality of channels.
  • the high frequency multi-coil may be referred to as a multichannel RF coil.
  • the gantry 220 may further include a display 229 positioned outside the gantry 220 and a display (not shown) positioned inside the gantry 220. Predetermined information may be provided to a user or an object through a display positioned inside and / or outside the gantry 220.
  • the signal transceiver 230 may control the gradient magnetic field formed in the gantry 220, that is, the bore according to a predetermined MR sequence, and may control the transmission and reception of the RF signal and the MR signal.
  • the signal transceiver 230 may include a gradient magnetic field amplifier 232, a transceiver switch 234, an RF transmitter 236, and an RF data acquirer 238.
  • the gradient amplifier 232 drives the gradient coil 224 included in the gantry 220 and may supply a pulse signal for generating a gradient magnetic field to the gradient coil 224 under the control of the gradient magnetic field controller 254. By controlling the pulse signal supplied from the gradient amplifier 232 to the gradient coil 224, gradient magnetic fields in the X-axis, Y-axis, and Z-axis directions can be synthesized.
  • the RF transmitter 236 and the RF data acquirer 238 may drive the RF coil 226.
  • the RF transmitter 236 may supply an RF pulse of a Larmor frequency to the RF coil 226, and the RF data acquirer 238 may receive an MR signal received by the RF coil 226.
  • the transmission / reception switch 234 may adjust the transmission and reception directions of the RF signal and the MR signal.
  • the RF signal may be irradiated to the object 210 through the RF coil 226 during the transmission mode, and the MR signal may be received from the object 210 through the RF coil 226 during the reception mode.
  • the transmission / reception switch 234 may be controlled by a control signal from the RF control unit 256.
  • the monitoring unit 240 may monitor or control the gantry 220 or the devices mounted on the gantry 220.
  • the monitoring unit 240 may include a system monitoring unit 242, an object monitoring unit 244, a table control unit 246, and a display control unit 248.
  • the system monitoring unit 242 may monitor and control the state of the static magnetic field, the state of the gradient magnetic field, the state of the RF signal, the state of the RF coil, the state of the table, the state of a device measuring body information of the object, the state of the power supply, the state of a heat exchanger, and the state of a compressor.
  • the object monitoring unit 244 monitors the state of the object 210.
  • the object monitoring unit 244 may include a camera for observing the movement or position of the object 210, a respiration meter for measuring the respiration of the object 210, an ECG meter for measuring the electrocardiogram of the object 210, or a body temperature meter for measuring the body temperature of the object 210.
  • the table controller 246 controls the movement of the table 228 in which the object 210 is located.
  • the table controller 246 may control the movement of the table 228 according to the sequence control of the sequence controller 252. For example, in moving imaging of an object, the table controller 246 may continuously or intermittently move the table 228 according to the sequence control by the sequence controller 252.
  • the object may be imaged with a field of view (FOV) larger than that of the gantry.
  • the display controller 248 controls the display 229 positioned outside and / or inside the gantry 220.
  • the display controller 248 may control on / off of the display 229 located at the outside and / or the inside of the gantry 220 or a screen to be output to the display 229.
  • the display controller 248 may control the on / off of the speaker or the sound to be output through the speaker.
  • the system controller 250 includes a sequence controller 252 for controlling a sequence of signals formed in the gantry 220, and a gantry controller 258 for controlling the gantry 220 and the devices mounted on the gantry 220. can do.
  • the sequence controller 252 controls the gradient magnetic field controller 254 for controlling the gradient magnetic field amplifier 232, and the RF controller 256 for controlling the RF transmitter 236, the RF data acquisition unit 238, and the transmission / reception switch 234. It may include.
  • the sequence controller 252 may control the gradient magnetic field amplifier 232, the RF transmitter 236, the RF data acquirer 238, and the transmit / receive switch 234 according to the pulse sequence received from the operating unit 260.
  • the pulse sequence refers to a sequence of signals repeatedly applied by the MRI apparatus 101.
  • the pulse sequence may include time parameters of the RF pulse, for example, a repetition time (TR) and an echo time (TE).
  • the pulse sequence includes all information necessary for controlling the gradient magnetic field amplifier 232, the RF transmitter 236, the RF data acquirer 238, and the transmit/receive switch 234, for example, information on the intensity, application time, and application timing of the pulse signal applied to the gradient coil 224.
  • the operating unit 260 may command pulse sequence information to the system control unit 250 and control the operation of the entire MRI apparatus 101.
  • the operating unit 260 may include an image processor 262 that processes the MR signal received from the RF data acquirer 238, an output unit 264, and a user input unit 266.
  • the image processor 262 may generate MR image data of the object 210 by processing the MR signal received from the RF data acquirer 238.
  • the image processor 262 applies various signal processing such as amplification, frequency conversion, phase detection, low frequency amplification, filtering, etc. to the MR signal received by the RF data acquisition unit 238.
  • the image processor 262 may, for example, arrange digital data in k-space of the memory, and reconstruct the data into image data by performing two-dimensional or three-dimensional Fourier transform.
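The k-space arrangement and Fourier reconstruction described above can be sketched with NumPy; the toy object, array size, and shift conventions below are illustrative assumptions, not taken from this disclosure.

```python
import numpy as np

# Toy "object": a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# Acquisition fills k-space with the 2D Fourier transform of the object
# (fftshift places the DC component at the centre, as k-space is usually drawn).
k_space = np.fft.fftshift(np.fft.fft2(image))

# Reconstruction applies the inverse 2D FFT to the arranged k-space data.
reconstructed = np.abs(np.fft.ifft2(np.fft.ifftshift(k_space)))

print(np.allclose(reconstructed, image))  # True: the round trip recovers the object
```

In practice the acquired k-space samples are noisy and possibly undersampled, so reconstruction involves more than this ideal round trip, but the Fourier relationship is the core of the step described above.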
  • the image processing unit 262 may also perform composition processing, difference calculation processing, and the like of the image data.
  • the composition process may include an addition process for pixels, a maximum intensity projection (MIP) process, and the like.
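A maximum intensity projection collapses a volume along one axis, keeping only the brightest voxel encountered along each projection ray. A minimal NumPy sketch (the toy volume is hypothetical):

```python
import numpy as np

# Toy 3D volume: 4 slices of 3x3 voxels, one bright voxel per slice.
volume = np.zeros((4, 3, 3))
volume[0, 0, 0] = 0.2
volume[1, 1, 1] = 0.9
volume[2, 2, 2] = 0.5

# MIP along the slice axis: each output pixel is the maximum along its ray.
mip = volume.max(axis=0)
print(mip[1, 1])  # 0.9
```

The pixel-wise addition process mentioned alongside MIP would use `volume.sum(axis=0)` in the same setting.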
  • the image processing unit 262 may store not only the reconstructed image data but also image data subjected to the composition process or the difference calculation process in a memory (not shown) or an external server.
  • various signal processings applied to the MR signal by the image processor 262 may be performed in parallel.
  • signal processing may be applied in parallel to a plurality of MR signals received by the multi-channel RF coil to reconstruct the plurality of MR signals into image data.
  • the output unit 264 may output the image data or the reconstructed image data generated by the image processor 262 to the user.
  • the output unit 264 may output information necessary for the user to operate the MRI system, such as a user interface (UI), user information, or object information.
  • the output unit 264 may include a speaker, a printer, a display, and the like.
  • the implementation manner of the display is not limited; for example, it may be implemented by various display methods such as liquid crystal, plasma, light-emitting diode, organic light-emitting diode, surface-conduction electron-emitter, carbon nanotube, and nano-crystal displays.
  • the display may be implemented to display an image in a 3D form, and in some cases, may be implemented as a transparent display.
  • the output unit 264 may include various output devices within a range apparent to those skilled in the art.
  • the user may input object information, parameter information, scan conditions, pulse sequences, information on image composition or difference calculation, etc. using the user input unit 266.
  • the user input unit 266 may include a keyboard, a mouse, a trackball, a voice recognizer, a gesture recognizer, a touch pad, and the like, and may include various input devices within a range apparent to those skilled in the art.
  • although FIG. 2 illustrates the signal transceiver 230, the monitor 240, the system controller 250, and the operating unit 260 as separate objects, it will be understood by those skilled in the art that the functions performed by each of the signal transceiver 230, the monitor 240, the system controller 250, and the operating unit 260 may be performed in other objects.
  • the image processor 262 described above converts the MR signal received by the RF data acquirer 238 into a digital signal, but the conversion to a digital signal may instead be performed directly by the RF data acquirer 238 or the RF coil 226.
  • the gantry 220, the RF coil 226, the signal transmitting and receiving unit 230, the monitoring unit 240, the system control unit 250, and the operating unit 260 may be connected to each other wirelessly or by wire.
  • a device (not shown) for synchronizing clocks with each other may be further included.
  • communication between the gantry 220, the RF coil 226, the signal transmitting and receiving unit 230, the monitoring unit 240, the system control unit 250, and the operating unit 260 may use a high-speed digital interface such as low voltage differential signaling (LVDS), asynchronous serial communication such as a universal asynchronous receiver/transmitter (UART), error-synchronous serial communication, a low-delay network protocol such as a controller area network (CAN), optical communication, or any of various other communication methods that will be apparent to those skilled in the art.
  • FIG. 3 is a view showing a CT device 102 according to an embodiment of the present invention, and FIG. 4 is a schematic view showing the configuration of the CT device 102 of FIG. 3.
  • the CT device 102 may include a gantry 302, a table 305, an X-ray generator 306, and an X-ray detector 308.
  • a tomography device, such as the CT device 102, may provide a cross-sectional image of an object, so that, unlike a general X-ray imaging device, it can express the internal structure of the object (for example, an organ such as a kidney or a lung) without overlap.
  • the tomography apparatus may include any tomography apparatus, such as a computed tomography (CT) device, an optical coherence tomography (OCT) device, or a positron emission tomography (PET)-CT device.
  • a tomography image is an image obtained by tomography imaging an object in a tomography apparatus.
  • the tomography image may refer to an image that is imaged by using projected data after irradiating a light ray such as an X-ray to the object.
  • the CT image may refer to a composite image of a plurality of X-ray images obtained by photographing an object while rotating about at least one axis of the object.
  • the CT apparatus 102 illustrated in FIGS. 3 and 4 will be described as an example of the tomography apparatus 300.
  • the CT device 102 may provide a relatively accurate cross-sectional image of an object by acquiring and processing, per second, tens to hundreds of image data sets each having a thickness of 2 mm or less.
  • conventionally, there was a problem that only a horizontal cross-section of the object could be expressed, but this has been overcome by the emergence of various image reconstruction techniques. Three-dimensional reconstruction imaging techniques include the following.
  • SSD: shaded surface display
  • VR: volume rendering
  • Virtual endoscopy: a technique that allows endoscopic observation in three-dimensional images reconstructed by the VR or SSD technique
  • MPR: multi-planar reformation
  • VOI: voxel of interest
  • Computed tomography (CT) device 102 can be described with reference to FIGS. 3 and 4.
  • CT device 102 according to an embodiment of the present invention may include various types of devices as shown in FIG.
  • the gantry 302 may include an X-ray generator 306 and an X-ray detector 308.
  • the object 30 may be located on the table 305.
  • the table 305 may move in a predetermined direction (eg, at least one of up, down, left, and right) during the CT imaging process.
  • the table 305 may be tilted or rotated by a predetermined angle in a predetermined direction.
  • the gantry 302 may be tilted by a predetermined angle in a predetermined direction.
  • the CT device 102 may include a gantry 302, a table 305, a controller 318, a storage unit 324, an image processor 326, a user input unit 328, a display unit 330, and a communication unit 332.
  • the object 310 may be located on the table 305.
  • the table 305 according to an embodiment of the present invention can be moved in a predetermined direction (e.g., at least one of up, down, left, and right), and the movement can be controlled by the controller 318.
  • the gantry 302 includes a rotating frame 304, an X-ray generator 306, an X-ray detector 308, a rotation driver 310, a data acquisition circuit 316, and a data transmitter 320. It may include.
  • the gantry 302 may include a rotating frame 304 that is rotatable about a predetermined rotation axis (RA).
  • the rotating frame 304 may also be in the form of a disc.
  • the rotation frame 304 may include an X-ray generator 306 and an X-ray detector 308 disposed to face each other to have a predetermined field of view (FOV).
  • the rotating frame 304 may also include an anti-scatter grid 314.
  • the anti-scatter grid 314 may be located between the X-ray generator 306 and the X-ray detector 308.
  • the X-ray radiation that reaches the detector includes not only attenuated primary radiation that forms a useful image but also scattered radiation that degrades the quality of the image.
  • An anti-scattering grid can be placed between the patient and the detector (or photosensitive film) in order to transmit most of the main radiation and attenuate the scattered radiation.
  • the anti-scatter grid may be configured in a form in which strips of lead foil and an interspace material, such as a solid polymer material or a solid polymer and fiber composite material, are alternately stacked. However, the shape of the anti-scatter grid is not necessarily limited thereto.
  • the rotation frame 304 may rotate the X-ray generator 306 and the X-ray detector 308 at a predetermined rotation speed based on the driving signal received from the rotation driver 310.
  • the rotation frame 304 may receive a driving signal and power from the rotation driver 310 in a contact manner through a slip ring (not shown).
  • the rotation frame 304 may receive a drive signal and power from the rotation driver 310 through wireless communication.
  • the X-ray generator 306 may generate and emit X-rays by receiving a voltage and a current from a power distribution unit (PDU) (not shown) through a slip ring (not shown) and a high voltage generator (not shown).
  • when the high voltage generator applies a predetermined voltage (hereinafter referred to as a tube voltage), the X-ray generator 306 may generate X-rays having a plurality of energy spectra corresponding to the predetermined tube voltage.
  • the X-rays generated by the X-ray generator 306 may be emitted in a predetermined form by the collimator 312.
  • the X-ray detector 308 may be positioned to face the X-ray generator 306.
  • the X-ray detector 308 may include a plurality of X-ray detection elements.
  • the single X-ray detection element may form a single channel, but is not necessarily limited thereto.
  • the X-ray detector 308 may detect the X-rays generated by the X-ray generator 306 and transmitted through the object 30 and generate an electric signal corresponding to the intensity of the detected X-rays.
  • the X-ray detector 308 may be an indirect-type detector that converts radiation into light and then detects it, or a direct-type detector that converts radiation directly into electric charge and detects it.
  • the indirect X-ray detector may use a scintillator.
  • the direct type X-ray detector may use a photon counting detector.
  • the data acquisition system (DAS) 316 may be connected to the X-ray detector 308.
  • the electrical signal generated by the X-ray detector 308 may be collected by the DAS 316.
  • the electrical signal generated by the X-ray detector 308 may be collected by the DAS 316 by wire or wirelessly.
  • the electrical signal generated by the X-ray detector 308 may be provided to an analog / digital converter (not shown) through an amplifier (not shown).
  • Only some data collected from the X-ray detector 308 may be provided to the image processor 326 according to the slice thickness or the number of slices, or only some data may be selected by the image processor 326.
  • the digital signal may be provided to the image processor 326 through the data transmitter 320.
  • the digital signal may be transmitted to the image processor 326 by wire or wirelessly through the data transmitter 320.
  • the control unit 318 of the CT device 102 may control the operation of each module in the CT device 102.
  • the controller 318 may include a table 305, a rotation driver 310, a collimator 312, a DAS 316, a storage 324, an image processor 326, a user input unit 328, and a display unit. 330 and the communication unit 332 may be controlled.
  • the image processor 326 may receive data (for example, pure data before processing) from the DAS 316 through the data transmitter 320 to perform a pre-processing process.
  • the preprocessing may include, for example, a process of correcting sensitivity nonuniformity between channels and a process of correcting a sharp reduction in signal strength or a loss of signal due to an X-ray absorber such as metal.
  • the output data of the image processor 326 may be referred to as raw data or projection data.
  • Such projection data may be stored in the storage unit 324 together with photographing conditions (eg, tube voltage, photographing angle, etc.) at the time of data acquisition.
  • the projection data may be a set of data values corresponding to the intensity of X-rays passing through the object.
  • a set of projection data acquired simultaneously at the same photographing angle for all channels is referred to as a projection data set.
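The relation between the measured X-ray intensity and a projection value follows the Beer-Lambert law, I = I0·exp(−Σ μi·Δx), so a projection value is commonly stored as the line integral −ln(I/I0) along the ray. The sketch below uses a small, hypothetical attenuation map to show the round trip from attenuation coefficients to intensities and back:

```python
import numpy as np

# Hypothetical attenuation map (1/cm), one row per horizontal ray.
mu = np.array([[0.0, 0.2, 0.0],
               [0.1, 0.5, 0.1],
               [0.0, 0.2, 0.0]])
dx = 1.0  # voxel size along the ray, in cm

i0 = 1.0                                        # unattenuated beam intensity
intensity = i0 * np.exp(-mu.sum(axis=1) * dx)   # Beer-Lambert along each row
projection = -np.log(intensity / i0)            # stored projection values

print(np.allclose(projection, mu.sum(axis=1)))  # True: line integrals recovered
```

Reconstruction algorithms (e.g., the cone beam method mentioned below) operate on sets of such line integrals acquired over many angles.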
  • the storage unit 324 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, and an optical disk.
  • the image processor 326 may reconstruct a cross-sectional image of the object by using the obtained projection data set.
  • the cross-sectional image may be a 3D image.
  • the image processor 326 may generate a 3D image of the object by using a cone beam reconstruction method or the like based on the obtained projection data set.
  • the X-ray tomography conditions may include a plurality of tube voltages, energy values of a plurality of X-rays, imaging protocol selection, image reconstruction method selection, FOV region setting, the number of slices, slice thickness, image post-processing parameter settings, and the like.
  • image processing condition may include a resolution of an image, attenuation coefficient setting for the image, a combination ratio setting of the image, and the like.
  • the user input unit 328 may include a device for receiving a predetermined input from the outside.
  • the user input unit 328 may include a microphone, a keyboard, a mouse, a joystick, a touch pad, a touch pen, a voice, a gesture recognition device, and the like.
  • the display 330 may display the X-ray photographed image reconstructed by the image processor 326.
  • the transmission and reception of data, power, and the like between the above-described elements may be performed using at least one of wired, wireless, and optical communication.
  • the communication unit 332 may communicate with an external device, an external medical device, or the like through the server 334.
  • FIG. 5 is a diagram schematically illustrating a configuration of a communication unit 532 that communicates with the outside in a network system.
  • the communication unit 532 illustrated in FIG. 5 may also be connected to at least one of the gantry 220, the signal transceiver 230, the monitoring unit 240, the system controller 250, and the operating unit 260 illustrated in FIG. 2. That is, the communication unit 532 may exchange data with a hospital server or another medical device in a hospital connected through a picture archiving and communication system (PACS), and may perform data communication according to the digital imaging and communications in medicine (DICOM) standard.
  • the communication unit 532 is connected to the network 501 by wire or wirelessly to connect with an external server 534, an external medical device 536, or an external device 538 such as a portable device. Communication can be performed.
  • the communication unit 532 may transmit and receive data related to diagnosis of the object through the network 501, and may also transmit and receive medical images photographed by another medical device 536 such as CT, ultrasound, and X-ray.
  • the communication unit 532 shown in FIG. 5 may be included in the CT device 102 of FIG. 4.
  • the communication unit 532 shown in FIG. 5 may be the same as the communication unit 332 shown in FIG. 4.
  • the other medical device 536 may be, for example, the MRI device 101 or the ultrasound device 103 of FIG. 1.
  • the communication unit 532 illustrated in FIG. 5 may be included in the MRI apparatus 101 of FIG. 2.
  • the MRI apparatus 101 shown in FIG. 2 may be implemented in a form further including the communication unit 532 of FIG. 5.
  • the other medical device 536 may be, for example, the CT device 102 or the ultrasound device 103 of FIG. 1.
  • the communication unit 532 may be connected to the network 501 by wire or wirelessly to perform communication with the server 534, the external medical device 536, or the external device 538.
  • the communication unit 532 may exchange data with a hospital server or another medical device in the hospital connected through a PACS (Picture Archiving and Communication System).
  • the communication unit 532 may perform data communication with an external device 538 or the like according to a digital imaging and communications in medicine (DICOM) standard.
  • the communication unit 532 may transmit / receive an image of the object and / or data related to the diagnosis of the object through the network 501.
  • the communication unit 532 may receive a medical image acquired from another medical device 536 such as an MRI apparatus 101 and an X-ray imaging apparatus.
  • the communication unit 532 may receive a diagnosis history or treatment schedule of the patient from the server 534 and use it for clinical diagnosis of the patient.
  • the communication unit 532 may perform data communication with not only a server 534 or a medical device 536 in a hospital, but also a portable device (terminal device) 538 of a user or a patient.
  • the medical images acquired by the various medical image display apparatuses express the object in various ways according to the type and the photographing method of the medical image display apparatus.
  • characteristics of the acquired medical image may vary according to a photographing method and a type of the medical image display apparatus. For example, in one medical image, cancer tissue may be easily identified, and in another medical image, blood vessels may be easily identified.
  • disclosed herein is a medical image display device that can provide a medical image facilitating a user's diagnosis with respect to a predetermined region in the medical image.
  • the medical image display apparatus may be any image processing apparatus capable of displaying, storing, and / or processing a medical image.
  • the medical image display apparatus 100 may be provided so as to be included in a tomography apparatus such as the MRI apparatus 101 or the CT apparatus 102 described with reference to FIGS. 2 to 4.
  • the medical image display apparatus 100 may include the communication unit 532 described with reference to FIG. 5.
  • alternatively, the medical image display apparatus 100 may be included in the server 534, the medical device 536, or an external device connected through the network 501, that is, the portable terminal 538, rather than in a tomography apparatus such as the MRI apparatus 101 or the CT apparatus 102 described above with reference to FIGS. 2 to 4.
  • the server 534, the medical device 536, or the portable terminal 538 may be an image processing device capable of displaying, storing, or processing at least one of an MRI image and a tomography image.
  • the medical image display apparatus may take the form of the server 534, the medical device 536, or the portable terminal 538, and may be a picture archiving and communication system (PACS) capable of displaying, storing, or processing at least one of an MRI image and a tomography image.
  • in addition to the MRI apparatus 101 or the CT apparatus 102, the medical image display apparatus 100 may be included in, or provided in connection with, any medical imaging device/system that processes/reconstructs an image using data acquired by scanning an object.
  • the medical image display apparatus 100 obtains the first medical image and the second medical image from two or more different medical devices, for example, the first medical device and the second medical device. It may be implemented as a medical image registration device for displaying an image (third medical image) matching the first medical image and the second medical image.
  • FIG. 6 is a diagram illustrating a system including a first medical device 610, a second medical device 620, and a medical image registration device 630 according to an exemplary embodiment.
  • the first medical device 610 and the second medical device 620 generate the first medical image and the second medical image, respectively, and provide them to the medical image matching device 630.
  • the first medical image and the second medical image may be images generated by the same principle.
  • first medical image and the second medical image may have different image modalities. That is, the first medical image and the second medical image may have different generation methods and principles.
  • the medical image matching device 630 acquires the first medical image and the second medical image, respectively, and matches the first medical image and the second medical image.
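Registration of two medical images can be illustrated, in a greatly simplified form, as a search for the transform that best aligns them. The sketch below searches integer translations only and minimizes the sum of squared differences; real multimodal registration typically uses richer transforms and similarity measures such as mutual information, which this toy example omits:

```python
import numpy as np

def best_shift(fixed, moving, max_shift=3):
    """Exhaustive search for the integer (dy, dx) translation that
    minimises the sum of squared differences between two images."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.sum((fixed - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Toy images: a square, and the same square displaced by (-2, +1).
fixed = np.zeros((16, 16)); fixed[6:10, 6:10] = 1.0
moving = np.roll(np.roll(fixed, -2, axis=0), 1, axis=1)
print(best_shift(fixed, moving))  # (2, -1): the shift that undoes the displacement
```

Once the aligning transform is found, the second image can be resampled into the coordinate frame of the first and the two overlaid for display.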
  • the image matched by the medical image matching device 630 is displayed on the display unit 632.
  • the first medical device 610, the second medical device 620, and the medical image registration device 630 each constitute independent devices.
  • the first medical device 610 and the medical image registration device 630 may be implemented as a single device, or the second medical device 620 and the medical image registration device 630 may be implemented as a single device.
  • although the medical image matching device 630 includes the main body 631 and the display unit 632, the system may alternatively be implemented with a separate display device that receives and displays image data from the medical image matching device 630.
  • the medical image registration device 630 of the present embodiment may be implemented in a computer system that communicates with at least one medical device and is included in another medical device having a display, or in a computer system capable of communicating with two or more medical devices.
  • the first medical device 610 may provide the first medical image in real time with respect to the volume of interest of the object. For example, when deformation and displacement of an organ due to physical activity of the subject occur, a change appears in the first medical image in real time.
  • in one embodiment, the first medical device 610 may be configured as an ultrasonography machine (103 of FIG. 1) that generates images in real time during an interventional medical procedure on a patient. For example, when deformation and displacement of an organ due to physical activity of the object occur, the change is indicated in real time in the medical image displayed on the display.
  • in another embodiment, the first medical device 610 may be another medical device that provides images in real time, such as an OCT device.
  • the first medical device 610, configured as an ultrasound device, generates an ultrasound image by radiating an ultrasound signal to a region of interest of the object using a probe 611 and detecting the reflected ultrasound signal, that is, an ultrasound echo signal.
  • the probe 611 is a part in contact with the object and may include a plurality of transducer elements (hereinafter, referred to as transducers) (not shown) and a light source (not shown).
  • the transducer may be, for example, a magnetostrictive ultrasonic transducer using the magnetostrictive effect of a magnetic body, a piezoelectric ultrasonic transducer 118 using the piezoelectric effect of a piezoelectric material, or a capacitive micromachined ultrasonic transducer (cMUT) that transmits and receives ultrasonic waves using the vibration of hundreds or thousands of micromachined thin films; various other kinds of ultrasonic transducers may also be used.
  • the plurality of transducer elements may be arranged in a straight line (linear array) or along a curve (convex array).
  • a cover covering the plurality of transducer elements may be provided over them.
  • the light source is for irradiating light into the object.
  • at least one light source for generating light having a specific wavelength may be used as the light source.
  • a plurality of light sources for generating light having different wavelengths may be used as the light source.
  • the wavelength of light generated by the light source may be selected in consideration of a target in the object.
  • Such a light source may be implemented by a semiconductor laser (LD), a light emitting diode (LED), a solid state laser, a gas laser, an optical fiber, or a combination thereof.
  • the transducer provided in the probe 611 generates an ultrasonic signal according to the control signal, and irradiates the generated ultrasonic signal into the object.
  • the ultrasound echo signal reflected from a specific tissue (e.g., a lesion) in the object is then received, that is, detected.
  • the reflected ultrasonic waves vibrate the transducer of the probe 611, and the transducer outputs electrical pulses according to the vibrations. Such electrical pulses are converted into an image.
  • since anatomical objects have different ultrasonic reflection characteristics, in a B-mode (brightness mode) ultrasound image, for example, each anatomical object appears with a different brightness value.
  • Types of ultrasound images include: a B mode (brightness mode) image representing the magnitude of the ultrasound echo signal reflected from the object; a Doppler mode (D mode) image representing a moving object in spectral form using the Doppler effect; an M mode image showing the movement of the object over time at a certain position; an elastic mode image representing, as an image, the difference in response between when pressure is applied to the object and when it is not; and a C mode image expressing the speed of a moving object in color using the Doppler effect.
  • the Doppler image may include both a blood flow Doppler image (or a color Doppler image) and a tissue Doppler image representing tissue movement.
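  • The B-mode conversion described above (echo magnitude mapped to brightness) can be sketched in a deliberately simplified way. The code below is an illustrative assumption using numpy, not the apparatus's actual signal chain: real systems use quadrature demodulation, time-gain compensation, and scan conversion, none of which are modeled here.

```python
import numpy as np

def rf_to_bmode(rf_lines, dynamic_range_db=60.0, win=8):
    """Toy B-mode conversion: rectify each RF echo line, smooth it to
    approximate the envelope, then log-compress to 8-bit brightness."""
    kernel = np.ones(win) / win
    envelope = np.apply_along_axis(
        lambda line: np.convolve(np.abs(line), kernel, mode="same"),
        1, rf_lines)
    envelope = envelope / (envelope.max() + 1e-12)        # normalize to [0, 1]
    db = 20.0 * np.log10(envelope + 1e-12)                # log compression
    brightness = np.clip(db + dynamic_range_db, 0.0, dynamic_range_db)
    return (255.0 * brightness / dynamic_range_db).astype(np.uint8)

# synthetic RF data: three identical decaying scan lines (invented example)
t = np.linspace(0.0, 1.0, 256)
rf = np.sin(2.0 * np.pi * 40.0 * t) * np.exp(-3.0 * t)
bmode = rf_to_bmode(np.tile(rf, (3, 1)))                  # shape (3, 256)
```

  • Brighter pixels correspond to stronger echoes, which is why tissues with different reflection characteristics appear with different brightness values.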
  • the 3D ultrasound image may be generated by forming volume data from the signal received from the probe 611 and performing volume rendering on the volume data.
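  • As a hedged illustration of the simplest form of volume rendering mentioned above, a maximum intensity projection (MIP) collapses the 3D volume to a 2D image by keeping the brightest voxel along each ray; the volume size and contents below are invented for the example.

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    """Minimal volume rendering: keep the brightest voxel along each ray
    (rays taken parallel to one axis of the volume)."""
    return volume.max(axis=axis)

# toy volume: a bright sphere of radius 8 centered in a 32^3 dark cube
z, y, x = np.mgrid[0:32, 0:32, 0:32]
volume = (((z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2) < 64).astype(np.float32)
mip = max_intensity_projection(volume)    # 32 x 32 projection image
```

  • Production renderers instead integrate opacity and color along each ray, but the ray-per-pixel structure is the same.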
  • the first medical apparatus 610 includes a probe 611 and an image processing apparatus 612 for processing to generate an image based on the ultrasonic echo signal detected by the probe 611.
  • the image processing apparatus 612 may be provided with an image processor that supports a plurality of modes and generates an ultrasound image corresponding to each mode.
  • FIG. 6 illustrates an example in which the image processing apparatus 612 is implemented as a computer body and is connected to the probe 611 which is a fixed terminal by wire.
  • the image processing apparatus 612 may further include a display unit displaying an ultrasound image.
  • the probe 611 may be implemented not only as a fixed terminal but also in the form of a mobile (portable) terminal that the user can grip and carry from place to place.
  • the probe 611 may perform wireless communication with the image processing apparatus 612.
  • the wireless communication may use short-range communication of a predetermined frequency and at least one of various wireless communication modules such as Wi-Fi, Wi-Fi Direct, UWB (Ultra Wideband), Bluetooth, RF (Radio Frequency), Zigbee, wireless LAN, and NFC (Near Field Communication).
  • Examples of the image processing apparatus 612, which processes the ultrasound echo signal received from the probe 611 to generate an ultrasound image, include a smart phone, a smart pad such as a tablet, a smart TV, a desktop computer, a laptop computer, a personal digital assistant (PDA), and the like.
  • alternatively, an image processor generating ultrasound images corresponding to the plurality of modes may be provided inside the probe 611, and the image processing apparatus 612 may be implemented to receive the image generated by the probe 611 by wire or wirelessly and display it through the display unit.
  • the second medical apparatus 620 generates a second medical image of a volume of interest (VOI) of the object.
  • the second medical apparatus 620 may have a non-real-time characteristic compared with the first medical apparatus 610, and may provide the medical image matching apparatus 630 with a second medical image generated before the medical procedure.
  • the second medical device 620 may be the CT device 102 or the MRI device 101 described with reference to FIGS. 2 to 4.
  • the second medical apparatus 620 may also be implemented as an X-ray imaging apparatus, a single photon emission computed tomography (SPECT) device, a positron emission tomography (PET) device, or the like.
  • hereinafter, for convenience of description, the second medical image is assumed to be an MR or CT image, but the scope of the present invention is not limited thereto.
  • the medical images photographed by the first medical device 610 or the second medical device 620 may be three-dimensional images generated by accumulating two-dimensional cross sections.
  • the second medical apparatus 620 photographs a plurality of cross-sectional images while changing a location and orientation of the cross-sectional image.
  • image data of a three-dimensional volume representing a specific part of the patient's body in three dimensions may be generated.
  • a method of generating image data of a 3D volume by accumulating cross-sectional images is referred to as a multiplanar reconstruction (MPR) method.
  • the first medical device 610 may generate 3D volume image data by hand-sweeping the probe 611, or through a wobbler-type or 3D-array-type probe.
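  • A minimal sketch of the MPR idea described above — accumulating parallel 2D cross-sections into a 3D volume and then re-slicing it along a different plane. Array shapes and fill values are illustrative assumptions, not parameters of any actual device.

```python
import numpy as np

# ten parallel cross-sectional images (e.g. axial slices), each 64 x 64;
# slice i is filled with the constant value i so slices stay identifiable
slices = [np.full((64, 64), i, dtype=np.float32) for i in range(10)]

# accumulate the cross-sections into a 3D volume: axis 0 = slice position
volume = np.stack(slices, axis=0)      # shape (10, 64, 64)

# multiplanar reconstruction: cut a new plane through the volume,
# e.g. a coronal plane at a fixed y index
coronal = volume[:, 32, :]             # shape (10, 64), one row per slice
```

  • Real MPR additionally interpolates between slices and supports oblique planes; the stacking-then-reslicing structure is the same.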
  • although FIG. 6 illustrates a case where the first medical image and the second medical image are generated by different types of medical devices, images captured at different time points by the same type of medical device, for example the CT device 102, are also included in the scope of the present invention.
  • in one embodiment, the first medical image is a non-contrast medical image captured without administering a contrast agent, and the second medical image is a contrast-enhanced image captured with a contrast agent administered.
  • Contrast agents administered to patients can cause various side effects. For example, the patient may feel numbness or burning, and urticaria, itching, vomiting, nausea, or rash may occur; in serious cases, the patient may even die.
  • non-contrast imaging is mainly used for the follow-up examination of lung cancer and simple diagnosis of lung lesions (such as bronchial disease and emphysema).
  • diagnosis based on non-contrast imaging is performed in accordance with the As Low As Reasonably Achievable (ALARA) principle (an international recommendation to minimize radiation dose and contrast agent use) and the National Comprehensive Cancer Network (NCCN) guidelines.
  • FIG. 7 is a diagram conceptually illustrating lymph nodes and blood vessel distribution of a chest region
  • FIG. 8 is a diagram showing contrast-enhanced CT images taken of a chest region
  • FIG. 9 is a diagram showing a non-contrast CT image of a chest region.
  • The lymph node 701 shown in FIG. 7 is involved in triggering an immune response by recognizing pathogens (e.g., inflammation, cancer cells, etc.) in the human body. Therefore, changes in the size of lymph nodes, as well as changes in the number and distribution of altered lymph nodes, are important clinical factors for diagnosis and treatment monitoring.
  • when cancer cells develop or metastasize, the lymph nodes increase in size. Therefore, for detecting and diagnosing lymph nodes, it may be advantageous to use contrast-enhanced images, in which the lymph nodes are relatively easy to distinguish from other structures, particularly the blood vessels 702.
  • as shown in FIG. 8, the lymph node 703 and the pulmonary blood vessel 704 are distinguishably displayed in a contrast-enhanced CT image captured after administering a contrast agent to a patient.
  • in contrast, in the non-contrast CT image of FIG. 9, the lymph node and the blood vessel are not easily distinguished in the region 705 where the lymph node is located.
  • the use of contrast agents is gradually being limited due to the various side effects described above.
  • in some cases, contrast agents cannot be administered at all, so diagnosis must inevitably be based on non-contrast medical images.
  • although lymph node region information is an important landmark for lung cancer diagnosis (metastasis, state changes, etc.) and lung lesion diagnosis, it is difficult to distinguish lymph node and vessel regions in non-contrast imaging, so early diagnosis of the disease may be missed or the window for treatment lost.
  • FIG. 10 is a block diagram illustrating a configuration of the medical image display apparatus 1000 according to an exemplary embodiment.
  • FIG. 11 is a block diagram illustrating the configuration of the image processor 1030 of FIG. 10.
  • the medical image display apparatus 1000 may include a control unit 1010, a display unit 1020, an image processing unit 1030, a user input unit 1040, a storage unit 1050, and a communication unit 1060.
  • the illustrated components are not all essential components, and other general components may be further included in addition to the illustrated components.
  • when the medical image display apparatus 1000 is included in the MRI apparatus 101 illustrated in FIG. 2, at least a part of the medical image display apparatus 1000 may correspond to the operating unit 260.
  • the image processor 1030 and the display 1020 may correspond to the image processor 262 and the output unit 264 of FIG. 2, respectively.
  • the controller 1010 may correspond to at least a portion of the operating unit 260 and / or the display control unit 248. Therefore, in the medical image display apparatus 1000, a description overlapping with that of FIG. 2 will be omitted.
  • similarly, the controller 1010, the display unit 1020, the image processor 1030, the user input unit 1040, and the storage unit 1050 may correspond to the control unit 318, the display unit 330, the image processor 326, the user input unit 328, and the storage unit 324 of FIG. 4, respectively. Therefore, in the medical image display apparatus 1000, a description overlapping with that of FIG. 3 or 4 will be omitted.
  • the medical image display apparatus 1000 may be included in any one of the server 534, the medical apparatus 536, the portable terminal 538, and the ultrasound apparatus 610 described with reference to FIG. 6.
  • the display unit 1020 displays an application related to the operation of the medical image display apparatus.
  • the display unit 1020 may display menus or guidance items necessary for diagnosis using a medical device.
  • the display 1020 may display the images acquired during the diagnosis process and a user interface (UI) for helping the user manipulate the medical image display device.
  • FIG. 10 illustrates an example in which one display unit 1020 is provided in the medical image display apparatus 1000, but the present invention is not limited thereto; a plurality of display units, for example a main display and a sub display, may be implemented.
  • the display 1020 may display a first image (first medical image) of an object including at least one anatomical object, and/or a third image (third medical image) obtained by performing the registration process, described below, on the first image.
  • the display unit 1020 may further display a fourth image (fourth medical image) in which an extended region of the lesion, described later, is displayed.
  • the display 1020 may further display a second image (second medical image) that is a reference image of the first image.
  • the first image is a medical image of the object, and may be any medical image captured for diagnosing a disease, such as a tomography image, an X-ray image, or an ultrasound image.
  • the image processor 1030 processes the image to be displayed on the display 1020.
  • the image processor 1030 may process a signal obtained by capturing an object and image it as image data that can be displayed on the display unit 1020.
  • a method of imaging a medical image includes, as in X-ray imaging, photographing the object by irradiating it with rays such as X-rays.
  • This method images the object without distinguishing a photographing technique or a scan mode.
  • the method can image the object directly, without a separate reconstruction or calculation operation for the image to be acquired.
  • alternatively, the method may obtain a target image by performing a separate reconstruction or calculation operation on the acquired data.
  • a technique applied in scanning a subject and taking a medical image is referred to as a 'scan protocol' or a 'protocol', hereinafter referred to as a 'protocol'.
  • the image processor 1030 may generate a medical image by applying a predetermined protocol to the acquired image data.
  • the medical image display apparatus 1000 may generate calculated or post-processed image data (a third image) using image data (a first image) obtained by applying a protocol.
  • the calculation or post-processing process includes a matching process, so that the generated image becomes a third image and / or a fourth image.
  • the MRI apparatus 101 scans an object by applying various protocols, and generates an image of the object by using the acquired MR signal.
  • data acquired by scanning an object, for example MR signals or K-space data, is called scan data, and an image of the object generated using the scan data is called image data.
  • the image data corresponds to the first image described above.
  • in the case of a CT apparatus, the acquired scan data may be a sinogram or projection data, and the image data, that is, the first image, may be generated using the acquired scan data.
  • the user input unit 1040 is provided to receive a command from the user.
  • the medical image display apparatus 1000 receives an input for operating the apparatus from the user through the user input unit 1040 and, in response, may output the acquired first medical image, the second medical image, and/or the registered third medical image (or fourth medical image) through the display unit 1020.
  • the user input unit 1040 may include a button, a keypad, a switch, a dial, or a user interface displayed on the display unit 1020 for the user to directly manipulate the medical image display apparatus 1000.
  • the user input unit 1040 may include a touch screen provided on the display unit 1020.
  • the medical image display apparatus 1000 may receive at least one point selected from the medical image (first image) displayed on the display unit 1020 through the user input unit 1040.
  • for example, the selected point may correspond to the lymph node/vessel area in the non-contrast CT image (first image) of FIG. 9, and in response to the user's selection, an image processed by the image processor 1030 (the third image) may be displayed on the display unit 1020 so that the lymph node and the blood vessel are distinguishable at the selected point.
  • the display unit 1020 may enlarge and display the selected point.
  • the storage unit 1050 stores various kinds of data under the control of the controller 1010.
  • the storage unit 1050 is implemented as a nonvolatile storage medium such as a flash memory or a hard disk drive.
  • the storage unit 1050 is accessed by the control unit 1010, which reads, records, modifies, deletes, and updates data therein.
  • the data stored in the storage unit 1050 includes, for example, an operating system for driving the medical image display apparatus 1000, and various applications, image data, additional data, etc. executable on the operating system.
  • the storage unit 1050 of the present embodiment may store various data related to a medical image.
  • the storage unit 1050 stores at least one image data generated by applying at least one protocol in the medical image display apparatus 1000 and / or at least one medical image data received from the outside.
  • the storage unit 1050 may further store at least one image data generated by performing a matching process on the image data.
  • Image data stored in the storage unit 1050 is displayed by the display unit 1020.
  • the communication unit 1060 includes a wired / wireless network communication module for communicating with various external devices.
  • the communication unit 1060 transmits a command / data / information / signal received from an external device to the control unit 1010.
  • the communication unit 1060 may transmit a command / data / information / signal received from the control unit 1010 to an external device.
  • the communication unit 1060 may be embedded in the medical image display apparatus 1000, but in one embodiment it may be implemented in a dongle or module form that is attached to and detached from a connector (not shown) of the medical image display apparatus 1000.
  • the communication unit 1060 may include an I / O port for connecting human interface devices (HIDs).
  • the medical image display apparatus 1000 may transmit and receive image data with an external device connected by wire through an I / O port.
  • the communication unit 1060 of the present embodiment may receive medical image data generated by another medical device.
  • the other medical device may be the same kind of medical device as the medical image display device 1000 or may be another medical device.
  • for example, when the medical image display apparatus 1000 is a CT device, the other medical device may be another CT device, or may be an MRI device or an ultrasound device.
  • the medical image display apparatus 1000 may be directly connected to another medical apparatus through the communication unit 1060.
  • the communication unit 1060 may include a connection unit for connecting to an external storage medium storing a medical image.
  • the controller 1010 performs a control operation on various components of the medical image display apparatus 1000.
  • the controller 1010 controls the image processing/image registration process performed by the image processor 1030 and performs control operations corresponding to commands from the user input unit 1040, thereby controlling the overall operation of the medical image display apparatus 1000.
  • the controller 1010 includes at least one processor. The at least one processor loads a program from a nonvolatile memory (ROM), in which the program is stored, into a volatile memory (RAM) and executes it.
  • the control unit 1010 includes at least one general-purpose processor such as a central processing unit (CPU), an application processor (AP), or a microcomputer (MICOM), and may, for example, load a program corresponding to a predetermined algorithm stored in the ROM into the RAM and execute it to perform various operations of the medical image display apparatus 1000.
  • when the controller 1010 of the medical image display apparatus 1000 is implemented as a single processor, for example a CPU, the CPU may be provided to control the various functions that the medical image display apparatus 1000 can perform, for example, display of medical images on the display unit 1020, various image processing processes for imaging medical images (for example, selection of the applied protocol and control of imaging accordingly), response to commands received through the user input unit 1040, and wired/wireless network communication with external devices.
  • the processor may include single core, dual core, triple core, quad core, and multiple cores thereof.
  • the processor may include a plurality of processors, for example, a main processor and a sub processor.
  • the sub processor is provided to operate in a standby mode (hereinafter also referred to as a sleep mode), in which only standby power is supplied and the apparatus does not operate as the medical image display apparatus 1000.
  • the processor, ROM, and RAM included in the controller 1010 may be connected to each other through an internal bus.
  • when the medical image display apparatus 1000 is implemented as a laptop or desktop computer, the controller 1010 may be provided in a main body and may further include a GPU (Graphic Processing Unit, not shown) for graphic processing.
  • when the medical image display apparatus 1000 is implemented as a portable terminal such as a smart phone or a smart pad, the processor may include a GPU; for example, the processor may be implemented as an SoC (System on Chip) combining a core and a GPU.
  • the controller 1010 may include a chip provided as a dedicated processor, for example an integrated circuit (IC) chip, for executing a program that performs a specific function supported by the medical image display apparatus 1000, for example a function of detecting an error occurrence in a predetermined configuration including the main processor.
  • the controller 1010 may receive a user command to execute a predetermined application as a platform capable of analyzing a medical image through the user input unit 1040.
  • the executed application may provide a user-selectable GUI, and may include an input area (2220 of FIG. 22) in which various buttons are displayed and a display area (2210 of FIG. 22) in which a medical image is displayed.
  • a user may load, i.e., open, a medical image stored internally or externally using the GUI of the input area of the application, and the loaded medical image is displayed on the display unit 1020 through the display area of the application.
  • the user may input a user command to register the first medical image and the second medical image in the executed application.
  • the image processing unit 1030 may be implemented as a medical image analysis application which is a software configuration driven by the controller 1010 including at least one processor having a hardware configuration.
  • the operations of the image processor 1030 described below are performed according to the execution of the software driven by the controller 1010. Therefore, the various operations performed by the image processor 1030 may be regarded as being performed by the controller 1010, that is, at least one processor.
  • the controller 1010 of the medical image display apparatus 1000 controls the image processor 1030 to perform an image matching process on the non-contrast medical image, that is, the first medical image.
  • the image processor 1030 may perform image registration on the first medical image by using the first medical image and the second medical image.
  • the second medical image is a contrast-enhanced medical image obtained at another time point and serves as a reference image of the first medical image.
  • the contrast-enhanced medical image is an image of the object captured at a predetermined time point in the past; it is stored in another medical device, a server, or the like and loaded into the medical image display apparatus 1000 through the communication unit 1060, or it may be stored in advance in the internal or external storage unit 1050.
  • the contrast-enhanced medical image may be a medical image taken in the past with respect to the same object as the first medical image, that is, the same patient.
  • the user may select the contrast-enhanced medical image that can be used as the second medical image by using history information about the patient.
  • the user may select at least one contrast enhancement medical image as the second medical image.
  • the second medical image may be an image generated by using a plurality of contrast-enhanced medical images previously photographed on the same object.
  • the contrast enhanced medical image may be a standardized medical image.
  • for example, from a medical image database in which brain CT images of a plurality of objects are accumulated, it may be a standardized medical image generated using contrast-enhanced medical images taken of subjects having conditions similar to those of the object of the first medical image, that is, similar age, gender, disease progression, and the like.
  • the image processor 1030 extracts at least one anatomical object by dividing or segmenting the second medical image.
  • the image processor 1030 may extract reference region information corresponding to at least one anatomical object from the second medical image that is a reference image of the first medical image.
  • the image processing unit 1030 may further extract, from the second medical image, a region corresponding to a first object (hereinafter also referred to as a first anatomical object) and a region corresponding to a second object different from the first object (hereinafter also referred to as a second anatomical object).
  • for example, the first object may be a blood vessel and the second object may be a lymph node; in another embodiment, the first object may be a blood vessel and the second object may be a bronchus.
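  • A deliberately naive sketch of such an extraction, assuming numpy and a CT-like intensity scale: contrast-filled vessels are bright, so an intensity window can stand in for the real segmentation algorithm. The thresholds and the toy array below are invented for the example and are not the patent's actual method.

```python
import numpy as np

def extract_object_mask(image_hu, lo, hi):
    """Segment one anatomical object by an intensity window: voxels whose
    values lie in [lo, hi] are assigned to the object."""
    return (image_hu >= lo) & (image_hu <= hi)

# toy contrast-enhanced slice: soft tissue ~40 HU, a bright "vessel" ~200 HU
slice_hu = np.full((8, 8), 40.0)
slice_hu[2:5, 3] = 200.0
vessel_mask = extract_object_mask(slice_hu, 150.0, 300.0)   # 3 voxels set
```

  • Practical segmentation combines such windows with connected-component analysis, morphology, or learned models, but the output is the same kind of per-object mask.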
  • the image processor 1030 registers the first medical image and the second medical image by using reference region information extracted from the second medical image.
  • the matched image is displayed on the display unit 1020 as a third medical image, in which the detected anatomical object is displayed distinguishably from the other region, that is, the region other than the detected anatomical object.
  • the image processing unit 1030 may use the geometric relationship between the anatomical object of the first medical image and the anatomical object of the second medical image in performing image registration, and the geometric relationship may include a vector representing the relative positional relationship of the anatomical objects.
  • Registration of the medical images includes a process of mapping coordinates of the first medical image and the second medical image to each other.
  • the first medical image and the second medical image may be medical images generated using a coordinate system according to digital imaging and communication in medicine (DICOM), respectively.
  • the image processor 1030 calculates a coordinate transformation function for converting or inversely transforming the coordinates of the second medical image into the coordinates of the first medical image through a matching process of the first medical image and the second medical image.
  • the coordinate transformation function may include a first transformation equation, calculated in the homogeneous registration process described later, that maintains the unique characteristics of the anatomical object at the previous time point, and a second transformation equation, calculated in the heterogeneous registration process, in which the two sets of image information are completely matched.
  • the image processor 1030 may synchronize the coordinates and the views of the first medical image and the second medical image by using a coordinate transformation function.
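  • A coordinate transformation function of this kind can be illustrated with a toy 2D homogeneous affine matrix T that maps coordinates of the second medical image into the first image's frame. The rotation angle, translation, and landmark coordinates below are arbitrary example values, not the result of any actual registration.

```python
import numpy as np

# example transformation: rotate by 10 degrees, then translate by (5, -2)
theta = np.deg2rad(10.0)
T = np.array([[np.cos(theta), -np.sin(theta),  5.0],
              [np.sin(theta),  np.cos(theta), -2.0],
              [0.0,            0.0,            1.0]])

def transform_points(T, points_xy):
    """Map (N, 2) coordinates with a 3x3 homogeneous affine matrix."""
    ones = np.ones((len(points_xy), 1))
    homogeneous = np.hstack([points_xy, ones])          # (N, 3)
    return (homogeneous @ T.T)[:, :2]

landmarks = np.array([[10.0, 20.0], [30.0, 40.0]])
mapped = transform_points(T, landmarks)                 # coords in image 1
restored = transform_points(np.linalg.inv(T), mapped)   # inverse transform
```

  • The inverse matrix plays the role of the "inverse transform" mentioned above: applying it to the mapped coordinates recovers the original ones.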
  • the matched image may be an image obtained by converting the first medical image.
  • the matched image may be a fusion image of the first medical image and the second medical image.
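  • One common way to produce such a fusion image is per-pixel alpha blending of the two registered images. This sketch assumes equal-sized numpy arrays and an arbitrary blend weight; it is a generic illustration, not the patent's specific fusion method.

```python
import numpy as np

def fuse_images(img_a, img_b, alpha=0.5):
    """Alpha-blended fusion of two registered, equally sized images."""
    return alpha * img_a + (1.0 - alpha) * img_b

first = np.zeros((4, 4))   # stand-in for the (transformed) first medical image
second = np.ones((4, 4))   # stand-in for the second medical image
fused = fuse_images(first, second, alpha=0.25)   # every pixel = 0.75
```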
  • the display unit 1020 may display the first medical image and display the third medical image and / or the fourth medical image generated by matching the first medical image with the second medical image.
  • FIG. 12 is a diagram illustrating a first medical image 1210 according to an embodiment of the present invention.
  • the first medical image 1210 is a relatively recently captured image of the object.
  • the first medical image 1210 may be a non-contrast image taken without a contrast agent being administered to the subject.
  • the first medical image 1210 may be a brain CT image as shown in FIG. 12.
  • the first medical image 1210 may be a captured image capable of real-time display, for example, an ultrasound image.
  • since the first medical image 1210 is a non-contrast CT image, it is not easy to distinguish, that is, identify, the first object and the second object (the blood vessel and the lymph node) in the image 1210.
  • the user may select an area 1211 in which the blood vessel and the lymph node are expected to be located in the first medical image 1210 using the user input unit 1040.
  • the controller 1010 may control the display 1020 to enlarge and display the selected area 1211. In the enlarged area 1211, blood vessels and lymph nodes are not identified.
  • FIG. 13 is a diagram illustrating a second medical image 1310 according to an embodiment of the present invention
  • FIG. 14 is a diagram illustrating a second medical image 1410 from which an object is extracted.
  • the second medical image 1310 may be a contrast enhancement image taken in a state where a contrast agent is administered to the subject, and may be, for example, a brain CT image as shown in FIGS. 13 and 14.
  • the image processor 1030 includes a first object extractor 1031, a second object extractor 1032, and a matching unit.
  • the matching unit includes a coordinate converting unit 1033, a homogeneous matching unit 1034, and a heterogeneous matching unit 1035.
  • the image processor 1030 is illustrated as including the first object extractor 1031 and the second object extractor 1032 to extract two anatomical objects, but is not limited thereto. That is, for example, three or more anatomical objects may be extracted and displayed in a third medical image so as to be distinguishable.
  • alternatively, the image processor 1030 may be provided with one object extractor, which extracts the area of the first object from the second medical image and displays the area of the first object separately from the other areas excluding the first object.
  • for example, the first object region is a blood vessel region, and the vessel region and the non-vascular region are divided and displayed.
  • the non-vascular region includes a lymph node region.
  • the non-vascular region may comprise a bronchial region.
  • the first object extractor 1031 and the second object extractor 1032 extract area information of the first anatomical object and area information of the second anatomical object from the second medical image, respectively.
  • the regions of the first and second objects extracted in this way are reference regions, and the extracted region information is used as reference region information by the matching unit.
  • the first object extractor 1031 extracts the region corresponding to the first object from the second medical image by using anatomical features of the first object, and the second object extractor 1032 extracts the region corresponding to the second object from the second medical image by using anatomical features of the second object.
  • the first object extractor 1031 may use a brightness value for each pixel included in the second medical image to determine an area of the first object.
  • the first object extractor 1031 detects points having a brightness value within a preset first range in the contrast-enhanced second medical image, and determines the region corresponding to the first object as the region including the detected points.
  • in another embodiment, when a specific point in the second medical image is selected through the user input unit 1040, the first object extractor 1031 may detect points having anatomical characteristics similar to those of the selected point, that is, points whose brightness difference (contrast) from the selected point is equal to or less than a first threshold value, and determine the first object region as the region including the detected points.
  • likewise, the second object extractor 1032 detects points having a brightness value within a preset second range in the contrast-enhanced second medical image, and determines the region corresponding to the second object as the region including the detected points. In another embodiment, when a specific point in the second medical image is selected through the user input unit 1040, the second object extractor 1032 may detect points having anatomical characteristics similar to those of the selected point, that is, points whose brightness difference (contrast) from the selected point is equal to or less than a second threshold value, and determine the second object region as the region including the detected points.
  • the first range and the second range for the brightness value may be preset in correspondence to the anatomical features of each of the first and second objects.
  • likewise, the first threshold value and the second threshold value may be preset in correspondence with the anatomical characteristics of the first object and the second object, respectively, or in some cases may be set to the same value.
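The brightness-range and seed-based extraction described above can be sketched as follows. This is a minimal NumPy illustration with assumed toy pixel values, not the extractor's actual algorithm; a real implementation would also enforce spatial connectivity of the detected points.

```python
import numpy as np

def extract_by_range(image, low, high):
    """First embodiment: mask of pixels whose brightness lies within a
    preset range [low, high] for one anatomical object."""
    return (image >= low) & (image <= high)

def extract_by_seed(image, seed, threshold):
    """Second embodiment: mask of pixels whose brightness differs from a
    user-selected seed pixel by no more than `threshold` (low contrast)."""
    return np.abs(image.astype(float) - float(image[seed])) <= threshold

# Toy contrast-enhanced slice: vessels enhance strongly, lymph nodes weakly.
slice_ = np.array([[10, 10, 200, 200],
                   [10, 90,  95, 200],
                   [10, 90,  10,  10]])
vessel_mask = extract_by_range(slice_, 180, 255)   # first preset range
node_mask   = extract_by_range(slice_, 80, 120)    # second preset range
seeded_mask = extract_by_seed(slice_, (1, 1), threshold=10)
```

The two resulting masks play the role of the reference regions handed to the matching unit.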
  • the controller 1010 may control the display unit 1020 to display the first object region 1411 and the second object region 1412, which are the reference regions extracted from the second medical image, so that they can be identified.
  • the user may identify the blood vessel region 1411 corresponding to the first object and the lymph node region 1412 corresponding to the second object by using the displayed image 1410 of FIG. 14, and use the same for diagnosis.
  • the information of the first object region and the information of the second object region extracted by the first object extractor 1031 and the second object extractor 1032 are transferred to the matching unit.
  • the matching unit registers the first medical image and the second medical image based on a predetermined algorithm.
  • the matching unit may register the first medical image and the second medical image by using the reference region information, that is, the information of the first object region received from the first object extractor 1031 and the information of the second object region received from the second object extractor 1032.
  • the first object region 1411 and the second object region 1412 segmented from the second medical image are matched, through the image registration process of the matching unit in the image processor 1030, to the first object region and the second object region of the first medical image, respectively.
  • FIG. 15 is a diagram for describing an image registration process according to the present embodiment.
  • as illustrated in FIG. 15, image registration is a process of transforming different sets of image data I f and I m, captured from the same scene, into one coordinate system, and is implemented by optimization algorithms that maximize the similarity (or minimize the dissimilarity) between the images to be matched, I f and I m ′.
  • that is, using a predetermined transformation model parameter, a final parameter P final is found such that the result of the similarity measure is maximized as shown in Equation 1 below, or the result of the cost function is minimized as shown in Equation 2 below.
  • Equation 1: P final = argmax P S(I f, T(I m; P))
  • Equation 2: P final = argmin P C(I f, T(I m; P))
  • P final is the final parameter
  • the process of finding the final parameter may include homogeneous matching and hetero matching, which will be described later.
  • I f is a fixed image, for example, a first medical image which is a non-contrast image
  • I m is a moving image, for example, a second medical image, which is a contrast enhancement image
  • S is a similarity measure (similarity measure)
  • C is a cost function
  • P is a parameter set of transformation model.
  • transformation models that can be used in the embodiments of the present invention as described above include rigid transformation, affine transformation, thin-plate-spline free-form deformation (TPS FFD), B-spline FFD, the elastic model, and the like.
  • the result of the cost function may be determined as a weighted sum of the similarity (or dissimilarity) measure and the regularization metric, with a weight assigned to each.
  • similarity (or dissimilarity) measurement functions include mutual information (MI), normalized mutual information (NMI), gradient magnitude, gradient orientation, sum of squared differences (SSD), normalized gradient-vector flow (NGF), gradient NMI (GNMI), and the like.
  • normalization metrics include Volume regularization, Diffusion regularization, Curvature regularization, Local rigidity constraint, and the like.
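The cost function described above, a weighted combination of a dissimilarity measure and a regularization metric, can be sketched as follows. SSD and diffusion regularization are chosen here only because they are the simplest members of the lists above; the function names and weights are illustrative assumptions.

```python
import numpy as np

def ssd(fixed, moved):
    """Sum of squared differences: a simple dissimilarity measure."""
    return float(np.sum((fixed - moved) ** 2))

def diffusion_regularization(displacement):
    """Diffusion regularization: penalizes spatial gradients of the
    displacement field so the estimated deformation stays smooth."""
    return float(sum(np.sum(g ** 2) for g in np.gradient(displacement)))

def cost(fixed, moved, displacement, w_sim, w_reg):
    """Result of the cost function C as a weighted sum of the
    dissimilarity term and the regularization metric."""
    return w_sim * ssd(fixed, moved) + w_reg * diffusion_regularization(displacement)

fixed = np.zeros((2, 2))
moved = np.ones((2, 2))
flat = np.zeros((2, 2))   # zero displacement: no smoothness penalty
c = cost(fixed, moved, flat, w_sim=1.0, w_reg=0.5)
```

For similarity measures such as MI or NMI, the optimizer would maximize the similarity term instead of minimizing a dissimilarity term.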
  • the coordinate transformation unit 1033 maps the coordinate systems of the first medical image and the second medical image to each other.
  • the coordinate system mapping is to match the coordinate system of the first medical image and the coordinate system of the second medical image.
  • for example, the coordinate transformation unit 1033 may align the coordinate system of the second medical image so that the first anatomical object (reference region) of the second medical image is disposed along the direction in which the first anatomical object of the first medical image is disposed. To this end, the coordinate transformation unit 1033 may rotate or move the second medical image within a range in which the alignment between the first anatomical objects of the first medical image and the second medical image is not disturbed.
  • the image processing unit 1030 sequentially performs homogeneous matching and heterogeneous matching on the first medical image and the second medical image in which the coordinate system is adjusted through the coordinate transformation unit 1033.
  • FIG. 16 is a diagram conceptually illustrating a homogeneous matching process
  • FIG. 17 is a diagram conceptually illustrating a heterogeneous matching process.
  • homogeneous registration matches the moving image to the fixed image while maintaining the image characteristics (shape) of the moving image during image registration.
  • heterogeneous (in-homogeneous) registration modifies the image characteristics (shape) of the moving image so that it completely matches the fixed image.
  • the homogeneous matching unit 1034 and the heterogeneous matching unit 1035 calculate the cost function through the transformation process between the coordinate-matched images I f and I m ′, and by repeating the process of updating the parameter P based on the calculated cost function, find the final parameter P final that minimizes the result of the cost function.
  • here, the matching is performed in such a way that the weights assigned to the similarity (or dissimilarity) measure and the regularization metric are gradually changed from homogeneous matching onward, and the change of the weights proceeds in a direction that gradually increases the degree of freedom of the second medical image used as the moving image.
  • although the image processing unit 1030 includes the homogeneous matching unit 1034 and the heterogeneous matching unit 1035, homogeneous matching and heterogeneous matching are not completely separated processes: some of the earlier iterations of updating P correspond to homogeneous matching, and some of the subsequent iterations correspond to heterogeneous matching.
  • the processes are repeatedly performed until the heterogeneous matching is completed even after the homogeneous matching is completed.
  • the present invention may be implemented to perform only homogeneous matching and not heterogeneous matching.
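The gradual weight change from homogeneous toward heterogeneous matching might be sketched as a simple schedule. The geometric decay, iteration count, and endpoint weights below are illustrative assumptions only, not values from the patent.

```python
def regularization_weight(iteration, total_iters, w_start=1.0, w_end=0.01):
    """Geometric decay of the regularization weight: early iterations keep
    the moving image stiff (homogeneous matching), later iterations allow
    more freedom so it can deform fully (heterogeneous matching)."""
    t = iteration / max(total_iters - 1, 1)
    return w_start * (w_end / w_start) ** t

weights = [regularization_weight(i, 5) for i in range(5)]
# weights[0] is the largest (shape-preserving phase);
# weights[-1] is the smallest (free-deformation phase).
```

Feeding such a schedule into the weighted cost function is one way a single optimization loop can pass through a homogeneous phase and then a heterogeneous phase, as the text describes.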
  • FIG. 18 is a flowchart illustrating the processes of performing the matching process in an embodiment of the present invention.
  • the image processor 1030 continuously updates P in the algorithm of FIG. 18, and by designing the transformation model and the regularization metric constituting the cost function, different results can be derived for homogeneous matching and heterogeneous matching.
  • known models such as Rigid global model, Non-rigid global model, Rigid local model, and Non-rigid local model are used.
  • the image processor 1030 first initializes the transformation model parameter P (S1801).
  • next, the second medical image I m is transformed so that its coordinate system is aligned with the coordinate system of the first medical image I f (S1803).
  • the mapping of the coordinate system may use a coordinate system according to an affine space.
  • the process of S1803 may also be referred to as affine registration.
  • a cost function C is computed using the pixels in the overlapped regions of the second medical image I m ′ converted in the process of S1803 and the first medical image I f (S1805).
  • here, the result of the cost function C is determined using a similarity (or dissimilarity) measure and a regularization metric based on prior information; for example, it may be determined as a weighted sum of the similarity measure function and the regularization metric.
  • the overlapped areas may be, for example, areas corresponding to at least one anatomical entity.
  • the image processor 1030 determines whether the result value of the cost function C calculated in the process of S1805 is minimum (S1807).
  • if the result is not yet the minimum, the transformation model parameter P is updated (S1809).
  • the image processor 1030 repeatedly performs the processes of S1803 to S1807 until the result value of the cost function is minimum, based on the determination result of the process of S1807.
  • this iteration constitutes the homogeneous and heterogeneous matching processes, and finds the optimized parameters for each.
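The S1801–S1809 loop can be condensed into a toy one-dimensional example. The integer-shift transform, circular `np.roll`, and exhaustive search are simplifications assumed for illustration; real registration optimizes continuous parameters and evaluates the cost only on the overlapped region.

```python
import numpy as np

def register_shift(fixed, moving, search=range(-3, 4)):
    best_p, best_cost = 0, float("inf")          # S1801: initialize P
    for p in search:                             # S1809: update P each pass
        moved = np.roll(moving, p)               # S1803: transform I_m with P
        c = float(np.sum((fixed - moved) ** 2))  # S1805: compute cost C
        if c < best_cost:                        # S1807: is C minimal so far?
            best_p, best_cost = p, c
    return best_p, best_cost                     # P_final and its cost

fixed = np.array([0, 0, 1, 2, 1, 0, 0], dtype=float)
moving = np.roll(fixed, -2)                      # moving image shifted by -2
p_final, c_final = register_shift(fixed, moving)
```

Here the search over p stands in for the repeated update of the transformation model parameter until the cost function reaches its minimum.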
  • that is, the homogeneous matching unit 1034 obtains first transformation information as the optimization result in which the unique characteristics of the lymph node and blood vessel information of the previous time point are maintained, and the heterogeneous matching unit 1035 obtains second transformation information as the optimization result in which the two images are completely matched.
  • the heterogeneous registration may be a quantification process for tracking changes in the second anatomical entity, for example, lymph nodes, and the degree of change in the lymph nodes may be quantified to be displayed on the medical image according to the quantification.
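The quantification of lymph-node change mentioned above might look like the following volume comparison on registered segmentation masks. The dictionary keys and the 1 mm³ voxel volume are assumptions for illustration, not the patent's method.

```python
import numpy as np

def quantify_change(mask_prev, mask_curr, voxel_mm3=1.0):
    """Compare a lymph-node mask from the previous time point with the
    current one (both already registered to the same coordinates)."""
    extension = mask_curr & ~mask_prev           # lesion-extension region
    return {
        "volume_prev_mm3": float(mask_prev.sum() * voxel_mm3),
        "volume_curr_mm3": float(mask_curr.sum() * voxel_mm3),
        "growth_mm3": float((mask_curr.sum() - mask_prev.sum()) * voxel_mm3),
        "extension_voxels": int(extension.sum()),
    }

prev = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0]], dtype=bool)
curr = np.array([[0, 1, 1, 1],
                 [0, 1, 1, 1]], dtype=bool)
stats = quantify_change(prev, curr)
```

The `extension` mask corresponds to the lesion-extension region that the display distinguishes from the existing lesion region.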
  • the similarity (or dissimilarity) measurement function evaluates how well the images I f and I m ′ are matched, and the weights assigned to the similarity (or dissimilarity) measurement function and the regularization metric can be the factor that distinguishes homogeneous matching from heterogeneous matching.
  • the image processor 1030 generates a third medical image from the first medical image according to the homogeneous matching process as described above.
  • the fourth medical image may be further generated from the third medical image according to the heterogeneous matching process.
  • the controller 1010 controls the display unit 1020 to display the third medical image and / or the fourth medical image generated by the image processor 1030.
  • FIG. 19 is a view showing a third medical image 1910 according to an embodiment of the present invention
  • FIG. 20 is a view showing a fourth medical image 2010
  • FIG. 21 is an enlarged view of some object regions in FIG. 20.
  • as shown in FIG. 19, the controller 1010 may control the display unit 1020 to display the first object region 1911 and the second object region 1912 in the third medical image generated by homogeneous matching so that they can be identified. Accordingly, the user may identify the blood vessel region 1911 corresponding to the first object and the lymph node region 1912 corresponding to the second object, which could not be identified in FIG. 12, and use them for diagnosis.
  • the controller 1010 may display, on the display unit 1020, the area of the detected anatomical object separately from areas that are not the anatomical object by at least one of color, pattern, pointer, highlight, and animation effects.
  • different colors, patterns, pointers, highlights, and animation effects may be applied to each of the areas.
  • a combination of colors, patterns, pointers, highlights, and animation effects may be applied to a plurality of areas.
  • for example, the first object region 1911 may be displayed by color and the second object region 1912 by pattern, or a predetermined pattern and a pointer may be applied together to the first object region 1911 so that it is distinguished from other areas; various such modifications may be implemented. Although FIG. 19 illustrates that the detected first object region 1911 and second object region 1912 are distinguished and displayed by patterns, various embodiments that are visually discernible to the user are applicable.
  • the pattern includes a plurality of horizontal lines, vertical lines, diagonal lines in a predetermined direction, various types of dot patterns, wave patterns, and the like including a circle.
  • the pointer includes a solid line displayed along the circumference of the detected area and a dotted line of various forms, and the brightness of the pointer may be displayed brighter than the surrounding area.
  • Highlighting includes displaying the brightness of the detected area differently, for example, brighter than other areas.
  • the animation effect is to apply various visual effects, such as flickering at predetermined time intervals, gradually brightening / darkening, to the detected area.
  • the division marks of the regions 1911 and 1912 of the anatomical objects can be activated or deactivated by user selection. That is, by manipulating the user input unit 1040, the user may perform a user input that activates the function of distinguishing the first area 1911 and the second area 1912 by color or the like. For example, a user interface (that is, a GUI) for selecting whether to activate the object classification function may be displayed on the display unit 1020, or the user input unit 1040 may include a toggle switch assigned to the object classification function; user selection may be enabled in these and other various ways.
  • in addition, the level (i.e., degree, intensity, etc.) of the division display of the detected anatomical object may be adjusted through the user input unit 1040. That is, the medical image display apparatus 1000 according to the present invention may be provided to distinguish and display anatomical objects in various ways according to the user's preference and taste.
  • in the fourth medical image 2010 generated by performing heterogeneous registration, not only are the first object region 2011 and the second object region 2012 displayed so as to be identified, but the change in the second object region 2012 with respect to the previous time point may also be displayed.
  • the previous time point may be a time point at which the second medical image is taken.
  • accordingly, diagnosis by the user becomes easier.
  • the user may be instructed to enlarge and display a predetermined region by using the user input unit 1040.
  • in this case, as shown in FIG. 21, the blood vessel region 2111 corresponding to the first object and the lymph node region 2112 corresponding to the second object are distinguishably displayed on the enlarged fourth medical image 2110, and the existing lesion region 2113 and the lesion extension region 2114 are distinguishable within the lymph node region 2112.
  • the controller 1010 may control the display unit 1020 to distinguish and display the existing lesion regions 2013 and 2113 and the lesion extension regions 2014 and 2114 in FIGS. 20 and 21 by at least one of color, pattern, pointer, highlight, and animation effects.
  • the controller 1010 may apply a combination of colors, patterns, pointers, highlights, and animation effects to the existing lesion areas 2013 and 2113 or the lesion extension areas 2014 and 2114.
  • for example, the existing lesion regions 2013 and 2113 may be displayed by pattern and the lesion extension regions 2014 and 2114 by a pointer, or a predetermined pattern and a highlight may be applied together to the lesion extension regions 2014 and 2114 so that they are displayed separately from other regions; the present invention may be modified in various such ways.
  • the division display of the lesion extension regions 2014 and 2114 may be activated or deactivated by user selection. That is, by operating the user input unit 1040, the user may perform a user input that activates the function of distinguishing and displaying the lesion extension regions 2014 and 2114 by color or the like. For example, a user interface (that is, a GUI) for selecting whether to activate the extension area classification function may be displayed on the display unit 1020, or the user input unit 1040 may include a toggle switch assigned to the extension area classification function; user selection may be enabled in various such ways. By enabling/disabling the extension area division, the user can more easily grasp the extent of lesion extension.
  • in addition, the display level (i.e., degree, intensity, etc.) of the detected existing lesion regions 2013 and 2113 and lesion extension regions 2014 and 2114 may be adjusted through the user input unit 1040. That is, the medical image display apparatus 1000 according to the present invention may be provided to distinguish and display anatomical objects in various ways according to the user's preference and taste.
  • FIG. 22 is a diagram illustrating a screen displayed according to the driving of an application having a medical diagnosis function in a medical image display apparatus according to an exemplary embodiment of the present invention.
  • the medical image 2210 illustrated in FIG. 22 is an image displaying a result of performing an image registration process.
  • the medical image 2210 may include a first anatomical object 2211 and a second anatomical object 2112 displayed so that the existing lesion region 2113 and the lesion extension region 2114 are distinguishable, and a user-selectable user interface, that is, an input area 2220 including various GUIs, is positioned on the left side of the display area. The user may select a predetermined button of the user interface in the input area 2220 to call up the reference image used as the second medical image, which can then be used for image registration with the displayed first medical image.
  • various medical images may be displayed on the display area 2210 of FIG. 22 including the images illustrated in FIGS. 12 to 14 and 19 to 21.
  • two or more images including the images of FIGS. 12 to 14 and 19 to 21 may be displayed in a horizontal and / or vertical direction so as to be comparable.
  • the display unit 1020 when the display unit 1020 is provided to include a plurality of main displays and sub-displays, for example, two or more images may be displayed to be compared in various combinations.
  • FIGS. 23 to 26 illustrate various examples of using image registration for diagnosis in the medical image display apparatus 1000 according to an exemplary embodiment of the present invention.
  • referring to FIG. 23, the control unit 1010 of the medical image display apparatus 1000 acquires the first medical image (non-contrast medical image) 2301 and the second medical image (contrast-enhanced medical image acquired at another time point) 2311, respectively, and generates a fused display that matches them.
  • the process of generating the fused display includes image registration 2302 between the non-contrast medical image and the contrast-enhanced medical image acquired at the other time point, transformation and propagation 2303 of the image generated according to the registration, and region correction 2304.
  • the anatomical objects, that is, the lymph node and blood vessel regions, are segmented from the contrast-enhanced medical image acquired at the other time point, and transformation and propagation (2303) and region correction (2304) are performed so as to correspond to the coordinates of the two medical images. In addition, quantification is performed to compare the changes of the lymph nodes in the captured non-contrast medical image with respect to the contrast-enhanced medical image acquired at the other time point (2313), and the result of the quantification is displayed.
  • referring to FIG. 24, the control unit 1010 of the medical image display apparatus 1000 acquires a first medical image (non-contrast medical image) 2401 and a second medical image (contrast-enhanced medical image acquired at another time point) 2411, respectively, and generates a matched fused display; in this process, data is loaded from the medical image database (2421) and machine learning is performed (2422), which may be further utilized in the region correction (2404) process.
  • specifically, the controller 1010 classifies, from the stored information, data (including images) under conditions similar to those of the object (such as age, gender, and progression of the lesion), and a training process using the classified data may proceed so that machine learning can predict the data.
  • the controller 1010 may control the image processor 1030 to utilize the data predicted according to machine learning for segmentation and correction in the image registration process of this embodiment.
  • accordingly, the accuracy of image registration may be further improved compared with the embodiment using only the reference information extracted from the second image (the contrast-enhanced medical image acquired at another time point).
  • here, the process of generating the fused display includes image registration (2402) between the non-contrast medical image and the contrast-enhanced medical image acquired at the other time point, transformation and propagation (2403) of the image generated according to the registration, and region correction (2404). The anatomical objects, that is, the lymph node and blood vessel regions, are segmented (2412) from the contrast-enhanced medical image acquired at the other time point, and transformation and propagation (2403) and region correction (2404) are performed so as to correspond to the coordinates of the two medical images. When the region correction (2404) is complete, the resulting fused image is displayed as a registered image (2405).
  • in addition, quantification is performed to compare the changes of the lymph nodes in the captured non-contrast medical image with respect to the contrast-enhanced medical image acquired at the other time point (2413); the data from machine learning is further utilized in this process. The quantification result is then displayed (2414).
  • referring to FIG. 25, the control unit 1010 of the medical image display apparatus 1000 acquires a first medical image (non-contrast medical image) 2501 and a second medical image (contrast-enhanced medical image acquired at another time point), respectively, and generates a fused display that matches them, further utilizing a standard image/model in this process. For example, the controller 1010 loads, from the data stored in the standard image/model (2521), a plurality of images corresponding to conditions similar to those of the object (age, gender, lesion progression, etc.), and image registration (2522) and transformation and propagation (2523) may be performed on the loaded images.
  • here, the process of generating the fused display includes image registration (2502) between the non-contrast medical image and the contrast-enhanced medical image acquired at the other time point, transformation and propagation of the image generated according to the registration, and region correction (2504). A predetermined anatomical object, that is, the lymph node and blood vessel regions, is segmented from the contrast-enhanced medical image acquired at the other time point, and in the region correction (2504), the data obtained by image registration (2522) and transformation and propagation (2523) from the standard image/model are further utilized. When the region correction (2504) is complete, the resulting fused image is displayed as a registered image (2505).
  • in addition, quantification is performed to compare the changes of the lymph nodes in the captured non-contrast medical image with respect to the contrast-enhanced medical image acquired at the other time point (2513); since the data of the standard image/model is further utilized in this process, the accuracy of image registration may be further improved. The quantification result is then displayed (2514).
  • referring to FIG. 26, when there is no contrast-enhanced image captured at another time point for an object, two or more non-contrast medical images t1 and t2 are matched, and the standard image/model (2621) and/or the medical image database (2631) may be utilized to improve accuracy.
  • in this case, two or more non-contrast images captured at different time points t1 and t2 are matched, and the extent of lesion progression in the anatomical object is determined according to the capturing order so that the lesion is identified and displayed.
  • the control unit 1010 of the medical image display apparatus 1000 acquires the non-contrast medical image 2601 captured at the time point t2 and the non-contrast medical image 2611 captured at the past time point t1, respectively, and generates a matched fused display. The process of generating the fused display includes image registration (2602) of the non-contrast medical image captured at the time point t2 and the non-contrast medical image captured at the past time point t1, transformation and propagation (2603) of the image generated according to the registration, and region correction (2604).
  • segmentation and correction (2612) of certain anatomical objects, that is, the lymph node and blood vessel regions, are performed on the non-contrast medical image of the past time point t1, and the stored information of the standard image/model (2621) and/or the medical image database (2631) is utilized here.
  • specifically, the controller 1010 classifies, from the stored information, data (including images) under conditions similar to those of the object (e.g., age, gender, and progression of the lesion), and a training process using the classified data may proceed so that machine learning can predict the data (2632).
  • the controller 1010 loads, from the data stored in the standard image/model (2621), a plurality of images corresponding to conditions similar to those of the object (age, gender, lesion progression, etc.), and image registration (2622) and transformation and propagation (2623) may be performed on the loaded images. Here, the data predicted by machine learning may be further utilized in the image registration (2622) process.
  • the controller 1010 corrects (2612) the lymph node/blood vessel regions extracted from the non-contrast medical image of the past time point t1 by using the machine learning (2632) and/or the data transformed and propagated from the standard image/model. Transformation and propagation (2603) and region correction (2604) are then performed so as to correspond to the coordinates of the two medical images, and when the region correction (2604) is complete, the resulting fused image is displayed as a registered image (2605).
  • in addition, quantification is performed to compare the changes of the lymph nodes in the non-contrast medical image of the time point t2 with respect to the non-contrast medical image of the other time point t1 (2613); the data from machine learning is further utilized in this process. The quantification result is then displayed (2614).
  • the third medical image and/or the fourth medical image generated in the embodiments of the present invention as described above may be stored in the medical image database or the standard image/model, and the stored images may be used, through machine learning or through matching/transformation and propagation of two or more images, for image registration that identifies and displays anatomical objects in other non-contrast images.
  • FIG. 27 is a flowchart illustrating a medical image processing method according to an embodiment of the present invention.
  • a first medical image of an object including at least one anatomical object may be displayed on the display unit 1020 of the medical image display apparatus 1000 in operation S2701.
  • the first medical image may be a non-contrast medical image.
  • the image processor 1030 extracts reference region information corresponding to at least one anatomical object from the second medical image that is the reference image of the first medical image displayed in step S2701 under the control of the controller 1010 (S2703).
  • Here, the second medical image may be a contrast-enhanced medical image obtained by imaging the same object as the first medical image at another time point.
  • Alternatively, the second medical image may be a standard image generated from images acquired under conditions similar to those of the object.
  • There may be a plurality of anatomical objects from which the information is extracted, including blood vessels, lymph nodes, bronchi, and the like.
  • The reference region information of step S2703 may be extracted for a predetermined anatomical object by using the brightness values of the pixels constituting the second medical image.
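Extracting reference region information from pixel brightness values, as in step S2703, can be reduced to an intensity window over the contrast-enhanced image, since a given anatomical object tends to occupy a characteristic brightness band after enhancement. A minimal Python sketch; the threshold values and toy image are assumptions for illustration:

```python
import numpy as np

def extract_reference_region(contrast_image, low, high):
    """Return a binary mask of pixels whose brightness falls in [low, high].
    The mask serves as reference-region information for one anatomical
    object (e.g. blood vessels) in the contrast-enhanced image."""
    img = np.asarray(contrast_image, dtype=float)
    return (img >= low) & (img <= high)

# Toy 4x4 "contrast-enhanced" image; values of 200+ stand in for enhanced vessels.
image = np.array([
    [ 10,  20, 210, 220],
    [ 15, 205, 215,  30],
    [ 12,  18,  25, 230],
    [ 11,  14,  16,  19],
])
mask = extract_reference_region(image, 200, 255)
print(mask.sum())  # number of pixels flagged as the anatomical object
```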
  • Next, the controller 1010 controls the image processor 1030 to detect at least one region corresponding to the anatomical object in the first medical image displayed in step S2701, based on the reference region information extracted in step S2703 (S2705).
  • The controller 1010 then displays a third medical image in which the region detected in step S2705 is displayed so as to be distinguished from regions that do not correspond to the anatomical object (S2707).
  • The third medical image is generated by registering the first medical image with the second medical image, and may include display region information on the anatomical object detected in step S2705.
  • the controller 1010 controls the display unit 1020 such that the region of the anatomical object is displayed separately from other regions in the third medical image based on the display region information.
  • In addition, the controller 1010 may display a fourth medical image in which a region to which a lesion has spread within the anatomical object detected in step S2705 is distinguishably displayed (S2709).
  • Steps S2707 and S2709 may be performed during the image registration process of the first medical image and the second medical image.
  • The third medical image displayed in step S2707 corresponds to the result of homogeneous registration,
  • and the fourth medical image displayed in step S2709 corresponds to the result of heterogeneous registration.
  • The medical image registration of steps S2707 and S2709 is performed using predetermined transformation model parameters, and may be performed iteratively until the result of a similarity measurement function between the first medical image and the second medical image is maximized, or until the result of a cost function between the first medical image and the second medical image is minimized.
  • Here, homogeneous registration, which maps the coordinate systems of the first medical image and the second medical image and obtains transformation information in which the intrinsic characteristics of the anatomical object are maintained, and heterogeneous registration, which obtains further transformation information, may be performed sequentially.
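The iterative registration of steps S2707 and S2709, which repeats until a similarity measure between the two images is maximized, can be illustrated in miniature with the homogeneous (coordinate-mapping) step alone. The sketch below searches integer 1-D translations for the one that maximizes a simple similarity score; a heterogeneous step with a non-rigid transformation model would follow in the full pipeline. The function names and the similarity measure are illustrative assumptions, not the patented method.

```python
import numpy as np

def similarity(a, b):
    """Negative sum of squared differences: higher means more similar."""
    return -float(np.sum((a - b) ** 2))

def register_translation(fixed, moving, max_shift=3):
    """Homogeneous step in miniature: exhaustively try integer shifts of
    `moving` and keep the one that maximizes the similarity measure,
    i.e. iterate until similarity is maximal over the parameter grid."""
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = similarity(fixed, np.roll(moving, s))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

fixed = np.array([0., 0., 1., 2., 1., 0., 0.])
moving = np.roll(fixed, 2)          # same signal, offset by 2 samples
print(register_translation(fixed, moving))  # -2, the shift that undoes the offset
```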
  • According to the embodiments described above, a distinguishable display (segmentation) function for an anatomical object, for example a lymph node or blood vessel region, is provided on the basis of a non-contrast image.
  • In addition, non-contrast-image-based lymph node follow-up and quantification functions are provided.
  • The features of the various embodiments of the present invention may be partially or wholly combined with one another and, as those skilled in the art will understand, may technically interoperate in various ways; the embodiments may be implemented independently of one another or carried out together in association.
  • Accordingly, lymph node follow-up examination becomes possible even for patients with weak renal function, for whom active use of a contrast agent is burdensome.
  • In addition, the present embodiment can be applied to non-contrast images for general examination, and can be used for early diagnosis of cancer-related diseases such as cancer metastasis.
  • Computer-readable recording media include transmission media and storage media that store data readable by a computer system.
  • the transmission medium may be implemented through a wired or wireless network in which computer systems are interconnected.
  • the controller 1010 may include a nonvolatile memory in which a computer program, which is software, is stored, a RAM in which the computer program stored in the nonvolatile memory is loaded, and a CPU that executes a computer program loaded in the RAM.
  • Nonvolatile memories include, but are not limited to, hard disk drives, flash memory, ROMs, CD-ROMs, magnetic tapes, floppy disks, optical storage, data transfer devices using the Internet, and the like.
  • the nonvolatile memory is an example of a computer-readable recording medium in which a computer-readable program of the present invention is recorded.
  • The computer program is code that the CPU can read and execute, and includes code for performing the operations of the controller 1010, such as steps S1801 to S1809 shown in FIG. 18 and steps S2301 to S2309.
  • the computer program may be implemented by being included in software including an operating system or an application included in the medical image display apparatus 1000 and / or software for interfacing with an external device.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Engineering & Computer Science (AREA)
  • Pathology (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Radiology & Medical Imaging (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Optics & Photonics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The present invention relates to a medical image display device and a medical image processing method. The medical image display device comprises: a display unit for displaying a first medical image obtained by imaging an object including at least one anatomical entity; and at least one processor for extracting reference region information corresponding to the anatomical entity from at least one second medical image, which is a reference image for the first medical image, detecting a region corresponding to the anatomical entity in the first medical image on the basis of the extracted reference region information, and controlling the display unit such that the detected region of the anatomical entity is displayed so as to be distinguished from regions that do not correspond to the anatomical entity. The present invention thus enables the distinguishable display of an anatomical entity that could not be identified in a non-contrast medical image in the prior art. Consequently, the present invention enables lymph node follow-up examination for patients on whom a contrast medium is difficult to use, enables early diagnosis of various diseases, and can further improve diagnostic accuracy.
PCT/KR2016/006087 2015-08-17 2016-06-09 Dispositif d'affichage d'image médicale et procédé de traitement d'image médicale WO2017030276A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/753,051 US10682111B2 (en) 2015-08-17 2016-06-09 Medical image display device and medical image processing method
EP16837218.3A EP3338625B1 (fr) 2015-08-17 2016-06-09 Dispositif d'affichage d'image médicale et procédé de traitement d'image médicale

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20150115661 2015-08-17
KR10-2015-0115661 2015-08-17
KR10-2016-0044817 2016-04-12
KR1020160044817A KR102522539B1 (ko) 2015-08-17 2016-04-12 의료영상 표시장치 및 의료영상 처리방법

Publications (1)

Publication Number Publication Date
WO2017030276A1 true WO2017030276A1 (fr) 2017-02-23

Family

ID=58050905

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2016/006087 WO2017030276A1 (fr) 2015-08-17 2016-06-09 Dispositif d'affichage d'image médicale et procédé de traitement d'image médicale

Country Status (1)

Country Link
WO (1) WO2017030276A1 (fr)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018159868A1 (fr) * 2017-02-28 2018-09-07 메디컬아이피 주식회사 Procédé de segmentation de région d'image médicale et dispositif s'y rapportant
US10402975B2 (en) 2017-02-28 2019-09-03 Medicalip Co., Ltd. Method and apparatus for segmenting medical images
CN110742631A (zh) * 2019-10-23 2020-02-04 深圳蓝韵医学影像有限公司 一种医学图像的成像方法和装置
CN111789618A (zh) * 2020-08-10 2020-10-20 上海联影医疗科技有限公司 一种成像***和方法
JP2020536638A (ja) * 2017-10-09 2020-12-17 ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー 深層学習を用いた医用イメージングの造影剤用量削減
CN112259197A (zh) * 2020-10-14 2021-01-22 北京赛迈特锐医疗科技有限公司 急腹症腹部平片的智能化分析***及方法
CN113362934A (zh) * 2021-06-03 2021-09-07 深圳市妇幼保健院 一种基于儿童脑电图模拟病情发作表征的***
WO2021187675A1 (fr) * 2020-03-17 2021-09-23 인하대학교 산학협력단 Procédé de diffusion en subsurface fiable pour rendu de volume dans une image ultrasonore tridimensionnelle
CN113557714A (zh) * 2019-03-11 2021-10-26 佳能株式会社 医学图像处理装置、医学图像处理方法和程序
US20210398259A1 (en) 2019-03-11 2021-12-23 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US20220122266A1 (en) * 2019-12-20 2022-04-21 Brainlab Ag Correcting segmentation of medical images using a statistical analysis of historic corrections
US11922601B2 (en) 2018-10-10 2024-03-05 Canon Kabushiki Kaisha Medical image processing apparatus, medical image processing method and computer-readable medium
US12039704B2 (en) 2018-09-06 2024-07-16 Canon Kabushiki Kaisha Image processing apparatus, image processing method and computer-readable medium
US12040079B2 (en) 2018-06-15 2024-07-16 Canon Kabushiki Kaisha Medical image processing apparatus, medical image processing method and computer-readable medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140200433A1 (en) * 2013-01-16 2014-07-17 Korea Advanced Institute Of Science And Technology Apparatus and method for estimating malignant tumor
KR20140105101A (ko) * 2013-02-21 2014-09-01 삼성전자주식회사 의료 영상들의 정합 방법 및 장치
WO2014155299A1 (fr) * 2013-03-28 2014-10-02 Koninklijke Philips N.V. Visualisation de suivi interactive
KR20140120236A (ko) * 2013-04-02 2014-10-13 재단법인 아산사회복지재단 심근 및 심혈관 정보의 통합 분석 방법
US8983179B1 (en) * 2010-11-10 2015-03-17 Google Inc. System and method for performing supervised object segmentation on images


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3338625A4 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10402975B2 (en) 2017-02-28 2019-09-03 Medicalip Co., Ltd. Method and apparatus for segmenting medical images
WO2018159868A1 (fr) * 2017-02-28 2018-09-07 메디컬아이피 주식회사 Procédé de segmentation de région d'image médicale et dispositif s'y rapportant
JP7476382B2 (ja) 2017-10-09 2024-04-30 ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー 深層学習を用いた医用イメージングの造影剤用量削減
JP7244499B2 (ja) 2017-10-09 2023-03-22 ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー 深層学習を用いた医用イメージングの造影剤用量削減
JP2020536638A (ja) * 2017-10-09 2020-12-17 ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー 深層学習を用いた医用イメージングの造影剤用量削減
US12040079B2 (en) 2018-06-15 2024-07-16 Canon Kabushiki Kaisha Medical image processing apparatus, medical image processing method and computer-readable medium
US12039704B2 (en) 2018-09-06 2024-07-16 Canon Kabushiki Kaisha Image processing apparatus, image processing method and computer-readable medium
US11922601B2 (en) 2018-10-10 2024-03-05 Canon Kabushiki Kaisha Medical image processing apparatus, medical image processing method and computer-readable medium
CN113557714A (zh) * 2019-03-11 2021-10-26 佳能株式会社 医学图像处理装置、医学图像处理方法和程序
US20210398259A1 (en) 2019-03-11 2021-12-23 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US11887288B2 (en) 2019-03-11 2024-01-30 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
CN110742631A (zh) * 2019-10-23 2020-02-04 深圳蓝韵医学影像有限公司 一种医学图像的成像方法和装置
CN110742631B (zh) * 2019-10-23 2024-02-20 深圳蓝影医学科技股份有限公司 一种医学图像的成像方法和装置
US20220122266A1 (en) * 2019-12-20 2022-04-21 Brainlab Ag Correcting segmentation of medical images using a statistical analysis of historic corrections
US11861846B2 (en) * 2019-12-20 2024-01-02 Brainlab Ag Correcting segmentation of medical images using a statistical analysis of historic corrections
WO2021187675A1 (fr) * 2020-03-17 2021-09-23 인하대학교 산학협력단 Procédé de diffusion en subsurface fiable pour rendu de volume dans une image ultrasonore tridimensionnelle
CN111789618B (zh) * 2020-08-10 2023-06-30 上海联影医疗科技股份有限公司 一种成像***和方法
CN111789618A (zh) * 2020-08-10 2020-10-20 上海联影医疗科技有限公司 一种成像***和方法
CN112259197A (zh) * 2020-10-14 2021-01-22 北京赛迈特锐医疗科技有限公司 急腹症腹部平片的智能化分析***及方法
CN113362934A (zh) * 2021-06-03 2021-09-07 深圳市妇幼保健院 一种基于儿童脑电图模拟病情发作表征的***

Similar Documents

Publication Publication Date Title
WO2017030276A1 (fr) Dispositif d'affichage d'image médicale et procédé de traitement d'image médicale
WO2015108306A1 (fr) Appareil producteur d'images médicales et son procédé de traitement d'images médicales
WO2016080813A1 (fr) Procédé et appareil de traitement d'image médicale
WO2017142183A1 (fr) Appareil de traitement d'image, procédé de traitement d'image et support d'enregistrement enregistrant celle-ci
WO2015126205A2 (fr) Appareil de tomographie et procédé de reconstruction d'image de tomographie associé
WO2017142281A1 (fr) Appareil de traitement d'image, procédé de traitement d'image et support d'enregistrement associé
WO2015002409A1 (fr) Procédé de partage d'informations dans une imagerie ultrasonore
WO2015126189A1 (fr) Appareil de tomographie, et procédé de reconstruction d'image de tomographie par l'appareil de tomographie
WO2015122687A1 (fr) Appareil de tomographie et méthode d'affichage d'une image tomographique par l'appareil de tomographie
EP3302239A1 (fr) Appareil d'affichage d'image médicale et procédé de fourniture d'interface utilisateur
WO2017023105A1 (fr) Appareil d'imagerie par tomographie et procédé de reconstruction d'image de tomographie
WO2016117807A1 (fr) Appareil de diagnostic de dispositif médical et son procédé de commande
EP3220826A1 (fr) Procédé et appareil de traitement d'image médicale
WO2020185003A1 (fr) Procédé d'affichage d'image ultrasonore, dispositif de diagnostic ultrasonore et produit programme d'ordinateur
WO2016060475A1 (fr) Procédé de fourniture d'informations à l'aide d'une pluralité de dispositifs d'affichage et appareil à ultrasons associé
WO2016195417A1 (fr) Appareil et procédé de traitement d'image médicale
EP3107457A1 (fr) Appareil de tomographie, et procédé de reconstruction d'image de tomographie par l'appareil de tomographie
EP3104782A1 (fr) Appareil de tomographie et méthode d'affichage d'une image tomographique par l'appareil de tomographie
WO2015060656A1 (fr) Appareil et procédé d'imagerie par résonance magnétique
WO2015126217A2 (fr) Procédé et appareil d'imagerie diagnostique, et support d'enregistrement associé
WO2016186279A1 (fr) Procédé et appareil de synthèse d'images médicales
WO2014200289A2 (fr) Appareil et procédé de fourniture d'informations médicales
WO2016043411A1 (fr) Appareil d'imagerie à rayons x et procédé de balayage associé
WO2015105314A1 (fr) Détecteur de rayonnement, son appareil d'imagerie par tomographie, et son appareil de détection de rayonnement
WO2016072581A1 (fr) Appareil d'imagerie médicale et procédé de traitement d'image médicale

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16837218

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15753051

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE