WO2016206093A1 - Method and apparatus for fused display of cerebral cortex electrode and magnetic resonance images - Google Patents

Method and apparatus for fused display of cerebral cortex electrode and magnetic resonance images

Info

Publication number
WO2016206093A1
WO2016206093A1 PCT/CN2015/082501
Authority
WO
WIPO (PCT)
Prior art keywords
image
electrode
mri
implantation
scale
Prior art date
Application number
PCT/CN2015/082501
Other languages
English (en)
Chinese (zh)
Inventor
汤洁
李卓
任乐枫
胡楠
谢菲
Original Assignee
深圳市美德医疗电子技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市美德医疗电子技术有限公司
Priority to PCT/CN2015/082501
Priority to CN201580003336.6A (CN105849777B)
Publication of WO2016206093A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/14 - Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10088 - Magnetic resonance imaging [MRI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30016 - Brain
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30088 - Skin; Dermal

Definitions

  • the invention relates to the field of information fusion, in particular to a method and a device for fused display of a cerebral cortex electrode and a magnetic resonance image.
  • intracranial electrodes have been identified as the gold standard for localization of lesions.
  • the intracranial electrodes can directly record the electrical activity of the cerebral cortex and internal brain tissue, and can detect abnormal cortical discharges within 1 cm, with clear traces and high sensitivity.
  • intracranial electrode EEG monitoring is a safe and reliable method of localization.
  • CT scan can be used to obtain the positional distribution information of the electrode in the brain.
  • CT scans are not sensitive to the inner cortical tissue of the brain.
  • CT scans are not able to obtain information about the inner cortex of the brain, which is not conducive to obtaining the position information of the electrodes.
  • MRI scans are the best way to obtain information on the inner cortical tissue of the brain, but since the implanted electrodes are usually metal electrodes, MRI scans cannot be performed after implantation.
  • in view of the above-mentioned drawbacks of the prior art, which are disadvantageous for acquiring the position information of the electrode, the technical problem to be solved by the present invention is to provide a method and an apparatus for fused display of a cerebral cortex electrode and a magnetic resonance image that are capable of acquiring the position information of the electrode.
  • the technical solution adopted by the present invention to solve the technical problem thereof is: constructing a method for fusion display of a cerebral cortex electrode and a magnetic resonance image, comprising the following steps:
  • the step B) further comprises:
  • using the second scale to rotate and translate the partial image pixels in the first-scale-adjusted MRI image and the partial image pixels in the post-implantation CT image;
  • the number of pixels included in the first scale is greater than the number of pixels included in the second scale;
  • step B4) determining whether the similarity between the second-scale-adjusted MRI image and the pixels at the corresponding positions of the post-implantation CT image reaches a second similarity, and if so, ending the registration process; otherwise, returning to step B3); the second similarity is greater than the first similarity.
  • the step C) further includes:
  • the method further includes:
  • the method further includes:
  • the step B0) further includes:
  • the number of pixels included in the first scale is greater than the number of pixels included in the second scale
  • step B04) determining whether the similarity between the second-scale-adjusted MRI image and the pixels at the corresponding positions of the pre-implantation CT image reaches a second similarity, and if so, ending the registration process; otherwise, returning to step B03); the second similarity is greater than the first similarity.
  • the registered CT image includes a pre-implantation CT image after registration and a post-implantation CT image after registration
  • Step C) further includes:
  • the step D) further comprises:
  • step D3 expanding the radius, and determining whether there is a point in the enlarged area whose pixel value falls within the set interval, and if so, incorporating it into the area where the seed point is located, and performing step D4 ); otherwise, the region where the seed point is located is used as cerebral cortical tissue image information;
  • the step E) further includes:
  • the size of the first scale is eight pixels
  • the size of the second scale is one pixel
  • the MRI image, the pre-implantation CT image and the post-implantation CT image are all image files in DICOM format, and the required format is a DICOM file, a picture file, or a video file.
  • the present invention also relates to an apparatus for implementing a method for fusion display of a cerebral cortex electrode and a magnetic resonance image, comprising:
  • Registration unit used to register the MRI image and the post-implantation CT image in the same coordinate space
  • An electrode information extracting unit configured to extract electrode image information from the registered CT image
  • Cerebral cortex information extraction unit for extracting cerebral cortical tissue image information from the registered MRI image;
  • the information fusion unit is configured to spatially fuse the electrode image information and the cerebral cortex tissue image information, and display the spatially fused information through a three-dimensional visualization method;
  • Display effect adjustment unit used to adjust the effect and angle of the 3D visual display, and to convert the adjusted result into the required format for output and saving.
  • the registration unit further includes:
  • a coordinate conversion module configured to convert the MRI image and the post-implantation CT image into the same coordinate system, and move the physical center of the MRI image to the same point as the physical center of the post-implantation CT image;
  • Post-implantation rotational translation alignment module for respectively rotating, translating and comparing partial image pixels in the MRI image and partial image pixels in the post-implantation CT image using the first scale, until the similarity between the contour of the MRI image and the contour of the post-implantation CT image reaches a first similarity
  • Post-implantation rotation translation module for rotating and translating partial image pixels in the first-scale-adjusted MRI image and partial image pixels in the post-implantation CT image using the second scale; the number of pixels included in the first scale is greater than the number of pixels included in the second scale;
  • Post-implantation similarity judgment module for judging whether the similarity between the second-scale-adjusted MRI image and the pixels at the corresponding positions of the post-implantation CT image reaches the second similarity, and if so, ending the registration process; otherwise, continuing the adjustment at the second scale; the second similarity is greater than the first similarity.
  • the electrode information extracting unit further includes:
  • Grayscale threshold setting module configured to set a grayscale threshold range of the registered CT image, the set grayscale threshold range including the electrode image;
  • Electrode image extraction module for extracting the image whose gray scale falls within the set grayscale threshold range as the electrode image information.
  • the device further includes:
  • Pre-implantation registration unit for registering the MRI image and the pre-implantation CT image in the same coordinate space.
  • the pre-implantation registration unit further comprises:
  • Pre-implantation coordinate conversion module for converting the MRI image and the pre-implantation CT image into the same coordinate system, and moving the physical center of the MRI image to the same point as the physical center of the pre-implantation CT image;
  • Pre-implantation rotational translation alignment module for respectively rotating, translating and comparing partial image pixels in the MRI image and partial image pixels in the pre-implantation CT image using the first scale, until the similarity between the contour of the MRI image and the contour of the pre-implantation CT image reaches a first similarity;
  • a pre-implantation rotation translation module for rotating and translating partial image pixels in the first-scale-adjusted MRI image and partial image pixels in the pre-implantation CT image using a second scale;
  • the number of pixels included in the first scale is greater than the number of pixels included in the second scale;
  • the pre-implantation similarity judgment module is configured to determine whether the similarity between the second-scale-adjusted MRI image and the pixels at the corresponding positions of the pre-implantation CT image reaches a second similarity, and if so, the registration process ends; otherwise, the process returns to continue performing the second-scale adjustment; the second similarity is greater than the first similarity.
  • the registered CT image includes a pre-implantation CT image after registration and a post-implantation CT image after registration
  • the electrode information extraction unit further includes:
  • the CT image subtraction module for subtracting the registered pre-implantation CT image from the registered post-implantation CT image; the portion where the grayscale difference is not 0 is the electrode image information.
  • the cerebral cortex information extracting unit further comprises:
  • Seed point selection module for arbitrarily selecting a seed point from the registered MRI image
  • a set-interval point selection module for selecting the points whose pixel values fall within the set interval in an area centered on the seed point and having a radius equal to a first set value, and incorporating them into the region where the seed point is located;
  • the pixel value judgment module is configured to continue the determination of whether pixel values fall within the set interval.
  • the information fusion unit further includes:
  • a data type conversion module configured to convert a data type of the electrode image information and a data type of the cerebral cortex tissue image information into the same data type;
  • Gray value mapping module for mapping different gray values of each pixel in the electrode image and the cerebral cortex tissue image into different colors and transparency through the set color mapping table, respectively adjusting the color and transparency thereof;
  • Image display module used to display the fused image by surface rendering or volume rendering.
  • the size of the first scale is eight pixels
  • the size of the second scale is one pixel
  • the MRI image, the pre-implantation CT image, and the post-implantation CT image are all image files in DICOM format
  • the required format is a DICOM file, a picture file, or a video file.
  • the method and device for fused display of the cerebral cortex electrode and the magnetic resonance image of the present invention have the following beneficial effects: the MRI image obtained by MRI scanning before the intracranial electrode is implanted in the patient and the CT image obtained by CT scanning after the intracranial electrode is implanted are taken; the CT image is then registered with the MRI image, and the extracted electrode information is fused and displayed on the MRI image, so that the brain tissue information and the position information of the electrode can be seen at the same time; the position information of the electrode can therefore be obtained.
  • FIG. 1 is a flow chart of an embodiment of a method and apparatus for fusion display of a cerebral cortex electrode and a magnetic resonance image of the present invention
  • FIG. 2 is a specific flowchart of registering an MRI image and a post-implantation CT image in the same coordinate space in the embodiment
  • FIG. 3 is a specific flowchart of a case where electrode image information is extracted from a registered CT image in the embodiment
  • FIG. 4 is a specific flowchart of registering an MRI image and a pre-implant CT image in the same coordinate space in the embodiment
  • FIG. 5 is a specific flowchart of extracting cerebral cortex tissue image information from the registered MRI image in the embodiment
  • FIG. 6 is a specific flowchart of spatially merging electrode image information and cerebral cortex tissue image information in the embodiment, and displaying the spatially fused information through a three-dimensional visualization method;
  • Figure 7 is a schematic view showing the structure of the apparatus in the embodiment.
  • a flow chart of an embodiment of the method for fused display of the cerebral cortex electrode and the magnetic resonance image is shown in FIG. 1.
  • the method for fused display of the cerebral cortex electrode and the magnetic resonance image comprises the following steps:
  • Step S01: take the MRI image obtained by MRI scanning before the intracranial electrode is implanted in the patient and the post-implantation CT image obtained by CT scanning after the intracranial electrode is implanted: in this step, the MRI image acquired before the intracranial electrode is implanted and the CT image acquired after the electrode is implanted are obtained.
  • the above MRI image and post-implantation CT image are both image files in DICOM format.
  • Step S02: register the MRI image and the post-implantation CT image in the same coordinate space: in this step, the MRI image and the post-implantation CT image are registered by calling a three-dimensional spatial data registration algorithm, so that the registered MRI image and the post-implantation CT image data are in the same coordinate space, which facilitates data fusion.
  • a software system is provided, namely a fusion display system for the cerebral cortex electrode and the magnetic resonance image. Firstly, the MRI image and the post-implantation CT image are copied to the host where the cerebral cortex electrode and magnetic resonance image fusion display system is located, and the MRI image and the post-implantation CT image are imported through the software of this system. The data are then registered by the 3D spatial data registration algorithm so that the MRI image and the post-implantation CT image are in the same coordinate space. The details of how to register are described later; a brief sketch of the DICOM import is given below.
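The DICOM import described above can be illustrated with a short sketch. This is not the patent's own software: SimpleITK is an assumed library choice, and the directory paths are hypothetical placeholders.

```python
# Minimal sketch of importing the MRI and post-implantation CT DICOM series.
# Assumptions: SimpleITK as the I/O library, hypothetical directory layout.
import SimpleITK as sitk

def read_dicom_series(directory: str) -> sitk.Image:
    """Read one DICOM series from a directory into a single 3D volume."""
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(directory))
    return reader.Execute()

mri_image = read_dicom_series("data/mri_pre_implant")      # MRI before implantation
ct_post_image = read_dicom_series("data/ct_post_implant")  # CT after implantation
print(mri_image.GetSize(), mri_image.GetSpacing())         # sanity check of geometry
```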
  • Step S03: extract electrode image information from the registered CT image: in this step, the electrode image information is extracted from the registered CT image, specifically by using an electrode extraction algorithm.
  • Step S04: extract cerebral cortical tissue image information from the registered MRI image:
  • the data of the MRI image include not only the information of the cerebral cortex but also other parts such as the skull and the neck, and this information adversely affects the doctor's observation of the electrode position.
  • in this step, the cerebral cortex tissue image information is extracted from the registered MRI image; specifically, an image segmentation algorithm is used to extract the cerebral cortex tissue information from the registered MRI image and remove the other, useless information.
  • Step S05: spatially fuse the electrode image information and the cerebral cortex tissue image information, and display the spatially fused information by a three-dimensional visualization method: in this step, the electrode image information and the cerebral cortex tissue image information are spatially fused, and the spatially fused information is displayed by a three-dimensional visualization method.
  • Step S06: adjust the effect and angle of the 3D visualization display, and convert the adjusted result into the required format for output and saving: in this step, the relevant information on the cerebral cortex tissue and the electrode positions can be seen visually in the cerebral cortex electrode and magnetic resonance image fusion display system; the brain can also be observed from different directions by dragging with the mouse, the effect and angle of the 3D visualization display can be adjusted, and the adjusted result can be converted into the required format for output and saving. It is worth mentioning that the required format is a DICOM file, a picture file or a video file, and results in different formats can serve different purposes. Since the extracted electrode information is fused and displayed on the MRI image, the brain tissue information and the position information of the electrode can be seen at the same time, so the position information of the electrode can be obtained.
  • the above step S02 can be further refined, and the refined flowchart is as shown in FIG. 2 .
  • the above step S02 further includes:
  • Step S21: convert the MRI image and the post-implantation CT image to the same coordinate system, and move the physical center of the MRI image and the physical center of the post-implantation CT image to the same point: registration aims to make the pixels at corresponding positions of the MRI image and the post-implantation CT image spatially consistent. Since the MRI image and the post-implantation CT image come from different devices, their resolutions also differ, so this embodiment adopts a multi-resolution registration strategy. In this step, the MRI image and the post-implantation CT image are converted to the same coordinate system, and the physical center of the MRI image and the physical center of the post-implantation CT image are moved to the same point, so that the two images substantially coincide. This reduces the number of translations and rotations required.
  • Step S22: rotate, translate, and compare partial image pixels in the MRI image and partial image pixels in the post-implantation CT image using the first scale until the similarity between the contour of the MRI image and the contour of the post-implantation CT image reaches the first similarity: in this step, the first scale (a coarse scale) is used to rotate, translate and compare partial image pixels in the MRI image and partial image pixels in the post-implantation CT image, respectively, until the contour of the MRI image and the contour of the post-implantation CT image substantially coincide, that is, until the contour of the MRI image and the contour of the post-implantation CT image satisfy the set first similarity. The first similarity is set to 90%; of course, in some cases of this embodiment, the first similarity may also take other values. The size of the first scale is eight pixels; of course, in some cases of this embodiment, the size of the first scale may be adjusted to other values, with the specific adjustment depending on the specific situation.
  • Step S23: use the second scale to rotate and translate the partial image pixels in the first-scale-adjusted MRI image and the partial image pixels in the post-implantation CT image: in this step, the partial image pixels in the first-scale-adjusted MRI image and the partial image pixels in the post-implantation CT image are rotated and translated again using the second scale (a fine scale); the parameters are initialized with the result of the previous level, and the above rotation and translation process is iterated until the most accurate scale is reached. The number of pixels included in the first scale is greater than the number of pixels included in the second scale; the size of the second scale is one pixel, although in some cases of this embodiment the size of the second scale may be adjusted according to the specific situation.
  • Step S24: determine whether the similarity between the second-scale-adjusted MRI image and the pixels at the corresponding positions of the post-implantation CT image reaches the second similarity: in this step, it is determined whether the similarity between the second-scale-adjusted MRI image and the pixels at the corresponding positions of the post-implantation CT image reaches the second similarity; specifically, a mutual-information similarity comparison algorithm is used for this determination. If the result is YES, step S25 is performed; otherwise, the process returns to step S23. It is worth mentioning that in this embodiment the second similarity is greater than the first similarity; for example, the second similarity may be set to 99.8%. Of course, in some cases of this embodiment, the second similarity may also be set to other values.
  • Step S25: end the registration process: if the result of step S24 is YES, this step is performed, and the registration process for the MRI image and the post-implantation CT image ends. This coarse-to-fine approach, which looks at the whole at a large scale and at the details at a small scale, can greatly improve the success rate of registration. A code sketch of this coarse-to-fine registration is given below.
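A minimal sketch of the coarse-to-fine rigid registration of steps S21 to S25, assuming SimpleITK (the patent names no toolkit). The two pyramid levels with shrink factors 8 and 1 mirror the eight-pixel coarse scale and one-pixel fine scale, and Mattes mutual information stands in for the mutual-information similarity comparison of step S24; the 90% and 99.8% similarity thresholds of the text are approximated by the optimizer's convergence criteria rather than checked explicitly. All names are hypothetical.

```python
# Sketch of the coarse-to-fine rigid registration of steps S21-S25 (assumed
# SimpleITK implementation; not the patent's own code).
import SimpleITK as sitk

def register_ct_to_mri(mri: sitk.Image, ct: sitk.Image) -> sitk.Transform:
    fixed = sitk.Cast(mri, sitk.sitkFloat32)
    moving = sitk.Cast(ct, sitk.sitkFloat32)

    # Step S21: same coordinate system, physical centers moved to the same point.
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # cf. step S24
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()

    # Steps S22-S23: coarse level (factor 8) first, then the exact level (factor 1);
    # each level is initialized with the result of the previous one.
    reg.SetShrinkFactorsPerLevel([8, 1])
    reg.SetSmoothingSigmasPerLevel([2.0, 0.0])
    reg.SetInitialTransform(initial, inPlace=False)

    return reg.Execute(fixed, moving)

transform = register_ct_to_mri(mri_image, ct_post_image)
# Resample the post-implantation CT onto the MRI grid so both share one space.
ct_post_registered = sitk.Resample(ct_post_image, mri_image, transform,
                                   sitk.sitkLinear, 0.0, ct_post_image.GetPixelID())
```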
  • the above step S03 can be further refined, and the refined flowchart is as shown in FIG. 3.
  • the above step S03 further includes:
  • Step S31: set a grayscale threshold range for the registered CT image, the set grayscale threshold range including the electrode image: the threshold segmentation algorithm is suitable for the case where the grayscale of the electrode image is clearly distinct from the grayscale of soft tissue and flesh.
  • in this step, the grayscale threshold range of the registered CT image is set, and the set grayscale threshold range includes the electrode image.
  • the registered CT image referred to here is the post-implantation CT image after registration; the grayscale threshold range is set on this image.
  • Step S32: extract the image whose grayscale falls within the set grayscale threshold range as the electrode image information: in this step, the image whose grayscale falls within the set grayscale threshold range is extracted as the electrode image information.
  • the set grayscale threshold range may be adjusted to an appropriate range in its vicinity according to the specific situation, that is, adjusted to the required grayscale range, and the image whose grayscale falls within the required grayscale range is extracted as the electrode image information.
  • it is worth mentioning that, by setting a suitable grayscale range that includes most of the electrode image, most of the non-electrode image can be removed while the electrode is retained, so that the electrode image information can be obtained. An illustrative sketch of this thresholding follows.
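A minimal sketch of the thresholding of steps S31 and S32, assuming the registered post-implantation CT from the earlier sketch (ct_post_registered) has been converted to a NumPy array; the threshold range below is a placeholder that would have to be tuned to the actual scanner output, since only the metal electrodes should fall inside it.

```python
# Sketch of the grayscale-threshold electrode extraction of steps S31-S32.
import numpy as np
import SimpleITK as sitk

ct_array = sitk.GetArrayFromImage(ct_post_registered)

lower, upper = 2500, 4095                       # hypothetical threshold range
electrode_mask = (ct_array >= lower) & (ct_array <= upper)

# Keep only the voxels inside the range as the electrode image information.
electrode_info = np.where(electrode_mask, ct_array, 0)
```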
  • in some cases, the method for fused display of the cerebral cortex electrode and the magnetic resonance image includes additional steps. As shown in FIG. 1, before step S01, the method further includes:
  • Step S00: take the pre-implantation CT image obtained by CT scanning before the intracranial electrode is implanted in the patient: in this step, the pre-implantation CT image obtained by CT scanning before the intracranial electrode is implanted is taken and then imported into the above cerebral cortex electrode and magnetic resonance image fusion display system.
  • the pre-implantation CT image is an image file in DICOM format.
  • the method further includes:
  • Step S20: register the MRI image and the pre-implantation CT image in the same coordinate space: in this step, the MRI image and the pre-implantation CT image are registered in the same coordinate space. The details of how to register are described below.
  • the above step S20 can be further refined, and the refined flowchart is as shown in FIG. 4 .
  • the above step S20 further includes:
  • Step S201: convert the MRI image and the pre-implantation CT image into the same coordinate system, and move the physical center of the MRI image and the physical center of the pre-implantation CT image to the same point: in this step, registration aims to make the pixels at corresponding positions of the MRI image and the pre-implantation CT image spatially consistent. Since the MRI image and the pre-implantation CT image come from different devices, their resolutions also differ, so this embodiment adopts a multi-resolution registration strategy. The MRI image and the pre-implantation CT image are converted to the same coordinate system, and the physical center of the MRI image and the physical center of the pre-implantation CT image are moved to the same point, so that the two images substantially coincide. This reduces the number of translations and rotations required.
  • Step S202: rotate, translate, and compare partial image pixels in the MRI image and partial image pixels in the pre-implantation CT image using the first scale until the similarity between the contour of the MRI image and the contour of the pre-implantation CT image reaches the first similarity: in this step, the first scale (a coarse scale) is used to rotate, translate and compare partial image pixels in the MRI image and partial image pixels in the pre-implantation CT image, respectively, until the contour of the MRI image and the contour of the pre-implantation CT image substantially coincide, that is, until the contour of the MRI image and the contour of the pre-implantation CT image satisfy the set first similarity. The size of the first scale is eight pixels; of course, in some cases of this embodiment, the size of the first scale may be adjusted to other values, with the specific adjustment depending on the specific situation.
  • Step S203: use the second scale to rotate and translate again the partial image pixels in the first-scale-adjusted MRI image and the partial image pixels in the pre-implantation CT image: in this step, the partial image pixels in the first-scale-adjusted MRI image and the partial image pixels in the pre-implantation CT image are rotated and translated again using the second scale (a fine scale); the parameters are initialized with the result of the previous level, and the above rotation and translation process is iterated until the most accurate scale is reached. The number of pixels included in the first scale is greater than the number of pixels included in the second scale; the size of the second scale is one pixel, although in some cases of this embodiment the size of the second scale may be adjusted according to the specific situation.
  • Step S204: determine whether the similarity between the second-scale-adjusted MRI image and the pixels at the corresponding positions of the pre-implantation CT image reaches the second similarity: in this step, it is determined whether the similarity between the second-scale-adjusted MRI image and the pixels at the corresponding positions of the pre-implantation CT image reaches the second similarity; specifically, a mutual-information similarity comparison algorithm is used for this determination. If the result is YES, step S205 is performed; otherwise, the process returns to step S203. It is worth mentioning that in this embodiment the second similarity is greater than the first similarity.
  • Step S205: end the registration process: if the result of step S204 is YES, this step is performed, and the registration process for the MRI image and the pre-implantation CT image ends. That is, after registration, the pre-implantation CT image, the post-implantation CT image and the MRI image are in the same coordinate system, which is convenient for data fusion. This coarse-to-fine approach, which looks at the whole at a large scale and at the details at a small scale, can greatly improve the success rate of registration. The registration sketch given earlier can be reused for this step, as shown below.
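Because steps S201 to S205 repeat the same coarse-to-fine procedure for the pre-implantation CT, the hypothetical register_ct_to_mri and read_dicom_series helpers from the earlier sketches can simply be called again; the directory path is a placeholder.

```python
# Reuse of the earlier registration sketch for the pre-implantation CT.
import SimpleITK as sitk

ct_pre_image = read_dicom_series("data/ct_pre_implant")
pre_transform = register_ct_to_mri(mri_image, ct_pre_image)
ct_pre_registered = sitk.Resample(ct_pre_image, mri_image, pre_transform,
                                  sitk.sitkLinear, 0.0, ct_pre_image.GetPixelID())
# Now the MRI, pre-implantation CT and post-implantation CT share one coordinate space.
```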
  • the CT image after registration includes the pre-implantation CT image after registration and the post-implantation CT image after registration.
  • the extraction steps used in the above step S03 are as follows:
  • the registered post-implantation CT image and the registered pre-implantation CT image are subtracted, and the portion where the grayscale difference is not 0 is the electrode image information: this embodiment uses a subtraction segmentation algorithm to extract the electrode image information.
  • the idea of the subtraction segmentation algorithm is to subtract the two images: pixels at the same position with the same content cancel to a gray value of 0, and the remaining pixels whose gray value is not 0 constitute the difference between the two images.
  • the basis of the subtraction segmentation algorithm is registration. Theoretically, once the pre-implantation CT image and the post-implantation CT image have each been registered to the same coordinate system, the only difference between the two images is the presence or absence of the electrode, so the electrode can be extracted by subtracting the two images.
  • the registered post-implantation CT image and the registered pre-implantation CT image are subtracted, and the portion where the grayscale difference is not zero is the electrode image information.
  • the cerebral cortex electrode and magnetic resonance image fusion display system combines the pre-implantation and post-implantation CT image comparison algorithm with the threshold segmentation algorithm to extract the position information of the implanted electrode. This makes it possible to extract the electrode position information better. A sketch of the subtraction step is given below.
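A minimal sketch of the subtraction segmentation, assuming both CT volumes were resampled onto the MRI grid as in the earlier sketches. A small tolerance replaces the strict "difference is not 0" test of the text, as an assumption to absorb interpolation and noise effects.

```python
# Sketch of the subtraction segmentation: post-implantation CT minus pre-implantation CT.
import numpy as np
import SimpleITK as sitk

post = sitk.GetArrayFromImage(ct_post_registered).astype(np.int32)
pre = sitk.GetArrayFromImage(ct_pre_registered).astype(np.int32)

diff = post - pre
electrode_mask = np.abs(diff) > 50              # hypothetical tolerance instead of != 0
electrode_info = np.where(electrode_mask, post, 0)
```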
  • step S04 can be further refined, and the refined flowchart is as shown in FIG. 5.
  • the above step S04 further includes:
  • Step S41: arbitrarily select a seed point from the registered MRI image:
  • in this embodiment, an image segmentation algorithm is used to extract the cerebral cortical tissue image information; the image segmentation algorithm is based on a region growing algorithm, whose idea is to group pixels with similar properties together to form a region.
  • in this step, a seed point is arbitrarily selected from the registered MRI image as the starting point for growth.
  • Step S42: select the points whose pixel values fall within the set interval in a region centered on the seed point with a radius equal to the first set value, and incorporate them into the region where the seed point is located: in this step, within the region centered on the seed point and having a radius equal to the first set value, the points whose pixel values fall within the set interval are selected and incorporated into the region where the seed point is located; that is, pixels around the seed point that have the same or similar properties as the seed point (as judged according to a predetermined growth or similarity criterion) are merged into the region where the seed pixel is located.
  • the first set value is set in advance by software.
  • Step S43: expand the radius, and judge whether there is a point in the enlarged area whose pixel value falls within the set interval: in this step, the radius is expanded and it is judged whether a point whose pixel value falls within the set interval still exists in the enlarged area; that is, the pixels newly incorporated into the region where the seed pixel is located are regarded as new seed pixels and the above process is continued. If the result of the determination is YES, step S44 is performed; otherwise, step S45 is performed.
  • Step S44: incorporate the point into the region where the seed point is located: if the result of step S43 is YES, this step is performed, and the point is incorporated into the region where the seed point is located. After performing this step, the process returns to step S43.
  • Step S45: use the region where the seed point is located as the cerebral cortical tissue image information: if the result of step S43 is NO, this step is performed.
  • in this step, the region where the seed point is located is used as the image information of the cerebral cortex tissue; that is, once no pixel satisfying the condition remains to be incorporated, the region has finished growing. In other words, the growth criterion used by this image segmentation algorithm is that points within a neighbourhood of a certain radius whose pixel values fall within a certain interval are incorporated into the region where the seed pixel is located, and when no point meeting the criterion remains to be incorporated, the image segmentation algorithm terminates. In this way, the electrode image information and the cerebral cortex tissue image information are extracted. A region-growing sketch is given below.
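A minimal sketch of the region growing of steps S41 to S45, assuming SimpleITK, whose ConnectedThreshold filter implements the same idea of absorbing neighbouring voxels whose values fall inside a set interval until no candidate remains; the seed coordinates and the intensity interval are placeholders.

```python
# Sketch of region-growing extraction of the cerebral cortex (steps S41-S45).
import SimpleITK as sitk

seed = (128, 128, 90)          # hypothetical seed voxel inside the brain tissue
lower, upper = 100, 300        # hypothetical intensity interval for brain tissue

cortex_mask = sitk.ConnectedThreshold(mri_image, seedList=[seed],
                                      lower=lower, upper=upper)
cortex_info = sitk.Mask(mri_image, cortex_mask)   # keep only the grown region
```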
  • step S05 can be further refined, and the refined flowchart is as shown in FIG. 6.
  • the above step S05 further includes:
  • Step S51: convert the data type of the electrode image information and the data type of the cerebral cortex tissue image information into the same data type: in this step, the data type of the electrode image information and the data type of the cerebral cortex tissue image information are converted into the same data type.
  • the same data type is Byte type, that is, the electrode image information and the cerebral cortex tissue image information are both converted into eight-bit data.
  • Step S52: through the set color mapping table, map the different gray values of each pixel in the electrode image and the cerebral cortex tissue image into different colors and transparencies, and adjust the color and transparency respectively: in this step, the set color mapping table is used to map the different gray values of each pixel in the electrode image and the cerebral cortex tissue image into different colors and transparencies, and their color and transparency are adjusted separately. It is worth mentioning that in the fusion process the brain tissue and the electrode are given different colors and are finally displayed in the same coordinate system, so their color and transparency can be adjusted separately, which brings great convenience in use.
  • Step S53: display the fused image by surface rendering or volume rendering: in this step, the fused image is displayed by surface rendering or volume rendering. A volume-rendering sketch covering steps S51 to S53 is given below.
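The following sketch illustrates steps S51 to S53 with VTK (an assumed toolkit; the patent names none). Both extracted volumes are rescaled to 8-bit, each is given its own color and opacity transfer function in the role of the color mapping table, and the two volumes are volume-rendered in a single renderer, so the fused scene can be rotated with the mouse as described in step S06. The variables cortex_info and electrode_info are the hypothetical outputs of the earlier sketches; colors, opacities and spacing handling are illustrative only.

```python
# Sketch of steps S51-S53: convert to 8-bit, map gray values to color/opacity,
# and volume-render cortex and electrodes together (assumed VTK implementation).
import numpy as np
import vtk
from vtk.util import numpy_support
import SimpleITK as sitk

def to_uint8(a):
    """Step S51: rescale a volume to 8-bit (Byte) data."""
    a = a.astype(np.float32)
    rng = float(a.max() - a.min()) or 1.0
    return ((a - a.min()) / rng * 255.0).astype(np.uint8)

def to_vtk_volume(array_u8, rgb, spacing):
    """Wrap an 8-bit NumPy volume as a vtkVolume with its own color/opacity map."""
    data = vtk.vtkImageData()
    data.SetDimensions(array_u8.shape[2], array_u8.shape[1], array_u8.shape[0])
    data.SetSpacing(spacing)
    data.GetPointData().SetScalars(
        numpy_support.numpy_to_vtk(array_u8.ravel(), deep=True,
                                   array_type=vtk.VTK_UNSIGNED_CHAR))

    color = vtk.vtkColorTransferFunction()        # gray value -> color (step S52)
    color.AddRGBPoint(0, 0.0, 0.0, 0.0)
    color.AddRGBPoint(255, *rgb)
    opacity = vtk.vtkPiecewiseFunction()          # gray value -> transparency
    opacity.AddPoint(0, 0.0)
    opacity.AddPoint(255, 0.6)

    prop = vtk.vtkVolumeProperty()
    prop.SetColor(color)
    prop.SetScalarOpacity(opacity)

    mapper = vtk.vtkSmartVolumeMapper()           # volume rendering (step S53)
    mapper.SetInputData(data)
    volume = vtk.vtkVolume()
    volume.SetMapper(mapper)
    volume.SetProperty(prop)
    return volume

spacing = mri_image.GetSpacing()
cortex_array = sitk.GetArrayFromImage(cortex_info)    # from the region-growing sketch
electrode_array = electrode_info                      # from the subtraction sketch

renderer = vtk.vtkRenderer()
renderer.AddVolume(to_vtk_volume(to_uint8(cortex_array), (0.9, 0.8, 0.7), spacing))
renderer.AddVolume(to_vtk_volume(to_uint8(electrode_array), (1.0, 0.1, 0.1), spacing))

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
interactor.Initialize()
window.Render()
interactor.Start()     # mouse drag rotates the fused scene (cf. step S06)
```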
  • this embodiment further relates to a device for realizing the above method for fused display of the cerebral cortex electrode and the magnetic resonance image; its schematic structural view is shown in FIG. 7.
  • the apparatus comprises a registration unit 2, an electrode information extraction unit 3, a cerebral cortex information extraction unit 4, an information fusion unit 5, and a display effect adjustment unit 6; wherein the registration unit 2 is used to register the MRI image and the post-implantation CT image in the same coordinate space; the electrode information extraction unit 3 is configured to extract electrode image information from the registered CT image; the cerebral cortex information extraction unit 4 is configured to extract cerebral cortex tissue image information from the registered MRI image; the information fusion unit 5 is configured to spatially fuse the electrode image information and the cerebral cortex tissue image information and to display the spatially fused information through a three-dimensional visualization method; the display effect adjustment unit 6 is configured to adjust the effect and angle of the three-dimensional visualization display and to convert the adjusted result into the required format for output and saving.
  • the above MRI image and the post-implant CT image are both DICOM format image files.
  • the above required format is a DICOM file, a picture file or a video file, and results in different formats can serve different purposes. Since the extracted electrode information is fused and displayed on the MRI image, the brain tissue information and the position information of the electrode can be seen at the same time, so the position information of the electrode can be obtained.
  • the registration unit 2 further includes a coordinate conversion module 21, a post-implantation rotation translation comparison module 22, a post-implantation rotation translation module 23, and a post-implantation similarity determination module 24; wherein the coordinate conversion module 21 is used to convert the MRI image and the post-implantation CT image into the same coordinate system and to move the physical center of the MRI image to the same point as the physical center of the post-implantation CT image; the post-implantation rotation translation comparison module 22 is used to respectively rotate, translate, and compare partial image pixels in the MRI image and partial image pixels in the post-implantation CT image using the first scale, until the similarity between the contour of the MRI image and the contour of the post-implantation CT image reaches the first similarity;
  • the post-implantation rotation translation module 23 is configured to rotate and translate a portion of the image pixels in the MRI image after the first scale adjustment and a portion of the image pixels in the post-implant CT image using the second scale;
  • the number of pixels included in the first scale is greater than the number of pixels included in the second scale;
  • the post-implantation similarity judgment module 24 is configured to determine whether the similarity between the second-scale-adjusted MRI image and the pixels at the corresponding positions of the post-implantation CT image reaches the second similarity, and if so, to end the registration process; otherwise, to continue the adjustment at the second scale; the second similarity is greater than the first similarity.
  • the size of the first scale is eight pixels
  • the size of the second scale is one pixel.
  • the sizes of the first scale and the second scale can be adjusted according to the specific situation.
  • when only the pre-implantation MRI scan and the post-implantation CT scan are performed, the electrode information extraction unit 3 further includes a grayscale threshold setting module 31 and an electrode image extraction module 32;
  • the grayscale threshold setting module 31 is configured to set a grayscale threshold range for the registered CT image, the set grayscale threshold range including the electrode image;
  • the electrode image extraction module 32 is configured to extract the image whose grayscale falls within the set grayscale threshold range as the electrode image information. This completes the extraction of the electrode information.
  • the device further includes a pre-implantation registration unit 20; the pre-implantation registration unit 20 is used to register the MRI image and the pre-implantation CT image in the same coordinate space.
  • the pre-implantation CT image is an image file in DICOM format.
  • the pre-implantation registration unit 20 further includes a pre-implantation coordinate conversion module 201, a pre-implantation rotational translation comparison module 202, a pre-implantation rotation translation module 203, and a pre-implantation similarity determination module 204;
  • the pre-implantation coordinate conversion module 201 is configured to convert the MRI image and the pre-implantation CT image into the same coordinate system, and move the physical center of the MRI image to the same point as the physical center of the pre-implantation CT image;
  • the pre-implantation rotation translation comparison module 202 is configured to respectively rotate, translate, and compare partial image pixels in the MRI image and partial image pixels in the pre-implantation CT image using the first scale, until the similarity between the contour of the MRI image and the contour of the pre-implantation CT image reaches a first similarity;
  • the pre-implantation rotation translation module 203 is configured to use the second scale to rotate and translate again the partial image pixels in the first-scale-adjusted MRI image and the partial image pixels in the pre-implantation CT image;
  • the CT image after registration includes the pre-implantation CT image after registration and the post-implantation CT image after registration.
  • in this case, the electrode information extraction unit 3 further includes a CT image subtraction module 31'; the CT image subtraction module 31' is configured to subtract the registered pre-implantation CT image from the registered post-implantation CT image, and the portion where the grayscale difference is not 0 is the electrode image information.
  • the electrode information extraction unit 3 may be configured to include the grayscale threshold setting module 31 and the electrode image extraction module 32, or it may be configured to include the CT image subtraction module 31'.
  • the cerebral cortex information extraction unit 4 further includes a seed point selection module 41, a set-interval point selection module 42, a set-interval pixel value judgment module 43 and a pixel value judgment module 44; wherein the seed point selection module 41 is configured to arbitrarily select one seed point from the registered MRI image; the set-interval point selection module 42 is configured to select, in the region centered on the seed point with a radius equal to the first set value, the points whose pixel values fall within the set interval and incorporate them into the region where the seed point is located; the set-interval pixel value judgment module 43 is configured to expand the radius and determine whether a point whose pixel value falls within the set interval still exists in the enlarged region; the pixel value judgment module 44 is configured to continue the judgment of whether pixel values fall within the set interval. This completes the extraction of the cerebral cortex information.
  • the information fusion unit 5 further includes a data type conversion module 51, a gray value mapping module 52, and an image display module 53.
  • the data type conversion module 51 is configured to convert the data type of the electrode image information and the data type of the cerebral cortex tissue image information into the same data type;
  • the gray value mapping module 52 is configured to map, through the set color mapping table, the different gray values of each pixel in the electrode image and the cerebral cortex tissue image into different colors and transparencies, and to adjust their color and transparency respectively;
  • the image display module 53 is used to display the fused image by surface rendering or volume rendering. In this way, the information of the cerebral cortex and the position information of the electrodes can be seen, which helps the user.
  • the data of the pre-implantation CT image, the post-implantation CT image and the MRI image are taken and imported into the cerebral cortex electrode and magnetic resonance image fusion display system, which automatically processes them and presents the result in a three-dimensional visualization that combines the information of the implanted electrodes with the information of the cerebral cortex; this provides a wealth of auxiliary information and can yield more satisfactory results without increasing the burden on the user.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to a method and an apparatus for fused display of cerebral cortex electrode and magnetic resonance images. The method comprises: taking an MRI image obtained by MRI scanning before an intracranial electrode is implanted in a patient, and a post-implantation CT image obtained by CT scanning after the intracranial electrode is implanted in the patient (S01); registering the MRI image and the post-implantation CT image in the same coordinate space (S02); extracting electrode image information from the registered CT image (S03); extracting cerebral cortex tissue image information from the registered MRI image (S04); spatially fusing the electrode image information and the cerebral cortex tissue image information, and displaying the spatially fused information by means of a three-dimensional visualization method (S05); and adjusting the effect and angle of the three-dimensional visualization display, and converting the adjustment result into a required format for output and storage (S06). By implementing the method and apparatus for fused display of cerebral cortex electrode and magnetic resonance images, position information of the electrodes can be acquired.
PCT/CN2015/082501 2015-06-26 2015-06-26 Procédé et appareil pour affichage par fusion des images d'électrode de cortex cérébral et de résonance magnétique WO2016206093A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2015/082501 WO2016206093A1 (fr) 2015-06-26 2015-06-26 Procédé et appareil pour affichage par fusion des images d'électrode de cortex cérébral et de résonance magnétique
CN201580003336.6A CN105849777B (zh) 2015-06-26 2015-06-26 一种大脑皮层电极与磁共振图像融合显示的方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/082501 WO2016206093A1 (fr) 2015-06-26 2015-06-26 Procédé et appareil pour affichage par fusion des images d'électrode de cortex cérébral et de résonance magnétique

Publications (1)

Publication Number Publication Date
WO2016206093A1 true WO2016206093A1 (fr) 2016-12-29

Family

ID=56580841

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/082501 WO2016206093A1 (fr) 2015-06-26 2015-06-26 Procédé et appareil pour affichage par fusion des images d'électrode de cortex cérébral et de résonance magnétique

Country Status (2)

Country Link
CN (1) CN105849777B (fr)
WO (1) WO2016206093A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288641A (zh) * 2019-07-03 2019-09-27 武汉瑞福宁科技有限公司 Pet/ct与mri脑部图像异机配准方法、装置、计算机设备及存储介质
CN112150419A (zh) * 2020-09-10 2020-12-29 东软医疗***股份有限公司 图像处理方法、装置及电子设备
CN114820406A (zh) * 2020-10-30 2022-07-29 武汉联影医疗科技有限公司 融合图像显示方法、装置和医学影像***

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3547252A4 (fr) 2016-12-28 2019-12-04 Shanghai United Imaging Healthcare Co., Ltd. Système et procédé de traitement d'images multi-modales
TWI684994B (zh) * 2018-06-22 2020-02-11 國立臺灣科技大學 脊椎影像註冊方法
CN109662778B (zh) * 2019-03-01 2020-11-10 中国人民解放军国防科技大学 基于三维卷积的人机交互式颅内电极定位方法与***
CN110680312B (zh) * 2019-09-17 2022-05-17 首都医科大学宣武医院 电极脑回标尺图的制作方法
CN112164019A (zh) * 2020-10-12 2021-01-01 珠海市人民医院 一种ct、mr扫描图像融合方法
CN114631885A (zh) * 2020-12-15 2022-06-17 上海微创卜算子医疗科技有限公司 模拟***及可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006075331A2 (fr) * 2005-01-13 2006-07-20 Mazor Surgical Technologies Ltd. Système robotique guidé par image destiné à la neurochirurgie de précision
US20090048507A1 (en) * 2007-08-07 2009-02-19 Thorsten Feiweier Method and apparatus for imaging functional and electrical activities of the brain
US20140039577A1 (en) * 2012-08-04 2014-02-06 Boston Scientific Neuromodulation Corporation Techniques and methods for storing and transferring registration, atlas, and lead information between medical devices
CN103932796A (zh) * 2014-04-13 2014-07-23 北京师范大学 一种基于多模态医学影像数据融合的颅内电极个体化定位的方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102708293B (zh) * 2012-05-14 2015-06-17 电子科技大学 一种电极模型和头模型配准的方法
CN102814002B (zh) * 2012-08-08 2015-04-01 深圳先进技术研究院 经颅磁刺激导航***及经颅磁刺激线圈定位方法
KR101579110B1 (ko) * 2013-09-16 2015-12-22 삼성전자주식회사 자기 공명 영상 생성 방법, 그에 따른 위상 대조 영상의 위상 정보 획득 방법, 그에 따른 자화율 강조 영상의 위상 정보 획득 방법 및 그에 따른 자기 공명 영상 생성 장치

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006075331A2 (fr) * 2005-01-13 2006-07-20 Mazor Surgical Technologies Ltd. Système robotique guidé par image destiné à la neurochirurgie de précision
US20090048507A1 (en) * 2007-08-07 2009-02-19 Thorsten Feiweier Method and apparatus for imaging functional and electrical activities of the brain
US20140039577A1 (en) * 2012-08-04 2014-02-06 Boston Scientific Neuromodulation Corporation Techniques and methods for storing and transferring registration, atlas, and lead information between medical devices
CN103932796A (zh) * 2014-04-13 2014-07-23 北京师范大学 一种基于多模态医学影像数据融合的颅内电极个体化定位的方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HU, ZHENHONG ET AL.: "Jīyú pícéng nǎodiàn gāopín gamma shénjīng zhèndàng de yǔyán gōngnéngqū dìngwèi" [Localization of language functional areas based on high-frequency gamma neural oscillations in cortical EEG], CBME 2013, 21 October 2013 (2013-10-21), pages 3 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288641A (zh) * 2019-07-03 2019-09-27 武汉瑞福宁科技有限公司 Pet/ct与mri脑部图像异机配准方法、装置、计算机设备及存储介质
CN112150419A (zh) * 2020-09-10 2020-12-29 东软医疗***股份有限公司 图像处理方法、装置及电子设备
CN114820406A (zh) * 2020-10-30 2022-07-29 武汉联影医疗科技有限公司 融合图像显示方法、装置和医学影像***

Also Published As

Publication number Publication date
CN105849777A (zh) 2016-08-10
CN105849777B (zh) 2019-01-18

Similar Documents

Publication Publication Date Title
WO2016206093A1 (fr) Procédé et appareil pour affichage par fusion des images d'électrode de cortex cérébral et de résonance magnétique
WO2016126056A1 (fr) Appareil et procédé de fourniture d'informations médicales
Ayoub et al. Towards building a photo-realistic virtual human face for craniomaxillofacial diagnosis and treatment planning
CN110033465B (zh) 一种应用于双目内窥镜医学图像的实时三维重建方法
WO2021169191A1 (fr) Procédé et système de tomographie assistée par ordinateur rapide basés sur une image positionnée stéréoscopique virtuelle
WO2020027377A1 (fr) Dispositif pour fournir un repérage d'image 3d et procédé associé
WO2021025461A1 (fr) Système de diagnostic à base d'image échographique pour lésion d'artère coronaire utilisant un apprentissage automatique et procédé de diagnostic
WO2018066764A1 (fr) Système et procédé de génération d'images pour évaluation d'implant
JP3910239B2 (ja) 医用画像合成装置
WO2021141416A1 (fr) Appareil et procédé de génération de modèle tridimensionnel par le biais d'une mise en correspondance de données
WO2018066763A1 (fr) Système et procédé permettant de générer des images pour une évaluation d'implant
WO2022191575A1 (fr) Dispositif et procédé de simulation basés sur la mise en correspondance d'images de visage
WO2019045144A1 (fr) Appareil et procédé de traitement d'image médicale pour dispositif de navigation médicale
WO2019132244A1 (fr) Procédé de génération d'informations de simulation chirurgicale et programme
WO2019045390A1 (fr) Système de soins bucco-dentaires
WO2024111913A1 (fr) Procédé et dispositif de conversion d'image médicale à l'aide d'une intelligence artificielle
Andrews et al. Three-dimensional CT scan reconstruction for the assessment of congenital aural atresia
JP2005124617A (ja) 医用画像診断支援システム
Lerma et al. Smartphone-based video for 3D modelling: Application to infant’s cranial deformation analysis
Hussain et al. Vision-based augmented reality system for middle ear surgery: evaluation in operating room environment
Guigou et al. Augmented reality based transmodiolar cochlear implantation
JP2005536236A (ja) 画像調整手段を持つ医療用ビューイングシステム
US20100082147A1 (en) Method for the manufacturing of a reproduction of an encapsulated head of a foetus and objects obtained by the method
WO2020235784A1 (fr) Procédé et dispositif de détection de nerf
WO2020209495A1 (fr) Appareil de prétraitement de données d'image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15895991

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.06.2018)

122 Ep: pct application non-entry in european phase

Ref document number: 15895991

Country of ref document: EP

Kind code of ref document: A1