CN113808227A - Medical image alignment method, medium and electronic device - Google Patents


Info

Publication number
CN113808227A
CN113808227A
Authority
CN
China
Prior art keywords
image
target image
reference object
sequence
characteristic
Prior art date
Legal status
Granted
Application number
CN202010535461.4A
Other languages
Chinese (zh)
Other versions
CN113808227B (en)
Inventor
***
丁雨晖
孙忠权
顾静军
周公敢
Current Assignee
Hangzhou Pujian Medical Technology Co ltd
Original Assignee
Hangzhou Pujian Medical Technology Co ltd
Application filed by Hangzhou Pujian Medical Technology Co ltd filed Critical Hangzhou Pujian Medical Technology Co ltd
Priority to CN202010535461.4A priority Critical patent/CN113808227B/en
Publication of CN113808227A publication Critical patent/CN113808227A/en
Application granted granted Critical
Publication of CN113808227B publication Critical patent/CN113808227B/en
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/008 — Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06T 7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
    • G16H — HEALTHCARE INFORMATICS, i.e. ICT specially adapted for the handling or processing of medical or healthcare data
    • G16H 30/40 — ICT specially adapted for processing medical images, e.g. editing
    • G06T 2207/10081 — Computed x-ray tomography [CT]
    • G06T 2207/30012 — Spine; Backbone

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a medical image alignment method, a medium and an electronic device. The medical image alignment method is used for aligning a target image and a reference image, and comprises the following steps: acquiring a characteristic value of a reference object in the target image, and acquiring a characteristic sequence of the target image according to the characteristic value of the reference object in the target image; acquiring a characteristic value of the reference object in the reference image, and acquiring a characteristic sequence of the reference image according to the characteristic value of the reference object in the reference image; acquiring a translation distance according to the characteristic sequence of the target image and the characteristic sequence of the reference image; and translating the target image according to the translation distance. The medical image alignment method can improve the efficiency of alignment processing of the medical images.

Description

Medical image alignment method, medium and electronic device
Technical Field
The present invention relates to an image alignment method, and more particularly, to a medical image alignment method, a medium, and an electronic device, which belong to the field of image processing.
Background
CT (Computed Tomography) examination is a relatively advanced medical imaging technique. During a CT examination, it is often necessary to take flat scan phase, venous phase and arterial phase CT images of the same patient. The flat scan phase CT image is obtained before any contrast medium is injected; the venous phase CT image is obtained while the venous blood vessels are filling and developing with contrast; and the arterial phase CT image is obtained while the arterial blood vessels are filling and developing. In the actual examination process, however, the CT images of the different phases are captured at different times, and the patient's breathing and movement may shift the patient's position in the CT scanner between captures, so that the CT images of different phases are offset from one another in the vertical direction; for example, the first CT image of the flat scan phase may correspond to the second CT image of the arterial phase, or the third CT image of the flat scan phase may correspond to the first CT image of the arterial phase. In the prior art, CT images of different phases are usually aligned manually, which is inefficient.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, an object of the present invention is to provide a medical image alignment method, a medium and an electronic device, which are used to solve the problem of low efficiency of manual alignment in the prior art.
To achieve the above and other related objects, a first aspect of the present invention provides a medical image alignment method for aligning a target image with a reference image. The medical image alignment method comprises the following steps: acquiring a characteristic value of a reference object in the target image, and acquiring a characteristic sequence of the target image according to the characteristic value of the reference object in the target image; acquiring a characteristic value of the reference object in the reference image, and acquiring a characteristic sequence of the reference image according to the characteristic value of the reference object in the reference image; acquiring a translation distance according to the characteristic sequence of the target image and the characteristic sequence of the reference image; and translating the target image according to the translation distance.
In some embodiments of the first aspect, an implementation method for obtaining a feature value of a reference object in the target image includes: acquiring a CT value range of the reference object; acquiring pixel points of the reference object according to the CT value range, and further generating a mask of the reference object; and acquiring the feature value of the reference object according to the mask of the reference object.
In certain embodiments of the first aspect, the feature values of the reference object comprise: a geometric feature of the convex hull corresponding to the reference object, the center point of the reference object, and/or the area of the reference object.
In some embodiments of the first aspect, a method for obtaining a feature sequence of the target image according to a feature value of a reference object in the target image includes: obtaining a feature vector corresponding to the target image according to the feature value of the reference object in the target image; and obtaining a characteristic sequence of the target image according to the characteristic vector corresponding to the target image.
In some embodiments of the first aspect, the method for obtaining the translation distance according to the feature sequence of the target image and the feature sequence of the reference image includes: acquiring the value range of the translation distance; the range of values includes at least two integer values; sequentially calculating the sequence variance corresponding to each integer value in the value range according to the characteristic sequence of the target image and the characteristic sequence of the reference image; and selecting a corresponding integer value as the translation distance according to the sequence variance.
In some embodiments of the first aspect, from the feature sequence F_tar of the target image and the feature sequence F_ref of the reference image, the sequence variance D(x) corresponding to an integer value x is calculated as:

D(x) = (1 / (n · m)) · Σ_{i=1}^{n} Σ_{j=1}^{m} ( F_tar[i, j] − F_ref[i + x, j] )²

wherein n is the number of CT images included in the target image, m is the number of feature values of the reference object corresponding to each CT image in the target image, F_tar[i, j] denotes the feature value in the i-th row and j-th column of F_tar, and F_ref[i + x, j] denotes the feature value in the (i + x)-th row and j-th column of F_ref; terms for which row i + x falls outside the reference sequence are omitted from the sum.
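The selection of the translation distance by minimising this sequence variance over the candidate integer shifts can be sketched as follows (a minimal interpretation; the mean-squared-difference form, the function names and the handling of out-of-range rows are assumptions, not taken from the patent):

```python
import numpy as np

def sequence_variance(f_tar, f_ref, x):
    """Mean squared difference between the target feature sequence and the
    reference feature sequence shifted by the integer x; rows of the target
    that fall outside the reference sequence are skipped."""
    total, count = 0.0, 0
    for i in range(f_tar.shape[0]):
        r = i + x
        if 0 <= r < f_ref.shape[0]:
            d = f_tar[i] - f_ref[r]
            total += float(np.dot(d, d))
            count += d.size
    return total / count if count else float("inf")

def best_translation(f_tar, f_ref, x_range):
    """Pick the integer shift with the smallest sequence variance."""
    return min(x_range, key=lambda x: sequence_variance(f_tar, f_ref, x))
```

In practice the value range of x would be limited by the lengths of the two CT sequences, as described in the text.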
In certain embodiments of the first aspect, the reference object is the spine.
In some embodiments of the first aspect, the target image comprises: flat scan phase CT images, venous phase CT images and/or arterial phase CT images.
The second aspect of the present invention also provides a computer-readable storage medium having a computer program stored thereon. When executed by a processor, the computer program implements the medical image alignment method of the present invention.
The third aspect of the present invention also provides an electronic apparatus, comprising: a memory storing a computer program; a processor, communicatively connected with the memory, which executes the medical image alignment method when the computer program is invoked; and a display, communicatively connected with the processor and the memory, for displaying a GUI associated with the medical image alignment method.
As described above, the medical image alignment method, medium, and electronic device according to the present invention have the following advantages:
the medical image alignment method can automatically acquire the characteristic sequences of a target image and a reference image, and further can translate the target image to align the target image and the reference image by acquiring the translation distance of the target image according to the characteristic sequences. The whole process basically does not need manual participation, and the operation is convenient and the efficiency is higher.
Drawings
Fig. 1A is a diagram illustrating an example of a flat-scan CT image of a medical image alignment method according to an embodiment of the invention.
Fig. 1B is a diagram illustrating an example of a venous CT image according to an embodiment of the medical image alignment method of the present invention.
FIG. 1C is a diagram illustrating an example of an arterial phase CT image according to an embodiment of the medical image alignment method of the present invention.
Fig. 2 is a flowchart illustrating a medical image alignment method according to an embodiment of the invention.
Fig. 3A is a diagram illustrating an example of CT sequences included in a target image according to a medical image alignment method of the present invention in an embodiment.
Fig. 3B is a diagram illustrating an example of a CT sequence included in a reference image according to an embodiment of the medical image alignment method of the present invention.
Fig. 4A is a flowchart illustrating a feature value obtaining method according to an embodiment of the present invention.
FIG. 4B is a flowchart illustrating a method for acquiring a reference object mask according to an embodiment of the invention.
Fig. 4C is a CT image exemplary diagram of a medical image alignment method according to an embodiment of the invention.
Fig. 4D is a CT image exemplary diagram of a medical image alignment method according to an embodiment of the invention.
Fig. 4E is a CT image exemplary diagram of a medical image alignment method according to an embodiment of the invention.
Fig. 4F is a CT image exemplary diagram of a medical image alignment method according to an embodiment of the invention.
Fig. 5 is a flowchart illustrating a feature sequence obtained by the medical image alignment method according to an embodiment of the invention.
Fig. 6 is a flowchart illustrating the step S23 of the medical image alignment method according to an embodiment of the invention.
Fig. 7A is a flowchart illustrating a medical image alignment method according to another embodiment of the present invention.
Fig. 7B is a flowchart illustrating the step S72 of the medical image alignment method according to an embodiment of the invention.
Fig. 7C is a flowchart illustrating the step S74 of the medical image alignment method according to an embodiment of the invention.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Description of the element reference numerals
4 selection frame
411 Upper frame
412 left frame
413 right frame
414 lower frame
4' outer frame
411' adjusted upper frame
412' adjusted left frame
413' adjusted right frame
414' adjusted lower frame
51 spinal column
52 Rib
800 electronic device
810 memory
820 processor
830 display
Steps S21 to S24
Steps S211a to S213a
Steps S211b to S212b
Steps S231 to S233
Steps S41 to S44
Steps S71 to S74
Steps S721 to S724
Steps S741 to S743
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
In the actual CT examination process, the CT images of different phases are captured at different times, and the patient's breathing and movement may shift the patient's position in the CT scanner between captures, so that the CT images of different phases are offset from one another in the vertical direction. Please refer to fig. 1A to 1C, which are exemplary diagrams of CT images obtained in different phases according to an embodiment; fig. 1A shows the first CT image taken during the flat scan phase, fig. 1B the first CT image taken during the venous phase, and fig. 1C the first CT image taken during the arterial phase. The position and shape of the organs and bones in the flat scan CT image of fig. 1A substantially correspond to those in the venous CT image of fig. 1B, so the CT image of fig. 1A is aligned with that of fig. 1B. By contrast, the positions and shapes of the organs and bones in fig. 1A differ greatly from those in the arterial CT image of fig. 1C, so the CT image of fig. 1A is not aligned with that of fig. 1C, and the flat scan CT image and the arterial CT image need to be aligned. In the prior art, such alignment is usually performed manually, which is inefficient.
In view of the above problem, the present invention provides a medical image alignment method for aligning a target image with a reference image. The medical image alignment method comprises the following steps: acquiring a characteristic value of a reference object in the target image, and acquiring a characteristic sequence of the target image according to the characteristic value of the reference object in the target image; acquiring a characteristic value of a reference object in the reference image, and acquiring a characteristic sequence of the reference image according to the characteristic value of the reference object in the reference image; acquiring a translation distance according to the characteristic sequence of the target image and the characteristic sequence of the reference image; and translating the target image according to the translation distance. The medical image alignment method can automatically acquire the characteristic sequences of a target image and a reference image so as to acquire the translation distance of the target image, and translates the target image according to the translation distance so as to align the target image and the reference image. The medical image alignment method provided by the invention basically does not need manual participation, and the efficiency of image alignment processing is improved.
Referring to fig. 2, in an embodiment of the invention, the medical image alignment method includes:
s21, acquiring the characteristic value of the reference object in the target image, and acquiring the characteristic sequence of the target image according to the characteristic value of the reference object in the target image; wherein the reference object is a bone or an organ in the target image and the reference image; the target image comprises a CT sequence of at least 2 CT images, such as: fig. 3A shows a CT sequence included in the target image according to the present embodiment, wherein the CT sequence is composed of a plurality of CT images.
S22, acquiring the characteristic value of the reference object in the reference image, and acquiring the characteristic sequence of the reference image according to the characteristic value of the reference object in the reference image; for example, fig. 3B shows a CT sequence included in the reference image in the present embodiment, and the CT sequence is composed of a plurality of CT images.
S23, acquiring a translation distance according to the characteristic sequence of the target image and the characteristic sequence of the reference image;
and S24, translating the target image according to the translation distance so as to align the target image and the reference image.
It should be noted that fig. 3A and 3B are only exemplary illustrations of the target image and the reference image. In practical application, a CT scanner is used to obtain CT images of multiple phases; one of them is selected as the reference image, and a CT image other than the reference image is selected as the target image. The medical image alignment method requires essentially no manual participation and improves the efficiency of aligning medical images.
In an embodiment of the invention, the translation distance N is a positive integer, zero, or a negative integer. Wherein:
When N is a positive integer, the 1st CT image of the target image is aligned with the (N+1)-th CT image of the reference image. If the target image was captured from bottom to top, so that its 1st CT image is the lowest image, all CT images in the target image need to be shifted upward by N, so that the 1st CT image of the target image is aligned with the (N+1)-th CT image of the reference image, the 2nd CT image of the target image is aligned with the (N+2)-th CT image of the reference image, and so on. If the target image was captured from top to bottom, so that its 1st CT image is the topmost image, all CT images in the target image need to be shifted downward by N to achieve the same correspondence. For example, if the 1st CT image in fig. 3A is aligned with the 3rd CT image in fig. 3B, the CT sequence shown in fig. 3A is shifted up by 2 as a whole to align fig. 3A with fig. 3B, i.e. the 1st CT image of fig. 3A is aligned with the 3rd CT image of fig. 3B, and the 2nd CT image of fig. 3A is aligned with the 4th CT image of fig. 3B.
When N is zero, the target image and the reference image are already aligned, and the target image does not need to be translated.
When N is a negative integer, the (-N+1)-th CT image of the target image is aligned with the 1st CT image of the reference image. If the target image was captured from bottom to top, so that its 1st CT image is the lowest image, all CT images in the target image need to be shifted downward by -N, so that the (-N+1)-th CT image of the target image is aligned with the 1st CT image of the reference image, the (-N+2)-th CT image of the target image is aligned with the 2nd CT image of the reference image, and so on. If the target image was captured from top to bottom, so that its 1st CT image is the topmost image, all CT images in the target image need to be shifted upward by -N to achieve the same correspondence.
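The index bookkeeping described above for positive, zero and negative N can be captured in a small helper. The 1-based slice numbers follow the text, while the function itself is only an illustrative sketch:

```python
def aligned_pairs(n_target, n_ref, shift):
    """Pair each CT slice of the target sequence with its counterpart in the
    reference sequence after translating by `shift`: for positive shift the
    1st target slice meets the (shift+1)-th reference slice, for negative
    shift the (-shift+1)-th target slice meets the 1st reference slice."""
    pairs = []
    for t in range(1, n_target + 1):  # 1-based slice numbers, as in the text
        r = t + shift
        if 1 <= r <= n_ref:
            pairs.append((t, r))
    return pairs
```

With the fig. 3A/3B example (a shift of 2), slices 1 and 2 of the target pair with slices 3 and 4 of the reference.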
Referring to fig. 4A, in an embodiment of the present invention, the target image includes at least 2 CT images; each CT image in the target image corresponds to one or more feature values of the reference object, and the feature value of the reference object in the target image comprises the feature values of the reference object corresponding to each CT image in the target image. In addition, in the present embodiment, the CT value of the reference object differs significantly from that of the surrounding adjacent organs or bones, so that the reference object can be distinguished from them by its CT value. In this embodiment, the method for obtaining a feature value of the reference object corresponding to any CT image in the target image includes:
S211a, acquiring the CT value range of the reference object. CT values are expressed in Hounsfield units (HU) and essentially reflect the density of the body's organs or tissues. Different reference objects have different CT value ranges; for example, for bone the CT value range is greater than 1000 HU.
S212a, acquiring pixel points of the reference object according to the CT value range, and further generating a mask of the reference object. Ideally, the mask of the reference object covers the reference object completely and is composed of all the pixels of the reference object.
S213a, obtaining the feature value of the reference object according to the mask of the reference object.
Through steps S211a to S213a, the feature value of the reference object corresponding to any CT image in the target image can be obtained; combining the feature values of the reference object corresponding to all CT images in the target image then yields the feature value of the reference object in the target image.
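Steps S211a to S213a can be sketched as follows — thresholding a slice by the reference object's CT value range to obtain a mask, then reading simple feature values (area and centre point) off the mask. The HU thresholds and all names are illustrative assumptions:

```python
import numpy as np

def reference_mask(ct_slice, hu_min, hu_max=None):
    """Binary mask keeping only pixels whose CT value (in HU) lies in the
    reference object's range (steps S211a/S212a)."""
    mask = ct_slice >= hu_min
    if hu_max is not None:
        mask &= ct_slice <= hu_max
    return mask

def mask_features(mask):
    """Feature values read off the mask (step S213a): the area as the pixel
    count, and the centre point as the mean pixel coordinate."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 0, (0.0, 0.0)
    return int(ys.size), (float(ys.mean()), float(xs.mean()))
```

For bone, for example, `reference_mask(ct_slice, 1000)` would keep only pixels above 1000 HU, matching the range given in the text.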
Referring to fig. 4B, in an embodiment of the present invention, for any CT image, an implementation method for obtaining pixel points of the reference object according to the CT value range to generate a mask of the reference object includes:
S41, deleting all pixel points in the CT image whose CT values fall outside the CT value range. Since the CT value of the reference object in this embodiment differs from that of the surrounding adjacent organs or bones, the reference object can be distinguished from them by its CT value, and step S41 deletes the organs and/or bones adjacent to the reference object, leaving one or more positionally discrete organs or bones. For example, when the spine is selected as the reference object, the CT image only includes bones at discrete positions, such as the spine and ribs, after step S41.
S42, generating a selection frame according to the position and shape of the reference object; the selection frame contains a portion of the pixels of the reference object. The selection frame may be specified manually according to prior knowledge or selected using an artificial intelligence technique, which is not limited herein. Since the reference object has a closed and continuous shape, at least one border of the selection frame includes pixel points of the reference object. Preferably, the selection frame is a rectangular frame.
S43, adjusting the borders of the selection frame so that no border of the selection frame contains pixel points of the reference object; at this time, the selection frame is the outer frame of the reference object. Specifically, all borders of the selection frame are traversed, and any border containing pixel points of the reference object is selected as a border to be adjusted; each border to be adjusted is then moved in its specified direction until it no longer contains pixel points of the reference object.
For example, for a rectangular selection frame, if the lower border contains pixel points of the reference object, an adjustment step is executed and it is determined whether the adjusted lower border still contains pixel points of the reference object: if not, the adjustment of the lower border is finished; otherwise, the adjustment step is repeated until the adjusted lower border no longer contains pixel points of the reference object. The adjustment step translates the lower border downward by a certain distance and extends the left and right borders accordingly.
And S44, taking the set formed by all the pixel points of the reference object within the outer frame as the mask of the reference object.
In this embodiment, a mask of the reference object is obtained according to the position and shape of the reference object, and the feature values of the reference object are extracted based on the mask. Because the reference object is an organ or a bone, its shape is usually irregular and directly obtaining its feature values is difficult; setting the selection frame to a regular shape such as a rectangle simplifies the acquisition of the mask, and on that basis the extraction of the feature values is relatively simple.
In an embodiment of the invention, a CT image of the medical image is shown in fig. 4C, wherein the spine 51 is selected as the reference object. In this embodiment, in step S41, all the pixel points with CT values smaller than 1000 HU are deleted, so that the CT image only includes the spine 51 and the ribs 52. Referring to fig. 4D, in step S42 a rectangular selection box 4 is generated according to prior knowledge; the selection box 4 has an upper frame 411, a left frame 412, a right frame 413 and a lower frame 414. In step S43, the borders of the selection box are adjusted until none of them includes pixel points of the spine 51. For example, referring to fig. 4E, the right frame 413 is first translated rightward until it no longer includes pixel points of the spine 51, yielding an adjusted right frame 413'; the upper frame 411 and the lower frame 414 are extended accordingly to keep the selection frame rectangular. The lower frame 414 is then translated downward until it no longer contains pixel points of the spine 51, yielding an adjusted lower frame 414'; the left frame 412 and the adjusted right frame 413' are extended accordingly. Next, referring to fig. 4F, the left frame 412 is translated leftward until it no longer includes pixel points of the spine 51, yielding an adjusted left frame 412'; finally, the upper frame 411 is translated upward until it no longer contains pixel points of the spine 51, yielding an adjusted upper frame 411'. The adjusted borders are extended accordingly so that the adjusted selection frame remains rectangular, and the adjusted selection frame is the outer frame 4' of the spine. At this time, the outer frame 4' includes all the pixel points of the spine 51, and the set formed by all the pixel points of the spine 51 within the outer frame 4' is the mask of the spine 51.
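The border-adjustment procedure of steps S42 and S43, illustrated above for the spine, can be sketched as follows — a simplified version that grows each offending border one pixel at a time and stops at the image edge (function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def expand_box(mask, top, left, bottom, right, step=1):
    """Grow a rectangular selection frame until none of its four borders
    contains a pixel of the reference object; returns the adjusted
    (top, left, bottom, right), all inclusive pixel indices."""
    h, w = mask.shape
    while True:
        hit_top = bool(mask[top, left:right + 1].any())
        hit_left = bool(mask[top:bottom + 1, left].any())
        hit_bottom = bool(mask[bottom, left:right + 1].any())
        hit_right = bool(mask[top:bottom + 1, right].any())
        if not (hit_top or hit_left or hit_bottom or hit_right):
            return top, left, bottom, right
        moved = False
        if hit_top and top > 0:
            top = max(top - step, 0)
            moved = True
        if hit_left and left > 0:
            left = max(left - step, 0)
            moved = True
        if hit_bottom and bottom < h - 1:
            bottom = min(bottom + step, h - 1)
            moved = True
        if hit_right and right < w - 1:
            right = min(right + step, w - 1)
            moved = True
        if not moved:
            # every offending border already touches the image edge
            return top, left, bottom, right
```

The returned rectangle plays the role of the outer frame 4'; intersecting it with the thresholded mask yields the mask of the reference object.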
In an embodiment of the present invention, for any CT image, an implementation method for obtaining pixel points of the reference object according to the CT value range to generate a mask of the reference object includes: deleting all pixel points in the CT image whose CT values fall outside the CT value range, which removes the organs and/or bones adjacent to the reference object and leaves one or more positionally discrete organs or bones; and selecting, from these positionally discrete organs or bones, the closed region with the largest area as the mask of the reference object.
In some embodiments of the present invention, the feature value of the reference object in the reference image is obtained in a manner similar to that of obtaining the feature value of the reference object in the target image, and details thereof are omitted here.
In an embodiment of the invention, the feature values of the reference object include: the geometric features of the convex hull corresponding to the reference object, the center point of the reference object, and/or the area of the reference object. The convex hull corresponding to the reference object can be computed from the mask, and its geometric features can be the area, the side lengths, and/or the number of sides of the convex hull; the center point of the reference object can be approximated by the center point of the outer frame of the mask; and the number of pixel points contained in the mask can be counted as the area of the reference object.
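These three kinds of feature values can be computed from a binary mask roughly as follows; a sketch assuming SciPy's `ConvexHull`, with the particular feature ordering and the function name `feature_vector` chosen only for illustration.

```python
import numpy as np
from scipy.spatial import ConvexHull

def feature_vector(mask):
    """Build the feature vector of one slice from its reference-object
    mask: bounding-box centre (stand-in for the centre point), pixel
    count (area), and convex-hull area and vertex count (hull geometry)."""
    ys, xs = np.nonzero(mask)
    cy = (ys.min() + ys.max()) / 2.0     # centre of the mask's outer frame
    cx = (xs.min() + xs.max()) / 2.0
    area = float(len(ys))                # number of pixels in the mask
    hull = ConvexHull(np.column_stack([xs, ys]))
    # in 2D, ConvexHull.volume is the hull's area
    return np.array([cy, cx, area, hull.volume, float(len(hull.vertices))])
```

The bounding-box centre is the patent's substitute for the exact centre point, and counting mask pixels stands in for the area, as described above.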
Referring to fig. 5, in an embodiment of the present invention, an implementation method for obtaining a feature sequence of a target image according to a feature value of a reference object in the target image includes:
S211b, obtaining a feature vector corresponding to the target image according to the feature value of the reference object in the target image. Each CT image in the target image corresponds to one feature vector, which is composed of the feature values of the reference object in that CT image; the target image thus corresponds to the feature vectors of all of its CT images. For example, the l-th CT image in the target image corresponds to the feature vector f_l = [f_{l,1}, f_{l,2}, ..., f_{l,m}], wherein m is the number of feature values of the reference object in any one CT image and is a positive integer; f_{l,k} is the value of the k-th feature value of the l-th CT image; k is a positive integer with k ≤ m; and l is a positive integer with l ≤ n.
S212b, obtaining the feature sequence of the target image according to the feature vectors corresponding to the target image. The feature sequence of the target image is composed of the feature vectors of all CT images in the target image; preferably, the feature sequence F_tgt of the target image is the n × m matrix

F_tgt = [f_1; f_2; ...; f_n],

whose l-th row is the feature vector f_l of the l-th CT image, wherein n is the number of CT images contained in the target image and the value of n is a positive integer. The feature sequence F_ref of the reference image is calculated in a similar way to the feature sequence F_tgt of the target image and is not described here again.
Referring to fig. 6, in an embodiment of the present invention, an implementation method for obtaining a translation distance according to a feature sequence of a target image and a feature sequence of a reference image includes:
S231, obtaining a value range of the translation distance; the value range includes at least two integer values, each of which may be a positive integer, a negative integer, or zero; the value range is, for example, [-3, 3] or [-4, 4].
S232, sequentially calculating the sequence variance corresponding to each integer value in the value range according to the feature sequence of the target image and the feature sequence of the reference image, that is, traversing all integer values in the value range and calculating the sequence variance corresponding to each integer value;
and S233, selecting a corresponding integer value as the translation distance according to the sequence variance. Preferably, the integer value with the smallest sequence variance is selected as the translation distance.
In this embodiment, by calculating the sequence variances corresponding to all the integer values and selecting the integer value with the smallest sequence variance as the translation distance, the smallest difference between the translated target image and the reference image can be ensured, thereby achieving a good alignment effect between the reference image and the target image.
In an embodiment of the present invention, according to the feature sequence F_tgt of the target image and the feature sequence F_ref of the reference image, the sequence variance S_x^2 corresponding to any integer value x in the value range is calculated by the formula:

S_x^2 = Σ_{j=1}^{m} (1/n) Σ_{i=1}^{n} ( d_{i,j}^x − E_j^x )^2,

wherein n is the number of CT images included in the target image, m is the number of feature values of the reference object corresponding to each CT image in the target image, d_{i,j}^x is the difference defined below, and E_j^x is its expected value over the whole feature sequence.
The difference d_{i,j}^x represents, after the target image has been translated by the vector distance x, the difference between the j-th feature value of the i-th image in the target image and the corresponding feature value of its corresponding image in the reference image:

d_{i,j}^x = F_tgt(i, j) − F_ref(i', j),

wherein F_tgt(i, j) denotes the feature value in the i-th row and j-th column of F_tgt, and F_ref(i', j) denotes the feature value in the i'-th row and j-th column of F_ref. When the feature sequence of the target image is translated in the positive direction, i.e. when x ≥ 0, the image corresponding to the i-th image is the (i + x)-th image in the reference image; when the feature sequence of the target image is translated in the negative direction, i.e. when x < 0, the image corresponding to the i-th image is the (i − x)-th image in the reference image.
For the difference d_{i,j}^x, its expected value E_j^x over the whole feature sequence is the expectation, after the target image has been translated by the vector distance x, of the difference between the j-th feature value of the target image and the corresponding feature value in the reference image:

E_j^x = (1/n) Σ_{i=1}^{n} d_{i,j}^x.

The formula for S_x^2 above sums the variances of the difference values over all m feature sequences, so that multiple features are referred to in the calculation; this makes the overall effect after translation optimal and prevents errors caused by inaccurate extraction of a single feature.
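A sketch of the variance computation and of the choice of translation distance. The translated text's index convention for x < 0 is ambiguous, so this sketch pairs target slice i with reference slice i + x for all x, and drops slice pairs whose shifted index falls outside the reference sequence; both choices are assumptions the patent does not spell out.

```python
import numpy as np

def sequence_variance(F_tgt, F_ref, x):
    """Sum, over the m features, of the variance of the differences
    d_{i,j}^x between the target sequence shifted by x and the
    reference sequence. Pairing i <-> i+x for all x and dropping
    out-of-range pairs are assumptions of this sketch."""
    n = F_tgt.shape[0]
    i = np.arange(n)
    j = i + x                                    # index of the corresponding reference slice
    keep = (j >= 0) & (j < F_ref.shape[0])
    d = F_tgt[keep] - F_ref[j[keep]]             # d_{i,j}^x for the kept slices
    return float(((d - d.mean(axis=0)) ** 2).mean(axis=0).sum())

def translation_distance(F_tgt, F_ref, M=3):
    """Traverse all integers x in [-M, M] and pick the x whose
    sequence variance is smallest (steps S231-S233)."""
    return min(range(-M, M + 1), key=lambda x: sequence_variance(F_tgt, F_ref, x))
```

With a target sequence that is simply the reference sequence offset by some number of slices, the minimum-variance x recovers that offset, which is the alignment property the patent relies on.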
In one embodiment of the present invention, the reference object is a spine. For the abdominal CT image, the position and the shape of the spine are relatively fixed, so that the spine is selected as the reference object to improve the alignment effect of the medical image.
In an embodiment of the present invention, the target image includes: flat scan phase CT images, venous phase CT images and/or arterial phase CT images.
In one embodiment of the present invention, the spine is selected as the reference object. Referring to fig. 7A, the medical image alignment method includes:
S71, extracting the spine mask of the images;
S72, extracting the feature value of the spine in each CT image according to the spine mask;
S73, obtaining the feature sequences of the flat scan phase CT image, the venous phase CT image, and the arterial phase CT image respectively;
and S74, translating the venous phase CT image and the arterial phase CT image respectively, taking the flat scan phase CT image as the reference, so as to align the three phases of CT images.
In this embodiment, step S71 uses a threshold method to extract the spine mask. CT images are expressed in Hounsfield units (HU), which reflect the density of human organs or tissues. Typically, the density of human bone is greater than 1000 HU, so when the threshold method is adopted to extract the spine mask, only tissues below 1000 HU need to be filtered out to obtain the skeleton. To avoid the influence of non-spinal bones such as the ribs on the extraction result, step S71 further obtains the mask of the spine by the mask extraction method of steps S41 to S44.
Referring to fig. 7B, in the present embodiment, the implementation method for extracting the spine feature value in each CT image in step S72 includes:
S721, calculating the center point of the spine in each CT image. Because the outer frame of the spine mask is rectangular in this embodiment, by the symmetry of the outer frame its center point can be taken approximately as the center point of the spine;
S722, calculating the area of the spine. Specifically, the number of pixel points included in the spine mask may be counted as the spine area.
S723, acquiring the geometric features of the spine. Specifically, a convex hull can be computed from the extracted spine mask, and quantities such as the area and perimeter of the convex hull are calculated in turn as the geometric features of the spine.
S724, combining the center point, the area, and the geometric features of the spine into a feature vector.
Steps S721 to S724 yield the feature value of the spine in any CT image. By obtaining the feature values of all flat scan phase CT images, the feature sequence of the flat scan phase CT image can be obtained; likewise, the feature values of all venous phase CT images yield the feature sequence of the venous phase CT image, and the feature values of all arterial phase CT images yield the feature sequence of the arterial phase CT image.
Referring to fig. 7C, in the present embodiment, the implementation method of step S74 includes:
S741, sequentially selecting all integer values in the range [-M, M], and sequentially translating the venous phase CT image along the direction of the spine according to each selected integer value, wherein M is a positive integer, preferably 3;
S742, sequentially calculating the sequence variances corresponding to all selected integer values based on the feature sequence of the venous phase CT image and the feature sequence of the flat scan phase CT image;
and S743, selecting the integer value with the minimum corresponding sequence variance as a translation distance, and further translating the venous phase CT image, so that the venous phase CT image is aligned with the flat scan phase CT image.
The manner of translating the CT image in the arterial phase to align the CT image in the arterial phase with the CT image in the flat scan phase is similar to S741 to S743, and details thereof are not repeated here.
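Steps S741 to S743, applied to both the venous and arterial phases with the flat scan phase as reference, might be exercised end-to-end as below. The feature sequences here are synthetic stand-ins, and the pairing of target slice i with reference slice i + x (with out-of-range pairs dropped) is an assumption of the sketch.

```python
import numpy as np

def best_shift(F_tgt, F_ref, M=3):
    """Pick the shift in [-M, M] whose summed per-feature difference
    variances are smallest; a compact restatement of steps S741-S743
    under the assumed pairing of target slice i with reference slice i+x."""
    def var(x):
        i = np.arange(F_tgt.shape[0])
        j = i + x
        keep = (j >= 0) & (j < F_ref.shape[0])
        d = F_tgt[keep] - F_ref[j[keep]]
        return ((d - d.mean(axis=0)) ** 2).mean(axis=0).sum()
    return min(range(-M, M + 1), key=var)

# Hypothetical per-slice feature sequences for the three phases
idx = np.arange(10, dtype=float)
flat = np.column_stack([idx ** 2, idx])   # reference: flat scan phase
venous = flat[1:]                         # venous phase, offset by 1 slice
arterial = flat[3:]                       # arterial phase, offset by 3 slices
shifts = {"venous": best_shift(venous, flat), "arterial": best_shift(arterial, flat)}
```

Each phase is then translated by its own recovered shift, so that both the venous and arterial series line up with the flat scan series.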
Based on the above description of the medical image alignment method, the present invention further provides a computer readable storage medium having a computer readable program stored thereon. The computer program, when executed by a processor, implements the medical image alignment method of the present invention.
Based on the above description of the medical image alignment method, the invention further provides an electronic device. Referring to fig. 8, the electronic device 800 includes: a memory 810 storing a computer program; a processor 820, communicatively connected to the memory, for executing the medical image alignment method of the present invention when the computer program is invoked; and a display 830, communicatively connected to the processor and the memory, for displaying a GUI interactive interface associated with the medical image alignment method.
The protection scope of the medical image alignment method according to the present invention is not limited to the execution sequence of the steps illustrated in the embodiments; solutions obtained by adding, removing, or replacing steps according to the principles of the present invention are all included in the protection scope of the present invention.
The medical image alignment method can automatically acquire the feature sequences of the target image and the reference image, obtain the translation distance of the target image from these feature sequences, and translate the target image accordingly to align it with the reference image. The whole process requires essentially no manual intervention, and is therefore convenient to operate and efficient.
The medical image alignment method can obtain the translation distance with the minimum error according to the three-dimensional shape of the reference object, and thus allows CT images to be aligned as a preprocessing step, making it possible to process CT image data with artificial intelligence.
In conclusion, the present invention effectively overcomes various disadvantages of the prior art and has high industrial utilization value.
The foregoing embodiments merely illustrate the principles and utilities of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (10)

1. A medical image alignment method for aligning a target image with a reference image, the medical image alignment method comprising:
acquiring a characteristic value of a reference object in the target image, and acquiring a characteristic sequence of the target image according to the characteristic value of the reference object in the target image;
acquiring a characteristic value of the reference object in the reference image, and acquiring a characteristic sequence of the reference image according to the characteristic value of the reference object in the reference image;
acquiring a translation distance according to the characteristic sequence of the target image and the characteristic sequence of the reference image;
and translating the target image according to the translation distance.
2. The medical image alignment method according to claim 1, wherein the method for obtaining the feature value of the reference object in the target image comprises:
acquiring a CT value range of the reference object;
acquiring pixel points of the reference object according to the CT value range, and further generating a mask of the reference object;
and acquiring the characteristic value of the reference object according to the mask of the reference object.
3. The medical image alignment method according to claim 2, wherein the feature values of the reference object include: the geometric features of the convex hull corresponding to the reference object, the center point of the reference object, and/or the area of the reference object.
4. The medical image alignment method according to claim 1, wherein the implementation method for obtaining the feature sequence of the target image according to the feature value of the reference object in the target image comprises:
obtaining a feature vector corresponding to the target image according to the feature value of the reference object in the target image;
and obtaining a characteristic sequence of the target image according to the characteristic vector corresponding to the target image.
5. The medical image alignment method according to claim 1, wherein the method for obtaining the translation distance according to the feature sequence of the target image and the feature sequence of the reference image comprises:
acquiring the value range of the translation distance; the range of values includes at least two integer values;
sequentially calculating the sequence variance corresponding to each integer value in the value range according to the characteristic sequence of the target image and the characteristic sequence of the reference image;
and selecting a corresponding integer value as the translation distance according to the sequence variance.
6. The method according to claim 5, wherein, according to the feature sequence F_tgt of the target image and the feature sequence F_ref of the reference image, the sequence variance S_x^2 corresponding to an integer value x is calculated by the formula:

S_x^2 = Σ_{j=1}^{m} (1/n) Σ_{i=1}^{n} ( d_{i,j}^x − E_j^x )^2,

wherein n is the number of CT images included in the target image, m is the number of feature values of the reference object corresponding to each CT image in the target image, d_{i,j}^x = F_tgt(i, j) − F_ref(i', j) with i' = i + x when x ≥ 0 and i' = i − x when x < 0, E_j^x = (1/n) Σ_{i=1}^{n} d_{i,j}^x, F_tgt(i, j) denotes the feature value in the i-th row and j-th column of F_tgt, and F_ref(i', j) denotes the feature value in the i'-th row and j-th column of F_ref.
7. The medical image alignment method according to claim 1, wherein the reference object is the spine.
8. The medical image alignment method according to claim 1, wherein the target image comprises: flat scan phase CT images, venous phase CT images, and/or arterial phase CT images.
9. A computer-readable storage medium on which a computer-readable program is stored, wherein the computer program, when executed by a processor, implements the medical image alignment method of any one of claims 1 to 8.
10. An electronic device, characterized in that the electronic device comprises:
a memory storing a computer program;
a processor, communicatively coupled to the memory, for executing the medical image alignment method of any of claims 1 to 8 when the computer program is invoked;
and the display is in communication connection with the processor and the memory and is used for displaying a related GUI interactive interface of the medical image alignment method.
CN202010535461.4A 2020-06-12 2020-06-12 Medical image alignment method, medium and electronic equipment Active CN113808227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010535461.4A CN113808227B (en) 2020-06-12 2020-06-12 Medical image alignment method, medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010535461.4A CN113808227B (en) 2020-06-12 2020-06-12 Medical image alignment method, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113808227A true CN113808227A (en) 2021-12-17
CN113808227B CN113808227B (en) 2023-08-25

Family

ID=78892128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010535461.4A Active CN113808227B (en) 2020-06-12 2020-06-12 Medical image alignment method, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113808227B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090232378A1 (en) * 2008-03-17 2009-09-17 Keigo Nakamura Image analysis apparatus, image analysis method, and computer-readable recording medium storing image analysis program
US20110144480A1 (en) * 2009-12-10 2011-06-16 Siemens Corporation Stent marker detection using a learning based classifier in medical imaging
US20150042839A1 (en) * 2013-08-12 2015-02-12 Canon Kabushiki Kaisha Distance measuring apparatus, imaging apparatus, and distance measuring method
CN107527360A (en) * 2017-08-23 2017-12-29 维沃移动通信有限公司 A kind of image alignment method and mobile terminal
US20180108136A1 (en) * 2016-10-18 2018-04-19 Ortery Technologies, Inc. Method of length measurement for 2d photography
CN108898567A (en) * 2018-09-20 2018-11-27 北京旷视科技有限公司 Image denoising method, apparatus and system
CN109978965A (en) * 2019-03-21 2019-07-05 江南大学 A kind of simulation CT image generating method, device, computer equipment and storage medium
CN110611767A (en) * 2019-09-25 2019-12-24 北京迈格威科技有限公司 Image processing method and device and electronic equipment
CN110909580A (en) * 2018-09-18 2020-03-24 北京市商汤科技开发有限公司 Data processing method and device, electronic equipment and storage medium
US20200113649A1 (en) * 2018-10-12 2020-04-16 Laonpeople Inc. Apparatus and method for generating image of corrected teeth

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
JIAN YANG ETC.: "Convex hull matching and hierarchical decomposition for multimodality medical image registration", JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY, vol. 23, no. 02 *
M.JAYARAM ETC.: "Convex Hulls in Image Processing:A Scoping Review", AMERICAN JOURNAL OF INTELLIGENT SYSTEMS, vol. 06, no. 02 *
JIANG XUEQI ET AL.: "Segmentation of the left ventricle in ultrasound images based on the active shape model", COMPUTER APPLICATIONS, no. 1
LIAO LEI ET AL.: "A moving target detection method for VideoSAR based on image sequences", RADAR SCIENCE AND TECHNOLOGY, no. 06
GU JINGJUN ET AL.: "A family digital photo classification method similar to video scene segmentation", COMPUTER ENGINEERING, vol. 29, no. 01
LONG FENG: "A brief analysis of medical image analysis based on multi-object tracking", WORLD LATEST MEDICINE INFORMATION, no. 19

Also Published As

Publication number Publication date
CN113808227B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
US10709394B2 (en) Method and system for 3D reconstruction of X-ray CT volume and segmentation mask from a few X-ray radiographs
US11534136B2 (en) Three-dimensional segmentation from two-dimensional intracardiac echocardiography imaging
Isaac et al. Super resolution techniques for medical image processing
JP4104054B2 (en) Image alignment apparatus and image processing apparatus
CN109754448B (en) CT cardiac scanning artifact correction method and system
JP4885138B2 (en) Method and system for motion correction in a sequence of images
CN107106102B (en) Digital subtraction angiography
CN104644202A (en) Medical image data processing apparatus, medical image data processing method and medical image data processing program
US7630548B2 (en) Image segmentation using isoperimetric trees
US20060204064A1 (en) Automatic registration of intra-modality medical volume images using affine transformation
CN116849691B (en) Method, equipment and storage medium for automatically identifying global optimal phase of cardiac CT imaging
JP2013198603A (en) Image processing apparatus, method, and program
CN113808227A (en) Medical image alignment method, medium and electronic device
CN116894783A (en) Metal artifact removal method for countermeasure generation network model based on time-varying constraint
CN105787922B (en) A kind of method and apparatus for realizing automatic MPR batch processing
CN110473241A (en) Method for registering images, storage medium and computer equipment
CN113538419B (en) Image processing method and system
KR102505908B1 (en) Medical Image Fusion System
JP7155670B2 (en) Medical image processing apparatus, medical image processing method, program, and data creation method
JP6253992B2 (en) Organ position estimation apparatus, organ position estimation apparatus control method, and organ position estimation apparatus control program
KR102348863B1 (en) Method and Apparatus for Registration of Image data Using Unsupervised Learning Model
JP5846368B2 (en) Medical image processing apparatus, method, and program
JP2013505779A (en) Computer readable medium, system, and method for improving medical image quality using motion information
JP7300285B2 (en) Medical image processing device, X-ray diagnostic device and medical image processing program
CN117765044A (en) Registration method, system and device for medical image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Gu Jingjun

Inventor after: Zhou Gonggan

Inventor before: Ding Yuan

Inventor before: Ding Yuhui

Inventor before: Sun Zhongquan

Inventor before: Gu Jingjun

Inventor before: Zhou Gonggan

GR01 Patent grant
GR01 Patent grant