CN113808227B - Medical image alignment method, medium and electronic equipment - Google Patents

Medical image alignment method, medium and electronic equipment Download PDF

Info

Publication number
CN113808227B
CN113808227B (application CN202010535461.4A)
Authority
CN
China
Prior art keywords
image
target image
reference object
characteristic
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010535461.4A
Other languages
Chinese (zh)
Other versions
CN113808227A (en)
Inventor
顾静军 (Gu Jingjun)
周公敢 (Zhou Gonggan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Pujian Medical Technology Co ltd
Original Assignee
Hangzhou Pujian Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Pujian Medical Technology Co ltd filed Critical Hangzhou Pujian Medical Technology Co ltd
Priority to CN202010535461.4A priority Critical patent/CN113808227B/en
Publication of CN113808227A publication Critical patent/CN113808227A/en
Application granted granted Critical
Publication of CN113808227B publication Critical patent/CN113808227B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30008 Bone
    • G06T2207/30012 Spine; Backbone

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Geometry (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a medical image alignment method, a medium and electronic equipment. The medical image alignment method is used for aligning a target image with a reference image, and comprises the following steps: acquiring a characteristic value of a reference object in the target image, and acquiring a characteristic sequence of the target image according to the characteristic value of the reference object in the target image; acquiring a characteristic value of the reference object in the reference image, and acquiring a characteristic sequence of the reference image according to the characteristic value of the reference object in the reference image; acquiring a translation distance according to the characteristic sequence of the target image and the characteristic sequence of the reference image; and translating the target image according to the translation distance. The medical image alignment method can improve the efficiency of performing alignment processing on medical images.

Description

Medical image alignment method, medium and electronic equipment
Technical Field
The invention belongs to the field of image processing, relates to an image alignment method, and in particular relates to a medical image alignment method, a medium and electronic equipment.
Background
CT (Computed Tomography) examination is a modern, relatively advanced medical imaging technique. During a CT examination it is often necessary to acquire a flat scan phase CT image, a venous phase CT image, an arterial phase CT image and the like of the same patient. The flat scan phase CT image is obtained without injection of a contrast agent, the venous phase CT image is obtained when the venous vessels are filled and enhanced, and the arterial phase CT image is obtained when the arterial vessels are filled and enhanced. In the actual examination process, however, the acquisition times of the CT images of the different phases may differ, and because of factors such as breathing and movement the patient's position in the CT scanner may deviate between acquisitions, so that the CT images of the different phases do not correspond in the vertical direction; for example, the first CT image of the flat scan phase may correspond to the second CT image of the arterial phase, or the third CT image of the flat scan phase may correspond to the first CT image of the arterial phase. In the prior art, alignment of the CT images of the different phases usually has to be done manually, which is inefficient.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, an object of the present invention is to provide a medical image alignment method, medium and electronic device, which are used for solving the problem that the efficiency is low when the manual alignment method is adopted in the prior art.
To achieve the above and other related objects, a first aspect of the present invention provides a medical image alignment method for aligning a target image with a reference image. The medical image alignment method includes: acquiring a characteristic value of a reference object in the target image, and acquiring a characteristic sequence of the target image according to the characteristic value of the reference object in the target image; acquiring a characteristic value of the reference object in the reference image, and acquiring a characteristic sequence of the reference image according to the characteristic value of the reference object in the reference image; acquiring a translation distance according to the characteristic sequence of the target image and the characteristic sequence of the reference image; and translating the target image according to the translation distance.
In some embodiments of the first aspect, the method for obtaining the feature value of the reference object in the target image includes: acquiring a CT value range of the reference object; acquiring pixel points of the reference object according to the CT value range, and further generating a mask of the reference object; and acquiring the feature value of the reference object according to the mask of the reference object.
In certain embodiments of the first aspect, the feature values of the reference object include: the geometric features of the convex hull corresponding to the reference object, the center point of the reference object, and/or the area of the reference object.
In certain embodiments of the first aspect, the method for obtaining the feature sequence of the target image according to the feature value of the reference object in the target image includes: obtaining a feature vector corresponding to the target image according to the feature value of the reference object in the target image; and obtaining a characteristic sequence of the target image according to the characteristic vector corresponding to the target image.
In certain embodiments of the first aspect, the method for obtaining the translation distance according to the feature sequence of the target image and the feature sequence of the reference image includes: acquiring the value range of the translation distance; the value range comprises at least two integer values; sequentially calculating sequence variances corresponding to all integer values in the value range according to the characteristic sequence of the target image and the characteristic sequence of the reference image; and selecting a corresponding integer value as the translation distance according to the sequence variance.
In certain embodiments of the first aspect, according to the feature sequence $F^{t}$ of the target image and the feature sequence $F^{r}$ of the reference image, the sequence variance $S(x)$ corresponding to the integer value x is calculated by the formula:

$$S(x) = \sum_{j=1}^{m} \frac{1}{n} \sum_{i=1}^{n} \left( d_{i,j}(x) - \bar{d}_{j}(x) \right)^{2}, \qquad d_{i,j}(x) = F^{t}_{i,j} - F^{r}_{i+x,j}, \qquad \bar{d}_{j}(x) = \frac{1}{n} \sum_{i=1}^{n} d_{i,j}(x)$$

wherein n is the number of CT images contained in the target image, and m is the number of feature values of the reference object corresponding to each CT image in the target image; $F^{t}_{i,j}$ denotes the feature value in the i-th row and j-th column of $F^{t}$, and $F^{r}_{i,j}$ denotes the feature value in the i-th row and j-th column of $F^{r}$.
In certain embodiments of the first aspect, the reference object is a spinal column.
In certain embodiments of the first aspect, the target image comprises: flat scan phase CT images, venous phase CT images and/or arterial phase CT images.
The second aspect of the present invention also provides a computer-readable storage medium having a computer program stored thereon. The computer program, when executed by a processor, implements the medical image alignment method of the present invention.
A third aspect of the present invention also provides an electronic device, comprising: a memory storing a computer program; a processor communicatively coupled to said memory for executing the medical image alignment method of the present invention when said computer program is invoked; and the display is in communication connection with the processor and the memory and is used for displaying a related GUI interaction interface of the medical image alignment method.
As described above, the medical image alignment method, medium and electronic device of the present invention have the following beneficial effects:
the medical image alignment method can automatically acquire the characteristic sequences of the target image and the reference image, and acquire the translation distance of the target image according to the characteristic sequences so as to translate the target image to realize the alignment of the target image and the reference image. The whole process basically does not need manual participation, and is convenient to operate and high in efficiency.
Drawings
FIG. 1A is a schematic view of a flat scan CT image of an embodiment of a medical image alignment method according to the present invention.
FIG. 1B is a diagram showing an exemplary embodiment of a CT image during a venous phase of a medical image alignment method according to the present invention.
FIG. 1C is a diagram showing an example of an arterial phase CT image in an embodiment of the medical image alignment method according to the present invention.
Fig. 2 is a flowchart illustrating a medical image alignment method according to an embodiment of the invention.
Fig. 3A is a diagram showing an exemplary CT sequence included in a target image according to an embodiment of the medical image alignment method of the present invention.
FIG. 3B is a diagram showing an exemplary CT sequence included in a reference image in an embodiment of a medical image alignment method according to the present invention.
Fig. 4A is a flowchart illustrating a method for obtaining feature values in an embodiment of the medical image alignment method according to the present invention.
FIG. 4B is a flowchart illustrating a method for obtaining a mask of a reference object according to an embodiment of the present invention.
Fig. 4C is a diagram illustrating an example of a CT image in an embodiment of the medical image alignment method according to the present invention.
Fig. 4D is a diagram illustrating an exemplary CT image of an embodiment of the medical image alignment method according to the present invention.
Fig. 4E is a diagram illustrating an exemplary CT image of an embodiment of the medical image alignment method according to the present invention.
FIG. 4F is a diagram illustrating an exemplary CT image of a medical image alignment method according to an embodiment of the present invention.
Fig. 5 is a flowchart illustrating a method for obtaining a feature sequence in an embodiment of the medical image alignment method according to the present invention.
Fig. 6 is a flowchart showing a step S23 of the medical image alignment method according to an embodiment of the invention.
Fig. 7A is a flowchart of a medical image alignment method according to another embodiment of the present invention.
Fig. 7B is a flowchart illustrating a step S72 of the medical image alignment method according to an embodiment of the invention.
Fig. 7C is a flowchart illustrating step S74 of the medical image alignment method according to an embodiment of the present invention.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Description of element reference numerals
4. Selection frame
411. Upper frame
412. Left frame
413. Right frame
414. Lower frame
4'. Outer frame
411'. Adjusted upper frame
412'. Adjusted left frame
413'. Adjusted right frame
414'. Adjusted lower frame
51. Spine
52. Rib
800. Electronic device
810. Memory
820. Processor
830. Display
S21 to S24. Steps
S211a to S213a. Steps
S211b to S212b. Steps
S231 to S233. Steps
S41 to S44. Steps
S71 to S74. Steps
S721 to S724. Steps
S741 to S743. Steps
Detailed Description
Other advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure of this specification, which describes embodiments of the invention by way of specific examples. The invention may also be implemented or applied through other, different embodiments, and the details in this specification may be modified or changed in various ways without departing from the spirit of the invention. It should be noted that the following embodiments and the features in the embodiments may be combined with one another provided there is no conflict.
It should be noted that the drawings provided with the following embodiments only illustrate the basic concept of the invention schematically; they show only the components related to the invention rather than the number, shape and size of the components in an actual implementation, in which the form, quantity and proportion of each component may be changed arbitrarily and the layout may be more complicated.
In an actual CT examination, the acquisition times of the CT images of different phases may differ, and the patient's position in the CT scanner may deviate between acquisitions because of factors such as breathing and movement, so that the CT images of different phases do not correspond in the vertical direction. Referring to fig. 1A to fig. 1C, exemplary CT images of different phases obtained in one embodiment are shown: fig. 1A shows the first CT image taken in the flat scan phase, fig. 1B shows the first CT image taken in the venous phase, and fig. 1C shows the first CT image taken in the arterial phase. It can be seen that the positions and shapes of the organs and bones in the flat scan CT image shown in fig. 1A are substantially identical to those in the CT image shown in fig. 1B, so the CT image shown in fig. 1A is aligned with the CT image shown in fig. 1B; by contrast, the positions and shapes of the organs and bones in the flat scan CT image shown in fig. 1A differ greatly from those in the arterial CT image shown in fig. 1C, so the CT image shown in fig. 1A is not aligned with the CT image shown in fig. 1C, and the flat scan CT image and the arterial CT image then need to be aligned. In the prior art, alignment of the CT images of different phases is usually done manually, which is inefficient.
In response to this problem, the present invention provides a medical image alignment method for aligning a target image with a reference image. The medical image alignment method includes: acquiring a characteristic value of a reference object in the target image, and acquiring a characteristic sequence of the target image according to the characteristic value of the reference object in the target image; acquiring a characteristic value of a reference object in the reference image, and acquiring a characteristic sequence of the reference image according to the characteristic value of the reference object in the reference image; acquiring a translation distance according to the characteristic sequence of the target image and the characteristic sequence of the reference image; and translating the target image according to the translation distance. The medical image alignment method can automatically acquire the characteristic sequences of the target image and the reference image so as to acquire the translation distance of the target image, and translate the target image according to the translation distance so as to realize the alignment of the target image and the reference image. The medical image alignment method disclosed by the invention basically does not need manual participation, and improves the efficiency of image alignment processing.
Referring to fig. 2, in an embodiment of the invention, the medical image alignment method includes:
s21, obtaining a characteristic value of a reference object in the target image, and obtaining a characteristic sequence of the target image according to the characteristic value of the reference object in the target image; wherein the reference object is a bone or organ in the target image and the reference image; the target image comprises a CT sequence of at least 2 CT images, such as: fig. 3A shows a CT sequence included in the target image according to the present embodiment, where the CT sequence is composed of a plurality of CT images.
S22, obtaining a characteristic value of a reference object in the reference image, and obtaining a characteristic sequence of the reference image according to the characteristic value of the reference object in the reference image; the reference image includes a CT sequence of at least 2 CT images, for example, fig. 3B shows a CT sequence included in the reference image in this embodiment, where the CT sequence includes a plurality of CT images.
S23, obtaining a translation distance according to the characteristic sequence of the target image and the characteristic sequence of the reference image;
s24, translating the target image according to the translation distance so as to align the target image with the reference image.
It should be noted that fig. 3A and fig. 3B only illustrate the target image and the reference image by way of example. In practical application, multi-phase CT images are obtained with a CT scanner, one phase is selected as the reference image, and a phase other than the reference image is selected as the target image; the target image can then be aligned with the reference image by the medical image alignment method of this embodiment. The medical image alignment method of this embodiment requires essentially no manual participation, which helps improve the efficiency of aligning medical images.
In an embodiment of the present invention, the translation distance N is a positive integer, zero or a negative integer. Wherein:

When N is a positive integer, the 1st CT image of the target image is aligned with the (N+1)-th CT image of the reference image. If the target image was acquired from bottom to top, the 1st CT image of the target image is the lowest image, and all CT images in the target image need to be shifted up by N so that the 1st CT image of the target image is aligned with the (N+1)-th CT image of the reference image, the 2nd CT image of the target image is aligned with the (N+2)-th CT image of the reference image, and so on. If the target image was acquired from top to bottom, the 1st CT image of the target image is the uppermost image, and all CT images in the target image need to be shifted down by N so that the 1st CT image of the target image is aligned with the (N+1)-th CT image of the reference image, the 2nd CT image of the target image is aligned with the (N+2)-th CT image of the reference image, and so on. For example, if the 1st CT image in fig. 3A is to be aligned with the 3rd CT image in fig. 3B, the CT sequence shown in fig. 3A is shifted up by 2 images as a whole, namely: the 1st CT image of fig. 3A is aligned with the 3rd CT image of fig. 3B, the 2nd CT image of fig. 3A is aligned with the 4th CT image of fig. 3B, and so on.

When N is zero, the target image and the reference image are already aligned, and no translation of the target image is required.

When N is a negative integer, the (-N+1)-th CT image of the target image is aligned with the 1st CT image of the reference image. If the target image was acquired from bottom to top, the 1st CT image of the target image is the lowest image, and all CT images in the target image need to be shifted down by -N so that the (-N+1)-th CT image of the target image is aligned with the 1st CT image of the reference image, the (-N+2)-th CT image of the target image is aligned with the 2nd CT image of the reference image, and so on. If the target image was acquired from top to bottom, the 1st CT image of the target image is the uppermost image, and all CT images in the target image need to be shifted up by -N so that the (-N+1)-th CT image of the target image is aligned with the 1st CT image of the reference image, the (-N+2)-th CT image of the target image is aligned with the 2nd CT image of the reference image, and so on.
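By way of illustration only, the index mapping described above can be sketched in Python as follows; the function name and the 0-based indexing are illustrative assumptions rather than the patent's implementation:

```python
# Minimal sketch: map target slice indices to reference slice indices after
# translating the target sequence by an integer distance `shift`.
# Indices are 0-based here, whereas the description numbers CT images from 1.
def paired_indices(n_target, n_reference, shift):
    """Yield (target_index, reference_index) pairs after shifting the target by `shift`."""
    for i in range(n_target):
        j = i + shift               # target slice i corresponds to reference slice i + N
        if 0 <= j < n_reference:    # pairs that fall outside the reference sequence are dropped
            yield i, j

# Example matching fig. 3A/3B: a shift of 2 pairs target slice 0 with reference slice 2.
print(list(paired_indices(4, 6, 2)))    # [(0, 2), (1, 3), (2, 4), (3, 5)]
```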
Referring to fig. 4A, in an embodiment of the invention, the target image includes at least 2 CT images; each CT image in the target image corresponds to one or more feature values of the reference object, and the feature values of the reference object in the target image comprise the feature values of the reference object corresponding to each CT image in the target image. Further, in this embodiment the CT value of the reference object differs significantly from the CT values of the surrounding adjacent organs or bones, so the reference object can be distinguished from them by its CT value. In this embodiment, the method for obtaining the feature values of the reference object corresponding to any CT image in the target image includes:

S211a, acquiring the CT value range of the reference object. The numerical unit of a CT image is the Hounsfield unit (HU), which essentially reflects the density of human organs or tissues. Different reference objects have different CT value ranges; for bone, for example, the CT value range is greater than 1000 HU.
S212a, acquiring pixel points of the reference object according to the CT value range, and further generating a mask of the reference object; ideally, the mask of the reference object can fully cover the reference object and consist of all pixels of the reference object.
S213a, acquiring the characteristic value of the reference object according to the mask of the reference object.
The steps S211a to S213a can obtain the feature value of the reference object corresponding to any CT image in the target image; further, the characteristic values of the reference objects corresponding to all CT images in the target image are combined together, so that the characteristic values of the reference objects in the target image can be obtained.
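A minimal Python/NumPy sketch of steps S211a to S213a is given below for illustration; the 1000 HU default threshold comes from the bone example above, and the particular feature values returned (area and centre point) are chosen as simple examples rather than prescribed by the patent:

```python
import numpy as np

def reference_mask(ct_slice, hu_min=1000.0, hu_max=float("inf")):
    """S212a sketch: keep only pixels whose CT value lies inside the reference object's HU range."""
    return (ct_slice >= hu_min) & (ct_slice <= hu_max)

def mask_feature_values(mask):
    """S213a sketch: simple feature values derived from a boolean mask (area and centre point)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return {"area": 0, "center": (float("nan"), float("nan"))}
    return {"area": int(xs.size), "center": (float(ys.mean()), float(xs.mean()))}
```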
Referring to fig. 4B, in an embodiment of the present invention, for any CT image, the method for obtaining the pixel points of the reference object according to the CT value range and then generating the mask of the reference object includes:

S41, deleting all pixel points in the CT image whose CT values lie outside the CT value range. Since the CT value of the reference object in this embodiment differs greatly from the CT values of the surrounding adjacent organs or bones, the reference object can be distinguished from them by CT value, and step S41 removes the organs and/or bones adjacent to the reference object, leaving one or more positionally discrete organs or bones. For example, when the spine is selected as the reference object, the CT image after step S41 contains only spatially discrete bones such as the spine and the ribs.

S42, generating a selection frame according to the position and shape of the reference object; the selection frame contains a portion of the pixel points of the reference object. The selection frame may be specified manually on the basis of a priori knowledge, or selected by artificial intelligence techniques; this is not limited here. Since the reference object is a closed and continuous shape, at least one frame of the selection frame contains pixel points of the reference object. Preferably, the selection frame is rectangular.

S43, adjusting the frames of the selection frame so that none of the frames of the selection frame contains pixel points of the reference object; the selection frame thus obtained is the outer frame of the reference object. Specifically, all frames of the selection frame are traversed, and every frame containing pixel points of the reference object is selected as a frame to be adjusted; each frame to be adjusted is then moved in its designated direction until it no longer contains pixel points of the reference object.

For example, for a rectangular selection frame, if the lower frame of the selection frame contains pixel points of the reference object, an adjustment step is performed and it is then judged whether the adjusted lower frame still contains pixel points of the reference object. If it does not, the adjustment of the lower frame is finished; otherwise the adjustment step is repeated until the adjusted lower frame no longer contains pixel points of the reference object. The adjustment step consists of translating the lower frame downwards by a certain distance and correspondingly extending the left and right frames.
S44, the set formed by all the pixel points in the outer frame is the mask of the reference object.
In this embodiment, the mask of the reference object is obtained from the position and shape of the reference object, and the feature values of the reference object are extracted on the basis of this mask. Because the reference object is an organ or a bone, its shape is irregular and directly acquiring its feature values is difficult; setting the selection frame to a regular shape such as a rectangle simplifies the acquisition of the mask, and on this basis the extraction of the feature values of the reference object is relatively simple.
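For illustration, the frame-adjustment procedure of steps S41 to S44 might be sketched as follows, assuming the reference-object pixels of one slice are given as a boolean array and an initial rectangular selection frame (top, bottom, left, right) is provided; the details are illustrative, not the patent's exact implementation:

```python
import numpy as np

def outer_frame_mask(obj_mask, top, bottom, left, right):
    """Grow a rectangular selection frame (S42/S43) until none of its four frames
    contains a reference-object pixel, then return the object pixels inside the
    resulting outer frame as the mask (S44)."""
    h, w = obj_mask.shape
    changed = True
    while changed:
        changed = False
        if top > 0 and obj_mask[top, left:right + 1].any():            # upper frame touches the object
            top -= 1
            changed = True
        if bottom < h - 1 and obj_mask[bottom, left:right + 1].any():  # lower frame touches the object
            bottom += 1
            changed = True
        if left > 0 and obj_mask[top:bottom + 1, left].any():          # left frame touches the object
            left -= 1
            changed = True
        if right < w - 1 and obj_mask[top:bottom + 1, right].any():    # right frame touches the object
            right += 1
            changed = True
    mask = np.zeros_like(obj_mask)
    mask[top:bottom + 1, left:right + 1] = obj_mask[top:bottom + 1, left:right + 1]
    return mask, (top, bottom, left, right)
```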
In an embodiment of the present invention, a CT image in the medical image is shown in fig. 4C, in which the spine 51 is selected as the reference object. In this embodiment, step S41 deletes all pixels with CT values less than 1000 HU, and the CT image after deletion contains only the spine 51 and the ribs 52. Referring to fig. 4D, in step S42 of this embodiment a rectangular selection frame 4 is generated according to a priori knowledge; the selection frame 4 has an upper frame 411, a left frame 412, a right frame 413 and a lower frame 414. In step S43, the frames of the selection frame are adjusted until none of them contains pixels of the spine 51. For example, referring to fig. 4E, the right frame 413 is first translated to the right until it no longer contains pixels of the spine 51, giving the adjusted right frame 413'; the upper frame 411 and the lower frame 414 are correspondingly extended so that the selection frame remains rectangular. Then the lower frame 414 is translated downwards until it no longer contains pixels of the spine 51, giving the adjusted lower frame 414'; the left frame 412 and the adjusted right frame 413' are correspondingly extended so that the selection frame remains rectangular. Next, referring to fig. 4F, the left frame 412 is translated to the left until it no longer contains pixels of the spine 51, giving the adjusted left frame 412'; finally the upper frame 411 is translated upwards until it no longer contains pixels of the spine 51, giving the adjusted upper frame 411', and the adjusted left frame 412' and the adjusted right frame 413' are correspondingly extended so that the adjusted selection frame remains rectangular. The adjusted selection frame is the outer frame 4' of the spine. At this point, all pixels of the spine 51 are contained in the outer frame 4', and within the outer frame 4' the set of all pixels of the spine 51 is the mask of the spine 51.
In an embodiment of the present invention, for any CT image, the method for obtaining the pixel points of the reference object according to the CT value range and then generating the mask of the reference object includes: deleting all pixel points in the CT image whose CT values lie outside the CT value range, which removes the organs and/or bones adjacent to the reference object and leaves one or more positionally discrete organs or bones; and then selecting, among these positionally discrete organs or bones, the closed region with the largest area as the mask of the reference object.
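A sketch of this alternative, illustrative only and assuming scipy is available, selects the largest connected region remaining after thresholding:

```python
import numpy as np
from scipy import ndimage

def largest_component_mask(ct_slice, hu_min=1000.0):
    """Threshold the slice, then keep the largest connected region as the reference mask."""
    binary = ct_slice >= hu_min
    labels, n = ndimage.label(binary)                 # label the positionally discrete regions
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)      # region with the largest area (pixel count)
```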
In some embodiments of the present invention, a method for obtaining the feature value of the reference object in the reference image is similar to a method for obtaining the feature value of the reference object in the target image, which is not described herein.
In an embodiment of the invention, the feature values of the reference object include the geometric features of the convex hull corresponding to the reference object, the center point of the reference object, and/or the area of the reference object. The convex hull corresponding to the reference object can be computed from the mask, and its geometric features may be the area, side lengths, number of sides and the like of the convex hull; the center point of the reference object may be replaced by the center point of the outer frame of the mask; in addition, in this embodiment, the number of pixels contained in the mask may be counted as the area of the reference object.
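One possible way to compute such feature values from a slice mask is sketched below for illustration; scipy's ConvexHull is assumed, and the particular feature ordering is arbitrary (the mask must contain at least three non-collinear pixels for the hull to exist):

```python
import numpy as np
from scipy.spatial import ConvexHull

def reference_feature_vector(mask):
    """Feature vector of one slice: convex-hull geometry, centre point and area of the mask."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    hull = ConvexHull(pts)                      # requires at least 3 non-collinear mask pixels
    hull_area = hull.volume                     # for 2-D input, `volume` is the enclosed area
    hull_perimeter = hull.area                  # for 2-D input, `area` is the perimeter
    center_y, center_x = float(ys.mean()), float(xs.mean())   # centre point of the mask
    area = float(xs.size)                       # pixel count used as the reference-object area
    return np.array([hull_area, hull_perimeter, center_x, center_y, area])
```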
Referring to fig. 5, in an embodiment of the invention, the method for obtaining the feature sequence of the target image according to the feature values of the reference object in the target image includes:

S211b, obtaining the feature vectors corresponding to the target image according to the feature values of the reference object in the target image. Each CT image in the target image corresponds to one feature vector, and the feature vector corresponding to a given CT image consists of the feature values of the reference object corresponding to that CT image; the feature vectors corresponding to the target image comprise the feature vectors corresponding to all CT images in the target image. For example, for the l-th CT image in the target image, the corresponding feature vector is $f_{l} = [f_{l,1}, f_{l,2}, \ldots, f_{l,m}]$, where m is the number of feature values of the reference object corresponding to any CT image and is a positive integer, $f_{l,k}$ is the k-th feature value corresponding to the l-th CT image, k is a positive integer with k ≤ m, and l is a positive integer with l ≤ n.

S212b, obtaining the feature sequence of the target image according to the feature vectors corresponding to the target image. The feature sequence of the target image consists of the feature vectors corresponding to all CT images in the target image; preferably, the feature sequence of the target image is the n × m matrix $F^{t} = [f_{1}; f_{2}; \ldots; f_{n}]$ whose l-th row is $f_{l}$, where n is the number of CT images contained in the target image and is a positive integer.

The feature sequence $F^{r}$ of the reference image is calculated in a manner similar to the feature sequence $F^{t}$ of the target image, and is not described in detail here.
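A minimal sketch of assembling the n × m feature sequence from per-slice masks follows; feature_fn stands for any per-slice feature extractor, e.g. the hypothetical reference_feature_vector sketched above:

```python
import numpy as np

def feature_sequence(slice_masks, feature_fn):
    """Stack one feature vector per CT image into the n x m feature sequence F."""
    return np.vstack([feature_fn(mask) for mask in slice_masks])

# e.g. F_t = feature_sequence(target_masks, reference_feature_vector)
```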
Referring to fig. 6, in an embodiment of the invention, the method for obtaining the translation distance according to the feature sequence of the target image and the feature sequence of the reference image includes:

S231, acquiring the value range of the translation distance; the value range comprises at least two integer values, and an integer value may be a positive integer, a negative integer or zero; the value range is, for example, [-3, 3] or [-4, 4].
S232, sequentially calculating sequence variances corresponding to all integer values in the value range according to the characteristic sequence of the target image and the characteristic sequence of the reference image, namely: traversing all integer values in the value range, and calculating the respective corresponding sequence variance for each integer value;
s233, selecting a corresponding integer value as the translation distance according to the sequence variance. Preferably, the integer value with the smallest corresponding sequence variance is selected as the translation distance.
In this embodiment, by calculating the sequence variance corresponding to all the integer values and selecting the integer value with the smallest sequence variance as the translation distance, the minimum difference between the translated target image and the reference image can be ensured, thereby achieving a good alignment effect between the reference image and the target image.
In an embodiment of the present invention, according to the feature sequence $F^{t}$ of the target image and the feature sequence $F^{r}$ of the reference image, the sequence variance $S(x)$ corresponding to any integer value x in the value range is calculated by the formula:

$$S(x) = \sum_{j=1}^{m} \frac{1}{n} \sum_{i=1}^{n} \left( d_{i,j}(x) - \bar{d}_{j}(x) \right)^{2}$$

wherein n is the number of CT images contained in the target image, and m is the number of feature values of the reference object corresponding to each CT image in the target image.

The difference $d_{i,j}(x) = F^{t}_{i,j} - F^{r}_{i+x,j}$ represents the difference between the j-th feature value of the i-th image in the target image and the corresponding feature value of its corresponding image in the reference image after the target image has been translated by x, where $F^{t}_{i,j}$ denotes the feature value in the i-th row and j-th column of $F^{t}$, and $F^{r}_{i,j}$ denotes the feature value in the i-th row and j-th column of $F^{r}$. When the feature sequence of the target image is translated in the positive direction, i.e. x ≥ 0, the image corresponding to the i-th image of the target image is the (i+x)-th image in the reference image; when the feature sequence of the target image is translated in the negative direction, i.e. x < 0, the i-th image of the reference image corresponds to the (i-x)-th image of the target image.

For the difference $d_{i,j}(x)$, its expected value over the whole feature sequence, $\bar{d}_{j}(x)$, represents the expectation of the difference between the j-th feature value of the target image translated by x as a whole and the corresponding feature value of the corresponding image in the reference image. The expected value is calculated as $\bar{d}_{j}(x) = \frac{1}{n} \sum_{i=1}^{n} d_{i,j}(x)$.
The sequence variance above sums, over the m features, the variance of each feature-difference sequence. In this way multiple features are taken into account in the calculation, so that the overall result after translation is optimal and errors caused by inaccurate extraction of any single feature are avoided.
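For illustration, steps S231 to S233 together with the formula above might be sketched as follows; rows whose shifted index falls outside the reference sequence are simply dropped, an assumption the description does not spell out:

```python
import numpy as np

def sequence_variance(F_t, F_r, x):
    """S(x): sum over the m features of the variance of the difference sequence d_ij(x)."""
    rows_t = np.arange(F_t.shape[0])
    rows_r = rows_t + x
    valid = (rows_r >= 0) & (rows_r < F_r.shape[0])   # keep only overlapping slice pairs
    d = F_t[rows_t[valid]] - F_r[rows_r[valid]]       # d_ij(x), one row per overlapping pair
    return float(np.sum(d.var(axis=0)))               # per-feature variances, summed over features

def best_translation(F_t, F_r, max_shift=3):
    """S231-S233 sketch: the integer shift in [-max_shift, max_shift] with the smallest variance."""
    return min(range(-max_shift, max_shift + 1), key=lambda x: sequence_variance(F_t, F_r, x))
```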
In one embodiment of the invention, the reference object is the spine. For abdominal CT images, the position and shape of the spine are relatively fixed, so selecting the spine as the reference object can improve the alignment effect for medical images.
In an embodiment of the present invention, the target image includes: flat scan phase CT images, venous phase CT images and/or arterial phase CT images.
In one embodiment of the invention, a spine is selected as the reference object. Referring to fig. 7A, the medical image alignment method includes:
s71, extracting an image spinal mask;
s72, extracting the characteristic value of the spine in each CT image according to the spine mask;
s73, respectively obtaining characteristic sequences of a flat scanning phase CT image, a venous phase CT image and an arterial phase CT image;
and S74, respectively translating the venous CT image and the arterial CT image based on the plain CT image to align the three-phase CT images.
In this embodiment, step S71 extracts the spinal mask by thresholding. The numerical unit of a CT image is the Hounsfield unit, which reflects the density of human organs or tissues. Typically, the density of human bone corresponds to CT values greater than 1000 HU, so when the spinal mask is extracted by thresholding, the bones can be obtained by filtering out tissue below 1000 HU. To avoid non-spinal bones such as the ribs affecting the extraction result, step S71 further applies the mask extraction method of steps S41 to S44 to obtain the mask of the spine.
Referring to fig. 7B, in the embodiment, the implementation method for extracting the feature value of the spine in each CT image in step S72 includes:
s721, calculating the center point of the spine in each CT image. Because the outer frame of the spinal mask in this embodiment is rectangular, the center point of the mask can be approximately considered to be the center point of the spinal column according to the symmetry of the outer frame;
s722, calculating the area of the spine. Specifically, the number of pixels included in the spinal mask may be counted as the spinal area.
S723, obtaining the geometrical characteristics of the spine. Specifically, a convex hull can be obtained by calculating the extracted spinal mask, and the area, parameters and the like of the convex hull are sequentially calculated to serve as the geometrical characteristics of the spinal column.
And S724, combining the characteristic vectors according to the central point, the area and the geometric characteristics of the spine.
The feature values of the spine in any CT image can be obtained through steps S721 to S724. Obtaining the feature values of all flat scan phase CT images gives the feature values of the flat scan phase CT images and hence their feature sequence; likewise, obtaining the feature values of all venous phase CT images gives the feature sequence of the venous phase CT images, and obtaining the feature values of all arterial phase CT images gives the feature sequence of the arterial phase CT images.
Referring to fig. 7C, in the present embodiment, the implementation method of step S74 includes:
s741, sequentially selecting all integer values in the range of [ -M, M ], and sequentially translating the venous phase CT image along the spinal column direction according to each selected integer value; wherein M is a positive integer, preferably 3;
s742, sequentially calculating and obtaining sequence variances corresponding to all selected integer values based on the characteristic sequences of the venous phase CT images and the characteristic sequences of the arterial phase CT images;
s743, selecting the integer value with the smallest corresponding sequence variance as a translation distance, and translating the venous CT image, so that the venous CT image and the plain CT image are aligned.
The manner of translating the arterial phase CT image to align the arterial phase CT image with the plain scan phase CT image is similar to S741-S743, and will not be described here again.
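Purely as a usage illustration, reusing the hypothetical best_translation sketch given after the variance formula, with synthetic feature sequences standing in for real ones:

```python
import numpy as np

# Synthetic n x m feature sequences standing in for the flat scan (reference) and venous
# phase (target) feature sequences; the target is the reference shifted by 2 slices.
rng = np.random.default_rng(0)
F_plain = rng.normal(size=(20, 5))
F_venous = F_plain[2:] + rng.normal(scale=0.01, size=(18, 5))

shift = best_translation(F_venous, F_plain, max_shift=3)   # corresponds to S741-S743 with M = 3
print("translation distance:", shift)                      # expected: 2
```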
Based on the above description of the medical image alignment method, the present invention also provides a computer-readable storage medium having a computer program stored thereon. The computer program, when executed by a processor, implements the medical image alignment method of the present invention.
Based on the above description of the medical image alignment method, the invention also provides electronic equipment. Referring to fig. 8, the electronic device 800 includes: memory 810 storing a computer program; a processor 820, communicatively coupled to the memory, for executing the medical image alignment method of the present invention when the computer program is invoked; a display 830 is communicatively coupled to the processor and the memory for displaying an associated GUI interactive interface for the medical image alignment method.
The protection scope of the medical image alignment method of the present invention is not limited to the execution sequence of the steps listed in the present embodiment, and all the solutions implemented by the steps of increasing or decreasing and step replacing according to the prior art made by the principles of the present invention are included in the protection scope of the present invention.
The medical image alignment method can automatically acquire the characteristic sequences of the target image and the reference image, acquire the translation distance of the target image according to the characteristic sequences, and translate the target image so as to realize the alignment of the target image and the reference image. The whole process basically does not need manual participation, and is convenient to operate and high in efficiency.
The medical image alignment method of the present invention obtains the translation distance with the smallest error based on the three-dimensional shape of the reference object, which allows CT images to be pre-aligned and thus makes it possible to process CT image data with artificial intelligence.
In summary, the present invention effectively overcomes the disadvantages of the prior art and has high industrial utility value.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and changes made by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (6)

1. A medical image alignment method for aligning a target image with a reference image, the medical image alignment method comprising:
acquiring a characteristic value of a reference object in the target image, and acquiring a characteristic sequence of the target image according to the characteristic value of the reference object in the target image;
acquiring a characteristic value of the reference object in the reference image, and acquiring a characteristic sequence of the reference image according to the characteristic value of the reference object in the reference image;
acquiring a translation distance according to the characteristic sequence of the target image and the characteristic sequence of the reference image;
translating the target image according to the translation distance;
the implementation method for obtaining the characteristic value of the reference object in the target image comprises the following steps: acquiring a CT value range of the reference object; acquiring pixel points of the reference object according to the CT value range, and further generating a mask of the reference object; acquiring a characteristic value of the reference object according to the mask of the reference object, wherein the characteristic value of the reference object comprises geometric characteristics of a convex hull corresponding to the reference object, a center point of the reference object and/or an area of the reference object;
the realization method for obtaining the translation distance according to the characteristic sequence of the target image and the characteristic sequence of the reference image comprises the following steps: acquiring the value range of the translation distance; the value range comprises at least two integer values; sequentially calculating sequence variances corresponding to all integer values in the value range according to the characteristic sequence of the target image and the characteristic sequence of the reference image; selecting a corresponding integer value as the translation distance according to the sequence variance;
according to the feature sequence $F^{t}$ of the target image and the feature sequence $F^{r}$ of the reference image, the sequence variance $S(x)$ corresponding to the integer value x is calculated by the formula:

$$S(x) = \sum_{j=1}^{m} \frac{1}{n} \sum_{i=1}^{n} \left( d_{i,j}(x) - \bar{d}_{j}(x) \right)^{2}, \qquad d_{i,j}(x) = F^{t}_{i,j} - F^{r}_{i+x,j}, \qquad \bar{d}_{j}(x) = \frac{1}{n} \sum_{i=1}^{n} d_{i,j}(x)$$

wherein n is the number of CT images contained in the target image, and m is the number of feature values of the reference object corresponding to each CT image in the target image; $F^{t}_{i,j}$ denotes the feature value in the i-th row and j-th column of $F^{t}$; $F^{r}_{i,j}$ denotes the feature value in the i-th row and j-th column of $F^{r}$; and $\bar{d}_{j}(x)$ denotes the expected value of $d_{i,j}(x)$ over the whole feature sequence.
2. The medical image alignment method according to claim 1, wherein the method for obtaining the feature sequence of the target image according to the feature value of the reference object in the target image comprises:
obtaining a feature vector corresponding to the target image according to the feature value of the reference object in the target image;
and obtaining a characteristic sequence of the target image according to the characteristic vector corresponding to the target image.
3. The medical image alignment method of claim 1, wherein: the reference object is the spine.
4. The medical image alignment method of claim 1, wherein the target image comprises: flat scan phase CT images, venous phase CT images and/or arterial phase CT images.
5. A computer-readable storage medium having stored thereon a computer program, characterized by: the computer program, when executed by a processor, implements the medical image alignment method of any of claims 1 to 4.
6. An electronic device, the electronic device comprising:
a memory storing a computer program;
a processor in communication with the memory, the processor executing the medical image alignment method of any one of claims 1 to 4 when the computer program is invoked;
and the display is in communication connection with the processor and the memory and is used for displaying a related GUI interaction interface of the medical image alignment method.
CN202010535461.4A 2020-06-12 2020-06-12 Medical image alignment method, medium and electronic equipment Active CN113808227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010535461.4A CN113808227B (en) 2020-06-12 2020-06-12 Medical image alignment method, medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010535461.4A CN113808227B (en) 2020-06-12 2020-06-12 Medical image alignment method, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113808227A CN113808227A (en) 2021-12-17
CN113808227B true CN113808227B (en) 2023-08-25

Family

ID=78892128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010535461.4A Active CN113808227B (en) 2020-06-12 2020-06-12 Medical image alignment method, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113808227B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107527360A (en) * 2017-08-23 2017-12-29 维沃移动通信有限公司 A kind of image alignment method and mobile terminal
CN108898567A (en) * 2018-09-20 2018-11-27 北京旷视科技有限公司 Image denoising method, apparatus and system
CN109978965A (en) * 2019-03-21 2019-07-05 江南大学 A kind of simulation CT image generating method, device, computer equipment and storage medium
CN110611767A (en) * 2019-09-25 2019-12-24 北京迈格威科技有限公司 Image processing method and device and electronic equipment
CN110909580A (en) * 2018-09-18 2020-03-24 北京市商汤科技开发有限公司 Data processing method and device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5138431B2 (en) * 2008-03-17 2013-02-06 富士フイルム株式会社 Image analysis apparatus and method, and program
US9119573B2 (en) * 2009-12-10 2015-09-01 Siemens Aktiengesellschaft Stent marker detection using a learning based classifier in medical imaging
JP2015036632A (en) * 2013-08-12 2015-02-23 キヤノン株式会社 Distance measuring device, imaging apparatus, and distance measuring method
US20180108136A1 (en) * 2016-10-18 2018-04-19 Ortery Technologies, Inc. Method of length measurement for 2d photography
KR102123660B1 (en) * 2018-10-12 2020-06-16 라온피플 주식회사 Apparatus and method for generating teeth correction image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107527360A (en) * 2017-08-23 2017-12-29 维沃移动通信有限公司 A kind of image alignment method and mobile terminal
CN110909580A (en) * 2018-09-18 2020-03-24 北京市商汤科技开发有限公司 Data processing method and device, electronic equipment and storage medium
CN108898567A (en) * 2018-09-20 2018-11-27 北京旷视科技有限公司 Image denoising method, apparatus and system
CN109978965A (en) * 2019-03-21 2019-07-05 江南大学 A kind of simulation CT image generating method, device, computer equipment and storage medium
CN110611767A (en) * 2019-09-25 2019-12-24 北京迈格威科技有限公司 Image processing method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a home digital photo classification method similar to video scene segmentation (一种类似视频场景分割的家庭数字照片分类方法研究); Gu Jingjun et al.; Computer Engineering; Vol. 29, No. 01; entire document *

Also Published As

Publication number Publication date
CN113808227A (en) 2021-12-17

Similar Documents

Publication Publication Date Title
US10709394B2 (en) Method and system for 3D reconstruction of X-ray CT volume and segmentation mask from a few X-ray radiographs
US20190261945A1 (en) Three-Dimensional Segmentation from Two-Dimensional Intracardiac Echocardiography Imaging
CN102737395B (en) Image processing method and device in a kind of medical X-ray system
US9613440B2 (en) Digital breast Tomosynthesis reconstruction using adaptive voxel grid
CN109754448B (en) CT cardiac scanning artifact correction method and system
CN108682025B (en) Image registration method and device
CN107154038B (en) Rib fracture auxiliary diagnosis method based on rib visualization
US9659390B2 (en) Tomosynthesis reconstruction with rib suppression
Banerjee et al. A completely automated pipeline for 3D reconstruction of human heart from 2D cine magnetic resonance slices
CN112614169B (en) 2D/3D spine CT (computed tomography) level registration method based on deep learning network
Zhang et al. Hierarchical patch-based sparse representation—A new approach for resolution enhancement of 4D-CT lung data
CN101040297B (en) Image segmentation using isoperimetric trees
JP2008511395A (en) Method and system for motion correction in a sequence of images
WO2019097085A1 (en) Isotropic 3d image reconstruction using 3d patches-based self-similarity learning
KR102472464B1 (en) Image Processing Method and Image Processing Device using the same
CN113808227B (en) Medical image alignment method, medium and electronic equipment
US9147250B2 (en) System and method for automatic magnetic resonance volume composition and normalization
JP2013198603A (en) Image processing apparatus, method, and program
CN107292351A (en) The matching process and device of a kind of tubercle
CN116894783A (en) Metal artifact removal method for countermeasure generation network model based on time-varying constraint
CN105787922B (en) A kind of method and apparatus for realizing automatic MPR batch processing
CN110473241A (en) Method for registering images, storage medium and computer equipment
KR102505908B1 (en) Medical Image Fusion System
CN109903262A (en) A kind of method of image co-registration, system and relevant apparatus
JP6253992B2 (en) Organ position estimation apparatus, organ position estimation apparatus control method, and organ position estimation apparatus control program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Gu Jingjun

Inventor after: Zhou Gonggan

Inventor before: Ding Yuan

Inventor before: Ding Yuhui

Inventor before: Sun Zhongquan

Inventor before: Gu Jingjun

Inventor before: Zhou Gonggan

GR01 Patent grant
GR01 Patent grant