CN115530854A - Imaging method and system

Publication number: CN115530854A
Application number: CN202211152194.8A
Authority: CN (China)
Prior art keywords: imaged, image, distance, ray, determining
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 孙彪
Assignee (original and current): Shanghai United Imaging Healthcare Co Ltd
Application filed by Shanghai United Imaging Healthcare Co Ltd
Related US application: US 18/460,501 (published as US20240087168A1)

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/48 Diagnostic techniques
    • A61B6/44 Constructional features of apparatus for radiation diagnosis
    • A61B6/4411 Constructional features of apparatus for radiation diagnosis, the apparatus being modular
    • A61B6/4417 Constructional features of apparatus for radiation diagnosis related to combined acquisition of different diagnostic modalities

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Engineering & Computer Science (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Physics & Mathematics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The embodiments of this specification provide an imaging method and system. The method includes: acquiring a first image of an object to be imaged through a photographing device; acquiring a second image of the object to be imaged through an X-ray imaging assembly, wherein a ray source of the X-ray imaging assembly emits rays for computed tomography imaging; determining a thickness of the object to be imaged based on the first image; determining a first distance from a reference point of the object to be imaged to the ray source based on the thickness; and determining a correction factor based on the first distance, wherein the correction factor reflects the size corresponding to each pixel point on the second image.

Description

Imaging method and system
Technical Field
The present specification relates to the field of medical imaging technology, and in particular, to an imaging method and system.
Background
In recent years, medical imaging techniques have been widely used in clinical examination and medical diagnosis. For example, with the development of computed tomography imaging techniques (e.g., X-ray imaging techniques), C-arm X-ray imaging systems have become increasingly important in applications such as digital subtraction angiography, breast tomosynthesis, chest examination, and the like.
Because the magnification of an X-ray projection depends on where the object to be imaged sits between the ray source and the detector, the size of a target site measured on the image can deviate from its true size. Therefore, there is a need to provide an imaging method and system for improving the accuracy of measuring the size of an object to be imaged based on medical imaging techniques.
Disclosure of Invention
One of the embodiments of the present specification provides an imaging method, including: acquiring a first image of an object to be imaged through a photographing device; acquiring a second image of the object to be imaged through an X-ray imaging assembly; determining a thickness of the object to be imaged based on the first image; determining a first distance from a reference point of the object to be imaged to a ray source of the X-ray imaging assembly based on the thickness; and determining a correction factor based on the first distance, wherein the correction factor reflects the size corresponding to each pixel point on the second image.
In some embodiments, the determining a correction factor based on the first distance includes: determining the correction factor based on the formula L1 = H1 * L / H, where L1 is the correction factor, H1 is the first distance, L is the size corresponding to each pixel point on a ray detection mechanism, and H is the distance between the ray source and the ray detection mechanism along the ray emission direction, the ray detection mechanism being configured to receive the rays emitted by the ray source.
In some embodiments, the correction factor may be determined more quickly and accurately by the above formula.
In some embodiments, the determining a first distance from a reference point of the object to be imaged to the ray source based on the thickness includes: acquiring a second distance from a patient bed to the ray source; determining a third distance from the reference point of the object to be imaged to the patient bed based on the thickness; and determining the first distance based on the second distance and the third distance.
In some embodiments, acquiring the second distance from the patient bed to the ray source and determining the third distance from the reference point of the object to be imaged to the patient bed based on the thickness makes the determined first distance more accurate.
In some embodiments, the determining a thickness of the object to be imaged based on the first image includes: determining the thickness of the object to be imaged along a ray emission direction based on the first image and the ray emission direction of the ray source of the X-ray imaging assembly relative to the object to be imaged when the second image is acquired.
In some embodiments, the first image may be a depth image that includes depth information of the object to be imaged, so that the thickness of the object to be imaged along the ray emission direction can be determined more accurately.
In some embodiments, the determining the thickness of the object to be imaged along the ray emission direction based on the first image and the ray emission direction of the ray source of the X-ray imaging assembly relative to the object to be imaged when acquiring the second image includes: establishing a three-dimensional model of the object to be imaged based on the first image; and determining the thickness of the object to be imaged along the ray emission direction based on the ray emission direction and the three-dimensional model.
In some embodiments, a three-dimensional model of the object to be imaged is established through the image, and the thickness of the object to be imaged along the ray emission direction is determined based on the ray emission direction and the three-dimensional model, so that the determined thickness of the object to be imaged along the ray emission direction can be more accurate, and the subsequently determined correction factor can be more accurate.
In some embodiments, the method further comprises: determining the size of a target site of the object to be imaged based on the correction factor and the number of pixels of the target site on the second image.
In some embodiments, the photographing apparatus includes at least two cameras, and at least two of the cameras are disposed above the X-ray imaging assembly.
In some embodiments, at least two cameras are provided so that, when one camera is damaged, another camera can still be used to acquire the image of the object to be imaged and the correction can still be performed.
In some embodiments, the photographic device comprises a 3D camera.
In some embodiments, the method is applied to a digital subtraction angiography X-ray machine.
One of the embodiments of the present specification provides an imaging system, including: a photographing device for acquiring a first image of an object to be imaged; an X-ray imaging assembly for acquiring a second image of the object to be imaged; a thickness determination module for determining the thickness of the object to be imaged along a ray emission direction based on the first image and the ray emission direction of the ray source relative to the object to be imaged when the second image is acquired; a distance determination module for determining a first distance from a reference point of the object to be imaged to the ray source based on the thickness; and a size correction module for determining a correction factor based on the first distance, wherein the correction factor is used to reflect the size corresponding to each pixel point on the second image.
According to the imaging method, when the second image is acquired, the thickness of the object to be imaged along the ray emission direction is determined based on the first image and the ray emission direction of the ray source of the X-ray imaging assembly relative to the object to be imaged. The first distance from the reference point of the object to be imaged to the ray source is then determined based on the thickness, and the correction factor is determined based on the first distance, where the correction factor reflects the size corresponding to each pixel point on the second image. For each object to be imaged, the imaging method can determine a corresponding correction factor according to the specific position and thickness of that object, so the correction is more accurate; the correction can be completed without an additional reference object, which simplifies the operation flow, improves efficiency, and makes the size of the target site determined from the correction factor closer to the true size. Moreover, the correction factor can be determined even when no reference object of known size is available, which is more convenient for the user, and the object to be imaged does not need to be placed at the isocenter, which makes the operation simpler.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals refer to like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of an imaging system according to some embodiments of the present description;
FIG. 2 is an exemplary flow chart of an imaging method according to some embodiments herein;
FIG. 3 is a schematic illustration of a thickness of an object to be imaged along a direction of radiation emission, in accordance with some embodiments of the present description;
FIG. 4 is a schematic illustration of a thickness of an object to be imaged along a direction of radiation emission, in accordance with further embodiments of the present description;
fig. 5 is an exemplary flow diagram illustrating the determination of a first distance based on a second distance and a third distance in accordance with some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the operations are not necessarily performed exactly in the order shown. Rather, the various steps may be processed in reverse order or simultaneously. Other operations may also be added to, or removed from, these processes.
FIG. 1 is a schematic illustration of an application scenario 100 of an imaging system according to some embodiments herein.
As shown in fig. 1, in some embodiments, the application scenario 100 may include a processing device 110, a network 120, a user terminal 130, a storage device 140, an imaging device 150, and a photographing device 160. By implementing the methods and/or processes disclosed in this specification, the application scenario 100 can quickly and accurately correct size measurements of the object to be imaged.
The processing device 110 may be used to process data and/or information from at least one component of the application scenario 100 or an external data source (e.g., a cloud data center). The processing device 110 may access data or information from the user terminal 130, the storage device 140, the imaging device 150, and/or the photographing device 160 via the network 120, or may connect directly with them to access information and/or data. For example, the processing device 110 may acquire a first image of the object to be imaged from the photographing device 160. The processing device 110 may process the acquired data and/or information. For example, the processing device 110 may determine the thickness of the object to be imaged along the ray emission direction based on the first image and the ray emission direction of the ray source relative to the object to be imaged when the second image is acquired; determine a first distance from a reference point of the object to be imaged to the ray source based on the thickness; and determine a correction factor based on the first distance. In some embodiments, the processing device 110 may be a single server or a group of servers. The processing device 110 may be local or remote. For more description of the processing device 110, reference may be made to fig. 2 and its associated description, which is not repeated here.
In some embodiments, the processing device 110 may include a thickness determination module, a distance determination module, and a size correction module. The thickness determination module may be configured to determine the thickness of the object to be imaged along the ray emission direction based on the first image and the ray emission direction of the ray source relative to the object to be imaged when the second image is acquired; the distance determination module may be configured to determine a first distance from a reference point of the object to be imaged to the ray source based on the thickness; and the size correction module may be configured to determine a correction factor based on the first distance, where the correction factor is used to reflect the size corresponding to each pixel point on the second image.
The network 120 may include any suitable network that provides for the exchange of information and/or data capable of facilitating the application scenario 100. In some embodiments, information and/or data may be exchanged between one or more components of the application scenario 100 (e.g., the processing device 110, the user terminal 130, the storage device 140, the imaging device 150, and/or the camera device 160) via the network 120.
In some embodiments, the network 120 may be any one or more of a wired network or a wireless network. In some embodiments, network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, e.g., base stations and/or network switching points, through which one or more components of the application scenario 100 may connect to the network 120 to exchange data and/or information.
User terminal 130 refers to one or more terminals or software used by a user. In some embodiments, user terminal 130 refers to a terminal or software used by a healthcare worker (e.g., a nurse, a doctor, etc.). In some embodiments, the user terminal 130 may include, but is not limited to, a smart phone, a tablet, a laptop, a desktop computer, and the like. In some embodiments, the user terminal 130 may interact with other components in the application scenario 100 through the network 120. For example, user terminal 130 may send one or more control instructions to processing device 110 to control processing device 110 to determine a correction factor based on the first distance.
Storage device 140 may be used to store data, instructions, and/or any other information. In some embodiments, storage device 140 may store data and/or information obtained from the user terminal 130, the imaging device 150, and/or the photographing device 160, among others. For example, the storage device 140 may store an image of an object to be imaged captured by the photographing device 160. As another example, the storage device 140 may store a trained machine learning model. In some embodiments, storage device 140 may include mass storage, removable storage, and the like, or any combination thereof.
The imaging device 150 (which may also be referred to as an X-ray imaging assembly) may be a device for acquiring medical images of an object to be imaged. In some embodiments, the imaging device 150 may scan an object to be imaged, obtain scan data, and generate a medical image. The object to be imaged may include a human body, an animal, or the like. The object to be imaged may be a whole human or animal body; the object to be imaged may also include a target site, which may include an organ, tissue, lesion, tumor, or any combination thereof. Illustratively, the target site may be the head, chest, abdomen, heart, liver, upper limbs, lower limbs, etc., or any combination thereof. In some embodiments, the imaging device 150 may be one device or one group of devices. Specifically, the imaging device 150 may be a medical imaging system, for example, a PET (Positron Emission Tomography) device, a SPECT (Single-Photon Emission Computed Tomography) device, a CT (Computed Tomography) device, an MRI (Magnetic Resonance Imaging) device, or the like. Further, the medical imaging systems may be used alone or in combination, such as a PET-CT device, a PET-MRI device, or a SPECT-MRI device. In some embodiments, the imaging device may comprise a digital subtraction angiography X-ray machine.
In some embodiments, the imaging device 150 may include a patient bed 151, a gantry 152, a movable assembly 153 coupled to the gantry 152, a radiation source 154 secured to the movable assembly 153, and a radiation detection mechanism 155. The gantry 152 may be used to support the movable assembly 153. The movable assembly 153 can drive the radiation source 154 and the radiation detection mechanism 155 to rotate around the rotation center, so as to obtain projection images of the object to be imaged on the radiation detection mechanism 155 at different angles. In some embodiments, the movable assembly 153 may be a C-arm, with the radiation source 154 and the radiation detection mechanism 155 located at opposite ends of the C-arm. The patient bed 151 is used for the object to be imaged to lie on. The radiation source 154 may emit radioactive rays toward the object to be imaged to irradiate the target object. The radiation detection mechanism 155 may be configured to receive the radioactive rays. The radioactive rays may include one or a combination of particulate rays, photon rays, and the like. The particulate radiation may include one or a combination of neutrons, protons, electrons, muons, heavy ions, and the like. The photon rays may include one or a combination of X-rays, gamma rays, alpha rays, beta rays, ultraviolet rays, laser light, and the like. By way of example, the photon radiation may be X-rays, and the corresponding imaging device 150 may be one or more of a CT system, a digital radiography (DR) system, a multi-modality medical imaging system, and the like. Further, in some embodiments, the multi-modality medical imaging system may include one or more of a CT-PET system, a SPECT-MRI system, or the like. By way of example, the radiation source 154 may be an X-ray tube. The X-ray tube may emit X-rays that are received by the radiation detection mechanism 155. It will be appreciated that when the radiation emitted by the radiation source 154 is X-rays, the imaging device 150 may also be referred to as an X-ray imaging assembly.
The radiation detection mechanism 155 may include a plurality of detector pixel units. In some embodiments, the plurality of detector pixel units on the radiation detection mechanism 155 may be arranged in a predetermined manner, for example, in m rows and n columns, where a row may correspond to the row direction of the radiation detection mechanism 155 and a column to its channel direction. In some embodiments, the radiation detection mechanism 155 may be a circular radiation detection mechanism, a square radiation detection mechanism, an arc-shaped radiation detection mechanism, or the like. The rotation angle of the arc-shaped radiation detection mechanism may be between 0 degrees and 360 degrees. In some embodiments, the rotation angle of the arc-shaped radiation detection mechanism may be fixed. In some embodiments, the rotation angle of the arc-shaped radiation detection mechanism can be adjusted as needed, for example, according to one or a combination of the resolution of the image, the size of the image, the sensitivity of the radiation detection mechanism, and the stability of the radiation detection mechanism. In some embodiments, the radiation detection mechanism 155 may be a one-dimensional radiation detection mechanism, a two-dimensional radiation detection mechanism, or a three-dimensional radiation detection mechanism.
In some embodiments, the imaging device 150 may also include an image generator to generate images. In some embodiments, the image generator may perform image pre-processing, image reconstruction, and/or region-of-interest extraction operations to generate a medical image of the object to be imaged. The image generator may be associated with the radiation detection mechanism 155, an operation control computer device, and/or an external data source (not shown). In some embodiments, the image generator may receive data from the radiation detection mechanism 155 or an external data source and generate a medical image of the object to be imaged based on the received data. The external data source may be one or more of a hard disk, a floppy disk, a random access memory (RAM), a dynamic random access memory (DRAM), a static random access memory (SRAM), a bubble memory, a thin-film memory, a magnetic plated-wire memory, a phase-change memory, a flash memory, a cloud disk, and the like.
Further description of the imaging device 150 can be found in fig. 2 and its associated description, and will not be repeated here.
The photographing device 160 (which may also be referred to as a camera device) may be used to capture an image of the object to be imaged. In some embodiments, the photographing device 160 includes at least two cameras. In some embodiments, the at least two cameras are disposed above the imaging device 150. In some embodiments, a camera may be a planar camera, e.g., a black-and-white camera, a color camera, a scanner, or the like, or any combination thereof. In some embodiments, the planar cameras may acquire two-dimensional images of the object to be imaged from different angles, and the processing device 110 may reconstruct a depth image of the object to be imaged from these two-dimensional images. In some embodiments, the photographing device 160 may include a 3D camera, which may directly acquire a depth image of the object to be imaged; for example, a structured light camera (which projects specific light patterns, e.g., criss-crossed laser lines, black-and-white squares, or circles, onto the object to be imaged through a projector), a binocular camera, or a TOF (Time of Flight) camera.
For more description of the photographing apparatus 160, reference may be made to fig. 2 and the related description thereof, which are not described herein.
It should be noted that the application scenario 100 is provided for illustrative purposes only and is not intended to limit the scope of the present description. It will be apparent to those skilled in the art that various modifications and variations can be made in light of the description of the present specification. For example, the application scenario 100 may also include a database. However, such changes and modifications do not depart from the scope of the present specification.
The correction factor is used to reflect the real size corresponding to each pixel point on an imaging image of the imaging device, for example, the real length and real width of the object to be imaged corresponding to each pixel point. The correction factor may be determined in a number of ways. By way of example only, in some embodiments, the correction factor may be determined by a reference object: an image of a reference object of known size (e.g., a catheter, ruler, or steel ball) is acquired by the imaging device 150, and the correction factor is determined based on the true size of the reference object and its size on the image. As another example, the object to be imaged may be placed at the isocenter of the imaging device 150 and the correction factor determined there. The radiation source 154 and the radiation detection mechanism 155 rotate around a common central point, i.e., the isocenter; the central ray of the radiation source 154 always passes within a minimum sphere centered on this point. Both of the above methods have drawbacks. For example, determining the correction factor based on a reference object requires acquiring the size of the reference object in advance, and without a reference object of known size the correction factor cannot be determined. For another example, when the correction factor is determined via the isocenter, if the object to be imaged is not placed at the isocenter of the imaging device 150, the determined correction factor may deviate, resulting in a deviation between the determined size of the object to be imaged and its real size.
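As a concrete illustration of the reference-object approach, the per-pixel size follows directly from the known size of the reference object and its extent in pixels on the image. A minimal sketch, with all numbers illustrative rather than taken from the patent:

```python
# Reference-object calibration (the baseline approach described above).
# All values are illustrative assumptions.
true_diameter_mm = 10.0   # known diameter of a steel-ball reference object
pixels_on_image = 80      # diameter of the ball as measured on the image
ref_correction = true_diameter_mm / pixels_on_image  # 0.125 mm per pixel
```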
The embodiments of this application provide an imaging system and method. By acquiring an image of the object to be imaged, the system can determine the thickness of the object along the ray emission direction of the ray source of the X-ray imaging assembly at the time the second image is acquired, based on the image and the ray emission direction; determine the first distance from a reference point of the object to the ray source based on the thickness; and determine a correction factor based on the first distance. The correction factor can therefore be determined even when no reference object of known size is available, and because it is tied to the position and thickness of the object to be imaged, the correction factor determined in this way is more accurate.
FIG. 2 is an exemplary flow diagram of an imaging method 200 according to some embodiments of the present description. As shown in fig. 2, the imaging method 200 includes the following steps. In some embodiments, the imaging method 200 may be performed by the processing device 110.
Step 210, acquiring a first image of an object to be imaged by a photographing apparatus.
In some embodiments, the processing device 110 may acquire an image of the object to be imaged through the photographing device 160.
In some embodiments, the first image may be a two-dimensional (2D) image or a three-dimensional (3D) image, which may be in a format such as Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), Graphics Interchange Format (GIF), Digital Imaging and Communications in Medicine (DICOM), and the like. In some embodiments, the first image taken by the photographing device 160 may be a depth image.
In some embodiments, the photographing device 160 may acquire at least two images through at least two cameras, respectively, and the processing device 110 may screen the at least two images and use the selected image as the first image for determining the thickness of the object to be imaged. The processing device 110 may screen the at least two images based on image attributes (e.g., sharpness, brightness, fidelity, etc.).
In some embodiments, at least two cameras are provided so that, when one camera is damaged, the first image can still be acquired through another camera and the correction can still be performed.
For more description of the photographing apparatus 160, reference may be made to fig. 1 and the related description thereof, which are not repeated herein.
Step 220, acquiring a second image of the object to be imaged through the X-ray imaging assembly.
The object to be imaged may be scanned by an X-ray imaging assembly (also referred to as an imaging device) to acquire a second image. For the description of the X-ray imaging assembly, reference may be made to fig. 1 and its related description, which are not repeated herein.
Step 230, determining the thickness of the object to be imaged based on the first image.
In some embodiments, the processing device 110 may determine the thickness of the object to be imaged based on the first image in any manner.
For example, the processing device 110 may identify size information of the object to be imaged based on the first image, thereby obtaining thicknesses at different positions, and calculate a mean value of the thicknesses at the different positions, taking the mean value as the thickness of the object to be imaged.
For another example, the processing device 110 may identify the object to be imaged (e.g., by a human face, etc.) based on the first image, acquire a historical thickness or a preset thickness associated with the identified object, and take that historical or preset thickness as the thickness of the object to be imaged.
In some embodiments, the processing device 110 may determine the thickness of the object to be imaged along the ray emission direction based on the first image and the ray emission direction of the ray source of the X-ray imaging assembly when acquiring the second image. The focal point of the ray source (e.g., radiation source 154) has a vertical projection point on the radiation detection mechanism (e.g., radiation detection mechanism 155), and the ray emission direction of the ray source may be taken as the direction from the focal point of the ray source to the vertical projection point. The ray emission direction may differ when the radiation source 154 and the radiation detection mechanism 155 are rotated to different angles about the center of rotation. For example, fig. 3 is a schematic diagram of the thickness of the object to be imaged along the ray emission direction according to some embodiments of the present disclosure. As shown in fig. 3, when the radiation source 310 and the radiation detection mechanism 320 rotate around the rotation center to angle 1, the ray emission direction 330 of the radiation source 310 is the direction from the focal point 340 of the radiation source to the vertical projection point 350 on the radiation detection mechanism 320. Fig. 4 is a schematic diagram of the thickness of the object to be imaged along the ray emission direction according to other embodiments of the present disclosure. As shown in fig. 4, when the radiation source 410 and the radiation detection mechanism 420 rotate around the rotation center to angle 2, the ray emission direction 430 of the radiation source 410 is the direction from the focal point 440 of the radiation source 410 to the vertical projection point 450 on the radiation detection mechanism 420.
In some embodiments, the processing device 110 may obtain the ray emission direction of the ray source from the user terminal 130, the storage device 140, the imaging device 150, and/or the photographing device 160.
When the ray source and the ray detection mechanism rotate to different angles around the rotation center, the thickness of the object to be imaged along the ray emission direction can differ. Still taking fig. 3 and 4 as examples: as shown in fig. 3, when the radiation source 310 and the radiation detection mechanism 320 rotate around the rotation center to angle 1, the thickness of the object to be imaged along the ray emission direction 330 may be the distance from point 370 to point 380; as shown in fig. 4, when the radiation source 410 and the radiation detection mechanism 420 rotate around the rotation center to angle 2, the thickness of the object to be imaged along the ray emission direction 430 may be the distance from point 470 to point 480.
The processing device 110 may determine the thickness of the object to be imaged along the ray emission direction in any feasible manner based on the first image of the object to be imaged acquired by the photographing device 160 and the ray emission direction of the ray source relative to the object to be imaged when the second image is acquired.
For example, the processing device 110 may determine the thickness of the object to be imaged along the ray emission direction from the two-dimensional image of the object to be imaged acquired by the photographing device 160 and the ray emission direction of the ray source. For example only, the processing device 110 may determine the number of pixels of the object to be imaged that lie along the ray emission direction in the two-dimensional image, and determine the thickness of the object to be imaged along the ray emission direction based on that number of pixels.
In some embodiments, the processing device 110 may establish a three-dimensional model of the object to be imaged based on the first image, and then determine a thickness of the object to be imaged along the ray emission direction based on the ray emission direction and the three-dimensional model.
In some embodiments, the first image acquired by the photographing device 160 may include two-dimensional images of the object to be imaged acquired from different angles. The processing device 110 may build a three-dimensional model of the object to be imaged based on these two-dimensional images. For example, the processing device 110 may reconstruct a depth image of the object to be imaged from the two-dimensional images acquired from different angles, and then establish a three-dimensional model of the object to be imaged based on the reconstructed depth image, where the algorithms involved in the reconstruction include PMVS (Patch-based Multi-View Stereo), the MC (Marching Cubes) algorithm, the DC (Dual Contouring) algorithm, and the like.
In some embodiments, the processing device 110 may establish a three-dimensional model of the object to be imaged based on the reconstructed depth image in any manner. By way of example only, the processing device 110 may convert the reconstructed depth image into a three-dimensional point cloud through coordinate conversion, and then build a three-dimensional model of the object to be imaged based on the three-dimensional point cloud.
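A minimal sketch of this coordinate conversion, assuming a pinhole camera model with known intrinsics; the function name and the parameters fx, fy, cx, cy are hypothetical, not from the patent:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into a 3D point cloud using an assumed
    pinhole camera model (fx, fy: focal lengths; cx, cy: principal point)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]           # pixel row (v) and column (u) indices
    z = depth
    x = (u - cx) * z / fx               # lateral offset from the optical axis
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]     # keep only pixels with a valid depth
```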
In some embodiments, the first image acquired by the photographing device 160 may comprise a depth image of the object to be imaged. The processing device 110 may build a three-dimensional model of the object to be imaged directly based on the depth image acquired by the photographing device 160.
In some embodiments, the processing device 110 may determine the thickness of the object to be imaged along the ray emission direction based on the ray emission direction and the three-dimensional model. For example, the processing device 110 may determine two contour points that characterize the thickness of the object to be imaged, located at the two ends of the object along the ray emission direction, and determine the thickness as the distance between the two contour points. By way of example only, and again taking fig. 3 and 4 as examples: as shown in fig. 3, when the radiation source 310 and the radiation detection mechanism 320 rotate around the rotation center to angle 1, the thickness of the object to be imaged along the ray emission direction 330 may be the distance from contour point 370 to contour point 380; as shown in fig. 4, when the radiation source 410 and the radiation detection mechanism 420 rotate around the rotation center to angle 2, the thickness of the object to be imaged along the ray emission direction 430 may be the distance from contour point 470 to contour point 480.
In some embodiments, the processing device 110 establishes a three-dimensional model of the object to be imaged through the image, and determines the thickness of the object to be imaged along the ray emission direction based on the ray emission direction and the three-dimensional model when the second image is acquired, so that the determined thickness of the object to be imaged along the ray emission direction can be more accurate, and the subsequently determined correction factor can be more accurate.
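The contour-point computation can be sketched as follows, assuming the three-dimensional model is represented as a point cloud of surface points in the same coordinate frame as the ray source; the function and the tolerance radius are assumptions for illustration:

```python
import numpy as np

def thickness_along_ray(points, focal_point, ray_dir, radius=5.0):
    """Thickness along the ray: distance between the two contour points
    where the ray enters and leaves the object.

    points      -- (N, 3) surface points of the three-dimensional model, mm
    focal_point -- (3,) focal point of the ray source, mm
    ray_dir     -- (3,) ray emission direction (need not be normalized)
    radius      -- tolerance (mm) for treating a point as lying on the ray
    """
    d = np.asarray(ray_dir, dtype=float)
    d /= np.linalg.norm(d)
    rel = np.asarray(points, dtype=float) - focal_point
    t = rel @ d                                    # position along the ray
    off_ray = np.linalg.norm(rel - np.outer(t, d), axis=1)
    t_on_ray = t[off_ray < radius]                 # points the ray passes through
    if t_on_ray.size == 0:
        raise ValueError("ray does not intersect the model")
    return float(t_on_ray.max() - t_on_ray.min())
```

The reference point used in step 240 below can then be taken as, e.g., the midpoint between the two extreme values of t_on_ray, matching the center point described there.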
Step 240, determining a first distance from a reference point of the object to be imaged to the ray source based on the thickness.
The reference point may be any point within the thickness of the object to be imaged along the ray emission direction, for example, the center point. Taking fig. 3 as an example, when the radiation source 310 and the radiation detection mechanism 320 rotate around the rotation center to angle 1, the reference point 390 of the object to be imaged may be the center point between point 370 and point 380.
The first distance may characterize a distance of a reference point of the object to be imaged to a focal spot of the source of radiation. The first distance may vary with the angle of rotation of the radiation source and the radiation detection mechanism. Still taking fig. 3 and 4 as an example, as shown in fig. 3, when the radiation source 310 and the radiation detecting mechanism 320 rotate to the angle 1 around the rotation center, the first distance may be a distance between the point 390 and the focal point 340 of the radiation source along the radiation emitting direction; as shown in fig. 4, when the radiation source 410 and the radiation detecting mechanism 420 are rotated to an angle of 2 around the rotation center, the first distance may be a distance between the point 490 and the focal point 440 of the radiation source along the radiation emitting direction.
In some embodiments, the processing device 110 may determine the first distance from the reference point of the object to be imaged to the ray source in any feasible manner. For example, the processing device 110 may determine the reference point on the two-dimensional image of the object to be imaged, determine a connecting line between the reference point and the focal point of the ray source, and determine the first distance based on the number of pixel points on the connecting line. For another example, the processing device 110 may establish a three-dimensional model of the imaging device from the image acquired by the photographing device 160, fuse the three-dimensional model of the imaging device and the three-dimensional model of the object to be imaged based on their spatial position relationship, and determine the distance from the reference point of the object to be imaged to the focal point of the ray source from the fused three-dimensional model.
In some embodiments, the processing device 110 may obtain a second distance from the patient's bed to the radiation source, determine a third distance from a reference point of the object to be imaged to the patient's bed based on the thickness, determine the first distance based on the second distance and the third distance, and refer to fig. 5 and its related description for further description, which is not repeated herein.
Step 250, determining a correction factor based on the first distance, wherein the correction factor is used to reflect the size corresponding to each pixel point on the second image.
The correction factor is used to reflect the size corresponding to each pixel point on the second image, i.e., the real size of the object to be imaged corresponding to each pixel point.
In some embodiments, the processing device 110 may determine the correction factor based on the first distance by any method. For example, the processing device 110 may determine the distance between the isocenter of the imaging device 150 and the focal point of the ray source (also referred to as the isocenter distance), determine the ratio of the first distance to the isocenter distance (also referred to as the distance ratio), and adjust an initial correction factor based on the distance ratio. The initial correction factor may be a correction factor obtained by the processing device 110 based on the isocenter of the imaging device 150. For example only, the processing device 110 may use the product of the initial correction factor and the distance ratio as the correction factor.
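In that case the adjustment is a single ratio; a sketch with illustrative values (none of the numbers come from the patent):

```python
# Scale an isocenter-calibrated correction factor by how far the object's
# reference point actually is from the focal spot, relative to the isocenter.
initial_factor = 0.10        # mm/pixel, calibrated with the object at isocenter
isocenter_distance = 800.0   # focal spot to isocenter, mm
first_distance = 900.0       # focal spot to object reference point, mm
correction = initial_factor * (first_distance / isocenter_distance)  # 0.1125
```

Since an isocenter-calibrated factor equals isocenter_distance * L / H, this scaling is algebraically the same as the formula L1 = H1 * L / H given below.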
In some embodiments, the processing device 110 may determine the correction factor based on the following equation:
L1 = H1 * L / H;
where L1 is the correction factor, H1 is the first distance, L is the pixel size of the ray detection mechanism (that is, the size corresponding to each pixel point on the ray detection mechanism), and H is the distance between the ray source and the ray detection mechanism along the ray emission direction.
In some embodiments, the correction factor may be determined relatively quickly and accurately by the above formula.
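A direct transcription of the formula as a helper function; the numeric values (detector pixel size, source-to-detector distance) are illustrative assumptions:

```python
def correction_factor(h1, pixel_size, h):
    """L1 = H1 * L / H: the per-pixel size on the second image.

    h1         -- H1, first distance from the reference point of the object
                  to the focal point of the ray source, mm
    pixel_size -- L, physical size of one pixel of the ray detection
                  mechanism, mm
    h          -- H, source-to-detector distance along the ray direction, mm
    """
    return h1 * pixel_size / h

l1 = correction_factor(h1=900.0, pixel_size=0.15, h=1200.0)  # 0.1125 mm/pixel
```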
In some embodiments, the imaging method 200 may further include step 260: determining the size of the target site of the object to be imaged based on the correction factor and the number of pixels of the target site on the second image.
In some embodiments, the processing device 110 may determine the size of the target site based on the correction factor and the number of pixels of the target site on the second image. For example, the processing device 110 may take the product of the number of pixels of the target site and the correction factor as the size of the target site of the object to be imaged.
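Continuing the sketch above, step 260 reduces to a multiplication (the pixel count is an illustrative assumption):

```python
# Size of the target site = pixel count on the second image * per-pixel size.
n_pixels = 32                    # measured span of the target site, in pixels
target_size_mm = n_pixels * l1   # 32 * 0.1125 = 3.6 mm
```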
In some embodiments, when the second image is acquired, the imaging method 200 determines the thickness of the object to be imaged along the ray emission direction based on the first image and the ray emission direction of the ray source relative to the object to be imaged, determines the first distance from a reference point of the object to be imaged to the ray source based on the thickness, and determines the correction factor based on the first distance, where the correction factor reflects the size corresponding to each pixel point on the second image. For each object to be imaged, the imaging method 200 can determine a corresponding correction factor according to the specific position and thickness of that object, so the correction is more accurate; the correction can be completed without an additional reference object, which reduces the operation flow, improves efficiency, and makes the size of the target site determined from the correction factor closer to the true size. Moreover, the correction factor can be determined even when no reference object of known size is available, which is more convenient for the user, and the object to be imaged does not need to be placed at the isocenter, which makes the operation simpler.
It should be noted that the above description of the imaging method 200 is for illustration and description only and does not limit the scope of applicability of the present description. Various modifications and alterations to the imaging method 200 will be apparent to those skilled in the art in light of this description. However, such modifications and variations are intended to be within the scope of the present description.
Fig. 5 is an exemplary flow diagram illustrating the determination of a first distance based on a second distance and a third distance in accordance with some embodiments of the present description. As shown in fig. 5, the process 500 includes the following steps. In some embodiments, flow 500 may be performed by processing device 110.
Step 510, acquiring a second distance from the patient bed to the ray source.
The second distance may be indicative of a distance between a focal spot of the radiation source and the patient's bed along a radiation emission direction. In some embodiments, a distance between the focal point of the radiation source and a bed reference point along the radiation emission direction may be used as the second distance, wherein the bed reference point may be a certain point on the bed located in the radiation emission direction, for example, a point located on the top of the bed and located in the radiation emission direction. As shown in fig. 3, the second distance may be indicative of a distance between a focal point 340 of the source and a patient bed reference point (coinciding with point 380) along the ray emission direction 330. In some embodiments, processing device 110 may obtain the second distance directly from user terminal 130, storage device 140, imaging device 150, and/or camera device 160.
In some embodiments, the processing device 110 may determine the second distance based on an image acquired by the photographic device. For example, the processing device 110 may determine a line between the focal point of the radiation source and the patient bed along the radiation emission direction on the image acquired by the photographing device, and determine the second distance based on the number of pixels located on the line. For another example, the processing device 110 may build a three-dimensional model of the imaging device based on the image acquired by the photographing device, and determine the second distance based on the three-dimensional model.
Step 520, determining a third distance from the reference point of the object to be imaged to the patient bed based on the thickness.
The third distance may characterize the distance between the reference point of the object to be imaged and the patient bed. In some embodiments, the processing device 110 may take the distance between the reference point of the object to be imaged and the bed reference point as the third distance. As shown in fig. 3, the third distance may characterize the distance between the reference point 390 of the object to be imaged and the patient bed reference point (coincident with point 380) along the ray emission direction 330.
In some embodiments, the processing device 110 may determine a third distance of the reference point of the object to be imaged from the patient bed based on the thickness. For example, when the radiation emission direction of the radiation source is a vertical direction and the bed reference point is located at the top of the bed, the third distance may be half the thickness of the object to be imaged along the radiation emission direction.
In some embodiments, the processing device 110 may also determine the third distance based on an image acquired by the photographic device. For example, the processing device 110 may determine a connecting line between a reference point of the object to be imaged and a reference point of the patient bed along the radiation emission direction on the image acquired by the photographing device, and determine the third distance based on the number of pixels located on the connecting line.
For another example, the processing device 110 may establish a three-dimensional model of the patient's bed based on the image acquired by the photographing device, fuse the three-dimensional model of the patient's bed and the three-dimensional model of the object to be imaged based on the spatial position relationship, and determine the third distance according to the fused three-dimensional model.
Step 530, determining the first distance based on the second distance and the third distance.
In some embodiments, the processing device 110 may determine the first distance based on the second distance and the third distance. For example, when the patient's bed is positioned between the source of radiation and the object to be imaged, the first distance may be the sum of the second distance and the third distance. For another example, the first distance may be the difference between the second distance and the third distance when the object to be imaged is located between the source of radiation and the patient bed.
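A sketch of this combination; the flag name is purely illustrative, and the example uses the vertical-emission case of step 520 where the third distance is half the thickness:

```python
def first_distance(second_distance, third_distance, bed_between=False):
    """Combine the bed-to-source distance (second) with the
    reference-point-to-bed distance (third).

    bed_between -- True when the patient bed lies between the ray source and
                   the object (distances add); False when the object lies
                   between the ray source and the bed (distances subtract).
    """
    if bed_between:
        return second_distance + third_distance
    return second_distance - third_distance

# Example: source above the object, bed below it; vertical emission, so
# third distance = thickness / 2 (illustrative numbers, in mm).
h1 = first_distance(second_distance=950.0, third_distance=100.0 / 2)  # 900.0
```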
In some embodiments, the process 500 makes the determined first distance more accurate by acquiring a second distance from the patient's bed to the source of radiation and determining a third distance from a reference point of the object to be imaged to the patient's bed based on the thickness.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, the claimed embodiments may have fewer than all of the features of a single embodiment disclosed above.
Numerals describing the number of components, attributes, etc. are used in some embodiments, it being understood that such numerals used in the description of the embodiments are modified in some instances by the use of the modifier "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the number allows a variation of ± 20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameter should take into account the specified significant digits and employ a general digit preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the range are approximations, in the specific examples, such numerical values are set forth as precisely as possible within the scope of the application.
The entire contents of each patent, patent application, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited in this specification are hereby incorporated by reference. Application history documents that are inconsistent with or conflict with the contents of this specification are excluded, as are any documents (currently or later appended to this specification) that limit the broadest scope of the claims of this specification. It should be noted that if the description, definition, and/or use of a term in the materials accompanying this specification is inconsistent with or contrary to the description, definition, and/or use of that term in this specification, the description, definition, and/or use of the term in this specification shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of this specification. Other variations are also possible within its scope. Thus, by way of example and not limitation, alternative configurations of the embodiments of this specification may be regarded as consistent with its teachings. Accordingly, the embodiments of this specification are not limited to those explicitly described and depicted herein.

Claims (10)

1. An imaging method, comprising:
acquiring a first image of an object to be imaged through a photographing device;
acquiring a second image of the object to be imaged through an X-ray imaging assembly;
determining a thickness of the object to be imaged based on the first image;
determining a first distance from a reference point of the object to be imaged to a ray source of the X-ray imaging assembly based on the thickness;
and determining a correction factor based on the first distance, wherein the correction factor is used to reflect a size corresponding to each pixel point on the second image.
2. The method of claim 1, wherein determining a correction factor based on the first distance comprises:
the correction factor is determined based on the following formula,
L1 = H1 * L / H;
wherein L1 is the correction factor, H1 is the first distance, L is a size corresponding to each pixel point on a ray detection mechanism, and H is a distance between the ray source and the ray detection mechanism along the emission direction of the ray, the ray detection mechanism being configured to receive the ray emitted by the ray source.
3. The method of claim 1, wherein the determining a first distance from a reference point of the object to be imaged to the ray source based on the thickness comprises:
acquiring a second distance from a patient bed to the ray source;
determining a third distance of a reference point of the object to be imaged to the patient bed based on the thickness;
determining the first distance based on the second distance and the third distance.
4. The method of claim 1, wherein the determining a thickness of the object to be imaged based on the first image comprises:
determining, based on the first image and a ray emission direction of the ray source of the X-ray imaging assembly relative to the object to be imaged when the second image is acquired, the thickness of the object to be imaged along the ray emission direction.
5. The method of claim 4, wherein the determining the thickness of the object to be imaged along the ray emission direction based on the first image and the ray emission direction comprises:
establishing a three-dimensional model of the object to be imaged based on the first image;
and determining the thickness of the object to be imaged along the ray emission direction based on the ray emission direction and the three-dimensional model.
6. The method of claim 5, wherein the method further comprises:
determining a size of a target part of the object to be imaged based on the correction factor and a number of pixel points of the target part on the second image.
7. The method of claim 1, wherein the photographing device comprises at least two cameras, and the at least two cameras are arranged above the X-ray imaging assembly.
8. The method of claim 7, wherein the photographing device comprises a 3D camera.
9. The method of claim 1, wherein the method is applied to a digital subtraction angiography X-ray machine.
10. An imaging system, comprising:
a photographing device, used for acquiring a first image of an object to be imaged;
an X-ray imaging assembly, used for acquiring a second image of the object to be imaged;
a thickness determining module, used for determining a thickness of the object to be imaged along a ray emission direction based on the first image and the ray emission direction of a ray source relative to the object to be imaged when the second image is acquired;
a distance determination module, used for determining a first distance from a reference point of the object to be imaged to the ray source based on the thickness;
and a size correction module, used for determining a correction factor based on the first distance, wherein the correction factor is used to reflect a size corresponding to each pixel point on the second image.
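The quantitative claims above can be exercised numerically. For claim 2, the correction factor simply rescales the detector pixel pitch by the ratio of the source-to-reference-point distance to the source-to-detector distance. The following minimal Python sketch assumes millimeter units; the function and parameter names are illustrative, not taken from the patent.

# Minimal sketch of the claim-2 relation L1 = H1 * L / H
# (illustrative names; all distances in millimeters).
def correction_factor(h1_mm: float, detector_pixel_mm: float, h_mm: float) -> float:
    """Physical size spanned by one pixel of the second image at the
    reference plane of the object: L1 = H1 * L / H."""
    if h_mm <= 0:
        raise ValueError("source-detector distance must be positive")
    return h1_mm * detector_pixel_mm / h_mm

# Example: 0.2 mm detector pixels, source-detector distance 1200 mm,
# reference point 900 mm from the source -> each pixel covers 0.15 mm.
print(correction_factor(900.0, 0.2, 1200.0))  # 0.15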
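Claim 3 reads naturally as a subtraction along the emission direction: if the reference point lies between the patient bed and the ray source, the first distance is the bed-to-source distance minus the reference point's height above the bed. That geometry, like taking the object's mid-plane as the reference point, is an assumption of this sketch rather than something the claims fix.

# Sketch of claim 3 under the stated geometric assumption (illustrative names).
def first_distance(bed_to_source_mm: float, reference_to_bed_mm: float) -> float:
    """H1 = H2 - H3 for a reference point between the bed and the source."""
    h1 = bed_to_source_mm - reference_to_bed_mm
    if h1 <= 0:
        raise ValueError("reference point would lie at or beyond the source")
    return h1

# Example: bed 1000 mm from the source; for a 300 mm thick object with the
# mid-plane as the reference point, the third distance is 150 mm -> H1 = 850 mm.
print(first_distance(1000.0, 150.0))  # 850.0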
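For claim 5, one simple way to read "thickness along the ray emission direction" off a three-dimensional model is to project the model's surface points onto the unit emission direction and take the extent of the projections. The sketch below assumes the model is available as a NumPy point cloud; for concave anatomy a per-ray cast against a mesh would be more faithful, so this is only a first approximation.

import numpy as np

def thickness_along_direction(points: np.ndarray, direction) -> float:
    """Extent of the model's surface points along the unit emission
    direction, used as the thickness traversed by the central ray."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    projections = points @ d  # signed position of each point along d
    return float(projections.max() - projections.min())

# Example: a 300 mm thick slab imaged along the z axis.
pts = np.array([[0.0, 0.0, 0.0], [500.0, 400.0, 0.0], [250.0, 200.0, 300.0]])
print(thickness_along_direction(pts, [0, 0, 1]))  # 300.0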
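Finally, claims 3, 2, and 6 chain into a single measurement pipeline: the thickness fixes the reference point, the reference point gives the first distance, the first distance gives the correction factor, and the correction factor times a pixel count gives a physical size. A compact end-to-end sketch, again with illustrative names and the mid-plane assumption:

from dataclasses import dataclass

@dataclass
class ImagingGeometry:
    bed_to_source_mm: float      # second distance (H2)
    source_detector_mm: float    # H
    detector_pixel_mm: float     # L

def measure_target(thickness_mm: float, target_pixels: int,
                   geom: ImagingGeometry) -> float:
    h3 = thickness_mm / 2.0                                      # mid-plane reference (assumption)
    h1 = geom.bed_to_source_mm - h3                              # claim 3
    l1 = h1 * geom.detector_pixel_mm / geom.source_detector_mm   # claim 2
    return target_pixels * l1                                    # claim 6

geom = ImagingGeometry(bed_to_source_mm=1000.0,
                       source_detector_mm=1200.0,
                       detector_pixel_mm=0.2)
# A target part spanning 120 pixels on the second image measures 17 mm here.
print(measure_target(thickness_mm=300.0, target_pixels=120, geom=geom))  # 17.0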
CN202211152194.8A 2022-09-01 2022-09-21 Imaging method and system Pending CN115530854A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211152194.8A CN115530854A (en) 2022-09-21 2022-09-21 Imaging method and system
US18/460,501 US20240087168A1 (en) 2022-09-01 2023-09-01 Method and system for medical imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211152194.8A CN115530854A (en) 2022-09-21 2022-09-21 Imaging method and system

Publications (1)

Publication Number Publication Date
CN115530854A (en) 2022-12-30

Family

ID=84727707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211152194.8A Pending CN115530854A (en) 2022-09-01 2022-09-21 Imaging method and system

Country Status (1)

Country Link
CN (1) CN115530854A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117611524A (en) * 2023-10-26 2024-02-27 北京声迅电子股份有限公司 Express item security inspection method based on multi-source image
CN117611524B (en) * 2023-10-26 2024-05-31 北京声迅电子股份有限公司 Express item security inspection method based on multi-source image

Similar Documents

Publication Publication Date Title
US10354454B2 (en) System and method for image composition
US9135706B2 (en) Features-based 2D-3D image registration
US8879817B2 (en) Scan plan field of view adjustor, determiner, and/or quality assessor
JP4954887B2 (en) Optimal transformation of 3D image sets between different spaces
US20150342555A1 (en) Radiation imaging apparatus and control method thereof
US10032295B2 (en) Tomography apparatus and method of processing tomography image
CN106999135B (en) Radiation emission imaging system and method
WO2019120196A1 (en) Systems and methods for determining scanning parameter in imaging
US10049465B2 (en) Systems and methods for multi-modality imaging component alignment
WO2017045620A1 (en) Computed tomography method and system
US20140198964A1 (en) Follow up image acquisition planning and/or post processing
CN113647967A (en) Control method, device and system of medical scanning equipment
Herbst et al. Dynamic detector offsets for field of view extension in C‐arm computed tomography with application to weight‐bearing imaging
CN112401919A (en) Auxiliary positioning method and system based on positioning model
US20220054862A1 (en) Medical image processing device, storage medium, medical device, and treatment system
CN115530854A (en) Imaging method and system
US20230083704A1 (en) Systems and methods for determining examination parameters
US20220036609A1 (en) X-ray imaging system with foreign object reduction
Schiffers et al. Disassemblable fieldwork CT scanner using a 3D-printed calibration phantom
CN115414115A (en) Display correction method and system for medical image
Nakazeko et al. Estimation of patient’s angle from skull radiographs using deep learning
Park et al. Iterative closest-point based 3D stitching in dental computed tomography for a larger view of facial anatomy
Huang et al. Learning Perspective Deformation in X-Ray Transmission Imaging
WO2022073744A1 (en) System and method for automated patient and phantom positioning for nuclear medicine imaging
CN117557650A (en) Parallel motion scanning imaging system parameter calibration method, terminal and calibration body

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination