CN112837358A - Multi-perspective-view-coupled table object positioning method - Google Patents


Info

Publication number
CN112837358A
CN112837358A (application CN202110318029.4A)
Authority
CN
China
Prior art keywords
perspective
dimensional
registration
ray
positioning
Prior art date
Legal status
Pending
Application number
CN202110318029.4A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee
Zhongke Chaojing Nanjing Technology Co ltd
Original Assignee
Zhongke Chaojing Nanjing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhongke Chaojing Nanjing Technology Co ltd filed Critical Zhongke Chaojing Nanjing Technology Co ltd
Priority to CN202110318029.4A priority Critical patent/CN112837358A/en
Publication of CN112837358A publication Critical patent/CN112837358A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B15/00: Measuring arrangements characterised by the use of electromagnetic waves or particle radiation, e.g. by the use of microwaves, X-rays, gamma rays or electrons
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00: Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/02: … by transmitting the radiation through the material
    • G01N23/04: … and forming images of the material
    • G01N23/046: … using tomography, e.g. computed tomography [CT]
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G06T11/005: Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38: Registration of image sequences

Abstract

The invention discloses a table-top object positioning method that couples multiple perspective views, comprising the following steps: input the accurate position of the object in a three-dimensional computed tomography image together with a positioning plan, place the object on the table top, and perform initial positioning according to the plan; image the placed object from the side to form several two-dimensional X-ray perspective views at different imaging angles; preprocess the two-dimensional X-ray perspective views so that pixel values are proportional to the density of the object integrated along each X-ray path; perform rigid geometric registration from the two-dimensional X-ray perspective views to the three-dimensional computed tomography image to obtain translation and rotation parameters, and move the table top according to these parameters so that the object reaches the preset treatment position. The invention can be applied to processing and imaging hardware systems not limited to the two conventional types and can be used flexibly for accurate positioning of objects in different scenes; it can track the internal structure of the object; and it requires no contact with the object and does not damage its surface.

Description

Multi-perspective-view-coupled table object positioning method
Technical Field
The invention relates to an object positioning method, in particular to a table-top object positioning method that couples multiple perspective views.
Background
The field of precision machining generally requires precise three-dimensional positioning of workpieces, and positioning by registration of bi-orthogonal X-ray views is a novel workpiece positioning method. Also known as image-guided positioning, this technique acquires 1-N two-dimensional perspective views of the object before processing, compares them with a three-dimensional computed tomography image of the object to obtain the spatial transformation between the planned processing position and the current position of the object, and controls the table top to move the object to the preset processing position.
The two-dimensional images can be acquired by various kinds of hardware, most commonly: 1. two sets of X-ray imaging systems with mutually orthogonal axes, independent of the processing tool, each consisting of an X-ray tube source and an X-ray detector plate; 2. a kV-class cone-beam CT (CBCT) system mounted on a rotating device. The first type of hardware can rapidly capture 2 X-ray perspective views at orthogonal angles; the second can dynamically capture multiple X-ray perspective views at different angles as the C-arm rotates. The first type imposes strict requirements on the installation position of the imaging system and adapts poorly to different applications, which limits its range of use; the second requires the rotating device to sweep through a certain angle during positioning, which lengthens the positioning time, and because the object may deform during rotation and the rotating device may be unstable while rotating, positioning accuracy may degrade.
Disclosure of Invention
Purpose of the invention: to provide a multi-perspective-view-coupled table object positioning method that expands the application range and improves positioning accuracy.
The technical scheme is as follows: the invention relates to a multi-perspective-view-coupled table object positioning method, which comprises the following steps:
the method comprises the following steps: inputting accurate position information of an object in a three-dimensional computed tomography image and a positioning scheme, placing the object on a table top, and performing initial positioning of the object according to the positioning scheme;
step two: shooting the side of the placed object to form a plurality of two-dimensional X perspective views with different imaging angles;
step three: preprocessing the two-dimensional X-ray perspective views, so that pixel values are proportional to the density of the object integrated along each X-ray path;
step four: rigid geometric registration from a two-dimensional X perspective to a three-dimensional computed tomography image is carried out to obtain translation and rotation parameters, and the table top is moved according to the parameters to enable the object to reach a preset treatment position.
In step four, the registration method registers 1 to multiple two-dimensional X-ray perspective views against the three-dimensional computed tomography image to obtain the rigid registration transformation. In step three, preprocessing of the two-dimensional X-ray perspective views comprises the following steps:
step one: X-ray scatter correction, removing low-frequency scatter components from the X-ray perspective view;
step two: X-ray beam-hardening correction, making the pixel values of the X-ray perspective view linearly related to the density of the object traversed.
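The scatter correction in step one above removes a low-frequency scatter component from the perspective view. A minimal sketch, in which the scatter field is estimated as a low-pass (box-blurred) fraction of the image and subtracted; the kernel size and scatter fraction are invented for illustration and are not values from the patent:

```python
import numpy as np

def remove_scatter(img, kernel=15, scatter_fraction=0.3):
    """Crude scatter correction: estimate the scatter field as a
    low-frequency (box-blurred) fraction of the image, then subtract it.
    kernel and scatter_fraction are illustrative assumptions."""
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    low = np.zeros_like(img, dtype=float)
    # box blur implemented as a sum of shifted copies (a cheap low-pass filter)
    for dy in range(kernel):
        for dx in range(kernel):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= kernel * kernel
    return img - scatter_fraction * low

# a flat field with a sharp step: correction lowers the baseline, keeps the edge
img = np.ones((32, 32))
img[:, 16:] = 2.0
corrected = remove_scatter(img)
```

A production system would use the modulation-plate or anti-scatter-grid methods named in the description rather than a plain blur.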
In the invention, rigid geometric registration can be realized by interactively adjusting the translation vector and the rotation angles around the x, y and z axes until the digitally reconstructed X-ray perspective view generated from the current translation and rotation coincides with the captured perspective view; or by iteratively optimizing a three-dimensional transformation from the two-dimensional registration result of each perspective view against the digitally reconstructed X-ray perspective view of the corresponding angle; or by an iterative three-dimensional projected gradient descent method that randomly selects 1-2 of the perspective views at each iteration and computes the gradient of the spatial transformation to obtain the rigid registration transformation between the perspective views and the computed tomography image. Rigid geometric registration may also discard any degree of freedom, computing and outputting a translation-only result or a combined translation-rotation result.
Beneficial effects: compared with the prior art, the invention has the following notable advantages: it is applicable to processing and imaging hardware systems not limited to the two types above, supports three-dimensional registration between X-ray perspective views at 1-N angles and a three-dimensional computed tomography image, and can be used flexibly for accurate positioning of objects in different scenes; it can track the internal structure of the object; and it requires no contact with the object and does not damage its surface.
Drawings
FIG. 1 is a schematic structural view of the present invention;
FIG. 2 is a schematic view of two sets of X-ray imaging devices of the present invention with their axes orthogonal to each other;
FIG. 3 is a schematic view of the imaging system of the present invention mounted on a C-arm.
Detailed Description
The technical solution of the present invention is further explained below with reference to Figs. 1-3.
A multi-perspective coupled table object positioning method as shown in the figure, comprising the steps of:
the method comprises the following steps: inputting accurate position information of an object in a three-dimensional computed tomography image and a positioning scheme, placing the object on a table top, and performing initial positioning of the object according to the positioning scheme; from the input information, the three-dimensional image value at position p in the object coordinate system under the initial positioning is determined as V(p), where p denotes the coordinate of any position within the CT range in the object coordinate system, $p = (x, y, z)^T$;
Step two: shooting the side of the placed object to form a plurality of two-dimensional X perspective views with different imaging angles; imgii is 1, … n, n is more than or equal to 1, wherein each image combines the source and imaging plane coordinate system information to make the source to the point on the picture
Figure BDA0002992017730000021
And source to point
Figure BDA0002992017730000022
All density information on the rays in between;
step three: preprocessing the two-dimensional X-ray perspective views, so that pixel values are proportional to the density of the object integrated along each X-ray path;
step four: rigid geometric registration from a two-dimensional X perspective to a three-dimensional computed tomography image is carried out to obtain translation and rotation parameters, and the table top is moved according to the parameters to enable the object to reach a preset treatment position.
In step four, the registration method registers 1 to multiple two-dimensional X-ray perspective views against the three-dimensional computed tomography image to obtain the rigid registration transformation; that is, the number of two-dimensional X-ray perspective views used may range from 1 to any number;
the translation vector and the rotation angles around the x, y and z axes are interactively adjusted until the digitally reconstructed X-ray perspective view generated dynamically from the translation and rotation coincides with the captured perspective view, realizing rigid geometric registration;
based on the two-dimensional registration result of each perspective view against the digitally reconstructed X-ray perspective view of the corresponding angle, iterative optimization forms a three-dimensional transformation, realizing rigid geometric registration;
based on an iterative three-dimensional projected gradient descent method, 1-2 perspective views are randomly selected from the multiple perspective views at each iteration, and the gradient of the spatial transformation is calculated to obtain the rigid registration transformation between the perspective views and the computed tomography image;
rigid geometric registration can also discard any degree of freedom, computing and outputting a translation-only registration result or one combining translation and rotation.
In step three, preprocessing of the two-dimensional X-ray perspective views comprises the following steps:
step one: X-ray scatter correction, removing low-frequency scatter components from the X-ray perspective view. As X-rays pass through an object, a ray that undergoes no scattering keeps its direction while its intensity attenuates according to the Beer–Lambert law; if scattering occurs, the direction and energy of the ray change, so that part of each pixel value in the final image corresponds to scattered X-rays. Scatter correction is therefore required, for example by principal-component modulation filtering or by adding an anti-scatter grid at the source;
step two: X-ray beam-hardening correction, making the pixel values of the X-ray perspective view linearly related to the density of the object traversed. The X-rays generated by the tube have a finite spectral width; as a beam with a continuous energy spectrum passes through an object, low-energy rays are preferentially absorbed while high-energy rays pass more easily, so the mean energy rises as the beam propagates and the beam gradually hardens, which is known as the beam-hardening effect. This effect makes pixel brightness in the X-ray perspective view no longer proportional to the thickness of object traversed; hardening correction lets the X-ray perspective view faithfully reflect the thickness and density information of the object, reducing the difficulty of image registration.
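The hardening correction described above can be sketched as mapping each measured log-attenuation value back to an equivalent water thickness through a pre-stored monotone calibration curve, which restores the linear relation between pixel value and traversed thickness. The calibration arrays below are invented for illustration:

```python
import numpy as np

def harden_correct(log_attenuation, cal_atten, cal_thickness):
    """Beam-hardening correction sketch: map measured log-attenuation to
    equivalent water thickness via a pre-stored calibration curve, so that
    corrected pixel values are linear in traversed thickness."""
    return np.interp(log_attenuation, cal_atten, cal_thickness)

# hypothetical calibration: attenuation grows sub-linearly with thickness
# because the beam hardens (coefficients are illustrative, not measured)
thickness = np.linspace(0.0, 20.0, 50)            # cm of water
atten = 0.2 * thickness - 0.002 * thickness**2    # monotone on [0, 20]
measured = np.array([atten[10], atten[25], atten[40]])
recovered = harden_correct(measured, atten, thickness)
```

In practice the calibration curve would be measured with step wedges of known thickness, as the embodiment's "pre-stored non-linear beam hardening correction curve" suggests.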
Three-dimensional registration from the perspective views to the three-dimensional computed tomography image can be carried out, according to the actual situation, by one of the three registration modes below, yielding the table displacement $d = (x, y, z)^T$, the rotation angle $\theta$, the roll angle $\phi$ and the pitch (tilt) angle $\psi$, where the spatial rotation matrices corresponding to the rotation angles are:

$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad R_\phi = \begin{pmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{pmatrix}, \quad R_\psi = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\psi & -\sin\psi \\ 0 & \sin\psi & \cos\psi \end{pmatrix}$$

Let $x$ be the coordinate of a point of the object in the table-top coordinate system; its transformed coordinate is

$$x' = R_\psi R_\phi R_\theta \, x + d$$
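The table transform above can be sketched directly in code. The axis assignment (rotation θ about z, roll φ about y, pitch ψ about x) and the composition order are assumptions for illustration, since the patent's defining figures are not reproduced in the text:

```python
import numpy as np

def rot_z(t):  # rotation angle theta, assumed about the vertical z axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_y(t):  # roll angle phi, assumed about y
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_x(t):  # pitch angle psi, assumed about x
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def transform(p, d, theta, phi, psi):
    """Apply x' = R_psi R_phi R_theta x + d (composition order assumed)."""
    R = rot_x(psi) @ rot_y(phi) @ rot_z(theta)
    return R @ p + d

# a quarter turn about z plus a 5-unit lift along z
p = np.array([1.0, 0.0, 0.0])
moved = transform(p, d=np.array([0.0, 0.0, 5.0]),
                  theta=np.pi / 2, phi=0.0, psi=0.0)
```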
The method supports registration in the common forms below; θ, φ and ψ can be locked to 0 during registration, so that the output spatial transformation parameters can be applied both on table tops supporting only four-dimensional spatial transformation and on those supporting six-dimensional spatial transformation. To obtain the parameters, one of the following methods is selected according to the actual situation:
(1) manual registration: x, y, z, θ, φ and ψ are adjusted manually; at each adjustment, the three-dimensional object corresponding to the three-dimensional computed tomography image is moved in the computer's virtual space according to the adjusted parameters, and the passage of X-rays through the object is simulated according to the shooting parameters of each X-ray perspective view to generate a digitally reconstructed radiograph (DRR); each DRR is compared with its X-ray perspective view one by one to confirm whether the object has been moved into place;
(2) multi-view two-dimensional registration fitting a three-dimensional spatial transformation: for each pair of X-ray perspective view and DRR, two-dimensional registration is performed to obtain the translation vector and rotation angle within their common two-dimensional plane; a functional relation is established from the global three-dimensional spatial transformation parameters to the translation-rotation parameters of each plane, and the optimal global three-dimensional spatial transformation parameters are obtained by gradient descent. The process is iterated until the global three-dimensional spatial transformation parameters converge;
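Method (2) fits one global 3D transform to several per-plane 2D registration results. A toy translation-only sketch: each plane k is assumed, purely for illustration, to observe the projection P_k d of the unknown 3D translation d, and d is recovered by gradient descent on the summed squared residuals (the projection matrices and learning rate are invented):

```python
import numpy as np

# two mutually orthogonal imaging planes, as in the bi-orthogonal setup:
# plane 1 observes (x, y), plane 2 observes (x, z) -- an illustrative choice
P = [np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
     np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])]
d_true = np.array([2.0, -1.0, 3.0])
t_obs = [Pk @ d_true for Pk in P]          # per-plane 2D registration results

d = np.zeros(3)                            # unknown global 3D translation
lr = 0.2
for _ in range(500):
    grad = np.zeros(3)
    for Pk, tk in zip(P, t_obs):
        # gradient of ||Pk d - tk||^2 with respect to d
        grad += 2.0 * Pk.T @ (Pk @ d - tk)
    d -= lr * grad
```

The real method also fits per-plane rotations and re-renders DRRs each iteration; this sketch only shows the "fit one global transform to many planar observations" idea.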
(3) three-dimensional registration based directly on the perspective views: establish the correspondence from a CT point to a pixel on the DRR of the perspective view:

$$V(T(x_i)) = \sum_{z \in \mathrm{ray}(T(x_i),\, c)} \mathrm{Vol}(z)$$

where $T(x_i)$ is the coordinate of the $i$-th pixel after spatial transformation, $V(T(x_i))$ is the simulated projected pixel value at that point, $\mathrm{Vol}(z)$ is the density at point $z$ in space, and $z \in \mathrm{ray}(T(x_i), c)$ means that $z$ lies on the ray from the X-ray source point $c$ to $T(x_i)$. Taking as the objective function the mutual information between the X-ray perspective view and the DRR, or the gradient magnitude along contours, the optimal spatial transformation parameters are obtained by gradient descent; for X-ray perspective views at multiple angles, the objective functions of all angles can be summed into a total objective function for the optimization, completing the registration process;
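The ray-sum formula above can be illustrated in its simplest parallel-beam form, where the line integral reduces to summing voxel densities along one axis. A real system would trace diverging rays from the X-ray source point c through the transformed volume; this reduction is an illustrative simplification:

```python
import numpy as np

def drr_parallel(vol, axis=0):
    """Minimal DRR sketch: with parallel rays along one axis, the ray sum
    V(T(x_i)) = sum of Vol(z) along the ray collapses to a plain axis sum.
    Cone-beam geometry (diverging rays from a source point) is not modeled."""
    return vol.sum(axis=axis)

# a 2x2x2 dense cube embedded in an otherwise empty 4x4x4 volume
vol = np.zeros((4, 4, 4))
vol[1:3, 1:3, 1:3] = 1.0
drr = drr_parallel(vol, axis=0)   # project along the first axis
```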
(4) during registration, some spatial transformation parameters can be locked according to the translational and rotational degrees of freedom supported by the target table top; the partial derivatives of those parameter dimensions are ignored during optimization, performing a partial optimization that yields final spatial transformation parameters compatible with the table top's translation and rotation capability.
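The degree-of-freedom locking in (4) amounts to zeroing the gradient components of locked parameters at every optimization step. A minimal sketch; the parameter ordering (x, y, z, θ, φ, ψ) is an assumption:

```python
import numpy as np

def masked_step(params, grad, lr, locked):
    """One gradient step with locked spatial-transform parameters: the
    partial derivatives of locked dimensions are ignored, so those
    parameters never move (partial optimization)."""
    g = grad.copy()
    g[locked] = 0.0
    return params - lr * g

params = np.zeros(6)                       # (x, y, z, theta, phi, psi), assumed order
grad = np.ones(6)                          # stand-in gradient
# a 4-DOF table top: lock roll (phi) and pitch (psi)
locked = np.array([False, False, False, False, True, True])
params = masked_step(params, grad, lr=0.1, locked=locked)
```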
The following embodiments describe the two types of positioning imaging systems, bi-orthogonal and C-arm:
reference numerals: 1-imaging flat plate; 2-an object; 3-a table top; 4-an X-ray source; 5-rotation axis.
1. Fig. 2 shows a bi-orthogonal X-ray imaging system mounted beside the treatment head and the table top 3; the application steps are as follows:
a) the object 2 is placed on the table top 3, and the table top 3 is moved to an initial position according to the coordinate of the center point in the treatment plan;
b) simultaneously capture images on the two mutually orthogonal imaging planes, with each X-ray source 4 aimed at the accelerator isocenter;
c) during shooting, a modulation plate is inserted at the source end; after imaging, the non-scatter component is stripped out via the two-dimensional discrete Fourier transform and the phase shift of the principal (non-scatter) component introduced by the modulation plate, realizing scatter correction; the image gray levels are then corrected according to a pre-stored non-linear beam-hardening correction curve, so that the resulting X-ray perspective view is linearly related to the equivalent water thickness traversed by the X-rays;
d) in the registration stage, firstly, DRRs corresponding to two X perspectives are generated, and then registration is performed by adopting the following method according to the situation:
i) compare the DRR and the X-ray perspective view; if the two views differ substantially, if automatic registration alone cannot find suitable spatial transformation parameters, or if manual registration is chosen as the method, perform manual registration to obtain preliminary spatial transformation parameters;
ii) perform two-dimensional in-plane registration separately for the two DRR/X-ray perspective view pairs to obtain the two-dimensional translations $T_1$, $T_2$ in the corresponding plane coordinate systems and the rotation angles $\theta_1$ and $\theta_2$ about the imaging centers; according to the orthogonal relation of the two registration planes, combine the two-dimensional translations and rotations into translation and rotation parameters in three-dimensional space, update the DRRs and register again until convergence, and compose the three-dimensional spatial transformations of all iterations into the complete registered three-dimensional spatial transformation;
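Combining the two in-plane translations into a 3D translation can be sketched as follows. The geometry is an assumption for illustration: plane 1 is taken to observe (x, z) and plane 2 to observe (y, z), so the shared z component is seen twice and is averaged:

```python
import numpy as np

def fuse_translations(t1, t2):
    """Fuse 2D translations from two mutually orthogonal imaging planes
    into one 3D translation.  Assumed geometry (illustrative only):
    plane 1 observes (x, z), plane 2 observes (y, z); the z component,
    visible in both planes, is averaged."""
    x, z1 = t1
    y, z2 = t2
    return np.array([x, y, 0.5 * (z1 + z2)])

# hypothetical per-plane registration results
t3d = fuse_translations(t1=(2.0, 1.0), t2=(-1.0, 1.2))
```

The patent's procedure additionally fuses the in-plane rotations and iterates with re-rendered DRRs; this sketch covers only the translation part.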
iii) the spatial transformation obtained in step ii consists of a rotation matrix $R$ and a translation vector $d = (x, y, z)^T$. Writing $R = R_\psi R_\phi R_\theta$ and comparing matrix entries gives

$$\phi = \arcsin R_{13}, \quad \theta = \operatorname{atan2}(-R_{12},\, R_{11}), \quad \psi = \operatorname{atan2}(-R_{23},\, R_{33}),$$

which, together with $d$, yields the x, y, z, θ, φ and ψ required for moving the table top.
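Recovering the three angles from the registered rotation matrix can be sketched as below. The axis assignment (θ about z, φ about y, ψ about x) and the composition order R = R_ψ R_φ R_θ are assumptions for illustration, since the patent's defining figure equations are not reproduced in the text:

```python
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def euler_from_R(R):
    """Invert R = rot_x(psi) @ rot_y(phi) @ rot_z(theta) entry by entry.
    Valid away from the gimbal-lock case cos(phi) = 0."""
    phi = np.arcsin(np.clip(R[0, 2], -1.0, 1.0))
    theta = np.arctan2(-R[0, 1], R[0, 0])
    psi = np.arctan2(-R[1, 2], R[2, 2])
    return theta, phi, psi

# round trip: build R from known angles and recover them
R = rot_x(0.5) @ rot_y(-0.2) @ rot_z(0.3)
theta, phi, psi = euler_from_R(R)
```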
2. Fig. 3 shows a C-arm imaging system; the application steps are as follows:
a) the object 2 is placed on the table top 3, and the table top 3 is moved to an initial position according to the processing planned position;
b) rotate the gantry and capture N transmission images from the X-ray source 4 at different angles;
c) from the N X-ray perspective views, reconstruct a three-dimensional computed tomography image of the target region using the FDK algorithm; based on this image, obtain the scatter component of each X-ray perspective view by simulation and remove it; iterate these steps until convergence or until a set number of iterations is reached;
d) select M of the N X-ray perspective views (the value of M is determined by the real-time requirements), generate the DRR images corresponding to the M X-ray perspective views, and obtain the target three-dimensional spatial transformation by the following steps:
i) compare the DRR and the X-ray perspective view; if the two views differ substantially, if automatic registration alone cannot find suitable spatial transformation parameters, or if manual registration is chosen as the method, perform manual registration to obtain preliminary spatial transformation parameters;
ii) perform two-dimensional in-plane registration separately for the M DRR/X-ray perspective view pairs to obtain the two-dimensional translations $T_1, T_2, \dots, T_M$ in the corresponding plane coordinate systems and the rotation angles $\theta_1, \theta_2, \dots, \theta_M$ about the imaging centers;
iii) direct perspective-based three-dimensional registration: establish the correspondence from a CT point to a pixel on the DRR of the $k$-th perspective view:

$$V_k(T_k(x_i)) = \sum_{z \in \mathrm{ray}(T_k(x_i),\, c_k)} \mathrm{Vol}(z)$$

where $T_k(x_i)$ is the spatially transformed coordinate of the $i$-th pixel of view $k$, $V_k(T_k(x_i))$ is the pixel value at that point, $\mathrm{Vol}(z)$ is the density at point $z$ in space, and $z \in \mathrm{ray}(T_k(x_i), c_k)$ means that $z$ lies on the ray from the X-ray source point $c_k$ to $T_k(x_i)$; taking the total mutual information over all pairs of X-ray perspective views and DRRs as the objective function, the optimal spatial transformation parameters are obtained by gradient descent;
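The mutual-information objective can be illustrated with a standard histogram estimate between one image pair; the per-angle values would then be summed into the total objective. The bin count and test images are invented for illustration:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram estimate of the mutual information between an X-ray
    perspective view and its DRR (bin count is an illustrative choice)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
mi_self = mutual_information(img, img)     # identical images: high MI
mi_noise = mutual_information(img, noise)  # independent images: near zero
```

Accumulating `mutual_information` over the M view/DRR pairs gives the total objective that the gradient descent in step iii maximizes.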
iv) the spatial transformation obtained in the preceding steps consists of a rotation matrix $R$ and a translation vector $d = (x, y, z)^T$. Writing $R = R_\psi R_\phi R_\theta$ and comparing matrix entries gives

$$\phi = \arcsin R_{13}, \quad \theta = \operatorname{atan2}(-R_{12},\, R_{11}), \quad \psi = \operatorname{atan2}(-R_{23},\, R_{33}),$$

from which the x, y, z, θ, φ and ψ required for moving the table top are obtained.

Claims (7)

1. A method of multi-perspective coupled table top object positioning, comprising the steps of:
the method comprises the following steps: inputting accurate position information of an object in a three-dimensional computed tomography image and a positioning scheme, placing the object on a table top, and performing initial positioning of the object according to the positioning scheme;
step two: shooting the side of the placed object to form a plurality of two-dimensional X perspective views with different imaging angles;
step three: preprocessing the two-dimensional X-ray perspective views, so that pixel values are proportional to the density of the object integrated along each X-ray path;
step four: rigid geometric registration from a two-dimensional X perspective to a three-dimensional computed tomography image is carried out to obtain translation and rotation parameters, and the table top is moved according to the parameters to enable the object to reach a preset treatment position.
2. The multi-perspective coupled table object positioning method of claim 1, wherein in step four, the registration method is based on registering 1 to a plurality of two-dimensional X-ray perspectives with the three-dimensional computed tomography image to obtain a rigid registration transformation.
3. A method for multi-perspective coupled table top object positioning as claimed in claim 1, wherein in step three, the pre-processing of the two-dimensional X-perspective comprises the steps of:
the method comprises the following steps: correcting X-ray scattering, and removing low-frequency scattering components on an X perspective view;
step two: the X-ray hardening correction makes the pixel value of the X-ray perspective and the density of the object passing through form a linear relation.
4. The method as claimed in claim 1, wherein in step four, the translation vector and the rotation angles around the x, y and z axes are interactively adjusted until the digitally reconstructed X-ray perspective view, generated dynamically from the translation and rotation, coincides with the captured perspective view, realizing rigid geometric registration.
5. The multi-perspective coupled table object positioning method of claim 1, wherein in step four, based on the two-dimensional registration result of each perspective view and the digitally reconstructed X perspective view of the corresponding angle, iterative optimization forms a three-dimensional transformation to achieve rigid geometric registration.
6. The method as claimed in claim 1, wherein in the fourth step, based on an iterative three-dimensional projection gradient descent method, randomly selecting 1-2 perspective views from the plurality of perspective views each iteration, and calculating the gradient of the spatial transformation to obtain the rigid registration transformation of the multi-perspective views and the computed tomography image.
7. A multi-perspective coupled table object positioning method as claimed in claim 1, wherein in step four, the rigid geometric registration can choose to discard any degree of freedom, calculate and output the registration result as translation only or translation combined with rotation.
CN202110318029.4A 2021-03-25 2021-03-25 Multi-perspective-view-coupled table object positioning method Pending CN112837358A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110318029.4A CN112837358A (en) 2021-03-25 2021-03-25 Multi-perspective-view-coupled table object positioning method

Publications (1)

Publication Number Publication Date
CN112837358A true CN112837358A (en) 2021-05-25

Family

ID=75930581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110318029.4A Pending CN112837358A (en) 2021-03-25 2021-03-25 Multi-perspective-view-coupled table object positioning method

Country Status (1)

Country Link
CN (1) CN112837358A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113804105A * 2021-08-17 2021-12-17 中科超精(南京)科技有限公司 Accelerator radiation field geometry and perspective imaging geometry coupling calibration device and method
CN113804105B * 2021-08-17 2024-04-05 中科超精(南京)科技有限公司 Accelerator field geometric and perspective imaging geometric coupling calibration device and method

Similar Documents

Publication Publication Date Title
US11633629B2 (en) Method of calibration of a stereoscopic camera system for use with a radio therapy treatment apparatus
Kim et al. A feasibility study of mutual information based setup error estimation for radiotherapy
CN107281652B (en) Positioning device
KR20000052875A (en) Apparatus for matching x-ray images with reference images
US20160155228A1 (en) Medical image generation apparatus, method, and program
WO1998002091A1 (en) High-speed inter-modality image registration via iterative feature matching
US9566039B2 (en) Bed positioning system for radiation therapy
GB2532077A (en) Method of calibrating a patient monitoring system for use with a radiotherapy treatment apparatus
JP2008043567A (en) Positioning system
JP6305250B2 (en) Image processing apparatus, treatment system, and image processing method
JP2008022896A (en) Positioning system
CN112837358A (en) Multi-perspective-view-coupled table object positioning method
CN111615365A (en) Positioning method and device and radiotherapy system
CN108338802A (en) Method for reducing image artifacts
US10565745B2 (en) Fast projection matching method for computed tomography images
CN113587810A (en) Method and device for generating light source position
CN113538259A (en) Real-time geometric correction method for perspective imaging device
JP7093075B2 (en) Medical image processing equipment, medical image processing methods, and programs
CN111603689A (en) DR image guiding and positioning method and device
WO2023157616A1 (en) Positioning device, radiation therapy device, and positioning method
TWI645836B (en) Particle beam therapy apparatus and digital reconstructed radiography image creation method
CN111615413B (en) Positioning method and device and radiotherapy system
CN113592936A (en) Method and device for generating light source position
KR20240054882A (en) 3D computed tomography apparatus and method
WO2023079811A1 (en) Positioning device, radiation therapy device, and positioning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination