CN117462163B - Volume layer image generation method, device and system, electronic equipment and storage medium - Google Patents


Info

Publication number: CN117462163B (application CN202311817984.8A)
Authority: CN (China)
Prior art keywords: target, point, line segment, projection, image
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN117462163A
Inventors: 汪令行, 马骏骑, 姚玉成, 石彤彤
Assignee (original and current): Hefei Yofo Medical Technology Co., Ltd.
Application filed by Hefei Yofo Medical Technology Co., Ltd.
Priority to CN202311817984.8A; published as CN117462163A, granted and published as CN117462163B

Classifications

    • A61B6/032 — Transmission computed tomography [CT] (A — Human Necessities; A61B — Diagnosis; Surgery; Identification; A61B6/00 — Apparatus or devices for radiation diagnosis; A61B6/03 — Computed tomography [CT])
    • A61B6/4429 — Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units
    • A61B6/5211 — Devices using data or image processing specially adapted for radiation diagnosis, involving processing of medical diagnostic data

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Optics & Photonics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Pulmonology (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to the technical field of image processing and provides a volume layer image generation method, apparatus, system, electronic device, and storage medium. According to the present invention, deviations and artifacts in the volume layer image can be avoided to some extent.

Description

Volume layer image generation method, device and system, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, a system, an electronic device, and a storage medium for generating a volume layer image.
Background
In the field of oral medicine, doctors can learn about conditions in a patient's oral cavity, such as the condition of the teeth, through volume layer images. Currently, there are two main ways of generating a volume layer image with an imaging system.
The first is to obtain a three-dimensional CBCT (Cone Beam CT) reconstruction result and then apply an algorithm directly to it to compute a volume layer image. Because this approach depends directly on the CT reconstruction result, any deviation or artifact in the reconstruction also appears in the generated volume layer image.
The second is to photograph the subject directly with an imaging system to generate a volume layer image. When the curved surface or plane of the volume layer is set, there is a deviation between the set curved surface or plane and the actually desired one, so the photographed volume layer image hardly reflects the desired region.
Disclosure of Invention
To solve at least one of the above technical problems, the present invention provides a volume layer image generation method, apparatus, system, electronic device, and storage medium.
The first aspect of the present invention provides a method for generating a volume layer image, including: acquiring a CT image of a subject, wherein the CT image is obtained by reconstructing projection data of the subject; determining a target line segment according to the CT image, wherein the shape of the target line segment conforms to the shape characteristics of a region of interest formed by a desired observation object in the CT image; for each target point on the target line segment, determining, according to a projection line formed between the target point and a corresponding projection point, the position corresponding to the target point on a detector located at a corresponding position, wherein the position of the projection point is determined so as at least to reduce the information of undesired observation objects contained in the target projection data, and the projection point is located on the projection line; and generating a volume layer image of a target surface according to the target projection data at the projection points, wherein the target surface is obtained through the target line segment.
According to one embodiment of the present invention, the desired observation object includes part or all of teeth of the subject.
According to one embodiment of the present invention, the method for determining the target line segment according to the CT image includes: obtaining a first image containing a dentition area according to the CT image, wherein the dentition area comprises a region of interest formed by the expected observation object in the CT image; and determining a target line segment according to the first image, wherein the target line segment is positioned in the dentition area.
According to one embodiment of the invention, the first image is a cross-sectional image.
According to one embodiment of the present invention, the method for determining the target line segment according to the first image includes: determining a plurality of position points according to the received input information, and fitting the plurality of position points to obtain a target line segment, wherein the plurality of position points are positioned in the dentition area; or generating a dental arch curve according to the first image, and determining a target line segment according to the dental arch curve, wherein the target line segment is part or all of the dental arch curve.
According to one embodiment of the present invention, the means for determining a target line segment from the dental arch curve comprises: directly taking the dental arch curve as a target line segment; or determining a tooth range according to the received input information, and taking the part corresponding to the tooth range in the dental arch curve as a target line segment; or determining a single position point according to the received input information, determining a first position point closest to the single position point in the dental arch curve, and taking a line segment with a preset length in a tangent line at the first position point as a target line segment, wherein the single position point is positioned in the dentition area, and the target line segment is at least partially positioned in the area of the tooth where the single position point is positioned.
According to one embodiment of the present invention, the projection data of the subject are obtained by photographing the subject while rotating about a single, fixed rotation center.
According to one embodiment of the invention, the corresponding position of the detector is obtained from the corresponding source position, which is determined from the projection line and the imaging geometry.
According to one embodiment of the invention, the projection point is located outside the area of the tooth.
According to one embodiment of the invention, the projection point is located inside the dental arch curve.
According to one embodiment of the present invention, the projection points corresponding to all target points on the target line segment are a single independent point, a plurality of independent points, or points that form a projection point line segment.
According to one embodiment of the invention, if the projection points form a projection point line segment, the projection point line segment is determined by the position and the shape of the target line segment, wherein the projection point line segment is a single-segment line, a multi-segment line, or a curve, and the position of a projection point on the projection point line segment changes with the position of the corresponding target point on the target line segment.
According to one embodiment of the invention, the direction of change of the position of the projected point on the projected point segment is opposite to the direction of change of the position of the target point on the target segment in the coronal axis direction.
According to one embodiment of the present invention, different target points correspond to projection points that are different from each other.
According to one embodiment of the invention, the target surface is obtained by extending the target line segment along a vertical axis.
According to one embodiment of the present invention, the method for generating a volume layer image of a target surface according to target projection data at the projection point includes: and arranging the target projection data of each target point in a target surface so as to obtain a volume layer image.
According to one embodiment of the present invention, before the target projection data of the target points are arranged in the target surface to obtain the volume layer image, the target projection data are scaled according to the positions of the target points, so that the imaging magnification of all the target projection data is the same.
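The patent gives no formula for this scaling. Under the common fan-beam model in which the magnification of a target point is the ratio of source-detector distance (SDD) to source-object distance (SOD), one plausible sketch is the following (the function name, the SDD/SOD model, and the center-based resampling scheme are all assumptions, not from the patent):

```python
import numpy as np

def normalize_magnification(column, sdd, sod, sod_ref):
    """Resample one detector column so that its magnification (SDD/SOD)
    matches the magnification at a reference source-object distance sod_ref.
    Illustrative sketch only; the patent does not specify this scheme."""
    m = sdd / sod            # magnification at this target point
    m_ref = sdd / sod_ref    # reference magnification to equalize to
    factor = m / m_ref       # > 1 means this column is more magnified
    n = len(column)
    # output pixel i reads input position (i - c) * factor + c, which
    # shrinks the column about its center when factor > 1
    src = (np.arange(n) - n / 2) * factor + n / 2
    return np.interp(src, np.arange(n), column)

col = np.linspace(0.0, 1.0, 64)
same = normalize_magnification(col, sdd=600.0, sod=300.0, sod_ref=300.0)
print(np.allclose(same, col))   # identical geometry leaves the column unchanged
```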
The second aspect of the present invention proposes a volume layer image generating apparatus, comprising: a CT image acquisition module for acquiring a CT image of a subject, wherein the CT image is obtained by reconstructing projection data of the subject; a target line segment determining module for determining a target line segment according to the CT image, wherein the shape of the target line segment conforms to the shape characteristics of a region of interest formed by a desired observation object in the CT image; a projection point determining module for determining, for each target point on the target line segment, the position corresponding to the target point on a detector located at a corresponding position according to a projection line formed between the target point and a corresponding projection point, wherein the position of the projection point is determined so as at least to reduce the information of undesired observation objects contained in the target projection data, and the projection point is located on the projection line; and a volume layer image generation module for generating a volume layer image of a target surface according to the target projection data at the projection points, wherein the target surface is obtained through the target line segment.
A third aspect of the present invention proposes a volume layer image generation system, comprising: a source for emitting X-rays; a detector for acquiring projection data from received X-rays from the source; a control mechanism for controlling the source and the detector to synchronously rotate around the subject; and a volume layer image generating device according to any of the above embodiments.
A fourth aspect of the present invention proposes an electronic device comprising: a memory storing execution instructions; and a processor, the processor executing the execution instructions stored in the memory, so that the processor executes the volume layer image generation method according to any one of the above embodiments.
A fifth aspect of the present invention proposes a readable storage medium having stored therein execution instructions which, when executed by a processor, are adapted to implement a volume layer image generation method according to any of the above-mentioned embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention.
Fig. 1 is a flow chart of a volume layer image generation method according to an embodiment of the present invention.
FIG. 2 is a flow diagram of determining a target line segment according to one embodiment of the invention.
Fig. 3 is a schematic illustration of a first image according to one embodiment of the invention.
Fig. 4 is a schematic flow chart of determining a target line segment according to another embodiment of the invention.
Figs. 5 and 6 are schematic diagrams of first images according to other embodiments of the present invention.
Fig. 7 is a schematic diagram of a photographing mode according to an embodiment of the present invention.
Figs. 8 and 9 are schematic diagrams of projection lines and corresponding detector positions in the case where the target points on a target line segment correspond to a plurality of independent points.
Fig. 10 is a schematic diagram of projection lines and corresponding positions of the detector in the case where all target points on a target line segment correspond to a projection point line segment.
Fig. 11 is a flow diagram of generating a volumetric layer image from target projection data, according to an embodiment of the invention.
Fig. 12 is a schematic diagram of a volume layer image generation apparatus employing a hardware implementation of a processing system according to one embodiment of the invention.
Fig. 13 is a block diagram of a structure of a volume layer image generation system according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and not restrictive of it. It should be further noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
In addition, the embodiments of the present invention and the features of the embodiments may be combined with each other provided they do not conflict. The technical scheme of the present invention will be described in detail below with reference to the accompanying drawings in combination with the embodiments.
Unless otherwise indicated, the exemplary implementations/embodiments shown are to be understood as providing exemplary features of various details of some of the ways in which the technical concepts of the present invention may be practiced. Thus, unless otherwise indicated, the features of the various implementations/embodiments may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concepts of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising" and variations thereof are used in this specification, they specify the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It is also noted that, as used herein, the terms "substantially," "about," and similar terms are used as terms of approximation rather than of degree, and serve to account for the inherent deviations of measured, calculated, and/or provided values that would be recognized by one of ordinary skill in the art.
The volume layer image generation method, apparatus, system, electronic device, and storage medium of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a volume layer image generation method according to an embodiment of the present invention. Referring to fig. 1, the present invention provides a volume layer image generating method M10, and the volume layer image generating method M10 of the present embodiment may include the following steps S100, S200, S300 and S400.
S100, acquiring CT images of the object. Wherein, the CT image is obtained by reconstructing projection data of the object.
S200, determining a target line segment according to the CT image. The shape of the target line segment accords with the shape characteristics of a region of interest formed by a target observation object in the CT image.
S300, for each target point on the target line segment, determining the position corresponding to the target point on the detector located at the corresponding position, according to the projection line formed between the target point and the corresponding projection point. The position of the projection point is determined so as at least to reduce the information of undesired observation objects contained in the target projection data; the projection point is located on the projection line.
S400, generating a volume layer image of the target surface according to the target projection data at the projection points. The target surface is obtained through a target line segment.
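As a rough illustration of how steps S300 and S400 fit together, the toy sketch below takes one detector column per target point and arranges the columns side by side. All shapes, values, and the frame/column mapping (which stands in for the projection-point geometry of S300) are assumptions for illustration, not from the patent:

```python
import numpy as np

def generate_volume_layer_image(projection_frames, column_index_per_point):
    """For each target point, read one detector column from the frame chosen
    for it (S300's output), then arrange the columns along the target line
    segment to form the volume layer image (S400). Toy sketch only."""
    columns = [projection_frames[f][:, u] for f, u in column_index_per_point]
    return np.stack(columns, axis=1)   # columns ordered along the segment

# S100/S200 are assumed done: 8 target points, each mapped to a
# (frame index, detector column) pair by the (hypothetical) S300 geometry.
rng = np.random.default_rng(0)
frames = [rng.random((16, 12)) for _ in range(8)]   # 8 frames, 16x12 detector
mapping = [(i, 5) for i in range(8)]                # one column per frame
image = generate_volume_layer_image(frames, mapping)
print(image.shape)   # (16, 8): detector rows x number of target points
```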
According to the volume layer image generation method provided by the embodiment of the invention, after the CT scan of the subject is completed, the curved surface or plane of the volume layer image can be determined using the obtained CT image, and the volume layer image is then generated using part of the projection data acquired during the CT scan. Compared with synthesizing the volume layer image directly from the CT reconstruction result, this avoids, to a certain extent, carrying the deviations and artifacts of the reconstruction into the volume layer image. Compared with photographing the subject directly with a panoramic machine, the curved surface or plane determined from the CT image is more accurate and better matches the user's expectation, so the volume layer image can accurately reflect the desired observation area. Compared with photographing with a dental film machine to obtain a small dental film, no separate exposure is needed, which reduces the radiation dose, saves imaging time, removes the need for a dedicated dental film machine, and lowers equipment cost.
The volume layer image is a tomographic slice, also called a tomogram, such as a curved-surface panoramic tomogram of the oral cavity. The tomogram is a two-dimensional image.
The CT image of the subject may be obtained by reconstructing projection images acquired during a previous CT scan of the subject. A need for an oral tomogram may arise after the CT scan has been performed: for example, when patient motion during the scan blurs the image and affects diagnosis; when high-density objects such as metal teeth produce metal artifacts, or artifacts appear around an implant during implantation; or when a planar slice with locally higher resolution (similar to a periapical film) is required. The need may also be known before the examination starts, with both a CT scan and an oral tomogram required. In all of these cases, the CT image and the projection data obtained from the CT scan can be used directly to generate the oral tomogram.
The target line segment may be a straight line, a broken line, or a curve; for example, it may be part or all of a dental arch curve. The target surface is obtained through the target line segment and characterizes the shape, size, and position of the volume layer image within the imaging area.
The desired observation object is the object the user wishes to observe through the volume layer image, and may include, for example, part or all of the teeth of the subject. The region of interest includes the region of the desired observation object; for example, when the desired observation object is all of the teeth, the region of interest may include the entire dentition region. It will be appreciated that the region of interest may be two-dimensional or three-dimensional.
Because the CT image reflects the state of the subject's oral cavity, using it to set the target line segment allows morphological characteristics such as the curvature, position, and length of the target line segment to match the shape characteristics of the tooth region to be observed, so that the target surface better matches the region the user expects to observe. Compared with generating the volume layer image from a preset curved surface or plane without knowing the exact situation of the subject's oral cavity, this improves the effect and accuracy of the image content. In addition, compared with generating the volume layer image by computing directly on the CT reconstruction result, both the image content and the image style are closer to a traditional curved-surface tomogram, and the image content is better displayed.
FIG. 2 is a flow diagram of determining a target line segment according to one embodiment of the invention. Referring to fig. 2, in step S200, the method of determining the target line segment according to the CT image may include the following steps S210 and S220.
S210, obtaining a first image containing dentition areas according to the CT image. Wherein the dentition region comprises a region of interest formed by a desired observation object in the CT image.
S220, determining a target line segment according to the first image. Wherein the target line segment is located in the dentition area.
Since the position of the dentition region in the subject can be roughly determined, a first image including the dentition region can be acquired from the three-dimensional CT image in a preset position region. The first image may be a two-dimensional image or a three-dimensional image. When the first image is a two-dimensional image, the region of interest may also be a two-dimensional image. When the first image is a three-dimensional image, the region of interest may also be a three-dimensional image.
The first image may be a cross-sectional image, for example. The cross-sectional image is perpendicular to a vertical axis in the basic axis of the human body, can show all teeth, and can be located near the lower root of the teeth. Fig. 3 is a schematic illustration of a first image according to one embodiment of the invention. Referring to fig. 3, a is the dentition area and C is the arch curve of the dentition area.
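A minimal sketch of step S210 under the description above: take the axial slice of the 3D CT volume near the tooth-root level as the cross-sectional first image. The volume layout and the slice index standing in for the preset position region are hypothetical:

```python
import numpy as np

def first_image_from_ct(ct_volume, root_level_index):
    """ct_volume is indexed (z, y, x); return one cross-section perpendicular
    to the vertical axis, e.g. near the roots of the teeth. Sketch only; the
    patent does not specify how the slice level is chosen."""
    return ct_volume[root_level_index]

volume = np.zeros((100, 64, 64))
volume[40] = 1.0                       # pretend the dentition sits at z = 40
img = first_image_from_ct(volume, 40)
print(img.shape, img.max())            # (64, 64) 1.0
```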
Fig. 4 is a schematic flow chart of determining a target line segment according to another embodiment of the invention. Referring to fig. 4, in step S220, the method of determining the target line segment according to the first image may include the following step S221 or step S222.
S221, determining a plurality of position points according to the received input information, and fitting the plurality of position points to obtain a target line segment. Wherein the plurality of location points are located in the dentition area.
S222, generating a dental arch curve according to the first image, and determining a target line segment according to the dental arch curve. Wherein the target line segment is part or all of the dental arch curve.
The target line segment may also be determined entirely by user definition, without the dental arch curve (corresponding to step S221). The user views the first image through the computer system and inputs corresponding information so that the system determines a plurality of position points, which are fitted into a curved or straight target line segment. The user may input information by clicking the teeth to be observed or by entering tooth numbers, thereby determining the selected teeth. The center point of each selected tooth may then be obtained and used as a position point for fitting; alternatively, when the user selects teeth by clicking, the clicked coordinate points may be used directly as the position points to be fitted.
It will be appreciated that the teeth selected, or the numbers entered, by the user may be all or only some of the teeth. If the user selects all teeth, the finally generated volume layer image is a panoramic tomogram of the oral cavity; if the user selects only some of the teeth, the resulting volume layer image is a local tomogram of the oral cavity.
The target line segment may also be obtained using the dental arch curve (corresponding to step S222). Because the dental arch curve represents the position of each tooth, using part or all of it directly as the target line segment can, to a certain extent, meet the user's requirements for the desired observation area.
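One plausible way to implement the fitting in step S221 is a low-order polynomial fit through the user-selected position points; the fitting model, degree, and coordinates below are assumptions, since the patent does not specify a fitting method:

```python
import numpy as np

def fit_target_segment(points, degree=2, samples=50):
    """points: (N, 2) array of (x, y) position points inside the dentition
    area (e.g. tooth center points or clicked coordinates). Returns a densely
    sampled curve through them as the target line segment. Sketch only."""
    pts = np.asarray(points, dtype=float)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], degree)
    xs = np.linspace(pts[:, 0].min(), pts[:, 0].max(), samples)
    return np.column_stack([xs, np.polyval(coeffs, xs)])

# e.g. center points of four selected teeth (hypothetical coordinates)
segment = fit_target_segment([(-30, 0), (-10, 18), (10, 18), (30, 0)])
print(segment.shape)   # (50, 2)
```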
Illustratively, in step S222, the manner of determining the target line segment according to the dental arch curve may employ any one of the following three manners.
In the first mode, the dental arch curve is used directly as the target line segment. Referring to fig. 3, when the user wants to observe all of the teeth, i.e., the desired observation objects are all of the teeth, the generated dental arch curve C can be used directly as the target line segment. The volume layer image generated from the target line segment C of fig. 3 is a panoramic tomogram of the oral cavity.
In the second mode, the tooth range is determined according to the received input information, and the portion of the dental arch curve corresponding to the tooth range is taken as the target line segment. Fig. 5 is a schematic view of a first image according to another embodiment of the invention. Referring to fig. 5, C is the dental arch curve of the dentition area, comprising a dotted portion and a solid portion. The six teeth at the top of fig. 5 form the tooth range, and C1, the solid portion of the curve, is the part of the dental arch curve corresponding to the tooth range (i.e., the target line segment). In this mode, the user can set the tooth range by clicking the six teeth in the figure, or by inputting their tooth numbers. The volume layer image generated from the target line segment C1 of fig. 5 is a local tomogram of the oral cavity.
In the third mode, a single position point is determined according to the received input information, a first position point closest to the single position point on the dental arch curve is determined, and a line segment of preset length on the tangent at the first position point is taken as the target line segment. The single position point is located in the dentition area, and the target line segment is at least partially located in the region of the tooth where the single position point lies.
Fig. 6 is a schematic view of a partial region of a first image according to yet another embodiment of the present invention. Referring to fig. 6, only a portion of teeth are shown, wherein C is an arch curve, P1 is a single location point, T is the tooth where P1 is located, P2 is a first location point, C2 is a target line segment formed by a tangent line, and L is a preset length, i.e., the length of C2.
The user selects the tooth T as the desired observation object by clicking the position point P1. The first position point P2 closest to P1 is then determined on the dental arch curve C, the tangent to C at P2 is obtained with P2 as the tangent point, and a segment of preset length L is cut from this tangent as the target line segment C2. When cutting the length L, the point P2 can be taken as the center of L to obtain the target line segment C2, so that the volume layer image generated from C2 is a local tomogram of the oral cavity that displays the condition of the tooth T in a targeted manner.
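The third mode above can be sketched as follows, with the dental arch curve approximated by a polyline; the coordinates, the finite-difference tangent, and the helper name are illustrative assumptions:

```python
import numpy as np

def tangent_segment(arch_points, click, length, samples=20):
    """arch_points: (N, 2) polyline approximating the dental arch curve C.
    click: (x, y) of the single position point P1. Returns the target line
    segment C2 of preset length, centered on the nearest curve point P2."""
    arch = np.asarray(arch_points, dtype=float)
    d = np.linalg.norm(arch - np.asarray(click, dtype=float), axis=1)
    i = int(np.argmin(d))                       # P2: nearest point on C
    t = arch[min(i + 1, len(arch) - 1)] - arch[max(i - 1, 0)]
    t = t / np.linalg.norm(t)                   # finite-difference tangent
    s = np.linspace(-length / 2, length / 2, samples)
    return arch[i] + s[:, None] * t             # C2, centered on P2

# hypothetical parabolic arch, click below its vertex, L = 10
xs = np.linspace(-40, 40, 81)
arch = np.column_stack([xs, 0.01 * xs ** 2])
seg = tangent_segment(arch, click=(0.0, -5.0), length=10.0)
print(np.round(np.linalg.norm(seg[-1] - seg[0]), 3))   # 10.0
```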
Fig. 7 is a schematic diagram of a photographing mode according to an embodiment of the present invention. Referring to fig. 7, when acquiring the projection data of the subject, the subject may be photographed while rotating about a single rotation center. In fig. 7, a single-rotation-center photographing mode is used: SR is the source, D is the detector, and O is the rotation center. The source SR and the detector D rotate synchronously about the fixed rotation center O during shooting, and the motion trajectory of the source lies on a single circle. X-rays are emitted by the source SR and data are acquired by the detector D to obtain the projection data of the subject. Compared with a multi-rotation-center photographing mode, the rotation axis does not need to be moved during rotation, the motion is simpler, and the requirements on structure and control precision are reduced. In addition, the volume layer image generation method provided by the invention allows CT imaging, volume layer slice generation, and frontal and lateral slice generation to be integrated on an imaging system with a single rotation center, so that one imaging system provides all three functions and equipment cost is reduced. Furthermore, compared with photographing the subject with a panoramic machine, the present embodiment can generate the volume layer image using projection data acquired with the detector D offset.
Fig. 8 is a schematic diagram of projection lines and corresponding detector positions for an embodiment in which all target points on the target line segment correspond to multiple independent points. Referring to fig. 8, SR is the source, D is the detector, O is the fixed rotation center, p1 is one of the target points on the dental arch curve serving as the target line segment, and o1, o2, and o3 are three independent projection points, of which o1 is the projection point corresponding to the target point p1. Tr is the motion trajectory of the source SR, r1 is the projection line determined by the target point p1 and the projection point o1, and d1 is the projection point on the detector D corresponding to the target point p1.
When determining the projection point on the detector from the projection line, the corresponding position of the detector can be obtained from the corresponding source position, which in turn can be obtained from the projection line and the imaging geometry. For example, for the target point p1 on the target line segment, after its corresponding projection point o1 is obtained, the line through p1 and o1 is the projection line r1. The intersection of the projection line r1 with the source rotation trajectory Tr given by the imaging geometry is the corresponding position of the source SR. Since the spatio-temporal trajectories of the source and detector are preset for the photographing process, the corresponding rotational position of the detector D can be obtained once the source SR is at a given rotational position. The intersection of the projection line r1 with the detector D at that rotational position is then the projection point d1. Finally, the projection data at the point d1, within the projection data acquired by the detector D at that rotational position, are taken as the target projection data of the target point p1.
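Finding the source position SR on the projection line reduces to intersecting that line with the circular source trajectory Tr. A minimal 2-D sketch, with illustrative names (the patent fixes the geometry but not an API):

```python
import math

def line_circle_intersections(pt_a, pt_b, center, radius):
    """Intersect the projection line through pt_a (a target point) and pt_b
    (its projection point) with the circular source trajectory of the given
    radius about the rotation centre. Returns both intersections; the source
    position SR is the one on the source side of the subject. Assumes the
    line actually crosses the circle. Illustrative sketch only."""
    ux, uy = pt_b[0] - pt_a[0], pt_b[1] - pt_a[1]
    n = math.hypot(ux, uy)
    ux, uy = ux / n, uy / n                      # unit direction along the line
    px, py = pt_a[0] - center[0], pt_a[1] - center[1]
    # |p + s*u| = radius gives a quadratic s^2 + b*s + c = 0
    b = 2.0 * (px * ux + py * uy)
    c = px * px + py * py - radius * radius
    root = math.sqrt(b * b - 4.0 * c)
    return [(pt_a[0] + s * ux, pt_a[1] + s * uy)
            for s in ((-b - root) / 2.0, (-b + root) / 2.0)]
```

Given SR, the preset source/detector trajectories yield the detector's rotational position, and intersecting the same line with the detector gives d1.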
Taking the tooth closest to the mandible in the sagittal axis direction as the first tooth, the projection point may be placed near the side of the first tooth close to the mandible, on the side of the oral cavity opposite the corresponding target point (the opposite side here being one of the left and right sides of the oral cavity). In fig. 8, the target point p1 is located on the first tooth on the left side of the oral cavity, and the projection point o1 is correspondingly located near the first tooth on the right side, so that the projection line r1 does not pass through bone structures that are not intended to be observed, such as other teeth or the cervical vertebrae. The target projection data at the projection point d1 are therefore less disturbed by projection data of the cervical vertebrae and other teeth, improving the quality and effect of the generated volume layer image.
For example, the projection points may be located outside the area of the teeth, to avoid the projection lines passing through teeth and the target projection data at the determined projection points therefore containing projection data of other teeth. In addition, the projection points may be located inside the dental arch curve; in fig. 8, the projection points o1, o2, and o3 are all located within the area enclosed by the dental arch curve.
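Whether a candidate projection point lies inside the area enclosed by the dental arch curve can be checked with standard even-odd ray casting over the sampled curve; a minimal sketch (the function name and the treatment of the curve as a closed polygon are illustrative assumptions):

```python
def point_in_closed_arch(point, arch_pts):
    """Test whether a candidate projection point lies inside the area
    enclosed by the dental arch curve (the sampled curve closed by the
    chord between its endpoints), via even-odd ray casting.
    Illustrative sketch; the patent does not mandate this test."""
    x, y = point
    poly = list(arch_pts)             # the closing edge is handled implicitly
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y):      # edge straddles the horizontal ray
            x_cross = xi + (y - yi) * (xj - xi) / (yj - yi)
            if x < x_cross:
                inside = not inside
        j = i
    return inside
```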
Illustratively, the projection points corresponding to all target points on the target line segment may be one independent point, a plurality of independent points, or may form a projection point line segment.
Fig. 8 shows the case where all the target points on the target line segment correspond to a plurality of independent points. As shown in fig. 8, the target line segment is the entire dental arch curve, which is divided into three arcs: all target points on the left arc correspond only to the projection point o1, all target points on the middle arc correspond only to the projection point o2, and all target points on the right arc correspond only to the projection point o3; the dashed lines connected to the projection points indicate the ranges of the corresponding arcs. A projection point and its corresponding target line segment may be located on opposite sides of the rotation center; for example, o1 lies on the opposite side of the left arc with respect to the rotation center O, and o3 lies on the opposite side of the right arc with respect to the rotation center O.
Fig. 9 is a schematic diagram of projection lines and corresponding detector positions for another embodiment in which all target points on the target line segment correspond to a plurality of independent points. Referring to fig. 9, p2 is another target point on the dental arch curve serving as the target line segment, its corresponding projection point is o2, r2 is the projection line through p2 and o2, and d2 is the projection point on the detector corresponding to p2. The remaining target points on the target line segment are handled analogously, yielding the projection points and target projection data of all target points on the target line segment. It can be understood that if the target line segment is a straight line segment, a plurality of connected straight line segments, or part of the dental arch curve, the case in which all target points correspond to a plurality of independent points also applies, and the number of projection points can be set according to the shape of the target line segment.
With continued reference to fig. 8, if the projection points corresponding to all target points on the target line segment form a single independent point, and the target line segment is the left arc of the dental arch curve, the projection point o1 may serve as the independent point corresponding to all target points on the left arc. If the target line segment is the entire dental arch curve, a point near the middle of the line connecting the two endpoints of the dental arch may be taken as the independent point corresponding to all target points on the dental arch.
For example, if the projection points form a projection point line segment, that segment may be determined by the position and shape of the target line segment; the projection point line segment may be a single straight segment, a polyline, or a curve, and the position of the projection point on the projection point line segment may vary as the position of the target point on the target line segment varies.
Fig. 10 is a schematic diagram of projection lines and corresponding detector positions for the case where all target points on the target line segment correspond to a projection point line segment. Referring to fig. 10, the line segment between points o4 and o5 is the projection point line segment formed by the projection points, o' is the projection point corresponding to the target point p2 and lies on the projection point line segment, r3 is the projection line through p2 and o', and d3 is the projection point on the detector corresponding to p2. The point pa is the left endpoint of the dental arch curve, pb is the boundary point between the tooth-region curve and the non-tooth-region curve in the left arc of the dental arch curve, and pc is the corresponding boundary point in the right arc.
The projection point line segment may lengthen as the length of the target line segment increases. It may be the single straight segment shown in fig. 10, a polyline formed by connecting several straight segments, or a curve.
If the target line segment is the entire dental arch curve, all target points from pa to pb in the left arc may correspond to the point o4. All target points between pb and pc correspond in sequence to the projection points between o4 and o5, the projection point moving from o4 to o5 as the target point moves from pb to pc; for example, the target point p2 on the pb-pc curve corresponds to the projection point o' on the straight segment o4o5. All target points from pc to the end of the right arc may correspond to the point o5. It will be appreciated that as the target point moves along the pb-pc curve, the corresponding projection point may move at uniform speed or at variable speed, for example accelerating first and then decelerating, or in other ways.
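The correspondence just described, where targets before pb collapse to o4, targets after pc collapse to o5, and targets in between slide along o4o5, can be sketched as follows; the uniform-speed profile and the parameter names are illustrative choices, since the patent also allows variable-speed profiles:

```python
def proxel_for_target(t, t_b, t_c, o4, o5):
    """Map a target point's arc-length parameter t along the dental arch
    curve to its projection point on the segment o4-o5, as in Fig. 10:
    targets with t < t_b map to o4, targets with t > t_c map to o5, and
    targets between pb and pc slide uniformly from o4 to o5."""
    frac = (t - t_b) / (t_c - t_b)
    frac = min(max(frac, 0.0), 1.0)   # clamp: pa..pb -> o4, pc..end -> o5
    return (o4[0] + frac * (o5[0] - o4[0]),
            o4[1] + frac * (o5[1] - o4[1]))
```

A variable-speed profile would simply replace the linear `frac` with any monotone function of t on [t_b, t_c].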
For example, the direction in which the projection point's position changes along the projection point line segment and the direction in which the target point's position changes along the target line segment may be opposite in the coronal axis direction. If the target line segment is part of the dental arch curve, for example a target line segment C1 in the middle arc of the dental arch curve, the target points on C1 may correspond in sequence, while moving in a first direction, to projection points on the straight segment o4o5 moving in the direction opposite to the first direction; that is, as the target point moves from left to right along the C1 curve, the corresponding projection point moves from right to left along the straight segment o4o5.
For example, the projection points corresponding to different target points may all be different. With continued reference to fig. 10, as the target point moves from one end pa of the dental arch curve to the other end, the corresponding projection point may follow a variable-speed motion profile, for example accelerating, then decelerating, then accelerating again, so that each target point on the dental arch curve corresponds to a distinct projection point.
After the position on the detector of the projection point of each target point on the target line segment is obtained in step S300, the target projection data of each target point in the target surface obtained by extending the target line segment can be acquired, and the volume layer image of the target surface is then generated using these target projection data.
Illustratively, the target surface may be obtained by extending the target line segment along the vertical axis, the vertical axis being perpendicular to both the sagittal and coronal axes. When the target line segment is stretched in the vertical direction, the resulting target surface is a curved surface if the target line segment is a curve, and a plane if the target line segment is straight; in either case the target surface is parallel to the vertical axis. It will be appreciated that the height of the extended target surface may be the maximum allowable height, or the segment may extend only some distance in the up-down direction without reaching the maximum or minimum allowable position.
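The vertical extrusion of the target line segment into the target surface can be sketched as follows, using a sampled representation; the names and the list-of-columns layout are illustrative assumptions:

```python
def extrude_segment(seg_pts, z_min, z_max, n_z):
    """Extend a sampled target line segment along the vertical axis into the
    target surface: every 2-D point of the segment gains a vertical column
    of n_z sample heights between z_min and z_max. Returns one column of
    (x, y, z) points per segment point. Illustrative sketch only."""
    step = (z_max - z_min) / (n_z - 1) if n_z > 1 else 0.0
    zs = [z_min + k * step for k in range(n_z)]
    return [[(x, y, z) for z in zs] for (x, y) in seg_pts]
```

Restricting [z_min, z_max] to less than the maximum allowable height corresponds to the partial-height target surface mentioned above.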
Fig. 11 is a flow diagram of generating a volume layer image from target projection data according to an embodiment of the invention. Referring to fig. 11, in step S400, the method for generating the volume layer image of the target surface from the target projection data at the projection points may include step S410.
S410, arranging target projection data of each target point in the target surface so as to obtain a volume layer image.
When the projection point of each target point is determined, the position or rotation angle of the detector corresponding to that projection point is also obtained. Referring to figs. 8 and 9, the position and rotation angle of the detector D in fig. 8 are those at the time the projection point d1 was determined, i.e., when the source SR is on the projection line r1, and the position and rotation angle of the detector D in fig. 9 are those at the time the projection point d2 was determined, i.e., when the source SR is on the projection line r2. From the position of the projection point on the detector and the rotational position/rotation angle of the detector, the projection data of the subject at that projection point and rotational position/rotation angle can be acquired. Corresponding target projection data can thus be acquired for each target point on the target surface, and arranging all the target projection data yields the volume layer image corresponding to the target surface.
Because only the target projection data are used to generate the volume layer image, fewer of the projection data affected by motion artifacts, metal artifacts, and other artifacts in the CT scan data are inherited, and the influence of such artifacts on the generated volume layer image is reduced.
When the target surface is obtained by extending the target line segment along the vertical axis, then once the projection point of each target point on the target line segment is determined, the projection data of all position points in the vertical column containing that projection point, within the projection data acquired by the detector at the corresponding rotational position/rotation angle, are taken as target projection data. Thus, when the projection point positions of all target points of a target line segment have been obtained, a corresponding column of target projection data is obtained for each projection point, and arranging these columns yields the volume layer image of the target surface. It will be appreciated that when the height of the target surface is less than the maximum allowable height, only the projection data of the position points in the corresponding portion of the vertical column need be taken as target projection data, and the height of the resulting volume layer image is then less than the maximum height.
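Arranging the per-projection-point columns into the volume layer image amounts to placing the columns side by side in target-point order. A minimal sketch, with illustrative names:

```python
def assemble_volume_image(columns):
    """Arrange the per-target-point columns of target projection data into
    the volume layer image: row r, column i of the output is sample r of
    the detector column read out for target point i. All columns are
    assumed to have equal height. Illustrative sketch only."""
    height = len(columns[0])
    return [[col[r] for col in columns] for r in range(height)]
```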
For example, before the target projection data of the target points in the target surface are arranged to obtain the volume layer image, the target projection data may be scaled according to the positions of the target points, so that the imaging magnification ratio of all the target projection data is the same.
When determining the projection points of the target points, the source position differs for different target points, and so does the detector position. For example, the distance between the target point p1 and the source SR in fig. 8 differs from the distance between the target point p2 and the source SR in fig. 9, and the distance between p1 and the detector D in fig. 8 likewise differs from the distance between p2 and the detector D in fig. 9. These differences make the magnification ratio at imaging time different. Therefore, the magnification ratio corresponding to each target point can be adjusted so that the magnification ratio of the target projection data of every target point is the same, avoiding distortion and deformation of the volume layer image caused by differing magnification ratios.
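The per-target-point magnification adjustment can be sketched as a resampling of each column about its centre. The cone-beam magnification of a target point is the source-to-detector distance over the source-to-object distance (SDD/SOD); the linear interpolation and the names below are illustrative assumptions, not the patent's prescribed method:

```python
def equalize_magnification(column, mag, mag_ref):
    """Resample one column of target projection data so that its effective
    magnification `mag` matches a common reference `mag_ref`. Columns
    acquired at higher magnification are shrunk towards their centre, and
    vice versa, via linear interpolation. Illustrative sketch only."""
    n = len(column)
    centre = (n - 1) / 2.0
    scale = mag / mag_ref
    out = []
    for k in range(n):
        src = centre + (k - centre) * scale   # source sample position
        src = min(max(src, 0.0), n - 1.0)     # clamp to the column extent
        i = int(src)
        frac = src - i
        nxt = column[min(i + 1, n - 1)]
        out.append(column[i] * (1.0 - frac) + nxt * frac)
    return out
```

Applying this with each column's own magnification against a shared reference makes all columns directly comparable before assembly.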
Fig. 12 is a schematic diagram of a volume layer image generation apparatus employing a hardware implementation of a processing system according to one embodiment of the invention. Referring to fig. 12, the present invention further provides a volume layer image generating apparatus 1000, where the volume layer image generating apparatus 1000 of this embodiment may include a CT image acquisition module 1002, a target line segment determining module 1004, a proxel determining module 1006, and a volume layer image generating module 1008.
The CT image acquisition module 1002 is configured to acquire a CT image of a subject, where the CT image is obtained by reconstructing projection data of the subject.
The target line segment determining module 1004 is configured to determine a target line segment according to the CT image, where a shape of the target line segment matches a shape feature of a region of interest formed by a desired observation object in the CT image. The desired observation target may include some or all of the teeth of the subject.
The manner in which the target line segment determination module 1004 determines the target line segment from the CT image may include the following steps: first, a first image containing a dentition area is obtained according to a CT image, wherein the dentition area comprises a region of interest formed by a desired observation object in the CT image. The first image may be a cross-sectional image. A target line segment is then determined from the first image, wherein the target line segment is located in the dentition area.
The target line segment determination module 1004 may determine the target line segment from the first image in either of the following ways. In one way, a plurality of position points are determined according to received input information and fitted to obtain the target line segment, the position points being located in the dentition area. In another way, a dental arch curve is generated according to the first image and the target line segment is determined from the dental arch curve, the target line segment being part or all of the dental arch curve.
The target line segment determination module 1004 may determine the target line segment according to a dental arch curve in any of three ways. In one mode, the dental arch curve is directly used as the target line segment. And in a second mode, determining the tooth range according to the received input information, and taking the part corresponding to the tooth range in the dental arch curve as a target line segment. And thirdly, determining a single position point according to the received input information, determining a first position point which is closest to the single position point in the dental arch curve, and taking a line segment with a preset length in a tangent line at the first position point as a target line segment. Wherein the single location point is located in the dentition area and the target line segment is located at least partially in the area of the tooth where the single location point is located.
The projection point determining module 1006 is configured to determine, for each target point on the target line segment, a projection point corresponding to the target point on the detector located at the corresponding position according to a projection line formed between the target point and the corresponding projection point, where the position of the projection point is determined to be capable of at least reducing information of an undesired observation object contained in the target projection data, and the projection point is located on the projection line.
When acquiring the projection data of the subject, the subject may be imaged with the source and detector rotating about the same rotation center. When determining the projection point on the detector from the projection line, the corresponding position of the detector can be obtained from the corresponding source position, which in turn can be obtained from the projection line and the imaging geometry. The projection point may be located outside the area of the teeth, and may be located inside the dental arch curve.
The projection points corresponding to all target points on the target line segment can be one independent point, a plurality of independent points or can form the projection point line segment. If the projected point is capable of forming a projected point segment, the projected point segment may be determined by the position and shape of the target segment, where the projected point segment may be a single segment line, a multi-segment line, or a curve, and the position of the projected point on the projected point segment may vary with the position of the target point on the target segment. The direction of change of the position of the projected point on the projected point line segment and the direction of change of the position of the target point on the target line segment may be opposite in the coronal axis direction. The projected points corresponding to the different target points may be different.
The volume layer image generating module 1008 is configured to generate a volume layer image of a target surface according to target projection data at a projection point, where the target surface is obtained by a target line segment. The target surface may be obtained by extending a target line segment along a vertical axis.
The manner in which the volume layer image generating module 1008 generates the volume layer image of the target surface from the target projection data at the projection points may include the following step: arranging the target projection data of each target point in the target surface to obtain the volume layer image.
Before the target projection data of each target point in the target surface are arranged to obtain the volume layer image, the volume layer image generating module 1008 may first scale the target projection data of the target surface according to the positions of the target points, so that the imaging magnification ratio of each piece of target projection data is the same.
It should be noted that, details not disclosed in the volume layer image generating device 1000 of the present embodiment may refer to details disclosed in the volume layer image generating method M10 of the above embodiment, which are not described herein.
The volume layer image generating apparatus 1000 may include corresponding modules that perform each or several of the steps in the flowcharts described above. Thus, each step or several steps in the flowcharts described above may be performed by respective modules, and the apparatus may include one or more of these modules. A module may be one or more hardware modules specifically configured to perform the respective steps, or be implemented by a processor configured to perform the respective steps, or be stored within a computer-readable medium for implementation by a processor, or be implemented by some combination.
The hardware structure of the volume layer image generating apparatus 1000 may be implemented using a bus architecture. The bus architecture may include any number of interconnecting buses and bridges depending on the specific application of the hardware and the overall design constraints. Bus 1100 connects together various circuits including one or more processors 1200, memory 1300, and/or hardware modules. Bus 1100 may also connect various other circuits 1400, such as peripherals, voltage regulators, power management circuits, external antennas, and the like.
Bus 1100 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, only one connection line is shown in the figure, but this does not mean there is only one bus or one type of bus.
Fig. 13 is a block diagram of a structure of a volume layer image generation system according to an embodiment of the present invention. Referring to fig. 13, the present invention also provides a volume layer image generation system 10. The volume layer image generation system 10 of the present embodiment may include a source SR, a detector D, a control mechanism 11, and a volume layer image generation apparatus 1000.
The source SR is used to emit X-rays. The detector D is used to acquire projection data from the X-rays received from the source. The control mechanism 11 is used to control the source and the detector to rotate synchronously around the subject.
The volume layer image generating apparatus 1000 is the volume layer image generating apparatus 1000 shown in fig. 12. The volume layer image generation apparatus 1000 may include a CT image acquisition module 1002, a target line segment determination module 1004, a proxel determination module 1006, and a volume layer image generation module 1008. The CT image acquisition module 1002 is configured to acquire a CT image of a subject, where the CT image is obtained by reconstructing projection data of the subject. The target line segment determining module 1004 is configured to determine a target line segment according to the CT image, where a shape of the target line segment matches a shape feature of a region of interest formed by a desired observation object in the CT image. The projection point determining module 1006 is configured to determine, for each target point on the target line segment, a projection point corresponding to the target point on the detector located at the corresponding position according to a projection line formed between the target point and the corresponding projection point, where the position of the projection point is determined to be capable of at least reducing information of an undesired observation object contained in the target projection data, and the projection point is located on the projection line. The volume layer image generating module 1008 is configured to generate a volume layer image of a target surface according to target projection data at a projection point, where the target surface is obtained by a target line segment.
It should be noted that, details not disclosed in the volume layer image generating system 10 of the present embodiment may refer to details disclosed in the volume layer image generating method M10 of the foregoing embodiment, which are not described herein.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present invention also includes implementations in which functions are executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those reasonably skilled in the art. The processor performs the various methods and processes described above. For example, method embodiments of the present invention may be implemented as a software program tangibly embodied on a machine-readable medium, such as a memory. In some embodiments, part or all of the software program may be loaded and/or installed via memory and/or a communication interface. When the software program is loaded into memory and executed by the processor, one or more of the steps of the methods described above may be performed. Alternatively, in other embodiments, the processor may be configured to perform one of the methods described above in any other suitable manner (e.g., by means of firmware).
Logic and/or steps represented in the flowcharts or otherwise described herein may be embodied in any readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present invention may be implemented in hardware, software, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, implementation may use any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps implementing the method of the above embodiments may be implemented by a program to instruct related hardware, and the program may be stored in a readable storage medium, where the program when executed includes one or a combination of the steps of the method embodiments. The storage medium may be a volatile/nonvolatile storage medium.
In addition, each functional unit in each embodiment of the present invention may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. The storage medium may be a read-only memory, a magnetic disk or optical disk, etc.
The invention also provides an electronic device, comprising: a memory storing execution instructions; and a processor or other hardware module that executes the execution instructions stored in the memory, so that the processor or other hardware module executes the volume layer image generation method of the above embodiment.
The invention also provides a readable storage medium, wherein the readable storage medium stores execution instructions which are used for realizing the volume layer image generation method of any embodiment when being executed by a processor.
For the purposes of this description, a "readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the readable storage medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a memory.
The invention also provides a computer program product comprising computer programs/instructions which when executed by a processor implement the volume layer image generation method of any of the above embodiments.
In the description of the present specification, the descriptions of the terms "one embodiment/mode," "some embodiments/modes," "specific examples," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment/mode or example is included in at least one embodiment/mode or example of the present invention. In this specification, the schematic representations of the above terms are not necessarily the same embodiments/modes or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments/modes or examples. Furthermore, the various embodiments/implementations or examples described in this specification and the features of the various embodiments/implementations or examples may be combined and combined by persons skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
It will be appreciated by persons skilled in the art that the above embodiments are provided for clarity of illustration only and are not intended to limit the scope of the invention. Other variations or modifications will be apparent to persons skilled in the art from the foregoing disclosure, and such variations or modifications are intended to be within the scope of the present invention.

Claims (18)

1. A method of generating a volume layer image, comprising:
acquiring a CT image of a subject, wherein the CT image is obtained by reconstructing projection data of the subject;
obtaining a first image containing a dentition area according to the CT image, wherein the dentition area comprises a region of interest formed by a desired observation object in the CT image, and the desired observation object comprises part or all of teeth of a subject;
determining a target line segment according to the first image, wherein the target line segment is located in the dentition area, and the shape of the target line segment conforms to the shape characteristics of the region of interest formed by the desired observation object in the CT image;
for each target point on the target line segment, determining a corresponding projection point of the target point on a detector located at a corresponding position according to a projection line formed between the target point and the corresponding projection point, wherein the position of the projection point is determined so as to at least reduce information of an undesired observation object contained in target projection data, and the projection point is located on the projection line; and
generating a volume layer image of a target surface according to the target projection data at the projection points, wherein the target surface is obtained by extending the target line segment along a vertical axis.
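The projection-line construction in claim 1 is purely geometric: each target point and its projection point lie on one line through the X-ray source. A minimal sketch of that line-plane intersection, assuming a flat detector and illustrative coordinates (none of the names or numbers come from the patent):

```python
import numpy as np

def project_to_detector(source, target, detector_point, detector_normal):
    """Intersect the projection line through `source` and `target` with a
    planar detector given by a point on the plane and its unit normal.
    All argument names are illustrative, not from the patent."""
    source = np.asarray(source, dtype=float)
    d = np.asarray(target, dtype=float) - source        # projection line direction
    n = np.asarray(detector_normal, dtype=float)
    p0 = np.asarray(detector_point, dtype=float)
    t = np.dot(p0 - source, n) / np.dot(d, n)           # ray parameter at the plane
    return source + t * d                               # the projection point

# Source at the origin, a target point at depth 50, detector plane at z = 100:
pp = project_to_detector(source=(0, 0, 0), target=(10, 5, 50),
                         detector_point=(0, 0, 100), detector_normal=(0, 0, 1))
# pp is (20, 10, 100): the target is blown up by 100/50 = 2 onto the detector.
```

The 2x blow-up in the example is the depth-dependent magnification that claim 14 later normalizes away.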
2. The volume layer image generation method according to claim 1, wherein the first image is a cross-sectional image.
3. The method of generating a volume layer image according to claim 2, wherein determining a target line segment from the first image comprises:
determining a plurality of position points according to the received input information, and fitting the plurality of position points to obtain a target line segment, wherein the plurality of position points are positioned in the dentition area; or,
generating a dental arch curve according to the first image, and determining a target line segment according to the dental arch curve, wherein the target line segment is part or all of the dental arch curve.
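The first branch of claim 3 fits a curve through user-selected position points. One plausible realization, assuming the dental arch in an axial slice is roughly parabolic and using an ordinary least-squares polynomial fit (the quadratic degree and 50-point sampling are illustrative choices, not the patent's):

```python
import numpy as np

def fit_target_segment(points, degree=2, n_samples=50):
    """Fit a polynomial y = f(x) through user-selected position points and
    sample it densely into a target line segment (first branch of claim 3)."""
    pts = np.asarray(points, dtype=float)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=degree)   # least-squares fit
    xs = np.linspace(pts[:, 0].min(), pts[:, 0].max(), n_samples)
    return np.column_stack([xs, np.polyval(coeffs, xs)])

# Five clicked points lying on an arch-like parabola y = 0.1 * x**2:
clicked = [(-30, 90), (-15, 22.5), (0, 0), (15, 22.5), (30, 90)]
segment = fit_target_segment(clicked)                       # shape (50, 2)
```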
4. The method of generating a volume layer image according to claim 3, wherein determining a target line segment according to the dental arch curve comprises:
directly taking the dental arch curve as a target line segment; or,
determining a tooth range according to the received input information, and taking the part corresponding to the tooth range in the dental arch curve as a target line segment; or,
determining a single position point according to the received input information, determining a first position point in the dental arch curve closest to the single position point, and taking a line segment of a preset length on the tangent line at the first position point as the target line segment, wherein the single position point is located in the dentition area, and the target line segment is at least partially located in the area of the tooth where the single position point is located.
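The third branch of claim 4 can be sketched as a nearest-point search on the sampled arch curve followed by a finite-difference tangent; the clicked position, segment length, and central-difference tangent estimate are all illustrative assumptions:

```python
import numpy as np

def tangent_segment(curve, click, length):
    """Locate the curve point nearest a single clicked position and return
    the endpoints of a segment of the given length along the tangent there
    (third branch of claim 4)."""
    curve = np.asarray(curve, dtype=float)
    i = int(np.argmin(np.linalg.norm(curve - np.asarray(click, float), axis=1)))
    lo, hi = max(i - 1, 0), min(i + 1, len(curve) - 1)
    t = curve[hi] - curve[lo]                 # tangent from neighbouring samples
    t = t / np.linalg.norm(t)
    half = 0.5 * length * t
    return curve[i] - half, curve[i] + half

xs = np.linspace(-30, 30, 61)                 # sampled arch curve (parabola again)
arch = np.column_stack([xs, 0.1 * xs ** 2])
p0, p1 = tangent_segment(arch, click=(0.5, 1.0), length=10.0)
```

The returned segment is centered on the first position point, so it stays at least partially inside the tooth area containing the click, as the claim requires.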
5. The volume layer image generation method according to any one of claims 1 to 4, wherein the projection data of the subject is obtained by imaging the subject while rotating around the same rotation center.
6. The volume layer image generation method according to claim 5, wherein the corresponding positions of the detector are obtained from corresponding source positions, which are determined from the projection lines and the imaging geometry.
7. The method of generating a volume layer image according to claim 5, wherein the projection points are located outside the area of the teeth.
8. The method of generating a volume layer image according to claim 7, wherein the projection points are located inside the dental arch.
9. The method according to claim 7 or 8, wherein the projection points corresponding to all the target points on the target line segment form a single independent point, a plurality of independent points, or a projection point line segment.
10. The volume layer image generation method according to claim 9, wherein, if the projection points form a projection point line segment, the projection point line segment is determined by the position and shape of the target line segment, wherein the projection point line segment is a single-segment line, a multi-segment line, or a curve, and the position of a projection point on the projection point line segment varies with the position of the corresponding target point on the target line segment.
11. The volume layer image generation method according to claim 10, wherein, in the coronal axis direction, the direction of change of the position of the projection point on the projection point line segment is opposite to the direction of change of the position of the target point on the target line segment.
12. The volume layer image generation method according to claim 9, wherein the projection points corresponding to the different target points are different from each other.
13. The volume layer image generation method according to claim 1, wherein the manner of generating the volume layer image of the target surface from the target projection data at the projection point includes:
arranging the target projection data of each target point in the target surface to obtain the volume layer image.
14. The volume layer image generation method according to claim 13, wherein, before the target projection data of each target point are arranged in the target surface to obtain the volume layer image, the target projection data are scaled according to the positions of the target points so that the imaging magnification of each piece of target projection data is the same.
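The scaling step of claim 14 can be illustrated under a standard fan-beam magnification model, where a target point's imaging magnification is roughly the source-detector distance divided by the source-point distance (SDD/SOD); the linear resampling and the choice of the smallest magnification as the common target are assumptions for this sketch, not the patent's stated method:

```python
import numpy as np

def rescale_column(col, factor):
    """Linearly resample a 1-D projection column by `factor`
    (factor > 1 stretches the column, factor < 1 shrinks it)."""
    n_out = max(int(round(len(col) * factor)), 1)
    x_out = np.arange(n_out) / factor           # output pixel -> input coordinate
    return np.interp(x_out, np.arange(len(col)), col)

def normalize_magnification(columns, source_to_point, source_to_detector):
    """Scale every target point's projection column to the smallest
    magnification in the set so the volume layer image is assembled from
    data sharing one imaging ratio."""
    mags = [source_to_detector / sod for sod in source_to_point]
    target = min(mags)
    return [rescale_column(c, target / m) for c, m in zip(columns, mags)]

# Two identical columns recorded at magnifications 1000/500 = 2.0 and
# 1000/400 = 2.5; the second is shrunk by 0.8 so both share magnification 2.0.
cols = [np.arange(10.0), np.arange(10.0)]
out = normalize_magnification(cols, source_to_point=[500.0, 400.0],
                              source_to_detector=1000.0)
```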
15. A volume layer image generation apparatus, comprising:
the CT image acquisition module is used for acquiring a CT image of the object, wherein the CT image is obtained by reconstructing projection data of the object;
a target line segment determining module, configured to obtain a first image including a dentition area according to the CT image and determine a target line segment according to the first image, where the dentition area includes a region of interest formed by a desired observation object in the CT image, the desired observation object includes a part or all of teeth of a subject, the target line segment is located in the dentition area, and a shape of the target line segment matches a shape feature of the region of interest formed by the desired observation object in the CT image;
a projection point determining module, configured to determine, for each target point on the target line segment, a projection point corresponding to the target point on a detector located at a corresponding position according to a projection line formed between the target point and the corresponding projection point, where the position of the projection point is determined so as to at least reduce information of an undesired observation object contained in target projection data, and the projection point is located on the projection line; and
and the volume layer image generation module is used for generating a volume layer image of a target surface according to the target projection data at the projection point, wherein the target surface is obtained by extending the target line segment along a vertical axis.
16. A volumetric image generation system, comprising:
a source for emitting X-rays;
a detector for acquiring projection data from the X-rays received from the source;
a control mechanism for controlling the source and the detector to synchronously rotate around the subject; and
the volume layer image generating apparatus according to claim 15.
17. An electronic device, comprising:
a memory storing execution instructions; and
a processor that executes the execution instructions stored in the memory, wherein the execution instructions cause the processor to perform the volume layer image generation method according to any one of claims 1 to 14.
18. A readable storage medium having execution instructions stored therein which, when executed by a processor, implement the volume layer image generation method according to any one of claims 1 to 14.
CN202311817984.8A 2023-12-27 2023-12-27 Volume layer image generation method, device and system, electronic equipment and storage medium Active CN117462163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311817984.8A CN117462163B (en) 2023-12-27 2023-12-27 Volume layer image generation method, device and system, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117462163A CN117462163A (en) 2024-01-30
CN117462163B (en) 2024-03-29

Family

ID=89624138

Country Status (1)

Country Link
CN (1) CN117462163B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6289074B1 (en) * 1998-09-02 2001-09-11 J. Morita Manufacturing Corporation X-ray computed tomography method and system
CN111528890A (en) * 2020-05-09 2020-08-14 上海联影医疗科技有限公司 Medical image acquisition method and system
CN111833244A (en) * 2019-04-11 2020-10-27 深圳市深图医学影像设备有限公司 Dental panoramic image generation method and device and computer readable storage medium
CN113069141A (en) * 2021-03-31 2021-07-06 有方(合肥)医疗科技有限公司 Method and system for shooting oral panoramic film, electronic equipment and readable storage medium
CN115736970A (en) * 2022-12-02 2023-03-07 上海博恩登特科技有限公司 Method for generating multilayer dental film image
CN115937410A (en) * 2022-11-07 2023-04-07 有方(合肥)医疗科技有限公司 Oral panorama generation method and device, electronic equipment and storage medium
CN115919351A (en) * 2022-05-19 2023-04-07 上海博恩登特科技有限公司 Oral panoramic imaging method and system
WO2023093748A1 (en) * 2021-11-24 2023-06-01 余文锐 Oral cone beam x-ray imaging system and fast positioning method therefor

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9993217B2 (en) * 2014-11-17 2018-06-12 Vatech Co., Ltd. Producing panoramic radiograph


Non-Patent Citations (1)

Title
Oral panoramic cone-beam CT image reconstruction algorithm; Wan Jun; Chinese Journal of Medical Physics; 2016-05-31; Vol. 33, No. 5; pp. 437-441 *


Similar Documents

Publication Publication Date Title
JP4015366B2 (en) Local irradiation X-ray CT imaging method and apparatus
JP5171215B2 (en) X-ray CT system
JP5555231B2 (en) Method and X-ray apparatus for generating a three-dimensional X-ray image of a tooth
US8634631B2 (en) Odontological imaging apparatus
US20070025509A1 (en) 4-dimensional digital tomosynthesis and its applications in radiation therapy
EP2022402A1 (en) X-ray ct apparatus
WO1998040013A1 (en) X-ray computerized tomograph having collimator which restricts the irradiation range of x-ray fan beam
US10674971B2 (en) X-ray image display apparatus and method for X-ray image display
JP2002531199A (en) Method and apparatus for calcification leveling
JP7379333B2 (en) Methods, systems, apparatus, and computer program products for extending the field of view of a sensor and obtaining synthetic radiographs
US7392078B2 (en) Radiation imaging apparatus
WO2012093364A1 (en) Computed tomography system and method for tracking a bolus
CN114081524A (en) X-ray imaging system based on X-ray cone beam
CN113069141B (en) Method and system for shooting oral panoramic film, electronic equipment and readable storage medium
CN115937410B (en) Oral panorama generating method and device, electronic equipment and storage medium
KR20200095740A (en) Medical imaging apparatus and controlling method for the same
CN116019474B (en) Multi-source imaging device and method
JP5863454B2 (en) X-ray CT imaging apparatus and X-ray CT imaging method
CN111150419B (en) Method and device for reconstructing image by spiral CT scanning
US20080013813A1 (en) Methods and apparatus for BMD measuring
CN117462163B (en) Volume layer image generation method, device and system, electronic equipment and storage medium
CN114041815A (en) X-ray imaging system with variable imaging field of view
CN110870775A (en) System and method for imaging an object
CN116433476B (en) CT image processing method and device
WO2023093748A1 (en) Oral cone beam x-ray imaging system and fast positioning method therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant