CN118071869A - Image processing method, electronic device, readable storage medium, and program product


Publication number: CN118071869A
Authority: CN (China)
Legal status: Pending
Application number: CN202410482685.1A
Original language: Chinese (zh)
Inventors: 张雪艳, 马骏骑, 余文锐
Assignee (original and current): Hefei Yofo Medical Technology Co., Ltd.
Application filed by Hefei Yofo Medical Technology Co., Ltd.

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to the technical field of medical imaging, and in particular provides an image processing method, an electronic device, a readable storage medium, and a program product. The invention reduces the performance requirements on hardware equipment while ensuring the quality and effect of the reconstructed image.

Description

Image processing method, electronic device, readable storage medium, and program product
Technical Field
The present invention relates to the field of medical imaging technology, and in particular, to an image processing method, an electronic device, a readable storage medium, and a program product.
Background
In CBCT (Cone Beam CT) imaging, scan images are first acquired by photographing the subject, the scan images are then reconstructed to obtain a CBCT image, and the CBCT image is displayed. If, during image reconstruction, the full imaging field of view used at shooting time is taken as the reconstruction field of view and the reconstructed image is displayed, the performance requirements on hardware such as the graphics card are high, which raises the cost of the equipment.
Disclosure of Invention
To solve at least one of the above technical problems, the present invention provides an image processing method, an apparatus, an electronic device, a readable storage medium, and a program product.
A first aspect of the present invention proposes an image processing method, comprising: displaying a target image, wherein the size of the target image corresponds to a first imaging field of view, the first imaging field of view is the field of view of the scan images obtained by scanning a scanned object, the target image comprises a representation of a target volume, and the representation of the target volume corresponds to a partial structure within the scanned object; in response to a received reconstruction indication, obtaining a reconstruction region from a currently determined region of interest, the region of interest corresponding to a partial region within the target image and comprising at least a partial region of the target volume; and reconstructing a three-dimensional CT image from the reconstruction region and the scan images of the scanned object.
According to one embodiment of the invention, the target image is obtained, before being displayed, in one of the following modes: in a first mode, a first reconstruction is performed on the scan images of the scanned object and the target image is obtained based on the result of the first reconstruction, where the spatial resolution of the first reconstruction is smaller than or equal to the spatial resolution of a second reconstruction, the second reconstruction being the process by which the three-dimensional CT image is obtained; in a second mode, a corresponding preset image is obtained as the target image according to the first imaging field of view, the preset image comprising a preset representation of the structure of the scanned object.
According to one embodiment of the invention, the target image is a sectional view.
According to one embodiment of the present invention, obtaining a target image based on a result of the first reconstruction includes: and obtaining a target image according to the target section and the result of the first reconstruction.
According to one embodiment of the invention, the target section has a preset distance from a preset position of a first image, the first image corresponding to the result of the first reconstruction, the preset position being located at a spatial edge of the first image.
According to one embodiment of the invention, the target image is a cross-sectional image.
According to one embodiment of the invention, the currently determined region of interest is obtained as the overlap region between a preset target area and the target image, the target area changing position on the target image in response to a received movement indication.
According to one embodiment of the invention, the target area changes its size in response to the received first indication.
According to one embodiment of the invention, the reconstruction indication is obtained by monitoring that a target event is triggered on a page that determines the region of interest.
According to one embodiment of the invention, obtaining a reconstructed region from a currently determined region of interest comprises: and obtaining a reconstruction region according to the position characteristics and the size of the currently determined region of interest and a preset first length, wherein the first length is a length in a direction perpendicular to the target image in a three-dimensional image space.
According to one embodiment of the invention, the position characteristics comprise: relative position information between the center point of the region of interest in the target image and the center point of the target image.
According to one embodiment of the invention, the scanned object corresponds to at least a partial region of a human head, and the target volume comprises teeth.
A second aspect of the present invention proposes an image processing apparatus comprising:
The display module is used for displaying a target image, where the size of the target image corresponds to a first imaging field of view, the first imaging field of view is the field of view of the scan images obtained by scanning a scanned object, the target image comprises a representation of a target volume, and the representation of the target volume corresponds to a partial structure within the scanned object;
the receiving module is used for receiving the reconstruction indication;
A reconstruction region determining module, configured to obtain a reconstruction region according to a currently determined region of interest in response to a received reconstruction indication, where the region of interest corresponds to a partial region in the target image, and the region of interest includes at least a partial region of the target volume;
and the image reconstruction module is used for reconstructing according to the reconstruction region and the scanned image of the scanned object to obtain a three-dimensional CT image.
A third aspect of the present invention proposes an electronic device comprising: a memory storing execution instructions; and a processor that executes the execution instructions stored in the memory, so that the processor executes the image processing method according to any one of the above embodiments.
A fourth aspect of the present invention proposes a readable storage medium having stored therein execution instructions which, when executed by a processor, are adapted to carry out the image processing method according to any of the above-mentioned embodiments.
A fifth aspect of the invention proposes a computer program product comprising a computer program/instruction which, when executed by a processor, implements the image processing method according to any of the embodiments described above.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention.
Fig. 1 is a flow chart of an image processing method according to an embodiment of the present invention.
Fig. 2 is a schematic illustration of a two-dimensional target image obtained in a first manner according to one embodiment of the invention.
FIG. 3 is a schematic diagram of adjusting the position of a target area according to a movement indication, according to one embodiment of the invention.
Fig. 4 is a schematic diagram of a three-dimensional CT image obtained after reconstruction by reconstruction region according to an embodiment of the present invention.
Fig. 5 is a flow chart of an image processing method according to another embodiment of the present invention.
Fig. 6 is a schematic diagram of an image processing apparatus employing a hardware implementation of a processing system according to one embodiment of the invention.
Fig. 7 is a schematic diagram of an image processing apparatus employing a hardware implementation of a processing system according to another embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of, and not restrictive of, the invention. It should be further noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
In addition, the embodiments of the present invention and the features of the embodiments may be combined with each other provided there is no conflict. The technical scheme of the present invention will be described in detail below with reference to the accompanying drawings and in combination with embodiments.
Unless otherwise indicated, the exemplary implementations/embodiments shown are to be understood as providing exemplary features of various details of some of the ways in which the technical concepts of the present invention may be practiced. Thus, unless otherwise indicated, the features of the various implementations/embodiments may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concepts of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising," and variations thereof, are used in the present specification, the presence of stated features, integers, steps, operations, elements, components, and/or groups thereof is described, but the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof is not precluded. It is also noted that, as used herein, the terms "substantially," "about," and other similar terms are used as approximation terms and not as degree terms, and as such, are used to explain the inherent deviations of measured, calculated, and/or provided values that would be recognized by one of ordinary skill in the art.
The image processing method, apparatus, electronic device, computer-readable storage medium and computer program product of the present invention will be described below by taking an application scenario of oral CBCT image reconstruction as an example, with reference to the accompanying drawings.
In CBCT imaging of the oral cavity, the object to be imaged is typically a human head, and the imaging field of view may be denoted U1 × V1. The imaging field of view is generally cylindrical, where U1 represents the diameter of the cylinder and V1 its height. For example, U1 may be set to 16 cm and V1 to 9 cm, so that the region including the oral cavity can be captured.
After shooting is completed, reconstruction of the scan images begins. The reconstruction field of view cannot exceed the imaging field of view, i.e. the reconstruction field of view is at most U1 × V1. Because the reconstructed image is needed for film reading or other purposes, it must meet a certain spatial resolution requirement, so the voxel size used in reconstruction may be small. If the full U1 × V1 range is selected as the reconstruction range, a high-resolution three-dimensional CBCT image covering U1 × V1 can be obtained, but the amount of data to be reconstructed is then larger, more computation is required, reconstruction takes longer, and the demands on hardware such as the graphics card and video memory are higher; some graphics card and memory configurations may be unable to support such a large amount of data, making the reconstruction task difficult to complete.
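As a back-of-envelope illustration of why the full-field reconstruction strains the hardware, the sketch below estimates the memory footprint of a cylindrical reconstruction volume. The 0.2 mm voxel size and float32 storage are illustrative assumptions, not values specified by the invention.

```python
import math

def cylinder_voxels(diameter_cm, height_cm, voxel_mm):
    """Approximate voxel count of a cylindrical reconstruction volume.

    The cylinder sits in a D x D x H bounding box; roughly pi/4 of the
    box's voxels fall inside the cylinder.
    """
    d = diameter_cm * 10 / voxel_mm   # voxels across the diameter
    h = height_cm * 10 / voxel_mm     # voxels along the height
    return (math.pi / 4) * d * d * h

# Assumed 0.2 mm isotropic voxels stored as float32 (4 bytes each).
large = cylinder_voxels(16, 9, 0.2)   # full 16 cm x 9 cm field of view
small = cylinder_voxels(8, 8, 0.2)    # reduced 8 cm x 8 cm field of view

print(f"large FOV: {large * 4 / 1e9:.1f} GB")  # ~0.9 GB
print(f"small FOV: {small * 4 / 1e9:.1f} GB")  # ~0.2 GB
```

Under these assumptions the reduced field of view shrinks the output volume by roughly a factor of 4.5, which is why the small-field reconstruction fits on more modest graphics cards and video memory.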
To ensure that the spatial resolution of the reconstructed CBCT image meets the film-reading requirement while reducing the computation and time needed for reconstruction and lowering the performance requirements on hardware, the present invention provides an image processing method in which, after scan images covering the large field of view U1 × V1 are obtained, a small field of view U2 × V2 is used as the reconstruction range. At least part of the oral region actually of interest for film reading is contained in the reconstruction range, and the reconstruction range is contained in, and smaller than, the imaging field of view. For example, if the user wishes the CBCT image to show the anterior tooth region, the reconstruction range may be set slightly larger than that region. Setting the reconstruction range smaller than the imaging field of view reduces the computation and time consumed by reconstruction and lowers the demands on the equipment, while the content the user actually needs to view is retained, ensuring the quality and effect of the image. In addition, because the reconstruction range is selected after scanning to generate the CBCT image for film reading, there is no need to position the scanned object precisely beforehand, and accordingly no indicator light for positioning is required.
Fig. 1 is a flow chart of an image processing method according to an embodiment of the present invention. Referring to fig. 1, the present invention provides an image processing method M100, and the image processing method M100 of the present embodiment may include the following steps S110, S120 and S130.
S110, displaying the target image. The size of the target image corresponds to a first imaging field of view, the first imaging field of view is the field of view of the scan images obtained by scanning a scanned object, the target image comprises a representation of a target volume, and the representation of the target volume corresponds to a partial structure within the scanned object.
In oral CBCT imaging, the scanned object may be a human head. The target volume is the object the user wishes to view during film reading, and may be some or all of the teeth. The image size of the target image corresponds to the first imaging field of view U1 × V1.
The scan image is a two-dimensional image, also known as a projection image. The radiation source and the detector rotate synchronously around the scanned object while maintaining their relative positions; during rotation, the source emits radiation that passes through the scanned object and is received by the detector, forming scan images at different scan angles.
If the target image is a three-dimensional spatial image, its size may be the same as the U1 × V1 spatial range. The target image includes an image of the teeth, so that by displaying the target image the user can see the teeth in it. It will be appreciated that the target image may instead be a two-dimensional planar image.
S120, in response to the received reconstruction indication, obtaining a reconstruction region according to the currently determined region of interest. The region of interest corresponds to a partial region within the target image and comprises at least a partial region of the target volume.
The region of interest may be a planar region. After seeing the teeth in the target image, the user may set and adjust the position of the region of interest by operating the system, and may also adjust its size, so that the region of interest contains the objects the user wishes to view during film reading.
For example, when the user wishes to view an image of the anterior tooth region, the position of the region of interest may be adjusted by operating the system so that it contains the two-dimensional image content of the teeth of that region. After the region of interest has been selected, a reconstruction region U2 × V2 is determined from it. The reconstruction region U2 × V2 is a three-dimensional spatial region containing the regions of the anterior teeth.
S130, reconstructing according to the reconstruction region and the scanned image of the scanned object to obtain a three-dimensional CT image.
The scan images of the human head are reconstructed over the reconstruction region; the reconstruction algorithm may be the FDK (Feldkamp–Davis–Kress filtered backprojection) algorithm. After reconstruction, a CBCT image with spatial dimensions U2 × V2 is obtained. Since U2 × V2 is smaller than U1 × V1, this CBCT image is a small-field image. For a small-field image, the voxel size can be set smaller so as to ensure a higher image resolution, which benefits the clarity and accuracy of the teeth the user wishes to observe and thus facilitates film reading.
In some embodiments, the target image is obtained by one of the following methods before step S110 is performed.
In the first mode, a first reconstruction is performed on the scan images of the scanned object, and the target image is obtained based on the result of the first reconstruction. The spatial resolution of the first reconstruction is smaller than or equal to that of the second reconstruction, the second reconstruction being the process by which the three-dimensional CT image is obtained.
In the second mode, a corresponding preset image is acquired as the target image according to the first imaging field of view. The preset image comprises a preset representation of the structure of the scanned object.
In the first mode, a low-resolution reconstruction may first be performed on the scan images to obtain a real large-field CBCT image of the scanned object, and the large-field CBCT image is then processed to obtain the target image. The low-resolution reconstruction reduces the time and hardware required, improving the efficiency of generating the target image. This low-resolution reconstruction is the first reconstruction and is used to generate a large-field image from which the user selects a region of interest. The reconstruction subsequently performed in step S130 is the second reconstruction, which generates the small-field image corresponding to the region of interest, so that film reading is possible on graphics card and video memory configurations that can support displaying the small-field image.
The target image obtained in the first mode is closer to the real condition of the scanned object's oral cavity, so the selected region of interest better matches the user's expectations.
In the second mode, a preset image whose image size corresponds to the first imaging field of view is selected directly from among preset images and used as the target image. The preset image may be an oral image, either a simulated one or one obtained in advance by scanning and image-processing some subject. The oral image also contains structures such as teeth and soft tissue, whose positions in the preset oral images of different sizes are set according to their normal positions in the human body. The structures in the oral image may be represented by virtual symbols, for example rectangles representing teeth, or tooth contours alone describing the position and shape of the teeth. It will be appreciated that, because the positional differences between the dentition regions of different human mouths are small, a preset image unrelated to the actual dentition position of the scanned object can serve as a basis for selecting the region of interest of the scanned object.
The second mode obtains the target image quickly, since the target image is obtained directly without any reconstruction.
Fig. 2 is a schematic illustration of a two-dimensional target image obtained in the first mode according to one embodiment of the invention. Referring to fig. 2, the target image may be a sectional view. For example, the target image may be a cross-sectional image of a human head that includes the tooth regions, and the tooth regions can be used to form a complete dental arch curve; that is, the tooth regions in the cross-sectional image are relatively complete and may include teeth of all tooth positions.
In the first mode described above, obtaining the target image based on the result of the first reconstruction may include the following step: obtaining the target image according to a target section and the result of the first reconstruction. The target section may be at a preset distance from a preset position of the first image, where the first image corresponds to the result of the first reconstruction, and the preset position may be located at a spatial edge of the first image.
The target image may be the two-dimensional image lying in the target section of the CBCT image (i.e. the first image) of the first reconstruction result. Specifically, the position and size of the target section may be acquired first, and the image within the target section is then taken as the target image. To determine the position of the target section, a plane at a preset distance, along a first direction, from a preset spatial edge of the first image may be found and used as the target section. For example, the preset position may be the bottom surface of the first image and the first direction the vertical axis; since the distance between the teeth and the bottom surface generally falls within a known range of values, the preset distance may be set to a value within that range, so that the image content of all the teeth is contained in the target image.
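The step of taking the slice at a preset distance above the bottom surface of the first image could be sketched as follows. The (Z, Y, X) axis order, the 1 mm voxel size, and the 40 mm preset distance are illustrative assumptions, not values given by the invention.

```python
import numpy as np

def target_section(volume, voxel_mm, preset_distance_mm):
    """Extract the axial slice at a preset distance above the volume's
    bottom surface (a sketch; axis order is assumed to be Z, Y, X)."""
    z_index = int(round(preset_distance_mm / voxel_mm))
    z_index = min(z_index, volume.shape[0] - 1)  # clamp to the volume
    return volume[z_index]

# Low-resolution first-reconstruction volume: 90 slices of 160 x 160
# voxels at an assumed 1 mm voxel size (16 cm x 16 cm x 9 cm).
first_recon = np.zeros((90, 160, 160), dtype=np.float32)
section = target_section(first_recon, voxel_mm=1.0, preset_distance_mm=40.0)
print(section.shape)  # (160, 160)
```

The returned two-dimensional slice would then be displayed as the target image on which the target area frame is placed.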
In some embodiments, the currently determined region of interest may be obtained as the overlap region between a preset target area and the target image, where the target area may change position on the target image in response to a received movement indication.
The target area and the region of interest are two-dimensional planar regions. After the target image is displayed, the user may first determine the region of interest by operating the system and then issue a reconstruction indication, so that the system can obtain the spatial extent of the reconstruction region in step S120.
After the target image is displayed, a preset target area may be shown on the same page. The target area may be represented as a contour line of a preset closed shape, the area inside the contour line being the target area, i.e. the current extent of the region of interest. Referring to fig. 2, A is a rectangular target area frame displayed on the target image. It will be appreciated that the target area may be indicated in ways other than a contour line: for example, the portion of the target image outside the target area may be blurred or grayed while the portion inside is displayed normally, so that target and non-target areas are distinguished by their different display effects.
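Taking the region of interest as the overlap between the movable target area and the target image amounts to rectangle clipping. The sketch below shows one way this could be computed; the coordinate convention (top-left box corner, image pixels) is an assumption for illustration.

```python
def region_of_interest(img_w, img_h, box_x, box_y, box_w, box_h):
    """Clip the movable target-area box against the target image; the
    overlap is the currently determined region of interest (a sketch,
    with (box_x, box_y) the box's top-left corner in image pixels)."""
    x0, y0 = max(box_x, 0), max(box_y, 0)
    x1 = min(box_x + box_w, img_w)
    y1 = min(box_y + box_h, img_h)
    if x1 <= x0 or y1 <= y0:
        return None  # no overlap: no valid region of interest
    return (x0, y0, x1 - x0, y1 - y0)

# A 200 x 200 box dragged partly past the right edge of a 512 x 512
# image is clipped back to the visible overlap.
print(region_of_interest(512, 512, 400, 100, 200, 200))  # (400, 100, 112, 200)
```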
The target area may change its size in response to a received first indication. The size of the target area may be a preset size, and it may also be adjusted via the first indication, so as to change the size of the CBCT image obtained in step S130.
The user can determine which content of the target image is contained within the current region of interest by means of the contour line. The system may display the target area at the same time as the target image; initially, the target area may be at a default position, for example with one of its corners coinciding with a corner of the target image. The user can then adjust the position of the target area by operating the system, so that the contour line moves within the target image until the content the user wishes to observe is contained in the target area.
The reconstruction indication may be obtained by monitoring whether a target event is triggered on the page on which the region of interest is determined. For example, after determining the desired region of interest by issuing movement indications, the user may click a preset button on the page displaying the target image; this click event serves as the target event and constitutes the reconstruction indication, and upon receiving it the system begins to obtain the reconstruction region from the currently determined region of interest in step S120.
In some embodiments, obtaining the reconstruction region according to the currently determined region of interest in step S120 may include the following step: obtaining the reconstruction region according to the position characteristics and size of the currently determined region of interest and a preset first length, where the first length is a length in the direction perpendicular to the target image in the three-dimensional image space.
For example, the extents along two of the three axes X, Y, Z can be determined from the position and size of the region of interest determined in step S120; the length along the remaining axis is then given by the preset first length, forming a three-dimensional reconstruction region.
If the imaging field of view U1 × V1 of the scan images is 16 × 9 (cm), the desired CBCT image spatial range U2 × V2 is 8 × 8 or 8 × 9, the region of interest lies on a horizontal plane (cross section), and the first length is a length along the vertical axis, then the planar extent of the region of interest may be extended by the first length along the vertical axis to obtain a three-dimensional reconstruction region. If the first length is set to 8 cm, an 8 × 8 reconstruction region is obtained. The first length may instead equal the height of the U1 × V1 range along the vertical axis, i.e. 9 cm, in which case the resulting reconstruction region is 8 × 9.
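The extension of the planar region of interest by the first length can be sketched as a simple extrusion along the vertical axis. The coordinate conventions and the helper name `reconstruction_region` are illustrative assumptions, not the patent's implementation.

```python
def reconstruction_region(roi_center_xy, roi_size_xy, first_length, z_origin=0.0):
    """Extrude a planar region of interest along the vertical (Z) axis
    by the preset first length to form the 3-D reconstruction region.

    All values are in centimeters; the result is an axis-aligned box
    given as ((x_min, x_max), (y_min, y_max), (z_min, z_max)).
    """
    cx, cy = roi_center_xy
    w, h = roi_size_xy
    return ((cx - w / 2, cx + w / 2),
            (cy - h / 2, cy + h / 2),
            (z_origin, z_origin + first_length))

# An 8 cm x 8 cm region of interest extruded by a first length of 8 cm
# yields the 8 x 8 reconstruction region from the example above.
region = reconstruction_region((1.5, -0.5), (8.0, 8.0), 8.0)
print(region)  # ((-2.5, 5.5), (-4.5, 3.5), (0.0, 8.0))
```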
The position characteristics may include: relative position information between the center point of the region of interest in the target image and the center point of the target image.
FIG. 3 is a schematic diagram of adjusting the position of the target area according to a movement indication, according to one embodiment of the invention. Referring to fig. 3, A is a rectangular target area, point O is the center point of the target image, and point P1 is the center point of the target area before it is moved according to the movement indication; X_offset1 is the distance between P1 and O along the X axis, and Y_offset1 the distance between P1 and O along the Y axis. Point P2 is the center point of the target area after it has been moved from the P1 position according to the movement indication; X_offset2 is the distance between P2 and O along the X axis, and Y_offset2 the distance between P2 and O along the Y axis.
When the system receives a movement indication, it adjusts the position of the target area, and the relative position between the center point of the target area and the center point of the target image changes accordingly. Point P1 is the default position of the target area at the initial moment; the user then issues a movement indication, and the system moves the target area to the position centered on point P2. The user then clicks the preset button to issue a reconstruction indication, and the system obtains the reconstruction region from X_offset2, Y_offset2, the size of target area A, and the first length.
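The offsets X_offset and Y_offset of Fig. 3 could be recomputed after each move as follows; treating `box_pos` as the box's top-left corner in image pixels is an assumption for illustration.

```python
def center_offsets(image_size, box_pos, box_size):
    """Offsets of the target-area center from the target-image center,
    matching the X_offset / Y_offset quantities of Fig. 3 (a sketch;
    box_pos is assumed to be the box's top-left corner in pixels)."""
    img_w, img_h = image_size
    box_x, box_y = box_pos
    box_w, box_h = box_size
    x_offset = (box_x + box_w / 2) - img_w / 2
    y_offset = (box_y + box_h / 2) - img_h / 2
    return x_offset, y_offset

# After the box is moved, the new offsets are recomputed from its new
# position and passed on, together with the box size and the first
# length, to determine the reconstruction region.
print(center_offsets((512, 512), (300, 180), (200, 200)))  # (144.0, 24.0)
```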
Fig. 4 is a schematic diagram of a three-dimensional CT image obtained after reconstruction over the reconstruction region according to an embodiment of the present invention. Referring to fig. 4, a CBCT image is reconstructed according to the reconstruction region and displayed to the user.
Fig. 5 is a flow chart of an image processing method according to another embodiment of the present invention. Referring to fig. 5, in this embodiment a first reconstruction is performed at a first spatial resolution on the scan images of a 16 cm × 9 cm imaging field of view acquired by actually photographing the scanned object, yielding image data of a large-field CBCT image whose image space corresponds to 16 cm × 9 cm. From the large-field CBCT image data, the cross-sectional position at a preset distance from the bottom surface of the image along the Z axis is determined, the target image is obtained as the section of the large-field CBCT image at that position, and the target image is displayed together with a default target area frame. After the user issues a movement indication, the system moves the target area frame so that it contains part of the anterior tooth region. After receiving the reconstruction indication, the system determines a reconstruction region (a three-dimensional spatial region) containing part of the anterior tooth region, based on the distances between the center point of the target area frame and the center point of the large-field CBCT image along the X and Y axes, the current size of the target area frame, and the first length (by default the height of the large-field CBCT image along the Z axis, i.e. 9 cm). The scan images of the 16 cm × 9 cm imaging field of view are then reconstructed over the reconstruction region at a second spatial resolution higher than the first, yielding a small-field CBCT image for the user's film reading.
If the user finds that the small-field CBCT image does not contain the content he or she intends to observe, the page displaying the target image and the target area frame can be opened again, and a new movement instruction and a new reconstruction instruction can be issued to obtain a small-field CBCT image whose displayed content better matches the user's expectation.
Fig. 6 is a schematic diagram of an image processing apparatus employing a hardware implementation of a processing system according to one embodiment of the invention. Referring to fig. 6, the present invention further provides an image processing apparatus 1000, and the image processing apparatus 1000 of this embodiment may include a display module 1002, a receiving module 1004, a reconstruction region determining module 1006, and an image reconstruction module 1008.
The display module 1002 is configured to display a target image. The size of the target image corresponds to a first imaging visual field range, which is the imaging visual field range of a scanned image obtained by scanning a scanned object; the target image comprises a representation of a target volume, and the representation of the target volume corresponds to a partial structure in the scanned object.
The scanned object may correspond to at least a partial region of a human head and the target volume may include teeth.
The receiving module 1004 is configured to receive a reconstruction indication.
The reconstruction region determination module 1006 is configured to obtain a reconstruction region according to the currently determined region of interest in response to the received reconstruction indication. Wherein the region of interest corresponds to a partial region within the target image, the region of interest comprising at least a partial region of the target volume.
The image reconstruction module 1008 is configured to reconstruct a three-dimensional CT image according to the reconstruction region and the scanned image of the scanned object.
It should be noted that, for details not disclosed in the image processing apparatus 1000 of the present embodiment, reference may be made to the details disclosed in the image processing method M100 of the above embodiment; they are not repeated here.
The image processing apparatus 1000 may include corresponding modules that perform each or several of the steps in the flowcharts described above. Thus, each step or several steps in the flowcharts described above may be performed by respective modules, and the apparatus may include one or more of these modules. A module may be one or more hardware modules specifically configured to perform the respective steps, or be implemented by a processor configured to perform the respective steps, or be stored within a computer-readable medium for implementation by a processor, or be implemented by some combination.
The hardware structure of the image processing apparatus 1000 may be implemented using a bus architecture. The bus architecture may include any number of interconnecting buses and bridges depending on the specific application of the hardware and the overall design constraints. Bus 1100 connects together various circuits including one or more processors 1200, memory 1300, and/or hardware modules. Bus 1100 may also connect various other circuits 1400, such as peripherals, voltage regulators, power management circuits, external antennas, and the like.
Bus 1100 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. The buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, only one connection line is shown in the figure, but this does not mean there is only one bus or only one type of bus.
Fig. 7 is a schematic diagram of an image processing apparatus employing a hardware implementation of a processing system according to another embodiment of the present invention. Referring to fig. 7, the present invention also provides an image processing apparatus 1000, and the image processing apparatus 1000 of this embodiment may include a target image generating module 1001, a display module 1002, a receiving module 1004, a region of interest determining module 1005, a reconstruction region determining module 1006, and an image reconstructing module 1008.
The target image generation module 1001 obtains the target image in one of the following ways before the display module 1002 displays it. In a first mode, it controls the image reconstruction module 1008 to perform a first reconstruction on a scan image of the scanned object and obtains the target image based on the result of the first reconstruction. The spatial resolution corresponding to the first reconstruction is smaller than or equal to the spatial resolution corresponding to the second reconstruction, where the second reconstruction corresponds to the process of obtaining the three-dimensional CT image through reconstruction. In a second mode, a corresponding preset image is acquired as the target image according to the first imaging visual field range; the preset image comprises a representation of a preset scanned-object structure.
The target image may be a cross-sectional view.
The manner in which the target image generation module 1001 obtains the target image based on the result of the first reconstruction may include: obtaining the target image according to a target section and the result of the first reconstruction. The target section may be at a preset distance from a preset position of a first image, where the first image corresponds to the result of the first reconstruction and the preset position may be located at a spatial edge of the first image.
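Selecting the target section at a preset distance from the bottom face of the first-reconstruction volume reduces to picking a slice index. The following sketch assumes slices ordered bottom-to-top along Z with index 0 at the bottom face; the function name, units, and clamping behavior are illustrative assumptions, not taken from the patent text.

```python
def target_slice_index(num_slices: int, slice_spacing_cm: float,
                       preset_distance_cm: float) -> int:
    """Index of the cross-section lying `preset_distance_cm` above the
    bottom face (the preset position at the volume's spatial edge) of
    the first-reconstruction volume.
    """
    idx = round(preset_distance_cm / slice_spacing_cm)
    # Clamp so that an out-of-range preset distance still yields a
    # valid slice of the volume.
    return max(0, min(num_slices - 1, idx))
```

For example, with 90 slices spaced 0.1 cm apart, a preset distance of 3 cm from the bottom face selects slice 30.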
The region of interest determination module 1005 may obtain the currently determined region of interest through the overlap region between a preset target region and the target image. The target region may change its position on the target image in response to a received movement indication, and may change its size in response to a received first indication.
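Taking the overlap between the movable target region and the target image amounts to a rectangle intersection clipped to the image bounds. The sketch below assumes an (x, y, w, h) frame representation with the origin at the image's top-left corner, all values in one length unit; this representation is an illustrative assumption.

```python
def roi_from_overlap(frame, image_w, image_h):
    """Region of interest as the overlap between the movable target
    frame and the target image. Returns (x, y, w, h) of the overlap,
    or None when the frame lies wholly outside the image.
    """
    x, y, w, h = frame
    # Clip the frame's corners to the image rectangle.
    x0, y0 = max(0.0, x), max(0.0, y)
    x1, y1 = min(image_w, x + w), min(image_h, y + h)
    if x1 <= x0 or y1 <= y0:
        return None  # no overlap: the frame is entirely off-image
    return (x0, y0, x1 - x0, y1 - y0)
```

A frame dragged partly past the image edge thus yields a smaller region of interest consisting only of the part that still lies on the target image.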
The receiving module 1004 may obtain the reconstruction indication by detecting that a target event is triggered on a page that determines the region of interest.
The manner in which the reconstruction region determination module 1006 obtains the reconstruction region according to the currently determined region of interest may include: obtaining the reconstruction region according to the position features and size of the currently determined region of interest and a preset first length, where the first length is a length in the direction perpendicular to the target image in three-dimensional image space. The position features may include relative position information between the position of the center point of the region of interest on the target image and the position of the center point of the target image.
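Extending the two-dimensional region of interest by the perpendicular first length, then sampling at the finer second resolution, can be sketched as a conversion to voxel index bounds. Units (cm in, half-open voxel indices out) and the axis convention below are illustrative assumptions, not taken from the patent text.

```python
def region_voxel_bounds(roi_cm, first_length_cm, fine_voxel_mm):
    """Expand a 2-D region of interest (x, y, w, h in cm) by the
    perpendicular first length into half-open voxel index bounds
    (x0, x1, y0, y1, z0, z1) at the second, finer reconstruction
    resolution.
    """
    x, y, w, h = roi_cm

    def to_voxel(cm):
        # Convert a length in cm to a voxel index at the fine resolution.
        return int(round(cm * 10.0 / fine_voxel_mm))

    return (to_voxel(x), to_voxel(x + w),
            to_voxel(y), to_voxel(y + h),
            0, to_voxel(first_length_cm))
```

For instance, a 5 cm x 5 cm region of interest with the default 9 cm first length spans 200 x 200 x 360 voxels at a 0.25 mm fine resolution, far fewer than the full field would require.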
Any process or method description in a flow chart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. Further implementations are included within the scope of the preferred embodiments of the present invention, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art. The processor performs the various methods and processes described above. For example, method embodiments of the present invention may be implemented as a software program tangibly embodied on a machine-readable medium, such as a memory. In some embodiments, part or all of the software program may be loaded and/or installed via the memory and/or a communication interface. When the software program is loaded into the memory and executed by the processor, one or more of the steps of the methods described above may be performed. Alternatively, in other embodiments, the processor may be configured to perform one of the methods described above in any other suitable manner (e.g., by means of firmware).
Logic and/or steps represented in the flowcharts or otherwise described herein may be embodied in any readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present invention may be implemented in hardware, software, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps implementing the method of the above embodiments may be implemented by a program to instruct related hardware, and the program may be stored in a readable storage medium, where the program when executed includes one or a combination of the steps of the method embodiments. The storage medium may be a volatile/nonvolatile storage medium.
In addition, each functional unit in each embodiment of the present invention may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. The storage medium may be a read-only memory, a magnetic disk or optical disk, etc.
The invention also provides an electronic device, comprising: a memory storing execution instructions; and a processor or other hardware module that executes the memory-stored execution instructions, causing the processor or other hardware module to perform the image processing method of any of the above embodiments.
The present invention also provides a computer-readable storage medium having stored therein execution instructions which, when executed by a processor, are to implement the image processing method of any of the above embodiments.
For the purposes of this description, a "readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The readable storage medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a memory.
The invention also provides a computer program product comprising a computer program/instruction which when executed by a processor implements the image processing method of any of the above embodiments.
In the description of the present specification, the descriptions of the terms "one embodiment/mode," "some embodiments/modes," "specific examples," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment/mode or example is included in at least one embodiment/mode or example of the present invention. In this specification, the schematic representations of the above terms are not necessarily the same embodiments/modes or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments/modes or examples. Furthermore, the various embodiments/implementations or examples described in this specification and the features of the various embodiments/implementations or examples may be combined and combined by persons skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
It will be appreciated by persons skilled in the art that the above embodiments are provided for clarity of illustration only and are not intended to limit the scope of the invention. Other variations or modifications of the above-described invention will be apparent to those of skill in the art, and are still within the scope of the invention.

Claims (15)

1. An image processing method, comprising:
Displaying a target image, wherein the size of the target image corresponds to a first imaging visual field range, the first imaging visual field range is the imaging visual field range of a scanned image obtained by scanning a scanned object, the target image comprises a representation of a target volume, and the representation of the target volume corresponds to a partial structure in the scanned object;
Responsive to the received reconstruction indication, obtaining a reconstruction region from a currently determined region of interest, the region of interest corresponding to a partial region within the target image, the region of interest comprising at least a partial region of the target volume; and
reconstructing according to the reconstruction region and the scanned image of the scanned object to obtain a three-dimensional CT image.
2. The image processing method according to claim 1, wherein the target image is obtained by one of the following before being displayed:
In a first mode, a first reconstruction is performed on a scanned image of a scanned object, a target image is obtained based on the result of the first reconstruction, the spatial resolution corresponding to the first reconstruction is smaller than or equal to the spatial resolution corresponding to the second reconstruction, and the second reconstruction corresponds to the process of obtaining the three-dimensional CT image through reconstruction;
In a second mode, a corresponding preset image is acquired as the target image according to the first imaging visual field range, wherein the preset image comprises a representation of a preset scanned-object structure.
3. The image processing method according to claim 2, wherein the target image is a sectional view.
4. The image processing method according to claim 3, wherein obtaining the target image based on the result of the first reconstruction includes:
obtaining the target image according to a target section and the result of the first reconstruction.
5. The image processing method according to claim 4, wherein the target cross section has a preset distance from a preset position of a first image, the first image corresponding to a result of the first reconstruction, the preset position being located at a spatial edge of the first image.
6. The image processing method according to any one of claims 3 to 5, wherein the target image is a cross-sectional image.
7. The image processing method according to any one of claims 3 to 5, wherein the currently determined region of interest is obtained by an overlap region between a preset target region and the target image, the target region being subject to a positional change on the target image in response to the received movement indication.
8. The image processing method of claim 7, wherein the target region changes its size in response to the received first indication.
9. The image processing method according to any one of claims 3-5, wherein the reconstruction indication is obtained by monitoring that a target event is triggered on a page defining the region of interest.
10. The image processing method according to any one of claims 3-5, wherein obtaining the reconstructed region from the currently determined region of interest comprises:
obtaining a reconstruction region according to the position features and the size of the currently determined region of interest and a preset first length, wherein the first length is a length in a direction perpendicular to the target image in a three-dimensional image space.
11. The image processing method according to claim 10, wherein the position features include: relative position information between the position of the center point of the region of interest on the target image and the position of the center point of the target image.
12. The method of claim 1, wherein the scanned object corresponds to at least a partial region of a human head and the target volume comprises teeth.
13. An electronic device, comprising:
a memory storing execution instructions; and
A processor executing the execution instructions stored in the memory, causing the processor to execute the image processing method according to any one of claims 1 to 12.
14. A readable storage medium having stored therein execution instructions which, when executed by a processor, are adapted to carry out the image processing method according to any one of claims 1 to 12.
15. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the image processing method of any of claims 1 to 12.
CN202410482685.1A 2024-04-22 2024-04-22 Image processing method, electronic device, readable storage medium, and program product Pending CN118071869A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410482685.1A CN118071869A (en) 2024-04-22 2024-04-22 Image processing method, electronic device, readable storage medium, and program product

Publications (1)

Publication Number Publication Date
CN118071869A true CN118071869A (en) 2024-05-24

Family

ID=91107786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410482685.1A Pending CN118071869A (en) 2024-04-22 2024-04-22 Image processing method, electronic device, readable storage medium, and program product

Country Status (1)

Country Link
CN (1) CN118071869A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103913472A (en) * 2012-12-31 2014-07-09 同方威视技术股份有限公司 CT imaging system and method
CN104599316A (en) * 2014-12-18 2015-05-06 天津三英精密仪器有限公司 Fault direction adjustable three-dimensional image reconstruction method and system for cone-beam CT (computed tomography)
US20160086329A1 (en) * 2014-09-19 2016-03-24 Frank Dennerlein Device and method for assessing x-ray images
CN110706336A (en) * 2019-09-29 2020-01-17 上海昊骇信息科技有限公司 Three-dimensional reconstruction method and system based on medical image data
CN111402355A (en) * 2020-03-19 2020-07-10 上海联影医疗科技有限公司 PET image reconstruction method and device and computer equipment
CN111402356A (en) * 2020-03-19 2020-07-10 上海联影医疗科技有限公司 Parameter imaging input function extraction method and device and computer equipment
CN111528890A (en) * 2020-05-09 2020-08-14 上海联影医疗科技有限公司 Medical image acquisition method and system
CN116019474A (en) * 2023-02-22 2023-04-28 有方(合肥)医疗科技有限公司 Multi-source imaging device and method
CN116188617A (en) * 2023-04-21 2023-05-30 有方(合肥)医疗科技有限公司 CT image data processing method, device and CT system
CN117594197A (en) * 2023-11-22 2024-02-23 上海联影医疗科技股份有限公司 Preview generation method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUO Xiukun; WEI Sui; CHENG Zhiyou: "Interpolation algorithm for three-dimensional image reconstruction based on raw single-helical CT data", Chinese Journal of Medical Instrumentation, no. 04, 30 July 2006 (2006-07-30) *

Similar Documents

Publication Publication Date Title
US10424118B2 (en) Perspective representation of a virtual scene component
US8199168B2 (en) System and method for 3D graphical prescription of a medical imaging volume
JP2022110067A (en) Methods and systems for patient scan setup
US9058665B2 (en) Systems and methods for identifying bone marrow in medical images
US11154260B2 (en) Apparatus for partial CT imaging comprising a collimator to center a radiation beam toward a region of interest spaced apart from a rotation axis
US8754888B2 (en) Systems and methods for segmenting three dimensional image volumes
US10258306B2 (en) Method and system for controlling computer tomography imaging
JP2009011827A (en) Method and system for multiple view volume rendering
JP2014117611A (en) Integration of intra-oral imagery and volumetric imagery
CN116019474B (en) Multi-source imaging device and method
US20130064440A1 (en) Image data reformatting
CN107705350B (en) Medical image generation method, device and equipment
JP2005103263A (en) Method of operating image formation inspecting apparatus with tomographic ability, and x-ray computerized tomographic apparatus
WO2008120136A1 (en) 2d/3d image registration
US6975897B2 (en) Short/long axis cardiac display protocol
CN109360233A (en) Image interfusion method, device, equipment and storage medium
JP7439075B2 (en) Device and method for editing panoramic radiographic images
CN100583161C (en) Method for depicting an object displayed in a volume data set
CN118071869A (en) Image processing method, electronic device, readable storage medium, and program product
EP3809376A2 (en) Systems and methods for visualizing anatomical structures
JP5196801B2 (en) Digital tomography imaging processor
CN111062998A (en) Image reconstruction method, image reconstruction device, CT system and storage medium
JP2022533583A (en) Protocol-dependent 2D prescan projection images based on 3D prescan volumetric image data
CN109272476A (en) Image co-registration method for drafting, device, equipment and the storage medium of PET/CT
CN117557650A (en) Parallel motion scanning imaging system parameter calibration method, terminal and calibration body

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination