CN115035121A - VR-based X-ray lung image simulation generation system - Google Patents

VR-based X-ray lung image simulation generation system

Info

Publication number
CN115035121A
Authority
CN
China
Prior art keywords
image
ray
unit
lung
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210964794.8A
Other languages
Chinese (zh)
Other versions
CN115035121B (en)
Inventor
袁元 (Yuan Yuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Ugion Technology Co ltd
Jiangsu Yuyuan Intelligent Technology Co ltd
Original Assignee
Shanghai Ugion Technology Co ltd
Jiangsu Yuyuan Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Ugion Technology Co ltd, Jiangsu Yuyuan Intelligent Technology Co ltd filed Critical Shanghai Ugion Technology Co ltd
Priority to CN202210964794.8A priority Critical patent/CN115035121B/en
Publication of CN115035121A publication Critical patent/CN115035121A/en
Application granted granted Critical
Publication of CN115035121B publication Critical patent/CN115035121B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis > G06T 7/0002 Inspection of images, e.g. flaw detection > G06T 7/0012 Biomedical image inspection
    • G06T 15/00 3D [Three Dimensional] image rendering > G06T 15/005 General purpose rendering architectures
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 5/00 Image enhancement or restoration > G06T 5/70 Denoising; Smoothing
    • G06V 10/00 Arrangements for image or video recognition or understanding > G06V 10/70 Arrangements using pattern recognition or machine learning > G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/10 Image acquisition modality > G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/10 Image acquisition modality > G06T 2207/10116 X-ray image
    • G06T 2207/30 Subject of image; Context of image processing > G06T 2207/30004 Biomedical image processing > G06T 2207/30061 Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a VR-based X-ray lung image simulation generation system, which comprises an image acquisition unit, an image preprocessing unit, and a virtual scene generation unit and modeling unit connected as parallel branches, together with a fusion interaction unit at which the branches converge via a central processing unit; the fusion interaction unit is in two-way signal connection with VR terminal equipment. The system collects X-ray scanning images, thorax scanning images and appearance images, generates a corresponding thorax model from the thorax scanning images and appearance images combined with related data, generates a three-dimensional model of the lungs from the multi-angle X-ray scanning images, and fuses the thorax model with the three-dimensional model. The simulation generation system improves the presentation of the position and shape of the lungs within the thorax and greatly enhances definition, so that the relevant personnel can read the images easily and the misreading rate is reduced.

Description

VR-based X-ray lung image simulation generation system
Technical Field
The invention relates to the field of medical image optimization equipment, in particular to a VR-based X-ray lung image simulation generation system.
Background
With the rapid development of the medical equipment industry, radiographic imaging based mainly on X-rays (CT, as it is commonly known) has become a popular technical means of assisting diagnosis. The technical principle and implementation of X-ray imaging, which generates a black-and-white contrast image of the required part of the scanned object, are disclosed in a large amount of public material, so the principle and examples are not repeated here.
Based on the principle of CT imaging, X-rays are absorbed by the scanned object, and differences in the absorption ratios of different parts appear on the screen as black-white contrast and light-dark differences, giving an image that ranges from black to white and from light to dark, in which the boundary lines are either relatively sharp or gradual.
Clinically, for X-ray images such as DR and CT images, because the overall and local structures of the irradiated object are similar and X-rays cannot be focused by a lens during imaging, the generated X-ray images are usually slightly blurred and the contrast of the target focus is low. A plain X-ray image is formed by superimposing multiple layers of images during scanning; it is blurry, poses great difficulty for the reader, and may even be misread.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a VR-based X-ray lung image simulation generation system, which solves the problems that an X-ray image obtained by simple superposition is blurred, difficult to recognize, and may even impair analysis and judgment.
In order to achieve this purpose, the invention provides the following technical scheme: a VR-based X-ray lung image simulation generation system, comprising:
the image acquisition unit is used for acquiring original images at least including X-ray scanning images, thorax scanning images and appearance images inside and outside the thorax;
the image preprocessing unit is used for respectively denoising and enhancing various acquired original images;
the virtual scene generation unit is used for generating a thoracic cavity model of a scene where the lung is located;
the modeling unit is used for generating a point cloud model related to the lung and rendering the point cloud model into a three-dimensional model of the lung;
the central processing unit is used for processing the received information and issuing instructions to the downstream units;
the fusion interaction unit is used for fusing the three-dimensional model of the lung with the thoracic cavity model;
the VR terminal equipment is used for checking the fused model generated by simulation;
the virtual scene generation unit and the modeling unit are gathered by a central processing unit and accessed into a fusion interaction unit through one-way signals, and the fusion interaction unit is in two-way signal butt joint with VR terminal equipment.
In the above VR-based X-ray lung image simulation generation system, the image acquisition unit further comprises an X-ray image acquisition module for acquiring X-ray scanning images of the lungs, and a thorax image acquisition module for acquiring thorax scanning images and appearance images.
In the above VR-based X-ray lung image simulation generation system, the X-ray scanning images, thorax scanning images and appearance images are furthermore acquired in multiple groups and together form the original images, each group being obtained by photographing and scanning from several different angles; the X-ray scanning images acquired from multiple angles correspond to two or more body positions of the scanned subject.
In the above VR-based X-ray lung image simulation generation system, in the image preprocessing unit, wavelet decomposition is used in combination with a Log-Gabor filter to denoise the images, and the local energy distribution conversion method in VR imaging is used in combination with a local binary pattern to enhance them.
In the above VR-based simulation generation system for X-ray lung images, the virtual scene generation unit further generates a thorax model according to the thorax scan image and the appearance image in combination with related data, where the related data includes references, 3D pictures and physiological structure information related to the thorax.
In the above VR-based X-ray lung image simulation generation system, the modeling unit generates the three-dimensional model based on the data of the preprocessed X-ray scanning images; and the generating step comprises:
associating the coordinate systems: acquiring the two-dimensional information of the preprocessed X-ray scanning image, representing the image information by the coordinate value of each pixel point in the image, and obtaining the relation matrix between the world coordinate system and the pixel coordinate system of the image;
point cloud matching and data fusion: converting the coordinates of all pixel points in the X-ray scanning images, eliminating redundant information through point cloud registration, and then fusing the data obtained by point cloud matching;
surface generation and surface reconstruction: connecting the faces of each voxel cube to form isosurfaces, combining all isosurfaces of the cubes to obtain a complete three-dimensional surface, and then performing a registration operation on the processed images to obtain a complete point cloud model that fuses images from different viewing angles;
and model rendering: expressing the object in the form of an image and establishing a geometric model for each vertex coordinate of the object, the image information of the object containing a large amount of object geometry data.
The simulation generation system of the X-ray lung image based on VR further comprises an access module and a voice module, wherein the access module is used for login and access of a user through VR terminal equipment, and the voice module is used for real-time voice communication and man-machine interaction.
In the above simulation generation system of an X-ray lung image based on VR, further, the VR terminal device is configured to view a model generated by simulation.
The simulation generation system of the invention has the advantages that:
1. The invention expands the acquisition range to X-ray scanning images, thorax scanning images and appearance images. A corresponding thorax model is generated from the thorax scanning images and appearance images combined with related data, a three-dimensional model of the lungs is generated from the acquired multi-angle X-ray scanning images, and the thorax model and the three-dimensional model are fused. This improves the presentation of the position and shape of the lungs within the thorax and greatly enhances definition, so that the relevant personnel can read the images easily and the misreading rate is reduced.
2. The invention uses a point cloud model for three-dimensional modeling, which effectively reduces data redundancy in the model, lowers the algorithmic complexity of three-dimensional model reconstruction and increases the model generation rate; VR terminal equipment is used for visual observation of the model, so that the related data can be understood more quickly and comprehensively.
3. The invention denoises and enhances the X-ray scanning images, reducing the interference and occlusion caused by scattered images other than the lungs, and enhances the feature points in the X-ray scanning images so that they stand out, ensuring that the final X-ray scanning images are clearer, have distinct features and are easy to read.
Drawings
FIG. 1 is a schematic diagram of a VR-based X-ray lung image simulation system according to the present invention.
FIG. 2 is a schematic diagram of the internal structure of the image capturing unit according to the present invention.
FIG. 3 is a flow chart illustrating the functional implementation of the modeling unit of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
The invention designs and provides a VR-based X-ray lung image simulation generation system. As shown in figs. 1-2, in outline the system comprises an image acquisition unit 1 for acquiring original images including at least X-ray scanning images, thorax scanning images, and appearance images of the inside and outside of the thorax;
the image preprocessing unit 2 is used for respectively denoising and enhancing various acquired original images;
the virtual scene generation unit 3 is used for generating a thoracic cavity model of a scene where the lung is located;
the modeling unit 4 is used for generating a point cloud model related to the lung and rendering the point cloud model into a three-dimensional model of the lung;
the central processing unit 5 is used for processing the received information and issuing related instructions downstream as required;
the fusion interaction unit 6 is used for fusing the three-dimensional model of the lung with the thoracic cavity model;
the VR terminal device 7 is used for checking the fused model generated by simulation;
The image acquisition unit 1 is connected to the image preprocessing unit 2 by a one-way signal, the image preprocessing unit 2 is connected to the virtual scene generation unit 3 and the modeling unit 4 by one-way, branched signals, the virtual scene generation unit 3 and the modeling unit 4 converge at the central processing unit 5 and are connected to the fusion interaction unit 6 by one-way signals, and the fusion interaction unit 6 is in two-way signal connection with the VR terminal device 7.
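As an illustration of the unit wiring just described, the following Python sketch organises the one-way, branched signal flow as a simple function pipeline; the class and function names (RawImages, acquire, build_thorax_model and so on) are illustrative assumptions and not part of the patent.

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class RawImages:
    xray_scans: List[np.ndarray]    # multi-angle X-ray scans of the lungs
    thorax_scans: List[np.ndarray]  # thorax scan images
    appearance: List[np.ndarray]    # appearance images, inside and outside the thorax


def acquire() -> RawImages:
    # image acquisition unit (1): placeholder arrays stand in for real captures
    dummy = [np.zeros((256, 256)) for _ in range(3)]
    return RawImages(xray_scans=dummy, thorax_scans=dummy, appearance=dummy)


def preprocess(raw: RawImages) -> RawImages:   # image preprocessing unit (2): denoise + enhance
    return raw


def build_thorax_model(raw: RawImages) -> dict:  # virtual scene generation unit (3)
    return {"thorax_mesh": None}


def build_lung_model(raw: RawImages) -> dict:    # modeling unit (4): point cloud -> 3D lung model
    return {"lung_mesh": None}


def fuse(thorax: dict, lung: dict) -> dict:      # fusion interaction unit (6)
    return {"scene": (thorax, lung)}


def run_pipeline() -> dict:
    """Central processing unit (5): one-way flow 1 -> 2 -> (3, 4) -> 6 -> VR terminal (7)."""
    raw = preprocess(acquire())
    thorax = build_thorax_model(raw)   # branch a
    lung = build_lung_model(raw)       # branch b
    return fuse(thorax, lung)          # handed to the VR terminal for two-way interaction
```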
Refining the features further, the image acquisition unit 1 described above comprises an X-ray image acquisition module 11 and a thorax image acquisition module 12. The X-ray image acquisition module 11 is used for acquiring X-ray scanning images of the lungs, and the thorax image acquisition module 12 is used for acquiring thorax scanning images and appearance images.
To increase the depth of the image data set and the definition of the generated model, the X-ray scanning images, thorax scanning images and appearance images are acquired in multiple groups and together form the original images; each group is obtained by photographing and scanning from several different angles. In particular, the X-ray scanning images acquired from multiple angles correspond to two or more body positions of the scanned subject.
In the image preprocessing unit 2, wavelet decomposition is combined with a Log-Gabor filter to denoise the images, and the local energy distribution conversion method in VR imaging is combined with a local binary pattern to enhance them. The specific steps are detailed below.
Wavelet denoising is first applied to the X-ray scanning images, thorax scanning images and appearance images respectively, so that each of them is distributed over subspaces of the wavelet domain. At each wavelet scale the Riesz transforms of the image are taken in the x and y directions, and the image is locally decomposed using the pixel values along the monogenic direction in the wavelet domain. A principal-component feature distribution matrix is constructed in each local region from the PCA principal components corresponding to the two Riesz-transformed images, and from this matrix the local block binary pattern of the image is expressed.
Wavelet decomposition is then combined with a Log-Gabor filter to filter out the redundant local main-direction information of the image at a given scale and to obtain a phase reference value for the main direction. The high-frequency information of the image is represented by a matrix S, from which the distribution function of the main direction of each local block is obtained, with a regularization parameter included to suppress the influence of noise.
Image information enhancement is then performed in combination with VR imaging: for an input image of a given size, the local energy distribution conversion method yields the main direction of each image block and an information-enhanced main-direction component map. After the strong main direction of the image pixels has been determined in the local binary pattern, the wavelet features and the local binary pattern are used together to improve imaging robustness, and the projection area of the image pixels is uniformly quantized into T intervals, each interval being a principal-component region of the quantized pixel vectors. (The explicit formulas for these steps are provided as equation images in the original filing and are omitted here.)
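Because the explicit formulas are only available as images, the following Python sketch illustrates the general shape of this preprocessing stage, namely wavelet soft-threshold denoising, a frequency-domain Log-Gabor band-pass filter, and a local-binary-pattern feature map, using PyWavelets and scikit-image. The wavelet choice, threshold rule and filter parameters are assumptions for illustration, not the patent's own values.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern


def wavelet_denoise(img: np.ndarray, wavelet: str = "db4", level: int = 2) -> np.ndarray:
    """Soft-threshold the detail coefficients of a 2-level wavelet decomposition."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # universal threshold estimated from the finest diagonal detail band
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail) for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)[: img.shape[0], : img.shape[1]]


def log_gabor_filter(img: np.ndarray, f0: float = 0.1, sigma_ratio: float = 0.55) -> np.ndarray:
    """Apply a radially symmetric Log-Gabor band-pass filter in the frequency domain."""
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    lg[0, 0] = 0.0                          # zero out the DC component
    return np.real(np.fft.ifft2(np.fft.fft2(img) * lg))


def lbp_enhance(img: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Local binary pattern map used as the enhancement feature image."""
    norm = ((img - img.min()) / (np.ptp(img) + 1e-12) * 255).astype(np.uint8)
    return local_binary_pattern(norm, points, radius, method="uniform")


# example: denoise, band-pass, and compute the LBP feature map for one scan
scan = np.random.rand(256, 256)             # stand-in for an acquired X-ray slice
features = lbp_enhance(log_gabor_filter(wavelet_denoise(scan)))
```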
The above-mentioned virtual scene generation unit 3 generates a thorax model from the thorax scanning images and the appearance images in combination with related data, the related data including references, 3D pictures and physiological structure information relevant to the thorax. The principle by which the unit generates the thorax model, simulating a three-dimensional schematic from multiple two-dimensional images, is a relatively mature technology in this field, so the detailed process is omitted.
The modeling unit 4 is mainly used for generating the three-dimensional model based on the data of the preprocessed X-ray scanning images; the generation steps are detailed below.
First, the coordinate systems are associated. The two-dimensional information of the preprocessed X-ray scanning image is acquired, the image information is represented by the coordinate value of each pixel point, and the relation matrix between the world coordinate system and the pixel coordinate system of the image is obtained, in which R denotes the rotation matrix and T the translation vector. A camera parameter matrix K is then introduced; its four parameters depend only on the capture equipment used while building the three-dimensional model of the image, so by taking K as the in-device parameter matrix and the image capture device itself as the world coordinate system, K can be determined. For any pixel point of the image, the corresponding world coordinates are obtained by converting the above relation: with K known, the X-, Y- and Z-axis coordinates of the point in the world coordinate system follow directly. (The relation matrix and the conversion formulas are provided as equation images in the original filing and are omitted here.)
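The coordinate association described above is consistent with the standard pinhole camera model; the sketch below shows, under that assumption, how a pixel (u, v) with a known depth is mapped to world coordinates when the capture device itself is taken as the world coordinate system (so R is the identity and T is zero). The intrinsic parameter values are placeholders.

```python
import numpy as np

# intrinsic parameter matrix K: the four device-dependent parameters are the
# focal lengths (fx, fy) and the principal point (u0, v0) -- placeholder values
fx, fy, u0, v0 = 1000.0, 1000.0, 256.0, 256.0
K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])


def pixel_to_world(u: float, v: float, depth: float, K: np.ndarray) -> np.ndarray:
    """Back-project a pixel to world coordinates, with the camera as the world frame."""
    uv1 = np.array([u, v, 1.0])
    return depth * np.linalg.inv(K) @ uv1          # (Xw, Yw, Zw), with Zw == depth


def world_to_pixel(xyz: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Forward projection: s * [u, v, 1]^T = K [R | T] [Xw, Yw, Zw, 1]^T with R = I, T = 0."""
    uvw = K @ xyz
    return uvw[:2] / uvw[2]


point = pixel_to_world(300.0, 200.0, depth=1.5, K=K)
assert np.allclose(world_to_pixel(point, K), [300.0, 200.0])   # round trip recovers the pixel
```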
Second, point cloud matching and data fusion: the coordinates of each pixel point of the X-ray scanning images are converted, the point clouds are registered to eliminate redundant information, and the data obtained by point cloud matching are then fused.
When the three-dimensional image model is established, repeated parts may appear because the object is photographed from multiple angles, so parameter conversion is required before the image undergoes three-dimensional reconstruction; point cloud registration maps images with different shooting times, shooting angles and shooting environments into the same coordinate system, thereby eliminating redundant information. The image information after point cloud registration is scattered through the data space, and the objects in the image are not clearly represented. A volume grid is therefore constructed with the Kinect sensor at the origin; this grid divides the registered data space into many tiny cuboids called voxels. Each voxel is assigned an effective distance field whose value is the shortest distance from the voxel to the model surface: the closer the value is to zero, the closer the voxel is to the surface of the reconstructed three-dimensional image model. If the value is greater than zero, the voxel lies in front of the reconstructed surface; otherwise it lies behind it. The TSDF algorithm stores only the few layers of voxels close to the real surface, which reduces memory consumption and redundant points and enlarges the reconstruction range of the three-dimensional model; it represents the three-dimensional space of the object with a three-dimensional grid and fuses the stored distances by a weighted formula.
In the fusion formula (provided as an equation image in the original filing), one term denotes the initial distance stored in the grid, another denotes the distance from the point cloud to the grid, and W denotes the weight of the fusion operation on the same grid cell; the fused value of each cell is the weighted combination of the two distances.
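The weighted fusion relied on here can be illustrated with the standard running-average TSDF update used in KinectFusion-style pipelines; whether the patent's image-only formula differs in detail is not recoverable, so the following Python sketch is given under that assumption.

```python
import numpy as np


def tsdf_update(D: np.ndarray, W: np.ndarray, d_new: np.ndarray, w_new: float = 1.0,
                trunc: float = 0.05) -> tuple:
    """One TSDF fusion step over a voxel grid.

    D     -- current truncated signed distance per voxel (initial grid distance)
    W     -- accumulated fusion weight per voxel
    d_new -- signed distance from the new point cloud to each voxel, same shape as D
    """
    d_new = np.clip(d_new, -trunc, trunc)          # truncate: keep only voxels near the surface
    D_fused = (W * D + w_new * d_new) / (W + w_new)
    return D_fused, W + w_new


# toy example: a 64^3 voxel grid fused with one new measurement
D = np.zeros((64, 64, 64))
W = np.zeros((64, 64, 64))
D, W = tsdf_update(D, W, d_new=np.full((64, 64, 64), 0.03))
```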
Third, surface generation and surface reconstruction: the faces of each voxel cube are connected to form isosurfaces, all isosurfaces of the cubes are combined to obtain a complete three-dimensional surface, and a registration operation is then performed on the processed images to obtain a complete point cloud model that fuses the images from different viewing angles.
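The isosurface extraction described in this step corresponds to a marching-cubes style operation; a minimal sketch using scikit-image's marching_cubes (an off-the-shelf stand-in for the cube-face combination described in the patent) on a self-contained example volume looks like this.

```python
import numpy as np
from skimage import measure

# self-contained example volume: signed distance to a sphere standing in for a fused TSDF grid
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
tsdf = np.sqrt(x ** 2 + y ** 2 + z ** 2) - 0.6     # zero level set = sphere of radius 0.6

# extract the zero isosurface; verts/faces together form the complete three-dimensional surface
verts, faces, normals, values = measure.marching_cubes(tsdf, level=0.0)
print(verts.shape, faces.shape)
```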
Fourth, model rendering: when rendering the three-dimensional image model, the object is expressed in the form of an image and a geometric model is established for each vertex coordinate of the object, the image information of the object containing a large amount of object geometry data.
Based on the acquired image point cloud model, the point cloud model is rendered under OpenGL to complete the reconstruction of the three-dimensional image model. When rendering, the object is expressed in image form and a geometric model is established for each vertex coordinate of the object; the image information contains a large amount of geometric data and can truly express the geometric characteristics of the object. Under OpenGL the vertices of the object can be translated or rotated, arbitrarily changing the space that the vertices span, and illumination is used to express the three-dimensional characteristics of the object: each vertex is lit by substituting the distance from the vertex to the light source, the direction vector and the view vector into the illumination model to obtain the vertex colour, which is expressed through the spatial relationship between the object, the viewpoint and the light source. Rendering the three-dimensional image model in this way yields the three-dimensional model effect of the object and makes the reconstruction more vivid, with a stronger stereoscopic impression.
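The per-vertex lighting described here, substituting the distance to the light source, the direction vector and the view vector into an illumination model, is in effect a Phong-style computation. The NumPy sketch below reproduces that calculation outside of OpenGL; all material and light parameters are assumed placeholders.

```python
import numpy as np


def phong_vertex_color(vertex, normal, light_pos, view_pos,
                       base_color=np.array([0.8, 0.3, 0.3]),
                       ka=0.2, kd=0.6, ks=0.4, shininess=32.0):
    """Phong-style illumination for one vertex: ambient + diffuse + specular, with
    the diffuse/specular terms attenuated by the distance to the light source."""
    n = normal / np.linalg.norm(normal)
    to_light = light_pos - vertex
    dist = np.linalg.norm(to_light)                       # distance from vertex to light source
    l = to_light / dist                                   # light direction vector
    v = (view_pos - vertex) / np.linalg.norm(view_pos - vertex)   # view vector
    r = 2.0 * np.dot(n, l) * n - l                        # reflection of the light direction
    attenuation = 1.0 / (1.0 + 0.1 * dist + 0.01 * dist ** 2)
    diffuse = kd * max(np.dot(n, l), 0.0)
    specular = ks * max(np.dot(r, v), 0.0) ** shininess
    return np.clip((ka + attenuation * (diffuse + specular)) * base_color, 0.0, 1.0)


color = phong_vertex_color(vertex=np.array([0.0, 0.0, 0.0]),
                           normal=np.array([0.0, 0.0, 1.0]),
                           light_pos=np.array([1.0, 1.0, 2.0]),
                           view_pos=np.array([0.0, 0.0, 3.0]))
```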
The fusion interaction unit 6 further comprises an access module and a voice module: the access module is used for user login and access through the VR terminal device 7, through which the simulated model is viewed, and the voice module is used for real-time voice communication and human-machine interaction.
In summary, the embodiments of the VR-based X-ray lung image simulation generation system described above can be applied in the related fields and industries, and compared with the conventional imaging scheme they offer technical effects in multiple coexisting respects, specifically:
1. The invention expands the acquisition range to X-ray scanning images, thorax scanning images and appearance images. A corresponding thorax model is generated from the thorax scanning images and appearance images combined with related data, a three-dimensional model of the lungs is generated from the acquired multi-angle X-ray scanning images, and the thorax model and the three-dimensional model are fused. This improves the presentation of the position and shape of the lungs within the thorax and greatly enhances definition, so that the relevant personnel can read the images easily and the misreading rate is reduced.
2. The invention uses a point cloud model for three-dimensional modeling, which effectively reduces data redundancy in the model, lowers the algorithmic complexity of three-dimensional model reconstruction and increases the model generation rate; VR terminal equipment is used for visual observation of the model, so that the related data can be understood more quickly and comprehensively.
3. The invention denoises and enhances the X-ray scanning images, reducing the interference and occlusion caused by scattered images other than the lungs, and enhances the feature points in the X-ray scanning images so that they stand out, ensuring that the final X-ray scanning images are clearer, have distinct features and are easy to read.
According to the invention, after the X-ray scanning images are obtained they are denoised and enhanced, which reduces the interference and occlusion caused by scattered images other than the lungs; the enhancement also increases the feature points in the X-ray scanning images and makes them more prominent, facilitating subsequent observation and ensuring that the final X-ray scanning images are clearer and have distinct features.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that are within the spirit and principle of the present invention are intended to be included in the scope of the present invention.

Claims (9)

1. A VR-based X-ray lung image simulation generation system, comprising:
the image acquisition unit (1) is used for acquiring original images at least comprising X-ray scanning images, chest scanning images and appearance images inside and outside the chest;
the image preprocessing unit (2) is used for respectively denoising and enhancing various acquired original images;
the virtual scene generation unit (3) is used for generating a chest model of a scene where the lung is located;
a modeling unit (4) for generating a point cloud model related to the lung and rendering the point cloud model into a three-dimensional model of the lung;
the central processing unit (5) is used for processing the received information and issuing instructions to the downstream units;
a fusion interaction unit (6) for fusing the three-dimensional model of the lung with the thoracic model;
the VR terminal equipment (7) is used for checking the fused model generated by simulation;
the image acquisition unit (1) is connected to the image preprocessing unit (2) by a one-way signal, the image preprocessing unit (2) is connected to the virtual scene generation unit (3) and the modeling unit (4) by one-way, branched signals, the virtual scene generation unit (3) and the modeling unit (4) converge at the central processing unit (5) and are connected to the fusion interaction unit (6) by one-way signals, and the fusion interaction unit (6) is in two-way signal connection with the VR terminal device (7).
2. The VR-based X-ray lung image simulation generation system as claimed in claim 1, characterized in that the image acquisition unit (1) comprises:
the X-ray image acquisition module (11) is used for acquiring an X-ray scanning image of the lung;
and a thorax image acquisition module (12) for acquiring a thorax scanning image and an appearance image.
3. The VR-based X-ray lung image simulation generation system of claim 2, characterized in that: the X-ray scanning images, thorax scanning images and appearance images are acquired in multiple groups and together form the original images, each group being obtained by photographing and scanning from several different angles.
4. The VR-based X-ray lung image simulation generation system of claim 3, characterized in that: the X-ray scanning images acquired from multiple angles correspond to two or more body positions of the scanned subject.
5. The VR-based X-ray lung image simulation generation system of claim 1, characterized in that: in the image preprocessing unit (2), wavelet decomposition is combined with a Log-Gabor filter to denoise the images, and the local energy distribution conversion method in VR imaging is combined with a local binary pattern to enhance them.
6. The VR-based X-ray lung image simulation generation system of claim 1, characterized in that: the virtual scene generation unit (3) generates the thorax model from the thorax scanning images and the appearance images in combination with related data, wherein the related data comprise references, 3D pictures and physiological structure information relevant to the thorax.
7. The VR-based X-ray lung image simulation generation system of claim 1, characterized in that: the modeling unit (4) generates the three-dimensional model based on the data of the preprocessed X-ray scanning images; and the generating step comprises:
associating the coordinate systems: acquiring the two-dimensional information of the preprocessed X-ray scanning image, representing the image information by the coordinate value of each pixel point in the image, and obtaining the relation matrix between the world coordinate system and the pixel coordinate system of the image;
point cloud matching and data fusion: converting the coordinates of all pixel points in the X-ray scanning images, eliminating redundant information through point cloud registration, and then fusing the data obtained by point cloud matching;
surface generation and surface reconstruction: connecting the faces of each voxel cube to form isosurfaces, combining all isosurfaces of the cubes to obtain a complete three-dimensional surface, and then performing a registration operation on the processed images to obtain a complete point cloud model that fuses images from different viewing angles;
and model rendering: expressing the object in the form of an image and establishing a geometric model for each vertex coordinate of the object, the image information of the object containing a large amount of object geometry data.
8. The VR-based X-ray lung image simulation generation system of claim 1, characterized in that: the fusion interaction unit (6) further comprises an access module and a voice module, wherein the access module is used for user login and access through the VR terminal device (7), and the voice module is used for real-time voice communication and human-machine interaction.
9. The VR-based X-ray lung image simulation generation system of claim 1, characterized in that: the VR terminal device (7) is used for viewing the model generated by the simulation.
CN202210964794.8A 2022-08-12 2022-08-12 VR-based X-ray lung image simulation generation system Active CN115035121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210964794.8A CN115035121B (en) 2022-08-12 2022-08-12 VR-based X-ray lung image simulation generation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210964794.8A CN115035121B (en) 2022-08-12 2022-08-12 VR-based X-ray lung image simulation generation system

Publications (2)

Publication Number Publication Date
CN115035121A true CN115035121A (en) 2022-09-09
CN115035121B CN115035121B (en) 2023-05-23

Family

ID=83130039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210964794.8A Active CN115035121B (en) 2022-08-12 2022-08-12 VR-based X-ray lung image simulation generation system

Country Status (1)

Country Link
CN (1) CN115035121B (en)

Citations (3)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109730768A (en) * 2019-01-10 2019-05-10 黄德荣 A kind of cardiac thoracic surgery supplementary controlled system and method based on virtual reality
US10769843B1 (en) * 2019-07-31 2020-09-08 Hongfujin Precision Electronics(Tianjin)Co., Ltd. 3D scene engineering simulation and real-life scene fusion system
CN114298986A (en) * 2021-12-17 2022-04-08 浙江大学滨江研究院 Thoracic skeleton three-dimensional construction method and system based on multi-viewpoint disordered X-ray film

Also Published As

Publication number Publication date
CN115035121B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN105869160B (en) The method and system of three-dimensional modeling and holographic display are realized using Kinect
JP4932951B2 (en) Facial image processing method and system
CN105353873B (en) Gesture control method and system based on Three-dimensional Display
JP4865093B2 (en) Method and system for animating facial features and method and system for facial expression transformation
JP3483929B2 (en) 3D image generation method
CN109584349B (en) Method and apparatus for rendering material properties
Remelli et al. Drivable volumetric avatars using texel-aligned features
WO2019140945A1 (en) Mixed reality method applied to flight simulator
CN114863038B (en) Real-time dynamic free visual angle synthesis method and device based on explicit geometric deformation
Cao et al. Sparse photometric 3D face reconstruction guided by morphable models
CN113421328B (en) Three-dimensional human body virtual reconstruction method and device
CN115880443B (en) Implicit surface reconstruction method and implicit surface reconstruction equipment for transparent object
CN109769109A (en) Method and system based on virtual view synthesis drawing three-dimensional object
CN116152417B (en) Multi-viewpoint perspective space fitting and rendering method and device
CN117413300A (en) Method and system for training quantized nerve radiation field
CN117671138A (en) Digital twin modeling method and system based on SAM large model and NeRF
Gering et al. Object modeling using tomography and photography
CN104224230B (en) Three-dimensional and four-dimensional ultrasonic imaging method and device based on GPU (Graphics Processing Unit) platform and system
CN116051696B (en) Reconstruction method and device of human body implicit model capable of being re-illuminated
CN115035121B (en) VR-based X-ray lung image simulation generation system
Chung et al. Enhancement of visual realism with BRDF for patient specific bronchoscopy simulation
Rendle et al. Volumetric avatar reconstruction with spatio-temporally offset rgbd cameras
Iwadate et al. VRML animation from multi-view images
Liu et al. Research on 3D point cloud model reconstruction method based on multi-kinects
i Bartrolı et al. Visualization techniques for virtual endoscopy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant