CN111275825A - Positioning result visualization method and device based on virtual intelligent medical platform - Google Patents

Positioning result visualization method and device based on virtual intelligent medical platform

Info

Publication number
CN111275825A
CN111275825A (application CN202010038150.7A)
Authority
CN
China
Prior art keywords
target object
positioning
dimensional model
real
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010038150.7A
Other languages
Chinese (zh)
Other versions
CN111275825B (en)
Inventor
于金明
卢洁
王琳琳
钱俊超
张凯
李彦飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202010038150.7A
Publication of CN111275825A
Application granted
Publication of CN111275825B
Active legal status
Anticipated expiration of legal status

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 15/00: ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computer Hardware Design (AREA)
  • Pathology (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Radiation-Therapy Devices (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The disclosure relates to a positioning result visualization method and device based on a virtual intelligent medical platform. The method comprises the following steps: obtaining a three-dimensional visual virtual image according to target object data; performing virtual-real registration between the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result; combining an accelerator beam three-dimensional model in the virtual image with the registration result, and rendering to obtain a positioning result; and displaying the positioning result during the radiotherapy positioning process. In the embodiment of the disclosure, by combining mixed reality technology, information such as tumors and radiation beams is visualized, realizing three-dimensional holographic display of medical images, three-dimensional display of radiotherapy plans and visual display of positioning results. The target object can observe the positioning result more intuitively and efficiently, confirm the degree of positioning completion, and reduce positioning errors; meanwhile, the positioning result can be corrected on the basis of the displayed information, improving positioning accuracy.

Description

Positioning result visualization method and device based on virtual intelligent medical platform
Technical Field
The disclosure relates to the technical field of computer vision, in particular to a positioning result visualization method and device based on a virtual intelligent medical platform.
Background
With the development of information and electronic technology, traditional ways of receiving and processing information can no longer meet a target object's need to acquire information efficiently. For example, in the medical field, during setup for radiation therapy (RT), the patient is informed of the setup result orally by a technician; because the tumor, normal tissues and radiation beams in the human body are invisible to the naked eye and most patients have no medical background, patients cannot obtain the positioning result intuitively and efficiently, patients and technicians cannot interact effectively, and positioning efficiency is reduced.
Disclosure of Invention
In view of this, the present disclosure provides a positioning result visualization method and apparatus based on a virtual intelligent medical platform.
According to one aspect of the disclosure, a positioning result visualization method based on a virtual intelligent medical platform is provided, which includes:
obtaining a three-dimensional visual virtual image according to the target object data;
carrying out virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
combining an accelerator beam three-dimensional model in the virtual image with the registration result, and rendering to obtain a positioning result;
and displaying the positioning result during the radiotherapy positioning process.
In a possible implementation manner, the obtaining a three-dimensional visualized virtual image according to target object data includes:
acquiring DICOM RT (radiotherapy) data of a target object through a DICOM network;
extracting the target object data according to the DICOM RT data;
establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data;
and obtaining the three-dimensional visual virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
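The four sub-steps above can be sketched as a small data flow, here in Python. The `Model3D`/`VirtualImage` types and the `build_virtual_image` helper are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical data holders -- the disclosure does not prescribe concrete types.
@dataclass
class Model3D:
    name: str
    vertices: List[Tuple[float, float, float]] = field(default_factory=list)

@dataclass
class VirtualImage:
    models: Dict[str, Model3D] = field(default_factory=dict)

def build_virtual_image(target_object_data: dict) -> VirtualImage:
    """Assemble the three-dimensional visual virtual image from the
    extracted target object data (illustrative helper)."""
    image = VirtualImage()
    # One model per radiotherapy entity named in the disclosure.
    for key in ("target_area", "roi", "dose_distribution", "accelerator_beam"):
        if key in target_object_data:
            image.models[key] = Model3D(name=key, vertices=target_object_data[key])
    return image

demo = {"target_area": [(0.0, 0.0, 0.0)], "accelerator_beam": [(1.0, 2.0, 3.0)]}
image = build_virtual_image(demo)
```

In practice each model's geometry would come from the segmentation and modeling step described below; here the vertex lists merely stand in for real mesh data.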
In one possible implementation, the establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data includes:
analyzing the target object data to obtain radiotherapy related data;
establishing corresponding three-dimensional model data according to the relevant data of the radiotherapy;
and converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and a target object related three-dimensional model.
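As a minimal illustration of the format-conversion sub-step, the snippet below serializes a triangle mesh to Wavefront OBJ text, the format the embodiments later name for the model data files; the helper itself is an assumption for illustration:

```python
def write_obj(vertices, faces):
    """Serialize a triangle mesh to Wavefront OBJ text; faces arrive with
    0-based indices and are converted to OBJ's 1-based indexing."""
    lines = ["v %.6f %.6f %.6f" % v for v in vertices]
    lines += ["f %d %d %d" % tuple(i + 1 for i in face) for face in faces]
    return "\n".join(lines) + "\n"

obj_text = write_obj([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
                     [(0, 1, 2)])
```

A real pipeline would obtain `vertices` and `faces` from the segmentation/modeling software rather than hard-coding them.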
In a possible implementation manner, the obtaining a registration result by performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene includes:
acquiring a real-time picture of the real positioning scene;
obtaining feature points of the real positioning scene according to the real-time picture;
and matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the feature points to obtain a registration result.
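A common way to realize the feature-point matching described above is rigid point-set registration. The sketch below uses the Kabsch algorithm with NumPy, assuming the virtual-model feature points and the detected scene feature points are already paired; the disclosure does not prescribe this particular algorithm:

```python
import numpy as np

def register_points(virtual_pts, real_pts):
    """Estimate the rigid transform (R, t) mapping paired feature points of
    the virtual model onto the matching points detected in the real
    positioning scene (Kabsch algorithm)."""
    P = np.asarray(virtual_pts, dtype=float)
    Q = np.asarray(real_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Applying `R` and `t` to the model vertices places the target object-related three-dimensional model at the corresponding position in the real scene.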
In one possible implementation, the feature points correspond to position markers added to the skin of the target object during a Computed Tomography (CT) scan.
In one possible implementation, the displaying the positioning result during the radiotherapy positioning process includes:
determining at least one target position according to the position and the visual angle of the target object in the real radiotherapy scene;
and displaying the positioning result at the target position through a display device.
In one possible implementation, the target object data includes: basic information of a target object, CT image data, plan information, structure set information and dosage information;
the target object-related three-dimensional model comprises: a target area three-dimensional model, an ROI three-dimensional model and a dose distribution three-dimensional model.
According to another aspect of the present disclosure, there is provided a positioning result visualization apparatus based on a virtual intelligent medical platform, including:
the virtual image construction module is used for obtaining a three-dimensional visual virtual image according to the target object data;
the virtual-real registration module is used for carrying out virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
the rendering module is used for combining the accelerator beam three-dimensional model in the virtual image and the registration result, and rendering to obtain a positioning result;
and the display module is used for displaying the positioning result in the radiotherapy positioning process.
According to another aspect of the present disclosure, there is provided a positioning result visualization apparatus based on a virtual intelligent medical platform, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, by combining a mixed reality technology, information such as tumors and rays is visualized, and three-dimensional holographic display of medical images, three-dimensional display of radiotherapy plans and visual display of positioning results are realized; the target object can observe the positioning result more intuitively and efficiently, clearly know the positioning condition, confirm the positioning and positioning completion degree and reduce the positioning error. Meanwhile, the display information can be used for assisting communication, and the positioning efficiency is improved. In addition, the doctor can correct the positioning result through the display information, and the positioning accuracy is improved.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a positioning result visualization method based on a virtual intelligent medical platform according to an embodiment of the present disclosure;
Fig. 2 shows a device connection diagram for positioning result visualization according to an embodiment of the present disclosure;
Fig. 3 shows a schematic view of a radiotherapy positioning result visualization scenario according to an embodiment of the present disclosure;
Fig. 4 shows a block diagram of a positioning result visualization apparatus based on a virtual intelligent medical platform according to an embodiment of the present disclosure;
Fig. 5 shows a block diagram of an apparatus for positioning result visualization based on a virtual intelligent medical platform according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
With the change of the disease spectrum, malignant tumors have become the leading threat to human health; over the course of tumor progression, about two thirds of patients will receive radiation therapy, whether with curative intent or for palliative relief.
At present, in the actual clinical radiotherapy positioning process, the positioning result is conveyed to the patient orally by a technician. However, since the tumor, normal tissues and radiation beams in the human body are invisible to the naked eye and most patients have no medical background, an oral description cannot make the specific situation clear to the patient, nor can it reduce the patient's fear of and worry about the tumor and radiotherapy. As a result, patients develop many physical and psychological symptoms and adverse psychological reactions during the radiation therapy stage, which seriously affects their quality of life and treatment compliance, and may even compromise the treatment effect.
Therefore, a technical scheme for visualizing the positioning result is provided. By combining mixed reality technology, information such as tumors and radiation beams is visualized, realizing three-dimensional holographic display of medical images, three-dimensional display of radiotherapy plans and visual display of the positioning result. The patient can observe the positioning result more intuitively and efficiently, clearly understand the positioning situation, confirm the degree of positioning completion, and reduce positioning errors. Meanwhile, the displayed information can assist communication and improve positioning efficiency. In addition, the doctor can correct the positioning result through the displayed information, improving positioning accuracy.
Fig. 1 shows a flowchart of a positioning result visualization method based on a virtual intelligent medical platform according to an embodiment of the present disclosure. As shown in fig. 1, the method may include:
step 10, obtaining a three-dimensional visual virtual image according to target object data;
step 20, carrying out virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
step 30, combining the accelerator beam three-dimensional model in the virtual image and the registration result, and rendering to obtain a positioning result;
and step 40, displaying the positioning result in the radiotherapy positioning process.
The Virtual Intelligent (VI) medical platform is built on holographic technologies such as virtual reality, augmented reality and mixed reality, combined with methods such as artificial intelligence and big-data analysis. It is used to assist and guide invasive, minimally invasive and non-invasive clinical diagnosis and treatment processes, to support patient diagnosis and education, and can be applied in fields such as surgery, internal medicine, radiotherapy and interventional medicine. The positioning result is the result obtained during the radiotherapy positioning process: a doctor first delineates the tumor on the images in the planning system to determine the coordinates of the center of the patient's tumor, and a physicist and an operator then place the patient's tumor center at the treatment center (including the isocenter) of the radiotherapy equipment according to those coordinates.
Therefore, based on the virtual intelligent medical platform, the existing target object data information of the hospital is analyzed and converted into a three-dimensional visual virtual image, the virtual image is matched with a real scene through a virtual intelligent technology and is displayed on a display terminal, so that three-dimensional holographic display of the medical image, three-dimensional display of a radiotherapy plan and visual display of a positioning result are realized, the positioning result can be observed more visually and more efficiently by the target object, the positioning error is reduced, and the positioning efficiency is improved.
The positioning result visualization scheme based on the virtual intelligent medical platform is exemplified below with reference to fig. 2 and 3.
Fig. 2 shows a connection diagram of the devices used for positioning result visualization according to an embodiment of the present disclosure. As shown in fig. 2, these devices may include: image acquisition devices (camera 01, camera 02 and camera 03 in the figure), display devices (display device 01 and display device 02 in the figure), a processing device (PC), a server and an in-hospital information system. Fig. 3 shows a schematic view of a radiotherapy positioning result visualization scenario according to an embodiment of the present disclosure; as shown in fig. 3, the scenario includes: image acquisition devices (camera 01, camera 02 and camera 03 in the figure), a display device (the display in the figure), a PC, a server, an in-hospital information system and an accelerator.
In fig. 2 and fig. 3 above, the image acquisition devices capture pictures of the real positioning scene in real time and transmit them to the PC over a wired or wireless connection. The PC and the server acquire target object data and the like through the in-hospital information system, perform positioning result visualization processing on the data (including data extraction, three-dimensional reconstruction and virtual-real registration), and transmit the processing result to the display devices in real time for terminal display. It should be noted that the number, installation positions and connection modes of the devices such as the image acquisition devices and display devices in fig. 2 and fig. 3 may be set according to actual needs; the disclosure does not limit this.
In a possible implementation manner, in step 10, the obtaining a three-dimensional visualized virtual image according to the target object data may include: obtaining DICOM RT data through a DICOM network; extracting the target object data according to the DICOM RT data; establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data; and obtaining the three-dimensional visual virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
DICOM RT data is related data obtained from a hospital DICOM network; DICOM is the international standard for medical images and related information (ISO 12052). DICOM RT data may be acquired through the in-hospital information system and may include CT image data, RT Plan information, RT Structure Set information and RT Dose information. Illustratively, the CT image data of the target object is obtained through CT scanning, and related data such as the plan information, structure set information and dose information are then derived from it. Next, according to information such as the identity of the target object undergoing radiotherapy, data extraction may be performed on the obtained DICOM RT data to obtain the corresponding target object data, which may include related information such as the target object's basic information, CT image data, plan information, structure set information and dose information. The target object data may then be segmented and modeled to create a plurality of three-dimensional models, which may include target object-related three-dimensional models such as a target area three-dimensional model, a region of interest (ROI) three-dimensional model and a dose distribution three-dimensional model, as well as an accelerator beam three-dimensional model. Finally, these three-dimensional models may be combined according to the established relative spatial positions among them to obtain the three-dimensional visual virtual image.
For example, the server in fig. 2 or fig. 3 may interface with the hospital's DICOM network and provide a C-STORE network service (backed by a relational database for quick query) to receive DICOM RT data such as CT image data, RT Plan information, RT Structure Set information and RT Dose information transmitted via the DICOM protocol. The DICOM RT data can then be parsed to extract target object data such as the target object's basic information, CT image data, plan information, structure set information and dose information. A plurality of three-dimensional models are then established according to the target object data, and finally the three-dimensional visual virtual image is obtained.
In one possible implementation, the establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data includes: analyzing the target object data to obtain radiotherapy related data; establishing corresponding three-dimensional model data according to the relevant data of the radiotherapy; and converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and a target object related three-dimensional model.
For example, data analysis may be performed on the extracted target object data to obtain radiotherapy-related data, and a Json file (that is, a file saved in the Json data format) describing the data information is generated. The radiotherapy-related data (the Json file) is imported into the medical image processing software 3D Slicer, whose Segment Editor and Model Maker modules can be driven from the Python language to segment and model the CT image data, structure set information and dose information covering the target area, region of interest, dose distribution, accelerator beam and other parts of the target object's plan information, yielding corresponding three-dimensional models such as a target area three-dimensional model, a region-of-interest three-dimensional model, a dose distribution three-dimensional model and an accelerator beam three-dimensional model. Finally, these models are saved as model data files in the OBJ format, and Json files describing the model data files are generated for subsequent processing. In this way, three-dimensional models are obtained by modeling the target object data with the 3D Slicer software, enabling automated, batch processing of the data; meanwhile, through the three-dimensional visual virtual image obtained by three-dimensional reconstruction of the CT image data, the target object can grasp the positioning situation intuitively and efficiently, make a subjective judgment, confirm the degree of positioning completion, and reduce errors.
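The Json description files mentioned above could, for instance, look like the following; all field names are illustrative assumptions, since the disclosure does not specify the Json schema:

```python
import json

def describe_models(model_files):
    """Generate the Json description that accompanies the OBJ model data
    files; the "role"/"file" field names are illustrative assumptions."""
    description = {
        "format": "OBJ",
        "models": [{"role": role, "file": path} for role, path in model_files.items()],
    }
    return json.dumps(description, indent=2)

doc = describe_models({
    "target_area": "target_area.obj",
    "roi": "roi.obj",
    "dose_distribution": "dose.obj",
    "accelerator_beam": "beam.obj",
})
```

Downstream components (the PC-side matching step, for example) could then locate each reconstructed model data file by parsing such a description.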
In one possible implementation manner, in step 20, the obtaining a registration result by performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene includes: acquiring a real-time picture of the real positioning scene; obtaining the characteristic points of the real positioning scene according to the real-time picture; and matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the characteristic points to obtain a registration result.
In the embodiment of the present disclosure, pictures of the real positioning scene may be captured in real time by one or more image acquisition devices placed in that scene. Illustratively, when the number of image acquisition devices is greater than one, the real-time pictures obtained by the devices may be fused, and the fused pictures may then be registered against the target object-related three-dimensional models in the three-dimensional visual virtual image to obtain the registration result.
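The disclosure leaves the multi-camera fusion method open; one minimal stand-in is to average each named marker's per-camera position estimates, as sketched below (marker names and coordinates are hypothetical):

```python
def fuse_feature_points(per_camera_points):
    """Average each named marker's position estimates across cameras -- a
    simple stand-in for the fusion processing, which the disclosure does
    not fix."""
    collected = {}
    for camera in per_camera_points:
        for name, point in camera.items():
            collected.setdefault(name, []).append(point)
    return {
        name: tuple(sum(coord) / len(points) for coord in zip(*points))
        for name, points in collected.items()
    }

fused = fuse_feature_points([
    {"chest_center": (0.0, 0.0, 1.0)},   # estimate from camera 01
    {"chest_center": (0.5, 0.0, 1.5)},   # estimate from camera 02
])
```

A production system would instead triangulate from calibrated camera poses; averaging merely illustrates where the fusion step sits in the pipeline.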
The feature points correspond to position markers added to the target object's skin during the computed tomography (CT) scan. Illustratively, during the CT scan, markers may be added at particular locations on the skin, such as the middle and both sides of the chest and the middle and both sides of the abdomen, and the markers may be rendered in the form of two-dimensional codes. Each marker corresponds to the spatial position of a feature point of the real scene obtained from the real-time picture; meanwhile, the relative positions of the marker and the virtual image reconstructed from the CT scanning data remain unchanged. Accordingly, based on the relation between the feature points obtained from the pictures transmitted by the cameras in real time and the established three-dimensional visual virtual image, the target object-related three-dimensional models in the virtual image are matched into the real-time picture, realizing virtual-real registration.
For example, as shown in fig. 3, virtual-real registration may use three cameras, a PC and other devices. The three cameras, installed at different positions, capture multi-angle real-time pictures of the real positioning scene and transmit them to the PC in real time; the PC computes the feature points of the real positioning scene in the pictures. The PC then acquires the patient's data information through the background and performs matching according to the patient's extracted Json file information; it retrieves the corresponding three-dimensional reconstructed model data files from the server and, based on the relation between the feature points obtained from the real-time camera pictures and the established three-dimensional visual virtual image, matches the target object-related three-dimensional models of interest to the patient, technician and doctor (such as the target area and the ROI) to the corresponding positions in the real positioning scene.
In a possible implementation manner, in step 30, the combining of the accelerator beam three-dimensional model in the virtual image with the registration result and rendering to obtain a positioning result may include: determining the position of the accelerator beam three-dimensional model (a portal model) according to the registration result, for example by making the isocenter of the target object-related three-dimensional model coincide with that of the accelerator beam three-dimensional model, and rendering the accelerator beam three-dimensional model together with the registration result to obtain the positioning result. It should be noted that, in the embodiment of the present disclosure, there may be one or more accelerator beam three-dimensional models; that is, the positioning result may include a plurality of accelerator beam three-dimensional models with different angles and different shapes.
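The isocenter-coincidence placement described above amounts to translating the beam model by the offset between the two isocenters; a minimal sketch with hypothetical helpers:

```python
def align_beam_to_target(beam_isocenter, target_isocenter):
    """Translation that makes the beam model's isocenter coincide with the
    isocenter of the registered target object-related model."""
    return tuple(t - b for b, t in zip(beam_isocenter, target_isocenter))

def apply_translation(vertices, offset):
    """Move every vertex of the beam model by the computed offset."""
    return [tuple(c + d for c, d in zip(v, offset)) for v in vertices]

offset = align_beam_to_target((0.0, 0.0, 0.0), (10.0, -5.0, 2.5))
moved = apply_translation([(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)], offset)
```

With several beams, the same offset would be applied to each beam model before rendering, since all share the treatment isocenter.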
In one possible implementation, in step 40, the displaying the positioning result during the radiotherapy positioning process includes: determining at least one target position according to the position and the visual angle of the target object in the real radiotherapy scene; and displaying the positioning result at the target position through a display device.
In the embodiment of the disclosure, one or more target positions may be set according to factors such as the position and viewing angle of the target object and the actual environment, so that the registration result can be displayed visually. Illustratively, the display device can distinguish the components of the registration result by assigning different colors or different color depths to different areas, so that the target object can grasp the registration result more intuitively and efficiently, conveniently observe and confirm the positioning result, and improve positioning efficiency.
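Distinguishing areas by color or color depth might be realized as follows; the specific color assignments are assumptions for illustration only:

```python
# Illustrative color assignments; the disclosure only states that areas are
# distinguished by different colors or different color depths.
REGION_COLORS = {
    "target_area": (255, 0, 0),
    "roi": (0, 255, 0),
    "dose_distribution": (0, 0, 255),
    "accelerator_beam": (255, 255, 0),
}

def shade(base_rgb, depth):
    """Scale a base color by a depth factor clamped to [0, 1], e.g. to
    render higher dose levels in a deeper color."""
    depth = max(0.0, min(1.0, depth))
    return tuple(int(round(channel * depth)) for channel in base_rgb)

half_depth = shade(REGION_COLORS["dose_distribution"], 0.5)
```

The renderer would apply such a mapping per region when compositing the registration result for the display device.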
For example, as shown in fig. 3, the virtual-real registration result may be displayed by display devices (projectors, displays, etc.), and several display devices may be added at different positions and angles according to the patient's position and viewing angle in the radiotherapy scene. For a lying patient, for instance, a display device can project onto, or be placed, directly above the patient, so that the patient can conveniently observe and confirm the positioning result. In this way, the target object can observe the registration result intuitively and conveniently, make a subjective judgment of the positioning situation in light of his or her own condition, and have the positioning corrected; meanwhile, a doctor can correct the positioning result by observing the display device, improving positioning accuracy.
It should be noted that, although the above embodiments are described as examples of a method for visualizing a positioning result based on a virtual intelligent medical platform, those skilled in the art will understand that the present disclosure should not be limited thereto. In fact, the user can flexibly set each implementation mode according to personal preference and/or actual application scene, as long as the technical scheme of the disclosure is met.
Therefore, by combining the mixed reality technology, information such as tumors and beams is visualized, realizing three-dimensional holographic display of medical images, three-dimensional display of radiotherapy plans, and visual display of positioning results. The patient can observe the positioning result more intuitively and efficiently, clearly understand the positioning situation, make a subjective judgment, participate in confirming that positioning is complete, and reduce positioning errors. Meanwhile, the method can assist doctor-patient communication, improve positioning efficiency, relieve the patient's psychological pressure, eliminate fear, help the patient maintain a healthy psychological state and good immune function, encourage active cooperation with treatment, reduce treatment errors, and positively influence the treatment of tumor radiotherapy patients. In addition, the doctor can correct the positioning result through the three-dimensional image, improving positioning accuracy.
Fig. 4 shows a block diagram of a positioning result visualization apparatus based on a virtual intelligent medical platform according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus may include: a virtual image construction module 41, configured to obtain a three-dimensional visual virtual image according to the target object data; a virtual-real registration module 42, configured to perform virtual-real registration on the virtual image and the real positioning scene to obtain a registration result; a rendering module 43, configured to combine the accelerator beam three-dimensional model in the virtual image and the registration result, and render to obtain a positioning result; and the display module 44 is used for displaying the positioning result in the radiotherapy positioning process.
In a possible implementation manner, the virtual image constructing module 41 may include: a DICOM RT data acquisition unit for acquiring DICOM RT data of the target object through a DICOM network; a target object data extraction unit, configured to extract the target object data according to the DICOM RT data; the three-dimensional model building unit is used for building an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data; and the virtual image acquisition unit is used for obtaining the three-dimensional visual virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
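As a concrete illustration of the target object data extraction unit: DICOM RT stores each contour of a structure set as a flat list [x1, y1, z1, x2, y2, z2, ...] in millimetres, and grouping it into three-dimensional points is one small, self-contained piece of that step. The sketch below assumes the DICOM parsing itself has already been done with some DICOM library (the disclosure names none), so only the pure grouping logic is shown:

```python
# Minimal sketch of one extraction step: regrouping a flat DICOM RT
# ContourData list into (x, y, z) points. Illustrative only; the
# surrounding DICOM network/file handling is omitted.

def contour_to_points(contour_data):
    """Group a flat ContourData list into (x, y, z) triples."""
    if len(contour_data) % 3 != 0:
        raise ValueError("ContourData length must be a multiple of 3")
    return [tuple(contour_data[i:i + 3]) for i in range(0, len(contour_data), 3)]

# Example: a square contour on the z = 10.0 mm slice.
square = [0.0, 0.0, 10.0, 20.0, 0.0, 10.0, 20.0, 20.0, 10.0, 0.0, 20.0, 10.0]
points = contour_to_points(square)
```

In a real pipeline this grouping would run per ROI and per slice before the contours are turned into three-dimensional model data.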
In a possible implementation manner, the three-dimensional model building unit may include: the data analysis subunit is used for analyzing and processing the target object data to obtain radiotherapy related data; the model data construction subunit is used for establishing corresponding three-dimensional model data according to the radiotherapy related data; and the format conversion subunit is used for converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and the target object related three-dimensional model.
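The format conversion subunit is only said to produce "a specified format". As a hedged illustration, the sketch below serialises vertex/face mesh data into Wavefront OBJ text, one common interchange format for three-dimensional models; the choice of OBJ is an assumption, not stated in the disclosure:

```python
# Sketch of the format-conversion subunit: writing a triangular mesh
# (vertices and faces) as Wavefront OBJ text. OBJ is an illustrative
# target format; OBJ face indices are 1-based.

def mesh_to_obj(vertices, faces):
    """Convert a vertex/face mesh into OBJ-format text."""
    lines = [f"v {x:.3f} {y:.3f} {z:.3f}" for x, y, z in vertices]
    lines += [f"f {a + 1} {b + 1} {c + 1}" for a, b, c in faces]
    return "\n".join(lines) + "\n"

# A single triangle as a minimal mesh.
obj_text = mesh_to_obj([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```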
In one possible implementation, the virtual-real registration module 42 may include: the real-time picture acquisition unit is used for acquiring a real-time picture of the real positioning scene; the characteristic point calculating unit is used for obtaining the characteristic points of the real positioning scene according to the real-time picture; and the virtual-real registration unit is used for matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the characteristic points to obtain a registration result.
In one possible implementation, the feature points correspond to position markers added on the skin of the target subject during a computed tomography (CT) scan.
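A deliberately simplified sketch of the virtual-real registration unit: given matched pairs of virtual marker points and feature points detected in the real scene, the virtual model is translated so that the two centroids coincide. A full implementation would also solve for rotation (for example with the Kabsch algorithm); this translation-only version, with illustrative coordinates, only conveys the idea:

```python
# Translation-only point registration sketch (assumes matched point
# pairs; rotation/scale are deliberately ignored for brevity).

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def register_by_translation(virtual_pts, real_pts):
    """Return virtual_pts shifted so their centroid matches real_pts'."""
    cv, cr = centroid(virtual_pts), centroid(real_pts)
    shift = tuple(cr[i] - cv[i] for i in range(3))
    return [tuple(p[i] + shift[i] for i in range(3)) for p in virtual_pts]

# Illustrative marker coordinates (mm), not from the disclosure.
virtual = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
real = [(10.0, 5.0, 1.0), (12.0, 5.0, 1.0)]
aligned = register_by_translation(virtual, real)
```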
In a possible implementation manner, the display module 44 may include: the target position selecting unit is used for determining at least one target position according to the position and the visual angle of a target object in the real radiotherapy scene; and the display unit is used for displaying the positioning result at the target position through display equipment.
In one possible implementation, the target object data includes: basic information of a target object, CT image data, plan information, structure set information and dosage information; the target object-related three-dimensional model comprises: a target region three-dimensional model, an ROI three-dimensional model, a dose distribution three-dimensional model and an accelerator beam three-dimensional model.
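Purely for illustration, the five categories of target object data listed above could be grouped in a structure like the following; the field names and types are assumptions, not part of the disclosure:

```python
# Hypothetical container mirroring the listed target object data
# categories; names/types are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TargetObjectData:
    basic_info: dict       # basic information of the target object
    ct_images: list        # CT image data (e.g. per-slice pixel arrays)
    plan_info: dict        # radiotherapy plan information
    structure_sets: dict   # structure set (ROI) information
    dose_info: dict        # dose information

d = TargetObjectData(basic_info={}, ct_images=[], plan_info={},
                     structure_sets={}, dose_info={})
```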
It should be noted that, although the above embodiments are described by taking a positioning result visualization apparatus based on a virtual intelligent medical platform as an example, those skilled in the art will understand that the present disclosure should not be limited thereto. In fact, the user can flexibly configure each implementation according to personal preference and/or the actual application scenario, provided the technical solution of the present disclosure is satisfied.
Therefore, by combining the mixed reality technology, information such as tumors and beams is visualized, realizing three-dimensional holographic display of medical images, three-dimensional display of radiotherapy plans, and visual display of positioning results. The patient can observe the positioning result more intuitively and efficiently, clearly understand the positioning situation, make a subjective judgment, participate in confirming that positioning is complete, and reduce positioning errors. Meanwhile, the method can assist doctor-patient communication, improve positioning efficiency, relieve the patient's psychological pressure, eliminate fear, help the patient maintain a healthy psychological state and good immune function, encourage active cooperation with treatment, reduce treatment errors, and positively influence the treatment of tumor radiotherapy patients. In addition, the doctor can correct the positioning result through the three-dimensional image, improving positioning accuracy.
Fig. 5 shows a block diagram of an apparatus 1900 for positioning result visualization based on a virtual intelligent medical platform, according to an embodiment of the present disclosure. For example, the apparatus 1900 may be provided as a server. Referring to fig. 5, the apparatus 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by a memory 1932, for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described method.
The apparatus 1900 may also include a power component 1926 configured to perform power management of the apparatus 1900, a wired or wireless network interface 1950 configured to connect the apparatus 1900 to a network, and an input/output (I/O) interface 1958. The apparatus 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing state information of the computer-readable program instructions and can execute those instructions, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A positioning result visualization method based on a virtual intelligent medical platform is characterized by comprising the following steps:
obtaining a three-dimensional visual virtual image according to the target object data;
carrying out virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
combining an accelerator beam three-dimensional model in the virtual image with the registration result, and rendering to obtain a positioning result;
and displaying the positioning result during the radiotherapy positioning process.
2. The method of claim 1, wherein obtaining a virtual image of a three-dimensional visualization from the target object data comprises:
obtaining DICOM RT data of a target object through a DICOM network;
extracting the target object data according to the DICOM RT data;
establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data;
and obtaining the three-dimensional visual virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
3. The method of claim 2, wherein said establishing an accelerator beam three-dimensional model and a target object-related three-dimensional model according to said target object data comprises:
analyzing the target object data to obtain radiotherapy related data;
establishing corresponding three-dimensional model data according to the relevant data of the radiotherapy;
and converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and a target object related three-dimensional model.
4. The method according to claim 1, wherein the obtaining a registration result by performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene comprises:
acquiring a real-time picture of the real positioning scene;
obtaining the characteristic points of the real positioning scene according to the real-time picture;
and matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the characteristic points to obtain a registration result.
5. The method of claim 4, wherein the feature points correspond to position markers added to the skin of the target subject during a computed tomography (CT) scan.
6. The method of claim 5, wherein displaying the positioning results during the radiation therapy positioning comprises:
determining at least one target position according to the position and the visual angle of the target object in the real radiotherapy scene;
and displaying the positioning result at the target position through a display device.
7. The method according to any one of claims 2-6, wherein the target object data comprises: basic information of a target object, CT image data, plan information, structure set information and dosage information;
the target object-related three-dimensional model comprises: a target area three-dimensional model, an ROI three-dimensional model and a dose distribution three-dimensional model.
8. A positioning result visualization apparatus based on a virtual intelligent medical platform, characterized by comprising:
the virtual image construction module is used for obtaining a three-dimensional visual virtual image according to the target object data;
the virtual-real registration module is used for carrying out virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
the rendering module is used for combining the accelerator beam three-dimensional model in the virtual image and the registration result, and rendering to obtain a positioning result;
and the display module is used for displaying the positioning result in the radiotherapy positioning process.
9. A positioning result visualization apparatus based on a virtual intelligent medical platform, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any one of claims 1 to 7 when executing the memory-stored executable instructions.
10. A non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any of claims 1 to 7.
CN202010038150.7A 2020-01-14 2020-01-14 Positioning result visualization method and device based on virtual intelligent medical platform Active CN111275825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010038150.7A CN111275825B (en) 2020-01-14 2020-01-14 Positioning result visualization method and device based on virtual intelligent medical platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010038150.7A CN111275825B (en) 2020-01-14 2020-01-14 Positioning result visualization method and device based on virtual intelligent medical platform

Publications (2)

Publication Number Publication Date
CN111275825A true CN111275825A (en) 2020-06-12
CN111275825B CN111275825B (en) 2024-02-27

Family

ID=71002998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010038150.7A Active CN111275825B (en) 2020-01-14 2020-01-14 Positioning result visualization method and device based on virtual intelligent medical platform

Country Status (1)

Country Link
CN (1) CN111275825B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111870825A (en) * 2020-07-31 2020-11-03 于金明 Radiotherapy precise field-by-field positioning method based on virtual intelligent medical platform
CN112070903A (en) * 2020-09-04 2020-12-11 脸萌有限公司 Virtual object display method and device, electronic equipment and computer storage medium
CN112076400A (en) * 2020-10-15 2020-12-15 上海市肺科医院 Repeated positioning method and system
CN112274166A (en) * 2020-10-18 2021-01-29 上海联影医疗科技股份有限公司 Control method, system and device of medical diagnosis and treatment equipment
CN112401919A (en) * 2020-11-17 2021-02-26 上海联影医疗科技股份有限公司 Auxiliary positioning method and system based on positioning model
CN114306956A (en) * 2021-03-29 2022-04-12 于金明 Spiral tomography radiotherapy system based on virtual intelligent medical platform

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104587609A (en) * 2015-02-03 2015-05-06 瑞地玛医学科技有限公司 Positioning and locating device for radiotherapy and positioning method of dynamic target region
CN105893772A (en) * 2016-04-20 2016-08-24 上海联影医疗科技有限公司 Data acquiring method and data acquiring device for radiotherapy plan
CN108231199A (en) * 2017-12-29 2018-06-29 上海联影医疗科技有限公司 Radiotherapy planning emulation mode and device
CN108335365A (en) * 2018-02-01 2018-07-27 张涛 A kind of image-guided virtual reality fusion processing method and processing device
CN108460843A (en) * 2018-04-13 2018-08-28 广州医科大学附属肿瘤医院 It is a kind of based on virtual reality radiotherapy patient treatment instruct platform
CN109364387A (en) * 2018-12-05 2019-02-22 上海市肺科医院 A kind of radiotherapy AR localization and positioning system
CN110141360A (en) * 2018-02-11 2019-08-20 四川英捷达医疗科技有限公司 Digital technology air navigation aid
CN110237441A (en) * 2019-05-30 2019-09-17 新乡市中心医院(新乡中原医院管理中心) Coordinate method positions in radiotherapy and puts the application of position


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111870825A (en) * 2020-07-31 2020-11-03 于金明 Radiotherapy precise field-by-field positioning method based on virtual intelligent medical platform
CN111870825B (en) * 2020-07-31 2023-08-18 于金明 Radiation therapy accurate field-by-field positioning method based on virtual intelligent medical platform
CN112070903A (en) * 2020-09-04 2020-12-11 脸萌有限公司 Virtual object display method and device, electronic equipment and computer storage medium
CN112076400A (en) * 2020-10-15 2020-12-15 上海市肺科医院 Repeated positioning method and system
WO2022077828A1 (en) * 2020-10-15 2022-04-21 上海市肺科医院 Repeated positioning method and system
CN112274166A (en) * 2020-10-18 2021-01-29 上海联影医疗科技股份有限公司 Control method, system and device of medical diagnosis and treatment equipment
CN112401919A (en) * 2020-11-17 2021-02-26 上海联影医疗科技股份有限公司 Auxiliary positioning method and system based on positioning model
WO2022105813A1 (en) * 2020-11-17 2022-05-27 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for subject positioning
CN114306956A (en) * 2021-03-29 2022-04-12 于金明 Spiral tomography radiotherapy system based on virtual intelligent medical platform
CN114306956B (en) * 2021-03-29 2024-06-04 上海联影医疗科技股份有限公司 Spiral fault radiotherapy system based on virtual intelligent medical platform

Also Published As

Publication number Publication date
CN111275825B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
CN111275825B (en) Positioning result visualization method and device based on virtual intelligent medical platform
EP3726467B1 (en) Systems and methods for reconstruction of 3d anatomical images from 2d anatomical images
TWI663961B (en) Object positioning apparatus, object positioning method, object positioning program, and radiation therapy system
US9554772B2 (en) Non-invasive imager for medical applications
JP6768862B2 (en) Medical image processing method, medical image processing device, medical image processing system and medical image processing program
CN111261265B (en) Medical imaging system based on virtual intelligent medical platform
CN112401919B (en) Auxiliary positioning method and system based on positioning model
CN113662573B (en) Mammary gland focus positioning method, device, computer equipment and storage medium
CN111214764B (en) Radiotherapy positioning verification method and device based on virtual intelligent medical platform
CN111353524A (en) System and method for locating patient features
CN111369675B (en) Three-dimensional visual model reconstruction method and device based on lung nodule pleural projection
CN112150543A (en) Imaging positioning method, device and equipment of medical imaging equipment and storage medium
Advincula et al. Development and future trends in the application of visualization toolkit (VTK): the case for medical image 3D reconstruction
Sarmadi et al. 3D Reconstruction and alignment by consumer RGB-D sensors and fiducial planar markers for patient positioning in radiation therapy
US11980424B1 (en) Use of real-time and storable image data stream for generation of an immersive virtual universe in metaverse or a 3-D hologram, for medical and veterinary teaching and training
RU2552696C2 (en) Device and method for obtaining diagnostic information
US20220000442A1 (en) Image orientation setting apparatus, image orientation setting method, and image orientation setting program
CN111243713A (en) Radiotherapy plan simulation method, device and medium based on virtual intelligent medical platform
Talbot et al. A method for patient set-up guidance in radiotherapy using augmented reality
EP4298994A1 (en) Methods, systems and computer readable mediums for evaluating and displaying a breathing motion
US20090202118A1 (en) Method and apparatus for wireless image guidance
KR20230066526A (en) Image Processing Method, Apparatus, Computing Device and Storage Medium
WO2024083817A1 (en) De-identifying sensitive information in 3d a setting
CN114332223A (en) Mask generation method, image registration method, computer device, and storage medium
CN118212248A (en) Target area sketching method, model training method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230801

Address after: Shandong Provincial Tumor Hospital, No. 440 Yanji Road, Jinan, Shandong 250117

Applicant after: Yu Jinming

Applicant after: Affiliated Tumor Hospital of Shandong First Medical University (Shandong cancer prevention and treatment institute Shandong Cancer Hospital)

Address before: Shandong Provincial Tumor Hospital, No. 440 Yanji Road, Jinan, Shandong 250117

Applicant before: Yu Jinming

TA01 Transfer of patent application right

Effective date of registration: 20231007

Address after: No. 2258 Chengbei Road, Jiading District, Shanghai 201807

Applicant after: Shanghai Lianying Medical Technology Co.,Ltd.

Address before: Shandong Provincial Tumor Hospital, No. 440 Yanji Road, Jinan, Shandong 250117

Applicant before: Yu Jinming

Applicant before: Affiliated Tumor Hospital of Shandong First Medical University (Shandong cancer prevention and treatment institute Shandong Cancer Hospital)

TA01 Transfer of patent application right
GR01 Patent grant