CN115409827A - Tumor image data processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115409827A
CN115409827A (application CN202211141988.4A)
Authority
CN
China
Prior art keywords
tumor
resection
target
boundary information
image data
Prior art date
Legal status
Pending
Application number
CN202211141988.4A
Other languages
Chinese (zh)
Inventor
张清
张余
Current Assignee
Beihang University
Beijing Jishuitan Hospital
Original Assignee
Beihang University
Beijing Jishuitan Hospital
Priority date
Filing date
Publication date
Application filed by Beihang University, Beijing Jishuitan Hospital filed Critical Beihang University
Priority to CN202211141988.4A
Publication of CN115409827A
Legal status: Pending


Classifications

    • G06T 7/0012 Biomedical image inspection
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G16H 30/40 ICT specially adapted for processing medical images, e.g. editing
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/10104 Positron emission tomography [PET]
    • G06T 2207/10116 X-ray image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a tumor image data processing method and device, an electronic device, and a storage medium. The method comprises: acquiring a tumor image dataset of a target object and determining target tumor boundary information based on the tumor image dataset; generating a tumor resection model corresponding to the target object according to the target tumor boundary information, where the tumor resection model comprises a plurality of tumor resection sections representing predicted resection positions and/or predicted resection directions; and generating a tumor resection guide image corresponding to the target object based on the tumor resection model. The invention can provide a tumor resection guide image with higher accuracy and stronger objectivity.

Description

Tumor image data processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for processing tumor image data, an electronic device, and a storage medium.
Background
With the development of computer-aided technology and medical imaging, bone and soft tissue tumor specialists have a clearer understanding of bone tumor images: they can define tumor resection boundaries before surgery according to the tumor's biological characteristics and perform complex bone tumor resections according to the preoperative plan. At present, however, a patient's various image data can only be displayed on an intelligent device (such as a personal computer), and the doctor devises a tumor resection scheme from personal clinical experience and the image data, so the resulting scheme is highly subjective.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus, an electronic device and a storage medium for processing tumor image data, which can provide a tumor resection guide image with higher accuracy and higher objectivity.
In a first aspect, an embodiment of the present invention provides a method for processing tumor image data, including: acquiring a tumor image dataset of a target object, and determining target tumor boundary information based on the tumor image dataset; generating a tumor resection model corresponding to the target object according to the target tumor boundary information; the tumor resection model comprises a plurality of tumor resection sections, and the tumor resection sections are used for representing predicted resection positions and/or predicted resection directions; and generating a tumor resection guide image corresponding to the target object based on the tumor resection model.
In one embodiment, the step of generating a tumor resection model corresponding to the target object according to the target tumor boundary information includes: acquiring tumor type information corresponding to the target object, and determining a target excision distance corresponding to the tumor type information according to a preset tumor stage principle; the preset tumor staging principle is used for representing the mapping relation between the tumor category information and the resection distance; determining a target resection area and estimated resection boundary information of the target resection area based on the target tumor boundary information and the target resection distance; and generating a tumor resection model corresponding to the target object according to the estimated resection boundary information.
In one embodiment, the step of determining a target resection area and estimated resection boundary information of the target resection area based on the target tumor boundary information and the target resection distance includes: determining an initial resection area corresponding to the target object; for each pixel point in the initial excision region, calculating a first distance value between the pixel point and the target tumor boundary information, and if the first distance value is smaller than or equal to the target excision distance, determining the pixel point as a target pixel point; and determining a target excision region and estimated excision boundary information of the target excision region based on each target pixel point.
In one embodiment, the step of determining target tumor boundary information based on the tumor image dataset comprises: extracting initial tumor boundary information of each tumor image data in the tumor image data set through a boundary extraction network obtained by pre-training; and performing fusion processing on the initial tumor boundary information corresponding to each tumor image data to obtain fused tumor boundary information, and determining the fused tumor boundary information with the largest area range as target tumor boundary information.
In one embodiment, the step of generating a tumor resection guide image corresponding to the target object based on the tumor resection model includes: acquiring a preoperative image dataset of the target object; and performing registration and alignment processing on the preoperative image dataset and the tumor resection model to obtain the tumor resection guide image corresponding to the target object, so that the surgical navigation equipment determines an actual resection position based on the tumor resection guide image.
In one embodiment, after the step of generating the tumor resection guide image corresponding to the target object based on the tumor resection model, the method further includes: acquiring a postoperative image dataset of the target object; registering and aligning the postoperative image data set and the tumor image data set to obtain actual resection boundary information; determining a risk assessment result based on the actual resection boundary information; wherein the risk assessment result comprises a risk point.
In one embodiment, the step of determining a risk assessment result based on the actual resection boundary information comprises: and for each boundary point in the actual resection boundary information, calculating a second distance value between the boundary point and the estimated resection boundary information, and if the second distance value is greater than a preset distance threshold value, determining the boundary point as a risk point.
In a second aspect, an embodiment of the present invention further provides a device for processing tumor image data, including: the tumor boundary determining module is used for acquiring a tumor image data set of a target object and determining target tumor boundary information based on the tumor image data set; the model generation module is used for generating a tumor resection model corresponding to the target object according to the target tumor boundary information; the tumor resection model comprises a plurality of tumor resection sections, and the tumor resection sections are used for representing predicted resection positions and/or predicted resection directions; and the guide image generation module is used for generating a tumor resection guide image corresponding to the target object based on the tumor resection model.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a processor and a memory, where the memory stores computer-executable instructions that can be executed by the processor, and the processor executes the computer-executable instructions to implement any one of the methods provided in the first aspect.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium storing computer-executable instructions that, when invoked and executed by a processor, cause the processor to implement any one of the methods provided in the first aspect.
The embodiment of the invention provides a method, a device, electronic equipment and a storage medium for processing tumor image data. The method determines the boundary information of the target tumor according to the tumor image data set, generates the tumor resection model based on the boundary information of the target tumor, and can obtain the tumor resection guide image with higher accuracy and stronger objectivity on the basis of the tumor resection model.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart illustrating a method for processing tumor image data according to an embodiment of the present invention;
fig. 2 is a functional block diagram of a method for processing tumor image data according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a device for processing tumor image data according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the embodiments, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, doctors usually devise tumor resection schemes from their own clinical experience and the image data, which has at least the following problems: (1) the various image data of a bone tumor patient cannot be systematically compared and analyzed, so the surgeon's preoperative plan exists only in his or her head; (2) there is no visual guidance during the operation, so the surgeon can only resect the tumor from memory of the preoperative plan and personal clinical experience; (3) there is no objective evaluation after bone tumor resection, which makes it difficult for surgeons to accurately assess how well the operation was completed and to arrange individualized follow-up for the patient.
Based on the above, the invention provides a method and a device for processing tumor image data, an electronic device and a storage medium, which can provide a tumor resection guide image with higher accuracy and stronger objectivity.
To facilitate understanding of the present embodiment, first, a detailed description is given of a method for processing tumor image data disclosed in the present embodiment, referring to a flowchart of a method for processing tumor image data shown in fig. 1, the method can be applied to electronic devices such as computers, and the method mainly includes the following steps S102 to S106:
step S102, a tumor image data set of the target object is obtained, and target tumor boundary information is determined based on the tumor image data set. The target object, i.e., the patient, and the tumor image dataset, i.e., the modal image data of the lesion site of the patient may include an X-ray DR (digital radiography) plain film, an enhanced CT (Computed Tomography), an enhanced MR (magnetic resonance enhanced scan), an ECT (acquired Computed Tomography), a PE-TCT (positron Emission Computed Tomography), and the like of the lesion site of the patient, and the target tumor boundary information may be a three-dimensional boundary of the tumor. In one embodiment, a patient's tumor image data set may be stored in the computer memory module, and a user may read desired tumor image data from the computer memory module to determine target tumor boundary information via image processing algorithms or manual delineation.
Step S104: generate a tumor resection model corresponding to the target object according to the target tumor boundary information. The tumor resection model comprises a plurality of tumor resection sections, which represent the predicted resection position and/or the predicted resection direction. In one embodiment, the corresponding target resection region and its estimated resection boundary information can be determined from the target tumor boundary information, and the tumor resection model is obtained by adding tumor resection sections at the estimated resection boundary.
Step S106: generate a tumor resection guide image corresponding to the target object based on the tumor resection model. The tumor resection guide image, which may also be called a tumor resection scheme, guides the user in performing the tumor resection operation. In one embodiment, an image dataset of the patient may be acquired again before the operation, and the tumor resection guide image obtained by registering and aligning the tumor resection model with that dataset, guiding the user to resect the tumor according to the resection position, resection direction, and so on represented by the guide image.
According to the processing method of the tumor image data, provided by the embodiment of the invention, the boundary information of the target tumor is determined according to the tumor image data set, the tumor resection model is generated based on the boundary information of the target tumor, and the tumor resection guide image with higher accuracy and stronger objectivity can be obtained on the basis of the tumor resection model.
In one embodiment, the tumor image data processing method provided by the embodiment of the present invention is applied to a tumor image data processing system comprising a computer acquisition and storage unit, a computer data analysis unit, a computer data calculation unit, a computer navigation unit, and a computer comprehensive evaluation unit. The acquisition and storage unit stores the tumor image datasets (X-ray DR plain films, enhanced CT, enhanced MR, ECT, PET-CT, and the like) of the patient's lesion site for subsequent operation planning and evaluation. The data analysis unit presents the patient's tumor image dataset so that a user can compare and analyze how the tumor appears in the different images. The data calculation unit determines the tumor resection model, providing the clinic with a preoperative tumor resection scheme with accurate surgical resection sections; after bone tumor resection, it also registers and aligns the postoperative image dataset with the original tumor image dataset so that the actual surgical resection effect can be displayed visually and accurately against the original preoperative plan. The navigation unit performs bone tumor resection under image guidance according to the preoperative tumor resection scheme. The comprehensive evaluation unit counts and analyzes the error between the actual and estimated resection boundary information, flags possible risk points, analyzes their causes, and, where the operation quality is insufficient, formulates a follow-up treatment scheme for the patient.
On the basis of the foregoing embodiment, the embodiment of the present invention provides an implementation manner of step S102, when performing the step of determining the boundary information of the target tumor based on the tumor image data set, which can be seen in the following (1) to (2):
(1) And extracting initial tumor boundary information of each tumor image data in the tumor image data set through a boundary extraction network obtained by pre-training. In one embodiment, the neural network may be trained in advance, and the training set includes tumor image data and tumor boundary labels, so that the neural network is trained by using the training set to obtain a boundary extraction network with higher precision. In practical application, each tumor image data in the tumor image data set is input to the boundary extraction network, and the boundary extraction network can output corresponding initial tumor boundary information.
(2) And performing fusion processing on the initial tumor boundary information corresponding to each tumor image data to obtain fused tumor boundary information, and determining the fused tumor boundary information with the largest area range as target tumor boundary information. In practical applications, considering that the initial tumor boundary information determined by different image data may have some differences, the initial tumor boundary information corresponding to each tumor image data may be fused, and the tumor boundary information of the largest range may be selected as the target tumor boundary information.
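The fusion in (2) can be sketched as follows. This is a minimal illustration under our own assumptions: each modality's initial tumor boundary is represented as a binary voxel mask, and the voxel-wise union is taken so the fused result covers the largest range indicated by any modality, which is one reading of selecting "the fused tumor boundary information with the largest area range"; the function name is ours, not from the patent.

```python
import numpy as np

def fuse_tumor_boundaries(masks):
    """Fuse per-modality binary tumor masks into one target boundary mask.

    masks: list of equally shaped arrays (nonzero = tumor in that modality).
    Returns the boolean union, i.e. the largest extent suggested by any
    modality (assumed interpretation of the patent's fusion step).
    """
    fused = np.zeros(masks[0].shape, dtype=bool)
    for m in masks:
        fused |= np.asarray(m, dtype=bool)  # voxel-wise union across modalities
    return fused
```

In practice the per-modality masks would first be resampled into a common coordinate frame; that registration step is omitted here for brevity.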
In another embodiment, the target tumor boundary may also be delineated manually. Optionally, in the computer data analysis unit, the user draws tumor boundaries layer by layer in the coronal position, the axial position, and the sagittal position of the lesion image, respectively, and uses the tumor range drawn from the CT image and the MR image as the three-dimensional boundary of the tumor.
For the foregoing step S104, an embodiment of the present invention provides an implementation manner of generating a tumor resection model corresponding to a target object according to target tumor boundary information, which is shown in the following steps 1 to 3:
step 1, obtaining tumor type information corresponding to a target object, and determining a target resection distance corresponding to the tumor type information according to a preset tumor stage principle. The preset tumor staging principle (short for bone and soft tissue tumor surgical staging principle) is used for representing the mapping relation between the tumor type information and the resection distance, and the tumor type information is used for representing the tumor severity, such as benign tumor or malignant tumor. In one embodiment, the target resection distance ds between the estimated resection boundary information and the target tumor boundary information may be set according to bone and soft tissue tumor surgical staging principles.
Step 2: determine the target resection region and its estimated resection boundary information based on the target tumor boundary information and the target resection distance. In one embodiment, see steps 2.1 to 2.3 below:
Step 2.1: determine the initial resection region corresponding to the target object. In one embodiment, an anisotropic distance transformation algorithm of the computer data calculation unit may be used to obtain a three-dimensional distance image Dt of the tumor region, which also serves as the initial resection region.
Step 2.2: for each pixel point in the initial resection region, calculate a first distance value between the pixel point and the target tumor boundary information; if the first distance value is less than or equal to the target resection distance, determine the pixel point as a target pixel point. In one embodiment, the three-dimensional region Dc of the distance image Dt whose values do not exceed the target resection distance ds may be obtained; Dc is the target resection region, and its edge is the three-dimensional surgical resection boundary of the tissue that must be removed during the operation.
Step 2.3: determine the target resection region and its estimated resection boundary information based on the target pixel points. The region formed by the target pixel points is the target resection region to be removed, and connecting the target pixel points on its edge yields the estimated resection boundary information.
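Steps 2.1 to 2.3 can be sketched with an off-the-shelf Euclidean distance transform. This is an assumed stand-in for the unit's unspecified anisotropic algorithm: scipy's `distance_transform_edt` takes per-axis voxel spacing via `sampling`, which handles anisotropic voxels; the tumor is assumed to be given as a binary voxel mask, and the function names are ours.

```python
import numpy as np
from scipy import ndimage

def target_resection_region(tumor_mask, spacing, ds):
    """Steps 2.1-2.3 as a sketch.

    tumor_mask: 3D boolean array (True = tumor voxel).
    spacing:    per-axis voxel size, making the distance anisotropic-aware.
    ds:         target resection distance from the staging principle.
    Returns (region Dc, estimated resection boundary) as boolean masks.
    """
    # Dt: distance from every voxel to the nearest tumor voxel (0 inside tumor).
    dt = ndimage.distance_transform_edt(~tumor_mask.astype(bool), sampling=spacing)
    region = dt <= ds  # Dc: target pixel points, tumor plus the ds margin
    # Estimated resection boundary: region voxels touching the outside.
    eroded = ndimage.binary_erosion(region)
    boundary = region & ~eroded
    return region, boundary
```

For a single tumor voxel with isotropic unit spacing and ds = 1, the region is the voxel plus its six face neighbours, and the boundary is those six neighbours.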
Step 3: generate a tumor resection model corresponding to the target object according to the estimated resection boundary information. In one embodiment, the user may manually place an appropriate number of tumor resection sections outside the estimated resection boundary according to the importance of the tumor's different anatomical structures, thereby providing the clinic with a preoperative tumor resection scheme with accurate surgical resection sections.
To facilitate understanding of the foregoing step S106, the embodiment of the present invention further provides an implementation manner of generating a tumor resection guide image corresponding to the target object based on the tumor resection model, see the following steps a to b:
step a, acquiring a preoperative image data set of a target object. In one embodiment, the image data set of the target object may be acquired again before performing the operation.
Step b: register and align the preoperative image dataset with the tumor resection model to obtain the tumor resection guide image corresponding to the target object, so that the surgical navigation equipment can determine the actual resection position based on the tumor resection guide image.
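The patent does not specify the registration algorithm used in step b. One standard rigid option, shown here purely as an assumed illustration, is the Kabsch least-squares alignment of corresponding landmark points (e.g. anatomical markers identified in both the model and the preoperative images); real image registration would typically also involve intensity-based refinement, which is omitted.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid registration (Kabsch algorithm) of point set P
    onto Q, assuming known point correspondences.

    P, Q: (n, 3) arrays of corresponding 3D points.
    Returns rotation R (3x3) and translation t (3,) with R @ p + t ~ q.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```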
Further, in view of the lack of objective evaluation after bone tumor resection in the prior art, the embodiment of the present invention provides a risk assessment method, specifically: (1) acquire a postoperative image dataset of the target object; optionally, scan the excised surgical specimen with a CT device to obtain a specimen CT image, and store the CT image in the computer storage unit; (2) register and align the postoperative image dataset with the tumor image dataset to obtain actual resection boundary information; optionally, register and align the specimen CT image with the original CT image from the preoperative plan in the computer data analysis unit; (3) determine a risk assessment result based on the actual resection boundary information, where the risk assessment result comprises risk points. In one embodiment, for each boundary point in the actual resection boundary information, a second distance value between the boundary point and the estimated resection boundary information is calculated; if the second distance value is greater than a preset distance threshold, the boundary point is determined to be a risk point. The actual surgical resection effect relative to the original preoperative plan can thus be displayed visually and accurately.
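The risk-point rule above can be sketched as follows, assuming both boundaries are given as 3D point sets and reading the "second distance value" as the distance from an actual boundary point to its nearest estimated-boundary point (the patent does not define the point-to-boundary distance precisely).

```python
import numpy as np
from scipy.spatial import cKDTree

def risk_points(actual_boundary, estimated_boundary, threshold):
    """Flag actual resection boundary points whose distance to the
    estimated resection boundary exceeds the preset threshold.

    actual_boundary, estimated_boundary: (n, 3) point arrays.
    Returns (risk point array, per-point second distance values).
    """
    tree = cKDTree(estimated_boundary)
    d, _ = tree.query(actual_boundary)   # nearest-neighbour distances
    return actual_boundary[d > threshold], d
```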
After the bone tumor is removed, a CT device scans the excised surgical specimen to obtain a specimen CT image. The specimen CT data are stored via the computer storage unit, the specimen CT image is then registered and aligned with the original CT image from the preoperative plan in the computer data analysis unit, and the device can finally display, visually and accurately, the actual surgical resection effect relative to the original preoperative plan.
To facilitate understanding of the foregoing embodiments, an application example of a method for processing tumor image data is provided in the embodiments of the present invention, referring to a functional block diagram of a method for processing tumor image data shown in fig. 2, fig. 2 schematically illustrates:
(I) Multi-modal image data of the bone tumor patient are collected before the operation. The doctor analyzes the morphological appearance of the tumor in each modality in the computer data analysis unit and delineates the tumor range in each; the tumor regions delineated across the modalities are then combined to determine the maximum boundary of the tumor. Next, the minimum safe distance between the resection boundary and the tumor boundary (i.e., the target resection distance) is set according to the tumor's surgical staging principle, and the three-dimensional region to be resected during the operation is solved with a three-dimensional anisotropic distance transformation algorithm. Finally, an appropriate number of tumor resection sections are designed manually in the computer analysis unit according to the importance of the anatomical structures, forming a visual digital multi-dimensional image for the user's reference, which can even provide remote technical support and consultation for hospitals with less developed bone tumor expertise.
(II) The patient's multi-modal image data and the designed preoperative tumor resection scheme are imported into the computer navigation unit or a device such as a robot, which guides the user during the operation to perform the bone tumor resection accurately according to the preoperatively designed plan.
(III) After the operation, the excised tumor specimen is scanned to obtain CT images, which are returned to the computer data analysis unit and registered and aligned with the patient's original CT images from the preoperative plan.
(IV) The surgical error of the bone tumor resection is accurately calculated in the computer comprehensive evaluation unit, and possible postoperative risk points are analyzed, so that a personalized postoperative rehabilitation treatment scheme and follow-up plan can be customized for the patient.
In summary, the method for processing tumor image data provided by the embodiments of the present invention has at least the following characteristics:
(1) It provides an accurate digital preoperative planning platform for designing professional, precise preoperative planning and intraoperative guidance schemes;
(2) By accurately registering the digital images of the postoperative specimen to the preoperative plan, a doctor can precisely analyze how closely the bone tumor operation matched the plan and identify remaining risk points, so that the surgeon can grasp the patient's surgical outcome accurately and objectively rather than relying on blind optimism;
(3) It can provide technical support for bone and soft tissue tumor specialists in different regions, helping to reduce the gap in professional expertise between hospitals in remote areas and in different cities, so that bone tumor patients everywhere can ultimately benefit from high-level medical resources.
Corresponding to the method for processing tumor image data provided in the foregoing embodiments, an embodiment of the present invention further provides a device for processing tumor image data. Referring to the schematic structural diagram of the device shown in Fig. 3, the device mainly includes the following components:
a tumor boundary determining module 302, configured to obtain a tumor image dataset of the target object, and determine target tumor boundary information based on the tumor image dataset;
a model generation module 304, configured to generate a tumor resection model corresponding to the target object according to the target tumor boundary information; the tumor resection model comprises a plurality of tumor resection sections, and the tumor resection sections are used for representing the predicted resection position and/or the predicted resection direction;
the guide image generating module 306 is configured to generate a tumor resection guide image corresponding to the target object based on the tumor resection model.
The tumor image data processing device provided by the embodiment of the invention determines target tumor boundary information from the tumor image dataset, generates a tumor resection model based on that boundary information, and, on the basis of the model, obtains a tumor resection guide image with higher accuracy and greater objectivity.
In one embodiment, the model generation module 304 is further configured to: acquire tumor category information corresponding to the target object, and determine a target resection distance corresponding to the tumor category information according to a preset tumor staging principle, where the preset tumor staging principle represents the mapping relation between tumor category information and resection distance; determine a target resection area and predicted resection boundary information of the target resection area based on the target tumor boundary information and the target resection distance; and generate a tumor resection model corresponding to the target object according to the predicted resection boundary information.
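The "preset tumor staging principle" above is, in effect, a mapping from tumor category to a minimum safe resection distance. A minimal sketch of such a lookup follows; the category names and distances are purely illustrative placeholders, not the patent's actual staging rules and not clinical guidance.

```python
# Hypothetical mapping from tumor category to target resection distance (mm).
# Both keys and values are invented for illustration only.
STAGING_MARGINS_MM = {
    "latent_benign": 0.0,
    "active_benign": 5.0,
    "aggressive_benign": 10.0,
    "malignant": 20.0,
}

def target_resection_distance(tumor_category: str) -> float:
    """Look up the target resection distance for a given tumor category."""
    return STAGING_MARGINS_MM[tumor_category]
```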
In one embodiment, the model generation module 304 is further configured to: determine an initial resection area corresponding to the target object; for each pixel point in the initial resection area, calculate a first distance value between the pixel point and the target tumor boundary information, and if the first distance value is less than or equal to the target resection distance, determine the pixel point as a target pixel point; and determine a target resection area and predicted resection boundary information of the target resection area based on the target pixel points.
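The per-pixel rule just described can be sketched directly: compute each candidate pixel's "first distance value" to the tumor boundary and keep it if that value is within the target resection distance. The toy 2D geometry and names below are illustrative; a real implementation would operate on 3D CT volumes.

```python
import numpy as np

# Hypothetical tumor boundary points as (row, col) coordinates.
tumor_boundary = np.array([[5.0, 5.0], [5.0, 6.0], [6.0, 5.0], [6.0, 6.0]])
target_resection_distance = 2.0

# Initial resection area: every pixel of a small 12x12 grid.
rows, cols = np.mgrid[0:12, 0:12]
pixels = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)

# First distance value: minimum Euclidean distance from each pixel to any
# boundary point (brute force over all pixel/boundary pairs).
diffs = pixels[:, None, :] - tumor_boundary[None, :, :]
first_dist = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)

# Target pixel points: those within the target resection distance.
target_pixels = pixels[first_dist <= target_resection_distance]
```

In practice this brute-force scan is replaced by a distance transform, which yields the same per-pixel distances far more efficiently.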
In one embodiment, the tumor boundary determination module 302 is further configured to: extracting initial tumor boundary information of each tumor image data in the tumor image data set through a boundary extraction network obtained by pre-training; and performing fusion processing on the initial tumor boundary information corresponding to each tumor image data to obtain fused tumor boundary information, and determining the fused tumor boundary information with the largest area range as target tumor boundary information.
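The fusion step above combines per-modality tumor boundaries and takes the fused boundary with the largest area range as the target. One simple reading, used in this sketch, is the union of the per-modality regions, i.e. the tumor's maximum extent across modalities. The masks are toy data; in practice a boundary extraction network would produce them.

```python
import numpy as np

# Hypothetical per-modality tumor masks (e.g. as seen on CT vs. MRI).
ct_mask = np.zeros((10, 10), dtype=bool)
ct_mask[2:6, 2:6] = True
mri_mask = np.zeros((10, 10), dtype=bool)
mri_mask[3:8, 3:8] = True

# Fused boundary with the largest area range: the union of both regions,
# i.e. the maximum tumor extent across modalities.
fused_mask = ct_mask | mri_mask
fused_area = int(fused_mask.sum())
```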
In one embodiment, the guide image generation module 306 is further configured to: acquire a preoperative image dataset of the target object; and perform registration and alignment processing on the preoperative image dataset and the tumor resection guide image to obtain the tumor resection guide image corresponding to the target object, so that a surgical navigation device can determine the actual resection position based on the tumor resection guide image.
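One common building block for the registration-and-alignment processing mentioned here is rigid alignment of corresponding 3D point sets via the Kabsch algorithm, sketched below. Real CT-to-plan registration typically uses intensity- or surface-based methods; paired landmark points are assumed purely for illustration.

```python
import numpy as np

def rigid_register(moving, fixed):
    """Least-squares rigid fit (Kabsch): returns R, t with fixed ~ moving @ R.T + t."""
    mu_m, mu_f = moving.mean(axis=0), fixed.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (moving - mu_m).T @ (fixed - mu_f)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation (det R = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_f - R @ mu_m
    return R, t

# Toy landmarks: 'fixed' is 'moving' rotated 90 degrees about z and shifted.
moving = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
fixed = moving @ Rz.T + np.array([5.0, -2.0, 1.0])

R, t = rigid_register(moving, fixed)
aligned = moving @ R.T + t   # should reproduce 'fixed' exactly here
```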
In one embodiment, the apparatus further comprises a risk assessment module configured to: acquiring a postoperative image data set of a target object; registering and aligning the postoperative image data set and the tumor image data set to obtain actual resection boundary information; determining a risk assessment result based on the actual resection boundary information; wherein the risk assessment result comprises a risk point.
In one embodiment, the risk assessment module is further configured to: and calculating a second distance value between each boundary point in the actual resection boundary information and the predicted resection boundary information, and if the second distance value is greater than a preset distance threshold value, determining the boundary point as a risk point.
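The risk-point rule above can be sketched directly: compute each actual-resection boundary point's "second distance value" to the predicted resection boundary and flag the point when that value exceeds the preset threshold. The 2D points and threshold are illustrative only; real data would be 3D boundary contours or meshes.

```python
import numpy as np

# Hypothetical predicted resection boundary and actual resection boundary.
predicted_boundary = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
actual_boundary = np.array([[0.1, 0.1], [0.9, 1.1], [3.0, 3.0]])  # last point deviates
distance_threshold = 0.5

# Second distance value: minimum Euclidean distance from each actual boundary
# point to the predicted resection boundary.
diffs = actual_boundary[:, None, :] - predicted_boundary[None, :, :]
second_dist = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)

# Boundary points whose deviation exceeds the threshold are risk points.
risk_points = actual_boundary[second_dist > distance_threshold]
```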
The device provided by the embodiment of the present invention has the same implementation principle and the same technical effects as those of the foregoing method embodiments, and for the sake of brief description, reference may be made to corresponding contents in the foregoing method embodiments for the parts of the device embodiments that are not mentioned.
An embodiment of the present invention provides an electronic device, which comprises a processor and a storage device, wherein the processor is configured to process a plurality of data files, and the storage device stores a computer program which, when executed by the processor, performs the method of any of the embodiments described above.
Fig. 4 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present invention, where the electronic device 100 includes: a processor 40, a memory 41, a bus 42 and a communication interface 43, wherein the processor 40, the communication interface 43 and the memory 41 are connected through the bus 42; the processor 40 is arranged to execute executable modules, such as computer programs, stored in the memory 41.
The memory 41 may include a high-speed random access memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network element of this system and at least one other network element is implemented through at least one communication interface 43 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, etc.
The bus 42 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 4, but this does not indicate only one bus or one type of bus.
The memory 41 is used to store a program, and the processor 40 executes the program after receiving an execution instruction. The method performed by the apparatus defined by the flow disclosed in any of the foregoing embodiments of the present invention may be applied to the processor 40 or implemented by the processor 40.
The processor 40 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 40 or by instructions in the form of software. The processor 40 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor can implement or perform the various methods, steps and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM or a register. The storage medium is located in the memory 41, and the processor 40 reads the information in the memory 41 and completes the steps of the above method in combination with its hardware.
The computer program product of the readable storage medium provided in the embodiment of the present invention includes a computer readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the foregoing method embodiment, which is not described herein again.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: those skilled in the art can still make modifications or changes to the embodiments described in the foregoing embodiments, or make equivalent substitutions for some features, within the scope of the disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for processing tumor image data, comprising:
acquiring a tumor image dataset of a target object, and determining target tumor boundary information based on the tumor image dataset;
generating a tumor resection model corresponding to the target object according to the target tumor boundary information; the tumor resection model comprises a plurality of tumor resection sections, and the tumor resection sections are used for representing predicted resection positions and/or predicted resection directions;
and generating a tumor resection guide image corresponding to the target object based on the tumor resection model.
2. The method of claim 1, wherein the step of generating a tumor resection model corresponding to the target object according to the target tumor boundary information comprises:
acquiring tumor category information corresponding to the target object, and determining a target resection distance corresponding to the tumor category information according to a preset tumor staging principle; wherein the preset tumor staging principle is used for representing the mapping relation between tumor category information and resection distance;
determining a target resection area and predicted resection boundary information of the target resection area based on the target tumor boundary information and the target resection distance; and
generating a tumor resection model corresponding to the target object according to the predicted resection boundary information.
3. The method of claim 2, wherein the step of determining a target resection area and predicted resection boundary information of the target resection area based on the target tumor boundary information and the target resection distance comprises:
determining an initial resection area corresponding to the target object;
for each pixel point in the initial resection area, calculating a first distance value between the pixel point and the target tumor boundary information, and if the first distance value is less than or equal to the target resection distance, determining the pixel point as a target pixel point; and
determining a target resection area and predicted resection boundary information of the target resection area based on each target pixel point.
4. The method of claim 1, wherein the step of determining target tumor boundary information based on the tumor image dataset comprises:
extracting initial tumor boundary information of each tumor image data in the tumor image data set through a boundary extraction network obtained through pre-training;
and performing fusion processing on the initial tumor boundary information corresponding to each tumor image data to obtain fused tumor boundary information, and determining the fused tumor boundary information with the largest area range as target tumor boundary information.
5. The method of claim 1, wherein the step of generating a tumor resection guide image corresponding to the target object based on the tumor resection model comprises:
acquiring a preoperative image dataset of the target object;
and performing registration and alignment processing on the preoperative image data set and the tumor resection guide image to obtain a tumor resection guide image corresponding to the target object, so that the surgical navigation equipment determines an actual resection position based on the tumor resection guide image.
6. The method of claim 1, wherein after the step of generating a tumor resection guide image corresponding to the target object based on the tumor resection model, the method further comprises:
acquiring a postoperative image dataset of the target object;
registering and aligning the postoperative image data set and the tumor image data set to obtain actual resection boundary information;
determining a risk assessment result based on the actual resection boundary information; wherein the risk assessment result comprises a risk point.
7. The method of claim 6, wherein the step of determining a risk assessment result based on the actual resection boundary information comprises:
and calculating a second distance value between each boundary point in the actual resection boundary information and the predicted resection boundary information, and if the second distance value is greater than a preset distance threshold value, determining the boundary point as a risk point.
8. An apparatus for processing tumor image data, comprising:
the tumor boundary determining module is used for acquiring a tumor image data set of a target object and determining target tumor boundary information based on the tumor image data set;
the model generation module is used for generating a tumor resection model corresponding to the target object according to the target tumor boundary information; the tumor resection model comprises a plurality of tumor resection sections, and the tumor resection sections are used for representing predicted resection positions and/or predicted resection directions;
and the guide image generation module is used for generating a tumor resection guide image corresponding to the target object based on the tumor resection model.
9. An electronic device comprising a processor and a memory, the memory storing computer-executable instructions executable by the processor, the processor executing the computer-executable instructions to implement the method of any of claims 1 to 7.
10. A computer-readable storage medium having computer-executable instructions stored thereon which, when invoked and executed by a processor, cause the processor to perform the method of any of claims 1 to 7.
CN202211141988.4A 2022-09-20 2022-09-20 Tumor image data processing method and device, electronic equipment and storage medium Pending CN115409827A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211141988.4A CN115409827A (en) 2022-09-20 2022-09-20 Tumor image data processing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115409827A true CN115409827A (en) 2022-11-29

Family

ID=84166749




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination