CN114820663A - Assistant positioning method for determining radio frequency ablation therapy - Google Patents
Assistant positioning method for determining radio frequency ablation therapy Download PDFInfo
- Publication number
- CN114820663A CN114820663A CN202210737793.XA CN202210737793A CN114820663A CN 114820663 A CN114820663 A CN 114820663A CN 202210737793 A CN202210737793 A CN 202210737793A CN 114820663 A CN114820663 A CN 114820663A
- Authority
- CN
- China
- Prior art keywords
- image
- pixel point
- abdominal
- pixel
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 40
- 238000007674 radiofrequency ablation Methods 0.000 title description 6
- 238000002560 therapeutic procedure Methods 0.000 title description 5
- 230000003187 abdominal effect Effects 0.000 claims abstract description 72
- 206010028980 Neoplasm Diseases 0.000 claims abstract description 51
- 238000011297 radiofrequency ablation treatment Methods 0.000 claims abstract description 19
- 230000011218 segmentation Effects 0.000 claims abstract description 11
- 238000000638 solvent extraction Methods 0.000 claims abstract description 6
- 238000010317 ablation therapy Methods 0.000 claims description 6
- 238000003672 processing method Methods 0.000 claims description 3
- 230000004807 localization Effects 0.000 claims description 2
- 210000001015 abdomen Anatomy 0.000 abstract description 11
- 210000004185 liver Anatomy 0.000 description 14
- 208000014018 liver neoplasm Diseases 0.000 description 9
- 238000011282 treatment Methods 0.000 description 8
- 206010019695 Hepatic neoplasm Diseases 0.000 description 7
- 241000282414 Homo sapiens Species 0.000 description 7
- 238000004364 calculation method Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 238000005192 partition Methods 0.000 description 3
- 230000009466 transformation Effects 0.000 description 3
- 201000007270 liver cancer Diseases 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 241000764238 Isis Species 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000002591 computed tomography Methods 0.000 description 1
- 238000003745 diagnosis Methods 0.000 description 1
- 235000006694 eating habits Nutrition 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 210000001105 femoral artery Anatomy 0.000 description 1
- 210000003191 femoral vein Anatomy 0.000 description 1
- 230000036541 health Effects 0.000 description 1
- 230000001788 irregular Effects 0.000 description 1
- 210000004731 jugular vein Anatomy 0.000 description 1
- 230000003902 lesion Effects 0.000 description 1
- 230000007102 metabolic function Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000000877 morphologic effect Effects 0.000 description 1
- 210000000056 organ Anatomy 0.000 description 1
- 230000036544 posture Effects 0.000 description 1
- 239000000523 sample Substances 0.000 description 1
- 238000010187 selection method Methods 0.000 description 1
- 210000001321 subclavian vein Anatomy 0.000 description 1
- 210000001835 viscera Anatomy 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/032—Transmission computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Heart & Thoracic Surgery (AREA)
- High Energy & Nuclear Physics (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- Optics & Photonics (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Pulmonology (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The invention relates to the field of image processing, and provides an auxiliary positioning method for determining radio frequency ablation treatment, which comprises the following steps: acquiring an abdominal CT image; the richness of each pixel point is obtained through the CT value of each pixel point and the neighborhood pixel points on the abdominal CT image; partitioning the abdominal CT image, and taking the historical tumor probability of each region on the abdominal CT image as the attention of each pixel point of the region; obtaining a first enhancement coefficient of each pixel point; obtaining the adjusted CT values of all the pixel points; obtaining a second enhancement coefficient of each pixel point; acquiring a CT reconstruction value of each pixel point, and acquiring a new abdomen CT image through the CT reconstruction values of all the pixel points; and performing threshold segmentation on the new abdominal CT image to obtain a tumor region. The invention can obtain clearer tumor boundary and has simple method.
Description
Technical Field
The invention relates to the field of artificial intelligence, in particular to an auxiliary positioning method for determining radio frequency ablation treatment.
Background
The liver is one of the five internal organs of the human body, mainly has metabolic function, and is an important organ for maintaining the life of the human body. Due to the influence of other factors such as poor eating habits, irregular work and rest and the like, tumor lesions of some tumors occur in liver parts, and then liver cancer is formed, which seriously threatens the life health of human beings. China is one of the high incidence areas of liver cancer, and the incidence rate and the fatality rate are high.
The radio frequency ablation treatment method is an effective treatment method for treating liver tumors, and the treatment method is to insert an electrode catheter into the center of a liver tumor through the femoral artery and vein, the internal jugular vein and the subclavian vein, then spread an electrode and start to perform radio frequency ablation. An important step before determining the radio frequency ablation treatment is to scan and locate the liver by CT, and (percutaneously) puncture the tumor after determining the tumor location and size. However, in clinical application, doctors still observe the tumor region in the CT image through their own knowledge and experience, but in the abdominal CT image, the tumor morphology is different between different individuals, and the gray level difference between the tumor itself and the liver is small, so it is difficult to distinguish the clear tumor boundary, so a tumor CT image processing method is needed to improve the accuracy of the doctor in identifying the tumor.
The invention carries out image preprocessing on the liver region on the basis of carrying out liver segmentation on the abdominal CT image, so that the gray characteristic of the tumor is better extracted, the tumor boundary is clearer, and reliable early-stage data support and treatment assistance are provided for determining a radio frequency ablation treatment method and a treatment plan.
Disclosure of Invention
The invention provides an auxiliary positioning method for determining radio frequency ablation treatment, which aims to solve the problem that the boundary of the existing abdomen CT image is not clear.
The invention discloses an auxiliary positioning method for determining radio frequency ablation treatment, which adopts the following technical scheme that the method comprises the following steps:
acquiring an abdominal CT image;
the richness of each pixel point on the abdominal CT image is obtained through the CT values of each pixel point on the abdominal CT image and the neighborhood pixel points;
partitioning the abdominal CT image, and taking the historical tumor probability of each region on the abdominal CT image as the attention of each pixel point of the region;
obtaining a first enhancement coefficient of each pixel point according to the abundance and the attention of each pixel point on the abdominal CT image;
obtaining the adjusted CT values of all the pixel points through the first enhancement coefficient of each pixel point on the abdominal CT image and the CT values of all the pixel points on the abdominal CT image;
obtaining a second enhancement coefficient of each pixel point through the first enhancement coefficient of each pixel point and the maximum value in the adjusted CT values of all the pixel points;
reconstructing the CT value of each pixel point through the CT value, the first enhancement coefficient and the second enhancement coefficient of each pixel point on the abdominal CT image to obtain the CT reconstruction value of each pixel point, and obtaining a new abdominal CT image through the CT reconstruction values of all the pixel points;
and performing threshold segmentation on the new abdominal CT image to obtain a tumor region.
Further, according to the auxiliary positioning method for determining the radio frequency ablation therapy, the abdomen CT image is any one abdomen CT image in an abdomen CT image sequence of the same person, and other abdomen CT images in the abdomen CT image sequence are processed in the same way according to the processing method of the abdomen CT image.
Further, the auxiliary positioning method for determining the radio frequency ablation treatment includes the following steps:
and establishing a coordinate system on the abdominal CT image, and dividing each pixel point on the abdominal CT image into regions by using the coordinates of each pixel point on the abdominal CT image to obtain the region where each pixel point is located.
Further, in the auxiliary positioning method for determining the radiofrequency ablation therapy, the CT value of each pixel point on the abdominal CT image is the redefined CT value of the pixel point;
the expression of the CT value redefined by the pixel point is as follows:
in the formula:to representThe newly defined CT value of the pixel point is processed,to representThe CT value of the pixel point is located,representing the maximum CT value in the abdominal CT image,representing the minimum CT value in an abdominal CT image.
Further, in the auxiliary positioning method for determining the rf ablation therapy, the expression of the CT reconstruction value of the pixel point is:
in the formula:to representThe reconstructed value of the CT at the pixel point,a first enhancement coefficient representing a pixel point,and a second enhancement coefficient representing a pixel point.
Further, in the method for determining an auxiliary location for rf ablation therapy, the expression of the second enhancement factor of the pixel point is:
in the formula:and expressing the maximum value in the adjusted CT values of all the pixel points obtained by the first enhancement coefficients of the pixel points and the redefined CT values of all the pixel points on the abdominal CT image.
Further, in the method for determining an auxiliary location for rf ablation therapy, the first enhancement coefficients of the pixels are obtained by dividing the abundance and the attention of each pixel;
the expression of the first enhancement coefficient of the pixel point is as follows:
in the formula:the attention degree of the pixel point is represented,and expressing the richness of the pixel points.
Further, in the method for determining an auxiliary location for rf ablation therapy, the richness of the pixel points is expressed as:
in the formula:the window in which the central pixel point is positioned is expressedA central pixel point ofThe position of the pixel point is determined,the window in which the central pixel point is positioned is expressedThe redefined CT value of each pixel point,the window in which the central pixel point is positioned is expressedThe redefined CT value of each pixel point is the redefined CT value of the central pixel point,and representing the redefined CT mean value of all pixel points on the abdominal CT image.
The invention has the beneficial effects that: the invention provides an auxiliary positioning method for determining radio frequency ablation treatment, which is characterized in that the abundance of each pixel point is obtained through the CT value of the pixel point, the attention of each pixel point is obtained by utilizing the probability of a tumor in each region in a database, and then the CT reconstruction value of each pixel point is determined, so that a new abdominal CT image is obtained, and treatment assistance is provided for the subsequent determination of the radio frequency ablation treatment method.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of an embodiment of an assisting localization method for determining a radio frequency ablation treatment according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
An embodiment of an assisting positioning method for determining a radio frequency ablation therapy of the present invention, as shown in fig. 1, includes:
101. an abdominal CT image is acquired.
Multi-sequence abdominal CT images containing the liver region are acquired, the multi-sequence being that the probe performs a cross-sectional scan one after the other around a part. The multi-sequence abdominal CT images have various text information and other noise interferences, and therefore, the interferences of such noises need to be removed before the CT images are preprocessed. The embodiment uses a DNN semantic segmentation network method to extract a human tissue area image in an image, namely an abdomen area image, and removes text background noise.
The content of the used DNN semantic segmentation network is:
the input data set of the network is an abdomen CT image set which is screened by a professional diagnostician and contains the liver.
Each CT image in the image set is manually marked, and the CT images are divided into two types: marking the target class of the human tissue area as 1; the text region background class is labeled 0.
In the embodiment, the DNN semantic segmentation network is used for classification, so the cross entropy function adopted by the network is a loss function.
Thus, a multi-sequence abdominal CT image is obtained by the semantic segmentation.
102. The richness of each pixel point on the abdominal CT image is obtained through the CT values of each pixel point on the abdominal CT image and the neighborhood pixel points.
Due to the influence of CT equipment, environmental noise, tumor gray scale characteristics and the like, the boundary of the tumor in the CT image is fuzzy, and the texture of the tumor area is unclear. Therefore, before the step of extracting the liver tumor image, the liver tumor image needs to be preprocessed, the contrast between the tumor region and the liver region is enhanced, and the effect of threshold segmentation is improved, so that the extraction of the liver tumor region is facilitated. The abdominal CT image preprocessing method comprises the following steps:
the CT values in the abdominal CT image are distributed in the range of-1000 to 1000. In order to improve the calculation efficiency and reduce the calculation amount, the CT range is defined to be within the range of 0-255, and the expression of the redefined CT value is as follows:
in the formula:to representThe redefined CT value of the pixel point is in the range of 0 to 255,to representC of processing pixel pointThe value of T is set as the value of,represents the maximum CT value in the CT image,representing the minimum CT value in the CT image.
And analyzing the CT image, wherein the CT value of the area where the tumor is located is different from other tissues of the human body. In the tumor region, the image is expressed in a shadow shape, and the difference between a target pixel point and other pixel points in the neighborhood exists among the pixel points; for the pixels located at the edge of the tumor region, the target pixel and the rest pixels in the neighborhood are also different and are not the same type of CT value pixel. And the textural features of the tumor region are also expressed as the difference of the CT values of the pixel points in the tumor range, namely the difference of the CT values of the pixel points in the tumor region exists.
The embodiment establishesAnd counting the difference of the CT values among the pixel points of the CT image through the neighborhood window with the size. Performing sliding window operation on the CT image, counting the CT values of neighborhood pixels of the target pixel point, and calculating the richness of the target pixel point according to the CT values of the pixels in the window. In the neighborhood of the target pixel point, the larger the difference of CT values among the pixel points is, the larger the richness is, and the better the image effect is; the smaller the difference of CT values is, the less the richness is, and the less obvious the image effect is. Richness degreeThe expression of (a) is:
in the formula:the window in which the central pixel point is positioned is expressedA central pixel point ofThe position of the pixel point is determined,the window in which the central pixel point is positioned is expressedThe redefined CT value of each pixel point,the window in which the central pixel point is positioned is expressedThe redefined CT value of each pixel is the redefined CT value of the central pixel,and representing the redefined CT mean value of all pixel points on the abdominal CT image.
Performing sliding window operation on all pixel points in the CT image, and obtaining the richness of each pixel point in the CT image according to the method。
According to the steps, the richness of each pixel point in the CT image is obtainedThe richness is calculated according to the redefined CT value of each pixel point in the CT image.
103. And partitioning the abdominal CT image, and taking the historical tumor probability of each region on the abdominal CT image as the attention of each pixel point of the region.
According to the embodiment, on the basis of partitioning the liver, the condition that tumors occur in different areas of the liver is counted according to a big data counting technology.
By analyzing a large number of multi-sequence abdominal liver CT images, due to the difference of human postures and the morphological difference between individuals during CT scanning, the acquired CT image sets are not in a uniform orientation, which may hinder the subsequent analysis. Therefore, the attention degree of each pixel point is calculatedIt is previously necessary to subject the images in the image set to a rotational translation operation.
In this embodiment, a coordinate system is established for each image with the vertebra as a center of the coordinate system, and the following rotational and translational alignment is performed, which specifically includes the steps of: 1) a CT image which is artificially corrected is taken as a standard, the center of a vertebral body (a pixel point row with the most center of an abscissa is selected by scanning pixel points of the image, a CT value sequence of the pixel point row is obtained, an area with a larger CT value is obtained, and the point at the most middle of the area is found to be the center) is taken as a coordinate origin, and a coordinate system is established. 2) And establishing a coordinate system in each CT image, comparing the abscissa axis or the ordinate axis in the image with the abscissa axis or the ordinate axis of the standard image, and calculating the offset angle. 3) And carrying out translation rotation on the coordinate axis of the image according to the obtained offset angle to obtain a CT image set of a unified coordinate system.
After the coordinate system is established, the coordinates of each pixel point can be obtained. From a priori knowledge, it can be known that: the liver is located in the upper left and upper right positions of the vertebra. Therefore, straight lines are constructed at angles of 30 °, 60 °, 120 ° and 150 ° to the positive direction of the abscissa axis, respectively, with the origin of the coordinate system as an end point. The image liver area is divided into 6 sections and each section is numbered separately.
After the partition, the area of the pixel point can be judged according to the coordinate of the pixel point, and the expression is as follows:
through the expression, the area where the pixel point is located can be judged. For example, ifCoordinates of the pixel point satisfyJudging the pixel point as the firstPixel points of the region.
In the embodiment, a big data statistics method is adopted, and a professional doctor judges the area of the liver tumor. The method comprises the following specific steps: setting each partition accumulator, counting the frequency of the tumors in each area in all individuals in the database by the partition accumulator, and using the frequency of the tumors in each areaIs shown (therein)Representing I, II, … and VI), the sequence number of the CT image sequence set isIs represented byClassifying the tumor regions of the CT images in each CT image sequence set by each individual according to the interpretation of a professional doctor, counting the number of the tumor regions of each individual in the corresponding CT image sequence to obtain the frequency of the tumors in each region of each individual, calculating the frequency of the tumors in each region, and taking the frequency as the frequency of the tumors in each regionProbability of region occurrence. Calculating the probability of the tumor in each region. WhereinThe calculation expression of (a) is:
in the formula:indicates that the tumor is in the first placeThe probability of an individual region or regions is,indicates that the tumor is in the first placeNumber of image sequences of each region.
After big data statistics, the probability of the tumor in each region is obtained. If the probability is higher, the probability that the tumor appears in the region is higher, so that the attention degree of the pixel points in the region is higher. Thus, passing probabilityAnd obtaining the attention of each pixel point.
Obtaining the attention of each pixel point according to the area of the pixel point, wherein the attention of each pixel point is the probability counted by big dataThe attention of the pixel points of each region isI.e. the tumor is on the firstThe probability of the region is taken asAttention of each pixel point in each region. For the pixel points not belonging to the I, II, … and VI regions, the attention degree is directly set to be 0.001.
In this embodiment, the image is preprocessed in a linear variation manner, and according to the richness of each pixel point in the obtained CT imageAnd degree of attentionAnd calculating the enhancement degree of each pixel point. Wherein the smaller the richness, the greater the attention, the greater the degree of enhancement; the greater the richness, the less attention, and the less enhancement. The expression for the linear variation is:
in the formula:a first enhancement coefficient representing a linear transformation,a second enhancement coefficient representing a linear variation,and the CT value of the pixel point after the linear change is represented, namely the CT reconstruction value of the pixel point.
104. And obtaining a first enhancement coefficient of each pixel point through the abundance and the attention of each pixel point on the abdominal CT image.
The richness of each pixel point has been calculatedAnd degree of attentionBy richness of each pixelAnd degree of attentionObtaining a first enhancement coefficient of each pixel pointThe expression is:
105. and obtaining the adjusted CT value of all the pixel points through the first enhancement coefficient of each pixel point on the abdominal CT image and the CT values of all the pixel points on the abdominal CT image.
Due to the enhanced coefficientAfter the CT image is stretched, the histogram of the obtained CT image is not necessarily distributed in [0,255%]Within the range. Thus by another enhancement factorThe images are adjusted so that the CT images are distributed as much as possible over [0,255 ]]Within. Thus, the first enhancement factor is passed through each pixelStretching each pixel point in the CT image to obtain the range of CT values [ a, b ] of all pixel points after stretching]。
106. And obtaining a second enhancement coefficient of each pixel point through the first enhancement coefficient of each pixel point and the maximum value in the adjusted CT values of all the pixel points.
for example, if a certain pixel pointRichness ofDegree of attentionPassing through the pixel pointThe richness and the attention degree of the pixel point are obtainedA first enhancement factor of, i.e.Stretching each pixel point in the CT image through the first enhancement coefficient to obtain the stretched CT value of all the pixel points, namely multiplying the redefined CT value of each pixel point by the first enhancement coefficient to obtain the stretched CT value range of all the pixel points, and selecting the maximum value of the stretched CT value rangeI.e. b =By passingAnd a first enhancement coefficient of the pixelObtaining pixel pointsSecond enhancement factor of。
107. And reconstructing the CT value of each pixel point through the CT value of each pixel point on the abdominal CT image, the first enhancement coefficient and the second enhancement coefficient to obtain the CT reconstruction value of each pixel point, and obtaining a new abdominal CT image through the CT reconstruction values of all the pixel points.
The expression of linear transformation of each pixel point is obtained through the stepsAnd reconstructing the CT value of each pixel point of the CT image by a linear transformation expression to obtain the CT reconstructed value of each pixel point, thereby obtaining a preprocessed image, namely a new abdomen CT image.
So far, the CT value of each pixel point after enhancement is obtained through the linear transformationThereby obtaining an enhanced image.
108. And performing threshold segmentation on the new abdominal CT image to obtain a tumor region.
Performing threshold segmentation according to the obtained preprocessed image, extracting a liver tumor mask: and (3) adopting an OTSU threshold selection method, segmenting according to the selected threshold, setting the value of the pixel point which is greater than the threshold to be 1, and setting the value of the pixel point which is less than the threshold to be 0, thereby obtaining the mask of the tumor area.
The obtained mask of the tumor region and the CT image are covered to obtain the CT image of the tumor, so that a doctor can conveniently interpret the CT image, the diagnosis efficiency is improved, the position and the size of the tumor are further obtained, and reliable early-stage data support and treatment assistance are provided for determining a radio frequency ablation treatment method and a treatment plan.
The invention provides an auxiliary positioning method for determining radio frequency ablation treatment, which is characterized in that the abundance of each pixel point is obtained through the CT value of the pixel point, the attention of each pixel point is obtained by utilizing the probability of a tumor in each region in a database, and then the CT reconstruction value of each pixel point is determined, so that a new abdominal CT image is obtained, and treatment assistance is provided for the subsequent determination of the radio frequency ablation treatment method.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (8)
1. An assisted localization method for determining a radio frequency ablation treatment, comprising:
acquiring an abdominal CT image;
the richness of each pixel point on the abdominal CT image is obtained through the CT values of each pixel point on the abdominal CT image and the neighborhood pixel points;
partitioning the abdominal CT image, and taking the historical tumor probability of each region on the abdominal CT image as the attention of each pixel point of the region;
obtaining a first enhancement coefficient of each pixel point according to the abundance and the attention of each pixel point on the abdominal CT image;
obtaining the adjusted CT values of all the pixel points through the first enhancement coefficient of each pixel point on the abdominal CT image and the CT values of all the pixel points on the abdominal CT image;
obtaining a second enhancement coefficient of each pixel point through the first enhancement coefficient of each pixel point and the maximum value in the adjusted CT values of all the pixel points;
reconstructing the CT value of each pixel point through the CT value, the first enhancement coefficient and the second enhancement coefficient of each pixel point on the abdominal CT image to obtain the CT reconstruction value of each pixel point, and obtaining a new abdominal CT image through the CT reconstruction values of all the pixel points;
and performing threshold segmentation on the new abdominal CT image to obtain a tumor region.
2. An auxiliary positioning method for determining radio frequency ablation treatment according to claim 1, wherein the abdominal CT image is any one of abdominal CT images in an abdominal CT image sequence of the same person, and other abdominal CT images in the abdominal CT image sequence are processed in the same way according to the processing method of the abdominal CT image.
3. An aided location method for determining radio frequency ablation treatment according to claim 1, wherein the method of partitioning the abdominal CT image is:
and establishing a coordinate system on the abdominal CT image, and dividing each pixel point on the abdominal CT image into regions by using the coordinates of each pixel point on the abdominal CT image to obtain the region where each pixel point is located.
4. An aided location method for determining radio frequency ablation treatment according to claim 1, wherein the CT value of each pixel point on the abdominal CT image is the redefined CT value of the pixel point;
the expression of the CT value redefined by the pixel point is as follows:
5. An aided location method for determining radio frequency ablation treatment according to claim 4, characterized in that the CT reconstruction values of the pixel points are expressed as:
6. An aid location method for determining rf ablation therapy according to claim 5, wherein the expression of the second enhancement factor of the pixel is:
7. The method of claim 6, wherein the first enhancement coefficients of the pixels are obtained by dividing the richness and the attention of each pixel;
the expression of the first enhancement coefficient of the pixel point is as follows:
8. An aid location method for determining rf ablation therapy according to claim 7, wherein the richness of pixels is expressed as:
in the formula:the window in which the central pixel point is positioned is expressedA central pixel point ofThe position of the pixel point is determined,the window in which the central pixel point is positioned is expressedThe redefined CT value of each pixel point,the window in which the central pixel point is positioned is expressedThe redefined CT value of each pixel is the redefined CT value of the central pixel,and representing the redefined CT mean value of all pixel points on the abdominal CT image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210737793.XA CN114820663B (en) | 2022-06-28 | 2022-06-28 | Assistant positioning method for determining radio frequency ablation therapy |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210737793.XA CN114820663B (en) | 2022-06-28 | 2022-06-28 | Assistant positioning method for determining radio frequency ablation therapy |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114820663A true CN114820663A (en) | 2022-07-29 |
CN114820663B CN114820663B (en) | 2022-09-09 |
Family
ID=82522620
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210737793.XA Active CN114820663B (en) | 2022-06-28 | 2022-06-28 | Assistant positioning method for determining radio frequency ablation therapy |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114820663B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116385315A (en) * | 2023-05-31 | 2023-07-04 | 日照天一生物医疗科技有限公司 | Image enhancement method and system for simulated ablation of tumor therapeutic instrument |
CN116993628A (en) * | 2023-09-27 | 2023-11-03 | 四川大学华西医院 | CT image enhancement system for tumor radio frequency ablation guidance |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060002615A1 (en) * | 2004-06-30 | 2006-01-05 | Accuray, Inc. | Image enhancement method and system for fiducial-less tracking of treatment targets |
CN1912927A (en) * | 2006-08-25 | 2007-02-14 | 西安理工大学 | Semi-automatic partition method of lung CT image focus |
DE102007028270A1 (en) * | 2007-06-15 | 2008-12-18 | Siemens Ag | Method for segmenting image data to identify a liver |
CN107464250A (en) * | 2017-07-03 | 2017-12-12 | 深圳市第二人民医院 | Tumor of breast automatic division method based on three-dimensional MRI image |
CN108596887A (en) * | 2018-04-17 | 2018-09-28 | 湖南科技大学 | A kind of abdominal CT sequence image liver neoplasm automatic division method |
CN113674281A (en) * | 2021-10-25 | 2021-11-19 | 之江实验室 | Liver CT automatic segmentation method based on deep shape learning |
-
2022
- 2022-06-28 CN CN202210737793.XA patent/CN114820663B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060002615A1 (en) * | 2004-06-30 | 2006-01-05 | Accuray, Inc. | Image enhancement method and system for fiducial-less tracking of treatment targets |
CN1912927A (en) * | 2006-08-25 | 2007-02-14 | 西安理工大学 | Semi-automatic partition method of lung CT image focus |
DE102007028270A1 (en) * | 2007-06-15 | 2008-12-18 | Siemens Ag | Method for segmenting image data to identify a liver |
CN107464250A (en) * | 2017-07-03 | 2017-12-12 | 深圳市第二人民医院 | Tumor of breast automatic division method based on three-dimensional MRI image |
CN108596887A (en) * | 2018-04-17 | 2018-09-28 | 湖南科技大学 | A kind of abdominal CT sequence image liver neoplasm automatic division method |
CN113674281A (en) * | 2021-10-25 | 2021-11-19 | 之江实验室 | Liver CT automatic segmentation method based on deep shape learning |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116385315A (en) * | 2023-05-31 | 2023-07-04 | 日照天一生物医疗科技有限公司 | Image enhancement method and system for simulated ablation of tumor therapeutic instrument |
CN116385315B (en) * | 2023-05-31 | 2023-09-08 | 日照天一生物医疗科技有限公司 | Image enhancement method and system for simulated ablation of tumor therapeutic instrument |
CN116993628A (en) * | 2023-09-27 | 2023-11-03 | 四川大学华西医院 | CT image enhancement system for tumor radio frequency ablation guidance |
CN116993628B (en) * | 2023-09-27 | 2023-12-08 | 四川大学华西医院 | CT image enhancement system for tumor radio frequency ablation guidance |
Also Published As
Publication number | Publication date |
---|---|
CN114820663B (en) | 2022-09-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114820663B (en) | Assistant positioning method for determining radio frequency ablation therapy | |
CN110310281B (en) | Mask-RCNN deep learning-based pulmonary nodule detection and segmentation method in virtual medical treatment | |
CN107230206B (en) | Multi-mode data-based 3D pulmonary nodule segmentation method for hyper-voxel sequence lung image | |
CN109064476B (en) | CT chest radiography lung tissue image segmentation method based on level set | |
CN115345893B (en) | Ovarian tissue canceration region segmentation method based on image processing | |
CN111667467B (en) | Clustering algorithm-based lower limb vascular calcification index multi-parameter accumulation calculation method | |
CN109753997B (en) | Automatic accurate robust segmentation method for liver tumor in CT image | |
Chen et al. | Pathological lung segmentation in chest CT images based on improved random walker | |
CN116993628B (en) | CT image enhancement system for tumor radio frequency ablation guidance | |
CN111340825A (en) | Method and system for generating mediastinal lymph node segmentation model | |
CN109544528B (en) | Lung nodule image identification method and device | |
Pisupati et al. | Segmentation of 3D pulmonary trees using mathematical morphology | |
CN114764809A (en) | Self-adaptive threshold segmentation method and device for lung CT (computed tomography) density increase shadow | |
CN116309647B (en) | Method for constructing craniocerebral lesion image segmentation model, image segmentation method and device | |
Kamil et al. | Analysis of tissue abnormality in mammography images using gray level co-occurrence matrix method | |
CN110009645B (en) | Double-layer contour segmentation method for liver cancer focus image | |
Mustafa et al. | Mammography image segmentation: Chan-Vese active contour and localised active contour approach | |
CN110738649A (en) | training method of Faster RCNN network for automatic identification of stomach cancer enhanced CT images | |
CN113838020B (en) | Lesion area quantification method based on molybdenum target image | |
Vanmore et al. | Survey on automatic liver segmentation techniques from abdominal CT images | |
CN117237342B (en) | Intelligent analysis method for respiratory rehabilitation CT image | |
CN117974692B (en) | Ophthalmic medical image processing method based on region growing | |
Gao et al. | Classification of pulmonary nodules by using improved convolutional neural networks | |
Manikandan et al. | Lobar fissure extraction in isotropic CT lung images—an application to cancer identification | |
Hossain et al. | Brain Tumor Location Identification and Patient Observation from MRI Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |