CN117952992A - Intelligent segmentation method and device for CT image - Google Patents


Info

Publication number
CN117952992A
Authority
CN
China
Prior art keywords
image
target
feature map
block
segmented
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410327103.2A
Other languages
Chinese (zh)
Other versions
CN117952992B (en)
Inventor
骆志强
华夏
黄峰
范劲松
王志军
谢韶东
蓝燚锋
赖瑞明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan University
Original Assignee
Foshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan University filed Critical Foshan University
Priority to CN202410327103.2A
Publication of CN117952992A
Application granted
Publication of CN117952992B
Legal status: Active
Anticipated expiration

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an intelligent segmentation method and device for CT images. The method comprises the following steps: performing a feature map extraction operation on a preset target CT image to obtain a target feature map corresponding to the target CT image; performing an image block segmentation operation on the target CT image to obtain segmented image blocks corresponding to the target CT image; and performing a fusion operation on the target feature map and the segmented image blocks to obtain fused image blocks, which serve as the target segmented image blocks corresponding to the target CT image. The method thus extracts a feature map from, and segments, the target CT image, and then fuses the resulting target feature map with the segmented image blocks. This improves the reliability and accuracy of the target segmented image blocks and, in turn, the segmentation precision and overall segmentation effect for the target CT image.

Description

Intelligent segmentation method and device for CT image
Technical Field
The invention relates to the technical field of image segmentation, in particular to an intelligent segmentation method and device for CT images.
Background
CT images are an imaging tool for intuitively examining brain tissue, with advantages such as convenient examination and relatively high resolution. At present, a CT image is segmented according to the observation area of interest so that the segmented image blocks can be rapidly identified, screened, and analyzed. In conventional CT image segmentation, block segmentation is usually performed on the radiological signs of the CT image using a U-net model; however, because that approach has low recognition sensitivity to such signs, the segmentation accuracy of the CT image is difficult to improve, and the segmentation effect suffers. It is therefore important to provide a method that can improve the segmentation accuracy of CT images.
Disclosure of Invention
The invention provides an intelligent segmentation method and device for CT images, which help improve the reliability and accuracy of the target segmented image blocks, and thereby the segmentation precision and overall segmentation effect for the target CT image.
In order to solve the technical problem, a first aspect of the present invention discloses an intelligent segmentation method for CT images, which includes:
Performing feature map extraction operation on a preset target CT image to obtain a target feature map corresponding to the target CT image;
Performing block segmentation operation on the target CT image to obtain segmented blocks corresponding to the target CT image;
And according to the target feature map and the segmentation block, carrying out fusion operation on the target feature map and the segmentation block to obtain a fused block serving as a target segmentation block corresponding to the target CT image.
As an alternative implementation manner, in the first aspect of the present invention, the target CT image is determined by:
According to preset image processing parameters, carrying out normalization operation on an initial CT image to be processed to obtain a normalized CT image; the image processing parameters comprise at least one of an image window level position parameter, an image window width range parameter and an image pixel value range parameter;
Determining element parameters of the normalized CT image, and determining elements to be removed corresponding to the normalized CT image according to the element parameters; the element parameters comprise element CT values and element type parameters;
And performing element removal operation on the normalized CT image according to the element to be removed to obtain an element-removed CT image serving as a target CT image.
In an optional implementation manner, in a first aspect of the present invention, the performing a feature map extracting operation on a preset target CT image to obtain a target feature map corresponding to the target CT image includes:
performing initial feature extraction operation on a preset target CT image to obtain a first feature map corresponding to the target CT image;
Performing size transformation operation on the first feature map according to preset size transformation parameters to obtain a second feature map corresponding to the target CT image;
Performing target residual error processing operation on the second feature map according to preset feature map parameters to obtain a third feature map corresponding to the target CT image; the characteristic map parameters comprise characteristic map weight parameters and characteristic map bias parameters;
And determining the first feature map, the second feature map and the third feature map as target feature maps corresponding to the target CT image.
As an optional implementation manner, in the first aspect of the present invention, the feature map weight parameters include a first feature map weight parameter, a second feature map weight parameter, and a third feature map weight parameter, and the feature map bias parameters include a first feature map bias parameter, a second feature map bias parameter, and a third feature map bias parameter;
the performing a target residual processing operation on the second feature map according to a preset feature map parameter to obtain a third feature map corresponding to the target CT image, including:
Performing a first residual processing operation on the second feature map according to the first feature map weight parameter and the first feature map bias parameter to obtain a first target feature map;
Performing a second residual processing operation on the first target feature map according to the second feature map weight parameter and the second feature map bias parameter to obtain a second target feature map;
And carrying out third residual processing operation on the second target feature map according to the third feature map weight parameter and the third feature map bias parameter to obtain a third target feature map serving as a third feature map corresponding to the target CT image.
In an optional implementation manner, in a first aspect of the present invention, the performing a tile segmentation operation on the target CT image to obtain a segmented tile corresponding to the target CT image includes:
performing block segmentation operation on the target CT image according to preset block segmentation parameters to obtain a plurality of blocks to be mapped and segmented corresponding to the target CT image;
for each block to be mapped, mapping the block to be mapped according to the block parameters of the block to be mapped and preset mapping conversion parameters to obtain a mapped block to be mapped; the mapping conversion parameters comprise mapping size conversion parameters and/or mapping color conversion parameters;
And determining all the mapped segmented tiles as segmented tiles corresponding to the target CT image.
In an optional implementation manner, in a first aspect of the present invention, the fusing operation is performed on the target feature map and the segmented tiles according to the target feature map and the segmented tiles to obtain fused tiles, where the fusing operation includes:
Determining a first block to be fused according to the divided blocks;
Determining a first attention weight parameter matched with the first to-be-fused block and the third feature map according to the first to-be-fused block and the third feature map, and determining a second to-be-fused block according to the first attention weight parameter and the third feature map;
Determining a second attention weight parameter matched with the second to-be-fused image block and the second feature image according to the second to-be-fused image block and the second feature image, and determining a third to-be-fused image block according to the second attention weight parameter and the second feature image;
And determining a third attention weight parameter matched with the third to-be-fused image block and the first feature image according to the third to-be-fused image block and the first feature image, and determining a fused image block according to the third attention weight parameter and the first feature image.
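The cascaded fusion above can be sketched as a minimal attention pipeline. The patent does not fix the form of the attention weights, so the dot-product attention used here, the token shapes, and all names below (`attention_fuse`, the random feature tokens) are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(query_tokens, feature_tokens):
    """One fusion stage: compute attention weights between the current
    to-be-fused tokens (query) and one feature map's tokens (key/value),
    then return the attention-weighted combination. Dot-product attention
    is an assumption; the patent only says a matched weight is determined."""
    scores = query_tokens @ feature_tokens.T   # (Nq, Nk) matching scores
    weights = softmax(scores, axis=-1)         # the "attention weight parameter"
    return weights @ feature_tokens            # fused tokens

rng = np.random.default_rng(1)
first_blocks = rng.standard_normal((4, 8))    # first image blocks to be fused
feat3 = rng.standard_normal((6, 8))           # third feature map, as tokens
feat2 = rng.standard_normal((6, 8))           # second feature map
feat1 = rng.standard_normal((6, 8))           # first feature map

second_blocks = attention_fuse(first_blocks, feat3)  # deepest features first
third_blocks = attention_fuse(second_blocks, feat2)
fused_blocks = attention_fuse(third_blocks, feat1)   # final fused image blocks
```

Note the order: the deepest (third) feature map is consumed first and the shallowest (first) last, mirroring the claim's progression.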
As an optional implementation manner, in the first aspect of the present invention, the determining a first tile to be fused according to the segmented tile includes:
Performing a target conversion operation over a plurality of rounds on the segmented image blocks to obtain converted segmented image blocks;
performing a reshaping operation on the converted segmented image blocks to obtain reshaped segmented image blocks;
performing a dimension transformation operation on the reshaped segmented image blocks to obtain transformed segmented image blocks, which serve as the first image blocks to be fused;
Wherein performing the target conversion operation over a plurality of rounds on the segmented image blocks to obtain converted segmented image blocks includes:
performing a feature information extraction operation on the segmented image blocks to obtain feature information of the segmented image blocks, the feature information comprising semantic feature information and/or initial feature information;
according to the feature information, performing a pixel combination operation on the segmented image blocks to obtain a plurality of combined image blocks corresponding to the segmented image blocks, and performing a splicing operation on all the combined image blocks to obtain spliced image blocks corresponding to the segmented image blocks;
performing a normalization operation on the spliced image blocks to obtain processed image blocks, and performing a linear transformation operation on the processed image blocks to obtain transformed image blocks;
determining the current round parameter of the target conversion operation on the segmented image blocks, and judging whether the current round parameter is greater than or equal to a preset round parameter;
if not, taking the transformed image blocks as the new segmented image blocks, and triggering execution of the feature information extraction operation on the segmented image blocks to obtain feature information of the segmented image blocks;
and if so, determining the transformed image blocks as the converted segmented image blocks.
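Read this way, one round of the target conversion resembles Swin-style patch merging (2x2 pixel combination, channel splicing, normalization, linear transformation). The sketch below is an assumed reading, not the patent's implementation; the projection matrix `w`, the round count, and all shapes are hypothetical:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalization over the channel axis.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def merge_round(x, w):
    """One round of the target conversion: combine each 2x2 pixel group,
    splice (concatenate) the four sub-grids along channels, normalize,
    then apply a linear transform. w is a hypothetical projection matrix
    of shape (4C, 2C)."""
    x00 = x[0::2, 0::2, :]   # the four combined image blocks
    x01 = x[0::2, 1::2, :]
    x10 = x[1::2, 0::2, :]
    x11 = x[1::2, 1::2, :]
    spliced = np.concatenate([x00, x01, x10, x11], axis=-1)  # (H/2, W/2, 4C)
    return layer_norm(spliced) @ w                           # (H/2, W/2, 2C)

rng = np.random.default_rng(2)
tile = rng.standard_normal((8, 8, 4))
for _ in range(2):                    # preset round parameter = 2 (assumed)
    c_in = tile.shape[-1]
    w = rng.standard_normal((4 * c_in, 2 * c_in))
    tile = merge_round(tile, w)       # each round halves H, W and doubles C
```

Each round halves the spatial resolution and doubles the channel count, so the current-round counter in the claim controls how coarse the converted segmented image blocks become.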
The second aspect of the invention discloses an intelligent segmentation device for CT images, which comprises:
The extraction module is used for carrying out feature map extraction operation on a preset target CT image to obtain a target feature map corresponding to the target CT image;
The image block segmentation module is used for carrying out image block segmentation operation on the target CT image to obtain segmented image blocks corresponding to the target CT image;
And the fusion module is used for carrying out fusion operation on the target feature map and the segmentation image blocks according to the target feature map and the segmentation image blocks to obtain fused image blocks serving as target segmentation image blocks corresponding to the target CT image.
In a second aspect of the present invention, as an alternative embodiment, the target CT image is determined by:
According to preset image processing parameters, carrying out a normalization operation on an initial CT image to be processed to obtain a normalized CT image; the image processing parameters comprise at least one of an image window level position parameter, an image window width range parameter and an image pixel value range parameter;
Determining element parameters of the normalized CT image, and determining elements to be removed corresponding to the normalized CT image according to the element parameters; the element parameters comprise element CT values and element type parameters;
And performing element removal operation on the normalized CT image according to the element to be removed to obtain an element-removed CT image serving as a target CT image.
In a second aspect of the present invention, the extracting module performs a feature map extracting operation on a preset target CT image, and the method for obtaining a target feature map corresponding to the target CT image specifically includes:
performing initial feature extraction operation on a preset target CT image to obtain a first feature map corresponding to the target CT image;
Performing size transformation operation on the first feature map according to preset size transformation parameters to obtain a second feature map corresponding to the target CT image;
Performing target residual error processing operation on the second feature map according to preset feature map parameters to obtain a third feature map corresponding to the target CT image; the characteristic map parameters comprise characteristic map weight parameters and characteristic map bias parameters;
And determining the first feature map, the second feature map and the third feature map as target feature maps corresponding to the target CT image.
As an optional implementation manner, in the second aspect of the present invention, the feature map weight parameters include a first feature map weight parameter, a second feature map weight parameter, and a third feature map weight parameter, and the feature map bias parameters include a first feature map bias parameter, a second feature map bias parameter, and a third feature map bias parameter;
The method for obtaining the third feature map corresponding to the target CT image by the extraction module performing a target residual processing operation on the second feature map according to a preset feature map parameter specifically includes:
Performing a first residual processing operation on the second feature map according to the first feature map weight parameter and the first feature map bias parameter to obtain a first target feature map;
Performing a second residual processing operation on the first target feature map according to the second feature map weight parameter and the second feature map bias parameter to obtain a second target feature map;
And carrying out third residual processing operation on the second target feature map according to the third feature map weight parameter and the third feature map bias parameter to obtain a third target feature map serving as a third feature map corresponding to the target CT image.
In an optional implementation manner, in a second aspect of the present invention, the tile segmentation module performs a tile segmentation operation on the target CT image, so as to obtain a segmented tile corresponding to the target CT image, where the tile segmentation module specifically includes:
performing block segmentation operation on the target CT image according to preset block segmentation parameters to obtain a plurality of blocks to be mapped and segmented corresponding to the target CT image;
for each block to be mapped, mapping the block to be mapped according to the block parameters of the block to be mapped and preset mapping conversion parameters to obtain a mapped block to be mapped; the mapping conversion parameters comprise mapping size conversion parameters and/or mapping color conversion parameters;
And determining all the mapped segmented tiles as segmented tiles corresponding to the target CT image.
In a second aspect of the present invention, the fusing module performs, according to the target feature map and the segmented tiles, a fusing operation on the target feature map and the segmented tiles, where a manner of obtaining the fused tiles specifically includes:
Determining a first block to be fused according to the divided blocks;
Determining a first attention weight parameter matched with the first to-be-fused block and the third feature map according to the first to-be-fused block and the third feature map, and determining a second to-be-fused block according to the first attention weight parameter and the third feature map;
Determining a second attention weight parameter matched with the second to-be-fused image block and the second feature image according to the second to-be-fused image block and the second feature image, and determining a third to-be-fused image block according to the second attention weight parameter and the second feature image;
And determining a third attention weight parameter matched with the third to-be-fused image block and the first feature image according to the third to-be-fused image block and the first feature image, and determining a fused image block according to the third attention weight parameter and the first feature image.
In a second aspect of the present invention, the method for determining the first tile to be fused according to the segmented tile by the fusion module specifically includes:
Performing a target conversion operation over a plurality of rounds on the segmented image blocks to obtain converted segmented image blocks;
performing a reshaping operation on the converted segmented image blocks to obtain reshaped segmented image blocks;
performing a dimension transformation operation on the reshaped segmented image blocks to obtain transformed segmented image blocks, which serve as the first image blocks to be fused;
wherein the manner in which the fusion module performs the target conversion operation over a plurality of rounds on the segmented image blocks to obtain converted segmented image blocks specifically includes:
performing a feature information extraction operation on the segmented image blocks to obtain feature information of the segmented image blocks, the feature information comprising semantic feature information and/or initial feature information;
according to the feature information, performing a pixel combination operation on the segmented image blocks to obtain a plurality of combined image blocks corresponding to the segmented image blocks, and performing a splicing operation on all the combined image blocks to obtain spliced image blocks corresponding to the segmented image blocks;
performing a normalization operation on the spliced image blocks to obtain processed image blocks, and performing a linear transformation operation on the processed image blocks to obtain transformed image blocks;
determining the current round parameter of the target conversion operation on the segmented image blocks, and judging whether the current round parameter is greater than or equal to a preset round parameter;
if not, taking the transformed image blocks as the new segmented image blocks, and triggering execution of the feature information extraction operation on the segmented image blocks to obtain feature information of the segmented image blocks;
and if so, determining the transformed image blocks as the converted segmented image blocks.
The third aspect of the present invention discloses another intelligent segmentation device for CT images, which comprises:
a memory storing executable program code;
a processor coupled to the memory;
The processor invokes the executable program code stored in the memory to execute the intelligent segmentation method of the CT image disclosed in the first aspect of the invention.
A fourth aspect of the present invention discloses a computer storage medium storing computer instructions for performing the intelligent segmentation method of CT images disclosed in the first aspect of the present invention when the computer instructions are called.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
In the embodiment of the invention, a feature map extraction operation is performed on a preset target CT image to obtain a target feature map corresponding to the target CT image; an image block segmentation operation is performed on the target CT image to obtain segmented image blocks corresponding to the target CT image; and a fusion operation is performed on the target feature map and the segmented image blocks to obtain fused image blocks, which serve as the target segmented image blocks corresponding to the target CT image. The embodiment thus extracts a feature map from, and segments, the target CT image, and then fuses the resulting target feature map with the segmented image blocks, which improves the reliability and accuracy of the target segmented image blocks and, in turn, the segmentation precision and overall segmentation effect for the target CT image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention; a person skilled in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of an intelligent segmentation method of a CT image according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method for intelligent segmentation of CT images according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an intelligent CT image segmentation apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another intelligent CT image segmentation apparatus according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort are intended to fall within the scope of the invention.
The terms "first", "second", and the like in the description, in the claims, and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, product, or device that comprises a list of steps or elements is not limited to those listed, but may optionally include other steps or elements not listed or inherent to such process, method, product, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will understand, both explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The invention discloses an intelligent segmentation method and device for CT images, which help improve the reliability and accuracy of the target segmented image blocks, and thereby the segmentation precision and overall segmentation effect for the target CT image.
Example 1
Referring to fig. 1, fig. 1 is a schematic flow chart of an intelligent segmentation method for CT images according to an embodiment of the present invention. The method described in fig. 1 may be applied to the segmentation of various types of CT images, such as brain CT images and chest CT images, which is not limited in the embodiment of the present invention. Optionally, the method may be implemented by a CT image segmentation apparatus, which may be integrated in a CT image segmentation operation device such as an intelligent computer; where the CT image segmentation apparatus exists independently, it may also be a local server or a cloud server for handling the CT image segmentation procedure. As shown in fig. 1, the intelligent segmentation method for CT images may include the following operations:
101. And carrying out feature map extraction operation on a preset target CT image to obtain a target feature map corresponding to the target CT image.
In an embodiment of the present invention, the feature map extraction operation may be implemented through a ResNet network in a trained image block segmentation model (e.g., a trained Trans-swin-unet model). Optionally, the ResNet network may include three layers: a convolutional layer, a max-pooling layer, and a residual block.
102. And performing block segmentation operation on the target CT image to obtain segmented blocks corresponding to the target CT image.
In an embodiment of the present invention, further, the target CT image is determined by:
according to preset image processing parameters, carrying out normalization operation on an initial CT image to be processed to obtain a normalized CT image;
Determining element parameters of the normalized CT image, and determining elements to be removed corresponding to the normalized CT image according to the element parameters;
And performing element removal operation on the normalized CT image according to the element to be removed to obtain an element-removed CT image serving as a target CT image.
In an embodiment of the present invention, optionally, the image processing parameter includes at least one of an image window level position parameter, an image window width range parameter, and an image pixel value range parameter. Further optionally, the element parameters include an element CT value and an element type parameter.
Specifically, the normalization operation performed on the initial CT image to be processed may be understood as a window-width normalization operation on the initial CT image. Window-width normalization limits the display range of CT values on the initial CT image and can change the brightness of different elements (such as brain tissue and skull in a brain CT image). For example, to highlight the region of interest of the segmentation task on the initial CT image, window-width/window-level truncation may be performed: the window width defines a range interval centered on the window level, so a lower bound (window level minus one-half of the window width) and an upper bound (window level plus one-half of the window width) of the HU truncation are calculated, and the CT values of all pixels on the initial CT image are then clipped to lie between the lower bound and the upper bound.
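As a concrete illustration of the window-width/window-level truncation, the following sketch clips HU values to [level - width/2, level + width/2] and rescales the result to [0, 1]. The default window (level 40, width 80, a common brain window) is an assumption; the patent does not specify concrete parameters:

```python
import numpy as np

def window_normalize(ct, window_level=40.0, window_width=80.0):
    """Clip HU values to the window interval, then scale to [0, 1].

    window_level / window_width are illustrative defaults, not values
    taken from the patent."""
    lower = window_level - window_width / 2.0  # lower bound of the HU truncation
    upper = window_level + window_width / 2.0  # upper bound of the HU truncation
    clipped = np.clip(ct, lower, upper)
    return (clipped - lower) / (upper - lower)

# Pixels below / above the window saturate to 0 / 1.
ct = np.array([[-100.0, 0.0],
               [40.0, 500.0]])
norm = window_normalize(ct)
```

With the brain window above, air (-100 HU) and bone (500 HU) saturate to 0 and 1, while soft tissue around 40 HU lands mid-range, which is exactly the brightness adjustment the paragraph describes.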
More specifically, performing an element removal operation on the normalized CT image may be understood as removing image elements whose CT values are too high and/or irrelevant to the task. For example, because the CT value of the skull region is very high, it easily interferes with subsequent image analysis; the skull elements in a brain CT image therefore need to be removed so that the brain tissue elements are preserved.
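The element removal step can be sketched as a simple threshold mask on the normalized image. The threshold value and function name below are assumptions; the patent only states that over-high or irrelevant CT values (e.g., skull) are removed:

```python
import numpy as np

def remove_skull(norm_ct, bone_threshold=0.9):
    """Zero out elements whose normalized CT value exceeds a bone threshold.

    A minimal sketch of the element-removal step; the 0.9 threshold is
    an illustrative assumption."""
    mask = norm_ct < bone_threshold        # keep soft tissue, drop bone
    return np.where(mask, norm_ct, 0.0)

tissue_and_bone = np.array([[0.2, 0.95],
                            [0.5, 1.00]])
brain_only = remove_skull(tissue_and_bone)
```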
103. And according to the target feature map and the segmentation block, carrying out fusion operation on the target feature map and the segmentation block to obtain a fused block which is used as a target segmentation block corresponding to the target CT image.
Therefore, by implementing the embodiment of the invention, feature map extraction and image block segmentation can be performed on the target CT image, and the resulting target feature map and segmented image blocks can then be fused to obtain the target segmented image blocks corresponding to the target CT image. This improves the reliability and accuracy of the target segmented image blocks, and in turn the segmentation precision and segmentation effect for the target CT image, which facilitates rapid and accurate observation of the elements of the target CT image.
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart of another intelligent segmentation method for CT images according to an embodiment of the present invention. The method described in fig. 2 may be applied to the segmentation of various types of CT images, such as brain CT images and chest CT images, which is not limited in the embodiment of the present invention. Optionally, the method may be implemented by a CT image segmentation apparatus, which may be integrated in a CT image segmentation operation device such as an intelligent computer; where the CT image segmentation apparatus exists independently, it may also be a local server or a cloud server for handling the CT image segmentation procedure. As shown in fig. 2, the intelligent segmentation method for CT images may include the following operations:
201. And carrying out initial feature extraction operation on a preset target CT image to obtain a first feature map corresponding to the target CT image.
In the embodiment of the invention, further, the initial feature extraction operation may be performed on the preset target CT image with a preset initial bias parameter and an initial convolution kernel parameter. For example, if the input target CT image is x, the convolution kernel is w_1, the bias is b_1, and the activation function is ReLU, the first feature map y_1 corresponding to the target CT image may be expressed as: y_1 = ReLU(w_1 * x + b_1).
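As a worked toy example of y_1 = ReLU(w_1 * x + b_1), the following sketch applies a single-channel "valid" convolution followed by ReLU. The concrete kernel and bias values are illustrative, not the preset parameters of the patent:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv2d_valid(x, w, b):
    """Single-channel 'valid' convolution followed by ReLU: y = ReLU(w * x + b)."""
    kh, kw = w.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return relu(out)

# Toy numbers: a 3x3 "image", a 2x2 kernel w_1, and bias b_1.
x = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
w_1 = np.array([[1.0, 0.0],
                [0.0, -1.0]])
b_1 = 0.5
y_1 = conv2d_valid(x, w_1, b_1)   # first feature map
```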
202. And performing size transformation operation on the first feature map according to preset size transformation parameters to obtain a second feature map corresponding to the target CT image.
In an embodiment of the present invention, the second feature map y_2 corresponding to the target CT image may be expressed as: y_2 = Pool(y_1), where Pool(·) denotes the maximum pooling operation (i.e., the size transformation operation).
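The size transformation can be sketched as non-overlapping 2×2 max pooling (the 2×2 window size here is an assumed example, not a value fixed by the embodiment):

```python
import numpy as np

def max_pool2d(y, k=2):
    """Non-overlapping k x k max pooling: halves height and width for k=2."""
    h, w = y.shape
    return y[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).max(axis=(1, 3))

y_1 = np.array([[1., 3., 2., 0.],
                [4., 2., 1., 5.],
                [0., 6., 3., 2.],
                [7., 1., 0., 4.]])
y_2 = max_pool2d(y_1)   # 4x4 -> 2x2; each entry is the max of one 2x2 window
```

Each output entry keeps only the strongest response in its window, which is what shrinks the first feature map into the second.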
203. And carrying out target residual error processing operation on the second feature map according to preset feature map parameters to obtain a third feature map corresponding to the target CT image.
In an embodiment of the present invention, optionally, the feature map parameters include a feature map weight parameter and a feature map bias parameter. Further optionally, the feature map weight parameters include a first feature map weight parameter, a second feature map weight parameter, and a third feature map weight parameter, and the feature map bias parameters include a first feature map bias parameter, a second feature map bias parameter, and a third feature map bias parameter, wherein the target residual processing operation may be implemented by a residual block, and the residual block includes a first convolution layer (for dimension reduction, and corresponding to the first feature map weight parameter and the first feature map bias parameter), a second convolution layer (for feature extraction, and corresponding to the second feature map weight parameter and the second feature map bias parameter), and a third convolution layer (for dimension increase, and corresponding to the third feature map weight parameter and the third feature map bias parameter).
204. And determining the first feature map, the second feature map and the third feature map as target feature maps corresponding to the target CT image.
205. And performing block segmentation operation on the target CT image to obtain segmented blocks corresponding to the target CT image.
206. And according to the target feature map and the segmentation block, carrying out fusion operation on the target feature map and the segmentation block to obtain a fused block which is used as a target segmentation block corresponding to the target CT image.
In the embodiment of the present invention, for other descriptions of step 205 and step 206, please refer to the detailed descriptions of step 102 and step 103 in the first embodiment, and the detailed description of the embodiment of the present invention is omitted.
Therefore, by implementing the embodiment of the invention, the initial feature extraction operation can be performed on the target CT image, the size transformation operation can be performed on the first feature image, and the target residual processing operation can be performed on the second feature image, so that a series of target feature images corresponding to the target CT image can be obtained, the execution reliability and accuracy of the feature image extraction operation on the target CT image can be improved, the reliability and accuracy of the obtained target feature image can be improved, the high efficiency of the subsequent fusion operation between the target feature image and the segmentation image blocks can be improved, and the segmentation accuracy of the target CT image can be improved.
In an optional embodiment, performing a target residual processing operation on the second feature map according to a preset feature map parameter to obtain a third feature map corresponding to the target CT image, including:
According to the first feature map weight parameter and the first feature map bias parameter, performing first residual processing operation on the second feature map to obtain a first target feature map;
Performing a second residual processing operation on the first target feature map according to the second feature map weight parameter and the second feature map bias parameter to obtain a second target feature map;
And carrying out third residual processing operation on the second target feature map according to the third feature map weight parameter and the third feature map bias parameter to obtain a third target feature map serving as a third feature map corresponding to the target CT image.
In this alternative embodiment, the third feature map corresponding to the target CT image may be represented by the following formula: H(y_2) = ReLU(w_4 × ReLU(w_3 × ReLU(w_2 × y_2 + b_2) + b_3) + b_4), where w_2 is the first feature map weight parameter, w_3 is the second feature map weight parameter, w_4 is the third feature map weight parameter, b_2 is the first feature map bias parameter, b_3 is the second feature map bias parameter, b_4 is the third feature map bias parameter, and y_2 is the second feature map.
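The nesting of the three chained residual stages can be traced with scalar weights standing in for the convolution layers (all weight and bias values below are assumed examples chosen only to make the arithmetic easy to follow):

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def three_stage_residual(y_2, w_2, b_2, w_3, b_3, w_4, b_4):
    """H(y_2) = ReLU(w_4 * ReLU(w_3 * ReLU(w_2 * y_2 + b_2) + b_3) + b_4),
    written with scalar weights so each of the three stages is visible."""
    stage1 = relu(w_2 * y_2 + b_2)   # first residual processing operation
    stage2 = relu(w_3 * stage1 + b_3)  # second residual processing operation
    return relu(w_4 * stage2 + b_4)    # third residual processing operation

y_2 = np.array([[1.0, -2.0], [3.0, 0.5]])
y_3 = three_stage_residual(y_2, w_2=2.0, b_2=0.0, w_3=1.0, b_3=1.0, w_4=0.5, b_4=-1.0)
```

In the actual residual block each w_i is a convolution (1×1 reduce, 3×3 extract, 1×1 expand, per the description above); the scalar form only mirrors the formula's structure.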
Therefore, in this optional embodiment, residual processing operation can be performed on the corresponding target feature map according to the corresponding feature map weight parameter and the feature map bias parameter, so as to obtain a third feature map corresponding to the target CT image, so that reliability and accuracy of the target residual processing operation on the second feature map can be improved, and further, accuracy of the third feature map corresponding to the obtained target CT image can be improved, and thus, execution efficiency of subsequent image fusion operation can be improved.
In another optional embodiment, performing a tile segmentation operation on the target CT image to obtain segmented tiles corresponding to the target CT image, including:
performing block segmentation operation on the target CT image according to preset block segmentation parameters to obtain a plurality of blocks to be mapped and segmented corresponding to the target CT image;
For each block to be mapped, mapping the block to be mapped according to the block parameters of the block to be mapped and preset mapping conversion parameters to obtain a mapped block to be segmented;
and determining all the mapped segmented tiles as segmented tiles corresponding to the target CT image.
In this alternative embodiment, optionally, the mapping conversion parameters include mapping size conversion parameters and/or mapping color conversion parameters, and the tile parameters of the tiles to be mapped and segmented include tile size parameters and/or tile color parameters. For example, when the target CT image size is 64×64 and the required segmented tile size is 4×4, the Patch Partition + Linear Embedding module in the trained tile segmentation model may segment the target CT image into 16×16 segmented tiles to be mapped (each segmented tile to be mapped includes corresponding local image information), and then map each segmented tile to be mapped to a new feature space (the dimension of the space is different from the original dimension of the segmented tile to be mapped), for example, map a segmented tile to be mapped having a dimension of 4×4×3 (corresponding to a 4×4 pixel size and 3 color channels) to a 96-dimensional feature space, thereby obtaining mapped segmented tiles of shape (B, 96, 16, 16).
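The 64×64 example above can be sketched with plain array reshapes (the random input and random projection matrix are placeholders for a real image and the model's learned embedding weights):

```python
import numpy as np

B, H, W, C, P, D = 2, 64, 64, 3, 4, 96   # batch, image size, channels, patch size, embed dim

x = np.random.rand(B, H, W, C)           # stand-in for a batch of target CT images
# Patch partition: split into non-overlapping 4x4 patches and flatten each one.
patches = (x.reshape(B, H // P, P, W // P, P, C)
             .transpose(0, 1, 3, 2, 4, 5)
             .reshape(B, H // P, W // P, P * P * C))   # (B, 16, 16, 48)
# Linear embedding: project every 48-dim patch into the 96-dim feature space.
proj = np.random.rand(P * P * C, D)                    # stand-in for learned weights
embedded = (patches @ proj).transpose(0, 3, 1, 2)      # (B, 96, 16, 16)
```

The 16×16 grid of 4×4×3 = 48-dimensional patches maps to the (B, 96, 16, 16) layout quoted in the example.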
Therefore, the image block segmentation can be carried out on the target CT image by the alternative embodiment, and then the obtained image block to be segmented is subjected to mapping operation, so that the segmented image block corresponding to the target CT image is obtained, the image block segmentation reliability and accuracy of the target CT image are improved, the accuracy of the segmented image block corresponding to the obtained target CT image is improved, and the fusion efficiency of the subsequent segmented image block is improved comprehensively.
In yet another alternative embodiment, according to the target feature map and the segmented tiles, a fusion operation is performed on the target feature map and the segmented tiles to obtain fused tiles, including:
Determining a first block to be fused according to the divided blocks;
Determining a first attention weight parameter matched with the first to-be-fused block and the third feature map according to the first to-be-fused block and the third feature map, and determining a second to-be-fused block according to the first attention weight parameter and the third feature map;
determining a second attention weight parameter matched with the second image block to be fused and the second feature image according to the second image block to be fused and the second feature image, and determining a third image block to be fused according to the second attention weight parameter and the second feature image;
And determining a third attention weight parameter matched with the third to-be-fused image block and the first feature image according to the third to-be-fused image block and the first feature image, and determining the fused image block according to the third attention weight parameter and the first feature image.
In this alternative embodiment, it may be specifically understood that the multi-scale feature map (i.e. the first/second/third feature map) is fused with the up-sampling feature map (i.e. the third to-be-fused block), the shallow feature map (i.e. the second to-be-fused block), and the deep feature map (i.e. the first to-be-fused block), so as to reduce the spatial information loss caused by downsampling in the corresponding image processing process. Further, the corresponding Attention weighting parameters may be determined by Attention Gate. Still further, determining a fused tile according to the third attention weighting parameter and the first feature map, including: and determining the to-be-processed fused image block according to the third attention weight parameter and the first feature map, and performing dimension conversion operation on the to-be-processed fused image block according to the preset dimension conversion parameter to obtain the fused image block.
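One common form of Attention Gate computes a 0-to-1 weight map from the skip feature and the gating (deeper) feature and rescales the skip feature with it; a scalar-weight sketch under that assumption (the specific gate structure and all weight values here are illustrative, not fixed by this embodiment):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def attention_gate(skip, gating, w_x, w_g, w_psi):
    """Attention weights alpha = sigmoid(w_psi * ReLU(w_x*skip + w_g*gating));
    the skip-connection feature map is rescaled element-wise by alpha."""
    alpha = sigmoid(w_psi * np.maximum(w_x * skip + w_g * gating, 0.0))
    return alpha * skip, alpha

skip = np.array([[1.0, 4.0], [0.0, 2.0]])     # e.g. a shallow multi-scale feature map
gating = np.array([[2.0, 0.0], [1.0, 3.0]])   # e.g. the deeper tile being fused
fused, alpha = attention_gate(skip, gating, w_x=1.0, w_g=1.0, w_psi=1.0)
```

Each fusion step in the chain above (first, second, third attention weight parameter) would apply a gate of this shape to the corresponding feature map.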
Therefore, the optional embodiment can determine the corresponding tiles to be fused according to the corresponding attention weight parameters, so that the reliability and accuracy of determining the fused tiles can be improved, the reliability and accuracy of the obtained target segmentation tiles can be further improved, the segmentation effect on the target CT image can be improved, and the objects to be observed in the target segmentation tiles are ensured to be more concentrated and clear.
In yet another alternative embodiment, determining a first tile to be fused from the split tiles includes:
performing target conversion operation under a plurality of rounds on the divided image blocks to obtain converted divided image blocks;
Performing remolding operation on the converted segmented image blocks to obtain remolded segmented image blocks;
and performing dimension transformation operation on the remolded segmented image block to obtain a transformed segmented image block serving as a first image block to be fused.
In this alternative embodiment, further, performing a target conversion operation on the split tiles for a plurality of rounds to obtain converted split tiles includes:
Extracting characteristic information of the segmented image blocks to obtain the characteristic information of the segmented image blocks; according to the characteristic information, performing pixel combination operation on the segmented image blocks to obtain a plurality of combined image blocks corresponding to the segmented image blocks, and performing splicing operation on all the combined image blocks to obtain spliced image blocks corresponding to the segmented image blocks; normalizing the spliced image blocks to obtain processed image blocks, and performing linear transformation operation on the processed image blocks to obtain transformed image blocks;
Determining a current round parameter corresponding to the target conversion operation of the segmented image block, and judging whether the current round parameter is larger than or equal to a preset round parameter;
when the judgment result is no, updating the transformed image block to be the segmented image block, and triggering execution of the operation of extracting characteristic information of the segmented image block to obtain the characteristic information of the segmented image block;
and when the judgment result is yes, determining the block after transformation as a block after division after transformation.
In this alternative embodiment, optionally, the feature information comprises semantic feature information and/or initial feature information. For example, a Patch Merging module in the tile segmentation model regards each 2×2 group of neighboring pixels as a patch (tile), then combines the pixels at the same position within each patch, finally forming 4 feature maps. Next, the four feature maps are concatenated in the depth direction, followed by a layer normalization process. Finally, a linear transformation is applied in the depth direction of the feature map through a fully connected layer, so that the width and height of the feature map are halved and the depth is doubled, thereby obtaining the corresponding transformed image blocks.
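A sketch of that merge-concatenate-project sequence (the random input and random projection stand in for real feature maps and learned weights, and layer normalization is omitted for brevity):

```python
import numpy as np

def patch_merging(x):
    """Gather the four pixels of every 2x2 window, concatenate them along the
    channel axis (C -> 4C), then linearly project 4C -> 2C, so height and
    width are halved while depth is doubled."""
    B, H, W, C = x.shape
    x0 = x[:, 0::2, 0::2, :]   # top-left pixel of each 2x2 window
    x1 = x[:, 1::2, 0::2, :]   # bottom-left
    x2 = x[:, 0::2, 1::2, :]   # top-right
    x3 = x[:, 1::2, 1::2, :]   # bottom-right
    merged = np.concatenate([x0, x1, x2, x3], axis=-1)   # (B, H/2, W/2, 4C)
    w = np.random.rand(4 * C, 2 * C)                     # stand-in for learned weights
    return merged @ w                                    # (B, H/2, W/2, 2C)

x = np.random.rand(1, 8, 8, 96)
y = patch_merging(x)
```

Repeating this operation over several rounds is what produces the converted segmented image blocks described above.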
Therefore, the optional embodiment can perform the target conversion, remodelling and dimension conversion operation on the segmented image blocks to obtain the first image block to be fused, so that the reliability and accuracy of the obtained converted segmented image blocks and the first image block to be fused are improved, the execution reliability and accuracy of the subsequent fusion operation on the first image block to be fused are improved, and the accurate segmentation on the target CT image is realized.
Example III
Referring to fig. 3, fig. 3 is a schematic structural diagram of an intelligent CT image segmentation apparatus according to an embodiment of the present invention. As shown in fig. 3, the intelligent segmentation apparatus for CT image may include:
The extracting module 301 is configured to perform a feature map extracting operation on a preset target CT image, so as to obtain a target feature map corresponding to the target CT image;
the block segmentation module 302 is configured to perform a block segmentation operation on the target CT image to obtain a segmented block corresponding to the target CT image;
The fusion module 303 is configured to perform a fusion operation on the target feature map and the segmentation block according to the target feature map and the segmentation block, so as to obtain a fused block, which is a target segmentation block corresponding to the target CT image.
In an embodiment of the present invention, further, the target CT image is determined by:
according to preset image processing parameters, carrying out normalization operation on an initial CT image to be processed to obtain a normalized CT image;
determining element parameters of the normalized CT image, and determining elements to be removed corresponding to the normalized CT image according to the element parameters; the element parameters comprise element CT values and element type parameters;
And performing element removal operation on the normalized CT image according to the element to be removed to obtain an element-removed CT image serving as a target CT image.
In an embodiment of the present invention, optionally, the image processing parameters include an image window level position parameter, an image window width range parameter, and an image pixel value range parameter.
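The window-based normalization can be sketched as clipping raw CT values to the window defined by the level and width parameters, then rescaling into the pixel value range (the soft-tissue window level of 40, width of 400, and the [0, 1] output range below are assumed example values, not values fixed by this embodiment):

```python
import numpy as np

def window_normalize(ct, level, width, out_range=(0.0, 1.0)):
    """Clip CT values to [level - width/2, level + width/2], then rescale
    the clipped values linearly into out_range."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(ct.astype(float), lo, hi)
    lo_out, hi_out = out_range
    return lo_out + (clipped - lo) * (hi_out - lo_out) / (hi - lo)

ct = np.array([[-1000.0, 40.0], [80.0, 400.0]])   # hypothetical raw CT values
norm = window_normalize(ct, level=40.0, width=400.0)
```

Values below the window floor map to 0 and values above the ceiling map to 1, so out-of-window elements are flattened before the subsequent element removal step.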
Therefore, the intelligent segmentation device for implementing the CT image described in fig. 3 can perform feature map extraction and tile segmentation operation on the target CT image, and then fuse the obtained target feature map and the segmented tiles to obtain the target segmented tiles corresponding to the target CT image, which is beneficial to improving the reliability and accuracy of the target segmented tiles, and further beneficial to improving the segmentation precision of the target CT image, and thus beneficial to improving the segmentation effect of the target CT image.
In an optional embodiment, the extracting module 301 performs a feature map extracting operation on a preset target CT image, and the manner of obtaining a target feature map corresponding to the target CT image specifically includes:
Performing initial feature extraction operation on a preset target CT image to obtain a first feature map corresponding to the target CT image;
Performing size transformation operation on the first feature map according to preset size transformation parameters to obtain a second feature map corresponding to the target CT image;
performing target residual error processing operation on the second feature map according to preset feature map parameters to obtain a third feature map corresponding to the target CT image;
And determining the first feature map, the second feature map and the third feature map as target feature maps corresponding to the target CT image.
In this alternative embodiment, the feature map parameters optionally include a feature map weight parameter and a feature map bias parameter.
Therefore, the intelligent segmentation device for implementing the CT image depicted in fig. 3 can perform an initial feature extraction operation on the target CT image, perform a size transformation operation on the first feature image, and perform a target residual processing operation on the second feature image, so as to obtain a series of target feature images corresponding to the target CT image, so that the reliability and accuracy of the execution of the feature image extraction operation on the target CT image can be improved, the reliability and accuracy of the obtained target feature images can be improved, the efficiency of the subsequent fusion operation between the target feature images and the segmentation image blocks can be improved, and the segmentation accuracy of the target CT image can be improved.
In another optional embodiment, the feature map weight parameters include a first feature map weight parameter, a second feature map weight parameter, and a third feature map weight parameter, and the feature map bias parameters include a first feature map bias parameter, a second feature map bias parameter, and a third feature map bias parameter;
The extracting module 301 performs a target residual processing operation on the second feature map according to a preset feature map parameter, and the manner of obtaining a third feature map corresponding to the target CT image specifically includes:
According to the first feature map weight parameter and the first feature map bias parameter, performing first residual processing operation on the second feature map to obtain a first target feature map;
Performing a second residual processing operation on the first target feature map according to the second feature map weight parameter and the second feature map bias parameter to obtain a second target feature map;
And carrying out third residual processing operation on the second target feature map according to the third feature map weight parameter and the third feature map bias parameter to obtain a third target feature map serving as a third feature map corresponding to the target CT image.
Therefore, the intelligent segmentation device for implementing the CT image described in fig. 3 can perform residual processing operation on the corresponding target feature image according to the corresponding feature image weight parameter and the feature image bias parameter, so as to obtain the third feature image corresponding to the target CT image, so that reliability and accuracy of the target residual processing operation on the second feature image can be improved, and further, accuracy of the third feature image corresponding to the obtained target CT image can be improved, and further, execution efficiency of subsequent image fusion operation can be improved.
In yet another alternative embodiment, the tile segmentation module 302 performs a tile segmentation operation on the target CT image, and the manner of obtaining the segmented tiles corresponding to the target CT image specifically includes:
performing block segmentation operation on the target CT image according to preset block segmentation parameters to obtain a plurality of blocks to be mapped and segmented corresponding to the target CT image;
For each block to be mapped, mapping the block to be mapped according to the block parameters of the block to be mapped and preset mapping conversion parameters to obtain a mapped block to be segmented;
and determining all the mapped segmented tiles as segmented tiles corresponding to the target CT image.
In this alternative embodiment, optionally, the mapping conversion parameters include mapping size conversion parameters and/or mapping color conversion parameters.
Therefore, the intelligent segmentation device for implementing the CT image depicted in fig. 3 can segment the target CT image, and then map the segmented image blocks to be mapped, so as to obtain the segmented image blocks corresponding to the target CT image, which is beneficial to improving the reliability and accuracy of segment segmentation of the target CT image, and further beneficial to improving the accuracy of segment image blocks corresponding to the obtained target CT image, so as to comprehensively improve the fusion efficiency of the subsequent segmented image blocks.
In yet another alternative embodiment, the fusing module 303 performs a fusing operation on the target feature map and the segmented tiles according to the target feature map and the segmented tiles, and the manner of obtaining the fused tiles specifically includes:
Determining a first block to be fused according to the divided blocks;
Determining a first attention weight parameter matched with the first to-be-fused block and the third feature map according to the first to-be-fused block and the third feature map, and determining a second to-be-fused block according to the first attention weight parameter and the third feature map;
determining a second attention weight parameter matched with the second image block to be fused and the second feature image according to the second image block to be fused and the second feature image, and determining a third image block to be fused according to the second attention weight parameter and the second feature image;
And determining a third attention weight parameter matched with the third to-be-fused image block and the first feature image according to the third to-be-fused image block and the first feature image, and determining the fused image block according to the third attention weight parameter and the first feature image.
Therefore, the intelligent segmentation device for implementing the CT image described in fig. 3 can determine the corresponding tiles to be fused according to the corresponding attention weight parameters, so that the reliability and accuracy of determining the fused tiles can be improved, and the reliability and accuracy of the obtained target segmentation tiles can be further improved, so that the segmentation effect on the target CT image can be improved, and the objects to be observed in the target segmentation tiles can be ensured to be more concentrated and clear.
In yet another alternative embodiment, the fusing module 303 determines the first tile to be fused according to the split tiles by specifically including:
performing target conversion operation under a plurality of rounds on the divided image blocks to obtain converted divided image blocks;
Performing remolding operation on the converted segmented image blocks to obtain remolded segmented image blocks;
and performing dimension transformation operation on the remolded segmented image block to obtain a transformed segmented image block serving as a first image block to be fused.
In this alternative embodiment, further, the fusing module 303 performs the target conversion operation on the split tiles under multiple rounds, and the manner of obtaining the converted split tiles specifically includes:
Extracting characteristic information of the segmented image blocks to obtain the characteristic information of the segmented image blocks; according to the characteristic information, performing pixel combination operation on the segmented image blocks to obtain a plurality of combined image blocks corresponding to the segmented image blocks, and performing splicing operation on all the combined image blocks to obtain spliced image blocks corresponding to the segmented image blocks; normalizing the spliced image blocks to obtain processed image blocks, and performing linear transformation operation on the processed image blocks to obtain transformed image blocks;
Determining a current round parameter corresponding to the target conversion operation of the segmented image block, and judging whether the current round parameter is larger than or equal to a preset round parameter;
when the judgment result is no, updating the transformed image block to be the segmented image block, and triggering execution of the operation of extracting characteristic information of the segmented image block to obtain the characteristic information of the segmented image block;
and when the judgment result is yes, determining the block after transformation as a block after division after transformation.
In this alternative embodiment, optionally, the feature information comprises semantic feature information and/or initial feature information.
Therefore, the intelligent segmentation device for implementing the CT image depicted in fig. 3 can perform the target conversion, reshaping and dimension transformation operations on the segmented image blocks to obtain the first image block to be fused, which is beneficial to improving the reliability and accuracy of the obtained segmented image block after conversion and the first image block to be fused, and further beneficial to improving the reliability and accuracy of the subsequent fusion operation on the first image block to be fused, thereby being beneficial to realizing the accurate segmentation of the target CT image.
Example IV
Referring to fig. 4, fig. 4 is a schematic structural diagram of an intelligent CT image segmentation apparatus according to another embodiment of the present invention. As shown in fig. 4, the intelligent segmentation apparatus for CT image may include:
a memory 401 storing executable program codes;
A processor 402 coupled with the memory 401;
The processor 402 invokes executable program codes stored in the memory 401 to perform the steps in the intelligent segmentation method of CT images described in the first or second embodiment of the present invention.
Example five
The embodiment of the invention discloses a computer storage medium which stores computer instructions for executing the steps in the intelligent CT image segmentation method described in the first or second embodiment of the invention when the computer instructions are called.
Example six
An embodiment of the present invention discloses a computer program product comprising a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform the steps in the intelligent segmentation method of CT images described in embodiment one or embodiment two.
The apparatus embodiments described above are merely illustrative, wherein the modules illustrated as separate components may or may not be physically separate, and the components shown as modules may or may not be physical, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above detailed description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course by means of hardware. Based on such understanding, the foregoing technical solutions may be embodied essentially or in part in the form of a software product that may be stored in a computer-readable storage medium including Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), programmable Read-Only Memory (Programmable Read-Only Memory, PROM), erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), one-time programmable Read-Only Memory (OTPROM), electrically erasable programmable Read-Only Memory (EEPROM), compact disc Read-Only Memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disc Memory, magnetic disc Memory, tape Memory, or any other medium that can be used for computer-readable carrying or storing data.
Finally, it should be noted that: the embodiment of the invention discloses an intelligent segmentation method and device for CT images, which are disclosed as preferred embodiments of the invention, and are only used for illustrating the technical scheme of the invention, but not limiting the technical scheme; although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that; the technical scheme recorded in the various embodiments can be modified or part of technical features in the technical scheme can be replaced equivalently; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. An intelligent segmentation method of a CT image, which is characterized by comprising the following steps:
Performing feature map extraction operation on a preset target CT image to obtain a target feature map corresponding to the target CT image;
Performing block segmentation operation on the target CT image to obtain segmented blocks corresponding to the target CT image;
And according to the target feature map and the segmentation block, carrying out fusion operation on the target feature map and the segmentation block to obtain a fused block serving as a target segmentation block corresponding to the target CT image.
2. The intelligent segmentation method of CT images according to claim 1, wherein the target CT image is determined by:
According to preset image processing parameters, carrying out normalization operation on an initial CT image to be processed to obtain a normalized CT image; the image processing parameters comprise at least one of an image window level position parameter, an image window width range parameter and an image pixel value range parameter;
Determining element parameters of the normalized CT image, and determining elements to be removed corresponding to the normalized CT image according to the element parameters; the element parameters comprise element CT values and element type parameters;
And performing element removal operation on the normalized CT image according to the element to be removed to obtain an element-removed CT image serving as a target CT image.
3. The intelligent segmentation method of CT images according to claim 2, wherein the performing a feature map extraction operation on a preset target CT image to obtain a target feature map corresponding to the target CT image comprises:
performing an initial feature extraction operation on a preset target CT image to obtain a first feature map corresponding to the target CT image;
performing a size transformation operation on the first feature map according to preset size transformation parameters to obtain a second feature map corresponding to the target CT image;
performing a target residual processing operation on the second feature map according to preset feature map parameters to obtain a third feature map corresponding to the target CT image; the feature map parameters comprise feature map weight parameters and feature map bias parameters; and
determining the first feature map, the second feature map, and the third feature map as the target feature maps corresponding to the target CT image.
4. The intelligent segmentation method according to claim 3, wherein the feature map weight parameters include a first feature map weight parameter, a second feature map weight parameter, and a third feature map weight parameter, and the feature map bias parameters include a first feature map bias parameter, a second feature map bias parameter, and a third feature map bias parameter;
the performing a target residual processing operation on the second feature map according to preset feature map parameters to obtain a third feature map corresponding to the target CT image comprises:
performing a first residual processing operation on the second feature map according to the first feature map weight parameter and the first feature map bias parameter to obtain a first target feature map;
performing a second residual processing operation on the first target feature map according to the second feature map weight parameter and the second feature map bias parameter to obtain a second target feature map; and
performing a third residual processing operation on the second target feature map according to the third feature map weight parameter and the third feature map bias parameter to obtain a third target feature map serving as the third feature map corresponding to the target CT image.
5. The method for intelligent segmentation of CT images according to any one of claims 1-4, wherein the performing a block segmentation operation on the target CT image to obtain segmented image blocks corresponding to the target CT image comprises:
performing a block segmentation operation on the target CT image according to preset block segmentation parameters to obtain a plurality of segmented blocks to be mapped corresponding to the target CT image;
for each block to be mapped, performing a mapping operation on the block according to the block parameters of the block and preset mapping conversion parameters to obtain a mapped segmented block; the mapping conversion parameters comprise a mapping size conversion parameter and/or a mapping color conversion parameter; and
determining all the mapped segmented blocks as the segmented image blocks corresponding to the target CT image.
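Claim 5's block segmentation and per-block mapping can be illustrated as follows. Nearest-neighbour upscaling stands in for the unspecified mapping size conversion; the block size and scale factor are illustrative assumptions:

```python
import numpy as np

def split_into_blocks(image, block=4):
    """Split an H x W image into non-overlapping block x block tiles."""
    h, w = image.shape
    assert h % block == 0 and w % block == 0
    return (image.reshape(h // block, block, w // block, block)
                 .transpose(0, 2, 1, 3)      # group rows/cols of each tile
                 .reshape(-1, block, block))

def map_block(tile, scale=2):
    """Illustrative mapping step: nearest-neighbour upscaling as one
    possible mapping size conversion."""
    return np.kron(tile, np.ones((scale, scale)))

img = np.arange(64, dtype=float).reshape(8, 8)   # toy target CT image
blocks = split_into_blocks(img)                  # 4 tiles of 4 x 4
mapped = [map_block(t) for t in blocks]          # each tile mapped to 8 x 8
```

A color conversion (the other mapping conversion parameter named in the claim) would operate on the tile's intensity values in the same per-block fashion.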
6. The intelligent segmentation method of CT images according to claim 3, wherein the performing a fusion operation on the target feature map and the segmented image blocks according to the target feature map and the segmented image blocks to obtain a fused image block comprises:
determining a first block to be fused according to the segmented image blocks;
determining a first attention weight parameter matched with the first block to be fused and the third feature map according to the first block to be fused and the third feature map, and determining a second block to be fused according to the first attention weight parameter and the third feature map;
determining a second attention weight parameter matched with the second block to be fused and the second feature map according to the second block to be fused and the second feature map, and determining a third block to be fused according to the second attention weight parameter and the second feature map; and
determining a third attention weight parameter matched with the third block to be fused and the first feature map according to the third block to be fused and the first feature map, and determining the fused image block according to the third attention weight parameter and the first feature map.
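The coarse-to-fine attention fusion of claim 6 can be sketched as below. Scaled dot-product attention is one plausible realisation of the claimed "attention weight parameter matched with" a block and a feature map; the residual add and all dimensions are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(query_block, feature_map):
    """Fuse a block (N x d) with a feature map (M x d tokens) via
    scaled dot-product attention weights, plus an assumed residual."""
    d = query_block.shape[-1]
    weights = softmax(query_block @ feature_map.T / np.sqrt(d))
    return query_block + weights @ feature_map

rng = np.random.default_rng(1)
block = rng.normal(size=(4, 8))                    # first block to be fused
f3, f2, f1 = (rng.normal(size=(16, 8)) for _ in range(3))
for fmap in (f3, f2, f1):                          # deepest map first, per claim 6
    block = attention_fuse(block, fmap)            # second, third, then fused block
```

Attending to the third (deepest) feature map first and the first (shallowest) last mirrors a decoder that progressively restores spatial detail.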
7. The method of claim 6, wherein the determining a first block to be fused according to the segmented image blocks comprises:
performing a target conversion operation over a plurality of rounds on the segmented image blocks to obtain converted segmented image blocks;
performing a reshaping operation on the converted segmented image blocks to obtain reshaped segmented image blocks; and
performing a dimension transformation operation on the reshaped segmented image blocks to obtain transformed segmented image blocks serving as the first block to be fused;
wherein the performing a target conversion operation over a plurality of rounds on the segmented image blocks to obtain converted segmented image blocks comprises:
extracting feature information of the segmented image blocks to obtain the feature information of the segmented image blocks; the feature information comprises semantic feature information and/or initial feature information;
performing, according to the feature information, a pixel combination operation on the segmented image blocks to obtain a plurality of combined image blocks corresponding to the segmented image blocks, and performing a splicing operation on all the combined image blocks to obtain spliced image blocks corresponding to the segmented image blocks;
performing a normalization operation on the spliced image blocks to obtain processed image blocks, and performing a linear transformation operation on the processed image blocks to obtain transformed image blocks;
determining a current round parameter corresponding to the target conversion operation on the segmented image blocks, and judging whether the current round parameter is greater than or equal to a preset round parameter;
if not, updating the segmented image blocks with the transformed image blocks, and triggering execution of the extracting feature information of the segmented image blocks; and
if so, determining the transformed image blocks as the converted segmented image blocks.
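The per-round combine-splice-normalize-transform loop of claim 7 resembles Swin-style patch merging; that correspondence, and every dimension below, is an assumption made for illustration:

```python
import numpy as np

def merge_and_transform(tokens, proj):
    """One conversion round: combine neighbouring token pairs
    (pixel combination + splicing), normalize, then apply a
    linear transformation."""
    merged = tokens.reshape(tokens.shape[0] // 2, -1)           # combine + splice
    normed = (merged - merged.mean(axis=-1, keepdims=True)) / (
        merged.std(axis=-1, keepdims=True) + 1e-6)              # normalization
    return normed @ proj                                        # linear transform

rng = np.random.default_rng(2)
tokens = rng.normal(size=(8, 4))        # 8 block tokens of dimension 4
rounds = 2                              # stands in for the preset round parameter
for _ in range(rounds):                 # loop until the round parameter is reached
    d_in = tokens.shape[1] * 2
    proj = rng.normal(size=(d_in, d_in // 2))
    tokens = merge_and_transform(tokens, proj)
```

The reshaping and dimension transformation steps of claim 7 would then flatten and project `tokens` into the shape expected by the fusion stage of claim 6.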
8. An intelligent segmentation apparatus for CT images, the apparatus comprising:
an extraction module, configured to perform a feature map extraction operation on a preset target CT image to obtain a target feature map corresponding to the target CT image;
a block segmentation module, configured to perform a block segmentation operation on the target CT image to obtain segmented image blocks corresponding to the target CT image; and
a fusion module, configured to perform a fusion operation on the target feature map and the segmented image blocks according to the target feature map and the segmented image blocks to obtain a fused image block serving as the target segmented image block corresponding to the target CT image.
9. An intelligent segmentation apparatus for CT images, the apparatus comprising:
a memory storing executable program code;
a processor coupled to the memory;
The processor invokes the executable program code stored in the memory to perform the intelligent segmentation method of CT images as set forth in any one of claims 1-7.
10. A computer storage medium storing computer instructions for performing the intelligent segmentation method of CT images according to any one of claims 1-7 when invoked.
CN202410327103.2A 2024-03-21 2024-03-21 Intelligent segmentation method and device for CT image Active CN117952992B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410327103.2A CN117952992B (en) 2024-03-21 2024-03-21 Intelligent segmentation method and device for CT image

Publications (2)

Publication Number Publication Date
CN117952992A true CN117952992A (en) 2024-04-30
CN117952992B CN117952992B (en) 2024-06-11

Family

ID=90804035

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298823A (en) * 2021-05-20 2021-08-24 西安泽塔云科技股份有限公司 Image fusion method and device
CN114463341A (en) * 2022-01-11 2022-05-10 武汉大学 Medical image segmentation method based on long and short distance features
CN116188479A (en) * 2023-02-21 2023-05-30 北京长木谷医疗科技有限公司 Hip joint image segmentation method and system based on deep learning
CN116703944A (en) * 2023-05-31 2023-09-05 中国工商银行股份有限公司 Image segmentation method, image segmentation device, electronic device and storage medium
US20230306600A1 (en) * 2022-02-10 2023-09-28 Qualcomm Incorporated System and method for performing semantic image segmentation
CN116843893A (en) * 2023-03-30 2023-10-03 北京工商大学 Three-dimensional image segmentation method and system based on attention mechanism multi-scale convolutional neural network
CN117036246A (en) * 2023-07-06 2023-11-10 东软集团股份有限公司 Image recognition method and device, storage medium and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant