CN111080573A - Rib image detection method, computer device and storage medium

Info

Publication number
CN111080573A
Authority
CN
China
Prior art keywords: image, region, candidate, rib, interest
Legal status: Granted
Application number
CN201911133164.0A
Other languages: Chinese (zh)
Other versions: CN111080573B (en)
Inventors: 宋燕丽, 宣锴, 吴迪嘉
Current Assignee: Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee: Shanghai United Imaging Intelligent Healthcare Co Ltd
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN201911133164.0A
Publication of CN111080573A; application granted; publication of CN111080573B
Legal status: Active

Classifications

    • G06T7/0012 Image analysis; Inspection of images; Biomedical image inspection
    • G06T7/11 Image analysis; Segmentation; Region-based segmentation
    • G06T2207/20081 Special algorithmic details; Training; Learning
    • G06T2207/20084 Special algorithmic details; Artificial neural networks [ANN]
    • G06T2207/30008 Subject of image; Biomedical image processing; Bone

Abstract

The present application relates to a rib image detection method, a computer device, and a storage medium. The method comprises the following steps: acquiring an original medical image, the original medical image including ribs; expanding the ribs in the original medical image to obtain an expanded image of the ribs; inputting the expanded image of the ribs into a first neural network model for processing to obtain a candidate position of a region of interest; performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region, the candidate image region being an image region on the original medical image that includes the candidate position of the region of interest; and inputting the candidate image region into a second neural network model for processing to obtain the target position of the region of interest. By adopting the method, detection time can be saved and detection accuracy can be improved.

Description

Rib image detection method, computer device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a rib image detection method, a computer device, and a storage medium.
Background
The rib is an arched bone that forms the bony support of the thorax; its front end connects to the sternum and its rear end connects to the thoracic vertebrae. The ribs protect the thoracic cavity, the lungs and the heart of the human body, so they are very important to the other organs of the chest, and rib detection is therefore particularly important.
In the related art, rib detection is performed by scanning the ribs of a human body with a scanning device, reconstructing the scanned data to obtain a rib image, and then inputting the obtained rib image into a neural network model for processing to obtain a result indicating whether the ribs are diseased.
However, rib detection by the above technique is time-consuming, and its detection results are inaccurate.
Disclosure of Invention
In view of the above, it is desirable to provide a rib image detection method, apparatus, computer device and storage medium capable of reducing detection time and improving detection accuracy.
A rib image detection method, the method comprising:
acquiring an original medical image, the original medical image including ribs;
expanding the ribs in the original medical image to obtain expanded images of the ribs;
inputting the expanded image of the rib into a first neural network model for processing to obtain a candidate position of a region of interest;
performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image area is an image area including candidate positions of the region of interest on the original medical image;
and inputting the candidate image area into a second neural network model for processing to obtain the target position of the region of interest.
A rib image detecting apparatus, comprising:
an acquisition module for acquiring an original medical image, the original medical image comprising ribs;
the unfolding module is used for unfolding the ribs in the original medical image to obtain an unfolded image of the ribs;
the first processing module is used for inputting the expanded image of the rib into a first neural network model for processing to obtain a candidate position of a region of interest;
the mapping module is used for carrying out region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes a candidate location of the region of interest;
and the second processing module is used for inputting the candidate image area into a second neural network model for processing to obtain the target position of the region of interest.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring an original medical image, the original medical image including ribs;
expanding the ribs in the original medical image to obtain expanded images of the ribs;
inputting the expanded image of the rib to a first neural network model for processing to obtain a candidate position of a region of interest;
performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image comprising a candidate position of the region of interest;
And inputting the candidate image area into a second neural network model for processing to obtain the target position of the region of interest.
A readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an original medical image, the original medical image including ribs;
expanding the ribs in the original medical image to obtain expanded images of the ribs;
inputting the expanded image of the rib to a first neural network model for processing to obtain a candidate position of a region of interest;
performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image comprising a candidate position of the region of interest;
And inputting the candidate image area into a second neural network model for processing to obtain the target position of the region of interest.
According to the rib image detection method, apparatus, computer device and storage medium described above, an original medical image including ribs is acquired and the ribs are unfolded; the unfolded rib image is input into a first neural network model to obtain candidate positions of the region of interest; the original medical image is divided into regions according to the candidate positions of the region of interest to obtain candidate image regions that include those candidate positions; and the candidate image regions are input into a second neural network model to obtain the target position of the region of interest. Because a two-stage network is used to detect the region of interest, the detection accuracy of the method is high. Because the unfolded rib image is used for the initial localization of the region of interest, the candidate positions of the region of interest can be obtained quickly, which saves part of the detection time. Meanwhile, because the fine detection takes as input the candidate image regions obtained by mapping the candidate positions back onto the original image, the accuracy of the input image source is guaranteed, and the finally obtained target position of the region of interest is more accurate.
Drawings
FIG. 1 is a diagram illustrating an internal structure of a computer device according to an embodiment;
FIG. 2 is a flow diagram illustrating a method for rib image detection in one embodiment;
FIG. 3 is a flowchart illustrating a rib image detection method according to another embodiment;
FIG. 4a is a schematic flowchart of a rib image detection method according to another embodiment;
FIG. 4b is a schematic diagram illustrating a detailed process for expanding ribs according to another embodiment;
FIG. 5a is a schematic flowchart of a rib image detection method according to another embodiment;
FIG. 5b is a schematic diagram of the structure of a classification model in another embodiment;
FIG. 6 is a flowchart illustrating a rib image detection method according to another embodiment;
FIG. 7a is a schematic flowchart of a rib image detection method according to another embodiment;
FIG. 7b is a schematic diagram of a first neural network model according to another embodiment;
FIG. 8 is a block diagram showing the structure of a rib image detection apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
At present, the diagnosis of rib lesions is a very important aspect of clinical chest diagnosis and treatment, and many research groups and manufacturers have developed systems for detecting rib lesions. However, ribs are oblique, curved, tubular structures: true rib regions occupy very little of the image and are difficult to separate effectively from the other regions, and the proportion of true lesion regions on the ribs is also relatively small. When rib detection is performed, a scanning device is usually used to scan the ribs of the human body, the scanned data are reconstructed to obtain a rib image, and the obtained rib image is then input into a neural network model for processing to obtain a result indicating whether the ribs are diseased; as noted above, such detection is time-consuming and its results are inaccurate. The embodiments of the present application provide a rib image detection method, a rib image detection apparatus, a computer device and a storage medium, which aim to solve these problems in the prior art.
The rib image detection method provided by the embodiments of the present application can be applied to a computer device whose internal structure may be as shown in fig. 1. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a rib image detection method. The display screen of the computer device can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device can be a touch layer covering the display screen, a key, a track ball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse, among others.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
The execution subject of the embodiment of the present application may be a rib image detection apparatus or a computer device, and the following embodiment will be described with the computer device as the execution subject.
In one embodiment, a rib image detection method is provided, and the embodiment relates to a specific process of how to unfold ribs and perform secondary detection on a rib unfolded image. As shown in fig. 2, the method may include the steps of:
s202, acquiring an original medical image, wherein the original medical image comprises ribs.
The manner of acquiring the original medical image may include: an original medical image of an object is obtained by performing image reconstruction and correction on data of the object acquired by a scanning apparatus, which may be an MR apparatus (magnetic resonance), a CT apparatus (Computed Tomography), a PET apparatus (Positron Emission Tomography), a PET-CT apparatus, a PET-MR apparatus, or the like; or, the original medical image can be reconstructed and corrected in advance, stored in the computer device, and when the original medical image needs to be processed, the original medical image is directly read from the memory of the computer device; alternatively, the computer device may also obtain the original medical image from an external device, for example, store the original medical image in a cloud, and when a processing operation needs to be performed, the computer device obtains the original medical image from the cloud. The present embodiment does not limit the acquisition mode for acquiring the original medical image.
Specifically, the computer device may acquire the original medical image by the above means, where the original medical image may include ribs, a vertebral body, or other structures.
S204, expanding the ribs in the original medical image to obtain an expanded image of the ribs.
When the ribs are unfolded, some of the ribs or all of the ribs may be unfolded; this embodiment mainly unfolds all of the ribs. When unfolding, the ribs may be unfolded along the ribs themselves, along the central vertebral body of the ribs, or along other directions, among others.
Specifically, when the rib is unfolded, the rib, the vertebral body, and the like in the original medical image may be detected to obtain a rib segmentation result and a vertebral body detection result, and the data of the original medical image may be stretched and flattened according to the rib segmentation result and the vertebral body detection result to obtain an unfolded image of the rib. Compared with the original medical image, the expansion image of the rib is more convenient for the subsequent neural network model processing, and the time consumption of calculation can also be reduced.
S206, inputting the expanded image of the rib into the first neural network model for processing to obtain the candidate position of the region of interest.
The candidate position of the region of interest may be one position or a plurality of positions, and the candidate position may be a coordinate, which may be a one-dimensional coordinate, a two-dimensional coordinate, a three-dimensional coordinate, or the like; the first neural network model may be a segmentation model, which may be, for example, a V-Net model, a U-Net model, or the like; the region of interest may be a lesion on a rib, etc., since there may be more than one lesion on a rib, and since the first detected region of interest may also have an inaccuracy problem, there may be a plurality of candidate positions.
Before the unfolded image of the ribs is input into the first neural network model, the unfolded image of the ribs may be preprocessed, for example by normalization. The normalization may use a bone window, or a mean and standard deviation, among other methods. If a bone window is used, normalization may be performed with the window width and window level of bone tissue: a maximum value Imax (for example 1000) and a minimum value Imin (for example 0) are set, and normalization is performed by the formula Ic = (Ic - Imin)/(Imax - Imin). If a mean and standard deviation are used, the image may be processed as (Ic - mean)/standard deviation, or a fixed threshold may be used.
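No code appears in the patent; the following is only a minimal sketch of the bone-window and mean/standard-deviation normalizations described above, where the function names, default window values and clipping behavior are our own assumptions:

    import numpy as np

    def normalize_bone_window(image, i_min=0.0, i_max=1000.0):
        """Normalize intensities to [0, 1] with an assumed bone window [Imin, Imax]: Ic = (Ic - Imin)/(Imax - Imin)."""
        image = np.clip(image.astype(np.float32), i_min, i_max)
        return (image - i_min) / (i_max - i_min)

    def normalize_zscore(image):
        """Alternative normalization: (Ic - mean) / standard deviation."""
        image = image.astype(np.float32)
        return (image - image.mean()) / (image.std() + 1e-8)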
Specifically, after obtaining the expanded image of the rib, the computer device may input the expanded image of the rib into the first neural network model for segmentation or detection processing, so as to obtain a candidate position of the region of interest on the expanded image of the rib.
S208, performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image comprising candidate locations of the region of interest.
Specifically, after obtaining the candidate position of the region of interest, the computer device may place the unfolded rib image and the original medical image in the same coordinate system and map the candidate position of the region of interest onto the original medical image, that is, find the corresponding candidate position of the region of interest on the original medical image. The original medical image is then divided into regions, centered on the found candidate position and using a certain step size, to obtain an image region that includes the candidate position of the region of interest, denoted a candidate image region; the candidate image region may also be a candidate image block. The step size here may be an integer value between 20 mm and 64 mm, but other step sizes are possible. In addition, the candidate image region or candidate image block may be two-dimensional or three-dimensional, and there may be one or more of them; this embodiment mainly uses a plurality of candidate image regions or candidate image blocks.
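As a rough sketch only (the block size, voxel indexing and zero-padding behavior are assumptions, not taken from the patent), dividing out a candidate image block centered on a mapped candidate position could look like this:

    import numpy as np

    def crop_candidate_block(volume, center, block_size=(48, 48, 48)):
        """Crop a zero-padded block of block_size voxels centered on `center` (z, y, x) from a 3D volume."""
        block = np.zeros(block_size, dtype=volume.dtype)
        src_slices, dst_slices = [], []
        for c, size, dim in zip(center, block_size, volume.shape):
            start = int(round(c)) - size // 2
            src_lo, src_hi = max(start, 0), min(start + size, dim)
            dst_lo = src_lo - start
            dst_hi = dst_lo + (src_hi - src_lo)
            src_slices.append(slice(src_lo, src_hi))
            dst_slices.append(slice(dst_lo, dst_hi))
        block[tuple(dst_slices)] = volume[tuple(src_slices)]
        return block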
And S210, inputting the candidate image area into a second neural network model for processing to obtain the target position of the region of interest.
The second neural network model may be a classification model, a segmentation model, or the like, and may be a convolutional neural network, among others. Before the candidate image regions are input into the second neural network model, they may be resampled to image regions with a preset resolution, for example 0.4 mm × 0.4 mm to 1.0 mm × 1.0 mm, which facilitates subsequent uniform processing of multiple candidate image regions.
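A minimal sketch of the resampling step, assuming the voxel spacing of the block is known and using scipy; the target spacing of 0.8 mm is just one value inside the 0.4-1.0 mm range mentioned above:

    from scipy import ndimage

    def resample_to_spacing(block, current_spacing, target_spacing=(0.8, 0.8, 0.8)):
        """Resample a 3D block from current_spacing (mm per voxel) to target_spacing with linear interpolation."""
        zoom_factors = [c / t for c, t in zip(current_spacing, target_spacing)]
        return ndimage.zoom(block, zoom_factors, order=1)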
Specifically, after obtaining the candidate image region, the computer device may input the candidate image region to the second neural network model for segmentation processing or classification processing, and determine a target position, i.e., a more accurate position, of the region of interest from the candidate positions of the region of interest.
In the rib image detection method, an original medical image including ribs is obtained, the ribs are unfolded, after the ribs are unfolded, a rib unfolding image is input into a first neural network model, candidate positions of a region of interest are obtained, region division is performed on the original medical image according to the candidate positions of the region of interest, a candidate image region including the candidate positions of the region of interest is obtained, and the candidate image region is input into a second neural network model, so that a target position of the region of interest is obtained. In the method, because the two-stage network is adopted for detection when the region of interest is detected, the detection precision of the method is higher; in addition, because the method adopts the rib unfolding image when the interested area is initially positioned, the candidate position of the interested area can be quickly obtained, namely, a part of detection time can be saved; meanwhile, when the interesting region is subjected to fine detection, a candidate image region obtained by mapping the candidate position onto the original image is used as the input of the fine detection, so that the accuracy of an input image source can be ensured, and the finally obtained target position of the interesting region can be more accurate.
In another embodiment, another rib image detection method is provided, and the embodiment relates to a specific process of specifically unfolding the ribs to obtain an unfolded rib image. On the basis of the above embodiment, as shown in fig. 3, the above 204 may include the following steps:
s302, the original medical image is detected and processed to obtain a rib segmentation result and at least one centrum center point.
Specifically, the computer device may obtain the rib segmentation result by manually labeling the rib region, by manual segmentation, or by inputting the original medical image into a segmentation model. If a segmentation model is used, it may be trained on sample images with corresponding rib labels, rib centerline labels and the like, so that it can segment the ribs and the rib centerlines simultaneously; in that case the rib segmentation result includes both the rib segmentation result and the rib centerline segmentation result. In addition, the computer device may locate the vertebral bodies in the original medical image by a vertebral body detection, localization and labeling method to obtain at least one vertebral body center point, generally a plurality of vertebral body center points.
S304, analyzing and processing the center point of at least one vertebral body, and determining the target direction of the vertebral body corresponding to the center point of at least one vertebral body.
Here, the analysis process may include a fitting process, a principal component analysis process, and the like.
Specifically, the computer device may analyze and process the plurality of vertebral body center points obtained in the previous step, determine the direction that best fits the actual orientation of the vertebral bodies, and use this direction as the target direction.
S306, unfolding the ribs in the original medical image according to the target direction of the vertebral body to obtain an unfolded image of the ribs.
In this step, when specifically unfolding the ribs, optionally, the method steps shown in fig. 4a may be adopted; the unfolding steps include the following S402-S404:
s402, establishing a coordinate system based on the target direction of the vertebral body and at least one central point of the vertebral body, and determining a distance image of the rib segmentation result in the coordinate system.
The coordinate system may be a cylindrical coordinate system, a polar coordinate system, or the like; this embodiment mainly uses a cylindrical coordinate system. When the coordinate system is established, it may be based on one detected vertebral body center point, on all of the detected vertebral body center points, or on some of the detected vertebral body center points, which is not specifically limited in this embodiment. The distance image records, for the rib segmentation result, the distances from the coordinate center in the established coordinate system.
Specifically, the computer device may establish a cylindrical coordinate system according to the detected center points of the plurality of vertebral bodies and the target direction of the vertebral body, calculate the distance of the rib segmentation result from the coordinate center in the cylindrical coordinate system to obtain a plurality of distance values, and form a distance image from these distance values. Each plane of the cylinder is a polar-coordinate plane, and all planes are parallel.
S404, according to the distance image and the original medical image of the rib segmentation result in the coordinate system, establishing a mapping relation between the distance image and the original medical image, and according to the mapping relation, mapping the original medical image to the distance image to obtain an expanded image of the rib.
In this step, when the mapping relationship is established, optionally, the following steps a and B may be adopted to establish:
and step A, performing interpolation and smoothing processing on the distance image of the rib segmentation result in the coordinate system to obtain a processed distance image.
And step B, establishing a mapping relation between the processed distance image and the original medical image according to the processed distance image and the original medical image.
Specifically, the distance image obtained in the previous step may contain points with very abrupt contrast, which would make the subsequently obtained unfolded rib image look abrupt as well, so the distance image is first interpolated; linear interpolation, non-linear interpolation, spline interpolation or the like may be used. After the interpolation, the distance image may be further smoothed, for example by convolution, to obtain the interpolated and smoothed distance image. Each distance value on the processed distance image can then be mapped back to the original medical image: the original medical image is in a Cartesian coordinate system and the processed distance image is in a cylindrical coordinate system, and the mapping between the cylindrical coordinate system and the Cartesian coordinate system can be obtained by relating each distance value on the processed distance image to the data of each point on the original medical image; that is, the mapping relationship between the processed distance image and the original medical image can be obtained. This mapping relationship is analogous to a conversion relationship or a conversion matrix.
After the mapping relationship is obtained, the values in the rib segmentation result may be traversed, and processing such as interpolation is performed in the corresponding original medical image to obtain an expanded image of the rib.
The detailed process of unfolding the ribs in the embodiment of the present application is given below using a specific illustration; referring to fig. 4b, the specific unfolding process of the ribs is as follows:
the computer device, after acquiring the raw medical data, may perform the following steps 1) -6), as follows:
1) the original medical image is detected to obtain the rib segmentation result and at least one centrum center point, as shown in fig. 4 b.
2) A cylindrical coordinate system is established according to the detected vertebral body center points, where each plane of the cylinder is a polar-coordinate plane and all planes are parallel. Principal component analysis or similar processing is performed on the vertebral body center points, and the direction of the largest component is taken as the normal vector z of all the two-dimensional polar-coordinate planes. A straight line is interpolated through the vertebral body center points, and a point on this line, shifted about 10-50 mm towards the sternum, is taken as the center of the polar-coordinate plane that intersects the line. The resulting cylindrical coordinates are shown in diagram (b) of fig. 4b: the coordinate along the normal vector is denoted z, and the parameters on the polar-coordinate plane corresponding to each z are denoted θ and ρ, where ρ is the rib radius. Each distance field is equivalent to one layer of the image in the three-dimensional cylindrical coordinate system, and a neighbouring distance field can be obtained by offsetting the radius value.
3) Using the established segmentation result and the cylindrical coordinate system, the distance ρ_{z,θ} of the segmentation result from the coordinate center in the cylindrical coordinate system is determined. With z as the ordinate and θ as the abscissa, the ρ_{z,θ} corresponding to the rib segmentation result is shown in diagram (c) of fig. 4b. Specifically, for each z and θ, all possible ρ may be traversed: if the rib segmentation contains the corresponding ρ, that ρ is recorded; if no ρ contains rib segmentation, the corresponding ρ_{z,θ} is marked as 0; and if several ρ contain rib segmentation, the minimum value may be taken. (A minimal code sketch of steps 3)-5) is given after this list.)
4) To fill in ρ_{z,θ} and make the resulting unfolded ribs as smooth as possible, the ρ_{z,θ} obtained in 3) may be interpolated and smoothed. Linear interpolation may be used inside the rib region and nearest-neighbour interpolation outside it; to smooth the image, a large-scale Gaussian kernel may be chosen for convolution. The interpolated and smoothed result ρ'_{z,θ} is shown in diagram (d) of fig. 4b.
5) ρ'_{z,θ} is projected back into the original medical image data; the plane corresponding to ρ'_{z,θ} can be seen to be smooth and to fit the ribs closely, as shown in diagram (e) of fig. 4b. Using the mapping relationship between the cylindrical coordinates and the Cartesian coordinate system, z and θ can be traversed and the original medical image (or the rib segmentation image) interpolated at (z, θ, ρ'_{z,θ}) to form an unfolded two-dimensional rib image (or segmentation image), as shown in diagram (f) of fig. 4b.
6) An offset is added to or subtracted from all ρ'_{z,θ} to obtain a plurality of two-dimensional unfolded rib images, and these unfolded rib images are stacked or stitched to obtain a three-dimensional unfolded rib image.
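The patent contains no source code; purely as an illustrative sketch under assumptions of our own (a binary rib mask already resampled onto a (z, θ, ρ) cylindrical grid, a caller-supplied cylindrical-to-Cartesian mapping, and arbitrary kernel sizes), steps 3)-5) could be approximated as follows:

    import numpy as np
    from scipy import ndimage

    def radial_distance_field(rib_mask_cyl):
        """rib_mask_cyl: binary mask on a (z, theta, rho) grid. For each (z, theta),
        take the smallest rho index containing rib, else 0 (step 3)."""
        z_dim, t_dim, _ = rib_mask_cyl.shape
        rho = np.zeros((z_dim, t_dim), dtype=np.float32)
        has_rib = rib_mask_cyl.any(axis=2)
        first_rho = rib_mask_cyl.argmax(axis=2)      # index of first nonzero rho
        rho[has_rib] = first_rho[has_rib]
        return rho, has_rib

    def smooth_distance_field(rho, has_rib, sigma=5.0):
        """Nearest-neighbour fill outside the rib region, then large Gaussian smoothing (step 4)."""
        filled = rho
        if (~has_rib).any():
            _, idx = ndimage.distance_transform_edt(~has_rib, return_indices=True)
            filled = rho[tuple(idx)]                 # copy value of the nearest rib pixel
        return ndimage.gaussian_filter(filled, sigma=sigma)

    def sample_unfolded_image(volume, cyl_to_cart, rho_smooth):
        """Sample the original volume at (z, theta, rho') to build one unfolded 2D image (step 5).
        cyl_to_cart(z_idx, t_idx, rho) maps cylindrical indices to voxel coordinates (assumed given)."""
        z_dim, t_dim = rho_smooth.shape
        coords = np.zeros((3, z_dim, t_dim), dtype=np.float32)
        for zi in range(z_dim):
            for ti in range(t_dim):
                coords[:, zi, ti] = cyl_to_cart(zi, ti, rho_smooth[zi, ti])
        return ndimage.map_coordinates(volume, coords, order=1)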
In the rib image detection method provided by this embodiment, a rib segmentation result and at least one centrum center point are obtained by performing detection processing on an original medical image, the at least one centrum center point is analyzed, a target direction of a centrum corresponding to the at least one centrum center point is obtained, and the original medical image is unfolded according to the target direction of the centrum, so as to obtain an unfolded image of a rib. In this embodiment, since the expanded image of the rib is expanded according to the rib segmentation result and the target direction of the vertebral body, the expanded image of the rib obtained in this embodiment is closer to the actual rib condition, that is, the expanded image of the rib obtained in this embodiment is more accurate.
In another embodiment, another rib image detection method is provided, and the embodiment relates to a specific process of how to process a candidate image region by using a classification model to obtain a target position of a region of interest if the second neural network model is the classification model. On the basis of the above embodiment, as shown in fig. 5a, the above S210 may include the following steps:
s502, inputting the candidate image area into a classification model to obtain the category of the candidate image area; the categories include a target category and a non-target category.
Wherein the target category may be, for example, true positive, and the non-target category may be false positive. The category output here may be identified as 0 or 1.
Specifically, after obtaining the candidate image regions (the computer device generally obtains a plurality of them), the computer device may apply the classification model shown in fig. 5b to each candidate image region: downsampling convolutions extract the features of each candidate image region, and a fully connected layer followed by softmax classifies these features to obtain the category of each candidate image region.
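The patent does not give the exact architecture or weights, so the following is only a rough sketch of such a classifier in PyTorch; the channel counts, depth and input shape (batch, 1, D, H, W) are invented for illustration:

    import torch
    import torch.nn as nn

    class CandidateClassifier(nn.Module):
        """Toy 3D CNN: downsampling convolutions, then a fully connected layer and softmax."""
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x):                        # x: (batch, 1, D, H, W)
            feats = self.features(x).flatten(1)      # (batch, 64)
            logits = self.classifier(feats)
            return torch.softmax(logits, dim=1)      # per-class probabilities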
S504, a target candidate image area corresponding to the target category is obtained, a candidate position corresponding to the target candidate image area is determined from the original medical image, and the candidate position corresponding to the target candidate image area is determined as the target position of the interesting area.
Specifically, after obtaining the category of each candidate image region, the computer device may find a candidate image region corresponding to the target category according to the category identifier, record the candidate image region as the target candidate image region, map the target candidate image region back to the original medical image, obtain a candidate position corresponding to the target candidate image region, and use the candidate position corresponding to the target candidate image region as a fine position of the region of interest, that is, a target position.
In addition, in this embodiment, the classification model may be trained before it is used. During training, sample images may be obtained (a sample image may be a sample image region or a sample image block) and each sample image may be augmented; the augmentation includes translation (a random range of plus or minus 10 mm in each of the three directions), rotation (a random angle around a random rotation axis, within plus or minus 20 degrees), scaling (random scaling by 0.7-1.3 times), and the like. Each sample image has been labeled with a category, and the initial classification model can then be trained based on the augmented sample images and the labeled categories to obtain the classification model.
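Such augmentation is standard practice; a minimal sketch using scipy follows, where the function names, the assumption of isotropic voxel spacing and the simplification of the rotation to an in-plane rotation are our own, not the patent's implementation:

    import numpy as np
    from scipy import ndimage

    def augment_block(block, spacing_mm=1.0):
        """Randomly translate (about ±10 mm), rotate (±20°) and scale (0.7-1.3x) a 3D sample block."""
        # random translation, converted from millimetres to voxels
        shift = np.random.uniform(-10.0, 10.0, size=3) / spacing_mm
        out = ndimage.shift(block, shift, order=1)
        # random rotation in the plane spanned by two randomly chosen axes
        axes = tuple(np.random.choice(3, size=2, replace=False))
        angle = np.random.uniform(-20.0, 20.0)
        out = ndimage.rotate(out, angle, axes=axes, reshape=False, order=1)
        # random isotropic scaling; zoom changes the array shape,
        # so in practice the result would be cropped or padded back to the original size
        scale = np.random.uniform(0.7, 1.3)
        return ndimage.zoom(out, scale, order=1)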
In the rib image detection method provided by this embodiment, if the second neural network model is a classification model, the candidate image regions are input to the classification model to obtain the categories of the candidate image regions, where the categories include a target category and a non-target category, the target candidate image region corresponding to the target category is obtained, and the candidate position corresponding to the target candidate image region is determined from the original medical image and is used as the target position of the region of interest. In this embodiment, since each candidate image region can be processed by the classification model, the category of each candidate image region can be obtained quickly and accurately, so that the candidate image region corresponding to the target category is accurately obtained according to the category of each candidate image region, and the target position of the region of interest is accurately obtained.
In another embodiment, another rib image detection method is provided, and the embodiment relates to a specific process of how to process a candidate image region by using a segmentation model to obtain a target position of a region of interest if the second neural network model is the segmentation model. On the basis of the above embodiment, as shown in fig. 6, the above S210 may include the following steps:
s602, inputting the candidate image area into the segmentation model to obtain the initial target position of the interested area.
The segmentation model here can be a graph cut algorithm model, a watershed algorithm model, a GrabCut algorithm model, a machine learning model, etc.
Specifically, after obtaining the candidate image regions (the computer device generally obtains a plurality of them), each candidate image region may be processed by convolution to obtain the fine detection position of the region of interest, which is referred to as the initial target position.
S604, fusing the initial target position of the region of interest and the candidate position of the region of interest to obtain the target position of the region of interest.
Specifically, if the first neural network model is also a segmentation model, the outputs of the first neural network model and of the segmentation model here may both be probability maps. The fusion processing may then be: the probability map detected by the first neural network model is mapped onto the original medical image to obtain a probability map I1, and the probability map detected by the segmentation model here is denoted I2; the final probability map is I = a × I1 + b × I2, where a + b = 1, 0 < a < 1 and 0 < b < 1. According to the final probability map and the corresponding position calculation method, the target position of the region of interest can be obtained.
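A trivially small sketch of this weighted fusion, assuming both probability maps are already aligned on the original image grid and the weight a = 0.5 is only a placeholder:

    def fuse_probability_maps(p1, p2, a=0.5):
        """Final probability map I = a*I1 + b*I2 with b = 1 - a, 0 < a < 1."""
        b = 1.0 - a
        return a * p1 + b * p2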
Optionally, after the target position of the region of interest is obtained, connected-component extraction may be performed on the target position of the region of interest to obtain the size of the region of interest, and threshold segmentation may be performed on the size of the region of interest to obtain the category of the region of interest. That is, after the final probability map is obtained, a suitable threshold (selectable in the range 0.3-0.9) may be chosen, the final probability map is converted into a binarized map, and morphological operations, such as selecting connected components, are performed on the binarized map to obtain the size and the category of the region of interest.
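A minimal sketch of the thresholding and connected-component step; the default threshold and the size measure (voxel count times voxel volume) are assumptions:

    import numpy as np
    from scipy import ndimage

    def extract_regions(prob_map, threshold=0.5, voxel_volume_mm3=1.0):
        """Binarize the fused probability map and measure the size of each connected component."""
        binary = prob_map >= threshold                          # threshold chosen in the 0.3-0.9 range
        labels, n = ndimage.label(binary)                       # connected-component labelling
        sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1)) * voxel_volume_mm3
        return labels, sizes                                    # label map and per-component volume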
In addition, in this embodiment, the segmentation model may be trained before it is used. During training, sample images may be obtained (a sample image may be a sample image region or a sample image block) and each sample image may be augmented; the augmentation includes translation (a random range of plus or minus 10 mm in each of the three directions), rotation (a random angle around a random rotation axis, within plus or minus 20 degrees), scaling (random scaling by 0.7-1.3 times), and the like. Each sample image has its region of interest labeled, and the initial segmentation model can then be trained based on the augmented sample images and the labeled regions of interest to obtain the segmentation model.
In the rib image detection method provided by this embodiment, if the second neural network model is a segmentation model, the candidate image region is input to the segmentation model to obtain the initial target position of the region of interest, and the initial target position of the region of interest and the candidate position of the region of interest are subjected to fusion processing to obtain the target position of the region of interest. In this embodiment, since the rib can be finely detected by the two-stage segmentation model, the finally obtained target position of the region of interest of the rib can be more accurate.
In another embodiment, another rib image detection method is provided, and the embodiment relates to a specific process of how to train the first neural network model. On the basis of the above embodiment, as shown in fig. 7a, the training method of the first neural network model includes the following steps:
s702, obtaining a sample image; the sample image is an expanded sample image of the rib, and the sample image comprises the labeling position of the region of interest.
And S704, carrying out normalization processing on the sample image to obtain a normalized sample image.
S706, training the initial first neural network model based on the normalized sample image to obtain the first neural network model.
The number of sample images used in this embodiment may be one or more, and each sample image contains the labeled position information of the region of interest. The first neural network model here can be a segmentation model such as V-Net or U-Net. The training process can use the Adam method, stochastic gradient descent (SGD), and the like. In addition, each sample image can be augmented during training; the augmentation includes translation (a random range of plus or minus 50 mm in each of the three directions), rotation (a random angle around a random rotation axis, within plus or minus 20 degrees), scaling (random scaling by 0.7-1.3 times), and the like.
Specifically, the computer device may acquire the sample medical images in the same way as the original medical image is acquired in S202, which is not repeated here. In addition, during training, random blocks may be sampled from the sample image and interpolated; the size of a random block ranges from 40 × 40 to 100 × 100 and its resolution from 0.4 mm × 0.4 mm to 1.0 mm × 1.0 mm. Each sample image can then be normalized in size, gray scale, pixel values and the like, fixing them within a uniform range, to obtain the normalized sample images.
Then, each normalized sample medical image may be input into the initial first neural network model, whose structure may be as shown in fig. 7b, to obtain the predicted position of the region of interest for each sample medical image. The loss between the labeled position of the region of interest and the predicted position of the region of interest is calculated, this loss is used as the value of the loss function, and the initial first neural network model is trained with the value of the loss function until the trained first neural network model is obtained. Here, the loss may be an error, a variance, a norm or the like between the predicted position and the labeled position of the region of interest. When the sum of the loss functions of the first neural network model is smaller than a preset threshold, or when the sum of the loss functions is essentially stable (i.e., no longer changes), the first neural network model can be considered trained; otherwise, training continues.
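A minimal training-loop sketch under assumptions of our own: a generic segmentation network passed in as `model`, a binary cross-entropy loss between the predicted and labeled region-of-interest maps, and the Adam optimizer mentioned above; none of these specific choices are prescribed by the patent:

    import torch
    import torch.nn as nn

    def train_first_network(model, loader, epochs=10, lr=1e-4, loss_threshold=0.01):
        """loader yields (unfolded_rib_image, roi_label_mask) float tensor pairs."""
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        criterion = nn.BCEWithLogitsLoss()                 # loss between predicted and labeled positions
        for epoch in range(epochs):
            total_loss = 0.0
            for image, label in loader:
                optimizer.zero_grad()
                pred = model(image)                        # predicted ROI map (logits)
                loss = criterion(pred, label)
                loss.backward()
                optimizer.step()
                total_loss += loss.item()
            if total_loss < loss_threshold:                # stop once the summed loss is small enough
                break
        return model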
In the rib image detection method provided by this embodiment, a sample image is obtained, the sample image is an expanded sample image of a rib, the sample image includes an annotated position of an area of interest, the sample image is normalized to obtain a normalized sample image, and an initial first neural network model is trained based on the normalized sample image to obtain the first neural network model. In this embodiment, since the first neural network model is obtained by training the sample medical image including the labeled position of the region of interest, the obtained first neural network model is relatively accurate, and further, when the rib expansion image is processed by using the accurate network, the obtained processing result is relatively accurate.
It should be understood that although the various steps in the flowcharts of figs. 2-4, 5a, 6 and 7a are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-4, 5a, 6 and 7a may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time but may be performed at different times; the order of performing these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided a rib image detecting apparatus including: an obtaining module 10, an unfolding module 11, a first processing module 12, a mapping module 13, and a second processing module 14, wherein:
an acquisition module 10 for acquiring an original medical image, the original medical image comprising ribs;
the unfolding module 11 is configured to perform unfolding processing on the ribs in the original medical image to obtain an unfolded image of the ribs;
the first processing module 12 is configured to input the expanded image of the rib into a first neural network model for processing, so as to obtain a candidate position of an area of interest;
a mapping module 13, configured to perform region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes a candidate location of the region of interest;
and the second processing module 14 is configured to input the candidate image region to a second neural network model for processing, so as to obtain a target position of the region of interest.
For specific limitations of the rib image detection apparatus, reference may be made to the above limitations of the rib image detection method, which are not described herein again.
In another embodiment, another rib image detection apparatus is provided, and on the basis of the above embodiment, the expansion module 11 includes: a detection unit, an analysis unit and an unfolding unit, wherein:
the detection unit is used for detecting and processing the original medical image to obtain a rib segmentation result and at least one centrum central point;
the analysis unit is used for analyzing and processing the center point of the at least one vertebral body and determining the target direction of the vertebral body corresponding to the center point of the at least one vertebral body;
and the unfolding unit is used for unfolding the ribs in the original medical image according to the target direction of the vertebral body to obtain an unfolded image of the ribs.
Optionally, the unfolding unit may include: determining a sub-cell and unfolding the sub-cell, wherein:
the determining subunit is used for establishing a coordinate system based on the target direction of the vertebral body and the central point of the at least one vertebral body, and determining a distance image of the rib segmentation result in the coordinate system;
and the expansion subunit is used for establishing a mapping relation between the distance image and the original medical image according to the distance image of the rib segmentation result in the coordinate system and the original medical image, and mapping the original medical image onto the distance image according to the mapping relation to obtain an expanded image of the rib.
Optionally, the expansion subunit is further configured to perform interpolation and smoothing processing on the distance image of the rib segmentation result in the coordinate system to obtain a processed distance image; and establishing a mapping relation between the processed distance image and the original medical image according to the processed distance image and the original medical image.
In another embodiment, another rib image detecting apparatus is provided, based on the above embodiment, if the second neural network model is a classification model, the second processing module 14 includes: a classification unit and a first determination unit, wherein:
the classification unit is used for inputting the candidate image area into the classification model to obtain the category of the candidate image area; the categories include a target category and a non-target category;
a first determining unit, configured to acquire a target candidate image region corresponding to the target category, determine a candidate position corresponding to the target candidate image region from the original medical image, and determine the candidate position corresponding to the target candidate image region as a target position of the region of interest.
In another embodiment, another rib image detecting apparatus is provided, in addition to the above embodiment, if the second neural network model is a segmentation model, the second processing module 14 includes: a segmentation unit and a second determination unit, wherein:
the segmentation unit is used for inputting the candidate image area into the segmentation model to obtain an initial target position of the region of interest;
and the second determining unit is used for carrying out fusion processing on the initial target position of the region of interest and the candidate position of the region of interest to obtain the target position of the region of interest.
Optionally, the second processing module 14 further includes a third determining unit, configured to perform connected-component extraction on the target position of the region of interest to obtain the size of the region of interest, and to perform threshold segmentation on the size of the region of interest to obtain the category of the region of interest.
In another embodiment, another rib image detection apparatus is provided, and on the basis of the above embodiment, the apparatus may further include a training module, where the training module is configured to obtain a sample image; the sample image is an expanded sample image of a rib, and the sample image comprises an annotation position of an interested region; carrying out normalization processing on the sample image to obtain a normalized sample image; and training an initial first neural network model based on the normalized sample image to obtain the first neural network model.
For specific limitations of the rib image detection apparatus, reference may be made to the above limitations of the rib image detection method, which are not described herein again.
The modules in the rib image detection device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring an original medical image, the original medical image including ribs;
expanding the ribs in the original medical image to obtain expanded images of the ribs;
inputting the expanded image of the rib to a first neural network model for processing to obtain a candidate position of a region of interest;
performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes a candidate location of the region of interest;
and inputting the candidate image area into a second neural network model for processing to obtain the target position of the region of interest.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
detecting and processing the original medical image to obtain a rib segmentation result and at least one centrum central point;
analyzing and processing the center point of the at least one vertebral body, and determining the target direction of the vertebral body corresponding to the center point of the at least one vertebral body;
and unfolding the ribs in the original medical image according to the target direction of the vertebral body to obtain an unfolded image of the ribs.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
establishing a coordinate system based on the target direction of the vertebral body and the central point of the at least one vertebral body, and determining a distance image of the rib segmentation result in the coordinate system;
and according to the distance image and the original medical image of the rib segmentation result in the coordinate system, establishing a mapping relation between the distance image and the original medical image, and according to the mapping relation, mapping the original medical image onto the distance image to obtain an expanded image of the rib.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing interpolation and smoothing processing on the distance image of the rib segmentation result in the coordinate system to obtain a processed distance image; and establishing a mapping relation between the processed distance image and the original medical image according to the processed distance image and the original medical image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
inputting the candidate image area into the classification model to obtain the category of the candidate image area; the categories include a target category and a non-target category;
acquiring a target candidate image area corresponding to the target category, determining a candidate position corresponding to the target candidate image area from the original medical image, and determining the candidate position corresponding to the target candidate image area as the target position of the interested area.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
inputting the candidate image area to the segmentation model to obtain an initial target position of the region of interest;
and carrying out fusion processing on the initial target position of the region of interest and the candidate position of the region of interest to obtain the target position of the region of interest.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
carrying out connected-component extraction on the target position of the region of interest to obtain the size of the region of interest;
and carrying out threshold segmentation processing on the size of the region of interest to obtain the category of the region of interest.
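A minimal sketch of the connected-component and size-threshold step follows; the voxel volume, the single size threshold, and the two category names are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def classify_rois_by_size(target_mask, voxel_volume_mm3=1.0, small_max_mm3=500.0):
    """Connected-component extraction on the binary target-position mask,
    followed by a size threshold that sorts each component into a category."""
    labeled, num = ndimage.label(target_mask)
    results = []
    for lab in range(1, num + 1):
        size_mm3 = np.count_nonzero(labeled == lab) * voxel_volume_mm3
        category = "small" if size_mm3 < small_max_mm3 else "large"
        results.append({"label": lab, "size_mm3": size_mm3, "category": category})
    return results
```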
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a sample image; the sample image is an expanded sample image of ribs, and the sample image includes an annotated position of a region of interest;
performing normalization on the sample image to obtain a normalized sample image;
and training an initial first neural network model based on the normalized sample image to obtain the first neural network model.
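The sketch below illustrates a possible normalization and training step in PyTorch; the intensity window, the tiny fully convolutional architecture, and the per-pixel binary-cross-entropy target built from the annotated positions are assumptions, not the first neural network model disclosed here.

```python
import numpy as np
import torch
from torch import nn

def normalize_sample(image, lo=-1000.0, hi=1000.0):
    """Clip CT-style intensities to an assumed window and rescale to [0, 1]."""
    image = np.clip(image.astype(np.float32), lo, hi)
    return (image - lo) / (hi - lo)

# Deliberately tiny stand-in for the first neural network model: a 2D fully
# convolutional net mapping a normalized expanded rib image to a per-pixel
# candidate heatmap.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(sample_image, annotation_mask):
    """One optimization step on a single (expanded image, annotated
    position mask) pair, both given as 2D NumPy arrays."""
    x = torch.from_numpy(normalize_sample(sample_image))[None, None]  # (1, 1, H, W)
    y = torch.from_numpy(annotation_mask.astype(np.float32))[None, None]
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return float(loss.item())
```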
In one embodiment, a readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an original medical image, the original medical image including ribs;
expanding the ribs in the original medical image to obtain an expanded image of the ribs;
inputting the expanded image of the ribs into a first neural network model for processing to obtain a candidate position of a region of interest;
performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes the candidate position of the region of interest;
and inputting the candidate image region into a second neural network model for processing to obtain the target position of the region of interest.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing detection processing on the original medical image to obtain a rib segmentation result and at least one vertebral body center point;
analyzing the at least one vertebral body center point to determine the target direction of the vertebral body corresponding to the at least one vertebral body center point;
and expanding the ribs in the original medical image according to the target direction of the vertebral body to obtain an expanded image of the ribs.
In one embodiment, the computer program when executed by the processor further performs the steps of:
establishing a coordinate system based on the target direction of the vertebral body and the at least one vertebral body center point, and determining a distance image of the rib segmentation result in the coordinate system;
and establishing a mapping relationship between the distance image and the original medical image according to the distance image of the rib segmentation result in the coordinate system and the original medical image, and mapping the original medical image onto the distance image according to the mapping relationship to obtain an expanded image of the ribs.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing interpolation and smoothing on the distance image of the rib segmentation result in the coordinate system to obtain a processed distance image;
and establishing a mapping relationship between the processed distance image and the original medical image according to the processed distance image and the original medical image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
inputting the candidate image region into the classification model to obtain the category of the candidate image region; the categories include a target category and a non-target category;
acquiring a target candidate image region corresponding to the target category, determining a candidate position corresponding to the target candidate image region from the original medical image, and determining that candidate position as the target position of the region of interest.
In one embodiment, the computer program when executed by the processor further performs the steps of:
inputting the candidate image region into the segmentation model to obtain an initial target position of the region of interest;
and performing fusion processing on the initial target position of the region of interest and the candidate position of the region of interest to obtain the target position of the region of interest.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing connected-component extraction on the target position of the region of interest to obtain the size of the region of interest;
and performing threshold segmentation on the size of the region of interest to obtain the category of the region of interest.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a sample image; the sample image is an expanded sample image of ribs, and the sample image includes an annotated position of a region of interest;
performing normalization on the sample image to obtain a normalized sample image;
and training an initial first neural network model based on the normalized sample image to obtain the first neural network model.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described, but as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A rib image detection method, comprising:
acquiring an original medical image, the original medical image including ribs;
expanding the ribs in the original medical image to obtain an expanded image of the ribs;
inputting the expanded image of the ribs into a first neural network model for processing to obtain a candidate position of a region of interest;
performing region division on the original medical image according to the candidate position of the region of interest to obtain a candidate image region; the candidate image region is an image region on the original medical image that includes the candidate position of the region of interest;
and inputting the candidate image region into a second neural network model for processing to obtain the target position of the region of interest.
2. The method of claim 1, wherein the expanding the ribs in the original medical image to obtain an expanded image of the ribs comprises:
performing detection processing on the original medical image to obtain a rib segmentation result and at least one vertebral body center point;
analyzing the at least one vertebral body center point to determine the target direction of the vertebral body corresponding to the at least one vertebral body center point;
and expanding the ribs in the original medical image according to the target direction of the vertebral body to obtain an expanded image of the ribs.
3. The method according to claim 2, wherein the expanding the ribs in the original medical image according to the target direction of the vertebral body to obtain an expanded image of the ribs comprises:
establishing a coordinate system based on the target direction of the vertebral body and the at least one vertebral body center point, and determining a distance image of the rib segmentation result in the coordinate system;
and establishing a mapping relationship between the distance image and the original medical image according to the distance image of the rib segmentation result in the coordinate system and the original medical image, and mapping the original medical image onto the distance image according to the mapping relationship to obtain an expanded image of the ribs.
4. The method according to claim 3, wherein the establishing a mapping relationship between the distance image and the original medical image according to the distance image of the rib segmentation result in the coordinate system and the original medical image comprises:
performing interpolation and smoothing on the distance image of the rib segmentation result in the coordinate system to obtain a processed distance image;
and establishing a mapping relationship between the processed distance image and the original medical image according to the processed distance image and the original medical image.
5. The method of claim 1, wherein the second neural network model is a classification model, and the inputting the candidate image region into the second neural network model for processing to obtain the target position of the region of interest comprises:
inputting the candidate image region into the classification model to obtain the category of the candidate image region; the categories include a target category and a non-target category;
acquiring a target candidate image region corresponding to the target category, determining a candidate position corresponding to the target candidate image region from the original medical image, and determining that candidate position as the target position of the region of interest.
6. The method of claim 1, wherein the second neural network model is a segmentation model, and the inputting the candidate image region into the second neural network model for processing to obtain the target position of the region of interest comprises:
inputting the candidate image region into the segmentation model to obtain an initial target position of the region of interest;
and performing fusion processing on the initial target position of the region of interest and the candidate position of the region of interest to obtain the target position of the region of interest.
7. The method of claim 6, further comprising:
performing connected-component extraction on the target position of the region of interest to obtain the size of the region of interest;
and performing threshold segmentation on the size of the region of interest to obtain the category of the region of interest.
8. The method of claim 1, wherein the training method of the first neural network model comprises:
acquiring a sample image; the sample image is an expanded sample image of ribs, and the sample image includes an annotated position of a region of interest;
performing normalization on the sample image to obtain a normalized sample image;
and training an initial first neural network model based on the normalized sample image to obtain the first neural network model.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 8.
10. A readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN201911133164.0A 2019-11-19 2019-11-19 Rib image detection method, computer device and storage medium Active CN111080573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911133164.0A CN111080573B (en) 2019-11-19 2019-11-19 Rib image detection method, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN111080573A true CN111080573A (en) 2020-04-28
CN111080573B CN111080573B (en) 2024-02-27

Family

ID=70311015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911133164.0A Active CN111080573B (en) 2019-11-19 2019-11-19 Rib image detection method, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN111080573B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130070996A1 (en) * 2011-09-19 2013-03-21 Siemens Aktiengesellschaft Method and System for Up-Vector Detection for Ribs in Computed Tomography Volumes
CN102682455A (en) * 2012-05-10 2012-09-19 天津工业大学 Front vehicle detection method based on monocular vision
CN105550985A (en) * 2015-12-31 2016-05-04 上海联影医疗科技有限公司 Organ cavity wall expanding method
US20180247405A1 (en) * 2017-02-27 2018-08-30 International Business Machines Corporation Automatic detection and semantic description of lesions using a convolutional neural network
CN107798682A (en) * 2017-08-31 2018-03-13 深圳联影医疗科技有限公司 Image segmentation system, method, apparatus and computer-readable recording medium
CN109697449A (en) * 2017-10-20 2019-04-30 杭州海康威视数字技术股份有限公司 A kind of object detection method, device and electronic equipment
CN109124662A (en) * 2018-07-13 2019-01-04 上海皓桦科技股份有限公司 Rib cage center line detecting device and method
CN109035141A (en) * 2018-07-13 2018-12-18 上海皓桦科技股份有限公司 Rib cage expanding unit and method
CN109389587A (en) * 2018-09-26 2019-02-26 上海联影智能医疗科技有限公司 A kind of medical image analysis system, device and storage medium
CN109859233A (en) * 2018-12-28 2019-06-07 上海联影智能医疗科技有限公司 The training method and system of image procossing, image processing model
CN109993726A (en) * 2019-02-21 2019-07-09 上海联影智能医疗科技有限公司 Detection method, device, equipment and the storage medium of medical image
CN110084175A (en) * 2019-04-23 2019-08-02 普联技术有限公司 A kind of object detection method, object detecting device and electronic equipment
CN110458799A (en) * 2019-06-24 2019-11-15 上海皓桦科技股份有限公司 Fracture of rib automatic testing method based on rib cage expanded view

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MATTHIAS LENGA et al.: "Deep Learning Based Rib Centerline Extraction and Labeling", arXiv *
SAMUEL GUNZ et al.: "Automated Rib Fracture Detection of Postmortem Computed Tomography Images Using Machine Learning Techniques", arXiv *
WANG Meng: "Femur Segmentation Based on Deep Learning", China Master's Theses Full-text Database, Basic Sciences *
ZHAO Xiaofei: "Research on a New Visualized Rib Fracture Diagnosis Method", China Master's Theses Full-text Database, Medicine and Health Sciences *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111968102A (en) * 2020-08-27 2020-11-20 中冶赛迪重庆信息技术有限公司 Target equipment detection method, system, medium and electronic terminal
CN111968102B (en) * 2020-08-27 2023-04-07 中冶赛迪信息技术(重庆)有限公司 Target equipment detection method, system, medium and electronic terminal
CN112950552A (en) * 2021-02-05 2021-06-11 慧影医疗科技(北京)有限公司 Rib segmentation marking method and system based on convolutional neural network
CN113160242A (en) * 2021-03-17 2021-07-23 中南民族大学 Rectal cancer tumor image preprocessing method and device based on pelvic structure
CN113160199A (en) * 2021-04-29 2021-07-23 武汉联影医疗科技有限公司 Image recognition method and device, computer equipment and storage medium
CN113139954A (en) * 2021-05-11 2021-07-20 上海杏脉信息科技有限公司 Medical image processing device and method
CN113255762A (en) * 2021-05-20 2021-08-13 推想医疗科技股份有限公司 Image processing method and device
CN113255762B (en) * 2021-05-20 2022-01-11 推想医疗科技股份有限公司 Image processing method and device
CN113610825A (en) * 2021-08-13 2021-11-05 推想医疗科技股份有限公司 Method and system for identifying ribs of intraoperative image
CN115035136A (en) * 2022-08-09 2022-09-09 南方医科大学第三附属医院(广东省骨科研究院) Method, system, device and storage medium for bone subregion segmentation in knee joint image

Also Published As

Publication number Publication date
CN111080573B (en) 2024-02-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant