CN111652301B - Femoral lesser trochanter identification method and device based on deep learning and electronic equipment - Google Patents

Femoral lesser trochanter identification method and device based on deep learning and electronic equipment

Info

Publication number
CN111652301B
CN111652301B
Authority
CN
China
Prior art keywords
lesser trochanter
segmentation
femur
identification
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010464682.7A
Other languages
Chinese (zh)
Other versions
CN111652301A (en)
Inventor
张逸凌
刘星宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Longwood Valley Medtech Co Ltd
Original Assignee
Longwood Valley Medtech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Longwood Valley Medtech Co Ltd filed Critical Longwood Valley Medtech Co Ltd
Priority to CN202010464682.7A priority Critical patent/CN111652301B/en
Publication of CN111652301A publication Critical patent/CN111652301A/en
Application granted granted Critical
Publication of CN111652301B publication Critical patent/CN111652301B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a deep-learning-based femoral lesser trochanter identification method and device and an electronic device. The method comprises the following steps: acquiring a two-dimensional cross-sectional image to be identified; performing femur segmentation on the two-dimensional cross-sectional image to be identified with a segmentation network; classifying the femur segmentation result images with a level classifier to obtain the lesser trochanter levels; and identifying the lesser trochanter levels with a point identification network to locate the lesser trochanter key point. Because artificial intelligence is used to identify the lesser trochanter, a key point can be obtained within the lesser trochanter region as defined in traditional medicine. Compared with a doctor manually picking a point in the lesser trochanter region for positioning and calculating the leg length difference from it, this greatly improves both the accuracy of lesser trochanter localization and the precision of the leg length difference calculation, providing reliable data support for surgery.

Description

Femoral lesser trochanter identification method and device based on deep learning and electronic equipment
Technical Field
The application relates to the technical field of medicine, in particular to a femur lesser trochanter identification method and device based on deep learning and electronic equipment.
Background
Whether certain key anatomical positions can be determined accurately is an important factor in the success of many surgical procedures. For example, in femoral osteotomy the vertical distance between the lesser trochanter prominence and the ischial tuberosity is measured on each side, and the difference between the two sides (L1-L2) represents the limb shortening distance, see fig. 1. The position of the lesser trochanter therefore determines the position of the femoral neck osteotomy, which is generally located no less than 3 mm above the upper edge of the lesser trochanter. Accurately locating the lesser trochanter thus allows the lower-limb length difference to be judged before the operation and prevents an excessive length difference between the patient's two lower limbs after the operation.
However, the leg length difference in a digitally reconstructed radiographic image can currently only be calculated from points picked manually by a doctor. Because the lesser trochanter is, by medical definition, a region, such as the lesser trochanter shown in fig. 1, manually picking a point within that region yields accuracy that is too low and calibration stability that is too poor, so the length difference between the patient's two lower limbs after the operation may be too large and the actual requirement cannot be met.
Therefore, how to determine the precise position of the femoral lesser trochanter has become a technical problem to be solved urgently.
Disclosure of Invention
The application mainly aims to provide a deep-learning-based femoral lesser trochanter identification method and device and an electronic device, so as to provide a way of determining the accurate position of the femoral lesser trochanter.
In order to achieve the above object, according to a first aspect of the present application, there is provided a deep-learning-based femoral lesser trochanter identification method, comprising: acquiring a two-dimensional cross-sectional image to be identified; performing femur segmentation on the two-dimensional cross-sectional image to be identified with a segmentation network; classifying the femur two-dimensional cross-sectional segmentation result images with a level classifier to obtain the lesser trochanter levels; and identifying the lesser trochanter levels with a point identification network to locate the lesser trochanter key point.
Optionally, the segmentation network includes a preset segmentation neural network obtained by training on labeled training samples of femur cross sections; the two-dimensional cross-sectional image to be identified is segmented with the preset segmentation neural network to obtain a femur two-dimensional cross-sectional segmentation result image.
Optionally, the level classifier includes a preset classification neural network obtained by training on labeled training samples of lesser trochanter levels; classifying the two-dimensional cross-sectional image to be identified with the level classifier comprises: classifying the femur two-dimensional cross-sectional segmentation result with the preset classification neural network to obtain the lesser trochanter levels.
Optionally, identifying the lesser trochanter levels with the point identification network and locating the lesser trochanter key point comprises: fitting the medullary cavity axis; identifying the lesser trochanter levels with the point identification network to obtain a plurality of candidate sub-key-points; and selecting, from the plurality of candidate sub-key-points, the point farthest from the medullary cavity axis as the lesser trochanter key point.
Optionally, fitting the medullary cavity axis comprises: calculating a plurality of femoral medullary cavity center points according to the centroid formula of a planar image, wherein the medullary cavity center points include the medullary cavity center points of the lesser trochanter levels and/or the medullary cavity center points of the femur levels; and performing a straight-line fit on the center points to determine the medullary cavity axis.
According to a second aspect, the present application provides a deep-learning-based femoral lesser trochanter identification device, comprising: an acquisition module for acquiring a two-dimensional cross-sectional image to be identified; a segmentation module for performing femur segmentation on the two-dimensional cross-sectional image to be identified with a segmentation network; a classification module for classifying the femur two-dimensional cross-sectional segmentation results with a level classifier to obtain the lesser trochanter levels; and an identification module for identifying the lesser trochanter levels with a point identification network and locating the lesser trochanter key point.
According to a third aspect, the present application provides a leg length difference identification method, comprising: obtaining a lesser trochanter key point with the deep-learning-based femoral lesser trochanter identification method of any one of the first aspect; and calculating the leg length difference based on the lesser trochanter key point.
According to a fourth aspect, the present application provides a leg length difference identification device, comprising: a lesser trochanter identification module, configured to obtain a lesser trochanter key point with the deep-learning-based femoral lesser trochanter identification method of any one of the first aspect; and a calculation module, configured to calculate the leg length difference based on the lesser trochanter key point.
According to a fifth aspect, embodiments of the present application provide a computer-readable storage medium storing computer instructions for causing a computer to execute the deep learning based femur lesser trochanter identification method according to any one of the above first aspects and/or the leg length difference identification method described in the third aspect.
According to a sixth aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to cause the at least one processor to perform the deep learning based femoral lesser trochanter identification method of any one of the above first aspects and/or the leg length difference identification method of the third aspect.
With the deep-learning-based femoral lesser trochanter identification method of the present application, femur segmentation can be performed on the two-dimensional cross-sectional images with the segmentation network to obtain femur two-dimensional cross-sectional segmentation results; the segmentation result images are classified with the classifier to obtain the lesser trochanter levels, and the key point within the lesser trochanter levels is identified with the point identification network. Because artificial intelligence is used to identify the lesser trochanter, a key point can be obtained within the lesser trochanter region as defined in traditional medicine. Compared with a doctor manually picking a point in the lesser trochanter region for positioning and calculating the leg length difference from it, this greatly improves both the accuracy of lesser trochanter localization and the precision of the leg length difference calculation, providing reliable data support for surgery.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
FIG. 1 is a schematic view of a prior-art lesser-trochanter-based leg length difference measurement;
fig. 2 is a schematic flowchart of a femoral lesser trochanter identification method based on deep learning according to an embodiment of the present application;
FIG. 3A is a schematic illustration of the lesser trochanter position provided according to an embodiment of the present application;
FIG. 3B is a schematic view of the two-dimensional cross-sectional shape of the lesser trochanter according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a lesser trochanter key point identification result provided according to an embodiment of the present application;
FIG. 5 is an annotated schematic representation of a two-dimensional cross-sectional image provided in accordance with an embodiment of the present application;
FIG. 6 is a flowchart of a leg length difference identification method provided according to an embodiment of the present application;
fig. 7 is a schematic diagram of a femoral lesser trochanter identification device based on deep learning provided in accordance with an embodiment of the present application;
fig. 8 is a schematic diagram of an electronic device provided according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of this application are used to distinguish between similar elements and are not necessarily intended to describe a particular sequence or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, so that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
According to an embodiment of the present application, there is provided a femur lesser trochanter identification method based on deep learning, as shown in fig. 2, the method includes the following steps:
s101, acquiring a two-dimensional cross section image to be identified.
The two-dimensional cross-sectional image is a DICOM two-dimensional cross-sectional image generated during a CT scan; the bone region containing the medullary cavity is then segmented out of this DICOM two-dimensional cross-sectional image.
And S102, carrying out femur segmentation on the two-dimensional cross section image to be identified by utilizing a segmentation network.
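The segmentation network itself is a trained model (candidate architectures are enumerated later in this description) and cannot be reproduced in a few lines. As a hedged stand-in, the sketch below only illustrates the input/output contract of this step: a two-dimensional CT cross section in Hounsfield units goes in, and a binary bone-candidate mask comes out. The threshold value and function name are illustrative assumptions, not part of the patented method.

```python
import numpy as np

def segment_bone_stub(ct_slice_hu, threshold=200):
    """Illustrative stand-in for the segmentation network: label every
    pixel at or above a bone-like Hounsfield-unit threshold as bone.
    The patented method instead uses a trained segmentation neural
    network; this threshold is only a placeholder."""
    return (ct_slice_hu >= threshold).astype(np.uint8)

# toy 8x8 "slice": a bright 4x4 square of bone-like values in air
slice_hu = np.full((8, 8), -1000.0)   # air background
slice_hu[2:6, 2:6] = 400.0            # bone-like intensities
femur_mask = segment_bone_stub(slice_hu)
```

A real implementation would replace `segment_bone_stub` with inference through the trained segmentation network while keeping the same array-in, mask-out contract.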
And S103, classifying the femur two-dimensional cross-sectional segmentation result images with a level classifier to obtain the lesser trochanter levels. In this embodiment, the level classifier is at least one preset neural network model that can classify a two-dimensional cross-sectional image based on its level features. The level classifier may either classify the two-dimensional cross-sectional image directly, or first perform region segmentation on the image to extract the femur levels and then classify those levels to identify the lesser trochanter levels.
And S104, identifying the lesser trochanter levels with a point identification network and locating the lesser trochanter key points. In this embodiment, because the lesser trochanter is a region, as shown in fig. 3A, it may span a segment of the femur, so a plurality of two-dimensional cross-sectional images are involved; fig. 3B shows the shape features of the lesser trochanter in such images. The lesser trochanter key points are therefore identified across the plurality of lesser trochanter levels with the point identification network. In this embodiment, the point identification network may be any one of LocNet, SegNet, U-Net, R-CNN, Fast R-CNN, Faster R-CNN, R-FCN, and SSD. It can be trained on the lesser trochanter levels output by the level classifier together with the coordinates of manually calibrated key points; the trained point identification network can then identify the key point positions automatically, and its output can be seen in the identification result diagram shown in fig. 4.
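A trained point identification network is out of scope for a short example, so purely to illustrate what "a plurality of sub key points" on one slice can look like to the downstream steps, the sketch below takes the boundary pixels of a per-slice binary mask as candidate points. This boundary heuristic, and the function name, are assumptions for illustration only; they are not the network described in this step.

```python
import numpy as np

def candidate_points(mask):
    """Illustrative stand-in for the point identification network's
    per-slice output: candidate key points taken as boundary pixels of
    a binary mask (a pixel is on the boundary if any 4-neighbour is
    background or lies outside the image)."""
    h, w = mask.shape
    pts = []
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(not (0 <= ny < h and 0 <= nx < w) or not mask[ny, nx]
                   for ny, nx in nbrs):
                pts.append((y, x))
    return pts

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1                      # 3x3 block: one interior pixel
cands = candidate_points(mask)          # the 8 boundary pixels
```

In the real pipeline these per-slice candidates would be collected over all lesser trochanter levels before the farthest-from-axis selection described later.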
From the above description it can be seen that, with the deep-learning-based femoral lesser trochanter identification method of the embodiment of the present application, femur segmentation can be performed on the two-dimensional cross-sectional images with the segmentation network to obtain femur two-dimensional cross-sectional segmentation results; the segmentation result images are classified with the classifier to obtain the lesser trochanter levels, and the key point within the lesser trochanter levels is identified with the point identification network. Because artificial intelligence is used to identify the lesser trochanter, a key point can be obtained within the lesser trochanter region as defined in traditional medicine. Compared with a doctor manually picking a point in the lesser trochanter region for positioning and calculating the leg length difference from it, this greatly improves both the accuracy of lesser trochanter localization and the precision of the leg length difference calculation, providing reliable data support for surgery.
As an exemplary embodiment, the preset segmentation neural network in this embodiment may be obtained by training any of a number of neural network models, for example: the fully convolutional network FCN, the semantic segmentation network SegNet, the segmentation network U-Net, its volumetric variant 3D-UNet, the instance segmentation network Mask R-CNN, dilated (atrous) convolutions, the semantic segmentation network ENet, the semantic segmentation network CRF-as-RNN, the scene parsing network PSPNet, the end-to-end semantic segmentation network ParseNet, the image semantic segmentation network RefineNet, the image segmentation model ReSeg, the semantic segmentation network LSTM-CF, the instance segmentation network DeepMask, and the like. The level classifier may include a preset classification neural network trained on labeled training samples of lesser trochanter levels. In this embodiment, the classification neural network may include: LeNet, AlexNet, ZF-Net, GoogLeNet, VGG, Inception, ResNet, DenseNet, Inception-ResNet, and the like. DICOM two-dimensional cross-sectional images and the manually labeled class of each level are input for training, and the established classification neural network then automatically separates out the levels where the lesser trochanter is located.
In this embodiment, training the networks requires organizing and labeling the training data, which provides input images and manual annotation information at three points: for the segmentation convolutional neural network that segments the whole femur so that the femur levels can be computed; for the classification neural network that finds the lesser trochanter levels among the two-dimensional cross sections; and for the key point identification neural network that identifies the femoral lesser trochanter on the lesser trochanter levels extracted in the previous step. For specific labels, reference may be made to the exemplary annotation illustration shown in fig. 5.
Because the lesser trochanter is a region, there are multiple two-dimensional cross-sectional images at the lesser trochanter levels, and each of them has a key point corresponding to the lesser trochanter. To find a more accurate lesser trochanter anatomical point, in an alternative embodiment the medullary cavity axis may be fitted and the key point farthest from that axis taken as the lesser trochanter anatomical point. Specifically, the center points of a plurality of femoral medullary cavities may be calculated: the medullary cavity center point of each lesser trochanter level and/or femur level is determined with a two-dimensional image center point calculation formula, which may be the centroid formula of a planar image. During fitting, the centers are fitted according to the actual three-dimensional spatial relationship of the corresponding medullary cavity levels. The straight-line fit may use least squares, gradient descent, Gauss-Newton, the Levenberg-Marquardt algorithm, or another line fitting algorithm.
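The centroid, line-fit, and farthest-point steps above are concrete enough to sketch directly. The following is a minimal numerical illustration, assuming a least-squares line fit via SVD (one way to realize the least-squares option listed); the variable names and toy coordinates are illustrative assumptions, not from the patent.

```python
import numpy as np

def mask_centroid(mask):
    """Centroid (center of mass) of a binary planar mask, i.e. the
    'centroid formula of a planar image' applied to one cross section."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def fit_axis(centers_3d):
    """Least-squares straight-line fit through the per-level medullary
    cavity centers: returns a point on the axis and a unit direction
    (the first right singular vector of the centered points)."""
    pts = np.asarray(centers_3d, dtype=float)
    mean = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - mean)
    return mean, vt[0]

def dist_to_axis(p, axis_point, axis_dir):
    """Perpendicular distance from point p to the fitted axis."""
    v = np.asarray(p, dtype=float) - axis_point
    return np.linalg.norm(v - np.dot(v, axis_dir) * axis_dir)

# centroid of a toy 5x5 mask with a centered 3x3 block
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1
center_xy = mask_centroid(mask)                  # (2.0, 2.0)

# toy per-level centers lying on the z axis, plus candidate key points;
# the lesser trochanter key point is the candidate farthest from the axis
axis_point, axis_dir = fit_axis([(0.0, 0.0, float(z)) for z in range(5)])
candidates = [(1.0, 0.0, 2.0), (3.0, 0.0, 2.0), (2.0, 0.0, 2.0)]
key_point = max(candidates,
                key=lambda p: dist_to_axis(p, axis_point, axis_dir))
```

The same three functions cover the claimed sequence: per-level centroids, a straight-line axis fit, and selection of the candidate point farthest from the axis.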
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
According to an embodiment of the present application, there is also provided a leg length difference identification method, as shown in fig. 6, the method may include:
s201, identifying key points of the lesser trochanter of the femur based on deep learning; in particular, the method for identifying the femoral lesser trochanter based on deep learning described in the above embodiments can be used for identifying the key points.
S202, calculating the leg length difference with the lesser trochanter key point.
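The patent does not spell out the leg length difference formula here. Assuming the L1 and L2 measurement described in the background section (the vertical distance between the lesser trochanter prominence and the ischial tuberosity on each side), a minimal sketch of S202 is as follows; the function name, point layout, and toy coordinates are illustrative assumptions.

```python
def leg_length_difference(lt_left, ref_left, lt_right, ref_right):
    """Assumed formula mirroring the background's (L1 - L2) description:
    per-side vertical distance from the lesser trochanter key point to
    an ischial tuberosity reference, then the difference between sides.
    Points are (x, y, z) tuples with z along the cranio-caudal axis."""
    l1 = abs(lt_left[2] - ref_left[2])    # left-side vertical distance
    l2 = abs(lt_right[2] - ref_right[2])  # right-side vertical distance
    return l1 - l2

# toy coordinates in millimetres: left-side distance 3 mm larger
diff_mm = leg_length_difference(
    lt_left=(10.0, 0.0, 120.0), ref_left=(12.0, 0.0, 100.0),
    lt_right=(-10.0, 0.0, 117.0), ref_right=(-12.0, 0.0, 100.0))
```

A positive value here indicates the left-side distance is the larger one; the sign convention in practice would follow clinical usage.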
Because artificial intelligence is used to label the lesser trochanter and locate a key point within the lesser trochanter region as defined in traditional medicine, compared with a doctor manually marking points in the lesser trochanter region and calculating the leg length difference, the accuracy of lesser trochanter marking and the precision of the leg length difference calculation can both be greatly improved, providing reliable data support for surgery.
There is also provided, according to an embodiment of the present application, a deep learning based femoral lesser trochanter identification apparatus for implementing the method of fig. 2, as shown in fig. 7, the apparatus including:
the acquisition module 10 is used for acquiring a two-dimensional cross-sectional image to be identified; the segmentation module 20 is used for performing femur segmentation on the two-dimensional cross-sectional image to be identified with a segmentation network; the classification module 30 is used for classifying the femur two-dimensional cross-sectional segmentation results with a level classifier to obtain the lesser trochanter levels; and the identification module 40 is used for identifying the lesser trochanter levels with a point identification network and locating the lesser trochanter key point.
As an optional embodiment, the segmentation network includes a preset segmentation neural network obtained by training on labeled training samples of femur cross sections;
the segmentation module 20 is further configured to segment the two-dimensional cross-sectional image to be identified with the preset segmentation neural network to obtain a femur two-dimensional cross-sectional segmentation result.
As an optional embodiment, the level classifier includes a preset classification neural network obtained by training on labeled training samples of lesser trochanter levels;
the classification module 30 is further configured to classify the femur two-dimensional cross-sectional segmentation result with the preset classification neural network to obtain the lesser trochanter levels.
As an alternative embodiment, the identification module 40 includes: a fitting unit for fitting the medullary cavity axis; an identification unit for identifying the lesser trochanter levels with the point identification network to obtain a plurality of candidate sub-key-points; and a key point selection unit for selecting, from the plurality of candidate sub-key-points, the point farthest from the medullary cavity axis as the lesser trochanter key point.
As an alternative embodiment, the fitting unit includes: a center point calculation subunit for calculating a plurality of femoral medullary cavity center points according to the centroid formula of a planar image, wherein the medullary cavity center points include the medullary cavity center points of the lesser trochanter levels and/or the femur levels; and a fitting subunit for performing a straight-line fit on the center points to determine the medullary cavity axis.
The specific process of implementing the functions of each unit and module in the apparatus in the embodiment of the present application may refer to the related description in the method embodiment, and is not described herein again.
According to an embodiment of the present application, there is further provided a computer-readable storage medium, wherein the computer-readable storage medium stores computer instructions for causing the computer to execute the deep learning-based femur lesser trochanter identification method and/or the leg length difference identification method in the above method embodiments.
An electronic device is provided in the embodiment of the present application. As shown in fig. 8, the electronic device includes one or more processors 51 and a memory 52; one processor 51 is taken as an example in fig. 8.
The electronic device may further include: an input device 53 and an output device 54.
The processor 51, the memory 52, the input device 53, and the output device 54 may be connected by a bus or by other means; fig. 8 illustrates connection by a bus as an example.
The processor 51 may be a Central Processing Unit (CPU). The processor 51 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or combinations thereof. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 52 is a non-transitory computer-readable storage medium and can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the methods in the embodiments of the present application. By running the non-transitory software programs, instructions, and modules stored in the memory 52, the processor 51 executes the various functional applications and data processing of the server, that is, implements the deep-learning-based femoral lesser trochanter identification method or the leg length difference identification method of the above method embodiments.
The memory 52 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of a processing device operated by the server, and the like. Further, the memory 52 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 52 may optionally include memory located remotely from the processor 51, which may be connected to a network connection device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 53 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the processing device of the server. The output device 54 may include a display device such as a display screen.
One or more modules are stored in the memory 52 and, when executed by the one or more processors 51, perform the method as shown in fig. 2 or 6.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the program can be stored in a computer readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD) or a solid state drive (SSD), etc.; the storage medium may also comprise a combination of memories of the kind described above.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. They may alternatively be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, fabricated separately as individual integrated circuit modules, or fabricated by combining multiple modules or steps into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (9)

1. A femoral lesser trochanter identification method based on deep learning, the method comprising:
acquiring a two-dimensional cross section image to be identified;
utilizing a segmentation network to carry out femur segmentation on the two-dimensional cross section image to be identified;
classifying the femur two-dimensional cross section segmentation result images by using a level classifier to obtain a lesser trochanter level;
identifying the lesser trochanter level by using a point identification network, and positioning key points of the lesser trochanter;
wherein the identifying of the lesser trochanter level by the point identification network and the positioning of the key points of the lesser trochanter comprise the following steps:
fitting the medullary cavity axis;
identifying the lesser trochanter level by using the point identification network to obtain a plurality of sub key points;
and selecting the point farthest from the medullary cavity axis from the plurality of sub key points as a lesser trochanter key point.
2. The deep learning-based femoral lesser trochanter recognition method of claim 1, wherein the segmentation network comprises a preset segmentation neural network obtained by training labeled training samples on a femoral level;
and segmenting the two-dimensional cross section image to be identified by utilizing a preset segmentation neural network to obtain a femur two-dimensional cross section segmentation result image.
3. The deep learning-based femoral lesser trochanter recognition method of claim 2, wherein the level classifier comprises a preset classification neural network, the preset classification neural network being trained from labeled training samples for the lesser trochanter level;
the classifying the two-dimensional cross section image to be identified by using the level classifier comprises the following steps:
and classifying the femur two-dimensional cross section segmentation result by using a preset classification neural network to obtain the lesser trochanter level.
4. The deep learning-based femoral lesser trochanter identification method of claim 1, wherein fitting the intramedullary canal axis comprises:
calculating a plurality of femoral medullary cavity central points according to a centroid formula of the plane image, wherein the medullary cavity central points comprise the medullary cavity central point of the lesser trochanter layer and/or the medullary cavity central point of the femur layer;
and performing line fitting on the central points to determine the medullary cavity axis.
5. A femoral lesser trochanter recognition device based on deep learning, comprising:
the acquisition module is used for acquiring a two-dimensional cross section image to be identified;
the segmentation module is used for carrying out femur segmentation on the two-dimensional cross section image to be identified by utilizing a segmentation network;
the classification module is used for classifying the segmentation results of the two-dimensional femoral cross section by using the level classifier to obtain a lesser trochanter level;
the identification module is used for identifying the lesser trochanter level by using a point identification network and positioning the key points of the lesser trochanter;
wherein the identifying of the lesser trochanter level by the point identification network and the positioning of the key points of the lesser trochanter comprise the following steps:
fitting the medullary cavity axis;
identifying the lesser trochanter level by using the point identification network to obtain a plurality of sub key points;
and selecting the point farthest from the medullary cavity axis from the plurality of sub key points as a lesser trochanter key point.
6. A leg length difference identification method, characterized by comprising the following steps:
obtaining a lesser trochanter key point by using the deep learning based femoral lesser trochanter identification method of any one of claims 1 to 4;
and calculating a leg length difference based on the lesser trochanter key point.
7. A leg length difference identifying device, comprising:
a lesser trochanter identification module, used for obtaining a lesser trochanter key point by using the deep learning based femoral lesser trochanter identification method of any one of claims 1 to 4;
and a calculating module, used for calculating the leg length difference based on the lesser trochanter key point.
8. A computer-readable storage medium storing computer instructions for causing a computer to execute the deep learning based femur lesser trochanter identification method according to any one of claims 1 to 4 and/or the leg length difference identification method according to claim 6.
9. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to cause the at least one processor to perform the deep learning based femoral lesser trochanter identification method of any one of claims 1 to 4 and/or the leg length difference identification method of claim 6.
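The keypoint-selection logic recited in claims 1 and 4 — computing per-slice medullary cavity centroids, fitting a line through them to obtain the canal axis, and choosing the sub key point farthest from that axis — can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the function names, the NumPy/SVD-based line fit, and the 3-D point layout are all assumptions made for the sketch.

```python
# Illustrative sketch of the claimed keypoint selection: per-slice centroids,
# a least-squares canal axis, and the farthest candidate from that axis.
import numpy as np

def slice_centroid(mask: np.ndarray) -> np.ndarray:
    """Centroid (row, col) of a binary femur mask via the plane-image centroid formula."""
    rows, cols = np.nonzero(mask)
    return np.array([rows.mean(), cols.mean()])

def fit_axis(centers: np.ndarray):
    """Fit a 3-D line through the per-slice medullary cavity centers.

    Returns a point on the line (the mean) and a unit direction vector
    (the principal direction of the centered points, via SVD)."""
    mean = centers.mean(axis=0)
    _, _, vt = np.linalg.svd(centers - mean)
    return mean, vt[0]

def point_to_axis_distance(p, point_on_axis, direction):
    """Perpendicular distance from point p to the fitted axis."""
    d = np.asarray(p, dtype=float) - point_on_axis
    return np.linalg.norm(d - np.dot(d, direction) * direction)

def lesser_trochanter_keypoint(candidates, point_on_axis, direction):
    """Among the point network's sub key points, keep the one farthest from the axis."""
    dists = [point_to_axis_distance(c, point_on_axis, direction) for c in candidates]
    return candidates[int(np.argmax(dists))]
```

On the lesser trochanter levels the canal cross-section is roughly circular while the trochanter itself bulges medially, so the candidate farthest from the canal axis is a reasonable proxy for the lesser trochanter tip, which is consistent with the selection rule the claims recite.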
CN202010464682.7A 2020-05-27 2020-05-27 Femoral lesser trochanter identification method and device based on deep learning and electronic equipment Active CN111652301B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010464682.7A CN111652301B (en) 2020-05-27 2020-05-27 Femoral lesser trochanter identification method and device based on deep learning and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010464682.7A CN111652301B (en) 2020-05-27 2020-05-27 Femoral lesser trochanter identification method and device based on deep learning and electronic equipment

Publications (2)

Publication Number Publication Date
CN111652301A CN111652301A (en) 2020-09-11
CN111652301B true CN111652301B (en) 2021-03-02

Family

ID=72346917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010464682.7A Active CN111652301B (en) 2020-05-27 2020-05-27 Femoral lesser trochanter identification method and device based on deep learning and electronic equipment

Country Status (1)

Country Link
CN (1) CN111652301B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689406B (en) * 2021-08-24 2022-04-08 北京长木谷医疗科技有限公司 Knee joint femoral posterior condylar point identification method and system based on motion simulation algorithm
CN113870229B (en) * 2021-09-29 2022-03-11 北京长木谷医疗科技有限公司 Method, device, equipment and storage medium for processing image data of proximal femoral medullary cavity
CN113822231A (en) * 2021-11-08 2021-12-21 中国人民解放军陆军特色医学中心 Intertrochanteric fracture surgery auxiliary system based on deep learning image recognition
CN113850810B (en) * 2021-12-01 2022-03-04 杭州柳叶刀机器人有限公司 Method and surgical system for correcting femur, storage medium, and electronic device
CN115105111B (en) * 2022-03-03 2023-06-13 中国人民解放军总医院第四医学中心 Quantitative analysis method for human hip triangle morphology anatomical structure

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8649577B1 (en) * 2008-11-30 2014-02-11 Image Analysis, Inc. Automatic method and system for measurements of bone density and structure of the hip from 3-D X-ray imaging devices
CN107358608A (en) * 2017-08-23 2017-11-17 西安邮电大学 Bone tissue geometric state parameter auto-testing device and method based on image processing techniques
CN108042194A (en) * 2017-12-25 2018-05-18 臧学慧 Lesser trochanter fracture reduction and fixation binding band
CN108765417A (en) * 2018-06-15 2018-11-06 西安邮电大学 Femur X-ray film generation system and method based on deep learning and digitally reconstructed radiographs
CN109903268A (en) * 2019-01-24 2019-06-18 刘星宇 Method for determining abnormality type of spine image set, and computing device
CN111179350A (en) * 2020-02-13 2020-05-19 张逸凌 Hip joint image processing method based on deep learning and computing equipment


Also Published As

Publication number Publication date
CN111652301A (en) 2020-09-11

Similar Documents

Publication Publication Date Title
CN111652301B (en) Femoral lesser trochanter identification method and device based on deep learning and electronic equipment
US11494902B2 (en) Systems and methods for automatic detection and quantification of pathology using dynamic feature classification
ES2967682T3 (en) Computer-aided diagnosis using deep neural networks
Rouzrokh et al. A deep learning tool for automated radiographic measurement of acetabular component inclination and version after total hip arthroplasty
TWI747120B (en) Method, device and electronic equipment for depth model training and storage medium thereof
CN110556179B (en) Method and system for marking whole spine image by using deep neural network
US8379957B2 (en) System and method for segmentation of anatomical structures in MRI volumes using graph cuts
CN110956635A (en) Lung segment segmentation method, device, equipment and storage medium
CN111652888B (en) Method and device for determining medullary cavity anatomical axis based on deep learning
EP3389012B1 (en) Method and node for manufacturing a surgical kit for cartilage repair
CN113076987B (en) Osteophyte identification method, device, electronic equipment and storage medium
US10762629B1 (en) Segmenting medical images
CN113240696B (en) Image processing method and device, model training method and device, and electronic equipment
CN111445440A (en) Medical image analysis method, equipment and storage medium
WO2020134532A1 (en) Deep model training method and apparatus, electronic device, and storage medium
Wang et al. Automatic segmentation of spinal canals in CT images via iterative topology refinement
CN111681247A (en) Lung lobe and lung segment segmentation model training method and device
CN112074912A (en) Interactive coronary artery labeling using interventional X-ray images and deep learning
CN115147359A (en) Lung lobe segmentation network model training method and device, electronic equipment and storage medium
CN109509189B (en) Abdominal muscle labeling method and labeling device based on multiple sub-region templates
CN113592768A (en) Rib fracture detection method, rib fracture detection information display method and system
CN113592029A (en) Automatic medical image labeling method and system under small sample condition
CN113160199A (en) Image recognition method and device, computer equipment and storage medium
CN112163552A (en) Labeling method and device for key points of nose, electronic equipment and storage medium
Kim et al. Automatic segmentation of leg bones by using active contours

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhang Yiling

Inventor after: Liu Xingyu

Inventor before: Zhang Yiling

Inventor before: Liu Xingyu

Inventor before: An Yicheng

Inventor before: Chen Peng

Inventor before: Xu Jia

CB03 Change of inventor or designer information
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 101102 room 402, 4th floor, building 28, yard 18, Kechuang 13th Street, economic and Technological Development Zone, Daxing District, Beijing

Patentee after: Beijing Changmugu Medical Technology Co.,Ltd.

Address before: 101102 room 402, 4th floor, building 28, yard 18, Kechuang 13th Street, economic and Technological Development Zone, Daxing District, Beijing

Patentee before: BEIJING CHANGMUGU MEDICAL TECHNOLOGY Co.,Ltd.

CP01 Change in the name or title of a patent holder