CN117204910B - Automatic bone cutting method for real-time tracking of knee joint position based on deep learning

Info

Publication number: CN117204910B
Application number: CN202311255924.1A
Authority: CN (China)
Prior art keywords: feature map, module, knee joint, bone, video data
Legal status: Active (granted)
Other versions: CN117204910A
Inventors: 张逸凌, 刘星宇
Assignee: Longwood Valley Medtech Co Ltd
Priority/filing date: 2023-09-26
Grant publication date: 2024-06-25

Abstract

The application provides an automatic osteotomy method, device, and equipment for real-time tracking of the knee joint position based on deep learning, together with a computer-readable storage medium. The method comprises the following steps: collecting video data; detecting the knee joint bone position in the video data based on a deep learning-based bone detection model; optimizing the position of the detection frame corresponding to the knee joint bone position; after the osteotomy plane is planned preoperatively, detecting in real time the relative position between the knee joint bone position and the osteotomy plane; and adjusting the robotic arm osteotomy position based on that relative position. According to the embodiments of the application, automatic osteotomy can be performed quickly and accurately.

Description

Automatic bone cutting method for real-time tracking of knee joint position based on deep learning
Technical Field
The application belongs to the technical field of deep learning-based intelligent recognition, and particularly relates to an automatic osteotomy method, device, and equipment for real-time tracking of the knee joint position based on deep learning, and to a computer-readable storage medium.
Background
At present, knee joint osteotomies are performed by surgeons relying on personal experience alone; the procedure is inefficient and its accuracy is limited.
Therefore, how to perform automatic osteotomy quickly and accurately is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The embodiments of the application provide an automatic osteotomy method, device, and equipment for real-time tracking of the knee joint position based on deep learning, and a computer-readable storage medium, which enable quick and accurate automatic osteotomy.
In a first aspect, an embodiment of the present application provides an automatic osteotomy method for real-time tracking of the knee joint position based on deep learning, including:
collecting video data;
detecting the knee joint bone position in the video data based on a deep learning-based bone detection model;
optimizing the position of the detection frame corresponding to the knee joint bone position;
after the osteotomy plane is planned preoperatively, detecting in real time the relative position between the knee joint bone position and the osteotomy plane;
adjusting the robotic arm osteotomy position based on the relative position.
Optionally, obtaining the bone detection model includes:
collecting video data through a 3D camera;
converting the video data into images, where each image carries a label marking the knee joint bone position;
converting the images and their corresponding labels into a dataset;
dividing the dataset into a training set, a validation set, and a test set at a 7:2:1 ratio;
performing model training based on the deep learning network to obtain the bone detection model.
Optionally, performing model training based on the deep learning network to obtain the bone detection model includes:
setting the training batch_size to 32 during model training;
setting the initial learning rate to 1e-4 and adding a learning rate decay strategy: every 5000 iterations, the learning rate decays to 0.9 of its previous value;
setting the optimizer to the Adam optimizer;
setting the loss function to DICE loss;
running a validation pass on the training set and the validation set every 1000 iterations, and judging when to stop network training by early stopping, thereby obtaining the bone detection model.
Optionally, the method further comprises:
splitting the video data into first video data and second video data;
detecting the knee joint bone position in the first video data using the bone detection model;
outputting the knee joint bone position together with the second video data, and streaming the resulting knee joint position detection video.
Optionally, detecting the knee joint bone position in the video data includes:
processing the image converted from the video data through two convolution layers and module one to obtain a first feature map; module one adopts a residual-unit structure with two residual units, and the result output by each residual unit is merged and output through a convolution layer;
processing the first feature map sequentially through a convolution layer and module one to obtain a second feature map;
processing the second feature map sequentially through a convolution layer and module one to obtain a third feature map;
processing the third feature map sequentially through a convolution layer, module one, and module two to obtain a fourth feature map; module two passes its input through a convolution layer followed by three max pooling layers, and the output of each max pooling layer is merged and output through a convolution layer;
up-sampling the fourth feature map and merging it with the third feature map to obtain a fifth feature map;
passing the fifth feature map sequentially through module one and up-sampling, then merging it with the second feature map to obtain a sixth feature map;
processing the sixth feature map sequentially through module one and module three to obtain a seventh feature map; module three comprises two branches, each outputting through three convolution layers, and the outputs undergo detection-frame loss regression and classification loss regression;
merging the sixth feature map after convolution-layer processing with the fifth feature map after module-one processing, then processing the result with module one to obtain an eighth feature map;
processing the eighth feature map with module three to obtain a ninth feature map;
merging the ninth feature map after convolution-layer processing with the fourth feature map, then processing the result sequentially with module one and module three to obtain a tenth feature map;
merging the seventh, ninth, and tenth feature maps to output an eleventh feature map, in which the knee joint bone position is marked.
Optionally, optimizing the detection frame position corresponding to the knee joint bone position includes:
inputting the eleventh feature map sequentially into a plurality of residual networks, each outputting a corresponding twelfth feature map;
performing side classification on each twelfth feature map to output a corresponding thirteenth feature map;
sequentially merging, fusing, and classifying the thirteenth feature maps to output a fourteenth feature map;
marking the optimized detection frame position in the fourteenth feature map.
Optionally, performing side classification on each twelfth feature map to output a corresponding thirteenth feature map includes:
processing each twelfth feature map through a convolution layer and up-sampling to output the corresponding thirteenth feature map.
In a second aspect, an embodiment of the present application provides an automatic osteotomy device for real-time tracking of the knee joint position based on deep learning, the device comprising:
a data acquisition module for acquiring video data;
a position detection module for detecting the knee joint bone position in the video data based on a deep learning-based bone detection model;
a position optimization module for optimizing the position of the detection frame corresponding to the knee joint bone position;
a real-time detection module for detecting, in real time, the relative position between the knee joint bone position and the osteotomy plane after the osteotomy plane is planned preoperatively;
a position adjustment module for adjusting the robotic arm osteotomy position based on the relative position.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory storing computer program instructions;
when executing the computer program instructions, the processor implements the automatic osteotomy method for real-time tracking of the knee joint position based on deep learning shown in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the automatic osteotomy method for real-time tracking of the knee joint position based on deep learning shown in the first aspect.
The automatic osteotomy method, device, equipment, and computer-readable storage medium for real-time tracking of the knee joint position based on deep learning enable quick and accurate automatic osteotomy.
The method comprises the following steps: collecting video data; detecting the knee joint bone position in the video data based on a deep learning-based bone detection model; optimizing the position of the detection frame corresponding to the knee joint bone position; after the osteotomy plane is planned preoperatively, detecting in real time the relative position between the knee joint bone position and the osteotomy plane; and adjusting the robotic arm osteotomy position based on the relative position.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application or of the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a flow chart of an automatic osteotomy method for real-time tracking of knee joint position based on deep learning according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a knee joint bone position detection network according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an image edge detection network according to an embodiment of the present application;
FIG. 4 is a schematic view of optimizing knee joint bone detection positions provided in one embodiment of the present application;
FIG. 5 is a schematic structural view of an automatic osteotomy device for real-time tracking of knee joint position based on deep learning according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below. In order to make the objects, technical solutions, and advantages of the present application more apparent, the application is described in further detail with reference to the accompanying drawings and the detailed embodiments. It should be understood that the particular embodiments described herein are meant only to illustrate the application, not to limit it. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the application by showing examples of it.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
In order to solve the problems in the prior art, the embodiments of the application provide an automatic osteotomy method, device, and equipment for real-time tracking of the knee joint position based on deep learning, and a computer-readable storage medium. The automatic osteotomy method is described first.
Fig. 1 is a schematic flow chart of the automatic osteotomy method for real-time tracking of the knee joint position based on deep learning according to an embodiment of the present application. As shown in fig. 1, the method comprises the following steps:
S101, collecting video data;
S102, detecting the knee joint bone position in the video data based on the deep learning-based bone detection model;
S103, optimizing the position of the detection frame corresponding to the knee joint bone position;
S104, after the osteotomy plane is planned preoperatively, detecting in real time the relative position between the knee joint bone position and the osteotomy plane;
S105, adjusting the robotic arm osteotomy position based on the relative position.
Specifically, in the deep learning-based real-time automatic knee osteotomy process, video data is collected by a 3D camera and the knee joint bones in the video are detected by the deep learning model. The detection frame around the detected knee joint is then optimized to ensure its accuracy and reduce the knee joint bone positioning error. The system detects in real time whether the bone position has changed, calibrates the preoperatively planned osteotomy plane against the detection frame, and recomputes the real-time relative position between the planned osteotomy plane and the intraoperative bone, as illustrated by the sketch below.
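The patent does not give a formula for this relative-position check. As a minimal sketch, assuming the planned osteotomy plane is stored as a point plus a unit normal and the tracked bone pose as a set of 3D landmark points (the function and variable names below are illustrative, not from the patent), signed point-to-plane distances capture how the bone has moved relative to the plan:

```python
import numpy as np

def plane_offsets(points, plane_point, plane_normal):
    """Signed distances from tracked bone points to the planned osteotomy plane.

    points: (N, 3) landmarks derived from the current detection frame.
    plane_point, plane_normal: the preoperatively planned plane.
    """
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)  # make the normal unit length
    return (np.asarray(points, dtype=float) - plane_point) @ n

# If these offsets drift from their calibrated values between frames, the
# robotic arm osteotomy position would be adjusted by the same displacement.
print(plane_offsets([[0.0, 0.0, 1.0]], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]))  # [1.]
```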
In one embodiment, obtaining the bone detection model comprises:
collecting video data through a 3D camera;
converting the video data into images, where each image carries a label marking the knee joint bone position;
converting the images and their corresponding labels into a dataset;
dividing the dataset into a training set, a validation set, and a test set at a 7:2:1 ratio;
performing model training based on the deep learning network to obtain the bone detection model.
In one embodiment, performing model training based on the deep learning network to obtain the bone detection model comprises:
setting the training batch_size to 32 during model training;
setting the initial learning rate to 1e-4 and adding a learning rate decay strategy: every 5000 iterations, the learning rate decays to 0.9 of its previous value;
setting the optimizer to the Adam optimizer;
setting the loss function to DICE loss;
running a validation pass on the training set and the validation set every 1000 iterations, and judging when to stop network training by early stopping, thereby obtaining the bone detection model.
In one embodiment, the method further comprises:
splitting the video data into first video data and second video data;
detecting the knee joint bone position in the first video data using the bone detection model;
outputting the knee joint bone position together with the second video data, and streaming the resulting knee joint position detection video.
Specifically, the knee joint bone position tracking method is as follows: video data is acquired through a 3D camera and converted into images at 1920x1080 resolution; the knee joint bone position is annotated on each image to generate a label; the images and their corresponding labels are converted into a dataset, which is split into training, validation, and test sets at a 7:2:1 ratio (a sketch of such a split follows). The bone detection model is then trained on the deep learning network and the trained model is tested on the test set: if the test result meets the requirement, the result is output; otherwise, the network is adjusted and training and testing continue.
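As a small illustration of the 7:2:1 split (the file names and the use of Python's random module are assumptions; the patent does not specify tooling):

```python
import random

def split_dataset(samples, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle (image, label) pairs and split them 7:2:1 into train/val/test."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n_train = int(len(samples) * ratios[0])
    n_val = int(len(samples) * ratios[1])
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

pairs = [(f"frame_{i:05d}.png", f"frame_{i:05d}.json") for i in range(1000)]
train, val, test = split_dataset(pairs)
print(len(train), len(val), len(test))  # 700 200 100
```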
During model training, the batch_size is 32 and the initial learning rate is set to 1e-4 with a learning rate decay strategy: every 5000 iterations, the learning rate decays to 0.9 of its previous value. The optimizer is the Adam optimizer and the loss function is DICE loss. A validation pass over the training set and validation set is run every 1000 iterations, and the stopping point of network training is judged by early stopping, yielding the final model. A sketch of this configuration follows.
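These hyperparameters map directly onto a standard PyTorch setup. The sketch below is a minimal, hedged rendition: the stand-in model, the dummy batches, and the use of the training loss as a validation stand-in are assumptions, since the patent specifies only the hyperparameters themselves:

```python
import torch
from torch import nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

def dice_loss(pred, target, eps=1e-6):
    """DICE loss on sigmoid outputs, as named in the patent."""
    pred = torch.sigmoid(pred)
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

model = nn.Conv2d(3, 1, 3, padding=1)  # stand-in; the real network is sketched later
optimizer = Adam(model.parameters(), lr=1e-4)             # initial lr 1e-4
scheduler = StepLR(optimizer, step_size=5000, gamma=0.9)  # x0.9 every 5000 iterations

best_val, bad_evals, patience = float("inf"), 0, 5
for it in range(1, 20001):
    images = torch.randn(32, 3, 64, 64)   # batch_size 32 (dummy data)
    target = torch.rand(32, 1, 64, 64).round()
    loss = dice_loss(model(images), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
    if it % 1000 == 0:                    # validate every 1000 iterations
        val_loss = loss.item()            # stand-in for a real validation pass
        if val_loss < best_val:
            best_val, bad_evals = val_loss, 0
        else:
            bad_evals += 1
        if bad_evals >= patience:         # early stopping
            break
```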
The knee joint bone position tracking flow is: collect video data and split the video into video data 1 and video data 2; run knee joint bone position detection on video data 1 to generate detection data; then combine the detection data with video data 2 and push the result out as the knee joint position detection video stream, roughly as sketched below.
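A minimal sketch of such a split-and-stream loop using OpenCV follows; the file names, frame size, and the `detect_knee` placeholder are assumptions (the patent does not describe the streaming implementation):

```python
import cv2

def detect_knee(frame):
    """Placeholder for the trained bone detection model; returns (x, y, w, h) boxes."""
    return [(100, 100, 200, 150)]

cap = cv2.VideoCapture("intraop_feed.mp4")  # assumed input source
out = cv2.VideoWriter("detection_stream.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), 30.0, (1920, 1080))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    detections = detect_knee(frame.copy())  # "video data 1": the detection copy
    for x, y, w, h in detections:           # "video data 2": the display copy
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    out.write(frame)                        # push the annotated stream
cap.release()
out.release()
```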
In one embodiment, detecting the knee joint bone position in the video data comprises:
processing the image converted from the video data through two convolution layers and module one to obtain a first feature map; module one adopts a residual-unit structure with two residual units, and the result output by each residual unit is merged and output through a convolution layer;
processing the first feature map sequentially through a convolution layer and module one to obtain a second feature map;
processing the second feature map sequentially through a convolution layer and module one to obtain a third feature map;
processing the third feature map sequentially through a convolution layer, module one, and module two to obtain a fourth feature map; module two passes its input through a convolution layer followed by three max pooling layers, and the output of each max pooling layer is merged and output through a convolution layer;
up-sampling the fourth feature map and merging it with the third feature map to obtain a fifth feature map;
passing the fifth feature map sequentially through module one and up-sampling, then merging it with the second feature map to obtain a sixth feature map;
processing the sixth feature map sequentially through module one and module three to obtain a seventh feature map; module three comprises two branches, each outputting through three convolution layers, and the outputs undergo detection-frame loss regression and classification loss regression;
merging the sixth feature map after convolution-layer processing with the fifth feature map after module-one processing, then processing the result with module one to obtain an eighth feature map;
processing the eighth feature map with module three to obtain a ninth feature map;
merging the ninth feature map after convolution-layer processing with the fourth feature map, then processing the result sequentially with module one and module three to obtain a tenth feature map;
merging the seventh, ninth, and tenth feature maps to output an eleventh feature map, in which the knee joint bone position is marked.
Specifically, a target detection algorithm is adopted to detect the knee joint bone position. The network flow is as follows:
the network output end comprises three branches. The input image is first processed by two convolution layers and module one, then repeatedly by a convolution layer and module one; the deepest features additionally pass through module two. On the way back up, each up-sampled output is merged with the output of the corresponding earlier stage and processed by module one before reaching the detection ends. At the output end there are three branches: the output of the first branch, after a convolution layer, feeds the next branch; the output of the second branch, after module one and a convolution layer, feeds the third branch; and the outputs of the three detection ends are finally merged and output. The network structure is shown in fig. 2.
As shown in fig. 2, module one: the structure takes the form of residual units, here two of them; the result output by each residual unit is merged and output through a convolution layer. The purpose is to extract features in depth while reducing feature loss.
Module two: after processing by a convolution layer, the input is fed to three max pooling layers; the output of each max pooling layer is merged and output through a convolution layer. The purpose of this design is to extract global features.
Module three: it comprises two branches, each outputting through three convolution layers; the outputs undergo detection-frame loss regression and classification loss regression. A minimal sketch of the three modules follows.
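The following PyTorch sketch renders the three building blocks as just described. Channel counts, kernel sizes, and pooling sizes are assumptions (the patent gives no dimensions); only the wiring (two residual units merged through a convolution; a convolution followed by three max pooling branches; two three-convolution detection branches) comes from the text:

```python
import torch
from torch import nn

class ResidualUnit(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class ModuleOne(nn.Module):
    """Two residual units; their outputs are merged and fused by a convolution."""
    def __init__(self, ch):
        super().__init__()
        self.ru1, self.ru2 = ResidualUnit(ch), ResidualUnit(ch)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
    def forward(self, x):
        a = self.ru1(x)
        b = self.ru2(a)
        return self.fuse(torch.cat([a, b], dim=1))

class ModuleTwo(nn.Module):
    """A convolution followed by three max pooling branches (global features)."""
    def __init__(self, ch):
        super().__init__()
        self.conv_in = nn.Conv2d(ch, ch, 1)
        self.pools = nn.ModuleList(
            nn.MaxPool2d(k, stride=1, padding=k // 2) for k in (5, 9, 13))
        self.fuse = nn.Conv2d(4 * ch, ch, 1)
    def forward(self, x):
        x = self.conv_in(x)
        return self.fuse(torch.cat([x] + [p(x) for p in self.pools], dim=1))

class ModuleThree(nn.Module):
    """Two branches of three convolutions: detection-frame regression and classification."""
    def __init__(self, ch, num_classes):
        super().__init__()
        def branch(out_ch):
            return nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, out_ch, 1))
        self.box = branch(4)            # fed to the detection-frame loss
        self.cls = branch(num_classes)  # fed to the classification loss
    def forward(self, x):
        return self.box(x), self.cls(x)
```

The pooling layout of module two resembles the spatial-pyramid pooling commonly used in detection backbones; the 5/9/13 kernel sizes here are illustrative defaults, not values from the patent.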
In one embodiment, optimizing the detection frame position corresponding to the knee joint bone position comprises:
inputting the eleventh feature map sequentially into a plurality of residual networks, each outputting a corresponding twelfth feature map;
performing side classification on each twelfth feature map to output a corresponding thirteenth feature map;
sequentially merging, fusing, and classifying the thirteenth feature maps to output a fourteenth feature map;
marking the optimized detection frame position in the fourteenth feature map.
In one embodiment, performing side classification on each twelfth feature map to output a corresponding thirteenth feature map comprises:
processing each twelfth feature map through a convolution layer and up-sampling to output the corresponding thirteenth feature map.
Specifically, the knee joint detection frame produced by deep learning may be slightly too large or too small, which introduces error into the relative position of the knee joint osteotomy plane and the bone; the detected frame therefore needs further optimization.
Image edge detection is performed on the detected femur region to locate the bone, and the minimum enclosing rectangle of the bone edge is then computed.
The image edge detection network structure is shown in fig. 3:
ResNet-101 is adopted, with the original average pooling and fully connected layers removed and the underlying convolution blocks retained. The stride of the first and fifth convolution blocks in ResNet-101 is changed from 2 to 1, and dilation factors are introduced into the subsequent convolution layers to maintain the same receptive field size as the original ResNet. Network characteristics:
1. The classification module at the bottom is replaced with a feature extraction module.
2. The classification module is placed at the top of the network, and supervision is applied only there.
3. Shared connections are performed instead of sliced connections.
As shown in fig. 3, this image edge detection network mainly exploits the residual neural network structure, aiming to increase network depth while reducing the loss of image features. The side classification in the network also adopts a residual structure; the results of the side classifications are merged, the classifications are fused, and the final result is output. A sketch of the modified backbone follows.
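For reference, a hedged sketch of such a backbone in torchvision: `replace_stride_with_dilation` converts the strides of the later stages to dilation, which approximates (but does not exactly match) the text's "first and fifth convolution blocks"; the average pooling and fully connected layers are dropped as described:

```python
import torch
from torch import nn
from torchvision.models import resnet101

# ResNet-101 without avgpool/fc; the last two stages use dilation instead of
# stride, so the feature map keeps its resolution while the receptive field
# matches the original network.
net = resnet101(weights=None, replace_stride_with_dilation=[False, True, True])
backbone = nn.Sequential(*list(net.children())[:-2])  # drop avgpool + fc

x = torch.randn(1, 3, 512, 512)
feat = backbone(x)   # 2048-channel feature map at 1/8 input resolution
print(feat.shape)    # torch.Size([1, 2048, 64, 64])
```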
As shown in fig. 4, the bone detection optimization flow is: first detect the bone and generate a detection frame; extract the data inside the detection frame; perform edge segmentation on the extracted data; recompute the detection frame; and finally compute the relative position of the knee joint osteotomy plane and the bone for adjustment. A sketch of this refinement step follows.
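A minimal sketch of that refinement step, with cv2.Canny standing in for the learned edge network described above (the box format and thresholds are assumptions):

```python
import cv2
import numpy as np

def refine_box(image, box):
    """Refine a coarse detection frame via edge detection inside it.

    box: (x, y, w, h) in full-image pixels. cv2.Canny is only a stand-in for
    the modified ResNet-101 edge network sketched earlier.
    """
    x, y, w, h = box
    roi = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(roi, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return box  # no edge found; keep the original frame
    pts = np.vstack([c.reshape(-1, 2) for c in contours])
    rx, ry, rw, rh = cv2.boundingRect(pts)  # minimum enclosing upright rectangle
    # The refined frame replaces the coarse one before the plane-to-bone
    # relative position is recomputed.
    return (x + rx, y + ry, rw, rh)
```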
Fig. 5 is a schematic structural diagram of an automatic osteotomy device for real-time tracking of the knee joint position based on deep learning according to an embodiment of the present application. The device includes:
a data acquisition module 501 for acquiring video data;
a position detection module 502 for detecting the knee joint bone position in the video data based on a deep learning-based bone detection model;
a position optimization module 503 for optimizing the position of the detection frame corresponding to the knee joint bone position;
a real-time detection module 504 for detecting, in real time, the relative position between the knee joint bone position and the osteotomy plane after the osteotomy plane is planned preoperatively;
a position adjustment module 505 for adjusting the robotic arm osteotomy position based on the relative position.
Fig. 6 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
The electronic device may include a processor 601 and a memory 602 storing computer program instructions.
In particular, the processor 601 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present application.
Memory 602 may include mass storage for data or instructions. By way of example and not limitation, memory 602 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 602 may include removable or non-removable (or fixed) media, where appropriate, and may be internal or external to the electronic device. In particular embodiments, memory 602 may be non-volatile solid-state memory.
In one embodiment, memory 602 may be read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 601 reads and executes the computer program instructions stored in the memory 602 to implement any of the automatic osteotomy methods for real-time tracking of the knee joint position based on deep learning in the above embodiments.
In one example, the electronic device may also include a communication interface 603 and a bus 610. As shown in fig. 6, the processor 601, the memory 602, and the communication interface 603 are connected to each other through a bus 610 and perform communication with each other.
The communication interface 603 is mainly used for implementing communication between each module, apparatus, unit and/or device in the embodiment of the present application.
Bus 610 includes hardware, software, or both, coupling the components of the electronic device to one another. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. Bus 610 may include one or more buses, where appropriate. Although embodiments of the application describe and illustrate a particular bus, the application contemplates any suitable bus or interconnect.
In addition, in combination with the automatic osteotomy method for real-time tracking of the knee joint position based on deep learning in the above embodiments, an embodiment of the application provides a computer-readable storage medium having computer program instructions stored thereon; when executed by a processor, the computer program instructions implement the automatic osteotomy method of any of the above embodiments.
It should be understood that the application is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. The method processes of the present application are not limited to the specific steps described and shown, but various changes, modifications and additions, or the order between steps may be made by those skilled in the art after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber-optic media, radio frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. The present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing describes only specific embodiments of the present application. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the systems, modules, and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated here. It should be understood that the scope of the present application is not limited to these embodiments; any equivalent modification or substitution that can easily be conceived by those skilled in the art within the technical scope of the present application shall be included in the scope of the present application.

Claims (1)

1. An automatic osteotomy device for real-time tracking of the knee joint position based on deep learning, the device comprising:
a data acquisition module for acquiring video data;
a position detection module for detecting the knee joint bone position in the video data based on a deep learning-based bone detection model;
a position optimization module for optimizing the position of the detection frame corresponding to the knee joint bone position;
a real-time detection module for detecting, in real time, the relative position between the knee joint bone position and the osteotomy plane after the osteotomy plane is planned preoperatively;
a position adjustment module for adjusting the robotic arm osteotomy position based on the relative position;
wherein obtaining the bone detection model comprises:
collecting video data through a 3D camera;
converting the video data into images, where each image carries a label marking the knee joint bone position;
converting the images and their corresponding labels into a dataset;
dividing the dataset into a training set, a validation set, and a test set at a 7:2:1 ratio;
performing model training based on a deep learning network to obtain the bone detection model;
wherein performing model training based on the deep learning network to obtain the bone detection model comprises:
setting the training batch_size to 32 during model training;
setting the initial learning rate to 1e-4 and adding a learning rate decay strategy: every 5000 iterations, the learning rate decays to 0.9 of its previous value;
setting the optimizer to the Adam optimizer;
setting the loss function to DICE loss;
running a validation pass on the training set and the validation set every 1000 iterations, and judging when to stop network training by early stopping, thereby obtaining the bone detection model;
splitting the video data into first video data and second video data;
detecting the knee joint bone position in the first video data using the bone detection model;
outputting the knee joint bone position together with the second video data, and streaming the resulting knee joint position detection video;
wherein detecting the knee joint bone position in the video data comprises:
processing the image converted from the video data through two convolution layers and module one to obtain a first feature map; module one adopts a residual-unit structure with two residual units, and the result output by each residual unit is merged and output through a convolution layer;
processing the first feature map sequentially through a convolution layer and module one to obtain a second feature map;
processing the second feature map sequentially through a convolution layer and module one to obtain a third feature map;
processing the third feature map sequentially through a convolution layer, module one, and module two to obtain a fourth feature map; module two passes its input through a convolution layer followed by three max pooling layers, and the output of each max pooling layer is merged and output through a convolution layer;
up-sampling the fourth feature map and merging it with the third feature map to obtain a fifth feature map;
passing the fifth feature map sequentially through module one and up-sampling, then merging it with the second feature map to obtain a sixth feature map;
processing the sixth feature map sequentially through module one and module three to obtain a seventh feature map; module three comprises two branches, each outputting through three convolution layers, and the outputs undergo detection-frame loss regression and classification loss regression;
merging the sixth feature map after convolution-layer processing with the fifth feature map after module-one processing, then processing the result with module one to obtain an eighth feature map;
processing the eighth feature map with module three to obtain a ninth feature map;
merging the ninth feature map after convolution-layer processing with the fourth feature map, then processing the result sequentially with module one and module three to obtain a tenth feature map;
merging the seventh, ninth, and tenth feature maps to output an eleventh feature map, in which the knee joint bone position is marked;
wherein optimizing the detection frame position corresponding to the knee joint bone position comprises:
inputting the eleventh feature map sequentially into a plurality of residual networks, each outputting a corresponding twelfth feature map;
performing side classification on each twelfth feature map to output a corresponding thirteenth feature map;
sequentially merging, fusing, and classifying the thirteenth feature maps to output a fourteenth feature map;
marking the optimized detection frame position in the fourteenth feature map;
wherein performing side classification on each twelfth feature map to output a corresponding thirteenth feature map comprises:
processing each twelfth feature map through a convolution layer and up-sampling to output the corresponding thirteenth feature map.
CN202311255924.1A (priority date 2023-09-26, filing date 2023-09-26) · Automatic bone cutting method for real-time tracking of knee joint position based on deep learning · Active · CN117204910B

Priority Applications (1)

CN202311255924.1A · Priority date 2023-09-26 · Filing date 2023-09-26 · Automatic bone cutting method for real-time tracking of knee joint position based on deep learning

Publications (2)

CN117204910A · published 2023-12-12
CN117204910B · granted 2024-06-25

Family

ID=89044161

Family Applications (1)

CN202311255924.1A (Active) · CN117204910B

Country Status (1)

CN: CN117204910B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN109191465A * · 2018-08-16 · 2019-01-11 · 青岛大学附属医院 · A system for locating and identifying the left and right first ribs of the human body based on a deep learning network
CN114246635A * · 2021-12-31 · 2022-03-29 · 杭州三坛医疗科技有限公司 · Osteotomy plane positioning method, system, and device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN110811832B * · 2019-11-21 · 2021-02-23 · 苏州微创畅行机器人有限公司 · Osteotomy checking method, checking equipment, readable storage medium and orthopedic surgery system
CN111345895B * · 2020-03-13 · 2021-08-20 · 北京天智航医疗科技股份有限公司 · Total knee replacement surgery robot auxiliary system, control method and electronic equipment
CN112347964B * · 2020-11-16 · 2023-03-24 · 复旦大学 · Behavior detection method and device based on graph network
CN113538287B * · 2021-07-29 · 2024-03-29 · 广州安思创信息技术有限公司 · Video enhancement network training method, video enhancement method and related devices
CN114404047B * · 2021-12-24 · 2024-06-14 · 苏州微创畅行机器人有限公司 · Positioning method, system, device, computer equipment and storage medium
CN114504384B * · 2022-03-25 · 2022-11-18 · 中国人民解放军陆军军医大学第二附属医院 · Knee joint replacement method and device of laser osteotomy robot
CN116327357A * · 2023-03-09 · 2023-06-27 · 天津市天津医院 · Automatic knee joint simulation operation planning method and system based on deep learning
CN116747016A * · 2023-06-01 · 2023-09-15 · 北京长木谷医疗科技股份有限公司 · Intelligent surgical robot navigation and positioning system and method

Legal Events

Code · Description
PB01 · Publication
SE01 · Entry into force of request for substantive examination
GR01 · Patent grant