CN117495824A - Hip joint detection method, device, equipment and medium - Google Patents


Info

Publication number
CN117495824A
CN117495824A (application number CN202311516299.1A)
Authority
CN
China
Prior art keywords
hip joint
image data
ray image
pelvis
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311516299.1A
Other languages
Chinese (zh)
Inventor
陈静
范洨铕
刘扬帆
苏成悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202311516299.1A priority Critical patent/CN117495824A/en
Publication of CN117495824A publication Critical patent/CN117495824A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0012 Biomedical image inspection
    • G06V 10/764 Image or video recognition using machine-learning classification, e.g. of video objects
    • G06V 10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 Image or video recognition using neural networks
    • G06V 40/10 Human or animal bodies; body parts, e.g. hands
    • G16H 30/20 ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G06T 2207/10116 X-ray image
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30008 Bone
    • G06V 2201/03 Recognition of patterns in medical or anatomical images


Abstract

The present application relates to a hip joint detection method, device, equipment and medium, wherein the method comprises the following steps: acquiring pelvic X-ray image data corresponding to the hip joint; constructing a three-channel array, reading the age and gender information in the pelvic X-ray image data, writing each pixel value of the pelvic X-ray image data into the first channel of the array, writing the age into the second channel, and writing the gender information into the third channel; detecting the pelvic X-ray image data with a trained hip joint detection model, determining the key point coordinates of the hip joint in the pelvic X-ray image data, and determining the femoral head center in the pelvic X-ray image data from those coordinates; and determining the acetabular index of the hip joint according to the position of the femoral head center relative to the Hilgenreiner and Perkin lines, and determining the health state of the hip joint according to the acetabular index and the age and gender information. The method and device can accurately and efficiently diagnose developmental dysplasia of the hip and improve diagnostic accuracy.

Description

Hip joint detection method, device, equipment and medium
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to a hip joint detection method, a corresponding device, an electronic apparatus, and a computer readable storage medium.
Background
Developmental dysplasia of the hip (DDH) refers to a condition in which poor coverage of the femoral head by the acetabulum, caused by defective acetabular development, produces long-term abnormal hip biomechanics and a gradual progression from subluxation to complete dislocation of the hip joint. DDH is one of the common orthopedic diseases of infants, affecting roughly 0.15%-2% of the neonatal population, with a female-to-male patient ratio of 4.2:1. Early diagnosis and treatment are critical for the prevention and management of DDH: when diagnosed early, the condition can be treated with external fixation braces and corrective osteotomy, with good results.
At present, artificial intelligence has developed tremendously across medical image analysis, including disease classification, segmentation, detection, and image registration. With age, bones take on a variety of shapes at different stages of calcification; for example, the appearance of the femoral head and triradiate cartilage centers varies significantly between ages, as well as between hips. If DDH is not detected and diagnosed early, serious hip joint damage can result, making later surgical treatment difficult and costly.
In summary, the problems to be addressed are that, as age increases, bones pass through different stages of calcification and anatomical markers vary in shape, and that DDH which is not detected in early diagnosis may cause serious hip joint damage, leading to difficult and expensive surgical treatment later.
Disclosure of Invention
An object of the present application is to solve the above-mentioned problems and provide a hip joint detection method, a corresponding apparatus, an electronic device, and a computer-readable storage medium.
To achieve the objects of the present application, the following technical solutions are adopted:
a hip joint detection method provided according to one of the objects of the present application comprises the following steps:
acquiring pelvic X-ray image data corresponding to the hip joint in response to a hip joint detection event;
constructing a three-channel array, reading the age and gender information in the pelvic X-ray image data, writing each pixel value of the pelvic X-ray image data into the first channel of the array, writing the age into the second channel, and writing the gender information into the third channel;
detecting the pelvic X-ray image data with the trained hip joint detection model, determining the key point coordinates of the hip joint in the pelvic X-ray image data, and determining the femoral head center in the pelvic X-ray image data from those coordinates;
and determining the acetabular index of the hip joint according to the position of the femoral head center relative to the Hilgenreiner and Perkin lines, and determining the health state of the hip joint according to the acetabular index and the age and gender information, thereby completing the detection of the hip joint.
Optionally, determining the acetabular index of the hip joint according to the position of the femoral head center relative to the Hilgenreiner line and the Perkin line, and determining the health state of the hip joint according to the acetabular index and the age and gender information, so as to complete the detection of the hip joint, comprises the following steps:
determining the joint state of the hip joint according to the acetabular index: if the acetabular index is below 22 degrees and the femoral head center lies in the inner lower quadrant formed by the Hilgenreiner and Perkin lines, the hip joint is in a normal state;
if the acetabular index exceeds 22 degrees while the femoral head center still lies in the inner lower quadrant of the Hilgenreiner and Perkin lines, the hip joint is in a suspected dislocation state;
if the femoral head center lies in the outer lower quadrant of the Hilgenreiner and Perkin lines, the hip joint is in a mild dislocation state;
if the femoral head center lies in the outer upper quadrant of the Hilgenreiner and Perkin lines, the hip joint is in a severe dislocation state.
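The decision rules above can be sketched as a small function; the function name, quadrant labels, and state strings below are illustrative and not part of the patent:

```python
def classify_hip(acetabular_index_deg, quadrant):
    """Classify the hip state from the acetabular index (degrees) and the
    quadrant of the femoral head centre relative to the Hilgenreiner and
    Perkin lines, following the decision rules stated above."""
    if quadrant == "inner_lower":
        # Head well covered: normal if the index is below 22 degrees,
        # otherwise a suspected dislocation.
        return "normal" if acetabular_index_deg < 22 else "suspected_dislocation"
    if quadrant == "outer_lower":
        return "mild_dislocation"
    if quadrant == "outer_upper":
        return "severe_dislocation"
    raise ValueError(f"unknown quadrant: {quadrant}")
```

Note that the 22-degree threshold only discriminates when the femoral head centre lies in the inner lower quadrant; the outer quadrants determine the state regardless of the acetabular index.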
Optionally, before the step of acquiring the pelvic X-ray image data corresponding to the hip joint, the method comprises the following steps:
acquiring pelvic X-ray image data corresponding to the hip joint from a preset database, and converting the DICOM files corresponding to the pelvic X-ray data into JPEG images;
resampling the images to 1024 x 1024 pixels while maintaining the original aspect ratio and zero-padding the shorter side.
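A minimal sketch of this preprocessing step, assuming a grayscale NumPy array as input and nearest-neighbour resampling (the patent does not specify the interpolation method or where the padding is placed):

```python
import numpy as np

def letterbox_resize(img, size=1024):
    """Resample a grayscale image to size x size, preserving the original
    aspect ratio and zero-padding the shorter side, as in the
    preprocessing step above."""
    h, w = img.shape
    scale = size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resampling via index maps (an assumption here).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    canvas = np.zeros((size, size), dtype=img.dtype)
    canvas[:new_h, :new_w] = resized  # zeros fill the shorter side
    return canvas
```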
Optionally, after the step of acquiring the pelvic X-ray image data corresponding to the hip joint, the method comprises the steps of:
performing Mosaic image enhancement on the pelvic X-ray image data in response to an image enhancement instruction;
and stitching the pelvic X-ray image data by random scaling, random cropping, and random arrangement to obtain the image-enhanced pelvic X-ray image data.
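A simplified sketch of the Mosaic stitching described above, reducing the random scaling/cropping/arrangement to random crop offsets around a randomly placed centre point (the exact tiling scheme and the function name are assumptions, not the patent's procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def mosaic(imgs, size=1024):
    """Stitch four grayscale images into one canvas split in four around
    a random centre point; each image is randomly cropped to its tile."""
    assert len(imgs) == 4
    cx = int(rng.integers(size // 4, 3 * size // 4))
    cy = int(rng.integers(size // 4, 3 * size // 4))
    canvas = np.zeros((size, size), dtype=imgs[0].dtype)
    # (y0, y1, x0, x1) regions for the four tiles around the centre point
    regions = [(0, cy, 0, cx), (0, cy, cx, size),
               (cy, size, 0, cx), (cy, size, cx, size)]
    for img, (y0, y1, x0, x1) in zip(imgs, regions):
        th, tw = y1 - y0, x1 - x0
        oy = int(rng.integers(0, img.shape[0] - th + 1))  # random crop offset
        ox = int(rng.integers(0, img.shape[1] - tw + 1))
        canvas[y0:y1, x0:x1] = img[oy:oy + th, ox:ox + tw]
    return canvas
```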
Optionally, training the hip joint detection model comprises the following steps:
dividing the image-enhanced pelvic X-ray image data into a training set, a test set, and a validation set according to a preset ratio;
acquiring a single training sample and its supervision label from the training set, feeding it into a preset hip joint detection model, and extracting image feature information from the region corresponding to the key point coordinates marked by the supervision label;
mapping the image feature information onto preset classification spaces, each representing the position information of one of several hip joints, to obtain the classification probability of each space;
determining the hip joint position represented by the classification space with the largest classification probability, and computing its loss value with a loss function, based on the hip joint position marked by the supervision label;
and when each loss value reaches the preset threshold, the hip joint detection model has been trained to a convergence state and training is complete.
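The patent does not name the loss function; assuming standard softmax cross-entropy over the classification spaces, a single scoring step of the kind described above might look like:

```python
import numpy as np

def classification_loss(logits, label):
    """Turn per-space logits into classification probabilities, pick the
    classification space with the largest probability, and compute the
    cross-entropy loss against the supervision label (an assumption:
    the patent does not specify the loss function)."""
    z = logits - logits.max()            # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()  # probability per classification space
    predicted = int(np.argmax(probs))    # space with the largest probability
    loss = float(-np.log(probs[label]))
    return predicted, loss
```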
Optionally, detecting the pelvic X-ray image data with the trained hip joint detection model, determining the key point coordinates of the hip joint in the pelvic X-ray image data, and determining the femoral head center from those coordinates, comprises the following steps:
extracting image feature information from each pelvic X-ray image with the convolutional neural network in the preset hip joint detection model;
passing the image feature information through the fully connected classifier in the preset hip joint detection model to obtain the classification probabilities of several preset hip joints;
and taking the hip joint with the largest classification probability as the position information of the hip joint.
Optionally, the hip joint detection model is a YOLOv5 object detection model.
A hip joint detection device provided in accordance with another object of the present application, comprising:
a data acquisition module configured to acquire pelvic X-ray image data corresponding to a hip joint in response to a hip joint detection event;
an array channel construction module configured to construct an array of three channels, read age and gender information in the pelvic X-ray image data, write each pixel value of the pelvic X-ray image data into a first channel in the array, write the age into a second channel in the array, and write the gender information into a third channel in the array;
a femoral head center determining module configured to detect the pelvic X-ray image data with the trained hip joint detection model, determine the key point coordinates of the hip joint in the pelvic X-ray image data, and determine the femoral head center in the pelvic X-ray image data from those coordinates;
and a hip joint detection module configured to determine the acetabular index of the hip joint according to the position of the femoral head center relative to the Hilgenreiner and Perkin lines, and to determine the health state of the hip joint according to the acetabular index and the age and gender information, thereby completing the detection of the hip joint.
An electronic device provided for another object of the present application comprises a central processor and a memory, the central processor being configured to invoke and run a computer program stored in the memory so as to perform the steps of the hip joint detection method described herein.
A computer-readable storage medium provided in accordance with another object of the present application stores, in the form of computer-readable instructions, a computer program implementing the hip joint detection method; when the program is invoked by a computer, it performs the steps of the corresponding method.
Compared with the prior art, the present application addresses the problems that bones pass through various stages of calcification with age, and that DDH which is not detected early can cause serious hip joint damage and make later surgical treatment difficult and costly. By fusing age and gender information into the image channels and diagnosing DDH with a deep learning model, DDH can be diagnosed accurately and efficiently from a hip X-ray film, greatly improving diagnostic accuracy and preventing patients from missing the optimal treatment window, which would increase surgical difficulty and impose high surgical costs. The multimodal information markedly improves the performance of deep-learning DDH diagnosis, particularly on small datasets.
Furthermore, the method requires no human auxiliary judgment and does not depend on other measuring equipment, greatly saving manpower and material resources, and it can accurately detect the type of DDH with little computation.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a method for hip joint detection in an embodiment of the present application;
FIG. 2 is a schematic illustration of various types of pelvic X-ray pictures in an embodiment of the present application;
FIG. 3 is a schematic representation of the changes in developmental dysplasia of the hip (DDH) over time and its pathological abnormalities in an embodiment of the present application;
FIG. 4 is a schematic representation of the key markers involved in the diagnostic reference for developmental dysplasia of the hip (DDH) in an embodiment of the present application;
FIG. 5 is a schematic diagram of the distribution of disease, gender, and age information in the dataset in an embodiment of the present application;
FIG. 6 is a schematic diagram of the workflow of the YOLOv5 object detection model in an embodiment of the present application;
FIG. 7 is a schematic diagram of the three-channel array into which age and gender information are written alongside the pelvic X-ray image data in an embodiment of the present application;
FIG. 8 is a schematic diagram of identifying the key point coordinates of the hip joint based on the YOLOv5 object detection model in an embodiment of the present application;
FIG. 9 is a schematic diagram of training and testing based on the YOLOv5 object detection model in an embodiment of the present application;
FIG. 10 is a schematic block diagram of a hip joint detection device in an embodiment of the present application;
fig. 11 is a schematic structural diagram of a computer device in an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of illustrating the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes all or any combination of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, "client," "terminal," and "terminal device" are understood by those skilled in the art to include both devices that contain only a wireless signal receiver with no transmitting capability and devices containing receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device, such as a personal computer or tablet, with a single-line or multi-line display or without a multi-line display; a PCS (Personal Communications Service) that may combine voice, data processing, facsimile, and/or data communication capabilities; a PDA (Personal Digital Assistant) that may include a radio frequency receiver, pager, Internet/intranet access, web browser, notepad, calendar, and/or GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other appliance that has and/or includes a radio frequency receiver. As used herein, a "client" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or adapted and/or configured to operate locally and/or in a distributed fashion at any other location on earth and/or in space. As used herein, a "client," "terminal," or "terminal device" may also be a communication terminal, an Internet terminal, or a music/video playing terminal, for example a PDA, an MID (Mobile Internet Device), and/or a mobile phone with music/video playing functions, or a device such as a smart TV or a set-top box.
The hardware referred to by the names "server", "client", "service node" and the like in the present application is essentially an electronic device having the performance of a personal computer, and is a hardware device having necessary components disclosed by von neumann's principle, such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device, and an output device, and a computer program is stored in the memory, and the central processing unit calls the program stored in the external memory to run in the memory, executes instructions in the program, and interacts with the input/output device, thereby completing a specific function.
It should be noted that the concept of "server" as referred to in this application is equally applicable to the case of a server farm. The servers should be logically partitioned, physically separate from each other but interface-callable, or integrated into a physical computer or group of computers, according to network deployment principles understood by those skilled in the art. Those skilled in the art will appreciate this variation and should not be construed as limiting the implementation of the network deployment approach of the present application.
One or several technical features of the present application, unless expressly specified otherwise, may be deployed on a server, with a client accessing them by remotely invoking the online service interface provided by the server, or may be deployed and run directly on the client.
The neural network model cited or possibly cited in the application can be deployed on a remote server and used for implementing remote call on a client, or can be deployed on a client with sufficient equipment capability for direct call unless specified in a clear text, and in some embodiments, when the neural network model runs on the client, the corresponding intelligence can be obtained through migration learning so as to reduce the requirement on the running resources of the hardware of the client and avoid excessively occupying the running resources of the hardware of the client.
The various data referred to in the present application, unless specified in the plain text, may be stored either remotely in a server or in a local terminal device, as long as it is suitable for being invoked by the technical solution of the present application.
Those skilled in the art will appreciate that, although the various methods of the present application are described based on the same concepts so as to share common features, the methods may be performed independently unless otherwise indicated. Likewise, the embodiments disclosed herein are based on the same inventive concept, so descriptions that share the same wording, as well as descriptions that differ only for convenience or through appropriate adaptation, should be understood equivalently.
Unless the embodiments disclosed herein are expressly stated to be mutually exclusive, their technical features may be cross-combined to flexibly construct new embodiments, so long as such a combination does not depart from the inventive spirit of the present application and satisfies needs in the art or remedies deficiencies in the prior art. Those skilled in the art will be aware of such variants.
With reference to the above exemplary scenario and referring to fig. 1, in one embodiment, the hip joint detection method of the present application comprises the steps of:
Step S10, acquiring pelvic X-ray image data corresponding to the hip joint in response to a hip joint detection event;
the terminal equipment of each hospital can, in response to a hip joint detection event, acquire pelvic X-ray image data corresponding to the hip joint from a database preset by the hospital. In one embodiment, 11,280 pelvic X-ray images were acquired from the database of a cooperating hospital; the original DICOM files in the PACS system were converted into JPEG images with the MicroDicom software, the original aspect ratio was maintained with zero padding on the shorter side, and the images were resampled to 1024 x 1024 pixels.
In some embodiments, referring to fig. 2 and 3, where fig. 2 shows various types of pelvic X-ray images and fig. 3 shows the changes in developmental dysplasia of the hip (DDH) over time together with its pathological abnormalities, the pelvic X-ray image data may be screened according to the following inclusion criteria:
(1) the X-ray image is regarded as a reference plane on which a three-dimensional axis system is drawn: in the X-ray image, the horizontal line is the X axis, the vertical line is the Y axis, and the line perpendicular to the image through its center point is the Z axis; the rotation of the reference plane around the Z axis is less than 20 degrees, with no obvious rotation around the X and Y axes;
(2) the acetabulum and the femoral head are not completely occluded and can be easily marked. Through multiple rounds of screening, a dataset containing 7,750 pelvic X-ray images was constructed for the hip joint detection of the present application.
In some embodiments, referring to fig. 4, which illustrates the key markers involved in the diagnostic reference for developmental dysplasia of the hip (DDH), the diagnoses of the four hip joints shown, from left to right, are: normal (NM), suspected dislocation (SD), mild dislocation (MD), and severe dislocation (HD). Orthopedic experts may be invited to annotate key points on the pelvic X-ray image data corresponding to the hip joint; the number of key points may be 6, 9, or similar, and can be determined by those skilled in the art according to the actual situation.
In some embodiments, please refer to fig. 5 and fig. 6, where fig. 5 shows the distribution of disease, gender, and age information in the dataset and fig. 6 shows the workflow of the YOLOv5 object detection model. The dataset of the hip joint detection model is determined from the pelvic X-ray image data corresponding to the hip joint. Following the idea of stratified sampling, the dataset is randomly divided into a training set, a validation set, and a test set while preserving the age and gender distribution, with 6,220, 765, and 765 samples respectively; where a given gender-and-age group has too few images to span all three subsets, its images are all assigned to the training set, and the remaining cases are split in an 8:1:1 ratio. The hip joints on the two sides are counted independently, giving 15,500 hip joints in total.
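The stratified split described above might be sketched as follows; the dictionary keys, the fewer-than-three rule for small gender/age groups, and the rounding of the 8:1:1 ratio are assumptions based on the description, not the patent's exact procedure:

```python
import random
from collections import defaultdict

def stratified_split(samples, ratios=(8, 1, 1), seed=0):
    """Group samples by (gender, age) and split each group roughly 8:1:1
    into training/validation/test sets so the age and gender distributions
    are preserved; groups too small to split go entirely to training."""
    rnd = random.Random(seed)
    groups = defaultdict(list)
    for s in samples:
        groups[(s["gender"], s["age"])].append(s)
    train, val, test = [], [], []
    total = sum(ratios)
    for members in groups.values():
        rnd.shuffle(members)
        if len(members) < 3:
            train.extend(members)  # too few to span all three subsets
            continue
        n_val = max(1, len(members) * ratios[1] // total)
        n_test = max(1, len(members) * ratios[2] // total)
        val.extend(members[:n_val])
        test.extend(members[n_val:n_val + n_test])
        train.extend(members[n_val + n_test:])
    return train, val, test
```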
Step S20, constructing a three-channel array, reading the age and gender information in the pelvic X-ray image data, writing each pixel value of the pelvic X-ray image data into the first channel of the array, writing the age into the second channel, and writing the gender information into the third channel;
specifically, the multi-mode information is integrated into a picture channel corresponding to the pelvis X-ray image data, and age and gender information corresponding to the picture is read. When the data set is manufactured, the age and sex information corresponding to each picture is stored in the file name, and the naming rule can be as follows: gender_age_serialization (8-digit) _renaming date to prevent conflict with subsequent files;
Referring to fig. 7, the age and gender are added to the second and third channels of the picture as follows. First, the age and gender are read out by a Python script; then an array of three channels is created using NumPy, the size of the array being the same as the resolution of the pelvis X-ray image data. The first channel of the array is written with each pixel value of the pelvis X-ray image data; the second channel of the array is written with the age feature, where the minimum channel value for age is 0, the maximum is 255, and the interval between adjacent ages is 10; the third channel of the array is written with the gender feature, with a female value of 0 and a male value of 255 so as to maximize the variance.
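The channel-packing operation described above can be sketched with NumPy roughly as follows. The age scaling (10 per year, saturating at 255) and the 0/255 gender coding follow the description; the helper name and uint8 dtype are illustrative assumptions:

```python
import numpy as np

def build_three_channel(gray: np.ndarray, age: int, is_male: bool) -> np.ndarray:
    """Pack a grayscale pelvis X-ray plus age/gender into one 3-channel array.

    Channel 0 holds the pixel values, channel 1 the age feature
    (0..255, step 10 between adjacent ages), channel 2 the gender
    feature (female = 0, male = 255).
    """
    h, w = gray.shape
    img = np.zeros((h, w, 3), dtype=np.uint8)
    img[..., 0] = gray
    img[..., 1] = min(10 * age, 255)      # age feature, saturating at 255
    img[..., 2] = 255 if is_male else 0   # gender feature, maximal variance
    return img
```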
Step S30, detecting the pelvis X-ray image data based on a trained hip joint detection model, determining key point coordinates corresponding to the hip joint in the pelvis X-ray image data, and determining the center of a femoral head in the pelvis X-ray image data according to the key point coordinates;
before detecting the pelvis X-ray image data based on the trained hip joint detection model, the pelvis X-ray image data after data enhancement is divided into a training set, a test set and a verification set according to a preset proportion; a single training sample and its supervision label in the training set are acquired, the training sample is input into a preset hip joint detection model, and the image feature information of the region corresponding to the key point coordinates marked by the corresponding supervision label in the training sample is extracted; the image feature information is classified and mapped to preset classification spaces corresponding to the position information representing a plurality of hip joints, and the classification probability corresponding to each classification space is obtained; the position information of the hip joint represented by the classification space with the largest classification probability is determined, and, based on the position information of the hip joint marked by the supervision label, a loss function is adopted to calculate the loss value corresponding to that position information; when each loss value reaches the preset threshold value, the hip joint detection model is trained to a convergence state, and the training of the hip joint detection model is completed.
After the trained hip joint detection model is determined, the pelvis X-ray image data is detected based on the trained hip joint detection model, and the key point coordinates corresponding to the hip joint in the pelvis X-ray image data are determined: the image feature information of each pelvis X-ray image data is extracted based on the convolution neural network in the preset hip joint detection model; the image feature information is fully connected based on the classifier in the preset hip joint detection model to obtain the classification probabilities corresponding to a plurality of preset hip joints; the hip joint with the highest classification probability is determined as the position information corresponding to the hip joint, namely the corresponding key point coordinates of the hip joint, and the center of the femoral head in the pelvis X-ray image data is determined according to the key point coordinates.
In some embodiments, the hip joint detection model may be a YOLO5 target detection model; the present embodiment takes the YOLO5 target detection model as an example, but the application is not limited thereto. Referring to fig. 8, the YOLO5 target detection model extracts a square image patch P, centered on a key point and with side length l, as a local neighborhood; when l is of a proper size, the key point coordinates can be identified by the YOLO5 target detection model.
Referring to fig. 9, the pelvis X-ray image data is detected by using a YOLO5 target detection model. When the YOLO5 target detection model is trained and tested, the image is adjusted to 640×640 pixels, 120 epochs are trained for each experiment, and the batch size is 16; an SGD optimizer is selected, with momentum of 0.93; the initial learning rate is 0.01, and the optimizer weight decay is 0.0005; a warmup training strategy is applied, wherein the number of warmup epochs is 3, the warmup initial momentum is 0.8, and the warmup initial bias learning rate is 0.1. Training is implemented on a GPU based on the PyTorch framework, using an NVIDIA GeForce 3090 GPU.
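Collecting the settings above in one place, they map naturally onto the hyperparameter names used by the public YOLOv5 repository's hyp.*.yaml files. The key names are an assumption about that repository's conventions; the values come directly from the description:

```python
# Training hyper-parameters stated in the description, keyed with the
# names used by the public YOLOv5 repo's hyp files (key names assumed).
hyp = {
    "lr0": 0.01,             # initial learning rate (SGD)
    "momentum": 0.93,        # SGD momentum
    "weight_decay": 0.0005,  # optimizer weight decay
    "warmup_epochs": 3,      # number of warmup epochs
    "warmup_momentum": 0.8,  # warmup initial momentum
    "warmup_bias_lr": 0.1,   # warmup initial bias learning rate
}
train_cfg = {"imgsz": 640, "epochs": 120, "batch_size": 16, "optimizer": "SGD"}
```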
Step S40, determining the acetabular index of the hip joint according to the position of the femoral head center relative to the Hilgenreiner line and the Perkin line, and determining the health state of the hip joint according to the acetabular index, the age and the sex information, so as to finish the detection of the hip joint.
The acetabular index of the hip joint is determined according to the position of the femoral head center relative to the Hilgenreiner line and the Perkin line, and the health state of the hip joint is determined according to the acetabular index, age and sex information, wherein the diagnosis and typing of the hip joint can be divided into four cases. The joint state of the hip joint is determined according to the acetabular index: if the acetabular index is lower than 22 degrees and the center of the femoral head is positioned in the inner lower quadrant formed by the Hilgenreiner line and the Perkin line, the hip joint is in a normal state; if the acetabular index is larger than 22 degrees and the center of the femoral head is positioned in the inner lower quadrant of the Hilgenreiner line and the Perkin line, the hip joint is in a suspected dislocation state; if the center of the femoral head is positioned in the outer lower quadrant of the Hilgenreiner line and the Perkin line, the hip joint is in a mild dislocation state; and if the center of the femoral head is positioned in the outer upper quadrant of the Hilgenreiner line and the Perkin line, the hip joint is in a severe dislocation state.
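The four-way typing above can be sketched as a small decision function. The orientation conventions (image y growing downwards, the Hilgenreiner line taken as horizontal, the Perkin line as vertical, and the medial "inner" side lying at x < perkin_x) are illustrative assumptions:

```python
def classify_hip(ai_deg: float, head_x: float, head_y: float,
                 perkin_x: float, hilgenreiner_y: float) -> str:
    """Type a hip joint from the acetabular index and femoral-head centre.

    Sketch of the four diagnostic cases; the Hilgenreiner line is
    y = hilgenreiner_y, the Perkin line is x = perkin_x, and the
    medial side is assumed to be x < perkin_x.
    """
    inner = head_x < perkin_x        # medial of the Perkin line (assumption)
    lower = head_y > hilgenreiner_y  # below the Hilgenreiner line
    if inner and lower:
        return "normal (NM)" if ai_deg < 22 else "suspected dislocation (SD)"
    if not inner and lower:
        return "mild dislocation (MD)"
    if not inner and not lower:
        return "severe dislocation (HD)"
    return "unclassified"
```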
Compared with the prior art, the method, which integrates age and sex information into the image channels, can accurately and efficiently diagnose developmental dysplasia of the hip (DDH) from a hip joint X-ray film, greatly improves the accuracy of DDH diagnosis, and avoids the problem that a patient misses the optimal treatment time, which increases the operation difficulty and incurs a high operation cost; the multi-mode information of the method greatly improves the effect of diagnosing DDH disease, in particular the performance on a small data set.
Furthermore, the method does not need human auxiliary judgment and does not depend on the assistance of other measuring equipment, which greatly saves manpower and material resources, and it can accurately detect the type of developmental dysplasia of the hip (DDH) disease with a small amount of calculation.
On the basis of any embodiment of the application, before the step of acquiring the pelvic X-ray image data corresponding to the hip joint, the method comprises the following steps:
acquiring pelvis X-ray image data corresponding to the hip joint from a preset database, and converting a DICOM file corresponding to the pelvis X-ray data into a JPEG image;
the image size is resampled to 1024 x 1024 pixels while maintaining the original aspect ratio and filling with zeros on the shorter side.
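The preprocessing step above (resample so the longer side is 1024 pixels, keep the aspect ratio, zero-pad the shorter side) can be sketched as follows. Nearest-neighbour resampling is used here only to keep the sketch self-contained; a real pipeline would more likely use PIL or OpenCV interpolation:

```python
import numpy as np

def resample_pad(img: np.ndarray, size: int = 1024) -> np.ndarray:
    """Resize a grayscale image so its longer side equals `size`,
    preserving the aspect ratio, then zero-pad the shorter side so
    the result is size x size."""
    h, w = img.shape
    scale = size / max(h, w)
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    # nearest-neighbour index maps for rows and columns
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[np.ix_(ys, xs)]
    out = np.zeros((size, size), dtype=img.dtype)
    out[:nh, :nw] = resized          # shorter side padded with zeros
    return out
```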
On the basis of any embodiment of the application, the step of acquiring the pelvis X-ray image data corresponding to the hip joint comprises the following steps:
performing a mosaic image enhancement on the pelvic X-ray image data in response to an image enhancement instruction;
and splicing the pelvis X-ray image data based on random scaling, random cutting and random arrangement modes, and determining pelvis X-ray image data after image enhancement.
The terminal equipment of each hospital can respond to the image enhancement instruction to carry out the mosaic image enhancement on the pelvis X-ray image data; and splicing the pelvis X-ray image data based on random scaling, random cutting and random arrangement modes, wherein 4 pictures can be randomly used, randomly scaled and randomly distributed for splicing, and the pelvis X-ray image data after image enhancement is determined.
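The splicing of four randomly scaled, randomly cropped pictures can be sketched as below. This is a minimal illustration: the real YOLO-style mosaic also shifts the joint point randomly and remaps the bounding-box labels, bookkeeping that is omitted here:

```python
import random
import numpy as np

def mosaic(imgs, size: int = 640, seed: int = 0) -> np.ndarray:
    """Minimal mosaic-augmentation sketch: four grayscale pictures are
    randomly scaled/cropped and spliced into the four quadrants of one
    size x size canvas (label remapping omitted)."""
    rng = random.Random(seed)
    half = size // 2
    canvas = np.zeros((size, size), dtype=np.uint8)
    anchors = [(0, 0), (0, half), (half, 0), (half, half)]
    for img, (y, x) in zip(imgs, anchors):
        s = rng.uniform(0.5, 1.0)                  # random scaling factor
        h = max(1, min(half, int(img.shape[0] * s)))
        w = max(1, min(half, int(img.shape[1] * s)))
        canvas[y:y + h, x:x + w] = img[:h, :w]     # crude random crop
    return canvas
```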
According to this embodiment, performing mosaic image enhancement on the pelvis X-ray image data can enrich the data set of the hip joint detection model and greatly enrich the detection data set; in particular, random scaling adds many small targets, which makes the network more robust. The GPU video memory required is also greatly reduced, so that a good effect can be achieved without a very large mini-batch size.
On the basis of any embodiment of the application, the step of training the hip joint detection model comprises the following steps:
dividing the pelvis X-ray image data after image enhancement into a training set, a testing set and a verification set according to a preset proportion;
acquiring a single training sample and its supervision label in the training set, inputting the training sample into a preset hip joint detection model, and extracting image feature information of the region corresponding to the key point coordinates marked by the corresponding supervision label in the training sample;
the image characteristic information is mapped to preset classification spaces corresponding to the position information representing a plurality of hip joints in a classified mode, and classification probabilities corresponding to the classification spaces are obtained;
Determining the position information of the hip joint represented by the classification space with the largest classification probability, and calculating a loss value corresponding to the position information of the hip joint represented by the classification space with the largest classification probability by adopting a loss function based on the position information of the hip joint marked by the supervision tag;
and when the loss values of the various items reach a preset threshold value, training the hip joint detection model to a convergence state, and completing training of the hip joint detection model.
Specifically, the data set of the hip joint detection model follows the idea of hierarchical (stratified) sampling and is randomly divided into a training set, a verification set and a test set on the premise of preserving the age and gender distribution, the sample numbers being 6220, 765 and 765 respectively. Pictures of rare gender-age combinations are all classified into the training set, gender-age combinations with exactly three pictures contribute one picture to each of the three data sets, and the remaining cases are divided in a ratio of 8:1:1; the hip joints on the two sides are counted independently, for a total of 15500 hip joints. The target detection model can be selected from the Yolo series or the SSD series, such as the Yolo5 target detection model.
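The stratified split can be sketched as follows. The handling of rare gender-age groups (fewer than three pictures go entirely to the training set, exactly three contribute one each) is an interpretation of the description, and the helper names are illustrative:

```python
import random
from collections import defaultdict

def stratified_split(samples, seed: int = 0):
    """Split (gender, age, picture-id) samples per gender-age group:
    groups with < 3 pictures go entirely to the training set, groups of
    exactly 3 give one picture to each set, and the rest are divided in
    a ratio of roughly 8:1:1 (interpretation of the description)."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for s in samples:
        groups[(s[0], s[1])].append(s)       # key: (gender, age)
    train, val, test = [], [], []
    for g in groups.values():
        rng.shuffle(g)
        if len(g) < 3:
            train += g
        elif len(g) == 3:
            train.append(g[0]); val.append(g[1]); test.append(g[2])
        else:
            n = max(1, len(g) // 10)          # ~10% each to val and test
            val += g[:n]; test += g[n:2 * n]; train += g[2 * n:]
    return train, val, test
```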
Because the Yolo5 target detection model has the advantages of high detection speed, high accuracy and the like, the application takes the Yolo5 target detection model as an example. After the training set of the Yolo5 target detection model is determined, a single training sample and its supervision label in the training set are acquired, the training sample is input into the target detection model, and the image feature information of the region corresponding to the key point coordinates marked by the corresponding supervision label in the training sample is extracted. The image feature information is classified and mapped to classification spaces corresponding to preset key point coordinates representing a plurality of hip joints, the classification probability mapped to each classification space is obtained, and the key point coordinates of the hip joint represented by the classification space with the largest classification probability are determined. Based on the key point coordinates of the hip joint marked by the supervision label, a loss function is adopted to calculate the loss value corresponding to the key point coordinates represented by that classification space. When the loss value reaches the preset threshold value, the Yolo5 target detection model is trained to a convergence state; otherwise, the weights of the model are corrected through back propagation, and the above training process is repeated with further training samples until the model reaches the convergence state.
In some embodiments, the pelvis X-ray image data is input into the trained Yolo5 model to obtain the frame position of the hip joint in the pelvis X-ray image data; the model can perform corresponding target detection on an image to obtain the position information of the image region where a detection target is located. For example, the frame position information corresponding to the hip joint is generally represented by rectangular frame coordinates, e.g. (x0, y0, x1, y1), where (x0, y0) represents the upper left corner coordinate of the rectangular frame and (x1, y1) represents the lower right corner coordinate, so that the frame position corresponding to the hip joint can be determined.
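Since the model is trained on square patches centred on each key point, the key point coordinate can be recovered from a detected box as its centre — a one-line sketch:

```python
def box_center(x0: float, y0: float, x1: float, y1: float):
    """Recover the key-point coordinate from a detected bounding box
    (x0, y0, x1, y1): as the model is trained on patches centred on the
    key point, the key point is taken as the box centre."""
    return (x0 + x1) / 2.0, (y0 + y1) / 2.0

# box_center(10, 20, 30, 60) -> (20.0, 40.0)
```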
According to this embodiment, after the Yolo5 target detection model is trained to a convergence state, it obtains the capability of performing target detection on the pelvis X-ray image data to determine the key point coordinates of the hip joint. The Yolo5 target detection model has the advantages of high detection speed, high accuracy and the like; it can accurately and rapidly determine the position information of the hip joint in the pelvis X-ray image data, provides accurate data support for the subsequent diagnosis of the health state of the hip joint, and greatly improves the accuracy of hip joint detection.
On the basis of any embodiment of the present application, detecting the pelvis X-ray image data based on a trained hip joint detection model, determining a key point coordinate corresponding to the hip joint in the pelvis X-ray image data, and determining a femoral head center in the pelvis X-ray image data according to the key point coordinate, including the following steps:
extracting image characteristic information of each pelvis X-ray image data based on a convolution neural network in a preset hip joint detection model;
detecting each image frame of the pelvis X-ray image data by adopting a preset hip joint detection model, and determining the key point coordinates in the pelvis X-ray image data. The convolution network in the model generally comprises a plurality of convolution layers, each convolution layer comprising a plurality of convolution kernels (also called filters); the weight corresponding to each convolution kernel is different and is used for extracting different image features. The convolution kernels scan the whole image frame sequentially from left to right and from top to bottom to extract the corresponding image feature information. In this process, the shallow image features extracted by the shallow (i.e. front) convolution layers of the network contain local and detail information, while the deep image features extracted for the image frame contain more complex and abstract information. Through the operation of all convolution layers, the shallow image features and the deep image features are merged, and abstract representations of the image frame at different scales, namely the image feature information, are obtained.
Fully connecting the image characteristic information based on a classifier in a preset hip joint detection model to obtain classification probabilities corresponding to a plurality of preset hip joints;
the classifier can be an MLP (multi-layer perceptron), which comprises an input layer, hidden layers and an output layer; the specific number of hidden layers can be flexibly set by a person skilled in the art, and the layers in the multi-layer perceptron are fully connected. The input layer receives the image feature information and inputs it into the hidden layers, where the image feature information is classified and mapped into a preset classification space through the activation function corresponding to each hidden layer; the output of the last hidden layer is normalized through the activation function of the output layer, and the classification probabilities corresponding to a first type space and a second type space in the classification space are calculated, wherein the first type space represents a binary value of '1' and the second type space represents a binary value of '0'. The activation functions may be flexibly set by a person skilled in the art: the activation function corresponding to the hidden layers may be a ReLU (Rectified Linear Unit) function or the like, and the activation function of the output layer may be a Softmax function, a Sigmoid function, or the like.
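The forward pass described above can be sketched with NumPy as a one-hidden-layer MLP with ReLU activation and a Softmax output. The sizes and weights are illustrative; a real model learns them during training:

```python
import numpy as np

def mlp_forward(x, w1, b1, w2, b2):
    """Minimal MLP classifier forward pass: one fully connected hidden
    layer with ReLU activation, then an output layer whose Softmax
    normalizes the logits into classification probabilities."""
    h = np.maximum(0.0, x @ w1 + b1)              # hidden layer, ReLU
    z = h @ w2 + b2                               # output-layer logits
    e = np.exp(z - z.max(axis=-1, keepdims=True)) # numerically stable exp
    return e / e.sum(axis=-1, keepdims=True)      # Softmax probabilities
```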
And determining the hip joint with the largest classification probability as the position information corresponding to the hip joint.
Judging whether the maximum classification probability exceeds a preset threshold, and when the maximum classification probability exceeds the preset threshold, determining the hip joint with the maximum classification probability as the identification result of the current image frame, and determining the position of the hip joint with the maximum classification probability as the position information corresponding to the hip joint.
Referring to fig. 10, a hip joint detection device provided for one of the purposes of the present application includes a data acquisition module 1100, an array channel construction module 1200, a femoral head center determination module 1300, and a hip joint detection module 1400. Wherein the data acquisition module 1100 is configured to acquire pelvic X-ray image data corresponding to the hip in response to a hip detection event; an array channel construction module 1200 configured to construct an array of three channels, read age and gender information in the pelvic X-ray image data, write individual pixel values of the pelvic X-ray image data to a first channel in the array, write the age to a second channel in the array, and write the gender information to a third channel in the array; a femoral head center determination module 1300 configured to detect the pelvic X-ray image data based on a trained hip detection model, determine a key point coordinate corresponding to the hip in the pelvic X-ray image data, and determine a femoral head center in the pelvic X-ray image data according to the key point coordinate; the hip joint detection module 1400 is configured to determine an acetabular index of a hip joint according to a position of the femoral head center relative to a Hilgenreiner line and a Perkin line, and determine a health state of the hip joint according to the acetabular index, age and sex information, so as to complete the detection of the hip joint.
On the basis of any embodiment of the present application, please refer to fig. 11, another embodiment of the present application further provides an electronic device, where the electronic device may be implemented by a computer device, and as shown in fig. 11, the internal structure of the computer device is schematically shown. The computer device includes a processor, a computer readable storage medium, a memory, and a network interface connected by a system bus. The computer readable storage medium of the computer device stores an operating system, a database and computer readable instructions, the database can store a control information sequence, and the computer readable instructions when executed by a processor can enable the processor to realize a hip joint detection method. The processor of the computer device is used to provide computing and control capabilities, supporting the operation of the entire computer device. The memory of the computer device may have stored therein computer readable instructions that, when executed by the processor, may cause the processor to perform the hip joint detection method of the present application. The network interface of the computer device is for communicating with a terminal connection. It will be appreciated by those skilled in the art that the structure shown in fig. 11 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
The processor in this embodiment is configured to execute specific functions of each module and its sub-module in fig. 10, and the memory stores program codes and various data required for executing the above-mentioned modules or sub-modules. The network interface is used for data transmission between the user terminal or the server. The memory in the present embodiment stores program codes and data necessary for executing all modules/sub-modules in the hip joint detecting device of the present application, and the server can call the program codes and data of the server to execute the functions of all the sub-modules.
The present application also provides a storage medium storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the hip joint detection method of any of the embodiments of the present application.
The present application also provides a computer program product comprising computer programs/instructions which when executed by one or more processors implement the steps of the hip joint detection method according to any of the embodiments of the present application.
Those skilled in the art will appreciate that implementing all or part of the above-described methods of embodiments of the present application may be accomplished by way of a computer program stored on a computer readable storage medium, which when executed, may comprise the steps of embodiments of the methods described above. The storage medium may be a computer readable storage medium such as a magnetic disk, an optical disk, a Read-only memory (ROM), or a random access memory (Random Access Memory, RAM).
The foregoing is only a partial embodiment of the present application, and it should be noted that, for a person skilled in the art, several improvements and modifications can be made without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.
In summary, the method does not need human auxiliary judgment and does not depend on the assistance of other measuring equipment, which greatly saves manpower and material resources, and it can accurately detect the type of developmental dysplasia of the hip (DDH) disease with a small amount of calculation.

Claims (10)

1. A method of hip joint detection comprising the steps of:
acquiring pelvic X-ray image data corresponding to the hip in response to the hip detection event;
constructing a three-channel array, reading age and gender information in the pelvis X-ray image data, writing each pixel value of the pelvis X-ray image data into a first channel in the array, writing the age into a second channel in the array, and writing the gender information into a third channel in the array;
detecting the pelvis X-ray image data based on a trained hip joint detection model, determining the corresponding key point coordinates of the hip joint in the pelvis X-ray image data, and determining the center of the femoral head in the pelvis X-ray image data according to the key point coordinates;
And determining the acetabular index of the hip joint according to the position of the center of the femoral head relative to the Hilgenreiner line and the Perkin line, and determining the health state of the hip joint according to the acetabular index, the age and the sex information so as to finish the detection of the hip joint.
2. The method of claim 1, wherein the step of determining the acetabular index of the hip joint based on the position of the femoral head center relative to the Hilgenreiner line and the Perkin line, and determining the health status of the hip joint based on the acetabular index, age, and gender information to complete the detection of the hip joint comprises the steps of:
determining the joint state of the hip joint according to the acetabular index, wherein if the acetabular index is lower than 22 degrees and the center of the femoral head is positioned in the inner lower quadrant of the Hilgenreiner line and the Perkin line, the hip joint is in a normal state;
if the acetabular index is larger than 22 degrees and the center of the femoral head is positioned in the inner lower quadrant of the Hilgenreiner line and the Perkin line, the hip joint is in a suspected dislocation state;
if the center of the femoral head is positioned in the outer lower quadrant of the Hilgenreiner line and the Perkin line, the hip joint is in a mild dislocation state;
if the center of the femoral head is positioned in the outer upper quadrant of the Hilgenreiner line and the Perkin line, the hip joint is in a severe dislocation state.
3. The method of claim 1, wherein prior to the step of acquiring pelvic X-ray image data corresponding to the hip, comprising the steps of:
acquiring pelvis X-ray image data corresponding to the hip joint from a preset database, and converting a DICOM file corresponding to the pelvis X-ray data into a JPEG image;
the image size is resampled to 1024 x 1024 pixels while maintaining the original aspect ratio and filling with zeros on the shorter side.
4. The method of claim 1, wherein the step of acquiring pelvic X-ray image data corresponding to the hip joint is followed by the step of:
performing a mosaic image enhancement on the pelvic X-ray image data in response to an image enhancement instruction;
and splicing the pelvis X-ray image data based on random scaling, random cutting and random arrangement modes, and determining pelvis X-ray image data after image enhancement.
5. The method of claim 4, wherein the step of training the hip detection model comprises the steps of:
dividing the pelvis X-ray image data after image enhancement into a training set, a testing set and a verification set according to a preset proportion;
Acquiring a single training sample and its supervision label in the training set, inputting the training sample into a preset hip joint detection model, and extracting image feature information of the region corresponding to the key point coordinates marked by the corresponding supervision label in the training sample;
the image characteristic information is mapped to preset classification spaces corresponding to the position information representing a plurality of hip joints in a classified mode, and classification probabilities corresponding to the classification spaces are obtained;
determining the position information of the hip joint represented by the classification space with the largest classification probability, and calculating a loss value corresponding to the position information of the hip joint represented by the classification space with the largest classification probability by adopting a loss function based on the position information of the hip joint marked by the supervision tag;
and when the loss values of the various items reach a preset threshold value, training the hip joint detection model to a convergence state, and completing training of the hip joint detection model.
6. The method of claim 1, wherein detecting the pelvic X-ray image data based on a trained hip detection model, determining a key point coordinate corresponding to the hip in the pelvic X-ray image data, and determining a center of a femoral head in the pelvic X-ray image data based on the key point coordinate, comprises:
Extracting image characteristic information of each pelvis X-ray image data based on a convolution neural network in a preset hip joint detection model;
fully connecting the image characteristic information based on a classifier in a preset hip joint detection model to obtain classification probabilities corresponding to a plurality of preset hip joints;
and determining the hip joint with the largest classification probability as the position information corresponding to the hip joint.
7. The hip joint detection method according to any one of claims 1 to 6, wherein the hip joint detection model is a YOLO5 target detection model.
8. A hip joint detection device, comprising:
a data acquisition module configured to acquire pelvic X-ray image data corresponding to a hip joint in response to a hip joint detection event;
an array channel construction module configured to construct an array of three channels, read age and gender information in the pelvic X-ray image data, write each pixel value of the pelvic X-ray image data into a first channel in the array, write the age into a second channel in the array, and write the gender information into a third channel in the array;
The femoral head center determining module is used for detecting the pelvis X-ray image data based on a trained hip joint detection model, determining key point coordinates corresponding to the hip joint in the pelvis X-ray image data, and determining the femoral head center in the pelvis X-ray image data according to the key point coordinates;
the hip joint detection module is used for determining the acetabular index of the hip joint according to the position of the center of the femoral head relative to the Hilgenreiner line and the Perkin line, and determining the health state of the hip joint according to the acetabular index, the age and the sex information so as to finish the detection of the hip joint.
9. An electronic device comprising a central processor and a memory, characterized in that the central processor is arranged to invoke a computer program stored in the memory for performing the steps of the method according to any of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it stores in the form of computer-readable instructions a computer program implemented according to the method of any one of claims 1 to 7, which, when invoked by a computer, performs the steps comprised by the corresponding method.
CN202311516299.1A 2023-11-14 2023-11-14 Hip joint detection method, device, equipment and medium Pending CN117495824A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311516299.1A CN117495824A (en) 2023-11-14 2023-11-14 Hip joint detection method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN117495824A 2024-02-02

Family

ID=89672410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311516299.1A Pending CN117495824A (en) 2023-11-14 2023-11-14 Hip joint detection method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117495824A (en)

Similar Documents

Publication Publication Date Title
Yap et al. Deep learning in diabetic foot ulcers detection: A comprehensive evaluation
US10499857B1 (en) Medical protocol change in real-time imaging
CN110853111B (en) Medical image processing system, model training method and training device
Han et al. Automated pathogenesis-based diagnosis of lumbar neural foraminal stenosis via deep multiscale multitask learning
JP2022517769A (en) 3D target detection and model training methods, equipment, equipment, storage media and computer programs
CN112419326B (en) Image segmentation data processing method, device, equipment and storage medium
CN112614133B (en) Three-dimensional pulmonary nodule detection model training method and device without anchor point frame
CN114298234B (en) Brain medical image classification method and device, computer equipment and storage medium
Wu et al. BA‐GCA Net: Boundary‐Aware Grid Contextual Attention Net in Osteosarcoma MRI Image Segmentation
US20230005138A1 (en) Lumbar spine annatomical annotation based on magnetic resonance images using artificial intelligence
Yue et al. Retinal vessel segmentation using dense U-net with multiscale inputs
Hussain et al. Deep learning-based diagnosis of disc degenerative diseases using MRI: a comprehensive review
Yan et al. Cine MRI analysis by deep learning of optical flow: Adding the temporal dimension
CN111260636B (en) Model training method and device, image processing method and device, and medium
CN116258933A (en) Medical image segmentation device based on global information perception
Lu et al. PKRT-Net: prior knowledge-based relation transformer network for optic cup and disc segmentation
Li et al. Automatic bone age assessment of adolescents based on weakly-supervised deep convolutional neural networks
CN116758087B (en) Lumbar vertebra CT bone window side recess gap detection method and device
Reddy et al. A deep learning based approach for classification of abdominal organs using ultrasound images
CN112164447B (en) Image processing method, device, equipment and storage medium
Wang et al. Automatic consecutive context perceived transformer GAN for serial sectioning image blind inpainting
CN111798452A (en) Carotid artery handheld ultrasonic image segmentation method, system and device
CN115346074B (en) Training method, image processing device, electronic equipment and storage medium
CN116485853A (en) Medical image registration method and device based on deep learning neural network
CN117495824A (en) Hip joint detection method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination