CN111383212A - Method for analyzing ultrasonic video image of pelvic floor - Google Patents


Info

Publication number
CN111383212A
CN111383212A
Authority
CN
China
Prior art keywords: video, pelvic floor, ultrasonic, video frame, analyzed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010151184.7A
Other languages
Chinese (zh)
Other versions
CN111383212B (en)
Inventor
杨鑫
曾兴涛
高睿
李锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Duying Medical Technology Co ltd
Original Assignee
Shenzhen Duying Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Duying Medical Technology Co ltd filed Critical Shenzhen Duying Medical Technology Co ltd
Priority to CN202010151184.7A
Publication of CN111383212A
Application granted
Publication of CN111383212B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses a method for analyzing a pelvic floor ultrasound video image. By analyzing the pelvic floor ultrasound video to be analyzed, the method automatically identifies the key anatomical structures of a plurality of video frames in that video, and then determines the maximum Valsalva video frame and the corresponding measurement results based on the key anatomical structures obtained for each frame. On one hand, because the key anatomical structures of each video frame are identified automatically and the maximum Valsalva video frame is determined automatically from them, manual operation by the doctor is reduced and the speed of the pelvic floor ultrasound examination is improved; on the other hand, human error introduced into the examination results by insufficient physician experience can be reduced.

Description

Method for analyzing ultrasonic video image of pelvic floor
Technical Field
The invention relates to the technical field of ultrasound, in particular to an analysis method of a pelvic floor ultrasonic video image.
Background
Female Pelvic Floor Dysfunction (FPFD) is a common disease caused by pelvic floor support structure defect or degeneration, injury and dysfunction, mainly including Pelvic Organ Prolapse (POP), Stress Urinary Incontinence (SUI), fecal incontinence and sexual dysfunction, which seriously affect female health and quality of life.
Current pelvic floor ultrasound examination generally adopts a two-dimensional examination mode, whose analysis and measurement workflow is as follows: pelvic floor ultrasound video segments are acquired and frozen in the resting state and during the Valsalva maneuver, and the doctor, operating the trackball of the ultrasound image acquisition device, selects a resting video frame and a maximum-Valsalva video frame from the segments according to experience; the doctor then identifies and marks the key anatomical structures on the selected frames, again according to experience; finally, the biometric parameters are measured. In this process, on one hand, the ultrasound video must be manually frozen many times, and the resting frame, the maximum-Valsalva frame, the key anatomical structures, and so on must be selected manually, making the operation cumbersome; on the other hand, identification of key anatomical structures relies entirely on the doctor's personal experience, so differences in experience between doctors lead to examination deviations and inconsistent examination results.
Disclosure of Invention
The invention aims to solve the technical problem of providing an analysis method of a pelvic floor ultrasonic video image aiming at the defects of the prior art.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a method of analysis of pelvic floor ultrasound video images, the method comprising:
acquiring a pelvic floor ultrasonic video to be analyzed, wherein the pelvic floor ultrasonic video to be analyzed carries at least one ultrasonic video corresponding to Valsalva action;
determining key anatomical structures of a plurality of video frames in the ultrasonic video of the pelvic floor to be analyzed;
determining a maximum Valsalva video frame corresponding to the ultrasonic video of the pelvic floor to be analyzed based on the obtained key anatomical structure of each video frame, and determining measurement data corresponding to the ultrasonic video of the pelvic floor to be analyzed based on the maximum Valsalva video frame.
The analysis method of the pelvic floor ultrasonic video image, wherein the acquiring of the pelvic floor ultrasonic video to be analyzed specifically comprises:
acquiring a first ultrasonic video of a patient in a resting state;
when the current video frame of the first ultrasonic video meets a first preset condition, taking the current video frame as a starting point, and continuously acquiring a second ultrasonic video of the patient executing the Valsalva action;
when the second ultrasonic video meets a second preset condition, selecting a moment after the moment meeting the second preset condition as an end point;
and intercepting the ultrasonic video segment between the starting point and the ending point to obtain the ultrasonic video of the pelvic floor to be analyzed.
According to the analysis method of the pelvic floor ultrasonic video image, a video frame corresponding to the starting point of the to-be-analyzed pelvic floor ultrasonic video is a rest video frame of the to-be-analyzed pelvic floor ultrasonic video.
The method for analyzing the pelvic floor ultrasound video image, wherein the determining of the measurement data corresponding to the pelvic floor ultrasound video to be analyzed based on the maximum Valsalva video frame specifically includes:
and determining the measurement data corresponding to the ultrasonic video of the pelvic floor to be analyzed based on the maximum Valsalva video frame and the rest video frame.
The method for analyzing the pelvic floor ultrasound video image, wherein the determining of the key anatomical structures of a plurality of video frames in the to-be-analyzed pelvic floor ultrasound video specifically comprises:
inputting the ultrasonic video of the pelvic floor to be analyzed into a trained recognition algorithm model, wherein the recognition algorithm model is used for recognizing a key anatomical structure of an ultrasonic video frame of the pelvic floor;
and outputting key anatomical structures of a plurality of video frames in the ultrasonic video of the pelvic floor to be analyzed through the identification algorithm model.
The method for analyzing the pelvic floor ultrasound video image, wherein the determining the maximum Valsalva video frame corresponding to the pelvic floor ultrasound video to be analyzed based on the key anatomical structure of each acquired video frame specifically includes:
for the key anatomical structures of a plurality of acquired video frames, converting the key anatomical structures of the video frames into geometric figures according to a preset rule;
and determining the maximum Valsalva video frame corresponding to the ultrasonic video of the pelvic floor to be analyzed based on the acquired geometric figure of each video frame.
The method for analyzing the pelvic floor ultrasound video image, wherein the determining the maximum Valsalva video frame corresponding to the pelvic floor ultrasound video to be analyzed based on the acquired geometric figure of each video frame specifically includes:
for a plurality of video frames, selecting a first geometric figure corresponding to the lowest point of the posterior wall of the bladder from the geometric figures of the video frames;
determining the vertical distance from the lowest point of the back wall of the bladder to a preset reference line in the video frame according to the first geometric figure;
and determining the maximum Valsalva video frame corresponding to the ultrasonic video of the pelvic floor to be analyzed according to all the acquired vertical distances.
The method for analyzing the pelvic floor ultrasound video image, wherein after determining the maximum Valsalva video frame corresponding to the pelvic floor ultrasound video to be analyzed according to all the acquired vertical distances, the method further comprises the following steps:
drawing the geometric figure of the maximum Valsalva video frame and a preset reference line to an image layer, and marking the measurement data in the image layer.
The method for analyzing the pelvic floor ultrasonic video image further comprises the following steps:
for a preset biometric parameter, determining the corresponding measurement data of each video frame based on the obtained key anatomical structure of each video frame;
and drawing a change curve of the preset biometric parameter for the pelvic floor ultrasound video to be analyzed based on all the acquired measurement data.
A computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps in the method for analysis of pelvic floor ultrasound video images as described in any of the above.
An ultrasound apparatus, comprising: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes connection communication between the processor and the memory;
the processor, when executing the computer readable program, implements the steps in the method of analysis of pelvic floor ultrasound video images as described in any one of the above.
Advantageous effects: compared with the prior art, the invention provides a method for analyzing a pelvic floor ultrasound video image. By analyzing the pelvic floor ultrasound video to be analyzed, the method automatically identifies the key anatomical structures of a plurality of video frames in that video, and then determines the maximum Valsalva video frame and the corresponding measurement results based on the key anatomical structures obtained for each frame. On one hand, because the key anatomical structures of each video frame are identified automatically and the maximum Valsalva video frame is determined automatically from them, manual operation by the doctor is reduced and the speed of the pelvic floor ultrasound examination is improved; on the other hand, human error introduced into the examination results by insufficient physician experience can be reduced.
Drawings
Fig. 1 is a flowchart of an analysis method of a pelvic floor ultrasound video image provided by the present invention.
Fig. 2 is a schematic diagram of a pelvic floor ultrasound video frame in the method for analyzing a pelvic floor ultrasound video image according to the present invention.
FIG. 3 is a schematic diagram of a geometric figure in the method for analyzing the ultrasonic video image of the pelvic floor according to the present invention.
Fig. 4 is a flowchart illustrating an embodiment of a method for analyzing an ultrasound video image of a pelvic floor according to the present invention.
Fig. 5 is a trend chart of the bladder descent distance in the method for analyzing pelvic floor ultrasound video images provided by the present invention.
FIG. 6 is a bladder neck distance variation trend chart in the method for analyzing the pelvic floor ultrasound video images provided by the present invention.
Fig. 7 is a trend chart of bladder posterior angle change in the method for analyzing the pelvic floor ultrasonic video image provided by the invention.
Fig. 8 is a diagram of the variation trend of the urethral inclination angle in the method for analyzing the pelvic floor ultrasonic video image provided by the invention.
Fig. 9 is a schematic structural diagram of an ultrasound apparatus provided by the present invention.
Detailed Description
The invention provides a method for analyzing a pelvic floor ultrasonic video image, which is further described in detail below by referring to the attached drawings and embodiments in order to make the purposes, technical schemes and effects of the invention clearer and clearer. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
This embodiment provides a method for analyzing a pelvic floor ultrasound video image, which can be applied to electronic equipment with a front-camera or rear-camera function; the electronic equipment may be implemented in various forms, such as a mobile phone, a tablet computer, a palmtop computer, or a Personal Digital Assistant (PDA). In addition, the functions realized by the method may be implemented by a processor in the electronic equipment calling program code, and the program code may be saved in a computer storage medium.
This embodiment provides a method for analyzing a pelvic floor ultrasound video image; as shown in fig. 1, the method may include the following steps:
s10, obtaining a pelvic floor ultrasonic video to be analyzed, wherein the pelvic floor ultrasonic video to be analyzed carries at least one ultrasonic video corresponding to Valsalva action.
Specifically, the pelvic floor ultrasound video to be analyzed may be formed from multiple frames of pelvic floor ultrasound images acquired by the ultrasound image acquisition device from pelvic floor tissue information. The video contains at least one complete Valsalva maneuver: during acquisition, the user performs at least one complete Valsalva action so that the ultrasound video corresponding to at least one complete Valsalva action is captured. For example, the pelvic floor ultrasound video includes several pelvic floor ultrasound video frames as shown in fig. 2. Of course, in practical applications the pelvic floor ultrasound video to be analyzed may also be a set of selected ultrasound video frames, where two adjacent frames may be continuous frames acquired by the ultrasound image acquisition device during the examination, or discontinuous ones; for example, if video frame A and video frame B are adjacent frames of the multi-frame video, frame B may be acquired a preset number of frames after frame A.
In an implementation manner of this embodiment, the acquiring an ultrasound video of a pelvic floor to be analyzed specifically includes:
and S11, acquiring the first ultrasonic video of the patient in a resting state.
Specifically, the first ultrasound video is acquired in the resting state: before acquisition, the doctor has the examinee assume a resting state, and once the examinee is at rest, the ultrasound image acquisition device is started to acquire ultrasound images, which are displayed on the display interface of the device.
And S12, when the current video frame of the first ultrasonic video meets the first preset condition, taking the current video frame as a starting point, and continuously acquiring a second ultrasonic video of the patient for executing Valsalva action.
Specifically, the current video frame is the ultrasound video frame acquired at the current moment. The first preset condition is used to determine whether acquisition of the ultrasound images corresponding to the Valsalva action can begin; for example, it may be that the image quality of the current video frame reaches a preset image quality. It can be understood that after the current video frame is acquired, whether its image quality reaches the preset image quality is judged; if so, the current video frame of the first ultrasound video is judged to meet the first preset condition. The image quality may be, for example, image sharpness, and the judgment may be made manually by the doctor. Alternatively, in practical applications a reference pelvic floor ultrasound image of the preset quality can be set in advance; after the current video frame is obtained, it is compared with the reference image, and if its image quality is higher than that of the reference image, the current video frame of the first ultrasound video is determined to meet the first preset condition.
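The quality gate described above can be sketched as follows. This is a minimal illustration, not the patent's specified metric: the variance-of-Laplacian focus measure and the comparison against a reference frame are assumptions standing in for whatever image-quality criterion the device actually uses.

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Illustrative focus measure: variance of a finite-difference Laplacian."""
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
    return float(lap.var())

def meets_first_condition(current: np.ndarray, reference: np.ndarray) -> bool:
    """First preset condition (sketch): the current frame's quality is at
    least that of a preset reference pelvic floor ultrasound image."""
    return sharpness(current) >= sharpness(reference)
```

In practice this check could also be replaced by the doctor's manual judgment, as the text allows.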
In addition, the taking of the current video frame as the starting point may be implemented by a preset control key, for example, when the current video frame meets a first preset condition, the preset control key is clicked to mark the current video frame as the starting point. The control key may be a virtual control key arranged on an interaction interface of a touch screen of the ultrasound image acquisition device, or an entity control key arranged on an operation panel of the ultrasound image acquisition device.
Further, the second ultrasound video is the ultrasound image sequence acquired while the examinee performs the Valsalva action, and it is continuous with the first ultrasound video. It can be understood that after the current video frame meets the first preset condition, the doctor has the examinee perform the Valsalva action while the ultrasound images continue to be acquired. That is, the first and second ultrasound videos come from a single ultrasound examination, and the last frame of the first ultrasound video is the first frame of the second ultrasound video.
And S13, when the second ultrasonic video meets the second preset condition, selecting a moment after the moment meeting the second preset condition as an end point.
Specifically, the second preset condition is that the second ultrasound video includes the ultrasound video corresponding to at least one complete Valsalva action. The end point can be determined by clicking a preset control key at a moment after the second preset condition is met. The control key may be a virtual key on the touch-screen interaction interface of the ultrasound image acquisition device, or a physical key on its operation panel.
And S14, intercepting the ultrasonic video segment between the starting point and the ending point to obtain the ultrasonic video of the pelvic floor to be analyzed.
Specifically, the ultrasonic video of the pelvic floor to be analyzed is an ultrasonic video segment between the starting point and the ending point, and the ultrasonic video of the pelvic floor to be analyzed includes a video frame corresponding to the starting point. The ultrasonic video of the pelvic floor to be analyzed can comprise a video frame corresponding to the end point or not. In addition, the starting point is an ultrasonic image acquired when the user is in a resting state, so that a video frame corresponding to the starting point of the ultrasonic video of the pelvic floor to be analyzed can be used as a resting video frame of the ultrasonic video of the pelvic floor to be analyzed.
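Steps S11 to S14 above amount to buffering frames, marking a start point (the resting frame) and an end point, and clipping the segment between them. A minimal sketch, with hypothetical class and method names:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VideoBuffer:
    """Buffers incoming ultrasound frames and clips the segment to analyze."""
    frames: List[object] = field(default_factory=list)
    start_idx: Optional[int] = None
    end_idx: Optional[int] = None

    def append(self, frame) -> None:
        self.frames.append(frame)

    def mark_start(self) -> None:
        # First preset condition met: the current frame becomes the
        # starting point (and the resting video frame).
        self.start_idx = len(self.frames) - 1

    def mark_end(self) -> None:
        # A moment after the second preset condition is met.
        self.end_idx = len(self.frames) - 1

    def clip(self) -> list:
        """Pelvic floor ultrasound video to be analyzed: the segment between
        the start and end points, start frame included."""
        return self.frames[self.start_idx:self.end_idx + 1]
```

The first frame of `clip()` is the resting video frame used later for the measurement data.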
And S20, determining key anatomical structures of a plurality of video frames in the ultrasonic video of the pelvic floor to be analyzed.
Specifically, the video frames may be selected from the pelvic floor ultrasound video to be analyzed according to a preset rule (for example, one frame every other frame), may be all the video frames in the video, or may be selected randomly. In one implementation of this embodiment, the video frames are all the video frames in the ultrasound video to be analyzed. The key anatomical structures may include the pubic symphysis posterior inferior border, the pubic symphysis central axis, the bladder neck, the proximal urethral central axis, the bladder posterior wall, and the bladder posterior wall lowest point, among others. The key anatomical structures are identified through a pre-established recognition algorithm model, which may be a deep learning model for locating key anatomical structures in pelvic floor ultrasound images, or a traditional machine learning model (e.g., random forest, AdaBoost). In one implementation of this embodiment, the recognition algorithm model employs a deep learning model, since deep learning models simulate the neural structure of the human brain, have complex network structures and strong image-processing capability, and produce results closest to human judgment.
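The interval-based frame-selection rule mentioned above can be sketched in one line; the function name and default step are illustrative:

```python
def select_frames(frames, step=2):
    # Preset-rule sketch: keep one frame every `step` frames.
    # step=1 keeps every frame, matching the "all frames" option in the text.
    return frames[::step]
```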
Further, in an implementation manner of this embodiment, the determining a key anatomical structure of a plurality of video frames in the pelvic floor ultrasound video to be analyzed specifically includes:
s21, inputting the pelvic floor ultrasonic video to be analyzed into a trained recognition algorithm model, wherein the recognition algorithm model is used for recognizing a key anatomical structure of a pelvic floor ultrasonic video frame;
and S22, outputting key anatomical structures of a plurality of video frames in the ultrasonic video of the pelvic floor to be analyzed through the recognition algorithm model.
Specifically, an input item of the recognition algorithm model is a video segment, an output item of the recognition algorithm model is a key anatomical structure diagram corresponding to each frame of video in the video segment, and the key anatomical structure diagram carries all key anatomical structures corresponding to the video frame, wherein the key anatomical structure diagram can carry one or more key anatomical structures. For example, the key anatomical map may carry one or more of a pubic symphysis posterior inferior border, a pubic symphysis medial axis, a bladder neck, a proximal urethral medial axis, a bladder posterior wall, and a bladder posterior wall nadir. In addition, in a specific implementation manner of this embodiment, the input item of the recognition algorithm model may be a video segment in a 2D format.
Further, since there may be several key anatomical structures, the recognition algorithm model needs to recognize several structures. Thus, the recognition algorithm model may comprise a plurality of recognition units, each being a single-task algorithm model, used in parallel in a coordinated manner, wherein each recognition unit recognizes one key anatomical structure. For example, the recognition algorithm model may include six recognition units, used respectively to identify the pubic symphysis posterior inferior border, the pubic symphysis central axis, the bladder neck, the proximal urethral central axis, the bladder posterior wall, and the bladder posterior wall lowest point. It can be understood that the model may be established as follows: first, a single-task model is trained for each key anatomical structure, and each trained model serves as one recognition unit of the overall recognition algorithm model. When the model is used, the pelvic floor ultrasound video to be analyzed is input into each recognition unit separately; for the video frames of the pelvic floor ultrasound video, each unit outputs its corresponding key anatomical structure, and the outputs of all units are then merged to obtain the key anatomical structures corresponding to each video frame.
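The run-each-unit-then-merge scheme above can be sketched as follows. The recognition units are stand-ins for trained single-task models; here they are arbitrary callables, and the function name is hypothetical:

```python
def identify_key_structures(video_frames, units):
    """Run each single-task recognition unit over the clip and merge the
    per-frame outputs into one structure dict per frame (sketch)."""
    results = [{} for _ in video_frames]
    for name, unit in units.items():           # e.g. "bladder_neck" -> model
        for i, frame in enumerate(video_frames):
            results[i][name] = unit(frame)     # one key anatomical structure
    return results
```

In a real system each `unit` would be a trained network's inference call; the merge step is just dictionary assembly.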
Further, in an implementation manner of this embodiment, the ways in which the recognition algorithm model identifies the key anatomical structures may include target segmentation, heat maps, position parameter regression, target detection, and the like. Target segmentation segments the key anatomical structures from the image with a segmentation method; the heat map approach converts key anatomical structures into a heat map of confidence values for regression; position parameter regression directly regresses the position coordinates of the key anatomical structures in the image; and target detection generates candidate boxes in the image and then identifies whether each box contains a key anatomical structure.
Further, the training process of the recognition algorithm model may be: an ultrasound video is input into a preset neural network model, image texture features are extracted by the model, a predicted key anatomical structure is output based on the extracted features, a loss function is computed from the predicted and real anatomical structures, and the preset model is trained with this loss function to obtain the recognition algorithm model. In this embodiment, the loss function may measure the error between the predicted and real key anatomical structures; for example, the Dice loss: Dice loss = 1 − 2 × (overlap area of predicted and real structures) / (area of predicted structure + area of real structure).
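The Dice loss referred to above can be written out directly for binary masks. A minimal NumPy sketch; the smoothing term `eps`, which avoids division by zero on empty masks, is an implementation assumption:

```python
import numpy as np

def dice_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Dice loss = 1 - 2*|P ∩ G| / (|P| + |G|) over binary masks."""
    inter = float((pred * target).sum())               # overlap region |P ∩ G|
    return 1.0 - (2.0 * inter + eps) / (float(pred.sum() + target.sum()) + eps)
```

Identical predicted and real masks give a loss near 0; disjoint masks give a loss near 1.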
S30, determining the maximum Valsalva video frame corresponding to the ultrasonic video of the pelvic floor to be analyzed based on the key anatomical structure of each acquired video frame, and determining the measurement data corresponding to the ultrasonic video of the pelvic floor to be analyzed based on the maximum Valsalva video frame.
Specifically, the measurement data may include the resting bladder neck distance, the bladder neck distance at maximum Valsalva, the resting urethral inclination angle, the urethral inclination angle at maximum Valsalva, the resting posterior urethrovesical angle, the bladder descent distance at maximum Valsalva, the bladder neck mobility, the urethral rotation angle, and so on. Accordingly, the measurement data corresponding to the pelvic floor ultrasound video to be analyzed is determined based on both the maximum Valsalva video frame and a resting video frame, wherein the resting video frame is the starting video frame of the pelvic floor ultrasound video to be analyzed.
Further, in an implementation manner of this embodiment, the determining, based on the key anatomical structure of each acquired video frame, the maximum Valsalva video frame corresponding to the pelvic floor ultrasound video to be analyzed specifically includes:
for the key anatomical structures of a plurality of acquired video frames, converting the key anatomical structures of the video frames into geometric figures according to a preset rule;
and determining the maximum Valsalva video frame corresponding to the ultrasonic video of the pelvic floor to be analyzed based on the acquired geometric figure of each video frame.
In particular, the preset rule is set in advance for converting each key anatomical structure into a geometric figure. The preset rule may be: the lowest point of the posterior bladder wall corresponds to a point; the central axis of the pubic symphysis corresponds to a line segment; the posterior-inferior edge of the pubic symphysis corresponds to a point; the posterior bladder wall corresponds to a closed figure (e.g., a polygon); the bladder neck corresponds to a point; and the central axis of the proximal urethra corresponds to a line segment, wherein the polygon corresponding to the posterior bladder wall may be determined based on the edge contour of the posterior bladder wall and is a closed polygon. Therefore, as shown in fig. 3, after the geometric figure corresponding to each key anatomical structure is determined for a plurality of video frames in the pelvic floor ultrasound video to be analyzed, the maximum Valsalva video frame corresponding to the pelvic floor ultrasound video to be analyzed is determined according to the geometric figures corresponding to each video frame.
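The preset rule above amounts to a mapping from each key anatomical structure to a geometric primitive. A hedged sketch of such a per-frame container (the class and field names are hypothetical illustrations, not from the patent):

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class FrameGeometry:
    """Geometric primitives for one video frame, per the preset rule above."""
    bladder_lowest_point: Point          # lowest point of the posterior bladder wall -> point
    symphysis_axis: Tuple[Point, Point]  # central axis of the pubic symphysis -> line segment
    symphysis_edge_point: Point          # posterior-inferior edge of the pubic symphysis -> point
    bladder_wall_polygon: List[Point]    # posterior bladder wall contour -> closed polygon
    bladder_neck_point: Point            # bladder neck -> point
    urethra_axis: Tuple[Point, Point]    # central axis of the proximal urethra -> line segment

def contour_to_polygon(contour: List[Point]) -> List[Point]:
    """Close an edge contour so the posterior bladder wall becomes a closed polygon."""
    if contour and contour[0] != contour[-1]:
        return contour + [contour[0]]
    return contour
```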
In an optional embodiment, the determining, based on the acquired geometric figure of each video frame, the maximum Valsalva video frame corresponding to the ultrasonic video of the pelvic floor to be analyzed specifically includes:
for a plurality of video frames, selecting a first geometric figure corresponding to the lowest point of the posterior wall of the bladder from the geometric figures of the video frames;
determining the vertical distance from the lowest point of the back wall of the bladder to a preset reference line in the video frame according to the first geometric figure;
and determining the maximum Valsalva video frame corresponding to the ultrasonic video of the pelvic floor to be analyzed according to all the acquired vertical distances.
Specifically, after the geometric figures corresponding to each video frame are acquired, the central axis of the pubic symphysis and the posterior-inferior edge point of the pubic symphysis in the video frame may be determined, and a preset reference line may then be determined from them; for example, the preset reference line is a straight line passing through the posterior-inferior edge point of the pubic symphysis at an included angle of 135° with the central axis of the pubic symphysis. In addition, in one implementation of this embodiment, the preset reference line is determined from the posterior-inferior edge point of the pubic symphysis alone; the reference line may then be a straight line passing through that point and parallel to the horizontal line of the image, which simplifies the step of determining the preset reference line and improves its accuracy.
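A hedged sketch of the 135° reference-line variant and the signed point-to-line distance it relies on. The sign convention below assumes y increases upward; in image coordinates, where y increases downward, the signs flip. Names and signatures are illustrative:

```python
import math

def reference_line_direction(axis_angle_deg, offset_deg=135.0):
    """Unit direction of a reference line rotated `offset_deg` away from the
    pubic symphysis central axis (given here as an angle in degrees)."""
    theta = math.radians(axis_angle_deg + offset_deg)
    return (math.cos(theta), math.sin(theta))

def signed_distance_to_line(point, line_point, direction):
    """Signed perpendicular distance from `point` to the line through
    `line_point` with unit vector `direction`; with y increasing upward,
    points on the left of the direction vector get positive values."""
    dx, dy = point[0] - line_point[0], point[1] - line_point[1]
    return direction[0] * dy - direction[1] * dx  # 2-D cross product
```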
Further, after the preset reference line corresponding to each video frame is determined, the vertical distance from the lowest point of the posterior bladder wall in each video frame to its reference line is calculated, where the vertical distance is the point-to-line distance and the reference line acts as the zero dividing line: the distance from a point above the reference line is recorded as a positive distance with a positive value, and the distance from a point below the reference line is recorded as a negative distance with a negative value. After the vertical distance from the lowest point of the posterior bladder wall of each video frame to the reference line is obtained, the video frame corresponding to the minimum of all the vertical distances is selected as the maximum Valsalva video frame, where the minimum is taken over both the positive and the negative distances. For example, suppose the pelvic floor ultrasound video to be analyzed comprises four video frames, denoted video frames A, B, C, and D, with vertical distances to the reference line of 1, 2, -1, and -2, respectively; the minimum distance among the four video frames is then -2, i.e., video frame D is the maximum Valsalva video frame.
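The frame-selection rule above, taking the frame whose signed vertical distance is smallest, can be sketched as follows (the function name is illustrative):

```python
def select_max_valsalva_frame(vertical_distances):
    """Index of the frame with the smallest signed vertical distance
    (positive above the reference line, negative below).  Ties could be
    broken randomly, as the text notes; min() keeps the first one here."""
    return min(range(len(vertical_distances)), key=lambda i: vertical_distances[i])
```

With the distances from the worked example, `[1, 2, -1, -2]`, this selects index 3, i.e. video frame D.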
It should be noted that, when several video frames share the minimum distance, one of them may be selected at random as the maximum Valsalva video frame. For example, in the example above, with the distances of video frames A and B unchanged but the distances of video frames C and D both equal to -2, one of video frames C and D is selected at random as the maximum Valsalva video frame, e.g., video frame C.
In one implementation of this embodiment, after the maximum Valsalva video frame and the resting video frame are determined, the measurement data corresponding to the pelvic floor ultrasound video to be analyzed are automatically measured from these two frames. The resting bladder neck distance is the vertical distance from the bladder neck to the reference line in the resting video frame, obtained as the point-to-line distance from the bladder neck point to the reference line in that frame; the bladder neck distance during the maximum Valsalva action is the vertical distance from the bladder neck to the reference line in the maximum Valsalva video frame, obtained likewise; the urethral inclination angle at rest is the angle between the central axis of the urethra and the perpendicular from the bladder neck to the reference line in the resting video frame; the urethral inclination angle during the maximum Valsalva action is the corresponding angle in the maximum Valsalva video frame; the bladder urethra posterior angle at rest is the angle between the central axis of the urethra and the tangent to the posterior bladder wall in the resting video frame; the bladder urethra posterior angle during the maximum Valsalva action is the angle between the central axis of the urethra and the tangent to the posterior bladder wall in the maximum Valsalva video frame; the bladder descent distance during the maximum Valsalva action is the vertical distance from the lowest point of the posterior bladder wall to the reference line in the maximum Valsalva video frame; the bladder neck mobility is the difference between the bladder neck distance during the maximum Valsalva action and the bladder neck distance at rest; and the urethral rotation angle is the difference between the urethral inclination angle during the maximum Valsalva action and the urethral inclination angle at rest. It should be noted that the measurement data of each measurement parameter include both a value and a sign; for example, the difference between the maximum Valsalva bladder neck distance and the resting bladder neck distance may be positive or negative, and when it is negative the measurement data carry a minus sign.
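A few of the derived quantities above, sketched under the assumption that the per-frame distances and angles have already been measured (function names and argument order are illustrative; the definitions above give mobility and rotation as signed differences between the maximum-Valsalva and resting values):

```python
import math

def angle_between_deg(v1, v2):
    """Angle in degrees between two 2-D vectors (usable for the inclination
    and posterior-angle measurements, given the relevant axis vectors)."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def bladder_neck_mobility(rest_distance, valsalva_distance):
    """Signed difference between the maximum-Valsalva and resting bladder
    neck distances, per the definition above; the sign is kept."""
    return valsalva_distance - rest_distance

def urethral_rotation_angle(rest_tilt_deg, valsalva_tilt_deg):
    """Signed difference between the maximum-Valsalva and resting urethral
    inclination angles."""
    return valsalva_tilt_deg - rest_tilt_deg
```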
In addition, the geometric figures corresponding to the key anatomical structures of each video frame may be drawn on a single image (denoted the first image), whose size is the same as that of the corresponding video frame and in which the position of each geometric figure corresponds to the position of the corresponding key anatomical structure in the video frame. Thus, after the resting video frame and the maximum Valsalva video frame are acquired, the measurement data corresponding to the resting video frame, such as the resting bladder neck distance, the urethral inclination angle at rest, and the bladder urethra posterior angle at rest, can be calculated automatically from the positional relationship between each geometric figure and the reference line in the first image corresponding to the resting video frame. Similarly, the measurement data corresponding to the maximum Valsalva video frame, such as the bladder neck distance, the urethral inclination angle, and the bladder urethra posterior angle during the maximum Valsalva action, can be calculated automatically from the positional relationship between each geometric figure and the preset reference line in the first image corresponding to the maximum Valsalva video frame. Finally, after the measurement data corresponding to the resting video frame and the maximum Valsalva video frame are calculated, the bladder descent distance during the maximum Valsalva action, the bladder neck mobility, and the urethral rotation angle can be calculated from them.
Further, in an implementation manner of this embodiment, after determining the maximum Valsalva video frame corresponding to the ultrasonic video of the pelvic floor to be analyzed according to all the acquired vertical distances, the method further includes:
drawing the geometric figure of the maximum Valsalva video frame and a preset reference line to an image layer, and marking the measurement data in the image layer.
Specifically, after the measurement data are acquired, the geometric figures of the maximum Valsalva video frame may be drawn on the image layer together with the corresponding preset reference line, so that the image layer displays both synchronously. In addition, to allow the doctor to view the measurement data conveniently, the measurement data may be annotated in the image layer so that they can be determined quickly.
In one implementation manner of this embodiment, the method further includes:
for a preset biometric quantity, determining the measurement data corresponding to each video frame based on the obtained key anatomical structure of each video frame;
and drawing a change curve of the preset biometric quantity corresponding to the pelvic floor ultrasound video to be analyzed based on all the acquired measurement data.
Specifically, the preset biometric quantity may include the resting bladder neck distance, the bladder neck distance during the maximum Valsalva action, the urethral inclination angle at rest, the urethral inclination angle during the maximum Valsalva action, the bladder urethra posterior angle at rest, the bladder descent distance during the maximum Valsalva action, the bladder neck mobility, and the urethral rotation angle. For a preset biometric quantity, its measurement data (i.e., its measured value) in each video frame can be acquired, and the acquired measurement data can then be drawn into a change curve in the order in which the video frames were acquired, to display the dynamic trend of the preset biometric quantity during the Valsalva action, where the change curve takes the frame index (in acquisition order) as the horizontal axis and the corresponding measured value as the vertical axis. Examples include the trend of the bladder descent distance shown in fig. 5, the trend of the bladder neck distance shown in fig. 6, the trend of the bladder urethra posterior angle shown in fig. 7, and the trend of the urethral inclination angle shown in fig. 8. In addition, the change curve is provided with a current-frame indicator; when the doctor operates the ultrasound image acquisition device to slide through the ultrasound video segment and view each frame, the current-frame indicator on the change curve is linked to the displayed video frame, so that the doctor can continuously view the measurement data of each frame.
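Building the per-frame series behind such a change curve can be sketched as follows; the resulting (x, y) pairs could then be handed to any plotting library (names are illustrative):

```python
def measurement_series(frame_geometries, measure_fn):
    """Per-frame series for one biometric quantity: x is the frame index in
    acquisition order (horizontal axis), y the measured value (vertical axis)."""
    xs = list(range(len(frame_geometries)))
    ys = [measure_fn(g) for g in frame_geometries]
    return xs, ys
```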
In an implementation of this embodiment, as shown in fig. 4, after the resting video frame and the maximum Valsalva video frame are determined, an adjustment instruction may further be monitored, where the adjustment instruction is used either to change the resting video frame and/or the maximum Valsalva video frame to a specified resting video frame and/or a specified maximum Valsalva video frame, or to instruct re-execution of the step of determining the key anatomical structures of a plurality of video frames in the pelvic floor ultrasound video segment to be analyzed. The adjustment instruction may be input by the doctor, transmitted by an external device, and so on. Of course, it should be noted that the specified resting video frame and the specified maximum Valsalva video frame are both video frames in the pelvic floor ultrasound video to be analyzed.
By way of example: the doctor switches the displayed video frame of the ultrasound video segment by operating the trackball of the ultrasound image acquisition device, and then designates another video frame as the resting video frame or the maximum Valsalva video frame. The designation may be made by the doctor pressing a preset key; that is, when the doctor presses the preset key, the currently displayed video frame is set as the resting video frame or the maximum Valsalva video frame. The preset key may be a virtual key on the interactive interface of the touch screen of the ultrasound image acquisition device, a physical key on the operation panel of the device, and so on. In addition, when the resting video frame or the maximum Valsalva video frame is changed, the key anatomical structures of the changed frame are abstracted into editable geometric figures drawn on the upper layer of the image, and the measurement data are displayed on the screen.
In one implementation of this embodiment, after the editable geometry is drawn on the top layer of the image and the measurement data is displayed on the screen, it can be determined whether an adjustment needs to be made to the geometry to achieve more accurate positioning, where the adjustment may include, but is not limited to: moving, stretching and rotating. In addition, when adjustments to the geometry are needed, the physician may manually edit and adjust the critical anatomy geometry (e.g., adjust points to more accurate locations of the image) and update and display the measurement data based on the adjusted critical anatomy geometry.
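The move, stretch, and rotate adjustments mentioned above correspond to standard 2-D transforms applied to a figure's points; a minimal sketch (function names are illustrative):

```python
import math

def move(points, dx, dy):
    """Translate every point of an editable figure."""
    return [(x + dx, y + dy) for x, y in points]

def stretch(points, sx, sy, origin=(0.0, 0.0)):
    """Scale a figure about `origin` by factors (sx, sy)."""
    ox, oy = origin
    return [(ox + (x - ox) * sx, oy + (y - oy) * sy) for x, y in points]

def rotate(points, angle_deg, origin=(0.0, 0.0)):
    """Rotate a figure about `origin` by `angle_deg` degrees."""
    ox, oy = origin
    c, s = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    return [(ox + (x - ox) * c - (y - oy) * s,
             oy + (x - ox) * s + (y - oy) * c) for x, y in points]
```

After such an adjustment, the measurement data would be recomputed from the transformed points and redisplayed, as described above.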
Based on the method for analyzing the pelvic floor ultrasound video image, the present embodiment provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps in the method for analyzing the pelvic floor ultrasound video image according to the above embodiment.
Based on the above analysis method of the pelvic floor ultrasound video image, the present invention also provides an ultrasound apparatus, as shown in fig. 9, which includes at least one processor (processor) 20; a display screen 21; and a memory (memory)22, and may further include a communication Interface (Communications Interface)23 and a bus 24. The processor 20, the display 21, the memory 22 and the communication interface 23 can communicate with each other through the bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. The processor 20 may call logic instructions in the memory 22 to perform the methods in the embodiments described above.
Furthermore, the logic instructions in the memory 22 may be implemented in software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.
The memory 22, which is a computer-readable storage medium, may be configured to store a software program, a computer-executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 executes the functional application and data processing, i.e. implements the method in the above-described embodiments, by executing the software program, instructions or modules stored in the memory 22.
The memory 22 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the ultrasound apparatus, and the like. Further, the memory 22 may include a high-speed random access memory and may also include a non-volatile memory, for example, any of a variety of media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk; a transitory storage medium may also be used.
In addition, the specific processes loaded and executed by the storage medium and by the instruction processors in the ultrasound device are described in detail in the method above and are not repeated herein.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for analyzing a pelvic floor ultrasound video image, the method comprising:
acquiring a pelvic floor ultrasonic video to be analyzed, wherein the pelvic floor ultrasonic video to be analyzed carries at least one ultrasonic video corresponding to Valsalva action;
determining key anatomical structures of a plurality of video frames in the ultrasonic video of the pelvic floor to be analyzed;
determining a maximum Valsalva video frame corresponding to the ultrasonic video of the pelvic floor to be analyzed based on the obtained key anatomical structure of each video frame, and determining measurement data corresponding to the ultrasonic video of the pelvic floor to be analyzed based on the maximum Valsalva video frame.
2. The method for analyzing the ultrasonic video image of the pelvic floor according to claim 1, wherein the acquiring the ultrasonic video of the pelvic floor to be analyzed specifically comprises:
acquiring a first ultrasonic video of a patient in a resting state;
when the current video frame of the first ultrasonic video meets a first preset condition, taking the current video frame as a starting point, and continuously acquiring a second ultrasonic video of the patient executing the Valsalva action;
when the second ultrasonic video meets a second preset condition, selecting a moment after the moment meeting the second preset condition as an end point;
and intercepting the ultrasonic video segment between the starting point and the ending point to obtain the ultrasonic video of the pelvic floor to be analyzed.
3. The method for analyzing the ultrasonic video image of the pelvic floor according to claim 2, wherein the video frame corresponding to the starting point of the ultrasonic video of the pelvic floor to be analyzed is a resting video frame of the ultrasonic video of the pelvic floor to be analyzed.
4. The method for analyzing the ultrasonic video image of the pelvic floor according to claim 3, wherein the determining the measurement data corresponding to the ultrasonic video of the pelvic floor to be analyzed based on the maximum Valsalva video frame specifically comprises:
and determining the measurement data corresponding to the ultrasonic video of the pelvic floor to be analyzed based on the maximum Valsalva video frame and the resting video frame.
5. The method for analyzing the pelvic floor ultrasound video image according to claim 1, wherein the determining the key anatomical structures of the video frames in the pelvic floor ultrasound video to be analyzed specifically comprises:
inputting the ultrasonic video of the pelvic floor to be analyzed into a trained recognition algorithm model, wherein the recognition algorithm model is used for recognizing a key anatomical structure of an ultrasonic video frame of the pelvic floor;
and outputting key anatomical structures of a plurality of video frames in the ultrasonic video of the pelvic floor to be analyzed through the identification algorithm model.
6. The method for analyzing the pelvic floor ultrasound video image according to claim 1, wherein the determining the maximum Valsalva video frame corresponding to the pelvic floor ultrasound video to be analyzed based on the key anatomical structure of each acquired video frame specifically comprises:
for the key anatomical structures of a plurality of acquired video frames, converting the key anatomical structures of the video frames into geometric figures according to a preset rule;
and determining the maximum Valsalva video frame corresponding to the ultrasonic video of the pelvic floor to be analyzed based on the acquired geometric figure of each video frame.
7. The method for analyzing the ultrasonic video image of the pelvic floor according to claim 6, wherein the determining the maximum Valsalva video frame corresponding to the ultrasonic video of the pelvic floor to be analyzed based on the acquired geometric figure of each video frame specifically comprises:
for a plurality of video frames, selecting a first geometric figure corresponding to the lowest point of the posterior wall of the bladder from the geometric figures of the video frames;
determining the vertical distance from the lowest point of the back wall of the bladder to a preset reference line in the video frame according to the first geometric figure;
and determining the maximum Valsalva video frame corresponding to the ultrasonic video of the pelvic floor to be analyzed according to all the acquired vertical distances.
8. The method for analyzing ultrasound video images of the pelvic floor according to claim 1, further comprising:
for a preset biometric quantity, determining the measurement data corresponding to each video frame based on the obtained key anatomical structure of each video frame;
and drawing a change curve of the preset biometric quantity corresponding to the ultrasonic video of the pelvic floor to be analyzed based on all the acquired measurement data.
9. A computer readable storage medium storing one or more programs which are executable by one or more processors to implement the steps in the method for analyzing a pelvic floor ultrasound video image according to any one of claims 1 to 8.
10. An ultrasound device, comprising: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes connection communication between the processor and the memory;
the processor, when executing the computer readable program, implements the steps in the method for analyzing a pelvic floor ultrasound video image as claimed in any one of claims 1 to 8.
CN202010151184.7A 2020-03-06 2020-03-06 Analysis method of pelvic floor ultrasonic video image Active CN111383212B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010151184.7A CN111383212B (en) 2020-03-06 2020-03-06 Analysis method of pelvic floor ultrasonic video image

Publications (2)

Publication Number Publication Date
CN111383212A true CN111383212A (en) 2020-07-07
CN111383212B CN111383212B (en) 2023-09-01

Family

ID=71218645

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024113521A1 (en) * 2022-11-29 2024-06-06 四川大学华西第二医院 Multi-modal data fusion method and device for pelvic floor function overall evaluation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170065249A1 (en) * 2015-09-08 2017-03-09 Advanced Tactile Imaging Inc. Methods and probes for vaginal tactile and ultrasound imaging
FR3070255A1 (en) * 2017-08-31 2019-03-01 Eurl Cornier METHOD FOR MODELING ELASTICITY OF PELVIS TISSUES AND ELASTIC DEFORMATIONS OF FEMALE URETH ASSOCIATED WITH URINARY INCONTINENCES
CN109893146A (en) * 2019-03-07 2019-06-18 深圳大学 A kind of female pelvic dysfunction appraisal procedure and its system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant