CN115919367A - Ultrasonic image processing method and device, electronic equipment and storage medium - Google Patents

Ultrasonic image processing method and device, electronic equipment and storage medium

Info

Publication number: CN115919367A
Application number: CN202211586750.2A
Authority: CN (China)
Prior art keywords: fetus, fetuses, ultrasound image, measurement item, detected
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 王长成, 周国义
Current assignee: Opening Of Biomedical Technology Wuhan Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Opening Of Biomedical Technology Wuhan Co ltd
Application filed by Opening Of Biomedical Technology Wuhan Co ltd
Priority to CN202211586750.2A

Landscapes

  • Ultrasonic Diagnosis Equipment (AREA)
Abstract

The embodiment of the invention provides an ultrasound image processing method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring an initial ultrasound image; performing multi-fetal identification on the initial ultrasound image to obtain a multi-fetal identification result; marking a plurality of fetuses in the initial ultrasound image based on the multi-fetal recognition result; acquiring an ultrasound image to be measured containing at least one of the marked fetuses; and measuring at least one measurement item of at least one fetus to be measured based on the ultrasound image to be measured. This technical scheme can assist the user in identifying and measuring fetuses in a multi-fetal scene, greatly reducing the user's workload and thereby improving the efficiency and accuracy of the examination.

Description

Ultrasonic image processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an ultrasound image processing method, an ultrasound image processing apparatus, an electronic device, and a storage medium.
Background
Multiple pregnancies fall within the high-risk pregnancy range and increase the morbidity and mortality of perinatal diseases such as hypertension, premature labor, and dystocia for both mother and fetuses. In recent years, as the incidence of multiple pregnancy has increased year by year, the clinical problems it brings have drawn the attention of more and more obstetricians. Strengthening prenatal detection, with early diagnosis and early treatment for the fetuses, can therefore reduce complications affecting mother and infants. Prenatal ultrasound examination can determine the chorionicity and amnionicity of a multiple pregnancy and helps to monitor fetal growth and development.
In the prior art, doctors mainly judge whether a pregnancy is a multiple pregnancy by visually counting the chorionic sacs and amniotic sacs during ultrasound examination, which is time-consuming. Moreover, existing automatic ultrasound image identification and measurement techniques are only suitable for single-fetus scenes and cannot be applied to multi-fetal scenes.
Therefore, a new method for processing ultrasound images is needed to solve the above problems.
Disclosure of Invention
To at least partially solve the problems in the prior art, an ultrasound image processing method, an ultrasound image processing apparatus, an electronic device, and a storage medium are provided.
According to an aspect of the present invention, there is provided an ultrasound image processing method including: acquiring an initial ultrasound image; performing multi-fetal identification on the initial ultrasound image to obtain a multi-fetal identification result; marking a plurality of fetuses in the initial ultrasound image based on the multi-fetal recognition result; acquiring an ultrasound image to be tested containing at least one of the marked fetuses; and measuring at least one measurement item of at least one fetus to be measured based on the ultrasound image to be measured.
Illustratively, the measurement of the at least one measurement item of the at least one fetus under test based on the ultrasound image under test includes: performing standard section identification on the ultrasound image to be detected to judge whether the ultrasound image to be detected belongs to one of at least one preset type of standard sections; in the case that the ultrasound image to be detected belongs to a standard section of a specific preset type, performing image segmentation on the ultrasound image to be detected to segment at least one target structure from it, wherein the at least one target structure is a structure related to the standard section of the specific preset type; and measuring at least one measurement item based on the segmentation result of the at least one target structure, wherein the at least one measurement item is a measurement item related to the at least one target structure.
Illustratively, the number of the at least one fetus to be measured is greater than or equal to two, and after measuring the at least one measurement item of the at least one fetus to be measured based on the ultrasound image to be measured, the method further includes: for a first measurement item in the at least one measurement item, calculating a first difference between the measurement results of the at least one fetus to be measured on the first measurement item; and outputting first prompt information when the first difference is greater than or equal to a first difference threshold value.
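A minimal sketch of this inter-fetal discordance check is shown below; the threshold value, units, and all function names are illustrative assumptions, not values specified by the method:

```python
def check_first_difference(measurements, threshold):
    """measurements maps each marked fetus to its result for the same
    measurement item (e.g. crown-rump length in mm). Returns prompt text
    when the spread between fetuses meets or exceeds the threshold,
    otherwise None."""
    values = list(measurements.values())
    first_difference = max(values) - min(values)
    if first_difference >= threshold:
        return (f"Prompt: inter-fetal difference {first_difference:.1f} "
                f"meets or exceeds threshold {threshold:.1f}")
    return None

# Hypothetical crown-rump lengths in mm and a 10 mm threshold.
alert = check_first_difference({"Fetus A": 65.0, "Fetus B": 54.0}, threshold=10.0)
```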
Illustratively, the first measurement item includes the crown-rump length.
Illustratively, after measuring at least one measurement item of at least one fetus under test based on the ultrasound image under test, the method further comprises: for a second measurement item in the at least one measurement item, calculating a second difference between the measurement result of any fetus to be measured on the second measurement item and the standard measurement result of a standard fetus on the second measurement item; and outputting second prompt information when the second difference is greater than or equal to a second difference threshold value.
Illustratively, the second measurement items include one or more of: biparietal diameter, placenta thickness, and maximum amniotic fluid depth.
Illustratively, after measuring at least one measurement item of at least one fetus under test based on the ultrasound image under test, the method further comprises: for a third measurement item of the at least one measurement item, selecting the largest measurement result from the measurement results of the at least one fetus to be measured on the third measurement item, and outputting that largest measurement result.
Illustratively, acquiring an ultrasound image under test containing at least one fetus under test of the marked plurality of fetuses includes: performing target tracking on an ultrasound image sequence acquired in real time while the user operates the ultrasound probe, based on the fetal marking result, so as to determine the track of any target fetus to be detected in the at least one fetus to be detected in the ultrasound image sequence; if the track of the target fetus to be detected is found to be lost, outputting third prompt information; and after the track of the target fetus to be detected has been lost for a preset time period, or when an examination-ending instruction is received, determining that the ultrasound images containing the target fetus to be detected in the ultrasound image sequence are the ultrasound images to be detected containing the target fetus to be detected.
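The track-loss behavior above reduces to a small per-frame state check. The sketch below measures the loss period in frames and invents all names and return strings; these are illustrative assumptions, not the patent's implementation:

```python
def track_state(last_seen_frame, current_frame, lost_frames_limit):
    """Given the last frame index in which a target fetus was detected,
    decide whether it is still tracked, has just been lost (trigger the
    third prompt information), or has been lost beyond the preset period
    (finalize the set of ultrasound images to be measured for it)."""
    gap = current_frame - last_seen_frame
    if gap == 0:
        return "tracked"
    if gap < lost_frames_limit:
        return "lost: output third prompt information"
    return "finalize images for this fetus"

# Fetus last seen at frame 90; by frame 130 the loss exceeds the limit.
state = track_state(last_seen_frame=90, current_frame=130, lost_frames_limit=30)
```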
Illustratively, marking a plurality of fetuses in the initial ultrasound image based on the multi-fetal recognition results includes: identifying a location of a cervical os from the initial ultrasound image; determining a first relative position relation between a plurality of fetuses and the cervical orifice based on the multi-fetus identification result and the cervical orifice identification result; ranking the plurality of fetuses based on the first relative positional relationship; the plurality of fetuses are marked in a sorted order.
Illustratively, marking a plurality of fetuses in the initial ultrasound image based on the multi-fetal recognition results includes: receiving marking information input by a user; the plurality of fetuses are tagged based on the tagging information.
Illustratively, performing multi-fetal recognition on the initial ultrasound image to obtain a multi-fetal recognition result includes: performing multi-fetal identification on the initial ultrasound image through a target detection algorithm, wherein the multi-fetal identification result comprises target detection frames indicating the positions of the fetuses. After performing multi-fetal identification on the initial ultrasound image to obtain the multi-fetal identification result, the method further comprises: determining a second relative position relationship of at least some of the plurality of fetuses based on the multi-fetal identification result, wherein the second relative position relationship comprises an orientation relationship between two fetuses and/or a distance between the target detection frames corresponding to the two fetuses; and outputting position information indicating the second relative position relationship.
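The second relative position relationship (orientation plus distance between two target detection frames) can be computed directly from frame coordinates. In this sketch the box format and the returned orientation vocabulary are illustrative assumptions:

```python
import math

def relative_position(box_a, box_b):
    """Boxes are (x_min, y_min, x_max, y_max) in image coordinates, with x
    growing rightward and y growing downward. Returns the orientation of
    box_b relative to box_a and the distance between the box centers."""
    cax, cay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cbx, cby = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    dx, dy = cbx - cax, cby - cay
    horizontal = "right" if dx >= 0 else "left"
    vertical = "below" if dy >= 0 else "above"
    return f"{horizontal}-{vertical}", math.hypot(dx, dy)

# Second box lies down-and-right of the first; centers are 50 px apart.
orientation, distance = relative_position((0, 0, 10, 10), (30, 40, 40, 50))
```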
According to another aspect of the present invention, there is provided an ultrasound image processing apparatus including: the first acquisition module is used for acquiring an initial ultrasonic image; the identification module is used for carrying out multi-tire identification on the initial ultrasonic image so as to obtain a multi-tire identification result; a marking module for marking a plurality of fetuses in the initial ultrasonic image based on the multi-fetal recognition result; a second obtaining module, configured to obtain an ultrasound image to be measured, where the ultrasound image includes at least one of the marked fetuses; the measuring module is used for measuring at least one measuring item of at least one fetus to be measured based on the ultrasonic image to be measured.
According to yet another aspect of the present invention, there is provided an electronic device comprising a processor and a memory, the memory having stored therein a computer program, the processor executing the computer program to implement the ultrasound image processing method described above.
Illustratively, the electronic device is an ultrasound diagnostic device or an ultrasound workstation.
According to yet another aspect of the present invention, there is provided a storage medium storing a computer program/instructions which, when executed by a processor, implement the ultrasound image processing method described above.
According to the ultrasound image processing method, the ultrasound image processing device, the electronic device and the storage medium of the embodiment of the invention, a multi-fetal recognition result can be obtained based on the initial ultrasound image, and at least one measurement item of at least one fetus to be measured can be measured based on the multi-fetal recognition result. The method can assist the user in identifying and measuring the multi-fetal scene, and greatly reduce the workload of the user, thereby improving the inspection efficiency and accuracy of the user.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 shows a schematic flow diagram of a method of processing an ultrasound image according to one embodiment of the present invention;
FIG. 2 shows a schematic flow diagram of a method of processing ultrasound images according to one embodiment of the present invention;
FIG. 3 shows a schematic block diagram of an apparatus for processing ultrasound images according to an embodiment of the present invention; and
FIG. 4 shows a schematic block diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments described in the present application without inventive step, shall fall within the scope of protection of the present application.
In order to at least partially solve the above technical problem, an embodiment of the present invention provides an ultrasound image processing method. FIG. 1 shows a schematic diagram of an ultrasound image processing method 100 according to one embodiment of the present invention. As shown in fig. 1, the ultrasound image processing method 100 may include the following steps S110, S120, S130, S140 and S150.
Step S110, an initial ultrasound image is acquired.
Illustratively, an ultrasound scan video (which may be simply referred to as "ultrasound video") for an object to be measured may be acquired with an image acquisition device. The object to be measured may be a certain biological tissue, such as a uterine part of a human body or the like. The ultrasound scanning video may include a plurality of frames, each frame may correspond to an ultrasound image, and then a plurality of frames of ultrasound images may be obtained according to the obtained ultrasound scanning video. For example, 200 ultrasound images may be obtained for a 200 frame ultrasound scan video, and the 200 ultrasound images may optionally be sorted in the order of acquisition of the ultrasound scan video segment. Any one or more ultrasound images may be selected from the ultrasound scan video as the initial ultrasound image. In one embodiment, an ultrasound image located at a middle position (e.g., a 100 th frame ultrasound image) among the 200 ultrasound images may be used as the initial ultrasound image. In another embodiment, the ultrasound images at the front and back ends of the ultrasound scanning video can be partially removed, and the rest of the ultrasound images can be used as the initial ultrasound images. For example, the ultrasound images of the 1 st to 50 th frames and the ultrasound images of the 150 th to 200 th frames are removed, and the ultrasound images of the 51 st to 149 th frames are used as initial ultrasound images.
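The frame-selection step above can be sketched as follows; the function name and the 25% trim fraction are illustrative assumptions, not part of the method:

```python
def select_initial_frames(num_frames, trim_fraction=0.25):
    """Trim the leading and trailing portions of an ultrasound scan video
    and return the 1-based indices of the remaining candidate initial
    ultrasound images. For a 200-frame video with trim_fraction=0.25 this
    keeps frames 51 through 149, matching the example in the text."""
    trim = int(num_frames * trim_fraction)
    return list(range(trim + 1, num_frames - trim))

frames = select_initial_frames(200)
# Alternatively, a single initial image can be taken from the middle.
middle_frame = frames[len(frames) // 2]
```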
In step S120, multi-fetal identification is performed on the initial ultrasound image to obtain a multi-fetal identification result.
For any of the initial ultrasound images, the multi-fetal recognition result may include position information indicating the positions of the plurality of fetuses in the initial ultrasound image. The location of the plurality of fetuses in the initial ultrasound image may include a location of the entirety of the plurality of fetuses in the initial ultrasound image and/or a location of each of the plurality of fetuses in the initial ultrasound image. Preferably, the multiple-fetus identification result includes position information associated with the position of each of the plurality of fetuses in the initial ultrasound image.
Illustratively, the initial ultrasound image may be multi-fetal identified using a multi-fetal identification model. The multi-fetal recognition model may be implemented using any suitable neural network model, such as any suitable existing or future target detection network model. For example, the multi-fetal identification model may include one or more of: the You Only Look Once (YOLO) series, the Region-based Convolutional Neural Network (R-CNN) series, RetinaNet, and other network models. Of course, the above target detection network models are only examples, and the multi-fetal recognition model may also be implemented using any suitable image segmentation network model that exists or may appear in the future. For example, the multi-fetal identification model may include one or more of: Fully Convolutional Networks (FCN), U-Net, the DeepLab series, V-Net, and similar network models. It will be appreciated that an image segmentation network model can also identify the location of a target object (e.g., a fetus as described herein). A target detection frame indicating the position of a fetus can be obtained through the target detection network model; the frame may have any suitable shape, preferably a rectangle. A mask or envelope indicating where a fetus is located may be obtained through the image segmentation network model.
Further, by way of example and not limitation, the multi-fetal recognition result may further include category information indicating the category to which the plurality of fetuses in the initial ultrasound image belong. For example, in the case where the initial ultrasound image contains twins, the twins can be classified as dichorionic or monochorionic, according to the number of chorions. Dichorionic twins arise from two separately fertilized ova; they can be diagnosed when two gestational sacs are observed, each containing a single embryo with its own fetal heartbeat, and the twins are separated by a fused thick layer of chorion plus two layers of amnion, one on each side. Monochorionic twins arise from a single fertilized ovum; when only one gestational sac is observed, with two embryos and two fetal heartbeats detected inside it, the twins are monochorionic and are separated only by two thinner layers of amnion. In one embodiment, when the target detection network model or the image segmentation network model is used for multi-fetal recognition, the model can output the category information of the plurality of fetuses in addition to their position information. For example, in the twin case described above, the category information may indicate whether the fetuses are dichorionic or monochorionic twins. Those skilled in the art will understand how a target detection network model or an image segmentation network model outputs such category information, which is not described in detail herein.
In another embodiment, the multi-fetal identification model may include a location detection model and a classification model that are completely independent or share only some parameters. The location detection model may be the target detection network model or the image segmentation network model described above. The classification model may be implemented using any suitable existing or future classification network model, including but not limited to one or more of: Visual Geometry Group (VGG) networks, Inception networks, residual networks (ResNet), and similar network models. The position information of the plurality of fetuses can be obtained through the location detection model, and their category information through the classification model.
The multi-fetal recognition model used in step S120 may be obtained through training on a first training data set. The first training data set may include a plurality of first sample ultrasound images and first annotation information (ground truth) in one-to-one correspondence with them. Each first sample ultrasound image may contain a plurality of sample fetuses, and the first annotation information may include position information indicating the positions of those sample fetuses in the corresponding first sample ultrasound image. The plurality of first sample ultrasound images are respectively input into an initial multi-fetal identification model to obtain corresponding predicted multi-fetal identification results. The initial multi-fetal identification model has the same network structure as the multi-fetal identification model employed in step S120, but its parameters may differ; once its parameters are trained, it becomes the multi-fetal recognition model adopted in step S120. The predicted multi-fetal recognition results and the first annotation information of the plurality of first sample ultrasound images can be substituted into a first preset loss function to obtain a first loss value. Parameters of the initial multi-fetal identification model may then be optimized using back-propagation and gradient descent algorithms based on the first loss value. This parameter optimization may be performed iteratively until the multi-fetal recognition model reaches a converged state. After training, the resulting multi-fetal recognition model can be used for subsequent multi-fetal recognition; this stage may be called the inference stage of the model.
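The training procedure just described (forward pass, loss against the annotation information, back-propagation/gradient descent, iterate to convergence) has the generic shape below. A toy one-parameter model stands in for the detection network, so the model, loss, and numbers are purely illustrative:

```python
def train(samples, labels, lr=0.05, epochs=200):
    """Minimal gradient-descent loop: the 'model' is pred = w * x and the
    loss is mean squared error, standing in for the first preset loss
    function. A real multi-fetal recognition model is trained the same
    way in principle, with a deep network in place of w."""
    w = 0.0  # parameter of the initial (untrained) model
    n = len(samples)
    for _ in range(epochs):
        preds = [w * x for x in samples]              # forward pass
        grad = sum(2 * (p - y) * x                    # d(loss)/dw
                   for p, y, x in zip(preds, labels, samples)) / n
        w -= lr * grad                                # gradient descent step
    return w

# Labels generated by a "true" parameter of 2.0; training should recover it.
w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```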
In step S130, a plurality of fetuses in the initial ultrasound image are marked based on the multi-fetal recognition result.
The marking operation of step S130 is mainly for distinguishing a plurality of fetuses. The marking mode can be set arbitrarily according to the requirement. For example, the plurality of fetuses may be labeled as fetus 1, fetus 2, fetus 3 … …, respectively, or as fetus a, fetus B, fetus C … …, respectively, and so on. For example, for any initial ultrasound image, the user may manually mark multiple fetuses in the initial ultrasound image. For example, a user may input marking information, and a device (e.g., an ultrasound imaging system) for performing ultrasound image processing method 100 may mark a fetus based on the marking information in conjunction with the multi-fetus recognition results. Optionally, the apparatus for performing the ultrasound image processing method 100 may also automatically mark a plurality of fetuses in the initial ultrasound image based on the multi-fetal recognition result. For example, two fetuses are identified in the initial ultrasound image, and the apparatus for performing ultrasound image processing method 100 may mark the two fetuses as fetus a and fetus B, respectively. Alternatively, the marking may be any marking as long as a plurality of fetuses can be distinguished. Optionally, the marking may be performed according to a certain rule, for example, according to the order of the distance from the fetus to the cervical opening.
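The last rule mentioned (marking in order of distance to the cervical opening) can be sketched as follows; the box format and the "Fetus A/B/..." label scheme are illustrative choices:

```python
import math

def mark_fetuses(boxes, cervix_center):
    """Assign labels 'Fetus A', 'Fetus B', ... to fetal detection boxes
    (x_min, y_min, x_max, y_max), ordered by the distance from each box
    center to the identified cervical-os position."""
    def center_distance(box):
        cx = (box[0] + box[2]) / 2
        cy = (box[1] + box[3]) / 2
        return math.hypot(cx - cervix_center[0], cy - cervix_center[1])
    ordered = sorted(boxes, key=center_distance)
    return {f"Fetus {chr(ord('A') + i)}": box for i, box in enumerate(ordered)}

# The box nearer the cervical os receives the label 'Fetus A'.
marks = mark_fetuses([(60, 60, 90, 90), (10, 10, 30, 30)], cervix_center=(0, 0))
```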
Step S140, acquiring an ultrasound image to be measured including at least one of the marked fetuses.
Illustratively, according to the marked fetuses, any one or more of the fetuses can be used as a fetus to be tested, and the ultrasound images of the fetus to be tested are obtained. For example, an ultrasound image to be measured of fetus a may be acquired first, and then an ultrasound image to be measured of fetus B may be acquired, and so on. For each fetus to be tested, one or more ultrasound images to be tested containing the fetus may be acquired. The ultrasound image to be measured corresponding to each fetus to be measured may only include the fetus, or may include other fetuses other than the fetus. Preferably, the ultrasound image to be measured corresponding to each fetus to be measured only includes the fetus, so as to eliminate the influence of other fetuses as much as possible. This can be achieved in the following way. For example, image acquisition may be performed on a physical area where a corresponding fetus to be detected exists alone when acquiring an ultrasound image to be detected, and a fetus other than the corresponding fetus to be detected may be detected and removed from the ultrasound image by means of target detection or the like after acquiring an original ultrasound image to be detected, so as to obtain an ultrasound image to be detected that only includes the corresponding fetus to be detected. The ultrasound image to be detected may be at least a part of the ultrasound image in the initial ultrasound image, or may be one or more ultrasound images different from the initial ultrasound image, which are re-acquired by the user with respect to the physical region where the fetus to be detected is located by using the image acquisition device.
Step S150, measuring at least one measuring item of at least one fetus to be measured based on the ultrasonic image to be measured.
For example, measurement of fetus A may be performed based on one or more acquired ultrasound images to be tested that contain fetus A. The measurement items may include, but are not limited to, crown-rump length, placenta thickness, maximum amniotic fluid depth, and the like. When the structures corresponding to one or more measurement items are present in the ultrasound image to be measured, those measurement items can be measured on that image to obtain corresponding measurement results. For example, when the ultrasound image to be measured is a transverse section through the thalamus, the apparatus for performing the ultrasound image processing method 100 may automatically determine that the matching measurement items are the head circumference, the biparietal diameter, and the like, and can automatically measure them on the ultrasound image to be measured.
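The plane-to-measurement matching in this example can be represented as a simple lookup table. The thalamic-plane entry follows the text; the other entries and all identifier names are illustrative assumptions:

```python
# Hypothetical mapping from recognized standard-plane type to the
# measurement items that should be run on an image of that plane.
PLANE_MEASUREMENTS = {
    "thalamic_transverse": ["head_circumference", "biparietal_diameter"],
    "crown_rump_standard": ["crown_rump_length"],
    "amniotic_fluid": ["max_amniotic_fluid_depth"],
    "placenta": ["placenta_thickness"],
}

def matched_items(plane_type):
    """Return the measurement items matched to a recognized plane type;
    an unrecognized or non-standard plane yields no measurements."""
    return PLANE_MEASUREMENTS.get(plane_type, [])
```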
For example, all the ultrasound images to be measured corresponding to any fetus to be measured may be measured with respect to the measurement items after all the ultrasound images are acquired, or may be measured while acquiring (i.e., acquiring and measuring in real time). For example, when a corresponding measurement item exists in the current real-time ultrasound image to be measured for the fetus a to be measured, the current real-time ultrasound image to be measured can be measured to obtain a corresponding measurement result, so that automatic measurement of the current real-time ultrasound image to be measured is realized, and the efficiency of ultrasound scanning is further improved.
According to the ultrasonic image processing method provided by the embodiment of the invention, a multi-fetal recognition result can be obtained based on the initial ultrasonic image, and at least one measurement item of at least one fetus to be measured can be measured based on the multi-fetal recognition result. The method can assist the user in identifying and measuring the multi-fetal scene, and greatly reduce the workload of the user, thereby improving the inspection efficiency and accuracy of the user.
Illustratively, the measurement of the at least one measurement item of the at least one fetus under test based on the ultrasound image under test includes: performing standard section identification on the ultrasonic image to be detected to judge whether the ultrasonic image to be detected belongs to one of at least one preset type of standard sections; under the condition that the ultrasonic image to be detected belongs to the standard section of the specific preset type, performing image segmentation on the ultrasonic image to be detected so as to segment at least one target structure from the ultrasonic image to be detected, wherein the at least one target structure belongs to a structure related to the standard section of the specific preset type; and measuring at least one measurement item based on the segmentation result of the at least one target structure, wherein the at least one measurement item is a measurement item related to the at least one target structure.
In one embodiment, the preset types of standard sections described herein may include, but are not limited to, one or more of the following: the crown-rump length standard section, the transverse thalamic section, the umbilical blood flow section, the upper abdominal transverse section, the femur long-axis section, the placenta section, the amniotic fluid section, and the sagittal cervical canal section.
The standard section identification of the ultrasound image to be detected can be realized through a standard section identification model, which may be any suitable neural network model, such as any suitable existing or future target detection network model and/or classification network model. For example, the standard section identification model may include one or more of: the YOLO series, the R-CNN series, RetinaNet, VGG networks, Inception networks, ResNet, and similar network models. Preferably, the standard section identification model is a target detection network model. Through the target detection network model, besides the category information of the ultrasound image to be detected, the position information of the target structure in the image can also be determined. The category information indicates which type of standard section the ultrasound image to be measured belongs to; the position information indicates where the target structure lies within the image. Similarly to the target detection performed for the fetuses, in this embodiment a target detection box indicating the position of the target structure can be obtained through the target detection network model. When the target structure is subsequently segmented, the position information output by the target detection network model can optionally be input into the image segmentation network model to assist the segmentation. Compared with a classification network model, standard section identification through a target detection network model is more accurate.
The standard section identification model can be obtained through training on a second training data set. The second training data set may include a plurality of second sample ultrasound images and second labeling information in one-to-one correspondence with the plurality of second sample ultrasound images. The plurality of second sample ultrasound images may include at least one group of sample ultrasound images in one-to-one correspondence with at least one preset type of standard section, each group including one or more second sample ultrasound images that belong to the same preset type of standard section. The second labeling information may indicate which preset type of standard section the corresponding second sample ultrasound image belongs to. The plurality of second sample ultrasound images are respectively input into the initial standard section identification model to obtain corresponding predicted standard section identification results. The predicted standard section identification results and the second labeling information of the plurality of second sample ultrasound images can be substituted into a second preset loss function for loss calculation, so as to obtain a second loss value. Parameters in the initial standard section identification model may then be optimized using back-propagation and gradient descent algorithms based on the second loss value. The optimization of the parameters may be performed iteratively until the standard section identification model reaches a converged state. After the training is finished, the obtained standard section identification model can be used for subsequent standard section identification.
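The training loop described above (forward pass, loss calculation, back-propagation, gradient-descent update, iterated to convergence) can be illustrated with a minimal stand-in model. This sketch is not the patent's network: it trains a plain linear softmax classifier on toy 2-D features, where the three labels stand in for three preset types of standard section; only the loop structure carries over.

```python
import numpy as np

# Minimal stand-in for the described training procedure: a linear softmax
# classifier trained with cross-entropy loss and gradient descent. The
# patent's model would be a deep detection/classification network; only
# the forward -> loss -> gradient-step loop is illustrated here.

rng = np.random.default_rng(0)

# Toy "second training data set": 2-D features standing in for sample
# ultrasound images; labels 0/1/2 stand in for three preset section types.
y = np.repeat(np.arange(3), 30)
feats = rng.normal(size=(90, 2)) + np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])[y]
X = np.hstack([feats, np.ones((90, 1))])  # constant column acts as a bias term

W = np.zeros((3, 3))  # parameters of the "initial" identification model
onehot = np.eye(3)[y]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(300):
    probs = softmax(X @ W)                                # predicted section type
    loss = -np.log(probs[np.arange(len(y)), y]).mean()    # the "second loss value"
    grad = X.T @ (probs - onehot) / len(y)                # back-propagated gradient
    W -= 0.5 * grad                                       # gradient-descent update

accuracy = (softmax(X @ W).argmax(axis=1) == y).mean()
```

On these well-separated toy classes the loop converges to a low loss and near-perfect training accuracy, mirroring the "iterate until converged" criterion in the text.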
It is understood that the ultrasound image to be detected may or may not belong to a standard section. Image segmentation is performed on the ultrasound image to be detected when the standard section identification result indicates that the ultrasound image to be detected belongs to a standard section of a specific preset type. It is understood that the standard section of the specific preset type is one of the at least one preset type of standard section, and the at least one preset type of standard section may be preset. Different types of standard sections may contain different target structures. After the type of standard section to which the current ultrasound image to be detected belongs is determined, the corresponding target structure contained in the current ultrasound image to be detected can be segmented. For example, if the standard section identification result indicates that the ultrasound image to be detected belongs to the thalamus horizontal cross section, the ultrasound image to be detected may be subjected to image segmentation to segment the thalamic structure from it. The target structures described herein may include, but are not limited to, one or more of: amniotic fluid, placenta, thalamus, the four-chamber heart view, and the like.
The operation of image segmentation on the ultrasound image to be detected can be performed using a pre-trained image segmentation model. The image segmentation model may be implemented using any suitable existing or future-emerging neural network model. For example, the image segmentation model may include one or more of: FCN, U-Net, the DeepLab series, V-Net, and the like.
The image segmentation model may be trained on a third training data set. The third training data set may include a plurality of third sample ultrasound images and third labeling information in one-to-one correspondence with the plurality of third sample ultrasound images. The plurality of third sample ultrasound images may include at least one group of sample ultrasound images corresponding to at least one preset type of standard section, where each group includes one or more third sample ultrasound images belonging to the same preset type of standard section. The third labeling information may include position information indicating the position of at least one target structure in the corresponding third sample ultrasound image. The plurality of third sample ultrasound images are respectively input into the initial image segmentation model to obtain corresponding predicted image segmentation results. The predicted image segmentation results and the third labeling information of the plurality of third sample ultrasound images can be substituted into a third loss function for loss calculation, so as to obtain a third loss value. Parameters in the initial image segmentation model may then be optimized using back-propagation and gradient descent algorithms based on the third loss value. The optimization of the parameters may be performed iteratively until the image segmentation model reaches a converged state. After the training is finished, the obtained image segmentation model can be used for subsequent image segmentation of the target structure.
The segmentation result of the target structure may include a mask or envelope indicating where each target structure is located. According to the segmentation result, a more precise position of each of the at least one target structure can be determined. Each target structure may correspond to its own one or more measurement items. For example, in the head-hip length standard section, the target structure may include the head-hip region, and the measurement item corresponding to this region may include the head-hip length. Therefore, according to the segmentation result of the head-hip region, the head-hip length can be measured, and a measurement result for the head-hip length can be obtained.
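As a hedged illustration of deriving a measurement item from a segmentation mask, the following sketch measures a "head-hip length" on a toy binary mask. The mask, the pixel spacing (`spacing_mm`, which a real system would read from the imaging metadata), and the max-pairwise-distance rule are all assumptions for illustration, not the patent's actual measurement algorithm.

```python
import numpy as np

# Toy stand-in for a segmented head-hip region: an elongated blob in a
# 40x40 binary mask, 30 px long and 4 px tall.
mask = np.zeros((40, 40), dtype=bool)
mask[10:14, 5:35] = True

spacing_mm = 0.5  # assumed physical size of one pixel (hypothetical)

ys, xs = np.nonzero(mask)
pts = np.stack([ys, xs], axis=1).astype(float)

# Length approximated as the largest pairwise distance between foreground
# pixels (fine for a toy mask; real systems would use contour points or
# the major axis of the region).
diffs = pts[:, None, :] - pts[None, :, :]
length_px = np.sqrt((diffs ** 2).sum(-1)).max()
length_mm = length_px * spacing_mm
```

For this blob the extreme points are opposite corners, giving a length of roughly 14.6 mm at the assumed 0.5 mm pixel spacing.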
According to the above technical scheme, standard section identification is first performed on the ultrasound image to be detected; then, when the ultrasound image to be detected belongs to a standard section of a specific preset type, image segmentation is performed on it, and at least one measurement item of the at least one segmented target structure is measured. The scheme achieves high measurement efficiency and accuracy.
Illustratively, the number of the at least one fetus to be measured is greater than or equal to two, and the measurement of the at least one measurement item of the at least one fetus to be measured based on the ultrasound image to be measured includes: for a first measurement item in the at least one measurement item, calculating a first difference between the measurement results of the at least one fetus to be measured on the first measurement item; and outputting first prompt information when the first difference is greater than or equal to a first difference threshold value.
In one embodiment, the number of fetuses to be tested in the ultrasound image to be tested may be greater than or equal to two. For example, the number of fetuses to be tested may be two, namely fetus A to be tested and fetus B to be tested. At least one measurement item of the two fetuses to be tested in the ultrasound image to be tested is measured. The at least one measurement item may include, but is not limited to, one or more of: head-hip length, biparietal diameter, placenta thickness, and maximum amniotic fluid depth. The first measurement item may be any one or more of the at least one measurement item.
Illustratively, the first measurement item may include a head-hip length.
For example, the first measurement item may be the head-hip length. For the two fetuses to be tested, the head-hip length of each fetus can be measured. According to the measurement results of the first measurement item, the head-hip length of fetus A to be tested is L_A and the head-hip length of fetus B to be tested is L_B. The difference between L_A and L_B can be calculated, thereby obtaining the first difference ΔL_1. The user may also preset a first difference threshold L_T. For example, for the head-hip length, the first difference threshold L_T may be equal to 3 mm. If the first difference ΔL_1 is greater than or equal to 3 mm, the first prompt information may be output to prompt the user to pay attention to the current measurement.
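The first-difference check just described can be sketched as follows; the measured lengths and the 3 mm threshold are illustrative values, and `first_difference_check` is a hypothetical helper, not part of the patent.

```python
# Sketch of the first-difference check between two fetuses' measurements
# of the same item. Values and threshold are illustrative assumptions.

def first_difference_check(length_a_mm, length_b_mm, threshold_mm=3.0):
    """Return (first difference, whether the first prompt should be output)."""
    diff = abs(length_a_mm - length_b_mm)  # ΔL_1 = |L_A - L_B|
    return diff, diff >= threshold_mm      # prompt when ΔL_1 >= L_T

# Hypothetical head-hip lengths L_A and L_B in millimetres:
diff, prompt = first_difference_check(61.0, 57.5)
```

Here the 3.5 mm difference meets the 3 mm threshold, so the prompt flag is raised; a 1 mm difference would not trigger it.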
According to the technical scheme, the difference between the measurement results of a plurality of fetuses to be measured on certain measurement items can be calculated. When the difference is too large, the first prompt message can be output for the user to view. The scheme can assist the user in measuring the measurement items and timely output first prompt information to the user so that the user can timely take measures against the situation.
Illustratively, the measurement of the at least one measurement item of the at least one fetus under test based on the ultrasound image under test includes: for a second measurement item in the at least one measurement item, calculating a second difference between the measurement result of any fetus to be measured on the second measurement item and the standard measurement result of the standard fetus on the second measurement item; and outputting second prompt information when the second difference is greater than or equal to the second difference threshold value.
In one embodiment, the fetuses to be tested in the ultrasound image to be tested may include fetus A to be tested and fetus B to be tested. At least one measurement item of the two fetuses to be tested in the ultrasound image to be tested is measured. The at least one measurement item may include, but is not limited to, one or more of: head-hip length, biparietal diameter, placenta thickness, and maximum amniotic fluid depth. The second measurement item may be any one or more of the at least one measurement item.
Illustratively, the second measurement item may include one or more of: biparietal diameter, placenta thickness, and maximum amniotic fluid depth.
For example, the second measurement item may be the maximum amniotic fluid depth. For the two fetuses to be tested, the maximum amniotic fluid depth of each fetus can be measured. According to the measurement results of the second measurement item, the maximum amniotic fluid depth of fetus A to be tested is H_A, and the maximum amniotic fluid depth of fetus B to be tested is H_B. The maximum amniotic fluid depth H_T of the standard fetus is known. The difference between the maximum amniotic fluid depth of any fetus to be tested and H_T can be calculated, thereby obtaining a second difference. For example, the difference between the maximum amniotic fluid depth H_A of fetus A to be tested and the maximum amniotic fluid depth H_T of the standard fetus can be calculated, obtaining a second difference ΔH_1. The user may preset a second difference threshold H_T1. For example, for the maximum amniotic fluid depth, the second difference threshold H_T1 may be equal to 5 mm. If the second difference ΔH_1 is greater than or equal to 5 mm, the second prompt information may be output to prompt the user to pay attention to the current measurement.
Illustratively, the larger of the maximum amniotic fluid depths H_A and H_B may be selected, compared with the maximum amniotic fluid depth H_T of the standard fetus, and the second difference between the two calculated.
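The second-difference check, in the variant that compares the larger of the two fetuses' values against the standard value, can be sketched as follows; the depths H_A, H_B, the standard value H_T, and the 5 mm threshold are illustrative assumptions.

```python
# Sketch of the second-difference check: compare the larger of the fetuses'
# measurements against a known standard value. Numbers are illustrative.

def second_difference_check(values_mm, standard_mm, threshold_mm=5.0):
    """Return (second difference, whether the second prompt should be output)."""
    diff = abs(max(values_mm) - standard_mm)  # ΔH_1 vs H_T, using max(H_A, H_B)
    return diff, diff >= threshold_mm          # prompt when ΔH_1 >= H_T1

# Hypothetical depths [H_A, H_B] compared against a standard H_T of 50 mm:
diff, prompt = second_difference_check([48.0, 56.0], standard_mm=50.0)
```

The larger value (56 mm) differs from the standard by 6 mm, exceeding the 5 mm threshold, so the prompt flag is raised.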
According to the above technical scheme, the difference between the measurement result of any fetus to be tested and that of the standard fetus on certain measurement items can be calculated, and when the difference is too large, the second prompt information can be output for the user to view. The scheme can assist the user in measuring the measurement items and output the second prompt information to the user in time, so that the user can take countermeasures promptly.
Illustratively, after measuring at least one measurement item of the at least one fetus to be tested based on the ultrasound image to be tested, the method further includes: for a third measurement item among the at least one measurement item, selecting the largest measurement result from the measurement results of the at least one fetus to be tested on the third measurement item and outputting it.
The third measurement item may include one or more of: head-hip length, biparietal diameter, placenta thickness, and maximum amniotic fluid depth. For some measurement items, the largest measurement result can be selected from the measurement results of the fetuses to be tested and output, so that the user can conveniently view it. For example, the larger value may be selected from the head-hip length measurement results of the twin fetuses as the head-hip length result of the twins and output, and the output head-hip length result may be used to calculate the gestational age of the twins.
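Selecting the largest measurement result per item across fetuses can be sketched as follows; the measurement dictionary, item names, and values are illustrative assumptions, not the patent's data model.

```python
# Sketch of per-item maximum selection across fetuses. The dictionary of
# measurements and the item keys are hypothetical.

measurements = {
    "fetus A": {"head_hip_length_mm": 61.0, "max_amniotic_depth_mm": 48.0},
    "fetus B": {"head_hip_length_mm": 58.0, "max_amniotic_depth_mm": 52.0},
}

def max_result(item):
    """Largest value of one measurement item across all fetuses."""
    return max(m[item] for m in measurements.values())

# E.g. the larger head-hip length is output for gestational-age estimation:
crl_for_output = max_result("head_hip_length_mm")
```

Note that different items can come from different fetuses: the head-hip maximum here belongs to fetus A while the amniotic-depth maximum belongs to fetus B.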
The maximum measurement result can be output by the output device. The output device may be an output device included in the apparatus for performing the ultrasound image processing method 100 or an output device communicably connected with the apparatus for performing the ultrasound image processing method 100. The output devices may include, but are not limited to, one or more of the following: display devices, wired and/or wireless communication devices, speakers, signal lights, and the like.
Illustratively, acquiring the ultrasound image to be tested containing at least one fetus to be tested among the marked plurality of fetuses includes: performing target tracking on an ultrasound image sequence acquired in real time while the user operates the ultrasound probe, based on the fetus marking result, so as to determine the track of any target fetus to be tested among the at least one fetus to be tested in the ultrasound image sequence; if the track of the target fetus to be tested is found to be lost, outputting third prompt information; and after the track of the target fetus to be tested has been lost for a preset time period, or when an examination end instruction is received, determining that the ultrasound images containing the target fetus to be tested in the ultrasound image sequence are the ultrasound images to be tested containing the target fetus to be tested.
The target fetus to be tested is any one of the at least one fetus to be tested. Of course, for a plurality of fetuses to be tested, each fetus can be respectively regarded as a target fetus to perform the tracking operation described in the embodiment. The target fetus to be tested may be user-specified or automatically selected by the apparatus for performing the ultrasound image processing method 100.
In one embodiment, the marked fetus "fetus A to be tested" in step S130 may be used as the initial tracking object for target tracking in the ultrasound image sequence acquired in real time while the user operates the ultrasound probe. Target tracking may be implemented using any suitable existing or future-emerging target tracking algorithm, including but not limited to various correlation-filter-based tracking algorithms and/or various convolutional neural network (CNN)-based tracking algorithms. For example, target tracking may be performed by a visual tracking method such as DLT. According to the target tracking result, the track of the target fetus to be tested (namely, fetus A to be tested) in the ultrasound image sequence can be determined. If the track of the target fetus to be tested (i.e., fetus A to be tested) is found to be lost, third prompt information can be output to prompt the user. The loss of the track of the target fetus to be tested may include no fetus being detected in subsequent ultrasound images, or a fetus being detected that is not the target fetus to be tested. Through the third prompt information, the user can learn in time that the current detection position has deviated from the physical area where the target fetus to be tested is located. If the ultrasound examination of the target fetus to be tested is not finished, the user can return in time to the physical area where the target fetus is located and continue the detection. If the ultrasound examination of the target fetus to be tested is finished, the user may perform no operation, or may input an examination end instruction through the input device to the apparatus for performing the ultrasound image processing method 100.
The input device may be an input device included in the apparatus for performing the ultrasound image processing method 100 or an input device communicably connected with the apparatus for performing the ultrasound image processing method 100. Input devices may include, but are not limited to, one or more of the following: mouse, keyboard, touch screen, microphone, etc.
After the track of the target fetus to be tested (i.e., fetus A to be tested) has been lost for the preset time period, or when the examination end instruction is received, it may be determined that the ultrasound examination of the target fetus to be tested is finished; at this time, the ultrasound images containing the target fetus to be tested in the ultrasound image sequence are determined to be the ultrasound images to be tested containing the target fetus to be tested, which are used for the measurement operation of step S150. The ultrasound image to be tested containing the target fetus to be tested is at least part of the ultrasound image to be tested containing the at least one fetus to be tested.
The preset time duration may be any duration greater than 0 set by the user, for example 3 seconds, 5 seconds, or 10 seconds. Determining whether the ultrasound examination of the target fetus to be tested is finished according to the track-loss time of the target fetus requires no user intervention, enables automatic judgment of the examination process, has a high degree of automation, and provides a good user experience. Alternatively, the examination end instruction may be input by the user through the input device. For example, the input device may be a display device whose display interface includes an operable control. After the user clicks the operable control, the apparatus for performing the ultrasound image processing method 100 may receive the examination end instruction through the input device. For example, the operable control may be a rectangular control labeled on one side (e.g., the right side) with the word "End". According to this scheme, whether the ultrasound examination of the target fetus to be tested is finished is determined by whether the user inputs an examination end instruction; this gives the user high autonomy and allows the user to decide the examination end time as needed.
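The end-of-examination decision (track lost for longer than the preset duration, or an explicit end instruction from the user) can be sketched as a small state holder; the class, its names, and the 5-second default are illustrative assumptions. Timestamps are passed in explicitly so the logic is deterministic.

```python
# Sketch of the end-of-examination decision for one target fetus: the exam
# is considered finished when the track has been lost for at least the
# preset duration, or when the user issues an end-of-examination instruction.

class ExamSession:
    def __init__(self, lost_timeout_s=5.0):
        self.lost_timeout_s = lost_timeout_s  # the "preset time duration"
        self.lost_since = None                # timestamp when the track was lost
        self.end_requested = False

    def on_frame(self, target_visible, now_s):
        """Update tracking state with one ultrasound frame's tracking result."""
        if target_visible:
            self.lost_since = None            # track recovered
        elif self.lost_since is None:
            self.lost_since = now_s           # track just lost

    def request_end(self):
        self.end_requested = True             # user clicked the "End" control

    def finished(self, now_s):
        if self.end_requested:
            return True
        return self.lost_since is not None and now_s - self.lost_since >= self.lost_timeout_s
```

Either path ends the session: losing the track for 5 s triggers the automatic judgment, while `request_end` models the manual instruction.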
According to the above technical scheme, in the process of carrying out an ultrasound examination of the target fetus to be tested, the target fetus can be automatically tracked, and when the track of the target fetus is found to be lost, the third prompt information can be output in time. The scheme can assist the user in better completing the ultrasound examination of the target fetus to be tested, and has a high degree of intelligence and a good user experience.
The first prompt message, the second prompt message, and the third prompt message may be any form of prompt message, including but not limited to one or more of the following: image information, video information, audio information, light information, etc. The first prompt message, the second prompt message and the third prompt message can be output by the output device.
Illustratively, marking the plurality of fetuses in the initial ultrasound image based on the multi-fetal recognition result includes: identifying the location of the cervical os from the initial ultrasound image; determining a first relative positional relationship between the plurality of fetuses and the cervical os based on the multi-fetal recognition result and the cervical os recognition result; ranking the plurality of fetuses based on the first relative positional relationship; and marking the plurality of fetuses in the sorted order.
In one embodiment, the multi-fetus identification model described above can be used to identify the location of the cervical os from the initial ultrasound image. That is, through the multi-fetal recognition operation of step S120, the recognition result of the cervical os may optionally be obtained in addition to the multi-fetal recognition result. Of course, alternatively, the location of the cervical os may be identified using an independent cervical os identification model different from the multi-fetus identification model described above. The cervical os identification model may also be a target detection network model or an image segmentation network model, which can be understood with reference to the above description and will not be repeated here.
According to the multi-fetal recognition result and the cervical os recognition result, for example with the positions of the two fetuses and the cervical os each identified by the target detection network model, the first relative positional relationship between each of the two fetuses and the cervical os can be determined. The first relative positional relationship may be the distance between each fetus and the cervical os. By way of example and not limitation, the plurality of fetuses may be sorted in order of increasing distance from the cervical os based on the first relative positional relationship. For example, the fetus closer to the cervical os may be labeled as fetus A (which is potentially delivered first), and the fetus farther from the cervical os may be labeled as fetus B.
According to the above technical scheme, the fetuses are sorted according to the first relative positional relationship between the fetuses and the cervical os, and marked in the sorted order. The scheme can realize automatic marking of the fetuses and effectively reduce the user's workload.
Illustratively, marking the plurality of fetuses in the initial ultrasound image based on the multi-fetal recognition result may include: receiving marking information input by the user; and marking the plurality of fetuses based on the marking information.
In one embodiment, the user may manually mark the plurality of identified fetuses. For example, the output device may be a display device, and the apparatus for performing the ultrasound image processing method 100 may display information of the plurality of fetuses on a display interface through the display device, for example an ultrasound image containing the plurality of fetuses. The user may mark each fetus based on its position via the input device. For example, after the user selects the fetus closest to the cervical os with the mouse, the user inputs the marking information "fetus A" with the keyboard to indicate that the fetus is marked as fetus A. The order in which the user manually marks may be arbitrary.
According to the above technical scheme, the user is allowed to mark the plurality of fetuses manually, which can facilitate the user's memory and differentiation.
Illustratively, performing multi-fetal recognition on the initial ultrasound image to obtain the multi-fetal recognition result includes: performing multi-fetal recognition on the initial ultrasound image through a target detection algorithm to obtain a multi-fetal recognition result including target detection boxes indicating the respective positions of the plurality of fetuses. The method 100 further includes: determining a second relative positional relationship of at least some of the plurality of fetuses based on the multi-fetal recognition result, wherein the second relative positional relationship includes an orientation relationship between two fetuses and/or a distance between the target detection boxes corresponding to the two fetuses; and outputting position information representing the second relative positional relationship.
In one embodiment, multi-fetal recognition may be performed on the initial ultrasound image through a target detection algorithm (e.g., the target detection network model described above) to obtain a multi-fetal recognition result including target detection boxes indicating the respective positions of the plurality of fetuses.
Based on the multi-fetal recognition result, a second relative positional relationship of at least some of the plurality of fetuses may be determined. The second relative positional relationship may include an orientation relationship between two fetuses; for example, with respect to one of the two fetuses, the other fetus may be located at any position relative to it in the initial ultrasound image, such as above, below, to the left, or to the right. The second relative positional relationship may further include the distance between the target detection boxes corresponding to the two fetuses. The distance between corresponding features of the two target detection boxes may be calculated as the distance between the target detection boxes corresponding to the two fetuses. For example, the corresponding features may include the center points of the two target detection boxes, the upper-left corner points of the two target detection boxes, and the like. The apparatus for performing the ultrasound image processing method 100 may output the second relative positional relationship through the above-described output device, for example by displaying it in the display interface for the user's reference.
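Computing the second relative positional relationship from two detection boxes can be sketched as follows. The boxes are assumed to be (x_min, y_min, x_max, y_max) in pixels with y increasing downward, as is common for image coordinates; the coordinates and the dominant-axis orientation rule are illustrative assumptions.

```python
import math

# Sketch of the second relative positional relationship: the orientation of
# one detection box relative to another, plus the center-to-center distance.

def center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def relative_position(box_a, box_b):
    """Return (orientation of B relative to A, center-to-center distance in px)."""
    (ax, ay), (bx, by) = center(box_a), center(box_b)
    dx, dy = bx - ax, by - ay
    if abs(dx) >= abs(dy):                       # dominant horizontal offset
        orientation = "right" if dx >= 0 else "left"
    else:                                        # dominant vertical offset
        orientation = "below" if dy >= 0 else "above"
    return orientation, math.hypot(dx, dy)

# Hypothetical detection boxes for fetus A and fetus B:
orientation, dist = relative_position((50, 60, 150, 160), (260, 40, 360, 140))
```

The corner points of the boxes could be used instead of the centers as the "corresponding features" with the same distance formula.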
According to the above technical scheme, the second relative positional relationship between two fetuses can be determined based on the target detection boxes indicating the positions of the fetuses, and this relative positional relationship can be output. This is convenient for the user's reference and can better assist the user in examining the fetuses.
Fig. 2 shows a schematic flow diagram of an ultrasound image processing method according to an embodiment of the present invention. As shown in Fig. 2, during an examination of a pregnant woman's abdomen by an ultrasound imaging system, an ultrasound video may be acquired based on the ultrasound probe and the imaging module in the ultrasound imaging system. One or more ultrasound images (i.e., initial ultrasound images) may be obtained from the ultrasound video. The one or more ultrasound images are input into the multi-fetus identification model to obtain a multi-fetal recognition result corresponding to each ultrasound image. The user can manually mark the multi-fetal recognition result and input the marked ultrasound image into the standard section identification model to determine whether the marked ultrasound image belongs to a certain preset type of standard section. When the ultrasound image belongs to a standard section of a preset type, the ultrasound image is input into the image segmentation model to obtain the target structure, and the measurement items for the target structure are measured. In addition, Fig. 2 also shows the training operations for the above three models. It is noted that although the multi-fetus identification model is shown with "video annotation and model training," it may also be trained using annotation data based on discrete images, i.e., using training data sets whose images do not necessarily originate from the same video. The training of the standard section identification model is similar and is not repeated. Conversely, the image segmentation model may also be trained using video-based annotation data.
According to still another aspect of the present invention, an ultrasound image processing apparatus is also provided. Fig. 3 shows a schematic block diagram of an ultrasound image processing apparatus 300 according to an embodiment of the present invention, and as shown in fig. 3, the apparatus 300 may include: a first acquisition module 310, an identification module 320, a labeling module 330, a second acquisition module 340, and a measurement module 350.
The first acquisition module 310 may be configured to acquire an initial ultrasound image.
The identification module 320 may be configured to perform multi-fetal recognition on the initial ultrasound image to obtain a multi-fetal recognition result.
The marking module 330 may be configured to mark a plurality of fetuses in the initial ultrasound image based on the multi-fetal recognition result.
The second obtaining module 340 may be configured to obtain an ultrasound image to be measured including at least one fetus to be measured in the marked plurality of fetuses.
The measurement module 350 may be configured to measure at least one measurement item of at least one fetus under test based on the ultrasound image under test.
Illustratively, the measurement module 350 may include: a first identification submodule, which may be configured to perform standard section identification on the ultrasound image to be detected, so as to judge whether the ultrasound image to be detected belongs to one of at least one preset type of standard section; an image segmentation submodule, which may be configured to perform image segmentation on the ultrasound image to be detected when the ultrasound image to be detected belongs to a standard section of a specific preset type, so as to segment at least one target structure from the ultrasound image to be detected, wherein the at least one target structure belongs to structures related to the standard section of the specific preset type; and a measurement submodule, which may be configured to measure at least one measurement item based on the segmentation result of the at least one target structure, wherein the at least one measurement item is a measurement item related to the at least one target structure.
Illustratively, the number of at least one fetus to be tested is greater than or equal to two, the apparatus 300 may further include: a first calculating module, configured to calculate, for a first measurement item in the at least one measurement item, a first difference between measurement results of the at least one fetus under test on the first measurement item after the measuring module 350 measures the at least one measurement item of the at least one fetus under test based on the ultrasound image under test; the first output module may be configured to output the first prompt message when the first difference is greater than or equal to a first difference threshold.
Illustratively, the first measurement item includes a head-hip length.
Illustratively, the apparatus 300 may further include: a second calculation module, configured to calculate, for a second measurement item of the at least one measurement item, a second difference between a measurement result of any fetus to be measured on the second measurement item and a standard measurement result of the standard fetus on the second measurement item after the measurement module 350 measures the at least one measurement item of the at least one fetus to be measured based on the ultrasound image to be measured; and the second output module can be used for outputting second prompt information when the second difference is greater than or equal to a second difference threshold value.
Illustratively, the second measurement item includes one or more of: biparietal diameter, placenta thickness, and maximum amniotic fluid depth.
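The second calculating module compares each fetus against a standard, rather than fetuses against each other. A sketch, where the standard values and per-item thresholds below are illustrative placeholders, not clinical figures:

```python
# Illustrative standard measurements and thresholds (not clinical figures).
STANDARD = {"biparietal_diameter": 50.0, "placenta_thickness": 30.0}          # mm
SECOND_DIFF_THRESHOLD = {"biparietal_diameter": 4.0, "placenta_thickness": 10.0}  # mm

def check_against_standard(item, measured):
    """Second difference: |measured - standard|, compared to a per-item threshold."""
    diff = abs(measured - STANDARD[item])
    return diff, diff >= SECOND_DIFF_THRESHOLD[item]

diff, alert = check_against_standard("biparietal_diameter", 55.0)
if alert:
    # The second prompt information of the patent, here just printed.
    print(f"prompt: deviates from the standard by {diff} mm")
```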
Illustratively, the apparatus 300 may further include: a selecting module, configured to, after the measuring module 350 measures at least one measurement item of the at least one fetus to be measured based on the ultrasound image to be measured, select, for a third measurement item in the at least one measurement item, the largest measurement result from the measurement results of the at least one fetus to be measured on the third measurement item for output.
Illustratively, the second obtaining module 340 may include: a target tracking submodule, operable to perform target tracking on an ultrasound image sequence acquired in real time while the user operates the ultrasound probe, based on the fetus marking result, so as to determine the track of any target fetus to be detected among the at least one fetus to be detected in the ultrasound image sequence; a first output submodule, operable to output third prompt information if the track of the target fetus to be detected is found to be lost; and a first determining submodule, operable to determine, after the track of the target fetus to be detected has been lost for a preset time period or when an examination end instruction is received, the ultrasound image in the ultrasound image sequence that contains the target fetus to be detected as the ultrasound image to be detected.
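The target tracking submodule's loss detection might be sketched with a simple IoU-matching tracker. The patience value (frames of tolerated misses before declaring the track lost) and the IoU threshold are assumptions for illustration; the patent does not specify a tracking algorithm:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

class FetusTrack:
    """Follow one marked fetus across frames; declare the track lost after
    `patience` consecutive frames without a matching detection."""
    def __init__(self, box, patience=3):
        self.box, self.misses, self.patience = box, 0, patience
        self.lost = False

    def update(self, detections, iou_thresh=0.3):
        best = max(detections, key=lambda d: iou(self.box, d), default=None)
        if best is not None and iou(self.box, best) >= iou_thresh:
            self.box, self.misses = best, 0
        else:
            self.misses += 1
            if self.misses >= self.patience:
                self.lost = True  # would trigger the third prompt information

track = FetusTrack((10, 10, 50, 50), patience=2)
track.update([(12, 11, 52, 51)])  # matched: the track follows the fetus
track.update([])                  # fetus left the field of view: miss 1
track.update([])                  # miss 2 -> track declared lost
```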
Illustratively, the marking module 330 may include: a second identification submodule operable to identify a location of the cervical os from the initial ultrasound image; a second determining submodule, configured to determine a first relative positional relationship between the plurality of fetuses and the cervical os based on the multi-fetus recognition result and the cervical os recognition result; a ranking submodule operable to rank the plurality of fetuses based on the first relative positional relationship; a first labeling submodule operable to label the plurality of fetuses in a sorted order.
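The sort-then-mark flow of these submodules can be sketched by ranking fetus positions by distance to the identified cervical os. The A/B labeling convention and the 2-D pixel coordinates below are illustrative assumptions:

```python
import math

def label_by_cervix_distance(fetus_centers, cervix):
    """Rank fetus centers by distance to the cervical os and label them
    A, B, ... in that order (labeling convention assumed here)."""
    ranked = sorted(fetus_centers, key=lambda c: math.dist(c, cervix))
    return {chr(ord("A") + i): c for i, c in enumerate(ranked)}

# Two fetus centers and a cervical-os position, in pixel coordinates.
labels = label_by_cervix_distance([(120, 80), (60, 90)], cervix=(50, 100))
# (60, 90) is closer to the cervical os, so it receives label "A".
```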
Illustratively, the marking module 330 may include: the receiving sub-module can be used for receiving marking information input by a user; a second labeling submodule operable to label the plurality of fetuses based on the labeling information.
Illustratively, the identifying module 320 may include: and the third identification submodule is used for carrying out multi-fetal identification on the initial ultrasonic image through a target detection algorithm so as to obtain a multi-fetal identification result, and the multi-fetal identification result comprises a target detection frame for indicating the positions of the fetuses respectively. The apparatus 300 may further comprise: a determining module, configured to determine, after the identifying module 320 performs multi-fetal identification on the initial ultrasound image to obtain a multi-fetal identification result, a second relative position relationship of at least some of the fetuses based on the multi-fetal identification result, where the second relative position relationship includes an orientation relationship between two fetuses and/or a distance between target detection frames corresponding to the two fetuses; and the third output module can be used for outputting the position information for representing the second relative position relation.
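The second relative positional relationship (an orientation between two fetuses plus the distance between their detection frames) could be derived from the target detection frames as follows; the (x1, y1, x2, y2) box format, image coordinates with y growing downward, and center-to-center distance are all assumptions of this sketch:

```python
import math

def box_center(b):
    """Center of an (x1, y1, x2, y2) detection box."""
    return ((b[0] + b[2]) / 2, (b[1] + b[3]) / 2)

def relative_position(box_a, box_b):
    """Coarse orientation of fetus B relative to fetus A, plus the
    center-to-center distance between their detection frames."""
    (ax, ay), (bx, by) = box_center(box_a), box_center(box_b)
    dx, dy = bx - ax, by - ay
    horiz = "right" if dx > 0 else "left"
    vert = "lower" if dy > 0 else "upper"
    return f"{vert}-{horiz}", math.hypot(dx, dy)

orient, dist = relative_position((0, 0, 40, 40), (60, 10, 100, 50))
# fetus B sits to the lower-right of fetus A in image coordinates
```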
According to another aspect of the invention, an electronic device is also provided. Fig. 4 shows a schematic block diagram of an electronic device according to an embodiment of the invention. As shown, the electronic device 400 includes a processor 410 and a memory 420, wherein the memory 420 stores computer program instructions, and the computer program instructions are executed by the processor 410 to perform the ultrasound image processing method 100.
Illustratively, the electronic device 400 may be an ultrasound diagnostic device or an ultrasound workstation.
The ultrasonic diagnostic apparatus may include a probe, a first processor, a first memory, a first display, and the like. In the case where the electronic device 400 is an ultrasonic diagnostic apparatus, the processor 410 may be the first processor and the memory 420 may be the first memory. The probe may be used to transmit ultrasonic waves to a target object (e.g., a fetus) and receive ultrasonic echoes returned from the target object, thereby obtaining ultrasonic echo signals. The probe transmits the ultrasonic echo signals to the first processor. The first processor may process the ultrasonic echo signals to obtain ultrasound images of the target object, i.e., the initial ultrasound image, the ultrasound image to be detected, etc., as described herein. The ultrasound images obtained by the first processor may be stored in the first memory. These ultrasound images may optionally be displayed on the first display for viewing by a user. Further, the first processor may be used to perform the ultrasound image processing method 100 described herein, and may optionally store some of the processing information generated during processing in the first memory. The processing information may include intermediate data and/or final results, such as the multi-fetal recognition results, fetal marking results, cervical os recognition results, segmentation results of target structures, and measurement results of fetuses on various measurement items described herein. Optionally, the ultrasonic diagnostic apparatus may further include a first input device, such as one or more of a mouse, a keyboard, a touch screen, and the like, for a user to input information, instructions, and the like.
The ultrasound workstation may also be referred to as an ultrasound imaging workstation. The ultrasonic workstation is a device integrating functional modules of patient registration, image acquisition, diagnosis and editing, report printing, image post-processing, medical record query, statistical analysis and the like. The ultrasound workstation may be communicatively connected with the ultrasound diagnostic apparatus, for example, by any wired or wireless communication means. The ultrasound diagnostic apparatus may transmit information such as the acquired ultrasound echo signals and/or image data of the ultrasound image to the ultrasound workstation.
The ultrasound workstation may include a second processor, a second memory, a second display, and the like. Where the electronic device 400 is an ultrasound workstation, the processor 410 may be the second processor and the memory 420 may be the second memory. The second processor may receive the ultrasonic echo signals and/or the image data of the ultrasound images from the ultrasonic diagnostic apparatus through a communication interface, and may obtain the ultrasound images based on the ultrasonic echo signals or directly based on the image data. The second processor may store the obtained ultrasound images in the second memory. These ultrasound images may optionally be displayed on the second display for viewing by the user. In addition, the second processor may be used to perform the ultrasound image processing method 100 described herein, and may optionally store some of the processing information generated during processing in the second memory. For examples of the processing information, refer to the description above. Optionally, the ultrasound workstation may further include a second input device, such as one or more of a mouse, a keyboard, a touch screen, and the like, for a user to input information, instructions, and the like.
In addition, the ultrasonic workstation may also include other functional modules such as a printing device. Through the second processor, the second memory and various functional modules of the ultrasonic workstation, the functions of processing, storing, replaying, printing, counting, retrieving and the like of the ultrasonic image can be completed.
According to yet another aspect of the present invention, there is also provided a storage medium having stored thereon program instructions for executing the ultrasound image processing method 100 described above when executed. The storage medium may include, for example, a storage component of a tablet computer, a hard disk of a personal computer, an erasable programmable read-only memory (EPROM), a portable read-only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
A person skilled in the art can understand specific implementation schemes of the ultrasound image processing apparatus, the electronic device and the storage medium by reading the above description related to the ultrasound image processing method, and details are not repeated herein for brevity.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations in which the features are mutually exclusive. Unless expressly stated otherwise, each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
In addition, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments but not others, combinations of features of different embodiments are intended to be within the scope of the invention and form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functions of some of the modules in an ultrasound image processing apparatus according to embodiments of the present invention. The present invention may also be embodied as device programs (e.g., computer programs and computer program products) for performing a part or all of the methods described herein. Such programs embodying the present invention may be stored on computer-readable media, or may be in the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.
The above description is merely illustrative of specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, all of which shall be covered by the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (15)

1. An ultrasound image processing method comprising:
acquiring an initial ultrasonic image;
performing multi-fetal identification on the initial ultrasonic image to obtain a multi-fetal identification result;
marking a plurality of fetuses in the initial ultrasound image based on the multiple-fetus recognition results;
acquiring an ultrasound image to be tested containing at least one of the marked fetuses;
and measuring at least one measuring item of the at least one fetus to be measured based on the ultrasonic image to be measured.
2. The method of claim 1, wherein the measuring at least one measurement item of the at least one fetus to be measured based on the ultrasound image to be measured comprises:
performing standard section identification on the ultrasonic image to be detected to judge whether the ultrasonic image to be detected belongs to one of at least one preset type of standard sections;
under the condition that the ultrasonic image to be detected belongs to a standard section of a specific preset type, performing image segmentation on the ultrasonic image to be detected so as to segment at least one target structure from the ultrasonic image to be detected, wherein the at least one target structure is a structure related to the standard section of the specific preset type;
and measuring the at least one measurement item based on the segmentation result of the at least one target structure, wherein the at least one measurement item is a measurement item related to the at least one target structure.
3. The method of claim 1, wherein the number of the at least one fetus to be tested is greater than or equal to two, and after the measuring the at least one measurement item of the at least one fetus to be tested based on the ultrasound image to be tested, the method further comprises:
for a first measurement item of the at least one measurement item, calculating a first difference between the measurements of the at least one fetus under test on the first measurement item;
and outputting first prompt information when the first difference is greater than or equal to a first difference threshold value.
4. The method of claim 3, wherein the first measurement item comprises crown-rump length.
5. The method of claim 1, wherein after the measuring the at least one measurement item of the at least one fetus under test based on the ultrasound image under test, the method further comprises:
for a second measurement item in the at least one measurement item, calculating a second difference between the measurement result of any fetus to be measured on the second measurement item and the standard measurement result of the standard fetus on the second measurement item;
and outputting second prompt information when the second difference is greater than or equal to a second difference threshold value.
6. The method of claim 5, wherein the second measurement item comprises one or more of: biparietal diameter, placenta thickness, maximum amniotic fluid depth.
7. The method of claim 1, wherein after the measuring at least one measurement item of the at least one fetus to be measured based on the ultrasound image to be measured, the method further comprises:
and for a third measurement item in the at least one measurement item, selecting the largest measurement result from the measurement results of the at least one fetus to be measured on the third measurement item to be output.
8. The method of any one of claims 1-7, wherein said obtaining an ultrasound image under test containing at least one fetus under test of the marked plurality of fetuses comprises:
performing target tracking on an ultrasonic image sequence acquired in real time in the process of operating an ultrasonic probe by a user based on a fetal marking result so as to determine the track of any target fetus to be detected in the at least one fetus to be detected in the ultrasonic image sequence;
if the track of the target fetus to be detected is found to be lost, outputting third prompt information;
after the track of the target fetus to be detected has been lost for a preset time period or when an examination ending instruction is received, determining the ultrasound image in the ultrasound image sequence that contains the target fetus to be detected as the ultrasound image to be detected.
9. The method of any one of claims 1-7, wherein said marking a plurality of fetuses in the initial ultrasound image based on the multi-fetal recognition result comprises:
identifying a location of a cervical os from the initial ultrasound image;
determining a first relative positional relationship between the plurality of fetuses and the cervical os based on the multi-fetus recognition result and the recognition result of the cervical os;
ranking the plurality of fetuses based on the first relative positional relationship;
the plurality of fetuses are marked in a sorted order.
10. The method of any one of claims 1-7, wherein said marking a plurality of fetuses in the initial ultrasound image based on the multi-fetal recognition result comprises:
receiving marking information input by a user;
tagging the plurality of fetuses based on the tagging information.
11. The method of any one of claims 1-7, wherein said performing multi-fetal identification on said initial ultrasound image to obtain a multi-fetal identification result comprises:
performing multi-fetal identification on the initial ultrasound image through an object detection algorithm to obtain multi-fetal identification results, wherein the multi-fetal identification results comprise object detection frames for indicating the positions of the fetuses respectively,
after the performing multi-fetal identification on the initial ultrasound image to obtain a multi-fetal identification result, the method further comprises:
determining a second relative position relationship of at least part of the fetuses in the plurality of fetuses based on the multi-fetus recognition result, wherein the second relative position relationship comprises an orientation relationship between two fetuses and/or a distance between target detection frames corresponding to the two fetuses;
and outputting position information representing the second relative position relationship.
12. An ultrasound image processing apparatus comprising:
the first acquisition module is used for acquiring an initial ultrasonic image;
the identification module is used for performing multi-fetal identification on the initial ultrasonic image so as to obtain a multi-fetal identification result;
a marking module for marking a plurality of fetuses in the initial ultrasound image based on the multi-fetal recognition result;
a second obtaining module, configured to obtain an ultrasound image to be detected of at least one fetus to be detected in the marked multiple fetuses;
and the measuring module is used for measuring at least one measuring item of the at least one fetus to be measured based on the ultrasonic image to be measured.
13. An electronic device comprising a processor and a memory, the memory having stored therein a computer program, the processor executing the computer program to implement the ultrasound image processing method of any of claims 1-11.
14. The electronic device of claim 13, wherein the electronic device is an ultrasound diagnostic device or an ultrasound workstation.
15. A storage medium storing a computer program/instructions which, when executed by a processor, implement the ultrasound image processing method of any one of claims 1-11.
CN202211586750.2A 2022-12-09 2022-12-09 Ultrasonic image processing method and device, electronic equipment and storage medium Pending CN115919367A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211586750.2A CN115919367A (en) 2022-12-09 2022-12-09 Ultrasonic image processing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115919367A true CN115919367A (en) 2023-04-07

Family

ID=86650456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211586750.2A Pending CN115919367A (en) 2022-12-09 2022-12-09 Ultrasonic image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115919367A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005253693A (en) * 2004-03-11 2005-09-22 Matsushita Electric Ind Co Ltd Ultrasonic diagnostic apparatus
KR20120049777A (en) * 2010-11-09 2012-05-17 삼성메디슨 주식회사 Ultrasound system and method for providing biometry information of fetus
WO2019062842A1 (en) * 2017-09-30 2019-04-04 深圳开立生物医疗科技股份有限公司 Ultrasound image processing method and device, ultrasonic diagnosis device, and storage medium
CN111462060A (en) * 2020-03-24 2020-07-28 湖南大学 Method and device for detecting standard section image in fetal ultrasonic image
CN112654299A (en) * 2018-11-22 2021-04-13 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging method, ultrasonic imaging apparatus, storage medium, processor, and computer apparatus
WO2022062458A1 (en) * 2020-09-24 2022-03-31 广州爱孕记信息科技有限公司 Method and apparatus for determining optimal fetal standard view
CN115063395A (en) * 2022-06-30 2022-09-16 开立生物医疗科技(武汉)有限公司 Ultrasonic image processing method, device, equipment and medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination