CN114503166A - Method and system for measuring three-dimensional volume data, medical instrument, and storage medium

Info

Publication number: CN114503166A
Application number: CN201980101217.2A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 邹耀贤, 林穆清, 杨剑, 龚闻达
Applicant and current assignee: Shenzhen Mindray Bio Medical Electronics Co Ltd

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

A method (200) for measuring three-dimensional volume data, a system (10) for measuring three-dimensional volume data, a medical instrument, and a computer storage medium. The method comprises: acquiring three-dimensional volume data of a target object (S210); selecting at least two intersecting cut planes containing the target object in the three-dimensional volume data and drawing a two-dimensional contour of the target object on each cut plane (S220); and segmenting the three-dimensional volume data according to the two-dimensional contours to obtain a three-dimensional contour of the target object (S230). In this way, the contour of the target object is obtained more accurately, so that further parameters of the target object can be obtained more accurately and effectively. The method is general, simple to operate, and capable of segmenting three-dimensional volume data of difficult targets.

Description

Method and system for measuring three-dimensional volume data, medical instrument, and storage medium
Technical Field
The present application relates to the field of three-dimensional imaging, and in particular, to a method and system for measuring three-dimensional volume data, a medical apparatus, and a computer storage medium.
Background
In ultrasonic examination, the size of a tissue structure or lesion is a key item of clinical examination; conventional clinical practice is to measure the long and short diameters of the tissue structure or lesion under two-dimensional ultrasound. Compared with radial-line measurement under two-dimensional ultrasound, the volume of a tissue structure or focus can provide more accurate diagnostic information for the clinic. The current three-dimensional ultrasound volume measurement method is mainly a manual one: a plurality of sections are generated by rotation or translation, the user manually or semi-automatically draws two-dimensional contours one by one, and the two-dimensional contours are finally fitted into a three-dimensional contour. This method is generally adopted in current clinical research, but it is extremely cumbersome and time-consuming, and the accuracy of the measurement result is poor.
Disclosure of Invention
A first aspect of the present application provides a method of measuring three-dimensional volume data, the method including:
acquiring three-dimensional volume data of a target object;
determining at least two intersecting cut planes containing the target object in the three-dimensional volume data, and drawing a two-dimensional contour of the target object on the cut planes;
and segmenting the three-dimensional volume data according to the two-dimensional contour to obtain the three-dimensional contour of the target object.
A second aspect of the present application provides a method of measuring three-dimensional volume data, the method including:
acquiring three-dimensional volume data of a target object;
determining at least two cross sections containing different positions of the target object in the three-dimensional volume data, and drawing a two-dimensional contour of the target object in the cross sections;
and segmenting the three-dimensional volume data according to the two-dimensional contour to obtain the three-dimensional contour of the target object.
A third aspect of the present application provides a method of measuring three-dimensional volume data, the method including:
acquiring three-dimensional volume data of a target object;
determining at least two intersecting cut planes containing the target object in the three-dimensional volume data, and drawing a two-dimensional contour of the target object on the cut planes;
determining the corresponding contours of other areas outside the cross section in the three-dimensional volume data according to the two-dimensional contours;
and determining the three-dimensional contour of the target object according to the two-dimensional contour and the contours corresponding to the other areas.
Optionally, the method further comprises:
receiving a revision instruction for the two-dimensional contour;
revising the two-dimensional contour according to the revision instruction, and re-segmenting the three-dimensional volume data according to the revised two-dimensional contour to obtain a new three-dimensional contour of the target object.
Optionally, the method further comprises:
and displaying the three-dimensional contour.
Optionally, the method further comprises:
determining a volume of the target object from the three-dimensional contour.
Optionally, determining the cross-section and drawing a two-dimensional contour of the target object comprises:
selecting a first section, translating or rotating the first section to the central area of the target object, and drawing a two-dimensional contour of the target object on the first section;
and generating a second section containing the target object by taking the determined first section as a position reference, and drawing a two-dimensional contour of the target object on the second section, wherein the second section comprises at least one section.
Optionally, determining the cross-section and drawing a two-dimensional contour of the target object comprises:
selecting a first section, translating or rotating the first section to the central area of the target object, and drawing a two-dimensional contour of the target object on the first section;
and generating a second section intersecting the first section at the center of the determined two-dimensional contour of the target object on the first section and drawing the two-dimensional contour of the target object on the second section.
Optionally, determining the cross-section and drawing a two-dimensional contour of the target object comprises:
selecting a first section, translating or rotating the first section to the central area of the target object, and drawing a two-dimensional contour of the target object on the first section;
generating a second section plane parallel to the first section plane in a central area of the determined two-dimensional contour of the target object on the first section plane and drawing the two-dimensional contour of the target object on the second section plane.
Optionally, determining the cross-section and drawing a two-dimensional contour of the target object comprises:
moving the sections to the central area of the target object, and determining at least two sections;
drawing a two-dimensional contour of the target object on the determined at least two sections.
Optionally, the two-dimensional contour and/or the three-dimensional contour is displayed so as to be distinguished from a non-target region not containing the target object by at least one of a boundary line, a color, and a brightness.
Optionally, segmenting the three-dimensional volume data according to the two-dimensional contour includes:
generating a target area containing the target object;
generating a non-target region not containing the target object;
and segmenting the three-dimensional volume data according to the target region and the non-target region.
Optionally, generating a target region including the target object includes:
and determining a region in the drawn two-dimensional contour of the target object as a target region.
Optionally, generating a non-target region not containing the target object comprises:
determining a region outside the drawn two-dimensional contour of the target object as a non-target region; and/or
performing morphological dilation on the drawn two-dimensional contour of the target object to generate the non-target region.
Optionally, segmenting the three-dimensional volume data according to the target region and the non-target region includes:
segmenting the three-dimensional volume data based on an interactive segmentation algorithm to segment points in the three-dimensional volume data into a target region or a non-target region.
Optionally, segmenting the three-dimensional volume data based on an interactive segmentation algorithm includes:
selecting target-region seed points and non-target-region seed points from the target region and the non-target region, and constructing a graph-theoretic graph;
determining a segmentation function according to the selected target-region seed points and non-target-region seed points;
and performing segmentation calculation on the unmarked points in the three-dimensional volume data with the segmentation function, to determine whether each unmarked point in the three-dimensional volume data belongs to the target region or the non-target region.
Optionally, segmenting the three-dimensional volume data according to the target region and the non-target region includes:
the three-dimensional volume data is segmented based on a classification-based segmentation method.
Optionally, segmenting the three-dimensional volume data based on a classification segmentation method includes:
training a classifier to learn features that can distinguish the target region from the non-target region;
and classifying, with an image classifier built on these features, the regions not yet marked as target or non-target, judging whether each unmarked point in the three-dimensional volume data belongs to the target region or the non-target region.
Optionally, segmenting the three-dimensional volume data based on a classification segmentation method includes:
taking a cubic three-dimensional image block centered on a point of the target region as a positive sample;
taking a cubic three-dimensional image block centered on a point of the non-target region as a negative sample;
training an image classifier to learn features that distinguish the positive samples from the negative samples;
taking a cubic three-dimensional image block to be segmented centered on each point of the regions not yet determined as target or non-target;
and classifying the three-dimensional image blocks to be segmented with the image classifier, judging whether each belongs to the target region or the non-target region.
Optionally, segmenting the three-dimensional volume data according to the target region and the non-target region includes:
segmenting the three-dimensional volume data based on a deep learning method.
Optionally, segmenting the three-dimensional volume data based on a deep learning method includes:
inputting the three-dimensional volume data together with a mask composed of the drawn two-dimensional contours of the target object;
outputting a segmentation mask through a deep learning network;
and determining a target region and/or a non-target region of the three-dimensional volume data according to the segmentation mask.
Optionally, determining a volume of the target object from the three-dimensional contour includes:
determining a volume of the segmentation masks and a number of the segmentation masks;
and determining the volume of the target object according to the volume and the number of the segmentation masks.
Optionally, the intersecting cut planes are completely orthogonal, approximately orthogonal, or oblique.
Optionally, at least two of the cut planes are parallel to or intersect each other.
A fourth aspect of the present application provides a system for measuring three-dimensional volume data, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method described above when executing the computer program.
A fifth aspect of the present application provides a medical instrument comprising the three-dimensional volume data measurement system as described above.
A sixth aspect of the present application provides a computer storage medium having stored thereon a computer program which, when executed by a computer or processor, performs the steps of the method as set forth above.
According to the method and system for measuring three-dimensional volume data described above, after the three-dimensional volume data and the two-dimensional contours on at least two cut planes are obtained, the three-dimensional volume data is segmented according to the two-dimensional contours to obtain the three-dimensional contour of the target object. In this way, the contour of the target object is obtained more accurately, so that further parameters of the target object can be obtained more accurately and effectively. The method is general, simple to operate, and capable of segmenting three-dimensional volume data of difficult targets.
Since the medical instrument comprises the above system for measuring three-dimensional volume data, it can likewise acquire further parameters of a target object more accurately and effectively, while being general, simple to operate, and capable of segmenting three-dimensional volume data of difficult targets.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a schematic block diagram of an apparatus for acquiring three-dimensional volume data of a target object in implementing a method for measuring three-dimensional volume data according to an embodiment of the present application;
FIG. 2 shows a schematic flow chart of a method of measurement of three-dimensional volumetric data according to one embodiment of the present application;
fig. 3 shows a schematic flow chart of acquiring three-dimensional volume data of a target object in a method of measuring three-dimensional volume data according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a method of measuring three-dimensional volumetric data to determine a cross-section in an ultrasound image according to one embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a method for measuring three-dimensional volume data according to an embodiment of the present application for segmenting a target region and a non-target region;
FIG. 6 shows a schematic flow chart of a method of measurement of three-dimensional volumetric data according to another embodiment of the present application;
FIG. 7 shows a schematic block diagram of a measurement system for three-dimensional volumetric data according to yet another embodiment of the present application;
fig. 8 shows a schematic block diagram of a measurement system of three-dimensional volume data according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, exemplary embodiments according to the present application will be described in detail below with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the application described in the application without inventive step, shall fall within the scope of protection of the application.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present application. It will be apparent, however, to one skilled in the art, that the present application may be practiced without one or more of these specific details. In other instances, well-known features of the art have not been described in order to avoid obscuring the present application.
It is to be understood that the present application is capable of implementation in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In order to provide a thorough understanding of the present application, detailed steps and structures are set out in the following description to explain the technical solutions proposed herein. Besides the preferred embodiments described in detail below, however, the present application may have other embodiments.
First, an exemplary measurement system of three-dimensional volume data for implementing the measurement method of three-dimensional volume data of the embodiment of the present application is described with reference to fig. 1.
Fig. 1 is a block diagram illustrating the structure of an exemplary system 10 for measuring three-dimensional volume data, for implementing a method of measuring three-dimensional volume data according to an embodiment of the present application. As shown in fig. 1, the system 10 may include an ultrasound probe 100, a transmission/reception selection switch 101, a transmission/reception sequence controller 102, a processor 103, a display 104, and a memory 105. The transmission/reception sequence controller 102 may excite the ultrasound probe 100 to transmit ultrasonic waves to a target object (the measured object), and may also control the ultrasound probe 100 to receive the ultrasonic echoes returned from the target object, thereby obtaining ultrasonic echo signals/data. The ultrasound probe 100 may be a three-dimensional volume probe, a two-dimensional linear array probe, a convex array probe, a phased array probe, or the like, without limitation here. The processor 103 processes the ultrasonic echo signals/data to obtain tissue-related parameters and ultrasound images of the target object. The ultrasound images obtained by the processor 103 may be stored in the memory 105 and displayed on the display 104.
In this embodiment, the display 104 of the measurement system 10 for three-dimensional volume data may be a touch display screen, a liquid crystal display, or the like, or may be an independent display device such as a liquid crystal display, a television, or the like, which is independent of the measurement system 10 for three-dimensional volume data, or may be a display screen on an electronic device such as a mobile phone, a tablet computer, or the like.
In the embodiment of the present application, the memory 105 of the three-dimensional volume data measuring system 10 may be a flash memory card, a solid-state memory, a hard disk, or the like.
The embodiment of the present application further provides a computer-readable storage medium storing a plurality of program instructions. After the program instructions are called and executed by the processor 103, some or all of the steps of the method for measuring three-dimensional volume data in the embodiments of the present application, or any combination of those steps, may be performed.
In one embodiment, the computer readable storage medium may be memory 105, which may be a non-volatile storage medium such as a flash memory card, solid state memory, hard disk, or the like.
In this embodiment, the processor 103 of the aforementioned system 10 for measuring three-dimensional volume data may be implemented by software, hardware, firmware, or a combination thereof, and may use a circuit, one or more application-specific integrated circuits (ASICs), one or more general-purpose integrated circuits, one or more microprocessors, one or more programmable logic devices, a combination of the aforementioned circuits or devices, or other suitable circuits or devices, so that the processor 103 can perform the corresponding steps of the method for measuring three-dimensional volume data in each embodiment.
Referring to fig. 2, the method for measuring three-dimensional volume data provided by the first aspect of the present application is applied to the system 10 for measuring three-dimensional volume data, and is particularly suitable for a system 10 that includes a touch display screen, so that operations can be input by touching the screen.
Fig. 2 shows a schematic flow chart of a method for measuring three-dimensional volume data according to an embodiment of the present application, and as shown in fig. 2, the method 200 for measuring three-dimensional volume data includes the following steps:
step S210: acquiring three-dimensional volume data of a target object;
step S220: determining at least two intersecting cut planes containing the target object in the three-dimensional volume data, and drawing a two-dimensional contour of the target object on the cut planes;
step S230: and segmenting the three-dimensional volume data according to the two-dimensional contour to obtain the three-dimensional contour of the target object.
Specifically, in step S210, in an example, three-dimensional volume data of a target object of an object to be measured is first acquired by a three-dimensional ultrasound imaging system.
The three-dimensional volume data may be, for example, a gray-scale three-dimensional image; the accurate three-dimensional contour and related information of the target object are then obtained in the subsequent steps.
In an embodiment of the present application, the object to be measured may be a person to be subjected to an ultrasonic examination, and the target object of the object to be measured may be a region of a body tissue of the object to be measured where the ultrasonic examination is performed.
In an example of the present application, as shown in fig. 3, a three-dimensional ultrasound imaging system for three-dimensional imaging includes a probe 2, a transmission/reception selection switch 3, a transmission circuit 4, a reception circuit 5, a beam forming module 6, a signal processing module 7, a three-dimensional imaging module 8, and a display 9, where the beam forming module 6, the signal processing module 7, and the three-dimensional imaging module 8 can be regarded as modules integrated in a processor.
During testing, the transmission circuit 4 sends a group of delayed, focused pulses to the probe 2. The probe 2 transmits ultrasonic waves into the examined tissue, receives, after a certain delay, the ultrasonic echoes carrying tissue information reflected from the examined tissue, and converts these echoes back into electrical signals. The reception circuit 5 receives these electrical signals and sends the ultrasonic echo signals to the beam forming module 6, where they are focused, delayed, weighted, and summed, and then processed in the signal processing module 7. The signals processed by the signal processing module 7 are sent to the three-dimensional imaging module 8, processed there to obtain visual information such as a three-dimensional image, and then sent to the display 9 for display, yielding the three-dimensional volume data of the target object.
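For illustration only, the following is a minimal Python sketch of the focusing, delay, weighting, and summation just described, applied to a single image point; the array sizes, delays, and apodization weights are assumptions of the example, not the actual implementation of the beam forming module 6.

```python
import numpy as np

def delay_and_sum(rf, delay_samples, weights):
    """Focus one image point: delay each channel, weight it, and sum.

    rf            : (n_channels, n_samples) received echo signals
    delay_samples : (n_channels,) focusing delay per channel, in samples
    weights       : (n_channels,) apodization weights
    """
    n_channels, n_samples = rf.shape
    idx = np.clip(delay_samples.astype(int), 0, n_samples - 1)
    aligned = rf[np.arange(n_channels), idx]  # one delayed sample per channel
    return float(np.sum(weights * aligned))   # weighted summation -> beam sample

# Illustrative call: 64 channels, 1024 samples each (hypothetical values)
rf = np.random.randn(64, 1024)
sample = delay_and_sum(rf, delay_samples=np.full(64, 200), weights=np.hanning(64))
```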
In this step, when acquiring the three-dimensional volume data, a physician may aim the ultrasound probe at a region where a target object to be detected is located, the transmitting module transmits ultrasound waves to the target object to be detected, and the echo signal received by the receiving module represents an echo of an internal structure of the target object to be detected. The gray level image obtained by processing the echo can reflect the internal structure of the target object to be detected.
Illustratively, this real-time acquisition process may guide the physician; that is, operating prompts may be given so that the physician follows them to obtain the ultrasound image.
In step S220, at least two cut planes are determined, and the two-dimensional contour of the target object is drawn on them.
When determining a cut plane, the position of the selected plane does not necessarily contain the target object to be segmented, so the plane may need to be moved to the target object before the contour is drawn, for example translated or rotated to the central region of the target object, or even moved to its exact center, so that the determined plane contains the target object and/or more of its cross-sectional area. The central region refers to a region extending outward to a certain extent around the center point of the target object; for example, it may be a circle of any radius greater than zero centered on the center point of the target object, or a square extending outward with the center of the target object as its center of symmetry.
Determining the cut planes and drawing the two-dimensional contour of the target object can follow either of the following two methods.
First: at least two cut planes are selected, and a two-dimensional contour is then drawn on each selected plane. Specifically, this method includes, but is not limited to, the following ways:
For example, when the at least two selected planes do not contain the target object, they need to be moved into the region containing it. For example, a plane is moved to the central region of the target object, at least two planes are selected and determined in that central region, ensuring that each selected plane contains as much of the target object and/or its contour area as possible, and the two-dimensional contour of the target object is then drawn on the two determined planes.
The movement may include, but is not limited to, translation, rotation, sliding, and the like, chosen according to actual needs; unless otherwise specified, subsequent references to movement follow this explanation.
In one example of the invention, two intersecting cross sections may be taken at will, and then both moved to the central region of the target object.
For another example, when the at least two selected cross-sections contain the target object, the two-dimensional contour can be directly drawn.
In another example of the present invention, the two selected planes intersect and contain the target object but are not located in its central region; to obtain more information about the target object, the two planes may be further moved to the central region, though even without doing so the purpose of the present application can be achieved.
In another example of the present invention, cut planes that intersect and contain the target object may also be selected directly.
In another example of the invention, cut planes that intersect and contain the central region where the target object is located may also be taken directly.
Second: a cut plane is selected and a two-dimensional contour of the target object is drawn on it; another cut plane is then selected and a two-dimensional contour of the target object is drawn on that plane.
Specifically, in an embodiment of the present application, a first plane is selected, moved by translation or rotation to the central region of the target object, and a two-dimensional contour of the target object is drawn on it; a second plane containing the target object is then generated with the determined first plane as a positional reference, and a two-dimensional contour of the target object is drawn on the second plane, where the second plane comprises at least one plane. In this example, the first and second planes may be parallel or intersect; neither arrangement is required.
In one example, if the selected plane contains the target object, the two-dimensional contour of the target object is drawn directly on it; a second plane containing the target object is generated with the first plane as a positional reference, and the two-dimensional contour of the target object is drawn on the second plane.
In one example, if the plane contains the target object but is not in its central region, the plane is moved to the central region of the target object, and the two-dimensional contour of the target object is drawn on this first plane; a second plane containing the target object is then generated with the determined first plane as a positional reference, and a two-dimensional contour of the target object is drawn on the second plane.
In one example, a plane is moved to the central region of the target object, a first plane is determined, and a two-dimensional contour of the target object is drawn on it; a second plane is generated at the center of the two-dimensional contour determined on the first plane, and the two-dimensional contour of the target object is drawn on at least the second plane, where the first and second planes intersect.
In another example, a plane is moved to the central region of the target object, a first plane is determined, and a two-dimensional contour of the target object is drawn on it; a second plane parallel to the first is generated in the central region of the two-dimensional contour determined on the first plane, and the two-dimensional contour of the target object is drawn on the second plane. In this example, the second plane is not located at the exact center of the contour on the first plane but only in its central region, so the second plane may be parallel to the first plane or intersect it, as actual needs dictate.
In the above examples, the first plane may be chosen by selecting an arbitrary plane and then moving it to the central region, or even the center, of the target object; alternatively, after the central region or center of the target object has been determined from experience or from the three-dimensional image, the plane may be selected there directly.
It should be noted that, in the second method, the first and second planes may each be determined by any of the ways mentioned in the first method, provided there is no contradiction; the description is not repeated here.
For example, in different examples of the present application, three, four, five, or more cut planes may be determined in the three-dimensional volume data. The more planes are determined and the more two-dimensional contours of the target object are drawn, the more information about the three-dimensional volume data is obtained, which benefits its segmentation and yields a more accurate three-dimensional contour. Once the number of planes reaches a certain level, however, the gain in segmentation and drawing accuracy becomes insignificant, and the selection of planes may stop. In the present application, the number of determined planes is typically 2 to 6.
Further, the positional relationship of the determined at least two planes is at least intersection; that is, the two planes intersect in three-dimensional space and share a common straight line.
In particular, the intersection may be completely orthogonal, approximately orthogonal, or oblique. In the present application, completely orthogonal means the two planes are perpendicular to each other with an included angle of 90°. Approximately orthogonal means the two planes are substantially perpendicular, for example with an included angle of 85°-95°, 88°-92°, or 89°-91°, a nearly perpendicular state that is not strictly required to be exactly perpendicular. Oblique means the two planes intersect but are not perpendicular. Unless otherwise noted, the terms intersecting, completely orthogonal, approximately orthogonal, and oblique are used in these senses throughout.
When determining the planes, planes at different positions can be selected so that the obtained two-dimensional contours are more comprehensive; in the present application, orthogonal planes may be selected. In an example of the present application, three orthogonal planes are taken, as shown in fig. 4, perpendicular to one another in space; for example, their extension directions follow the X, Y, and Z axes of the three-dimensional coordinate system. Each plane can be rotated or translated. In another example of the present application, two of the three orthogonal planes shown in fig. 4 may be taken instead.
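For illustration only, a minimal Python sketch of taking three mutually orthogonal, axis-aligned planes through a chosen point of the volume data; the (Z, Y, X) axis order and the random stand-in volume are assumptions of the example.

```python
import numpy as np

def orthogonal_slices(volume, center):
    """Three axis-aligned cut planes intersecting at `center` = (z, y, x)."""
    z, y, x = center
    return volume[z, :, :], volume[:, y, :], volume[:, :, x]

volume = np.random.rand(64, 64, 64)      # stand-in for acquired volume data
plane_a, plane_b, plane_c = orthogonal_slices(volume, (32, 32, 32))
```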
In the present application, the determined at least two planes each contain information about the target object. When the at least two planes are parallel, they lie at different positions; once determined, they display images of the target object at those different positions and provide different information, and after contour drawing, different contours are obtained. Each drawn two-dimensional contour nevertheless contains the target object, and the contours are associated with one another so as to be used jointly in deriving the three-dimensional contour.
Similarly, when the at least two planes intersect, they display images of the target object at different positions, with identical image information and two-dimensional contour along their line of intersection; away from it they display image information of different parts, and after two-dimensional contour drawing, different two-dimensional contours are obtained, each containing the target object and associated with the others for jointly deriving the three-dimensional contour. When the planes are orthogonal, they are distributed more uniformly around the target object and display its image more comprehensively, so that more effective two-dimensional contours, and hence a more accurate three-dimensional contour, are obtained.
In a specific example of the present application, after obtaining the three-dimensional volume data, the user selects any one of three orthogonal planes, translates or rotates it to the center of the target object to be segmented or to its vicinity, and draws the two-dimensional contour of the segmented target object on that plane; the two other orthogonal planes are then generated through the center (or near the center point) of that two-dimensional contour. Finally, the user draws a two-dimensional contour on at least one of the two generated orthogonal planes. Through the above steps, the contours of at least two cross sections of the target object are obtained.
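One way to realize "generating the two other orthogonal planes through the center of the contour" is to pass them through the contour's centroid. The following is a minimal sketch under that reading; the vertex mean is used as a simple stand-in for the true area centroid, and all coordinates are hypothetical.

```python
import numpy as np

def contour_centroid(points_xy):
    """Approximate center of a drawn 2-D contour as the mean of its vertices."""
    return points_xy.mean(axis=0)

# Contour drawn on the first plane z = z0; the two generated orthogonal planes
# x = cx and y = cy then both pass through the point (cx, cy, z0).
contour = np.array([[10.0, 20.0], [30.0, 22.0], [28.0, 40.0], [12.0, 38.0]])
cx, cy = contour_centroid(contour)
z0 = 32
```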
For example, on a cut plane, it is determined from the gray levels of the plane's two-dimensional image and from the user's experience which regions and/or points of the image belong to the target region and which to the non-target region; these are marked and drawn, giving the two-dimensional contour of the target object.
In addition, semi-automatic algorithms can be used for automatic edge snapping to obtain the two-dimensional contour of the target object.
Such semi-automatic algorithms include, but are not limited to, the Livewire edge detection algorithm and/or dynamic programming.
In an example of the application, the two-dimensional contour of the target object is drawn with the Livewire edge detection algorithm. For example, analysis of the at least two obtained planes shows that the gray levels of edge pixels and non-edge pixels in the target region of the image differ, usually with a distinct jump. Whether a pixel lies on the edge of the two-dimensional contour is judged by detecting whether its gray level changes abruptly; the target and non-target regions of the plane are divided in turn, and the drawn two-dimensional contour of the target object is finally obtained.
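For illustration only, a minimal sketch of the gray-level jump test described above, using a Sobel gradient magnitude and a threshold; it covers only the edge-candidate step, not the full Livewire shortest-path search, and the threshold value is an assumption.

```python
import numpy as np
from scipy import ndimage  # assumed dependency providing the Sobel filters

def edge_candidates(plane, jump_threshold=50.0):
    """Mark pixels whose gray level jumps abruptly (candidate contour edges)."""
    gray = plane.astype(float)
    gx = ndimage.sobel(gray, axis=0)
    gy = ndimage.sobel(gray, axis=1)
    magnitude = np.hypot(gx, gy)        # size of the local gray-level jump
    return magnitude > jump_threshold   # True where a contour edge is likely
```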
It should be noted that any method capable of drawing a two-dimensional contour of the image on the cut plane can be used in the present application, which is not limited in this respect.
In step S230, the three-dimensional volume data is segmented according to the two-dimensional contour of the target object rendered in step S220 to obtain a three-dimensional contour of the target object.
Different from the conventional method, this step does not fit the two-dimensional contours into a three-dimensional contour; instead, to improve precision, the three-dimensional volume data itself is segmented to obtain a complete three-dimensional contour, so that the resulting three-dimensional image is more accurate and more effective information is extracted.
Specifically, segmenting the three-dimensional volume data means that, once the two-dimensional contours on at least two planes are obtained (which amounts to having determined, on those planes, which regions are target regions and which are non-target regions), this contour information is used to guide the segmentation of the other regions of the three-dimensional volume data, that is, to decide which of the remaining regions belong to the target region and which to the non-target region, thereby obtaining the three-dimensional contour.
Therefore, before the three-dimensional volume data is segmented, a target region and a non-target region need to be determined, and segmenting the three-dimensional volume data according to the two-dimensional contour specifically includes:
step S2301: generating a target area containing the target object;
step S2302: generating a non-target region not containing the target object;
step S2303: and segmenting the three-dimensional volume data according to the target region and the non-target region.
Of course, in some possible implementations, only the target region containing the target object needs to be determined, and the three-dimensional volume data is segmented based on that region alone, without determining a non-target region; for the specific segmentation method, refer to the related segmentation methods described below, which are not repeated here.
In step S2301, the target region (foreground region) and the non-target region (background region) are determined from prior knowledge or directly from user input.
In an example of the present application, the user has drawn the two-dimensional contour of the target object on at least two determined planes (e.g., at least two of the three orthogonal planes); the area within the two-dimensional contour is certainly the target region, and is therefore determined as the target region.
In step S2302, generating a non-target region not containing the target object includes: determining the region outside the drawn two-dimensional contour of the target object as a non-target region. In an example of the present application, the relationship between the target region (foreground region) and the non-target region (background region) is as shown in fig. 5: the user has drawn the two-dimensional contour of the target object on at least two determined planes (e.g., at least two of the three orthogonal planes), and the region outside the two-dimensional contour is certainly the non-target region.
In an example of the present application, the two-dimensional contour of the target object drawn on the at least two determined planes (for example, at least two of the three orthogonal planes) is taken as the foreground, and the user-drawn two-dimensional contour is morphologically dilated to obtain the background region.
Morphological dilation merges all points in contact with a region into that region, expanding the boundary outward so as to fill holes in the object.
For example, a convolution kernel is defined; in this example it operates on the pixel points of the background region. The kernel may have any shape and size and carries a separately defined reference point, the anchor point; it is usually a square or a disk with a reference point and may be called a template or a mask. The mask is then compared with a point of the two-dimensional contour: if the mask falls in the background region, that region is background, and the remaining points of the two-dimensional contour can be compared one by one in this manner to obtain the complete background region.
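For illustration only, a minimal sketch of generating the background region by dilating the drawn contour mask; scipy's binary_dilation plays the role of the kernel comparison described above, and the ring width is an assumption.

```python
import numpy as np
from scipy import ndimage  # assumed dependency

def background_from_contour(contour_mask, margin=5):
    """Background region: the ring produced by dilating the drawn contour mask.

    contour_mask : boolean 2-D mask, True inside the drawn two-dimensional contour
    margin       : number of dilation iterations, i.e. the ring width in pixels
    """
    dilated = ndimage.binary_dilation(contour_mask, iterations=margin)
    return dilated & ~contour_mask  # points just outside the contour = background
```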
After the two-dimensional contour is drawn, a very distinct boundary is formed between it and the non-target region, so the two-dimensional contour is displayed prominently and the two can be told apart.
In an example of the application, the drawn two-dimensional contour may be displayed more clearly by its boundary line: the boundary line identifies the shape of the contour and distinguishes it from the non-target region. For example, if the boundary line of the contour is black, the non-target region contains no lines, and the background is gray, the two-dimensional contour is displayed very clearly. In another example, the boundary line may also be a colored line, separating it from the non-target region even more distinctly.
In another example of the present application, the two-dimensional contour may be a colored contour whose different parts have different colors, approximating the appearance and shape of the actual target object more closely and distinguishing it from the non-target region more effectively.
Besides the above manners, the two-dimensional contour may also be distinguished by image brightness and gray level: for example, after being drawn, the contour is displayed as a bright area of the image while the non-target background is a dark area, and the contrast of bright against dark separates the contour from the non-target region more clearly. The display of the two-dimensional contour is not limited to the manners above and may include other manners not listed here.
In step S2303, the method of segmenting the three-dimensional volume data is roughly classified into the following three categories:
First, the three-dimensional volume data is segmented with an interactive segmentation algorithm according to the target region and the non-target region generated in step S2302, so that points of the three-dimensional volume data are assigned to the target region or the non-target region.
The interactive segmentation algorithms may include Graph Cut, Grab Cut, Random Walker, and the like, but are not limited to those enumerated; any segmentation algorithm capable of segmenting the three-dimensional volume data may be applied in the present application.
Taking the Graph Cut algorithm as an example, the segmentation of the three-dimensional volume data is described in detail below. The objective of this step is to divide the image of the three-dimensional volume data into two disjoint parts, a foreground region and a background region. The image is composed of vertices and edges, and the edges carry weights. Graph Cut requires constructing a graph-theoretic graph that has two kinds of vertices, two kinds of edges, and two kinds of weights. Ordinary vertices correspond to the pixels of the image; between every two neighboring pixels there is an edge, whose weight is determined by the boundary smoothness energy term. There are additionally two terminal vertices, s (target) and t (background). Every ordinary vertex is connected to s by an edge whose weight is determined by the region energy term Rp(1), and to t by an edge whose weight is determined by the region energy term Rp(0). The weights of all edges can thus be determined, that is, the graph is built. The minimum cut can then be found by the min-cut algorithm: it is the set of edges with the smallest total weight whose removal just separates the target from the background, i.e., the min cut corresponds to minimizing the energy.
Specifically, in an example of the present application, after the foreground region and the background region have been obtained, an interactive segmentation algorithm can be used to segment the three-dimensional volume data. The Graph Cut segmentation algorithm is supplied with some foreground seed points (the marked target region) and background seed points (the non-target region), and it automatically determines whether each remaining unmarked point belongs to the foreground or the background. The principle of Graph Cut is to construct the image as a graph in the graph-theoretic sense, with the image pixels as graph nodes and the relations between each pixel and the other pixels in its surrounding neighborhood as graph edges; cost functions (segmentation functions) for the boundary and the regions are then defined, and the image segmentation is realized by minimizing the cost function, so as to obtain the three-dimensional contour of the target object.
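For illustration only, a minimal sketch of the seeded s-t graph construction described above, written against the third-party PyMaxflow library (whose grid-graph API this example assumes). The n-link weight is kept constant for brevity; a full implementation would derive it from the gray-level difference between neighboring voxels, per the boundary smoothness term.

```python
import numpy as np
import maxflow  # PyMaxflow, an assumed third-party dependency

def graph_cut_3d(volume, fg_seeds, bg_seeds, smooth_weight=1.0, inf=1e9):
    """Seeded 3-D Graph Cut; fg_seeds/bg_seeds are boolean masks over `volume`."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(volume.shape)
    # n-links between neighboring voxels (boundary smoothness term, constant here)
    g.add_grid_edges(nodes, weights=smooth_weight, symmetric=True)
    # t-links to the terminals s (target) and t (background); seeds get hard links
    g.add_grid_tedges(nodes, np.where(fg_seeds, inf, 0.0), np.where(bg_seeds, inf, 0.0))
    g.maxflow()                          # the min cut minimizes the energy
    return ~g.get_grid_segments(nodes)   # True where a voxel falls on the target side
```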
Second, the three-dimensional volume data is segmented with a classification-based segmentation method according to the target region and the non-target region generated in step S2302, so that points of the three-dimensional volume data are assigned to the target region or the non-target region.
In the classification-based segmentation method, because the foreground and background regions have different gray-level distributions, a classifier can be trained to learn features that distinguish the target region from the non-target region; such features may be the gray level, the relations of a point to its surrounding points and edges, and so on. An image classifier is then built on these features to classify the regions of the three-dimensional volume data not yet marked as target or non-target and to judge whether each unmarked region belongs to the target region or the non-target region, thereby segmenting the three-dimensional volume data and obtaining the three-dimensional contour of the target object.
Common feature extraction and classification methods include, but are not limited to: SVM (support vector machine), PCA (principal component analysis), neural networks, and deep learning networks (e.g., CNN, VGG, Inception, MobileNet, etc.).
Taking a neural network as an example of feature extraction: a neural network must first learn according to a certain learning criterion before it can work. Taking the recognition of the "target region" and the "non-target region" by an artificial neural network as an example, the network is specified to output "1" when a "target region" is input and "0" when a "non-target region" is input.
The criterion for network learning should be: if the network makes a wrong decision, learning should make the network less likely to make the same mistake next time. First, each connection weight of the network is given a random value in the interval (0, 1), and the image pattern corresponding to a "target region" is input to the network. The network performs a weighted summation over the input pattern, compares the result with a threshold, and applies a nonlinear operation to obtain its output. In this state, the probabilities of the network outputting "1" and "0" are each 50%, that is, the output is completely random. If the output is "1" (the correct result), the connection weights are increased so that the network can still decide correctly when it encounters a "target region" pattern input again.
If the output is "0" (the incorrect result), the network's connection weights are adjusted in the direction of reducing the combined input weighting, the goal being to lower the chance that the network repeats the same error the next time it encounters a "target region" pattern input. After a number of target regions and non-target regions have been presented to the network in turn and the network has learned several times by this method, its judgment accuracy rises greatly. The network has then successfully learned the two patterns and memorizes them distributively in its connection weights; when it encounters either pattern again, it can make a quick and accurate judgment and identification, that is, distinguish a "target region" from a "non-target region".
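The reinforce-on-correct, adjust-on-error rule above is essentially a perceptron-style update. The following is a minimal sketch under that reading; the learning rate, epoch count, and feature vectors are illustrative assumptions.

```python
import numpy as np

def train_threshold_unit(samples, labels, lr=0.1, epochs=20, seed=0):
    """Binary unit: output 1 for a "target region", 0 for a "non-target region"."""
    rng = np.random.default_rng(seed)
    w = rng.random(samples.shape[1])  # connection weights start in (0, 1)
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            out = 1 if x @ w + b > 0 else 0  # weighted sum vs. threshold
            if out != y:                     # wrong decision: adjust the weights
                w += lr * (y - out) * x      # lowers the chance of repeating it
                b += lr * (y - out)
    return w, b
```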
In an example of the present application, segmenting the three-dimensional volume data with a classification-based segmentation method may include:
Step A: taking a cubic three-dimensional image block centered on a point of the target region as a positive sample, for example an n × n × n three-dimensional image block centered on a point of the target region; and similarly taking a cubic three-dimensional image block centered on a point of the non-target region as a negative sample, for example an n × n × n three-dimensional image block centered on a point of the non-target region.
Step B: training an image classifier to learn features that distinguish the positive samples from the negative samples; for the specific training method, refer to the neural network learning method described above.
Step C: taking a cubic three-dimensional image block to be segmented centered on each point of the regions not yet determined as target or non-target, for example an n × n × n three-dimensional image block centered on each unmarked point.
Step D: classifying each point with the image classifier using the learned feature extraction and classification method, that is, classifying each three-dimensional image block to be segmented and judging whether it belongs to the target region or the non-target region. Once all unmarked points have been traversed, the whole three-dimensional volume data is segmented.
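For illustration only, a minimal sketch of steps A to D using cubic image blocks and an SVM (one of the classifiers listed earlier); the patch size, the absence of border handling, and the seed-point lists are assumptions of the example.

```python
import numpy as np
from sklearn.svm import SVC  # assumed dependency; SVM is among the listed methods

def cubic_patch(volume, center, n=9):
    """n x n x n image block centered on `center`; no border handling for brevity."""
    z, y, x = center
    h = n // 2
    return volume[z - h:z + h + 1, y - h:y + h + 1, x - h:x + h + 1]

def train_patch_classifier(volume, fg_points, bg_points, n=9):
    # Steps A and B: cubic positive/negative samples, then classifier training
    X = [cubic_patch(volume, p, n).ravel() for p in fg_points + bg_points]
    y = [1] * len(fg_points) + [0] * len(bg_points)  # 1 = target, 0 = non-target
    return SVC().fit(np.asarray(X), np.asarray(y))

# Steps C and D: classify every undetermined voxel by its own cubic block, e.g.
#   label = clf.predict([cubic_patch(volume, point, n).ravel()])
```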
Third, the three-dimensional volume data is segmented with a deep-learning-based method according to the target region and the non-target region generated in step S2302, so that points of the three-dimensional volume data are assigned to the target region or the non-target region.
A conventional deep-learning-based method generally takes the image to be segmented (a two-dimensional image or three-dimensional volume data) as input and outputs a segmentation mask through operations such as stacked convolutions, pooling, and activation functions.
In an example of the application, the deep learning is to learn the intrinsic rules and the expression levels of a "target region" and a "non-target region", the information obtained in the learning process is very helpful for explaining data such as characters, images and sounds, and the purpose of the deep learning is to enable a machine to have the analysis learning capability like a human, to recognize the data of the "target region" and the "non-target region", and further to realize the segmentation of three-dimensional volume data.
To improve segmentation accuracy, the method differs from a conventional deep learning method in that, when segmenting the three-dimensional volume data by the deep-learning-based method, the input consists of the three-dimensional volume data together with a mask formed from the two-dimensional contour of the target object, i.e., the mask contains the drawn two-dimensional contour of the target object. For example, in an example of the present application, the information of the generated target region and non-target region may be stitched together with the original image to be segmented (the three-dimensional volume data) as the input of the deep learning segmentation network, so that the network learns the features of the target to be segmented from the partial contours calibrated by the user and thereby segments the remaining unmarked contour regions. Because the previously obtained information of the target region and the non-target region is added, the deep learning network can extract features more accurately, so that the unmarked regions, and hence the three-dimensional contour, are segmented more accurately.
After the deep learning, the deep learning network outputs a segmentation mask; based on this segmentation mask, the regions not yet marked as target or non-target are finally segmented, and it is judged whether each point in the three-dimensional volume data belongs to the target region or the non-target region.
In an example of the present application, taking a three-dimensional segmentation network as an embodiment, the input of the deep learning may be the three-dimensional volume data together with a three-dimensional mask (of the same size as the volume data) composed of the two-dimensional contours drawn by the user on at least two of the three orthogonal planes. In the three-dimensional mask, the region of the contours drawn by the user has the value 1 and the remaining regions have the value 0. By incorporating the three-dimensional mask, the deep learning network can be guided to better learn the features of the target, so that the regions not drawn by the user are better segmented, thereby realizing the segmentation of the three-dimensional volume data and obtaining the three-dimensional contour of the target object.
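A minimal sketch of this two-channel input construction is given below, assuming a PyTorch 3-D network whose first layer accepts two input channels; the network layer shown and all names are assumptions for illustration, not the network of the present application.

```python
import torch
import torch.nn as nn

def build_input(volume, contour_mask):
    """volume: (D, H, W) float tensor; contour_mask: (D, H, W) tensor that is
    1 inside the user-drawn two-dimensional contours and 0 elsewhere."""
    x = torch.stack([volume, contour_mask.float()], dim=0)  # (2, D, H, W): data + guidance
    return x.unsqueeze(0)                                   # batch dim -> (1, 2, D, H, W)

# e.g. the first layer of a 3-D segmentation network that accepts two channels:
first_conv = nn.Conv3d(in_channels=2, out_channels=16, kernel_size=3, padding=1)
```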
The above-mentioned deep learning algorithm is only exemplary, and it should be understood that the present application may also learn the features of the target region and the non-target region through other machine learning or deep learning algorithms to segment the three-dimensional volume data.
According to the method for measuring three-dimensional volume data described above, after the three-dimensional volume data and the two-dimensional contours of at least two cut planes are obtained, the three-dimensional volume data are segmented according to the two-dimensional contours to obtain the three-dimensional contour of the target object. In this way, the three-dimensional contour of the target object can be obtained more accurately, and further parameters of the target object can in turn be obtained more accurately and effectively. The method combines universality, simplicity of operation, and the ability to segment three-dimensional volume data of difficult targets.
In addition to the above steps, the method for measuring three-dimensional volume data according to the present application may further include other steps; for example, to further improve the accuracy of the three-dimensional contour segmentation, it may further include a step of revising the two-dimensional contour.
In an example of the present application, the method further comprises: receiving a revision instruction for the two-dimensional profile; revising the two-dimensional outline according to the revising instruction, and re-segmenting the three-dimensional volume data according to the revised two-dimensional outline to obtain a new three-dimensional outline of the target object.
Specifically, in the present application, the segmentation of the three-dimensional volume data is already realized through steps S210 to S230; however, every segmentation algorithm has limited accuracy, and partially erroneous regions may be segmented. To improve segmentation accuracy and universality, in the segmentation method of the present application the contours are drawn on at least two of three orthogonal cut planes, and the two-dimensional contours drawn by the user guide the segmentation algorithm toward a more accurate result. As more user input is added during editing, the segmentation result becomes more accurate, which achieves the purpose of the editing.
In an example of the present application, the method may further include: displaying the three-dimensional contour. For example, the three-dimensional contour obtained by the measurement system from processing the three-dimensional volume data may be stored in a memory, and the three-dimensional contour may then be shown on a display.
After the three-dimensional contour is drawn, a clear boundary is formed between the three-dimensional contour and the non-target area, so that the three-dimensional contour can be displayed very prominently, the three-dimensional contour and the non-target area can be told apart, and relevant information about the three-dimensional contour can be obtained.
In an example of the present application, the three-dimensional contour may be displayed more clearly by means of its boundary line after segmentation: the boundary line identifies the shape of the three-dimensional contour and distinguishes it from the non-target region. For example, if the boundary lines of the three-dimensional contour are black, the non-target area has no lines, and the three-dimensional contour lies on a gray background, the three-dimensional contour can be displayed very clearly. In another example, the boundary line may also be a colored line, so that the contour is set apart from the non-target area even more clearly.
In another example of the present application, the whole three-dimensional contour may be rendered as a colored contour whose different portions have different colors, so as to approximate the appearance and shape of the actual target object more closely and to distinguish it from the non-target area more effectively.
Besides the above manners, the three-dimensional contour may also be distinguished by the brightness and gray scale of the image; for example, after the three-dimensional contour is drawn, it may be displayed as a bright area of the image while the background of the non-target area remains dark, so that the contrast between bright and dark areas separates the three-dimensional contour from the non-target area more clearly. It should be noted that the display of the three-dimensional contour is not limited to the above manners and may include other display manners, which are not listed here.
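As an illustration of these display manners, the following sketch overlays a colored boundary line on one slice of the segmentation and brightens the target relative to the darker background; the function and array names are assumptions for illustration.

```python
import matplotlib.pyplot as plt

def show_slice_with_contour(volume, mask, z):
    """Overlay the segmented boundary (red line) on slice z and brighten the
    target region relative to the darker background."""
    slice_img, slice_mask = volume[z], mask[z]
    display = slice_img * (0.5 + 0.5 * slice_mask)       # target bright, background dim
    plt.imshow(display, cmap="gray")
    plt.contour(slice_mask, levels=[0.5], colors="red")  # colored boundary line
    plt.axis("off")
    plt.show()
```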
In an example of the present application, the method may further include: determining a volume of the target object from the three-dimensional contour. As described above, in the embodiment of the three-dimensional segmentation network, the input of the deep learning may be the three-dimensional volume data together with a three-dimensional mask (of the same size as the volume data) composed of the two-dimensional contours drawn by the user on at least two of the three orthogonal planes. After the volume represented by one segmentation mask unit and the number of segmentation mask units have been determined, the volume of the target object is calculated from them, i.e., the product of the two is the volume of the target object. Of course, the volume of the target object may also be obtained by other methods once the three-dimensional contour of the target object has been obtained, which is not limited here.
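A minimal sketch of this volume computation follows, assuming the segmentation mask is a binary voxel array with known physical voxel spacing; the names are illustrative.

```python
import numpy as np

def mask_volume(seg_mask, voxel_spacing):
    """seg_mask: binary 3-D array; voxel_spacing: (dz, dy, dx), e.g. in mm."""
    voxel_volume = float(np.prod(voxel_spacing))  # physical volume of one voxel
    return int(seg_mask.sum()) * voxel_volume     # number of mask voxels x voxel volume
```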
The above exemplarily illustrates a method for measuring three-dimensional volume data according to an embodiment of the present application, in which contours are drawn on at least two cut planes and the drawn two-dimensional contours guide the segmentation algorithm in segmenting the three-dimensional volume data to obtain a more accurate result. The method combines universality, simplicity of operation, and the ability to segment three-dimensional volume data of difficult targets.
A second aspect of the present application provides another method for measuring three-dimensional volume data. A schematic flowchart of this method according to another embodiment of the present application is described below with reference to fig. 6. As shown in fig. 6, the method 600 for measuring three-dimensional volume data includes the following steps:
step S610: acquiring three-dimensional volume data of a target object;
step S620: determining at least two cross sections containing different positions of the target object in the three-dimensional volume data, and drawing a two-dimensional contour of the target object in the cross sections;
step S630: and segmenting the three-dimensional volume data according to the two-dimensional contour to obtain the three-dimensional contour of the target object.
Steps S610 and S630 of the method 600 described with reference to fig. 6 are the same as steps S210 and S230 of the method 200 described with reference to fig. 2; for their explanation, reference may be made to the foregoing description of steps S210 and S230. Modifications, substitutions, and the like of that description are likewise included in the method 600 for measuring three-dimensional volume data according to this embodiment of the present application.
Step S620 is now described in detail: at least two cross sections containing different positions of the target object are determined in the three-dimensional volume data, and a two-dimensional contour of the target object is drawn on these cross sections.
In this step, at least two cross sections need to be determined for drawing the two-dimensional contour of the target object. The positional relationship between the two sections is not limited to intersection: as long as the sections cut the target object at different positions and contain the target object, the two-dimensional contour of the target object can be drawn, and the relationship between the two sections is otherwise unrestricted.
That the sections are at different positions means that the two sections do not coincide in three-dimensional space while both cut the target object, so that different two-dimensional contours of the target object are obtained, providing a more effective reference and guidance for the subsequent segmentation of the three-dimensional volume data.
In an example of the present application, the at least two cross sections may be parallel to or intersect each other. Here, parallel refers to two parallel cross sections located at different positions. Intersecting sections may be completely orthogonal, nearly orthogonal, or skew. In the present application, completely orthogonal means that the two cross sections are perpendicular to each other, with an included angle of 90°; nearly orthogonal means that the two cross sections are substantially perpendicular to each other, for example with an included angle of 85°–95°, or 88°–92°, or 89°–91°, i.e., the nearly perpendicular state is not strictly required to be exactly perpendicular; skew means that the two cross sections intersect but are not perpendicular. These explanations of intersecting, completely orthogonal, nearly orthogonal, and skew apply throughout unless specifically stated otherwise.
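The following sketch illustrates these angle ranges by classifying the relationship of two cut planes from their normal vectors; the tolerance value and all names are assumptions for illustration.

```python
import numpy as np

def plane_relation(n1, n2, tol_deg=5.0):
    """Classify two cut planes from their normal vectors n1, n2."""
    n1 = np.asarray(n1, float) / np.linalg.norm(n1)
    n2 = np.asarray(n2, float) / np.linalg.norm(n2)
    # angle between the planes, folded into [0 deg, 90 deg]
    angle = np.degrees(np.arccos(np.clip(abs(np.dot(n1, n2)), 0.0, 1.0)))
    if angle < 1e-6:
        return "parallel"
    if abs(angle - 90.0) < 1e-6:
        return "completely orthogonal"
    if abs(angle - 90.0) <= tol_deg:   # e.g. the 85-95 degree range quoted above
        return "nearly orthogonal"
    return "skew (intersecting but not perpendicular)"
```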
When determining the cut planes, sections at different positions can be chosen so that the obtained two-dimensional contours are more comprehensive; in the present application, sections at orthogonal positions may be chosen. In an example of the present application, three orthogonal cross sections are taken, as shown in fig. 4, where the three sections are mutually perpendicular in space; for example, the extending directions of the three sections are the directions of the X-axis, Y-axis, and Z-axis of a three-dimensional coordinate system. Each section can be rotated or translated. In another example of the present application, two of the three orthogonal cross sections shown in fig. 4 may also be taken.
In a specific example of the present application, after the three-dimensional volume data are obtained, the user selects any one of the three orthogonal cross sections and translates or rotates that plane to the center, or the vicinity of the center, of the target object to be segmented; the user then draws the two-dimensional contour of the segmented target object on this section, after which the other two orthogonal sections are generated through the center (or the vicinity of the center point) of that two-dimensional contour. Finally, the user draws a two-dimensional contour on at least one of the two generated orthogonal planes. Through the above steps, the contours of at least two cross sections of the target object are obtained.
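A hedged geometric sketch of generating the other two orthogonal sections through the contour center is given below; the plane representation (a point on the plane plus a unit normal) and all names are assumptions for illustration.

```python
import numpy as np

def orthogonal_planes_through_center(p0, u, v, c_uv):
    """p0: a point on the first plane; u, v: orthonormal in-plane axes of the
    first plane; c_uv: the drawn contour's center in (u, v) coordinates."""
    p0, u, v = (np.asarray(a, float) for a in (p0, u, v))
    c = p0 + c_uv[0] * u + c_uv[1] * v  # contour center as a 3-D point
    plane2 = (c, u)  # through c with normal u: perpendicular to plane 1
    plane3 = (c, v)  # through c with normal v: perpendicular to plane 1 and plane 2
    return plane2, plane3
```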
For example, on the cut section, the user determines, from the gray scale of the two-dimensional image of the section and from experience, which areas and/or points in the image belong to the target area and which belong to the non-target area, and marks and draws them accordingly, thereby obtaining the two-dimensional contour of the target object.
In addition, some semi-automatic algorithms can be employed for automatic edge snapping, from which the two-dimensional contour of the target object is then obtained.
Such semi-automatic algorithms include, but are not limited to, an edge detection algorithm (Livewire) and/or dynamic programming.
In an example of the present application, the two-dimensional contour of the target object is drawn using an edge detection algorithm (Livewire). For example, analysis of the at least two obtained sections shows that the gray levels of edge pixels and non-edge pixels in the target region of the image differ, usually with a distinct jump between them; whether a pixel lies on the two-dimensional contour edge is therefore determined by detecting whether its gray level changes abruptly, and the target region and non-target region in the sections are divided in turn, finally yielding the drawn two-dimensional contour of the target object.
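The full Livewire algorithm additionally runs a shortest-path search between user-selected points; the following simplified sketch shows only the gray-level-jump test described above, with an assumed threshold and illustrative names.

```python
import numpy as np

def edge_candidates(slice_img, jump_threshold=30.0):
    """Mark pixels whose local gray level changes abruptly."""
    gy, gx = np.gradient(slice_img.astype(float))
    magnitude = np.hypot(gx, gy)       # size of the local gray-level jump
    return magnitude > jump_threshold  # True where a contour edge is likely
```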
It should be noted that any method capable of drawing a two-dimensional contour on the image of the cross section can be used in the present application, which is not limited in this respect.
The above exemplarily illustrates a method for measuring three-dimensional volume data according to another embodiment of the present application, which, building on the above description, draws contours on at least two cross sections and uses the drawn two-dimensional contours to guide the segmentation algorithm in segmenting the three-dimensional volume data to obtain a more accurate result. The method combines universality, simplicity of operation, and the ability to segment three-dimensional volume data of difficult targets.
A third aspect of the present application provides yet another method for measuring three-dimensional volume data. A schematic flowchart of this method according to another embodiment of the present application is described below with reference to fig. 7. As shown in fig. 7, the method 700 for measuring three-dimensional volume data includes the following steps:
step S710: acquiring three-dimensional volume data of a target object;
step S720: determining at least two intersecting cross sections in the three-dimensional volume data, the cross sections including the target object, and drawing a two-dimensional contour of the target object in the cross sections;
step S730: determining the corresponding contours of other areas outside the cross section in the three-dimensional volume data according to the two-dimensional contours;
step S740: and determining the three-dimensional contour of the target object according to the two-dimensional contour and the contours corresponding to the other areas.
Steps S710 and S720 of the method 700 described with reference to fig. 7 are the same as steps S210 and S220 of the method 200 described with reference to fig. 2; for their explanation, reference may be made to the foregoing description of steps S210 and S220. Modifications, substitutions, and the like of that description are likewise included in the method 700 for measuring three-dimensional volume data according to this embodiment of the present application.
Step S730 is described in detail below. The difference between step S730 and step S230 is that the contour determined in step S730 is the contour corresponding to regions of the three-dimensional volume data outside the cut planes and does not include the two-dimensional contours formed on the cut planes; the other regions may be arbitrary cross sections in space, three-dimensional surfaces, or the like, which is not limited here. In step S230, by contrast, the three-dimensional volume data are segmented in full according to the two-dimensional contour, yielding the complete three-dimensional contour of the target object, whereas the contour obtained in step S730 is not yet the complete three-dimensional contour of the target object.
It should be noted that, in step S730, the contour corresponding to the other regions outside the cut planes may be determined from the two-dimensional contour by any of the segmentation methods of step S230, which are not repeated here; the segmentation method can be selected according to actual needs in order to determine the contour corresponding to the other regions outside the cut planes in the three-dimensional volume data.
This embodiment further includes step S740. Since the two-dimensional contours of the target object on the two intersecting cut planes in the three-dimensional volume data are obtained in step S720, and the contours corresponding to the other regions outside the cut planes are obtained in step S730, neither of which is the complete three-dimensional contour of the target object, the contours obtained in steps S720 and S730 are combined in step S740 to obtain the complete three-dimensional contour of the target object.
In addition, step S720 may also consist of determining at least two cross sections containing different positions of the target object in the three-dimensional volume data and drawing the two-dimensional contour of the target object on these sections; for details, reference may be made to the related description of step S620 shown in fig. 6, which is not repeated here.
For the other steps or methods of this embodiment, reference may be made, where there is no contradiction, to the relevant steps or methods described in the first and second aspects of the present application, which are therefore not described in detail here.
A fourth aspect of the present application further provides a system for measuring three-dimensional volume data, described below with reference to fig. 8. Fig. 8 shows a schematic block diagram of a measurement system 800 for three-dimensional volume data according to an embodiment of the present application. The system 800 for measuring three-dimensional volume data includes a memory 810 and a processor 820.
The memory 810 stores computer program code for implementing the respective steps of the method for measuring three-dimensional volume data according to an embodiment of the present application. The processor 820 is configured to execute the computer program code stored in the memory 810 so as to perform the corresponding steps of that method.
In one embodiment, the computer program code, when executed by the processor 820, causes the measurement system 800 for three-dimensional volume data to perform at least one of the following steps: acquiring three-dimensional volume data of a target object; determining at least two intersecting cut planes containing the target object in the three-dimensional volume data, and drawing a two-dimensional contour of the target object on the cut planes; and segmenting the three-dimensional volume data according to the two-dimensional contour to obtain the three-dimensional contour of the target object.
In another embodiment, the computer program code, when executed by the processor 820, causes the measurement system 800 for three-dimensional volumetric data to perform at least one of the following steps: acquiring three-dimensional volume data of a target object; determining at least two cross sections containing different positions of the target object in the three-dimensional volume data, and drawing a two-dimensional contour of the target object in the cross sections; and segmenting the three-dimensional volume data according to the two-dimensional contour to obtain the three-dimensional contour of the target object.
In another embodiment, the computer program code, when executed by the processor 820, causes the measurement system 800 for three-dimensional volumetric data to perform the steps of: receiving a revision instruction for the two-dimensional profile; revising the two-dimensional outline according to the revising instruction, and re-segmenting the three-dimensional volume data according to the revised two-dimensional outline to obtain a new three-dimensional outline of the target object.
In another embodiment, the computer program code, when executed by the processor 820, causes the measurement system 800 for three-dimensional volumetric data to perform the steps of: and displaying the three-dimensional contour.
In another embodiment, the computer program code, when executed by the processor 820, causes the measurement system 800 for three-dimensional volumetric data to perform the steps of: determining a volume of the target object from the three-dimensional contour.
A fifth aspect of the present application further provides a medical instrument that may include the measurement system 800 for three-dimensional volume data shown in fig. 8. The medical instrument can implement the method for measuring three-dimensional volume data shown in fig. 2, fig. 6, or fig. 7 above.
Because the medical instrument comprises the above system for measuring three-dimensional volume data, more parameters of a target object can be acquired more accurately and effectively, while combining the advantages of universality, simplicity of operation, and the ability to segment three-dimensional volume data of difficult targets.
The sixth aspect of the present application also provides a storage medium on which computer program instructions are stored, which when executed by a computer or a processor are used for executing the corresponding steps of the measurement method of three-dimensional volume data of the embodiments of the present application. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media.
In one embodiment, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform at least one of the following steps: acquiring three-dimensional volume data of a target object; determining at least two intersecting cut planes containing the target object in the three-dimensional volume data, and drawing a two-dimensional contour of the target object on the cut planes; and segmenting the three-dimensional volume data according to the two-dimensional contour to obtain the three-dimensional contour of the target object.
In another embodiment, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the steps of: acquiring three-dimensional volume data of a target object; determining at least two cross sections containing different positions of the target object in the three-dimensional volume data, and drawing a two-dimensional contour of the target object in the cross sections; and segmenting the three-dimensional volume data according to the two-dimensional contour to obtain the three-dimensional contour of the target object.
In another embodiment, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the steps of: acquiring three-dimensional volume data of a target object; determining at least two intersecting cut planes containing the target object in the three-dimensional volume data, and drawing a two-dimensional contour of the target object on the cut planes; determining the corresponding contours of other areas outside the cross section in the three-dimensional volume data according to the two-dimensional contours; and determining the three-dimensional contour of the target object according to the two-dimensional contour and the contours corresponding to the other areas.
In another embodiment, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the steps of: receiving a revision instruction for the two-dimensional profile; revising the two-dimensional outline according to the revising instruction, and re-segmenting the three-dimensional volume data according to the revised two-dimensional outline to obtain a new three-dimensional outline of the target object.
In another embodiment, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the steps of: and displaying the three-dimensional contour.
In another embodiment, the computer program instructions, when executed by a computer or processor, cause the computer or processor to perform the steps of: determining a volume of the target object from the three-dimensional contour.
Although the example embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above-described example embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present application. All such changes and modifications are intended to be included within the scope of the present application as claimed in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the present application, various features of the present application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules in an item analysis apparatus according to embodiments of the present application. The present application may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names.
The above description is only for the specific embodiments of the present application or the description thereof, and the protection scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope disclosed in the present application, and shall be covered by the protection scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (27)

  1. A method of measuring three-dimensional volume data, the method comprising:
    acquiring three-dimensional volume data of a target object;
    determining at least two intersecting cut planes containing the target object in the three-dimensional volume data, and drawing a two-dimensional contour of the target object on the cut planes;
    determining the corresponding contours of other areas outside the cross section in the three-dimensional volume data according to the two-dimensional contours;
    and determining the three-dimensional contour of the target object according to the two-dimensional contour and the contours corresponding to the other areas.
  2. A method of measuring three-dimensional volume data, the method comprising:
    acquiring three-dimensional volume data of a target object;
    determining at least two intersecting cut planes containing the target object in the three-dimensional volume data, and drawing a two-dimensional contour of the target object on the cut planes;
    and segmenting the three-dimensional volume data according to the two-dimensional contour to obtain a three-dimensional contour of the target object.
  3. A method of measuring three-dimensional volume data, the method comprising:
    acquiring three-dimensional volume data of a target object;
    determining at least two cross sections containing different positions of the target object in the three-dimensional volume data, and drawing a two-dimensional contour of the target object in the cross sections;
    and segmenting the three-dimensional volume data according to the two-dimensional contour to obtain a three-dimensional contour of the target object.
  4. The method according to any one of claims 1 to 3, further comprising:
    receiving a revision instruction for the two-dimensional profile;
    revising the two-dimensional outline according to the revising instruction, and re-segmenting the three-dimensional volume data according to the revised two-dimensional outline to obtain a new three-dimensional outline of the target object.
  5. The method according to any one of claims 1 to 4, further comprising:
    and displaying the three-dimensional contour.
  6. The method according to any one of claims 1 to 3, further comprising:
    determining a volume of the target object from the three-dimensional contour.
  7. The method of claim 3, wherein determining the cross-section, mapping the two-dimensional contour of the target object comprises:
    selecting a first section, moving the first section to the central area of the target object, and drawing a two-dimensional contour of the target object on the first section;
    and generating a second section containing the target object by taking the determined first section as a position reference, and drawing a two-dimensional contour of the target object on the second section, wherein the second section comprises at least one section.
  8. The method of claim 1 or 2, wherein determining the cross-section, mapping the two-dimensional contour of the target object, comprises:
    selecting a first section, moving the first section to the central area of the target object, and drawing a two-dimensional contour of the target object on the first section;
    and generating a second section intersecting the first section at the center of the determined two-dimensional contour of the target object on the first section and drawing the two-dimensional contour of the target object on the second section.
  9. The method of any of claims 1 to 3, wherein determining the cross-section, mapping the two-dimensional profile of the target object, comprises:
    selecting a first section, wherein the first section comprises the target object, and drawing a two-dimensional contour of the target object on the first section;
    and generating a second section containing the target object by taking the first section as a position reference, and drawing a two-dimensional contour of the target object on the second section, wherein the second section comprises at least one section.
  10. The method of any of claims 1 to 3, wherein determining the cross-section, mapping the two-dimensional profile of the target object, comprises:
    moving the sections to the central area of the target object, and determining at least two sections;
    drawing a two-dimensional contour of the target object on the determined at least two sections.
  11. The method according to any one of claims 1 to 3, wherein the two-dimensional contour and/or the three-dimensional contour is distinguished and displayed from a non-target region not containing the target object by at least one of a boundary line, a color, and a brightness.
  12. The method of claim 2 or 3, wherein segmenting the three-dimensional volume data from the two-dimensional contours comprises:
    generating a target area containing the target object according to the two-dimensional contour;
    generating a non-target area not containing the target object according to the two-dimensional contour;
    and segmenting the three-dimensional volume data according to the target region and the non-target region.
  13. The method of claim 12, wherein generating a target region containing the target object from the two-dimensional contour comprises:
    and determining a region in the drawn two-dimensional contour of the target object as a target region.
  14. The method of claim 12, wherein generating a non-target region that does not contain a target object from the two-dimensional contour comprises:
    determining a region outside a target region within the drawn two-dimensional contour of the target object as a non-target region; and/or,
    and performing morphological expansion on the drawn two-dimensional outline of the target object to generate the non-target area.
  15. The method of claim 12, wherein segmenting the three-dimensional volumetric data according to the target region and the non-target region comprises:
    segmenting the three-dimensional volume data based on an interactive segmentation algorithm to segment points in the three-dimensional volume data into a target region or a non-target region.
  16. The method of claim 15, wherein segmenting the three-dimensional volume data based on an interactive segmentation algorithm comprises:
    selecting target area seed points and non-target area seed points from the target area and the non-target area and constructing a graph in graph theory;
    determining a segmentation function according to the selected target region seed points and the selected non-target region seed points;
    and performing segmentation calculation on the unmarked points in the three-dimensional volume data by using the segmentation function so as to determine whether the unmarked points in the three-dimensional volume data belong to a target region or a non-target region.
  17. The method of claim 12, wherein segmenting the three-dimensional volumetric data according to the target region and the non-target region comprises:
    the three-dimensional volume data is segmented based on a classification-based segmentation method.
  18. The method of claim 17, wherein segmenting the three-dimensional volumetric data based on a classification-based segmentation method comprises:
    training an image classifier for learning to obtain features which can distinguish the target region from the non-target region;
    and generating an image classifier according to the characteristics, wherein the image classifier is used for classifying the regions which are not marked with the target region and the non-target region, and judging that the unmarked points in the three-dimensional volume data belong to the target region or the non-target region.
  19. The method of claim 17, wherein segmenting the three-dimensional volumetric data based on a classification-based segmentation method comprises:
    taking a point of the target area as a center, and taking a cubic three-dimensional image block as a positive sample;
    taking a point of the non-target area as a center, and taking a cubic three-dimensional image block as a negative sample;
    training an image classifier for learning features that distinguish the positive samples from the negative samples;
    taking each point in the regions where the target region and the non-target region are not determined as a center, and taking a cubic three-dimensional image block to be segmented;
    and classifying the three-dimensional image block to be segmented through the image classifier, and judging that the three-dimensional image block belongs to a target area or a non-target area.
  20. The method of claim 12, wherein segmenting the three-dimensional volumetric data according to the target region and the non-target region comprises:
    and segmenting the three-dimensional volume data based on a deep learning method.
  21. The method of claim 20, wherein segmenting the three-dimensional volumetric data based on a deep learning approach comprises:
    inputting a mask consisting of the three-dimensional volume data and the two-dimensional outline of the target object;
    outputting a segmentation mask through a deep learning network;
    and determining a target region and/or a non-target region in the three-dimensional volume data according to the segmentation mask.
  22. The method of claim 21, wherein determining the volume of the target object from the three-dimensional contour comprises:
    determining a volume of the segmentation masks and a number of the segmentation masks;
    determining the volume of the target object according to the volume of the segmentation masks and the number of the segmentation masks.
  23. The method of claim 1 or 2, wherein the intersections are completely orthogonal, skew or approximately orthogonal.
  24. A method according to claim 3, wherein at least two of said profiles are parallel or intersect each other.
  25. A system for measuring three-dimensional volume data, comprising a memory, a processor and a computer program stored on the memory and running on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 24 when executing the computer program.
  26. A medical instrument comprising the system for measuring three-dimensional volume data of claim 25.
  27. A computer storage medium on which a computer program is stored, the computer program, when being executed by a computer or a processor, realizing the steps of the method according to any one of claims 1 to 24.
CN201980101217.2A 2019-12-18 2019-12-18 Method and system for measuring three-dimensional volume data, medical instrument, and storage medium Pending CN114503166A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/126359 WO2021120059A1 (en) 2019-12-18 2019-12-18 Measurement method and measurement system for three-dimensional volume data, medical apparatus, and storage medium

Publications (1)

Publication Number Publication Date
CN114503166A true CN114503166A (en) 2022-05-13

Family

ID=76476984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980101217.2A Pending CN114503166A (en) 2019-12-18 2019-12-18 Method and system for measuring three-dimensional volume data, medical instrument, and storage medium

Country Status (2)

Country Link
CN (1) CN114503166A (en)
WO (1) WO2021120059A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117392735A (en) * 2023-12-12 2024-01-12 深圳市宗匠科技有限公司 Face data processing method, device, computer equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513135A (en) * 2015-09-15 2016-04-20 浙江大学 Spatial position automatic setting method of three-dimensional clothing pattern
CN106934807B (en) * 2015-12-31 2022-03-01 深圳迈瑞生物医疗电子股份有限公司 Medical image analysis method and system and medical equipment
CN105761304B (en) * 2016-02-02 2018-07-20 飞依诺科技(苏州)有限公司 Three-dimensional internal organs model construction method and device
CN109242947B (en) * 2017-07-11 2023-07-21 中慧医学成像有限公司 Three-dimensional ultrasonic image display method
CN108665544A (en) * 2018-05-09 2018-10-16 中冶北方(大连)工程技术有限公司 Three-dimensional geological model modeling method
CN109934905A (en) * 2019-01-16 2019-06-25 中德(珠海)人工智能研究院有限公司 It is a kind of for generating the system and its generation method of threedimensional model

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117392735A (en) * 2023-12-12 2024-01-12 深圳市宗匠科技有限公司 Face data processing method, device, computer equipment and storage medium
CN117392735B (en) * 2023-12-12 2024-03-22 深圳市宗匠科技有限公司 Face data processing method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2021120059A1 (en) 2021-06-24

Similar Documents

Publication Publication Date Title
JP6467041B2 (en) Ultrasonic diagnostic apparatus and image processing method
US9277902B2 (en) Method and system for lesion detection in ultrasound images
CN102920477B (en) Device and method for determining target region boundary of medical image
US20140200452A1 (en) User interaction based image segmentation apparatus and method
JP2007307358A (en) Method, apparatus and program for image treatment
US9092867B2 (en) Methods for segmenting images and detecting specific structures
KR102519515B1 (en) Information processing device, information processing method, computer program
Fenster et al. Sectored snakes: Evaluating learned-energy segmentations
JP2010000133A (en) Image display, image display method and program
CN112672691A (en) Ultrasonic imaging method and equipment
CN111932495B (en) Medical image detection method, device and storage medium
CN112568933B (en) Ultrasonic imaging method, apparatus and storage medium
CN110163907B (en) Method and device for measuring thickness of transparent layer of fetal neck and storage medium
CN115546232A (en) Liver ultrasonic image working area extraction method and system and electronic equipment
CN114503166A (en) Method and system for measuring three-dimensional volume data, medical instrument, and storage medium
CN107169978B (en) Ultrasonic image edge detection method and system
Aji et al. Automatic measurement of fetal head circumference from 2-dimensional ultrasound
CN110390671B (en) Method and device for detecting mammary gland calcification
JP2015156894A (en) Medical image processor, medical object area extraction method thereof and medical object area extraction processing program
CN111383323B (en) Ultrasonic imaging method and system and ultrasonic image processing method and system
CN114699106A (en) Ultrasonic image processing method and equipment
CN114631849A (en) Abdominal aorta imaging method and related apparatus
CN111403007A (en) Ultrasonic imaging optimization method, ultrasonic imaging system and computer-readable storage medium
WO2022134049A1 (en) Ultrasonic imaging method and ultrasonic imaging system for fetal skull
US20240005513A1 (en) Medical image processing apparatus and medical image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination