WO2021120059A1 - Measurement method and measurement system for three-dimensional volume data, medical device, and storage medium - Google Patents

Info

Publication number
WO2021120059A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
target object
volume data
cross
target area
Prior art date
Application number
PCT/CN2019/126359
Other languages
English (en)
French (fr)
Inventor
邹耀贤
林穆清
杨剑
龚闻达
Original Assignee
深圳迈瑞生物医疗电子股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳迈瑞生物医疗电子股份有限公司
Priority to CN201980101217.2A (CN114503166A)
Priority to PCT/CN2019/126359 (WO2021120059A1)
Publication of WO2021120059A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • This application relates to the field of three-dimensional imaging, in particular to a method for measuring three-dimensional volume data, a system for measuring three-dimensional volume data, medical equipment, and computer storage media.
  • The size of a tissue structure or lesion is a focus of clinical examination.
  • Routine clinical practice mainly measures the long and short diameters of the tissue structure or lesion under two-dimensional ultrasound. Compared with such diameter measurements, the volume of the tissue structure or lesion can provide more accurate diagnostic information for the clinic.
  • The current three-dimensional ultrasound volume measurement approach is mainly a manual one: multiple sections are generated through rotation or translation, the user manually or semi-automatically draws two-dimensional contours one by one, and finally the two-dimensional contours are fitted into a three-dimensional contour. This method is commonly used in clinical research, but the operation is extremely complicated and time-consuming, and the accuracy of the measurement results is poor.
  • the first aspect of the present application provides a method for measuring three-dimensional volume data, the method including:
  • the three-dimensional volume data is segmented according to the two-dimensional contour to obtain the three-dimensional contour of the target object.
  • the second aspect of the present application provides a method for measuring three-dimensional volume data, the method including:
  • the three-dimensional volume data is segmented according to the two-dimensional contour to obtain the three-dimensional contour of the target object.
  • the third aspect of the present application provides a method for measuring three-dimensional volume data, the method including:
  • the three-dimensional contour of the target object is determined according to the two-dimensional contour and contours corresponding to the other regions.
  • the method further includes:
  • the two-dimensional contour is revised according to the revision instruction, and the three-dimensional volume data is re-segmented according to the revised two-dimensional contour to obtain a new three-dimensional contour of the target object.
  • the method further includes:
  • the three-dimensional outline is displayed.
  • the method further includes:
  • the volume of the target object is determined according to the three-dimensional contour.
  • determining the cross-section and drawing the two-dimensional outline of the target object includes:
  • the determined first cross-section is used as a position reference to generate a second cross-section containing the target object, and a two-dimensional outline of the target object is drawn on the second cross-section, wherein the second cross-section includes at least one cross-section.
  • the two-dimensional contour and/or the three-dimensional contour and the non-target area not containing the target object are distinguished and displayed by at least one of a boundary line, a color, and a brightness.
  • segmenting the three-dimensional volume data according to the two-dimensional contour includes:
  • the three-dimensional volume data is segmented according to the target area and the non-target area.
  • generating a target area containing the target object includes:
  • the area within the drawn two-dimensional contour of the target object is determined as the target area.
  • generating a non-target area that does not contain the target object includes:
  • the drawn two-dimensional contour of the target object is morphologically expanded to generate the non-target area.
  • segmenting the three-dimensional volume data according to the target area and the non-target area includes:
  • the three-dimensional volume data is segmented based on an interactive segmentation algorithm to segment the points in the three-dimensional volume data into target regions or non-target regions.
  • the method for segmenting the three-dimensional volume data based on an interactive segmentation algorithm includes:
  • the segmentation function is used to perform segmentation calculation on the unmarked points in the three-dimensional volume data to determine whether the unmarked points in the three-dimensional volume data belong to a target area or a non-target area.
  • segmenting the three-dimensional volume data according to the target area and the non-target area includes:
  • the classification-based segmentation method segments the three-dimensional volume data.
  • the segmentation of the three-dimensional volume data based on the classification method includes:
  • An image classifier is generated based on the features, which is used to classify the areas where the target area and the non-target area are not marked, and determine whether the unmarked points in the three-dimensional volume data belong to the target area or the non-target area.
  • the segmentation method based on classification to segment the three-dimensional volume data includes:
  • the three-dimensional image block to be segmented is classified by the image classifier, and it is judged whether it belongs to a target area or a non-target area.
  • segmenting the three-dimensional volume data according to the target area and the non-target area includes:
  • the three-dimensional volume data is segmented based on a deep learning method.
  • segmenting the three-dimensional volume data based on a deep learning method includes:
  • the volume of the target object is determined according to the volume of the segmentation mask and the number of the segmentation masks.
  • the intersection is completely orthogonal, approximately orthogonal, or oblique.
  • At least two of the cross sections are parallel or intersect each other.
  • the fourth aspect of the present application provides a three-dimensional volume data measurement system, including a memory, a processor, and a computer program stored on the memory and running on the processor, wherein the processor, when executing the computer program, implements the steps of the method described above.
  • the fifth aspect of the present application provides a medical device including the three-dimensional volume data measurement system described above.
  • the sixth aspect of the present application provides a computer storage medium on which a computer program is stored, and when the computer program is executed by a computer or a processor, the steps of the foregoing method are implemented.
  • In the three-dimensional volume data measurement method and measurement system of the embodiments of the present application, after the three-dimensional volume data and the two-dimensional contours of at least two sections are acquired, the three-dimensional volume data is segmented according to the two-dimensional contours to obtain the three-dimensional contour of the target object.
  • In this way, the contour of the target object can be obtained more accurately, and more parameters of the target object can be obtained more accurately and effectively.
  • The method combines versatility with simple operation and can segment three-dimensional volume data of difficult targets.
  • The medical device described in this application includes the three-dimensional volume data measurement system, so it shares the same advantages: more parameters of target objects can be obtained more accurately and effectively, and the device is versatile, simple to operate, and able to segment three-dimensional volume data of difficult targets.
  • FIG. 1 shows a schematic block diagram of a device for acquiring three-dimensional volume data of a target object in a method for measuring three-dimensional volume data according to an embodiment of the present application
  • Fig. 2 shows a schematic flowchart of a method for measuring three-dimensional volume data according to an embodiment of the present application
  • FIG. 3 shows a schematic flowchart of acquiring three-dimensional volume data of a target object in a method for measuring three-dimensional volume data according to an embodiment of the present application
  • FIG. 4 shows a schematic diagram of determining a cross-section in an ultrasound image according to a method for measuring three-dimensional volume data according to an embodiment of the present application
  • FIG. 5 shows a schematic diagram of segmenting a target area and a non-target area according to a method for measuring three-dimensional volume data according to an embodiment of the present application
  • Fig. 6 shows a schematic flowchart of a method for measuring three-dimensional volume data according to another embodiment of the present application
  • Fig. 7 shows a schematic block diagram of a system for measuring three-dimensional volume data according to still another embodiment of the present application.
  • Fig. 8 shows a schematic block diagram of a system for measuring three-dimensional volume data according to an embodiment of the present application.
  • Referring to FIG. 1, an exemplary three-dimensional volume data measurement system for implementing the three-dimensional volume data measurement method of the embodiments of the present application will first be described.
  • FIG. 1 is a schematic structural block diagram of an exemplary three-dimensional volume data measurement system 10 used to implement a three-dimensional volume data measurement method according to an embodiment of the present application.
  • the three-dimensional volume data measurement system 10 may include an ultrasonic probe 100, a transmission/reception selection switch 101, a transmission/reception sequence controller 102, a processor 103, a display 104, and a memory 105.
  • The transmitting/receiving sequence controller 102 can excite the ultrasonic probe 100 to transmit ultrasonic waves to a target object (the object under test), and can also control the ultrasonic probe 100 to receive the ultrasonic echoes returned from the target object, thereby obtaining ultrasonic echo signals/data.
  • The ultrasound probe 100 may be a three-dimensional volume probe, or a two-dimensional linear array probe, a convex array probe, a phased array probe, etc., which is not specifically limited here.
  • the processor 103 processes the ultrasound echo signal/data to obtain tissue-related parameters and ultrasound images of the target object.
  • the ultrasound images obtained by the processor 103 may be stored in the memory 105, and these ultrasound images may be displayed on the display 104.
  • The display 104 of the aforementioned three-dimensional volume data measurement system 10 may be a touch screen, a liquid crystal display, or the like; it may also be an independent display device, such as a liquid crystal display or a television, separate from the three-dimensional volume data measurement system 10; or it may be the display screen of an electronic device such as a mobile phone or a tablet computer.
  • the memory 105 of the aforementioned three-dimensional volume data measurement system 10 may be a flash memory card, a solid-state memory, a hard disk, and the like.
  • The embodiments of the present application also provide a computer-readable storage medium that stores a plurality of program instructions. After the plurality of program instructions are invoked and executed by the processor 103, some or all of the steps of the three-dimensional volume data measurement method in the various embodiments of the present application, or any combination of those steps, can be executed.
  • the computer-readable storage medium may be the memory 105, which may be a non-volatile storage medium such as a flash memory card, a solid-state memory, a hard disk, or the like.
  • The processor 103 of the aforementioned three-dimensional volume data measurement system 10 may be implemented by software, hardware, firmware, or a combination thereof, and may use a circuit, single or multiple application-specific integrated circuits (ASICs), single or multiple general-purpose integrated circuits, single or multiple microprocessors, single or multiple programmable logic devices, a combination of the foregoing circuits or devices, or other suitable circuits or devices, so that the processor 103 can execute the corresponding steps of the three-dimensional volume data measurement method in each embodiment.
  • An embodiment of the present application provides a three-dimensional volume data measurement method, which is applied to the three-dimensional volume data measurement system 10. It is especially suitable for a three-dimensional volume data measurement system 10 that includes a touch display screen, through which touch-screen operations can be input.
  • FIG. 2 shows a schematic flowchart of a method for measuring three-dimensional volume data according to an embodiment of the present application.
  • the method 200 for measuring three-dimensional volume data includes the following steps:
  • Step S210: Acquire three-dimensional volume data of the target object;
  • Step S220: Determine, in the three-dimensional volume data, at least two intersecting cross-sections that contain the target object, and draw a two-dimensional outline of the target object on the cross-sections;
  • Step S230: Segment the three-dimensional volume data according to the two-dimensional contours to obtain a three-dimensional contour of the target object.
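  • As an illustration only, these three steps might be orchestrated as in the following Python sketch; every function here is a toy stand-in (the application does not define these names), and the real acquisition, drawing, and segmentation are described in the remainder of this section.

```python
import numpy as np

# Toy stand-ins so the sketch runs end to end; in a real system these come
# from the ultrasound pipeline and the user interface described below.
def acquire_volume():
    return np.random.rand(64, 64, 64)                 # S210: (Z, Y, X) gray volume

def drawn_contour_mask(volume, axis, index):
    # S220: pretend the user drew a filled contour on one cross-section;
    # a small square stands in for the drawn region.
    mask = np.zeros(volume.shape, dtype=bool)
    plane = np.zeros([s for i, s in enumerate(volume.shape) if i != axis], dtype=bool)
    plane[24:40, 24:40] = True
    sl = [slice(None)] * 3
    sl[axis] = index
    mask[tuple(sl)] = plane
    return mask

def segment(volume, seeds):
    # S230: placeholder for the contour-guided segmentation detailed below
    # (interactive Graph Cut, classification, or deep learning).
    return seeds

volume = acquire_volume()
seeds = drawn_contour_mask(volume, 0, 32) | drawn_contour_mask(volume, 1, 32)
contour_3d = segment(volume, seeds)                   # boolean 3-D contour mask
```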
  • In step S210, the three-dimensional volume data of the target object of the measured object is acquired through a three-dimensional ultrasound imaging system.
  • The three-dimensional volume data includes various information about the target object: for example, the image, shape, and size of the target object can be obtained from it. The three-dimensional volume data may be a gray-scale three-dimensional image or the like; accurate three-dimensional contours and information need to be further acquired through the subsequent steps.
  • The measured object may be a person undergoing ultrasound inspection,
  • and the target object of the measured object may be the area of the measured object's body tissue that is subjected to ultrasound inspection.
  • A three-dimensional ultrasound imaging system for three-dimensional imaging includes a probe 2, a transmitting/receiving selection switch 3, a transmitting circuit 4, a receiving circuit 5, a beam synthesis module 6, a signal processing module 7, a three-dimensional imaging module 8, and a display 9.
  • the transmitting circuit 4 sends a group of delayed focused pulses to the probe 2.
  • The probe 2 transmits ultrasound to the body tissue under test and, after a certain delay, receives the ultrasonic echo carrying tissue information reflected from the body tissue under test, and converts this ultrasonic echo back into an electrical signal.
  • the receiving circuit 5 receives these electrical signals and sends these ultrasonic echo signals to the beam synthesis module 6.
  • the ultrasonic echo signal completes the focus delay, weighting and channel summation in the beam synthesis module 6, and then passes through the signal processing module 7 for signal processing.
  • the signal processed by the signal processing module 7 is sent to the three-dimensional imaging module 8, processed by the three-dimensional imaging module 8, to obtain visual information such as three-dimensional images, and then sent to the display 9 for display, thereby obtaining three-dimensional volume data of the target object.
  • During acquisition, the physician can aim the ultrasound probe at the area where the target object to be detected is located; the transmitting module transmits ultrasonic waves to the target object, and the echo signal received by the receiving module represents the echo of the internal structure of the target object to be detected.
  • the grayscale image obtained by processing the echo can reflect the internal structure of the target object to be detected.
  • This real-time acquisition process can guide the physician's operation.
  • In step S220, at least two sections need to be determined in order to draw the two-dimensional contour of the target object.
  • The position of an initially selected section does not necessarily include the target object to be segmented. Therefore, before the contour is drawn, the section needs to be moved to the target object to be segmented: for example, the section is translated or rotated to the central area of the target object, or even moved to its exact center, so that the determined section contains the target object and/or a larger cross-sectional area of the target object.
  • The central area refers to a certain region extending from the center of the target object to its surroundings. For example, the central area may be a circle centered on the center of the target object with a radius of any value greater than zero, or a square spreading out from the center of the target object as its center of symmetry, etc.
  • In the embodiments of the present application, determining the cross-sections and drawing the two-dimensional outline of the target object may follow the following two methods.
  • The first method can specifically include, but is not limited to, the following approaches:
  • First, the at least two selected sections need to be moved to the area containing the target object; for example, the sections are moved to the central area of the target object and at least two sections are selected and determined there, to ensure that each selected section contains as much of the contour of the target object as possible. The two-dimensional contour of the target object is then drawn on the determined sections.
  • The movement may include, but is not limited to, translation, rotation, sliding, etc., chosen according to actual needs; unless otherwise specified, subsequent references to movement follow this explanation.
  • For example, two intersecting sections can be randomly selected and then both moved to the central area of the target object,
  • after which the two-dimensional contours can be drawn directly.
  • Alternatively, the two selected sections may both intersect and contain the target object but not be located in the central area.
  • In that case, the two sections can be further moved to the central area;
  • however, the purpose of this application can also be achieved without moving them to the central area.
  • Sections that intersect and contain the target object can therefore also be selected directly.
  • In the second method, a first cross-section is selected and moved to the central area of the target object, and the two-dimensional outline of the target object is drawn on the first cross-section;
  • the determined first section is then used as a position reference to generate a second section containing the target object, and the two-dimensional outline of the target object is drawn on the second section, wherein the second section includes at least one section.
  • The first section and the second section may be parallel or intersecting, and the second section is not limited to one.
  • If the selected section already includes the target object, the two-dimensional outline of the target object is drawn directly on that section as the first section; the first section is then used as a position reference to generate a second section containing the target object, and the two-dimensional outline of the target object is drawn on the second section.
  • If the section includes the target object but is not in the central area of the target object, the section is moved to the central area of the target object, and the two-dimensional outline of the target object is drawn on it as the first section; the first section is then used as a position reference to generate a second section containing the target object, and the two-dimensional contour of the target object is drawn on the second section.
  • Alternatively, the section is moved to the central area of the target object, a first cross-section is determined, and the two-dimensional outline of the target object is drawn on the first cross-section; a second section is then generated from the center of the two-dimensional contour determined on the first cross-section, and the two-dimensional contour of the target object is drawn on at least the second section, wherein the first section and the second section intersect.
  • Or, the section is moved to the central area of the target object, a first cross-section is determined, and the two-dimensional outline of the target object is drawn on the first cross-section; a second cross-section parallel to the first cross-section is then generated in the central area of the two-dimensional contour of the target object, and the two-dimensional contour of the target object is drawn on the second cross-section.
  • In this case the second cross-section need not be located at the exact center of the contour on the first cross-section, as long as it is located in its central area; the second cross-section can therefore be parallel to the first cross-section or, of course, intersect it, and the choice can be made as needed.
  • The first section can be selected arbitrarily and then moved to the central area of the target object, or even to its center; alternatively, the central area or center of the target object can be determined from experience or from the three-dimensional image, and a section through it selected directly.
  • The determination methods of the first section and the second section can each be chosen from any of the determination methods mentioned above, provided they do not conflict with each other; the description is not repeated here.
  • It should be noted that the number of sections is not limited to a certain numerical range.
  • Three, four, five, or more sections may be determined in the three-dimensional volume data. The more sections there are, the more two-dimensional contours of the target object are drawn and the more information about the three-dimensional volume data is obtained, which is more conducive to segmenting the three-dimensional volume data and obtaining a more accurate three-dimensional contour.
  • Once sufficient contour information has been obtained, the section selection can be stopped.
  • In practice, the number of sections determined is usually 2 to 6.
  • The positional relationship of the determined at least two cross-sections is at least intersecting; that is, the two planes intersect each other in three-dimensional space and share a common straight line.
  • The intersection may be completely orthogonal, approximately orthogonal, or oblique.
  • Completely orthogonal means that the two sections are perpendicular to each other, with an angle of 90° between them.
  • Approximately orthogonal means that the two sections are substantially perpendicular to each other:
  • the angle between the two sections is 85°-95°, or 88°-92°, or 89°-91°, i.e. almost perpendicular, without strictly requiring exact perpendicularity.
  • Oblique intersection means that two sections intersect but are not perpendicular. Unless otherwise specified, references to intersection, complete orthogonality, approximate orthogonality, and oblique intersection follow these explanations.
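  • For illustration, classifying two sections by the dihedral angle between their plane normals can be sketched as follows; the thresholds are the ranges given above, and the function name is a hypothetical one, not defined by this application.

```python
import numpy as np

def section_relation(n1, n2):
    """Classify the relation of two cross-sections from their plane normals.

    Exactly 90 degrees is completely orthogonal, 85-95 degrees is
    approximately orthogonal, and any other non-parallel angle is oblique.
    """
    n1 = np.asarray(n1, float) / np.linalg.norm(n1)
    n2 = np.asarray(n2, float) / np.linalg.norm(n2)
    angle = np.degrees(np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))
    if np.isclose(angle, 0.0) or np.isclose(angle, 180.0):
        return "parallel"
    if np.isclose(angle, 90.0):
        return "completely orthogonal"
    if 85.0 <= angle <= 95.0:
        return "approximately orthogonal"
    return "oblique"

# e.g. two coordinate planes, as in FIG. 4:
print(section_relation([0, 0, 1], [1, 0, 0]))   # -> "completely orthogonal"
```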
  • Sections at different positions may be selected to make the obtained two-dimensional contours more comprehensive.
  • For example, sections at orthogonal positions may be selected.
  • In one embodiment, three orthogonal cross-sections are selected, as shown in FIG. 4, where the three cross-sections are perpendicular to each other in space.
  • The extension directions of the three cross-sections correspond to the X, Y, and Z axes of the three-dimensional coordinate system.
  • Each section can be rotated or translated.
  • Alternatively, two of the three orthogonal cross-sections shown in FIG. 4 can be selected.
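  • As a sketch, three mutually orthogonal cross-sections through a point in the central area can be extracted from a voxel array by simple indexing; the axis ordering, array size, and anatomical plane names below are illustrative assumptions.

```python
import numpy as np

volume = np.random.rand(128, 128, 128)   # acquired 3-D volume data, (Z, Y, X)
cz, cy, cx = 64, 64, 64                  # a point in the target's central area

# Three mutually orthogonal cross-sections through that point, one normal to
# each coordinate axis (the configuration of FIG. 4):
axial    = volume[cz, :, :]              # plane normal to the Z axis
coronal  = volume[:, cy, :]              # plane normal to the Y axis
sagittal = volume[:, :, cx]              # plane normal to the X axis
```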
  • When the determined at least two cross-sections are parallel, both contain the information of the target object.
  • Since the two cross-sections are located at different positions, once determined they display images at different positions and provide different information; drawing contours on the two sections yields different contours, but the drawn two-dimensional contours all contain the target object, are related to each other, and are used together to construct the three-dimensional contour.
  • When the determined at least two cross-sections intersect, both likewise contain the information of the target object.
  • When the two cross-sections are superimposed, images of the target object at different positions can be displayed; the part where the two cross-sections intersect carries the same image information and the same two-dimensional contour, while away from the intersection line the two sections display image information of different parts, so different two-dimensional contours can be obtained.
  • The drawn two-dimensional contours all contain the target object, are related to each other, and are used together to construct the three-dimensional contour.
  • In this way, the obtained sections are distributed more evenly around the target object and display the image of the target object more comprehensively, so that more effective two-dimensional contours, and thus a more accurate three-dimensional contour, are obtained.
  • In a specific embodiment, after obtaining the three-dimensional volume data, the user selects any one of the three orthogonal cross-sections, translates or rotates the plane to the center of the target object to be segmented or its vicinity, and draws the two-dimensional contour of the target object on that cross-section; two further orthogonal sections are then generated from the center (or near the center point) of the two-dimensional contour, and the user draws a two-dimensional contour on at least one of them. Through the above steps, the contours of at least two sections of the target object are obtained.
  • The two-dimensional contour may be drawn manually.
  • For example, based on the gray levels of the two-dimensional image of the section, combined with the user's own experience, the user determines which areas and/or points in the image are target areas and which are non-target areas, and marks and draws them to obtain the two-dimensional outline of the target object.
  • Some semi-automatic algorithms can also be used to snap the drawn contour to the edges automatically and obtain the two-dimensional contour of the target object.
  • The semi-automatic algorithms include, but are not limited to, an edge detection algorithm (Livewire) and/or dynamic programming.
  • In one embodiment, the two-dimensional contour of the target object is drawn by an edge detection algorithm (Livewire).
  • After the at least two obtained sections are analyzed, the edge pixels of the image's target area can be identified: their gray levels differ from those of non-edge pixels, usually presenting a distinct jump. By detecting whether the gray value of a pixel jumps abruptly, it can be determined whether the pixel lies on a two-dimensional contour edge; the target area and the non-target area in the section are thereby divided, and finally the two-dimensional outline of the target object is drawn.
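  • The following sketch illustrates the Livewire idea using scikit-image's minimum-cost-path helper: pixels with large gray-level jumps (high gradient) are made cheap to traverse, so the path between two user clicks snaps to the contour edge. This is one plausible reading of the technique, not the exact algorithm of the application.

```python
import numpy as np
from skimage.filters import sobel
from skimage.graph import route_through_array

def livewire_edge(image, start_rc, end_rc):
    """Livewire-style boundary piece between two user clicks (a sketch)."""
    grad = sobel(image.astype(float))          # gray-level jump strength
    cost = 1.0 / (grad + 1e-3)                 # edges become low-cost paths
    path, _ = route_through_array(cost, start_rc, end_rc, fully_connected=True)
    return np.asarray(path)                    # (row, col) pixels of the edge

# e.g. a bright disc on a dark background:
yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)
edge = livewire_edge(disc, (17, 32), (32, 47))  # two clicks on the rim
```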
  • In step S230, the three-dimensional volume data is segmented according to the two-dimensional contours of the target object drawn in step S220 to obtain the three-dimensional contour of the target object.
  • The three-dimensional volume data is segmented to obtain a complete three-dimensional contour, making the acquired three-dimensional image more accurate so that more, and more effective, information can be extracted.
  • Segmenting the three-dimensional volume data means that, once the two-dimensional contours of at least two cross-sections have been obtained, it is clear which areas on those cross-sections are target areas and which are non-target areas; this contour information is used to guide the segmentation of the other regions of the three-dimensional volume data, that is, to determine which of the remaining regions belong to the target area and which to the non-target area, and thereby obtain the three-dimensional contour.
  • In one embodiment, segmenting the three-dimensional volume data according to the two-dimensional contours specifically includes:
  • Step S2301: Generate a target area containing the target object;
  • Step S2302: Generate a non-target area that does not contain the target object;
  • Step S2303: Segment the three-dimensional volume data according to the target area and the non-target area.
  • Here, the target area refers to an area determined to be part of the target through prior knowledge or directly input by the user as such, and the non-target area refers to an area determined not to contain the target.
  • The user has drawn the two-dimensional outline of the target object on the determined at least two sections (for example, on at least two of the three orthogonal planes), and the area within the two-dimensional outline is definitely target;
  • therefore, the area within the drawn two-dimensional contour of the target object is determined as the target area.
  • Correspondingly, generating the non-target area that does not contain the target object includes: determining the area outside the drawn two-dimensional contour of the target object as the non-target area.
  • The relationship between the target area (foreground area) and the non-target area (background area) is shown in FIG. 5.
  • Specifically, since the user has drawn the two-dimensional contour of the target object on the determined at least two sections (for example, on at least two of the three orthogonal planes), the area outside the two-dimensional contour must be a non-target area.
  • After the two-dimensional outline of the target object has been drawn on the determined at least two sections (for example, on at least two of the three orthogonal planes), the contour drawn by the user is used as the foreground, and the drawn two-dimensional contour is morphologically expanded to obtain the background area.
  • Morphological expansion merges all the background points in contact with the background area into the background area, expanding the boundary outward so as to fill the holes in the object.
  • Specifically, a convolution kernel is defined over the pixels of the background area; the kernel can be of any shape and size and has a separately defined reference point, the anchor point.
  • The kernel may be called a template or mask. The mask is compared with the points around the two-dimensional contour: if the mask falls within the background area, the corresponding area is a background area. The remaining points can be compared one by one in the same way, and a complete background area is thereby obtained.
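  • Under one plausible reading of the steps above, the seed regions for a single section could be generated as follows: the filled contour is the target area, and a morphological expansion with an illustrative margin marks definite background outside it. Library choices and the margin value are assumptions, not prescribed by the application.

```python
import numpy as np
from skimage.draw import polygon2mask
from scipy.ndimage import binary_dilation

def seed_regions(shape, contour_rc, margin=10):
    """Target (foreground) and non-target (background) seeds from one drawn
    2-D contour; the margin/kernel size is an illustrative choice."""
    fg = polygon2mask(shape, contour_rc)                 # inside the contour
    bg = ~binary_dilation(fg, iterations=margin)         # well outside it
    return fg, bg

contour = np.array([[20, 20], [20, 60], [60, 60], [60, 20]])  # a drawn square
fg, bg = seed_regions((96, 96), contour)
```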
  • The display of the two-dimensional contour may be such that its boundary lines are drawn clearly; the shape of the two-dimensional contour is identified by the boundary line, and the two-dimensional contour and the non-target area can be distinguished by it.
  • For example, the boundary lines of the two-dimensional contour may be black while the non-target area has no lines and a gray background, so that the two-dimensional contour is displayed very clearly.
  • The boundary line may also be a colored line, so as to divide it from the non-target area even more clearly.
  • The two-dimensional contour may also be a colored contour whose different parts have different colors, so as to be closer to the actual appearance and shape of the target object and to distinguish the non-target area more effectively.
  • Alternatively, the two-dimensional contour may be displayed as a bright area of the image with the non-target background as a dark area; the contrast between bright and dark divides the two-dimensional contour from the non-target area more clearly.
  • The display of the two-dimensional contour is not limited to the above manners and may include other display manners, which are not listed here.
  • In step S2303, the methods for segmenting the three-dimensional volume data fall roughly into the following three categories.
  • In the first category, the three-dimensional volume data is segmented based on an interactive segmentation algorithm, which assigns the points in the three-dimensional volume data to the target area or the non-target area.
  • The interactive segmentation algorithm may include Graph Cut, GrabCut, Random Walker, etc., but is not limited to the algorithms listed; any segmentation algorithm that can segment the three-dimensional volume data can be applied in this application.
  • The following uses the Graph Cut algorithm as an example to describe the segmentation of three-dimensional volume data in detail.
  • The goal of this step is to divide the image of the three-dimensional volume data into two disjoint parts, the foreground area and the background area.
  • To this end, Graph Cut constructs a graph in the graph-theoretic sense, composed of vertices and weighted edges, with two types of vertices, two types of edges, and two types of weights.
  • Ordinary vertices correspond to the pixels of the image, and there is an edge between every two neighboring pixels whose weight is determined by the "boundary smoothing energy term" mentioned above.
  • In addition there are two terminal vertices, the source s and the sink t: the weight of the edge connecting a pixel vertex to s is determined by the "regional energy term" Rp(1), and the weight of the edge connecting it to t by the "regional energy term" Rp(0). The weights of all edges are thereby determined, that is, the graph is determined. The min-cut algorithm can then be used to find the smallest cut, i.e. the set of edges with the smallest total weight; disconnecting these edges separates the target from the background, and this minimum cut corresponds to the minimum of the energy.
  • When the interactive segmentation algorithm is used to segment the three-dimensional volume data, some foreground seed points (that is, points of the marked target area) and background seed points (that is, points of the marked non-target area) are provided to the Graph Cut segmentation algorithm, which then automatically determines for each remaining unmarked point whether it belongs to the foreground or the background.
  • In short, the principle of the Graph Cut algorithm is to construct the image as a graph in graph theory, with the pixels of the image as the nodes of the graph and the relationships between each pixel and the other pixels in its surrounding neighborhood as the edges, and then to define a cost function (segmentation function) over the boundary and the regions; image segmentation is realized by minimizing this cost function, so as to obtain the three-dimensional contour of the target object.
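  • A minimal sketch of this interactive segmentation, assuming the publicly available PyMaxflow library for the min-cut step and a deliberately simplified boundary term, might look as follows; the seed masks come from the target and non-target areas above.

```python
import numpy as np
import maxflow  # PyMaxflow, one publicly available min-cut implementation

def graph_cut_3d(volume, fg_seeds, bg_seeds, sigma=0.1, hard=1e9):
    """Sketch of interactive Graph Cut segmentation of 3-D volume data.

    fg_seeds/bg_seeds are boolean voxel masks derived from the drawn 2-D
    contours; unmarked voxels are assigned by the minimum cut. The n-link
    weights are a simplified stand-in for the boundary smoothing term.
    """
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(volume.shape)

    # n-links between neighbouring voxels: cutting across large gray-level
    # differences should be cheap, cutting inside smooth tissue expensive.
    g.add_grid_edges(nodes, weights=np.exp(-volume / sigma), symmetric=True)

    # t-links to the terminals s and t: hard constraints on the seeds stand
    # in for the regional energy terms Rp(1) and Rp(0).
    g.add_grid_tedges(nodes, np.where(fg_seeds, hard, 0.0),
                             np.where(bg_seeds, hard, 0.0))

    g.maxflow()                                  # min cut = min energy
    return ~g.get_grid_segments(nodes)           # True = target area

vol = np.random.rand(32, 32, 32)
fg = np.zeros_like(vol, bool); fg[16, 12:20, 12:20] = True   # contour interior
bg = np.zeros_like(vol, bool); bg[16, :4, :] = True          # background band
target = graph_cut_3d(vol, fg, bg)
```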
  • In the second category, the three-dimensional volume data is segmented based on a classification method, which assigns the points in the three-dimensional volume data to the target area or the non-target area.
  • Specifically, a classifier can be trained to learn features that distinguish the target area from the non-target area; such features may be gray levels, relationships with surrounding points, edges, and so on. An image classifier built on these features then classifies the areas of the three-dimensional volume data where neither target area nor non-target area has been marked, and determines whether each unmarked point belongs to the target area or the non-target area, thereby segmenting the three-dimensional volume data and obtaining the three-dimensional contour of the target object.
  • Optional feature extraction and classification methods include, but are not limited to: SVM (support vector machine), PCA (principal component analysis), neural networks, and deep learning networks (such as CNN, VGG, Inception, MobileNet, etc.).
  • A neural network must first learn according to certain learning criteria before it can work. Take an artificial neural network's recognition of "target area" versus "non-target area" as an example: it is stipulated that the network should output "1" when a "target area" is input and "0" when a "non-target area" is input.
  • The criterion for network learning is that if the network makes a wrong decision, learning should reduce the possibility of the network making the same mistake again.
  • The network forms a weighted sum of the input pattern, compares it with a threshold, and applies a non-linear operation to obtain its output. Initially, the probabilities of the network outputting "1" and "0" are each 50%, which is completely random; if the output is "1" (the correct result), the connection weights are increased so that the network can still make the correct judgment the next time it encounters a "target area" input.
  • In one embodiment, the segmentation of the three-dimensional volume data based on the classification method may include the following steps (a sketch follows the list):
  • Step A: Take a point of the target area as the center and extract a cube-shaped three-dimensional image block, for example an n×n×n block, as a positive sample; similarly, take a point of the non-target area as the center and extract a cube-shaped three-dimensional image block, for example an n×n×n block, as a negative sample.
  • Step B: Train an image classifier to learn features that distinguish the positive samples from the negative samples; for the specific training method, refer to the neural network learning described above.
  • Step C: Take each point of the area where neither target area nor non-target area has been determined as the center, and extract a cube-shaped n×n×n three-dimensional image block to be classified around the unmarked point.
  • Step D: Classify each point with the image classifier using the learned feature extraction and classification methods, that is, classify the three-dimensional image block to be segmented and judge whether it belongs to the target area or the non-target area. After all unmarked points have been traversed, the segmentation of the entire three-dimensional volume data is realized.
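  • Steps A-D might be sketched as follows, with a random forest standing in for the listed feature-extraction and classification options; the function names and the block size n are illustrative, and points are assumed to lie away from the volume border.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def block(volume, p, n=9):
    """Cube-shaped n*n*n image block centred on voxel p (steps A and C);
    p is assumed to lie at least n//2 voxels away from the border."""
    h = n // 2
    z, y, x = p
    return volume[z-h:z+h+1, y-h:y+h+1, x-h:x+h+1].ravel()

def classify_unmarked(volume, fg_pts, bg_pts, unmarked_pts, n=9):
    # Steps A-B: blocks around target points are positive samples, blocks
    # around non-target points negative ones; the random forest stands in
    # for the SVM / PCA / neural-network options listed above.
    X = [block(volume, p, n) for p in list(fg_pts) + list(bg_pts)]
    y = [1] * len(fg_pts) + [0] * len(bg_pts)
    clf = RandomForestClassifier(n_estimators=100).fit(X, y)

    # Steps C-D: classify the block around every unmarked voxel;
    # 1 = target area, 0 = non-target area.
    return clf.predict([block(volume, p, n) for p in unmarked_pts])
```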
  • In the third category, the three-dimensional volume data is segmented based on a deep learning method, which assigns the points in the three-dimensional volume data to the target area or the non-target area.
  • Deep learning is a common method of image segmentation.
  • Commonly used deep learning methods generally take the image to be segmented (a two-dimensional image or three-dimensional volume data) as input, pass it through stacked convolution, pooling, activation-function, and similar operations, and output the segmentation mask.
  • Here, deep learning learns the internal laws and representation levels of the "target area" and the "non-target area".
  • The information obtained in this learning process is of great help in interpreting data such as text, images, and sound.
  • The goal is to give the machine an ability to analyze and learn like humans, so that it can recognize the data of the "target area" and the "non-target area" and thereby realize the segmentation of the three-dimensional volume data.
  • The difference from the conventional deep learning method is that, when segmenting the three-dimensional volume data based on the deep learning method here, the input consists of the three-dimensional volume data together with a mask composed of the two-dimensional contours of the target object; that is, the mask includes the drawn two-dimensional contours of the target object.
  • In this way, the target area and non-target area information can be spliced together with the original image (the three-dimensional volume data) to be segmented as the input of the deep learning segmentation network, so that the network learns the features of the target to be segmented from the partial contours calibrated by the user and then segments the remaining regions with unlabeled contours. Because the previously obtained target area and non-target area information is added, the deep learning network can extract features more accurately and segment the unlabeled areas more accurately, so that the segmentation of the three-dimensional contour is more accurate.
  • The segmentation mask is output by the deep learning network; finally, the areas where neither target area nor non-target area has been marked are segmented based on the segmentation mask, and it is determined whether each point of the three-dimensional volume data belongs to the target area or the non-target area.
  • Specifically, the input of the deep learning network can be the three-dimensional volume data together with a three-dimensional mask, of the same size as the volume data, composed of the two-dimensional contours drawn by the user on at least two of the three orthogonal planes; the value of the contour area drawn by the user is 1, and the value of the remaining area is 0.
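  • A toy sketch of such a mask-guided network is shown below in PyTorch; a real network would be a deeper encoder-decoder with pooling as described above, and the class name is hypothetical. The essential point is the two-channel input: the volume concatenated with the same-sized 0/1 contour mask.

```python
import torch
import torch.nn as nn

class ContourGuidedNet(nn.Module):
    """Toy stand-in for the segmentation network described above."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),  # 2 channels: volume + contour mask
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 1),                        # per-voxel logit
        )

    def forward(self, volume, contour_mask):
        x = torch.cat([volume, contour_mask], dim=1)    # (N, 2, D, H, W)
        return torch.sigmoid(self.net(x))               # soft segmentation mask

net = ContourGuidedNet()
vol = torch.rand(1, 1, 32, 32, 32)
msk = torch.zeros(1, 1, 32, 32, 32); msk[..., 16, 10:22, 10:22] = 1.0
out = net(vol, msk)                                     # (1, 1, 32, 32, 32)
```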
  • The deep learning approach above is only exemplary; it should be understood that this application can also learn the features of the target area and the non-target area through other machine learning or deep learning algorithms and then segment the three-dimensional volume data.
  • In the three-dimensional volume data measurement method of the embodiments of the present application, after the three-dimensional volume data and the two-dimensional contours of at least two sections are obtained, the three-dimensional volume data is segmented according to the two-dimensional contours to obtain the three-dimensional contour of the target object.
  • In this way, the three-dimensional contour of the target object can be obtained more accurately, and more parameters of the target object can be obtained more accurately and effectively.
  • The method combines versatility with simple operation and can segment three-dimensional volume data of difficult targets.
  • Exemplarily, the method for measuring three-dimensional volume data described in this application may further include other steps.
  • For example, it may further include a step of revising the contour.
  • In one embodiment, the method further includes: receiving a revision instruction for the two-dimensional contour; revising the two-dimensional contour according to the revision instruction; and re-segmenting the three-dimensional volume data according to the revised two-dimensional contour to obtain a new three-dimensional contour of the target object.
  • The segmentation of the three-dimensional volume data has already been achieved through steps S210 to S230, but the segmentation algorithm has a certain accuracy rate and may segment part of the area incorrectly.
  • The method described in this application draws contours on at least two of the three orthogonal sections and uses the two-dimensional contours drawn by the user to guide the segmentation algorithm toward more accurate results.
  • In the revision step, the user can observe the segmentation result of the entire three-dimensional volume data through rotation and translation.
  • If a section has been segmented inaccurately, the user can re-correct its two-dimensional contour; the corrected two-dimensional contour, together with the original two-dimensional contours drawn by the user on at least two cross-sections, is then used, in a manner similar to step S230, to guide the segmentation algorithm in re-segmenting the three-dimensional volume data. Since more user input is added during editing, the segmentation result becomes more accurate, achieving the purpose of editing.
  • In one embodiment, the method may further include: displaying the three-dimensional contour.
  • The three-dimensional contour obtained after processing by the three-dimensional volume data measurement system can be stored in a memory, and the three-dimensional contour can be displayed on a display.
  • After the three-dimensional contour is obtained, there is a clear boundary between the three-dimensional contour and the non-target area, so the three-dimensional contour can be displayed conspicuously and distinguished from the non-target area, and information about the three-dimensional contour can then be obtained.
  • The display of the three-dimensional contour may be such that its boundary lines are clear after segmentation; the shape of the three-dimensional contour is identified by the boundary line, and the three-dimensional contour and the non-target area can be distinguished by it.
  • For example, the boundary lines of the three-dimensional contour may be black while the non-target area has no lines and a gray background, so that the three-dimensional contour is displayed very clearly.
  • The boundary line may also be a colored line, so as to divide it from the non-target area even more clearly.
  • The entire three-dimensional contour may also be a colored contour whose different parts have different colors, so as to be closer to the actual appearance and shape of the target object and to distinguish the non-target area more effectively.
  • Alternatively, the three-dimensional contour may be displayed as a bright area of the image with the non-target background as a dark area; the contrast between them divides the three-dimensional contour from the non-target area more clearly.
  • The display of the three-dimensional contour is not limited to the above manners and may include other display manners, which are not listed here.
  • In one embodiment, the method may further include: determining the volume of the target object according to the three-dimensional contour.
  • As described above, the input of the deep learning network can be the three-dimensional volume data and a three-dimensional mask (of the same size as the volume data) composed of the two-dimensional contours drawn by the user on at least two of the three orthogonal planes. After the volume represented by a single voxel of the segmentation mask and the number of voxels in the segmentation mask are determined, the volume of the target object is calculated from them; that is, the product of the two is the volume of the target object.
  • Of course, the volume of the target object can also be obtained by other methods, which are not limited here.
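  • As a worked example, with the segmentation mask represented as a boolean voxel array, the product described above is a one-liner; the voxel spacing below is an illustrative value that would come from the acquisition geometry.

```python
import numpy as np

def target_volume_mm3(mask, voxel_spacing_mm=(0.5, 0.5, 0.5)):
    """Volume of the target object from its boolean segmentation mask:
    the single-voxel volume times the number of voxels in the mask."""
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return int(mask.sum()) * voxel_mm3

mask = np.zeros((64, 64, 64), bool); mask[24:40, 24:40, 24:40] = True
print(target_volume_mm3(mask))   # 16**3 voxels * 0.125 mm^3 = 512.0 mm^3
```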
  • The above exemplarily shows a method for measuring three-dimensional volume data according to an embodiment of the present application.
  • Based on this method, contours are drawn on at least two sections, and the drawn two-dimensional contours are used to guide the segmentation algorithm in segmenting the three-dimensional volume data, so as to obtain more accurate results.
  • The method combines versatility with simple operation and can segment three-dimensional volume data of difficult targets.
  • The second aspect of the present application provides another method for measuring three-dimensional volume data.
  • The following describes the method for measuring three-dimensional volume data according to another embodiment of the present application in conjunction with the schematic flowchart of FIG. 6.
  • As shown in FIG. 6, the method 600 for measuring three-dimensional volume data includes the following steps:
  • Step S610: Acquire three-dimensional volume data of the target object;
  • Step S620: Determine, in the three-dimensional volume data, at least two sections containing different positions of the target object, and draw a two-dimensional outline of the target object on the sections;
  • Step S630: Segment the three-dimensional volume data according to the two-dimensional contours to obtain a three-dimensional contour of the target object.
  • Steps S610 and S630 of the three-dimensional volume data measurement method 600 described with reference to FIG. 6 are the same as steps S210 and S230 of the three-dimensional volume data measurement method 200 described with reference to FIG. 2.
  • For details of step S610 and step S630, refer to the explanation and description of step S210 and step S230 above; those explanations and descriptions also apply to the method 600 of this embodiment and are not repeated here.
  • Step S620 is described in detail below: at least two sections containing different positions of the target object are determined in the three-dimensional volume data, and a two-dimensional outline of the target object is drawn on the sections.
  • As before, at least two sections need to be determined in order to draw the two-dimensional contour of the target object.
  • Unlike step S220, however, the positional relationship between the two sections is not limited to intersection: as long as the sections are taken at different positions of the target object, contain the target object, and allow the two-dimensional contour of the target object to be drawn, the relationship between them is not restricted.
  • Here, sections at different positions means that, in three-dimensional space, the two sections do not overlap and both cut through the target object, so that different two-dimensional contours of the target object are obtained, providing more effective reference and guidance for the subsequent segmentation of the three-dimensional volume data.
  • In this embodiment, the at least two cross-sections may be parallel to or intersect each other.
  • Parallel here refers to two parallel sections located at different positions.
  • The intersection may be completely orthogonal, approximately orthogonal, or oblique.
  • Completely orthogonal means that the two sections are perpendicular to each other, with an angle of 90° between them; approximately orthogonal means that the two sections are substantially perpendicular to each other, the angle between them being 85°-95°, or 88°-92°, or 89°-91°, i.e. almost perpendicular without strictly requiring exact perpendicularity.
  • Oblique intersection means that two sections intersect but are not perpendicular. Unless otherwise specified, references to intersection, complete orthogonality, approximate orthogonality, and oblique intersection follow these explanations.
  • In this embodiment too, sections at different positions may be selected to make the obtained two-dimensional contours more comprehensive.
  • For example, sections at orthogonal positions may be selected.
  • In one embodiment, three orthogonal cross-sections are selected, as shown in FIG. 4, where the three cross-sections are perpendicular to each other in space.
  • The extension directions of the three cross-sections correspond to the X, Y, and Z axes of the three-dimensional coordinate system.
  • Each section can be rotated or translated.
  • Alternatively, two of the three orthogonal cross-sections shown in FIG. 4 can be selected.
  • In a specific embodiment, after obtaining the three-dimensional volume data, the user selects any one of the three orthogonal cross-sections, translates or rotates the plane to the center of the target object to be segmented or its vicinity, and draws the two-dimensional contour of the target object on that cross-section; two further orthogonal sections are then generated from the center (or near the center point) of the two-dimensional contour, and the user draws a two-dimensional contour on at least one of them. Through the above steps, the contours of at least two sections of the target object are obtained.
  • The two-dimensional contour may be drawn manually.
  • For example, based on the gray levels of the two-dimensional image of the section, combined with the user's own experience, the user determines which areas and/or points in the image are target areas and which are non-target areas, and marks and draws them to obtain the two-dimensional outline of the target object.
  • Some semi-automatic algorithms can also be used to snap the drawn contour to the edges automatically and obtain the two-dimensional contour of the target object.
  • The semi-automatic algorithms include, but are not limited to, an edge detection algorithm (Livewire) and/or dynamic programming.
  • In one embodiment, the two-dimensional contour of the target object is drawn by an edge detection algorithm (Livewire).
  • After the at least two obtained sections are analyzed, the edge pixels of the image's target area can be identified: their gray levels differ from those of non-edge pixels, usually presenting a distinct jump. By detecting whether the gray value of a pixel jumps abruptly, it can be determined whether the pixel lies on a two-dimensional contour edge; the target area and the non-target area in the section are thereby divided, and finally the two-dimensional outline of the target object is drawn.
  • The above exemplarily shows a method for measuring three-dimensional volume data according to another embodiment of the present application.
  • Based on this method, contours are drawn on at least two sections, and the drawn two-dimensional contours are used to guide the segmentation algorithm in segmenting the three-dimensional volume data, so as to obtain more accurate results.
  • The method combines versatility with simple operation and can segment three-dimensional volume data of difficult targets.
  • The third aspect of the present application provides yet another three-dimensional volume data measurement method.
  • The following describes the three-dimensional volume data measurement method according to yet another embodiment of the present application in conjunction with the schematic flowchart of FIG. 7. As shown in FIG. 7, the method 700 for measuring three-dimensional volume data includes the following steps:
  • Step S710: Acquire three-dimensional volume data of the target object;
  • Step S720: Determine, in the three-dimensional volume data, at least two intersecting cross-sections containing the target object, and draw a two-dimensional outline of the target object on the cross-sections;
  • Step S730: Determine, according to the two-dimensional contours, the contours corresponding to the other regions of the three-dimensional volume data outside the cross-sections;
  • Step S740: Determine the three-dimensional contour of the target object according to the two-dimensional contours and the contours corresponding to the other regions.
  • Steps S710 and S720 of the three-dimensional volume data measurement method 700 described with reference to FIG. 7 are the same as steps S210 and S220 of the three-dimensional volume data measurement method 200 described with reference to FIG. 2.
  • For details of step S710 and step S720, refer to the explanation and description of step S210 and step S220 above; those explanations and descriptions also apply to the method 700 of this embodiment and are not repeated here.
Step S730 is described in detail below. The difference between step S730 and step S230 is that the contours determined in step S730 are the contours corresponding to the other regions of the three-dimensional volume data outside the cross-sections, and do not include the two-dimensional contours formed on the cross-sections. The other regions may be arbitrary cross-sections or three-dimensional surfaces in space, which are not specifically limited here.
In step S230, by contrast, the three-dimensional volume data is segmented as a whole according to the two-dimensional contours, so that a complete three-dimensional contour of the target object is obtained, whereas the contours obtained in step S730 are not, by themselves, a complete three-dimensional contour of the target object.
It should be noted that, in step S730, the method of determining the contours corresponding to the other regions outside the cross-sections according to the two-dimensional contours may be chosen from the various segmentation methods of step S230; those segmentation methods are not repeated here and can be selected according to actual needs to determine the contours corresponding to the other regions outside the cross-sections in the three-dimensional volume data.
In this embodiment, the method further includes step S740. Since step S720 yields the two-dimensional contours of the target object on two intersecting cross-sections of the three-dimensional volume data, and step S730 yields the contours corresponding to the other regions outside those cross-sections, neither result is a complete three-dimensional contour of the target object on its own. In step S740, the contours obtained in steps S720 and S730 are combined to obtain the complete three-dimensional contour of the target object.
In addition, step S720 may instead determine, in the three-dimensional volume data, at least two cross-sections at different positions of the target object and draw the two-dimensional contour of the target object on those cross-sections; for details, see the related description of step S620 shown in FIG. 6, which is not repeated here.
The fourth aspect of the present application further provides a system for measuring three-dimensional volume data, which is described below with reference to FIG. 8. FIG. 8 shows a schematic block diagram of a system 800 for measuring three-dimensional volume data according to an embodiment of the present application.
The system 800 for measuring three-dimensional volume data includes a memory 810 and a processor 820. The memory 810 stores computer program code for implementing the corresponding steps of the method for measuring three-dimensional volume data according to an embodiment of the present application, and the processor 820 is configured to run the computer program code stored in the memory 810 so as to execute those steps.
In one embodiment, when the computer program code is executed by the processor 820, the system 800 for measuring three-dimensional volume data is caused to perform at least one of the following steps: acquiring three-dimensional volume data of a target object; determining, in the three-dimensional volume data, at least two intersecting cross-sections containing the target object and drawing the two-dimensional contour of the target object on the cross-sections; and segmenting the three-dimensional volume data according to the two-dimensional contours to obtain the three-dimensional contour of the target object.
In another embodiment, when the computer program code is executed by the processor 820, the system 800 for measuring three-dimensional volume data is caused to perform at least one of the following steps: acquiring three-dimensional volume data of a target object; determining, in the three-dimensional volume data, at least two cross-sections at different positions of the target object and drawing the two-dimensional contour of the target object on the cross-sections; and segmenting the three-dimensional volume data according to the two-dimensional contours to obtain the three-dimensional contour of the target object.
In another embodiment, when the computer program code is executed by the processor 820, the system 800 for measuring three-dimensional volume data is caused to perform the following steps: receiving a revision instruction for the two-dimensional contour; revising the two-dimensional contour according to the revision instruction; and re-segmenting the three-dimensional volume data according to the revised two-dimensional contour to obtain a new three-dimensional contour of the target object.
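A minimal sketch of this revise-and-resegment step follows. The `ContourRevision` record and the injected `segment_fn` callback are assumptions introduced for illustration, with `segment_fn` standing for whichever contour-guided segmentation the system uses (interactive, classification-based, or deep-learning-based).

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ContourRevision:
    """Hypothetical revision instruction: which cross-section was edited
    and the corrected 2D contour mask for it."""
    section_index: int
    corrected_contour: np.ndarray

def revise_and_resegment(volume, contours, revision, segment_fn):
    """Apply the user's revision, then re-run the contour-guided
    segmentation so that a new three-dimensional contour is produced."""
    contours = list(contours)
    contours[revision.section_index] = revision.corrected_contour
    return segment_fn(volume, contours)
```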
In another embodiment, when the computer program code is executed by the processor 820, the system 800 for measuring three-dimensional volume data is caused to perform the following step: displaying the three-dimensional contour.
In another embodiment, when the computer program code is executed by the processor 820, the system 800 for measuring three-dimensional volume data is caused to perform the following step: determining the volume of the target object according to the three-dimensional contour.
The fifth aspect of the present application further provides a medical device, which may include the system 800 for measuring three-dimensional volume data shown in FIG. 8. The medical device can implement the method for measuring three-dimensional volume data shown in FIG. 2, FIG. 6, or FIG. 7. Because the medical device described in this application includes the system for measuring three-dimensional volume data, it likewise has the advantages of obtaining more parameters of the target object more accurately and effectively, combining versatility with simple operation, and being able to segment three-dimensional volume data of difficult targets.
The sixth aspect of the present application further provides a storage medium on which computer program instructions are stored; when run by a computer or a processor, the computer program instructions are used to execute the corresponding steps of the method for measuring three-dimensional volume data of the embodiments of the present application. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
In one embodiment, the computer program instructions, when run by a computer or processor, cause the computer or processor to perform at least one of the following steps: acquiring three-dimensional volume data of a target object; determining, in the three-dimensional volume data, at least two intersecting cross-sections containing the target object and drawing the two-dimensional contour of the target object on the cross-sections; and segmenting the three-dimensional volume data according to the two-dimensional contours to obtain the three-dimensional contour of the target object.
In another embodiment, the computer program instructions, when run by the computer or processor, cause the computer or processor to perform the following steps: acquiring three-dimensional volume data of a target object; determining, in the three-dimensional volume data, at least two cross-sections at different positions of the target object and drawing the two-dimensional contour of the target object on the cross-sections; and segmenting the three-dimensional volume data according to the two-dimensional contours to obtain the three-dimensional contour of the target object.
In another embodiment, the computer program instructions, when run by the computer or processor, cause the computer or processor to perform the following steps: acquiring three-dimensional volume data of a target object; determining, in the three-dimensional volume data, at least two intersecting cross-sections containing the target object and drawing the two-dimensional contour of the target object on the cross-sections; determining, according to the two-dimensional contours, the contours corresponding to the other regions outside the cross-sections in the three-dimensional volume data; and determining the three-dimensional contour of the target object according to the two-dimensional contours and the contours corresponding to the other regions.
In another embodiment, the computer program instructions, when run by the computer or processor, cause the computer or processor to perform the following steps: receiving a revision instruction for the two-dimensional contour; revising the two-dimensional contour according to the revision instruction; and re-segmenting the three-dimensional volume data according to the revised two-dimensional contour to obtain a new three-dimensional contour of the target object.
In another embodiment, the computer program instructions, when run by the computer or processor, cause the computer or processor to perform the following step: displaying the three-dimensional contour.
In another embodiment, the computer program instructions, when run by the computer or processor, cause the computer or processor to perform the following step: determining the volume of the target object according to the three-dimensional contour.
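One common way to realize this step, shown here only as a sketch under the assumption that the three-dimensional contour is available as a boolean voxel mask and that the physical voxel spacing is known from the acquisition, is to count the voxels inside the contour and scale by the volume of one voxel:

```python
import numpy as np

def volume_from_contour(contour_mask: np.ndarray, voxel_size_mm) -> float:
    """Estimate the target-object volume in cubic millimetres.

    contour_mask: boolean 3D array, True inside the segmented contour.
    voxel_size_mm: (dx, dy, dz) physical size of one voxel in mm.
    """
    voxel_volume = float(np.prod(voxel_size_mm))      # mm^3 per voxel
    return float(np.count_nonzero(contour_mask)) * voxel_volume
```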
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another device, and some features may be omitted or not implemented.
The various component embodiments of the present application may be implemented by hardware, by software modules running on one or more processors, or by a combination of the two. In practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some modules of the device according to the embodiments of the present application. The present application may also be implemented as a device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein. Such a program implementing the present application may be stored on a computer-readable medium or may take the form of one or more signals; such a signal may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A measurement method (200) for three-dimensional volume data, a measurement system (10) for three-dimensional volume data, a medical device, and a computer storage medium. The method comprises: acquiring three-dimensional volume data of a target object (S210); selecting, in the three-dimensional volume data, at least two intersecting cross-sections containing the target object, and drawing a two-dimensional contour of the target object on the cross-sections (S220); and segmenting the three-dimensional volume data according to the two-dimensional contours to obtain a three-dimensional contour of the target object (S230). The method makes it possible to obtain the contour of the target object more accurately, and thereby to acquire more parameters of the target object more accurately and effectively. The method combines versatility with simple operation and can segment three-dimensional volume data of difficult targets.


Claims (27)

  1. A method for measuring three-dimensional volume data, characterized in that the method comprises:
    acquiring three-dimensional volume data of a target object;
    determining, in the three-dimensional volume data, at least two intersecting cross-sections containing the target object, and drawing a two-dimensional contour of the target object on the cross-sections;
    determining, according to the two-dimensional contours, contours corresponding to other regions outside the cross-sections in the three-dimensional volume data;
    determining a three-dimensional contour of the target object according to the two-dimensional contours and the contours corresponding to the other regions.
  2. A method for measuring three-dimensional volume data, characterized in that the method comprises:
    acquiring three-dimensional volume data of a target object;
    determining, in the three-dimensional volume data, at least two intersecting cross-sections containing the target object, and drawing a two-dimensional contour of the target object on the cross-sections;
    segmenting the three-dimensional volume data according to the two-dimensional contours to obtain a three-dimensional contour of the target object.
  3. A method for measuring three-dimensional volume data, characterized in that the method comprises:
    acquiring three-dimensional volume data of a target object;
    determining, in the three-dimensional volume data, at least two cross-sections at different positions of the target object, and drawing a two-dimensional contour of the target object on the cross-sections;
    segmenting the three-dimensional volume data according to the two-dimensional contours to obtain a three-dimensional contour of the target object.
  4. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
    receiving a revision instruction for the two-dimensional contour;
    revising the two-dimensional contour according to the revision instruction, and re-segmenting the three-dimensional volume data according to the revised two-dimensional contour to obtain a new three-dimensional contour of the target object.
  5. The method according to any one of claims 1 to 4, characterized in that the method further comprises:
    displaying the three-dimensional contour.
  6. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
    determining the volume of the target object according to the three-dimensional contour.
  7. The method according to claim 3, characterized in that determining the cross-sections and drawing the two-dimensional contour of the target object comprise:
    selecting a first cross-section, moving the first cross-section to a central region of the target object, and drawing the two-dimensional contour of the target object on the first cross-section;
    generating, with the determined first cross-section as a positional reference, a second cross-section containing the target object, and drawing the two-dimensional contour of the target object on the second cross-section, wherein the second cross-section comprises at least one cross-section.
  8. The method according to claim 1 or 2, characterized in that determining the cross-sections and drawing the two-dimensional contour of the target object comprise:
    selecting a first cross-section, moving the first cross-section to a central region of the target object, and drawing the two-dimensional contour of the target object on the first cross-section;
    generating, at the center of the two-dimensional contour of the target object on the determined first cross-section, a second cross-section intersecting the first cross-section, and drawing the two-dimensional contour of the target object on the second cross-section.
  9. The method according to any one of claims 1 to 3, characterized in that determining the cross-sections and drawing the two-dimensional contour of the target object comprise:
    selecting a first cross-section, the first cross-section containing the target object, and drawing the two-dimensional contour of the target object on the first cross-section;
    generating, with the first cross-section as a positional reference, a second cross-section containing the target object, and drawing the two-dimensional contour of the target object on the second cross-section, wherein the second cross-section comprises at least one cross-section.
  10. The method according to any one of claims 1 to 3, characterized in that determining the cross-sections and drawing the two-dimensional contour of the target object comprise:
    moving a cross-section to a central region of the target object, and determining at least two cross-sections;
    drawing the two-dimensional contour of the target object on the determined at least two cross-sections.
  11. The method according to any one of claims 1 to 3, characterized in that the two-dimensional contour and/or the three-dimensional contour is distinguished from a non-target area not containing the target object, and displayed, by means of at least one of boundary lines, color, and brightness.
  12. The method according to claim 2 or 3, characterized in that segmenting the three-dimensional volume data according to the two-dimensional contours comprises:
    generating, according to the two-dimensional contours, a target area containing the target object;
    generating, according to the two-dimensional contours, a non-target area not containing the target object;
    segmenting the three-dimensional volume data according to the target area and the non-target area.
  13. The method according to claim 12, characterized in that generating, according to the two-dimensional contours, a target area containing the target object comprises:
    determining the region inside the drawn two-dimensional contour of the target object as the target area.
  14. The method according to claim 12, characterized in that generating, according to the two-dimensional contours, a non-target area not containing the target object comprises:
    determining the region outside the target area bounded by the drawn two-dimensional contour of the target object as the non-target area; and/or
    performing morphological dilation on the drawn two-dimensional contour of the target object to generate the non-target area.
  15. The method according to claim 12, characterized in that segmenting the three-dimensional volume data according to the target area and the non-target area comprises:
    segmenting the three-dimensional volume data based on an interactive segmentation algorithm, so as to classify points in the three-dimensional volume data into the target area or the non-target area.
  16. The method according to claim 15, characterized in that the method of segmenting the three-dimensional volume data based on an interactive segmentation algorithm comprises:
    selecting target-area seed points and non-target-area seed points in the target area and the non-target area, and constructing a graph in the graph-theoretic sense;
    determining a segmentation function from the selected target-area seed points and non-target-area seed points;
    performing segmentation computation on the unlabeled points in the three-dimensional volume data by means of the segmentation function, so as to determine whether the unlabeled points in the three-dimensional volume data belong to the target area or the non-target area.
  17. The method according to claim 12, characterized in that segmenting the three-dimensional volume data according to the target area and the non-target area comprises:
    segmenting the three-dimensional volume data by a classification-based segmentation method.
  18. The method according to claim 17, characterized in that segmenting the three-dimensional volume data by a classification-based segmentation method comprises:
    training an image classifier to learn features capable of distinguishing the target area from the non-target area;
    generating an image classifier from the features, for classifying the regions in which the target area and the non-target area are not labeled, and judging whether the unlabeled points in the three-dimensional volume data belong to the target area or the non-target area.
  19. The method according to claim 17, characterized in that segmenting the three-dimensional volume data by a classification-based segmentation method comprises:
    taking cube-shaped three-dimensional image blocks centered on points of the target area as positive samples;
    taking cube-shaped three-dimensional image blocks centered on points of the non-target area as negative samples;
    training an image classifier to learn features capable of distinguishing the positive samples from the negative samples;
    taking, centered on each point of the region in which the target area and the non-target area have not been determined, a cube-shaped three-dimensional image block to be segmented;
    classifying the three-dimensional image block to be segmented by means of the image classifier, and judging whether it belongs to the target area or the non-target area.
  20. The method according to claim 12, characterized in that segmenting the three-dimensional volume data according to the target area and the non-target area comprises:
    segmenting the three-dimensional volume data by a deep-learning-based method.
  21. The method according to claim 20, characterized in that segmenting the three-dimensional volume data by a deep-learning-based method comprises:
    inputting the three-dimensional volume data and a mask composed of the two-dimensional contours of the target object;
    outputting a segmentation mask through a deep-learning network;
    determining the target area and/or the non-target area in the three-dimensional volume data according to the segmentation mask.
  22. The method according to claim 21, characterized in that determining the volume of the target object according to the three-dimensional contour comprises:
    determining the volume of the segmentation masks and the number of the segmentation masks;
    determining the volume of the target object according to the volume of the segmentation masks and the number of the segmentation masks.
  23. The method according to claim 1 or 2, characterized in that the intersection is full orthogonality, oblique intersection, or approximate orthogonality.
  24. The method according to claim 3, characterized in that at least two of the cross-sections are parallel to or intersect each other.
  25. A system for measuring three-dimensional volume data, comprising a memory, a processor, and a computer program stored on the memory and run on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 24.
  26. A medical device, characterized by comprising the system for measuring three-dimensional volume data according to claim 25.
  27. A computer storage medium on which a computer program is stored, characterized in that the computer program, when executed by a computer or a processor, implements the steps of the method according to any one of claims 1 to 24.
PCT/CN2019/126359 2019-12-18 2019-12-18 Measurement method and measurement system for three-dimensional volume data, medical device, and storage medium WO2021120059A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980101217.2A CN114503166A (zh) 2019-12-18 2019-12-18 Measurement method and measurement system for three-dimensional volume data, medical device, and storage medium
PCT/CN2019/126359 WO2021120059A1 (zh) 2019-12-18 2019-12-18 Measurement method and measurement system for three-dimensional volume data, medical device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/126359 WO2021120059A1 (zh) 2019-12-18 2019-12-18 Measurement method and measurement system for three-dimensional volume data, medical device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021120059A1 true WO2021120059A1 (zh) 2021-06-24

Family

ID=76476984

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126359 WO2021120059A1 (zh) 2019-12-18 2019-12-18 Measurement method and measurement system for three-dimensional volume data, medical device, and storage medium

Country Status (2)

Country Link
CN (1) CN114503166A (zh)
WO (1) WO2021120059A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117392735B (zh) * 2023-12-12 2024-03-22 深圳市宗匠科技有限公司 Facial data processing method and apparatus, computer device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513135A (zh) * 2015-09-15 2016-04-20 浙江大学 Method for automatically setting spatial positions of a three-dimensional garment pattern
CN105761304A (zh) * 2016-02-02 2016-07-13 飞依诺科技(苏州)有限公司 Three-dimensional organ model construction method and apparatus
CN106934807A (zh) * 2015-12-31 2017-07-07 深圳迈瑞生物医疗电子股份有限公司 Medical image analysis method and system, and medical device
CN108665544A (zh) * 2018-05-09 2018-10-16 中冶北方(大连)工程技术有限公司 Three-dimensional geological model modeling method
WO2019011160A1 (zh) * 2017-07-11 2019-01-17 中慧医学成像有限公司 Three-dimensional ultrasound image display method
CN109934905A (zh) * 2019-01-16 2019-06-25 中德(珠海)人工智能研究院有限公司 System for generating a three-dimensional model and generation method therefor

Also Published As

Publication number Publication date
CN114503166A (zh) 2022-05-13

Similar Documents

Publication Publication Date Title
EP3826544B1 (en) Ultrasound system with an artificial neural network for guided liver imaging
CN110338840B (zh) Display processing method for three-dimensional imaging data, and three-dimensional ultrasonic imaging method and system
US11715203B2 (en) Image processing method and apparatus, server, and storage medium
US9277902B2 (en) Method and system for lesion detection in ultrasound images
TWI501754B (zh) 影像辨識方法及影像辨識系統
CN110176010B (zh) 一种图像检测方法、装置、设备及存储介质
US20140200452A1 (en) User interaction based image segmentation apparatus and method
JP2010000133A (ja) 画像表示装置、画像表示方法及びプログラム
US11633235B2 (en) Hybrid hardware and computer vision-based tracking system and method
CN112568933B (zh) Ultrasonic imaging method, device, and storage medium
CN114022554A (zh) YOLO-based acupoint detection and positioning method for a massage robot
WO2021120059A1 (zh) Measurement method and measurement system for three-dimensional volume data, medical device, and storage medium
US11944486B2 (en) Analysis method for breast image and electronic apparatus using the same
US11452494B2 (en) Methods and systems for projection profile enabled computer aided detection (CAD)
JP2895414B2 (ja) Ultrasonic volume calculation device
CN113940704A (zh) Thyroid-based muscle and fascia detection device
CN113768544A (zh) Ultrasonic imaging method and device for the breast
CN111383323B (zh) Ultrasonic imaging method and system, and ultrasonic image processing method and system
CN113229850A (zh) Ultrasonic pelvic floor imaging method and ultrasonic imaging system
WO2016076104A1 (ja) Image processing method, image processing device, and program
Wang et al. Ellipse guided multi-task network for fetal head circumference measurement
WO2020133236A1 (zh) Spine imaging method and ultrasonic imaging system
CN111986165B (zh) Method and device for detecting calcifications in a breast image
US20210251601A1 (en) Method for ultrasound imaging and related equipment
WO2022134049A1 (zh) Ultrasonic imaging method and ultrasonic imaging system for a fetal skull

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19956446; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19956446; Country of ref document: EP; Kind code of ref document: A1)