WO2018214063A1 - Ultrasonic device and three-dimensional ultrasonic image display method therefor - Google Patents


Info

Publication number
WO2018214063A1
Authority
WO
WIPO (PCT)
Prior art keywords: subset, volume data, subsets, display, fusion
Application number
PCT/CN2017/085736
Other languages: French (fr), Chinese (zh)
Inventors: 梁天柱, 邹耀贤, 林穆清, 龚闻达, 朱磊
Original Assignee: 深圳迈瑞生物医疗电子股份有限公司 (Shenzhen Mindray Bio-Medical Electronics Co., Ltd.)
Application filed by 深圳迈瑞生物医疗电子股份有限公司
Priority to PCT/CN2017/085736
Priority to CN201780079242.6A
Publication of WO2018214063A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/10: Geometric effects
    • G06T 15/20: Perspective computation

Definitions

  • the present invention relates to an ultrasound apparatus, and in particular to a technique of displaying a three-dimensional ultrasound image on an ultrasound apparatus.
  • the ultrasound apparatus also divides the volume data into regions, for example extracting objects or regions of interest (such as key clinical structures and parts like the fetal face) from the volume data, or dividing the volume data into multiple subsets corresponding to different objects or regions of interest, and then rendering and displaying each object or region. Whether the three-dimensional ultrasound image is rendered as a whole or each object or region is rendered separately, medical ultrasound instruments currently use a single, uniform rendering. The problem with such a display is that when multiple objects overlap front-to-back at a certain viewing angle, usually only the object or region of interest selected by the doctor is rendered, so the doctor sees the details of only one object and cannot intuitively see the appearance of several objects at the same time.
  • the present application provides an ultrasound apparatus and a three-dimensional ultrasound image display method thereof, so that a doctor can intuitively and simultaneously see rendered images of a plurality of objects on the display interface.
  • according to a first aspect, an embodiment provides a three-dimensional ultrasound image display method, including: acquiring ultrasound three-dimensional volume data to obtain a volume data set; identifying a plurality of subsets from the volume data set; establishing multiple sets of different display configurations; differentially rendering the plurality of subsets using the multiple sets of display configurations; acquiring a fusion coefficient for each subset; and multiplying the rendering results of the plurality of subsets by their respective fusion coefficients and superimposing them for display.
  • according to a second aspect, an embodiment provides a three-dimensional ultrasound image display method, including: establishing multiple sets of different display configurations; differentially rendering a plurality of subsets of an ultrasound three-dimensional volume data set using the multiple sets of display configurations; acquiring a fusion coefficient for each subset; and multiplying the rendering results of the plurality of subsets by their respective fusion coefficients and superimposing them for display.
  • according to a third aspect, another embodiment provides a three-dimensional ultrasound image display method, including: acquiring ultrasound three-dimensional volume data for fetal examination to obtain a volume data set; identifying a plurality of subsets from the volume data set according to image features of the fetus; rendering part or all of the plurality of subsets to obtain a plurality of sub-images; fusing part or all of the plurality of sub-images to obtain a three-dimensional image; and displaying the three-dimensional image.
  • an ultrasound apparatus including:
  • an ultrasound probe for transmitting ultrasound waves to a region of interest within the biological tissue and receiving echoes of the ultrasound waves
  • a transmit/receive sequence controller for generating a transmit sequence and/or a receive sequence, and outputting the transmit sequence and/or the receive sequence to an ultrasound probe, controlling the ultrasound probe to transmit ultrasound waves to the region of interest and receiving echoes of the ultrasound waves ;
  • the processor is configured to generate ultrasound three-dimensional volume data according to the ultrasound echo data to obtain a volume data set, and to identify multiple subsets from the volume data set; the processor is further configured to establish multiple sets of different display configurations, use the multiple sets of display configurations to differentially render the multiple subsets, obtain a fusion coefficient for each subset, multiply the rendering results of the multiple subsets by their respective fusion coefficients, and superimpose them for display;
  • a human-machine interaction device comprising a display for displaying an ultrasound rendered image.
  • an ultrasound apparatus including:
  • an ultrasound probe for transmitting ultrasound waves to a region of interest within the biological tissue and receiving echoes of the ultrasound waves
  • a transmit/receive sequence controller for generating a transmit sequence and/or a receive sequence, outputting the transmit sequence and/or the receive sequence to the ultrasound probe, and controlling the ultrasound probe to transmit ultrasound waves to the region of interest and receive echoes of the ultrasound waves;
  • a processor configured to acquire ultrasound three-dimensional volume data for fetal examination, obtain a volume data set, identify a plurality of subsets from the volume data set according to image features of the fetus, render part or all of the plurality of subsets to obtain a plurality of sub-images, fuse part or all of the plurality of sub-images to obtain a three-dimensional image, and output the three-dimensional image to a display for display;
  • a human-machine interaction device comprising a display for displaying an ultrasound three-dimensional image.
  • an ultrasound apparatus including:
  • a memory for storing a program;
  • a processor configured to implement the above method by executing the program stored in the memory.
  • an embodiment provides a computer readable storage medium comprising a program executable by a processor to implement the above method.
  • an embodiment provides a three-dimensional ultrasound image display system, including:
  • an acquisition unit for acquiring ultrasound three-dimensional volume data and obtaining a volume data set
  • an identification unit for identifying a plurality of subsets from the volume data set
  • a rendering unit for creating multiple sets of different display configurations and using different sets of display configurations to differentially render multiple subsets
  • a fusion unit for acquiring fusion coefficients of each subset, multiplying the rendering results of the plurality of subsets by respective fusion coefficients, and performing superimposed display.
  • multiple sets of display configurations are used to differentially render multiple subsets, and the rendering results are fused for display; the fusion coefficients achieve a display effect equivalent to reducing the opacity rendering parameter of each subset, so that each region or object is presented with a translucent effect.
  • for regions or objects that overlap one another, their rendered images can be displayed at the same time.
  • this simultaneous display helps the user observe the details of each region or object at once, and thus gain a better grasp of the overall presentation of the volume data; the user can also see intuitively how the regions or objects have been divided, so that the division can be adjusted subsequently.
  • DRAWINGS
  • FIG. 1 is a schematic structural view of an ultrasonic device in an embodiment;
  • FIG. 2 is a flow chart showing display of a three-dimensional ultrasound image in an embodiment
  • FIG. 3 is a schematic diagram of rendering and displaying multiple objects in the prior art
  • FIG. 4 is a schematic diagram of rendering and displaying multiple objects in an embodiment of the present invention.
  • FIG. 5 is a flow chart of a ray tracing method in an embodiment
  • FIG. 6 is a schematic diagram of a ray tracing method in an embodiment
  • FIG. 7 is a schematic structural diagram of a control terminal in an embodiment
  • FIG. 8 is a schematic diagram of adjusting a portion of a superimposed display of two objects in an embodiment
  • FIG. 9 is a flow chart of adjusting a partition in an embodiment
  • FIG. 10 is a flow chart of a key to remove an obstruction in an embodiment
  • FIG. 11 is a schematic structural diagram of a three-dimensional ultrasonic image display system in an embodiment.
  • Embodiment 1:
  • the ultrasound probe 101 is configured to emit ultrasound waves to a region of interest within the biological tissue 108 and receive echoes of the ultrasound waves.
  • the ultrasonic probe 101 may be a volume probe or an area array probe.
  • the volume probe internally consists of a conventional one-dimensional array of transducer elements and a built-in stepper motor that sweeps the element array.
  • the stepper motor can sweep the scanning plane of the element array along its normal direction, transmitting ultrasound waves to different scanning planes and receiving their echoes, thereby obtaining echo data for a plurality of scanning planes and realizing scanning of a three-dimensional space.
  • the area array probe has thousands of array elements arranged in the form of a matrix, which can transmit and receive directly in different directions of the three-dimensional space and simultaneously receive the ultrasound echo data returned from scanning planes in different directions, thereby realizing fast scanning of the three-dimensional space.
  • the transmit/receive sequence controller 102 is configured to generate a transmit sequence and/or a receive sequence, output the transmit sequence and/or the receive sequence to the ultrasound probe, and control the ultrasound probe to transmit ultrasound waves to the region of interest and receive the echoes of the ultrasound waves.
  • the transmit sequence specifies the number of transducer elements in the ultrasound probe 101 used for transmission and the parameters of the ultrasound transmitted into the biological tissue (e.g., amplitude, frequency, number of waves, steering angle, waveform), and the receive sequence specifies the number of transducer elements in the ultrasound probe 101 used for reception and the parameters of the echoes they receive (e.g., receive angle, depth).
  • the type of probe is different, and the transmission sequence and reception sequence are also different.
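As a rough illustration of how such transmit/receive sequences might be represented in software, here is a minimal sketch; the field names (num_elements, frequency_hz, and so on) and the dataclass layout are assumptions for illustration, not the actual interface of the apparatus.

```python
from dataclasses import dataclass

@dataclass
class TransmitSequence:
    num_elements: int        # number of transducer elements used for transmission
    amplitude: float         # drive amplitude (normalized 0..1)
    frequency_hz: float      # center frequency of the emitted pulse
    num_cycles: int          # number of waves (cycles) in the pulse
    steer_angle_deg: float   # steering angle of the transmitted beam
    waveform: str            # e.g. "pulsed"

@dataclass
class ReceiveSequence:
    num_elements: int        # number of transducer elements used for reception
    receive_angle_deg: float # receive steering angle
    max_depth_mm: float      # imaging depth

# One (transmit, receive) pair per scanning plane of the volume sweep.
tx = TransmitSequence(num_elements=128, amplitude=0.8, frequency_hz=3.5e6,
                      num_cycles=2, steer_angle_deg=0.0, waveform="pulsed")
rx = ReceiveSequence(num_elements=128, receive_angle_deg=0.0, max_depth_mm=180.0)
print(tx, rx)
```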
  • the echo processing module 104 is configured to process the ultrasonic echo, for example, filtering and amplifying the ultrasonic echo.
  • the memory 103 is used to store various data and programs, for example, ultrasonic echo data can be stored in a memory.
  • the processor 105 is configured to execute the program in the memory 103, process the received data, and/or process the ultrasound echo data.
  • the processor 105 is configured to generate ultrasound image data according to the ultrasound echo data, and the ultrasound image data may be image data for displaying a two-dimensional ultrasound image, or may be used for displaying Volume data of a three-dimensional ultrasound image.
  • the processor generates one frame of two-dimensional ultrasound image data according to the echo data of each scanning plane, and directly outputs the two-dimensional ultrasound image data to the display for display.
  • the ultrasonic probe 101 transmits ultrasonic waves to the plurality of scanning planes and receives echo data of the plurality of scanning planes, and the processor 105 generates multi-frame two-dimensional images according to the echo data of the plurality of scanning planes.
  • the voxel value reflects the reflection ability of the position for ultrasonic waves.
  • the set of voxel values of each point together with the spatial positional relationship between them constitutes a three-dimensional volume data set of the measured tissue.
  • a series of processes such as smoothing, denoising, and the like can be performed on the three-dimensional volume data.
  • the processor also identifies multiple subsets from the volume data set, depending on the object.
  • the processor is further configured to render the multiple subsets to obtain multiple sub-images. For example, the plurality of subsets may be rendered in the same way using a single set of display configurations, yielding multiple sub-images with the same display configuration; alternatively, multiple different sets of display configurations may be established and used to render the subsets differentially, yielding multiple sub-images with different display configurations. The rendered sub-images of the plurality of subsets are then fused and displayed to obtain a fused image.
  • the plurality of sub-images corresponding to the rendering results of the plurality of subsets are multiplied by their respective fusion coefficients, the weighted results are superimposed, and the superimposed result is output to the display to show the fused three-dimensional ultrasound image.
  • the human-machine interaction device 106 includes a control terminal 1061 and a display 1062.
  • the user inputs instructions through the control terminal 1061 or interacts with the image output on the display 1062.
  • the control terminal 1061 can be provided with a keyboard, operation keys, a gain key, a scroll wheel, a touch screen, and the like.
  • Display 1062 is used to display various visualization data output by the processor and present it to the user in the form of images, graphics, video, text, numbers, and/or characters.
  • a scheme for displaying a three-dimensional ultrasound image is as shown in FIG. 2, and includes the following steps:
  • Step 10 Acquire ultrasonic three-dimensional volume data to obtain a volume data set. Generally, the voxel value of the measured tissue is obtained according to the multi-frame two-dimensional ultrasonic image data having a certain relative positional relationship, and a three-dimensional ultrasound volume data set is obtained.
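As a rough sketch of this reconstruction step, the code below resamples a stack of two-dimensional frames with known plane positions onto a regular voxel grid by linear interpolation along the sweep direction. It assumes parallel scan planes for simplicity; a real volume-probe sweep is fan-shaped and would require the full coordinate transformation described later for the acquisition unit.

```python
import numpy as np

def frames_to_volume(frames, plane_pos, n_slices):
    """Resample 2-D frames acquired at positions plane_pos (monotonic, in mm)
    onto a regular grid of n_slices slices by linear interpolation."""
    frames = np.asarray(frames, dtype=float)       # shape: (n_frames, H, W)
    plane_pos = np.asarray(plane_pos, dtype=float)
    target = np.linspace(plane_pos[0], plane_pos[-1], n_slices)
    n_frames, h, w = frames.shape
    volume = np.empty((n_slices, h, w))
    for i, z in enumerate(target):
        j = np.searchsorted(plane_pos, z, side="right") - 1
        j = np.clip(j, 0, n_frames - 2)
        t = (z - plane_pos[j]) / (plane_pos[j + 1] - plane_pos[j])
        volume[i] = (1 - t) * frames[j] + t * frames[j + 1]   # linear blend of neighbors
    return volume

# Example: 20 frames, 1.5 mm apart, resampled to 64 slices.
frames = np.random.rand(20, 128, 128)
volume = frames_to_volume(frames, plane_pos=np.arange(20) * 1.5, n_slices=64)
print(volume.shape)  # (64, 128, 128)
```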
  • Step 11 Identify a plurality of subsets from the volume data set. The subsets may be divided in any of the following ways:
  • the volume data set is divided into a plurality of subsets according to different objects, each subset corresponding to one detection object; the object may be an organ or tissue in the body, such as the heart, liver, or uterus, or a specific structure of a tissue, such as the fetal face, fetal limbs, placenta, or umbilical cord in the uterus.
  • An object such as a fetal face, is first identified from the volume data set, and a subset of the object is obtained, and the other parts are treated as a subset and are no longer distinguished.
  • a plurality of objects are identified from the volume data set, and a subset of the plurality of objects is obtained, and the other portions are regarded as a subset and are not distinguished.
  • any existing image recognition technique may be used to identify the subsets from the volume data set according to the objects. For example, a mathematical model trained on samples of the same type as the measured object may be used to analyze the ultrasound three-dimensional volume data, determine the three-dimensional volume data belonging to the measured object, and combine that volume data into the subset of the measured object.
  • when the ultrasound examination is performed on a fetus, the three-dimensional volume data of the fetus is obtained, and a plurality of subsets can be identified from the volume data set according to image features of the fetus. The image features of the fetus may be fetal facial features, fetal limb structure features, fetal umbilical cord features, and the like; the subset identified according to the fetal facial features is the subset of the fetal face and is used to generate the fetal facial image.
  • the fetal facial features include image characteristics in the ultrasound three-dimensional volume data corresponding to the anatomy of one or more tissue structures on the fetal face, the one or more tissue structures being selected from the fetal eyes, fetal nose, fetal forehead, fetal chin, fetal cheeks, fetal ears, fetal face contour, and fetal mouth.
  • Step 12 Render all or part of the plurality of subsets.
  • differential rendering is taken as an example. Multiple sets of different display configurations are pre-established; the contents of a display configuration include but are not limited to:
  • rendering modes such as surface mode, HDLive mode, minimum mode, maximum mode, X-Ray mode, reverse mode, etc.
  • volume data pre-processing/post-processing settings, such as gain level, smoothing level, denoising level, and the like.
  • in each display configuration, at least one item differs from the other display configurations, so that the sets of display configurations are different from one another; the multiple sets of display configurations are used to render the multiple subsets respectively, and each set of display configurations produces one kind of rendering result.
  • during rendering, each subset can be rendered with a set of display configurations different from those of the other subsets, giving it a rendering result different from the other subsets; alternatively, the multiple subsets can be divided into at least two groups, each group rendered with a set of display configurations different from the other groups, so that each group has a rendering result different from the other groups.
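A minimal sketch of what "multiple sets of display configurations" and differential rendering could look like in code is given below. The configuration fields (projection mode, gain, smoothing, tint) and the toy renderer are illustrative assumptions standing in for the surface/HDLive/minimum/maximum/X-Ray modes and the pre/post-processing settings listed above.

```python
import numpy as np

# One display configuration per subset label; at least one item differs between the sets.
display_configs = {
    1: {"mode": "max",  "gain": 1.2, "smooth": 0, "tint": (1.0, 0.8, 0.7)},  # e.g. placenta
    2: {"mode": "mean", "gain": 1.0, "smooth": 1, "tint": (0.9, 0.9, 1.0)},  # e.g. fetal face
}

def render_subset(volume, labels, label, cfg, axis=0):
    """Render one subset with its own display configuration (greatly simplified)."""
    masked = np.where(labels == label, volume, 0.0)      # keep only this subset's voxels
    if cfg["smooth"]:                                    # crude smoothing along the view axis
        masked = 0.5 * masked + 0.5 * np.roll(masked, 1, axis=axis)
    if cfg["mode"] == "max":                             # maximum-intensity projection
        img = masked.max(axis=axis)
    else:                                                # mean projection
        img = masked.mean(axis=axis)
    img = np.clip(img * cfg["gain"], 0.0, 1.0)
    return img[..., None] * np.asarray(cfg["tint"])      # per-subset tint -> RGB sub-image

volume = np.random.rand(64, 128, 128)
labels = np.random.randint(1, 3, size=volume.shape)
sub_images = {k: render_subset(volume, labels, k, cfg) for k, cfg in display_configs.items()}
print({k: v.shape for k, v in sub_images.items()})
```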
  • all or part of the plurality of sub-images may then be fused and displayed to obtain an overall three-dimensional image in which each sub-image presents a translucent display effect.
  • "translucent" here means that the volume data of a subset is itself displayed without blocking the display of the volume data of the other subsets, so that multiple sub-images can be shown simultaneously.
  • the fused display may be performed on all or part of the plurality of sub-images in various manners.
  • the fusion display is implemented by using a fusion coefficient, as shown in steps 13-14.
  • Step 13 Acquire a fusion coefficient of each subset.
  • the fusion coefficient may be a fusion coefficient preset by the system, or may be a fusion coefficient set by the user.
  • a fusion coefficient is set for each object: the fusion coefficient of the object located in front may be specified as 1 or 0.6 and that of the object located behind as 0 or 0.4, and the user can set them according to the relative positional relationship of the objects, as needed or according to their clinical meaning.
  • adaptive fusion coefficients may also be used.
  • the adaptive fusion coefficients may vary according to the position of the object, may vary according to the thickness of the object, or may vary according to the density of the object.
  • the adaptive fusion coefficients for each subset can be calculated according to various fusion rules.
  • the fusion coefficient is a value between 0 and 1; the sum of the fusion coefficients of the subsets may equal 1, but it may also be greater or less than 1.
  • Step 14 Multiply the rendering results of the plurality of subsets by their respective fusion coefficients and superimpose them for display. Multiplying the rendering results obtained for the subsets in step 12 by their respective fusion coefficients and then superimposing the weighted results is equivalent to attenuating the rendering result of each subset according to its fusion coefficient; for example, a fusion coefficient of 0.4 is equivalent to weakening that subset's rendering effect by 60%.
  • the value of the opacity parameter in the rendering parameters of each subset is 1, which is visually opaque; when the rendering result is weakened, the display effect becomes lighter and thinner, and the visual effect becomes more transparent.
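The core of steps 13-14 is a per-subset weighted sum. The sketch below shows the superimposition for fixed fusion coefficients; the concrete values 0.6 and 0.4 are only examples in the spirit of the text, and the sub-images stand in for the per-subset rendering results produced in step 12.

```python
import numpy as np

def fuse(sub_images, coefficients):
    """Multiply each subset's rendering result by its fusion coefficient and superimpose."""
    fused = None
    for label, img in sub_images.items():
        weighted = coefficients[label] * img            # attenuate this subset's rendering
        fused = weighted if fused is None else fused + weighted
    return np.clip(fused, 0.0, 1.0)

# Two rendered sub-images (RGB), e.g. a placenta portion in front and the fetal face behind.
front = np.random.rand(128, 128, 3)
back = np.random.rand(128, 128, 3)
fused = fuse({"placenta": front, "face": back}, {"placenta": 0.6, "face": 0.4})
print(fused.shape)  # both subsets remain visible, each appearing translucent
```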
  • the placenta portion 313a is located in front of the fetal face 313b, the placenta portion 313a and the fetal face 313b are rendered in the same display configuration, and the opacity of the rendering parameters is 1, so in the rendering result, A partial area of the fetal face 313b is blocked by the placental portion 313a, in which case the doctor does not see the full rendered image of the fetal face 313b.
  • both the placenta portion 313a and the fetal face 313b exhibit a translucent rendering effect.
  • although the placenta portion 313a is located in front of the fetal face 313b, it does not completely cover the fetal face image behind it, and the doctor can visually see the detailed representation of both the placenta portion 313a and the fetal face 313b in the overlapping area at the same time.
  • when the fusion coefficients are the adaptive fusion coefficients of the subsets, the transparency of different parts in the displayed visual effect differs, so that the doctor can intuitively perceive properties of the object, such as its thickness and density.
  • the ray tracing method can be used to calculate the adaptive fusion coefficient of each subset.
  • the principle of ray tracing is to follow the reverse direction of the light that reaches the viewpoint, which is equivalent to tracking rays emitted from the eye and calculating the reflection, refraction, and absorption that occur when a ray intersects objects or media in the scene.
  • the process of calculating the adaptive fusion coefficient of each subset by using the ray tracing method is as shown in FIG. 5, and includes the following steps:
  • Step 131 Calculate the voxel value of each subset on each tracking ray by using a ray tracing method.
  • a plurality of tracking rays are emitted from the viewpoint 210; the tracking rays are simulated rays that are assumed to pass through the three-dimensional images of the objects to be displayed, each tracking ray entering the three-dimensional image according to one viewing angle.
  • some tracking rays pass through multiple objects, and some tracking rays pass through only one object. The following description will be made by taking the tracking ray 220 in FIG. 4 through the first object 230 and the second object 240 as an example.
  • for example, the second object 240 is the fetal face, the first object 230 may be a placenta portion, and the placenta portion lies in front of the fetal face and blocks it.
  • the voxels that the ray passes through in the first object 230 and the second object 240 are identified, and the voxel values of the voxels along the path are obtained; the voxel values are derived from the multi-frame ultrasound echo data and reflect the intensity of the ultrasound reflection at each position.
  • where the reflection of ultrasound is weak, the ultrasound echo signal is weak and the resulting voxel value is small; for dense solids (such as bone or interventional objects), the reflection of ultrasound at their boundaries with other soft tissue is strong, the echo signal is strong, and the resulting voxel value is large.
  • Step 132 Acquire spatial distribution of each subset on the tracking ray. That is, according to the subset to which the voxel belongs, the regions in which the subsets are distributed on the tracking ray are obtained, that is, the thickness distribution of each subset on the tracking ray.
  • Step 133 Identify the spatial position of each subset along the direction in which the ray is incident.
  • the tracking ray 220 is emitted from the viewpoint 210 and passes first through the first object 230 and then through the second object 240; therefore, relative to the subset of the second object 240, the subset of the first object 230 is the spatially forward subset, and the subset of the second object 240 is the spatially rearward subset.
  • Step 134 Determine the fusion coefficient of each subset on each tracking ray; the fusion coefficient is a value between 0 and 1.
  • the fusion coefficients of the subsets on the tracking ray may be determined according to at least one of voxel values, spatial distributions, and spatial locations of the subsets on the tracking ray, and the fusion coefficients may be determined according to at least one of the following manners:
  • the spatially forward subset has a larger fusion coefficient than the spatially rearward subset, which means that the object in front appears more opaque than the object behind, so the user can judge the relative positions of the objects from their opacity.
  • a subset with larger voxel values has a larger fusion coefficient than a subset with smaller voxel values, which means that a denser solid (e.g., bone) appears more opaque than muscle and liquid, so the user can judge from the opacity which kind of tissue an object is.
  • a subset with a larger voxel distribution range on the ray has a larger fusion coefficient than a subset with a smaller distribution range, which means that thicker portions appear more opaque, so the user can judge the thickness of the object at that location from its opacity.
  • the fusion coefficient of each subset on the tracking ray 220 can be determined by the method in this step; because the fusion coefficient varies with the thickness, density and/or spatial position of the subset, it is an adaptive fusion coefficient. In the same way, the fusion coefficients of the subsets on the tracking rays of every view can be obtained.
  • the rendering result of each subset on the tracking ray is multiplied by the fusion coefficient of each subset on the tracking ray and then superimposed and displayed.
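The sketch below illustrates steps 131-134 under the simplifying assumption that the viewing direction is along the volume's first axis, so that each (row, column) position corresponds to one tracking ray. The particular way thickness, mean voxel value, and front-to-back order are weighted into a coefficient is an illustrative assumption; the method only requires that the coefficient depend on at least one of these quantities and lie between 0 and 1.

```python
import numpy as np

def adaptive_coefficients(values, labels, subset_ids, w_thick=0.4, w_dense=0.4, w_front=0.2):
    """Per-ray adaptive fusion coefficients from thickness, density and spatial order.

    values, labels: (D, H, W) volumes; each (h, w) column along axis 0 is one tracking ray
    from the viewpoint. Returns {subset_id: (H, W) coefficient map}, normalized so that
    the coefficients on each ray sum to 1."""
    d = values.shape[0]
    coeffs = {}
    for sid in subset_ids:
        mask = labels == sid                                   # voxels of this subset on each ray
        thickness = mask.sum(axis=0) / d                       # spatial distribution on the ray
        density = np.where(mask, values, 0.0).sum(axis=0) / np.maximum(mask.sum(axis=0), 1)
        # Spatial position: subsets hit earlier along the ray get a larger "front" score.
        first_hit = np.where(mask.any(axis=0), mask.argmax(axis=0), d)
        front = 1.0 - first_hit / d
        coeffs[sid] = w_thick * thickness + w_dense * density + w_front * front
    total = sum(coeffs.values()) + 1e-9
    return {sid: c / total for sid, c in coeffs.items()}      # normalize per ray

values = np.random.rand(64, 128, 128)
labels = np.random.randint(1, 3, size=values.shape)
coeffs = adaptive_coefficients(values, labels, subset_ids=(1, 2))
print({k: v.shape for k, v in coeffs.items()})  # each map is (128, 128), values in [0, 1]
```

Each subset's rendering result on a ray would then be multiplied by its coefficient map and superimposed, exactly as in the fixed-coefficient sketch above.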
  • step 13 can also be exchanged with step 12, that is, the fusion coefficients of each subset are obtained first, and then the subsets are separately rendered.
  • Step 14 may be performed after each subset has been rendered, or the fusion may be carried out during the rendering of each subset, that is, fusing while rendering.
  • in step 11, when the subsets are identified from the volume data set according to the objects, a part of the volume data may sometimes be identified as belonging both to a first subset and to a second subset; volume data identified as belonging to at least two subsets is called shared data.
  • the shared data may be rendered with the display configuration of any one of the subsets it belongs to, or with a display configuration different from that of each related subset or group; a fusion coefficient is likewise calculated for this shared data, and its rendering result is multiplied by its fusion coefficient and finally superimposed with the other objects.
  • Embodiment 2:
  • the processor is further configured to reassign volume data at the boundary between adjacent subsets from the subset to which it previously belonged into the adjacent subset, and to render the adjusted volume data using the display configuration of the adjacent subset, thereby adjusting the displayed object or region. The adjustments include but are not limited to the following:
  • in step 11, when the subsets are identified from the volume data set according to the objects, some volume data may be misidentified, so a doctor may, based on clinical experience, adjust the subset to which part of the volume data belongs.
  • a fine adjustment tool such as a brush or an eraser is required to adjust the partial volume data.
  • the user's adjustment operation can be performed by inputting instructions through the control terminal to adjust the image output on the display.
  • FIG. 7 is a schematic diagram of a control terminal.
  • the human-machine interaction device 300 includes a display 310, a control panel 320, and a touch screen 330.
  • the control panel 320 and the touch screen 330 constitute the control terminal; in some embodiments there may be no touch screen.
  • the display 310 includes a display area 311 on which various images such as a two-dimensional ultrasonic image 312 and a three-dimensional ultrasonic image 313 can be displayed.
  • the touch screen 330 includes a plurality of operable icons 331, corresponding to a plurality of different functions.
  • the control panel 320 may be provided with various operation keys, such as a keyboard 321, knobs 322-326, a gain key 327, and a scroll wheel (or trackball) 328.
  • each rendering object may correspond to one operation key.
  • a plurality of rendering objects such as a placenta portion 313a and a fetal face 313b, are displayed in the three-dimensional ultrasound image 313.
  • the placenta portion 313a corresponds to the knob 322 and the fetal face 313b corresponds to the knob 323; that is, when the user operates the knob 322, this is treated as selecting and operating the placenta portion 313a.
  • the knob 324 can be a multi-position switch for selecting the operation type, the operation types including cropping, merging, dividing-surface adjustment, brush, eraser, and the like. For example, when the knob 324 selects the brush, a painting operation can be performed on the image in the display area 311 by operating the scroll wheel 328; when the knob 324 selects the eraser, an erasing (wiping) operation can be performed on the image in the display area 311 by operating the scroll wheel 328.
  • the operation flow is described below by taking a wiping operation as an example. As shown in FIG. 9, the following steps are included:
  • Step 20 Detect an adjustment operation selected by the user. The user selects the wiping operation via knob 324 on the control panel.
  • Step 21 Detect a subset selected by the user and an area selected on the superimposed display image.
  • the user controls the movement of the cursor in the display area through the scroll wheel 328, and can move the cursor to the area where the placenta portion 313a and the fetal face 313b overlap.
  • when it is detected that the user rotates the knob 322, the data selected by the user is considered to be volume data belonging to the subset of the placenta portion 313a; when it is detected that the user rotates the knob 323, the data selected by the user is considered to be volume data belonging to the subset of the fetal face 313b.
  • Step 22 Determine the adjusted volume data according to the subset and the region selected by the user.
  • the area selected by the user can be indicated by a circle 313c, which represents a sphere whose radius equals the circle's radius; the three-dimensional volume data inside this sphere is the volume data to be adjusted. The user can change the radius of the circle 313c, and thus of the sphere, by rotating the knobs 322 and 323, thereby changing the range of the adjusted volume data.
  • Step 23 Adjust the adjusted volume data from the originally assigned subset to the adjacent subset in response to the simulated wiping operation input by the user.
  • when the user operates the scroll wheel 328, the icon on the display screen can be changed to an eraser pattern, and as the user moves the scroll wheel 328 back and forth, the adjusted volume data is reassigned from the subset to which it previously belonged into the adjacent subset.
  • if the scroll wheel 328 continues to be operated, the system again determines the adjusted volume data according to the circle 313c and again reassigns it from the previously assigned subset to the adjacent subset.
  • in this embodiment, adjusting volume data from the subset of the fetal face 313b into the subset of the placenta portion 313a is referred to as "erasing".
  • Step 24 Render the adjusted volume data by using the display configuration of the adjacent subset, so that the adjusted volume data has the same rendering effect as the newly assigned subset.
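A minimal sketch of steps 22-24 is shown below: the voxels inside the sphere indicated by the circle 313c are selected and reassigned from the fetal-face subset to the placenta subset, after which they would be re-rendered with the display configuration of their new subset. The label values and the center/radius handling are illustrative assumptions.

```python
import numpy as np

FACE, PLACENTA = 2, 1  # assumed subset labels

def erase(labels, center, radius, src=FACE, dst=PLACENTA):
    """Reassign voxels of subset `src` inside a sphere to subset `dst` ("erasing")."""
    zz, yy, xx = np.indices(labels.shape)
    inside = (zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2 <= radius**2
    adjusted = inside & (labels == src)       # the "adjusted volume data"
    labels = labels.copy()
    labels[adjusted] = dst                    # now rendered with the adjacent subset's config
    return labels, adjusted

labels = np.random.randint(1, 3, size=(64, 128, 128))
labels, adjusted = erase(labels, center=(32, 64, 64), radius=10)
print(adjusted.sum(), "voxels erased from the fetal face subset")
```

The reverse ("anti-erase") direction would be the same operation with `src` and `dst` swapped.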
  • because the present invention performs differential rendering and fusion display of a plurality of different objects, the user can see the current status of the adjusted volume data at any time, which helps the user judge whether the volume data to be adjusted exists and whether the adjustment operation is correct. For example, for a fetal facial image, the doctor may judge from clinical experience that the image of the fetal nose is defective; this defect may be caused by misidentification when the ultrasound device identified the subsets, or the fetus may indeed have a nose defect.
  • the effects of "erasing" and "anti-erasing" may also be achieved by another scheme. For example, taking the fetal face as the reference object, the fetal face corresponds to the knob 323, and operating the knob 323 is considered to be selecting and operating the fetal face 313b. The operation of "erasing" the fetal face then includes the following steps:
  • [0120] 1.1 receiving a second instruction entered by the user on the three-dimensional image.
  • the user can switch the knob 324 to "erase" by operating the knob 324, and this operation is considered to be the second instruction input by the user.
  • 1.2 identifying that the user's input corresponds to a first location on the three-dimensional image and the subset in which the first location is located.
  • the user moves the cursor to the position where "erasing" is required. When the knob 324 is switched to "erase", the cursor icon can be changed to a corresponding icon, for example a circular icon; the user can change the size of the circular icon by rotating the knob 323, thereby determining the size of the area covered by the first location selected by the user.
  • since the user operates the knob 323, the operation is considered to be an "erase" operation on the fetal face, and the subset in which the first location is located is the subset of the fetal face. [0122] 1.3 determining, according to the first location, the volume data covered by the first location within the subset in which the first location is located; for the circular cursor icon, the volume data covered by the first location is the volume data of the fetal face inside a sphere whose radius equals the radius of the circular icon.
  • [0123] 1.4 reducing the fusion coefficient of the sub-image corresponding to the subset in which the first location is located; in this embodiment, reducing the fusion coefficient of the fetal face sub-image makes the fetal facial image look more transparent, thereby "erasing" the fetal face image.
  • alternatively, only the fusion coefficient of the part of the sub-image corresponding to the volume data covered by the first location may be reduced, that is, the fusion coefficient of the image of the area covered by the first location within the fetal facial image, so that the image of that area appears more transparent, thereby achieving a partial "erasing" of the fetal facial image.
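As a sketch of this alternative, partial-erase scheme, the fusion-coefficient map of the fetal-face sub-image can be attenuated only inside the circular area covered by the first location, instead of reassigning voxels. The attenuation factor of 0.2 is an illustrative assumption; "anti-erasing" would multiply by a factor greater than 1, clipped so the coefficient stays at most 1.

```python
import numpy as np

def erase_locally(coeff_map, center, radius, factor=0.2):
    """Reduce the fusion coefficient of a sub-image only inside the selected circle."""
    yy, xx = np.indices(coeff_map.shape)
    inside = (yy - center[0])**2 + (xx - center[1])**2 <= radius**2
    out = coeff_map.copy()
    out[inside] *= factor               # the covered area becomes more transparent
    return out

face_coeff = np.full((128, 128), 0.5)   # fusion-coefficient map of the fetal face sub-image
face_coeff = erase_locally(face_coeff, center=(64, 64), radius=15)
print(face_coeff.min(), face_coeff.max())  # reduced inside the circle, unchanged elsewhere
```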
  • 2.1 Receive a third instruction entered by the user on the three-dimensional image.
  • the user can switch the knob 324 to "anti-erase" by operating the knob 324, and this operation is considered to be the third instruction input by the user.
  • 2.2 identifying that the user's input corresponds to a second location on the three-dimensional image and the subset in which the second location is located.
  • the user moves the cursor to the position where "anti-erase” is required by moving the cursor.
  • the user can change the size of the circular icon to determine the size of the area covered by the second position selected by the user. Since the user operates the knob 323, the user is considered to be "anti-erasing" the face of the fetus, and the subset at which the second position is located is a subset of the fetal face.
  • 2.3 determining, according to the second location, the volume data covered by the second location within the subset in which the second location is located; for the circular cursor icon, this is the volume data of the fetal face inside a sphere whose radius equals the radius of the circular icon.
  • [0129] 2.4 increasing the fusion coefficient of the sub-image corresponding to the subset in which the second location is located, or increasing the fusion coefficient of the part of the sub-image corresponding to the volume data covered by the second location. That is, the fusion coefficient of the fetal face sub-image is increased so that the fetal face image looks more opaque, or the fusion coefficient of the image of the area covered by the second location within the fetal facial image is increased so that the image of that area looks more opaque, thereby "anti-erasing" the fetal face image.
  • Embodiment 3: when the three-dimensional image is rotated to a certain angle, or observed from a certain viewing angle, the fetal face may be obscured by other structures, and it is therefore desirable to remove the structures blocking the fetal face with a single operation.
  • a control key (such as the key 329) is provided on the control panel and corresponds to a one-key occlusion-removal function; when the fetal face is completely or partially blocked, the user presses the key 329 to input an instruction to remove the occlusion.
  • a process for removing a occlusion by a key is as shown in FIG. 10, and includes the following steps:
  • Step 30 Acquire ultrasonic three-dimensional volume data to obtain a volume data set.
  • Step 31 Determine, according to the facial features of the fetus, the depth of each voxel on the contour of the fetal face in the volume data set to form a depth change surface of the contour of the fetal face.
  • Step 32 The volume data set is segmented into at least two subsets based on the depth variation surface, and one of the subsets includes three-dimensional volume data of the fetal face.
  • Step 33 Rendering all or part of the plurality of subsets to obtain a plurality of sub-images.
  • Step 34 Perform fusion display on all or part of the plurality of sub-images.
  • Step 35 Receive a first instruction generated by the user through a single operation, for example by pressing the key 329 provided on the control panel 320.
  • Step 36 According to the first instruction, reduce the fusion coefficients of the sub-images corresponding to the subsets other than the subset of the fetal face, so that the display effect of the sub-images corresponding to the other subsets becomes more transparent and the fetal face image becomes more prominent.
  • This embodiment can remove the obstruction of the fetal face by a single operation, and the operation of the user (for example, a doctor) is simplified to the utmost extent.
  • in this embodiment, the depth variation surface is used in steps 31-32 to separate the subset of the fetal face from the subsets of the other structures.
  • in other embodiments, the fetal face subset and the subsets of the other structures may be identified by other identification means, so that the fusion coefficients of the sub-images corresponding to the subsets other than the fetal face subset can still be reduced in the subsequent step 36.
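The one-key behaviour of steps 35-36 amounts to attenuating, in a single step, the fusion coefficients of every subset other than the fetal-face subset. The sketch below assumes the subsets have already been identified (for example via the depth variation surface) and uses illustrative subset names and an illustrative attenuation factor.

```python
def remove_occlusion(fusion_coeffs, face_subset="face", factor=0.1):
    """One-key occlusion removal: attenuate every subset except the fetal face."""
    return {name: (c if name == face_subset else c * factor)
            for name, c in fusion_coeffs.items()}

coeffs = {"face": 0.5, "placenta": 0.3, "uterine_wall": 0.2}  # assumed subset names
print(remove_occlusion(coeffs))  # the non-face subsets become nearly transparent
```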
  • Embodiment 4:
  • this embodiment further provides a three-dimensional ultrasound image display system, as shown in FIG. 11.
  • the system includes an acquisition unit 410, an identification unit 420, a rendering unit 430, and a fusion unit 440.
  • the obtaining unit 410 is configured to acquire ultrasound three-dimensional volume data and obtain a volume data set.
  • the acquiring unit 410 is responsible for collecting the ultrasonic three-dimensional volume data, and acquiring the two-dimensional image in a series of scanning planes, and integrating according to the three-dimensional spatial relationship, thereby realizing the volume data collection in the three-dimensional space.
  • the selection and control of the scanning plane in which the two-dimensional image is acquired can be realized by a volume probe or an area array probe.
  • the inside of the volume probe consists of a conventional one-dimensional probe array of elements and a built-in stepper motor that oscillates the sequence of the array elements.
  • the stepping motor can oscillate the scanning plane of the array of elements in the normal direction to realize the scanning of the three-dimensional space.
  • the area array probe has thousands of array elements arranged in a matrix, which can directly transmit and receive in different directions of the three-dimensional space, realizing rapid volume data acquisition.
  • the two-dimensional images acquired by each scanning plane are reconstructed according to their spatial relationship.
  • the coordinate transformation is performed according to the spatial position of each plane, and the voxel values of each point in the three-dimensional volume data are obtained by interpolation.
  • the three-dimensional volume data output by the acquisition unit 410 can also be subjected to a series of processing such as smoothing, denoising, and the like.
  • the identification unit 420 divides the input three-dimensional volume data into a plurality of regions.
  • the identification unit 420 is configured to identify a plurality of subsets from the volume data set according to different objects, wherein at least one of the subsets corresponds to one object.
  • the identification unit 420 may divide the volume data according to geometric shapes, or according to objects or tissue structures; the ways of dividing include but are not limited to:
  • Cropping of volume data such as clipping based on geometric shapes such as plane, cuboid, sphere, ellipsoid, etc.
  • Identification of key parts or structures in the volume data, such as identification of the adult cardiac endocardium or ventricles/atria, fetal face recognition, identification of the endometrium or uterine adnexa, etc.;
  • Segmentation of volume data such as dividing fetal body data into fetal regions and amniotic fluid regions, and the like.
  • the rendering unit 430 is configured to render part or all of the plurality of subsets to obtain a plurality of sub-images.
  • the rendering unit 430 is configured to establish multiple sets of different display configurations and to use them to differentially render the multiple subsets.
  • the fusion unit 440 is configured to fuse part or all of the plurality of sub-images obtained after the rendering unit 430 is rendered, thereby obtaining a three-dimensional image of the superimposed display.
  • the fusion unit 440 obtains the fusion coefficient of each subset, multiplies the rendering results of the multiple subsets by their respective fusion coefficients, and superimposes them for display to obtain the final displayed image.
  • the way to integrate includes but is not limited to:
  • Each region rendering result is directly superimposed according to a certain fusion ratio, and the user may use a preset fusion ratio combination or specify the fusion ratio of each region by itself;
  • the rendering results of each region are superimposed according to an adaptive fusion ratio, and the fusion ratio can be calculated according to the spatial positional relationship of each region, the voxel value distribution, the preset weight, and the preset fusion rule.
  • the user can also change the way in which the adaptive fusion ratio is calculated (note that in the adaptive fusion ratio, different voxels in the same divided region have different proportional coefficients);
  • each area is displayed in a certain order, and the rendering result at the rear will be blocked by the result in front (the user can also dynamically adjust the order of each area);
  • fusion modes 1-2 make each divided region appear in semi-transparent form in the final fused result, whereas fusion modes 3-5 have no semi-transparent display, and some divided regions may therefore have part of their rendering obscured by other divided regions.
  • fusion modes 3-5 can also be understood as follows: at a given spatial position, the fusion coefficient of one of the divided regions is 1 and the fusion coefficients of the other divided regions are 0. Regardless of the fusion method, the final displayed image can present the detailed appearance of the different divided regions.
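To make the difference between fusion modes 1-2 and fusion modes 3-5 concrete, the sketch below composites two region renderings once with fixed fusion ratios (both regions remain visible in semi-transparent form) and once with a winner-take-all rule in which, at each pixel, the front region has coefficient 1 and the other region 0; the ratios and the mask are illustrative.

```python
import numpy as np

front = np.random.rand(128, 128)
back = np.random.rand(128, 128)
front_mask = np.zeros((128, 128), dtype=bool)
front_mask[:, :64] = True                   # where the front region covers the image

# Modes 1-2: both regions stay visible in semi-transparent form.
translucent = 0.6 * front + 0.4 * back

# Modes 3-5: per pixel, the front region has coefficient 1 and the other region 0.
occluding = np.where(front_mask, front, back)

print(translucent.shape, occluding.shape)
```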
  • the three-dimensional ultrasound image display system further includes an editing unit 450 and a setting unit 460.
  • the editing unit 450 is responsible for adjusting the divided regions, for example for reassigning volume data at the boundary between adjacent subsets from the subset to which it previously belonged into the adjacent subset, and rendering the adjusted volume data using the display configuration of the adjacent subset. The user can make overall or partial adjustments to the results of the region division as needed.
  • the adjustment methods include but are not limited to:
  • the editing unit 450 requires the user to complete the adjustment of the divided regions interactively, according to the requirements and the target, in cooperation with the rendering unit 430 and the fusion unit 440. Since the rendering unit 430 and the fusion unit 440 can simultaneously display the range and details of each divided region, it is intuitive and convenient to adjust the division range of each region interactively.
  • the setting unit 460 is responsible for adjusting the display effect of each divided region, for example for setting at least one of the following: the subsets shown on the final display interface, the display configuration of each subset, the fusion coefficient of each subset, and the method for calculating the fusion coefficients.
  • the range that the setting unit 460 can set includes but is not limited to:
  • the setting unit 460 requires the user to complete the adjustment of the display effect interactively, according to the requirements and the target, in cooperation with the rendering unit 430 and the fusion unit 440.
  • the display-effect adjustment may be performed separately for individual divided regions, or for several or all divided regions. Since the rendering unit 430 and the fusion unit 440 can display the detailed appearance of each divided region, it is intuitive and convenient to adjust the display effect of each region interactively.
  • all or part of the functions of the various methods in the above embodiments may be implemented by hardware or by a computer program.
  • the program may be stored in a computer readable storage medium, and the storage medium may include: a read only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, etc.
  • the computer executes the program to implement the above functions.
  • the program is stored in the memory of the device, and when the program in the memory is executed by the processor, all or part of the above functions can be realized.
  • the program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and saved to the memory of the local device by downloading or copying, or used to update the system of the local device.

Abstract

An ultrasonic device and a three-dimensional ultrasonic image display method therefor. The method comprises: performing differential rendering on multiple subsets in an ultrasonic three-dimensional volume data set by using multiple sets of display configurations; multiplying the rendering results of the multiple subsets by their respective fusion coefficients and superimposing them for display; reducing the opacity of the rendering result of each subset by means of the fusion coefficient; and displaying rendered images simultaneously for multiple regions or objects that may occlude each other from a certain perspective. The simultaneous display helps the user observe the details of the various regions or objects at the same time, so as to gain a better grasp of the overall presentation of the volume data; furthermore, the user can intuitively see the division of the various regions or objects, so as to make subsequent adjustments to that division.

Description

Ultrasonic device and three-dimensional ultrasonic image display method thereof

Technical Field

[0001] The present invention relates to an ultrasound apparatus, and in particular to a technique for displaying a three-dimensional ultrasound image on an ultrasound apparatus.

Background

[0002] During the use of a medical ultrasound instrument, it is often necessary to acquire three-dimensional volume data, form a volume data set, and render the volume data set for display, so as to show a three-dimensional ultrasound image of the measured tissue. In some cases, the ultrasound instrument also divides the volume data into regions, for example extracting objects or regions of interest (such as key clinical structures and parts like the fetal face) from the volume data, or dividing the volume data into multiple subsets corresponding to different objects or regions of interest, and then rendering and displaying each object or region. Whether the three-dimensional ultrasound image is rendered as a whole or each object or region is rendered separately, medical ultrasound instruments currently use a single, uniform rendering. The problem with such a display is that when multiple objects overlap front-to-back at a certain viewing angle, usually only the object or region of interest selected by the doctor is rendered, so the doctor sees the details of only one object and cannot intuitively see the appearance of several objects at the same time.

Technical Problem

[0003] The present application provides an ultrasound apparatus and a three-dimensional ultrasound image display method thereof, so that a doctor can intuitively and simultaneously see rendered images of a plurality of objects on the display interface.

Solution to Problem

Technical Solution
[0004] According to a first aspect, an embodiment provides a three-dimensional ultrasound image display method, including:
[0005] acquiring ultrasound three-dimensional volume data to obtain a volume data set;
[0006] identifying a plurality of subsets from the volume data set;
[0007] establishing multiple sets of different display configurations;
[0008] differentially rendering the plurality of subsets using the multiple sets of display configurations;
[0009] acquiring a fusion coefficient for each subset; and
[0010] multiplying the rendering results of the plurality of subsets by their respective fusion coefficients and superimposing them for display.
[0011] According to a second aspect, an embodiment provides a three-dimensional ultrasound image display method, including:
[0012] establishing multiple sets of different display configurations;
[0013] differentially rendering a plurality of subsets of an ultrasound three-dimensional volume data set using the multiple sets of display configurations;
[0014] acquiring a fusion coefficient for each subset; and
[0015] multiplying the rendering results of the plurality of subsets by their respective fusion coefficients and superimposing them for display.
[0016] According to a third aspect, another embodiment provides a three-dimensional ultrasound image display method, including:
[0017] acquiring ultrasound three-dimensional volume data for fetal examination to obtain a volume data set;
[0018] identifying a plurality of subsets from the volume data set according to image features of the fetus;
[0019] rendering part or all of the plurality of subsets to obtain a plurality of sub-images;
[0020] fusing part or all of the plurality of sub-images to obtain a three-dimensional image; and
[0021] displaying the three-dimensional image.
[0022] According to a fourth aspect, an embodiment provides an ultrasound apparatus, including:
[0023] an ultrasound probe for transmitting ultrasound waves to a region of interest within biological tissue and receiving echoes of the ultrasound waves;
[0024] a transmit/receive sequence controller for generating a transmit sequence and/or a receive sequence, outputting the transmit sequence and/or the receive sequence to the ultrasound probe, and controlling the ultrasound probe to transmit ultrasound waves to the region of interest and receive echoes of the ultrasound waves;
[0025] a processor, configured to generate ultrasound three-dimensional volume data according to the ultrasound echo data to obtain a volume data set, identify a plurality of subsets from the volume data set, establish multiple sets of different display configurations, differentially render the plurality of subsets using the multiple sets of display configurations, acquire a fusion coefficient for each subset, and multiply the rendering results of the plurality of subsets by their respective fusion coefficients and superimpose them for display; and
[0026] a human-machine interaction device including a display for displaying the ultrasound rendered image.
[0027] According to a fifth aspect, another embodiment provides an ultrasound apparatus, including:
[0028] an ultrasound probe for transmitting ultrasound waves to a region of interest within biological tissue and receiving echoes of the ultrasound waves;
[0029] a transmit/receive sequence controller for generating a transmit sequence and/or a receive sequence, outputting the transmit sequence and/or the receive sequence to the ultrasound probe, and controlling the ultrasound probe to transmit ultrasound waves to the region of interest and receive echoes of the ultrasound waves;
[0030] a processor, configured to acquire ultrasound three-dimensional volume data for fetal examination to obtain a volume data set, identify a plurality of subsets from the volume data set according to image features of the fetus, render part or all of the plurality of subsets to obtain a plurality of sub-images, fuse part or all of the plurality of sub-images to obtain a three-dimensional image, and output the three-dimensional image to a display for display; and
[0031] a human-machine interaction device including a display for displaying the ultrasound three-dimensional image.
[0032] According to a sixth aspect, an embodiment provides an ultrasound apparatus, including:
[0033] a memory for storing a program; and
[0034] a processor, configured to implement the above method by executing the program stored in the memory.
[0035] According to a seventh aspect, an embodiment provides a computer-readable storage medium including a program that can be executed by a processor to implement the above method.
[0036] According to an eighth aspect, an embodiment provides a three-dimensional ultrasound image display system, including:
[0037] an acquisition unit for acquiring ultrasound three-dimensional volume data and obtaining a volume data set;
[0038] an identification unit for identifying a plurality of subsets from the volume data set;
[0039] a rendering unit for establishing multiple sets of different display configurations and differentially rendering the plurality of subsets using the multiple sets of display configurations; and
[0040] a fusion unit for acquiring the fusion coefficient of each subset, multiplying the rendering results of the plurality of subsets by their respective fusion coefficients, and superimposing them for display.
发明的有益效果  Advantageous effects of the invention
有益效果  Beneficial effect
[0041] 依据上述实施例, 使用多套显示配置对多个子集进行区别性渲染, 并对渲染结果进行融合显示, 通过融合系数达到了与降低各子集的不透明度渲染参数相当的显示效果, 使得各区域或对象呈现半透明显示效果, 对于具有重叠的区域或对象, 可同时显示其渲染图像, 这种同时显示有利于用户同时观察各区域或对象的细节, 以对体数据的整体表现有更好的掌握, 同时用户也可直观地看到各区域或对象的划分情况, 以便后续对区域或对象的划分作出调整。  [0041] According to the above embodiments, multiple sets of display configurations are used to differentially render multiple subsets, and the rendering results are fused for display. The fusion coefficients achieve a display effect comparable to reducing the opacity rendering parameter of each subset, so that each region or object presents a translucent display effect. Regions or objects that overlap can have their rendered images displayed at the same time; this simultaneous display helps the user observe the details of each region or object at the same time and gain a better grasp of the overall representation of the volume data, and the user can also intuitively see how the regions or objects have been divided, so that the division of regions or objects can be adjusted subsequently.
对附图的简要说明  Brief description of the drawings
附图说明 DRAWINGS [0042] 图 1为一种实施例中超声设备的结构示意图; [0042] FIG. 1 is a schematic structural view of an ultrasonic device in an embodiment;
[0043] 图 2为一种实施例中对三维超声图像进行显示的流程图; [0043] FIG. 2 is a flow chart of displaying a three-dimensional ultrasound image in an embodiment;
[0044] 图 3为现有技术中对多个对象进行渲染显示的示意图; [0044] FIG. 3 is a schematic diagram of rendering and displaying multiple objects in the prior art;
[0045] 图 4为本发明一种实施例中对多个对象进行渲染显示的示意图; [0045] FIG. 4 is a schematic diagram of rendering and displaying multiple objects in an embodiment of the present invention;
[0046] 图 5为一种实施例中光线跟踪法流程图; [0046] FIG. 5 is a flow chart of a ray tracing method in an embodiment;
[0047] 图 6为一种实施例中光线跟踪法的原理图; [0047] FIG. 6 is a schematic diagram of the principle of the ray tracing method in an embodiment;
[0048] 图 7为一种实施例中控制终端的结构示意图; [0048] FIG. 7 is a schematic structural diagram of a control terminal in an embodiment;
[0049] 图 8为一种实施例中对两个对象的叠加显示的部分进行调整的示意图; [0049] FIG. 8 is a schematic diagram of adjusting a portion of the superimposed display of two objects in an embodiment;
[0050] 图 9为一种实施例中对分区进行调整的流程图; [0050] FIG. 9 is a flow chart of adjusting a partition in an embodiment;
[0051] 图 10为一种实施例中一键去除遮挡物的流程图; [0051] FIG. 10 is a flow chart of one-key removal of an occlusion in an embodiment;
[0052] 图 11为一种实施例中三维超声图像显示系统的结构示意图。 [0052] FIG. 11 is a schematic structural diagram of a three-dimensional ultrasound image display system in an embodiment.
本发明的实施方式 Embodiments of the invention
[0053] 具体实施方式 DETAILED DESCRIPTION
[0054] 下面通过具体实施方式结合附图对本发明作进一步详细说明。 其中不同实施方式中类似元件采用了相关联的类似的元件标号。 在以下的实施方式中, 很多细节描述是为了使得本申请能被更好的理解。 然而, 本领域技术人员可以毫不费力的认识到, 其中部分特征在不同情况下是可以省略的, 或者可以由其他元件、 材料、 方法所替代。 在某些情况下, 本申请相关的一些操作并没有在说明书中显示或者描述, 这是为了避免本申请的核心部分被过多的描述所淹没, 而对于本领域技术人员而言, 详细描述这些相关操作并不是必要的, 他们根据说明书中的描述以及本领域的一般技术知识即可完整了解相关操作。  [0054] The present invention is further described in detail below through specific embodiments in conjunction with the accompanying drawings. Similar elements in different embodiments use associated similar reference numerals. In the following embodiments, many details are described so that the present application can be better understood. However, those skilled in the art will readily recognize that some of the features may be omitted in different situations or may be replaced by other elements, materials, or methods. In some cases, some operations related to the present application are not shown or described in the specification, in order to prevent the core of the present application from being overwhelmed by excessive description; for those skilled in the art, a detailed description of these related operations is not necessary, as they can fully understand them from the description in the specification and the general technical knowledge in the art.
[0055] 另外, 说明书中所描述的特点、 操作或者特征可以以任意适当的方式结合形成 各种实施方式。 同吋, 方法描述中的各步骤或者动作也可以按照本领域技术人 员所能显而易见的方式进行顺序调换或调整。 因此, 说明书和附图中的各种顺 序只是为了清楚描述某一个实施例, 并不意味着是必须的顺序, 除非另有说明 其中某个顺序是必须遵循的。  In addition, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. In the meantime, the steps or actions in the method description can also be sequentially changed or adjusted in a manner apparent to those skilled in the art. Therefore, the various orders in the specification and the drawings are only for the purpose of describing a particular embodiment, and are not intended to be a necessary order unless otherwise indicated.
[0056] 本文中为部件所编序号本身, 例如"第一"、 "第二"等, 仅用于区分所描述的对象, 不具有任何顺序或技术含义。 而本申请所说"连接"、 "联接", 如无特别说明, 均包括直接和间接连接 (联接) 。 [0056] The serial numbers assigned to components herein, such as "first", "second", etc., are used only to distinguish the described objects and do not carry any order or technical meaning. The terms "connected" and "coupled" in this application include both direct and indirect connections (couplings) unless otherwise specified.
[0057]  [0057]
[0058] 实施例一:  [0058] Embodiment 1:
[0059] 请参考图 1, 超声设备 100包括超声探头 101、 发射 /接收序列控制模块 102、 回 波处理模块 104、 处理器 105、 存储器 103和人机交互装置 106。 处理器 105与发射 / 接收序列控制器 102、 存储器 103和人机交互装置 106分别连接, 超声探头 101与 发射 /接收序列控制器 102连接, 超声探头 101还与回波处理模块 104连接, 回波处 理模块 104的输出端与处理器 105连接。  Referring to FIG. 1, the ultrasound apparatus 100 includes an ultrasound probe 101, a transmit/receive sequence control module 102, an echo processing module 104, a processor 105, a memory 103, and a human-machine interaction device 106. The processor 105 is connected to the transmitting/receiving sequence controller 102, the memory 103 and the human-machine interaction device 106, respectively, the ultrasound probe 101 is connected to the transmitting/receiving sequence controller 102, and the ultrasonic probe 101 is also connected to the echo processing module 104, and the echo is The output of the processing module 104 is coupled to the processor 105.
[0060] 超声探头 101用于向生物组织 108内的感兴趣区域发射超声波, 并接收该超声波 的回波。 本实施例中, 超声探头 101可以是容积探头或者面阵探头。 容积探头内 部是由一个常规的一维探头阵元序列以及一个带动阵元序列摆动的内置步进电 机组成。 步进电机可使阵元序列的扫描平面沿其法线方向来回摆动, 向不同的 扫描平面发射超声波, 并接收其回波, 从而得到多个扫描平面的回波数据, 从 而实现三维空间的扫描。 面阵探头则具有上千个阵元, 以矩阵的形式排列, 可 以向三维空间的不同方向直接进行发射与接收, 同吋接收不同方向的扫描平面 返回的超声回波数据, 从而实现快速的三维空间扫描。  [0060] The ultrasound probe 101 is configured to emit ultrasound waves to a region of interest within the biological tissue 108 and receive echoes of the ultrasound waves. In this embodiment, the ultrasonic probe 101 may be a volume probe or an area array probe. The inside of the volume probe consists of a conventional one-dimensional probe array of elements and a built-in stepper motor that oscillates the sequence of the array elements. The stepping motor can oscillate the scanning plane of the array sequence along its normal direction, transmit ultrasonic waves to different scanning planes, and receive echoes thereof, thereby obtaining echo data of a plurality of scanning planes, thereby realizing scanning in a three-dimensional space. . The area array probe has thousands of array elements arranged in the form of a matrix, which can directly transmit and receive in different directions of the three-dimensional space, and simultaneously receive ultrasonic echo data returned by the scanning planes in different directions, thereby realizing fast three-dimensional. Space scan.
[0061] 发射 /接收序列控制器 102用于产生发射序列和 /或接收序列, 并将发射序列和 / 或接收序列输出至超声探头, 控制超声探头向感兴趣区域发射超声波并接收该 超声波的回波。 发射序列用于提供超声探头 101中发射用的换能器数和向生物组 织发射超声波的参数 (例如幅度、 频率、 发波次数、 发波角度、 波型等) , 接 收序列用于提供超声探头 101中接收用的换能器数以及其接收回波的参数 (例如 接收的角度、 深度等) 。 探头的类型不同, 发射序列和接收序列也不同。  [0061] The transmit/receive sequence controller 102 is configured to generate a transmit sequence and/or a receive sequence, and output the transmit sequence and/or the receive sequence to the ultrasound probe, and control the ultrasound probe to transmit the ultrasound to the region of interest and receive the ultrasound back wave. The emission sequence is used to provide the number of transducers for transmission in the ultrasound probe 101 and parameters for transmitting ultrasound to the biological tissue (eg amplitude, frequency, number of waves, wave angle, waveform, etc.), and the receiving sequence is used to provide an ultrasound probe The number of transducers received in 101 and the parameters of the echoes they receive (eg, received angle, depth, etc.). The type of probe is different, and the transmission sequence and reception sequence are also different.
[0062] 回波处理模块 104用于对超声回波进行处理, 例如对超声回波进行滤波、 放大、 波束合成等处理。 [0062] The echo processing module 104 is configured to process the ultrasonic echoes, for example, filtering, amplifying, and beamforming the ultrasonic echoes.
[0063] 存储器 103用于存储各种数据和程序, 例如可将超声波回波数据存储在存储器 中。  [0063] The memory 103 is used to store various data and programs, for example, ultrasonic echo data can be stored in a memory.
[0064] 处理器 105用于执行存储器 103中的程序、 对接收的数据进行处理和/或对超声设备的各部分进行控制, 本发明实施例中, 处理器 105用于根据超声回波数据生成超声图像数据, 超声图像数据可以是用于显示二维超声图像的图像数据, 也可以是用于显示三维超声图像的体数据。 处理器根据每个扫描平面的回波数据生成一帧二维超声图像数据, 可直接将该二维超声图像数据输出到显示器进行显示。 当需要生成被测组织的三维图像时, 超声探头 101向多个扫描平面发射超声波并接收多个扫描平面的回波数据, 处理器 105根据多个扫描平面的回波数据生成多帧二维图像, 根据各扫描平面的空间位置关系进行图像重建, 例如通过坐标转换、 插值等方法得到三维坐标系中各点的体素值, 对于超声图像而言, 体素值反映该位置对于超声波的反射能力, 各点的体素值连同它们之间的空间位置关系的集合组成被测组织的三维体数据集。 在较优的实施例中, 还可以对三维体数据进行平滑、 去噪等一系列处理。 处理器还根据对象的不同, 从体数据集中识别出多个子集。 处理器还用于对多个子集进行渲染获得多个子图像, 例如, 可以采用一套显示配置对多个子集进行同样的渲染获得多个显示配置相同的子图像, 也可以采用多套不同的显示配置, 使用多套显示配置对多个子集进行区别性渲染获得多个显示配置不同的子图像。 然后将渲染后的多个子集的子图像进行融合显示获得融合后的图像, 例如根据得到的各子集的融合系数, 将多个子集的渲染结果对应的多个子图像乘以各自的融合系数后进行叠加, 将叠加后的结果输出到显示器以显示融合的三维超声图像。  [0064] The processor 105 is configured to execute the program in the memory 103, process the received data, and/or control the various parts of the ultrasound apparatus. In the embodiment of the present invention, the processor 105 is configured to generate ultrasound image data according to the ultrasound echo data; the ultrasound image data may be image data for displaying a two-dimensional ultrasound image, or volume data for displaying a three-dimensional ultrasound image. The processor generates one frame of two-dimensional ultrasound image data according to the echo data of each scanning plane and may output the two-dimensional ultrasound image data directly to the display. When a three-dimensional image of the measured tissue needs to be generated, the ultrasound probe 101 transmits ultrasound waves to multiple scanning planes and receives echo data of the multiple scanning planes, and the processor 105 generates multiple frames of two-dimensional images from the echo data of the multiple scanning planes and performs image reconstruction according to the spatial positional relationship of the scanning planes, for example obtaining the voxel value of each point in a three-dimensional coordinate system by coordinate conversion, interpolation, and the like. For an ultrasound image, the voxel value reflects the ability of that position to reflect ultrasound; the set of voxel values of the points, together with the spatial positional relationship between them, constitutes the three-dimensional volume data set of the measured tissue. In a preferred embodiment, the three-dimensional volume data may further undergo a series of processing such as smoothing and denoising. The processor also identifies multiple subsets from the volume data set according to different objects. The processor is further configured to render the multiple subsets to obtain multiple sub-images; for example, one set of display configurations may be used to render the multiple subsets in the same way to obtain multiple sub-images with the same display configuration, or multiple sets of different display configurations may be used to differentially render the multiple subsets to obtain multiple sub-images with different display configurations. The rendered sub-images of the multiple subsets are then fused and displayed to obtain a fused image; for example, according to the obtained fusion coefficients of the subsets, the sub-images corresponding to the rendering results of the multiple subsets are multiplied by their respective fusion coefficients and superimposed, and the superimposed result is output to the display to show the fused three-dimensional ultrasound image.
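As a non-limiting illustration of the reconstruction just described, the following Python sketch stacks equally spaced parallel scan-plane frames into a voxel grid, linearly interpolating extra slices between neighbouring planes. It assumes an idealised parallel-plane geometry (a swept volume probe would additionally need the coordinate conversion mentioned above); the function and variable names are illustrative only.

    import numpy as np

    def frames_to_volume(frames, upsample=2):
        # Stack parallel 2-D scan frames into a voxel grid, inserting
        # `upsample` linearly blended slices between neighbouring frames.
        frames = [np.asarray(f, dtype=np.float32) for f in frames]
        slices = []
        for a, b in zip(frames[:-1], frames[1:]):
            for k in range(upsample):
                t = k / float(upsample)
                slices.append((1.0 - t) * a + t * b)   # linear blend between planes
        slices.append(frames[-1])
        return np.stack(slices, axis=0)                # shape: (planes, rows, cols)

    # toy usage: three 4x4 frames -> a 5x4x4 interpolated volume
    vol = frames_to_volume([np.zeros((4, 4)), np.ones((4, 4)), 2 * np.ones((4, 4))])
    print(vol.shape)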
[0065] 人机交互装置 106包括控制终端 1061和显示器 1062, 用户通过控制面板 1061输 入指令或与显示器 1062上输出的图像进行交互, 控制终端 1061上可设置有键盘 、 操作键、 增益键、 滚轮或触控屏等。 显示器 1062用于显示处理器输出的各种 可视化数据, 并以图像、 图形、 视频、 文本、 数字和 /或字符的方式呈现给用户  [0065] The human-machine interaction device 106 includes a control terminal 1061 and a display 1062. The user inputs an instruction through the control panel 1061 or interacts with an image output on the display 1062. The control terminal 1061 can be provided with a keyboard, an operation key, a gain key, and a scroll wheel. Or touch screen, etc. Display 1062 is used to display various visualization data output by the processor and present it to the user in the form of images, graphics, video, text, numbers, and/or characters.
[0066] 基于上述超声设备, 在一种实施例中, 对三维超声图像进行显示的方案如图 2 所示, 包括以下步骤: [0066] Based on the above ultrasonic device, in one embodiment, a scheme for displaying a three-dimensional ultrasound image is as shown in FIG. 2, and includes the following steps:
[0067] 步骤 10, 获取超声三维体数据, 得到体数据集。 通常情况下, 根据具有一定相对位置关系的多帧二维超声图像数据得到被测组织的体素值, 得到三维超声体数据集。 [0067] Step 10: acquire ultrasound three-dimensional volume data to obtain a volume data set. Generally, the voxel values of the measured tissue are obtained from multiple frames of two-dimensional ultrasound image data having a certain relative positional relationship, yielding a three-dimensional ultrasound volume data set. [0068] 步骤 11, 从体数据集中识别出多个子集。 在划分子集时, 可采用以下方法中的任一种: [0068] Step 11: identify multiple subsets from the volume data set. When dividing into subsets, any of the following methods may be used:
[0069] 将体数据集根据对象的不同识别为多个子集, 每个子集对应着一个检测对象, 对象可以是身体内的一个器官组织, 也可以是组织的某个特定结构, 例如对象 可以是心脏、 肝脏、 子宫等组织, 也可以是子宫内的胎儿脸部、 胎儿肢体、 胎 盘、 脐带等特定结构。  [0069] The volume data set is identified as a plurality of subsets according to different objects, and each subset corresponds to one detection object, and the object may be an organ tissue in the body, or may be a specific structure of the tissue, for example, the object may be Tissues such as the heart, liver, and uterus may also be specific structures such as the fetal face, fetal limbs, placenta, and umbilical cord in the uterus.
[0070] 先从体数据集中识别出某个对象, 例如胎儿脸部, 得到该对象的子集, 其它部 分作为一个子集, 不再区分。  [0070] An object, such as a fetal face, is first identified from the volume data set, and a subset of the object is obtained, and the other parts are treated as a subset and are no longer distinguished.
[0071] 从体数据集中识别出多个对象, 得到该多个对象的子集, 其它部分作为一个子 集, 不再区分。 [0071] A plurality of objects are identified from the volume data set, and a subset of the plurality of objects is obtained, and the other portions are regarded as a subset and are not distinguished.
[0072] 从体数据集中根据对象识别子集可采用已有的任一种图像识别技术, 例如: [0073] 采用经与被测对象相同类型的样本训练而成的数学模型对超声三维体数据进行分析, 确定归属于被测对象的三维体数据, 这些三维体数据组合该被测对象的子集。  [0072] Any existing image recognition technique may be used to identify subsets from the volume data set according to the objects, for example: [0073] a mathematical model trained on samples of the same type as the measured object is used to analyse the ultrasound three-dimensional volume data and determine the three-dimensional volume data belonging to the measured object; these three-dimensional volume data together form the subset of the measured object.
[0074] 根据被测对象特征从超声三维体数据中采用图像处理和/或图像分割算法检测一个或多个被测对象特征的位置, 根据被测对象特征的位置确定归属于被测对象的三维体数据, 这些三维体数据组合该被测对象的子集。  [0074] Image processing and/or image segmentation algorithms are applied to the ultrasound three-dimensional volume data to detect the positions of one or more features of the measured object, the three-dimensional volume data belonging to the measured object is determined according to the positions of those features, and these three-dimensional volume data together form the subset of the measured object.
[0075] 将超声三维体数据输出至显示器以显示三维图像, 根据用户在三维图像上确定的一个或多个被测对象特征的位置, 根据被测对象特征的位置确定归属于被测对象的三维体数据, 这些三维体数据组合该被测对象的子集。  [0075] The ultrasound three-dimensional volume data is output to the display to show a three-dimensional image; according to the positions of one or more features of the measured object determined by the user on the three-dimensional image, the three-dimensional volume data belonging to the measured object is determined, and these three-dimensional volume data together form the subset of the measured object.
[0076] 当针对胎儿进行超声检测吋, 得到的是胎儿的三维体数据, 可根据胎儿的图像 特征从所述体数据集中识别出多个子集, 胎儿的图像特征可以是胎儿脸部特征 、 胎儿肢体结构特征、 胎儿脐带特征等, 根据胎儿脸部特征识别出的子集为胎 儿脸部子集, 用于生成胎儿脸部图像。 胎儿脸部特征包括: 在超声三维体数据 中胎儿脸部上一个或多个组织结构的解剖学结构对应的图像特性, 所述 1个或多 个组织结构从胎儿眼睛、 胎儿鼻子、 胎儿额头、 胎儿下巴、 胎儿脸颊、 胎儿耳 朵、 胎儿脸轮廓和胎儿嘴等中选择。  [0076] When the ultrasound is detected for the fetus, the three-dimensional volume data of the fetus is obtained, and a plurality of subsets can be identified from the volume data set according to the image characteristics of the fetus, and the image characteristics of the fetus may be fetal facial features, fetuses The limb structure features, the fetal umbilical cord features, and the like, the subset identified according to the fetal facial features is a subset of the fetal face, and is used to generate a fetal facial image. The fetal facial features include: image characteristics corresponding to the anatomical structure of one or more tissue structures on the fetal face in the ultrasound three-dimensional volume data, the one or more tissue structures from the fetal eye, the fetal nose, the fetal forehead, Choose from fetal chin, fetal cheeks, fetal ears, fetal face contours and fetal mouth.
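As a hedged illustration of how such an identification result can be turned into subsets, the sketch below assumes that some face-detection or segmentation step (a trained model, feature detection, or user input, as described above) has already produced a boolean mask over the volume data set; the toy spherical mask and all names are assumptions for illustration only.

    import numpy as np

    def split_volume_by_mask(volume, face_mask):
        # Split a volume data set into a fetal-face subset and a 'rest' subset.
        face_subset = np.where(face_mask, volume, 0.0)   # voxels attributed to the face
        rest_subset = np.where(face_mask, 0.0, volume)   # everything else, undistinguished
        return face_subset, rest_subset

    # toy usage: a random volume and a hypothetical spherical "face" region
    vol = np.random.rand(32, 32, 32).astype(np.float32)
    zz, yy, xx = np.ogrid[:32, :32, :32]
    mask = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 < 10 ** 2
    face_subset, rest_subset = split_volume_by_mask(vol, mask)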
[0077] 步骤 12, 对多个子集的全部或部分进行渲染。 本实施例中, 以对多个子集进行区别性渲染为例进行说明。 预先建立多套不同的显示配置, 显示配置的内容包含但不局限于: [0077] Step 12: render all or part of the multiple subsets. In this embodiment, differential rendering of the multiple subsets is taken as an example. Multiple sets of different display configurations are established in advance; the contents of a display configuration include, but are not limited to:
[0078] 1. 渲染模式的选择, 如表面模式、 HDLive模式、 最小值模式、 最大值模式、 X-Ray模式、 反转模式等等; [0078] 1. selection of the rendering mode, such as surface mode, HDLive mode, minimum mode, maximum mode, X-Ray mode, reverse mode, etc.;
[0079] 2.渲染参数的选择, 如伪彩、 色调、 亮度、 阈值、 不透明度、 对比度、 深度渲 染参数以及 VR图的后处理参数等等;  [0079] 2. Selection of rendering parameters, such as pseudo color, hue, brightness, threshold, opacity, contrast, depth rendering parameters, and post processing parameters of the VR map;
[0080] 3.光源的选择, 如光源模式 (点光源、 平行光源等) 、 光源位置等; [0080] 3. Selection of the light source, such as a light source mode (point light source, parallel light source, etc.), light source position, etc.;
[0081] 4.体数据前处理 /后处理的选择, 如增益档位、 平滑档位、 去噪档位等等。 [0081] 4. Selection of volume data pre-processing/post-processing, such as gain gear position, smooth gear position, denoising gear position, and the like.
[0082] 每套显示配置中, 至少有一项内容不同于其他的显示配置, 因此构成不同的多套显示配置, 采用多套显示配置分别对多个子集进行渲染, 每套显示配置可渲染出一种渲染结果。 在渲染时, 可以是每个子集使用不同于其它子集的一套显示配置进行渲染, 得到一种不同于其它子集的渲染结果。 或将多个子集分成至少两组, 每组使用不同于其它组的一套显示配置进行渲染, 每组有一种不同于其它组的渲染结果。 [0082] In each set of display configurations, at least one item differs from the other display configurations, so that the multiple sets of display configurations are different from one another. The multiple sets of display configurations are used to render the multiple subsets respectively, and each set of display configurations renders one kind of rendering result. During rendering, each subset may be rendered with a set of display configurations different from those of the other subsets, to obtain a rendering result different from the other subsets; or the multiple subsets may be divided into at least two groups, each group rendered with a set of display configurations different from the other groups, so that each group has a rendering result different from the other groups.
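Purely to illustrate what one "set of display configurations" might look like in code, the sketch below groups a few of the items listed above into a small structure and assigns a different set to each subset. The field names and values are assumptions for illustration and do not correspond to the actual parameter names of any particular ultrasound system.

    from dataclasses import dataclass

    @dataclass
    class DisplayConfig:
        render_mode: str = "surface"    # e.g. surface, HDLive, minimum, maximum, X-Ray, reverse
        tint: tuple = (1.0, 1.0, 1.0)   # pseudo-colour applied to the rendered sub-image
        brightness: float = 1.0
        opacity: float = 1.0
        light_mode: str = "point"       # light source selection
        smoothing: int = 0              # volume pre-/post-processing gear

    # two sets that differ in at least one item, one per subset
    config_per_subset = {
        "fetal_face": DisplayConfig(render_mode="hdlive", tint=(1.0, 0.85, 0.7)),
        "placenta":   DisplayConfig(render_mode="surface", tint=(0.7, 0.8, 1.0), smoothing=2),
    }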
[0083] 在对多个子集的全部或部分进行渲染后获得多个子图像, 可对该多个子图像全部或部分进行融合显示, 得到整体的三维图像, 使得在该三维图像中, 各子图像呈现半透明显示效果, 所谓半透明是指既能对本身子集的体数据进行显示, 但又不遮挡其它子集的体数据的显示, 从而可同时显示出多个子图像。  [0083] After rendering all or part of the multiple subsets to obtain multiple sub-images, all or part of the multiple sub-images may be fused and displayed to obtain an overall three-dimensional image in which each sub-image presents a translucent display effect. Translucent here means that the volume data of a subset is displayed without blocking the display of the volume data of the other subsets, so that multiple sub-images can be displayed at the same time.
[0084] 可采用各种方式对多个子图像全部或部分进行融合显示, 在一种具体实施例中 , 采用融合系数的方式实现融合显示, 如步骤 13-14所示。  [0084] The fused display may be performed on all or part of the plurality of sub-images in various manners. In a specific embodiment, the fusion display is implemented by using a fusion coefficient, as shown in steps 13-14.
[0085] 步骤 13, 获取各子集的融合系数。 在具体的实施例中, 融合系数可以是系统预设好的融合系数, 也可以是用户设定好的融合系数, 例如, 为每个对象设定一个融合系数, 规定位于前方的对象的融合系数为 1或 0.6, 位于后方的对象的融合系数为 0或 0.4, 用户可根据需要或临床含义设定对象的前后相对位置关系。 在有的实施例中, 还可以采用自适应的融合系数, 自适应融合系数可根据对象的位置不同而不同, 可以根据对象的厚薄而变化, 也可以根据对象的密度而变化。 各子集的自适应融合系数可按照各种融合规则计算出来。 融合系数是一个 0至 1之间的值, 但各子集的融合系数的总和可以等于 1, 也可以大于或小于 1。 [0085] Step 13: obtain the fusion coefficient of each subset. In specific embodiments, the fusion coefficients may be preset by the system or set by the user; for example, a fusion coefficient is set for each object, the fusion coefficient of the object located in front is set to 1 or 0.6 and that of the object located behind to 0 or 0.4, and the user may set the relative front-back positional relationship of the objects according to need or clinical meaning. In some embodiments, adaptive fusion coefficients may also be used; an adaptive fusion coefficient may differ according to the position of the object, and may vary with the thickness of the object or with the density of the object. The adaptive fusion coefficient of each subset can be calculated according to various fusion rules. A fusion coefficient is a value between 0 and 1, but the sum of the fusion coefficients of the subsets may be equal to 1, or may be greater or less than 1.
[0086] 步骤 14, 将多个子集的渲染结果乘以各自的融合系数后进行叠加显示。 分别将各子集在步骤 12中得到的渲染结果乘以各自融合系数, 然后将乘以融合系数的结果叠加在一起, 相当于将各子集的渲染结果按照各自的融合系数进行了削弱, 例如, 子集的融合系数是 0.4, 相当于将该子集的渲染效果削弱了 60%。 通常情况下, 各子集的渲染参数中不透明参数的值为 1, 在视觉效果上是不透明。 渲染结果削弱后使其显示效果变得更加轻薄, 在视觉效果上变得更透明。  [0086] Step 14: multiply the rendering results of the multiple subsets by their respective fusion coefficients and superimpose them for display. The rendering result obtained for each subset in step 12 is multiplied by its own fusion coefficient, and the products are then superimposed; this is equivalent to attenuating the rendering result of each subset by its fusion coefficient. For example, a subset whose fusion coefficient is 0.4 has its rendering effect weakened by 60%. Normally the opacity parameter in the rendering parameters of each subset has a value of 1, which is visually opaque; after the rendering result is attenuated, its display becomes lighter and visually more transparent.
[0087] 当多个对象在某个视角上处于前后重叠时, 即从某个视角看, 前面的对象的全部或部分会遮挡后面对象的全部或部分, 不管是只采用一种显示配置对多个对象进行渲染, 还是采用不同的显示配置对多个对象进行渲染, 如果不降低渲染结果的不透明度, 则要么是位于前面的对象会遮挡后面的对象, 导致后面的对象显示不出来, 要么是只将医生选择的感兴趣对象进行渲染显示, 而其他对象中与感兴趣对象重叠的部分则不再被渲染显示, 而医生可根据需要选择其中的一个或多个对象为感兴趣对象。 如图 3所示, 图中胎盘部分 313a位于胎儿脸部 313b前面, 胎盘部分 313a和胎儿脸部 313b采用相同的显示配置进行渲染, 并且渲染参数的不透明度都是 1, 因此渲染结果中, 胎儿脸部 313b的部分区域被胎盘部分 313a遮挡, 这种情况下, 医生看不到胎儿脸部 313b的完整渲染图像。  [0087] When multiple objects overlap front-to-back at a certain viewing angle, that is, when viewed from that angle all or part of the front object occludes all or part of the object behind it, then whether one display configuration or different display configurations are used to render the multiple objects, if the opacity of the rendering results is not reduced, either the front object will occlude the object behind it so that the latter cannot be displayed, or only the object of interest selected by the doctor is rendered and the parts of the other objects overlapping the object of interest are no longer rendered; the doctor may select one or more of the objects as objects of interest as required. As shown in FIG. 3, the placenta portion 313a is located in front of the fetal face 313b; the placenta portion 313a and the fetal face 313b are rendered with the same display configuration and the opacity of the rendering parameters is 1, so in the rendering result part of the fetal face 313b is occluded by the placenta portion 313a, and in this case the doctor cannot see a complete rendered image of the fetal face 313b.
[0088] 在采用本实施例的渲染融合方案时, 通过将渲染结果乘以融合系数后按照一定的比例叠加显示, 使得不同区域根据各自的比例同时显示在屏幕上, 获得部分透明 (也称为半透明) 的效果, 可使得重叠部分中各对象呈现出半透明状态, 即使空间相对位置上处于后面的对象也不会被前面的对象完全遮盖, 因此用户 (例如医生) 可同时直观地看到多个对象的表现。 如图 4所示, 胎盘部分 313a和胎儿脸部 313b都呈现出半透明渲染效果, 虽然胎盘部分 313a位于胎儿脸部 313b的前面, 也不会完全遮盖后面的胎儿脸部图像, 医生可同时直观地看到重叠部分的胎盘部分 313a和胎儿脸部 313b的细节表现。  [0088] With the rendering-fusion scheme of this embodiment, the rendering results are multiplied by the fusion coefficients and superimposed in certain proportions, so that different regions are displayed on the screen simultaneously according to their respective proportions and a partially transparent (also called translucent) effect is obtained. Each object in the overlapping part appears translucent, so that even an object that is spatially behind another is not completely covered by the object in front, and the user (for example, a doctor) can intuitively see the representation of multiple objects at the same time. As shown in FIG. 4, both the placenta portion 313a and the fetal face 313b are rendered translucently; although the placenta portion 313a is located in front of the fetal face 313b, it does not completely cover the fetal face image behind it, and the doctor can simultaneously and intuitively see the details of the overlapping placenta portion 313a and fetal face 313b.
[0089] 当融合系数采用各子集的自适应融合系数时, 在视觉效果上不同部分显示的透明度不同, 可使医生直观的感受到对象的性质, 例如对象的厚度、 对象的密度等等。 在本实施例的一种实例中, 可采用光线跟踪法来计算各子集的自适应融合系数, 光线跟踪法的原理是沿着到达视点的光线的反方向进行跟踪, 相当于跟踪从眼睛发出的光线, 当光线与场景中的物体或者媒介相交的时候计算光线的反射、 折射以及吸收。 本实施例中, 采用光线跟踪法计算各子集的自适应融合系数的流程如图 5所示, 包括以下步骤:  [0089] When adaptive fusion coefficients of the subsets are used as the fusion coefficients, different parts are displayed with different transparency, which lets the doctor intuitively perceive properties of the objects, such as their thickness and density. In one example of this embodiment, a ray tracing method may be used to calculate the adaptive fusion coefficient of each subset. The principle of ray tracing is to trace along the reverse direction of the rays arriving at the viewpoint, which is equivalent to tracing rays emitted from the eye, and to calculate the reflection, refraction, and absorption of a ray when it intersects an object or medium in the scene. In this embodiment, the procedure for calculating the adaptive fusion coefficient of each subset by ray tracing is shown in FIG. 5 and includes the following steps:
[0090] 步骤 131, 采用光线跟踪法计算各子集在各跟踪光线上的体素值。 如图 6所示, 从视点 210发出很多束跟踪光线, 跟踪光线为模拟出的光线, 模拟这些跟踪光线 分别穿过要显示的各对象的三维图像, 每束跟踪光线按照一个视角入射到三维 图像中, 有的跟踪光线穿过多个对象, 有的跟踪光线只穿过一个对象。 下面以 图 4中的跟踪光线 220穿过第一对象 230和第二对象 240为例进行说明。 临床上, 例如第二对象 240是胎儿脸部, 第一对象 230可以是胎盘部分, 胎盘部分遮挡在 胎儿脸部前面。 首先识别出跟踪光线 220穿过第一对象 230和第二对象 240吋途经 的体素, 获取这些途经体素的体素值, 该体素值根据多帧超声回波数据得到, 体素值反映该位置对超声波的反射强度。 对于液体 (例如血液) , 其对超声波 的反射较弱, 超声波回波信号较弱, 得到的体素值较小; 对于密度较大的固体 (例如骨头或介入物) , 其与其他软组织的边界对超声波的反射较强, 超声波 回波信号较强, 得到的体素值较大。  [0090] Step 131: Calculate the voxel value of each subset on each tracking ray by using a ray tracing method. As shown in FIG. 6, a plurality of beam tracking rays are emitted from the viewpoint 210, and the tracking rays are simulated rays, and the tracking rays are simulated to pass through the three-dimensional images of the objects to be displayed, and each of the tracking rays is incident on the three-dimensional image according to one viewing angle. In some, some tracking rays pass through multiple objects, and some tracking rays pass through only one object. The following description will be made by taking the tracking ray 220 in FIG. 4 through the first object 230 and the second object 240 as an example. Clinically, for example, the second subject 240 is a fetal face, the first subject 230 can be a placental portion, and the placenta portion is obscured in front of the fetal face. First, the voxels passing through the first object 230 and the second object 240 are identified, and voxel values of the path voxels are obtained, and the voxel values are obtained according to the multi-frame ultrasonic echo data, and the voxel values are reflected. The intensity of the reflection of this position on the ultrasonic waves. For liquids (such as blood), the reflection of the ultrasound is weak, the ultrasonic echo signal is weak, and the obtained voxel value is small; for dense solids (such as bones or interventions), its boundary with other soft tissues The reflection of the ultrasonic wave is strong, the ultrasonic echo signal is strong, and the obtained voxel value is large.
[0091] 然后确定途经体素所归属的子集, 计算归属第一对象 230子集的途经体素的体 素值综合, 即第一对象 230子集在该跟踪光线 220上的体素值, 计算归属第二对 象 240子集的途经体素的体素值综合, 即第二对象 240子集在该跟踪光线 220上的 体素值。  [0091] then determining the subset to which the voxel belongs, and calculating the voxel value integration of the path voxels belonging to the subset of the first object 230, that is, the voxel value of the subset of the first object 230 on the tracking ray 220, The voxel value integration of the path voxels belonging to the subset of the second object 240 is calculated, that is, the voxel value of the second object 240 subset on the tracking ray 220.
[0092] 步骤 132, 获取各子集在跟踪光线上的空间分布。 即根据途经体素所归属的子 集, 得到各子集在跟踪光线上所分布的区域, 也就是各子集在跟踪光线上厚度 分布。  [0092] Step 132: Acquire spatial distribution of each subset on the tracking ray. That is, according to the subset to which the voxel belongs, the regions in which the subsets are distributed on the tracking ray are obtained, that is, the thickness distribution of each subset on the tracking ray.
[0093] 步骤 133, 识别各子集在跟踪光线入射方向上所处的空间位置。 如图 6所示, 跟 踪光线 220由视点 210发出, 先经第一对象 230, 再经第二对象 240, 因此第一对 象 230子集相对于第二对象 240子集为空间位置靠前的子集, 第二对象 240子集为 空间位置靠后的子集。  [0093] Step 133: Identify a spatial position where each subset is in the direction in which the light is incident. As shown in FIG. 6, the tracking ray 220 is emitted by the viewpoint 210, first through the first object 230, and then through the second object 240, so the subset of the first object 230 is a spatially advanced child relative to the second object 240 subset. Set, the second object 240 subset is a subset of the spatial position.
[0094] 步骤 134, 确定各子集在各跟踪光线上的融合系数, 融合系数为一 0到 1之间的 值。 各子集在跟踪光线上的融合系数可根据各子集在跟踪光线上的体素值、 空 间分布和空间位置中的至少一者确定, 可以根据以下方式中的至少一个规则确 定融合系数: [0094] Step 134: Determine a fusion coefficient of each subset on each tracking ray, and the fusion coefficient is between 0 and 1. Value. The fusion coefficients of the subsets on the tracking ray may be determined according to at least one of voxel values, spatial distributions, and spatial locations of the subsets on the tracking ray, and the fusion coefficients may be determined according to at least one of the following manners:
[0095] 在该跟踪光线入射方向上, 空间位置靠前的子集相对于空间位置靠后的子集具 有较大的融合系数, 这意味着前面的对象相对于后面的对象更加的不透明度, 使得用户根据对象的不透明度即可判断对象的空间相对位置。  [0095] In the incident direction of the tracking ray, the subset of the front of the spatial position has a larger fusion coefficient with respect to the subset of the space position, which means that the front object is more opaque with respect to the latter object. The user can determine the relative position of the object according to the opacity of the object.
[0096] 在该跟踪光线上, 体素值较大的子集相对于体素值较小的子集具有较大的融合 系数, 这意味着密度较大的固体 (例如骨头) 相对于肌肉和液体而言更加不透 明, 使得用户根据对象的不透明度即可判断对象是何种组织。  [0096] On the tracking ray, a subset with a larger voxel value has a larger fusion coefficient than a subset with a smaller voxel value, which means that a denser solid (eg, bone) is relative to the muscle and The liquid is more opaque, allowing the user to determine which tissue the object is based on the opacity of the object.
[0097] 在该跟踪光线上, 体素分布范围较大的子集相对于体素分布范围较小的子集具 有较大的融合系数, 这意味着厚度较大的部分更加不透明, 使得用户根据对象 的不透明度即可判断对象在该位置处的厚度。  [0097] On the tracking ray, the subset having a larger voxel distribution range has a larger fusion coefficient than the subset having a smaller voxel distribution range, which means that the thicker portion is more opaque, so that the user The opacity of the object determines the thickness of the object at that location.
[0098] 根据***预定的规则或用户设置的规则, 采用本步骤中的方法可确定各子集在 跟踪光线 220上的融合系数, 由于该融合系数跟随子集的厚度、 密度和 /或空间位 置而变化, 因此属于自适应的融合系数, 依此类推可得到各视角的跟踪光线上 的各子集的融合系数。 在步骤 14中, 将跟踪光线上各子集的渲染结果乘以各子 集在该跟踪光线上的融合系数后进行叠加显示。  [0098] According to a predetermined rule of the system or a rule set by the user, the fusion coefficient of each subset on the tracking ray 220 can be determined by the method in this step, since the fusion coefficient follows the thickness, density and/or spatial position of the subset. The change, therefore, is an adaptive fusion coefficient, and so on, the fusion coefficients of the subsets on the tracking ray of each view can be obtained. In step 14, the rendering result of each subset on the tracking ray is multiplied by the fusion coefficient of each subset on the tracking ray and then superimposed and displayed.
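The sketch below turns the qualitative rules above into one possible numerical recipe for a single tracking ray: the score of a subset grows with its accumulated voxel values (step 131), its thickness along the ray (step 132) and how early it is met in the incidence direction (step 133), and the scores are then normalised into coefficients (step 134). The particular weighting is an arbitrary illustrative choice, and unlike the general case described above, the coefficients of this sketch always sum to 1.

    import numpy as np

    def adaptive_coefficients(ray_samples, ray_labels, n_subsets, order_weight=0.5):
        # ray_samples: voxel values sampled along one tracking ray, front to back
        # ray_labels:  subset index of each sample (-1 for background)
        scores = np.zeros(n_subsets, dtype=np.float32)
        for s in range(n_subsets):
            hits = np.where(ray_labels == s)[0]
            if hits.size == 0:
                continue
            value_sum = ray_samples[hits].sum()      # denser tissue -> larger voxel values
            thickness = hits.size                    # larger extent along the ray
            front_bonus = 1.0 / (1.0 + hits[0])      # earlier (closer) subsets favoured
            scores[s] = value_sum * thickness * (1.0 + order_weight * front_bonus)
        total = scores.sum()
        return scores / total if total > 0 else scores

    # toy ray: a dense first object (label 0) lying in front of a weaker second object (label 1)
    samples = np.array([0.8, 0.9, 0.7, 0.2, 0.3, 0.3], dtype=np.float32)
    labels = np.array([0, 0, 0, 1, 1, 1])
    print(adaptive_coefficients(samples, labels, 2))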
[0099] 上述步骤中, 本领域技术人员应当理解, 步骤 13也可以和步骤 12调换吋序, 即 先获取各子集的融合系数, 再对各子集进行区别性渲染。 步骤 14可在各子集渲 染之后融合, 也可以在各子集渲染过程中融合, 即边渲染边融合。  [0099] In the above steps, those skilled in the art should understand that step 13 can also be exchanged with step 12, that is, the fusion coefficients of each subset are obtained first, and then the subsets are separately rendered. Step 14 can be merged after rendering of each subset, or it can be fused during the rendering of each subset, that is, edge rendering.
[0100] 在有些情况下, 在步骤 11中, 当从体数据集中根据对象的不同识别各个子集时, 有时会将一部分体数据既识别到第一子集, 同时又将该部分体数据识别到第二子集中, 本文将被识别到至少两个子集中的体数据称为共有体数据, 这种情况下, 对于共有体数据, 按照其归属子集所使用的任一种显示配置进行渲染, 或采用一种与各相关子集或分组均不相同的显示配置进行渲染, 同样也会计算这部分共有体数据的融合系数, 最后将其渲染结果乘以其融合系数后与其他对象进行叠加显示。 [0100] In some cases, in step 11, when the subsets are identified from the volume data set according to the objects, a portion of the volume data is sometimes identified into the first subset and at the same time into the second subset. Volume data identified into at least two subsets is referred to herein as shared volume data. In this case, the shared volume data is rendered with any one of the display configurations used by the subsets it belongs to, or with a display configuration different from those of all the related subsets or groups; a fusion coefficient is likewise calculated for this shared volume data, and finally its rendering result is multiplied by its fusion coefficient and superimposed with the other objects for display.
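A short sketch of how the shared volume data of this paragraph can take part in the superposition: it is rendered once, with one of the owning subsets' display configurations or with a third configuration (not shown here), given its own fusion coefficient, and added as one more weighted term. All names are illustrative assumptions.

    import numpy as np

    def fuse_with_shared_data(rendered, coefficients, shared_rendered, shared_coef):
        # Ordinary subsets plus one extra weighted term for the shared volume data.
        fused = np.zeros_like(rendered[0], dtype=np.float32)
        for img, coef in zip(rendered, coefficients):
            fused += coef * img
        fused += shared_coef * shared_rendered       # shared data treated like any other term
        return np.clip(fused, 0.0, 1.0)

    # toy usage
    shared = 0.5 * np.ones((4, 4, 3), dtype=np.float32)
    out = fuse_with_shared_data([np.zeros((4, 4, 3)), np.ones((4, 4, 3))], [0.6, 0.4], shared, 0.3)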
[0101] [0102] 实施例二: [0101] Embodiment 2:
[0103] 在实施例一的基础上, 处理器还用于将相邻子集的边界部分的体数据由原先归 属的子集调整到相邻子集, 并使用相邻子集的显示配置对被调整体数据进行渲 染, 以对显示的对象或区域进行调整。 调整的方式包括但不局限于:  [0103] On the basis of the first embodiment, the processor is further configured to adjust the volume data of the boundary part of the adjacent subset from the previously belonged subset to the adjacent subset, and use the display configuration pair of the adjacent subset. The adjusted volume data is rendered to adjust the displayed object or area. Adjustments include but are not limited to:
[0104] 1.选择不同的划分策略 /算法对区域或对象进行重新划分, 如根据不同的分割 目标或临床意义, 使用不同的模型对区域或对象进行重新的划分;  [0104] 1. Selecting different partitioning strategies/algorithms to re-divide regions or objects, such as re-dividing regions or objects using different models according to different segmentation goals or clinical significance;
[0105] 2.使用裁剪的操作或应用额外的算法, 将现有的划分区域或对象分为多个更小 的感兴趣区域; 或者去除现有划分区域或对象的一部分, 缩小其感兴趣区域的 范围;  [0105] 2. Using the cropping operation or applying an additional algorithm, dividing the existing divided area or object into a plurality of smaller areas of interest; or removing the existing divided area or a part of the object, and narrowing the area of interest thereof Scope
[0106] 3.合并现有的划分区域; 或者扩充感兴趣区域的范围, 使部分非感兴趣区域加 入到感兴趣区域中;  [0106] 3. Merging the existing divided regions; or expanding the range of the regions of interest to add some non-interesting regions to the region of interest;
[0107] 4.对划分区域之间的分割面作整体的调整, 如对分割面作整体的平移与旋转、 调整分割面上控制点的位置或分割面方程的参数以改变分割面形状与位置, 或 者更换分割面的数学模型等等;  [0107] 4. The overall adjustment of the segmentation plane between the divided regions, such as the overall translation and rotation of the segmentation surface, the adjustment of the position of the control point on the segmentation surface or the parameters of the segmentation plane equation to change the shape and position of the segmentation surface , or replace the mathematical model of the split face, etc.;
[0108] 5.对划分区域或对象之间的分割面作局部的调整, 如使用画笔、 橡皮擦等工具 或挪动分割面上的部分控制点来局部地增加 /减少某个划分区域的范围; 与此同 吋, 其他划分区域或对象也可能会相应地减少 /增加其范围。  [0108] 5. Locally adjusting the divided surface between the divided regions or objects, such as using a brush, an eraser or the like or moving some of the control points on the split surface to locally increase/decrease the range of a certain divided region; Similarly, other partitions or objects may also reduce/increase their range accordingly.
[0109] 在有些情况下, 在步骤 11中, 当从体数据集中根据对象的不同识别各个子集吋 , 有吋会对一部分体数据产生误识别, 因此可能会出现医生根据临床经验对一 部分体数据所归属的子集进行调整, 这种情况下就需要用到画笔或橡皮擦等微 调工具调整该部分体数据。 用户的调整操作可通过控制终端输入指令, 以达到 调整显示器上输出图像的目的。 如图 7所示为一种控制终端的示意图, 图中人机 交互装置 300包括显示器 310、 控制面板 320和触控屏 330, 控制面板 320和触控屏 330构成控制终端, 在有的实施例中, 也可以没有触控屏。  [0109] In some cases, in step 11, when each subset is identified from the volume dataset according to the object, a certain part of the volume data may be misidentified, so that a doctor may have a part of the body based on clinical experience. The subset to which the data belongs is adjusted. In this case, a fine adjustment tool such as a brush or an eraser is required to adjust the partial volume data. The user's adjustment operation can be used to adjust the output image on the display by inputting commands through the control terminal. FIG. 7 is a schematic diagram of a control terminal. The human-machine interaction device 300 includes a display 310, a control panel 320, and a touch screen 330. The control panel 320 and the touch screen 330 constitute a control terminal. In some embodiments, There is also no touch screen.
[0110] 如图 7所示, 显示器 310包括显示区域 311, 在显示区域上可显示各种图像, 例 如二维超声图像 312和三维超声图像 313。  [0110] As shown in FIG. 7, the display 310 includes a display area 311 on which various images such as a two-dimensional ultrasonic image 312 and a three-dimensional ultrasonic image 313 can be displayed.
[0111] 触控屏 330上包括多个可操作图标 331, 对应着多个不同功能。  [0111] The touch screen 330 includes a plurality of operable icons 331, corresponding to a plurality of different functions.
[0112] 控制面板 320上可设置有各种操作键, 例如通过按压操作的键盘 321、 旋钮 322- 326、 增益键 327、 滚轮 (或轨迹球) 328。 对于显示有多个渲染对象的三维超声 图像, 每个渲染对象可对应一个操作键, 如图 8所示, 三维超声图像 313中显示 有多个渲染对象, 例如胎盘部分 313a和胎儿脸部 313b, 胎盘部分 313a对应旋钮 32 2, 胎儿脸部 313b对应旋钮 323, 即当用户操作旋钮 322吋, 认为是对胎盘部分 31 3a的选定和操作, 当用户操作旋钮 323吋, 认为是对胎儿脸部 313b的选定和操作 , 旋钮 324可以是一个多档幵关, 用于选择操作类型, 操作类型包括裁剪、 合并 、 分割面调整、 画笔、 橡皮擦等, 例如当旋钮 324选择画笔吋, 通过操作滚轮 32 8可在显示区域 311中的图像上进行涂画操作, 当旋钮 324选择橡皮擦吋, 通过操 作滚轮 328可在显示区域 311中的图像上进行擦拭操作。 下面以擦拭操作为例进 行说明其操作流程, 如图 9所示包括以下步骤: [0112] The operation panel 320 may be provided with various operation keys, such as a keyboard 321 and a knob 322- 326, gain button 327, scroll wheel (or trackball) 328. For a three-dimensional ultrasound image displaying a plurality of rendering objects, each rendering object may correspond to one operation key. As shown in FIG. 8, a plurality of rendering objects, such as a placenta portion 313a and a fetal face 313b, are displayed in the three-dimensional ultrasound image 313. The placenta portion 313a corresponds to the knob 32 2, and the fetal face 313b corresponds to the knob 323, that is, when the user operates the knob 322, it is considered to be the selection and operation of the placenta portion 31 3a. When the user operates the knob 323, it is considered to be the face of the fetus. For the selection and operation of 313b, the knob 324 can be a multi-step switch for selecting the operation type, and the operation type includes cropping, merging, splitting surface adjustment, brush, eraser, etc., for example, when the knob 324 selects the brush 吋, by operation The roller 32 8 can perform a painting operation on the image in the display area 311, and when the knob 324 selects the eraser, the wiping operation can be performed on the image in the display area 311 by operating the roller 328. The operation flow is described below by taking a wiping operation as an example. As shown in FIG. 9, the following steps are included:
[0113] 步骤 20, 检测用户所选择的调整操作。 用户通过控制面板上旋钮 324选择擦拭 操作。 [0113] Step 20: Detect an adjustment operation selected by the user. The user selects the wiping operation via knob 324 on the control panel.
[0114] 步骤 21, 检测用户所选择的子集和在叠加显示图像上所选择的区域。 用户通过滚轮 328控制显示区域的鼠标移动, 可控制鼠标移动到胎盘部分 313a和胎儿脸部 313b重叠的区域, 当检测到用户对旋钮 322旋转时, 认为用户选定的数据是归属到胎盘部分 313a子集的体数据, 当检测到用户对旋钮 323旋转时, 认为用户选定的数据是归属到胎儿脸部 313b子集的体数据。  [0114] Step 21: detect the subset selected by the user and the region selected on the superimposed display image. The user controls the mouse movement in the display area with the scroll wheel 328 and can move the mouse to the region where the placenta portion 313a and the fetal face 313b overlap. When rotation of the knob 322 is detected, the data selected by the user is considered to be volume data belonging to the subset of the placenta portion 313a; when rotation of the knob 323 is detected, the data selected by the user is considered to be volume data belonging to the subset of the fetal face 313b.
[0115] 步骤 22, 根据用户选择的子集和区域确定被调整体数据。 用户选择的区域可通 过圆圈 313c确定, 圆圈 313c代表以该圆半径为半径的圆球, 即圆球内的三维体数 据是被调整体数据, 用户可通过旋转旋钮 322、 323改变圆圈 313c的半径, 即改变 圆球的半径, 从而改变被调整体数据的范围。  [0115] Step 22: Determine the adjusted volume data according to the subset and the region selected by the user. The area selected by the user can be determined by a circle 313c, and the circle 313c represents a sphere having a radius of the circle radius, that is, the three-dimensional volume data in the sphere is the adjusted volume data, and the user can change the radius of the circle 313c by rotating the knobs 322 and 323. , that is, changing the radius of the sphere, thereby changing the range of the adjusted volume data.
[0116] 步骤 23, 响应于用户输入的模仿擦拭操作, 将被调整体数据由原先归属的子集 调整到相邻子集。 当圆圈 313c的半径确定后, 用户对滚轮 328进行操作, 基于用 户对滚轮 328的操作, 显示屏上的图标可改变为一橡皮擦图形, 并随着用户对滚 轮 328的操作往复擦拭, 将被调整体数据由原先归属的子集调整到相邻子集。 当 用户需要继续调整吋, 可继续对滚轮 328进行操作, ***再次根据圆圈 313c确定 被调整体数据, 并将被调整体数据由原先归属的子集调整到相邻子集。 如果以 胎儿脸部为参照对象, 则当用户选择橡皮擦将体数据由胎儿脸部 313b子集调整 到胎盘部分 313a子集吋, 称为"擦除", 将体数据由胎盘部分 313a子集调整到胎儿 脸部 313b子集吋, 称为"反擦除"。 [0116] Step 23: Adjust the adjusted volume data from the originally assigned subset to the adjacent subset in response to the simulated wiping operation input by the user. When the radius of the circle 313c is determined, the user operates the scroll wheel 328. Based on the user's operation of the scroll wheel 328, the icon on the display screen can be changed to an eraser pattern, and as the user reciprocates the roller 328, it will be The adjustment volume data is adjusted from the previously assigned subset to the adjacent subset. When the user needs to continue adjusting, the scroll wheel 328 can continue to operate, the system again determines the adjusted volume data according to the circle 313c, and adjusts the adjusted volume data from the previously assigned subset to the adjacent subset. If the fetal face is used as a reference object, when the user selects the eraser, the volume data is adjusted from the subset of the fetal face 313b. A subset of the placental portion 313a, referred to as "erasing", is adjusted from the subset of the placental portion 313a to a subset of the fetal face 313b, referred to as "anti-erasing."
[0117] 步骤 24, 使用相邻子集的显示配置对被调整体数据进行渲染, 使被调整体数据 与新归属的子集具有相同的渲染效果。  [0117] Step 24: Render the adjusted volume data by using the display configuration of the adjacent subset, so that the adjusted volume data has the same rendering effect as the newly assigned subset.
[0118] 本实施例在进行体数据调整前, 先将多个不同的对象进行区别性渲染并融合显 示, 这使得用户可査看到被调整的体数据的现状, 可辅助用户判断所确定的被 调整体数据是否存在, 以及调整操作是否正确。 例如, 对于胎儿脸部图像, 医 生根据临床经验判断胎儿鼻子部分的图像有缺损, 这种缺损有可能是超声设备 在识别各个子集吋产生的误识别, 也有可能是该胎儿确实存在鼻子缺损的缺陷 , 如果能够同吋看到与鼻子重叠的其他对象 (例如胎盘) 的渲染显示, 则可根 据经验判断是否存在体数据的误识别, 例如将本应归属到胎儿脸部子集的体数 据误识别到了胎盘子集, 这种情况下, 医生可将这部分体数据由胎盘子集调整 到胎儿脸部子集, 使胎儿的鼻子部分的图像变得完整, 同理, 如果不存在体数 据的误识别, 例如胎盘图像中也没有这部分体数据, 则医生可判断胎儿确实存 在鼻子缺损的缺陷。 因此, 本实施例可提高后期诊断的准确性。  [0118] Before performing volume data adjustment, the present invention performs differential rendering and fusion display on a plurality of different objects, so that the user can view the current status of the adjusted volume data, and can assist the user in determining the determined status. Whether the adjusted volume data exists and whether the adjustment operation is correct. For example, for fetal facial images, the doctor judges that the image of the fetal nose is defective according to clinical experience. This defect may be caused by the misidentification of the ultrasound device in identifying each subset, or it may be that the fetus does have a nose defect. Defects, if you can see the rendered display of other objects (such as the placenta) that overlap with the nose, you can judge whether there is misidentification of the volume data based on experience, such as the body data that should belong to the fetal face subset. The set of fetal plates is identified. In this case, the doctor can adjust this part of the body data from the set of fetal plates to the subset of the fetal face, so that the image of the nose part of the fetus becomes complete. Similarly, if there is no volume data. Misidentification, such as the absence of this part of the body data in the placenta image, the doctor can determine that the fetus does have a defect in the nose defect. Therefore, this embodiment can improve the accuracy of the later diagnosis.
[0119] 在另一种实施例中, "擦除 "和"反擦除"的效果还可通过另一种方案实现, 例如 , 当以胎儿脸部为参照对象吋, 胎儿脸部对应旋钮 323, 当用户操作旋钮 323吋 , 认为是对胎儿脸部 313b的选定和操作, 对胎儿脸部进行"擦除"的操作包括以下 步骤:  [0119] In another embodiment, the effects of "erasing" and "anti-erasing" may also be achieved by another scheme, for example, when the fetal face is referred to as a reference object, the fetal face corresponding knob 323 When the user operates the knob 323, which is considered to be the selection and operation of the fetal face 313b, the operation of "erasing" the fetal face includes the following steps:
[0120] 1.1接收用户在三维图像上输入的第二指令。 例如用户可通过操作旋钮 324, 将 旋钮 324选择到 "擦除 "档, 用户的该操作认为是用户输入的第二指令。  [0120] 1.1 receiving a second instruction entered by the user on the three-dimensional image. For example, the user can select the knob 324 to "erase" by operating the knob 324, and the user's operation is considered to be the second command input by the user.
[0121] 1.2根据第二指令, 识别用户的输入对应在三维图像上的第一位置以及第一位 置所处的子集。 用户通过移动光标, 将光标移动到需要"擦除"的位置, 当用户将 旋钮 324选择到"擦除"档吋, 可将光标的图标改变为对应的图标, 例如将光标的 图标变换为圆形图标, 用户通过旋转旋钮 323, 可改变圆形图标的大小, 从而确 定用户选择的第一位置所覆盖区域的大小。 因为用户是对旋钮 323进行操作, 因 此可认为用户是对胎儿脸部进行"擦除"操作, 第一位置所处的子集是胎儿脸部子 集。 [0122] 1.3根据第一位置, 确定在第一位置所处的子集中第一位置所包括的体数据。 对于圆形图标的光标而言, 第一位置所包括的体数据是指位于以圆形图标的半 径为半径的圆球内的胎儿脸部的体数据。 [0121] 1.2 According to the second instruction, the input of the user is identified to correspond to the first location on the three-dimensional image and the subset in which the first location is located. The user moves the cursor to the position where "erasing" is required by moving the cursor. When the user selects the knob 324 to "erase" the file, the icon of the cursor can be changed to the corresponding icon, for example, the icon of the cursor is converted into a circle. The icon, the user can change the size of the circular icon by rotating the knob 323, thereby determining the size of the area covered by the first position selected by the user. Since the user operates the knob 323, the user is considered to be an "erase" operation on the face of the fetus, the subset of which is the subset of the fetal face. [0122] 1.3 determining, according to the first location, volume data included in the first location in the subset in which the first location is located. For the cursor of the circular icon, the volume data included in the first position refers to the volume data of the fetal face located in the sphere having the radius of the radius of the circular icon.
[0123] 1.4降低第一位置所处的子集所对应的子图像的融合系数, 本实施例中, 即降 低胎儿脸部子图像的融合系数, 使得胎儿脸部图像看起来更加透明, 从而实现 胎儿脸部图像被"擦除"的效果。 [0123] 1.4 reducing the fusion coefficient of the sub-image corresponding to the subset in which the first location is located, in this embodiment, reducing the fusion coefficient of the fetal face sub-image, so that the fetal facial image looks more transparent, thereby realizing The fetal face image is "erased".
[0124] 在另一实施例中, 也可以降低第一位置所包括的体数据所对应的子图像的融合 系数, 即降低胎儿脸部图像中第一位置所覆盖区域的图像的融合系数, 使得第 一位置所覆盖区域的图像看起来更加透明, 从而实现胎儿脸部图像上的局部被" 擦除"的效果。 [0124] In another embodiment, the fusion coefficient of the sub-image corresponding to the volume data included in the first position may be reduced, that is, the fusion coefficient of the image of the area covered by the first position in the fetal facial image is reduced, so that The image of the area covered by the first location appears to be more transparent, thereby achieving a partial "erasing" effect on the fetal facial image.
[0125] 当对胎儿脸部进行"反擦除 "操作吋, 包括以下流程:  [0125] When the "anti-erase" operation is performed on the fetal face, the following processes are included:
[0126] 2.1接收用户在三维图像上输入的第三指令。 例如用户可通过操作旋钮 324, 将 旋钮 324选择到 "反擦除"档, 用户的该操作认为是用户输入第三指令。  [0126] 2.1 Receive a third instruction entered by the user on the three-dimensional image. For example, the user can select the knob 324 to "anti-erase" by operating the knob 324, and the user's operation is considered to be the user inputting the third command.
[0127] 2.2根据第三指令, 识别用户的输入对应在三维图像上的第二位置以及第二位 置所处的子集。 同样, 用户通过移动光标, 将光标移动到需要"反擦除 "的位置, 用户通过旋转旋钮 323, 可改变圆形图标的大小, 从而确定用户选择的第二位置 所覆盖区域的大小。 因为用户是对旋钮 323进行操作, 因此可认为用户是对胎儿 脸部进行"反擦除"操作, 第二位置所处的子集是胎儿脸部子集。  [0127] 2.2 According to the third instruction, the input of the user is identified to correspond to the second location on the three-dimensional image and the subset at which the second location is located. Similarly, the user moves the cursor to the position where "anti-erase" is required by moving the cursor. By rotating the knob 323, the user can change the size of the circular icon to determine the size of the area covered by the second position selected by the user. Since the user operates the knob 323, the user is considered to be "anti-erasing" the face of the fetus, and the subset at which the second position is located is a subset of the fetal face.
[0128] 2.3根据第二位置, 确定在第二位置所处的子集中第二位置所包括的体数据; 对于圆形图标的光标而言, 第二位置所包括的体数据是指位于以圆形图标的半 径为半径的圆球内的胎儿脸部的体数据。  [0128] 2.3 determining, according to the second position, volume data included in the second position in the subset in which the second position is located; for the cursor of the circular icon, the volume data included in the second position is located in a circle The radius of the icon is the volume data of the fetal face within the radius of the sphere.
[0129] 2.4提高第二位置所处的子集所对应的子图像的融合系数, 或提高第二位置所 包括的体数据所对应的子图像的融合系数。 即提高胎儿脸部子图像的融合系数 , 使得胎儿脸部图像看起来更加不透明, 或提高胎儿脸部图像中第二位置所覆 盖区域的图像的融合系数, 使得第二位置所覆盖区域的图像看起来更加不透明 , 从而实现胎儿脸部图像被"反擦除 "的效果。  [0129] 2.4 increasing the fusion coefficient of the sub-image corresponding to the subset in which the second position is located, or increasing the fusion coefficient of the sub-image corresponding to the volume data included in the second position. That is, the fusion coefficient of the fetal face image is improved, so that the fetal face image looks more opaque, or the fusion coefficient of the image of the area covered by the second position in the fetal facial image is improved, so that the image of the area covered by the second position is seen. It is more opaque, so that the image of the fetal face is "anti-erased".
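Both the "erase" and the "anti-erase" flows above can be sketched as a local scaling of the fusion coefficients of the selected subset's sub-image, with a factor below 1 for erasing and above 1 for anti-erasing. The per-pixel coefficient map, the circular screen region and the numeric values are illustrative assumptions; as noted above, the coefficient of the whole sub-image may be scaled instead.

    import numpy as np

    def adjust_local_fusion(coef_map, center, radius, factor):
        # Scale the fusion coefficients only inside the circular region picked by the user.
        yy, xx = np.ogrid[:coef_map.shape[0], :coef_map.shape[1]]
        cy, cx = center
        inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        adjusted = coef_map.copy()
        adjusted[inside] = np.clip(adjusted[inside] * factor, 0.0, 1.0)
        return adjusted

    # "erase" the fetal-face sub-image around pixel (120, 200), then "anti-erase" it again
    coef = np.full((480, 640), 0.6, dtype=np.float32)
    coef = adjust_local_fusion(coef, center=(120, 200), radius=30, factor=0.3)
    coef = adjust_local_fusion(coef, center=(120, 200), radius=30, factor=2.0)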
[0130]  [0130]
[0131] 实施例三: [0132] 当三维图像旋转到某个角度吋, 或三维图像从某个视角去观察吋, 胎儿脸部有 吋会被其它结构所遮挡, 此吋希望通过进行一个操作即可将遮挡在胎儿脸部的 遮挡物去除。 本实施例中, 如图 7所示, 在控制面板上设置一控制键 (例如按钮 329) , 对应一键去除遮挡物功能, 当胎儿脸部全部或部分被遮挡吋, 用户通过 按压按钮 329即可输入一键去除遮挡物的指令。 在一种具体实施例中, 一键去除 遮挡物的处理流程如图 10所示, 包括以下步骤: [0131] Embodiment 3: [0132] When the three-dimensional image is rotated to an angle 吋, or the three-dimensional image is observed from a certain angle of view, the flaws in the fetal face may be obscured by other structures, and thus it is desired to block the fetal face by performing an operation. The obstruction of the part is removed. In this embodiment, as shown in FIG. 7, a control button (such as button 329) is disposed on the control panel, corresponding to a button to remove the occlusion function. When the fetal face is completely or partially blocked, the user presses the button 329. You can enter a command to remove the occlusion. In a specific embodiment, a process for removing a occlusion by a key is as shown in FIG. 10, and includes the following steps:
[0133] 步骤 30, 获取超声三维体数据, 得到体数据集。  [0133] Step 30: Acquire ultrasonic three-dimensional volume data to obtain a volume data set.
[0134] 步骤 31, 根据胎儿脸部特征, 确定胎儿脸部轮廓上各体素在所述体数据集中的 深度, 形成胎儿脸部轮廓的深度变化曲面。  [0134] Step 31: Determine, according to the facial features of the fetus, the depth of each voxel on the contour of the fetal face in the volume data set to form a depth change surface of the contour of the fetal face.
[0135] 步骤 32, 基于所述深度变化曲面将所述体数据集分割为至少两个子集, 其中一 个子集包括胎儿脸部的三维体数据。 [0135] Step 32: The volume data set is segmented into at least two subsets based on the depth variation surface, and one of the subsets includes three-dimensional volume data of the fetal face.
[0136] 步骤 33, 对多个子集的全部或部分进行渲染获得多个子图像。 [0136] Step 33: Rendering all or part of the plurality of subsets to obtain a plurality of sub-images.
[0137] 步骤 34, 对多个子图像全部或部分进行融合显示。 [0137] Step 34: Perform fusion display on all or part of the plurality of sub-images.
[0138] 步骤 35, 接收用户通过单一操作产生的第一指令。 在控制面板上设置一按钮 32 [0138] Step 35: Receive a first instruction generated by the user through a single operation. Set a button on the control panel 32
9, 对应一键去除遮挡物功能, 当胎儿脸部全部或部分被遮挡吋, 用户通过按压 按钮即可输入第一指令。 9, corresponding to a key to remove the occlusion function, when the fetal face is completely or partially occluded, the user can input the first command by pressing the button.
[0139] 步骤 36, 根据第一指令, 降低除所述包含胎儿脸部的子集之外的其他子集对应 的子图像的融合系数, 从而使其他子集对应的子图像的显示效果更加透明, 使 得胎儿脸部图像更加凸显。 [0139] Step 36: According to the first instruction, reduce a fusion coefficient of the sub-image corresponding to the subset other than the subset of the fetal face, so that the display effect of the sub-image corresponding to the other subset is more transparent. , making the fetal face image more prominent.
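A minimal sketch of the one-key de-occlusion of steps 35-36, assuming the fusion coefficients are kept per named subset: a single key press drops every coefficient except that of the fetal-face subset to a small value, so the occluding structures become almost transparent. The subset names and the residual value 0.05 are illustrative assumptions.

    def one_key_remove_occluders(coefficients, face_key="fetal_face", occluder_coef=0.05):
        # Lower the fusion coefficient of every subset except the fetal-face subset.
        return {name: (coef if name == face_key else occluder_coef)
                for name, coef in coefficients.items()}

    # toy usage
    coefs = {"fetal_face": 0.7, "placenta": 0.5, "umbilical_cord": 0.4}
    print(one_key_remove_occluders(coefs))   # only "fetal_face" keeps its original weight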
[0140] 本实施例通过单一操作即可去除胎儿脸部的遮挡物, 最大程度地简化了用户 ( 例如医生) 的操作。 [0140] This embodiment can remove the obstruction of the fetal face by a single operation, and the operation of the user (for example, a doctor) is simplified to the utmost extent.
[0141] 本实施例中, 步骤 31-32中采用深度变化曲面的方式来区分胎儿脸部子集和其 它结构的子集, 本领域技术人员应当理解, 在另外的具体实施例中, 也可以通 过其它识别方式识别出胎儿脸部子集和其它结构的子集, 从而可在后续步骤 36 中降低除胎儿脸部子集之外的其他子集对应的子图像的融合系数。  [0141] In this embodiment, the depth variation surface is used in steps 31-32 to distinguish the subset of the fetal face and other subsets of the structure. Those skilled in the art should understand that, in another specific embodiment, A subset of the fetal face subset and other structures are identified by other means of identification such that the fusion coefficients of the sub-images corresponding to the subset other than the fetal face subset can be reduced in a subsequent step 36.
[0142]
[0143] Embodiment 4:
[0144] This embodiment further provides a three-dimensional ultrasound image display system. As shown in FIG. 11, the system includes an acquisition unit 410, an identification unit 420, a rendering unit 430 and a fusion unit 440.
[0145] The acquisition unit 410 is configured to acquire ultrasound three-dimensional volume data and obtain a volume data set. In a specific embodiment, the acquisition unit 410 is responsible for collecting the ultrasound three-dimensional volume data: two-dimensional images are acquired in a series of scan planes and integrated according to their three-dimensional spatial relationship, thereby achieving volume data acquisition in three-dimensional space. The selection and control of the scan planes in which the two-dimensional images are acquired can be realized with a volume probe or a matrix (area-array) probe. A volume probe consists internally of a conventional one-dimensional array of transducer elements and a built-in stepper motor that swings the element array; the stepper motor swings the scan plane of the element array back and forth along its normal direction to scan the three-dimensional space. A matrix probe has thousands of elements arranged as a matrix and can transmit and receive directly in different directions of three-dimensional space, achieving rapid volume data acquisition. The two-dimensional images acquired in the scan planes are reconstructed according to their spatial relationship: coordinates are transformed according to the spatial position of each plane, and the voxel values of the three-dimensional volume data are obtained by interpolation. The three-dimensional volume data output by the acquisition unit 410 may also undergo further processing such as smoothing and denoising.
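As an illustrative sketch of the interpolation step described above (not the patent's specific reconstruction algorithm), the following assumes each acquired two-dimensional image has already been mapped to three-dimensional coordinates by the per-plane transform; the SciPy-based resampling is an assumed implementation choice.

```python
import numpy as np
from scipy.interpolate import griddata

def reconstruct_volume(plane_points, plane_values, grid_shape):
    """Resample scattered scan-plane samples onto a regular voxel grid.

    plane_points: (N, 3) positions of all pixels from all scan planes,
                  after the per-plane coordinate transform.
    plane_values: (N,) corresponding echo intensities.
    grid_shape:   (nx, ny, nz) size of the output volume.
    """
    axes = [np.linspace(plane_points[:, d].min(), plane_points[:, d].max(), n)
            for d, n in enumerate(grid_shape)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    # Linear interpolation fills each voxel from the surrounding plane samples.
    volume = griddata(plane_points, plane_values, (gx, gy, gz),
                      method="linear", fill_value=0.0)
    return volume.astype(np.float32)
```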
[0146] The identification unit 420 divides the input three-dimensional volume data into a plurality of regions. In a specific embodiment, the identification unit 420 is configured to identify a plurality of subsets from the volume data set according to the different objects it contains, at least one of which is an object of interest. The regions output by the identification unit 420 may overlap in space; in particular, one region may be completely contained within another (for example, the heart region contained within the torso region). The identification unit 420 may divide the regions by geometric shape, or by object or tissue structure. The division methods include, but are not limited to, the following (a sketch of the geometric-cropping case follows this list):
[0147] 1. Cropping of the volume data, such as cropping based on geometric shapes like planes, cuboids, spheres or ellipsoids;
[0148] 2. Recognition of key parts or structures in the volume data, such as recognition of the adult cardiac endocardium/epicardium or the ventricles/atria, fetal face recognition, recognition of the endometrium or uterine adnexa, and so on;
[0149] 3. Segmentation of the volume data, such as segmenting fetal volume data into a fetal region and an amniotic-fluid region, and so on.
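The following is a minimal sketch of the first division method (geometric cropping), assuming the volume is held as a NumPy array; the plane and ellipsoid masks are illustrative examples of the geometric shapes listed above, not the patent's required shapes.

```python
import numpy as np

def plane_mask(shape, point, normal):
    """Boolean mask of voxels on the positive side of a cutting plane."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    coords = np.stack([zz, yy, xx], axis=-1).astype(np.float32)
    return np.dot(coords - np.asarray(point, np.float32),
                  np.asarray(normal, np.float32)) >= 0

def ellipsoid_mask(shape, center, radii):
    """Boolean mask of voxels inside an axis-aligned ellipsoid."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    d = sum(((g - c) / r) ** 2 for g, c, r in zip((zz, yy, xx), center, radii))
    return d <= 1.0

# A subset is then simply the voxels selected by a mask, e.g.:
# face_mask = ellipsoid_mask(volume.shape, center=(40, 60, 60), radii=(30, 40, 40))
# other_mask = ~face_mask
```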
[0150] The rendering unit 430 is configured to render part or all of the plurality of subsets to obtain a plurality of sub-images. In a specific embodiment, the rendering unit 430 is configured to establish several different display configurations and to use these display configurations to render the subsets distinctively.
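A minimal sketch of per-subset display configurations driving distinctive rendering follows; the configuration fields (tint, gain) and the maximum-intensity projection standing in for the full volume renderer are illustrative assumptions, not the patent's parameter set.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class DisplayConfig:
    tint: tuple      # RGB colour applied to this subset, e.g. (1.0, 0.8, 0.6)
    gain: float      # brightness factor standing in for richer render settings

def render_subset(volume, mask, config):
    """Render one subset with its own display configuration.
    A maximum-intensity projection stands in for the full volume renderer."""
    masked = np.where(mask, volume, 0).astype(np.float32)
    mip = masked.max(axis=0) / max(float(volume.max()), 1e-6)  # project along depth
    gray = np.clip(mip * config.gain, 0.0, 1.0)
    return gray[..., None] * np.asarray(config.tint, np.float32)  # (H, W, 3)

# One configuration per subset, so each subset is rendered distinctively.
configs = {
    "fetal_face": DisplayConfig(tint=(1.0, 0.8, 0.6), gain=1.2),
    "other":      DisplayConfig(tint=(0.6, 0.7, 1.0), gain=0.8),
}
```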
[0151] The fusion unit 440 is configured to fuse part or all of the sub-images produced by the rendering unit 430, obtaining a superimposed three-dimensional image for display. In a specific embodiment, the fusion unit 440 obtains the fusion coefficient of each subset, multiplies the rendering result of each subset by its fusion coefficient, and superimposes the results to obtain the final displayed image. The fusion methods include, but are not limited to, the following:
[0152] 1. The rendering results of the regions are directly superimposed according to a given fusion ratio; the user may use a preset combination of fusion ratios or specify the fusion ratio of each region manually;
[0153] 2. The rendering results of the regions are superimposed according to an adaptive fusion ratio, which can be computed from the spatial relationship of the regions, the voxel value distribution, preset weights, preset fusion rules and so on; the user may also change how the adaptive fusion ratio is computed (note that with an adaptive fusion ratio, different voxels within the same region may have different ratio coefficients);
[0154] 3. According to the user's specification, the regions are displayed in a given front-to-back order, and rendering results located behind are occluded by the results in front (the user may also dynamically adjust the order of the regions);
[0155] 4. The regions are displayed according to their actual front-to-back positions in space, and rendering results located behind are occluded by the results in front;
[0156] 5. According to other principles (such as the clinical meaning of the regions), a front-to-back order of the regions is chosen for display, and rendering results located behind are occluded by the results in front;
[0157] 6. A combination of the above. The user may choose any of the above fusion methods, and different spatial positions may also be fused in different ways.
[0158] Among the above fusion methods, methods 1-2 make each region appear simultaneously in semi-transparent form in the final fusion result, whereas methods 3-5 involve no semi-transparency, so part of the rendering result of some regions may be occluded by other regions. Methods 3-5 can also be understood as follows: at a given spatial position, the fusion coefficient of one region is 1 while the fusion coefficients of the regions it occludes are 0. Whichever fusion method is used, the final displayed image can simultaneously show the details of the different regions. A sketch of these two coefficient regimes follows.
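The following is a minimal sketch of per-ray fusion coefficients for the two regimes just described (semi-transparent adaptive weighting versus front-subset-occludes-back), assuming each subset has a per-ray depth map and a representative voxel value; the array layout and the value-proportional weighting rule are illustrative assumptions.

```python
import numpy as np

def per_ray_coefficients(depth_maps, values, mode="adaptive"):
    """Compute per-ray fusion coefficients for each subset.

    depth_maps: (S, H, W) depth of the first voxel each subset contributes
                along every tracking ray (np.inf where the subset is absent).
    values:     (S, H, W) representative voxel value of each subset on the ray.
    mode: "adaptive"  -> weights proportional to voxel values (semi-transparent),
          "occluding" -> the front-most subset gets 1, all other subsets get 0.
    """
    if mode == "occluding":
        coeffs = np.zeros_like(values, dtype=np.float32)
        front = np.argmin(depth_maps, axis=0)       # nearest subset on each ray
        rows, cols = np.indices(front.shape)
        coeffs[front, rows, cols] = 1.0
    else:
        total = values.sum(axis=0, keepdims=True) + 1e-6
        coeffs = values / total                     # coefficients in [0, 1], sum to 1
    return coeffs
```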
[0159] After the regions or objects have been displayed distinctively with different display configurations, the user can see the region division at a glance, which makes subsequent adjustment of the division easier; displaying the regions simultaneously also helps the user observe the details of each region at the same time and gain a better grasp of the overall appearance of the volume data.
[0160] In an improved embodiment, the three-dimensional ultrasound image display system further includes an editing unit 450 and a setting unit 460. The editing unit 450 is responsible for adjusting the region division, for example for moving volume data at the boundary between adjacent subsets from the subset to which it originally belonged to the adjacent subset and rendering the adjusted volume data with the display configuration of the adjacent subset. The user can adjust the region-division result globally or locally as needed; the adjustment methods include, but are not limited to, the following:
[0161] 1. Selecting a different division strategy/algorithm to re-divide the regions, for example re-dividing the regions with different models according to different segmentation targets or clinical meanings;
[0162] 2. Using a cropping operation or an additional algorithm to split an existing region into several smaller regions of interest, or removing part of an existing region to shrink its region of interest;
[0163] 3. Merging existing regions, or expanding the range of a region of interest so that part of a non-interesting region is added to the region of interest;
[0164] 4. Globally adjusting the dividing surface between regions, for example translating and rotating the dividing surface as a whole, adjusting the positions of control points on the dividing surface or the parameters of its equation to change its shape and position, or replacing the mathematical model of the dividing surface;
[0165] 5. Locally adjusting the dividing surface between regions, for example using tools such as a brush or eraser, or moving some of the control points on the dividing surface, to locally enlarge or shrink the range of a region; at the same time, the other regions may shrink or enlarge correspondingly.
[0166] Whichever of the above methods is used, the editing unit 450 requires the user to complete the adjustment of the region division interactively, according to the user's needs and goals, in cooperation with the rendering unit 430 and the fusion unit 440. Because the rendering unit 430 and the fusion unit 440 can simultaneously display the extent and details of each region, interactively adjusting the extent of each region becomes intuitive and convenient.
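As an illustrative sketch of the local boundary-editing operation (method 5 above), the following assumes the subsets are stored as a per-voxel label volume; the spherical "brush" is an assumed interaction model, not the patent's required tool.

```python
import numpy as np

def brush_reassign(labels, center, radius, source_label, target_label):
    """Move voxels within a spherical brush from one subset to an adjacent one.

    labels: integer label volume, one label value per subset.
    center, radius: brush position and size in voxel units.
    """
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in labels.shape], indexing="ij")
    inside = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2
              + (xx - center[2]) ** 2) <= radius ** 2
    edited = labels.copy()
    edited[inside & (labels == source_label)] = target_label
    return edited
```

After such an edit, the reassigned voxels would be rendered with the display configuration of their new subset, which is how the adjusted boundary becomes immediately visible in the fused image.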
[0167] The setting unit 460 is responsible for adjusting the display effect of each region, for example for setting at least one of: the subsets shown on the final display interface, the display configuration of each subset, the fusion coefficient of each subset, and the way the fusion coefficients are calculated. The settings available through the setting unit 460 include, but are not limited to:
[0168] 1. Whether each region is shown in the final fusion result;
[0169] 2. The fusion coefficient of each region, or the way its fusion coefficient is calculated and the parameters required;
[0170] 3. The display configuration of each region.
[0171] Whichever of the above settings is changed, the setting unit 460 requires the user to adjust the display effect interactively, according to the user's needs and goals, in cooperation with the rendering unit 430 and the fusion unit 440. The display effect may be adjusted for each region individually, or for several regions or all regions at once. Because the rendering unit 430 and the fusion unit 440 can display the details of each region, interactively adjusting the display effect of each region becomes intuitive and convenient.
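A small sketch of a per-subset settings record that the setting unit might maintain is shown below; the field names are illustrative assumptions, and re-running the fusion after a field changes is what realises the interactive adjustment described above.

```python
from dataclasses import dataclass

@dataclass
class SubsetSettings:
    visible: bool = True              # whether the subset enters the final fusion
    fusion_coefficient: float = 1.0   # preset coefficient, or seed for the adaptive rule
    display_config: str = "default"   # key into the renderer's configuration table

# One settings record per subset.
settings = {
    "fetal_face": SubsetSettings(visible=True, fusion_coefficient=1.0,
                                 display_config="warm"),
    "placenta":   SubsetSettings(visible=True, fusion_coefficient=0.3,
                                 display_config="cool"),
}
settings["placenta"].visible = False  # e.g. exclude a region from the fused image
```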
[0172] Those skilled in the art will appreciate that all or part of the functions of the methods in the above embodiments may be implemented by hardware or by a computer program. When all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, which may include a read-only memory, a random-access memory, a magnetic disk, an optical disc, a hard disk and the like; a computer executes the program to realize the above functions. For example, the program is stored in the memory of the device, and when the processor executes the program in the memory, all or part of the above functions can be realized. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disc, a flash drive or a portable hard disk, and be downloaded or copied into the memory of the local device, or used to update the local device's system; when the processor executes the program in the memory, all or part of the functions in the above embodiments can be realized.
[0173]
[0174] The invention has been described above with reference to specific examples, which are intended only to aid understanding of the invention and not to limit it. Those skilled in the art to which the invention pertains may make a number of simple derivations, variations or substitutions based on the idea of the invention.

Claims

[Claim 1] A three-dimensional ultrasound image display method, comprising:
acquiring ultrasound three-dimensional volume data to obtain a volume data set;
identifying a plurality of subsets from the volume data set;
establishing a plurality of different display configurations;
rendering the plurality of subsets distinctively using the plurality of display configurations;
obtaining a fusion coefficient of each subset; and
multiplying the rendering results of the plurality of subsets by their respective fusion coefficients and then superimposing them for display.

[Claim 2] A three-dimensional ultrasound image display method, comprising:
establishing a plurality of different display configurations;
rendering a plurality of subsets of an ultrasound three-dimensional volume data set distinctively using the plurality of display configurations;
obtaining a fusion coefficient of each subset; and
multiplying the rendering results of the plurality of subsets by their respective fusion coefficients and then superimposing them for display.

[Claim 3] The method according to claim 1 or 2, wherein the fusion coefficient is a preset fusion coefficient or an adaptive fusion coefficient calculated according to a fusion rule.

[Claim 4] The method according to claim 3, wherein obtaining the fusion coefficient of each subset comprises:
calculating, by ray tracing, the voxel values of each subset on each tracking ray; and
determining, according to the voxel values and spatial distribution of each subset on each tracking ray, the fusion coefficient of each subset on that tracking ray, the fusion coefficient being a value from 0 to 1;
or obtaining the fusion coefficient of each subset comprises:
calculating, by ray tracing, the voxel values of each subset on each tracking ray;
identifying the spatial position of each subset along the incident direction of the tracking ray; and
determining, according to the voxel values, spatial distribution and spatial position of each subset on each tracking ray, the fusion coefficient of each subset on that tracking ray, the fusion coefficient being a value between 0 and 1.

[Claim 5] The method according to claim 4, wherein the fusion coefficient is determined in at least one of the following ways:
along the incident direction of the tracking ray, a subset located further forward has a larger fusion coefficient than a subset located further back;
on the tracking ray, a subset with larger voxel values has a larger fusion coefficient than a subset with smaller voxel values;
on the tracking ray, a subset with a wider voxel distribution range has a larger fusion coefficient than a subset with a narrower voxel distribution range.

[Claim 6] The method according to claim 4, wherein multiplying the rendering results of the plurality of subsets by their respective fusion coefficients and then superimposing them for display comprises: multiplying the rendering result of each subset on a tracking ray by the fusion coefficient of that subset on the tracking ray and then superimposing the products.

[Claim 7] The method according to claim 1 or 2, further comprising: determining whether the volume data set contains shared volume data identified into at least two subsets; and, for the shared volume data, rendering it with any one of the display configurations used by the subsets to which it belongs, or rendering it with a display configuration different from all of the display configurations used by the subsets to which it belongs.

[Claim 8] The method according to claim 1 or 2, wherein rendering the plurality of subsets distinctively using the plurality of display configurations comprises: rendering each subset with a display configuration different from those of the other subsets, or dividing the plurality of subsets into at least two groups and rendering each group with a display configuration different from those of the other groups.

[Claim 9] The method according to any one of claims 1 to 8, further comprising:
adjusting volume data at the boundary between adjacent subsets from the subset to which it originally belonged to the adjacent subset; and
rendering the adjusted volume data with the display configuration of the adjacent subset.

[Claim 10] The method according to claim 9, wherein adjusting the volume data at the boundary between adjacent subsets from the subset to which it originally belonged to the adjacent subset comprises:
detecting the subset selected by the user and the region selected on the superimposed display image;
determining the volume data to be adjusted according to the subset and region selected by the user; and
in response to a user-input operation imitating wiping or painting, adjusting the volume data to be adjusted from the subset to which it originally belonged to the adjacent subset.
[Claim 11] A three-dimensional ultrasound image display method, comprising:
acquiring ultrasound three-dimensional volume data from a fetal examination to obtain a volume data set;
identifying a plurality of subsets from the volume data set according to image features of the fetus;
rendering part or all of the plurality of subsets to obtain a plurality of sub-images;
fusing part or all of the plurality of sub-images to obtain a three-dimensional image; and
displaying the three-dimensional image.

[Claim 12] The method according to claim 11, wherein rendering part or all of the plurality of subsets to obtain a plurality of sub-images comprises: rendering part or all of the plurality of subsets based on different display configurations to obtain the plurality of sub-images.

[Claim 13] The method according to claim 11, wherein fusing part or all of the plurality of sub-images to obtain a three-dimensional image comprises: fusing part or all of the plurality of sub-images according to preset or adaptively calculated fusion coefficients to obtain a three-dimensional image in which the plurality of sub-images are displayed semi-transparently.

[Claim 14] The method according to claim 13, wherein fusing part or all of the plurality of sub-images to obtain a three-dimensional image further comprises setting different fusion coefficients for part or all of the plurality of sub-images in one of the following ways:
setting an observation viewpoint to obtain one or more tracking rays, calculating the voxel values of each subset on each tracking ray, and determining the fusion coefficient of each subset on each tracking ray according to the voxel values and spatial distribution of each subset on each tracking ray; and
setting an observation viewpoint to obtain one or more tracking rays, calculating the voxel values of each subset on each tracking ray, identifying the spatial position of each subset along the incident direction of the tracking ray, and determining the fusion coefficient of each subset on each tracking ray according to the voxel values, spatial distribution and spatial position of each subset on each tracking ray.

[Claim 15] The method according to claim 11, wherein the image features of the fetus include at least fetal facial features, and at least one of the plurality of subsets is used to generate a fetal face sub-image.

[Claim 16] The method according to claim 15, wherein the fetal facial features include: image characteristics, in the ultrasound three-dimensional volume data, corresponding to the anatomy of one or more tissue structures of the fetal face, the one or more tissue structures being selected from the fetal eyes, fetal nose, fetal forehead, fetal chin, fetal cheeks, fetal ears, fetal facial contour and fetal mouth.

[Claim 17] The method according to claim 15, wherein identifying a plurality of subsets from the volume data set according to image features of the fetus comprises:
determining, according to the fetal facial features, the depth in the volume data set of each voxel on the fetal facial contour, to form a depth-variation surface of the fetal facial contour; and
segmenting the volume data set into at least two subsets based on the depth-variation surface, one of the subsets including the three-dimensional volume data of the fetal face.

[Claim 18] The method according to claim 15, further comprising:
receiving a first instruction generated by the user through a single operation; and
reducing, according to the first instruction, the fusion coefficients of the sub-images corresponding to the subsets other than the subset containing the fetal face.

[Claim 19] The method according to claim 11, further comprising:
receiving a second instruction input by the user on the three-dimensional image;
identifying, according to the second instruction, a first position on the three-dimensional image corresponding to the user's input and the subset in which the first position lies;
determining, according to the first position, the volume data included at the first position within the subset in which the first position lies; and
reducing the fusion coefficient of the sub-image corresponding to the subset in which the first position lies, or reducing the fusion coefficient of the sub-image corresponding to the volume data included at the first position.

[Claim 20] The method according to claim 11, further comprising:
receiving a third instruction input by the user on the three-dimensional image;
identifying, according to the third instruction, a second position on the three-dimensional image corresponding to the user's input and the subset in which the second position lies;
determining, according to the second position, the volume data included at the second position within the subset in which the second position lies; and
increasing the fusion coefficient of the sub-image corresponding to the subset in which the second position lies, or increasing the fusion coefficient of the sub-image corresponding to the volume data included at the second position.
[Claim 21] An ultrasound apparatus, comprising:
an ultrasound probe for transmitting ultrasound waves to a region of interest within biological tissue and receiving echoes of the ultrasound waves;
a transmit/receive sequence controller for generating a transmit sequence and/or a receive sequence, outputting the transmit sequence and/or receive sequence to the ultrasound probe, and controlling the ultrasound probe to transmit ultrasound waves to the region of interest and receive the echoes of the ultrasound waves;
a processor for generating ultrasound three-dimensional volume data from the ultrasound echo data to obtain a volume data set and identifying a plurality of subsets from the volume data set, the processor further being configured to establish a plurality of different display configurations, render the plurality of subsets distinctively using the plurality of display configurations, obtain a fusion coefficient of each subset, and multiply the rendering results of the plurality of subsets by their respective fusion coefficients and then superimpose them for display; and
a human-machine interaction device including a display for displaying the ultrasound rendered image.

[Claim 22] The ultrasound apparatus according to claim 21, wherein the fusion coefficient is a preset fusion coefficient or an adaptive fusion coefficient calculated according to a fusion rule.

[Claim 23] The ultrasound apparatus according to claim 22, wherein the processor determines the fusion coefficient of each subset on a tracking ray according to at least one of the spatial position, voxel values and spatial distribution of the subset on the tracking ray, the fusion coefficient being a value between 0 and 1.

[Claim 24] The ultrasound apparatus according to claim 23, wherein the fusion coefficient is determined in at least one of the following ways:
along the incident direction of the tracking ray, a subset located further forward has a larger fusion coefficient than a subset located further back;
on the tracking ray, a subset with larger voxel values has a larger fusion coefficient than a subset with smaller voxel values;
on the tracking ray, a subset with a wider voxel distribution range has a larger fusion coefficient than a subset with a narrower voxel distribution range.

[Claim 25] The ultrasound apparatus according to claim 21, wherein the processor reduces the opacity of the rendering result of each subset by means of the fusion coefficient.

[Claim 26] The ultrasound apparatus according to claim 21, wherein the processor further determines whether the volume data set contains shared volume data identified into at least two subsets, and, for the shared volume data, renders it with any one of the display configurations used by the subsets to which it belongs, or renders it with a display configuration different from all of the display configurations used by the subsets to which it belongs.

[Claim 27] The ultrasound apparatus according to claim 21, wherein, when rendering the plurality of subsets distinctively using the plurality of display configurations, the processor renders each subset with a display configuration different from those of the other subsets, or divides the plurality of subsets into at least two groups and renders each group with a display configuration different from those of the other groups.

[Claim 28] The ultrasound apparatus according to any one of claims 21 to 27, wherein the processor adjusts volume data at the boundary between adjacent subsets from the subset to which it originally belonged to the adjacent subset, and renders the adjusted volume data with the display configuration of the adjacent subset.

[Claim 29] The ultrasound apparatus according to claim 28, wherein the human-machine interaction device further includes a control panel provided with a first control key, and the processor detects the first control key selected by the user and the region selected on the superimposed display image, determines the volume data to be adjusted according to the subset and region selected by the user, and, in response to a user-input operation imitating wiping or painting, adjusts the volume data to be adjusted from the subset to which it originally belonged to the adjacent subset.

[Claim 30] An ultrasound apparatus, comprising:
a memory for storing a program; and
a processor for executing the program stored in the memory to implement the method according to any one of claims 1-20.

[Claim 31] A computer-readable storage medium, comprising a program executable by a processor to implement the method according to any one of claims 1-20.
[Claim 32] A three-dimensional ultrasound image display system, comprising:
an acquisition unit for acquiring ultrasound three-dimensional volume data and obtaining a volume data set;
an identification unit for identifying a plurality of subsets from the volume data set;
a rendering unit for establishing a plurality of different display configurations and rendering the plurality of subsets distinctively using the plurality of display configurations; and
a fusion unit for obtaining a fusion coefficient of each subset, multiplying the rendering results of the plurality of subsets by their respective fusion coefficients, and then superimposing them for display.

[Claim 33] The system according to claim 32, further comprising an editing unit for adjusting volume data at the boundary between adjacent subsets from the subset to which it originally belonged to the adjacent subset and rendering the adjusted volume data with the display configuration of the adjacent subset.

[Claim 34] The system according to claim 32, further comprising a setting unit for setting at least one of: the subsets displayed on the final display interface, the display configuration of each subset, the fusion coefficient of each subset, and the way the fusion coefficients are calculated.

[Claim 35] An ultrasound apparatus, comprising:
an ultrasound probe for transmitting ultrasound waves to a region of interest within biological tissue and receiving echoes of the ultrasound waves;
a transmit/receive sequence controller for generating a transmit sequence and/or a receive sequence, outputting the transmit sequence and/or receive sequence to the ultrasound probe, and controlling the ultrasound probe to transmit ultrasound waves to the region of interest and receive the echoes of the ultrasound waves;
a processor for acquiring ultrasound three-dimensional volume data from a fetal examination to obtain a volume data set, identifying a plurality of subsets from the volume data set according to image features of the fetus, rendering part or all of the plurality of subsets to obtain a plurality of sub-images, fusing part or all of the plurality of sub-images to obtain a three-dimensional image, and outputting the three-dimensional image to a display for display; and
a human-machine interaction device including a display for displaying the ultrasound three-dimensional image.

[Claim 36] The ultrasound apparatus according to claim 35, wherein the processor renders part or all of the plurality of subsets based on different display configurations to obtain the plurality of sub-images.

[Claim 37] The ultrasound apparatus according to claim 35, wherein the processor fuses part or all of the plurality of sub-images according to preset fusion coefficients or adaptive fusion coefficients to obtain a three-dimensional image in which the plurality of sub-images are displayed semi-transparently.

[Claim 38] The ultrasound apparatus according to claim 35, wherein the adaptive fusion coefficient is set according to the voxel values, spatial distribution and/or spatial position of each subset on a tracking ray.

[Claim 39] The ultrasound apparatus according to claim 35, wherein the image features of the fetus include at least fetal facial features, at least one of the plurality of subsets is used to generate a fetal face sub-image, and the processor identifying a plurality of subsets from the volume data set according to the image features of the fetus comprises:
determining, according to the fetal facial features, the depth in the volume data set of each voxel on the fetal facial contour, to form a depth-variation surface of the fetal facial contour; and
segmenting the volume data set into at least two subsets based on the depth-variation surface, one of the subsets including the three-dimensional volume data of the fetal face.

[Claim 40] The ultrasound apparatus according to claim 35, wherein the human-machine interaction device further includes a control panel provided with a second control key corresponding to a one-key occlusion-removal function, and the processor is further configured to receive a first instruction generated by the user through a single operation on the second control key and, according to the first instruction, reduce the fusion coefficients of the sub-images corresponding to the subsets other than the subset containing the fetal face.

[Claim 41] The ultrasound apparatus according to claim 35, wherein the processor is further configured to receive a second instruction input by the user on the three-dimensional image, identify, according to the second instruction, a first position on the three-dimensional image corresponding to the user's input, and reduce the fusion coefficient of the sub-image corresponding to the subset in which the first position lies, or reduce the fusion coefficient of the sub-image corresponding to the volume data included at the first position.

[Claim 42] The ultrasound apparatus according to claim 35, wherein the processor is further configured to receive a third instruction input by the user on the three-dimensional image, identify, according to the third instruction, a second position on the three-dimensional image corresponding to the user's input, and increase the fusion coefficient of the sub-image corresponding to the subset in which the second position lies, or increase the fusion coefficient of the sub-image corresponding to the volume data included at the second position.
PCT/CN2017/085736 2017-05-24 2017-05-24 Ultrasonic device and three-dimensional ultrasonic image display method therefor WO2018214063A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/085736 WO2018214063A1 (en) 2017-05-24 2017-05-24 Ultrasonic device and three-dimensional ultrasonic image display method therefor
CN201780079242.6A CN110087553B (en) 2017-05-24 2017-05-24 Ultrasonic device and three-dimensional ultrasonic image display method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/085736 WO2018214063A1 (en) 2017-05-24 2017-05-24 Ultrasonic device and three-dimensional ultrasonic image display method therefor

Publications (1)

Publication Number Publication Date
WO2018214063A1 true WO2018214063A1 (en) 2018-11-29

Family

ID=64396181

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085736 WO2018214063A1 (en) 2017-05-24 2017-05-24 Ultrasonic device and three-dimensional ultrasonic image display method therefor

Country Status (2)

Country Link
CN (1) CN110087553B (en)
WO (1) WO2018214063A1 (en)

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN110211216A (en) * 2019-06-14 2019-09-06 北京理工大学 A kind of 3-D image airspace fusion method based on the weighting of volume drawing opacity
CN110223371A (en) * 2019-06-14 2019-09-10 北京理工大学 A kind of 3-D image fusion method based on shearing wave conversion and the weighting of volume drawing opacity
CN111353328A (en) * 2018-12-20 2020-06-30 核动力运行研究所 Ultrasonic three-dimensional volume data online display and analysis method
CN111836584A (en) * 2020-06-17 2020-10-27 深圳迈瑞生物医疗电子股份有限公司 Ultrasound contrast imaging method, ultrasound imaging apparatus, and storage medium
US20230305126A1 (en) * 2022-03-25 2023-09-28 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Ultrasound beamforming method and device

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN112137693B (en) * 2020-09-08 2023-01-03 深圳蓝影医学科技股份有限公司 Imaging method and device for four-dimensional ultrasonic guided puncture

Citations (6)

Publication number Priority date Publication date Assignee Title
US20090015587A1 (en) * 2007-07-09 2009-01-15 Kabushiki Kaisha Toshiba Ultrasonic imaging apparatus
US20120190984A1 (en) * 2011-01-26 2012-07-26 Samsung Medison Co., Ltd. Ultrasound system with opacity setting unit
CN103493125A (en) * 2011-02-28 2014-01-01 瓦里安医疗***国际股份公司 Method and system for interactive control of window/level parameters of multi-image displays
CN103908299A (en) * 2012-12-31 2014-07-09 通用电气公司 Systems and methods for ultrasound image rendering
CN106055188A (en) * 2015-04-03 2016-10-26 登塔尔图像科技公司 System and method for displaying volumetric images
CN106236133A (en) * 2015-06-12 2016-12-21 三星麦迪森株式会社 For the method and apparatus showing ultrasonoscopy

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US7691063B2 (en) * 2004-02-26 2010-04-06 Siemens Medical Solutions Usa, Inc. Receive circuit for minimizing channels in ultrasound imaging
CN102855655A (en) * 2012-08-03 2013-01-02 吉林禹硕动漫游戏科技股份有限公司 Parallel ray tracing rendering method based on GPU (Graphic Processing Unit)
GB2513698B (en) * 2013-03-15 2017-01-11 Imagination Tech Ltd Rendering with point sampling and pre-computed light transport information
CN104157004B (en) * 2014-04-30 2017-03-29 常州赞云软件科技有限公司 The method that a kind of fusion GPU and CPU calculates radiancy illumination
JP5920897B2 (en) * 2014-07-03 2016-05-18 株式会社ソニー・インタラクティブエンタテインメント Image generating apparatus and image generating method
WO2016054775A1 (en) * 2014-10-08 2016-04-14 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic virtual endoscopic imaging system and method, and apparatus thereof

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US20090015587A1 (en) * 2007-07-09 2009-01-15 Kabushiki Kaisha Toshiba Ultrasonic imaging apparatus
US20120190984A1 (en) * 2011-01-26 2012-07-26 Samsung Medison Co., Ltd. Ultrasound system with opacity setting unit
CN103493125A (en) * 2011-02-28 2014-01-01 瓦里安医疗***国际股份公司 Method and system for interactive control of window/level parameters of multi-image displays
CN103908299A (en) * 2012-12-31 2014-07-09 通用电气公司 Systems and methods for ultrasound image rendering
CN106055188A (en) * 2015-04-03 2016-10-26 登塔尔图像科技公司 System and method for displaying volumetric images
CN106236133A (en) * 2015-06-12 2016-12-21 三星麦迪森株式会社 For the method and apparatus showing ultrasonoscopy

Cited By (8)

Publication number Priority date Publication date Assignee Title
CN111353328A (en) * 2018-12-20 2020-06-30 核动力运行研究所 Ultrasonic three-dimensional volume data online display and analysis method
CN111353328B (en) * 2018-12-20 2023-10-24 核动力运行研究所 Ultrasonic three-dimensional volume data online display and analysis method
CN110211216A (en) * 2019-06-14 2019-09-06 北京理工大学 A kind of 3-D image airspace fusion method based on the weighting of volume drawing opacity
CN110223371A (en) * 2019-06-14 2019-09-10 北京理工大学 A kind of 3-D image fusion method based on shearing wave conversion and the weighting of volume drawing opacity
CN110211216B (en) * 2019-06-14 2020-11-03 北京理工大学 Three-dimensional image spatial domain fusion method based on volume rendering opacity weighting
CN111836584A (en) * 2020-06-17 2020-10-27 深圳迈瑞生物医疗电子股份有限公司 Ultrasound contrast imaging method, ultrasound imaging apparatus, and storage medium
CN111836584B (en) * 2020-06-17 2024-04-09 深圳迈瑞生物医疗电子股份有限公司 Ultrasound contrast imaging method, ultrasound imaging apparatus, and storage medium
US20230305126A1 (en) * 2022-03-25 2023-09-28 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Ultrasound beamforming method and device

Also Published As

Publication number Publication date
CN110087553B (en) 2022-04-26
CN110087553A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
WO2018214063A1 (en) Ultrasonic device and three-dimensional ultrasonic image display method therefor
US10515452B2 (en) System for monitoring lesion size trends and methods of operation thereof
JP4510817B2 (en) User control of 3D volume space crop
US20070046661A1 (en) Three or four-dimensional medical imaging navigation methods and systems
US11521363B2 (en) Ultrasonic device, and method and system for transforming display of three-dimensional ultrasonic image thereof
JP6629094B2 (en) Ultrasound diagnostic apparatus, medical image processing apparatus, and medical image processing program
KR101043331B1 (en) 3-dimension supersonic wave image user interface apparatus and method for displaying 3-dimension image at multiplex view point of ultrasonography system by real time
JP4711957B2 (en) User interface for 3D color ultrasound imaging system
WO2006022815A1 (en) View assistance in three-dimensional ultrasound imaging
WO2007043310A1 (en) Image displaying method and medical image diagnostic system
JP7010948B2 (en) Fetal ultrasound imaging
JP7267928B2 (en) Volume rendered ultrasound image
JP7177870B2 (en) Ultrasound Imaging System with Simplified 3D Imaging Controls
JP6382050B2 (en) Medical image diagnostic apparatus, image processing apparatus, image processing method, and image processing program
JP2007135843A (en) Image processor, image processing program and image processing method
US20130328874A1 (en) Clip Surface for Volume Rendering in Three-Dimensional Medical Imaging
CN106030657B (en) Motion Adaptive visualization in medicine 4D imaging
US20200330076A1 (en) An ultrasound imaging system and method
JP2007512064A (en) Method for navigation in 3D image data
JP2001276066A (en) Three-dimensional image processor
JP7008713B2 (en) Ultrasound assessment of anatomical features
JP5498090B2 (en) Image processing apparatus and ultrasonic diagnostic apparatus
KR102500589B1 (en) Method and system for providing freehand render start line drawing tools and automatic render preset selections
CN112689478B (en) Ultrasonic image acquisition method, system and computer storage medium
CN112581596A (en) Ultrasound image drawing method, ultrasound image drawing apparatus, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17910527

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.07.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17910527

Country of ref document: EP

Kind code of ref document: A1