CN105407811B - Method and system for 3D acquisition of ultrasound images

Method and system for 3D acquisition of ultrasound images

Publication number: CN105407811B
Application number: CN201480042479.3A
Authority: CN (China)
Other versions: CN105407811A
Prior art keywords: image, acquired, ultrasound, interest, volume
Legal status: Active
Inventors: Delphine Ribes, Matthias Peterhans, Stefan Weber
Assignees: CASCINATION AG, Universitaet Bern
Application filed by CASCINATION AG and Universitaet Bern; publication of CN105407811A; application granted; publication of CN105407811B

Classifications

    • A61B 8/0891: detecting organic movements or changes (e.g. tumours, cysts, swellings) for diagnosis of blood vessels
    • A61B 6/03: computerised tomographs
    • A61B 8/14: echo-tomography
    • A61B 8/4245: determining the position of the probe, e.g. with respect to an external reference frame or to the patient
    • A61B 8/4254: determining the position of the probe using sensors mounted on the probe
    • A61B 8/463: displaying multiple images or images and diagnostic data on one display
    • A61B 8/466: displaying means adapted to display 3D data
    • A61B 8/469: special input means for selection of a region of interest
    • A61B 8/483: diagnostic techniques involving the acquisition of a 3D volume of data
    • A61B 8/5207: processing of raw data to produce diagnostic data, e.g. for generating an image
    • A61B 8/5238: combining image data of patient, e.g. merging several images from different acquisition modes into one image
    • A61B 8/5253: combining overlapping images, e.g. spatial compounding
    • A61B 8/5261: combining images from different diagnostic modalities, e.g. ultrasound and X-ray
    • A61B 8/5269: detection or reduction of artifacts
    • A61B 8/54: control of the diagnostic device

Abstract

The present invention relates to a method of 3D ultrasound image acquisition and to a system for implementing the method. The proposed method detects whether a current ultrasound image (401, 402) has at least one pixel in a volume of interest (301). If the current image (401, 402) has no pixels in the volume of interest (301), it is discarded; otherwise the current ultrasound image (401, 402) is segmented and combined onto the 3D model (403) to be generated, which is displayed in real time on the display (101) and in particular overlaid on the displayed pre-acquired image (305). In particular, whenever a new current ultrasound image (401, 402) is combined onto the 3D model (403), the 3D model (403) displayed on the display (101) is updated. Further, a quality measure of the 3D model (403) to be generated is calculated during the acquisition of the ultrasound images (401, 402), and the acquisition of the ultrasound images (401, 402) is ended when the quality measure reaches a predefined level.

Description

Method and system for 3D acquisition of ultrasound images
The present invention relates to a method and system for use in ultrasound (US) imaging of biological soft tissue. More particularly, it relates to US acquisition protocols with real-time feedback on user interaction. The method allows rapid and accurate imaging and localization of specific anatomical structures of interest, especially of internal organs such as the liver, for (but not limited to) image-guided surgical or diagnostic interventions and during the same. In addition, the present invention ensures satisfactory image content for further image processing, particularly for diagnosis, segmentation (e.g., partitioning a digital image into two or more regions corresponding to features of the imaged subject such as blood vessels), and registration.
Background
Due to the high application potential of 3D representations of anatomical structures, three-dimensional (3D) ultrasound imaging is increasingly used and is becoming widely adopted in clinical settings. In conventional two-dimensional (2D) ultrasound imaging, a physician acquires a series of images of a region of interest while moving an ultrasound transducer by hand. Based on the motion pattern and the image content, he then performs a mental 3D reconstruction of the underlying anatomy. This mental process has various disadvantages: quantitative information (distances between anatomical structures, exact positions in relation to other organs, etc.) is lost, and the resulting 3D information depends on and is known only to the physician performing the scan.
The use of 3D ultrasound (US) imaging and appropriate processing of the image data significantly helps to overcome the above-mentioned drawbacks. Further benefits of 3D echography are as follows: the spatial relationship among the 2D slices is preserved in the 3D volume, which allows off-line examination of ultrasound images pre-recorded by another physician. Using the so-called arbitrary-plane slice technique, image planes that cannot be acquired directly due to geometric constraints imposed by other structures of the patient can now be rendered easily. In addition, diagnostic tasks can be greatly improved by volume visualization and accurate volume assessment [1].
3D US images are acquired using sophisticated ultrasound systems, which are described in various patent applications. There are mainly two methods to obtain 3D data: one is to use a 2D phased-array probe that allows scanning of the volume of interest, and the other is to reconstruct a 3D volume from a series of 2D images acquired with a standard ultrasound probe that is moved over the region of interest.
2D phased-array probe technology uses a two-dimensional array of piezoelectric elements. The volume is scanned by electronically steering the array elements. Dedicated 3D US probes have been introduced for real-time 3D volume acquisition, mainly in obstetrics and cardiac imaging. Typical device examples are the Voluson 730 (GE Medical Systems) and a comparable system (Philips Medical Systems, Bothell, WA, USA). Both systems aim at generating high-quality 3D US images in all spatial directions (axial, lateral and elevational) at a high acquisition rate of typically 40 volumes per second. A fully filled 3D volume can be obtained using this technique.
The main disadvantage of this technique is that the field of view is limited by the size of the probe, and such probes are expensive and exist only in high-end ultrasound equipment. An alternative is to compound a 3D scan from a series of 2D images, as proposed in [2], [3]. This technique uses a standard ultrasound probe (1D piezo array) and different methods of scanning the region of interest (probe translation, probe rotation, freehand scanning using probe position tracking). Ultrasound systems, probes and methods for 3D imaging are prior art and are described in [4-10]. In the case of so-called freehand ultrasound probes, a freehand ultrasound calibration is required [11]. To ensure uniform and complete filling of the 3D volume, data acquisition can be performed using uniform velocity, equal direction and angle, as described in [6]. To overcome acquisition artifacts, complex reconstruction and compounding algorithms have been described, for example in US6012458A and [12].
An emerging application of 3D ultrasound is its use in registration for soft tissue surgical navigation. Surgical navigation systems are used to guide physicians based on 3D image data, often acquired with Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) prior to surgery. Because soft tissue can deform and move in the time between imaging and surgery, additional intra-operative data is required to warp (register) the pre-operative image data to the patient undergoing surgery. Ultrasound imaging, being real-time, non-invasive and commonly available, is a promising modality for acquiring such data during these procedures. As mentioned above, 3D ultrasound imaging in particular is well suited to obtain such information on organ motion and deformation.
Common challenges faced by all 3D ultrasound acquisition techniques are variations in image quality and the lack of measures indicating whether the acquired data is adequate for further image processing, such as diagnosis, segmentation and registration. The suitability of the image data for further processing depends on the image content, the contrast between the structure of interest and the background, the number of artifacts present in the image, as well as the image homogeneity and the density of the volume scan. All of these factors are typically assessed by the user only once the scan is completed or once the results of further processing are reviewed, e.g. by acquiring a 3D data set during a guided procedure, attempting a registration and analyzing the registration results. If the results of the scan are insufficient, the entire acquisition process needs to be repeated, which is time consuming and tedious, as it is uncertain whether a repetition of the scan leads to better results.
The available state of the art for guidance/feedback during ultrasound acquisition can be classified as:
- guidance for acquiring desired image content,
- guidance for regular transducer motion,
- guidance of 3D imaging based on a generic model of the anatomy,
as discussed in the following paragraphs.
Guidance for acquiring desired image content
In US 2012065510, the content of acquired B-mode images is compared with a model of the desired image in order to train the user to acquire a desired view of the anatomy. A quality-of-fit measure with respect to the desired target image is calculated for each acquired image and displayed to the user. The user then moves the ultrasound probe until an image with a sufficient fit is acquired. This method provides feedback on the image quality of the 2D image, but it neither provides an accurate suggestion of how to improve the image quality nor gives an indication of the usability of the image for further processing.
A general interactive assistant for image processing is proposed in US 2007016016. The assistant compares the acquired image with a pre-stored target image. If the similarity between the acquired image and the target image is not sufficient, the assistant attempts to recommend an action for improving the image quality. As in the previously discussed application, the image content of the two images is compared directly and no further processing steps are considered.
Guidance for regular transducer motion
A system for guiding freehand 3D scanning is described in US 005645066. The method trains the user to move the probe at a regular speed over a desired region of interest in order to obtain regularly spaced sets of 2D ultrasound images, which are then compounded into a 3D volume. The graphically provided feedback shows the user the filling of the compounding image buffer, without any information on the image quality or on the generated 3D image volume.
A method for teaching a user to perform the correct probe motion for elastography imaging is described in US 2012108965. Using a sensor such as an accelerometer inside the US probe, its motion is measured and compared to the desired motion pattern for elastography imaging. The system then provides visual or auditory feedback to the operator in order to facilitate proper motion. This system is limited to sensing the motion of the transducer and does not provide any feedback on the quality of the image content.
A method for visualizing the progress of scanning during laparoscopic ultrasound imaging is described in US 20090036775. Based on position measurements of the transducer, the acquired ultrasound images are displayed in 3D, and the frames around the ultrasound images are highlighted if there are gaps in the scan or if the scan speed is too fast or too slow. This enables the user to rescan missing areas and ensures that regular probe motion is achieved. As in the above-mentioned patent, no feedback is provided on the image content.
Guidance of 3D imaging based on a generic model of the anatomy
An apparatus for guiding the acquisition of cardiac ultrasound is described in EP 1929956. The system displays the intersection of US image planes with a 3D anatomical model in order to assess the progress of data acquisition on the heart. The underlying analysis is thus limited to the geometric position of the images and does not include additional criteria with respect to the subsequent use of the image data.
In US20080187193 an apparatus for forming a guide image for US scanning is proposed. Based on an acquired series of images, the most suitable 3D shape model is selected and displayed to the user. This 3D shape model then acts as a guide image for subsequent imaging of the same structure. This enables systematic scanning and efficient localization of important anatomical features. The apparatus is intended to guide an ultrasound scan to a specific target location, but it does not aim at obtaining a 3D volume image of a desired, predefined quality.
Based on the above description, the underlying problem of the present invention is to provide a method and a system which facilitate the acquisition of a 3D ultrasound data set, i.e. of a 3D model of a volume of interest of an object (e.g. a body or a body part, in particular an organ such as the liver of a patient), and which in particular allow checking the quality of the acquired 3D model in order to ensure its suitability for a specific further use.
This problem is solved by a method having the features of claim 1 and by a system having the features of claim 15. Preferred embodiments are claimed in the respective subclaims and are described below.
The method according to the invention according to claim 1 comprises the steps of: providing a pre-acquired 3D image or model (i.e. a corresponding dataset) of an object (e.g. a body part or body of a person/patient, such as an organ like the liver); displaying the pre-acquired image on a display (e.g. via a graphical user interface (GUI) of a computer); selecting a volume of interest of the object (e.g. a specific volume of the object to be examined) in the pre-acquired image (e.g. GUI-assisted, on a computer connected to the display); and adjusting the spatial position of the volume of interest with respect to (the local coordinate system of) the pre-acquired image, e.g. by correspondingly positioning an ultrasound (US) probe with respect to the object (e.g. on the body of the patient), wherein in particular the current spatial position of the volume of interest (also denoted VOI) with respect to the pre-acquired image is visualized on the display, in particular in real time, and wherein in particular a current (e.g. 2D) ultrasound image acquired in real time in the volume of interest by means of the ultrasound probe is displayed on the display, wherein in particular the visualization of the volume of interest is overlaid on the displayed pre-acquired 3D image and is updated on the display using the current spatial position of the ultrasound probe, the current spatial position of the ultrasound probe being determined in particular using a tracking system (e.g. in a so-called spatially fixed, patient-fixed or camera coordinate system).
Now, when the spatial position of the volume of interest is selected or adjusted as planned (the VOI then remaining static, i.e. no longer adjusted), the acquisition of ultrasound images in said volume of interest is triggered in order to generate a 3D model (i.e. a corresponding data set representing a model or, optionally, a 3D ultrasound image) of said object in said volume of interest, wherein said triggering is performed in particular by means of said ultrasound probe, in particular by moving the ultrasound probe into a predetermined pose with respect to the object (e.g. on the volume of interest), by a specific predefined position of the ultrasound probe, by not moving the ultrasound probe within a predetermined time period, or even automatically. A plurality of ultrasound images is then acquired, in particular intraoperatively, by means of the ultrasound probe in the volume of interest for generating said 3D model, while moving said ultrasound probe along or over the volume of interest, for example over or across the object, preferably such that images are acquired in the VOI of the object. The current image is in particular displayed in real time on said display, in two and/or three dimensions, for example in a 3D viewer rendered on the display, wherein said three-dimensionally displayed current ultrasound image is in particular superimposed on the displayed pre-acquired image. It is automatically determined whether the current ultrasound image has pixels in the volume of interest: in case the current image has no pixels in the volume of interest, the current image is automatically discarded (i.e. not combined into the 3D model/ultrasound image); otherwise (i.e. when the image has pixels or voxels in the VOI), the current ultrasound image is segmented and combined onto the 3D model to be generated, which is displayed in real time on the display and in particular overlaid onto the displayed pre-acquired image, wherein in particular, whenever a new current ultrasound image is combined onto the 3D model, the 3D model displayed on the display is updated. A quality measure of the 3D model to be generated is automatically determined during the acquisition of the ultrasound images, wherein the acquisition of the ultrasound images is ended as soon as the quality measure has reached a predetermined level, and wherein in particular the quality measure is at least one of: the number of individual (2D) ultrasound images scanned in the volume of interest; the (3D) density of the ultrasound images acquired in the volume of interest (e.g. the ratio between the number of pixels or voxels scanned and the number of voxels of the VOI, i.e. the VOI volume); the number and/or distribution of specific image features, in particular the number of anatomical structures segmented in the volume of interest (such as tumors, vessels, etc.); and the time used for scanning the ultrasound images. Other criteria may also be applied.
For example, the acquisition is stopped if the number of acquired (2D) ultrasound images exceeds a predetermined number, if the density of the 2D ultrasound images in the VOI exceeds a predetermined density value, if a particular number and/or distribution of specific image features is detected, or after a predetermined period of time during which the VOI is deemed to be adequately sampled.
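Purely as an illustration, and not as the claimed implementation, the acquisition loop with its discard, segment/combine and stop conditions could be sketched as follows; all callables and the time budget are hypothetical placeholders:

```python
import time

def acquire_3d_model(frames, in_voi, has_artifact, segment, combine,
                     quality_reached, time_limit_s=120.0):
    """Sketch of the acquisition loop: 'frames' yields (image, probe_pose)
    pairs from the tracked US probe; the remaining arguments are processing
    callables (hypothetical, e.g. the helpers sketched later in this text)."""
    start = time.monotonic()
    n_used = 0
    for image, pose in frames:
        if not in_voi(image, pose):
            continue                    # no pixel inside the VOI: discard frame
        if has_artifact(image):
            continue                    # e.g. black-stripe artifact: discard
        combine(segment(image), pose)   # segment, then merge into the 3D model
        n_used += 1
        # quality measure: image count, VOI coverage, segmented structures, time
        if quality_reached(n_used) or time.monotonic() - start > time_limit_s:
            break
    return n_used
```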
After the acquisition of the ultrasound images, the generated 3D model is preferably registered to the pre-acquired 3D image.
Thus, in particular, the method allows interactively acquiring ultrasound images for the purpose of image registration, i.e. fusion between image modalities. Due to such fusion, images acquired during treatment may be enhanced with more detailed information acquired outside the treatment room (e.g., intra-treatment ultrasound images, in which fewer small vessels are detected and the contrast is lower, fused with high-resolution pre-operative CT or MRI). In particular, the invention aims at providing an image acquisition framework which is aimed not only at acquiring high-resolution images of the patient, but also at acquiring the technical information which enables said fusion. Preferably, the user is guided to acquire the images/features needed to perform registration between the pre-acquired data and the currently acquired data, using patient-specific a priori knowledge from the pre-operative data (typically from other modalities with a better level of detail than ultrasound).
According to a preferred embodiment, said provided pre-acquired 3D image is acquired in a first session, while said plurality of ultrasound images is acquired in a separate second session carried out at a later time. The first session may take place hours/days/weeks before the second session, e.g. a surgery/intervention.
In particular, the period between the two is at least 1 hour, at least 12 hours, at least one day or at least one week.
According to a further embodiment of the method according to the invention, the provided pre-acquired 3D image is acquired by using an imaging method other than ultrasound.
According to a further embodiment of the method according to the invention, the quality measure is based on a criterion derived from patient-specific data of the pre-acquired 3D image.
According to a further embodiment of the method according to the invention, the number and/or distribution of image features is selected depending on the patient-specific anatomy in the volume of interest.
According to a further embodiment of the invention, the user acquiring the plurality of ultrasound images is guided to move the ultrasound probe to positions where image features are expected based on the pre-acquired 3D image, in particular in order to provide a sufficient data set for registering the generated 3D model to the pre-acquired 3D image.
According to embodiments of the present invention, the VOI is defined not through on-screen guidance but by placing the US probe at a specific location.
In addition, "overlaying" an Ultrasound (US) image onto the pre-acquired 3D image or model particularly indicates that at least a portion of the US image or the US image is displayed at a location in the pre-acquired image such that the content of the features of the US image are calibrated or matched with the corresponding content of the features of the pre-acquired image. The US image may thus complement the content or features of the pre-acquired image, and vice versa. Additionally, the US image may thus cover portions of the pre-acquired 3D image. In the case of a VOI, the overlap particularly indicates that a visualization of the VOI (i.e., 3D box, etc.) is displayed in the pre-acquired 3D image, particularly, for example, at an appropriate position corresponding to the position of the ultrasound probe in a spatially fixed (or patient-fixed or camera) coordinate system.
Thus, the invention described herein guides the user in acquiring an ultrasound model/dataset that meets the requirements for further processing. Guidance is provided by online, real-time analysis and display of the acquired 3D model and by quantitative assessment of the image quality/content required for subsequent processing.
In order to accomplish said adjustment of the spatial position of said volume of interest with respect to said pre-acquired image, to overlay a visualization of the volume of interest onto the displayed pre-acquired image, to overlay said three-dimensionally displayed current ultrasound image onto the displayed pre-acquired image, to check whether the current ultrasound image has at least one pixel in the volume of interest, or to overlay the 3D model onto the displayed pre-acquired image, an initial registration (giving at least a rough alignment) is preferably performed. This allows displaying (at least approximately) US images, the VOI, etc. at the correct position in or on the pre-acquired 3D image, so that the features or content of the displayed US images align with the corresponding features or content of the pre-acquired 3D image or model.
In particular, the initial registration may be a landmark-based registration, where the user selects, for example, four points in the pre-acquired 3D image (e.g. a virtual liver model) and then touches them with a tracked tool (in order to acquire the points in a camera, patient-fixed or spatially fixed coordinate system). An appropriate algorithm then automatically computes the registration transformation.
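Such a point-based registration transformation is commonly computed with the SVD method of Arun et al.; the following is a minimal sketch under that assumption, not the algorithm prescribed by the patent:

```python
import numpy as np

def landmark_registration(model_pts, patient_pts):
    """Rigid transform (rotation R, translation t) mapping model landmarks onto
    the corresponding points touched with the tracked tool. Both arrays are Nx3
    and correspond row by row (e.g. the four selected landmarks)."""
    cm, cp = model_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (model_pts - cm).T @ (patient_pts - cp)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cp - R @ cm
    T = np.eye(4)                                      # homogeneous 4x4 result
    T[:3, :3], T[:3, 3] = R, t
    return T
```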
Alternatively, or in combination, an ultrasound-based initial registration may be implemented, where the user selects a point in the pre-acquired 3D image (e.g. on a virtual liver surface) at which he wants to place the ultrasound probe. Subsequently, the pre-acquired 3D image is used to simulate the expected ultrasound image at that location, and the user acquires the same image in the patient (object) with a calibrated ultrasound probe (thus in a camera, patient-fixed or spatially fixed coordinate system). An initial registration transformation is automatically calculated from the simulated virtual image and the acquired actual image. In this regard, a calibrated ultrasound probe is an ultrasound probe for which the relation between the position of the acquired image in the spatially fixed (or patient-fixed or camera) coordinate system and the position of the ultrasound probe (i.e. of its position sensor) is known, so that knowing the position of the ultrasound probe means knowing the position of the acquired ultrasound image in that coordinate system.
In a preferred embodiment of the method according to the invention, the generated 3D model is automatically registered to the pre-acquired, in particular preoperatively acquired, 3D image, in particular by matching one or several features of the generated 3D model (whose coordinates in the spatially fixed (or patient-fixed or camera) coordinate system are acquired with the help of the tracked ultrasound probe) with one or several corresponding features of the pre-acquired 3D image, and in particular by automatically determining, using said corresponding features and the coordinates of said features in the respective coordinate systems, a registration transformation between the spatially fixed (or patient-fixed or camera) coordinate system of the ultrasound probe and the coordinate system of the pre-acquired 3D image.
In other words, in the context of the present method, the user defines a volume of interest (VOI) in which the registration should be performed. The definition of the VOI is performed by clicking on the virtual model (i.e. the pre-acquired image) or by interactively placing the VOI using the pose of the ultrasound probe as described above (if a pose is used, the above calibration or initial registration is used to show the position of the probe on the virtual model, i.e. the virtual model is mapped into a camera, spatially fixed or patient-fixed coordinate system).
Once this ultrasound-based registration is completed, the location of the pre-acquired 3D image (i.e. the virtual 3D model) with respect to the spatially fixed (or patient-fixed or camera) coordinate system is known. Thus, a tool such as a surgical tool whose position is tracked in the spatially fixed (or patient-fixed or camera) coordinate system may be displayed on the pre-acquired 3D image (virtual model).
According to a preferred embodiment of the method according to the invention, the volume of interest is predefined with respect to its spatial dimensions in voxel units (height, width and depth) and is further predefined or selected with respect to specific features or characteristics, in particular with respect to its spatial resolution, the density of detected or segmented structures, its homogeneity (i.e. its spatial sampling density; in this sense the VOI in the pre-acquired image is preferably sampled uniformly throughout) and/or its number of artifacts (the number of artifacts is preferably small, ideally zero, and the noise level in the VOI is preferably low, ideally noise-free).
In addition, in a preferred embodiment of the method according to the invention, artifact detection is automatically carried out for each current ultrasound image that is not discarded, in particular using at least one filter algorithm, such as the Hough transform and/or low-pass filtering, wherein the current ultrasound image is discarded in case an artifact is detected in it, and wherein in particular an artifact probability is calculated, in particular based on patient-specific characteristics of the pre-acquired 3D image.
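One of the detectors named later in this text, a dark-stripe check via intensity analysis along vertical image lines, could be sketched as follows; the thresholds are illustrative assumptions, not values from the patent:

```python
import numpy as np

def has_black_stripe(image, stripe_fraction=0.2, dark_level=10.0):
    """Flag a B-mode frame containing a wide dark vertical stripe, as caused
    e.g. by insufficient probe-tissue contact. A column counts as dark when its
    mean intensity is below dark_level; the frame is rejected when the longest
    run of consecutive dark columns exceeds stripe_fraction of the width."""
    dark = image.mean(axis=0) < dark_level   # per-column mean intensity
    run = longest = 0
    for is_dark in dark:
        run = run + 1 if is_dark else 0
        longest = max(longest, run)
    return longest > stripe_fraction * image.shape[1]
```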
Further, in a preferred embodiment of the method according to the invention, said segmentation of each single current ultrasound image, in particular of vessels, tumors, organ borders, the biliary tract and/or other anatomy, is performed automatically using at least one (e.g. deterministic) algorithm providing a segmentation of the specific anatomical structure of interest within the object, wherein said algorithm is selected in particular based on patient-specific features of the pre-acquired 3D image.
Furthermore, in a preferred embodiment of the method according to the invention, said segmentation of each single current ultrasound image, in particular of structures such as organ boundaries, organ parenchyma and/or the vascular system, is performed automatically using a probabilistic assessment of image features, preferably using patient-specific features of the pre-acquired 3D image.
Furthermore, in a preferred embodiment of the method according to the invention, the US volume reconstruction algorithm applies two parallel processing steps, one for the segmentation of information from the different 2D US images and one for the detection of image artifacts, the latter either directly using the 2D US image content or based on the enhancement result, i.e. the detected features of structures in the US image (e.g. after segmentation of the image). In other words, artifact detection and segmentation are preferably carried out in parallel, wherein in particular the artifact detection directly uses the content of the current ultrasound image or the features detected in it, and wherein in particular the respective algorithms interact with each other iteratively.
Preferably, the image features (without artifacts) detected in each single current ultrasound image are then automatically merged into a 3D volume data set (a process also denoted as compounding), which represents the 3D model that is continuously built up as the series of (current) 2D ultrasound images is acquired.
Furthermore, in a preferred embodiment of the method according to the invention, during the initial adjustment of the spatial position of the ultrasound probe with respect to the volume of interest and/or while moving the ultrasound probe during the acquisition of the plurality of ultrasound images, guiding information is displayed on the display and/or provided audibly, in particular verbally, to the user, thereby assisting and/or guiding the user in placing and/or moving the US probe.
Preferably, the guiding information is provided as feedback based on features of the pre-acquired 3D image and of the acquired 3D model.
Preferably, the ultrasound probe is tracked by acquiring absolute spatial image coordinates using a coordinate measurement system based on optical, electromechanical or mechanical measurement principles (i.e. in a room-fixed, patient-fixed or camera coordinate system) and/or by acquiring relative image coordinates through analysis of the relative shifts of image features in subsequent images.
Further, in particular, the guiding information comprises a visualization of one or several cubic grids on the display, wherein in particular a specific color represents a defined tissue structure and/or anatomical structure. In addition, the grid or grids are preferably displayed on the pre-acquired 3D image.
Furthermore, in a preferred embodiment of the method according to the invention, missing information in the current ultrasound image is automatically interpolated, in particular after said segmentation, based on a priori information, using patient-specific features from the pre-acquired 3D image or a priori information about the object (e.g. the organ). Additionally, after the segmentation, missing information in the current ultrasound image (401, 402) may be interpolated using cohort-specific and/or statistical information about the geometry and distribution of vessel structures, anatomical structures of interest in the object, object portion or lesion, and/or other known anatomical structures.
Preferably, the volume of interest is chosen such that it contains sufficient image information to allow further processing regarding diagnosis, visualization, segmentation and/or registration.
Furthermore, in a preferred embodiment of the method according to the invention, the generated 3D model is automatically registered, for example to the pre-acquired 3D image, which is in particular based on an imaging method other than ultrasound and in particular uses a coordinate system different from that of the 3D model, in order to display the level of progress of the 3D model generation, in particular with respect to pre-acquired or dynamically updated information content, and in particular with respect to parameters such as homogeneity (see above) and/or resolution.
In addition, the visualization of the 3D model on the display preferably uses a user-defined static or dynamic color mapping, in particular indicating the currently detected and analyzed anatomical structures.
Further, in a preferred embodiment according to the present invention, a signal indicating that the ultrasound image acquisition process has been successfully completed is transmitted to the user, in particular either acoustically via a speaker or graphically via said display.
Preferably, the pre-acquired 3D image is an ultrasound, computed tomography or magnetic resonance image.
Further, the problem according to the invention is solved by a system having the features of claim 30, which is particularly designed to implement the method according to the invention, wherein the system comprises: an ultrasound probe connected to a data processing system, which in particular comprises a control unit for controlling the ultrasound probe and a computer device (e.g. a PC or workstation) for acquiring and analyzing US images, and a display connected to the computer for displaying information, in particular US image information and pre-acquired images, as well as information for the user (e.g. guidance information). In addition, the system comprises a tracking system for tracking the spatial position of the ultrasound probe (e.g. in a spatially fixed, patient-fixed or camera coordinate system), the tracking system comprising one or several position sensors attached to or integrated into the ultrasound probe for detecting the spatial position of the ultrasound probe in said coordinate system, wherein said tracking system (also denoted coordinate measurement system) is especially designed to sense the position of the ultrasound probe optically, electromechanically or mechanically, i.e. said tracking system is based on optical, electromechanical or mechanical measurement principles for position tracking of the ultrasound probe.
Preferably, the tracking system comprises a tracking device (such as a camera, in particular a stereo camera) designed to detect and track the position of a position sensor in a camera coordinate system associated with the camera (or tracking device). Since the tracking device is usually fixed with respect to the room in which the patient is located, or with respect to the patient, such a coordinate system can also be denoted as a spatially fixed or patient-fixed coordinate system.
Preferably, the data processing system is designed to automatically check whether a current ultrasound image of the object acquired using the ultrasound probe has at least one pixel in a pre-selected volume of interest of a pre-acquired 3D image of the object, wherein in case the current image has no pixels in the volume of interest, the data processing system is designed to discard the current image, and otherwise (i.e. when the image has pixels/voxels in the VOI) the data processing system is designed to automatically segment the current ultrasound image and combine it into the 3D model. Furthermore, the data processing system is designed to determine a quality measure for the 3D model to be generated, in particular during the acquisition of the ultrasound images using the ultrasound probe, wherein the data processing system is designed to end the acquisition of ultrasound images for the 3D model once said quality measure reaches a predefined or dynamically defined level, and wherein in particular the quality measure is at least one of: the number of individual ultrasound images scanned in the volume of interest; the distribution and/or number of specific image features, in particular the number of anatomical structures segmented in the volume of interest or, in particular, the patient-specific number of desired features; and the time required for the acquisition of the ultrasound images (see also above).
In addition, the data processing system is in particular designed to automatically register the generated 3D model to the pre-acquired 3D image, or vice versa (see also above).
The system may further comprise a speaker for providing audio, in particular speech, information to the user (e.g. guidance information, see above).
The system according to the invention may also be characterized by features of the method according to the invention described herein.
Furthermore, according to another aspect of the present invention, a computer program is provided comprising program instructions which, when the computer program is loaded into or executed by a computer, cause the computer (for example said data processing system or the computer of said data processing system) to perform a method according to the present invention (for example according to claim 1). Here, in particular the pre-acquired 3D image, the current (2D) ultrasound images acquired using the ultrasound probe and/or the VOI are fed as input to the computer program.
In particular, according to another aspect of the invention, a computer program is provided comprising program instructions which cause a computer (for example the data processing system or the computer of the data processing system) to check whether a current ultrasound image has at least one pixel in a volume of interest, wherein in case the current image has no pixels in the volume of interest, the current image is discarded, and wherein otherwise the current ultrasound image is segmented and combined onto the 3D model to be generated, which is displayed in real time, in particular on a display (e.g. connected to the computer), and in particular superimposed on a displayed pre-acquired image, wherein in particular, whenever a new current ultrasound image is combined onto the 3D model, the 3D model displayed on the display is updated, and a quality measure for the 3D model to be generated is determined, in particular during the acquisition of the ultrasound images, wherein the acquisition of the ultrasound images is ended as soon as the quality measure reaches a predetermined level, and wherein in particular the quality measure is at least one of: the number of individual ultrasound images scanned in the volume of interest; the distribution and/or number of specific image features, in particular the number of segmented anatomical structures in the volume of interest; and the time required for scanning the ultrasound images (see also above).
Additionally, another aspect of the invention is a method for generating and visualizing guidance information for the user in real time, to assist in locating and identifying an appropriate position of the volume of interest for placing an ultrasound probe on the surface of an organ.
In this regard, tracking of the ultrasound probe is preferably enabled by acquiring absolute spatial image coordinates using a coordinate measurement system based on optical, electromechanical or mechanical measurement principles and/or by acquiring relative image coordinates through analysis of the relative shifts of image features in subsequent images.
In addition, the guiding information for the user in this connection preferably comprises a virtual visualization of a cubic grid in the display of a graphical user interface, with specific colors representing defined tissue structures, in particular anatomical structures.
In addition, in this regard, the guidance information for the user is preferably audio or speech (e.g., recorded spoken or synthesized audio).
According to yet another aspect of the present invention, a registration method is provided for aligning an acquired 3D ultrasound image with a pre-acquired 3D image dataset in order to display the current level of progress of the 3D volumetric image acquisition with respect to pre-acquired or dynamically updated information content, in particular, but not limited to, parameters such as homogeneity (see above) and/or resolution.
In this regard, the visualization of the 3D ultrasound image data set preferably implements a specific, user-defined, static or dynamic color mapping indicating the currently detected and analyzed anatomical structures.
In addition, in this regard, the successful completion of the image acquisition process is preferably signalled to the user graphically by means of a GUI and/or acoustically via an acoustic interface, in particular a speaker.
In addition, in this connection, the pre-acquired image is preferably an ultrasound, CT or MR image, in particular one of homogeneous quality and image content.
Further features and advantages of the invention will be described by way of example only with reference to the accompanying drawings, in which:
Fig. 1 illustrates an exemplary embodiment for 3D ultrasound (US) image acquisition;
Fig. 2 shows a schematic diagram of the initialization of the US image acquisition process;
Figs. 3A and 3B show the visualization of a volume of interest (VOI) and its anatomical structures: Fig. 3B shows a visualization of the VOI together with a 2D US image, while in Fig. 3A the graphical representation of the VOI and the ultrasound image are superimposed onto the 3D structure;
Fig. 4 shows the visualization during 3D ultrasound image acquisition;
Fig. 5A illustrates the US image acquisition process including artifact detection and image segmentation;
Fig. 5B shows a typical appearance of an artifact in a US image;
Fig. 6 shows the acquisition algorithm with real-time feedback guiding the user to acquire suitable images for further image processing.
In particular, the method and system according to the invention aim at optimizing the acquisition of 3D US images with the goal of improving the real-time registration of US images to pre-acquired (e.g. 3D) images, in particular from US, CT and/or MR. By implementing real-time feedback during the acquisition of the US images and an online, real-time 3D image analysis loop, the system/method aims to ensure proper image content of the 3D US images/models for further data processing, i.e. diagnosis, visualization, segmentation and registration.
The invention is described with particular reference to image registration for soft tissue surgical navigation, but it is not limited to this application.
System setup
According to an exemplary embodiment, the method according to the invention uses in particular the following components: a 3D or 2D ultrasound (US) probe 103 connected to a data processing system or unit 105, comprising a control unit 107 for controlling the US probe 103 and a computer (workstation or PC) 106 provided with a graphical user interface (GUI) 101 for displaying images and other relevant user information. The display 101 may comprise a screen (LCD or the like) and other means of graphically and/or visually displaying information to the user 100. Also, speakers may be coupled to the computer 106 or the GUI 101.
The US probe 103 is tracked by means of, for example, a commercially available tracking system 102. The US probe 103 is calibrated and has a passive or active tracking sensor or marker 108 (also denoted position sensor 108) attached to it or integrated into it. In particular, the feedback and guidance for acquiring suitable image content is based on geometrical information of the acquired images in relation to the desired volume of interest and on measurements of the information content obtained in 3D. The measurements are derived from a segmentation of the acquired images and may be provided to the user as a qualitative 3D display or as quantitative indicators of quality. By providing online feedback during 3D image acquisition, the operator is guided to move the US probe to missing scan positions, to adjust the parameters correctly and ultimately to ensure that sufficient data is acquired for subsequent processing. By controlling the image quality during the acquisition process, lengthy repetitions of the entire imaging process can be avoided.
Adjusting, visualizing, and selecting VOI
The user 100 selects a so-called volume of interest VOI 301 in a pre-acquired image from ultrasound (US), computed tomography (CT) or magnetic resonance (MR) imaging, which is displayed in the display 101 of the GUI of the system. Prior to selecting the VOI 301, an initial registration may be performed based on one or several landmark points to roughly align the pre-acquired anatomical image or model with the tracking coordinate system (spatially fixed coordinate system). The position of the VOI 301 is then adjusted by the user placing the US probe 103 on the surface of the organ of interest 110. The current VOI 301 is displayed in the GUI 101 and updated based on real-time tracking information. The user 100 thus receives real-time visual feedback on the GUI 101, which allows him to interactively select a VOI 301 appropriate for the anatomical structure of interest 302. The adjustment algorithm for the VOI 301 is shown in Fig. 2.
During the adjustment phase, the VOI 301 is visualized on the GUI 101 as a cubic grid with lines of a specific color, displayed together with the first US image (Fig. 3B). The VOI 301 is placed under a virtual model of the tracked US probe 103, and the motion of the probe 103 is used to update the position of the VOI 301. Overlaying the virtual VOI 301 onto the pre-acquired image or model 305A (Fig. 3A) of the volume of interest enables the user to visually analyze the orientation and location of the selected VOI 301, in particular whether the anatomical structure of interest lies inside the VOI 301. The user 100 moves the US probe 103 over the surface of the organ 110 until the spatial placement of the VOI 301 is satisfactory.
Once the correct placement of the VOI 301 is achieved, the VOI 301 is selected by keeping the probe 103 still in the desired position or by another interaction within reach of the user 100 (e.g., by pressing a confirmation button on the GUI 101 or by using voice commands).
The size of the VOI 301 is determined by the following parameters: the length of the US probe 103, the image depth and the desired anatomy of interest. For internal organs such as the liver, the anatomy of interest in the VOI 301 is typically the vascular system, functional segments, a tumor or tumor accumulations, organ boundaries, bile ducts and/or the parenchyma of the organ. In addition, the structure may also be a probabilistic representation of a desired feature such as an organ boundary (likelihood of an organ boundary within a particular region). A typical VOI 301 size is approximately 40 mm (length) x 80 mm (width) x 90 mm (depth). Figs. 3A and 3B illustrate a typical VOI 301.
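For illustration, under the simplifying assumption of an axis-aligned box placed below the probe face, VOI corners of the typical size quoted above could be derived from the tracked probe pose as follows (all names hypothetical); such corners can feed a containment test like the one sketched in the next section:

```python
import numpy as np

def voi_from_probe(T_world_probe, length_mm=40.0, width_mm=80.0, depth_mm=90.0):
    """Axis-aligned VOI corners in world coordinates for a box of the typical
    size quoted above, centered under the probe face (assumed to sit at the
    origin of the probe frame, with +z pointing into the tissue)."""
    center = T_world_probe[:3, 3] + T_world_probe[:3, 2] * (depth_mm / 2.0)
    half = np.array([length_mm, width_mm, depth_mm]) / 2.0
    return center - half, center + half   # (voi_min, voi_max)
```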
Obtaining ultrasound images in a VOI
Once the selection of the VOI 301 is complete, the 3D data acquisition is initiated. If the user places the probe 103 such that the imaged region lies outside the VOI 301 during the image acquisition process, he is notified visually and/or audibly via the GUI 101. The information may be displayed by a specific symbol/graphic representation such as a colored arrow or a hand, and/or it may be encoded as sound (e.g. by means of frequency or amplitude modulation, or beep length). The audio information may also include verbal instructions given to the user through one or more speakers.
Each acquired (e.g. 2D) current US image 401, 402 is displayed in real time on the GUI 101. In this way, the user 100 can interactively, visually verify whether the anatomical structure of interest is visible in the US image. The visualization may be provided as a standard 2D ultrasound image 402 and also in the 3D viewer 401. The 3D viewer can either display only the ultrasound image and its position within the VOI 301 (similar to Fig. 3B), or it can overlay the acquired image with the corresponding 3D information from the pre-acquired image (similar to Fig. 3A).
Online image quality inspection, segmentation and compounding
Automatic online image quality inspection and analysis is performed during image acquisition. An example of an image evaluation algorithm is shown in Fig. 5A. The algorithm captures the acquired (current) US image and checks whether the position of the image is within the selected VOI 301. If the image is not inside the selected VOI 301, the next acquired (current) image is analyzed. The automated process uses spatial information from the tracking system 102 and the tracking sensor 108 attached to the US probe 103. From the tracking information and the US calibration transformation (a transformation linking the position of the US probe 103, i.e. of the position sensor attached to or integrated into the probe 103, to the position of the US image generated with the probe, so that knowing the position of the US probe 103 in the spatially fixed, patient-fixed or camera coordinate system means knowing the position of the US image in this coordinate system), the 3D spatial position of the US image is calculated and compared to the 3D spatial position of the VOI 301. A US image is considered to be outside the VOI 301 if no pixel of the US image is located inside the VOI 301. Otherwise, the image is deemed valid and used for further processing, including artifact removal and segmentation. The artifact removal process detects US-specific artifacts such as large black stripes in the image (Fig. 5B). These may result from insufficient contact between the active sensing area 109 of the US probe and the organ/biological tissue of the patient 110, or from a structure reflecting the complete US signal. Black stripes are automatically detected using methods such as the Hough transform, low-pass filtering, or density analysis along vertical lines of the image.
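As an illustration, the in-VOI test could look as follows, assuming an axis-aligned VOI in the tracking coordinate system and using the tracking and calibration transforms described above; all names are hypothetical:

```python
import numpy as np

def image_in_voi(T_world_probe, T_probe_image, pixels_mm, voi_min, voi_max):
    """True if at least one pixel of the 2D frame lies inside the VOI.
    T_world_probe: 4x4 tracked probe pose; T_probe_image: 4x4 US calibration
    transform; pixels_mm: Nx3 pixel positions in the image frame (z = 0);
    voi_min/voi_max: corners of an axis-aligned VOI in world coordinates."""
    T = T_world_probe @ T_probe_image                  # image -> world
    pts_h = np.c_[pixels_mm, np.ones(len(pixels_mm))] @ T.T
    inside = np.all((pts_h[:, :3] >= voi_min) & (pts_h[:, :3] <= voi_max), axis=1)
    return bool(inside.any())                          # False -> discard frame
```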
In parallel to the artifact detection process, the image is segmented and buffered until artifact detection is complete (see Fig. 5). If no artifact is present, the segmented image is retained and combined into the 3D US image/model. The segmentation automatically detects structures of interest (typically vessels, tumors, or organ boundaries) in the image and displays them as an overlay on the 2D image 404. When a new US image is combined into the 3D US volume, the 3D information 403 on the GUI 101 is updated and displayed to the user 100. By displaying the analyzed structures in real time on the 2D image, the user 100 can interactively determine whether the segmentation algorithm succeeded in detecting the relevant information in the current image. By updating the 3D visualization with the most recently acquired data, the user 100 also obtains feedback throughout the acquisition process and is able to determine whether there are locations with missing information and, ultimately, whether an adequate representation of the anatomy of interest has been acquired.
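Continuing the sketch above, the parallel segmentation and artifact check with buffering could look as follows; segment_structures and compound_into_volume are hypothetical placeholders (a simplified segmentation sketch and a PNN compounding sketch follow in the next sections):

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(image, pixel_coords_mm, T_voi_image, voi_size_mm, volume):
    """Run artifact detection and segmentation in parallel; compound only
    clean frames into the 3D volume (a minimal sketch, not the patent's
    prescribed pipeline)."""
    if not image_inside_voi(pixel_coords_mm, T_voi_image, voi_size_mm):
        return None  # outside the VOI: proceed to the next acquired frame
    with ThreadPoolExecutor(max_workers=2) as pool:
        artifact_job = pool.submit(has_black_streak, image)
        segment_job = pool.submit(segment_structures, image)  # hypothetical
        mask = segment_job.result()   # buffered until the artifact check ends
        if artifact_job.result():
            return None               # artifact present: discard the frame
    # hypothetical compounding step; one concrete variant (PNN) is sketched below
    compound_into_volume(volume, image, mask, T_voi_image)
    return mask
```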
In addition to information about the content of the acquired images, the GUI 101 is also capable of displaying the acquired image planes, thereby providing visual feedback on how the VOI 301 is being filled with ultrasound images. This enables the user 100 to see locations where no image data have been acquired and to interactively place the ultrasound probe 103 at these locations.
The segmentation algorithm is chosen according to the anatomy of interest. Typical examples are algorithms for blood vessel detection and for organ surface detection. A wide range of state-of-the-art US segmentation algorithms can be used [16].
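As a deliberately simplified stand-in for such a detector (a real system would use one of the methods surveyed in [16]; all thresholds here are illustrative assumptions), a vessel segmentation can exploit the fact that vessels appear hypoechoic (dark) in B-mode images:

```python
import numpy as np
from scipy import ndimage

def segment_structures(image, vessel_thresh=30, min_size_px=50):
    """Toy vessel detector: threshold dark pixels and keep connected
    components above a minimum size. Returns a boolean vessel mask."""
    candidate = image < vessel_thresh
    labels, n = ndimage.label(candidate)
    if n == 0:
        return np.zeros_like(candidate, dtype=bool)
    sizes = ndimage.sum(candidate, labels, index=np.arange(1, n + 1))
    keep_ids = np.flatnonzero(sizes >= min_size_px) + 1  # label ids to keep
    return np.isin(labels, keep_ids)
```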
The implemented methods for image compounding are known and include, but are not limited to: the Rayleigh model of the intensity distribution, Pixel Nearest Neighbour (PNN), Voxel Nearest Neighbour (VNN), Distance-Weighted (DW) interpolation, non-mesh registration, and Radial Basis Function (RBF) interpolation. They are described in document [12].
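Of these, PNN is the simplest to illustrate: each tracked 2D pixel is mapped into the reconstruction grid and accumulated into its nearest voxel. The following sketch uses averaging as the bin-filling rule, which is one of several options discussed in [12]; the names and signature are assumptions, not the patent's API:

```python
import numpy as np

def pnn_compound(volume, counts, image, pixel_coords_mm, T_voi_image, voxel_mm):
    """Pixel Nearest Neighbour compounding into a VOI-aligned voxel grid.

    volume, counts   : 3D accumulation arrays of identical shape
    pixel_coords_mm  : (N, 3) pixel centres, ordered to match image.ravel()
    voxel_mm         : isotropic voxel edge length
    """
    homog = np.c_[pixel_coords_mm, np.ones(len(pixel_coords_mm))]
    pts = (homog @ T_voi_image.T)[:, :3]
    idx = np.round(pts / voxel_mm).astype(int)            # nearest voxel index
    inside = np.all((idx >= 0) & (idx < volume.shape), axis=1)
    idx, vals = idx[inside], image.ravel()[inside]
    np.add.at(volume, (idx[:, 0], idx[:, 1], idx[:, 2]), vals)
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    # final voxel intensity would be volume / np.maximum(counts, 1)
```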
Quantitative measurement of information content
In addition to the visual feedback provided to the user, an automatic quantitative analysis of the US image data is performed in parallel with the image acquisition process. Such measurements ensure that the image content is suitable for further processing and provide additional real-time feedback to the user 100. Typical quality measures in the context of registration for soft-tissue surgery include the percentage of the VOI 301 scanned by the ultrasound probe 103 (e.g., 10% of the voxels in the VOI scanned) or the number of voxels with detected anatomical data (e.g., the number of segmented vessel/tumor/boundary voxels). Since the desired/required image content is known from the pre-acquired volumetric image data, measurements of the currently acquired information content can be compared with the data required for further processing. In the case of liver surgery, the system is intended to detect one of the vascular systems, which is then used for registration. The extent of the vascular system (and thus the expected number of vessel pixels) is known from pre-operative imaging, and a feedback loop can therefore report the percentage of vessels detected with intra-operative ultrasound. A similar amount of data in the pre-operative and intra-operative datasets is desirable for robust and accurate registration.
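A minimal sketch of two such measures, assuming the PNN accumulation arrays above and a pre-operatively derived expectation of vessel voxels (the exact metric definitions are not specified in the patent):

```python
import numpy as np

def quality_measures(counts, vessel_mask, expected_vessel_voxels):
    """Return (fill_ratio, vessel_ratio): the fraction of VOI voxels hit by
    at least one US pixel, and the fraction of the pre-operatively expected
    vessel voxels already observed intra-operatively."""
    fill_ratio = np.count_nonzero(counts) / counts.size
    vessel_ratio = np.count_nonzero(vessel_mask) / max(expected_vessel_voxels, 1)
    return fill_ratio, vessel_ratio
```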
Feedback loop
Fig. 5 depicts the complete 3D image acquisition, combining all of the components described above. The process starts with the interactive definition of the VOI 301 by means of a virtual display of the planned VOI 301 (which is attached to the tracked ultrasound probe 103).
Once the VOI 301 is defined, the system enters a loop that analyzes each newly acquired image to determine whether it depicts structures in the VOI 301 and is free of artifacts. If the image is outside the VOI 301 or contains artifacts, the algorithm returns to image acquisition. Otherwise, the image is segmented and combined, and the resulting data are displayed to the user 100 on the GUI 101.
Based on the visual feedback of the GUI 101 and the quantitative measurements of the information content, the criteria for stopping the US acquisition are evaluated. The criteria for stopping the image acquisition are defined prior to or during the acquisition process and vary with the tissue or organ 110 to be analyzed. There are generally three basic types of criteria: (a) visual assessment by the user, (b) static criteria based on the acquired US data (e.g., number of valid images acquired, percentage of the volume filled, percentage of voxels segmented), and (c) dynamic criteria based on the desired image content (e.g., the expected number of intra-operative vessel pixels derived from the pre-operative image data and the VOI selection). Thus, the user 100 or the acquisition algorithm decides whether the acquired image content is sufficient for the desired application (diagnosis, visualization, segmentation, registration) or whether additional images need to be acquired. If sufficient data are available, the acquisition is stopped; otherwise, feedback is provided to the user 100 regarding the additional image content that is needed.
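A sketch of how static and dynamic criteria could be combined into a stopping decision (the threshold values are illustrative assumptions only, not values taken from the patent):

```python
def acquisition_complete(fill_ratio, vessel_ratio, n_valid_images,
                         min_fill=0.10, min_vessel=0.50, min_images=100):
    """Combine static criteria (volume fill, image count) with a dynamic
    criterion (expected vessel content) into one stop decision."""
    static_ok = fill_ratio >= min_fill and n_valid_images >= min_images
    dynamic_ok = vessel_ratio >= min_vessel
    return static_ok and dynamic_ok
```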
Feedback to the user 100 includes visual or audio instructions regarding the necessary actions (e.g., moving the probe to other regions of the VOI 301, searching for anatomy, changing imaging parameters) to achieve the desired image quality. Based on this feedback, the user acquires the next image and the feedback loop starts again from the beginning.
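One conceivable heuristic for such probe guidance, not specified in the patent, is to point the user toward the nearest still-unscanned region of the VOI:

```python
import numpy as np

def guidance_vector(counts, probe_pos_voi_mm, voxel_mm):
    """Suggest a probe move (direction and distance in VOI coordinates)
    toward the nearest unscanned voxel; zero vector if the VOI is covered."""
    empty = np.argwhere(counts == 0) * voxel_mm     # unscanned voxel centres
    if len(empty) == 0:
        return np.zeros(3)                          # VOI fully covered
    d = empty - probe_pos_voi_mm
    nearest = empty[np.argmin(np.einsum('ij,ij->i', d, d))]
    return nearest - probe_pos_voi_mm
```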
Finally, further aspects of the invention are set out below as items, which may also be expressed as claims.
Item 1: a method for 3D ultrasound image acquisition is proposed, comprising the steps of:
-providing a pre-acquired 3D image (305) of an object (110),
-displaying the pre-acquired image (305) on a display (101),
-selecting a volume of interest (301) of the object (110) in the pre-acquired image (305), and adjusting the spatial position of the volume of interest (301) relative to the pre-acquired image (305), in particular by correspondingly positioning an ultrasound probe (103) relative to the object (110),
-visualizing the current spatial position of the volume of interest (301) relative to the pre-acquired image (305), in particular in real time and in particular on the display (101), wherein in particular a current ultrasound image (401, 402) acquired in the volume of interest (301) by means of the ultrasound probe (103) is displayed in real time on the display (101), wherein the visualization of the volume of interest (301) is in particular superimposed on the displayed pre-acquired image (305), and wherein in particular the visualization of the volume of interest (301) on the display (101) is updated using the current spatial position of the ultrasound probe (103), which is determined in particular using a tracking system (102),
-when the spatial position of the volume of interest (301) is selected or adjusted as desired: triggering the acquisition of ultrasound images (401, 402) in the volume of interest (301) for generating a 3D model (403) of the object (110) in the volume of interest (301), wherein the triggering is performed in particular by means of the ultrasound probe (103), in particular by using a defined gesture or a specific movement of the ultrasound probe (103) on the surface of the object (110); and
-acquiring, by means of the ultrasound probe (103), a plurality of ultrasound images (401, 402) in the volume of interest (301) for generating the 3D model (403), while moving the ultrasound probe (103) along the volume of interest (301) relative to the object (110), wherein the current image (401, 402) is in particular displayed in real time on the display (101), wherein in particular the current image is displayed two-dimensionally (402) or three-dimensionally (401) on the display, and wherein in particular the three-dimensionally displayed current ultrasound image is overlaid on the displayed pre-acquired image (305), and
-checking whether the current ultrasound image (401, 402) has at least one pixel in the volume of interest (301), wherein, when the current image (401, 402) has no pixels in the volume of interest (301), the current image (401, 402) is discarded, and wherein otherwise the current ultrasound image (401, 402) is segmented and combined into the 3D model (403) to be generated, which is in particular displayed on the display (101) in real time and in particular overlaid onto the displayed pre-acquired image (305), wherein in particular, when a new current ultrasound image (401, 402) is combined into the 3D model (403), the 3D model (403) displayed on the display (101) is updated, and
-determining a quality measure of the 3D model (403) to be generated from the acquired ultrasound images (401, 402), wherein the acquisition of ultrasound images (401, 402) is ended once the quality measure has reached a predefined level, wherein in particular the quality measure is at least one of:
-the number of individual ultrasound images scanned within the volume of interest (301),
-a density of ultrasound images (401, 402) acquired within the volume of interest (301),
-a distribution and/or a number of specific image features, in particular a number of segmented anatomical structures in the volume of interest (301), and
-the time required for scanning the ultrasound images (401, 402).
Item 2: method according to item 1, wherein an initial registration is performed, in particular for correctly displaying on the display (101) the 3D model (403) for the pre-acquired image (305), the acquired current ultrasound image (401, 402) and/or the position of the volume of interest (301), wherein in particular the initial registration comprises the steps of: -selecting a plurality of points, in particular four points, in the coordinate system of the pre-acquired image (305), -touching corresponding points of the object (110) using the tracking tool in order to acquire said corresponding points in the spatially fixed or patient-fixed coordinate system of the tool, and-determining a registration transformation between said coordinate systems from said points in the coordinate system of the pre-acquired image and their corresponding points in the spatially fixed (or patient-fixed) coordinate system of the tool, and/or wherein in particular the initial registration comprises the steps of:
selecting a point in a coordinate system of a pre-acquired image (305),
calculating the desired ultrasound image at this location, acquiring a corresponding ultrasound image (401, 402) of the object (110) using the ultrasound probe (103), which is tracked in a spatially fixed or patient-fixed coordinate system, and determining a registration transformation between said coordinate systems using the desired ultrasound image and the acquired ultrasound image (401, 402).
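The point-based variant of the initial registration described in item 2 amounts to a standard rigid alignment of matched point pairs. A generic SVD-based sketch follows (the patent does not mandate this particular solver; names are illustrative):

```python
import numpy as np

def paired_point_registration(pts_image, pts_patient):
    """Rigid registration between the pre-acquired image coordinate system
    and the patient-fixed coordinate system from matched point pairs, using
    the classical SVD (Arun/Kabsch) solution.

    pts_image, pts_patient : (N, 3) corresponding points, N >= 3
    Returns (R, t) such that pts_patient ~= pts_image @ R.T + t
    """
    ci, cp = pts_image.mean(axis=0), pts_patient.mean(axis=0)
    H = (pts_image - ci).T @ (pts_patient - cp)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # diagonal correction guards against a reflection solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cp - R @ ci
    return R, t
```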
Item 3: the method according to item 1 or 2, wherein the generated 3D model (403) is registered to the pre-acquired 3D image (305), in particular pre-operatively acquired, in particular by registering at least one feature of the generated 3D model (403), whose coordinates in the spatially fixed or patient fixed system are acquired with the aid of the tracking probe (103), matching with a corresponding feature of the pre-acquired 3D image (305), and determining the registered transformation between the coordinate system of the pre-acquired 3D image (305) and the spatially fixed or patient fixed coordinate system of the ultrasound probe (103), in particular by using the coordinates of the at least one feature in the spatially fixed or patient fixed coordinate system and the coordinates of the corresponding feature in the coordinate system of the pre-acquired 3D image (305).
Item 4: the method according to any one of the preceding items, wherein artifact detection is performed on a current ultrasound image (401, 402) that is not discarded, in particular using at least one filter algorithm, in particular a hough transform and/or a low pass filter, wherein in particular the current ultrasound image is discarded in case an artifact is detected in this current ultrasound image.
Item 5: the method according to any of the preceding items, wherein said segmentation of the single current ultrasound image (401, 402) is performed using at least one deterministic algorithm providing a segmentation of a specific anatomical structure of an object in particular a vessel, a tumor, an organ border, a bile duct and/or other anatomical volume of interest.
Item 6: the method according to any of the preceding items, wherein said segmentation of the single current ultrasound image (401, 402) is performed using a probabilistic assessment of image features, such as in particular organ boundaries, organ parenchyma and/or vascular system.
Item 7: the method according to items 4 to 6, wherein said artifact detection and said segmentation are performed in parallel, wherein in particular said artifact detection directly uses a single content of a current ultrasound image (401, 402) or a detected content of said current ultrasound.
Item 8: the method according to any of the preceding items, wherein guiding information is displayed on said display (101) and/or provided to the user (100) by sound, in particular speech, when positioning the ultrasound probe (103) for said adjusting the spatial position of the volume of interest (301) and/or when moving the ultrasound probe (103) during said acquiring of the plurality of ultrasound images (401, 402), to assist and/or guide the user (100) with respect to positioning and/or moving the ultrasound probe (103).
Item 9: the method according to any of the preceding items, wherein the ultrasound probe (103) is tracked by taking absolute spatial image coordinates using a coordinate measurement system based on optical, electromechanical or mechanical measurement criteria and/or by taking relevant image coordinates by analyzing relevant shifts of image features in subsequent images.
Item 10: method according to item 8, wherein said guiding information comprises a virtual visualization of at least one or several cubic meshes on said display (101), wherein in particular specific colors represent defined tissue structures and/or anatomical structures.
Item 11: the method according to any of the preceding items, wherein missing information in the current ultrasound image (401, 402), in particular queue-specific and/or statistical information about the distribution of the geometry of the vascular structure, anatomical structures of interest and/or other known anatomical structures in an object, object part or lesion, is interpolated based on a priori information about the object (110), in particular according to the segmentation.
Item 12: method according to any of the preceding items, wherein the generated 3D model (403) is calibrated with the pre-acquired 3D image (305), in particular with respect to pre-acquired or dynamically updated information content, in particular with respect to parameters such as homogeneity and/or resolution, in order to display a current level of 3D model generation progress.
Item 13: the method according to any of the preceding items, wherein the visualization of the 3D model (403) on the display (101) uses a user-defined static or dynamic color mapping, in particular indicative of the currently detected and analyzed anatomical structure.
Item 14: the method according to any of the preceding items, wherein the pre-acquired 3D image (305) is ultrasound, computed tomography or magnetic resonance imaging.
Item 15: a system for deriving a method according to any of the preceding items, comprising:
-an ultrasound probe (103) connected to a data processing system (105), which data processing system (105) comprises a control unit (107) for controlling the ultrasound probe (103), a computer (106) and a display (101) connected to the computer (106) for displaying information, and
-a tracking system (102) for tracking the spatial position of the ultrasound probe (103), the tracking system (102) comprising one or several position sensors (108) arranged on the ultrasound probe (103) or in the ultrasound probe (103) for detecting the spatial position of the ultrasound probe (103), wherein
-the data processing system (105) is designed to automatically check whether a current ultrasound image (401, 402) of the object (110) acquired using the ultrasound probe (103) has at least one pixel in a pre-selected volume of interest (301) of a pre-acquired 3D image of the object (110), wherein in case the current image (401, 402) has no pixels in the volume of interest (301), the data processing system (105) is designed to discard the current image (401, 402), wherein otherwise the data processing system (105) is designed to automatically segment the current ultrasound image (401, 402) and combine it into the 3D model (403), and wherein the data processing system (105) is designed to determine a quality measure for the 3D model (403) to be generated, in particular from the ultrasound images (401, 402) acquired using the ultrasound probe (103), wherein the data processing system (105) is designed to end the acquisition of ultrasound images for the 3D model once said quality measure has reached a predefined level, wherein in particular said quality measure is at least one of: the number of individual ultrasound images (401, 402) scanned within the volume of interest (301), the density of ultrasound images (401, 402) acquired within the volume of interest (301), the distribution and/or number of specific image features, in particular the number of anatomical structures segmented in the volume of interest (301), and the time required for acquisition of the ultrasound images (401, 402).
Reference documents:
[1] R. San José-Estépar, M. Martín-Fernández, P.P. Caballero-Martínez, C. Alberola-López and J. Ruiz-Alzola, "A theoretical framework to three-dimensional ultrasound reconstruction from irregularly sampled data," Ultrasound in Medicine & Biology, vol. 29, no. 2, pp. 255-269, Feb. 2003.
[2] T.C. Poon and R.N. Rohling, "Three-dimensional extended field-of-view ultrasound," Ultrasound in Medicine & Biology, vol. 32, no. 3, pp. 357-369, Mar. 2006.
[3] C. Yao, J.M. Simpson, T. Schaeffter and G.P. Penney, "Spatial compounding of large numbers of multi-view 3D echocardiography images using feature consistency," 2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 968-971, 2010.
[4] A. Gee, R. Prager, G. Treece and L. Berman, "Engineering a freehand 3D ultrasound system," Pattern Recognition Letters, vol. 24, no. 4-5, pp. 757-777, Feb. 2003.
[5] P. Toonkum, N.C. Suwanwela and C. Chinrungrueng, "Reconstruction of 3D ultrasound images based on cyclic regularized Savitzky-Golay filters," Ultrasonics, vol. 51, no. 2, pp. 136-147, Feb. 2011.
[6] C. Laura, K. Drechsler, M. Erdt, M. Keil, M. Noll, S. De Beni, G. Sakas and L. Solbiati, "for Liver Tumor Ablation," pp. 133-140, 2012.
[7] R. Rohling, A. Gee and L. Berman, "A comparison of freehand three-dimensional ultrasound reconstruction techniques," Medical Image Analysis, vol. 3, no. 4, pp. 339-359, Dec. 1999.
[8] P. Hellier, N. Azzabou and C. Barillot, "3D freehand ultrasound reconstruction," pp. 597-604, 2005.
[9] C.-H. Lin, C.-M. Weng and Y.-N. Sun, "Ultrasound image compounding based on motion compensation," Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 6, pp. 6445-6448, Jan. 2005.
[10] "Three-dimensional ultrasound imaging," Annual Review of Biomedical Engineering, vol. 2, pp. 457-475, Jan. 2000.
[11] L. Mercier, T. Langø, F. Lindseth and D.L. Collins, "A review of calibration techniques for freehand 3-D ultrasound systems," Ultrasound in Medicine & Biology, vol. 31, no. 2, pp. 143-165, Feb. 2005.
[12] O.V. Solberg, F. Lindseth, H. Torp, R.E. Blake and T.A. Nagelhus Hernes, "Freehand 3D ultrasound reconstruction algorithms - a review," Ultrasound in Medicine & Biology, vol. 33, no. 7, pp. 991-1009, Jul. 2007.
[13] S. Meairs, J. Beyer and M. Hennerici, "Reconstruction and visualization of irregularly sampled three- and four-dimensional ultrasound data for cerebrovascular applications," Ultrasound in Medicine & Biology, vol. 26, no. 2, pp. 263-272, 2000.
[14] C. Laura, K. Drechsler, M. Erdt, M. Keil, M. Noll, S. De Beni, G. Sakas and L. Solbiati, "for Liver Tumor Ablation," pp. 133-140, 2012.
[15] "Image-guided liver surgery," International Journal of Computer Assisted Radiology and Surgery, vol. 7, no. S1, pp. 141-145, May 2012.
[16] J.A. Noble and D. Boukerroui, "Ultrasound image segmentation: a survey," IEEE Transactions on Medical Imaging, vol. 25, no. 8, pp. 987-1010, Aug. 2006.

Claims (34)

1. A method for 3D ultrasound image acquisition and registration of a 3D model to a pre-acquired 3D image, comprising the steps of:
-providing a pre-acquired 3D image (305) of an object (110),
-displaying the pre-acquired 3D image (305) on a display (101),
-selecting a volume of interest (301) of an object (110) in the pre-acquired 3D image (305),
-when the spatial position of the volume of interest (301) is selected or adjusted as desired: triggering acquisition of an ultrasound image (401, 402) in the volume of interest (301) for generating a 3D model (403) of the object (110) in the volume of interest (301), and
-acquiring a plurality of ultrasound images (401, 402) by an ultrasound probe (103) in a volume of interest (301) for generating the 3D model (403) while moving the ultrasound probe (103) along the volume of interest (301) for the object (110), and
-checking whether the current ultrasound image (401, 402) has at least one pixel in the volume of interest (301), wherein in case the current image (401, 402) has no pixels in the volume of interest (301), the current image (401, 402) is discarded, wherein otherwise the current ultrasound image (401, 402) is segmented and combined to the 3D model (403) to be generated, and
-determining a quality measure for a 3D model (403) to be generated from the acquisition of the ultrasound images (401, 402), wherein the acquisition of the ultrasound images (401, 402) is ended when the quality measure is met or has reached a predefined level,
-registering the generated 3D model (403) to the pre-acquired 3D image (305);
performing an initial registration for correctly displaying on the display (101) the position of the 3D model (403), of the acquired current ultrasound image (401, 402) and/or of the volume of interest (301) relative to the pre-acquired 3D image (305), wherein the initial registration comprises the steps of: selecting a point in a coordinate system of the pre-acquired image (305), calculating the desired ultrasound image at this location, acquiring a corresponding ultrasound image (401, 402) of the object (110) with the ultrasound probe (103) tracked in a spatially fixed or patient-fixed coordinate system of the ultrasound probe (103), and determining a registration transformation between said coordinate systems using the desired ultrasound image and the acquired ultrasound image (401, 402).
2. The method according to claim 1, wherein said provided pre-acquired 3D image is acquired in a first step, and wherein said plurality of ultrasound images is acquired in a separate second step performed at a later time.
3. The method according to claim 1 or 2, wherein said provided pre-acquired 3D image is acquired by using an imaging method other than ultrasound.
4. The method according to claim 1, wherein the quality measure is based on criteria of patient-specific data from the pre-acquired 3D image.
5. The method of claim 1, wherein the quality measure is at least one of:
-the number of individual ultrasound images (401, 402) scanned within the volume of interest (301),
-a density of acquired ultrasound images (401, 402) within a volume of interest (301),
-the number and/or distribution of image features,
-the time required to scan the ultrasound images (401, 402).
6. The method according to claim 5, wherein the number and/or distribution is selected according to a patient-specific anatomy in the volume of interest (301).
7. The method according to claim 1, wherein a user acquiring the plurality of ultrasound images is guided to move the ultrasound probe (103) to a position of a desired image feature based on the pre-acquired 3D image in order to provide a sufficient data set to register the generated 3D model to the pre-acquired 3D image (305).
8. The method according to claim 1, characterized in that the generated 3D model (403) is registered to the pre-acquired 3D image (305) by matching at least one feature of the generated 3D model (403) with a corresponding feature of the pre-acquired 3D image (305) and by determining a registration transformation between the coordinate system of the pre-acquired 3D image (305) and the spatially fixed or patient-fixed coordinate system of the ultrasound probe (103) by using the coordinates of the at least one feature in the spatially fixed or patient-fixed coordinate system and the coordinates of the corresponding feature in the coordinate system of the pre-acquired 3D image (305), wherein the coordinates of the at least one feature of the 3D model in the spatially fixed or patient-fixed system are acquired with the help of the tracked ultrasound probe (103).
9. The method according to claim 1, wherein said triggering is performed by said ultrasound probe (103).
10. The method according to claim 1, wherein the current image (401, 402) is displayed in real time on said display (101), wherein the current image is displayed two-dimensionally (402) and/or three-dimensionally (401) on said display, wherein the three-dimensionally displayed current ultrasound image is overlaid on the displayed pre-acquired 3D image (305) or the content of the pre-acquired 3D image is overlaid on the current two-dimensional image.
11. The method according to claim 1, wherein the features of the 3D model used or to be used for registering the 3D model (403) to the pre-acquired 3D image (305) are displayed in real time on the display (101).
12. The method according to claim 1, wherein the 3D model is displayed in real time on the display (101) and overlaid on the displayed pre-acquired 3D image (305), wherein the 3D model (403) displayed on the display (101) is updated in case a new current ultrasound image (401, 402) is combined to the 3D model (403).
13. The method according to claim 1, characterized in that artifact detection is performed for a current ultrasound image (401, 402) that is not discarded, wherein the current ultrasound image is discarded in case an artifact is detected in the current ultrasound image, and wherein the artifact probability is calculated based on patient-specific features of the pre-acquired 3D image.
14. A method according to claim 1, characterized in that said segmentation of the single current ultrasound image (401, 402) is performed using a probabilistic assessment of image features, namely organ borders, organ parenchyma and/or vascular systems, wherein said probabilistic assessment uses patient-specific features of the pre-acquired 3D image.
15. The method according to claim 13, characterized in that said artifact detection and said segmentation are performed in parallel, wherein said artifact detection uses the content of the current ultrasound image (401, 402) directly or the detected content of the current ultrasound image, and wherein the respective algorithms repeatedly interact with each other.
16. Method according to claim 1, characterized in that guiding information is displayed on the display (101) and/or provided to the user (100) by sound for assisting and/or guiding the user (100) with respect to positioning and/or moving the ultrasound probe (103), wherein the guiding information is provided by feedback based on the pre-acquired 3D images and acquired features of the 3D model.
17. A method according to claim 1, characterized in that the ultrasound probe (103) is tracked by taking absolute spatial image coordinates using a coordinate measurement system based on optical, electromagnetic or mechanical measurement principles and/or by taking relative image coordinates through analysis of the relative displacement of image features in subsequent images.
18. The method according to claim 1, wherein after selecting a volume of interest (301) of the object (110) in the pre-acquired 3D image (305), the spatial position of the volume of interest (301) relative to the pre-acquired 3D image (305) is adjusted by correspondingly positioning an ultrasound probe (103) relative to the object (110).
19. The method according to claim 1, wherein the current spatial position of the volume of interest (301) relative to the pre-acquired 3D image (305) is visualized on the display (101).
20. The method according to claim 19, wherein the visualization of the volume of interest (301) is superimposed on the displayed pre-acquired 3D image (305).
21. The method according to claim 19 or 20, wherein the visualization of the volume of interest (301) on the display (101) is updated using the current spatial position of the ultrasound probe (103), the current spatial position of the ultrasound probe (103) being determined using a tracking system (102).
22. The method according to claim 16, characterized in that the guiding information comprises a virtual visualization of at least one or several cubic meshes on the display (101), which meshes are displayed on the pre-acquired 3D image, wherein specific colors represent defined tissue structures and/or anatomical structures.
23. A method according to claim 1, characterized in that after said segmentation, missing information in the current ultrasound image (401, 402) is interpolated using a priori information about the object (110) or patient-specific features from the pre-acquired 3D image.
24. The method according to claim 1, wherein after said segmentation, cohort-specific and/or statistical information about the distribution of the geometries of vascular structures, anatomical structures of interest in the object or lesion and/or other known anatomical structures is used to interpolate missing information in the current ultrasound image (401, 402).
25. Method according to claim 1, characterized in that the generated 3D model (403) is compared with the pre-acquired 3D image (305), which is based on an imaging method other than ultrasound and on a different coordinate system than the 3D model, with respect to the pre-acquired or dynamically updated information content, in particular parameters such as homogeneity and/or resolution, in order to display the current level of progress of the 3D model generation.
26. A method according to claim 1, characterized in that the visualization of the 3D model (403) on the display (101) uses static or dynamic color mapping, indicating the currently detected and analyzed anatomical structures or indicating empirical features and information gaps in specific regions of the 3D image.
27. The method according to claim 1, characterized in that the pre-acquired 3D image (305) is a computed tomography or magnetic resonance image.
28. The method according to claim 5, characterized in that the number and/or distribution of image features is the number of segmented anatomical structures in the volume of interest (301).
29. The method according to claim 8, characterized in that the pre-acquired 3D image (305) is a preoperatively acquired 3D image.
30. The method according to claim 9, characterized in that the triggering is performed by the ultrasound probe (103) by using a defined gesture or a specific movement of the ultrasound probe (103) on the surface of the object (110).
31. A method according to claim 16, characterized in that the guiding information provided to the user (100) by sound is provided by speech.
32. The method of claim 19, wherein the visualization is a real-time visualization.
33. A system for performing the method of any of the preceding claims, comprising
-an ultrasound probe (103) connected to a data processing system (105), the data processing system (105) comprising a control unit (107) for controlling the ultrasound probe (103), a computer (106) and a display (101) connected to the computer (106) for displaying information, and
-a tracking system (102) for tracking the spatial position of the ultrasound probe (103), the tracking system (102) comprising one or several position sensors (108) arranged on the ultrasound probe (103) or in the ultrasound probe (103) for detecting the spatial position of the ultrasound probe (103), wherein
-the data processing system (105) is designed to automatically check whether a current ultrasound image (401, 402) of the object (110) acquired using the ultrasound probe (103) has at least one pixel in a pre-selected volume of interest (301) of a pre-acquired 3D image of the object (110), wherein in case the current image (401, 402) has no pixels in the volume of interest (301), the data processing system (105) is designed to discard the current image (401, 402), wherein otherwise the data processing system (105) is designed to automatically segment the current ultrasound image (401, 402) and combine it to the 3D model (403), and wherein the data processing system (105) is designed to determine a quality measure of the 3D model (403) to be generated, the quality measure of the 3D model (403) to be generated being determined from the acquisition of the ultrasound image (401, 402) using the ultrasound probe (103), wherein the data processing system (105) is designed to end the acquisition of ultrasound images of the 3D model when the quality measure reaches a predefined level or a dynamically defined level, wherein the quality measure is at least one of: the number of individual ultrasound images (401, 402) scanned within the volume of interest (301), the density of ultrasound images (401, 402) acquired within the volume of interest (301), the distribution and/or number of specific image features, and the time required for acquisition of the ultrasound images (401, 402).
34. The system of claim 33, wherein the distribution and/or number of specific image features is a number of anatomical structures segmented in the volume of interest (301) or a patient-specific number of desired features.
CN201480042479.3A 2013-05-28 2014-05-28 Method and system for 3D acquisition of ultrasound images Active CN105407811B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP20130169579 EP2807978A1 (en) 2013-05-28 2013-05-28 Method and system for 3D acquisition of ultrasound images
EP13169579.3 2013-05-28
PCT/EP2014/061106 WO2014191479A1 (en) 2013-05-28 2014-05-28 Method and system for 3d acquisition of ultrasound images

Publications (2)

Publication Number Publication Date
CN105407811A CN105407811A (en) 2016-03-16
CN105407811B true CN105407811B (en) 2020-01-10

Family

ID=48577505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201480042479.3A Active CN105407811B (en) 2013-05-28 2014-05-28 Method and system for 3D acquisition of ultrasound images

Country Status (5)

Country Link
US (1) US20160113632A1 (en)
EP (2) EP2807978A1 (en)
JP (1) JP6453857B2 (en)
CN (1) CN105407811B (en)
WO (1) WO2014191479A1 (en)

Families Citing this family (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106030657B (en) * 2014-02-19 2019-06-28 皇家飞利浦有限公司 Motion Adaptive visualization in medicine 4D imaging
US20160367216A1 (en) * 2014-02-28 2016-12-22 Koninklijke Philips N.V. Zone visualization for ultrasound-guided procedures
US20150327841A1 (en) * 2014-05-13 2015-11-19 Kabushiki Kaisha Toshiba Tracking in ultrasound for imaging and user interface
WO2015173668A1 (en) * 2014-05-16 2015-11-19 Koninklijke Philips N.V. Reconstruction-free automatic multi-modality ultrasound registration.
EP3157436B1 (en) * 2014-06-18 2021-04-21 Koninklijke Philips N.V. Ultrasound imaging apparatus
EP3190975B1 (en) 2014-08-05 2021-01-06 Habico, Inc. Device, system, and method for hemispheric breast imaging
KR20160046670A (en) * 2014-10-21 2016-04-29 삼성전자주식회사 Apparatus and Method for supporting image diagnosis
WO2016092408A1 (en) * 2014-12-09 2016-06-16 Koninklijke Philips N.V. Feedback for multi-modality auto-registration
US10828014B2 (en) * 2015-03-31 2020-11-10 Koninklijke Philips N.V. Medical imaging apparatus
US10335115B2 (en) 2015-09-03 2019-07-02 Siemens Healthcare Gmbh Multi-view, multi-source registration of moving anatomies and devices
US11045170B2 (en) * 2015-10-28 2021-06-29 General Electric Company Method and system for acquisition, enhanced visualization, and selection of a representative plane of a thin slice ultrasound image volume
WO2017108667A1 (en) * 2015-12-21 2017-06-29 Koninklijke Philips N.V. Ultrasound imaging apparatus and ultrasound imaging method for inspecting a volume of subject
PL3449838T3 (en) * 2016-04-26 2024-03-04 Telefield Medical Imaging Limited Imaging method and device
JP6689666B2 (en) * 2016-05-12 2020-04-28 株式会社日立製作所 Ultrasonic imaging device
US10905402B2 (en) 2016-07-27 2021-02-02 Canon Medical Systems Corporation Diagnostic guidance systems and methods
US10403053B2 (en) * 2016-11-15 2019-09-03 Biosense Webster (Israel) Ltd. Marking sparse areas on maps
WO2018099810A1 (en) * 2016-11-29 2018-06-07 Koninklijke Philips N.V. Ultrasound imaging system and method
FR3059541B1 (en) * 2016-12-07 2021-05-07 Bay Labs Inc GUIDED NAVIGATION OF AN ULTRASONIC PROBE
EP3558151B1 (en) * 2016-12-20 2023-07-05 Koninklijke Philips N.V. Navigation platform for an intracardiac catheter
EP3574504A1 (en) 2017-01-24 2019-12-04 Tietronix Software, Inc. System and method for three-dimensional augmented reality guidance for use of medical equipment
EP3422048A1 (en) * 2017-06-26 2019-01-02 Koninklijke Philips N.V. Ultrasound imaging method and system
US10695132B2 (en) 2017-07-07 2020-06-30 Canon U.S.A., Inc. Multiple probe ablation planning
CA3075334C (en) * 2017-09-07 2021-08-31 Piur Imaging Gmbh Apparatus and method for determining motion of an ultrasound probe
WO2019072827A1 (en) * 2017-10-11 2019-04-18 Koninklijke Philips N.V. Intelligent ultrasound-based fertility monitoring
CN107854177A (en) * 2017-11-18 2018-03-30 上海交通大学医学院附属第九人民医院 A kind of ultrasound and CT/MR image co-registrations operation guiding system and its method based on optical alignment registration
US20190167231A1 (en) * 2017-12-01 2019-06-06 Sonocine, Inc. System and method for ultrasonic tissue screening
US20190246946A1 (en) * 2018-02-15 2019-08-15 Covidien Lp 3d reconstruction and guidance based on combined endobronchial ultrasound and magnetic tracking
EP3549529A1 (en) * 2018-04-05 2019-10-09 Koninklijke Philips N.V. Ultrasound imaging system and method
EP3785640A4 (en) * 2018-04-27 2021-04-07 FUJIFILM Corporation Ultrasound system and ultrasound system control method
US10685439B2 (en) 2018-06-27 2020-06-16 General Electric Company Imaging system and method providing scalable resolution in multi-dimensional image data
CN116777858A (en) * 2018-08-24 2023-09-19 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic image processing apparatus and method, and computer-readable storage medium
CN108986902A (en) * 2018-08-28 2018-12-11 飞依诺科技(苏州)有限公司 Checking method, device and the storage medium of four-dimensional scanning equipment
WO2020079077A1 (en) * 2018-10-16 2020-04-23 Koninklijke Philips N.V. Deep learning-based ultrasound imaging guidance and associated devices, systems, and methods
US20220015730A1 (en) * 2018-11-28 2022-01-20 Koninklijke Philips N.V. Most relevant x-ray image selection for hemodynamic simulation
CN111281424A (en) * 2018-12-07 2020-06-16 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging range adjusting method and related equipment
US20200245970A1 (en) * 2019-01-31 2020-08-06 Bay Labs, Inc. Prescriptive guidance for ultrasound diagnostics
EP3711677A1 (en) 2019-03-18 2020-09-23 Koninklijke Philips N.V. Methods and systems for acquiring composite 3d ultrasound images
DE102019203192A1 (en) * 2019-03-08 2020-09-10 Siemens Healthcare Gmbh Generation of a digital twin for medical examinations
WO2020242949A1 (en) * 2019-05-28 2020-12-03 Google Llc Systems and methods for video-based positioning and navigation in gastroenterological procedures
WO2020239979A1 (en) * 2019-05-31 2020-12-03 Koninklijke Philips N.V. Methods and systems for guiding the acquisition of cranial ultrasound data
US11844654B2 (en) 2019-08-19 2023-12-19 Caption Health, Inc. Mid-procedure view change for ultrasound diagnostics
JP7362354B2 (en) * 2019-08-26 2023-10-17 キヤノン株式会社 Information processing device, inspection system and information processing method
US11647982B2 (en) * 2019-09-18 2023-05-16 International Business Machines Corporation Instrument utilization management
CN111449684B (en) * 2020-04-09 2023-05-05 济南康硕生物技术有限公司 Method and system for rapidly acquiring standard scanning section of heart ultrasound
CN111445769B (en) * 2020-05-14 2022-04-19 上海深至信息科技有限公司 Ultrasonic teaching system based on small program
CN112155596B (en) * 2020-10-10 2023-04-07 达闼机器人股份有限公司 Ultrasonic diagnostic apparatus, method of generating ultrasonic image, and storage medium
EP4271277A2 (en) * 2020-12-30 2023-11-08 Koninklijke Philips N.V. Ultrasound imaging system, method and a non-transitory computer-readable medium
CN117529273A (en) * 2021-04-13 2024-02-06 舍巴影响有限公司 System and method for reconstructing 3D images from ultrasound images and camera images
EP4094695A1 (en) 2021-05-28 2022-11-30 Koninklijke Philips N.V. Ultrasound imaging system
CN113217345B (en) * 2021-06-17 2023-02-03 中船重工鹏力(南京)智能装备***有限公司 Automatic detection system and method for compressor oil injection pipe based on 3D vision technology
CN113499099A (en) * 2021-07-21 2021-10-15 上海市同仁医院 Carotid artery ultrasonic automatic scanning and plaque identification system and method
CN115592789B (en) * 2022-11-24 2023-03-17 深圳市星耀福实业有限公司 ALC plate static temperature control method, device and system
CN116531089B (en) * 2023-07-06 2023-10-20 中国人民解放军中部战区总医院 Image-enhancement-based blocking anesthesia ultrasonic guidance data processing method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101474083A (en) * 2009-01-15 2009-07-08 西安交通大学 System and method for super-resolution imaging and multi-parameter detection of vascular mechanical characteristic
CN102300505A (en) * 2009-06-30 2011-12-28 株式会社东芝 Ultrasonic diagnostic device and control program for displaying image data
WO2012073164A1 (en) * 2010-12-03 2012-06-07 Koninklijke Philips Electronics N.V. Device and method for ultrasound imaging
CN102982314A (en) * 2012-11-05 2013-03-20 深圳市恩普电子技术有限公司 Method of identifying, tracing and measuring external and internal membranes of vessel

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5645066A (en) 1996-04-26 1997-07-08 Advanced Technology Laboratories, Inc. Medical ultrasonic diagnostic imaging system with scanning guide for three dimensional imaging
US6012458A (en) 1998-03-20 2000-01-11 Mo; Larry Y. L. Method and apparatus for tracking scan plane motion in free-hand three-dimensional ultrasound scanning using adaptive speckle correlation
US7672491B2 (en) * 2004-03-23 2010-03-02 Siemens Medical Solutions Usa, Inc. Systems and methods providing automated decision support and medical imaging
JP2006246974A (en) * 2005-03-08 2006-09-21 Hitachi Medical Corp Ultrasonic diagnostic equipment with reference image display function
JP4699062B2 (en) * 2005-03-29 2011-06-08 株式会社日立メディコ Ultrasonic device
JP2008534159A (en) * 2005-04-01 2008-08-28 ビジュアルソニックス インコーポレイテッド System and method for 3D visualization of interstitial structures using ultrasound
US20070016016A1 (en) 2005-05-31 2007-01-18 Gabriel Haras Interactive user assistant for imaging processes
US7831076B2 (en) 2006-12-08 2010-11-09 Biosense Webster, Inc. Coloring electroanatomical maps to indicate ultrasound data acquisition
US7925068B2 (en) 2007-02-01 2011-04-12 General Electric Company Method and apparatus for forming a guide image for an ultrasound image scanner
JP5394622B2 (en) 2007-07-31 2014-01-22 オリンパスメディカルシステムズ株式会社 Medical guide system
JP2009247739A (en) * 2008-04-09 2009-10-29 Toshiba Corp Medical image processing and displaying device, computer processing program thereof, and ultrasonic diagnosing equipment
US8355554B2 (en) * 2009-04-14 2013-01-15 Sonosite, Inc. Systems and methods for adaptive volume imaging
US9895135B2 (en) * 2009-05-20 2018-02-20 Analogic Canada Corporation Freehand ultrasound imaging systems and methods providing position quality feedback
US8900146B2 (en) * 2009-07-27 2014-12-02 The Hong Kong Polytechnic University Three-dimensional (3D) ultrasound imaging system for assessing scoliosis
US20120065510A1 (en) 2010-09-09 2012-03-15 General Electric Company Ultrasound system and method for calculating quality-of-fit
US20120108965A1 (en) 2010-10-27 2012-05-03 Siemens Medical Solutions Usa, Inc. Facilitating Desired Transducer Manipulation for Medical Diagnostics and Compensating for Undesired Motion
US8744211B2 (en) * 2011-08-31 2014-06-03 Analogic Corporation Multi-modality image acquisition
JP2014528347A (en) * 2011-10-10 2014-10-27 トラクトゥス・コーポレーション Method, apparatus and system for fully examining tissue using a handheld imaging device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101474083A (en) * 2009-01-15 2009-07-08 西安交通大学 System and method for super-resolution imaging and multi-parameter detection of vascular mechanical characteristic
CN102300505A (en) * 2009-06-30 2011-12-28 株式会社东芝 Ultrasonic diagnostic device and control program for displaying image data
WO2012073164A1 (en) * 2010-12-03 2012-06-07 Koninklijke Philips Electronics N.V. Device and method for ultrasound imaging
CN102982314A (en) * 2012-11-05 2013-03-20 深圳市恩普电子技术有限公司 Method of identifying, tracing and measuring external and internal membranes of vessel

Also Published As

Publication number Publication date
EP2807978A1 (en) 2014-12-03
CN105407811A (en) 2016-03-16
JP6453857B2 (en) 2019-01-16
WO2014191479A1 (en) 2014-12-04
EP3003161A1 (en) 2016-04-13
EP3003161B1 (en) 2022-01-12
JP2016522725A (en) 2016-08-04
US20160113632A1 (en) 2016-04-28

Similar Documents

Publication Publication Date Title
CN105407811B (en) Method and system for 3D acquisition of ultrasound images
US10515452B2 (en) System for monitoring lesion size trends and methods of operation thereof
JP7407790B2 (en) Ultrasound system with artificial neural network for guided liver imaging
KR102269467B1 (en) Measurement point determination in medical diagnostic imaging
JP5530592B2 (en) Storage method of imaging parameters
US10912536B2 (en) Ultrasound system and method
US10251627B2 (en) Elastography measurement system and method
JP7277967B2 (en) 3D imaging and modeling of ultrasound image data
CN109069131A (en) Ultrasonic system and method for breast tissue imaging
JP6873647B2 (en) Ultrasonic diagnostic equipment and ultrasonic diagnostic support program
EP3193727A1 (en) Ultrasound imaging apparatus
US20180089845A1 (en) Method and apparatus for image registration
CN109310399A (en) Medical Ultrasound Image Processing equipment
JP6833533B2 (en) Ultrasonic diagnostic equipment and ultrasonic diagnostic support program
CN106030657B (en) Motion Adaptive visualization in medicine 4D imaging
KR102643899B1 (en) Abdominal aortic aneurysm quantitative analysis system and method using 3D ultrasound images
EP3105741B1 (en) Systems for monitoring lesion size trends and methods of operation thereof
CN112545551A (en) Method and system for medical imaging device
JP2011530366A (en) Ultrasound imaging
US20200305837A1 (en) System and method for guided ultrasound imaging
JP5487339B2 (en) Medical image processing device
JP2021531122A (en) Ultrasound Systems and Methods for Induced Shear Wave Elastography of Anisotropic Tissues
JP2014036885A (en) Diagnostic imaging apparatus and diagnostic imaging method
Bravo et al. 3D ultrasound in cardiology

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant