US20150094584A1 - Ultrasound diagnosis apparatus and image processing apparatus - Google Patents

Ultrasound diagnosis apparatus and image processing apparatus

Info

Publication number
US20150094584A1
US20150094584A1 (application US14/490,957 / US201414490957A)
Authority
US
United States
Prior art keywords
information
movement
interest
myocardium
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/490,957
Other languages
English (en)
Inventor
Yasuhiko Abe
Tetsuya Kawagishi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Medical Systems Corp
Original Assignee
Toshiba Corp
Toshiba Medical Systems Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp and Toshiba Medical Systems Corp
Assigned to KABUSHIKI KAISHA TOSHIBA, TOSHIBA MEDICAL SYSTEMS CORPORATION reassignment KABUSHIKI KAISHA TOSHIBA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ABE, YASUHIKO, KAWAGISHI, TETSUYA
Publication of US20150094584A1
Assigned to TOSHIBA MEDICAL SYSTEMS CORPORATION reassignment TOSHIBA MEDICAL SYSTEMS CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KABUSHIKI KAISHA TOSHIBA
Priority claimed by US15/803,261 (now US11317896B2)
Assigned to CANON MEDICAL SYSTEMS CORPORATION reassignment CANON MEDICAL SYSTEMS CORPORATION CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: TOSHIBA MEDICAL SYSTEMS CORPORATION

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B 8/5223: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08: Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0883: Detecting organic movements or changes, e.g. tumours, cysts, swellings for diagnosis of the heart
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/48: Diagnostic techniques
    • A61B 8/483: Diagnostic techniques involving the acquisition of a 3D volume of data
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/54: Control of the diagnostic device
    • A61B 8/543: Control of the diagnostic device involving acquisition triggered by a physiological signal
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/215: Motion-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/46: Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
    • A61B 8/467: Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means
    • A61B 8/469: Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means for selection of a region of interest
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5207: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10132: Ultrasound image
    • G06T 2207/10136: 3D ultrasound image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30048: Heart; Cardiac

Definitions

  • Embodiments described herein relate generally to an ultrasound diagnosis apparatus and an image processing apparatus.
  • Regarding the layer structure of the myocardium and the fiber directions in the myocardium, it is generally known from anatomical observations that, at the middle (“mid”) level of the left ventricle, the myocardial fiber direction in the epicardium is a left-hand oblique direction, whereas the myocardial fiber direction in the endocardium is a right-hand oblique direction, and the myocardial fiber direction in the intermediate layer is an annular direction. Further, by implementing methods for analyzing myocardial fiber directions in a non-invasive manner, it is possible to obtain results equivalent to the anatomical observations, as shown by research employing Magnetic Resonance Imaging (MRI) apparatuses.
  • Examples of such methods include a method by which myocardial fiber directions are estimated by using a finite element method on the basis of a local myocardial movement obtained by implementing a tagging MRI method.
  • Another method used in recent years is called “diffusion MRI”, by which myocardial fiber directions are estimated by numerically solving a differential equation related to the spatial diffusion in a distribution pattern of water molecules.
  • Meanwhile, ultrasound diagnosis apparatuses, which are a non-invasive means, have been used to analyze strains in the myocardium along “a radial (wall-thickness) direction, a longitudinal (long-axis) direction, and a circumferential direction” defined by three mutually orthogonal axes. More specifically, the analysis of the myocardial strains in the three directions is performed by applying a two- or three-dimensional speckle tracking technique to two- or three-dimensional moving images taken by an ultrasound diagnosis apparatus. The analysis results are used in the clinical field and in applied research.
  • While analyses using MRI apparatuses are non-invasive and applicable to the clinical field, MRI apparatuses are expensive and large in size. For this reason, it is difficult to make analyses using MRI apparatuses widespread in the clinical field. Further, the research using MRI apparatuses described above requires labor and computation time to perform the analyses. In addition, MRI apparatuses have restrictions on time resolution. Thus, MRI apparatuses are not as suitable as ultrasound diagnosis apparatuses for observing the dynamics of the heart.
  • However, ultrasound diagnosis apparatuses are not currently used to analyze the “fiber strain” in a non-invasive manner. Further, as for local wall-movement information (e.g., strains), ultrasound diagnosis apparatuses provide only the movement components in the specific directions (the radial (wall-thickness) direction, the longitudinal direction, and the circumferential direction) determined by the structure and the shape of the heart.
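  • The conventional strain measures mentioned above reduce to a relative length change of a tracked segment. The following is a minimal illustrative sketch, not the apparatus's actual computation:

```python
def lagrangian_strain(l0, l):
    """Lagrangian strain (%) of a tracked myocardial segment: relative
    length change with respect to the reference (end-diastolic) length l0."""
    if l0 == 0:
        raise ValueError("reference length must be non-zero")
    return 100.0 * (l - l0) / l0

# A segment shortening from 10 mm to 8 mm during systole has -20% strain
# (the typical sign for longitudinal and circumferential shortening);
# systolic wall thickening yields a positive radial strain.
```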
  • FIG. 1 is a block diagram of an exemplary configuration of an ultrasound diagnosis apparatus according to a first embodiment
  • FIG. 2 and FIG. 3 are drawings for explaining the calculating unit
  • FIG. 4 , FIG. 5 , and FIG. 6 are drawings for explaining the pre-processing process performed by the obtaining unit
  • FIG. 7 , FIG. 8 , FIG. 9 , FIG. 10 , and FIG. 11 are drawings for explaining the output mode implemented in the first embodiment
  • FIG. 12 is a flowchart for explaining an outline of a process performed by the ultrasound diagnosis apparatus according to the first embodiment
  • FIG. 13 is a drawing for explaining a second embodiment
  • FIG. 14 is a drawing for explaining a third embodiment.
  • FIG. 15 is a drawing for explaining a fourth embodiment.
  • An ultrasound diagnosis apparatus includes a calculating unit, an obtaining unit, a determining unit, and a controlling unit.
  • the calculating unit calculates first movement information indicating a movement of the myocardium by tracking a movement of a region of interest that corresponds to the myocardium and that is set in each of the plurality of pieces of three-dimensional image data.
  • the obtaining unit obtains direction information indicating a direction of a myocardial fiber in the myocardium.
  • the determining unit determines second movement information indicating a movement of the myocardium with respect to the direction of the myocardial fiber, on the basis of the first movement information and the direction information.
  • the controlling unit causes a display unit to display the second movement information.
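  • The dataflow among the four units can be sketched as follows; the function names and the stubbed values are hypothetical, chosen only to show how the first movement information and the direction information combine into the second movement information:

```python
def calculate_first_movement(volumes, roi_points):
    """Calculating unit (stub): track each region-of-interest point across
    the time-series volumes; here it returns fixed 3-D displacements."""
    return [[0.4, -0.2, 0.1] for _ in roi_points]

def obtain_fiber_directions(first_movement):
    """Obtaining unit (stub): estimate one fiber direction per point."""
    return [[1.0, 0.0, 0.0] for _ in first_movement]

def determine_second_movement(first_movement, directions):
    """Determining unit: movement component along each fiber direction
    (dot product of displacement and unit fiber direction)."""
    return [sum(d * u for d, u in zip(disp, fiber))
            for disp, fiber in zip(first_movement, directions)]

def display(second_movement):
    """Controlling unit: format the per-point values for the display stage."""
    return ["point %d: %.2f" % (i, v) for i, v in enumerate(second_movement)]
```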
  • FIG. 1 is a block diagram of an exemplary configuration of an ultrasound diagnosis apparatus according to the first embodiment.
  • the ultrasound diagnosis apparatus according to the first embodiment includes an ultrasound probe 1 , a monitor 2 , an input device 3 , an electrocardiograph 4 , and an apparatus main body 10 .
  • the ultrasound probe 1 includes a plurality of transducer elements (e.g., piezoelectric transducer elements), which generate an ultrasound wave on the basis of a drive signal supplied from a transmitting and receiving unit 11 included in the apparatus main body 10 (explained later). Further, the ultrasound probe 1 receives a reflected wave from a subject P and converts the received reflected wave into an electric signal. Further, the ultrasound probe 1 includes matching layers provided for the transducer elements, as well as a backing member that prevents ultrasound waves from propagating rearward from the transducer elements. The ultrasound probe 1 is detachably connected to the apparatus main body 10 .
  • When an ultrasound wave is transmitted from the ultrasound probe 1 to the subject P, the transmitted ultrasound wave is repeatedly reflected on a surface of discontinuity of acoustic impedances at a tissue in the body of the subject P and is received as a reflected-wave signal by the plurality of piezoelectric transducer elements included in the ultrasound probe 1 .
  • the amplitude of the received reflected-wave signal is dependent on the difference between the acoustic impedances on the surface of discontinuity on which the ultrasound wave is reflected.
  • When the transmitted ultrasound pulse is reflected on the surface of a flowing bloodstream or a cardiac wall, the reflected-wave signal is, due to the Doppler effect, subject to a frequency shift that depends on the velocity component of the moving members with respect to the ultrasound wave transmission direction.
  • the ultrasound probe 1 connected to the apparatus main body 10 is an ultrasound probe that is capable of two-dimensionally scanning the subject P and three-dimensionally scanning the subject P, by using ultrasound waves. More specifically, the ultrasound probe 1 connected to the apparatus main body 10 may be a mechanical four-dimensional (4D) probe or a two-dimensional (2D) array probe.
  • the mechanical 4D probe is able to two-dimensionally scan the subject P by employing the plurality of piezoelectric transducer elements arranged in a row and is also able to three-dimensionally scan the subject P by causing the plurality of piezoelectric transducer elements to swing at a predetermined angle (a swinging angle).
  • the 2D array probe is able to three-dimensionally scan the subject P by employing the plurality of piezoelectric transducer elements arranged in a matrix formation.
  • the 2D array probe is also able to two-dimensionally scan the subject P by transmitting ultrasound waves in a converged manner.
  • the input device 3 includes a mouse, a keyboard, a button, a panel switch, a touch command screen, a foot switch, a trackball, a joystick, and the like.
  • the input device 3 receives various types of setting requests from an operator of the ultrasound diagnosis apparatus and transfers the received various types of setting requests to the apparatus main body 10 .
  • the monitor 2 displays a Graphical User Interface (GUI) used by the operator of the ultrasound diagnosis apparatus to input the various types of setting requests through the input device 3 , and also displays ultrasound image data and the like generated by the apparatus main body 10 .
  • the electrocardiograph 4 obtains an electrocardiogram (ECG) of the subject P, as a biological signal of the subject P who is three-dimensionally scanned.
  • the electrocardiograph 4 transmits the obtained electrocardiogram to the apparatus main body 10 .
  • the apparatus main body 10 is an apparatus that generates ultrasound image data on the basis of the reflected-wave signal received by the ultrasound probe 1 .
  • the apparatus main body 10 illustrated in FIG. 1 is an apparatus that is able to generate two-dimensional ultrasound image data on the basis of two-dimensional reflected-wave data.
  • the apparatus main body 10 illustrated in FIG. 1 is an apparatus that is able to generate three-dimensional ultrasound image data on the basis of three-dimensional reflected-wave data.
  • three-dimensional ultrasound image data may be referred to as “volume data”.
  • the apparatus main body 10 includes the transmitting and receiving unit 11 , a B-mode processing unit 12 , a Doppler processing unit 13 , an image generating unit 14 , an image memory 15 , an internal storage unit 16 , an image processing unit 17 , and a controlling unit 18 .
  • the transmitting and receiving unit 11 includes a pulse generator, a transmission delaying unit, a pulser, and the like and supplies the drive signal to the ultrasound probe 1 .
  • the pulse generator repeatedly generates a rate pulse for forming a transmission ultrasound wave at a predetermined rate frequency.
  • the transmission delaying unit applies, to each of the rate pulses generated by the pulse generator, a delay period for each of the transducer elements that is required to focus the ultrasound wave generated by the ultrasound probe 1 into the form of a beam and to determine the transmission directionality.
  • the pulser applies a drive signal (a drive pulse) to the ultrasound probe 1 with timing based on the rate pulses.
  • the transmission delaying unit arbitrarily adjusts the transmission directions of the ultrasound waves transmitted from the transducer element surfaces, by varying the delay periods applied to the rate pulses.
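  • The per-element delay computation described above can be sketched as follows, assuming a linear array and a single focal point; the element positions, focal point, and speed-of-sound constant are illustrative:

```python
import math

SPEED_OF_SOUND = 1540.0  # m/s, a typical soft-tissue value (assumption)

def focus_delays(element_xs, focus_x, focus_z):
    """Per-element transmit delays (s) that focus the beam at (focus_x, focus_z).
    Elements farther from the focus fire first; the element nearest the focus
    gets the largest delay, so all wavefronts arrive at the focus together."""
    dists = [math.hypot(x - focus_x, focus_z) for x in element_xs]
    d_max = max(dists)
    return [(d_max - d) / SPEED_OF_SOUND for d in dists]
```

Varying the focal point (and hence the delay profile) steers the transmission direction, as the bullet above describes.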
  • the transmitting and receiving unit 11 has a function to instantly change the transmission frequency, the transmission drive voltage, and the like, for the purpose of executing a predetermined scanning sequence on the basis of an instruction from the controlling unit 18 (explained later).
  • the configuration to change the transmission drive voltage is realized by using a linear-amplifier-type transmitting circuit of which the value can be instantly switched or by using a mechanism configured to electrically switch between a plurality of power source units.
  • the transmitting and receiving unit 11 includes a pre-amplifier, an Analog/Digital (A/D) converter, a reception delaying unit, an adder, and the like and generates reflected-wave data by performing various types of processes on the reflected-wave signal received by the ultrasound probe 1 .
  • the pre-amplifier amplifies the reflected-wave signal for each of the channels.
  • the A/D converter applies an A/D conversion to the amplified reflected-wave signal.
  • the reception delaying unit applies a delay period required to determine reception directionality to the result of the A/D conversion.
  • the adder performs an adding process on the reflected-wave signals (digital data) to which the delays have been applied by the reception delaying unit, so as to generate the reflected-wave data.
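  • The reception delaying unit and the adder together implement delay-and-sum beamforming. A minimal integer-sample sketch (real systems use fractional delays and apodization weights):

```python
def delay_and_sum(channel_signals, sample_delays):
    """Delay-and-sum receive beamforming (integer-sample sketch): each
    channel's echo arrives `d` samples late, so sample i+d is read to align
    the wavefronts, then the channels are summed coherently."""
    n = len(channel_signals[0])
    out = [0.0] * n
    for sig, d in zip(channel_signals, sample_delays):
        for i in range(n):
            j = i + d
            if 0 <= j < len(sig):
                out[i] += sig[j]
    return out
```

With correct delays, echoes from the chosen direction add constructively while off-axis echoes partially cancel, which is how the reception directionality is determined.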
  • When a two-dimensional scan is performed on the subject P, the transmitting and receiving unit 11 causes the ultrasound probe 1 to transmit two-dimensional ultrasound beams and then generates two-dimensional reflected-wave data from the two-dimensional reflected-wave signals received by the ultrasound probe 1 . When a three-dimensional scan is performed on the subject P, the transmitting and receiving unit 11 causes the ultrasound probe 1 to transmit three-dimensional ultrasound beams and then generates three-dimensional reflected-wave data from the three-dimensional reflected-wave signals received by the ultrasound probe 1 .
  • Output signals from the transmitting and receiving unit 11 can be in a form selected from various forms.
  • the output signals may be in the form of signals called Radio Frequency (RF) signals that contain phase information or may be in the form of amplitude information obtained after an envelope detection process.
  • the B-mode processing unit 12 receives the reflected-wave data from the transmitting and receiving unit 11 and generates data (B-mode data) in which the strength of each signal is expressed by a degree of brightness, by performing a logarithmic amplification, an envelope detection process, and the like on the received reflected-wave data.
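  • The logarithmic amplification step can be sketched as a mapping of envelope amplitudes to display brightness; the 60 dB dynamic-range value is an illustrative assumption:

```python
import math

def log_compress(envelope, dynamic_range_db=60.0):
    """Map envelope amplitudes to display brightness (0..255) on a log scale,
    as in B-mode processing: strong echoes saturate while weak ones remain
    visible within the chosen dynamic range."""
    peak = max(envelope)
    out = []
    for a in envelope:
        if a <= 0:
            out.append(0)
            continue
        db = 20.0 * math.log10(a / peak)               # 0 dB at the peak
        level = 255.0 * (db + dynamic_range_db) / dynamic_range_db
        out.append(int(round(min(max(level, 0.0), 255.0))))
    return out
```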
  • the Doppler processing unit 13 obtains velocity information from the reflected-wave data received from the transmitting and receiving unit 11 by performing a frequency analysis, extracts bloodstream, tissues, and contrast-agent echo components under the influence of the Doppler effect, and further generates data (Doppler data) obtained by extracting moving member information such as a velocity, a dispersion, a power, and the like, for a plurality of points.
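  • A common way to obtain the velocity information by frequency analysis is the Kasai autocorrelation estimator; this sketch assumes a slow-time ensemble of complex IQ samples per point (the specific estimator is an assumption, not stated here):

```python
import cmath
import math

def kasai_velocity(iq_samples, prf, f0, c=1540.0):
    """Kasai autocorrelation velocity estimate: the mean phase shift between
    successive pulses of a slow-time IQ ensemble gives the Doppler frequency,
    hence the axial velocity component toward/away from the probe."""
    acc = sum(z2 * z1.conjugate() for z1, z2 in zip(iq_samples, iq_samples[1:]))
    phase = cmath.phase(acc)                   # mean pulse-to-pulse phase shift (rad)
    f_doppler = phase * prf / (2.0 * math.pi)
    return c * f_doppler / (2.0 * f0)          # v = c * fd / (2 * f0)
```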
  • the B-mode processing unit 12 and the Doppler processing unit 13 are able to process both two-dimensional reflected-wave data and three-dimensional reflected-wave data.
  • the B-mode processing unit 12 is able to generate two-dimensional B-mode data from two-dimensional reflected-wave data and to generate three-dimensional B-mode data from three-dimensional reflected-wave data.
  • the Doppler processing unit 13 is able to generate two-dimensional Doppler data from two-dimensional reflected-wave data and to generate three-dimensional Doppler data from three-dimensional reflected-wave data.
  • the image generating unit 14 generates ultrasound image data from the data generated by the B-mode processing unit 12 and the Doppler processing unit 13 .
  • the image generating unit 14 generates two-dimensional B-mode image data in which the strength of the reflected wave is expressed by a degree of brightness.
  • the image generating unit 14 generates two-dimensional Doppler image data expressing the moving member information.
  • the two-dimensional Doppler image data is a velocity image, a dispersion image, a power image, or an image combining these images.
  • the image generating unit 14 is also capable of generating a Doppler waveform in which velocity information of bloodstream and tissues is plotted in a time series, from the Doppler data generated by the Doppler processing unit 13 .
  • the image generating unit 14 converts (by performing a scan convert process) a scanning line signal sequence from an ultrasound scan into a scanning line signal sequence in a video format used by, for example, television and generates display-purpose ultrasound image data. More specifically, the image generating unit 14 generates the display-purpose ultrasound image data by performing a coordinate transformation process compliant with the ultrasound scanning mode used by the ultrasound probe 1 . Further, as various types of image processing processes other than the scan convert process, the image generating unit 14 performs, for example, an image processing process (a smoothing process) to re-generate a brightness-average image or an image processing process (an edge enhancement process) using a differential filter within images, while using a plurality of image frames obtained after the scan convert process is performed. Further, the image generating unit 14 synthesizes text information of various parameters, scale graduations, body marks, and the like with the ultrasound image data.
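  • The scan convert (coordinate transformation) process can be sketched as nearest-neighbour resampling from the sector-scan (polar) geometry onto a Cartesian raster; the 90-degree sector and grid sizes are illustrative:

```python
import math

def scan_convert(polar, r_max, n_x, n_z):
    """Nearest-neighbour scan conversion sketch: resample a sector scan
    stored as polar[beam][sample] onto a Cartesian display raster.
    Beams are assumed to span -45..+45 degrees around the probe axis."""
    n_beams, n_samples = len(polar), len(polar[0])
    out = [[0.0] * n_x for _ in range(n_z)]
    for iz in range(n_z):
        for ix in range(n_x):
            x = (ix / (n_x - 1) - 0.5) * 2.0 * r_max   # lateral position
            z = iz / (n_z - 1) * r_max                 # depth
            r = math.hypot(x, z)
            theta = math.atan2(x, z)                   # angle from probe axis
            if r <= r_max and abs(theta) <= math.pi / 4:
                ib = int(round((theta / (math.pi / 2) + 0.5) * (n_beams - 1)))
                ir = int(round(r / r_max * (n_samples - 1)))
                out[iz][ix] = polar[ib][ir]
    return out
```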
  • the B-mode data and the Doppler data are the ultrasound image data before the scan convert process is performed.
  • the data generated by the image generating unit 14 is the display-purpose ultrasound image data obtained after the scan convert process is performed.
  • the B-mode data and the Doppler data may also be referred to as raw data.
  • the image generating unit 14 generates three-dimensional B-mode image data by performing a coordinate transformation process on the three-dimensional B-mode data generated by the B-mode processing unit 12 . Further, the image generating unit 14 generates three-dimensional Doppler image data by performing a coordinate transformation process on the three-dimensional Doppler data generated by the Doppler processing unit 13 . In other words, the image generating unit 14 generates “the three-dimensional B-mode image data or the three-dimensional Doppler image data” as “three-dimensional ultrasound image data (volume data)”.
  • the image generating unit 14 performs a rendering process on the volume data, to generate various types of two-dimensional image data used for displaying the volume data on the monitor 2 .
  • Examples of the rendering process performed by the image generating unit 14 include a process to generate Multi Planar Reconstruction (MPR) image data from the volume data by implementing an MPR method.
  • Other examples of the rendering process performed by the image generating unit 14 include a process to apply a “curved MPR” to the volume data and a process to apply a “maximum intensity projection” to the volume data.
  • Another example of the rendering process performed by the image generating unit 14 is a Volume Rendering (VR) process to generate two-dimensional image data reflecting three-dimensional information.
  • Yet another example of the rendering process performed by the image generating unit 14 is a surface rendering process to generate Surface Rendering (SR) image data that three-dimensionally renders the shape of the surface of a target of the rendering process.
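  • Of the rendering modes above, the maximum intensity projection is the simplest to sketch: each display pixel keeps the brightest voxel along its ray (here, along the depth axis of a toy volume):

```python
def mip_along_depth(volume):
    """Maximum intensity projection sketch: collapse a volume[z][y][x]
    along the z (depth) axis by keeping the brightest voxel per ray, one
    of the rendering modes applied to ultrasound volume data."""
    n_z = len(volume)
    n_y, n_x = len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(n_z)) for x in range(n_x)]
            for y in range(n_y)]
```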
  • the image memory 15 is a memory that stores therein the image data generated by the image generating unit 14 . Further, the image memory 15 is also able to store therein the data generated by the B-mode processing unit 12 and the Doppler processing unit 13 . After a diagnosis process, for example, the operator is able to invoke the B-mode data or the Doppler data stored in the image memory 15 , and the invoked data is turned into display-purpose ultrasound image data via the image generating unit 14 .
  • the image generating unit 14 stores the volume data, i.e., the three-dimensional ultrasound image data, and the time at which an ultrasound scan was performed to generate the volume data into the image memory 15 , while keeping the electrocardiogram transmitted from the electrocardiograph 4 in correspondence therewith.
  • the image processing unit 17 and the controlling unit 18 are able to obtain cardiac phases at the time of the ultrasound scan performed to generate the volume data, by referring to the data stored in the image memory 15 .
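  • Obtaining a cardiac phase from the stored ECG correspondence can be sketched as locating the scan time within an R-R interval; the R-wave timestamps used here are illustrative:

```python
def cardiac_phase(t, r_wave_times):
    """Cardiac phase (0..1) of a scan performed at time t, from ECG R-wave
    timestamps: the fraction of the current R-R interval elapsed at t."""
    for t0, t1 in zip(r_wave_times, r_wave_times[1:]):
        if t0 <= t < t1:
            return (t - t0) / (t1 - t0)
    raise ValueError("t lies outside the recorded R-R intervals")
```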
  • the internal storage unit 16 stores therein a control computer program (hereinafter, “control program”) to realize ultrasound transmissions and receptions, image processing, and display processing, as well as various types of data such as diagnosis information (e.g., patients' IDs, medical doctors' observations), diagnosis protocols, and various types of body marks. Further, the internal storage unit 16 may be used, as necessary, for storing therein any of the image data stored in the image memory 15 . Further, it is possible to transfer the data stored in the internal storage unit 16 to an external apparatus via an interface (not shown). Examples of the external apparatus include a personal computer (PC) used by a medical doctor who performs an image diagnosis process, a storage medium such as a compact disk (CD) or a digital versatile disk (DVD), and a printer.
  • the image processing unit 17 is provided in the apparatus main body 10 for performing a Computer-Aided Diagnosis (CAD) process.
  • the image processing unit 17 obtains the ultrasound image data stored in the image memory 15 and performs image processing processes thereon to aid diagnosis processes. Further, the image processing unit 17 stores results of the image processing processes into the image memory 15 and/or the internal storage unit 16 . Processes performed by functional units included in the image processing unit 17 will be described in detail later.
  • the controlling unit 18 controls the entire processes performed by the ultrasound diagnosis apparatus. More specifically, on the basis of the various types of setting requests input by the operator via the input device 3 and various types of control programs and various types of data read from the internal storage unit 16 , the controlling unit 18 controls processes performed by the transmitting and receiving unit 11 , the B-mode processing unit 12 , the Doppler processing unit 13 , the image generating unit 14 , and the image processing unit 17 . Further, the controlling unit 18 exercises control so that the monitor 2 displays the display-purpose ultrasound image data stored in the image memory 15 and the internal storage unit 16 . Furthermore, the controlling unit 18 exercises control so that the monitor 2 displays processing results obtained by the image processing unit 17 .
  • the ultrasound diagnosis apparatus according to the first embodiment configured as described above performs the processes described below by employing the image processing unit 17 and the controlling unit 18 , for the purpose of conveniently presenting, in a non-invasive manner, information related to myocardial fiber directions and information about local movement components on the myocardial plane across which myocardial fibers extend.
  • the image processing unit 17 includes a calculating unit 171 , an obtaining unit 172 , and a determining unit 173 .
  • By using a plurality of pieces of three-dimensional ultrasound image data in a time series corresponding to a three-dimensional region including the myocardium of the subject P, the calculating unit 171 calculates first movement information indicating a movement of the myocardium, by tracking a movement of a region of interest that corresponds to the myocardium and that is set in each of the plurality of pieces of three-dimensional image data.
  • the calculating unit 171 calculates the first movement information that is information about the movement of “the ‘region of interest’ that is set in the myocardium as a three-dimensional tracking target”, by performing a process including a three-dimensional pattern matching process on a group of three-dimensional ultrasound image data in a time series generated by performing an ultrasound scan on the heart.
  • the region of interest is at least one boundary plane selected from an endocardial plane of the myocardium, an epicardial plane of the myocardium, and an intermediate layer plane of the myocardium.
  • the obtaining unit 172 obtains direction information indicating the direction of a myocardial fiber in the myocardium.
  • the direction information serves as information indicating the direction of a myocardial fiber in the region of interest.
  • the determining unit 173 determines second movement information indicating a movement of the myocardium with respect to the direction of the myocardial fiber, on the basis of the first movement information and the direction information of the myocardial fiber. More specifically, the obtaining unit 172 according to the first embodiment estimates the direction information indicating the direction of the myocardial fiber in the myocardium, by using the first movement information. Further, the determining unit 173 according to the first embodiment determines the second movement information by using the direction information estimated by the obtaining unit 172 .
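  • The core of the determination is projecting the tracked movement onto the fiber direction. A minimal sketch (a dot product with the normalised direction; the full strain computation along the fiber is more involved):

```python
import math

def fiber_component(displacement, fiber_dir):
    """Project a tracked 3-D myocardial displacement (first movement
    information) onto the estimated fiber direction to obtain the movement
    component along the fiber (second movement information).
    `fiber_dir` need not be normalised."""
    norm = math.sqrt(sum(c * c for c in fiber_dir))
    unit = [c / norm for c in fiber_dir]
    return sum(d * u for d, u in zip(displacement, unit))
```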
  • the controlling unit 18 causes the monitor 2 to display the second movement information.
  • the obtaining unit 172 obtains the direction information indicating the direction of a myocardial fiber, for each of the plurality of regions of interest.
  • the obtaining unit 172 estimates the direction information indicating the direction of a myocardial fiber for each of a plurality of regions of interest, by using the first movement information of each of the plurality of regions of interest calculated by the calculating unit 171 .
  • the determining unit 173 determines a piece of second movement information for each of the plurality of regions of interest. After that, the controlling unit 18 causes the pieces of second movement information corresponding to the plurality of regions of interest to be displayed.
  • the calculating unit 171 calculates the first movement information by tracking the position of each of the regions of interest set in each of the plurality of pieces of three-dimensional image data, by performing a process including a three-dimensional pattern matching process between pieces of image data. More specifically, the calculating unit 171 performs a three-dimensional speckle tracking (hereinafter, “3DT”) process on a group of three-dimensional ultrasound image data (three-dimensional moving image data).
  • An example of a speckle tracking method is a method that makes it possible to accurately estimate a movement by, for example, implementing an optical flow method or performing any of various types of spatiotemporal interpolation processes, together with a pattern matching process. Further, there are some other speckle tracking methods by which a movement is estimated without performing any pattern matching process.
  • FIGS. 2 and 3 are drawings for explaining the calculating unit.
  • the operator three-dimensionally scans, for example, the left-side system of the heart of the subject P for a time period equal to or longer than one heartbeat by an apical approach.
  • the image generating unit 14 generates a group of three-dimensional ultrasound image data in a time series corresponding to the time period equal to or longer than one heartbeat and stores the generated group of data into the image memory 15 .
  • the group of three-dimensional ultrasound image data described above is a group of three-dimensional B-mode image data.
  • the operator specifies a group of three-dimensional ultrasound image data (time-series data) on which the image processing unit 17 is to perform an analysis and inputs an analysis start request to request the analysis performed by the image processing unit 17 .
  • Having received the analysis start request, the controlling unit 18, for example, causes the image generating unit 14 to generate a plurality of pieces of MPR image data obtained by cross-sectioning, on cross-sectional planes extending in multiple directions, the three-dimensional ultrasound image data in the first frame (a first volume) in the group of three-dimensional ultrasound image data and further causes the monitor 2 to display the generated pieces of MPR image data.
  • the calculating unit 171 obtains the group of three-dimensional ultrasound image data serving as the analysis target, from the image memory 15 .
  • the operator sets regions of interest on which a 3DT process is to be performed. For example, the operator traces, within the pieces of MPR image data, the positions of the endocardium of the left ventricle and the epicardium of the myocardium of the left ventricle. After that, for example, the calculating unit 171 reconstructs three-dimensional boundary planes from an endocardial plane and an epicardial plane that were traced. Subsequently, as illustrated in FIG.
  • the calculating unit 171 sets a mesh structured with a plurality of rectangles for each of the endocardial and epicardial planes in the first frame and sets intersection points of the meshes as tracking points.
  • the mesh for the epicardial plane is illustrated with line segments that are thicker than the line segments of the mesh for the endocardial plane.
  • the calculating unit 171 may automatically generate the position of the epicardial plane in a position that is apart from the endocardial plane by a predetermined thickness (a predetermined distance).
  • the present embodiment is not limited to the example where the boundary planes that are manually set by the operator are used. Another arrangement is acceptable in which the calculating unit 171 or the controlling unit 18 automatically sets the positions of the boundary planes on the basis of brightness levels or the like of the three-dimensional ultrasound image data.
  • the calculating unit 171 sets a coordinate system of the left ventricle including a long axis and a short axis, or the like, on the basis of the shapes of the endocardial plane and the epicardial plane.
  • a longitudinal direction of the left ventricle and a circumferential direction of the left ventricle have been set in the three-dimensional ultrasound image data.
  • a radial (wall-thickness) direction of the left ventricle has also been set.
  • the coordinate system of the left ventricle may manually be set by the operator.
  • it is also possible to set a region of interest inside the myocardium by setting an intermediate layer plane between the endocardial plane and the epicardial plane, as a boundary plane serving as a region of interest.
  • With respect to each of the plurality of tracking points set on the endocardial plane in the first frame, the calculating unit 171 sets template data. Further, with respect to each of the plurality of tracking points set on the epicardial plane in the first frame, the calculating unit 171 sets template data. Each of these pieces of template data is structured with a plurality of voxels centered on the tracking point.
  • the calculating unit 171 tracks the position of the template data by finding out the position into which the template data has moved in the following frame. In other words, the calculating unit 171 tracks the positions of the tracking points by finding out the positions thereof in an n'th frame into which the tracking points in the first frame have moved. As a result, the calculating unit 171 determines the position of a tracking point “P” on the boundary plane at a time “t” for each of all the frames.
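The template-tracking step above can be sketched in Python (illustrative only: the patent does not specify the matching criterion or the search strategy, so a sum-of-absolute-differences search over a small neighborhood is assumed, and `track_point_3d` is a hypothetical name):

```python
import numpy as np

def track_point_3d(vol0, vol1, p, half=2, search=2):
    """Track one tracking point from volume vol0 to the next volume vol1
    by 3-D template matching: the template is a cube of voxels centered
    on the point, and the best match minimizes the sum of absolute
    differences (SAD) over a small search neighborhood."""
    z, y, x = p
    tmpl = vol0[z - half:z + half + 1, y - half:y + half + 1, x - half:x + half + 1]
    best_sad, best_p = np.inf, p
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cz, cy, cx = z + dz, y + dy, x + dx
                cand = vol1[cz - half:cz + half + 1,
                            cy - half:cy + half + 1,
                            cx - half:cx + half + 1]
                sad = np.abs(cand - tmpl).sum()
                if sad < best_sad:
                    best_sad, best_p = sad, (cz, cy, cx)
    return best_p  # position of the tracking point in the following frame
```

In practice, as the text notes, the search would be refined with spatiotemporal interpolation or an optical flow method rather than used in this bare form.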
  • In the drawings, "L" denotes the longitudinal direction and "C" denotes the circumferential direction.
  • the calculating unit 171 calculates a motion vector “V(P(t))” of the tracking point “P(t)” at each of the times “t” by using Expression (1) shown below, as the first movement information.
  • P(t) denotes a single point in at least one region of interest selected from the endocardial plane, the epicardial plane, and the intermediate layer plane.
  • V(P(t)) = P(t+1) − P(t)  (1)
  • the obtaining unit 172 estimates direction information indicating directions of myocardial fibers, by using the local three-dimensional motion vectors (the motion vectors related to the individual tracking points structuring the regions of interest) that were obtained as the first movement information from the 3DT process and that were obtained by performing the process including the three-dimensional pattern matching process performed by the calculating unit 171 .
  • the obtaining unit 172 obtains projection components of the motion vectors on the boundary planes serving as the regions of interest and estimates the obtained projection components as the direction information. More specifically, the obtaining unit 172 obtains orthogonal projection components of the motion vectors on the boundary planes and estimates the orthogonal projection components as the direction information.
  • the obtaining unit 172 estimates the direction information indicating the directions of the myocardial fibers, on the basis of the motion vectors and normal vectors with respect to each of the regions of interest (the boundary planes) near the tracking points at which the motion vectors were obtained.
  • the obtaining unit 172 obtains the orthogonal projection components of the motion vectors by using the normal vectors on the boundary planes near the tracking points at which the motion vectors were obtained and estimates the orthogonal projection components as the direction information indicating the directions of the myocardial fibers.
  • the determining unit 173 determines second movement information that is movement information of the directions of the myocardial fibers.
  • the process described above is a process based on a hypothesis that “moving directions of the individual tracking points obtained from the 3DT process substantially coincide with the directions of the myocardial fibers” and an objective fact that “the movement components in the radial (wall-thickness) direction are not movement components of the fiber directions”.
  • the process described above is a process of “extracting only the movement components on each of the boundary planes at the origins of the movements, by excluding the movements in the radial (wall-thickness) direction through the orthogonal projection of the individual motion vectors on the boundary plane serving as the region of interest”.
  • the hypothesis presented above will be referred to as “hypothesis (0)”.
  • V⊥(P(t)) = V(P(t)) − &lt;n(P(t)), V(P(t))&gt;*n(P(t))  (2)
  • V ⁇ (P(t)) denotes “motion vector information of the myocardium (MyoVector)”. Further, in the range where hypothesis (0) explained above is true, it is possible to approximately regard “MyoVector” as the direction of the myocardial fiber “MyoFiber”. Accordingly, the simplest exemplary configuration in the first embodiment is to define “MyoVector”, without applying any change thereto, as the movement information (the second movement information) indicating the movement of the myocardium with respect to the myocardial fiber, on an assumption that “MyoVector” expresses the direction of the myocardial fiber. This definition will be explained later, as a “first direction definition”.
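Expressions (1) and (2) amount to a difference of positions followed by removal of the normal component. A minimal numerical sketch (assuming n(P(t)) is supplied as a unit normal; the function names are illustrative, not from the patent):

```python
import numpy as np

def motion_vector(p_t, p_t1):
    """Expression (1): V(P(t)) = P(t+1) - P(t)."""
    return np.asarray(p_t1, float) - np.asarray(p_t, float)

def myo_vector(v, n):
    """Expression (2): V_perp(P(t)) = V - <n, V>*n, i.e. the orthogonal
    projection of the motion vector onto the boundary plane whose unit
    normal at the tracking point is n."""
    v = np.asarray(v, float)
    n = np.asarray(n, float)
    return v - np.dot(n, v) * n
```

The projected vector has no component along the normal, which is exactly the exclusion of the radial (wall-thickness) movement described above.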
  • “MyoVector” expresses the movement component on the myocardial plane across which the myocardial fiber extends. It should be noted, however, that the direction of “MyoVector” may not necessarily be equal to “MyoFiber” expressing the direction of the myocardial fiber.
  • a “myocardial sheet sliding theory” is known for explaining a mechanism that causes an increase in the wall thickness. According to this theory, the direction in which a myocardial sheet slides is perpendicular to a myocardial fiber direction. Accordingly, the movement information of the local myocardium includes not only the movement component from the fiber strain (the expansion and contraction) in the fiber direction, but also the movement component from the sheet sliding.
  • the movement component in the direction perpendicular to the myocardial fiber direction is a constraint condition for the exemplary configuration described above. In the following explanation, the constraint condition will be referred to as “constraint condition (A)”.
  • the image processing unit 17 performs various pre-processing processes explained below.
  • three pre-processing processes (a first pre-processing process, a second pre-processing process, and a third pre-processing process) that are performed to cope with three situations (a first situation, a second situation, and a third situation) where hypothesis (0) is not true, respectively.
  • FIGS. 4 to 6 are drawings for explaining the pre-processing processes performed by the obtaining unit.
  • the first situation is a situation where the first movement information is substantially equal to “zero”.
  • If the scalar quantity of the calculated first movement information "V(P(t))" is substantially equal to "zero", the scalar quantity of "V⊥(P(t))" used for obtaining the direction information of the myocardial fiber is also substantially equal to "zero". In that situation, the obtaining unit 172 is unable to estimate the direction information of the myocardial fiber.
  • the obtaining unit 172 estimates, as the first pre-processing process, direction information indicating the direction of the myocardial fiber in the temporal phase by performing a temporal interpolation process. For example, if the absolute value (the scalar quantity) of “V ⁇ (P(t))” is smaller than a threshold value “Vth” set in advance, the obtaining unit 172 temporally interpolates the value of “V ⁇ (P(t))”.
  • the obtaining unit 172 calculates “V ⁇ (P(t))” by using Expression (3) shown below.
  • V⊥(P(t)) = V⊥(P(t−1))  (Only if "|V⊥(P(t))| &lt; Vth" is satisfied)  (3)
  • Expression (3) indicates that, if the absolute value of “V ⁇ (P(t))” is smaller than the threshold value “Vth”, “V ⁇ (P(t))” is regarded to be equal to an orthogonal projection component “V ⁇ (P(t ⁇ 1))” obtained at time “t ⁇ 1”.
  • the calculation using Expression (3) is merely an example.
  • the obtaining unit 172 calculates the direction information in the temporal phase by performing an interpolation process that uses the data in the temporal phase immediately preceding the temporal phase and the data in the temporal phase immediately following the temporal phase.
  • the obtaining unit 172 may use an average vector of “V ⁇ (P(t ⁇ 1))” and “V ⁇ (P(t+1))” as “V ⁇ (P(t))”.
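The first pre-processing process can be sketched as follows (Expression (3) applied frame by frame; treating the first temporal phase as always valid is an assumption of this sketch, and the function name is illustrative):

```python
import numpy as np

def interpolate_small_vectors(v_seq, vth):
    """Where |V_perp(P(t))| < Vth, carry over the value from the
    immediately preceding temporal phase, as in Expression (3)."""
    out = [np.asarray(v_seq[0], float)]
    for v in v_seq[1:]:
        v = np.asarray(v, float)
        out.append(out[-1] if np.linalg.norm(v) < vth else v)
    return out
```

Averaging the immediately preceding and following temporal phases, as also mentioned above, would be a straightforward variant of the same idea.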
  • the second situation is a situation in which a translation movement of the entire heart is taken into consideration.
  • the heart may move in a swinging motion while using the apex as the base point.
  • the movement component of such a translation movement does not necessarily coincide with a local myocardial fiber direction.
  • the obtaining unit 172 performs, as the second pre-processing process, a process to eliminate the translation movement component of the entire heart.
  • the obtaining unit 172 obtains a partial translation movement component of the heart from a partial average motion vector in the region of interest and estimates direction information indicating the direction of a myocardial fiber by using a component obtained by subtracting the obtained translation movement component from a motion vector. Even more specifically, the obtaining unit 172 estimates the direction information indicating the direction of the myocardial fiber by using a component obtained by subtracting the translation movement component locally estimated on a short-axis plane that is set in a different position in terms of the longitudinal (long-axis) direction of the region of interest, from a vector component obtained by separating the components of the motion vector in directions on the short-axis plane.
  • the obtaining unit 172 estimates a translation movement component “Vw(t)” from the motion vector in the region of interest. After that, for example, by using Expression (4) shown below, the obtaining unit 172 calculates “V′ ⁇ (P(t))” by subtracting “Vw(t)” from “V ⁇ (P(t))”, which is the motion vector on the boundary plane at each of the tracking points, and estimates the myocardial fiber direction by using calculated “V′ ⁇ (P(t))”.
  • V′⊥(P(t)) = V⊥(P(t)) − Vw(t)  (4)
  • Because the translation movement component "Vw(t)" expresses the movement component related to the entire heart, it may be effective, for the estimation thereof, to use an average motion vector obtained by averaging the motion vectors within the region of interest.
  • Because the main factor of the translation movement is considered to be a sideways swinging motion centered on the apex when the left ventricle is imagined to be a hanging bell, it would be impossible to correctly estimate the sideways swinging component, which is supposed to vary in accordance with the distance from the apex, if the "motion vector obtained by averaging the motion vectors within the entirety of the region of interest" were used as the average motion vector.
  • For this reason, the obtaining unit 172 uses an average vector "ave(L)_V(t)" obtained by averaging the motion vectors in the circumferential direction at each short-axis level "L". It should be noted, however, that because the movement in the long-axis direction is a valid contraction component, the obtaining unit 172 needs to perform the subtraction between the abovementioned motion vector components so as not to affect the motion vector component in the long-axis direction.
  • the obtaining unit 172 extracts a vector component in the direction of the short-axis plane "S(L)" perpendicular to the long-axis direction, from the motion vector "V(P(t))". More specifically, the obtaining unit 172 calculates an orthogonal projection vector component "Vs(P(t))" obtained by projecting the motion vector "V⊥(P(t))", which is the orthogonal projection component, onto a regression plane "C′(t)" of "S(L)". In that situation, the obtaining unit 172 calculates a unit normal vector "nC′(t)" of the regression plane "C′(t)".
  • the obtaining unit 172 extracts a plurality of tracking points on the endocardial plane passing through the short-axis plane “S(L)” or a plurality of tracking points positioned within a predetermined distance from the endocardial plane passing through the short-axis plane “S(L)”. After that, for example, as illustrated in FIG. 4 , the obtaining unit 172 calculates the regression plane “C′(t)” by implementing a least-squares method while using the positions of the extracted plurality of tracking points and further calculates the unit normal vector “nC′(t)” from the regression plane “C′(t)”.
  • the obtaining unit 172 obtains “Vs(P(t))” on the short-axis plane “S(L)” by using Expression (5) shown below.
  • the calculation in Expression (5) is the same as the calculation process in Expression (2) used for deriving the orthogonal projection component.
  • Vs(P(t)) = V⊥(P(t)) − &lt;nC′(t), V⊥(P(t))&gt;*nC′(t)  (5)
  • Expression (5) indicates that “Vs(P(t))” is a vector component of the motion vector “V ⁇ (P(t))” in the direction of the short-axis plane “S(L)”.
  • Vs(P(t)) can be expressed as “Vs(p(C,L,t))”.
  • the obtaining unit 172 obtains "Vs(p(C,L,t))" for each of the tracking points in the circumferential direction using the range "1 ≤ C ≤ N" and further obtains "ave(L)_V(t)" by calculating an average of the obtained values of "Vs(p(C,L,t))".
  • the obtaining unit 172 determines “ave(L)_V(t)” to be the translation movement component “Vw(t)”. In other words, the obtaining unit 172 obtains the translation movement component “Vw(t)” by performing the calculation in Expression (6) shown below.
  • the obtaining unit 172 estimates the myocardial fiber direction, by assigning “Vw(t)” obtained from Expression (6) to Expression (4) and obtaining “V′ ⁇ (P(t))” by eliminating the translation movement component from “V ⁇ (P(t))”.
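The second pre-processing process at one short-axis level can be sketched as follows (an SVD-based plane fit stands in for the least-squares method named in the text; because Expression (6) is not reproduced above, Vw(t) is assumed to be the circumferential average of the Vs components; function names are illustrative):

```python
import numpy as np

def regression_plane_normal(points):
    """Unit normal nC'(t) of the least-squares regression plane C'(t)
    through the tracking-point positions at one short-axis level."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    # the right-singular vector with the smallest singular value is
    # perpendicular to the best-fit plane of the point cloud
    return np.linalg.svd(centered)[2][-1]

def remove_translation(v_perp_list, points):
    """Project each motion vector onto the regression plane
    (Expression (5)), average the projections in the circumferential
    direction to estimate Vw(t) (assumed Expression (6)), and subtract
    Vw(t) from each V_perp (Expression (4))."""
    n = regression_plane_normal(points)
    vs = [np.asarray(v, float) - np.dot(n, np.asarray(v, float)) * n
          for v in v_perp_list]               # Vs(P(t)), Expression (5)
    vw = np.mean(vs, axis=0)                  # Vw(t), assumed Expression (6)
    return [np.asarray(v, float) - vw for v in v_perp_list]  # Expression (4)
```

Because Vw(t) lies in the short-axis plane, subtracting it leaves the long-axis (contraction) component untouched, as required above.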
  • the third situation is a situation in which “a rotation movement of the heart that can be regarded as a movement perpendicular to the myocardial fiber direction” is taken into consideration.
  • a change in the myocardial fiber direction is observed as a movement component included in the first movement information.
  • a “movement component derived from a change in the myocardial fiber direction” makes it difficult for hypothesis (0) described above to be true.
  • the “movement component derived from a change in the myocardial fiber direction” corresponds to a torsion, i.e., the rotation component on each short-axis level “L”.
  • the obtaining unit 172 performs, as the third pre-processing process, a process to eliminate the rotation component on the basis of a hypothesis that “the rotation direction is substantially perpendicular to the myocardial fiber direction”.
  • this hypothesis will be referred to as “hypothesis (1)”.
  • the obtaining unit 172 obtains the rotation component on a short-axis plane that is set in a different position in terms of the longitudinal (long-axis) direction of the region of interest, from the motion vector. After that, as the third pre-processing process, the obtaining unit 172 estimates direction information indicating the direction of the myocardial fiber, by using a component obtained by subtracting the obtained rotation component from the motion vector.
  • the obtaining unit 172 first separates the components of the motion vector “V ⁇ (P(t))” obtained from the motion vector “V(P(t))” into a vector component “Vx(P(t))” in the long-axis direction and a vector component “Vs(P(t))” in the direction of the short-axis plane “S(L)” orthogonal to the long-axis direction. After that, the obtaining unit 172 estimates a rotation component “rot(S(L))_P(t)” of a vector “Vs(P(t))” on the short-axis plane.
  • the obtaining unit 172 estimates the myocardial fiber direction by using a motion vector “Vs′ ⁇ (P(t))” obtained by subtracting the rotation component “rot(S(L))_P(t)” from “Vs(P(t))”.
  • the final motion vector that is obtained by the obtaining unit 172 as a result of the third pre-processing process to be used for estimating the myocardial fiber direction, i.e., "V′⊥(P(t))" obtained by eliminating the rotation component from "V⊥(P(t))", is calculated by using Expression (7) shown below.
  • V′⊥(P(t)) = Vx(P(t)) + Vs′⊥(P(t))  (7)
  • Expression (7) indicates that the final motion vector “V′ ⁇ (P(t))” is obtained by combining the vector component of “V ⁇ (P(t))” in the longitudinal (long-axis) direction with the vector component obtained by subtracting the rotation component on the short-axis plane from the vector component of “V ⁇ (P(t))” in the short-axis plane direction.
  • the obtaining unit 172 obtains “Vx(P(t))” by using Expression (8) shown below, while using a unit direction vector “x(L,P(t))” in the longitudinal (long-axis) direction at the tracking point position “P(t)” at the short-axis level “L”.
  • Vx(P(t)) = &lt;x(L,P(t)), V⊥(P(t))&gt;*x(L,P(t))  (8)
  • the obtaining unit 172 obtains “Vs′ ⁇ (P(t))” by using Expression (9) shown below.
  • “Vs(P(t))” in Expression (9) is a vector equivalent to “Vs(P(t))” explained in the second situation above.
  • “Vs(P(t))” is a vector obtained by projecting “V(P(t))” on the regression plane “C′(t)” by using the unit normal vector “nC′(t)”.
  • Vs′⊥(P(t)) = Vs(P(t)) − rot(S(L))_P(t)  (9)
  • the obtaining unit 172 calculates, as illustrated in FIG. 5 , the angle formed by “V(P(t))” and “V(P(t0))”.
  • “V(P(t0))” is a vector obtained by projecting “V(P(t0))” in a reference temporal phase “t0” onto “C′(t)” by using the unit normal vector “nC′(t0)”.
  • the calculated angle is, as illustrated in FIG. 5 , a local rotation angle “ ⁇ (C,L,t)” of the tracking position “P(t)” on the short-axis plane “S(L)” at the time “t”.
  • the rotation center used for defining the rotation angle is, for example, the gravity point of “C′(t)” corresponding to the contour on the regression plane.
  • the regression plane and the rotation angle illustrated in FIGS. 4 and 5 are described in detail in Japanese Patent Application Laid-open No. 2010-51731.
  • the obtaining unit 172 calculates "θ(C,L,t)" for each of the tracking points in the circumferential direction, on "C′(t)" corresponding to the contour on the regression plane. After that, the obtaining unit 172 calculates an average rotation angle "θ′(L,t)" by averaging "θ(C,L,t)" on "C′(t)" in the circumferential direction. In this situation, the rotation angle "θr" used for obtaining "rot(S(L))_P(t)" by performing the process described below may be either the local rotation angle "θ(C,L,t)" or the average rotation angle "θ′(L,t)".
  • the operator is able to select a rotation angle to be used from among these rotation angles, depending on a trade-off between fineness of the spatial resolution of the rotation component to be estimated and spatial stability.
  • the obtaining unit 172 determines the distance “R” between the rotation center “G(L,t)” used for defining the rotation angle and the point “C′(P(t))” on “C′(t)”, to be the radius. After that, the obtaining unit 172 obtains a point “C′(r)” in the position reached by rotating the point “C′(P(t))” by the rotation angle “ ⁇ r”.
  • the obtaining unit 172 calculates the rotation component “rot(S(L))_P(t)” on “C′(t)”.
  • C′(r0) in Expression (10) denotes, as illustrated in the bottom section of FIG. 6 , the point at the position corresponding to “C′(r)” when “G(L,t)” is translated to the zero vector “0”.
  • C′(P0(t)) in Expression (10) denotes, as illustrated in the bottom section of FIG. 6 , the point at the position corresponding to “C′(P(t))” when “G(L,t)” is translated to the zero vector “0”.
  • the obtaining unit 172 obtains “C′(r0)” from “C′(P0(t))” and “ ⁇ r” that are already known, so as to obtain the rotation component.
  • the obtaining unit 172 obtains “Vs′ ⁇ (P(t))”. Subsequently, the obtaining unit 172 calculates “V′ ⁇ (P(t))” obtained by eliminating the rotation component from “V ⁇ (P(t))”, by assigning “Vs′ ⁇ (P(t))” calculated from Expression (9) to Expression (7), so as to determine “V′ ⁇ (P(t))” to be fiber direction information of the myocardium.
  • the pre-processing processes performed in the first embodiment to address the three different situations have thus been explained.
  • the first, the second, and the third pre-processing processes performed on the motion vector “V ⁇ (P(t))” are all independent processes.
  • the obtaining unit 172 is also able to obtain the final motion vector by using any of the three pre-processing processes in an arbitrary combination, as necessary. It should be noted that, however, for example, if the operator has determined that there is no need to perform any pre-processing process, the operator may arrange the processes described below to be performed by using the motion vector “V ⁇ (P(t))” on which none of the pre-processing processes has been performed.
  • "V⊥(P(t))" may be the motion vector of the orthogonal projection component on which no pre-processing process has been performed or may be the motion vector of the orthogonal projection component that is eventually obtained as a result of any of the pre-processing processes.
  • the determining unit 173 defines the myocardial fiber direction and further determines the second movement information by using the defined myocardial fiber direction. More specifically, the determining unit 173 determines the myocardial fiber direction by using either a first direction definition or a second direction definition explained below.
  • the determining unit 173 defines the individual vector structured by the “direction information indicating the direction of the myocardial fiber” on the boundary plane serving as the region of interest, as the myocardial fiber direction.
  • the motion vector “V ⁇ (P(t))” is regarded as the myocardial fiber direction.
  • the determining unit 173 calculates at least one streamline obtained from at least one starting point set on the boundary plane in the vector field formed by the “information indicating the direction of the myocardial fiber” on the boundary plane serving as the region of interest, by performing a spatial interpolation process on the vector field, and further defines the “at least one streamline” as the myocardial fiber direction. More specifically, according to the second direction definition, the determining unit 173 regards a streamline vector “L(t,N)” explained below as the myocardial fiber direction.
  • the determining unit 173 determines an end (at least one of the apex and the valve annulus) in the longitudinal (long-axis) direction in the vector field “V ⁇ (P(t))” formed in the region of interest by the motion vector at the individual tracking point obtained from the process described above, to be a starting point “Q0(t)”.
  • the determining unit 173 may determine an arbitrary level near the center of the long axis in the vector field “V ⁇ (P(t))” to be the starting point “Q0(t)”.
  • the position and the quantity of “Q0(t)” may arbitrarily be set by the operator.
  • the determining unit 173 calculates the value of “V ⁇ (Q0(t))” closest to “Q0(t)”, by performing a spatial interpolation process on the vector field “V ⁇ (P(t))”. Subsequently, the determining unit 173 calculates a vector “Q1(t)” to be connected to “Q0(t)” by using Expression (11) shown below.
  • the determining unit 173 keeps connecting a vector "Qi(t)" to a vector "Qi−1(t)". After that, either when the extended distance of the connected vectors has reached a predetermined maximum length (e.g., 8 cm) or when the connected vectors have reached the other end in the longitudinal (long-axis) direction, the determining unit 173 ends the vector connecting process. In this situation, the condition under which the vector connecting process is ended may arbitrarily be set by the operator.
  • the determining unit 173 obtains one streamline vector for one starting point.
  • the determining unit 173 repeatedly performs the vector connecting process for each of the starting points which are positioned apart from each other and of which the total quantity is N.
  • the starting points of which the total quantity is N may be a group of starting points obtained by dividing the valve annulus into N sections in the circumferential direction.
  • the determining unit 173 performs a process of preventing each streamline vector from intersecting other streamline vectors that have already been drawn. More specifically, the determining unit 173 performs a process of ending the connecting process when any streamline vector to be connected is detected to intersect another streamline vector.
  • the determining unit 173 eventually obtains as many streamline vectors “L(t,N)” as N in the region of interest serving as the processing target.
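The streamline construction of the second direction definition can be sketched as follows (nearest-neighbour lookup stands in for the spatial interpolation process, a unit-step advance stands in for Expression (11), which is not reproduced above, and the intersection test is omitted for brevity; the function name is illustrative):

```python
import numpy as np

def trace_streamline(points, vectors, q0, step=1.0, max_len=8.0):
    """Connect vectors from the starting point Q0(t) through the vector
    field V_perp(P(t)): at each step, take the field vector at the
    nearest tracking point and advance by one unit step along it,
    stopping when the accumulated length reaches max_len."""
    pts = np.asarray(points, float)
    vecs = np.asarray(vectors, float)
    line = [np.asarray(q0, float)]
    length = 0.0
    while length < max_len:
        q = line[-1]
        v = vecs[np.argmin(np.linalg.norm(pts - q, axis=1))]
        norm = np.linalg.norm(v)
        if norm == 0.0:       # undefined direction: stop connecting
            break
        line.append(q + step * v / norm)
        length += step
    return np.array(line)
```

Repeating this for N starting points along the valve annulus yields the N streamline vectors "L(t,N)" described above.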
  • FIGS. 7 to 11 are drawings for explaining the output modes implemented in the first embodiment.
  • the output modes explained below are realized by a rendering process performed by the image generating unit 14 under control of the controlling unit 18 to which the output information from the determining unit 173 is input.
  • the determining unit 173 determines the myocardial fiber directions defined according to either the first direction definition or the second direction definition to be the second movement information and outputs the second movement information to the controlling unit 18 .
  • the myocardial fiber directions at the tracking points on at least one boundary plane selected from the endocardial plane, the epicardial plane, and the intermediate layer plane that is set as the region of interest are displayed in such a manner that the three-dimensional positions of the tracking points are visible.
  • An example of such a display is a 3D rendering display that uses a rendering process performed on volume data. More specifically, to realize the 3D rendering display, the myocardial fiber directions at the tracking points on the boundary plane are displayed by using SR image data obtained by performing an SR process on the boundary plane of three-dimensional ultrasound image data of the heart.
  • the myocardial fiber directions at the tracking points on the boundary plane may be displayed by implementing a map display method that uses a polar map indicating a plurality of segments and that is recommended by the American Society of Echocardiography and the American Heart Association.
  • the determining unit 173 determines the direction information “MyoVector” estimated by the obtaining unit 172 to be the second movement information. In that situation, the determining unit 173 determines the individual vectors in the vector field (i.e., “V ⁇ (P(t))” obtained at the tracking points) to be the second movement information, without applying any change thereto. Further, the controlling unit 18 causes either lines or arrows each indicating the direction and the magnitude of the individual vector to be displayed while being superimposed on either three-dimensional rendering image data of the region of interest (the SR image data of the boundary plane) or a polar map of the heart.
  • the determining unit 173 may determine the directions of “MyoVector” which is the direction information estimated by the obtaining unit 172 , to be the second movement information. In that situation, the determining unit 173 determines the directions of the individual vectors in the vector field (i.e., the directions of “V ⁇ (P(t))” obtained at the tracking points), to be the second movement information.
  • the controlling unit 18 causes either line segments or arrows each indicating the direction of the individual vector in the vector field and having a regulated length to be displayed while being superimposed on either three-dimensional rendering image data of the region of interest (the SR image data of the boundary plane) or a polar map of the heart.
  • either the line segments or the arrows are used as the display objects that present “V ⁇ (P(t))”.
  • the display objects reflect both the magnitude and the direction of the vectors “V ⁇ (P(t))”.
  • only the directions are displayed by the display objects having a predetermined size.
  • the output is specialized for the information about the myocardial fiber directions obtained from the motion vectors “V ⁇ (P(t))” in each temporal phase.
  • a normalization process is performed in order to obtain a unit direction vector “n” in the myocardial fiber direction.
  • the unit direction vector “n” in the myocardial fiber direction is obtained by dividing “V ⁇ (P(t))” by the magnitude of “V ⁇ (P(t))”.
  • the determining unit 173 sets a lower limit value “a” to the magnitude of the motion vector.
  • when the magnitude of the motion vector is smaller than the lower limit value “a”, the determining unit 173 outputs the second movement information as “zero”.
  • the determining unit 173 obtains the motion vector in the temporal phase by performing an interpolation process in the time direction (i.e., performing the same process as the one explained above as the first pre-processing process), and outputs the second movement information by using the motion vector obtained from the interpolation process.
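The normalization step and the lower-limit guard described in the bullets above can be sketched as follows. This is a minimal illustration, not the apparatus implementation: the function name and the default threshold value are assumptions, and the vector stands in for the projection component of the motion vector on the boundary plane.

```python
import numpy as np

def unit_fiber_direction(v_perp: np.ndarray, a: float = 1e-6) -> np.ndarray:
    """Divide the projected motion vector by its magnitude to obtain the unit
    direction vector "n" in the myocardial fiber direction; return the zero
    vector when the magnitude falls below the lower limit "a"."""
    mag = np.linalg.norm(v_perp)
    if mag < a:
        # Below the lower limit the direction is unreliable, so output "zero"
        # (the text's alternative is temporal interpolation of the motion vector).
        return np.zeros_like(v_perp)
    return v_perp / mag

print(unit_fiber_direction(np.array([3.0, 4.0, 0.0])))  # [0.6 0.8 0. ]
```

The guard mirrors the two fallback behaviors in the text: output zero, or (not shown here) replace the vector by one interpolated in the time direction before normalizing.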
  • FIG. 7 illustrates an example of the 3D rendering display that uses line segments obtained by normalizing the directions of “V ⁇ (P(t))” obtained at the tracking points, while using time-series data of a healthy person as an analysis target.
  • the top section of FIG. 7 illustrates images obtained by displaying normalized line segments indicating the myocardial fiber directions at the tracking points on the endocardial plane, so as to be superimposed on SR image data of the endocardium (“Endo”) serving as a region of interest.
  • the bottom section of FIG. 7 illustrates images obtained by displaying normalized line segments indicating the myocardial fiber directions at the tracking points on the epicardial plane, so as to be superimposed on SR image data of the epicardium (“Epi”) serving as another region of interest.
  • “Epi” denotes the epicardium.
  • FIG. 7 illustrates the manner in which the directions of the vectors obtained in four representative cardiac phases are distributed.
  • IVC phase denotes an isovolumetric contraction period
  • Ejection phase denotes a contraction period
  • IVR phase denotes an isovolumetric relaxation period
  • Before a′ denotes a temporal phase immediately before an atrial systole.
  • each of the normalized line segments is a line segment indicating the direction of “V ⁇ (P(t))”, which is the projection component on the boundary plane
  • each line segment is displayed in a superimposed manner as if the line segment were adhering to the boundary plane.
  • the “areas encircled with dotted lines” in FIG. 7 indicate regions in which the directions of the vectors are significantly different between the endocardium and the epicardium in the different cardiac phases. For example, it is observed from FIG. 7 that the vectors in the apex region during the contraction period are oriented in such a manner that the vectors on the endocardium intersect the vectors on the epicardium.
  • each of the line segments shown in the 3D rendering display is displayed while being color-coded on the basis of the direction of the line segment, together with an object informing the operator of the color assignments.
  • the color-coded display is realized according to a display method by which the second output mode is further applied to the first output mode. This display method will be explained in detail with the second output mode.
  • the determining unit 173 determines “at least one streamline (a streamline vector)” calculated from “MyoVector”, which is the direction information estimated by the obtaining unit 172 , to be the second movement information.
  • the controlling unit 18 causes lines each corresponding to “at least one streamline (the streamline vector)” to be displayed while being superimposed on either three-dimensional rendering image data of the region of interest or a polar map of the heart.
  • Each of the lines corresponding to the streamline vectors may be a line segment, a curve, or a polygonal line.
  • FIG. 8 illustrates an example in which a 3D rendering display using SR image data is applied to a streamline display.
  • FIG. 9 illustrates an example in which a map display using a polar map is applied to a streamline display.
  • “Ant-Sept” denotes the anteroseptal wall; “Ant” denotes the anterior wall; “Lat” denotes the lateral wall; “Post” denotes the posterior wall; “Inf” denotes the inferior wall; and “Sept” denotes the septum.
  • the streamline display illustrated in FIGS. 8 and 9 corresponds to displaying an image of streamlines along the flow of a fluid, the streamlines being formed by ink poured at a plurality of places (a plurality of starting points) in the vector field of the fluid.
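The "ink poured at starting points" analogy corresponds to tracing integral curves of the vector field. The sketch below shows one schematic way such a streamline could be traced by simple Euler integration; it is not the apparatus's rendering code, and the flat two-dimensional toy field, step size, and function names are illustrative assumptions (a real implementation would trace along the curved boundary plane).

```python
import numpy as np

def trace_streamline(field, seed, step=0.1, n_steps=50):
    """Euler integration of a streamline; `field(p)` returns the vector at p."""
    points = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        v = field(points[-1])
        mag = np.linalg.norm(v)
        if mag == 0.0:          # stop where the field vanishes
            break
        points.append(points[-1] + step * v / mag)
    return np.array(points)

# Toy circular field: streamlines stay near their starting radius.
circular = lambda p: np.array([-p[1], p[0]])
line = trace_streamline(circular, seed=(1.0, 0.0))
print(line.shape)  # (51, 2)
```

Seeding several starting points and drawing each traced polyline over the SR image data or polar map would produce a display in the spirit of FIGS. 8 and 9.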
  • in the 3D rendering display illustrated in FIG. 8 , it is possible to arbitrarily change the position of the line of sight and the direction of the line of sight in accordance with a request made by the operator.
  • the 3D rendering display in FIG. 8 uses SR image data of the boundary plane in which the boundary position is colored gray (which is shown in black in FIG. 8 for drawing purposes). This process is used for the purpose of avoiding the situation where the visibility of streamlines superimposed in the front is lowered by streamlines superimposed in the rear.
  • the streamline vector display illustrated in FIGS. 8 and 9 is more effective in helping the viewer understand the continuous distribution of the fiber vectors than the local display of the individual vectors (see FIG. 7 ).
  • the map display (the line-segment display and the streamline display using the polar map) has an advantage over the 3D rendering display, because the viewer is able to grasp, all at once, the information about the myocardial fiber directions in the entirety of the region of interest.
  • the first output mode described above may be realized in a modification example described below in which a plurality of pieces of movement information are displayed simultaneously.
  • with the display output processes explained below, when a plurality of pieces of movement information related to the myocardial fiber directions are displayed simultaneously, the operator is able to evaluate the wall movement of the left ventricle in a detailed and comprehensive manner.
  • the obtaining unit 172 estimates the direction information indicating the direction of the myocardial fiber for each of the plurality of regions of interest.
  • the determining unit 173 determines the second movement information for each of the plurality of regions of interest. After that, when a plurality of regions of interest are set, the controlling unit 18 causes the pieces of second movement information corresponding to the plurality of regions of interest to be displayed while being arranged in rows, as shown in the 3D rendering display illustrated in FIG. 7 .
  • the controlling unit 18 may cause the pieces of second movement information corresponding to the plurality of regions of interest to be displayed simultaneously, as shown in a 3D rendering display illustrated in FIG. 10 .
  • the lines indicating the directions and the magnitudes of the motion vectors “V ⁇ (P(t))” obtained at the tracking points of the endocardium are displayed while being superimposed on SR image data of the endocardial plane in which the endocardial plane boundary position is colored gray (which is shown in black in FIG. 10 for drawing purposes).
  • the lines indicating the directions and the magnitudes of the motion vectors “V ⁇ (P(t))” obtained at the tracking points of the epicardial plane are also displayed simultaneously while being superimposed on the SR image data of the endocardial plane.
  • the reason why the endocardial plane boundary position is colored gray is the same as explained above.
  • the 3D rendering display illustrated in FIG. 10 is an example of a display to which the second output mode (explained later) is applied.
  • the rendering simultaneous display illustrated in FIG. 10 makes it possible for the operator to recognize that the fiber vectors (the motion vectors) of the endocardium and the epicardium are substantially in the same directions in the region from the valve annulus to the intermediate level.
  • the operator is able to recognize that, in the region near the apex, the directions of the fiber vectors are different between the endocardium and the epicardium, because the directions of the fiber vectors on the endocardial plane are in the longitudinal direction (the vertical direction), whereas the directions of the fiber vectors on the epicardial plane are in the circumferential direction (the horizontal direction).
  • the operator is also able to recognize that, in the temporal phase used in the rendering simultaneous display illustrated in FIG. 10 , the magnitudes of the motion vectors at the apex are relatively smaller, for both the endocardium and the epicardium.
  • the 3D rendering display illustrated in FIG. 10 is also applicable to a situation where the streamlines on the endocardial plane and the epicardial plane are displayed simultaneously. Further, the 3D rendering display illustrated in FIG. 10 is also applicable to a situation where the directions of “MyoVector” obtained at the tracking points on the endocardial plane and the epicardial plane are displayed simultaneously by using normalized line segments.
  • the controlling unit 18 may cause an index related to the local wall movement in the region of interest to be displayed simultaneously together with the second movement information.
  • a 3D rendering display is realized by using line segments obtained by normalizing the myocardial fiber directions, while the endocardial plane boundary position is indicated in gray in the display area on the right-hand side.
  • the 3D rendering display illustrated in FIG. 11 is also an example of a display to which the second output mode (explained later) is applied.
  • strains, as they conventionally have been displayed, are further displayed simultaneously in the display area on the left-hand side, as an index related to the wall movement.
  • information obtained by applying a color conversion to the distribution of “radial strains (RS)” is displayed while being superimposed on a tissue structure represented by a plurality of pieces of MPR image data generated from three-dimensional B-mode image data by using a plurality of cross-sectional planes (plane A, plane B, and plane C at levels 1 to 3).
  • time-change curves of the RS are displayed in the charts in units of segments.
  • the bars extending along the vertical axes in the charts shown in FIG. 11 indicate the temporal phase that is currently being displayed.
  • the operator is able to recognize that the temporal phase in which the information indicated by the various types of image data was obtained is a temporal phase near the end systole at which the RS value reaches a peak.
  • the operator is also able to recognize the appearance of the shapes of the endocardial boundary plane in the temporal phase, as well as the manner in which the directions of the vectors on the endocardium change from the valve annulus toward the apex in a twisted fashion.
  • the 3D rendering simultaneous display in which the pieces of information about the endocardium and the epicardium are displayed simultaneously may be realized in such a manner that the line segments (or the arrows) corresponding to the myocardial fiber directions on the endocardial plane and the epicardial plane are displayed in a superimposed manner while the colors thereof are varied.
  • the 3D rendering simultaneous display in which the pieces of information about the endocardium and the epicardium are displayed simultaneously may be realized in such a manner that the pieces of information are displayed in a superimposed manner while the degrees of display transparency related to the display objects are varied between the two.
  • the controlling unit 18 may cause the pieces of second movement information corresponding to the plurality of regions of interest to be displayed while being superimposed on the three-dimensional rendering image data of a region of interest, by using any display mode in which the regions of interest are distinguishable from each other.
  • with the display control exercised in this manner, for example, the operator is able to more easily identify the myocardial fiber directions on both of the endocardial plane and the epicardial plane.
  • the directions in the output information have been referred to as “myocardial fiber directions” for the sake of convenience in the explanation.
  • the direction information indicated by “MyoVector” obtained in the present embodiment may not necessarily be myocardial fiber directions.
  • “MyoVector” is obtained by extracting the information indicating, at least, the movement on the myocardial plane across which the myocardial fibers extend and the directions of the myocardial fibers.
  • the processes described above performed by “the calculating unit 171 , the obtaining unit 172 , and the determining unit 173 ” can be expressed as follows:
  • the calculating unit 171 calculates the first movement information (the motion vector) indicating a movement of the myocardium by tracking a movement of the region of interest that is on a predetermined plane of the myocardium and that is set in each of the plurality of pieces of three-dimensional image data.
  • the obtaining unit 172 obtains the vector information (MyoVector) of the projection component of the movement in the region of interest on the predetermined plane of the myocardium, on the basis of the first movement information. After that, the determining unit 173 determines the second movement information indicating the movement of the myocardium, on the basis of the vector information.
  • MyoVector vector information of the projection component of the movement in the region of interest on the predetermined plane of the myocardium
  • the determining unit 173 determines a myocardial fiber angle on the basis of the myocardial fiber directions defined according to either the first direction definition or the second direction definition. After that, the determining unit 173 determines the myocardial fiber angle to be second movement information and outputs the second movement information to the controlling unit 18 . In other words, the determining unit 173 determines the myocardial fiber angle, which is an angle formed by the myocardial fiber and either the longitudinal direction of the myocardium or the circumferential direction of the myocardium, to be the second movement information. After that, the controlling unit 18 causes the monitor 2 to display the myocardial fiber angle.
  • the angle (a myocardial fiber angle “ ⁇ (t)”) formed by the myocardial fiber direction defined according to either the first direction definition or the second direction definition is quantified, at each temporal phase “t”, with respect to the longitudinal (long-axis) direction determined at a reference temporal phase “t0”, so that “ ⁇ (t)” is displayed as the second movement information.
  • the angle (a myocardial fiber angle “ ⁇ (t)”) formed by the myocardial fiber direction defined according to either the first direction definition or the second direction definition is quantified, at each temporal phase “t”, with respect to the short-axis direction determined at a reference temporal phase “t0”, so that “ ⁇ (t)” is displayed as the second movement information.
  • the determining unit 173 determines a vector in the fiber direction at a target position obtained as either the motion vector “V ⁇ (P(t))” according to the first direction definition or the streamline vector “L(t,N)” according to the second direction definition, to be “F(t)”. After that, for example, the determining unit 173 calculates a unit direction vector “c(t0)” in the circumferential direction on the short axis in the same target position in the reference temporal phase. Subsequently, the determining unit 173 calculates the angle “ ⁇ (t)” formed by “F(t)” and “c(t0)” and defines the angle “ ⁇ (t)” to be the myocardial fiber angle in the target position. In the following sections, “myocardial fiber angle” may simply be referred to as “fiber angle”.
  • the directions of fiber angles are defined as follows: a fiber direction parallel to the short axis is the angle “zero”; the counterclockwise direction with respect to the short axis when the left ventricle is viewed from the exterior of the heart is the “positive” direction; the clockwise direction is the “negative” direction; and the direction parallel to the long axis is “ ⁇ /2” at maximum.
  • the controlling unit 18 assigns a pink color if the fiber angle with respect to the short-axis direction is “zero degrees”, assigns a green color if the fiber angle with respect to the short-axis direction is “±90 degrees”, assigns cold colors (blue) if the fiber angle is in the positive direction, and assigns warm colors (red) if the fiber angle is in the negative direction.
  • FIGS. 7 , 10 , and 11 illustrate the examples of 3D rendering display in which the color assignments are applied to the line segments indicating the myocardial fiber directions. Further, the objects illustrated in FIGS. 7 and 11 serve as objects for informing the operator of the correspondence relationship in the color assignments.
  • a fiber angle that is substantially parallel to the long axis is expressed in a color close to green, which expresses “ ⁇ 90 degrees”
  • a fiber angle that is substantially parallel to the short axis is expressed in a color close to pink, which expresses “ ⁇ 0 degrees”.
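The qualitative color scheme above (pink at 0°, green at ±90°, cold colors for positive angles, warm colors for negative angles) can be sketched as a piecewise interpolation. The exact RGB values and interpolation shape are assumptions for illustration; the text fixes only the qualitative assignments.

```python
PINK = (1.0, 0.75, 0.8)   # 0 degrees: fiber parallel to the short axis
GREEN = (0.0, 1.0, 0.0)   # +/-90 degrees: fiber parallel to the long axis
BLUE = (0.0, 0.0, 1.0)    # cold intermediate for positive angles
RED = (1.0, 0.0, 0.0)     # warm intermediate for negative angles

def lerp(c1, c2, t):
    """Linear interpolation between two RGB tuples, t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(c1, c2))

def fiber_angle_color(angle_deg: float):
    """Map a fiber angle in [-90, 90] degrees to an (R, G, B) tuple in [0, 1]."""
    t = min(abs(angle_deg), 90.0) / 90.0
    mid = BLUE if angle_deg >= 0 else RED
    if t <= 0.5:
        return lerp(PINK, mid, 2.0 * t)
    return lerp(mid, GREEN, 2.0 * (t - 0.5))

print(fiber_angle_color(0.0))  # (1.0, 0.75, 0.8)
```

Applying this mapping to each line segment would reproduce the behavior described for FIGS. 7, 10, and 11, where near-longitudinal fibers render close to green and near-circumferential fibers close to pink.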
  • the range of the fiber angles is defined as “ ⁇ /2”.
  • the determining unit 173 obtains a unit direction vector “x(t0)” in the longitudinal (long-axis) direction at the target position in the reference temporal phase and calculates an angle “ ⁇ (t)” formed by “F(t)” and “x(t0)”. After that, the determining unit 173 expands the range by judging the polarity (positive/negative) of “ ⁇ (t)” with respect to the long axis, by using “ ⁇ (t)”.
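The fiber-angle computation described above can be sketched as follows: the magnitude of the angle θ(t) between the fiber-direction vector F(t) and the circumferential unit vector c(t0) is obtained from their inner product, and the polarity is judged from the component of F(t) along the longitudinal unit vector x(t0). The vector names follow the text, but the coordinate frame and the exact polarity test are simplifying assumptions (the text defines polarity by the clockwise/counterclockwise sense viewed from the exterior of the heart).

```python
import numpy as np

def fiber_angle(F, c0, x0):
    """Signed fiber angle in radians: 0 along the short axis, +/-pi/2 along the long axis."""
    F = F / np.linalg.norm(F)
    # Magnitude in [0, pi/2] from the inner product with the circumferential direction.
    theta = np.arccos(np.clip(abs(np.dot(F, c0)), -1.0, 1.0))
    # Polarity judged from the longitudinal component (a simplified stand-in
    # for the clockwise/counterclockwise rule in the text).
    return theta if np.dot(F, x0) >= 0 else -theta

c0 = np.array([1.0, 0.0, 0.0])   # circumferential (short-axis) unit vector c(t0)
x0 = np.array([0.0, 1.0, 0.0])   # longitudinal (long-axis) unit vector x(t0)
print(round(fiber_angle(np.array([1.0, 1.0, 0.0]), c0, x0), 4))  # 0.7854
```

A fiber tilted 45° counterclockwise from the short axis thus yields +π/4, consistent with the sign convention defined above.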
  • the second output mode described above is implemented in combination with the first output mode that uses the motion vectors (the projection components) that are the “individual vectors in the vector field” and the streamline vectors represented by “at least one streamline”. Accordingly, the processes performed in the second output mode can be summarized as the following three processes:
  • the determining unit 173 determines the myocardial fiber angles as the second movement information. After that, the controlling unit 18 changes the display mode of the line segments or the arrows that indicate the directions and the magnitudes of the motion vectors, in accordance with the myocardial fiber angles.
  • the determining unit 173 determines the myocardial fiber angles as the second movement information. After that, the controlling unit 18 changes the display mode of the line segments or the arrows that indicate the directions of the motion vectors, in accordance with the myocardial fiber angles.
  • the determining unit 173 determines the myocardial fiber angles as the second movement information. After that, the controlling unit 18 changes the display mode of the lines corresponding to the streamline vectors, in accordance with the myocardial fiber angles.
  • the display mode changed in the processes described above is not limited to the one using colors. It is also acceptable to change the thickness of the lines or the arrows, in accordance with the myocardial fiber angles.
  • the second output mode includes other examples of display, such as an example in which a color conversion is applied to the myocardial fiber angles so as to be displayed in a polar map; and an example in which, while a wall movement index indicating strains or the like is displayed in color, line segments or arrows that are tilted in accordance with the angles formed by the myocardial fiber directions with respect to the long axis (the axis of radiation) direction are simultaneously displayed in a polar map in a superimposed manner.
  • a strain component in the myocardial fiber direction is defined by using the myocardial fiber direction obtained according to either the first direction definition or the second direction definition, and the strain component in the myocardial fiber direction obtained in this manner is determined to be second movement information and output for a display purpose.
  • the determining unit 173 determines the strain in the myocardial fiber direction as the second movement information. More specifically, the determining unit 173 obtains the strain in the myocardial fiber direction by using the direction information and determines the obtained strain as the second movement information. In this situation, in the third output mode, the determining unit 173 calculates the strain component in the myocardial fiber direction by using either a first method or a second method described below.
  • the determining unit 173 , by performing a process that includes a spatial interpolation process, obtains an inter-tracking-point distance between each of the tracking points in the region of interest in each temporal phase and one or more tracking points that are positioned in the myocardial fiber direction from the tracking point. After that, the determining unit 173 obtains a strain rate (an instantaneous rate of change in the distance between the tracking points) on the basis of the obtained distance between the tracking points in each temporal phase. After that, the determining unit 173 time-integrates the obtained strain rates from a reference temporal phase to each of the other temporal phases.
  • the determining unit 173 determines the strain that denotes the rate of change in the length compared to the distance between the tracking points in the reference temporal phase, to be the second movement information.
  • the determining unit 173 forms a pair made up of: the motion vector “V ⁇ (P(t))” obtained at the tracking point “P(t)” in each temporal phase “t”; and a motion vector “V ⁇ (Q(t))” obtained at a tracking point “Q(t)” positioned at a predetermined distance from “P(t)” along the myocardial fiber direction.
  • the determining unit 173 calculates a strain rate “SR(P(t))” in the myocardial fiber direction at the tracking point “P(t)” in each temporal phase “t” by using Expression (12) shown below.
  • Vf ⁇ (P(t)) denotes a velocity component (unit: m/sec) obtained by dividing “a scalar component obtained from an inner product of the unit direction vector “n” in the myocardial fiber direction and “V ⁇ (P(t))” (a scalar component of “V ⁇ (P(t))” in the myocardial fiber direction)” by a time period “dT” of the frame intervals serving as the units for the temporal phases.
  • “L(t)” in Expression (12) denotes the distance (unit: m) between the tracking point “P(t)” and the tracking point “Q(t)”. Accordingly, the unit for “SR(P(t))” is “1/sec”.
  • the determining unit 173 obtains “Sn(P(t))” indicating the strain in the myocardial fiber direction by time-integrating “SR(P(t))” from the reference temporal phase “t0” to the temporal phase “t”.
  • “Sn(P(t))” denotes a “natural strain” in the myocardial fiber direction.
  • the “natural strain” may be referred to as a “logarithmic strain” and can be defined as “log(L(t)/L(0))”, which is the logarithm of the value obtained by dividing [the distance “L(t)” between the pair of tracking points in each temporal phase] by [the distance “L0” between the pair of tracking points in the reference temporal phase].
  • the natural strain “Sn(P(t))” is calculated by time-integrating the strain rates in the temporal phases from the reference temporal phase to the corresponding temporal phase.
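Expression (12) itself is not reproduced in this extract, so the sketch below uses only the generic relation stated in the surrounding bullets: the strain rate is the instantaneous rate of change in the inter-tracking-point distance, and time-integrating it from the reference phase approximates the natural (logarithmic) strain log(L(t)/L(0)). The distance samples and frame interval are illustrative values, not measured data.

```python
import math

L = [10.0, 10.5, 11.0, 11.6]   # inter-tracking-point distance per temporal phase (mm)
dT = 1.0                        # frame interval serving as the unit for temporal phases

Sn = 0.0
for t in range(1, len(L)):
    SR = (L[t] - L[t - 1]) / (L[t - 1] * dT)   # strain rate ~ (dL/dt) / L, unit 1/sec
    Sn += SR * dT                               # time integration from the reference phase
print(round(Sn, 4))                      # 0.1522  (accumulated natural strain)
print(round(math.log(L[-1] / L[0]), 4))  # 0.1484  (log(L(t)/L0), for comparison)
```

The two printed values differ only by the discretization error of the frame-by-frame integration; with finer temporal sampling the integral converges to log(L(t)/L(0)).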
  • the determining unit 173 may calculate “SL(P(t))” by converting “Sn(P(t))” by using Expression (13) shown below.
  • “SL(P(t))” denotes a “Lagrangian strain” in the myocardial fiber direction, which indicates a rate of change in the length with respect to the reference temporal phase “t0”.
  • the determining unit 173 forms a pair made up of each of the tracking points in the region of interest in the reference temporal phase and a tracking point positioned in the myocardial fiber direction from the tracking point. After that, the determining unit 173 defines a strain calculated as a rate of change in the length comparing the distance between the pair of tracking points in each of the temporal phases other than the reference temporal phase obtained from the tracking results, with the distance between the pair of tracking points in the reference temporal phase, as the second movement information.
  • the “Lagrangian strain” may be referred to as an “engineering strain” and is a value obtained by dividing [the change “L(t)−L0” in the distance between the pair of tracking points in each temporal phase] by [the distance “L0” between the pair of tracking points in the reference temporal phase].
  • the determining unit 173 forms a pair made up of the tracking point “P(t0)” in the reference temporal phase “t0” and the tracking point “Q(t0)” positioned at a predetermined distance from “P(t0)” along the myocardial fiber direction and further calculates the length “L(t0)” between the tracking point “P(t0)” and the tracking point “Q(t0)”. After that, the determining unit 173 calculates the length “L(t)” between the tracking point “P(t)” and the tracking point “Q(t)” in the temporal phase “t” obtained from the tracking results. Subsequently, the determining unit 173 calculates “SL(P(t))” by using Expression (14) shown below.
  • the determining unit 173 determines the position of “Q(t)” by performing a spatial interpolation process that uses the positions of one or more tracking points that are near the position at the predetermined distance.
  • the determining unit 173 determines the position of “Q(t)” by performing a spatial interpolation process that uses a group of tracking points “Q′1(t), Q′2(t), . . . , and Q′i(t)” that are near the position at the predetermined distance.
  • a spatial interpolation process is performed by using the motion vectors of one or more tracking points near the position at the predetermined distance.
  • the determining unit 173 calculates “V ⁇ (Q(t))” by performing a spatial interpolation process while using a group of motion vectors “V ⁇ (Q′1(t)), V ⁇ (Q′2(t)), . . . , V ⁇ (Q′i(t))” of the abovementioned group of tracking points.
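The second method above can be sketched numerically: a pair (P, Q) is fixed along the fiber direction in the reference phase, the Lagrangian strain is the rate of change in length (L(t) − L(t0)) / L(t0), and when Q(t) does not coincide with a tracked point its position is estimated from nearby tracking points. The inverse-distance weighting used here is one plausible choice of spatial interpolation, assumed for illustration; Expression (14) is not reproduced in this extract.

```python
import numpy as np

def interpolate_position(target_ref, neighbors_ref, neighbors_t):
    """Estimate Q(t) from tracking points Q'1(t)..Q'i(t) whose reference
    positions lie near Q's reference position (inverse-distance weights)."""
    d = np.linalg.norm(neighbors_ref - target_ref, axis=1)
    w = 1.0 / np.maximum(d, 1e-9)
    w /= w.sum()
    return (w[:, None] * neighbors_t).sum(axis=0)

def lagrangian_strain(P0, Q0, Pt, Qt):
    """Rate of change in pair length versus the reference temporal phase."""
    L0 = np.linalg.norm(Q0 - P0)
    Lt = np.linalg.norm(Qt - Pt)
    return (Lt - L0) / L0

P0, Q0 = np.array([0.0, 0.0]), np.array([1.0, 0.0])   # pair in the reference phase
Pt, Qt = np.array([0.0, 0.0]), np.array([1.2, 0.0])   # tracked pair in phase t
print(round(lagrangian_strain(P0, Q0, Pt, Qt), 3))  # 0.2 (20% lengthening)
```

The same interpolation scheme applies to the motion-vector variant in the last bullet, with the neighbors' motion vectors substituted for their positions.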
  • the output of the strain component in the fiber direction obtained according to the first method or the second method may be displayed in any of various modes under the display control exercised by the controlling unit 18 , in the same manner as a conventional three-axis strain component.
  • the display modes include a 3D rendering display, a polar map display, and a color display over MPR image data.
  • a conventional wall-movement index e.g., a three-axis strain component
  • the “fiber strain” indicating the expansion and contraction of individual myocardial fibers has, generally speaking, a value approximately in the range of “ ⁇ 10% to ⁇ 15%”.
  • the LS value observed on the endocardium of a healthy person is approximately “ ⁇ 20%”, whereas the CS value is approximately “ ⁇ 30%”.
  • the LS value and the CS value both have a larger absolute value than the “fiber strain” value.
  • the strain component in the myocardial fiber direction obtained in the third output mode is not necessarily equivalent to the “fiber strain”.
  • the observed strain component in the myocardial fiber direction is larger than the “fiber strain”.
  • the reason is, as explained above as “constraint condition (A)”, that what is observed as the “fiber strain” is a strain that also takes into account the myocardial deformation component due to the myocardial sheet sliding.
  • the LS and CS values obtained by implementing conventional methods are strain components in the longitudinal (long-axis) direction and the short-axis direction that are determined on the basis of the shape of the heart.
  • the strain component obtained in the third output mode is obtained by extracting the expansion and the contraction components of the myocardium in the myocardial fiber direction. For this reason, the strain component obtained in the third output mode is considered to reflect the functions and “viability” of the local myocardium more directly than the LS and the CS values.
  • the determining unit 173 estimates either a shear strain rate or a shear strain between a first boundary plane and a second boundary plane, by using vector information of a projection component of a motion vector on the first boundary plane, the motion vector being obtained on the first boundary plane serving as a region of interest, and vector information of a projection component of a motion vector on the second boundary plane, the motion vector being obtained on the second boundary plane serving as another region of interest. After that, the determining unit 173 determines either information about the shear strain rate or information about the shear strain, to be second movement information.
  • the determining unit 173 estimates a shear strain rate component between the endocardium and the epicardium, by using “MyoVector” obtained on the endocardial plane serving as a region of interest and “MyoVector” obtained on the epicardial plane serving as another region of interest, further determines information about the obtained shear strain rate component to be the second movement information, and outputs the second movement information for a display purpose.
  • the determining unit 173 determines information about shear strain components obtained by time-integrating the shear strain-rate components between the endocardium and the epicardium to be the second movement information and outputs the second movement information for a display purpose.
  • the determining unit 173 calculates the shear strain rate component between the endocardium and the epicardium “SRs(P(t))” by using Expression (15) shown below.
  • Vf(Pepi(t)) denotes a velocity vector (unit: m/sec) obtained by dividing “MyoVector” at a tracking point “Pepi(t)” on the epicardial plane by the time period “dT” of the frame intervals serving as the units for time.
  • Vf(Pendo(t)) denotes a velocity vector (unit: m/sec) obtained by dividing “MyoVector” at a tracking point “Pendo(t)” on the endocardial plane by the time period “dT”, the tracking point “Pendo(t)” being paired with “Pepi(t)” in a reference temporal phase.
  • W(t) denotes the distance (unit: m) between the tracking point “Pepi(t)” and the tracking point “Pendo(t)”, which is the length between the endocardial plane and the epicardial plane.
  • the unit for “SRs(P(t))” is “1/sec”.
  • the output position “P(t)” of “SRs” indicates a tracking point structuring the region of interest within the myocardium, and it is desirable to, for example, assign thereto a corresponding tracking point “Pmid(t)” on the intermediate layer plane.
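Expression (15) itself is not reproduced in this text, so the sketch below infers the formula from the surrounding definitions: the shear strain rate is taken as the difference of the two velocity vectors divided by the wall thickness “W(t)”, with each velocity obtained by dividing “MyoVector” by “dT”. Function and variable names are illustrative, not from the patent:

```python
import numpy as np

def shear_strain_rate(myo_vector_epi, myo_vector_endo, dT, W):
    """Sketch of the shear strain-rate component between the endocardium
    and the epicardium: SRs(P(t)) = (Vf(Pepi(t)) - Vf(Pendo(t))) / W(t),
    with Vf = MyoVector / dT (an inferred reading of Expression (15))."""
    v_epi = np.asarray(myo_vector_epi, dtype=float) / dT    # m/sec
    v_endo = np.asarray(myo_vector_endo, dtype=float) / dT  # m/sec
    return (v_epi - v_endo) / W                             # unit: 1/sec
```

Dividing “MyoVector” by “dT” converts the frame-to-frame displacement into a velocity, and dividing the velocity difference by “W(t)” yields a quantity with the unit “1/sec”, matching the text.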
  • since “MyoVector” is a vector quantity, “SRs” is also a vector quantity.
  • the determining unit 173 is able to obtain shear strain rate components as two scalar quantities.
  • the determining unit 173 is able to obtain a shear strain component according to the definition of the “natural strain” explained above and is also able to convert the component into a shear strain component having the meaning of a “Lagrangian strain”. As a result of these processes, the determining unit 173 is able to easily extract, for example, “SsC(P(t))” and “SsL(P(t))” that are shear strain components between the endocardium and the epicardium, by using “MyoVector” obtained in the present embodiment.
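The time integration and the natural-to-Lagrangian conversion mentioned above can be sketched as follows. This assumes the standard relation between a natural (logarithmic) strain and a Lagrangian strain, e_L = exp(e_N) − 1; function names are illustrative:

```python
import numpy as np

def natural_strain_from_rate(strain_rate, dT):
    # Natural (logarithmic) strain as the running time integral of the
    # strain-rate samples, one sample per frame interval dT.
    return np.cumsum(np.asarray(strain_rate, dtype=float)) * dT

def lagrangian_from_natural(natural_strain):
    # Standard conversion: e_Lagrangian = exp(e_natural) - 1.
    return np.exp(np.asarray(natural_strain, dtype=float)) - 1.0
```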
  • “MyoVector” is the vector information obtained by projecting the motion vector onto the boundary plane serving as a region of interest so that, by using “MyoVector” on the endocardial plane and “MyoVector” on the epicardial plane, it is possible to easily calculate the shear strain rate and the shear strain between the endocardium and the epicardium, which have not conventionally been calculated easily, and to output the calculated information for a display purpose.
  • the pair made up of regions of interest used as an input in the fourth output mode does not necessarily have to be the endocardium and the epicardium. It is also acceptable to use tracking points “Pmid(t)” on the intermediate layer that are in mutually-different positions within the wall. In that situation, it is possible to analyze shear strain components obtained by dividing the inside of the myocardial wall into detailed sections.
  • FIG. 12 is a flowchart for explaining an outline of the process performed by the ultrasound diagnosis apparatus according to the first embodiment.
  • the controlling unit 18 included in the ultrasound diagnosis apparatus judges whether time-series data to be an analysis target has been stored and whether an analysis start request has been received (step S101). If no time-series data to be an analysis target has been stored and no analysis start request has been received (step S101: No), the controlling unit 18 stands by until time-series data is stored and an analysis start request is received.
  • at step S101, if time-series data to be an analysis target has been stored and an analysis start request has been received (step S101: Yes), the calculating unit 171 calculates the first movement information according to an instruction from the controlling unit 18 (step S102), and the obtaining unit 172 obtains the direction information of myocardial fibers (the direction information indicating directions of the myocardial fibers) (step S103). More specifically, the obtaining unit 172 estimates “V(P(t))” on the basis of the first movement information. In this situation, the obtaining unit 172 may perform at least one pre-processing process selected from the first, the second, and the third pre-processing processes, on “V(P(t))”.
  • the determining unit 173 determines second movement information on the basis of a definition that is specified in advance (step S104).
  • the determining unit 173 defines the myocardial fiber directions on the basis of the direction information according to either the first direction definition or the second direction definition and further determines the second movement information on the basis of the definition specified in one of the output modes selected from the first, the second, the third, and the fourth output modes.
  • the monitor 2 displays the second movement information in one of the output modes selected from the first to the fourth output modes (step S105), and the process is ended.
  • the second movement information is displayed in a 3D rendering display or a polar map display. These display modes may be realized with a display of a moving picture or a display of still images that are arranged in a row. Further, in the flowchart shown in FIG. 12, the processes at steps S103 through S105 are repeatedly performed every time the operator changes the definition that he/she wishes to reference.
  • “MyoVector” is estimated on the basis of the hypothesis that the “moving directions of the individual tracking points obtained from the 3DT process substantially coincide with the directions of the myocardial fibers” and the objective fact that “the movement components in the radial (wall-thickness) direction are not movement components of the fiber directions”, and further, the second movement information based on “MyoVector” is displayed.
  • according to the first embodiment, it is possible to present, to the user, the information related to the myocardial fiber directions and the information about the local movement components on the myocardial plane across which the myocardial fibers extend, by using the ultrasound image data having a higher time resolution than MRI images and by employing the ultrasound diagnosis apparatus, which is less expensive than an MRI apparatus. Consequently, according to the first embodiment, it is possible to conveniently present, in a non-invasive manner, the information related to the myocardial fiber directions and the information about the local movement components on the myocardial plane across which the myocardial fibers extend.
  • because the second movement information is displayed in the 3D rendering display or the polar map display, the user is able to intuitively evaluate the appearance of the myocardial fiber directions.
  • by calculating “MyoVector” and using the fourth output mode it is possible to obtain, for example, the information about the shear strain rate or the shear strain between the endocardium and the epicardium, in addition to the conventional three-axis strain component.
  • in the first embodiment, the examples were explained in which the motion vector obtained as the orthogonal projection component, on the region of interest, of the motion vector obtained from the tracking result is used as the myocardial fiber direction without applying any change thereto, and in which the myocardial fiber direction is indirectly estimated on the basis of that projected motion vector.
  • in the second embodiment, a myocardial fiber direction is estimated by implementing a method different from those in the first embodiment, which will be explained with reference to mathematical formulae, FIG. 13, and so on.
  • FIG. 13 is a drawing for explaining the second embodiment.
  • Estimation methods implemented in the second embodiment are roughly divided into two estimation methods. In the following sections, a first estimation method and a second estimation method will be explained, in the stated order.
  • the first estimation method is a method by which the determining unit 173 both estimates the direction information of a myocardial fiber and defines the second movement information, in a comprehensive manner. More specifically, according to the first estimation method, the determining unit 173 obtains a strain in the myocardial fiber direction by using a strain in the longitudinal direction and a strain in the circumferential direction that are obtained from the first movement information as local strains in the region of interest and further determines the obtained strain to be second movement information. In this situation, the LS and the CS values may be calculated by the calculating unit 171 or may be calculated by the determining unit 173 .
  • as an example of the first estimation method, an example will be explained in which, while using the endocardial plane of the myocardium as a region of interest, “LS” in a local region in the longitudinal (long-axis) direction and “CS” in a local region in the short-axis direction are calculated by performing the 3DT process, so as to calculate the strain component in the myocardial fiber direction.
  • in the following explanation, the subscript “t” indicating the time (the temporal phase) will be omitted.
  • “FS” denotes the fiber strain.
  • “θf” denotes the fiber angle (the myocardial fiber angle) in the target position.
  • “LSf” denotes a strain component in the longitudinal (long-axis) direction.
  • “CSf” denotes a strain component in the short-axis direction.
  • when “FS” corresponds to a contraction (a negative value), both “LSf” and “CSf” are negative values; in contrast, when “FS” corresponds to an expansion (a positive value), both “LSf” and “CSf” are positive values.
  • when the range of “θf” is set to “−π/2 to π/2”, “cos(θf)” is always a positive value, whereas the sign of “sin(θf)” changes in accordance with that of “θf”.
  • “LSf/CSf” can be expressed as shown in Expression (17) below by using “θf”.
  • “θf” can be expressed as shown in Expression (18) below by using “LSf/CSf”.
  • “LS*CS ≥ 0” is a condition for arranging the signs of “LSf” and “CSf” to be the same as each other.
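Expressions (17) and (18) are not reproduced in this text, so the sketch below assumes one common reading in which a uniaxial fiber strain projects as “LSf = FS·sin²(θf)” and “CSf = FS·cos²(θf)”, giving “LSf/CSf = tan²(θf)”. The ratio discards the polarity of “θf”, which is consistent with the condition “LS*CS ≥ 0” and with the need for the separate polarity estimation in the second estimation method. Names are illustrative:

```python
import math

def fiber_angle_magnitude(LSf, CSf):
    """Magnitude of the fiber angle from the longitudinal and
    circumferential strain components, under the assumed projection
    relation LSf/CSf = tan^2(theta_f); only |theta_f| is recoverable."""
    if LSf * CSf < 0:
        raise ValueError('the condition "LS*CS >= 0" must hold')
    if CSf == 0.0:
        return math.pi / 2.0
    return math.atan(math.sqrt(LSf / CSf))
```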
  • “FS” is calculated backwards on the basis of “either a first concept or a second concept” explained below.
  • the strain component due to an expansion is observed in a medical case of a myocardial infarction or a heart failure with a degraded wall movement.
  • the strain component due to an expansion in this situation is not an active contraction movement.
  • the strain component due to an expansion occurs as a passive expansion as a result of a “rivalry competition” with myocardial tissues in the surroundings.
  • the strain component due to an expansion occurs because of an expansion that occurs at a time when a contraction is supposed to occur due to an abnormality in the electric conduction system.
  • the first concept is a concept for the process based on the former situations, whereas the second concept is a concept for the process based on the latter situations.
  • on the basis of the first concept, the determining unit 173 or the obtaining unit 172 performs the process indicated in Expression (19) shown below.
  • the second concept is a process putting importance on the side having a larger absolute value.
  • on the basis of the second concept, the determining unit 173 or the obtaining unit 172 performs the process indicated in Expression (20) shown below.
  • the determining unit 173 determines the “FS” value calculated backwards in the process described above to be the second movement information and outputs the second movement information to the controlling unit 18 .
  • the display mode of “FS” may be the same as the display mode explained for the third output mode in the first embodiment.
  • the determining unit 173 may determine “θf” to be the second movement information and output the second movement information to the controlling unit 18.
  • the display mode of “θf” may be the same as the display mode explained for the second output mode in the first embodiment.
  • the first estimation method is a method by which the second movement information “FS” defined with the use of the mathematical formulae on the basis of the myocardial fiber direction is calculated from “LS” and “CS” that have conventionally been used as indices of wall movements, without performing the process of estimating the myocardial fiber direction itself.
  • the obtaining unit 172 obtains local strain information of the region of interest from the first movement information. More specifically, the obtaining unit 172 uses “LS” and “CS”. After that, the obtaining unit 172 estimates direction information indicating the direction of a myocardial fiber, by using “LS” and “CS” that are obtained and a motion vector near the region of interest.
  • the myocardial fiber direction is estimated by using information about the motion vector, together with “LS” and “CS”.
  • the fundamental configuration of the second estimation method is the same as that of the first estimation method, but the second estimation method is configured to apply a polarity to the myocardial fiber angle “θf” in order to determine a myocardial fiber direction.
  • the polarity of “θf” is estimated by using not only “LS” and “CS” at “P(t)”, but also the motion vector at “P(t)”.
  • the second estimation method is based on hypothesis (0) explained in the first embodiment. Accordingly, the motion vector used for defining the polarity is the motion vector “V(P(t))” obtained by performing the process explained in the first embodiment.
  • the obtaining unit 172 obtains “LS” and “CS” at “P(t)” and calculates the motion vector “V(P(t))” at “P(t)” by performing the process explained in the first embodiment.
  • the obtaining unit 172 calculates the angle “θ” formed by the vector “F(t)” in the fiber direction at the target position obtained from the motion vector “V(P(t))” and, for example, the unit direction vector “c(t0)” in the circumferential direction on the short axis in the same target position in the reference temporal phase.
  • the obtaining unit 172 extracts only the polarity information “sign(θ)” from “θ”, appends the extracted polarity information “sign(θ)” to the value of “θf” obtained from the same process as the process performed by the determining unit 173 in the first estimation method, and thus estimates a myocardial fiber angle “θ′f” that serves as direction information of a myocardial fiber. More specifically, the obtaining unit 172 calculates the myocardial fiber angle “θ′f” by using Expression (21) shown below.
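Expression (21) is not reproduced in this text, so the sketch below assumes the straightforward form “θ′f = sign(θ)·|θf|”: the polarity taken from the motion-vector-based angle “θ” is appended to the fiber-angle magnitude estimated from “LS” and “CS”. Names are illustrative:

```python
import math

def signed_fiber_angle(theta, theta_f):
    """Append the polarity sign(theta), taken from the motion-vector-based
    angle theta, to the magnitude of the fiber angle theta_f estimated
    from LS and CS (an assumed reading of Expression (21))."""
    return math.copysign(abs(theta_f), theta)
```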
  • the determining unit 173 determines the myocardial fiber angle “θ′f” to be second movement information and outputs the second movement information to the controlling unit 18.
  • the determining unit 173 calculates “FS” corresponding to a scalar of the vector by performing the process explained in the first estimation method, determines a vector in the fiber direction obtained by appending “sign(θ)” to “FS” to be the second movement information, and outputs the second movement information to the controlling unit 18.
  • the vector in the fiber direction has the same meaning as the vector “F(t)” in the fiber direction explained in the first embodiment.
  • in the second estimation method, “θ′f” and the vector in the fiber direction obtained by appending “sign(θ)” to “FS” are output as the second movement information. Further, in the second estimation method, it is also acceptable to calculate and output a strain component in the myocardial fiber direction, explained for the second output mode in the first embodiment, by using the information about the vector in the myocardial fiber direction.
  • in that situation, “FS” that is calculated in the same manner as in the first estimation method may be used as a strain component in the myocardial fiber direction.
  • the display mode for these final outputs may be implemented by using any of the various types of display methods explained in the first embodiment.
  • in the second estimation method, the myocardial fiber direction is not estimated by using a motion vector as the main element.
  • instead, the vector information of the myocardial fiber direction is estimated by calculating “FS” as the scalar quantity that serves as the main element of the vector in the myocardial fiber direction, on the basis of “LS” and “CS” that have conventionally been used, and by complementing only the polarity thereof by using the information of the motion vector.
  • the processes performed by the ultrasound diagnosis apparatus according to the second embodiment are similar to the processes shown in the flowchart in FIG. 12, except that the processes at steps S103 and S104 are replaced with the processes described above. Thus, the explanation thereof will be omitted.
  • according to the second embodiment, like in the first embodiment, it is possible to conveniently present, in a non-invasive manner, the information related to the myocardial fiber directions and the information about the local movement components on the myocardial plane across which the myocardial fibers extend.
  • in the first and the second embodiments, the examples are explained in which the second movement information is output by estimating the direction information of the myocardial tissue and the myocardial tissue directions by using the first movement information obtained by performing the 3DT process.
  • in the third embodiment, an example will be explained in which second movement information is output by providing direction information of the myocardial tissue and a myocardial tissue direction as user settings, with reference to FIG. 14 and so on.
  • FIG. 14 is a drawing for explaining the third embodiment.
  • the obtaining unit 172 obtains direction information indicating the direction of a myocardial fiber, as information set by the operator. Further, by using the direction information set by the operator, the determining unit 173 extracts a movement component in the myocardial fiber direction from the first movement information and determines the second movement information by using the extracted movement component.
  • it is desirable for the operator to provide the obtaining unit 172 with “θ(P(t))” expressing a fiber angle value that varies in accordance with the positions of the tracking points and the temporal phases, while taking into account the local distribution of fiber angles within the region of interest and temporal changes thereof.
  • the operator sets “θ(P(t))” by referring to a value estimated in advance by implementing a publicly-known method that uses MRI, or sets a value presumed from known anatomical observations (e.g., information from textbooks) as an estimated value.
  • the determining unit 173 is able to extract, as the second movement information, information (the direction and the magnitude) of “MyoVector” that is a component of the motion vector with respect to the myocardial fiber direction set by the user and is further able to extract, as the second movement information, a strain component with respect to the myocardial fiber direction set by the user. It is possible to display the information that is output as a result of these processes, by implementing any of the various methods explained in the first and the second embodiments.
  • the direction information of the myocardial fibers is automatically estimated by using the various methods described above. There is a possibility, however, that in some situations the automatic estimation of the myocardial fiber directions via the first movement information may not function well, for example in a medical case where the image quality of the time-series data to be analyzed is low, or due to an impact of local artifacts occurring in the time-series data to be analyzed.
  • in the present modification example, the direction information of the myocardial fibers is automatically estimated like in the first and the second embodiments, and the setting by the user of the direction information of the myocardial fibers is received for necessary data and necessary locations as described above, so that a calculation is performed while switching between the inputs of direction information to be used in the internal process.
  • the process performed in the present modification example can be summarized as below, while referring to the direction information of the myocardial fibers that is automatically estimated in the first and the second embodiments as first direction information and referring to the direction information of the myocardial fibers set by the user as second direction information:
  • the obtaining unit 172 according to the present modification example obtains third direction information indicating directions of myocardial fibers by using the first direction information and the second direction information. More specifically, the obtaining unit 172 according to the present modification example obtains the third direction information that is the direction information of the myocardial fibers, by smoothly switching from the first direction information to the second direction information, in a boundary between predetermined spatiotemporal regions in the region of interest. In this situation, the “boundary between predetermined spatiotemporal regions” is a boundary between spatiotemporal regions in which the precision level in the estimation of the first direction information that is automatically estimated is degraded. After that, the determining unit 173 according to the present modification example determines the second movement information by using the third direction information.
  • the “boundary between the spatiotemporal regions” may be set by the operator by, for example, specifying a space or a temporal phase of which the operator has determined that the precision level in the estimation of the first direction information is degraded because of a low reliability of the first movement information, by referring to the group of three-dimensional ultrasound image data or referring to MPR image data on which a distribution of indices such as a CS value is superimposed in color.
  • the “boundary between the spatiotemporal regions” may be set as a result of an automatic assessment by the controlling unit 18 that uses, for example, the brightness levels of the group of three-dimensional ultrasound image data or a signal-to-noise (S/N) ratio of the reception signals.
  • an index of reliability for estimating the motion vectors by implementing the speckle tracking method may be used.
  • as the index of reliability, various types of indices are known, such as image brightness levels, a brightness variance, similarities in pattern matching, and the like.
  • the obtaining unit 172 provides the third direction information obtained by averaging the first direction information and the second direction information while varying the weights thereof in accordance with the reliability of the movement.
  • the obtaining unit 172 obtains the third direction information indicating the directions of the myocardial fibers by performing a weighted addition on the first direction information and the second direction information in accordance with the reliability of the first movement information.
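The weighted addition described above might be sketched as follows. The linear weighting by a reliability value clamped to [0, 1] is an assumption, since the text only states that the weights vary with the reliability of the first movement information; names are illustrative:

```python
def blended_fiber_direction(theta_auto, theta_user, reliability):
    """Third direction information as a reliability-weighted addition of
    the automatically estimated angle (first direction information,
    theta_auto) and the user-set angle (second direction information,
    theta_user)."""
    w = min(max(reliability, 0.0), 1.0)  # clamp the weight to [0, 1]
    return w * theta_auto + (1.0 - w) * theta_user
```

At high reliability the automatic estimate dominates; as reliability drops, the output switches smoothly toward the user-set value, which matches the "smoothly switching" behavior described for the spatiotemporal boundary.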
  • according to the present modification example, it is possible to present, conveniently and with certainty, the information related to the myocardial fiber directions and the information about the local movement components on the myocardial plane across which myocardial fibers extend, in a non-invasive manner.
  • FIG. 15 is a drawing for explaining the fourth embodiment.
  • in the fourth embodiment, the obtaining unit 172 obtains the direction information indicating the direction of a myocardial fiber for each of a plurality of regions of interest. Further, in the fourth embodiment, the determining unit 173 determines information obtained by calculating a difference between the regions of interest by using the pieces of second movement information obtained for the plurality of regions of interest, to be new second movement information.
  • the determining unit 173 obtains a difference between pieces of movement information corresponding to fiber directions on two arbitrary planes selected from the endocardial plane, the epicardial plane, and the intermediate layer plane. In one example, the determining unit 173 determines and outputs a difference fiber angle obtained by subtracting a fiber angle of the endocardium from a fiber angle of the epicardium to be new second movement information.
  • examples of the display mode for the difference information include an example in which time-change curves of the difference information in units of segments are displayed and an example in which the difference information is displayed in a 3D rendering display by assigning colors thereto. Further, as other examples of the display mode for the difference information, any of the various types of display methods explained in the first to the third embodiments is applicable to the new second movement information obtained by calculating the difference.
  • the controlling unit 18 may cause the monitor 2 to display a time-change curve of the myocardial fiber angle in at least one region defined within the region of interest.
  • time-change curves of the myocardial fiber angle “θ(t)” in units of segments may be displayed in a chart, as illustrated in FIG. 15, like the strains or the like used as a conventional wall-movement index.
  • in the chart, the horizontal axis expresses time, whereas the vertical axis expresses the myocardial fiber angle.
  • Temporal changes in the fiber angle in the segments of three locations such as the apex (“Apex”), the intermediate part (“Mid”), and the base (“Base”) of the heart are displayed, together with an electrocardiogram.
  • as the segments used for the time-change curves of the myocardial fiber angle, any of the anterior wall, the septum, the lateral wall, the posterior wall, and the inferior wall may be used.
  • when “θ(t)” is expressed in the range of “±90 degrees”, the plotted line exhibits aliasing at “±90 degrees”; when “θ(t)” is expressed in the range of “±180 degrees”, the plotted line exhibits aliasing at “±180 degrees”, for example.
  • when displaying the chart showing the time-change curves, for example, the controlling unit 18 eliminates the aliasing of the plotted lines by performing an “unwrapping process”, which is commonly used as a joining process when phase aliasing occurs.
  • the controlling unit 18 may display a chart while preventing the plotted line from aliasing near the angle parallel to the short axis or the long axis, by adding a predetermined offset component to the myocardial fiber angle, in advance.
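The “unwrapping process” for the fiber-angle time-change curves can be sketched with NumPy’s unwrap, assuming a fiber angle confined to ±90 degrees (an aliasing period of 180 degrees); the “period” argument requires NumPy 1.21 or later, and the function name is illustrative:

```python
import numpy as np

def unwrap_fiber_angle_curve(theta_deg):
    """Remove aliasing from a fiber-angle time-change curve by joining
    jumps larger than half the assumed 180-degree aliasing period."""
    return np.unwrap(np.asarray(theta_deg, dtype=float), period=180.0)
```

For example, a curve sampled as 80, −85 degrees (an apparent jump of −165 degrees) is joined to 80, 95 degrees, so the plotted line no longer wraps at the ±90-degree boundary.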
  • for example, by referring to the difference information, the user is able to easily recognize the regions in which the movement directions of the myocardial fibers are different between the endocardium and the epicardium. Further, for example, by referring to the chart showing the time-change curves of the myocardial fiber angles corresponding to the segments, the user is able to easily recognize the parts where the movements of the myocardial fibers are out of synchronization.
  • in the first to the fourth embodiments, the examples are explained in which the ultrasound diagnosis apparatus performs the image processing processes that use the group of three-dimensional ultrasound image data of the heart.
  • the image processing processes described in any of the first to the fourth embodiments above may be performed by an image processing apparatus that is provided independently of the ultrasound diagnosis apparatus. More specifically, an image processing apparatus having the functions of the image processing unit 17 , the controlling unit 18 , and the like may perform the image processing processes explained above by receiving the group of three-dimensional ultrasound image data of the heart from the ultrasound diagnosis apparatus, a Picture Archiving and Communication System (PACS) database, a database in an electronic medical record system, or the like.
  • the image processing processes described in any of the first to the fourth embodiments above may be performed on not only the group of three-dimensional ultrasound image data of the heart, but also a group of three-dimensional medical image data acquired by an X-ray Computed Tomography (CT) Apparatus, an MRI apparatus, or the like.
  • the image processing processes performed on the group of three-dimensional medical image data of the heart may be performed by the medical image diagnosis apparatus that acquired the data or may be performed by the abovementioned image processing apparatus.
  • in the embodiments described above, the examples are explained in which the image processing processes are performed for outputting and displaying the second movement information while the left ventricle is used as the target.
  • the image processing processes described in any of the first to the fourth embodiments above are also applicable to any of the other chambers of the heart (i.e., the left atrium, the right atrium, and the right ventricle) besides the left ventricle, although research on those chambers has not yet progressed very much.
  • for the other chambers as well, it is possible to present analysis results that are similar to those of the left ventricle.
  • the constituent elements of the apparatuses that are illustrated in the drawings in the first to the fourth embodiments are based on functional concepts. Thus, it is not necessary to physically configure the elements as indicated in the drawings. In other words, the specific mode of distribution and integration of the apparatuses is not limited to the ones illustrated in the drawings. It is acceptable to functionally or physically distribute or integrate all or a part of the apparatuses in any arbitrary units, depending on various loads and the status of use. Further, all or an arbitrary part of the processing functions performed by the apparatuses may be realized by a Central Processing Unit (CPU) and a computer program that is analyzed and executed by the CPU or may be realized as hardware using wired logic.
  • the image processing methods explained in the first to the fourth embodiments may be realized by causing a computer such as a personal computer or a workstation to execute an image processing computer program (hereinafter, an “image processing program”) that is prepared in advance.
  • the image processing program may be distributed via a network such as the Internet.
  • according to at least one of the embodiments described above, it is possible to conveniently present, in a non-invasive manner, the information related to the myocardial fiber directions and the information about the local movement components on the myocardial plane across which the myocardial fibers extend.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Cardiology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Ultra Sonic Daignosis Equipment (AREA)
US14/490,957 2013-09-30 2014-09-19 Ultrasound diagnosis apparatus and image processing apparatus Abandoned US20150094584A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/803,261 US11317896B2 (en) 2013-09-30 2017-11-03 Ultrasound diagnosis apparatus and image processing apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-205089 2013-09-30
JP2013205089 2013-09-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/803,261 Division US11317896B2 (en) 2013-09-30 2017-11-03 Ultrasound diagnosis apparatus and image processing apparatus

Publications (1)

Publication Number Publication Date
US20150094584A1 true US20150094584A1 (en) 2015-04-02

Family

ID=52740817

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/490,957 Abandoned US20150094584A1 (en) 2013-09-30 2014-09-19 Ultrasound diagnosis apparatus and image processing apparatus
US15/803,261 Active 2037-04-02 US11317896B2 (en) 2013-09-30 2017-11-03 Ultrasound diagnosis apparatus and image processing apparatus

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/803,261 Active 2037-04-02 US11317896B2 (en) 2013-09-30 2017-11-03 Ultrasound diagnosis apparatus and image processing apparatus

Country Status (2)

Country Link
US (2) US20150094584A1 (ja)
JP (2) JP6382036B2 (ja)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160140730A1 (en) * 2014-11-14 2016-05-19 The Regents Of The University Of California Ultrasound-based volumetric particle tracking method
US20170169609A1 (en) * 2014-02-19 2017-06-15 Koninklijke Philips N.V. Motion adaptive visualization in medical 4d imaging
US20190247015A1 (en) * 2018-02-09 2019-08-15 Samsung Medison Co., Ltd. Ultrasound diagnosis apparatus and method of displaying ultrasound image
US10460452B2 (en) 2014-11-14 2019-10-29 The Regents Of The University Of California Ultrasound-based volumetric particle tracking method
CN112654302A (zh) * 2018-10-23 2021-04-13 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Quantitative analysis method for cardiac motion, and ultrasonic system
CN112674791A (zh) * 2020-11-30 2021-04-20 Shenzhen University Optimization method and system for muscle ultrasound elastography
US20210134011A1 (en) * 2019-11-06 2021-05-06 Ssam Sports, Inc. Calibrating 3d motion capture system for skeletal alignment using x-ray data
US11317875B2 (en) * 2016-09-30 2022-05-03 Siemens Healthcare Gmbh Reconstruction of flow data
US11393092B2 (en) * 2019-11-27 2022-07-19 Shanghai United Imaging Intelligence Co., Ltd. Motion tracking and strain determination
US11471123B2 (en) 2018-08-09 2022-10-18 Fujifilm Healthcare Corporation Ultrasound diagnostic apparatus, program, and method of operating ultrasound diagnosis apparatus
US11510653B2 (en) * 2019-03-08 2022-11-29 Fujifilm Healthcare Corporation Secondary flow detection device, secondary flow detection program, and ultrasonic signal processing device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020043561A1 (en) * 2018-08-29 2020-03-05 Koninklijke Philips N.V. Ultrasound system and methods for smart shear wave elastography
JP7501333B2 (ja) 2020-12-07 2024-06-18 Konica Minolta, Inc. Ultrasound diagnostic apparatus, control method for ultrasound diagnostic apparatus, and control program for ultrasound diagnostic apparatus
CN112990101B (zh) * 2021-04-14 2021-12-28 Shenzhen Luohu Hospital Group Machine-vision-based facial organ localization method and related device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030083578A1 (en) * 2001-09-21 2003-05-01 Yasuhiko Abe Ultrasound diagnostic apparatus, and image processing method
US20030158483A1 (en) * 2000-03-10 2003-08-21 Acuson Corporation Tissue motion analysis medical diagnostic ultrasound system and method
US20030216646A1 (en) * 2002-03-15 2003-11-20 Angelsen Bjorn A.J. Multiple scan-plane ultrasound imaging of objects
US20090083578A1 (en) * 2007-09-26 2009-03-26 International Business Machines Corporation Method of testing server side objects
US20090219301A1 (en) * 2005-10-20 2009-09-03 Koninklijke Philips Electronics N.V. Ultrasonic imaging system and method
US7717852B2 (en) * 2005-01-20 2010-05-18 Koninklijke Philips Electronics N.V. Method and device for determining the motion vector of tissues in a biological medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4659974B2 (ja) 2000-12-12 2011-03-30 Toshiba Corp Ultrasound diagnostic apparatus
JP4758578B2 (ja) 2001-09-14 2011-08-31 Hitachi Aloka Medical, Ltd. Cardiac wall motion evaluation apparatus
JP4060615B2 (ja) 2002-03-05 2008-03-12 Toshiba Corp Image processing apparatus and ultrasound diagnostic apparatus
JP4594610B2 (ja) 2003-10-21 2010-12-08 Toshiba Corp Ultrasound image processing apparatus and ultrasound diagnostic apparatus
JP5238201B2 (ja) 2007-08-10 2013-07-17 Toshiba Corp Ultrasound diagnostic apparatus, ultrasound image processing apparatus, and ultrasound image processing program
JP5438936B2 (ja) 2008-08-29 2014-03-12 Toshiba Corp Ultrasound diagnostic apparatus, image processing apparatus, and image processing program
US8469890B2 (en) * 2009-03-24 2013-06-25 General Electric Company System and method for compensating for motion when displaying ultrasound motion tracking information
US8487933B2 (en) * 2009-03-31 2013-07-16 General Electric Company System and method for multi-segment center point trajectory mapping
JP5661453B2 (ja) * 2010-02-04 2015-01-28 Toshiba Corp Image processing apparatus, ultrasound diagnostic apparatus, and image processing method
US10321892B2 (en) 2010-09-27 2019-06-18 Siemens Medical Solutions Usa, Inc. Computerized characterization of cardiac motion in medical diagnostic ultrasound
JP5944633B2 (ja) * 2011-02-25 2016-07-05 Toshiba Corp Ultrasound diagnostic apparatus, image processing apparatus, and program
CN105105775B (zh) 2011-07-19 2018-11-09 Toshiba Medical Systems Corp Myocardial motion analysis apparatus
US9129053B2 (en) * 2012-02-01 2015-09-08 Siemens Aktiengesellschaft Method and system for advanced measurements computation and therapy planning from medical data and images using a multi-physics fluid-solid heart model

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170169609A1 (en) * 2014-02-19 2017-06-15 Koninklijke Philips N.V. Motion adaptive visualization in medical 4d imaging
US20160140730A1 (en) * 2014-11-14 2016-05-19 The Regents Of The University Of California Ultrasound-based volumetric particle tracking method
US9962142B2 (en) * 2014-11-14 2018-05-08 The Regents Of The University Of California Ultrasound-based volumetric particle tracking method
US10460452B2 (en) 2014-11-14 2019-10-29 The Regents Of The University Of California Ultrasound-based volumetric particle tracking method
US11317875B2 (en) * 2016-09-30 2022-05-03 Siemens Healthcare Gmbh Reconstruction of flow data
US20190247015A1 (en) * 2018-02-09 2019-08-15 Samsung Medison Co., Ltd. Ultrasound diagnosis apparatus and method of displaying ultrasound image
US11813112B2 (en) * 2018-02-09 2023-11-14 Samsung Medison Co., Ltd. Ultrasound diagnosis apparatus and method of displaying ultrasound image
US11471123B2 (en) 2018-08-09 2022-10-18 Fujifilm Healthcare Corporation Ultrasound diagnostic apparatus, program, and method of operating ultrasound diagnosis apparatus
US20210321974A1 (en) * 2018-10-23 2021-10-21 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Quantitative analysis method for cardiac motion, and ultrasonic system
CN112654302A (zh) * 2018-10-23 2021-04-13 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Quantitative analysis method for cardiac motion, and ultrasonic system
US12004897B2 (en) * 2018-10-23 2024-06-11 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Quantitative analysis method for cardiac motion, and ultrasonic system
US11510653B2 (en) * 2019-03-08 2022-11-29 Fujifilm Healthcare Corporation Secondary flow detection device, secondary flow detection program, and ultrasonic signal processing device
US20210134011A1 (en) * 2019-11-06 2021-05-06 Ssam Sports, Inc. Calibrating 3d motion capture system for skeletal alignment using x-ray data
US11694360B2 (en) * 2019-11-06 2023-07-04 Ssam Sports, Inc. Calibrating 3D motion capture system for skeletal alignment using x-ray data
US11393092B2 (en) * 2019-11-27 2022-07-19 Shanghai United Imaging Intelligence Co., Ltd. Motion tracking and strain determination
CN112674791A (zh) * 2020-11-30 2021-04-20 Shenzhen University Optimization method and system for muscle ultrasound elastography

Also Published As

Publication number Publication date
US11317896B2 (en) 2022-05-03
JP6640922B2 (ja) 2020-02-05
JP2018140276A (ja) 2018-09-13
JP6382036B2 (ja) 2018-08-29
JP2015091299A (ja) 2015-05-14
US20180055487A1 (en) 2018-03-01

Similar Documents

Publication Publication Date Title
US11317896B2 (en) Ultrasound diagnosis apparatus and image processing apparatus
US10376236B2 (en) Ultrasound diagnostic apparatus, image processing apparatus, and image processing method
US9855024B2 (en) Medical diagnostic imaging apparatus, medical image processing apparatus, and control method for processing motion information
Seo et al. Endocardial surface area tracking for assessment of regional LV wall deformation with 3D speckle tracking imaging
JP5670324B2 (ja) Medical image diagnostic apparatus
US9865082B2 (en) Image processing system, X-ray diagnostic apparatus, and image processing method
US20130245441A1 (en) Pressure-Volume with Medical Diagnostic Ultrasound Imaging
US20160140707A1 (en) Medical diagnostic imaging apparatus, image processing apparatus, and image generating method
JP7375140B2 (ja) Ultrasound diagnostic apparatus, medical image diagnostic apparatus, medical image processing apparatus, and medical image processing program
US20110301462A1 (en) Ultrasonic diagnosis apparatus and ultrasonic image processing apparatus
US10363018B2 (en) Medical processing apparatus and medical processing method
US11350910B2 (en) Ultrasound diagnosis apparatus, medical image processing apparatus, and medical image processing method
JP6863774B2 (ja) Ultrasound diagnostic apparatus, image processing apparatus, and image processing program
US11712219B2 (en) Ultrasonic wave diagnostic apparatus, medical information processing apparatus, and computer program product
Bhan et al. Three-dimensional echocardiography
US11123043B2 (en) Ultrasound diagnostic apparatus, medical image processing apparatus, and medical image processing method
Muraru et al. Physical and technical aspects and overview of 3D-echocardiography
Kiss et al. Fusion of 3D echo and cardiac magnetic resonance volumes during live scanning
US11298104B2 (en) Medical processing apparatus, ultrasound diagnostic apparatus, and medical processing method
JP7483519B2 (ja) Ultrasound diagnostic apparatus, medical image processing apparatus, and medical image processing program
Meijboom et al. 3D Echocardiography

Legal Events

Date Code Title Description
AS Assignment

Owner name: TOSHIBA MEDICAL SYSTEMS CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ABE, YASUHIKO;KAWAGISHI, TETSUYA;REEL/FRAME:033780/0633

Effective date: 20140908

Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ABE, YASUHIKO;KAWAGISHI, TETSUYA;REEL/FRAME:033780/0633

Effective date: 20140908

AS Assignment

Owner name: TOSHIBA MEDICAL SYSTEMS CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KABUSHIKI KAISHA TOSHIBA;REEL/FRAME:039133/0915

Effective date: 20160316

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

AS Assignment

Owner name: CANON MEDICAL SYSTEMS CORPORATION, JAPAN

Free format text: CHANGE OF NAME;ASSIGNOR:TOSHIBA MEDICAL SYSTEMS CORPORATION;REEL/FRAME:049879/0342

Effective date: 20180104

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION