WO2018058632A1 - An imaging method and system - Google Patents

An imaging method and system

Info

Publication number
WO2018058632A1
WO2018058632A1 PCT/CN2016/101313 CN2016101313W
Authority
WO
WIPO (PCT)
Prior art keywords
image
imaging
region
imaged
interest
Prior art date
Application number
PCT/CN2016/101313
Other languages
English (en)
French (fr)
Inventor
魏芅
梁天柱
林穆清
邹耀贤
王凯
占美飞
侯杰贤
Original Assignee
深圳迈瑞生物医疗电子股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳迈瑞生物医疗电子股份有限公司 filed Critical 深圳迈瑞生物医疗电子股份有限公司
Priority to PCT/CN2016/101313 priority Critical patent/WO2018058632A1/zh
Priority to CN201680086564.9A priority patent/CN109310388B/zh
Priority to CN202111095357.9A priority patent/CN114224386A/zh
Publication of WO2018058632A1 publication Critical patent/WO2018058632A1/zh


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0833 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures
    • A61B 8/085 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/46 Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
    • A61B 8/467 Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means
    • A61B 8/469 Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means for selection of a region of interest
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Definitions

  • the present invention relates to the field of medical imaging technology, and more particularly to an imaging method and system.
  • Medical ultrasound images have been widely used in clinical practice because they are non-invasive and low-cost and provide real-time image display.
  • Specifically, medical ultrasound imaging uses ultrasonic echo signals to detect tissue structural information and displays that information in real time through two-dimensional images, so that the doctor can identify the structural information in the two-dimensional image as a basis for clinical diagnosis.
  • The mainstream medical ultrasound imaging technology is full-area imaging, which uses the same imaging parameters for the entire region within the current imaging range and trades off those parameters so that the image of the whole region is uniform and the overall display is as good as possible. However, this technology may not be optimal for the image within a region of interest, and features within that region cannot be highlighted.
  • On the basis of full-area imaging, a partial imaging technique has been developed, which obtains an image of the region of interest so as to highlight the region of interest.
  • One approach is to use different imaging parameters within and outside of the region of interest.
  • In this approach, however, the area outside the region of interest is not imaged with the imaging parameters used inside it, so during image synthesis the image within the region of interest cannot be synthesized using image information from outside it, and the transition between the region of interest and the area outside it is poor.
  • Another approach is to optimize the image within the region of interest by delineating the region of interest. In this method, however, the image outside the region of interest is frozen, which is inconsistent with the real-time image within the region of interest, and this too makes the transition between the image inside and outside the region of interest poor.
  • The present invention provides an imaging method and system that can improve the transition between the region of interest and the area outside it.
  • In some embodiments, an imaging method may include: acquiring an initial image of the object to be imaged; acquiring a region of interest of the object to be imaged based on the initial image; scanning and imaging the region of interest based on a first imaging parameter to obtain a first imaging image; scanning and imaging the entire region of the object to be imaged based on a second imaging parameter to obtain a second imaging image, wherein the first imaging parameter and the second imaging parameter are at least partially different; and fusing the first imaging image and the second imaging image to obtain an imaging image of the object to be imaged.
  • In some embodiments, acquiring the region of interest of the current object to be imaged based on the initial image may include: acquiring a region of interest designated by the operator on the initial image through a human-computer interaction interface; or acquiring an image type of the initial image specified by the operator and matching the initial image with the corresponding first sample image based on that image type to obtain the region of interest; or acquiring the region of interest based on the initial image by an image recognition method.
  • In some embodiments, acquiring the region of interest based on the initial image by the image recognition method may include: acquiring the image type of the initial image and matching the initial image with the corresponding first sample image based on that image type to obtain the region of interest; or acquiring motion features in the initial image, segmenting the initial image based on the motion features to obtain a motion region of the initial image, and determining the region of interest based on the motion region.
  • In some embodiments, acquiring the image type of the initial image may include: acquiring the image type of the initial image specified by the operator; or performing feature extraction on the initial image to obtain its features and matching those features with the features of a second sample image to obtain the image type of the initial image.
  • In some embodiments, the first imaging parameter and the second imaging parameter may each include at least one of: transmit frequency, transmit voltage, line density, number of focal points, focus position, speckle noise suppression parameter, and image enhancement parameter.
  • In some embodiments, fusing the first imaging image and the second imaging image to obtain the imaging image of the object to be imaged may include: acquiring a first fusion parameter of the first imaging image and a second fusion parameter of the second imaging image, and fusing the first imaging image and the second imaging image based on the first fusion parameter and the second fusion parameter to obtain the imaging image of the object to be imaged.
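As an illustrative sketch only (the patent does not prescribe a fusion formula), fusion with per-image fusion parameters can be realized as a per-pixel weighted blend in which the weight of the region-of-interest image decays toward its border, so that more of the full-area image is used near the edge and the transition is smooth. The function name `fuse_images`, the linear feathering, and the `feather` parameter are all assumptions:

```python
import numpy as np

def fuse_images(first_img, second_img, roi, feather=8):
    """Blend the ROI image (first_img) into the full-area image (second_img).

    first_img  : 2D array covering only the ROI, roi = (y0, y1, x0, x1).
    second_img : 2D array covering the entire imaging region.
    feather    : width in pixels over which the ROI weight ramps 0 -> 1,
                 so more of second_img is used near the ROI edge.
    """
    y0, y1, x0, x1 = roi
    h, w = y1 - y0, x1 - x0

    # First fusion parameter: a weight map for the ROI image, 1 in the
    # interior and ramping linearly to 0 at the ROI border (feathering).
    yy = np.minimum(np.arange(h), np.arange(h)[::-1])
    xx = np.minimum(np.arange(w), np.arange(w)[::-1])
    dist = np.minimum.outer(yy, xx).astype(float)  # distance to ROI edge
    w_roi = np.clip(dist / feather, 0.0, 1.0)

    fused = second_img.astype(float).copy()
    patch = fused[y0:y1, x0:x1]
    # Second fusion parameter is taken as the complement (1 - w_roi).
    fused[y0:y1, x0:x1] = w_roi * first_img + (1.0 - w_roi) * patch
    return fused
```

With this choice the full-area image is reproduced exactly at and beyond the ROI border, which is one way to obtain the smooth transition the description emphasizes.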
  • In some embodiments, the imaging system may include: a scanning device that scans an object to be imaged to acquire its image data; and a processor that: acquires an initial image of the object to be imaged; acquires a region of interest of the object to be imaged based on the initial image; controls the scanning device to scan the region of interest based on a first imaging parameter to obtain a first imaging image; controls the scanning device to scan the entire region of the object to be imaged based on a second imaging parameter to obtain a second imaging image, wherein the first imaging parameter and the second imaging parameter are different; and fuses the first imaging image and the second imaging image to obtain an imaging image of the object to be imaged.
  • In some embodiments, the processor acquiring the region of interest of the current object to be imaged based on the initial image may include: the processor acquiring the region of interest specified by the operator on the initial image through the human-computer interaction interface; or the processor acquiring the image type of the initial image specified by the operator and matching the initial image with the corresponding first sample image based on that image type to obtain the region of interest; or the processor acquiring the region of interest based on the initial image by the image recognition method.
  • In some embodiments, the processor acquiring the region of interest based on the initial image by the image recognition method may include: the processor acquiring the image type of the initial image and matching the initial image with the corresponding first sample image based on that image type to obtain the region of interest; or the processor acquiring motion features in the initial image, segmenting the initial image based on the motion features to obtain a motion region, and determining the region of interest based on the motion region.
  • In some embodiments, the processor acquiring the image type of the initial image may include: the processor acquiring the image type of the initial image specified by the operator; or the processor performing feature extraction on the initial image to obtain its features and matching those features with the features of the second sample image to obtain the image type of the initial image.
  • In some embodiments, the first imaging parameter and the second imaging parameter may each include at least one of: transmit frequency, transmit voltage, line density, number of focal points, focus position, speckle noise suppression parameter, and image enhancement parameter.
  • In the embodiments of the present invention, the region of interest and the entire region of the object to be imaged can be scanned and imaged with the first imaging parameter and the second imaging parameter respectively to obtain a first imaging image and a second imaging image, so that during fusion more image information from the second imaging image can be blended in at the edge of the first imaging image. This produces a smooth transition between the region of interest and the area outside it, improving the transition effect and maintaining visual consistency in the overall fused image.
  • Moreover, because the first imaging parameter and the second imaging parameter are different, during fusion the first imaging image may use image information from the region of the second imaging image corresponding to the region of interest to enhance the image quality of the region of interest.
  • Furthermore, the second imaging image corresponds to the entire region of the object to be imaged and therefore has a regular shape, so controlling the second imaging parameter is relatively simple compared with an irregular shape. And because the second imaging image covers the entire region, the image outside the region of interest is also displayed in real time, giving real-time display of all regions rather than only the region of interest.
  • FIG. 1 is a flowchart of an imaging method according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of optimization of a first imaging parameter according to an embodiment of the present invention.
  • FIG. 3 is another schematic diagram of optimization of a first imaging parameter according to an embodiment of the present invention.
  • FIG. 4 is still another schematic diagram of optimization of a first imaging parameter according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an ultrasound imaging system according to an embodiment of the present invention.
  • The imaging method and system provided by the embodiments of the present invention acquire a first imaging image corresponding to the region of interest of an object to be imaged and a second imaging image corresponding to the entire region, and then fuse the first imaging image and the second imaging image, thereby enhancing the transition between the portion inside the region of interest and the portion outside it in the resulting image.
  • The following description takes an ultrasound imaging system as a specific example; the invention is not limited to ultrasound imaging systems and can be used in other medical imaging systems, such as X-ray imaging systems, magnetic resonance imaging (MRI) systems, positron emission computed tomography (PET) systems, or single-photon emission computed tomography (SPECT) systems, and so on.
  • Embodiments of the present invention provide an imaging system and a corresponding imaging method.
  • The imaging system can include a scanning device and a processor.
  • the scanning device can scan the object to be imaged to obtain image data of the object to be imaged.
  • the scanning device is a probe.
  • In other imaging systems, the scanning device is the corresponding device used to acquire an image of the object to be imaged.
  • the processor can control the scanning device or imaging system to implement the imaging method of the embodiments of the invention described in detail below.
  • Herein, "image data" describes the data obtained by the scanning device; it may also include data received after scanning by the scanning device, whether unprocessed or processed, from which an image has not yet been formed.
  • For example, the image data herein includes the ultrasound echo data obtained from the echoes received by the probe, radio-frequency data after certain processing, or image data after an ultrasound image has been formed.
  • An ultrasound imaging system may include: a probe 1, a transmitting circuit 2, a transmit/receive selection switch 3, a receiving circuit 4, a beamforming module 5, a processor 6, and a display 7.
  • The transmitting circuit 2 transmits a delay-focused ultrasonic pulse having a certain amplitude and polarity to the probe 1 through the transmit/receive selection switch 3.
  • the probe 1 is excited by the ultrasonic pulse to emit ultrasonic waves to a target area (not shown) of the body to be tested, and receives an ultrasonic echo with tissue information reflected from the target area after a certain delay.
  • the ultrasonic echo is reconverted into an electrical signal.
  • The receiving circuit 4 receives the electrical signals generated by the probe 1 to obtain ultrasonic echo signals and sends them to the beamforming module 5.
  • The beamforming module 5 performs processing such as focus delay, weighting, and channel summation on the ultrasonic echo signals to obtain radio-frequency signals, which can be sent to the processor 6 for related processing.
  • the ultrasonic image obtained by the processing of the processor 6 is sent to the display 7 for display.
  • the processor 6 can also implement the imaging method provided by the embodiment of the present invention.
  • the ultrasound imaging system will be described in detail below with reference to the accompanying drawings.
  • FIG. 1 is a flowchart of an imaging method provided by an embodiment of the present invention, which may include the following steps:
  • First, the processor acquires the region of interest of the current object to be imaged.
  • The region of interest may be any region of the object to be imaged that an operator (e.g., a doctor or other user of the ultrasound imaging device) is interested in, such as an area suspected of containing a microstructural lesion; structural information in this region can serve as a basis for clinical diagnosis.
  • an image of a current object to be imaged (herein referred to as an "initial image”) may be obtained.
  • "Initial" here merely indicates that this image precedes, in terms of actions or steps, the acquisition of the region of interest of the object to be imaged; it does not denote the beginning of the overall imaging process or carry any other specific meaning.
  • For example, an imaging system (e.g., an ultrasound imaging system) can be used to image the object to be imaged (e.g., using the full-area imaging method described above) to obtain a full-area ultrasound image of the object (i.e., the initial image); the region of interest of the object to be imaged is then obtained based on this initial image.
  • a full-area ultrasound image may mean that the ultrasound image contains all of the area of the object to be imaged.
  • the "object to be imaged” as used herein may be one or more organs or regions of a human or animal that are currently or will be ultrasound scanned.
  • the manner of obtaining the region of interest includes, but is not limited to, three modes: an operator manual designation mode, a semi-automatic mode, and an automatic mode.
  • the following three modes are introduced one by one.
  • In the manual designation mode, the processor acquires the region of interest specified by the operator through the human-computer interaction interface; that is, the operator manually specifies the region of interest of the object to be imaged.
  • For example, the initial image of the object to be imaged as described above is displayed in the human-computer interaction interface of the ultrasound imaging device, and an input device such as a trackball is provided on the device; by operating the trackball, a sampling frame displayed on the initial image can be manipulated.
  • Operating the sampling frame changes the position of its center point and/or its size.
  • For example, scrolling the trackball horizontally changes the size of the sampling frame horizontally, and scrolling it vertically changes the size of the frame longitudinally.
  • Both the size of the sampling frame and the position of its center point can be changed, and switching between adjusting the position and adjusting the size can be done by operating buttons associated with the trackball.
  • The area within the sampling frame is the region of interest.
  • Semi-automatic mode: this mode combines manual operation by the operator with image recognition technology.
  • The process may be: the processor acquires the image type, specified by the operator, of the initial image of the object to be imaged, and matches the initial image with the corresponding first sample image based on that image type to obtain the region of interest.
  • The image type indicates which type of image the initial image of the current object to be imaged belongs to, such as a liver image, kidney image, heart image, or obstetric cerebellum image. After the image type is acquired, the target of interest in the initial image, that is, the above-mentioned region of interest, can be determined according to the image type.
  • In general, the operator selects an examination mode, i.e., which organ is scanned; if the object to be imaged is the liver, the liver mode is selected as the examination mode. In some embodiments, the examination mode can therefore be used to indicate the image type of the initial image of the object to be imaged.
  • the initial image of the object to be imaged may be matched with the corresponding first sample image based on the image type to obtain a region of interest.
  • The corresponding first sample image may be a sample image having the same image type as the initial image of the object to be imaged; it may be obtained offline, or multiple samples of the same image type may be acquired by scanning with the imaging system, and a template image derived from these samples is matched as a reference against the initial image of the object to be imaged to obtain the region of interest.
  • The process of matching the initial image of the object to be imaged with the corresponding first sample image to obtain the region of interest may be: traverse the initial image, and at each traversed position select a block of the same size as the sample image centered on that position; compute the similarity between the selected block and the first sample image; select the center point of the block with the best similarity as the best matching position; and then delineate the region of interest from the best matching position. The similarity may be computed with the SAD method (Sum of Absolute Differences), the correlation coefficient method, or other suitable methods.
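The traversal-and-SAD matching just described can be sketched as follows. This is an illustrative implementation, not the patent's own code; an exhaustive search over top-left corners is used for simplicity, and the function name is an assumption:

```python
import numpy as np

def sad_match(image, template):
    """Slide `template` over `image` and return the top-left corner of the
    block with the smallest Sum of Absolute Differences (best similarity)."""
    ih, iw = image.shape
    th, tw = template.shape
    best_pos, best_sad = (0, 0), float("inf")
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            block = image[y:y + th, x:x + tw]
            sad = np.abs(block.astype(float) - template.astype(float)).sum()
            if sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos  # the ROI is then delineated around this best match
```

The correlation coefficient method mentioned above would simply replace the SAD score with a normalized cross-correlation and select the maximum instead of the minimum.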
  • Automatic mode: in this mode, the region of interest is acquired automatically through image recognition technology.
  • the manner of acquiring the region of interest by the image recognition method may include, but is not limited to, the following two modes:
  • One way is to perform feature extraction on the initial image of the object to be imaged to obtain its features, match those features with the features of the second sample image to obtain the image type of the initial image, and then match the initial image with the corresponding first sample image based on the obtained image type to obtain the region of interest.
  • The process of matching the initial image of the object to be imaged with the corresponding first sample image based on the image type to obtain the region of interest can refer to the specific implementation in the semi-automatic mode described above and is not repeated here.
  • The above process of obtaining the image type by feature matching can be regarded as automatic acquisition of the image type. Compared with operator specification, automatic acquisition can further refine the image type of the initial image of the object to be imaged, determining which type of image of which subject the initial image belongs to, such as which type of obstetric image or which type of cardiac image.
  • The refined image type of the initial image of the object to be imaged can be determined by matching against the features of the second sample image, and the matching process can be as follows:
  • Step 11, feature extraction. A "feature" here is a general term for the various attributes capable of distinguishing the initial image of the object to be imaged from other images.
  • For any one of the second sample images, feature extraction is performed so that the features of the second sample image can serve as reference features for matching the initial image of the object to be imaged.
  • Feature extraction of the initial image of the object to be imaged may then be performed using the same feature extraction method as for the second sample image, yielding the features of the initial image.
  • The feature extraction method may use image processing operators such as the Sobel operator, Canny operator, Roberts operator, and SIFT operator, or may extract the features of the image automatically with machine learning methods such as PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), and deep learning.
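As one concrete (purely illustrative) instance of operator-based feature extraction, the Sobel gradient magnitude can serve as a simple feature map. The 3x3 kernels below are the standard Sobel kernels; the helper names and the "valid"-mode convolution are assumptions:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Minimal 'valid'-mode 2D cross-correlation for 3x3 kernels."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = (img[y:y + 3, x:x + 3] * kernel).sum()
    return out

def sobel_features(img):
    """Gradient magnitude image, usable as a simple feature map for
    subsequent similarity matching against sample-image features."""
    gx = convolve2d(img.astype(float), SOBEL_X)
    gy = convolve2d(img.astype(float), SOBEL_Y)
    return np.hypot(gx, gy)
```

Since only the gradient magnitude is kept, the sign flip between convolution and cross-correlation does not affect the resulting feature map.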
  • Step 12, feature matching. After the features of the initial image of the object to be imaged are obtained, similarity may be computed one by one against the features of the second sample images in the training sample library, and the image type of the second sample image with the most similar features is taken as the image type of the initial image. The feature similarity may be measured with the SAD algorithm (the smaller the SAD value, the more similar), by computing the correlation coefficient between the two groups of features (the larger the coefficient, the more similar), or with other suitable methods.
  • the method of acquiring the region of interest by the image recognition technology described above is applicable to various image types.
  • In the case where the image type of the initial image indicates that the object to be imaged moves periodically in the time dimension, the motion region in the initial image may be the region of interest, and the process of acquiring the region of interest by image recognition may be as follows:
  • Step 21, acquire the motion features of the initial image of the object to be imaged. The motion features may be acquired with a frame difference method; for example, the image information of the current frame may be directly subtracted from that of the previous frame, or of the previous several frames, to extract the motion features of the current frame.
  • Motion features may also be acquired with other methods, such as optical flow (OF) or a Gaussian mixture model (GMM).
  • Step 22, segment the initial image of the object to be imaged based on the motion features to obtain the motion region in the initial image. After the motion features are obtained, threshold segmentation and morphological processing may be used to segment out the motion region.
  • Step 23, determine the region of interest based on the motion region. After the motion region is segmented, it can be used to locate the region of interest.
  • The region of interest in embodiments of the present invention may be rectangular (e.g., when the imaging system is an ultrasound imaging system using a linear array probe) or sector-shaped (e.g., when the imaging system uses a convex or phased array probe).
  • One region-of-interest localization method is to fit a regular region of interest that contains the entire motion region; the fitting may compute the circumscribed rectangle or sector of the motion region, use a least-squares estimate for a rectangular fit, or use other suitable fitting methods.
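Steps 21 to 23 above (frame difference, threshold segmentation, and fitting a circumscribed rectangle) can be sketched together as follows. The threshold value, the function name, and the omission of morphological cleanup are illustrative assumptions:

```python
import numpy as np

def locate_motion_roi(prev_frame, curr_frame, threshold=10):
    """Frame-difference motion feature -> threshold segmentation ->
    circumscribed (bounding) rectangle taken as the region of interest."""
    # Step 21: motion feature by subtracting the previous frame.
    motion = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    # Step 22: threshold segmentation of the motion region (morphological
    # processing could be applied to `mask` here to remove speckle noise).
    mask = motion > threshold
    if not mask.any():
        return None  # no motion detected
    # Step 23: circumscribed rectangle containing the entire motion region.
    ys, xs = np.nonzero(mask)
    return (ys.min(), ys.max() + 1, xs.min(), xs.max() + 1)  # y0, y1, x0, x1
```

For a sector-shaped region of interest, the last step would fit a circumscribed sector in the probe's polar coordinates instead of a rectangle.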
  • the above-mentioned region of interest localization method is also suitable for the semi-automatic mode.
  • One semi-automatic method is: narrow the localization range based on the operator's input, and then use the automatic localization method to locate the final region of interest within the reduced range.
  • The purpose of narrowing the localization range is to improve localization efficiency and accuracy. The range may be narrowed by the operator drawing at least one point on the motion region to indicate the extent of the region of interest, or automatically according to the operator's input information. Another semi-automatic mode is for the operator to draw at least one point on the motion region to locate an initial region of interest.
  • The automatic or semi-automatic localization methods above change the location and size of the region-of-interest frame according to the image content.
  • The region-of-interest localization methods above can be performed in real time on the initial image of each object to be imaged so as to update the region of interest in real time, or performed at intervals, or performed only after being triggered, e.g., by the operator pressing a button. For a system that must monitor the region of interest in real time, localization may be real-time while the image type is determined at intervals or upon triggering; the image type itself may be specified by the operator or obtained by the feature matching method.
  • the processor controls the scanning device to perform scanning imaging on the region of interest based on the first imaging parameter to obtain a first imaging image.
  • The processor controls the scanning device to scan and image the entire area of the object to be imaged based on the second imaging parameter to obtain a second imaging image.
  • the "all area" as the scanning target for performing scanning imaging using the second imaging parameter is the entire area of the current object to be imaged including the aforementioned region of interest, that is, scanned when scanning imaging is performed using the second imaging parameter
  • the region (or the imaging region at this time) also includes the region of interest itself in addition to the region other than the aforementioned region of interest. Accordingly, accordingly, the obtained second imaging image is an image of the entire region of the object to be imaged that includes the region of interest, not an image of the region other than the region of interest.
  • The first imaging parameter and the second imaging parameter are different. The difference may be that they are the same type of parameter with different values; or that they are different types of parameters, e.g., the first imaging parameter includes parameters A and B while the second includes parameters C and D; or that the first imaging parameter contains the second, e.g., the first imaging parameter includes parameters A and B and the second includes parameter A.
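The three ways the parameter sets may "differ" can be made concrete with a small sketch over parameter dictionaries; the helper name `classify_difference` and the string labels are assumptions for illustration only:

```python
def classify_difference(first_params, second_params):
    """Classify how two imaging-parameter dicts differ, per the cases in the
    description: same types with different values, different types, or one
    parameter set containing the other."""
    first_keys, second_keys = set(first_params), set(second_params)
    if first_keys == second_keys:
        if any(first_params[k] != second_params[k] for k in first_keys):
            return "same types, different values"
        return "identical"
    if second_keys < first_keys:
        return "first contains second"
    if first_keys < second_keys:
        return "second contains first"
    return "different types"
```

For example, a first parameter set {A, B} against a second set {A} falls into the "first contains second" case described above.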
  • The first imaging parameters and the second imaging parameters may each be at least one of: transmit frequency, transmit voltage, line density, number of foci, focal position, speckle-noise suppression parameters, and image enhancement parameters.
  • Because a two-pass imaging scheme covering the entire region and the region of interest is adopted, the first imaging parameters can be optimized during scanning of the region of interest according to its size and position, in order to optimize the region of interest. For example, optimizing the transmit frequency within the region of interest frees it from the frequency constraints of full-region imaging: when the region of interest lies in the near field of the transmit source, the transmit frequency used while scanning it can be raised, improving the resolution of the first image; when it lies in the far field, the transmit frequency can be lowered, improving the penetration of the first image.
  • The transmit voltage can also be optimized: a lower voltage is used for full-region scanning and a higher voltage for scanning the region of interest, as shown in FIG. 2, improving image quality within the region of interest while the transmit power still satisfies the acoustic-field limits of the ultrasound system; for example, one such limit is Ispta (spatial-peak temporal-average intensity) of at most 480 mW/cm². Likewise, the line density, the number of foci, and the scan frame rate of the ultrasound system constrain one another: a lower line density or fewer foci can be used for full-region scanning and a higher line density or more foci for scanning the region of interest, improving first-image quality while the frame rate still meets requirements, as shown in FIG. 3 or FIG. 4. In FIGS. 2 to 4 the horizontal axis is the probe position in the ultrasound system.
  • The optimization of the focal position, speckle-noise suppression parameters, and image enhancement parameters is not described one by one in the embodiments of the present invention. Beyond the above parameters, the first and second imaging parameters may also involve transmit aperture, transmit waveform, spatial compounding, frequency compounding, line compounding, frame correlation, and the like; by optimizing the first imaging parameters for the region of interest, a first image of better quality is obtained.
  • In embodiments where the imaging system is another type of imaging system, the aforementioned first and second parameters can be set accordingly.
  • The processor fuses the first image and the second image to obtain an image of the object to be imaged.
  • The fusion process may be: obtain a first fusion parameter for the first image and a second fusion parameter for the second image, then fuse the two images based on the first and second fusion parameters to obtain the image of the object to be imaged.
  • For example, fusion may follow the formula Io = α*Ilocal + β*Iglobal, where Ilocal is the first image, Iglobal is the second image, α is the first fusion parameter, β is the second fusion parameter, and Io is the fused image of the object. The fusion parameters can be set according to the actual situation: in some embodiments α + β = 1 may be taken; in others α + β > 1, which raises the overall brightness of the fused output; in other embodiments they may be set in other ways.
  • The values of the first fusion parameter α and the second fusion parameter β are not fixed; they may differ per pixel in the image, per position in the image, and with the generation time of the image.
  • For example, when the gray value of each pixel in the first and second images lies in [0, 255], the values of the first and second fusion parameters may be no less than 0; if the gray values of pixels in the first or second image can be less than 0, the corresponding fusion parameter may be less than 0, and the first and second fusion parameters are not both 0 at the same time.
  • In the fusion process, the first fusion parameter α corresponding to each position in the first image may differ, and the second fusion parameter β corresponding to each position in the second image may also differ. For example, since the edge of the region of interest needs to blend in more image information from the second image, the value of β at the edge of the region of interest may be greater than its value at other positions; conversely, if more first-image information is blended at positions other than the edge, the value of α at those positions is greater than at the edge.
  • If the first image and the second image are real-time images that change over time, the values of α and β may also differ over time.
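As a concrete illustration of the scalar weighting choices above, the sketch below blends a first (region-of-interest) image and a second (full-region) image with constant fusion parameters. The `fuse` helper and the sample arrays are illustrative assumptions, not part of the patent; NumPy is assumed.

```python
import numpy as np

def fuse(local_img, global_img, alpha, beta):
    """Weighted fusion Io = alpha*Ilocal + beta*Iglobal (illustrative sketch)."""
    return alpha * local_img.astype(np.float64) + beta * global_img.astype(np.float64)

local_img = np.full((4, 4), 200.0)   # first image (region of interest)
global_img = np.full((4, 4), 100.0)  # second image (entire region)

balanced = fuse(local_img, global_img, 0.5, 0.5)  # alpha + beta = 1
brighter = fuse(local_img, global_img, 0.6, 0.6)  # alpha + beta > 1 raises brightness
```

With α + β = 1 the fused gray level stays between the two inputs (150 here); with α + β > 1 every pixel is scaled up (180 here), matching the brightness-raising behavior described above.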
  • The imaging method provided by the embodiments of the present invention can scan and image the region of interest of the object with the first imaging parameters and the entire region with the second imaging parameters, obtaining a first image of the region of interest and a second image of the entire region, so that imaging parameters can be set specifically for the region of interest to enhance and optimize the desired aspects of its image.
  • During fusion, more image information from the second image can be blended at the edge of the first image, producing a smooth transition between the inside and the outside of the region of interest, improving the transition effect and keeping the overall fused image visually consistent.
  • Because the first and second imaging parameters differ, the first image can draw on the image information of the region of the second image corresponding to the region of interest during fusion, enhancing the image quality of the region of interest.
  • The second image corresponds to the entire region of the object and has a regular shape, so controlling the second imaging parameters is relatively simple compared with an irregular shape. Further, because the second image covers the entire region, the image outside the region of interest is also displayed in real time, achieving real-time display of the whole region.
  • The processor in the above embodiments can be implemented by software plus the necessary hardware platform (for example, a microprocessor, microcontroller, programmable logic device, application-specific integrated circuit, or the like), or by hardware or firmware alone.
  • The technical solution of the present invention, in essence or in the part contributing to the prior art, can also be embodied as a software product carried on a non-transitory computer-readable storage medium (such as a ROM, magnetic disk, optical disc, or server cloud space), including a number of instructions that cause a terminal device (which may be a mobile phone, computer, server, network device, or the like) to perform the methods described in the various embodiments of the present invention.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Vascular Medicine (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

An imaging method, apparatus and device. The region of interest of an object to be imaged can be scanned and imaged using first imaging parameters, and the entire region of the object using second imaging parameters, yielding a first image of the region of interest and a second image of the entire region. When fusing the first and second images, more image information from the second image can be blended in at the edge of the first image, producing a smooth transition between the inside and the outside of the region of interest, improving the transition effect and keeping the overall fused image visually consistent.

Description

An Imaging Method and System

Technical Field

The present invention relates to the technical field of medical imaging, and more particularly to an imaging method and system.
Background

Medical ultrasound imaging is increasingly widely used clinically because it is non-invasive, low-cost, and displays images in real time. Specifically, medical ultrasound imaging uses ultrasound echo signals to detect the structural information of tissue and displays that structural information in real time as a two-dimensional image, so that a physician can identify the structural information in the image as a basis for clinical diagnosis.

Current mainstream medical ultrasound imaging techniques are all full-region imaging techniques, which image the entire current imaging range with the same imaging parameters and trade those parameters off against one another so that the full-region image is uniform and its overall display is optimal. For the image within a region of interest, however, the display effect may not be optimal, and the features within the region of interest cannot be highlighted.

To address this, local imaging techniques have been built on top of full-region imaging; they obtain an image of the region of interest so that the region of interest can be highlighted.

One approach uses different imaging parameters inside and outside the region of interest. However, because the region of interest is not imaged with the parameters used outside it, the image inside the region of interest cannot be synthesized using the image outside it, so the transition between the inside and the outside of the region of interest is poor. Another approach optimizes the image within a delineated region of interest, but the image outside the region of interest is frozen and inconsistent with the real-time image inside it, which likewise yields a poor transition between the image inside the region of interest and the image outside it.
Summary of the Invention

In view of this, the present invention provides an imaging method and system that can improve the transition between the inside and the outside of a region of interest.

In some embodiments of the present invention, an imaging method is provided. The method may include: acquiring an initial image of an object to be imaged; obtaining a region of interest of the object based on the initial image; scanning and imaging the region of interest based on first imaging parameters to obtain a first image; scanning and imaging the entire region of the object based on second imaging parameters to obtain a second image, the first and second imaging parameters being at least partially different; and fusing the first and second images to obtain an image of the object to be imaged.
In some embodiments of the present invention, obtaining the region of interest of the current object based on the initial image may include: obtaining a region of interest specified by the operator through the initial image using a human-machine interface; or obtaining an image type of the initial image specified by the operator, and matching the initial image against a corresponding first sample image based on the image type to obtain the region of interest; or obtaining the region of interest based on the initial image through an image recognition method.

In some embodiments of the present invention, obtaining the region of interest based on the initial image through an image recognition method may include: obtaining the image type of the initial image, and matching the initial image against a corresponding first sample image based on the image type to obtain the region of interest; or obtaining motion features in the initial image, segmenting the initial image based on the motion features to obtain a moving region of the initial image, and determining the region of interest based on the moving region.

In some embodiments of the present invention, obtaining the image type of the initial image may include: obtaining an image type of the initial image specified by the operator; or performing feature extraction on the initial image to obtain its features, and matching the features of the initial image against the features of a second sample image to obtain the image type of the initial image.

In some embodiments of the present invention, the first and second imaging parameters may each be at least one of: transmit frequency, transmit voltage, line density, number of foci, focal position, speckle-noise suppression parameters, and image enhancement parameters.

In some embodiments of the present invention, fusing the first and second images to obtain the image of the object may include: obtaining a first fusion parameter of the first image and a second fusion parameter of the second image; and fusing the first and second images based on the first and second fusion parameters to obtain the image of the object to be imaged.
Some embodiments of the present invention also provide an imaging system. The imaging system may include: a scanning device that scans an object to be imaged to acquire image data of the object; and a processor configured to: acquire an initial image of the object to be imaged; obtain a region of interest of the object based on the initial image; control the scanning device to scan and image the region of interest based on first imaging parameters to obtain a first image; control the scanning device to scan and image the entire region of the object based on second imaging parameters to obtain a second image, the first and second imaging parameters being different; and fuse the first and second images to obtain an image of the object to be imaged.

In some embodiments of the present invention, the processor obtaining the region of interest of the current object based on the initial image may include: the processor obtaining a region of interest specified by the operator through the initial image using a human-machine interface; or the processor obtaining an image type of the initial image specified by the operator, and matching the initial image against a corresponding first sample image based on the image type to obtain the region of interest; or the processor obtaining the region of interest based on the initial image through an image recognition method.

In some embodiments of the present invention, the processor obtaining the region of interest based on the initial image through an image recognition method may include: the processor obtaining the image type of the initial image, and matching the initial image against a corresponding first sample image based on the image type to obtain the region of interest; or the processor obtaining motion features in the initial image, segmenting the initial image based on the motion features to obtain the moving region of the initial image, and determining the region of interest based on the moving region.

In some embodiments of the present invention, the processor obtaining the image type of the initial image may include: the processor obtaining an image type of the initial image specified by the operator; or the processor performing feature extraction on the initial image to obtain its features, and matching the features of the initial image against the features of a second sample image to obtain the image type of the initial image.

In some embodiments of the present invention, the first and second imaging parameters may each be at least one of: transmit frequency, transmit voltage, line density, number of foci, focal position, speckle-noise suppression parameters, and image enhancement parameters.

In some embodiments of the present invention, the processor fusing the first and second images to obtain the image of the object may include: the processor obtaining a first fusion parameter of the first image and a second fusion parameter of the second image, and fusing the first and second images based on the first and second fusion parameters to obtain the image of the object to be imaged.
As can be seen from the above technical solution, the region of interest of the object can be scanned and imaged with the first imaging parameters and the entire region of the object with the second imaging parameters, yielding a first image of the region of interest and a second image of the entire region. During fusion of the first and second images, more image information from the second image can be blended at the edge of the first image, producing a smooth transition between the inside and the outside of the region of interest, improving the transition effect and keeping the overall fused image visually consistent.

Moreover, the first and second imaging parameters differ, so during fusion the first image can use the image information of the region of the second image corresponding to the region of interest to enhance the image quality of the region of interest. The second image corresponds to the entire region of the object and has a regular shape, so controlling the second imaging parameters is relatively simple compared with an irregular shape; further, because the second image covers the entire region, the image outside the region of interest is also displayed in real time, achieving real-time display of the whole region.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an imaging method provided by an embodiment of the present invention;

FIG. 2 is a schematic diagram of optimization of the first imaging parameters provided by an embodiment of the present invention;

FIG. 3 is another schematic diagram of optimization of the first imaging parameters provided by an embodiment of the present invention;

FIG. 4 is yet another schematic diagram of optimization of the first imaging parameters provided by an embodiment of the present invention;

FIG. 5 is a schematic structural diagram of an ultrasound imaging system provided by an embodiment of the present invention.
Detailed Description

The idea of the imaging method, apparatus and device provided by the embodiments of the present invention is: obtain a first image corresponding to the region of interest of an object to be imaged and a second image corresponding to its entire region, then fuse the first and second images to improve, in the finally obtained image, the transition between the part inside the region of interest and the part outside it.

To help those skilled in the art better understand the embodiments of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.

In the following, the embodiments of the present invention are described using an ultrasound imaging system as a concrete example. The present invention is not limited to ultrasound imaging systems, however, and can also be used in other medical imaging systems, such as X-ray imaging systems, magnetic resonance imaging (MRI) systems, positron emission tomography (PET) systems, single-photon emission computed tomography (SPECT) systems, and so on.

In general, the embodiments of the present invention provide an imaging system and a corresponding imaging method. The imaging system may include a scanning device and a processor. The scanning device can scan the object to be imaged to obtain its image data. For an ultrasound imaging system, for example, the scanning device is the probe; for other imaging systems it is the corresponding device that scans the object to be imaged. The processor can control the scanning device or the imaging system to implement the imaging methods of the embodiments of the present invention described in detail below. Although the term "image data" is used here for the data obtained by the scanning device, herein "image data" may also include data that the scanning device receives or obtains after scanning that is unprocessed, or has undergone some processing but has not yet formed an image. For an ultrasound imaging system, for example, the image data also includes the ultrasound echo data obtained from the echoes received by the probe, the radio-frequency data after some processing, or the image data after an ultrasound image has been formed.
For example, referring to FIG. 5 and taking an ultrasound imaging system as an example, in some embodiments of the present invention an ultrasound imaging system may include: a probe 1, a transmitting circuit 2, a transmit/receive selection switch 3, a receiving circuit 4, a beamforming module 5, a processor 6, and a display 7.

The transmitting circuit 2 sends delay-focused ultrasound pulses of a certain amplitude and polarity to the probe 1 through the transmit/receive selection switch 3. Excited by the ultrasound pulses, the probe 1 transmits ultrasound waves toward a target region (not shown) of the tissue under examination, receives, after a certain delay, the ultrasound echoes carrying tissue information reflected from the target region, and converts these echoes back into electrical signals. The receiving circuit receives the electrical signals generated by the probe 1, obtains ultrasound echo signals, and sends them to the beamforming module 5. The beamforming module 5 performs focusing delay, weighting, channel summation and other processing on the echo signals to obtain radio-frequency signals, which can be sent to the processor 6 for relevant processing. The ultrasound image obtained after processing by the processor 6 is sent to the display 7 for display.

In the embodiments of the present invention, the processor 6 can also implement the imaging method provided by the embodiments of the present invention, which is described in detail below with reference to the drawings, still taking an ultrasound imaging system as an example.
Referring to FIG. 1, which shows a flowchart of an imaging method provided by an embodiment of the present invention, the method may include the following steps:

101: The processor obtains the region of interest of the current object to be imaged. The region of interest may be any region of the object that the user (for example, a physician or another operator of the ultrasound imaging device) is interested in, such as a region suspected of containing a subtle structural lesion; the structural information in this region can serve as a basis for clinical diagnosis.

In the embodiments of the present invention, the region of interest can be obtained based on an already-acquired image of the current object (called the "initial image" herein; it should be understood that "initial" refers only to the present action or step of obtaining the region of interest, not to the start of the overall imaging process or any other specific meaning). For example, the imaging system (e.g., an ultrasound imaging system) can image the object (e.g., using the full-region imaging method described above) to obtain a full-region ultrasound image of the object (the so-called initial image), and the region of interest of the object is then obtained based on this initial image. A "full-region ultrasound image" here means an ultrasound image covering the entire region of the object to be imaged. The "object to be imaged" may be one or more organs or regions of a human or animal body currently undergoing or about to undergo ultrasound scanning.
In the embodiments of the present invention, the ways of obtaining the region of interest include, but are not limited to, three approaches: manual specification by the operator, a semi-automatic approach, and an automatic approach, introduced one by one below.

Manual specification by the operator: in this approach, the processor obtains the region of interest specified by the operator in the human-machine interface, i.e., the operator manually specifies the region of interest of the object. For example, the human-machine interface of the ultrasound imaging device displays the initial image of the object described above, and the device is equipped with an input device such as a trackball. By operating the trackball, the operator manipulates a sampling box displayed on the initial image to change the position of its center point and/or its size; for example, rolling the trackball horizontally changes the box size horizontally, rolling it vertically changes the size vertically, and so on. While the box size changes, the center position may stay unchanged, and a key on the trackball can switch between adjusting the center position and adjusting the size. The region inside the sampling box is the region of interest.
Semi-automatic approach: this approach combines manual operation by the operator with image recognition technology. The process may be: the processor obtains the image type of the initial image of the object as specified by the operator, and matches the initial image against the corresponding first sample image based on that image type to obtain the region of interest.

The image type indicates which class the initial image of the current object belongs to, such as liver, kidney, heart, or obstetric cerebellum images. After the image type is obtained, the target in the initial image that the operator is interested in can be determined from the image type; that target of interest is the aforementioned region of interest.

During scanning of the object, the operator selects an examination mode, i.e., which organ to scan; when the object to be imaged is the liver, for example, the examination mode is set to liver mode. Therefore, in some embodiments, the examination mode can be used to indicate the image type of the initial image of the object.

After the image type is obtained, the initial image of the object can be matched against the corresponding first sample image based on that type to obtain the region of interest. The corresponding first sample image may be a sample image of the same image type as the initial image; the sample image may be a template image built from multiple samples of the same image type obtained offline or acquired by scanning with the imaging system, and it is matched against the initial image as the matching reference to obtain the region of interest.
In some embodiments of the present invention, matching the initial image of the object against the corresponding first sample image to obtain the region of interest may proceed as follows: traverse the initial image; at each traversed position, select a region block centered on that position with the same size as the sample image, and compute the similarity between the selected block and the first sample image; after the traversal, take the center point of the block with the best similarity as the best matching position, then delineate the region of interest around the best matching position. The similarity computation may use the SAD (Sum of Absolute Differences) method, the correlation-coefficient method, or another suitable method.
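The traverse-and-compare matching above can be sketched as an exhaustive SAD search. This is a minimal illustrative implementation, not the patent's own code; the `sad_match` helper and the toy arrays are assumptions, and NumPy is assumed.

```python
import numpy as np

def sad_match(image, template):
    """Slide the template over the image and return the top-left corner
    of the block with the minimum Sum of Absolute Differences (SAD)."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = float("inf"), (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            sad = np.abs(image[y:y + th, x:x + tw] - template).sum()
            if sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

image = np.zeros((8, 8))
image[3:5, 4:6] = 9.0            # bright patch standing in for the target structure
template = np.full((2, 2), 9.0)  # sample ("template") image of the same structure
```

In practice the best-matching center point (rather than the corner) would anchor the region of interest, and a correlation coefficient could replace SAD as the similarity measure.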
Automatic approach: in this approach the region of interest can be obtained through image recognition technology. In some embodiments of the present invention, the ways of obtaining the region of interest through an image recognition method may include, but are not limited to, the following two:

One way is: perform feature extraction on the initial image of the object to obtain the features of the initial image, match those features against the features of a second sample image to obtain the image type of the initial image, and, based on the obtained image type, match the initial image against the corresponding first sample image to obtain the region of interest. Here, matching the initial image against the corresponding first sample image based on the image type to obtain the region of interest can follow the specific implementation in the semi-automatic approach above, which this embodiment of the present invention does not repeat.
The above process of obtaining the image type by feature matching can be regarded as obtaining the image type automatically. Compared with operator specification, automatic acquisition can further refine the image type of the initial image, determining which class of which department the image belongs to, such as which class of obstetric or cardiac image. To enable this refinement, at least one second sample image can be obtained offline or acquired by scanning with the imaging system for each refined image type; since the image type of each second sample image is known, the refined image type of the initial image can be determined by matching against the features of the second sample images. The matching process may be as follows:

Step 11: feature extraction. The above "features" are a general term for the various attributes that distinguish the initial image of the object from other images. In some embodiments of the present invention, feature extraction is performed on every acquired second sample image so that its features serve as the reference features for the subsequent matching of the initial image. Likewise, after the initial image of the object is obtained, feature extraction is performed on it using the same method as for the second sample images, yielding the features of the initial image.

The feature extraction may use image-processing methods such as the Sobel, Canny, Roberts, and SIFT operators, or machine-learning methods that extract image features automatically, such as PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), and deep learning.
Step 12: feature matching. After the features of the initial image are obtained, their similarity to the features of each second sample image in the training sample library can be computed one by one, and the image type of the second sample image with the most similar features is chosen as the image type of the initial image. The feature-similarity measure may be the SAD algorithm, where a smaller SAD value means greater similarity; or the correlation coefficient of the two feature sets may be computed, where a larger coefficient means greater similarity; or another suitable method may be used.
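The nearest-sample selection in step 12 can be sketched as below, using SAD as the feature-similarity measure. The `classify` helper, the feature vectors, and the type labels are illustrative assumptions; NumPy is assumed.

```python
import numpy as np

def classify(features, sample_features, sample_types):
    """Return the image type of the sample whose feature vector has the
    smallest SAD distance to the input features (smaller SAD = more similar)."""
    sads = [np.abs(features - s).sum() for s in sample_features]
    return sample_types[int(np.argmin(sads))]

# Hypothetical reference features extracted from second sample images
samples = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
types = ["liver", "fetal heart"]
```

A correlation coefficient could replace SAD here, with `argmax` instead of `argmin` since larger correlation means greater similarity.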
The image-recognition methods for obtaining the region of interest introduced above apply to all image types.

In addition, for image types that exhibit periodic motion in the time dimension, such as fetal heart, adult heart, and carotid artery images, the moving region in the initial image of such an object may itself be the region of interest. Therefore, when the image type of the initial image indicates that the object moves periodically in the time dimension, the process of obtaining the region of interest through image recognition technology may be as follows:
Step 21: obtain the motion features of the initial image of the object. Various methods can be used; for example, the frame-difference method, in which the image information of the previous frame or of several previous frames is subtracted directly from that of the current frame to extract the motion features of the current frame. Other methods, such as OF (Optical Flow) and GMM (Gaussian Mixture Model), can of course also be used to extract motion features.

Step 22: segment the initial image of the object based on the motion features to obtain the moving region in the initial image. After the motion features are obtained, threshold segmentation combined with morphological processing can be used to segment out the moving region.
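Steps 21 and 22 can be sketched as a frame difference followed by a threshold. The `motion_mask` helper, the threshold value, and the toy frames are illustrative assumptions (morphological cleanup is omitted); NumPy is assumed.

```python
import numpy as np

def motion_mask(curr, prev, threshold):
    """Frame-difference motion feature (step 21) followed by
    threshold segmentation (step 22); returns a boolean moving-region mask."""
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    return diff > threshold

prev = np.zeros((6, 6), dtype=np.uint8)
curr = prev.copy()
curr[2:4, 2:4] = 200  # structure that moved between frames
mask = motion_mask(curr, prev, threshold=50)
```

In a real pipeline a morphological opening/closing would follow the threshold to remove speckle-sized false detections before the region of interest is fitted.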
Step 23: determine the region of interest based on the moving region. Once the moving region is segmented out, it can be used to locate the region of interest. Typically, the region of interest in the embodiments of the present invention may be rectangular (for example, when the imaging system is an ultrasound imaging system using a linear-array probe) or sector-shaped (for example, when the imaging system is an ultrasound imaging system using a convex-array or phased-array probe). One positioning method is therefore to fit a regular region of interest that contains the entire moving region; the fitting may compute the bounding rectangle or sector of the moving region, use least-squares rectangle fitting, or use another suitable fitting method.
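For the rectangular case, the fitting in step 23 reduces to the bounding rectangle of the mask. The `bounding_rect` helper below is an illustrative stand-in for the fitting step, not the patent's method; NumPy is assumed.

```python
import numpy as np

def bounding_rect(mask):
    """Smallest axis-aligned rectangle (y0, x0, y1, x1) containing every
    True pixel of the moving-region mask; a simple ROI-fitting stand-in."""
    ys, xs = np.nonzero(mask)
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

mask = np.zeros((6, 6), dtype=bool)
mask[1:3, 2:5] = True  # segmented moving region
```

A sector fit for convex or phased-array probes would instead express the bounds in depth/angle coordinates, and a least-squares fit could trade tightness for robustness to outlier pixels.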
The above region-of-interest positioning methods also suit semi-automatic approaches. For example, one semi-automatic approach is: narrow the positioning range based on the operator's input, then use the automatic positioning method to locate the final region of interest within the narrowed range. Narrowing the range improves positioning efficiency and accuracy, and can be done by the operator drawing at least one point on the moving region to hint at the extent of the region of interest, or by narrowing the range automatically according to the operator's input. Another semi-automatic approach is for the operator to draw at least one point on the moving region to locate an initial region of interest; during real-time scanning, the position and size of the region-of-interest box are then changed in real time according to the image content using the automatic or semi-automatic positioning methods above.

It should be noted that the above region-of-interest positioning methods can position in real time on the initial image of each object to be imaged, so as to change the region of interest in real time; they can also position at intervals, or only after being triggered by the operator, for example via a key press. Even for a system that must monitor the region of interest in real time, the positioning of the region may be real-time while the image type is judged at intervals or only after image-type acquisition is triggered; in all cases, the image type can be obtained by operator specification or by the feature-matching approach.
102: The processor controls the scanning device to scan and image the region of interest based on the first imaging parameters, obtaining a first image.

103: The processor controls the scanning device to scan and image the entire region of the object based on the second imaging parameters, obtaining a second image. Here, the "entire region" that is the scanning target when imaging with the second imaging parameters is the whole area of the current object including the aforementioned region of interest; that is, when scanning with the second imaging parameters, the scanned region (the imaging region at this point) contains the region of interest itself in addition to the area outside it. Accordingly, the obtained second image is an image of the entire region of the current object including the region of interest, not merely an image of the area outside the region of interest.

Moreover, in steps 102 and 103 the first and second imaging parameters differ. The difference may be: the first and second imaging parameters are parameters of the same type with different values; or they are parameters of different types, e.g., the first include parameters A and B while the second include parameters C and D; or the first imaging parameters include the second, e.g., the first include parameters A and B while the second include parameter A, in which case the first are deemed to include the second; and so on.
In embodiments where the imaging system is an ultrasound imaging system, the first and second imaging parameters may each be at least one of: transmit frequency, transmit voltage, line density, number of foci, focal position, speckle-noise suppression parameters, and image enhancement parameters. In the embodiments of the present invention a two-pass imaging scheme over the entire region and the region of interest is adopted, so during scanning of the region of interest the first imaging parameters can be optimized according to its size and position, in order to optimize the region of interest.

For example, optimizing the transmit frequency within the region of interest frees it from the frequency constraints of full-region imaging: when the region of interest lies in the near field of the transmit source, the transmit frequency used while scanning it can be raised, improving the resolution of the first image; when it lies in the far field, the transmit frequency can be lowered, improving the penetration of the first image.

Alternatively, with the other parameters of the ultrasound system fixed, the higher the transmit voltage, the greater the transmit power and the better the image quality. In the embodiments of the present invention the transmit voltage can therefore be optimized: a lower voltage for full-region scanning and a higher voltage for scanning the region of interest, as shown in FIG. 2, improving image quality within the region of interest while the transmit power still satisfies the acoustic-field limits of the ultrasound system; for example, one such limit is Ispta (spatial-peak temporal-average intensity) of at most 480 mW/cm².

The line density, number of foci, and scan frame rate of the ultrasound system constrain one another: the higher the line density or the more foci, the lower the frame rate. The line density or number of foci can therefore be optimized: a lower line density or fewer foci for full-region scanning and a higher line density or more foci for scanning the region of interest, improving first-image quality while the frame rate still meets requirements, as shown in FIG. 3 or FIG. 4. In FIGS. 2 to 4 the horizontal axis is the probe position in the ultrasound system.
The optimization of the focal position, speckle-noise suppression parameters, and image enhancement parameters is not described one by one in the embodiments of the present invention. Beyond the above parameters, the first and second imaging parameters may also involve transmit aperture, transmit waveform, spatial compounding, frequency compounding, line compounding, frame correlation, and the like; by optimizing the first imaging parameters corresponding to the region of interest, a first image of better quality is obtained.

In embodiments where the imaging system is another type of imaging system, the aforementioned first and second parameters can be set accordingly.
104: The processor fuses the first image and the second image to obtain the image of the object to be imaged. In the embodiments of the present invention the fusion process may be: obtain a first fusion parameter for the first image and a second fusion parameter for the second image, then fuse the first and second images based on the first and second fusion parameters to obtain the image of the object.

For example, fusion may follow the formula Io = α*Ilocal + β*Iglobal, where Ilocal is the first image, Iglobal is the second image, α is the first fusion parameter, β is the second fusion parameter, and Io is the image of the object, i.e., the fusion result; in the embodiments of the present invention the image corresponding to the fusion result depends on the values of the first and second fusion parameters.

The first and second fusion parameters can be set according to the actual situation. In some embodiments, α + β = 1 may be taken; in others, α + β > 1, which raises the overall brightness of the fused output; in still other embodiments they may be set in other ways.
It should be noted that the values of the first fusion parameter α and the second fusion parameter β are not fixed; they may differ per pixel in the image, per position in the image, and with the generation time of the image.

For example, during fusion, when the gray value of each pixel of the first and second images lies in [0, 255], the values of the first and second fusion parameters may be no less than 0; if the gray values of pixels in the first or second image can be less than 0, the corresponding fusion parameter may be less than 0, and the first and second fusion parameters are not both 0 at the same time. Alternatively, during fusion, the first fusion parameter α may differ at each position of the first image and the second fusion parameter β may differ at each position of the second image; for instance, the edge of the region of interest needs to blend in more image information from the second image, so the value of β at the edge of the region of interest may be greater than its value at other positions, and if more first-image information is blended at positions other than the edge, the value of α at those positions is greater than at the edge. If the first and second images are real-time images that change over time, the values of α and β may also vary with time; and so on.
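The position-dependent weighting just described can be sketched as a per-pixel β map that is larger near the ROI boundary, so more of the full-region image is blended in at the edge. The `edge_weighted_beta` helper, the border width, and the specific weight values are illustrative assumptions; NumPy is assumed.

```python
import numpy as np

def edge_weighted_beta(shape, roi, edge_beta, inner_beta, border=1):
    """Per-pixel second-fusion-parameter map: edge_beta in a border band
    around the ROI boundary, inner_beta in the ROI interior."""
    y0, x0, y1, x1 = roi
    beta = np.full(shape, edge_beta)
    beta[y0 + border:y1 - border, x0 + border:x1 - border] = inner_beta
    return beta

beta = edge_weighted_beta((8, 8), (0, 0, 8, 8), edge_beta=0.5, inner_beta=0.1)
# Pairing alpha = 1 - beta keeps alpha + beta = 1 at every pixel, so the
# interior stays dominated by the first (ROI) image while the edge blends
# smoothly toward the second (full-region) image.
```

A smooth ramp (e.g., a distance transform from the boundary) rather than a hard band would give an even softer transition; the hard band here just keeps the sketch short.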
As can be seen from the above technical solution, the imaging method provided by the embodiments of the present invention can scan and image the region of interest of the object with the first imaging parameters and the entire region with the second imaging parameters, obtaining a first image of the region of interest and a second image of the entire region. Imaging parameters can thus be set specifically for the region of interest to enhance and optimize the desired aspects of its image. During fusion of the first and second images, more image information from the second image can be blended at the edge of the first image, producing a smooth transition between the inside and the outside of the region of interest, improving the transition effect and keeping the overall fused image visually consistent.

Moreover, the first and second imaging parameters differ, so during fusion the first image can use the image information of the region of the second image corresponding to the region of interest to enhance the image quality of the region of interest. The second image corresponds to the entire region of the object and has a regular shape, so controlling the second imaging parameters is relatively simple compared with an irregular shape; further, because the second image covers the entire region, the image outside the region of interest is also displayed in real time, achieving real-time display of the whole region.
From the description of the above embodiments, those skilled in the art can clearly understand that the processor in the above embodiments can be implemented by software plus the necessary hardware platform (for example, a microprocessor, microcontroller, programmable logic device, application-specific integrated circuit, etc.), or by hardware or firmware alone. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can also be embodied as a software product carried on a non-volatile computer-readable storage medium (such as a ROM, magnetic disk, optical disc, or server cloud space), including a number of instructions that cause a terminal device (which may be a mobile phone, computer, server, network device, etc.) to perform the methods described in the various embodiments of the present invention.

Herein, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a..." does not exclude the presence of additional identical elements in the process, method, article, or device comprising that element.

The above description of the provided embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features provided herein.

Claims (12)

  1. An imaging method, characterized in that the method comprises:
    acquiring an initial image of an object to be imaged;
    obtaining a region of interest of the object to be imaged based on the initial image;
    scanning and imaging the region of interest based on first imaging parameters to obtain a first image;
    scanning and imaging the entire region of the object to be imaged based on second imaging parameters to obtain a second image, wherein the first imaging parameters and the second imaging parameters are at least partially different;
    fusing the first image and the second image to obtain an image of the object to be imaged.
  2. The method according to claim 1, characterized in that obtaining the region of interest of the current object to be imaged based on the initial image comprises:
    obtaining the region of interest specified by an operator through the initial image using a human-machine interface;
    or,
    obtaining an image type of the initial image specified by the operator, and matching the initial image against a corresponding first sample image based on the image type to obtain the region of interest;
    or,
    obtaining the region of interest based on the initial image through an image recognition method.
  3. The method according to claim 2, characterized in that obtaining the region of interest based on the initial image through an image recognition method comprises:
    obtaining an image type of the initial image, and matching the initial image against a corresponding first sample image based on the image type to obtain the region of interest;
    or,
    obtaining motion features in the initial image, segmenting the initial image based on the motion features to obtain a moving region of the initial image, and determining the region of interest based on the moving region.
  4. The method according to claim 3, characterized in that obtaining the image type of the initial image comprises:
    obtaining the image type of the initial image specified by the operator;
    or,
    performing feature extraction on the initial image to obtain features of the initial image, and matching the features of the initial image against features of a second sample image to obtain the image type of the initial image.
  5. The method according to claim 1, characterized in that the first imaging parameters and the second imaging parameters are each at least one of: transmit frequency, transmit voltage, line density, number of foci, focal position, speckle-noise suppression parameters, and image enhancement parameters.
  6. The method according to claim 1, characterized in that fusing the first image and the second image to obtain the image of the object to be imaged comprises:
    obtaining a first fusion parameter of the first image and a second fusion parameter of the second image;
    fusing the first image and the second image based on the first fusion parameter and the second fusion parameter to obtain the image of the object to be imaged.
  7. An imaging system, characterized in that the system comprises:
    a scanning device that scans an object to be imaged to acquire image data of the object to be imaged;
    a processor configured to:
    acquire an initial image of the object to be imaged;
    obtain a region of interest of the object to be imaged based on the initial image;
    control the scanning device to scan and image the region of interest based on first imaging parameters to obtain a first image;
    control the scanning device to scan and image the entire region of the object to be imaged based on second imaging parameters to obtain a second image, wherein the first imaging parameters and the second imaging parameters are different;
    fuse the first image and the second image to obtain an image of the object to be imaged.
  8. The system according to claim 7, characterized in that the processor obtaining the region of interest of the current object to be imaged based on the initial image comprises:
    the processor obtaining the region of interest specified by an operator through the initial image using a human-machine interface;
    or,
    the processor obtaining an image type of the initial image specified by the operator, and matching the initial image against a corresponding first sample image based on the image type to obtain the region of interest;
    or,
    the processor obtaining the region of interest based on the initial image through an image recognition method.
  9. The system according to claim 8, characterized in that the processor obtaining the region of interest based on the initial image through an image recognition method comprises:
    the processor obtaining an image type of the initial image, and matching the initial image against a corresponding first sample image based on the image type to obtain the region of interest;
    or,
    the processor obtaining motion features in the initial image, segmenting the initial image based on the motion features to obtain the moving region of the initial image, and determining the region of interest based on the moving region.
  10. The system according to claim 9, characterized in that the processor obtaining the image type of the initial image comprises:
    the processor obtaining the image type of the initial image specified by the operator;
    or,
    the processor performing feature extraction on the initial image to obtain features of the initial image, and matching the features of the initial image against features of a second sample image to obtain the image type of the initial image.
  11. The system according to claim 7, characterized in that the first imaging parameters and the second imaging parameters are each at least one of: transmit frequency, transmit voltage, line density, number of foci, focal position, speckle-noise suppression parameters, and image enhancement parameters.
  12. The system according to claim 7, characterized in that the processor fusing the first image and the second image to obtain the image of the object to be imaged comprises:
    the processor obtaining a first fusion parameter of the first image and a second fusion parameter of the second image, and fusing the first image and the second image based on the first fusion parameter and the second fusion parameter to obtain the image of the object to be imaged.
PCT/CN2016/101313 2016-09-30 2016-09-30 An imaging method and system WO2018058632A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2016/101313 WO2018058632A1 (zh) 2016-09-30 2016-09-30 An imaging method and system
CN201680086564.9A CN109310388B (zh) 2016-09-30 2016-09-30 An imaging method and system
CN202111095357.9A CN114224386A (zh) 2016-09-30 2016-09-30 An imaging method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/101313 WO2018058632A1 (zh) 2016-09-30 2016-09-30 An imaging method and system

Publications (1)

Publication Number Publication Date
WO2018058632A1 true WO2018058632A1 (zh) 2018-04-05

Family

ID=61762392

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/101313 WO2018058632A1 (zh) 2016-09-30 2016-09-30 An imaging method and system

Country Status (2)

Country Link
CN (2) CN109310388B (zh)
WO (1) WO2018058632A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020132953A1 (zh) * 2018-12-26 2020-07-02 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. An imaging method and ultrasound imaging device
CN111407317A (zh) * 2019-01-08 2020-07-14 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Method and system for performing ultrasound imaging

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5987345A (en) * 1996-11-29 1999-11-16 Arch Development Corporation Method and system for displaying medical images
CN1864633A (zh) * 2005-05-19 2006-11-22 Siemens AG Method for expanding the display range of a stereoscopic image of an object region
CN1864634A (zh) * 2005-05-19 2006-11-22 Siemens AG Method for expanding the display range of a two-dimensional image of an object region
CN103913472A (zh) * 2012-12-31 2014-07-09 Nuctech Company Limited CT imaging system and method
CN104382616A (zh) * 2014-09-28 2015-03-04 安华亿能医疗影像科技(北京)有限公司 Carotid artery three-dimensional image construction apparatus
CN104780845A (zh) * 2012-11-06 2015-07-15 Koninklijke Philips N.V. Enhancing ultrasound images

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100686289B1 (ko) * 2004-04-01 2007-02-23 Medison Co., Ltd. Apparatus and method for forming a three-dimensional ultrasound image using volume data within the contour of an object image
US8090429B2 (en) * 2004-06-30 2012-01-03 Siemens Medical Solutions Usa, Inc. Systems and methods for localized image registration and fusion
US7372934B2 (en) * 2005-12-22 2008-05-13 General Electric Company Method for performing image reconstruction using hybrid computed tomography detectors
CN101405619B (zh) * 2006-03-16 2013-01-02 Koninklijke Philips Electronics N.V. Computed tomography data acquisition apparatus and method
CN101053531A (zh) * 2007-05-17 2007-10-17 Shanghai Jiao Tong University Early tumor localization and tracking method based on multi-modality sensitization imaging fusion
US7973834B2 (en) * 2007-09-24 2011-07-05 Jianwen Yang Electro-optical foveated imaging and tracking system
CN201854353U (zh) * 2010-10-13 2011-06-01 山东神戎电子股份有限公司 Multi-spectral image fusion camera
JP6000569B2 (ja) * 2011-04-01 2016-09-28 Toshiba Medical Systems Corporation Ultrasonic diagnostic apparatus and control program
US9557415B2 (en) * 2014-01-20 2017-01-31 Northrop Grumman Systems Corporation Enhanced imaging system
CN103971100A (zh) * 2014-05-21 2014-08-06 State Grid Corporation of China Video-based method for detecting disguise and peeping behaviors at automatic teller machines

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5987345A (en) * 1996-11-29 1999-11-16 Arch Development Corporation Method and system for displaying medical images
CN1864633A (zh) * 2005-05-19 2006-11-22 Siemens AG Method for expanding the display range of a stereoscopic image of an object region
CN1864634A (zh) * 2005-05-19 2006-11-22 Siemens AG Method for expanding the display range of a two-dimensional image of an object region
CN104780845A (zh) * 2012-11-06 2015-07-15 Koninklijke Philips N.V. Enhancing ultrasound images
CN103913472A (zh) * 2012-12-31 2014-07-09 Nuctech Company Limited CT imaging system and method
CN104382616A (zh) * 2014-09-28 2015-03-04 安华亿能医疗影像科技(北京)有限公司 Carotid artery three-dimensional image construction apparatus

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020132953A1 (zh) * 2018-12-26 2020-07-02 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. An imaging method and ultrasound imaging device
CN112654298A (zh) * 2018-12-26 2021-04-13 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. An imaging method and ultrasound imaging device
CN111407317A (zh) * 2019-01-08 2020-07-14 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Method and system for performing ultrasound imaging

Also Published As

Publication number Publication date
CN109310388B (zh) 2022-04-15
CN114224386A (zh) 2022-03-25
CN109310388A (zh) 2019-02-05

Similar Documents

Publication Publication Date Title
RU2667617C2 (ru) Система и способ эластографических измерений
WO2017206023A1 (zh) Cardiac volume identification and analysis system and method
US11100665B2 (en) Anatomical measurements from ultrasound data
JPWO2010116965A1 (ja) Medical image diagnostic apparatus, region-of-interest setting method, medical image processing apparatus, and region-of-interest setting program
US20210393240A1 (en) Ultrasonic imaging method and device
JP2016195764A (ja) Medical image processing apparatus and program
JP2020503099A (ja) 出産前超音波イメージング
JP2005193017A (ja) Method and system for *** lesion classification
US12026886B2 (en) Method and system for automatically estimating a hepatorenal index from ultrasound images
US20200015785A1 (en) Volume rendered ultrasound imaging
US20240041431A1 (en) Ultrasound imaging method and system
JP2011120901A (ja) Ultrasound system and method for providing spatially compounded ultrasound images
EP3820374B1 (en) Methods and systems for performing fetal weight estimations
WO2018058632A1 (zh) An imaging method and system
KR20120102447A (ko) Diagnostic apparatus and method
US11717268B2 (en) Ultrasound imaging system and method for compounding 3D images via stitching based on point distances
WO2019130636A1 (ja) Ultrasonic imaging apparatus, image processing apparatus, and method
WO2020132953A1 (zh) An imaging method and ultrasound imaging device
US20220039773A1 (en) Systems and methods for tracking a tool in an ultrasound image
CN112294361A (zh) Ultrasound imaging device and method for generating section images of the pelvic floor
CN112672696A (zh) 用于跟踪超声图像中的工具的***和方法
US11382595B2 (en) Methods and systems for automated heart rate measurement for ultrasound motion modes
EP4390841A1 (en) Image acquisition method
JP7299100B2 (ja) Ultrasonic diagnostic apparatus and ultrasonic image processing method
EP4014884A1 (en) Apparatus for use in analysing an ultrasound image of a subject

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16917382

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16917382

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC, EPO FORM 1205A DATED 10/09/19

122 Ep: pct application non-entry in european phase

Ref document number: 16917382

Country of ref document: EP

Kind code of ref document: A1