WO2018058632A1 - Imaging method and system (一种成像方法和系统) - Google Patents
Imaging method and system (一种成像方法和系统)
- Publication number
- WO2018058632A1 WO2018058632A1 PCT/CN2016/101313 CN2016101313W WO2018058632A1 WO 2018058632 A1 WO2018058632 A1 WO 2018058632A1 CN 2016101313 W CN2016101313 W CN 2016101313W WO 2018058632 A1 WO2018058632 A1 WO 2018058632A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- imaging
- region
- imaged
- interest
- Prior art date
Links
- 238000003384 imaging method Methods 0.000 title claims abstract description 197
- 238000000034 method Methods 0.000 claims abstract description 64
- 230000004927 fusion Effects 0.000 claims description 46
- 238000000605 extraction Methods 0.000 claims description 10
- 230000005540 biological transmission Effects 0.000 claims description 9
- 230000003993 interaction Effects 0.000 claims description 6
- 230000001629 suppression Effects 0.000 claims description 6
- 230000008569 process Effects 0.000 abstract description 20
- 230000007704 transition Effects 0.000 abstract description 10
- 230000000694 effects Effects 0.000 abstract description 8
- 230000000007 visual effect Effects 0.000 abstract description 2
- 239000000523 sample Substances 0.000 description 37
- 238000002604 ultrasonography Methods 0.000 description 15
- 238000012285 ultrasound imaging Methods 0.000 description 15
- 238000005516 engineering process Methods 0.000 description 7
- 238000005070 sampling Methods 0.000 description 7
- 238000012545 processing Methods 0.000 description 6
- 238000007499 fusion processing Methods 0.000 description 5
- 230000008859 change Effects 0.000 description 4
- 238000010586 diagram Methods 0.000 description 4
- 238000005457 optimization Methods 0.000 description 4
- 210000004185 liver Anatomy 0.000 description 3
- 230000006798 recombination Effects 0.000 description 3
- 238000005215 recombination Methods 0.000 description 3
- 238000013459 approach Methods 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 238000003759 clinical diagnosis Methods 0.000 description 2
- 238000002591 computed tomography Methods 0.000 description 2
- 238000002059 diagnostic imaging Methods 0.000 description 2
- 230000004807 localization Effects 0.000 description 2
- 238000002156 mixing Methods 0.000 description 2
- 210000000056 organ Anatomy 0.000 description 2
- 238000000513 principal component analysis Methods 0.000 description 2
- 230000011218 segmentation Effects 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 210000001715 carotid artery Anatomy 0.000 description 1
- 210000001638 cerebellum Anatomy 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 210000002458 fetal heart Anatomy 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 210000003734 kidney Anatomy 0.000 description 1
- 230000003902 lesion Effects 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 238000002595 magnetic resonance imaging Methods 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000000877 morphologic effect Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000008520 organization Effects 0.000 description 1
- 230000035515 penetration Effects 0.000 description 1
- 230000000737 periodic effect Effects 0.000 description 1
- 238000002603 single-photon emission computed tomography Methods 0.000 description 1
- 238000003786 synthesis reaction Methods 0.000 description 1
- 230000002194 synthesizing effect Effects 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 230000001960 triggered effect Effects 0.000 description 1
Images
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Detecting organic movements or changes, e.g. tumours, cysts, swellings
- A61B8/0833—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures
- A61B8/085—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/46—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
- A61B8/467—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means
- A61B8/469—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means for selection of a region of interest
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Definitions
- the present invention relates to the field of medical imaging technology, and more particularly to an imaging method and system.
- Medical ultrasound images have been widely used in clinical practice because of their non-invasive, low-cost, real-time image display.
- Specifically, medical ultrasound imaging uses ultrasonic echo signals to detect tissue structural information and displays that structural information in real time through two-dimensional images, so that the doctor can identify the structural information in the two-dimensional image as a basis for clinical diagnosis.
- The mainstream medical ultrasound imaging technology is full-area imaging, which uses the same imaging parameters for the entire region within the current imaging range and trades off those parameters so that the image of the whole region is uniform and the whole-region display is as good as possible. However, this technology may not be optimal for the image inside a region of interest, and features in the region of interest cannot be highlighted.
- A partial-image imaging technique has been developed on top of the full-area imaging technique; it obtains an image of the region of interest so that the region of interest can be highlighted through that image.
- One approach is to use different imaging parameters within and outside of the region of interest.
- In this approach, the region of interest is not imaged with the imaging parameters used outside it, so during image synthesis the image inside the region of interest cannot be blended with the image outside it, and the transition between the region of interest and its surroundings is therefore poor.
- Another approach is to optimize the image within the region of interest after the region of interest has been delineated. In this method, however, the image outside the region of interest is in a frozen state, which is inconsistent with the real-time image inside the region of interest and likewise makes the transition between the image inside and outside the region of interest poor.
- the present invention provides an imaging method and system that can improve the transition effect between the region of interest and the outside of the region of interest.
- An imaging method may include: acquiring an initial image of the object to be imaged; acquiring a region of interest of the object to be imaged based on the initial image; scanning and imaging the region of interest based on a first imaging parameter to obtain a first imaged image; scanning and imaging the entire region of the object to be imaged based on a second imaging parameter to obtain a second imaged image, wherein the first imaging parameter and the second imaging parameter are at least partially different; and fusing the first imaged image and the second imaged image to obtain an imaged image of the object to be imaged.
- Acquiring the region of interest of the current object to be imaged based on the initial image may include: acquiring a region of interest that the operator specifies on the initial image using a human-computer interaction interface; or acquiring an image type of the initial image specified by the operator and matching the initial image with a corresponding first sample image based on that image type to obtain the region of interest; or acquiring the region of interest based on the initial image by an image recognition method.
- Acquiring the region of interest based on the initial image by the image recognition method may include: acquiring the image type of the initial image and matching the initial image with the corresponding first sample image based on the image type to obtain the region of interest; or acquiring a motion feature in the initial image, segmenting the initial image based on the motion feature to obtain a motion region of the initial image, and determining the region of interest based on the motion region.
- Acquiring the image type of the initial image may include: acquiring the image type of the initial image specified by the operator; or performing feature extraction on the initial image to obtain features of the initial image and matching the features of the initial image with features of a second sample image to obtain the image type of the initial image.
- The first imaging parameter and the second imaging parameter may each be at least one of a transmission frequency, a transmission voltage, a line density, a number of foci, a focus position, a speckle noise suppression parameter, and an image enhancement parameter.
- Fusing the first imaged image and the second imaged image to obtain the imaged image of the object to be imaged may include: acquiring a first fusion parameter of the first imaged image and a second fusion parameter of the second imaged image, and fusing the first imaged image and the second imaged image based on the first fusion parameter and the second fusion parameter to obtain the imaged image of the object to be imaged.
- The imaging system may include: a scanning device that scans an object to be imaged to acquire image data of the object to be imaged; and a processor configured to: acquire an initial image of the object to be imaged; acquire a region of interest of the object to be imaged based on the initial image; control the scanning device to scan and image the region of interest based on a first imaging parameter to obtain a first imaged image; control the scanning device to scan and image the entire region of the object to be imaged based on a second imaging parameter to obtain a second imaged image, wherein the first imaging parameter and the second imaging parameter are different; and fuse the first imaged image and the second imaged image to obtain an imaged image of the object to be imaged.
- The processor acquiring the region of interest of the current object to be imaged based on the initial image may include: the processor acquiring a region of interest that the operator specifies on the initial image using a human-computer interaction interface; or the processor acquiring an image type of the initial image specified by the operator and matching the initial image with a corresponding first sample image based on that image type to obtain the region of interest; or the processor acquiring the region of interest based on the initial image by an image recognition method.
- The processor acquiring the region of interest based on the initial image by the image recognition method may include: the processor acquiring the image type of the initial image and matching the initial image with the corresponding first sample image based on the image type to obtain the region of interest; or the processor acquiring a motion feature in the initial image, segmenting the initial image based on the motion feature to obtain a motion region of the initial image, and determining the region of interest based on the motion region.
- The processor acquiring the image type of the initial image may include: the processor acquiring the image type of the initial image specified by the operator; or the processor performing feature extraction on the initial image to obtain features of the initial image and matching the features of the initial image with features of a second sample image to obtain the image type of the initial image.
- The first imaging parameter and the second imaging parameter may each be at least one of a transmission frequency, a transmission voltage, a line density, a number of foci, a focus position, a speckle noise suppression parameter, and an image enhancement parameter.
- With the first imaging parameter and the second imaging parameter, the region of interest and the entire region of the object to be imaged can be scanned and imaged separately to obtain the first imaged image of the region of interest and the second imaged image of the entire region, so that during the fusion of the first imaged image and the second imaged image more image information from the second imaged image can be blended in at the edge of the first imaged image. This gives a smooth transition between the region of interest and the area outside it, improving the transition effect and keeping the overall appearance of the fused imaged image visually consistent.
- the first imaging parameter and the second imaging parameter are different, such that in the fusion process, the first imaging image may use image information of the region corresponding to the region of interest in the second imaging image to enhance the image quality of the region of interest.
- In addition, the second imaged image corresponds to the entire region of the object to be imaged and has a regular shape, so controlling the second imaging parameter is relatively simple compared with controlling parameters for an irregular shape; further, because the second imaged image covers the entire region, the image outside the region of interest is also displayed in real time, giving a real-time display of the whole region.
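The flow summarized above can be sketched as below. This is illustrative Python pseudocode only; the callables `acquire_initial`, `locate_roi`, `scan` and `fuse` are hypothetical placeholders standing in for the steps of the method, not functions of any actual imaging API.

```python
def image_object(acquire_initial, locate_roi, scan, fuse, roi_params, full_params):
    """Dual-scan imaging flow: survey image -> ROI -> two scans -> weighted fusion.

    All callables are hypothetical placeholders for the steps described above.
    """
    initial = acquire_initial()            # full-area survey image of the object
    roi = locate_roi(initial)              # manual, semi-automatic or automatic
    first = scan(roi, roi_params)          # ROI scanned with its own (optimized) parameters
    second = scan(None, full_params)       # entire region, including the ROI, regular shape
    return fuse(first, second, roi)        # blend the two images; see the fusion sketch below
```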
- FIG. 1 is a flowchart of an imaging method according to an embodiment of the present invention
- FIG. 2 is a schematic diagram of optimization of a first imaging parameter according to an embodiment of the present invention.
- FIG. 3 is another schematic diagram of optimization of a first imaging parameter according to an embodiment of the present invention.
- FIG. 4 is still another schematic diagram of optimization of a first imaging parameter according to an embodiment of the present invention.
- FIG. 5 is a schematic structural diagram of an ultrasound imaging system according to an embodiment of the present invention.
- The imaging method and system provided by the embodiments of the present invention acquire a first imaged image corresponding to the region of interest of the object to be imaged and a second imaged image corresponding to the entire region, and then fuse the first imaged image and the second imaged image to enhance the transition effect between the portion inside the region of interest and the portion outside it in the resulting image.
- An ultrasound imaging system is used herein as a specific example, but the invention is not limited to ultrasound imaging systems and can also be used in other medical imaging systems, such as X-ray imaging systems, magnetic resonance imaging (MRI) systems, positron emission computed tomography (PET) systems, or single-photon emission computed tomography (SPECT) systems, and so on.
- Embodiments of the present invention provide an imaging system and a corresponding imaging method.
- The imaging system can include a scanning device and a processor.
- the scanning device can scan the object to be imaged to obtain image data of the object to be imaged.
- In an ultrasound imaging system, the scanning device is a probe; in other imaging systems, the scanning device is the corresponding device used to acquire the images of that system.
- the processor can control the scanning device or imaging system to implement the imaging method of the embodiments of the invention described in detail below.
- Herein, "image data" is used to describe the data obtained by the scanning device; it may also include data received after scanning, whether unprocessed or processed, that has not yet been formed into an image. For ultrasound, the image data includes the ultrasound echo data obtained from the echoes received by the probe, the radio frequency data after certain processing, or the image data after the ultrasound image has been formed.
- An ultrasound imaging system may include: a probe 1, a transmitting circuit 2, a transmit/receive selection switch 3, a receiving circuit 4, a beamforming module 5, a processor 6, and a display 7.
- the transmitting circuit 2 transmits the delayed-focused ultrasonic pulse having a certain amplitude and polarity to the probe 1 through the transmitting/receiving selection switch 3.
- the probe 1 is excited by the ultrasonic pulse to emit ultrasonic waves to a target area (not shown) of the body to be tested, and receives an ultrasonic echo with tissue information reflected from the target area after a certain delay.
- the ultrasonic echo is reconverted into an electrical signal.
- The receiving circuit 4 receives the electrical signals generated by the probe 1 to obtain ultrasonic echo signals and sends them to the beamforming module 5. The beamforming module 5 performs processing such as focus delay, weighting, and channel summation on the ultrasonic echo signals to obtain radio frequency signals, which can be sent to the processor 6 for related processing.
- the ultrasonic image obtained by the processing of the processor 6 is sent to the display 7 for display.
- the processor 6 can also implement the imaging method provided by the embodiment of the present invention.
- the ultrasound imaging system will be described in detail below with reference to the accompanying drawings.
- FIG. 1 is a flowchart of an imaging method provided by an embodiment of the present invention, which may include the following steps:
- the processor acquires a region of interest of an object to be imaged currently.
- The region of interest may be any region of the object to be imaged that the operator (eg, a doctor or another user of the ultrasound imaging device) is interested in, such as an area suspected of containing a fine structural lesion; the structural information in this region can serve as a basis for clinical diagnosis.
- An image of the current object to be imaged (herein referred to as an "initial image") may be obtained first, and the region of interest of the object to be imaged is then obtained from it. The word "initial" here refers only to this image being acquired before the region of interest is determined, in terms of the action or step of acquiring the region of interest; it does not carry any other specific meaning of being the start of the overall imaging process.
- For example, an imaging system (eg, an ultrasound imaging system) can be used to image the object to be imaged (eg, using the full-area image imaging method described above) to obtain a full-area ultrasound image of the object to be imaged (ie, the initial image), and the region of interest of the object to be imaged is then obtained based on this full-area initial image.
- a full-area ultrasound image may mean that the ultrasound image contains all of the area of the object to be imaged.
- the "object to be imaged” as used herein may be one or more organs or regions of a human or animal that are currently or will be ultrasound scanned.
- the manner of obtaining the region of interest includes, but is not limited to, three modes: an operator manual designation mode, a semi-automatic mode, and an automatic mode.
- the following three modes are introduced one by one.
- the processor can acquire the region of interest specified by the operator in the human-computer interaction interface, that is, the operator manually specifies the region of interest of the object to be imaged.
- For example, the initial image of the object to be imaged described above is displayed in the human-computer interaction interface of the ultrasound imaging apparatus, on which an input device such as a trackball is mounted. A sampling frame is displayed on the initial image of the object to be imaged, and by operating the trackball the position of the center point of the sampling frame and/or the size of the sampling frame can be changed. For example, scrolling the trackball horizontally changes the horizontal size of the sampling frame, scrolling it vertically changes the longitudinal size, and so on. Besides changing the size of the sampling frame, the position of its center point can also be changed, and switching between adjusting the position of the center point and adjusting the size can be done by operating the buttons on the trackball.
- the area within the sampling frame is the area of interest.
- Semi-automatic mode: this mode combines manual operation by the operator with image recognition technology.
- The process may be as follows: the processor acquires the image type, specified by the operator, of the initial image of the object to be imaged, and matches the initial image of the object to be imaged with the corresponding first sample image based on that image type to obtain the region of interest.
- The image type indicates which type of image the initial image of the current object to be imaged belongs to, such as a liver image, a kidney image, a heart image, an obstetric cerebellum image, and so on. After the image type is acquired, the target that the operator is interested in within the initial image of the object to be imaged, ie the above-mentioned region of interest, may be determined according to the image type.
- The operator selects an examination mode, ie which organ is scanned; if the object to be imaged is the liver, the liver mode is selected as the examination mode. In some embodiments the examination mode can therefore be used to indicate the image type of the initial image of the object to be imaged.
- the initial image of the object to be imaged may be matched with the corresponding first sample image based on the image type to obtain a region of interest.
- The corresponding first sample image may be a sample image having the same image type as the initial image of the object to be imaged. The sample image may be obtained offline, or the imaging system may scan and acquire multiple samples of the same image type; a template image of these samples serves as a reference and is matched against the initial image of the object to be imaged to obtain the region of interest.
- The process of matching the initial image of the object to be imaged with the corresponding first sample image to obtain the region of interest may be as follows: traverse the initial image of the object to be imaged; at each traversed position, select a block of the same size as the sample image centered on the current position, and compute the similarity between the selected block and the first sample image; the center point of the block with the best similarity is selected as the best matching position, and the region of interest is then delineated from that best matching position. The similarity calculation may use the SAD method (Sum of Absolute Differences), the correlation coefficient method, or other suitable methods.
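A minimal sketch of the block matching just described, assuming grayscale NumPy arrays and using SAD as the similarity measure (a correlation coefficient could be substituted); the brute-force traversal is for illustration only.

```python
import numpy as np

def locate_best_match(initial_image, sample_image):
    """Traverse the initial image, compare each sample-sized block with the
    first sample image using SAD, and return the top-left corner of the block
    with the smallest SAD (the best matching position)."""
    ih, iw = initial_image.shape
    th, tw = sample_image.shape
    template = sample_image.astype(np.float64)
    best_pos, best_sad = (0, 0), np.inf
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            block = initial_image[y:y + th, x:x + tw].astype(np.float64)
            sad = np.abs(block - template).sum()   # Sum of Absolute Differences
            if sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos  # the region of interest is the sample-sized box anchored here
```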
- Automatic mode: this mode acquires the region of interest through image recognition technology.
- the manner of acquiring the region of interest by the image recognition method may include, but is not limited to, the following two modes:
- One way is to perform feature extraction on the initial image of the object to be imaged to obtain features of the initial image, match those features with the features of a second sample image to obtain the image type of the initial image of the object to be imaged, and then match the initial image of the object to be imaged with the corresponding first sample image based on the obtained image type to obtain the region of interest.
- For the process of matching the initial image of the object to be imaged with the corresponding first sample image based on the image type to obtain the region of interest, reference can be made to the specific implementation in the semi-automatic mode described above, which is not repeated here.
- The above process of obtaining the image type based on feature matching can be regarded as automatic acquisition of the image type. Compared with the operator-specified manner, automatic acquisition can further refine the image type of the initial image of the object to be imaged, determining which type of image of which examination subject it belongs to, for example which type of obstetric image or which type of cardiac image.
- the refined image type of the initial image of the object to be imaged can be determined by matching the features of the second sample image, and the matching process can be as follows:
- Step 11, feature extraction. Here, a feature is a general term for the various attributes that can distinguish the initial image of the object to be imaged from other images. Feature extraction is performed on each of the second sample images so that the features of the second sample images serve as reference features for matching the initial image of the object to be imaged, and feature extraction of the initial image of the object to be imaged is performed with the same feature extraction method as used for the second sample images, obtaining the features of the initial image of the object to be imaged.
- The feature extraction method may use image processing operators to extract features, such as the Sobel operator, Canny operator, Roberts operator and SIFT operator, or the features of the image may be extracted automatically by machine learning methods, such as PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis) and deep learning.
- Step 12, feature matching. After the features of the initial image of the object to be imaged are obtained, their similarity to the features of each second sample image in the training sample library may be computed one by one, and the image type of the second sample image with the most similar features is selected as the image type of the initial image of the object to be imaged. The feature similarity may be measured with the SAD algorithm (the smaller the SAD value, the more similar), or with the correlation coefficient of the two groups of features (the larger the coefficient, the more similar), or other suitable methods may be used.
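As a rough illustration of Step 12, the sketch below picks the image type whose reference feature vector is most similar to the extracted features, using SAD as the similarity measure; the feature extraction itself (Sobel, PCA, etc.) is assumed to have been done elsewhere, and all names here are illustrative rather than taken from the patent.

```python
import numpy as np

def classify_image_type(features, sample_features):
    """Return the image type of the second sample whose feature vector is most
    similar to `features` (smaller SAD = more similar).  `sample_features`
    maps an image type name to a reference feature vector extracted with the
    same method as `features`."""
    query = np.asarray(features, dtype=np.float64)
    best_type, best_sad = None, np.inf
    for image_type, ref in sample_features.items():
        sad = np.abs(query - np.asarray(ref, dtype=np.float64)).sum()
        if sad < best_sad:
            best_sad, best_type = sad, image_type
    return best_type
```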
- the method of acquiring the region of interest by the image recognition technology described above is applicable to various image types.
- For an object to be imaged that moves periodically in the time dimension, the motion region in the initial image may be the region of interest. Therefore, when the image type of the initial image indicates that the object to be imaged is such a periodically moving object, the process of acquiring the region of interest by the image recognition technique may be as follows:
- Step 21, acquire the motion feature of the initial image of the object to be imaged. The motion feature may be acquired with a frame difference method: for example, the image information of the previous frame, or of the previous several frames, may be directly subtracted from the image information of the current frame to extract the motion feature of the current frame.
- Alternatively, optical flow (OF) or Gaussian mixture model (GMM) based methods may be used to extract the motion feature.
- Step 22, segment the initial image of the object to be imaged based on the motion feature to obtain the motion region in the initial image. After the motion feature is obtained, threshold segmentation and morphological processing may be used to segment out the motion region.
- Step 23, determine the region of interest based on the motion region. After the motion region is segmented out, it can be used to locate the region of interest.
- The region of interest in embodiments of the present invention may be rectangular (eg, when the imaging system is an ultrasound imaging system using a linear array probe) or sector-shaped (eg, in imaging systems using convex or phased array probes).
- One region-of-interest localization method is to fit a regular region of interest that contains the entire motion region; the fitting may compute the circumscribed rectangle or sector of the motion region, estimate a rectangular fit by least squares, or use other suitable fitting methods.
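Steps 21 to 23 can be sketched as below, assuming grayscale NumPy frames; the frame-difference threshold value and the omission of morphological clean-up are illustrative simplifications, and the ROI is fitted here as the circumscribed (axis-aligned) rectangle of the motion region.

```python
import numpy as np

def motion_region_of_interest(prev_frame, curr_frame, threshold=20):
    """Frame difference -> threshold segmentation -> circumscribed rectangle.

    Returns the ROI as (top, left, bottom, right), or None if no pixel of the
    absolute frame difference exceeds the (illustrative) threshold."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    mask = diff > threshold            # simple threshold segmentation of the motion feature
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                    # no motion detected in this frame
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())
```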
- the above-mentioned region of interest localization method is also suitable for the semi-automatic mode.
- a semi-automatic method is: narrowing the positioning range based on the operator's input, and then using the automatic positioning method to locate the final region of interest within the reduced range.
- The purpose of narrowing the positioning range is to improve positioning efficiency and accuracy. The range may be narrowed, for example, by the operator drawing at least one point on the motion area to indicate the rough extent of the region of interest, or automatically according to the operator's input information. Another semi-automatic mode is that the operator draws at least one point on the motion area to locate an initial region of interest.
- The automatic or semi-automatic positioning methods described above change the position and size of the region-of-interest frame according to the image content.
- The region-of-interest positioning described above may be performed in real time on the initial image of each object to be imaged so that the region of interest changes in real time, or it may be performed at intervals, or only after being triggered, for example by the operator pressing a button. Even for a system that needs to monitor the region of interest in real time, the positioning of the region of interest may be done in real time while the image type is judged at intervals or after a trigger, and the image type may be obtained either by operator specification or by the feature matching method.
- the processor controls the scanning device to perform scanning imaging on the region of interest based on the first imaging parameter to obtain a first imaging image.
- the processor controls the scanning device to perform scanning imaging on the entire area of the imaged object based on the second imaging parameter to obtain a second imaged image.
- The "entire region" that serves as the scanning target when scanning imaging is performed using the second imaging parameter is the whole region of the current object to be imaged, including the aforementioned region of interest; that is, the region scanned with the second imaging parameter (the imaging region at this time) includes the region of interest itself in addition to the area outside it. Accordingly, the obtained second imaged image is an image of the entire region of the object to be imaged including the region of interest, not an image of only the area outside the region of interest.
- The first imaging parameter and the second imaging parameter are different. The difference may be that the first imaging parameter and the second imaging parameter are parameters of the same type with different values; or that they are parameters of different types, for example the first imaging parameter includes parameter A and parameter B while the second imaging parameter includes parameter C and parameter D; or that the first imaging parameter contains the second imaging parameter, for example the first imaging parameter includes parameter A and parameter B while the second imaging parameter includes only parameter A.
- The first imaging parameter and the second imaging parameter may each be at least one of: a transmission frequency, a transmission voltage, a line density, a number of foci, a focus position, a speckle noise suppression parameter, and an image enhancement parameter.
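Purely for illustration, the two parameter sets might be held in a small structure like the one below; the field names and values are hypothetical and are not the parameter names of any particular ultrasound system, the point being only that the two sets are at least partially different.

```python
from dataclasses import dataclass, asdict

@dataclass
class ImagingParams:
    transmit_frequency_mhz: float   # transmission frequency
    transmit_voltage_v: float       # transmission voltage
    line_density: int               # scan lines per frame
    focus_count: int                # number of foci

# Hypothetical example: a denser, higher-frequency configuration for the ROI scan
# and a more conservative one for the full-area scan.
roi_params = ImagingParams(7.5, 60.0, 256, 4)     # first imaging parameter
full_params = ImagingParams(3.5, 40.0, 128, 1)    # second imaging parameter

assert asdict(roi_params) != asdict(full_params)  # "at least partially different"
```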
- Because a two-pass imaging mode covering the entire region and the region of interest is adopted, the first imaging parameter can be optimized according to the size and position of the region of interest during the scanning imaging of the region of interest, so as to optimize the image of the region of interest.
- For example, optimizing the transmission frequency within the region of interest means that the region of interest is no longer limited by the transmission frequency used when imaging the entire region: the transmission frequency can be increased when scanning and imaging the region of interest, thereby increasing the resolution of the first imaged image, or it can be reduced, thereby improving the penetration of the first imaged image.
- The transmission voltage can also be optimized: for example, a lower transmission voltage is used for full-area scanning imaging and a higher transmission voltage is used when scanning and imaging the region of interest, as shown in FIG. 2, thereby improving the image quality in the region of interest while the transmit power still satisfies the sound field limits of the ultrasound system.
- For example, one limit index of the ultrasound system sound field, the spatial-peak temporal-average intensity Ispta, must be 480 mW/cm² or less.
- The line density, the number of foci and the scanning frame rate of the ultrasound system constrain one another; because the region of interest is only part of the whole region, the line density or the number of foci within it can be increased to improve the image quality, as shown in FIG. 3 or FIG. 4, where the horizontal axis in FIGS. 2 to 4 represents the probe position in the ultrasound system.
- The embodiments of the present invention do not enumerate every case one by one; in addition to the above parameters, the first imaging parameter and the second imaging parameter may also involve the transmit aperture, transmit waveform, spatial compounding, frequency compounding, line compounding, frame correlation, and so on. By optimizing the first imaging parameter corresponding to the region of interest, a first imaged image of better quality is obtained.
- the aforementioned first parameter and second parameter may be set accordingly.
- the processor fuses the first imaging image and the second imaging image to obtain an imaging image of the object to be imaged.
- The fusion process may be: acquiring a first fusion parameter of the first imaged image and a second fusion parameter of the second imaged image, and then fusing the first imaged image and the second imaged image based on the first fusion parameter and the second fusion parameter to obtain the imaged image of the object to be imaged.
- the first fusion parameter and the second fusion parameter may be set according to actual conditions.
- For example, the first fusion parameter α and the second fusion parameter β may be chosen such that α + β > 1, which increases the overall brightness level of the image output after fusion; in other embodiments, the fusion parameters may be set in other ways.
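The text does not spell out the fusion formula explicitly, but the per-pixel weighting it describes amounts to a relation of the following form, where I1 and I2 are the first and second imaged images and α and β are the first and second fusion parameters (this rendering is an editorial reconstruction, not a quotation of the patent):

$$I_{\text{fused}}(x, y) = \alpha(x, y)\, I_{1}(x, y) + \beta(x, y)\, I_{2}(x, y)$$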
- The values of the first fusion parameter α and the second fusion parameter β are not fixed, and may differ for each pixel in the image, for each position in the image, and with the generation time of the image.
- For example, the values of the first fusion parameter and the second fusion parameter may be not less than 0; the value of the corresponding fusion parameter may also be less than 0, for example to lower the gray values of pixels in the first imaged image or the second imaged image; and the values of the first fusion parameter and the second fusion parameter may differ from each other.
- The first fusion parameter α corresponding to each position in the first imaged image may differ, and the second fusion parameter β corresponding to each position in the second imaged image may also differ. For example, the edge of the region of interest needs to blend in more image information from the second imaged image, so the value of the second fusion parameter β at the edge of the region of interest may be greater than its value at other positions.
- The first imaged image and the second imaged image are real-time images that may change over time, so the values of the first fusion parameter α and the second fusion parameter β may also differ over time.
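A minimal fusion sketch under stated assumptions: grayscale NumPy arrays, the first imaged image already embedded in a full-size frame, α + β = 1 at every pixel for simplicity (the description above also allows α + β > 1), and a simple box-blur feathering near the ROI border so that more of the second image is blended in at the edge of the region of interest.

```python
import numpy as np

def fuse_images(first, second, roi, feather=16):
    """Pixel-wise weighted fusion: fused = alpha * first + beta * second.

    `first`  - ROI image embedded in a full-size frame (values far outside the
               ROI receive near-zero weight)
    `second` - full-area image of the same shape
    `roi`    - (top, left, bottom, right) of the region of interest
    Inside the ROI alpha dominates; near the ROI border alpha is softened so
    beta (the weight of the full-area image) rises, giving a smooth transition.
    """
    h, w = second.shape
    top, left, bottom, right = roi
    alpha = np.zeros((h, w), dtype=np.float64)
    alpha[top:bottom, left:right] = 1.0

    # Box-blur feathering of the alpha mask (illustrative; any smooth ramp works).
    k = feather
    padded = np.pad(alpha, k, mode="edge")
    soft = np.zeros_like(alpha)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            soft += padded[k + dy:k + dy + h, k + dx:k + dx + w]
    alpha = soft / ((2 * k + 1) ** 2)

    beta = 1.0 - alpha                 # alpha + beta = 1 at every pixel in this sketch
    return alpha * first + beta * second
```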
- In summary, the imaging method provided by the embodiments of the present invention scans and images the region of interest of the object to be imaged with the first imaging parameter and the entire region with the second imaging parameter, obtaining the first imaged image of the region of interest and the second imaged image of the entire region, so that the imaging parameters can be set in a targeted manner for the region of interest to specifically enhance and optimize the desired aspects of the image of the region of interest. During fusion, more image information from the second imaged image can be blended in at the edge of the first imaged image, so that there is a smooth transition between the region of interest and the area outside it, improving the transition effect and keeping the overall appearance of the fused imaged image visually consistent.
- the first imaging parameter and the second imaging parameter are different, such that in the fusion process, the first imaging image may use image information of the region corresponding to the region of interest in the second imaging image to enhance the image quality of the region of interest.
- In addition, the second imaged image corresponds to the entire region of the object to be imaged and has a regular shape, so controlling the second imaging parameter is relatively simple compared with controlling parameters for an irregular shape; further, because the second imaged image covers the entire region, the image outside the region of interest is also displayed in real time, giving a real-time display of the whole region.
- The processor in the above embodiments can be implemented by software together with the necessary hardware platform (for example, a microprocessor, a microcontroller, a programmable logic device, an application-specific integrated circuit, or the like), or by hardware or firmware alone. The technical solution of the present invention, in essence or in the part that contributes over the prior art, can also be embodied in the form of a software product carried on a non-transitory computer-readable storage carrier (such as a ROM, a magnetic disk, an optical disc, or server cloud space), which includes a number of instructions for enabling a terminal device (which may be a cell phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Heart & Thoracic Surgery (AREA)
- Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Biomedical Technology (AREA)
- Veterinary Medicine (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Vascular Medicine (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Ultra Sonic Daignosis Equipment (AREA)
Abstract
Description
Claims (12)
- 1. An imaging method, characterized in that the method comprises: acquiring an initial image of an object to be imaged; acquiring a region of interest of the object to be imaged based on the initial image; scanning and imaging the region of interest based on a first imaging parameter to obtain a first imaged image; scanning and imaging the entire region of the object to be imaged based on a second imaging parameter to obtain a second imaged image, wherein the first imaging parameter and the second imaging parameter are at least partially different; and fusing the first imaged image and the second imaged image to obtain an imaged image of the object to be imaged.
- 2. The method according to claim 1, characterized in that acquiring the region of interest of the current object to be imaged based on the initial image comprises: acquiring the region of interest specified by the operator on the initial image using a human-computer interaction interface; or acquiring an image type of the initial image specified by the operator, and matching the initial image with a corresponding first sample image based on the image type to obtain the region of interest; or acquiring the region of interest based on the initial image by an image recognition method.
- 3. The method according to claim 2, characterized in that acquiring the region of interest based on the initial image by the image recognition method comprises: acquiring the image type of the initial image, and matching the initial image with the corresponding first sample image based on the image type to obtain the region of interest; or acquiring a motion feature in the initial image, segmenting the initial image based on the motion feature to obtain a motion region of the initial image, and determining the region of interest based on the motion region.
- 4. The method according to claim 3, characterized in that acquiring the image type of the initial image comprises: acquiring the image type of the initial image specified by the operator; or performing feature extraction on the initial image to obtain features of the initial image, and matching the features of the initial image with features of a second sample image to obtain the image type of the initial image.
- 5. The method according to claim 1, characterized in that the first imaging parameter and the second imaging parameter are each at least one of: a transmission frequency, a transmission voltage, a line density, a number of foci, a focus position, a speckle noise suppression parameter, and an image enhancement parameter.
- 6. The method according to claim 1, characterized in that fusing the first imaged image and the second imaged image to obtain the imaged image of the object to be imaged comprises: acquiring a first fusion parameter of the first imaged image and a second fusion parameter of the second imaged image; and fusing the first imaged image and the second imaged image based on the first fusion parameter and the second fusion parameter to obtain the imaged image of the object to be imaged.
- 7. An imaging system, characterized in that the system comprises: a scanning device that scans an object to be imaged to acquire image data of the object to be imaged; and a processor configured to: acquire an initial image of the object to be imaged; acquire a region of interest of the object to be imaged based on the initial image; control the scanning device to scan and image the region of interest based on a first imaging parameter to obtain a first imaged image; control the scanning device to scan and image the entire region of the object to be imaged based on a second imaging parameter to obtain a second imaged image, wherein the first imaging parameter and the second imaging parameter are different; and fuse the first imaged image and the second imaged image to obtain an imaged image of the object to be imaged.
- 8. The system according to claim 7, characterized in that the processor acquiring the region of interest of the current object to be imaged based on the initial image comprises: the processor acquiring the region of interest specified by the operator on the initial image using a human-computer interaction interface; or the processor acquiring an image type of the initial image specified by the operator, and matching the initial image with a corresponding first sample image based on the image type to obtain the region of interest; or the processor acquiring the region of interest based on the initial image by an image recognition method.
- 9. The system according to claim 8, characterized in that the processor acquiring the region of interest based on the initial image by the image recognition method comprises: the processor acquiring the image type of the initial image, and matching the initial image with the corresponding first sample image based on the image type to obtain the region of interest; or the processor acquiring a motion feature in the initial image, segmenting the initial image based on the motion feature to obtain a motion region of the initial image, and determining the region of interest based on the motion region.
- 10. The system according to claim 9, characterized in that the processor acquiring the image type of the initial image comprises: the processor acquiring the image type of the initial image specified by the operator; or the processor performing feature extraction on the initial image to obtain features of the initial image, and matching the features of the initial image with features of a second sample image to obtain the image type of the initial image.
- 11. The system according to claim 7, characterized in that the first imaging parameter and the second imaging parameter are each at least one of: a transmission frequency, a transmission voltage, a line density, a number of foci, a focus position, a speckle noise suppression parameter, and an image enhancement parameter.
- 12. The system according to claim 7, characterized in that the processor fusing the first imaged image and the second imaged image to obtain the imaged image of the object to be imaged comprises: the processor acquiring a first fusion parameter of the first imaged image and a second fusion parameter of the second imaged image, and fusing the first imaged image and the second imaged image based on the first fusion parameter and the second fusion parameter to obtain the imaged image of the object to be imaged.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/101313 WO2018058632A1 (zh) | 2016-09-30 | 2016-09-30 | 一种成像方法和系统 |
CN201680086564.9A CN109310388B (zh) | 2016-09-30 | 2016-09-30 | 一种成像方法和系统 |
CN202111095357.9A CN114224386A (zh) | 2016-09-30 | 2016-09-30 | 一种成像方法和系统 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/101313 WO2018058632A1 (zh) | 2016-09-30 | 2016-09-30 | 一种成像方法和系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018058632A1 true WO2018058632A1 (zh) | 2018-04-05 |
Family
ID=61762392
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/101313 WO2018058632A1 (zh) | 2016-09-30 | 2016-09-30 | 一种成像方法和系统 |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN109310388B (zh) |
WO (1) | WO2018058632A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020132953A1 (zh) * | 2018-12-26 | 2020-07-02 | 深圳迈瑞生物医疗电子股份有限公司 | 一种成像方法及超声成像设备 |
CN111407317A (zh) * | 2019-01-08 | 2020-07-14 | 深圳迈瑞生物医疗电子股份有限公司 | 执行超声成像的方法和系统 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100686289B1 (ko) * | 2004-04-01 | 2007-02-23 | 주식회사 메디슨 | 대상체 영상의 윤곽내 볼륨 데이터를 이용하는 3차원초음파 영상 형성 장치 및 방법 |
US8090429B2 (en) * | 2004-06-30 | 2012-01-03 | Siemens Medical Solutions Usa, Inc. | Systems and methods for localized image registration and fusion |
US7372934B2 (en) * | 2005-12-22 | 2008-05-13 | General Electric Company | Method for performing image reconstruction using hybrid computed tomography detectors |
CN101405619B (zh) * | 2006-03-16 | 2013-01-02 | 皇家飞利浦电子股份有限公司 | 计算机断层造影数据采集装置和方法 |
CN101053531A (zh) * | 2007-05-17 | 2007-10-17 | 上海交通大学 | 基于多模式增敏成像融合的早期肿瘤定位跟踪方法 |
US7973834B2 (en) * | 2007-09-24 | 2011-07-05 | Jianwen Yang | Electro-optical foveated imaging and tracking system |
CN201854353U (zh) * | 2010-10-13 | 2011-06-01 | 山东神戎电子股份有限公司 | 多光谱图像融合摄像机 |
JP6000569B2 (ja) * | 2011-04-01 | 2016-09-28 | 東芝メディカルシステムズ株式会社 | 超音波診断装置及び制御プログラム |
US9557415B2 (en) * | 2014-01-20 | 2017-01-31 | Northrop Grumman Systems Corporation | Enhanced imaging system |
CN103971100A (zh) * | 2014-05-21 | 2014-08-06 | 国家电网公司 | 基于视频并针对自动提款机的伪装与偷窥行为的检测方法 |
2016
- 2016-09-30 CN CN201680086564.9A patent/CN109310388B/zh active Active
- 2016-09-30 CN CN202111095357.9A patent/CN114224386A/zh active Pending
- 2016-09-30 WO PCT/CN2016/101313 patent/WO2018058632A1/zh active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5987345A (en) * | 1996-11-29 | 1999-11-16 | Arch Development Corporation | Method and system for displaying medical images |
CN1864633A (zh) * | 2005-05-19 | 2006-11-22 | 西门子公司 | 扩大对象区域的立体图像的显示范围的方法 |
CN1864634A (zh) * | 2005-05-19 | 2006-11-22 | 西门子公司 | 扩大对象区域的二维图像的显示范围的方法 |
CN104780845A (zh) * | 2012-11-06 | 2015-07-15 | 皇家飞利浦有限公司 | 增强超声图像 |
CN103913472A (zh) * | 2012-12-31 | 2014-07-09 | 同方威视技术股份有限公司 | Ct成像系统和方法 |
CN104382616A (zh) * | 2014-09-28 | 2015-03-04 | 安华亿能医疗影像科技(北京)有限公司 | 颈动脉三维图像构建装置 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020132953A1 (zh) * | 2018-12-26 | 2020-07-02 | 深圳迈瑞生物医疗电子股份有限公司 | 一种成像方法及超声成像设备 |
CN112654298A (zh) * | 2018-12-26 | 2021-04-13 | 深圳迈瑞生物医疗电子股份有限公司 | 一种成像方法及超声成像设备 |
CN111407317A (zh) * | 2019-01-08 | 2020-07-14 | 深圳迈瑞生物医疗电子股份有限公司 | 执行超声成像的方法和系统 |
Also Published As
Publication number | Publication date |
---|---|
CN109310388B (zh) | 2022-04-15 |
CN114224386A (zh) | 2022-03-25 |
CN109310388A (zh) | 2019-02-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
RU2667617C2 (ru) | Система и способ эластографических измерений | |
WO2017206023A1 (zh) | 一种心脏容积识别分析***和方法 | |
US11100665B2 (en) | Anatomical measurements from ultrasound data | |
JPWO2010116965A1 (ja) | 医用画像診断装置、関心領域設定方法、医用画像処理装置、及び関心領域設定プログラム | |
US20210393240A1 (en) | Ultrasonic imaging method and device | |
JP2016195764A (ja) | 医用画像処理装置およびプログラム | |
JP2020503099A (ja) | 出産前超音波イメージング | |
JP2005193017A (ja) | ***患部分類の方法及びシステム | |
US12026886B2 (en) | Method and system for automatically estimating a hepatorenal index from ultrasound images | |
US20200015785A1 (en) | Volume rendered ultrasound imaging | |
US20240041431A1 (en) | Ultrasound imaging method and system | |
JP2011120901A (ja) | 超音波空間合成映像を提供する超音波システムおよび方法 | |
EP3820374B1 (en) | Methods and systems for performing fetal weight estimations | |
WO2018058632A1 (zh) | 一种成像方法和*** | |
KR20120102447A (ko) | 진단장치 및 방법 | |
US11717268B2 (en) | Ultrasound imaging system and method for compounding 3D images via stitching based on point distances | |
WO2019130636A1 (ja) | 超音波撮像装置、画像処理装置、及び方法 | |
WO2020132953A1 (zh) | 一种成像方法及超声成像设备 | |
US20220039773A1 (en) | Systems and methods for tracking a tool in an ultrasound image | |
CN112294361A (zh) | 一种超声成像设备、盆底的切面图像生成方法 | |
CN112672696A (zh) | 用于跟踪超声图像中的工具的***和方法 | |
US11382595B2 (en) | Methods and systems for automated heart rate measurement for ultrasound motion modes | |
EP4390841A1 (en) | Image acquisition method | |
JP7299100B2 (ja) | 超音波診断装置及び超音波画像処理方法 | |
EP4014884A1 (en) | Apparatus for use in analysing an ultrasound image of a subject |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16917382; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16917382; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC, EPO FORM 1205A DATED 10/09/19 |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16917382; Country of ref document: EP; Kind code of ref document: A1 |