CN108335336A - Ultrasonic imaging method and device - Google Patents

Ultrasonic imaging method and device

Info

Publication number
CN108335336A
Authority
CN
China
Prior art keywords
face
image
volume data
processor
visual image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710042667.1A
Other languages
Chinese (zh)
Other versions
CN108335336B (en)
Inventor
刘俞辰
丁浩
魏芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Emperor Electronic Tech Co Ltd
Original Assignee
Shenzhen Emperor Electronic Tech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Emperor Electronic Tech Co Ltd
Priority to CN201710042667.1A
Publication of CN108335336A
Application granted
Publication of CN108335336B
Active legal status
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/001 - Texturing; Colouring; Generation of texture or colour
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/08 - Volume rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The present invention relates to an ultrasonic imaging method and device. The device includes a storage module for storing ultrasonic volume data and generating a copy of the ultrasonic volume data; a first processor for obtaining the copy of the ultrasonic volume data and converting it into a three-dimensional visual image; a first image output module for outputting the three-dimensional visual image; a second processor for obtaining the ultrasonic volume data and converting it into a two-dimensional visual image; and a second image output module for outputting the two-dimensional visual image. With the above ultrasonic imaging method and device, the acquired ultrasonic volume data is processed to obtain a two-dimensional visual image for professionals to examine, while the copy of the acquired ultrasonic volume data is processed to obtain a three-dimensional visual image for ordinary users, making the image formed by ultrasonic imaging more intuitive. Because the three-dimensional visual image is obtained by processing the copy of the ultrasonic volume data, generation of the two-dimensional visual image and of the three-dimensional visual image do not interfere with each other.

Description

Ultrasonic imaging method and device
Technical field
The present invention relates to the field of image processing, and more particularly to an ultrasonic imaging method and device.
Background technology
Ultrasonic imaging is a technique in which an ultrasonic transducer transmits ultrasonic waves into an examined object and receives the echoes returned from it; differences in the acoustic properties of the object's interior are then exploited to depict the internal morphology of the examined object.
However, the images currently formed by ultrasonic imaging are not intuitive enough and can usually be read only by professionally trained personnel, which limits the application of ultrasonic imaging technology.
Summary of the invention
In view of this, it is necessary to provide an ultrasonic imaging method and device to address the problem that the images currently formed by ultrasonic imaging are not intuitive enough.
An ultrasonic imaging method, the method including:
obtaining ultrasonic volume data;
converting the ultrasonic volume data into a two-dimensional visual image;
outputting the two-dimensional visual image;
obtaining a copy of the ultrasonic volume data;
converting the copy of the ultrasonic volume data into a three-dimensional visual image;
outputting the three-dimensional visual image.
In one of the embodiments, the method further includes:
receiving, through a first control module, a first adjustment command for the three-dimensional visual image;
adjusting, by a first processor, the three-dimensional visual image according to the first adjustment command;
receiving, through a second control module, a second adjustment command for the two-dimensional visual image;
adjusting, by a second processor, the two-dimensional visual image according to the second adjustment command.
In one of the embodiments, the step of converting the copy of the ultrasonic volume data into the three-dimensional visual image includes:
setting a reference rendering plane according to the copy of the ultrasonic volume data;
obtaining a view matrix;
calculating a first rendering plane and a second rendering plane obtained by rotating the reference rendering plane through different angles according to the view matrix;
generating the three-dimensional visual image according to the first rendering plane and the second rendering plane.
In one of the embodiments, the step of generating the three-dimensional visual image according to the first rendering plane and the second rendering plane includes:
fusing the first rendering plane and the second rendering plane according to the following formula to obtain the three-dimensional visual image:
P = ω1·P1 + ω2·P2
where P is the three-dimensional visual image, ωk (k = 1, 2) are the fusion coefficients, P1 is the first rendering plane, and P2 is the second rendering plane.
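The linear weighted fusion above can be sketched in NumPy; the function name, the equal weights and the toy 2x2 planes are illustrative assumptions, not part of the patent:

```python
import numpy as np

def fuse_planes(p1, p2, w1=0.5, w2=0.5):
    """Linear weighted fusion P = w1*P1 + w2*P2 of the two rendering
    planes; the weights here are illustrative and would be tuned to
    the filter glasses in practice."""
    p = w1 * p1.astype(np.float64) + w2 * p2.astype(np.float64)
    return np.clip(np.rint(p), 0, 255).astype(np.uint8)

# Toy 2x2 grayscale "rendering planes"
p1 = np.full((2, 2), 100, dtype=np.uint8)
p2 = np.full((2, 2), 200, dtype=np.uint8)
fused = fuse_planes(p1, p2)
print(int(fused[0, 0]))  # 150 with equal weights
```

With equal coefficients each plane contributes half of the result; the description later explains that the coefficients are tuned to the parameters of the filter glasses.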
In one of the embodiments, the step of generating the three-dimensional visual image according to the first rendering plane and the second rendering plane includes:
obtaining a first color table and a second color table, the first color table and the second color table being complementary in color;
processing the first rendering plane according to the first color table;
processing the second rendering plane according to the second color table;
fusing the processed first rendering plane and the processed second rendering plane according to the following formula to obtain the three-dimensional visual image:
P = ω1·P1 + ω2·P2
where P is the three-dimensional visual image, ωk (k = 1, 2) are the fusion coefficients, P1 is the processed first rendering plane, and P2 is the processed second rendering plane.
In one of the embodiments, the method further includes:
receiving a first adjustment command for the three-dimensional visual image;
adjusting, according to the first adjustment command, the different angles through which the reference rendering plane is rotated according to the view matrix;
updating the first rendering plane and the second rendering plane according to the adjusted angles;
updating the three-dimensional visual image according to the updated first rendering plane and the updated second rendering plane.
An ultrasonic imaging device, the device including:
a storage module for storing ultrasonic volume data and generating a copy of the ultrasonic volume data;
a first processor connected to the storage module, for obtaining the copy of the ultrasonic volume data and converting the copy of the ultrasonic volume data into a three-dimensional visual image;
a first image output module connected to the first processor, for outputting the three-dimensional visual image;
a second processor connected to the storage module, for obtaining the ultrasonic volume data and converting the ultrasonic volume data into a two-dimensional visual image;
a second image output module connected to the second processor, for outputting the two-dimensional visual image.
In one of the embodiments, the device further includes a first control module connected to the first processor, for receiving a first adjustment command for the three-dimensional visual image, the first processor being further configured to adjust the three-dimensional visual image according to the first adjustment command;
and a second control module connected to the second processor, for receiving a second adjustment command that targets the two-dimensional visual image and is asynchronous with the first adjustment command, the second processor being further configured to adjust the two-dimensional visual image according to the second adjustment command.
In one of the embodiments, the first processor is further configured to set a reference rendering plane according to the copy of the ultrasonic volume data; obtain a view matrix; calculate a first rendering plane and a second rendering plane obtained by rotating the reference rendering plane through different angles according to the view matrix; and generate the three-dimensional visual image according to the first rendering plane and the second rendering plane.
In one of the embodiments, the device further includes:
a first control module connected to the first processor, for receiving a first adjustment command for the three-dimensional visual image; the first processor is further configured to adjust, according to the first adjustment command, the different angles through which the reference rendering plane is rotated according to the view matrix; update the first rendering plane and the second rendering plane according to the adjusted angles; and update the three-dimensional visual image according to the updated first rendering plane and the updated second rendering plane.
With the above ultrasonic imaging method and device, on the one hand, the acquired ultrasonic volume data is processed to obtain a two-dimensional visual image for professionals to examine; on the other hand, the copy of the acquired ultrasonic volume data is processed to obtain a three-dimensional visual image for ordinary users, making the image formed by ultrasonic imaging more intuitive. Moreover, since the three-dimensional visual image is obtained by processing the copy of the ultrasonic volume data, generation of the two-dimensional visual image and of the three-dimensional visual image do not interfere with each other.
Description of the drawings
Fig. 1 is a schematic diagram of an ultrasonic imaging device in an embodiment;
Fig. 2 is a schematic diagram of an ultrasonic imaging device in another embodiment;
Fig. 3 is a schematic diagram of the imaging effect of a three-dimensional visual image in an embodiment;
Fig. 4 is a schematic diagram of the imaging effect of a three-dimensional visual image in an embodiment;
Fig. 5 is a schematic diagram of the imaging effect of a three-dimensional visual image in an embodiment;
Fig. 6 is a flow chart of an ultrasonic imaging method in an embodiment;
Fig. 7 is a flow chart of the control steps in an embodiment;
Fig. 8 is a flow chart of step S610 in the embodiment shown in Fig. 6.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present invention, not to limit it.
Before embodiments of the invention are described in detail, it should be noted that the embodiments described mainly concern combinations of ultrasonic imaging method steps and related device and system components. Accordingly, the system components and method steps are represented in the accompanying drawings by conventional symbols where appropriate, and only those details relevant to understanding the embodiments of the present invention are shown, so as not to obscure the disclosure with details that will be apparent to those of ordinary skill in the art having the benefit of this description.
Herein, relational terms such as left and right, up and down, front and rear, and first and second are used merely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between these entities or actions. The terms "include", "comprise" and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device.
Referring to Fig. 1, a schematic diagram of an ultrasonic imaging device in an embodiment, the device may include a storage module 100, a first processor 200, a first image output module 300, a second processor 400 and a second image output module 500, where the first processor 200 and the second processor 400 are each connected to the storage module 100, the first image output module 300 is connected to the first processor 200, and the second image output module 500 is connected to the second processor 400.
Specifically, the storage module 100 may be used to store ultrasonic volume data and to generate a copy of the ultrasonic volume data; the storage module 100 may be a storage device such as flash memory or ROM.
Ultrasonic volume data may refer to data obtained through an ultrasonic probe, which may be connected to the storage module 100 through a connecting line. The ultrasonic probe first emits ultrasonic waves toward a target object, then collects the ultrasonic waves reflected back by the target object, and generates the corresponding ultrasonic volume data from the reflected waves. Before the ultrasonic volume data is stored, it may also be preprocessed, for example filtered, to remove interference from the data.
The copy of the ultrasonic volume data is obtained by duplicating the ultrasonic volume data described above: after the ultrasonic volume data is acquired, it is duplicated to obtain the copy, so the ultrasonic volume data and its copy are identical.
The first processor 200 may be used to obtain the copy of the ultrasonic volume data and convert the copy of the ultrasonic volume data into a three-dimensional visual image.
The first processor 200 may be a digital signal processor, a microcontroller, or the like; preferably, the first processor 200 may be a CPU (central processing unit) together with a GPU (graphics processing unit). The CPU executes serial tasks involving more complex logic, while the GPU executes large numbers of highly parallel computing tasks; this improves the processing speed of the first processor 200 and enhances interactivity. The first processor 200 may be arranged inside a housing so as to be protected.
A three-dimensional visual image may be an image with a sense of spatial depth. The copy of the ultrasonic volume data obtained by the first processor 200 can be converted into a three-dimensional visual image. And since the copy of the ultrasonic volume data is identical to the ultrasonic volume data, the first processor 200 could equally perform the conversion on the ultrasonic volume data itself.
The first image output module 300 may be used to output the three-dimensional visual image.
The first image output module 300 may be an ordinary display screen, a touch screen, or display equipment such as a tablet computer display, a computer monitor, a television set or a projector. The first image output module 300 may be arranged on the housing of the above-mentioned first processor 200 so that the first processor 200 and the first image output module 300 are integrated together, or the first image output module 300 may be connected wirelessly to the first processor 200.
After the first processor 200 converts the copy of the ultrasonic volume data into the three-dimensional visual image, it sends the image to the first image output module 300, which can display the three-dimensional visual image.
The second processor 400 may be used to obtain the ultrasonic volume data and convert the ultrasonic volume data into a two-dimensional visual image.
The second processor 400 may be a digital signal processor, a microcontroller, or the like; preferably, the second processor 400 may be a CPU (central processing unit) together with a GPU (graphics processing unit). The CPU executes serial tasks involving more complex logic, while the GPU executes large numbers of highly parallel computing tasks; this improves the processing speed of the second processor 400 and enhances interactivity. The second processor 400 may be arranged inside a housing so as to be protected.
The two-dimensional visual image refers to a planar image. The process of converting the ultrasonic volume data into a two-dimensional visual image may include filtering, color adjustment and feature-value extraction, where the filtering may use Gaussian filtering, mean filtering, median filtering or the like.
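As a sketch of the smoothing step, the following implements one of the options named above, a mean filter, on a toy 2-D slice; the function name, the edge padding and the 3x3 kernel size are illustrative assumptions:

```python
import numpy as np

def mean_filter(img, k=3):
    """Simple k x k mean filter: each output pixel is the average of
    the k x k window around it; borders are handled by edge replication."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode='edge')
    out = np.empty(img.shape, dtype=np.float64)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

slice_2d = np.array([[0., 0., 0.],
                     [0., 9., 0.],
                     [0., 0., 0.]])
print(mean_filter(slice_2d)[1, 1])  # 1.0: the spike is spread over the 3x3 window
```

Gaussian or median filtering would follow the same per-window pattern, only replacing the `mean()` with a weighted sum or a median.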
The second processor 400 can convert the ultrasonic volume data into the two-dimensional visual image; while the first processor 200 operates on the copy of the ultrasonic volume data, converting it into the three-dimensional visual image, the second processor 400 operates on the ultrasonic volume data itself, converting it into the two-dimensional visual image. The reason the first processor 200 and the second processor 400 are arranged to process different data is to ensure that their processing does not interfere with each other.
The second image output module 500 may be used to output the two-dimensional visual image.
The second image output module 500 may be an ordinary display screen, a touch screen, or the like. The second image output module 500 may be arranged on the housing of the above-mentioned second processor 400 so that the second processor 400 and the second image output module 500 are integrated together, or the second image output module 500 may be connected wirelessly to the second processor 400.
After the second processor 400 converts the ultrasonic volume data into the two-dimensional visual image, it sends the image to the second image output module 500, which can display the two-dimensional visual image.
With the above ultrasonic imaging device, on the one hand, the acquired ultrasonic volume data is processed to obtain a two-dimensional visual image for professionals to examine; on the other hand, the copy of the acquired ultrasonic volume data is processed to obtain a three-dimensional visual image for ordinary users, making the image formed by ultrasonic imaging more intuitive. Moreover, since the three-dimensional visual image is obtained by processing the copy of the ultrasonic volume data, generation of the two-dimensional visual image and of the three-dimensional visual image do not interfere with each other.
In one of the embodiments, referring to Fig. 2, a schematic diagram of an ultrasonic imaging device in another embodiment, the ultrasonic imaging device includes not only all the modules of the embodiment shown in Fig. 1, but also a first control module 600 and a second control module 700, where the first control module 600 is connected to the first processor 200 and the second control module 700 is connected to the second processor 400.
The first control module 600 may be used to receive a first adjustment command for the three-dimensional visual image; the first processor 200 may further be used to adjust the three-dimensional visual image according to the first adjustment command.
The first control module 600 may be an operation panel integrated on the housing of the above-mentioned first processor 200, or a control module independent of the first processor 200. The first control module 600 may be connected to the first processor 200 in a wired or wireless manner; for example, it may be connected through a connecting line, or wirelessly via Bluetooth, WiFi or the like. The connection mode between the first control module 600 and the first processor 200 is not limited here.
The first control module 600 may implement multiple functions. For example, it may provide a pair of buttons for zooming in and zooming out; each time such a button is pressed, the first image output module 300 outputs an image of the corresponding size. Specifically, the zoom ratio and its limits can be preset: when a zoom button is pressed, the first control module 600 receives a signal, generally a voltage or current signal, and sends it to the first processor 200, which derives from the preset ratio and limits the zoom ratio corresponding to the signal and scales the three-dimensional visual image accordingly, so that the first image output module 300 outputs an image of the corresponding size.
The first control module 600 may also provide a pair of buttons for adjusting brightness and contrast; each time such a button is pressed, the brightness and contrast of the three-dimensional visual image output by the first image output module 300 are adjusted accordingly. Specifically, when a brightness/contrast button is pressed, the first control module 600 receives a signal, generally a voltage or current signal, from which the first processor 200 obtains a contrast gain parameter and a brightness offset parameter.
The adjusted three-dimensional visual image can then be calculated according to the following formula:
g1(x) = a1 × f1(x) + b1    (1)
where a1 is the contrast gain parameter, b1 is the brightness offset parameter, f1(x) is the three-dimensional visual image before adjustment, and g1(x) is the three-dimensional visual image after adjustment.
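Formula (1) is a per-pixel linear mapping; a minimal NumPy sketch (with an added round-and-clip to the 8-bit range, which the text does not specify) might look like:

```python
import numpy as np

def adjust(image, a, b):
    """Apply g(x) = a*f(x) + b per pixel: a is the contrast gain,
    b the brightness offset; result rounded and clipped to 8 bits."""
    g = a * image.astype(np.float64) + b
    return np.clip(np.rint(g), 0, 255).astype(np.uint8)

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
out = adjust(img, a=1.2, b=10)
print(out.tolist())  # [[70, 130], [190, 250]]
```

A gain a > 1 spreads pixel values apart (more contrast), while the offset b shifts all values uniformly (brightness).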
The first control module 600 may also provide a playback button. Since the ultrasonic probe can collect multiple groups of ultrasonic waves when acquiring data, multiple groups of ultrasonic volume data and their copies can be formed; therefore, whenever the playback button is detected to be pressed, the three-dimensional visual images formed from the copies of the multiple groups of ultrasonic volume data can be output in turn on the first image output module 300, for example in the order of acquisition.
The second control module 700 may be used to receive a second adjustment command for the two-dimensional visual image; the second processor 400 may further be used to adjust the two-dimensional visual image according to the second adjustment command.
The second control module 700 may be an operation panel integrated on the housing of the above-mentioned second processor 400, or a control module independent of the second processor 400. The second control module 700 may be connected to the second processor 400 in a wired or wireless manner; for example, it may be connected through a connecting line, or wirelessly via Bluetooth, WiFi or the like. The connection mode between the second control module 700 and the second processor 400 is not limited here.
The second control module 700 may implement multiple functions. For example, it may provide a pair of buttons for zooming in and zooming out; each time such a button is pressed, the second image output module 500 outputs an image of the corresponding size. Specifically, the zoom ratio and its limits can be preset: when a zoom button is pressed, the second control module 700 receives a signal, generally a voltage or current signal, and sends it to the second processor 400, which derives from the preset ratio and limits the zoom ratio corresponding to the signal and scales the two-dimensional visual image accordingly, so that the second image output module 500 outputs an image of the corresponding size.
The second control module 700 may also provide a pair of buttons for adjusting brightness and contrast; each time such a button is pressed, the brightness and contrast of the two-dimensional visual image output by the second image output module 500 are adjusted accordingly. Specifically, when a brightness/contrast button is pressed, the second control module 700 receives a signal, generally a voltage or current signal, from which the second processor 400 obtains a contrast gain parameter and a brightness offset parameter.
The adjusted two-dimensional visual image can then be calculated according to the following formula:
g2(x) = a2 × f2(x) + b2    (2)
where a2 is the contrast gain parameter, b2 is the brightness offset parameter, f2(x) is the two-dimensional visual image before adjustment, and g2(x) is the two-dimensional visual image after adjustment.
The second control module 700 may also provide a playback button. Since the ultrasonic probe can collect multiple groups of ultrasonic waves when acquiring data, multiple groups of ultrasonic volume data and their copies can be formed; therefore, whenever the playback button is detected to be pressed, the two-dimensional visual images formed from the multiple groups of ultrasonic volume data can be output in turn on the second image output module 500, for example in the order of acquisition.
In addition, in the above embodiments, since a first control module 600 corresponding to the first processor 200 and a second control module 700 corresponding to the second processor 400 are provided separately, adjusting the three-dimensional visual image through the first control module 600 has no effect on the two-dimensional visual image, and likewise adjusting the two-dimensional visual image through the second control module 700 has no effect on the three-dimensional visual image, so that professionals and ordinary users can operate on the two-dimensional visual image and the three-dimensional visual image respectively without affecting each other.
In one of the embodiments, the first processor 200 may further be used to set a reference rendering plane according to the copy of the ultrasonic volume data; obtain a view matrix; calculate a first rendering plane and a second rendering plane obtained by rotating the reference rendering plane through different angles according to the view matrix; and generate the three-dimensional visual image according to the first rendering plane and the second rendering plane.
Specifically, the first processor 200 may convert the copy of the ultrasonic volume data into the three-dimensional visual image based on anaglyph (color-difference) technology, polarization technology or shutter technology. The basic principle is to generate two rendering planes with a certain relationship, yielding two rendered images that differ only slightly but contain different perspective information, and then fuse the two images to obtain the final three-dimensional visual image. When an ordinary user wears stereo glasses with different filter effects, the left eye and the right eye each see one of the two pre-fusion images within the fused image; the brain exploits the slight differences between the two images to automatically estimate the distance of objects, forming a lifelike three-dimensional effect.
Specifically, the reference rendering plane can be generated from the copy of the ultrasonic volume data, and the view matrix is then obtained. In a three-dimensional scene, when an object is observed from different angles, it is not the object itself that is rotated but the rendering plane. The object to be observed is usually placed at the center of the world coordinate system, and the rendering plane is rotated around the axes of the object coordinate system. In this embodiment, the left-hand rule can be used to determine the "positive" and "negative" directions of rotation of the rendering plane, and the view matrix of the rendering plane is stored as row vectors. Since two rendering planes are used in the calculation, not only must the positions of the two rendering planes be determined, but also the relationship between them. In this embodiment, an identity matrix can first be used to represent the initial position of the reference rendering plane; the view matrix of the reference rendering plane after rotating through θ around an axis n is then calculated to obtain its rotated position.
The formula for rotation through θ around the axis n is:
v·R(n, θ) = v′    (3)
where R(n, θ) is the view matrix and v′ is the vector v after rotation around the axis n.
R(n, θ) can be written as the matrix whose rows are the rotated basis vectors:
R(n, θ) = [p′; q′; r′]
where p′, q′ and r′ are the rotated basis vectors, and nx, ny and nz denote the components of n along the x, y and z axes of the object coordinate system.
Therefore, a rotation through some angle θ around an arbitrary axis n can be decomposed into rotations through θx, θy and θz around the x, y and z axes respectively, so that R(n, θ) can be decomposed as a product of single-axis rotations R(x, θx), R(y, θy) and R(z, θz).
From the above, once the value of θ is determined, the position of the reference rendering plane after rotation can be obtained.
Calculating the first rendering plane and the second rendering plane obtained by rotating the reference rendering plane through different angles according to the view matrix can specifically be carried out using the reference rendering plane and the above view matrix. In general, for simplicity, the first and second rendering planes can be separated left-right or up-down; this embodiment takes left-right separation as an example. The first and second rendering planes may be obtained by rotating the reference rendering plane around an axis of the world coordinate system, or by translating the reference rendering plane to the left and to the right respectively. When the rendering planes are obtained by rotation around an axis of the world coordinate system, Ry(n, θy) can be used to calculate the two view matrices of the reference rendering plane rotated through β and −β around the y axis, yielding the first rendering plane and the second rendering plane; the angle between them is then 2β. When the first and second rendering planes are obtained by translating left and right by L, only the values of the corresponding view matrices need to be changed; the distance between the two planes is then 2L.
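A sketch of the ±β rotation about the y axis described above: the row-vector convention follows formula (3), but the sign placement (handedness) is an assumption, and the 5° half-angle is purely illustrative:

```python
import numpy as np

def rot_y(beta):
    """View matrix (row-vector convention, v' = v @ R) for a rotation
    of beta radians about the y axis; the signs chosen here are one
    possible handedness, not necessarily the patent's."""
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[c,   0.0, -s],
                     [0.0, 1.0, 0.0],
                     [s,   0.0,  c]])

beta = np.radians(5.0)                     # illustrative half-angle
v = np.array([0.0, 0.0, 1.0])              # a direction on the reference plane
left = v @ rot_y(beta)                     # first rendering plane's direction
right = v @ rot_y(-beta)                   # second rendering plane's direction
angle = np.degrees(np.arccos(np.clip(left @ right, -1.0, 1.0)))
print(round(angle, 6))                     # 10.0, i.e. the 2*beta separation
```

Rotating the reference plane by +β and −β around the same axis makes the angle between the two resulting planes exactly 2β, matching the text.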
Generating the three-dimensional visual image according to the first rendering plane and the second rendering plane can consist of first performing a volume-rendering calculation along simulated lines of sight and then performing a weighted fusion of the renderings of the two planes.
Here, the volume-rendering computation along a simulated line of sight means that a ray is emitted from the simulated viewpoint, passes through a pixel of the viewing screen and continues in that direction, traversing all the volume data points of the object along its path. The ultrasonic volume data is resampled along the ray, and the colors and opacities of the sampled points are blended in a certain proportion, in order either from front to back or from back to front, to compute the color of the pixel. When the front-to-back order is used, the following formulas may be applied:
C_out·α_out = C_in·α_in + C_now·α_now·(1 − α_in)   (8)
α_out = α_in + α_now·(1 − α_in)   (9)
When the back-to-front order is used, the following formula may be applied:
C_out = C_in·(1 − α_now) + C_now·α_now   (10)
Here C_now and α_now denote the color and opacity of the current volume element; C_in and α_in denote the accumulated color and opacity before the projection ray enters the volume element; C_out and α_out denote the accumulated color and opacity after the projection ray leaves the volume element. Once all rays have traversed the volume elements and the sampling synthesis is complete, the fully rendered image can be presented on the screen.
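The two accumulation orders above can be sketched as follows (an illustrative sketch in premultiplied-alpha form, not the patent's code; the sample values in the test are made up). Both orders should produce the same final pixel color; the front-to-back form additionally allows early ray termination once opacity saturates.

```python
def composite_front_to_back(samples):
    """samples: list of (color, opacity) pairs from nearest to farthest.
    Implements formulas (8)-(9): accumulate premultiplied color and opacity."""
    acc_c, acc_a = 0.0, 0.0                 # C_in*alpha_in and alpha_in start at 0
    for c_now, a_now in samples:
        acc_c = acc_c + c_now * a_now * (1.0 - acc_a)   # (8)
        acc_a = acc_a + a_now * (1.0 - acc_a)           # (9)
        if acc_a >= 0.999:                  # early ray termination is possible here
            break
    return acc_c, acc_a

def composite_back_to_front(samples):
    """samples: nearest to farthest; formula (10) walks them in reverse order."""
    acc_c = 0.0
    for c_now, a_now in reversed(samples):
        acc_c = acc_c * (1.0 - a_now) + c_now * a_now   # (10)
    return acc_c
```

Both functions compute the same sum of sample colors weighted by opacity and accumulated transparency, so they agree on the final pixel color.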
The image fusion mode used in this embodiment is a linear weighted fusion; other embodiments may use other linear fusion modes or nonlinear fusion modes, which are not limited here.
When fusing by weighting, the first render plane and the second render plane may be fused according to the following formula to obtain the visual three-dimensional image:
P = ω_1·P_1 + ω_2·P_2   (11)
Here P is the visual three-dimensional image, ω_k (k = 1, 2) are the fusion coefficients, P_1 is the first render plane and P_2 is the second render plane.
It can be seen from the formula that when ω_1 is larger, the proportion of P_1 is higher and that of P_2 is lower; when ω_2 is larger, the proportion of P_1 is lower and that of P_2 is higher. The specific values of ω_1 and ω_2 need to be adjusted according to actual conditions such as the parameters of the stereoscopic glasses with a filtering effect. The adjustment must ensure that when the human eye views the fused image through such stereoscopic glasses, the left eye can clearly see only P_1 and cannot see P_2, while the right eye can clearly see only P_2 and cannot see P_1.
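The linear weighted fusion of formula (11) can be sketched per pixel as follows; the default coefficients ω_1 = ω_2 = 0.5 are an assumption for illustration, since the patent says the values depend on the glasses used.

```python
def fuse(p1, p2, w1=0.5, w2=0.5):
    """Linear weighted fusion P = w1*P1 + w2*P2, applied per pixel.
    p1, p2: equally sized 2-D grids of gray values in [0, 1]."""
    if len(p1) != len(p2) or len(p1[0]) != len(p2[0]):
        raise ValueError("render planes must have the same size")
    return [[w1 * a + w2 * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(p1, p2)]
```

Raising ω_1 makes the first render plane dominate the fused image, and symmetrically for ω_2, as the paragraph above describes.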
Here, when different techniques are used, such as the anaglyph (color-difference) technique, the polarization technique or the shutter technique, the first image output module 300 may be a different device. For example, when the anaglyph technique is used, no special requirement is placed on the first image output module 300; when the polarization technique is used, the first image output module 300 must be capable of simultaneously displaying images with different polarization states; when the shutter technique is used, the refresh rate of the first image output module 300 must reach at least 120 Hz.
In the above embodiment, the copy of the ultrasonic volume data is converted into the visual three-dimensional image by the anaglyph technique, the polarization technique or the shutter technique, improving the stereoscopic sense of space of the visual three-dimensional image and providing ordinary users with a better visual experience.
In one of the embodiments, when the anaglyph technique is used to convert the copy of the ultrasonic volume data into a visual three-dimensional image, the first processor 200 may be further configured, before generating the visual three-dimensional image from the first render plane and the second render plane, to obtain a first color table and a second color table, the first color table and the second color table being complementary colors of each other; to process the first render plane according to the first color table; and to process the second render plane according to the second color table.
Specifically, the color of any pixel of a digital image can be recorded and expressed by a group of RGB values; all colors in the image are mixed from the three RGB colors in different proportions. Red, green and blue are called the three primary colors, and no one of the three primary colors can be produced by mixing the other two. Each color has 256 brightness levels (0–255), and the set of brightness levels of a color forms the color table of that color. Before fusing the first render plane and the second render plane, two complementary color tables need to be computed, each render plane rendering its image using one of the color tables. For example, the RGB expression of table1 may be:
table1 = (α_r, 0, 0) × 255   (12)
where table1 is the color table of red and α_r is the percentage of red, with value range [0, 1].
The RGB expression of table2, the complementary color table of table1, is then:
table2 = (0, α_g, α_b) × 255   (13)
where table2 is the color table of cyan, and α_g and α_b are the percentages of green and blue respectively, both with value range [0, 1]; the mixture of green and blue is cyan.
It can be seen from the RGB expressions of table1 and table2 that neither color contains any component of the other; therefore the information of the first render plane can be stored in the colors of table1, and the information of the second render plane in the colors of table2.
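Formulas (12)–(13) can be sketched as follows. This is an interpretation, not the patent's code: each table is read here as a 256-entry lookup mapping a gray brightness level to an RGB triple, with the α values defaulting to 1.0 for illustration. Because the two tables occupy disjoint channels, the two colorized render planes can be summed channel-wise without clipping.

```python
def make_color_tables(alpha_r=1.0, alpha_g=1.0, alpha_b=1.0):
    """Formulas (12)-(13): table1 stores red only, table2 green and blue only."""
    table1 = [(round(alpha_r * level), 0, 0) for level in range(256)]
    table2 = [(0, round(alpha_g * level), round(alpha_b * level))
              for level in range(256)]
    return table1, table2

def colorize(gray_image, table):
    """Map a gray image (values 0-255) through a 256-entry color table."""
    return [[table[v] for v in row] for row in gray_image]

def anaglyph(left_gray, right_gray, table1, table2):
    """Channel-wise sum of the two colorized planes; the channels never
    overlap, so the red filter recovers the left image and the cyan
    filter the right image."""
    left = colorize(left_gray, table1)
    right = colorize(right_gray, table2)
    return [[tuple(a + b for a, b in zip(pl, pr))
             for pl, pr in zip(rl, rr)]
            for rl, rr in zip(left, right)]
```

A red/cyan filter pair then separates the fused image back into the two views, which is the property the embodiment relies on.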
In the above embodiment, the first render plane and the second render plane are processed with two complementary colors respectively, so that the colors of the first render plane and the second render plane differ; the left eye and the right eye thus each see a different image, increasing the stereoscopic effect.
In one of the embodiments, the device may further include a first control module 600 connected to the first processor 200, for receiving a first adjustment instruction for the visual three-dimensional image. The first processor 200 is further configured, according to the first adjustment instruction, to adjust the different angles through which the reference render plane is rotated according to the view matrix; to update the first render plane and the second render plane according to the adjusted angles; and to update the visual three-dimensional image according to the updated first render plane and the updated second render plane.
Specifically, the first control module 600 may be an operation panel integrated on the housing of the above first processor 200, or a control module independent of the first processor 200. The first control module 600 may be connected to the first processor 200 in a wired or wireless manner; for example, it may be connected to the first processor 200 through a cable, or through wireless means such as Bluetooth or Wi-Fi. The connection mode between the first control module 600 and the first processor 200 is not limited here.
The first control module 600 can not only implement functions such as the zooming, brightness and contrast adjustment and playback described above, but can also adjust the display of the visual three-dimensional image by adjusting the parameters in the above process of converting the copy of the ultrasonic volume data into the visual three-dimensional image.
Specifically, please refer to Fig. 3 to Fig. 5, which are schematic diagrams of the imaging effect of the visual three-dimensional image in an embodiment; Fig. 3 is the reference figure, and Fig. 4 and Fig. 5 are comparison figures. Suppose the image formed by the first render plane is located at point Al of the first image output module 300 and the image formed by the second render plane at point Ar. Comparing Fig. 3 and Fig. 4: when the angle 2β or the spacing 2L between the two render planes decreases, the distance between Al and Ar decreases accordingly, the imaging point A moves backward, and its distance from the observer increases, i.e. D1 in Fig. 4 is larger than D0 in Fig. 3; the observer feels that the object is farther away, and the stereoscopic effect is weaker. Comparing Fig. 3 and Fig. 5: when the angle 2β or the spacing 2L between the two render planes increases, the distance between Al and Ar increases accordingly, the imaging point A moves forward, and its distance from the observer decreases; the observer feels that the object is closer, i.e. D2 in Fig. 5 is smaller than D0 in Fig. 3, and the stereoscopic effect is stronger. However, when the distance between Al and Ar is too large, the difference between the images rendered by the two render planes also becomes too large; the brain then considers that it is seeing two images at the same time and cannot associate them, and the viewer may even feel symptoms such as dizziness and nausea during viewing, with the eyes unable to focus. Moreover, since the interpupillary distance differs from person to person, the acceptable strength of the stereoscopic effect also differs. Therefore, the first control module 600 provided in this embodiment may further include a pair of buttons for adjusting the angle 2β or the spacing 2L between the two render planes, so that an ordinary user can adjust 2β or 2L according to his or her own interpupillary distance and, on the premise of not causing dizziness, increase 2β or 2L as much as possible to enhance the stereoscopic effect and obtain the most comfortable viewing experience.
In addition, the first control module 600 in this embodiment may also include a pair of buttons or a trackball controlling the positive and negative offsets in the three directions x, y and z; an ordinary user can control the above angle θ through the buttons or trackball to update the position of the reference render plane, thereby controlling the first render plane and the second render plane, and in turn realizing three-dimensional rotation of the visual three-dimensional image.
In the above embodiment, the first adjustment instruction for the visual three-dimensional image can be received through the first control module 600, and the parameters in the above imaging process of the visual three-dimensional image can be adjusted according to the first adjustment instruction, thereby realizing adjustment of the first render plane and the second render plane, and in turn updating the visual three-dimensional image.
Please refer to Fig. 6, which is a flowchart of the ultrasonic imaging method in an embodiment; the method may include:
S602: Obtain ultrasonic volume data.
Specifically, ultrasonic volume data refers to data generated from ultrasonic waves, which are generally acquired by equipment such as an ultrasonic probe. In actual use, ultrasonic waves may first be emitted toward the target object, the ultrasonic waves reflected by the target object are then acquired by the ultrasonic probe or similar equipment, and the corresponding ultrasonic volume data is generated from the reflected waves.
S604: Convert the ultrasonic volume data into a visual two-dimensional image.
Specifically, the visual two-dimensional image refers to a flat image. The process of converting the ultrasonic volume data into a visual two-dimensional image may include filtering, color adjustment, feature-value extraction and the like, where the filtering may use methods such as Gaussian filtering, mean filtering or median filtering, as specifically described above.
S606: Output the visual two-dimensional image.
Specifically, after the visual two-dimensional image is generated, it can be output directly for users and others to view.
S608: Obtain a copy of the ultrasonic volume data.
Specifically, the copy of the ultrasonic volume data is obtained by duplicating the above ultrasonic volume data: after the ultrasonic volume data is obtained, it is duplicated to obtain its copy, so that the ultrasonic volume data and the copy of the ultrasonic volume data are identical.
S610: Convert the copy of the ultrasonic volume data into a visual three-dimensional image.
Specifically, this step can be performed simultaneously with step S604 or before step S604; converting the ultrasonic volume data into a visual two-dimensional image and converting the copy of the ultrasonic volume data into a visual three-dimensional image do not interfere with each other.
S612: Output the visual three-dimensional image.
Specifically, after the visual three-dimensional image is generated, it can be output directly for users and others to view.
In the above ultrasonic imaging method, on the one hand, the acquired ultrasonic volume data can be processed to obtain a visual two-dimensional image for professionals to view; on the other hand, the copy of the acquired ultrasonic volume data can be processed to obtain a visual three-dimensional image for ordinary users to view, enhancing the intuitiveness of the images formed by ultrasonic imaging. Moreover, since the visual three-dimensional image is obtained by processing the copy of the ultrasonic volume data, mutual interference between generating the visual two-dimensional image and generating the visual three-dimensional image can be avoided.
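The dual-path flow S602–S612 can be sketched as below. The volume contents and the two conversion functions are placeholders (the real conversions are the filtering and rendering steps described elsewhere); the point illustrated is that the 3-D path works on an independent copy obtained in S608, so the two paths cannot interfere.

```python
import copy

def acquire_volume():
    """S602: placeholder ultrasonic volume (a tiny 2x2x2 grid)."""
    return [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]

def to_2d_image(volume):
    """S604: placeholder 2-D path; here it just takes the middle slice."""
    return volume[len(volume) // 2]

def to_3d_image(volume_copy):
    """S610: placeholder 3-D path; it modifies its input in place
    (e.g. preprocessing), which is safe only because it owns a copy."""
    volume_copy[0][0][0] = 0
    return volume_copy

volume = acquire_volume()              # S602
vol_copy = copy.deepcopy(volume)       # S608: independent copy
image_2d = to_2d_image(volume)         # S604: 2-D path uses the original
image_3d = to_3d_image(vol_copy)       # S610: 3-D path uses the copy
```

Because `deepcopy` duplicates the nested data, any in-place preprocessing on the 3-D path leaves the data seen by the 2-D path untouched.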
In one of the embodiments, please refer to Fig. 7, which is a flowchart of the control steps in an embodiment. The ultrasonic imaging method shown in Fig. 6 may further include control steps, which may include:
S702: Receive, through the first control module 600, a first adjustment instruction for the visual three-dimensional image.
Specifically, an ordinary user can regulate the formed visual three-dimensional image by operating the first control module 600.
S704: Adjust the visual three-dimensional image through the first processor 200 according to the first adjustment instruction.
After the first control module 600 receives the above first adjustment instruction, it can send the instruction to the first processor 200, and the first processor 200 can control the formed visual three-dimensional image according to the adjustment instruction, for example controlling its brightness, contrast or size.
S706: Receive, through the second control module 700, a second adjustment instruction for the visual two-dimensional image.
Specifically, a professional user can regulate the formed visual two-dimensional image by operating the second control module; this step can be performed simultaneously with step S702 or before step S702.
S708: Adjust the visual two-dimensional image through the second processor 400 according to the second adjustment instruction.
In this embodiment, the above first adjustment instruction for the visual three-dimensional image is received by the first control module 600 and processed in the first processor 200, while the second adjustment instruction for the visual two-dimensional image is received by the second control module 700 and processed in the second processor 400. Since the first and second adjustment instructions are received independently and processed independently, the adjustment of the visual three-dimensional image and the adjustment of the visual two-dimensional image are independent of each other.
In one of the embodiments, please refer to Fig. 8, which is a flowchart of step S610 in the embodiment shown in Fig. 6. In this embodiment, step S610, converting the copy of the ultrasonic volume data into a visual three-dimensional image, may include:
S802: Generate a reference render plane according to the copy of the ultrasonic volume data.
S804: Obtain a view matrix.
S806: Compute the first render plane and the second render plane obtained by rotating the reference render plane through different angles according to the view matrix.
S808: Generate a visual three-dimensional image according to the first render plane and the second render plane.
In this embodiment, by forming different render planes, the copy of the ultrasonic volume data is converted into a visual three-dimensional image, improving the stereoscopic sense of space of the visual three-dimensional image and providing ordinary users with a better visual experience.
In one of the embodiments, step S808, generating a visual three-dimensional image according to the first render plane and the second render plane, may include:
fusing the first render plane and the second render plane according to the following formula to obtain the visual three-dimensional image:
P = ω_1·P_1 + ω_2·P_2
where P is the visual three-dimensional image, ω_k (k = 1, 2) are the fusion coefficients, P_1 is the first render plane and P_2 is the second render plane.
In the above embodiment, by fusing the first render plane and the second render plane, the copy of the ultrasonic volume data is converted into a visual three-dimensional image, improving the stereoscopic sense of space of the visual three-dimensional image and providing ordinary users with a better visual experience.
In one of the embodiments, step S808, generating a visual three-dimensional image according to the first render plane and the second render plane, may include:
obtaining a first color table and a second color table, the first color table and the second color table being complementary colors of each other;
processing the first render plane according to the first color table;
processing the second render plane according to the second color table;
fusing the processed first render plane and the processed second render plane according to the following formula to obtain the visual three-dimensional image:
P = ω_1·P_1 + ω_2·P_2
where P is the visual three-dimensional image, ω_k (k = 1, 2) are the fusion coefficients, P_1 is the processed first render plane and P_2 is the processed second render plane.
In the above embodiment, the first render plane and the second render plane are processed with two complementary color tables respectively, so that the colors of the first render plane and the second render plane differ; the left eye and the right eye thus each see a different image, increasing the stereoscopic effect.
In one of the embodiments, the ultrasonic imaging method may further include:
receiving a first adjustment instruction for the visual three-dimensional image;
adjusting, according to the first adjustment instruction, the different angles through which the reference render plane is rotated according to the view matrix;
updating the first render plane and the second render plane according to the adjusted angles;
updating the visual three-dimensional image according to the updated first render plane and the updated second render plane.
In the above embodiment, the first adjustment instruction for the visual three-dimensional image can be received through the first control module 600, and the parameters in the above imaging process of the visual three-dimensional image can be adjusted according to the first adjustment instruction, thereby realizing adjustment of the first render plane and the second render plane, and in turn updating the visual three-dimensional image.
For the specific limitations of the above ultrasonic imaging method, reference may be made to the specific limitations of the ultrasonic imaging device described above, which are not repeated here.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of the present patent shall be determined by the appended claims.

Claims (10)

1. An ultrasonic imaging method, characterized in that the method comprises:
obtaining ultrasonic volume data;
converting the ultrasonic volume data into a visual two-dimensional image;
outputting the visual two-dimensional image;
obtaining a copy of the ultrasonic volume data;
converting the copy of the ultrasonic volume data into a visual three-dimensional image;
outputting the visual three-dimensional image.
2. The method according to claim 1, characterized in that the method further comprises:
receiving, through a first control module, a first adjustment instruction for the visual three-dimensional image;
adjusting, through a first processor, the visual three-dimensional image according to the first adjustment instruction;
receiving, through a second control module, a second adjustment instruction for the visual two-dimensional image;
adjusting, through a second processor, the visual two-dimensional image according to the second adjustment instruction.
3. The method according to claim 1, characterized in that the step of converting the copy of the ultrasonic volume data into a visual three-dimensional image comprises:
setting a reference render plane according to the copy of the ultrasonic volume data;
obtaining a view matrix;
computing a first render plane and a second render plane obtained by rotating the reference render plane through different angles according to the view matrix;
generating the visual three-dimensional image according to the first render plane and the second render plane.
4. The method according to claim 3, characterized in that the step of generating the visual three-dimensional image according to the first render plane and the second render plane comprises:
fusing the first render plane and the second render plane according to the following formula to obtain the visual three-dimensional image:
P = ω_1·P_1 + ω_2·P_2
wherein P is the visual three-dimensional image, ω_k (k = 1, 2) are fusion coefficients, P_1 is the first render plane and P_2 is the second render plane.
5. The method according to claim 3, characterized in that the step of generating the visual three-dimensional image according to the first render plane and the second render plane comprises:
obtaining a first color table and a second color table, the first color table and the second color table being complementary colors of each other;
processing the first render plane according to the first color table;
processing the second render plane according to the second color table;
fusing the processed first render plane and the processed second render plane according to the following formula to obtain the visual three-dimensional image:
P = ω_1·P_1 + ω_2·P_2
wherein P is the visual three-dimensional image, ω_k (k = 1, 2) are fusion coefficients, P_1 is the processed first render plane and P_2 is the processed second render plane.
6. The method according to claim 3, characterized in that the method further comprises:
receiving a first adjustment instruction for the visual three-dimensional image;
adjusting, according to the first adjustment instruction, the different angles through which the reference render plane is rotated according to the view matrix;
updating the first render plane and the second render plane according to the adjusted angles;
updating the visual three-dimensional image according to the updated first render plane and the updated second render plane.
7. An ultrasonic imaging device, characterized in that the device comprises:
a storage module, configured to store ultrasonic volume data and to generate a copy of the ultrasonic volume data;
a first processor connected to the storage module, configured to obtain the copy of the ultrasonic volume data and to convert the copy of the ultrasonic volume data into a visual three-dimensional image;
a first image output module connected to the first processor, configured to output the visual three-dimensional image;
a second processor connected to the storage module, configured to obtain the ultrasonic volume data and to convert the ultrasonic volume data into a visual two-dimensional image;
a second image output module connected to the second processor, configured to output the visual two-dimensional image.
8. The device according to claim 7, characterized in that it further comprises a first control module connected to the first processor, configured to receive a first adjustment instruction for the visual three-dimensional image, the first processor being further configured to adjust the visual three-dimensional image according to the first adjustment instruction;
and a second control module connected to the second processor, configured to receive a second adjustment instruction which is for the visual two-dimensional image and is asynchronous with the first adjustment instruction, the second processor being further configured to adjust the visual two-dimensional image according to the second adjustment instruction.
9. The device according to claim 7, characterized in that the first processor is further configured to set a reference render plane according to the copy of the ultrasonic volume data; obtain a view matrix; compute a first render plane and a second render plane obtained by rotating the reference render plane through different angles according to the view matrix; and generate the visual three-dimensional image according to the first render plane and the second render plane.
10. The device according to claim 9, characterized in that it further comprises:
a first control module connected to the first processor, configured to receive a first adjustment instruction for the visual three-dimensional image, the first processor being further configured to adjust, according to the first adjustment instruction, the different angles through which the reference render plane is rotated according to the view matrix; to update the first render plane and the second render plane according to the adjusted angles; and to update the visual three-dimensional image according to the updated first render plane and the updated second render plane.
CN201710042667.1A 2017-01-20 2017-01-20 Ultrasonic imaging method and device Active CN108335336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710042667.1A CN108335336B (en) 2017-01-20 2017-01-20 Ultrasonic imaging method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710042667.1A CN108335336B (en) 2017-01-20 2017-01-20 Ultrasonic imaging method and device

Publications (2)

Publication Number Publication Date
CN108335336A true CN108335336A (en) 2018-07-27
CN108335336B CN108335336B (en) 2024-04-02

Family

ID=62922213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710042667.1A Active CN108335336B (en) 2017-01-20 2017-01-20 Ultrasonic imaging method and device

Country Status (1)

Country Link
CN (1) CN108335336B (en)


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1257694A (en) * 1998-11-23 2000-06-28 通用电气公司 Three-D supersonic imaging of speed and power data using mean or mid-value pixel projection
CN101371792A (en) * 2007-08-24 2009-02-25 通用电气公司 Method and apparatus for voice recording with ultrasound imaging
CN101405769A (en) * 2005-12-31 2009-04-08 布拉科成像S.P.A.公司 Systems and methods for collaborative interactive visualization of 3D data sets over a network ('DextroNet')
CN101959463A (en) * 2008-03-04 2011-01-26 超声成像公司 Twin-monitor electronic display system
CN102613990A (en) * 2012-02-03 2012-08-01 声泰特(成都)科技有限公司 Display method of blood flow rate of three-dimensional ultrasonic spectrum Doppler and space distribution of blood flow rate
CN102637303A (en) * 2012-04-26 2012-08-15 珠海医凯电子科技有限公司 Ultrasonic three-dimensional mixed and superposed volumetric rendering processing method based on GPU (Graphic Processing Unit)
US20130170721A1 (en) * 2011-12-29 2013-07-04 Samsung Electronics Co., Ltd. Method and apparatus for processing ultrasound image
US20140081140A1 (en) * 2012-09-14 2014-03-20 Samsung Electronics Co., Ltd. Ultrasound imaging apparatus and control method for the same
CN103654863A (en) * 2012-09-04 2014-03-26 通用电气公司 Systems and methods for parametric imaging
CN103971396A (en) * 2014-05-24 2014-08-06 哈尔滨工业大学 OpenGL ES (open graphics library for embedded system) implementation method for ray casting algorithm under ARM+GPU (advanced RISC machine+graphic processing unit) heterogeneous architecture
CN104224230A (en) * 2014-09-15 2014-12-24 声泰特(成都)科技有限公司 Three-dimensional and four-dimensional ultrasonic imaging method and device based on GPU (Graphics Processing Unit) platform and system
CN105025803A (en) * 2013-02-28 2015-11-04 皇家飞利浦有限公司 Segmentation of large objects from multiple three-dimensional views
CN106093870A (en) * 2016-05-30 2016-11-09 西安电子科技大学 The SAR GMTI clutter suppression method of hypersonic aircraft descending branch


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LAI JUNLIANG: "Research and Implementation of an Ultrasonic Elastography System Based on CUDA", China Master's Theses Full-text Database, Information Science and Technology, no. 01, pages 138 - 1306 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340742A (en) * 2018-12-18 2020-06-26 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging method and device and storage medium
CN111340742B (en) * 2018-12-18 2024-03-08 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging method and equipment and storage medium

Also Published As

Publication number Publication date
CN108335336B (en) 2024-04-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant