CN102157012A - Method for three-dimensionally rendering scene, graphic image treatment device, equipment and system - Google Patents

Method for three-dimensionally rendering scene, graphic image treatment device, equipment and system

Info

Publication number
CN102157012A
CN102157012A CN201110070830.8A
Authority
CN
China
Prior art keywords
depth
scene
camera
play
weights
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201110070830.8A
Other languages
Chinese (zh)
Other versions
CN102157012B (en
Inventor
Xu Zhenhua (徐振华)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SuperD Co Ltd
Original Assignee
Shenzhen Super Perfect Optics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Super Perfect Optics Ltd filed Critical Shenzhen Super Perfect Optics Ltd
Priority to CN2011100708308A priority Critical patent/CN102157012B/en
Publication of CN102157012A publication Critical patent/CN102157012A/en
Application granted granted Critical
Publication of CN102157012B publication Critical patent/CN102157012B/en
Status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for stereoscopically rendering a scene, together with a graphics processing apparatus, device, and system. The method comprises the following steps: calculating, for the scene to be rendered, a depth parameter related to the distance between the scene and the camera and a scene-scale parameter of the scene to be rendered; determining the zero-parallax plane depth of a first camera and a second camera according to the depth parameter and the scene-scale parameter; and rendering the scene to be rendered with the first and second cameras based on the determined zero-parallax plane depth. Correspondingly, the graphics processing apparatus comprises a scene analysis module, a parameter determination module, and a stereoscopic rendering module, and can be used in mobile terminals, computer systems, and image acquisition devices. With the invention, the stereoscopically rendered image has a better stereoscopic display effect.

Description

Method, graphics processing apparatus, device, and system for stereoscopically rendering a scene
Technical field
The present invention relates to graphics processing techniques, and in particular to a method, graphics processing apparatus, device, and system for stereoscopically rendering a scene.
Background technology
When the human eye looks at an object, it first finds a point of interest, and the two eyeballs converge on it (convergence). The eyes then adjust their focus (accommodation) so that the object is imaged exactly at the focal point, where it appears sharpest. With a stereoscopic display, however, the eyes always stare at the display screen; the stereoscopic sensation arises only because the left and right eyes see different images, which forms a parallax.
A major cause of fatigue when watching a stereoscopic display is the mismatch between the focal point and the convergence point: the greater the mismatch, the stronger the fatigue. Take Fig. 1a as an example: the two dots in the figure represent the viewer's two eyes, and the hollow cross represents an object. When the eyes want to see an object behind the screen (recessed), they should accommodate so that the focal point of the two eyes coincides with the object (the convergence point); the parallax at that moment (the distance between the screen and the intersections of the two optical axes with it) is d. But because the viewer actually stares at the screen, the focal point falls on the screen while the convergence point is unchanged, so the parallax is 0. Hence |d − 0| can serve as a measure of this mismatch. Fig. 1b shows the parallax d formed when the eyes want to see an object in front of the screen (protruding).
In 2D rendering, only a single camera is needed. When a scene is rendered stereoscopically for stereoscopic display, as in a 3D game, two views with parallax must be obtained, so a first camera and a second camera are used (the cameras in this text all refer to the virtual cameras used for stereoscopic rendering, also called the left camera and right camera). Referring to Fig. 2, the single camera of the 2D case is offset by L_sep/2 to the left and by L_sep/2 to the right to obtain the first and second cameras. α is the field-of-view angle of the camera, L_sep is the spacing between the first and second cameras, and Z_con is the zero-parallax plane depth of the two cameras, i.e. the distance between their convergence point and the line connecting the two cameras. Once L_sep and Z_con are fixed, the angle β between each camera's optical axis and the line connecting the two cameras is also determined.
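The camera setup just described can be sketched in a few lines of Python (an illustration only, not part of the patent; the function name and the coordinate convention — single camera at the origin looking along +Z — are assumptions):

```python
import math

def stereo_cameras(l_sep, z_con):
    """Offset the single 2D camera by +/- L_sep/2 along its baseline to
    obtain the first (left) and second (right) cameras, and derive the
    angle beta between each optical axis and the camera baseline.

    With the convergence point at depth z_con on the center line, beta
    is fixed as soon as l_sep and z_con are chosen."""
    left_x = -l_sep / 2.0   # first camera position on the baseline
    right_x = l_sep / 2.0   # second camera position on the baseline
    beta = math.atan2(z_con, l_sep / 2.0)  # optical-axis / baseline angle
    return left_x, right_x, beta
```

For example, `stereo_cameras(2.0, 2.0)` places the cameras at x = ∓1 and gives β = atan(2) ≈ 63.4°, confirming that β is determined once L_sep and Z_con are set.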
When a scene is rendered stereoscopically, L_sep and Z_con are crucial to the stereoscopic display effect: Z_con determines which part of the scene appears in front of the screen (protruding) and which part behind it (recessed), and L_sep is also related to how recessed or protruding the objects in the scene appear. These two parameters therefore directly determine the realism and comfort of the stereoscopically displayed scene. In the prior art, most stereoscopic rendering fixes L_sep and places Z_con in the middle of the scene; this guarantees only that a stereoscopic effect appears, not that the viewer gets the most realistic effect possible.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method for stereoscopically rendering a scene such that the rendered image has a better stereoscopic display effect.
To solve the above problem, the invention provides a method for stereoscopically rendering a scene, comprising:
calculating, for the scene to be rendered, a depth parameter related to the distance between the scene and the camera, and a scene-scale parameter of the scene to be rendered;
determining the zero-parallax plane depth of a first camera and a second camera according to said depth parameter and scene-scale parameter;
based on the determined zero-parallax plane depth, rendering the scene to be rendered with the first camera and the second camera respectively.
Preferably,
The depth parameter related to the distance between the scene to be rendered and the camera comprises the minimum depth, mean depth, and maximum depth of the scene to be rendered, the mean depth being equal to the average of the depths of all pixels in the scene to be rendered.
The scene-scale parameter of the scene to be rendered comprises the depth difference obtained by subtracting the minimum depth from the maximum depth.
Preferably,
Determining the zero-parallax plane depth of the first and second cameras according to said depth parameter and scene-scale parameter comprises:
determining the zero-parallax plane depth from the result of a weighting operation on the minimum depth, mean depth, and maximum depth, wherein, within their respective weight boundaries, the weight of the minimum depth increases as the minimum depth and the depth difference increase, while the weights of the mean depth and the maximum depth decrease or remain unchanged as the minimum depth and the depth difference increase.
Preferably,
The result of the weighting operation on the minimum depth, mean depth, and maximum depth is the result of a weighted-average operation on them: the weight of each depth parameter is greater than or equal to 0 and less than or equal to 1, and the weights of the depth parameters sum to 1.
Preferably,
The weight of the maximum depth is zero, and the weight of the minimum depth is obtained from the weighted sum of the minimum depth and the depth difference together with the corresponding weight boundary: when the weighted sum lies within the weight boundary, the weight of the minimum depth equals the weighted sum; when the weighted sum exceeds the upper or lower boundary of the weight boundary, the weight of the minimum depth equals that upper or lower boundary.
Preferably,
The weight of the minimum depth is obtained from the weighted sum of the minimum depth and the depth difference together with the corresponding weight boundary, wherein the reciprocal of the sum of the weighting coefficients applied to the minimum depth and the depth difference equals the absolute value of the depth difference between the far clipping plane and the near clipping plane, and the upper boundary of the corresponding weight boundary is less than 1.
Preferably,
Before the first and second cameras are used to render the scene to be rendered, the first spacing between them used when rendering is determined as follows:
based on the determined zero-parallax plane depth, calculating the second spacing between the first and second cameras at which the parallax between the first view and the second view of the pixel with the maximum depth equals the maximum positive parallax allowed;
based on the determined zero-parallax plane depth, calculating the third spacing between the first and second cameras at which the parallax between the first view and the second view of the pixel with the minimum depth equals the maximum negative parallax allowed;
determining the smaller of the second spacing and the third spacing as the first spacing.
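The spacing rule above — take the smaller of the two candidate spacings so that neither parallax bound is exceeded — can be sketched as follows. The parallax model used here (screen parallax ≈ L_sep·(Z − Z_con)/Z for an off-axis camera pair) is an assumption introduced for illustration; the patent itself derives the two candidates from the geometry of Figs. 8a and 8b.

```python
def first_spacing(z_min, z_max, z_con, max_pos_parallax, max_neg_parallax):
    """Determine the first (rendering) spacing as the smaller of:
    - the second spacing, at which the pixel at z_max reaches the
      allowed maximum positive parallax, and
    - the third spacing, at which the pixel at z_min reaches the
      allowed maximum negative parallax (magnitude).
    Assumes z_min < z_con < z_max under the illustrative model above."""
    second = max_pos_parallax * z_max / (z_max - z_con)
    third = max_neg_parallax * z_min / (z_con - z_min)
    return min(second, third)
```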
Another technical problem to be solved by the invention is to provide a graphics processing apparatus such that the image it obtains by stereoscopically rendering a scene has a better stereoscopic display effect.
To solve the above problem, the invention provides a graphics processing apparatus, characterized in that it comprises:
a scene analysis module, used to calculate, for the scene to be rendered, the depth parameter related to the distance between the scene and the camera and the scene-scale parameter of the scene to be rendered;
a parameter determination module, which in turn comprises a depth determination submodule used to determine the zero-parallax plane depth of the first and second cameras according to said depth parameter and scene-scale parameter;
a stereoscopic rendering module, used to render the scene to be rendered with the first camera and the second camera respectively, based on the determined zero-parallax plane depth.
Preferably,
The scene analysis module comprises:
a statistics submodule, used to calculate the depth parameter related to the distance between the scene to be rendered and the camera, the depth parameter comprising the minimum depth, mean depth, and maximum depth of the scene to be rendered, the mean depth being the average of the depths of all pixels in the scene;
a calculation submodule, used to calculate the scene-scale parameter of the scene to be rendered, the scene-scale parameter comprising the depth difference obtained by subtracting the minimum depth from the maximum depth.
Preferably,
The depth determination submodule determines the zero-parallax plane depth from the result of a weighting operation on the minimum depth, mean depth, and maximum depth, wherein, within their respective weight boundaries, the weight of the minimum depth increases as the minimum depth and the depth difference increase, while the weights of the mean depth and the maximum depth decrease or remain unchanged as the minimum depth and the depth difference increase.
Preferably,
The depth determination submodule determines the zero-parallax plane depth from the result of a weighted-average operation on the minimum depth, mean depth, and maximum depth, in which the weight of each depth parameter is greater than or equal to 0 and less than or equal to 1, and the weights of the depth parameters sum to 1.
Preferably,
When the depth determination submodule determines the zero-parallax plane depth from the result of the weighted-average operation on the minimum depth, mean depth, and maximum depth, the weight of the maximum depth is set to zero, and the weight of the minimum depth is obtained from the weighted sum of the minimum depth and the depth difference together with the corresponding weight boundary: when the weighted sum lies within the weight boundary, the weight of the minimum depth equals the weighted sum; when the weighted sum exceeds the upper or lower boundary of the weight boundary, the weight of the minimum depth equals that upper or lower boundary.
Preferably,
The depth determination submodule obtains the weight of the minimum depth from the weighted sum of the minimum depth and the depth difference together with the corresponding weight boundary, wherein the reciprocal of the sum of the weighting coefficients applied to the minimum depth and the depth difference equals the absolute value of the depth difference between the far clipping plane and the near clipping plane.
Preferably,
The graphics processing apparatus is implemented on one or more of a central processing unit (CPU), a graphics processing unit (GPU), a field-programmable gate array (FPGA), and an application-specific integrated circuit (ASIC).
Preferably,
The parameter determination module further comprises a spacing determination submodule, used to determine the first spacing between the first and second cameras used when rendering;
the spacing determination submodule in turn comprises:
a first spacing operation unit, used to calculate, based on the determined zero-parallax plane depth, the second spacing between the first and second cameras at which the parallax between the first view and the second view of the pixel with the maximum depth equals the set maximum positive parallax;
a second spacing operation unit, used to calculate, based on the determined zero-parallax plane depth, the third spacing between the first and second cameras at which the parallax between the first view and the second view of the pixel with the minimum depth equals the set maximum negative parallax;
a decision unit, used to determine the smaller of the second spacing and the third spacing as the first spacing.
When stereoscopically rendering a scene, the above method and graphics processing apparatus compute the zero-parallax plane depth of the first and second cameras adaptively according to the scale of the scene and the distance between the scene and the camera, so that the rendered image brings a more realistic stereoscopic effect to the viewer. In addition, by constraining the spacing between the first and second cameras, the viewer's comfort can be guaranteed while obtaining the best possible stereoscopic effect. The scheme can be applied wherever a scene must be stereoscopically rendered and displayed, such as 3D games, 3D animation, and 3D movies.
Yet another technical problem to be solved by the invention is to provide a mobile terminal such that the image it obtains by stereoscopically rendering a scene has a better stereoscopic display effect.
To solve the above problem, the invention provides a mobile terminal comprising a stereoscopic display, characterized in that it further comprises the graphics processing apparatus provided by the invention; after the graphics processing apparatus has used the first and second cameras to render the scene to be rendered respectively, the two resulting images are sent to the stereoscopic display for display.
Because the above graphics processing apparatus is adopted, the image this mobile terminal obtains by stereoscopically rendering a scene has a better stereoscopic display effect.
Yet another technical problem to be solved by the invention is to provide a computer system such that the image it obtains by stereoscopically rendering a scene has a better stereoscopic display effect.
To solve the above problem, the invention provides a computer system comprising a graphics storage device, a graphics processing apparatus, and a stereoscopic display device, characterized in that:
the graphics storage device is used to store the image data and attribute data of the scene to be rendered, the attribute data comprising the depth of each pixel in the scene to be rendered;
the graphics processing apparatus is the graphics processing apparatus provided by the invention, wherein the depth of each pixel in the scene to be rendered is read from the graphics storage device, and after the first and second cameras have been used to render the scene respectively, the two resulting images are sent to the stereoscopic display device;
the stereoscopic display device is used to stereoscopically display the scene to be rendered based on the two images.
Because the above graphics processing apparatus is adopted, the image this computer system obtains by stereoscopically rendering a scene has a better stereoscopic display effect.
Yet another technical problem to be solved by the invention is to provide an image acquisition device such that the image it obtains by stereoscopically rendering a scene has a better stereoscopic display effect.
To solve the above problem, the invention provides an image acquisition device comprising an image acquiring means and an image display means, characterized in that it further comprises the graphics processing apparatus provided by the invention; the depth of each pixel in the scene to be rendered that the graphics processing apparatus uses in its calculations is obtained from the image acquiring means, and after the first and second cameras have been used to render the scene respectively, the two resulting images are sent to the image display means.
Preferably, the image acquisition device can be a stereo still camera or a stereo video camera.
Because the above graphics processing apparatus is adopted, the image this image acquisition device obtains by stereoscopically rendering a scene has a better stereoscopic display effect.
Description of drawings
Fig. 1a and Fig. 1b are schematic diagrams of the human eye viewing an object behind the screen (recessed) and in front of the screen (protruding), respectively;
Fig. 2 is a schematic diagram of the relevant parameters when a first camera and a second camera are used to render a scene;
Fig. 3a and Fig. 3b are exemplary stereoscopically displayed images obtained when rendering a small scene at close range and a large scene at a distance, respectively;
Fig. 4 is a flowchart of the method of the first embodiment of the invention for stereoscopically rendering a scene;
Fig. 5 is a schematic diagram of rendering a scene with a single camera;
Fig. 6 is a schematic diagram of the relation between the weight of the minimum depth, the scene-scale parameter Z_s, and the minimum distance Z_min between the camera and the scene;
Fig. 7 is a module diagram of the graphics processing apparatus of the first embodiment of the invention;
Fig. 8a and Fig. 8b are schematic diagrams of the parallax, between the first view and the second view, of the pixels with the maximum depth and the minimum depth in the scene to be rendered, respectively.
Embodiment
To make the purpose, technical solution, and advantages of the present invention clearer, embodiments of the invention are described in detail below with reference to the accompanying drawings. Note that, provided they do not conflict, the embodiments in this application and the features within them may be combined arbitrarily.
First embodiment
When the present embodiment stereoscopically renders a scene composed of multiple objects, it adjusts the zero-parallax plane depth of the first and second cameras according to the scale of the scene and the distance between the camera and the scene, and can thereby obtain a desirable stereoscopic display effect.
When viewing a small scene at close range, people place their attention on a particular object and want to see more detail, so in stereoscopic display the scene looks more realistic if its objects protrude more. In the example of Fig. 3a, when a small scene composed of a few objects is viewed at close range, the objects in front should protrude more. When viewing a large scene at a distance, people instead want a panoramic impression, so making the objects in the scene more recessed in stereoscopic display gives a stronger sense of stereoscopic depth. In the example of Fig. 3b, when a large scene composed of many objects is viewed at a distance, making the objects more recessed both conveys a sense of depth appropriate to the large scale of the scene and avoids the scene being clipped at the edges of the display.
To this end, the depth parameter related to the distance between the scene and the camera and the scene-scale parameter of the scene to be rendered can be calculated first; then the zero-parallax plane depth of the first and second cameras is determined from the depth parameter and scene-scale parameter; finally, based on that zero-parallax plane depth, the first and second cameras are used to render the scene to be rendered respectively.
Before stereoscopic rendering, the 3D scene can be drawn with a single camera to obtain the scene to be rendered. Fig. 5 shows the usual single-camera setup for drawing a 3D scene. The plane through ABCD is the back (far) clipping plane, and the plane through A'B'C'D' is the front (near) clipping plane. Only pixels of the scene between the near and far clipping planes can be projected onto the projection plane (view plane) through EFGH; everything outside the two planes is discarded. The pixels projected onto the projection plane constitute the scene to be rendered. In this text, the distance between the near and far clipping planes is written Z_f − Z_n, where Z_f is the depth of the far clipping plane and Z_n is the depth of the near clipping plane; in a given application, Z_f and Z_n can be taken as constants.
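The clipping described above can be sketched as a simple filter over pixel depths (a hypothetical helper for illustration; real pipelines cull geometry before projection):

```python
def visible_depths(pixel_depths, z_n, z_f):
    """Keep only pixels between the front (near) clipping plane at
    depth z_n and the back (far) clipping plane at depth z_f; pixels
    outside the two planes are discarded and never reach the
    projection plane."""
    return [z for z in pixel_depths if z_n <= z <= z_f]
```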
The method of the present embodiment for stereoscopically rendering a scene is shown in Fig. 4 and comprises:
Step 110: from the depth of each pixel in the scene to be rendered, determine the minimum depth Z_min, mean depth Z_avg, and maximum depth Z_max of the scene to be rendered, and compute the depth difference Z_s = Z_max − Z_min.
The depth of a pixel in the scene to be rendered is the distance between the cross-section through that pixel and the cross-section through the first and second cameras. Z_min is the depth of the pixel in the scene nearest the cameras; Z_avg is the average of the depths of all pixels in the scene; Z_max is the depth of the pixel farthest from the cameras. Z_min, Z_avg, and Z_max are all depth parameters related to the distance between the scene and the camera; Z_min, also called the minimum distance between the camera and the scene in this text, characterizes the distance at which the scene to be rendered is observed. Z_s serves as the scene-scale parameter and characterizes the scale of the scene to be rendered. The per-pixel depth data can be obtained from a corresponding storage device such as a depth buffer.
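Step 110 can be sketched directly over the per-pixel depths read from the depth buffer (the helper name is illustrative):

```python
def scene_depth_stats(depth_buffer):
    """Compute the depth parameters of step 110 from the depths of all
    pixels in the scene to be rendered: the minimum depth Z_min, mean
    depth Z_avg, maximum depth Z_max, and depth difference Z_s."""
    z_min = min(depth_buffer)                      # nearest pixel to the cameras
    z_max = max(depth_buffer)                      # farthest pixel
    z_avg = sum(depth_buffer) / len(depth_buffer)  # mean over all pixels
    z_s = z_max - z_min                            # scene-scale parameter
    return z_min, z_avg, z_max, z_s
```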
Step 120: determine the zero-parallax plane depth Z_con of the first and second cameras from the result of a weighting operation on the minimum depth Z_min, mean depth Z_avg, and maximum depth Z_max, wherein, within their respective weight boundaries, the weight of Z_min increases as Z_min and Z_s increase, while the weights of Z_avg and Z_max decrease or remain unchanged as Z_min and Z_s increase.
Note that when Z_con is determined from the result of the weighting operation on Z_min, Z_avg, and Z_max, it may be computed from a corresponding formula; it may also be obtained by looking up the computed Z_min, Z_avg, and Z_max in results stored in advance according to the weighting operation, such as a table of the correspondence between Z_con and (Z_min, Z_avg, Z_max); or it may be obtained by combining the above computation and table lookup.
An increase of Z_min and Z_s, as used above, covers both the case where Z_min and Z_s increase simultaneously and the case where one of them increases while the other stays constant.
Within the corresponding weight boundary, the weight of Z_min may increase as soon as Z_min and Z_s increase, or it may increase only after Z_min and Z_s have increased by some amount; that is, the weight of Z_min may take continuous values or be quantized.
The weighting operation above may be a weighted-average operation; that is, the zero-parallax plane depth Z_con can be computed according to the following formula:
Z_con = Z_min·α + Z_avg·β + Z_max·γ    Formula (1)
where α is the weight of Z_min, β is the weight of Z_avg, and γ is the weight of Z_max. Since this is a weighted-average operation, the weights satisfy α + β + γ = 1 and 0 ≤ α ≤ 1, 0 ≤ β ≤ 1, 0 ≤ γ ≤ 1. Under the constraints on how the weights change stated above, in the present embodiment α, β, and γ also satisfy: α increases as Z_min and Z_s increase, and when α increases, β and γ either both decrease or one stays constant while the other decreases.
The weight γ may be a fixed value or may be determined from Z_min and Z_s; it is not limited to any particular scheme. In one example, taking γ = 0, formula (1) can be expressed as:
Z_con = Z_min·α + Z_avg·(1 − α)    Formula (2)
where 0 ≤ α < 1 and β = 1 − α.
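Formulas (1) and (2) are ordinary weighted averages and can be sketched as follows (function names are illustrative):

```python
def z_con_formula1(z_min, z_avg, z_max, alpha, beta, gamma):
    """Formula (1): zero-parallax plane depth as a weighted average of
    the three depth parameters; the weights must each lie in [0, 1]
    and sum to 1."""
    assert 0.0 <= alpha <= 1.0 and 0.0 <= beta <= 1.0 and 0.0 <= gamma <= 1.0
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return z_min * alpha + z_avg * beta + z_max * gamma

def z_con_formula2(z_min, z_avg, alpha):
    """Formula (2): the special case gamma = 0, beta = 1 - alpha."""
    return z_min * alpha + z_avg * (1.0 - alpha)
```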
Step 130: based on the determined zero-parallax plane depth Z_con, use the first camera and the second camera to render the scene to be rendered respectively.
In this step, the weight α of the minimum depth Z_min can be determined from the weighted sum of Z_min and Z_s and the boundary of α, specifically as follows.
Let:
α = (k / (Z_f − Z_n))·Z_s + ((1 − k) / (Z_f − Z_n))·Z_min    Formula (3)
where Z_f is the depth of the far clipping plane used when rendering the scene, Z_n is the depth of the near clipping plane, and k is a preset coefficient with 0 < k < 1. k can take an empirical value; it represents the relative weight given to the scale Z_s and the distance Z_min.
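Formula (3), combined with the clamping to the weight boundary described in the summary, can be sketched as follows (the default boundary values are illustrative; this embodiment only requires 0 ≤ α < 1):

```python
def alpha_weight(z_min, z_s, z_f, z_n, k, lower=0.0, upper=0.99):
    """Formula (3): a weighted sum of Z_s and Z_min scaled by the
    clipping-plane separation Z_f - Z_n, clamped to the weight
    boundary [lower, upper] when it falls outside it."""
    raw = (k * z_s + (1.0 - k) * z_min) / (z_f - z_n)
    return max(lower, min(upper, raw))  # clamp to the weight boundary
```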
As shown in Fig. 6, according to the scene scale and the distance between the camera and the scene, the situations can be roughly divided into rendering a small scene at close range (such as an indoor scene), rendering a small scene at a distance, rendering a large scene at close range (such as an outdoor scene), and rendering a large scene at a distance. When Z_min and Z_s are both small, the situation is rendering a small scene at close range; when Z_min and Z_s are both large, it is rendering a large scene at a distance.
According to the method of the present embodiment, when Z_min and Z_s are both small, the weight α of Z_min is small and the computed zero-parallax plane depth Z_con is relatively large, so more of the scene lies in front of the zero-parallax plane; when such a rendering is displayed stereoscopically, the objects in the scene protrude more, matching the habit of wanting to see more detail when viewing a small scene at close range. When Z_min and Z_s are both large, α is large and Z_con is relatively small, so more of the scene lies behind the zero-parallax plane; when displayed stereoscopically, the objects are more recessed, matching the habit of seeking a panoramic impression when viewing a large scene at a distance. The method of the present embodiment for stereoscopically rendering a scene can therefore bring the viewer a more realistic stereoscopic effect.
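The behaviour described in this paragraph can be checked numerically by chaining formula (3) into formula (2); all the numbers below are illustrative, not taken from the patent:

```python
def adaptive_z_con(z_min, z_avg, z_s, z_f, z_n, k=0.5):
    """alpha grows with Z_min and Z_s (formula (3), clamped to [0, 0.99]),
    pulling Z_con toward Z_min for large, distant scenes and toward
    Z_avg for small, nearby scenes (formula (2))."""
    alpha = max(0.0, min(0.99, (k * z_s + (1.0 - k) * z_min) / (z_f - z_n)))
    return z_min * alpha + z_avg * (1.0 - alpha)

# Small scene viewed up close: alpha is small, Z_con stays near Z_avg,
# so much of the scene lies in front of the zero-parallax plane (protrudes).
near_small = adaptive_z_con(z_min=1.0, z_avg=2.0, z_s=1.0, z_f=101.0, z_n=1.0)
# Large scene viewed from afar: alpha is large, Z_con drops toward Z_min,
# so more of the scene lies behind the zero-parallax plane (recessed).
far_large = adaptive_z_con(z_min=60.0, z_avg=80.0, z_s=40.0, z_f=101.0, z_n=1.0)
```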
Correspondingly, the present embodiment also provides a graphics processing apparatus, which can be implemented on one or more of a central processing unit (CPU), a graphics processing unit (GPU), a field-programmable gate array (FPGA), and an application-specific integrated circuit (ASIC). It can be realized by a CPU or GPU running corresponding software, by hardware logic in circuits such as an FPGA or ASIC, or partly by hardware logic circuits and partly by software.
(spacing of second embodiment shown in broken lines is determined submodule 202 among the figure) as shown in Figure 7, the graph and image processing device of present embodiment comprises scene analysis module 10, parameter determination module 20 and three-dimensional rendering module 30, wherein:
The scene analysis module 10 is configured to calculate, for the scene to be rendered, a depth parameter related to the distance between the scene and the camera, and a scene scale parameter of the scene to be rendered.
The scene analysis module 10 in turn comprises:
a statistics submodule 101, configured to calculate the depth parameter related to the distance between the scene to be rendered and the camera, the depth parameter comprising the minimum depth, mean depth and maximum depth of the scene to be rendered, the mean depth being equal to the mean of the depths of all pixels in the scene to be rendered;
a calculation submodule 102, configured to calculate the scene scale parameter of the scene to be rendered, the scene scale parameter comprising the depth difference obtained by subtracting the minimum depth from the maximum depth.
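As an illustrative sketch only (not part of the patent text), the statistics produced by submodules 101 and 102 might be computed as follows, assuming the per-pixel depths of the scene to be rendered are available as a flat sequence:

```python
def scene_depth_stats(depths):
    """Depth statistics for the scene analysis module (illustrative sketch).

    depths: per-pixel depth values of the scene to be rendered.
    Returns (z_min, z_avg, z_max, z_s), where z_s is the scene scale
    parameter: the maximum depth minus the minimum depth.
    """
    depths = list(depths)
    z_min = min(depths)                # minimum depth
    z_max = max(depths)                # maximum depth
    z_avg = sum(depths) / len(depths)  # mean depth over all pixels
    return z_min, z_avg, z_max, z_max - z_min
```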
The parameter determination module 20 in turn comprises:
a depth determination submodule 201, configured to determine the zero-parallax-plane depth of the first camera and the second camera according to the depth parameter related to the distance between the scene to be rendered and the camera and the scene scale parameter. In this embodiment, the zero-parallax-plane depth is determined from the result of a weighted combination of the minimum depth, mean depth and maximum depth; within their respective weight boundaries, the weight of the minimum depth increases as the minimum depth and the depth difference increase, while the weights of the mean depth and the maximum depth decrease or remain unchanged as the minimum depth and the depth difference increase. The weighted combination may be a weighted average, with the weight of each depth parameter greater than or equal to 0 and less than or equal to 1, and the weights of the depth parameters summing to 1.
In one example, the depth determination submodule sets the weight of the maximum depth to zero and obtains the weight of the minimum depth from a weighted sum of the minimum depth and the depth difference together with a corresponding weight boundary: when the weighted sum lies within the weight boundary, the weight of the minimum depth equals the weighted sum; when the weighted sum exceeds the upper or lower boundary of the weight boundary, the weight of the minimum depth equals that upper or lower boundary. The reciprocal of the sum of the coefficients applied to the minimum depth and to the depth difference may equal the absolute value of the depth difference between the far clipping plane and the near clipping plane.
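A minimal sketch of this example follows. The equal split of the two coefficients between the minimum depth and the depth difference, and the concrete clamp boundaries, are assumptions made for illustration; the example fixes only that the coefficients sum to the reciprocal of the clipping-plane depth difference and that the weight is clamped to its boundary:

```python
def zero_parallax_depth(depths, z_near, z_far, w_lo=0.1, w_hi=0.9):
    """Zero-parallax-plane depth Z_con (illustrative sketch of one example).

    The weight of the maximum depth is zero; the weight of the minimum
    depth is a clamped weighted sum of the minimum depth and the depth
    difference, with coefficients summing to 1/|z_far - z_near|.
    w_lo / w_hi are assumed weight boundaries.
    """
    z_min, z_max = min(depths), max(depths)
    z_avg = sum(depths) / len(depths)
    z_s = z_max - z_min                  # scene scale (depth difference)
    k = 1.0 / abs(z_far - z_near)        # coefficient sum per the example
    alpha = 0.5 * k * (z_min + z_s)      # equal coefficient split (assumed)
    alpha = min(max(alpha, w_lo), w_hi)  # clamp to the weight boundary
    # remaining weight goes to the mean depth (max-depth weight is zero)
    return alpha * z_min + (1.0 - alpha) * z_avg
```

With this rule, a scene that is both near and shallow (small Z_min and Z_s) yields a small alpha and a Z_con close to the mean depth, leaving more of the scene in front of the zero-parallax plane, consistent with the behavior described above.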
The stereo rendering module 30 is configured to render the scene to be rendered with the first camera and the second camera respectively, based on the determined zero-parallax-plane depth.
The above graphics processing apparatus can be applied in devices that have a stereoscopic display, such as mobile phones, computers and televisions.
Correspondingly, this embodiment also provides a mobile terminal comprising a stereoscopic display and the graphics processing apparatus of this embodiment; after the graphics processing apparatus has rendered the scene to be rendered with the first camera and the second camera respectively, it sends the two resulting images to the stereoscopic display for display.
Correspondingly, this embodiment further provides a computer system comprising a graphics storage device, a graphics processing apparatus and a stereoscopic display device, wherein:
the graphics storage device is configured to store the image data and attribute data of the scene to be rendered, the attribute data comprising the depth of each pixel in the scene to be rendered;
the graphics processing apparatus is the graphics processing apparatus of this embodiment, wherein the depth of each pixel in the scene to be rendered is read from the graphics storage device, and after the scene to be rendered has been rendered with the first camera and the second camera respectively, the two resulting images are sent to the stereoscopic display device;
the stereoscopic display device is configured to stereoscopically display the scene to be rendered based on the two images.
Correspondingly, this embodiment further provides an image acquisition device comprising an image acquiring means, an image display means and the graphics processing apparatus of this embodiment; the depth of each pixel in the scene to be rendered used in the calculation is obtained from the image acquiring means, and after the scene to be rendered has been rendered with the first camera and the second camera respectively, the two resulting images are sent to the image display means. The image acquisition device may be a stereoscopic still camera or a stereoscopic video camera.
Second embodiment
In the first embodiment, the separation L_sep between the first camera and the second camera during rendering can be handled in the manner of the prior art. On the basis of the first embodiment, this embodiment instead adopts the method of determining the separation L_sep described below.
The human interocular distance is generally about 65mm, so the positive and negative parallaxes of an object between the first view and the second view must not be excessive, or the eyes will tire. The positive and negative parallaxes of an object between the first view and the second view are related to the separation of the first and second cameras, the zero-parallax-plane depth and the depth of the object; the separation of the first and second cameras is therefore constrained by preset maximum allowed positive and negative parallaxes.
Referring to Fig. 8a, which shows the parallax D_1 (D_1 > 0) between the first view and the second view of the pixel having the maximum depth in the scene to be rendered, with the zero-parallax plane of the first and second cameras corresponding to the display screen during rendering, it can be seen from the figure that:

L_sep = D_1 / (1 - Z_con / Z_max)

where the parameters have the same meanings as above. When D_1 equals the maximum allowed positive parallax, the resulting L_sep is denoted L_sep1.
Referring to Fig. 8b, which shows the parallax D_2 (D_2 < 0) between the first view and the second view of the pixel having the minimum depth in the scene to be rendered, it can be seen from the figure that:

L_sep = D_2 / (1 - Z_con / Z_min)

When D_2 equals the maximum allowed negative parallax, the resulting L_sep is denoted L_sep2.
In this embodiment, the maximum positive parallax and the maximum negative parallax can be set according to the human interocular distance, for example to 65mm and -65mm respectively.
The method of this embodiment for stereoscopically rendering a scene comprises: processing through steps 110 and 120 of the first embodiment to obtain Z_con; then calculating L_sep1 and L_sep2 in the manner described above, and taking the smaller of L_sep1 and L_sep2 as the separation L_sep between the first camera and the second camera during rendering; then, based on the determined Z_con and L_sep, rendering the scene to be rendered with the first camera and the second camera respectively, generating the two images used for stereoscopic display.
With L_sep determined in the above manner, the parallaxes of all objects in the scene are guaranteed to stay within the maximum allowed positive and negative parallaxes, so the human eye will not feel tired. At the same time, because L_sep is determined by the maximum allowed positive and negative parallax values, the objects in the scene protrude or recede to a greater degree, giving a better stereoscopic display effect.
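The two formulas above and the take-the-smaller rule can be sketched as follows. This assumes depths and parallax limits share consistent units; the ±65mm defaults merely echo the interocular-distance example and are not fixed by the embodiment:

```python
def camera_separation(z_con, z_min, z_max, d_pos=0.065, d_neg=-0.065):
    """Camera separation L_sep of the second embodiment (illustrative sketch).

    z_con: zero-parallax-plane depth; z_min, z_max: scene depth extremes.
    d_pos / d_neg: maximum allowed positive / negative parallax.
    """
    # L_sep1: separation at which the deepest pixel reaches the
    # maximum positive parallax
    l_sep1 = d_pos / (1.0 - z_con / z_max)
    # L_sep2: separation at which the nearest pixel reaches the
    # maximum negative parallax (both denominator and d_neg are
    # negative, so l_sep2 is positive)
    l_sep2 = d_neg / (1.0 - z_con / z_min)
    # the smaller separation keeps every pixel within both limits
    return min(l_sep1, l_sep2)
```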
Correspondingly, in the apparatus of this embodiment for stereoscopically rendering a scene, on the basis of the apparatus of the first embodiment, the parameter determination module further comprises:
a separation determination submodule 202, configured to determine the first separation between the first camera and the second camera during rendering.
The separation determination submodule 202 in turn comprises:
a first separation arithmetic unit, configured to calculate, based on the determined zero-parallax-plane depth, the second separation between the first camera and the second camera at which the parallax between the first view and the second view of the pixel having the maximum depth in the scene to be rendered equals the set maximum positive parallax;
a second separation arithmetic unit, configured to calculate, based on the determined zero-parallax-plane depth, the third separation between the first camera and the second camera at which the parallax between the first view and the second view of the pixel having the minimum depth in the scene to be rendered equals the set maximum negative parallax;
a decision unit, configured to determine the smaller of the second separation and the third separation as the first separation.
One of ordinary skill in the art will appreciate that all or some of the steps of the above methods can be performed by a program instructing the related hardware, the program being stored on a computer-readable storage medium such as a read-only memory, a magnetic disk or an optical disc. Alternatively, all or some of the steps of the above embodiments can be implemented with one or more integrated circuits; correspondingly, each device/module/unit in the above embodiments may be implemented in the form of hardware or in the form of software function modules. The present invention is not restricted to any particular combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention admits various changes and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (19)

1. A method of stereoscopically rendering a scene, comprising:
calculating, for a scene to be rendered, a depth parameter related to the distance between the scene and a camera, and a scene scale parameter of the scene to be rendered;
determining a zero-parallax-plane depth of a first camera and a second camera according to the depth parameter related to the distance between the scene to be rendered and the camera and the scene scale parameter;
rendering the scene to be rendered with the first camera and the second camera respectively, based on the determined zero-parallax-plane depth.
2. The method of claim 1, characterized in that:
the depth parameter related to the distance between the scene to be rendered and the camera comprises the minimum depth, mean depth and maximum depth of the scene to be rendered, the mean depth being equal to the mean of the depths of all pixels in the scene to be rendered;
the scene scale parameter of the scene to be rendered comprises the depth difference obtained by subtracting the minimum depth from the maximum depth.
3. The method of claim 2, characterized in that:
determining the zero-parallax-plane depth of the first camera and the second camera according to the depth parameter related to the distance between the scene to be rendered and the camera and the scene scale parameter comprises:
determining the zero-parallax-plane depth from the result of a weighted combination of the minimum depth, mean depth and maximum depth; wherein, within their respective weight boundaries, the weight of the minimum depth increases as the minimum depth and the depth difference increase, and the weights of the mean depth and the maximum depth decrease or remain unchanged as the minimum depth and the depth difference increase.
4. The method of claim 3, characterized in that:
the result of the weighted combination of the minimum depth, mean depth and maximum depth is the result obtained by a weighted average of the minimum depth, mean depth and maximum depth, with the weight of each depth parameter greater than or equal to 0 and less than or equal to 1, and the weights of the depth parameters summing to 1.
5. The method of claim 4, characterized in that:
the weight of the maximum depth is zero, and the weight of the minimum depth is obtained from a weighted sum of the minimum depth and the depth difference together with a corresponding weight boundary: when the weighted sum lies within the weight boundary, the weight of the minimum depth equals the weighted sum; when the weighted sum exceeds the upper or lower boundary of the weight boundary, the weight of the minimum depth equals that upper or lower boundary.
6. The method of claim 5, characterized in that:
in obtaining the weight of the minimum depth from the weighted sum of the minimum depth and the depth difference and the corresponding weight boundary, the reciprocal of the sum of the coefficients applied to the minimum depth and to the depth difference equals the absolute value of the depth difference between the far clipping plane and the near clipping plane, and the upper boundary of the corresponding weight boundary is less than 1.
7. The method of any one of claims 1 to 6, characterized in that:
before the scene to be rendered is rendered with the first camera and the second camera respectively, a first separation between the first camera and the second camera during rendering is determined in the following manner:
based on the determined zero-parallax-plane depth, calculating the second separation between the first camera and the second camera at which the parallax between the first view and the second view of the pixel having the maximum depth in the scene to be rendered equals the maximum allowed positive parallax;
based on the determined zero-parallax-plane depth, calculating the third separation between the first camera and the second camera at which the parallax between the first view and the second view of the pixel having the minimum depth in the scene to be rendered equals the maximum allowed negative parallax;
determining the smaller of the second separation and the third separation as the first separation.
8. A graphics processing apparatus, characterized by comprising:
a scene analysis module, configured to calculate, for a scene to be rendered, a depth parameter related to the distance between the scene and a camera, and a scene scale parameter of the scene to be rendered;
a parameter determination module, in turn comprising a depth determination submodule configured to determine a zero-parallax-plane depth of a first camera and a second camera according to the depth parameter related to the distance between the scene to be rendered and the camera and the scene scale parameter;
a stereo rendering module, configured to render the scene to be rendered with the first camera and the second camera respectively, based on the determined zero-parallax-plane depth.
9. The graphics processing apparatus of claim 8, characterized in that the scene analysis module comprises:
a statistics submodule, configured to calculate the depth parameter related to the distance between the scene to be rendered and the camera, the depth parameter comprising the minimum depth, mean depth and maximum depth of the scene to be rendered, the mean depth being equal to the mean of the depths of all pixels in the scene to be rendered;
a calculation submodule, configured to calculate the scene scale parameter of the scene to be rendered, the scene scale parameter comprising the depth difference obtained by subtracting the minimum depth from the maximum depth.
10. The graphics processing apparatus of claim 9, characterized in that:
the depth determination submodule determines the zero-parallax-plane depth from the result of a weighted combination of the minimum depth, mean depth and maximum depth; wherein, within their respective weight boundaries, the weight of the minimum depth increases as the minimum depth and the depth difference increase, and the weights of the mean depth and the maximum depth decrease or remain unchanged as the minimum depth and the depth difference increase.
11. The graphics processing apparatus of claim 10, characterized in that:
the depth determination submodule determines the zero-parallax-plane depth from the result obtained by a weighted average of the minimum depth, mean depth and maximum depth, with the weight of each depth parameter greater than or equal to 0 and less than or equal to 1, and the weights of the depth parameters summing to 1.
12. The graphics processing apparatus of claim 11, characterized in that:
when determining the zero-parallax-plane depth from the result obtained by the weighted average of the minimum depth, mean depth and maximum depth, the depth determination submodule sets the weight of the maximum depth to zero and obtains the weight of the minimum depth from a weighted sum of the minimum depth and the depth difference together with a corresponding weight boundary: when the weighted sum lies within the weight boundary, the weight of the minimum depth equals the weighted sum; when the weighted sum exceeds the upper or lower boundary of the weight boundary, the weight of the minimum depth equals that upper or lower boundary.
13. The graphics processing apparatus of claim 12, characterized in that:
in obtaining the weight of the minimum depth from the weighted sum of the minimum depth and the depth difference and the corresponding weight boundary, the reciprocal of the sum of the coefficients applied to the minimum depth and to the depth difference equals the absolute value of the depth difference between the far clipping plane and the near clipping plane.
14. The graphics processing apparatus of claim 8, characterized in that:
the graphics processing apparatus is implemented on one or more of a central processing unit (CPU), a graphics coprocessor (GPU), a programmable logic device such as a field-programmable gate array (FPGA), and an application-specific integrated circuit (ASIC).
15. The graphics processing apparatus of claim 8, characterized in that:
the parameter determination module further comprises a separation determination submodule, configured to determine a first separation between the first camera and the second camera during rendering;
the separation determination submodule in turn comprises:
a first separation arithmetic unit, configured to calculate, based on the determined zero-parallax-plane depth, the second separation between the first camera and the second camera at which the parallax between the first view and the second view of the pixel having the maximum depth in the scene to be rendered equals the set maximum positive parallax;
a second separation arithmetic unit, configured to calculate, based on the determined zero-parallax-plane depth, the third separation between the first camera and the second camera at which the parallax between the first view and the second view of the pixel having the minimum depth in the scene to be rendered equals the set maximum negative parallax;
a decision unit, configured to determine the smaller of the second separation and the third separation as the first separation.
16. A mobile terminal comprising a stereoscopic display, characterized by further comprising the graphics processing apparatus of any one of claims 8 to 15, wherein, after rendering the scene to be rendered with the first camera and the second camera respectively, the graphics processing apparatus sends the two resulting images to the stereoscopic display for display.
17. A computer system comprising a graphics storage device, a graphics processing apparatus and a stereoscopic display device, characterized in that:
the graphics storage device is configured to store image data and attribute data of the scene to be rendered, the attribute data comprising the depth of each pixel in the scene to be rendered;
the graphics processing apparatus is the graphics processing apparatus of any one of claims 8 to 15, wherein the depth of each pixel in the scene to be rendered is read from the graphics storage device, and after the scene to be rendered has been rendered with the first camera and the second camera respectively, the two resulting images are sent to the stereoscopic display device;
the stereoscopic display device is configured to stereoscopically display the scene to be rendered based on the two images.
18. An image acquisition device comprising an image acquiring means and an image display means, characterized by further comprising the graphics processing apparatus of any one of claims 8 to 15, wherein the depth of each pixel in the scene to be rendered used in the calculation is obtained from the image acquiring means, and after the scene to be rendered has been rendered with the first camera and the second camera respectively, the two resulting images are sent to the image display means.
19. The image acquisition device of claim 18, characterized in that the image acquisition device is a stereoscopic still camera or a stereoscopic video camera.
CN2011100708308A 2011-03-23 2011-03-23 Method for three-dimensionally rendering scene, graphic image treatment device, equipment and system Expired - Fee Related CN102157012B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100708308A CN102157012B (en) 2011-03-23 2011-03-23 Method for three-dimensionally rendering scene, graphic image treatment device, equipment and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011100708308A CN102157012B (en) 2011-03-23 2011-03-23 Method for three-dimensionally rendering scene, graphic image treatment device, equipment and system

Publications (2)

Publication Number Publication Date
CN102157012A true CN102157012A (en) 2011-08-17
CN102157012B CN102157012B (en) 2012-11-28

Family

ID=44438490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100708308A Expired - Fee Related CN102157012B (en) 2011-03-23 2011-03-23 Method for three-dimensionally rendering scene, graphic image treatment device, equipment and system

Country Status (1)

Country Link
CN (1) CN102157012B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101477701A (en) * 2009-02-06 2009-07-08 南京师范大学 Built-in real tri-dimension rendering process oriented to AutoCAD and 3DS MAX
CN101482978A (en) * 2009-02-20 2009-07-15 南京师范大学 ENVI/IDL oriented implantation type true three-dimensional stereo rendering method
CN101587386A (en) * 2008-05-21 2009-11-25 深圳华为通信技术有限公司 Method for processing cursor, Apparatus and system
CN101635061A (en) * 2009-09-08 2010-01-27 南京师范大学 Adaptive three-dimensional rendering method based on mechanism of human-eye stereoscopic vision
US20100103168A1 (en) * 2008-06-24 2010-04-29 Samsung Electronics Co., Ltd Methods and apparatuses for processing and displaying image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhong Qiang et al., "Research status of real-time display technology for computer-generated holograms", Laser Journal, Vol. 29, No. 4, 31 December 2008, full text, relevant to claims 1-19 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102427547A (en) * 2011-11-15 2012-04-25 清华大学 Multi-angle stereo rendering apparatus
CN102802015A (en) * 2012-08-21 2012-11-28 清华大学 Stereo image parallax optimization method
CN102802015B (en) * 2012-08-21 2014-09-10 清华大学 Stereo image parallax optimization method
CN103810743A (en) * 2012-11-07 2014-05-21 辉达公司 Setting downstream render state in an upstream shader
CN105282532A (en) * 2014-06-03 2016-01-27 天津拓视科技有限公司 3D display method and device
CN105282532B (en) * 2014-06-03 2018-06-22 天津拓视科技有限公司 3D display method and apparatus
CN107211085A (en) * 2015-02-20 2017-09-26 索尼公司 Camera device and image capture method
CN107211085B (en) * 2015-02-20 2020-06-05 索尼公司 Image pickup apparatus and image pickup method
CN104869389A (en) * 2015-05-15 2015-08-26 北京邮电大学 Off-axis virtual camera parameter determination method and system
CN104980729A (en) * 2015-07-14 2015-10-14 上海玮舟微电子科技有限公司 Disparity map generation method and system
CN109658494A (en) * 2019-01-07 2019-04-19 北京达美盛科技有限公司 A kind of Shading Rendering method in three-dimensional visualization figure
CN109658494B (en) * 2019-01-07 2023-03-31 北京达美盛软件股份有限公司 Shadow rendering method in three-dimensional visual graph
CN112129262A (en) * 2020-09-01 2020-12-25 珠海市一微半导体有限公司 Visual ranging method and visual navigation chip of multi-camera group

Also Published As

Publication number Publication date
CN102157012B (en) 2012-11-28

Similar Documents

Publication Publication Date Title
CN102157012B (en) Method for three-dimensionally rendering scene, graphic image treatment device, equipment and system
EP2381691B1 (en) Method and apparatus for processing three-dimensional images
KR102162107B1 (en) Image processing apparatus, image processing method and program
US8559703B2 (en) Method and apparatus for processing three-dimensional images
US9754379B2 (en) Method and system for determining parameters of an off-axis virtual camera
KR102564479B1 (en) Method and apparatus of 3d rendering user' eyes
CN109510975B (en) Video image extraction method, device and system
US10553014B2 (en) Image generating method, device and computer executable non-volatile storage medium
US20130027389A1 (en) Making a two-dimensional image into three dimensions
JP4270347B2 (en) Distance calculator
Lee et al. Eye tracking based glasses-free 3D display by dynamic light field rendering
CN114637391A (en) VR content processing method and equipment based on light field
Yoon et al. Saliency-guided stereo camera control for comfortable vr explorations
US11600043B1 (en) Stereoscopic rendering of non-flat, reflective or refractive surfaces
CN114859561B (en) Wearable display device, control method thereof and storage medium
Shen et al. 3-D perception enhancement in autostereoscopic TV by depth cue for 3-D model interaction
TWI817335B (en) Stereoscopic image playback apparatus and method of generating stereoscopic images thereof
US20240137483A1 (en) Image processing method and virtual reality display system
US20240236293A9 (en) Image processing method and virtual reality display system
CN115118949A (en) Stereoscopic image generation method and electronic device using same
JP2023026148A (en) Viewpoint calculation apparatus and program of the same
US9609313B2 (en) Enhanced 3D display method and system
CN115767068A (en) Information processing method and device and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160612

Address after: Room 201, Block A, No. 1 Qianwan Road, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong 518054

Patentee after: SHENZHEN RUNHUA CHUANGSHI SCIENCE & TECHNOLOGY Co.,Ltd.

Address before: 101, Building H-1, East Industrial Area, Overseas Chinese Town, Nanshan District, Shenzhen, Guangdong 518053

Patentee before: SHENZHEN SUPER PERFECT OPTICS Ltd.

C56 Change in the name or address of the patentee
CP01 Change in the name or title of a patent holder

Address after: Room 201, Block A, No. 1 Qianwan Road, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong 518054

Patentee after: SUPERD Co.,Ltd.

Address before: Room 201, Block A, No. 1 Qianwan Road, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong 518054

Patentee before: SHENZHEN RUNHUA CHUANGSHI SCIENCE & TECHNOLOGY Co.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121128

CF01 Termination of patent right due to non-payment of annual fee