CN104966297B - A universal shadow-map-based shadow generation method - Google Patents
A universal shadow-map-based shadow generation method
- Publication number: CN104966297B (application CN201510336330.2A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Analysis (AREA)
- Image Generation (AREA)
Abstract
The invention discloses a universal auxiliary technique for generating shadows with shadow maps, comprising: precomputing the contour-line templates to obtain a contour-line sample-point lookup table; rendering the scene from light-source space and constructing a linear shadow map in combination with a shadow mapping algorithm based on uniform point sampling; rendering the scene from view space, transforming each pixel into light-source space, and classifying pixels as contour-line pixels or non-contour-line pixels; for non-contour-line pixels, computing the occluder depth by linear interpolation and comparing depth values to obtain the shadow result; for contour-line pixels, reading the contour-line sample points from the precomputed lookup table via the local similarity relations and obtaining the shadow result from the positional relation between the pixel coordinate and the contour-line sample points. The invention thus provides an auxiliary technique for shadow-map-based shadow generation that, on top of existing shadow mapping algorithms and with little extra performance overhead, improves their shadow quality or performance.
Description
Technical field
The present invention relates to the field of shadow rendering in computer graphics, and in particular to a universal shadow-map-based shadow generation method.
Background
Shadows are one of the important elements of the real world and an indispensable part of realistic image synthesis. They carry important information such as light-source position and light direction in a scene. Although shadow rendering is a classic problem in computer graphics, the problem of generating shadows efficiently and robustly has not been completely solved. The two mainstream families of shadow algorithms are shadow mapping and shadow volumes, representing the image-based and geometry-based approaches to hard-shadow generation respectively. The greatest strengths of shadow mapping are its simplicity and efficiency, and the fact that its performance is largely independent of scene complexity. Moreover, because shadow volumes require extracting the geometric silhouettes of scene objects, they can only be applied to polygonal scenes, whereas shadow mapping works with any scene geometry that can be rasterized and is therefore far less restrictive. For these reasons, shadow mapping has become the method of choice for shadow generation in most real-time applications.
The biggest problem of the original shadow mapping algorithm is shadow aliasing. To address it, existing shadow mapping algorithms can be roughly classified by design approach into warping/fitting, z-partitioning, adaptive subdivision, and alias-elimination methods. Warping/fitting (Lloyd D B, Govindaraju N K, Quammen C, et al. Logarithmic perspective shadow maps [J]. ACM Transactions on Graphics, 2008, 27(4): Article No. 106) and z-partitioning (Lloyd D B, Tuft D, Yoon S-E, et al. Warping and partitioning for low error shadow maps [C]. Proceedings of the 17th Eurographics Conference on Rendering Techniques. Aire-la-Ville: Eurographics Association Press, 2006: 215-226) are efficient, but they can only distribute the aliasing evenly over the scene; the quality of the result still depends heavily on the shadow-map resolution, and warping/fitting methods cannot handle projection aliasing. Adaptive subdivision (Fernando R, Fernandez S, Bala K, et al. Adaptive shadow maps [C]. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '01). New York: ACM Press, 2001: 387-390) and alias elimination (Aila T, Laine S. Alias-free shadow maps [C]. Proceedings of the Fifteenth Eurographics Conference on Rendering Techniques. Aire-la-Ville: Eurographics Association Press, 2004: 161-166) can in theory avoid shadow-map aliasing completely, but these algorithms are less efficient and of limited direct use in real-time applications. Furthermore, whereas the original shadow mapping algorithm only needs to store a single depth map, z-partitioning, adaptive subdivision, and alias-elimination methods all require additional shadow-map storage.
Summary of the invention
The technical problem to be solved by the present invention is to design a universal shadow-map-based shadow generation method that helps other shadow mapping algorithms improve shadow quality and performance.
To solve the above technical problem, the technical solution adopted by the present invention is:
A universal shadow-map-based shadow generation method, comprising:
(1) Precompute the contour-line templates of the depixelization contour reconstruction technique to obtain a contour-line sample-point lookup table;
(2) Render the scene from light-source space, obtaining a first shadow map with a shadow mapping algorithm based on uniform point sampling; offset by half a texel in each of the x and y directions to obtain a second shadow map; the first and second shadow maps together constitute the linear shadow map;
(3) Render the scene from view space, transform each pixel into light-source space, and classify pixels as contour-line pixels or non-contour-line pixels;
(4) For non-contour-line pixels, compute the occluder depth by linear interpolation and compare depth values to obtain the shadow result; for contour-line pixels, read the contour-line sample points from the precomputed contour-line sample-point lookup table via the local similarity relations, and obtain the shadow result from the positional relation between the pixel coordinate and the contour-line sample points.
The shadow mapping algorithm based on uniform point sampling in the present invention is a fitting algorithm (reference: Lloyd D B, Govindaraju N K, Quammen C, et al. Logarithmic perspective shadow maps [J]. ACM Transactions on Graphics, 2008, 27(4): Article No. 106), a z-partitioning algorithm (reference: Lloyd D B, Tuft D, Yoon S-E, et al. Warping and partitioning for low error shadow maps [C]. Proceedings of the 17th Eurographics Conference on Rendering Techniques. Aire-la-Ville: Eurographics Association Press, 2006: 215-226), or an adaptive subdivision algorithm (reference: Fernando R, Fernandez S, Bala K, et al. Adaptive shadow maps [C]. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '01). New York: ACM Press, 2001: 387-390).
Preferably, the precomputation in step (1) is:
(1-1) Enumerate the texel-neighborhood similarity graphs: for each depth sample, test the similarity relation between its depth value and the depth values of its eight neighbors, and generate the similarity graph;
(1-2) Taking each depth sample together with one of its adjacent half-edges as a unit, perform a Voronoi subdivision of the similarity graph to obtain the position and direction of the contour line;
(1-3) Compute a B-spline curve from the contour-line position, sample the B-spline curve uniformly at 5 points, and compute the coordinates of these 5 points, thereby building the contour-line sample-point lookup table.
Preferably, in step (1-2), nodes of degree 2 are removed to optimize the contour-line position.
Specifically, the precomputation proceeds as follows:
First, enumerate the texel-neighborhood similarity graphs: for each depth sample, test the relation between its depth value and the depth values of its eight neighbors; if the difference between two depth values is smaller than a given threshold, the two samples are geometrically continuous; otherwise they are not. The similarity graph is thus generated. Then, taking each depth sample together with one of its adjacent half-edges as a unit, perform a Voronoi subdivision of the similarity graph to obtain the position and direction of the contour line. To eliminate certain singularities in the Voronoi diagram and simplify the subdivision result, nodes of degree 2 are removed to optimize the contour-line position. A B-spline curve is then computed from the contour-line position. For simplicity, the reconstructed B-spline curve is sampled uniformly at 5 points and the coordinates of these 5 points are computed, thereby building the contour-line sample-point lookup table.
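The similarity test of step (1-1) can be sketched as follows; this is a minimal illustration, assuming depth values stored in a 2D array and an arbitrary continuity threshold (the patent does not fix the threshold value):

```python
import numpy as np

def similarity_graph(depth, threshold=1e-3):
    """Connect each depth sample to the 8-neighbors whose depth differs
    by less than `threshold` (geometric continuity), as in step (1-1).
    Returns a dict mapping (row, col) -> list of connected neighbors."""
    h, w = depth.shape
    graph = {}
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               (0, -1),           (0, 1),
               (1, -1),  (1, 0),  (1, 1)]
    for y in range(h):
        for x in range(w):
            edges = []
            for dy, dx in offsets:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    if abs(depth[y, x] - depth[ny, nx]) < threshold:
                        edges.append((ny, nx))
            graph[(y, x)] = edges
    return graph

# A 4x4 depth map with a depth discontinuity between columns 1 and 2:
depth = np.array([[0.2, 0.2, 0.8, 0.8]] * 4)
g = similarity_graph(depth)
# Sample (0, 1) connects to its left and lower neighbors, but the jump
# to column 2 breaks continuity, so no edge crosses the discontinuity.
```

The subsequent Voronoi subdivision and B-spline fit operate only on the edges of this graph, which is why the threshold choice directly controls which depth steps are treated as silhouettes.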
Preferably, the sampling locations of the linear shadow map in step (2) are the center and the corners of each texel.
The linear shadow map in step (2) is generated as follows:
The sampling locations of the linear shadow map of the invention are the center and the corners of each texel. Combined with any shadow mapping algorithm based on uniform point sampling (e.g. a fitting, z-partitioning, or adaptive subdivision method), the depth values at all pixel centers are stored as the first shadow map; offsetting by half a texel in the x and y directions yields the second shadow map; the first and second shadow maps together constitute the linear shadow map.
The method of obtaining the linear shadow map combined with a fitting shadow mapping algorithm is: store the depth values at all pixel centers as the first shadow map, offset by half a texel in the x and y directions to obtain the second shadow map, let the first and second shadow maps constitute the linear shadow map, and then apply warping and distortion to the linear shadow map to obtain a new linear shadow map.
The method of obtaining the linear shadow map combined with a z-partitioning shadow mapping algorithm is: split the view frustum along the z axis into multiple sub-frusta; for each sub-frustum, store the depth values at all pixel centers as the first shadow map, offset by half a texel in the x and y directions to obtain the second shadow map, and let the first and second shadow maps constitute the linear shadow map.
The method of obtaining the linear shadow map combined with an adaptive subdivision algorithm is: subdivide the first shadow map level by level, continually increasing the local resolution of the scene, until the error between two successive results falls below a given threshold; then offset by half a texel in the x and y directions to obtain the second shadow map, and let the first and second shadow maps constitute the linear shadow map.
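The two depth maps making up the linear shadow map can be sketched as below. This is a CPU-side illustration only: `render_depth` is a hypothetical stand-in for the renderer's light-space depth query, and a real implementation would render both maps on the GPU.

```python
import numpy as np

def build_linear_shadow_map(render_depth, size):
    """Build the two depth maps of the linear shadow map described above:
    the first sampled at texel centers, the second at positions offset by
    half a texel in x and y (i.e. at texel corners)."""
    first = np.empty((size, size))
    second = np.empty((size, size))
    for j in range(size):
        for i in range(size):
            # texel-center sample for the first map
            first[j, i] = render_depth((i + 0.5) / size, (j + 0.5) / size)
            # half-texel-offset (corner) sample for the second map
            second[j, i] = render_depth(i / size, j / size)
    return first, second

# Toy scene: light-space depth is a linear ramp in u.
first, second = build_linear_shadow_map(lambda u, v: u, 4)
```

Together the two maps place a depth sample at every texel center and every texel corner, which is exactly the sampling pattern the linear reconstruction in step (4) interpolates over.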
Preferably, the pixel classification in step (3) is:
Project the target pixel into light-source space and find the texel containing it; from the linear shadow map, obtain the depth samples at the center and the four corners of that texel;
Take the average of the four corner samples and test whether the result equals the depth value stored at the texel center; if the two are equal, the target pixel is a contour-line pixel;
Otherwise, the target pixel is a non-contour-line pixel.
When comparing the average of the corner samples with the depth value stored at the center, a small tolerance must be used to absorb floating-point rounding error.
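The classification test above can be sketched in a few lines; this is an illustration under the stated convention (equality of the corner average and the stored center value marks a contour-line pixel), with an arbitrary tolerance value:

```python
def classify_pixel(center_depth, corner_depths, eps=1e-6):
    """Step (3) classification: average the four corner samples of the
    texel containing the pixel and compare with the depth stored at the
    texel center; equality within `eps` (the floating-point tolerance
    mentioned in the text) marks a contour-line pixel."""
    avg = sum(corner_depths) / 4.0
    return abs(avg - center_depth) <= eps

# Illustrative values only: corner average 0.5 matches the stored center.
print(classify_pixel(0.5, [0.4, 0.6, 0.45, 0.55]))  # contour-line pixel
print(classify_pixel(0.5, [0.9, 0.9, 0.9, 0.9]))    # non-contour-line pixel
```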
Preferably, when computing the depth value by linear interpolation in step (4), for a pixel P, the triangular region containing P can be determined from P's texture coordinate; combining the texture coordinates of the three vertices V0, V1, and V2 of that triangular region, the barycentric coordinates (u, v, ω) of P relative to the triangular region are computed:

(P.x)   (V0.x  V1.x  V2.x)(u)
(P.y) = (V0.y  V1.y  V2.y)(v)
( 1 )   (  1     1     1 )(ω)

The depth value of pixel P's occluder is then computed:

Pocc.z = u·V0.z + v·V1.z + ω·V2.z,

where:
Pocc denotes the occluder of pixel P,
z denotes the depth value,
(P.x, P.y) denotes the two-dimensional projected coordinate of pixel P in light-source space,
V0, V1, and V2 are the three vertices of the containing triangle,
V0.x, V0.y, V1.x, V1.y, V2.x, V2.y denote the two-dimensional texture coordinates of V0, V1, and V2,
V0.z, V1.z, V2.z denote the depth values of V0, V1, and V2.
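The barycentric system and the interpolation formula above can be sketched directly; the only assumption here is representing each vertex as an (x, y, z) tuple:

```python
import numpy as np

def occluder_depth(P, V0, V1, V2):
    """Solve the 3x3 barycentric system from the description for
    (u, v, w), then interpolate the occluder depth
    Pocc.z = u*V0.z + v*V1.z + w*V2.z."""
    A = np.array([[V0[0], V1[0], V2[0]],
                  [V0[1], V1[1], V2[1]],
                  [1.0,   1.0,   1.0]])
    b = np.array([P[0], P[1], 1.0])
    u, v, w = np.linalg.solve(A, b)  # barycentric coordinates of P
    return u * V0[2] + v * V1[2] + w * V2[2]

# P at the centroid of a triangle whose vertex depths are 0.0, 0.3, 0.6:
z = occluder_depth((1/3, 1/3), (0, 0, 0.0), (1, 0, 0.3), (0, 1, 0.6))
# centroid barycentrics are (1/3, 1/3, 1/3), so z = 0.3
```

In a shader this solve would normally be replaced by the closed-form 2D cross-product formulation, but the linear system form matches the equation in the claims.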
Preferably, in step (4), for a contour-line pixel, the method of reading the contour-line sample points from the precomputed contour-line sample-point lookup table via the local similarity relations is:
Compute the local similarity-graph configuration around the corresponding texel from the linear shadow map, read from that configuration the 5 precomputed sample-point positions on the corresponding contour line, and then determine the shadow result from the positional relation between the 5 sample points, taken as a polyline, and the pixel.
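The pixel-versus-polyline test can be sketched as below. This is only an illustrative interpretation: the patent states that the 5 sample points are stored in the clockwise direction of the shadowed region, so a signed-area (2D cross product) test against the nearest segment is assumed here to decide the side; the nearest-segment choice and the sign convention are this sketch's assumptions, not the patented implementation.

```python
def in_shadow(pixel, samples):
    """Decide the shadow result for a contour-line pixel from the 5
    contour sample points: find the polyline segment nearest the pixel
    and test which side of it the pixel lies on (2D cross product).
    Samples are assumed stored clockwise around the shadowed region."""
    px, py = pixel

    def side(a, b):
        # > 0: pixel is left of a->b; < 0: right of a->b
        return (b[0] - a[0]) * (py - a[1]) - (b[1] - a[1]) * (px - a[0])

    def seg_dist2(a, b):
        # squared distance from the pixel to segment ab
        ax, ay = a
        bx, by = b
        dx, dy = bx - ax, by - ay
        t = ((px - ax) * dx + (py - ay) * dy) / max(dx * dx + dy * dy, 1e-12)
        t = min(1.0, max(0.0, t))
        cx, cy = ax + t * dx, ay + t * dy
        return (px - cx) ** 2 + (py - cy) ** 2

    segs = list(zip(samples[:-1], samples[1:]))
    a, b = min(segs, key=lambda s: seg_dist2(*s))
    return side(a, b) < 0  # right of a clockwise boundary = shadowed

# Horizontal contour through y = 0.5, sample points ordered left to right;
# under the assumed convention, points below the line are shadowed.
samples = [(0.0, 0.5), (0.25, 0.5), (0.5, 0.5), (0.75, 0.5), (1.0, 0.5)]
```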
Compared with the prior art, the present invention has the following advantages:
(1) As a universal technique assisting shadow mapping algorithms in generating shadows, the present invention can be combined with any shadow mapping algorithm based on uniform point sampling and helps other shadow-map algorithms improve shadow quality and performance. When combined with algorithms that are efficient but produce shadows of lower quality, such as fitting and z-partitioning, it helps the algorithm further eliminate shadow aliasing; when combined with algorithms that produce higher-quality shadows but perform worse, such as adaptive subdivision, it helps the algorithm generate shadow results of similar quality at higher efficiency.
(2) The method of the present invention is simple and efficient, does not change the framework of the original shadow mapping algorithm, and requires only very little extra performance overhead when combined with other algorithms.
Brief description of the drawings
Fig. 1 is a schematic flow diagram of an embodiment of the present invention;
Fig. 2 is a schematic diagram of shadow-map generation in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the present invention combined with shadow mapping algorithms based on uniform point sampling.
Embodiments
As shown in Fig. 1, the technical solution adopted by the present invention is a universal technique assisting shadow mapping algorithms in generating shadows, implemented in the following steps:
1) Precompute the contour-line templates of the depixelization contour reconstruction technique involved in the present invention to obtain the contour-line sample-point lookup table.
First, enumerate the texel-neighborhood similarity graphs: for each depth sample, test the relation between its depth value and the depth values of its eight neighbors; if the difference between two depth values is smaller than a given threshold, the two samples are geometrically continuous; otherwise they are not. The similarity graph is thus generated. Each edge of the similarity graph is then regarded as two half-edges belonging to the two adjacent depth samples respectively. Taking each depth sample together with one of its adjacent half-edges as a unit, perform a Voronoi subdivision of the similarity graph; each region of the subdivision is the set of points closest to that sample and its half-edge, and the line segments subdividing contour-line texels form the estimated approximate position of the contour line. To eliminate certain singularities in the Voronoi diagram and simplify the subdivision result, nodes of degree 2 are removed to optimize the contour-line position.
A B-spline curve is then computed from the contour-line position. The final shadow computation needs to determine the positional relation between the light-source-space coordinate of a screen pixel and the contour line, which would involve complex calculations. To simplify this problem, the reconstructed B-spline curve is sampled uniformly at 5 points and the coordinates of these 5 points are computed. The final result of the precomputation step is therefore the light-source-space texture coordinates of the 5 points stored in each texel; the problem of the positional relation between a pixel and the B-spline is thus converted into the positional relation between the pixel and a polyline, which can be verified by substituting the point's coordinate into the equation of each line segment in turn. One further point is worth noting: the shadow mapping algorithm also needs to know on which side of the contour line the shadowed region and the lit region lie. For a contour line at a fixed position, the two possible orientations of the shadowed region are ambiguous. In our method, the two cases are distinguished by the storage order of the 5 sample points of the B-spline curve: the sample points are stored in the clockwise direction of the shadowed region. The contour-line sample-point lookup table is thus built.
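The uniform 5-point sampling of the reconstructed curve can be sketched as follows. The patent does not fix the spline degree, so a uniform quadratic B-spline is assumed here purely for illustration:

```python
def sample_bspline(ctrl, n=5):
    """Uniformly sample `n` points on a uniform quadratic B-spline
    defined by control points `ctrl` (list of (x, y) tuples), as in the
    precomputation above."""
    def basis(t, p0, p1, p2):
        # quadratic uniform B-spline segment basis, t in [0, 1]
        b0 = 0.5 * (1 - t) ** 2
        b1 = 0.5 + t * (1 - t)
        b2 = 0.5 * t ** 2
        return tuple(b0 * a + b1 * b + b2 * c for a, b, c in zip(p0, p1, p2))

    nseg = len(ctrl) - 2           # number of curve segments
    pts = []
    for i in range(n):
        s = i / (n - 1) * nseg     # parameter spread uniformly over all segments
        k = min(int(s), nseg - 1)  # segment index
        pts.append(basis(s - k, ctrl[k], ctrl[k + 1], ctrl[k + 2]))
    return pts

# Contour around a corner: 4 control points give a 2-segment curve.
pts = sample_bspline([(0, 0), (1, 0), (1, 1), (0, 1)])
# The 5 points run from the midpoint of the first control edge, (0.5, 0),
# to the midpoint of the last, (0.5, 1).
```

The 5 resulting coordinates are exactly what the lookup table stores per texel, with their order encoding the clockwise orientation of the shadowed region.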
2) Render the scene from light-source space; combined with any shadow mapping algorithm based on uniform point sampling (e.g. a warping/fitting, z-partitioning, or adaptive subdivision method), store the depth values at pixel centers as the first shadow map, offset by half a texel in the x and y directions to obtain the second shadow map, and let the first and second shadow maps constitute the linear shadow map.
This shadow-map generation is similar to the traditional one; the difference is that one extra shadow map of equal resolution, offset by half a texel, is stored. The sampling locations of the shadow map in the present invention are the center and the corners of each texel; compared with the traditional shadow map, this sampling scheme in fact doubles the number of samples, i.e. the original shadow map plus an equal-resolution shadow map offset by half a texel (part (a) of Fig. 2). As shown in part (b) of Fig. 2, a linearly reconstructed one-dimensional signal is closer to the original signal than a conventional shadow map, and the error of the stored depth values is also clearly reduced. In part (c) of Fig. 2, the actual geometry in the scene becomes a set of objects whose geometric unit is a triangle of a quarter-texel size; lighter triangles indicate regions with larger error relative to the original geometry. Observation shows that the reconstructed geometry agrees with the original geometry in the interior of objects; in the texel regions near object silhouettes, the error is improved relative to the original shadow map but still present.
Obtaining the linear shadow map combined with a fitting shadow mapping algorithm: store the depth values at all pixel centers as the first shadow map, offset by half a texel in the x and y directions to obtain the second shadow map, let the first and second shadow maps constitute the linear shadow map, and then apply warping and distortion to the linear shadow map to obtain a new linear shadow map, so that the density of the depth samples in light-source space is redistributed, as shown in part (a) of Fig. 3.
Obtaining the linear shadow map combined with a z-partitioning shadow mapping algorithm: as shown in part (b) of Fig. 3, split the view frustum along the z axis into multiple sub-frusta; for each sub-frustum, store the depth values at all pixel centers as the first shadow map, offset by half a texel in the x and y directions to obtain the second shadow map, and let the first and second shadow maps constitute the linear shadow map.
Obtaining the linear shadow map combined with an adaptive subdivision algorithm: as shown in part (c) of Fig. 3, subdivide the first shadow map level by level, continually increasing the local resolution of the scene, until the error between two successive results falls below a given threshold; then offset by half a texel in the x and y directions to obtain the second shadow map, and let the first and second shadow maps constitute the linear shadow map.
3) Render the scene from view space, transform each pixel into light-source space, and classify pixels as contour-line pixels or non-contour-line pixels.
Before computing shadows, the screen pixels must first be classified. When a pixel is projected into light-source space, we find the texel containing it; from the linear shadow map we can obtain the exact depth samples at the center and the four corners of that texel. We average the four corner samples and test whether the result equals the depth value stored at the texel center. If the two are equal, the screen pixel is a contour-line pixel; otherwise it is a non-contour-line pixel. When comparing the two values, a small tolerance must be used to absorb floating-point rounding error.
4) For non-contour-line pixels, compute the occluder depth by linear interpolation and compare depth values to obtain the shadow result; for contour-line pixels, read the contour-line sample points from the lookup table via the local similarity relations, and determine the shadow result from the positional relation between the pixel coordinate and the contour-line sample points.
For a non-contour-line pixel, the occluder depth is computed by linear interpolation and the depth values are compared to obtain the shadow result. In the linear shadow map, each texel of the original shadow map is naturally divided by the samples into four triangular regions. When a pixel P is projected onto the shadow map, its texture coordinate determines which triangular region of which texel it falls in. Combining the texture coordinates of the three triangle vertices V0, V1, and V2, the barycentric coordinates (u, v, ω) of the pixel relative to the triangle are obtained from the relation

(P.x)   (V0.x  V1.x  V2.x)(u)
(P.y) = (V0.y  V1.y  V2.y)(v)
( 1 )   (  1     1     1 )(ω)

The depth value of P's occluder is then obtained by the linear interpolation

Pocc.z = u·V0.z + v·V1.z + ω·V2.z

where Pocc denotes the occluder of pixel P; z denotes the depth value; (P.x, P.y) denotes the two-dimensional projected coordinate of pixel P in light-source space; V0, V1, and V2 are the three vertices of the containing triangle; V0.x, V0.y, V1.x, V1.y, V2.x, V2.y denote the two-dimensional texture coordinates of V0, V1, and V2; and V0.z, V1.z, V2.z denote their depth values.
For a contour-line pixel, the similarity-graph configuration around the corresponding texel is first computed from the shadow map, then the 5 precomputed sample-point positions on the corresponding contour line are read, and finally the shadow result is determined from the positional relation between the point and the polyline.
The above are merely preferred embodiments of the present invention; the scope of protection of the present invention is not limited to the above embodiments, and any technical solution falling under the principle of the present invention belongs to its scope of protection. For those skilled in the art, improvements and modifications made without departing from the principle of the present invention shall also be regarded as within the scope of protection of the present invention.
Claims (7)
1. A universal shadow-map-based shadow generation method, characterized by comprising:
(1) precomputing the contour-line templates of the depixelization contour reconstruction technique to obtain a contour-line sample-point lookup table, the precomputation being:
(1-1) enumerating the texel-neighborhood similarity graphs: for each depth sample, testing the similarity relation between its depth value and the depth values of its eight neighbors, and generating the similarity graph;
(1-2) taking each depth sample together with one of its adjacent half-edges as a unit, performing a Voronoi subdivision of the similarity graph to obtain the position and direction of the contour line;
(1-3) computing a B-spline curve from the contour-line position, sampling the B-spline curve uniformly at 5 points, and computing the coordinates of the 5 points, thereby building the contour-line sample-point lookup table;
(2) rendering the scene from light-source space, obtaining a first shadow map with a shadow mapping algorithm based on uniform point sampling, offsetting by half a texel in each of the x and y directions to obtain a second shadow map, the first and second shadow maps constituting a linear shadow map;
(3) rendering the scene from view space, transforming each pixel into light-source space, and classifying pixels as contour-line pixels or non-contour-line pixels;
(4) for non-contour-line pixels, computing the occluder depth by linear interpolation and comparing depth values to obtain the shadow result; for contour-line pixels, reading the contour-line sample points from the precomputed contour-line sample-point lookup table via the local similarity relations, and obtaining the shadow result from the positional relation between the pixel coordinate and the contour-line sample points.
2. The universal shadow-map-based shadow generation method according to claim 1, characterized in that the shadow mapping algorithm based on uniform point sampling is a fitting algorithm, a z-partitioning algorithm, or an adaptive subdivision algorithm.
3. The universal shadow-map-based shadow generation method according to claim 1, characterized in that, in step (1-2), nodes of degree 2 are removed to optimize the contour-line position.
4. The universal shadow-map-based shadow generation method according to claim 1, characterized in that the sampling locations of the linear shadow map in step (2) are the center and the corners of each texel.
5. The universal shadow-map-based shadow generation method according to claim 1, characterized in that the pixel classification in step (3) is:
projecting the target pixel into light-source space and finding the texel containing it; obtaining from the linear shadow map the depth samples at the center and the four corners of that texel;
taking the average of the four corner samples and testing whether the result equals the depth value stored at the texel center; if the two are equal, the target pixel is a contour-line pixel;
otherwise, the target pixel is a non-contour-line pixel.
6. The universal shadow-map-based shadow generation method according to claim 1, characterized in that, when computing the depth value by linear interpolation in step (4), for a pixel P, the triangular region containing P is determined from P's texture coordinate; combining the texture coordinates of the three vertices V0, V1, and V2 of that triangular region, the barycentric coordinates (u, v, ω) of P relative to the triangular region are computed:
(P.x)   (V0.x  V1.x  V2.x)(u)
(P.y) = (V0.y  V1.y  V2.y)(v)
( 1 )   (  1     1     1 )(ω)
the depth value of pixel P's occluder then being computed as:

Pocc.z = u·V0.z + v·V1.z + ω·V2.z,

where:
Pocc denotes the occluder of pixel P,
z denotes the depth value,
(P.x, P.y) denotes the two-dimensional projected coordinate of pixel P in light-source space,
V0, V1, and V2 are the three vertices of the containing triangle,
V0.x, V0.y, V1.x, V1.y, V2.x, V2.y denote the two-dimensional texture coordinates of V0, V1, and V2, and V0.z, V1.z, V2.z denote the depth values of V0, V1, and V2.
7. The universal shadow-map-based shadow generation method according to claim 1, characterized in that, in step (4), for a contour-line pixel, the method of reading the contour-line sample points from the precomputed contour-line sample-point lookup table via the local similarity relations is:
computing the similarity-graph configuration around the corresponding texel from the linear shadow map, reading from that configuration the 5 precomputed sample-point positions on the corresponding contour line, and then determining the shadow result from the positional relation between the pixel and the polyline through the 5 sample points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201510336330.2A | 2015-06-12 | 2015-06-12 | A universal shadow-map-based shadow generation method
Publications (2)
Publication Number | Publication Date
---|---
CN104966297A | 2015-10-07
CN104966297B | 2017-09-12
Family
ID=54220328
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510336330.2A Active CN104966297B (en) | 2015-06-12 | 2015-06-12 | A kind of method that general echo generates shade |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104966297B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106447759B (en) * | 2016-10-19 | 2018-10-12 | 长春理工大学 | The method for realizing the indirect lighting effect proximity rendering of three-dimensional scenic using visibility interpolation |
CN114283267A (en) * | 2018-02-11 | 2022-04-05 | 鲸彩在线科技(大连)有限公司 | Shadow map determination method and device |
CN110502966B (en) * | 2019-07-01 | 2023-06-30 | 广州市川流信息科技有限公司 | Classified information acquisition equipment, method and storage device for packages |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101937577A (en) * | 2010-09-17 | 2011-01-05 | 浙江大学 | Method capable of generating shadow with boundary pixel oversampling effect |
CN101271588B (en) * | 2007-10-26 | 2012-01-11 | 威盛电子股份有限公司 | Recreatable geometric shade pattern method |
CN103366396A (en) * | 2013-07-06 | 2013-10-23 | 北京航空航天大学 | Partial shadow image-based high-quality soft shadow fast generation method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101598374B1 (en) * | 2009-09-21 | 2016-02-29 | 삼성전자주식회사 | Image processing apparatus and method |
2015-06-12: application CN201510336330.2A filed; granted as CN104966297B (Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106910242B (en) | Method and system for carrying out indoor complete scene three-dimensional reconstruction based on depth camera | |
KR101199475B1 (en) | Method and apparatus for reconstruction 3 dimension model | |
McCool | Shadow volume reconstruction from depth maps | |
TWI443602B (en) | Hierarchical bounding of displaced parametric surfaces | |
Sibbing et al. | Sift-realistic rendering | |
JP6863693B2 (en) | Graphics processing system and method | |
JP2003067769A (en) | Method and system for generating image from one set of discrete sample points | |
Bangaru et al. | Differentiable rendering of neural sdfs through reparameterization | |
CN103854301A (en) | 3D reconstruction method of visible shell in complex background | |
CN104966297B (en) | A kind of method that general echo generates shade | |
Maruya | Generating a Texture Map from Object‐Surface Texture Data | |
Westerteiger et al. | Spherical Terrain Rendering using the hierarchical HEALPix grid | |
CN108197555B (en) | Real-time face fusion method based on face tracking | |
CN104933754B (en) | Remove the linear shade drawing method of pixelation profile reconstruction | |
NO324930B1 (en) | Device and method for calculating raster data | |
Pagés et al. | Composition of Texture Atlases for 3D Mesh Multi-texturing. | |
Degener et al. | A variational approach for automatic generation of panoramic maps. | |
Englert | Using mesh shaders for continuous level-of-detail terrain rendering | |
Frisken et al. | Efficient estimation of 3D euclidean distance fields from 2D range images | |
Li et al. | Edge-aware neural implicit surface reconstruction | |
US20020175913A1 (en) | Regularization of implicial fields representing objects and models | |
Wang et al. | Silhouette smoothing for real-time rendering of mesh surfaces | |
Dutreve et al. | Real-time dynamic wrinkles of face for animated skinned mesh | |
Hardy et al. | 3-view impostors | |
Fedorov et al. | Interactive reconstruction of the 3D-models using single-view images and user markup |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||