CN103530907A - Complicated three-dimensional model drawing method based on images - Google Patents

Complicated three-dimensional model drawing method based on images

Info

Publication number
CN103530907A
CN103530907A · CN201310497271.8A
Authority
CN
China
Prior art keywords
visual angle
pixel
virtual visual angle
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310497271.8A
Other languages
Chinese (zh)
Other versions
CN103530907B (en)
Inventor
向开兵
郝爱民
吴伟和
李帅
王德志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN ESUN DISPLAY CO Ltd
Beihang University
Original Assignee
SHENZHEN ESUN DISPLAY CO Ltd
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN ESUN DISPLAY CO Ltd, Beihang University filed Critical SHENZHEN ESUN DISPLAY CO Ltd
Priority to CN201310497271.8A priority Critical patent/CN103530907B/en
Publication of CN103530907A publication Critical patent/CN103530907A/en
Application granted granted Critical
Publication of CN103530907B publication Critical patent/CN103530907B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides an image-based drawing method for complicated three-dimensional models. Vertices are uniformly selected on a spherical surface surrounding the model as camera positions, and, with the sphere center as the camera's target point, a color image and a depth image of the model are obtained at each sampling view. The method triangulates the sphere according to the sample-point coordinates, determines the triangle containing the virtual view, takes the sampling views corresponding to the triangle's vertices as reference views, and draws the model at the virtual view from the depth and color images of the reference views: the mapping relationships between pixels of the virtual view and pixels of the three reference views are calculated from the reference-view parameters; with the depth images as reference, the virtual-view image is drawn from the appropriate reference-view pixels or a background pixel; finally, the drawn color image is optimized. The method meets real-time requirements and obtains a very true-to-life drawing effect.

Description

Complicated three-dimensional model drawing method based on images
Technical field
The present invention relates to an image-based realistic rendering method, used mainly for the realistic rendering of complex models at virtual viewing angles.
Background technology
In traditional computer graphics, the general process of realistic rendering is: the user inputs the geometric properties of an object and performs geometric modeling; then, according to the illumination of the environment, the physical attributes of the model such as smoothness, transparency, reflectivity, and refractive index, and its surface texture, the color value of each pixel of the object at a given viewing angle is computed through spatial transformation, perspective transformation, and so on. However, the modeling process of this approach is complicated, its computation and display overheads are large, and its time complexity is strongly coupled to model complexity; it is unsuitable for drawing complex models and has difficulty producing lifelike results.
Image-based rendering (IBR) takes images as the basic input and synthesizes images at virtual viewing angles without reconstructing a geometric model. It has broad application prospects in fields such as video games, virtual tours, e-commerce, and industrial inspection, and is therefore a research hotspot of realistic three-dimensional rendering. The image-based drawing method for complicated three-dimensional models proposed by the present invention is an IBR method.
The main image-based rendering methods are as follows:
1. Hybrid geometry- and image-based modeling and rendering (Hybrid Geometry and Image-based Approach)
The main steps of the hybrid geometry- and image-based modeling and rendering method proposed by Paul E. Debevec et al. (Reference 1: Paul E. Debevec, Camillo J. Taylor, Jitendra Malik. "Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach". In SIGGRAPH '96: Proceedings of the 23rd annual conference on computer graphics and interactive techniques, pages 11-20) are as follows:
A. Photograph the scene and interactively specify the edges of the model;
B. Generate a rough model of the scene;
C. Refine the model with a model-based stereo vision algorithm;
D. Synthesize the new view with view-dependent texture mapping.
An example of this process is shown in Fig. 5. The advantage of this method is that it is simple and fast, obtaining new views from a small number of photographs; its shortcoming is that the model outline must be specified manually, so it is only applicable to scenes with regular shapes such as ordinary buildings and is unsuitable for complex models.
2. the method for view interpolation, conversion (View Interpolation, Transaction)
View interpolation and transformation methods generate the image at a virtual viewing angle directly from the photographs of reference points. These methods (Reference 2: Geetha Ramachandran, Markus Rupp. "Multiview Synthesis From Stereo Views". In IWSSIP 2012, 11-13 April 2012, pp. 341-345; Reference 3: Ankit K. Jain, Lam C. Tran, Ramsin Khoshabeh, Truong Q. Nguyen. "Efficient Stereo-to-Multiview Synthesis". ICASSP 2011, pp. 889-892; Reference 4: S. C. Chan, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. "Image-Based Rendering and Synthesis". IEEE Signal Processing Magazine, vol. 24, no. 6, pp. 22-33) take as input the rectified color images and depth images at two viewing angles, and output the image at a virtual viewing angle lying on the straight line (the baseline) determined by the two reference points. The concrete process is as follows:
A. Stereo matching: generate the initial synthetic-view image;
B. Optimization: find possible hole points by edge detection;
C. Filling: generate the depth map corresponding to the synthetic view;
D. Image reconstruction: fill the holes of the color image according to the depth image.
An example of this method is shown in Fig. 6. Its advantages are a simple process, a high peak signal-to-noise ratio of the synthesized image (the peak signal-to-noise ratio, PSNR, characterizes the similarity between the processed image and the original; a higher PSNR indicates a more faithful composite image), and excellent hole filling. However, the method can only produce images at virtual viewing angles lying on the baseline, and rectifying the pictures introduces projection errors, so only approximate intermediate images can be generated.
Summary of the invention
Existing realistic rendering methods for three-dimensional models have the following shortcomings: geometry-based methods involve complicated model acquisition and reconstruction, their drawing is strongly affected by model complexity and illumination properties, their results lack realism, and they are unsuitable for complex models; in image-based drawing methods, the synthetic view is restricted to the baseline between two reference views, so images at arbitrary viewing angles of the object cannot be generated.
To address the shortcomings of the prior art, the present invention proposes an image-based drawing method for complicated three-dimensional models, which comprises the following processes:
(1) Calibration of the virtual view: the sphere surrounding the model is triangulated according to the camera-position coordinates of the sample points, the triangular patch containing the virtual view is determined, and the views corresponding to the three vertices of this patch are taken as reference views; the virtual view can then be expressed as a linear combination of the reference views;
(2) Computation and drawing: from the positions and camera parameters of the virtual view and the reference views, the mapping relationships between the pixel coordinates of the three reference-view images and the pixel coordinates of the virtual view are computed; according to these mappings, each pixel of a reference-view color image is mapped into the virtual-view image and its coordinate and depth value there are computed; where pixels of several reference views map to the same position, the pixel value with the smaller depth is kept; every virtual-view pixel filled by a reference-view pixel is marked, constructing a gray-scale map that records how the reference views map onto the virtual view;
(3) Image optimization: for holes in the virtual-view image, i.e. pixels onto which the computation of (2) maps no reference pixel, edge-contour information is extracted from the gray-scale map recording the reference-to-virtual-view mapping, and the generated color image is median-filtered along the edge contours, filling the holes with the values of neighboring pixels; the median filtering simultaneously removes noise pixels.
Step (1) calibrates the virtual view and determines the reference views used for drawing it; step (2) establishes the pixel mappings from the reference views to the virtual view, thereby realizing the drawing of the complex model at the virtual view.
In steps (2) and (3), CUDA (Compute Unified Device Architecture) parallel computation is used to accelerate the drawing and optimization at the virtual view, meeting the requirement of real-time interaction.
The principle of the present invention is as follows:
The invention provides an image-based drawing method for complicated three-dimensional models. Vertices are uniformly placed on the sphere surrounding the model as camera positions, and, with the sphere center as the camera target, the color image and depth image of the model are captured at each sampling view. The sphere is triangulated according to the sample-point coordinates; the triangle containing the virtual view is determined; the sampling views corresponding to the triangle's vertices are taken as reference views; and the model at the virtual view is drawn from the reference views' depth and color images: first, the parameters of the reference views are used to compute the pixel mappings between the virtual view and each of the three reference views; second, with the depth images as reference, suitable reference-view pixels or background pixels are selected to draw the virtual-view image; finally, the drawn color image is optimized. CUDA acceleration is used throughout the drawing process, realizing fast parallel image processing. The present invention meets real-time requirements while obtaining a drawing effect that is true to life.
Compared with the prior art, the present invention has the following advantages:
(1) The virtual view can move arbitrarily in space. The model at the virtual view is drawn from three reference points, which both guarantees that the virtual view can move freely in the horizontal and vertical dimensions and minimizes the input size and storage overhead of the algorithm;
(2) The proposed drawing method is highly stable, and the coupling between the algorithm's time complexity and the scene's complexity is low, making it particularly suitable for drawing complex models.
Description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the principle schematic of the algorithm;
Fig. 3 shows the positional relationship between the virtual view and the reference-point views when the virtual view lies inside a triangular patch;
Fig. 4 shows the forward transformation matrix and the inverse transformation matrix;
Fig. 5 shows the hybrid geometry- and image-based modeling process, where (a) the outline of the object to be reconstructed is specified interactively, (b) a rough model of the object is obtained, (c) the model is refined with a model-based stereo vision algorithm, and (d) the final scene is obtained with view-dependent texture mapping;
Fig. 6 is an example of the disparity-map method: the initial input is a rectified color image (shown here as a gray-scale schematic); the initially generated image is the color image after pixel mapping and filling (likewise shown as a gray-scale schematic), note the hole in the box; the final output is the color image after hole filling, note that the hole in the box has been eliminated;
Fig. 7 is the color image at the virtual view after preliminary mapping and filling (shown as a gray-scale schematic); note the noise around the model and the holes inside it;
Fig. 8 is the gray-scale map at the virtual view after preliminary mapping and filling; note the noise around the model and the holes inside it;
Fig. 9 is the edge map after the dilation and difference operations;
Fig. 10 is the result of simply median-filtering all pixels;
Fig. 11 is the result of median-filtering only the edge pixels; compared with Fig. 10, Fig. 11 preserves detail better and is more realistic;
Fig. 12 shows part of the input images of the example: the top three pictures are schematic input images (the inputs may be color images, not shown in the figure), and the bottom three are the corresponding depth maps;
Fig. 13 shows part of the output images of the example; the output pictures are synthesized by the proposed drawing method.
Embodiment
The present invention is further described below in conjunction with the drawings and specific embodiments.
1. Realistic rendering method based on depth images
For a three-dimensional model represented by sample points evenly distributed on the sphere surrounding the model, each comprising camera-view parameters, a depth map, and a color image, the sphere is triangulated according to the sample-point coordinates, the triangle containing the virtual view is determined, and the sampling views corresponding to the triangle's vertices are taken as reference views; the model at the virtual view is then drawn from the reference views' depth and color images: first, the parameters of the reference views are used to compute the pixel mappings between the virtual view and each of the three reference views; second, with the depth images as reference, suitable reference-view pixels or background pixels are selected to draw the virtual-view image; finally, the drawn color image is optimized. CUDA acceleration is used throughout the drawing process, realizing fast parallel image processing. The specific implementation is as follows:
1. Virtual-view calibration module
In the present invention, the three-dimensional model of the object is represented by the depth images, color images, and camera parameters of the different sample points. A three-dimensional model M is represented by a 2-tuple ⟨K, V⟩, where K is a simplicial complex representing the connectivity of the sample points, and V is the set of sample points, V = (v_i | i = 1, 2, 3, ..., |V|), with |V| the number of sample points. v_i = (c_i, d_i, p_i) denotes the i-th sample point, where c_i and d_i are its color image and depth image, and p_i = (pc_i, po_i, asp_i, fov_i, zn_i, zf_i) are its camera parameters: pc_i is the camera position, po_i the camera target, asp_i the aspect ratio of the camera's field of view, fov_i the extent of the field of view, and zn_i, zf_i the minimum and maximum effective camera depths.
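For illustration only, this representation can be sketched as a small data structure (a non-authoritative Python sketch; field names mirror the symbols in the text, and the image types are assumed to be NumPy arrays):

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class CameraParams:
    pc: np.ndarray   # camera position, shape (3,)
    po: np.ndarray   # camera target point, shape (3,)
    asp: float       # aspect ratio of the field of view
    fov: float       # field-of-view extent (radians)
    zn: float        # nearest effective depth
    zf: float        # farthest effective depth

@dataclass
class SamplePoint:
    c: np.ndarray    # color image, shape (H, W, 3)
    d: np.ndarray    # depth image, shape (H, W)
    p: CameraParams  # camera parameters of this sample point

@dataclass
class Model:
    K: List[Tuple[int, int, int]]  # simplicial complex: vertex-index triples
    V: List[SamplePoint]           # the set of sample points
```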
Before drawing the model at a virtual view, the three sample points nearest to the virtual view must be obtained to calibrate the virtual view. Since all sample points are evenly distributed on the sphere surrounding the object, once the sphere has been triangulated according to the sample-point coordinates it suffices to determine the triangular patch containing the virtual view; its three vertices are exactly the required nearest sample points, which are called the reference points of the virtual view:
$$\langle v_1, v_2, v_3 \rangle = f(v) \tag{1}$$
where v is the virtual view, ⟨v1, v2, v3⟩ is the triangular patch containing it, and v1, v2, v3 are the reference points of v. In the present invention, the three nearest reference points are determined by computing the intersection of the vector pointing from the virtual-view camera position toward the sphere center with the polyhedron approximating the sphere surrounding the object, and taking the triangular patch containing the intersection point.
The virtual view and the reference points have the positional relationship shown in Fig. 3. By analytic geometry, under this relationship the virtual view can be synthesized linearly from the reference points, satisfying:
$$\vec{v} = \alpha\,\vec{v}_1 + \beta\,\vec{v}_2 + (1-\alpha-\beta)\,\vec{v}_3,\qquad 0\le\alpha\le 1,\quad 0\le\beta\le 1,\quad 0\le 1-\alpha-\beta\le 1 \tag{2}$$
where $\vec{v}$, $\vec{v}_1$, $\vec{v}_2$, $\vec{v}_3$ denote the vectors from the sphere center to the coordinate point of the virtual view and to the coordinate points of the three reference points, respectively.
As shown in Fig. 3, the coordinates of the three reference points and the sphere center form a tetrahedron, and the coordinate of the virtual view lies on the triangular patch bounded by the reference-point coordinates. Let $\vec{e}_1$ and $\vec{e}_2$ be the edge vectors of this triangular patch. The tetrahedron's volume can then be expressed as the scalar triple product of three edges not lying in the same plane:
$$u = \frac{1}{6}\,\left(\vec{v}\times\vec{e}_1\right)\cdot\vec{e}_2 = -\frac{1}{6}\,\left(\vec{v}_1\times\vec{v}_2\right)\cdot\vec{e}_2 \tag{3}$$
Rearranging formula (2):
$$\vec{v}_1 = \frac{\vec{v} - \beta\,\vec{v}_2 - (1-\alpha-\beta)\,\vec{v}_3}{\alpha} \tag{4}$$
Substituting formula (4) into formula (3) yields:
$$\alpha = -\,\frac{\vec{v}\cdot\left(\vec{v}_2\times\vec{e}_2\right)}{\left(\vec{v}\times\vec{e}_1\right)\cdot\vec{e}_2} \tag{5}$$
Similarly, we obtain:
$$\beta = -\,\frac{\vec{v}\cdot\left(\vec{v}_1\times\vec{e}_2\right)}{\left(\vec{v}\times\vec{e}_1\right)\cdot\vec{e}_2} \tag{6}$$
In summary, it suffices to traverse all patches and use formulas (5) and (6) to obtain α and β for each triangular patch; if α and β satisfy the constraints of formula (2), the virtual view lies within that patch and can be expressed linearly by its reference points according to formula (2). A sketch of this search follows.
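A minimal sketch of the calibration search (Python/NumPy). Rather than transcribing the closed forms (5) and (6), it solves the linear system of formula (2) directly, which is an equivalent way to obtain the weights; the function names are hypothetical:

```python
import numpy as np

def patch_weights(v, v1, v2, v3):
    """Solve formula (2) directly: find (a, b, c) with v ~ a*v1 + b*v2 + c*v3.
    Solving the 3x3 system is an equivalent alternative to the closed forms
    (5) and (6); the weights are normalized so that a + b + c = 1."""
    A = np.column_stack([v1, v2, v3])
    try:
        w = np.linalg.solve(A, np.asarray(v, dtype=float))
    except np.linalg.LinAlgError:      # degenerate patch, skip it
        return None
    if np.all(w >= 0) and w.sum() > 0:
        return w / w.sum()             # barycentric weights of formula (2)
    return None                        # v does not point through this patch

def calibrate(v, faces, verts):
    """Traverse all triangular patches (formula (1)): return the patch whose
    constraints from formula (2) are satisfied, together with its weights."""
    for (i, j, k) in faces:
        w = patch_weights(v, verts[i], verts[j], verts[k])
        if w is not None:
            return (i, j, k), w
    return None
```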
2. Computation and drawing module
This module comprises two subprocesses: computation and drawing. The computation process determines the pixel mappings from each reference view to the virtual view; the drawing process selects, for each pixel of the virtual view, pixels from one to three reference views or a background pixel according to the computed mappings.
2.1 Computation process
Given the pixel coordinate and pixel value of the depth image at a reference view, together with the camera parameters of the reference point, the corresponding coordinate of the pixel in the three-dimensional world coordinate system can be obtained, and the process is invertible. That is, at any reference view there is a bijection between the depth image's pixel coordinate plus pixel value and the world coordinate of the three-dimensional object:
$$\vec{\mathrm{pixel}} = M\cdot\vec{\mathrm{object}},\qquad \vec{\mathrm{object}} = M^{-1}\cdot\vec{\mathrm{pixel}},\qquad \vec{\mathrm{pixel}} = (i,\, j,\, 1,\, \mathrm{depth}),\qquad \vec{\mathrm{object}} = (x,\, y,\, z,\, \mathrm{depth}) \tag{7}$$
where i, j are the pixel coordinates, x, y, z are the coordinates in the world coordinate system, and depth is the pixel value of the depth image. M is an invertible matrix determined by the camera parameters of the sample point. In the present invention, the matrix M transforming world coordinates to pixel coordinates is defined as the forward transformation matrix, and M⁻¹ as the inverse transformation matrix. Fig. 4 illustrates the solution procedure, taking the forward transformation matrix as an example.
The object's coordinates in the world coordinate system are first converted into camera-space coordinates by the camera view transformation, and then into pixel coordinates by the perspective projection transformation, that is:
$$M = \mathrm{mProject}\cdot\mathrm{mLookAt} \tag{8}$$
Combining formulas (7) and (8):
$$\vec{\mathrm{pixel}} = \mathrm{mProject}\cdot\mathrm{mLookAt}\cdot\vec{\mathrm{object}},\qquad \vec{\mathrm{object}} = \left(\mathrm{mProject}\cdot\mathrm{mLookAt}\right)^{-1}\cdot\vec{\mathrm{pixel}} \tag{9}$$
where mLookAt is the transformation matrix from world coordinates to camera-space coordinates, determined by the camera's position coordinate pc, target-point coordinate po, and up-direction coordinate up; its concrete form is:
$$\mathrm{mLookAt} = \begin{bmatrix} xaxis.x & yaxis.x & zaxis.x & 0\\ xaxis.y & yaxis.y & zaxis.y & 0\\ xaxis.z & yaxis.z & zaxis.z & 0\\ -\,xaxis\cdot pc & -\,yaxis\cdot pc & -\,zaxis\cdot pc & 1 \end{bmatrix}$$
$$zaxis = \frac{pc-po}{\left|pc-po\right|},\qquad xaxis = \frac{up\times zaxis}{\left|up\times zaxis\right|},\qquad yaxis = zaxis\times xaxis \tag{10}$$
mProject is the perspective projection matrix, determined by the camera's field-of-view extent (fov), aspect ratio (asp), nearest depth (zn), and farthest depth (zf); its concrete form is:
$$\mathrm{mProject} = \begin{bmatrix} xScale & 0 & 0 & 0\\ 0 & yScale & 0 & 0\\ 0 & 0 & \dfrac{zf}{zf-zn} & 1\\ 0 & 0 & \dfrac{-\,zn\cdot zf}{zf-zn} & 0 \end{bmatrix} \tag{11}$$
$$yScale = \cot\!\left(\frac{fov}{2}\right),\qquad xScale = \frac{yScale}{asp}$$
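For illustration, the two matrices of formulas (10) and (11) can be built as follows (a Python/NumPy sketch, not a verbatim implementation; the row-vector, Direct3D-style layout of the formulas is kept, so a combined transform applies as object @ mLookAt @ mProject):

```python
import numpy as np

def look_at(pc, po, up):
    """mLookAt of formula (10): world coordinates -> camera-space coordinates
    (row-vector layout; the translation row uses the camera position pc)."""
    zaxis = (pc - po) / np.linalg.norm(pc - po)
    xaxis = np.cross(up, zaxis)
    xaxis /= np.linalg.norm(xaxis)
    yaxis = np.cross(zaxis, xaxis)
    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2] = xaxis, yaxis, zaxis
    m[3, :3] = -np.array([xaxis @ pc, yaxis @ pc, zaxis @ pc])
    return m

def perspective(fov, asp, zn, zf):
    """mProject of formula (11)."""
    y_scale = 1.0 / np.tan(fov / 2.0)   # cot(fov/2)
    m = np.zeros((4, 4))
    m[0, 0] = y_scale / asp             # xScale
    m[1, 1] = y_scale                   # yScale
    m[2, 2] = zf / (zf - zn)
    m[2, 3] = 1.0
    m[3, 2] = -zn * zf / (zf - zn)
    return m
```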
Therefore, the mapping from a pixel at reference view $v_i$ to the corresponding pixel at the virtual view is:
$$\vec{\mathrm{pixel}} = \mathrm{mProject}\cdot\mathrm{mLookAt}\cdot\left(\mathrm{mProject}_{v_i}\cdot\mathrm{mLookAt}_{v_i}\right)^{-1}\cdot\vec{\mathrm{pixel}}_{v_i} \tag{12}$$
where $\vec{\mathrm{pixel}}$ is the pixel coordinate and depth value at the virtual view, $\vec{\mathrm{pixel}}_{v_i}$ is the pixel coordinate and depth value at reference point $v_i$, $\mathrm{mLookAt}$ and $\mathrm{mProject}$ are the view transformation matrix and perspective projection matrix from world coordinates to the virtual view's camera coordinates, and $\mathrm{mLookAt}_{v_i}$ and $\mathrm{mProject}_{v_i}$ are the corresponding matrices from world coordinates to the camera coordinates of reference point $v_i$.
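A sketch of applying formula (12) to one pixel (Python/NumPy, building on the helpers above; it follows the patent's homogeneous layout (i, j, 1, depth) from formula (7), and the helper and variable names are assumptions):

```python
import numpy as np

def warp_pixel(i, j, depth, M_ref_inv, M_virt):
    """Map one reference-view pixel into the virtual view per formula (12):
    back-project to world coordinates with the inverse transform of formula
    (7), then project with the virtual view's forward transform."""
    world = np.array([i, j, 1.0, depth]) @ M_ref_inv   # pixel -> world
    return world @ M_virt                              # world -> virtual-view pixel

# Usage sketch (look_at/perspective from the previous block; up is an assumed up vector):
# M_ref  = look_at(pc_i, po_i, up) @ perspective(fov_i, asp_i, zn_i, zf_i)
# M_virt = look_at(pc_v, po_v, up) @ perspective(fov_v, asp_v, zn_v, zf_v)
# new_pixel = warp_pixel(i, j, d_i[i, j], np.linalg.inv(M_ref), M_virt)
```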
2.2 Drawing process
The image at the virtual view is stored as a color image. Following formula (12), a pixel of the virtual view may correspond to reference-view pixels in several ways:
Case 1: the pixel of the virtual-view image has no corresponding pixel in any reference view; it is part of a hole;
Case 2: the pixel of the virtual-view image corresponds to exactly one reference-view pixel, which is used directly to fill the virtual-view image;
Case 3: the pixel of the virtual-view image corresponds to pixels of two or three reference views; the virtual-view image is then drawn according to formula (13), in which p is the pixel at the virtual view, p1, p2, p3 are the corresponding pixels at the three reference views, and αi is the weight of reference view i obtained from formula (2); in principle, the pixel of the reference view with the smallest depth value is used to draw the image at the virtual view.
$$p=\sum_{i\in Q}\frac{\alpha_i}{\alpha}\,p_i,\qquad Q=\{\,q\mid |q-p'|<\mathrm{thresh},\ p'=\min(p_1,\dots,p_n)\,\},\qquad \alpha=\sum_{i\in Q}\alpha_i,\qquad n=2,3 \tag{13}$$
All pixels belonging to Case 2 and Case 3 are marked during drawing, and the marks are stored in a gray-scale map. CUDA parallel computation is used during drawing to synthesize many pixels of the virtual view simultaneously, which greatly accelerates drawing. A per-pixel sketch of the selection rule is given below.
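A minimal per-pixel sketch of the three cases and the blending rule of formula (13) (Python/NumPy; the background color is an assumption, and the default thresh = 15 is taken from the example later in the text):

```python
import numpy as np

BACKGROUND = np.array([0, 0, 0], dtype=np.uint8)  # assumed background color

def blend_pixel(candidates, thresh=15.0):
    """candidates: list of (color, depth, alpha_i) tuples that formula (12)
    maps onto this virtual-view pixel. Returns (color, mark); mark feeds the
    gray-scale map recording which pixels were filled from reference views."""
    if not candidates:                    # Case 1: hole, fill with background
        return BACKGROUND, 0
    if len(candidates) == 1:              # Case 2: a single reference pixel
        return candidates[0][0], 255
    # Case 3: keep candidates whose depth is within thresh of the minimum,
    # then blend them with the normalized weights of formula (13).
    d_min = min(d for _, d, _ in candidates)
    Q = [(c, d, a) for c, d, a in candidates if d - d_min < thresh]
    a_sum = sum(a for _, _, a in Q)
    color = sum(a * c.astype(np.float64) for c, _, a in Q) / a_sum
    return color.astype(np.uint8), 255
```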
3. Image optimization module
The color image output by the above drawing process contains noise and holes. The cause of the holes was explained in the computation and drawing module above. The noise arises because depth extraction carries errors and because the mapping obtained from formula (12) is not necessarily between integer pixels, so noise appears during drawing, as shown in Figs. 7 and 8. A simple remedy is to median-filter the whole virtual-view image, which eliminates most noise and holes but, despite its simplicity, blurs and over-smooths the image, as shown in Fig. 10. Instead, the present invention first applies dilation and difference operations to the gray-scale map to extract the image's edge contours, then median-filters along those contours, filling the holes from surrounding pixels while removing noise at the image edges. The image optimization module comprises the following subprocesses:
3.1 Edge pixel extraction
Dilation is a local-maximum operation: the image is convolved with a kernel, the maximum pixel value over the region covered by the kernel is computed, and this maximum is assigned to the pixel at the kernel's anchor point. Both the color image and the gray-scale map generated by the computation and drawing module contain holes; dilating the gray-scale map eliminates them. Taking the difference between the dilated gray-scale map and the original yields the edge map, which stores the edge-contour information; the holes lie within the edge contours, as shown in Fig. 9.
3.2 Median filtering
The present invention smooths hole pixels by median filtering, which replaces the center pixel of the filter template with the median of the pixels in its square neighborhood. Because median filtering can distort the image, the pixels subjected to median replacement are strictly limited to the edge contours; this minimizes the number of filtering operations and preserves detail. Meanwhile, since noise pixels are mostly isolated points in space, most edge noise is covered by background pixels during median filtering, achieving noise elimination. The final filtered result is shown in Fig. 11. A sketch of this optimization follows.
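A sketch of this edge-restricted optimization with OpenCV (Python; the kernel size is an assumption, not a value from the patent):

```python
import cv2
import numpy as np

def optimize(color, mask, kernel_size=5):
    """Fill holes and remove edge noise: dilate the mark map, difference it
    against the original to get the edge contours, then median-filter the
    color image only at contour pixels."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    dilated = cv2.dilate(mask, kernel)      # closes the holes in the mark map
    edges = cv2.absdiff(dilated, mask)      # edge contours (holes lie inside)
    filtered = cv2.medianBlur(color, kernel_size)
    out = color.copy()
    out[edges > 0] = filtered[edges > 0]    # replace only contour pixels
    return out
```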
As with the drawing process, CUDA parallel computation is used to accelerate the image operations and improve the running speed of the drawing method.
2. Implementation process
The drawing of a jade Buddha is taken below as an example to explain the specific implementation of the present invention.
(1) Uniformly select 162 sample points on the outer sphere surrounding the jade Buddha, taking the sphere center as the coordinate origin and r = 430; the coordinate of the i-th sample point is $v_i=(x_i, y_i, z_i)$, satisfying:
$$x_i = r\left(1-\tfrac{2i-1}{n}\right)\cos\!\left(\arcsin\!\left(1-\tfrac{2i-1}{n}\right)n\pi\right),\qquad y_i = r\left(1-\tfrac{2i-1}{n}\right)\sin\!\left(\arcsin\!\left(1-\tfrac{2i-1}{n}\right)n\pi\right),\qquad z_i = r\cos\!\left(\arcsin\!\left(1-\tfrac{2i-1}{n}\right)\right) \tag{14}$$
where n = 162.
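A direct transcription of formula (14) as a sketch (Python/NumPy; the patent's garbled typesetting leaves the grouping of the angle term an assumption):

```python
import numpy as np

def sample_points(n=162, r=430.0):
    """Generate the n sample-point coordinates of formula (14)."""
    i = np.arange(1, n + 1)
    t = 1.0 - (2.0 * i - 1.0) / n          # runs from ~1 down to ~-1
    phi = np.arcsin(t) * n * np.pi         # assumed grouping of the angle term
    x = r * t * np.cos(phi)
    y = r * t * np.sin(phi)
    z = r * np.cos(np.arcsin(t))           # = r * sqrt(1 - t^2)
    return np.stack([x, y, z], axis=1)     # shape (n, 3), all on radius r
```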
(2) Triangulate the outer sphere according to the sample-point coordinates, and group the model data according to the triangulation result. The present invention uses a triangle-approximation method to construct an inscribed polyhedron of the sphere whose basic element is the triangle. After division there are 162 sample points and 320 triangular patches on the sphere.
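The stated counts match a twice-subdivided icosahedron (12 → 42 → 162 vertices, 20 → 80 → 320 faces), so an icosphere is one plausible reading of this construction; a sketch (Python/NumPy, not necessarily the patent's own method):

```python
import numpy as np

def icosahedron(r):
    """Vertices and faces of a regular icosahedron scaled to radius r."""
    t = (1 + 5 ** 0.5) / 2
    raw = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
           (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
           (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
    verts = [np.array(p, dtype=float) / np.linalg.norm(p) * r for p in raw]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return verts, faces

def subdivide(verts, faces, r):
    """Split every triangle into four, re-projecting new vertices to radius r."""
    verts, cache, out = list(verts), {}, []
    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in cache:
            m = (verts[a] + verts[b]) / 2.0
            cache[key] = len(verts)
            verts.append(m / np.linalg.norm(m) * r)
        return cache[key]
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, out

verts, faces = icosahedron(430.0)
for _ in range(2):
    verts, faces = subdivide(verts, faces, 430.0)
assert len(verts) == 162 and len(faces) == 320
```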
(3) Set the sample-point parameters and capture the depth and color images. The parameters of the i-th sample point are $p_i=(pc_i, po_i, asp_i, fov_i, zn_i, zf_i)$, where $pc_i$ is the coordinate of the sample point computed from formula (14); the remaining parameters are set as follows:
Table 1
po_i      fov_i   asp_i   zn_i   zf_i
(0,0,0)   47°     1.333   350    550
After the parameters are set, the color image is shot first and then the depth image. Three of the sample points, v1, v2, v3, are selected, where:
$$pc_{v_1} = (-182.8899,\ 365.7798,\ 132.8773)$$
$$pc_{v_2} = (-220.4835,\ 217.4600,\ -298.3256)$$
$$pc_{v_3} = (295.9221,\ 226.0644,\ 215.0000)$$
The collected color images and depth images are shown in Fig. 12.
(4) Calibrate the virtual view and compute the mapping relationships. In this example the user roams the virtual view through the three-dimensional scene interactively with keyboard and mouse; the initial virtual view is (0, 430, 0), and the change between adjacent virtual views is specified interactively. Given the current virtual view, the triangular patches obtained in step (2) are traversed to determine the patch containing the virtual view, and the virtual view is calibrated from the vectors pointing from the reference points toward the sphere center according to formulas (2), (5), and (6). The mapping relationships of formula (12) are then computed from the parameters of the virtual view and of the sample points.
(5) Synthesize the image at the virtual view and apply median filtering.
A. If a pixel of the virtual view has no corresponding reference-point pixel, fill the virtual-view image with a background pixel;
B. If a pixel of the virtual view corresponds to exactly one reference-point pixel, fill the virtual-view image with that pixel;
C. If a pixel of the virtual view corresponds to the pixels of two or three reference points, set thresh = 15 and select one or more reference-point pixels to synthesize the virtual-view image according to formula (13).
All pixels belonging to cases B and C are marked during drawing, and the marks are stored in a gray-scale map. The gray-scale map is dilated and subtracted from the original to obtain the model's contour information, and median filtering is performed along the model contour, yielding the final color image at the virtual view.
The output of this example is the color image at the virtual view under interactive user control; Fig. 13 shows part of the output.
3. Implementation results
The implementation results are described mainly in terms of real-time performance and realism. The main configuration of the test computer is as follows:
Table 2
Operating system   64-bit Windows 7 Ultimate
Processor          Intel Core 2 Quad Q9400 (4 CPUs), 2.66 GHz
Video card         NVIDIA GeForce GTX 470
Memory             4 GB
3.1 Real-time performance
On the test computer the jade Buddha was taken as the test object and modeled with 3DMAX. Owing to the optical characteristics of jade, light entering the material scatters inside it before leaving the object and reaching the eye, a phenomenon known as subsurface scattering. Computing subsurface scattering is very time-consuming: drawing a single view with 3DMAX takes about 40 minutes. With the image-based three-dimensional model drawing method, the model's material need not be considered during modeling and drawing proceeds by filling pixels; the real-time drawing rate on the test computer was about 25 frames per second, fast enough to respond to user input and let the virtual view roam arbitrarily in space, fully meeting the requirement of real-time interaction.
3.2 Realism
In this application 162 sample points were selected, the three sample points closest to the virtual view were chosen as reference points, drawing was performed pixel by pixel, and scene optimization was applied after the preliminary pass to repair holes and remove image noise; the final effect is shown in Fig. 13. Compared with the original input of Fig. 12, Fig. 13 contains no holes or noise pixels, and a lifelike drawing effect is obtained without any complicated subsurface-scattering computation.
The parts of the present invention not described in detail belong to techniques well known to those skilled in the art.

Claims (3)

1. An image-based drawing method for complicated three-dimensional models, characterized by comprising the following processes:
(1) Calibration of the virtual view: the sphere surrounding the model is triangulated according to the camera-position coordinates of the sample points, the triangular patch containing the virtual view is determined, and the views corresponding to the three vertices of this patch are taken as reference views; the virtual view can then be expressed as a linear combination of the reference views;
(2) Computation and drawing: from the positions and camera parameters of the virtual view and the reference views, the mapping relationships between the pixel coordinates of the three reference-view images and the pixel coordinates of the virtual view are computed; according to these mappings, each pixel of a reference-view color image is mapped into the virtual-view image and its coordinate and depth value there are computed; where pixels of several reference views map to the same position, the pixel value with the smaller depth is kept; every virtual-view pixel filled by a reference-view pixel is marked, constructing a gray-scale map that records how the reference views map onto the virtual view;
(3) Image optimization: for holes in the virtual-view image, i.e. pixels onto which the computation of (2) maps no reference pixel, edge-contour information is extracted from the gray-scale map recording the reference-to-virtual-view mapping, and the generated color image is median-filtered along the edge contours, filling the holes with the values of neighboring pixels; the median filtering simultaneously removes noise pixels.
2. The image-based drawing method for complicated three-dimensional models according to claim 1, characterized in that: step (1) calibrates the virtual view and determines the reference views used for drawing it, and step (2) establishes the pixel mappings from the reference views to the virtual view, thereby realizing the drawing of the complex model at the virtual view.
3. The image-based drawing method for complicated three-dimensional models according to claim 1 or 2, characterized in that: in steps (2) and (3), CUDA parallel computation is used to accelerate the drawing and optimization at the virtual view, meeting the requirement of real-time interaction.
CN201310497271.8A 2013-10-21 2013-10-21 Complicated three-dimensional model drawing method based on images Active CN103530907B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310497271.8A CN103530907B (en) 2013-10-21 2013-10-21 Complicated three-dimensional model drawing method based on images

Publications (2)

Publication Number Publication Date
CN103530907A true CN103530907A (en) 2014-01-22
CN103530907B CN103530907B (en) 2017-02-01

Family

ID=49932885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310497271.8A Active CN103530907B (en) 2013-10-21 2013-10-21 Complicated three-dimensional model drawing method based on images

Country Status (1)

Country Link
CN (1) CN103530907B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6697062B1 (en) * 1999-08-06 2004-02-24 Microsoft Corporation Reflection space image based rendering
US20090102840A1 (en) * 2004-07-15 2009-04-23 You Fu Li System and method for 3d measurement and surface reconstruction
US20110115886A1 (en) * 2009-11-18 2011-05-19 The Board Of Trustees Of The University Of Illinois System for executing 3d propagation for depth image-based rendering
CN102945565A (en) * 2012-10-18 2013-02-27 深圳大学 Three-dimensional photorealistic reconstruction method and system for objects and electronic device
CN103116897A (en) * 2013-01-22 2013-05-22 北京航空航天大学 Three-dimensional dynamic data compression and smoothing method based on image space

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Knorr, S., et al.: "An image-based rendering (IBR) approach for realistic stereo view synthesis of TV broadcast based on structure from motion", IEEE International Conference on Image Processing *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104331918A (en) * 2014-10-21 2015-02-04 无锡梵天信息技术股份有限公司 Occlusion culling and acceleration method for drawing outdoor ground surface in real time based on depth map
CN104331918B (en) * 2014-10-21 2017-09-29 无锡梵天信息技术股份有限公司 Occlusion culling and acceleration method for real-time outdoor terrain rendering based on depth maps
CN105509671A (en) * 2015-12-01 2016-04-20 中南大学 Method for calibrating central point of robot tool through employing plane calibration plate
CN107169924A (en) * 2017-06-14 2017-09-15 歌尔科技有限公司 The method for building up and system of three-dimensional panoramic image
CN107169924B (en) * 2017-06-14 2020-10-09 歌尔科技有限公司 Method and system for establishing three-dimensional panoramic image
CN107464278B (en) * 2017-09-01 2020-01-24 叠境数字科技(上海)有限公司 Full-view sphere light field rendering method
US10909752B2 (en) 2017-09-01 2021-02-02 Plex-Vr Digital Technology (Shanghai) Co., Ltd. All-around spherical light field rendering method
KR20200024946A (en) * 2017-09-01 2020-03-09 플렉스-브이알 디지털 테크놀로지 (상하이) 씨오., 엘티디. How to render a spherical light field in all directions
KR102143319B1 (en) * 2017-09-01 2020-08-10 플렉스-브이알 디지털 테크놀로지 (상하이) 씨오., 엘티디. How to render an omnidirectional spherical light field
CN107464278A (en) * 2017-09-01 2017-12-12 叠境数字科技(上海)有限公司 The spheroid light field rendering intent of full line of vision
CN108520342A (en) * 2018-03-23 2018-09-11 中建三局第建设工程有限责任公司 Platform of internet of things management method based on BIM and its system
CN108520342B (en) * 2018-03-23 2021-12-17 中建三局第一建设工程有限责任公司 BIM-based Internet of things platform management method and system
CN111402404A (en) * 2020-03-16 2020-07-10 贝壳技术有限公司 Panorama complementing method and device, computer readable storage medium and electronic equipment
CN111651055A (en) * 2020-06-09 2020-09-11 浙江商汤科技开发有限公司 City virtual sand table display method and device, computer equipment and storage medium
CN112199756A (en) * 2020-10-30 2021-01-08 久瓴(江苏)数字智能科技有限公司 Method and device for automatically determining distance between straight lines
CN114543816A (en) * 2022-04-25 2022-05-27 深圳市赛特标识牌设计制作有限公司 Guiding method, device and system based on Internet of things
CN114543816B (en) * 2022-04-25 2022-07-12 深圳市赛特标识牌设计制作有限公司 Guiding method, device and system based on Internet of things
CN115272523A (en) * 2022-09-22 2022-11-01 中科三清科技有限公司 Method and device for drawing air quality distribution map, electronic equipment and storage medium
CN115272523B (en) * 2022-09-22 2022-12-09 中科三清科技有限公司 Method and device for drawing air quality distribution map, electronic equipment and storage medium
CN116502371A (en) * 2023-06-25 2023-07-28 憨小犀(泉州)数据处理有限公司 Ship-shaped diamond cutting model generation method
CN116502371B (en) * 2023-06-25 2023-09-08 厦门蒙友互联软件有限公司 Ship-shaped diamond cutting model generation method

Also Published As

Publication number Publication date
CN103530907B (en) 2017-02-01

Similar Documents

Publication Publication Date Title
CN103530907A (en) Complicated three-dimensional model drawing method based on images
CN103021017B (en) Three-dimensional scene rebuilding method based on GPU acceleration
CN104616345B (en) Octree forest compression based three-dimensional voxel access method
CN102915559B (en) Real-time transparent object GPU (graphic processing unit) parallel generating method based on three-dimensional point cloud
CN106709947A (en) RGBD camera-based three-dimensional human body rapid modeling system
CN104966317B (en) A kind of three-dimensional method for automatic modeling based on ore body contour line
CN109003325A (en) A kind of method of three-dimensional reconstruction, medium, device and calculate equipment
CN104616286B (en) Quick semi-automatic multi views depth restorative procedure
CN110378349A (en) The mobile terminal Android indoor scene three-dimensional reconstruction and semantic segmentation method
CN102222357B (en) Foot-shaped three-dimensional surface reconstruction method based on image segmentation and grid subdivision
CN102521869B (en) Three-dimensional model surface texture empty filling method guided by geometrical characteristic
CN102306386B (en) Method for quickly constructing third dimension tree model from single tree image
CN101763649B (en) Method for drawing enhanced model contour surface point
CN101916454A (en) Method for reconstructing high-resolution human face based on grid deformation and continuous optimization
CN105006021A (en) Color mapping method and device suitable for rapid point cloud three-dimensional reconstruction
CN105261062B (en) A kind of personage&#39;s segmentation modeling method
CN108986221A (en) A kind of three-dimensional face grid texture method lack of standardization approached based on template face
CN103646421A (en) Tree lightweight 3D reconstruction method based on enhanced PyrLK optical flow method
Liu et al. A complete statistical inverse ray tracing approach to multi-view stereo
CN102831634B (en) Efficient accurate general soft shadow generation method
Jin et al. 3-d reconstruction of shaded objects from multiple images under unknown illumination
CN104318605A (en) Parallel lamination rendering method of vector solid line and three-dimensional terrain
Wang et al. Voge: a differentiable volume renderer using gaussian ellipsoids for analysis-by-synthesis
CN103617593B (en) The implementation method of three-dimensional fluid physic animation engine and device
Gu et al. Ue4-nerf: Neural radiance field for real-time rendering of large-scale scene

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 518133 23rd floor, Yishang science and technology creative building, Jiaan South Road, Haiwang community Central District, Xin'an street, Bao'an District, Shenzhen City, Guangdong Province

Patentee after: SHENZHEN ESUN DISPLAY Co.,Ltd.

Patentee after: BEIHANG University

Address before: No. 4001, Fuqiang Road, Futian District, Shenzhen, Guangdong 518048 (B301, Shenzhen cultural and Creative Park)

Patentee before: SHENZHEN ESUN DISPLAY Co.,Ltd.

Patentee before: BEIHANG University

CP02 Change in the address of a patent holder