CN108682041A - Method for multi-light-source rendering based on matrix row-column sampling and deep learning - Google Patents

Method for multi-light-source rendering based on matrix row-column sampling and deep learning

Info

Publication number
CN108682041A
CN108682041A (application CN201810320587.2A)
Authority
CN
China
Prior art keywords
matrix
row
illumination
image
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810320587.2A
Other languages
Chinese (zh)
Other versions
CN108682041B (en)
Inventor
Zhang Genyuan (张根源)
Ying Yuebo (应跃波)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Radio and Television Group of Zhejiang
Zhejiang University of Media and Communications
Original Assignee
Radio and Television Group of Zhejiang
Zhejiang University of Media and Communications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Radio and Television Group of Zhejiang, Zhejiang University of Media and Communications filed Critical Radio and Television Group of Zhejiang
Priority to CN201810320587.2A priority Critical patent/CN108682041B/en
Publication of CN108682041A publication Critical patent/CN108682041A/en
Application granted granted Critical
Publication of CN108682041B publication Critical patent/CN108682041B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a method for multi-light-source rendering based on matrix row-column sampling and deep learning, comprising: Step 1, building an illumination matrix from the three-dimensional scene; Step 2, randomly selecting several rows from the illumination matrix to obtain a primary random reduced matrix; Step 3, randomly selecting several rows from the primary random reduced matrix to obtain a secondary random reduced matrix; Step 4, for different viewpoints, rendering a primary reduced-matrix image and a secondary reduced-matrix image respectively; Step 5, training a deep neural network model on pairs of primary and secondary reduced-matrix images; Step 6, during real-time rendering of a high-realism image, inputting the rendered secondary reduced-matrix image into the trained deep neural network model, whose output is the complete high-realism image. With the trained deep neural network model, the multi-light-source rendering method provided by the invention can render quickly and accurately.

Description

Method for multi-light-source rendering based on matrix row-column sampling and deep learning
Technical field
The present invention relates to the field of computer image processing, and in particular to a method for multi-light-source rendering based on matrix row-column sampling and deep learning.
Background art
Rendering complex scenes with indirect illumination, high-dynamic-range environment lighting, and multiple direct light sources is a very challenging task. Studies have shown that such problems can be solved by conversion into a many-light problem: all light sources are converted into a set of point lights, so that the indirect-illumination rendering problem becomes a multi-point-source problem. Rendering directly with thousands of point lights is clearly very difficult. The Lightcuts framework provides a scalable solution to the multi-point-source problem; using a visibility-culling algorithm, a CPU-based ray tracer can complete the computation in a few minutes.
In practical applications there is a relative positional relationship between the light sources and the illuminated objects, and rendering must follow the positional relationship of lights and objects. In interactive scenarios, for example in film production or structural design, the renderer must respond in real time to changes in the relative positions of lights and objects and render accordingly, which entails an enormous amount of computation. Existing methods solve this by preprocessing: rendering is completed in advance for the various positional relationships, and the results are read directly during the interactive stage, so the total computation is amortized. This approach has two significant drawbacks: 1. it occupies a large amount of memory to store the preprocessed data; 2. in a scene using this method, only the lights or only the objects can move. This greatly limits the range of application of the method.
As dedicated image-processing hardware, the GPU has built-in acceleration capabilities, including shadow-mapping algorithms and shaders, and provides accelerated, parallel computation for graphics rendering. Performing the rendering on the GPU effectively reduces CPU overhead while improving rendering efficiency and quality. The above algorithms, however, remain relatively time-consuming. The present invention therefore uses a deep learning network to learn the completion of partially rendered images, thereby achieving full-image rendering.
Summary of the invention
The present invention provides a method for multi-light-source rendering based on matrix row-column sampling and deep learning, which converts the multi-light-source rendering problem of a complex three-dimensional scene into the problem of training a deep neural network model; with the trained deep neural network model, multi-light-source rendering can be performed quickly and accurately.
A method for multi-light-source rendering based on matrix row-column sampling and deep learning, comprising:
Step 1, building an illumination matrix from the three-dimensional scene; in the illumination matrix, each column represents all sample points illuminated by one light source, and each row represents the illumination of all light sources at one sample point;
Step 2, randomly selecting several rows from the illumination matrix to obtain a primary random reduced matrix;
Step 3, randomly selecting several rows from the primary random reduced matrix to obtain a secondary random reduced matrix;
Step 4, for different viewpoints, rendering a primary reduced-matrix image and a secondary reduced-matrix image respectively;
Step 5, training a deep neural network model on pairs of primary and secondary reduced-matrix images;
Step 6, during real-time rendering of a high-realism image, first rendering the secondary reduced-matrix image, then inputting it into the trained deep neural network model; the output is the complete high-realism image.
When the primary random reduced matrix has enough rows, its image can be regarded as a complete rendering of the whole three-dimensional scene. Once the deep neural network model is trained, feeding a secondary reduced-matrix image into it yields the primary reduced-matrix image, i.e. the complete rendering of the whole three-dimensional scene, which greatly improves rendering efficiency.
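A minimal sketch of the two sampling levels in steps 2 and 3, in plain Python (the matrix contents, sizes, and the helper `sample_rows` are illustrative assumptions, not the patent's implementation):

```python
import random

def sample_rows(matrix, count, rng):
    """Randomly keep `count` rows of `matrix` (a list of rows)."""
    idx = sorted(rng.sample(range(len(matrix)), count))
    return [matrix[i] for i in idx]

rng = random.Random(0)
# Toy illumination matrix A: m = 8 sample points (rows) x n = 4 lights (columns).
A = [[(i + 1) * (j + 1) for j in range(4)] for i in range(8)]

R  = sample_rows(A, 4, rng)   # primary random reduced matrix (step 2)
R2 = sample_rows(R, 2, rng)   # secondary random reduced matrix (step 3)
```

Every row of R2 is a row of R, and every row of R is a row of A, so both reduced matrices remain sampled versions of the full illumination matrix.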
Preferably, in step 4, the primary reduced-matrix image is rendered as follows:
Step 4-a-1, using sampling-based clustering, divide the primary random reduced matrix into several clusters; in each cluster, choose one complete column as the representative, render it per RGB color channel, and obtain the complete illumination samples of that column;
Step 4-a-2, scale the illumination samples of the representative column to obtain each cluster's total illumination intensity on the RGB channels;
Step 4-a-3, merge the illumination intensities of the clusters to obtain the multi-light-source rendering result.
Preferably, the sampling-based clustering in step 4-a-1 comprises the following steps:
Step 4-a-1-1, randomly select √c columns of the primary random reduced matrix as cluster centers, and assign each column to the cluster of its nearest center, c being the number of columns of the primary random reduced matrix;
Step 4-a-1-2, for a given column of the primary random reduced matrix, preferentially select columns far from that column at random; each time a column is selected, increase its weight by a fixed proportion, until √c columns have been selected;
Step 4-a-1-3, taking the columns with larger weights as cluster centers, divide the primary random reduced matrix into several clusters according to distance from the cluster centers.
Preferably, in step 4, the secondary reduced-matrix image is rendered as follows:
Step 4-b-1, cluster the primary random reduced matrix according to a clustering factor, the clustering factor being the number of rows per cluster;
Step 4-b-2, randomly select some of the divided clusters; in each selected cluster, choose one complete column as the representative, render it per RGB color channel, and obtain the complete illumination samples of that column;
Step 4-b-3, scale the illumination samples of the representative column to obtain each cluster's total illumination intensity on the RGB channels;
Step 4-b-4, merge the illumination intensities of the clusters to obtain the multi-light-source rendering result.
Preferably, in step 5, the number of image pairs is no less than 10,000.
Each viewpoint corresponds to one image pair consisting of a primary reduced-matrix image and a secondary reduced-matrix image; the number of image pairs is no less than 10,000, i.e. the number of viewpoints is no less than 10,000.
The method provided by the invention for multi-light-source rendering based on matrix row-column sampling and deep learning converts the multi-light-source rendering problem of a complex three-dimensional scene into the problem of training a deep neural network model; with the trained deep neural network model, multi-light-source rendering can be performed quickly and accurately.
Description of the drawings
Fig. 1 is a schematic diagram of the deep neural network model in the present invention.
Specific embodiments
The method of the present invention for multi-light-source rendering based on illumination-matrix row-column sampling and deep learning is described in detail below with reference to the accompanying drawings.
Step 1: build the multi-light-source illumination matrix from the scene; in the illumination matrix, each column represents all sample points illuminated by one light source, and each row represents the illumination of all light sources at one sample point.
For a multi-light-source scene with m sample points and n light sources, computing the sum of the contributions of all light sources at every sample point yields the complete scene-rendering result. The problem is thus converted into: an illumination matrix A of size m × n, in which an arbitrary element A_ij denotes the contribution of light source j at sample point i, each element of A being an RGB value; accumulating all the columns of A gives the contribution of every light source at all the sample points.
Computing the elements of the complete illumination matrix in this way has complexity O(mn). If sample point i is invisible to light source j, then A_ij is 0; in practice the illumination matrix contains a large number of zero elements, i.e. the illumination matrix A is low-rank. Randomly selecting r rows from the illumination matrix forms a reduced illumination matrix; when r is large enough, it contains enough information about the complete illumination matrix, and the reduced illumination matrix can be regarded as a sampled version of the complete matrix. Rendering from the reduced illumination matrix then reproduces the rendering result of the complete illumination matrix.
Denoting the j-th column of the illumination matrix A by a_j, the rendering result Σ_A of the global multi-light problem can be expressed as:

Σ_A = Σ_{j=1}^{n} a_j
Step 2: randomly select several rows from the illumination matrix to form the reduced illumination matrix, i.e. the primary random reduced illumination matrix.
Randomly selecting r rows from the illumination matrix A forms the r × n illumination matrix R. Let ρ_j denote the j-th column of R. The elements of R are converted from RGB values to scalars by taking the 2-norm of each RGB triple; ρ_j is the sampled version of the complete column a_j, called the primary random reduced column, and R is a down-scaled version of the complete illumination matrix A.
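The scalarization of R by RGB 2-norms and the reduced-column norms ||ρ_j|| described above can be sketched as follows (a hypothetical illustration; the data layout and variable names are assumptions):

```python
import math

# R: r x n reduced matrix whose elements are RGB triples (assumed layout).
R = [
    [(1.0, 0.0, 0.0), (0.5, 0.5, 0.0)],
    [(0.0, 2.0, 0.0), (0.5, 0.5, 1.0)],
]

# Scalarize each element by the 2-norm of its RGB triple.
S = [[math.sqrt(r * r + g * g + b * b) for (r, g, b) in row] for row in R]

# rho_norms[j] = ||rho_j||, the norm of reduced column j, used later as
# the illumination intensity of light j over the image.
rho_norms = [math.sqrt(sum(S[i][j] ** 2 for i in range(len(S))))
             for j in range(len(S[0]))]
```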
Dividing the illumination matrix R into multiple parts (i.e. multiple clusters) that are processed separately reduces the computational complexity to O(m + n). Processing each cluster separately and combining the results approximates the complete illumination matrix R; since the clustering method strongly affects the error of the final result, the clustering must be determined according to an error estimate so that the final error is minimized.
Step 3: for different viewpoints, render the primary reduced-matrix image and the secondary reduced-matrix image respectively; the implementation details are as follows:
3.1 Rendering the primary reduced-matrix image
Randomly choose 10,000 viewpoints in the three-dimensional scene to be rendered and number each viewpoint's image V_1, …, V_10000. For each viewpoint's image, the rendering procedure is as follows:
For the illumination matrix A of size m × n, the n columns are divided into l clusters C_1, C_2, …, C_l; the reduced column norm ||ρ_j|| serves as the illumination intensity of light source j over the entire image within its cluster C_k.
Define s_k = Σ_{j∈C_k} ||ρ_j|| as the measured total illumination intensity of cluster C_k, and define the illumination-intensity estimator X_k of cluster C_k by choosing one column of the cluster at random and scaling it:

X_k = (s_k / ||ρ_j||) · a_j,  chosen with probability p_j,  j ∈ C_k

where the percentage of column j within cluster C_k is p_j = ||ρ_j|| / s_k, and X_A = Σ_k X_k is the estimator of the illumination intensity Σ_A of the illumination matrix A.
In each cluster the representative is chosen according to these percentages of the reduced-matrix column norms; generally a reduction percentage of 50% or more is chosen. When all ||ρ_j|| > 0, the following equation holds:

E[X_k] = Σ_{j∈C_k} p_j · (s_k / ||ρ_j||) · a_j = Σ_{j∈C_k} a_j
It can thus be seen that E[X_A] is in fact an unbiased estimator of Σ_A. Given E[X_A] = Σ_A, the error of the estimate of the complete illumination matrix A is assessed by E[||X_A − Σ_A||²], and the most effective clustering method is the one that minimizes the value of E[||X_A − Σ_A||²].
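The unbiasedness claim E[X_A] = Σ_A can be checked numerically by enumerating the expectation of the per-cluster estimator (a toy sketch with made-up 2-D columns; here the reduced norms w_j are simply taken equal to the full-column norms, which is an assumption for illustration):

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

# A toy cluster of full columns a_j; w_j stands in for ||rho_j||.
cluster = [(3.0, 0.0), (1.0, 1.0), (0.0, 2.0)]
w = [norm(a) for a in cluster]
s_k = sum(w)                      # cluster's total measured intensity

# E[X_k] = sum_j p_j * (s_k / w_j) * a_j  with  p_j = w_j / s_k
expectation = [0.0, 0.0]
for a_j, w_j in zip(cluster, w):
    p_j = w_j / s_k
    scale = s_k / w_j
    for d in range(2):
        expectation[d] += p_j * scale * a_j[d]

exact = [sum(a[d] for a in cluster) for d in range(2)]  # sum of cluster columns
```

The probability and the scale factor cancel term by term, so the expectation equals the exact cluster sum regardless of the weights.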
The illumination matrix R is the r × n matrix formed by randomly selecting r rows from A. Since the per-cluster estimators X_k are independent, the expected error of X_R is the sum of the per-cluster expected errors. Writing a cluster's random estimator as X and its expectation as E[X], the error obeys the variance identity:

E[||X − E[X]||²] = E[||X||²] − ||E[X]||²

For one cluster C_k, with X = (s_k / ||ρ_j||) · ρ_j chosen with probability ||ρ_j|| / s_k, the two terms are:

E[||X||²] = Σ_{j∈C_k} (||ρ_j|| / s_k) · (s_k / ||ρ_j||)² · ||ρ_j||² = s_k²,  ||E[X]||² = ||Σ_{j∈C_k} ρ_j||²

Deriving from the above, the clustering error measure E[||X_R − Σ_R||²] becomes:

E[||X_R − Σ_R||²] = Σ_k ( s_k² − ||Σ_{j∈C_k} ρ_j||² ) = Σ_k 2 Σ_{i<j, i,j∈C_k} ( ||ρ_i||·||ρ_j|| − ρ_i·ρ_j )
The distance between any two vectors x and y is defined as:

d(x, y)² = ||x||·||y|| − x·y = ||x||·||y||·(1 − cos(x, y))

where cos(x, y) = x·y / (||x||·||y||) is the cosine of the angle between x and y. d measures the difference of two light sources within the same cluster: the contribution of the two lights' illumination intensities to the image can be assessed through the angle between them.
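The distance just defined can be written down directly; it vanishes for parallel light columns and grows with the angle between them (an illustrative sketch):

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def dist2(x, y):
    """d(x, y)^2 = ||x||*||y|| - x.y = ||x||*||y||*(1 - cos(angle))."""
    dot = sum(a * b for a, b in zip(x, y))
    return norm(x) * norm(y) - dot

parallel = dist2((1.0, 0.0), (5.0, 0.0))   # same direction: distance 0
right    = dist2((1.0, 0.0), (0.0, 2.0))   # orthogonal: ||x||*||y|| = 2
```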
By the above derivation, the error introduced by the clustering can be expressed as:

E[||X_R − Σ_R||²] = Σ_k 2 Σ_{i<j, i,j∈C_k} d(ρ_i, ρ_j)²

where the column norms are accumulated over pixels x with their corresponding weights w, and k denotes the number of clusters. Following the expression for E[||X_R − Σ_R||²] above, the cost of a cluster is defined as:

cost(C_k) = Σ_{i<j, i,j∈C_k} d(ρ_i, ρ_j)²
According to the above error-evaluation formula, the primary random reduced matrix is clustered in two steps:
Step 3-1: divide the illumination matrix into √c parts using sampling-based clustering, c being the number of columns of the illumination matrix R. The specific method is: randomly select √c points (a "point" here being a column of the illumination matrix R) as cluster centers, and, by the formula for the distance d, assign each point to the cluster represented by its nearest cluster center.
Define α_i as the sum of all costs incident on point i (i.e. the sum of light intensities):

α_i = Σ_j d(ρ_i, ρ_j)²
Taking p_i = α_i / Σ_j α_j as point i's share of the total, points are selected at random with a preference for points far from point i; when point i is selected, its weight is set to 1/p_i, and each further time it is selected its weight is increased by 1/p_i. This process is iterated until √c points have been selected. The cluster centers are then determined from the point weights (the points with larger weights become centers), and all points are partitioned into clusters by their distance d to the cluster centers.
Step 3-2: complete the clustering with a top-down splitting method. Starting from the √c clusters obtained above, the clustered illumination matrix is further decomposed as follows: project the points onto a random line (in the r-dimensional space), then find the best point at which to split the line into two segments. In this way each part of the clustered illumination matrix R is divided into two, doubling the number of clusters.
Through the above steps, the illumination matrix R is divided into multiple parts while minimizing the computed final error.
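One possible reading of the seeding-and-assignment stage, as a sketch: weighted random seeding of roughly √c centers that prefers far-apart columns, followed by nearest-center assignment under d. The weighting scheme and helper names below are assumptions for illustration, not the patent's exact procedure:

```python
import math
import random

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def dist2(x, y):
    """Cluster distance d^2 = ||x||*||y|| - x.y."""
    return norm(x) * norm(y) - sum(a * b for a, b in zip(x, y))

def sample_clusters(cols, rng):
    """Seed ~sqrt(c) centers preferring far-apart columns, then assign."""
    c = len(cols)
    k = max(1, math.isqrt(c))
    centers = [rng.randrange(c)]
    while len(centers) < k:
        # Prefer columns far (under d) from the already-chosen centers.
        weights = [min(dist2(cols[i], cols[j]) for j in centers) + 1e-12
                   for i in range(c)]
        cand = rng.choices(range(c), weights=weights)[0]
        if cand not in centers:
            centers.append(cand)
    clusters = {j: [] for j in centers}
    for i in range(c):
        nearest = min(centers, key=lambda j: dist2(cols[i], cols[j]))
        clusters[nearest].append(i)
    return clusters

rng = random.Random(1)
cols = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.1, 0.9)]
clusters = sample_clusters(cols, rng)
```

With four columns this seeds √4 = 2 centers and partitions all four columns between them.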
Dividing the columns of the primary random reduced matrix into clusters yields k clusters C_1, C_2, …, C_k. In each cluster, one complete column is chosen as the representative and rendered per RGB color channel using a shader on the GPU, giving the complete illumination sample values of that column.
Let ã_i denote the fully shaded column corresponding to ρ_i. Scaling the cluster representative according to the formula X_k = (s_k / ||ρ_i||) · ã_i yields each cluster's total illumination intensity on the RGB channels.
Applying this procedure to every cluster yields the RGB illumination intensities of each cluster C_1, C_2, …, C_k.
Merging the clusters gives the sum of the illumination intensities of all columns of the primary random reduced matrix R, which completes the rendering: combining the k shaded clusters obtained in the previous step yields the sum of all columns of the primary illumination matrix R, i.e. the multi-light-source rendering result of the original scene.
3.2 Rendering the secondary reduced illumination matrix
Analogously to the primary reduced illumination matrix, 10,000 viewpoints are randomly chosen in the three-dimensional scene for secondary reduced-matrix rendering, and each viewpoint's image is numbered V′_1, …, V′_10000. For each viewpoint's image, the specific rendering method is as follows:
The primary random reduced matrix R is first clustered by rows; the division can be specified by a clustering factor, e.g. a factor of 2 clusters every two rows and a factor of 5 every five rows. Once clustering is complete, some clusters are randomly selected from the divided clusters for rendering; the image rendered this way has missing regions, because only randomly selected clusters are used and some image pixels are never computed. After the clusters have been randomly selected, rendering proceeds as for the primary reduced illumination matrix.
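The clustering factor and the random selection of row clusters (which leaves the pixels of unselected rows unrendered) can be sketched as follows (names, factor, and selection ratio are illustrative assumptions):

```python
import random

def cluster_rows(num_rows, factor):
    """Group consecutive row indices into clusters of `factor` rows."""
    return [list(range(i, min(i + factor, num_rows)))
            for i in range(0, num_rows, factor)]

def select_clusters(clusters, keep_ratio, rng):
    """Randomly keep a fraction of the clusters; the rest stay unrendered."""
    keep = max(1, int(len(clusters) * keep_ratio))
    return rng.sample(clusters, keep)

rng = random.Random(0)
clusters = cluster_rows(10, 2)            # factor 2: every two rows form a cluster
chosen = select_clusters(clusters, 0.5, rng)
rendered_rows = sorted(r for c in chosen for r in c)  # pixels actually computed
```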
Step 4: train a deep neural network model on the image pairs rendered from the primary random reduced matrix and the secondary reduced illumination matrix.
The deep neural network model here is shown in Fig. 1. In Fig. 1, X represents the image generated from the primary reduced illumination matrix and Y the image generated from the secondary reduced illumination matrix; G and F are deep generative network models, each representing a conversion function; D_X and D_Y are the discriminator deep neural network models, D_X learning the image features of the primary reduced illumination matrix and D_Y learning those of the secondary reduced illumination matrix.
The 10,000 image pairs of primary and secondary reduced illumination matrices are input for training, and training minimizes the loss function. The loss function here is the difference between the input primary reduced-matrix image and its reconstruction: the image rendered from the primary reduced illumination matrix is converted by the deep neural network model into the image rendered from the secondary reduced illumination matrix, and that image is then converted back into the image rendered from the primary reduced illumination matrix; the loss is the difference between the converted-back image and the input primary reduced-matrix image. When this loss function reaches its minimum, the whole neural network is considered trained.
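The training objective described above resembles a cycle-consistency loss between the two image domains. The following toy sketch uses stand-in converters G and F in place of the deep generator networks of Fig. 1, purely to show the structure of the loss:

```python
def cycle_loss(xs, G, F):
    """Mean absolute error between each primary image x and F(G(x)),
    i.e. x mapped to the secondary domain and back."""
    total = 0.0
    for x in xs:
        x_back = [F(G(v)) for v in x]          # forward then backward conversion
        total += sum(abs(a - b) for a, b in zip(x, x_back)) / len(x)
    return total / len(xs)

# Stand-in converters: G drops every value to half (a crude 'reduction'),
# F tries to restore it. A perfect inverse pair gives zero loss.
G = lambda v: 0.5 * v
F = lambda v: 2.0 * v

images = [[1.0, 2.0, 3.0], [0.5, 0.25, 0.0]]
perfect = cycle_loss(images, G, F)            # F inverts G exactly
bad = cycle_loss(images, G, lambda v: v)      # no inversion: positive loss
```

In the patent's model, minimizing this reconstruction difference (together with the adversarial terms from D_X and D_Y) is what drives F to complete the partially rendered secondary image.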
Step 5: for real-time high-realism rendering with the trained deep neural network model, first render the secondary reduced illumination-matrix image, then input that image into the deep neural network model; the model's output image is the required complete rendered image.
The present invention converts the multi-light-source rendering problem of complex scenes into the training of a deep neural network model; through processing by the deep neural network model, a well-rendered image is obtained, rendering efficiency and real-time performance are improved, and the method can be applied to rendering scenarios with real-time and high-quality requirements.
In light of the disclosure and teachings of the above description, those skilled in the art may also make appropriate changes and modifications to the above embodiments. The invention is therefore not limited to the specific embodiments disclosed and described above; some modifications and changes of the invention shall also fall within the scope of the claims of the present invention. In addition, although some specific terms are used in this specification, they are merely for convenience of description and do not limit the invention in any way.

Claims (5)

1. A method for multi-light-source rendering based on matrix row-column sampling and deep learning, characterized by comprising:
Step 1, building an illumination matrix from the three-dimensional scene, in which each column represents all sample points illuminated by one light source and each row represents the illumination of all light sources at one sample point;
Step 2, randomly selecting several rows from the illumination matrix to obtain a primary random reduced matrix;
Step 3, randomly selecting several rows from the primary random reduced matrix to obtain a secondary random reduced matrix;
Step 4, for different viewpoints, rendering a primary reduced-matrix image and a secondary reduced-matrix image respectively;
Step 5, training a deep neural network model on pairs of primary and secondary reduced-matrix images;
Step 6, during real-time rendering of a high-realism image, first rendering the secondary reduced-matrix image and then inputting it into the trained deep neural network model, whose output is the complete high-realism image.
2. The method for multi-light-source rendering based on matrix row-column sampling and deep learning according to claim 1, characterized in that, in step 4, the primary reduced-matrix image is rendered as follows:
Step 4-a-1, using sampling-based clustering, dividing the primary random reduced matrix into several clusters and, in each cluster, choosing one complete column as the representative, rendering it per RGB color channel, and obtaining the complete illumination samples of that column;
Step 4-a-2, scaling the illumination samples of the representative column to obtain each cluster's total illumination intensity on the RGB channels;
Step 4-a-3, merging the illumination intensities of the clusters to obtain the multi-light-source rendering result.
3. The method for multi-light-source rendering based on matrix row-column sampling and deep learning according to claim 2, characterized in that the sampling-based clustering in step 4-a-1 comprises the following steps:
Step 4-a-1-1, randomly selecting √c columns of the primary random reduced matrix as cluster centers and assigning each column to the cluster of its nearest center, c being the total number of columns of the primary random reduced matrix;
Step 4-a-1-2, for a given column of the primary random reduced matrix, preferentially selecting columns far from that column at random and, each time a column is selected, increasing its weight by a fixed proportion, until √c columns have been selected;
Step 4-a-1-3, taking the columns with larger weights as cluster centers and dividing the primary random reduced matrix into several clusters according to distance from the cluster centers.
4. The method for multi-light-source rendering based on matrix row-column sampling and deep learning according to claim 1, characterized in that, in step 4, the secondary reduced-matrix image is rendered as follows:
Step 4-b-1, clustering the primary random reduced matrix according to a clustering factor, the clustering factor being the number of rows per cluster;
Step 4-b-2, randomly selecting some of the divided clusters and, in each selected cluster, choosing one complete column as the representative, rendering it per RGB color channel, and obtaining the complete illumination samples of that column;
Step 4-b-3, scaling the illumination samples of the representative column to obtain each cluster's total illumination intensity on the RGB channels;
Step 4-b-4, merging the illumination intensities of the clusters to obtain the multi-light-source rendering result.
5. The method for multi-light-source rendering based on matrix row-column sampling and deep learning according to claim 1, characterized in that, in step 5, the number of image pairs is no less than 10,000.
CN201810320587.2A 2018-04-11 2018-04-11 Method for performing multi-light-source rendering based on matrix row and column sampling and deep learning Active CN108682041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810320587.2A CN108682041B (en) 2018-04-11 2018-04-11 Method for performing multi-light-source rendering based on matrix row and column sampling and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810320587.2A CN108682041B (en) 2018-04-11 2018-04-11 Method for performing multi-light-source rendering based on matrix row and column sampling and deep learning

Publications (2)

Publication Number Publication Date
CN108682041A true CN108682041A (en) 2018-10-19
CN108682041B CN108682041B (en) 2021-12-21

Family

ID=63799860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810320587.2A Active CN108682041B (en) 2018-04-11 2018-04-11 Method for performing multi-light-source rendering based on matrix row and column sampling and deep learning

Country Status (1)

Country Link
CN (1) CN108682041B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100045675A1 (en) * 2008-08-20 2010-02-25 Take Two Interactive Software, Inc. Systems and methods for reproduction of shadows from multiple incident light sources
CN104200513A (en) * 2014-08-08 2014-12-10 浙江传媒学院 Matrix row-column sampling based multi-light-source rendering method
CN104200512A (en) * 2014-07-30 2014-12-10 浙江传媒学院 Multiple-light source rendering method based on virtual spherical light sources
CN104732579A (en) * 2015-02-15 2015-06-24 浙江传媒学院 Multi-light-source scene rendering method based on light fragmentation
CN105447906A (en) * 2015-11-12 2016-03-30 浙江大学 Method for calculating lighting parameters and carrying out relighting rendering based on image and model
CN106558092A (en) * 2016-11-16 2017-04-05 北京航空航天大学 A kind of multiple light courcess scene accelerated drafting method based on the multi-direction voxelization of scene

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100045675A1 (en) * 2008-08-20 2010-02-25 Take Two Interactive Software, Inc. Systems and methods for reproduction of shadows from multiple incident light sources
CN104200512A (en) * 2014-07-30 2014-12-10 浙江传媒学院 Multiple-light source rendering method based on virtual spherical light sources
CN104200513A (en) * 2014-08-08 2014-12-10 浙江传媒学院 Matrix row-column sampling based multi-light-source rendering method
CN104732579A (en) * 2015-02-15 2015-06-24 浙江传媒学院 Multi-light-source scene rendering method based on light fragmentation
CN105447906A (en) * 2015-11-12 2016-03-30 浙江大学 Method for calculating lighting parameters and carrying out relighting rendering based on image and model
CN106558092A (en) * 2016-11-16 2017-04-05 北京航空航天大学 A kind of multiple light courcess scene accelerated drafting method based on the multi-direction voxelization of scene

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MILOŠ HAŠAN et al.: "Matrix row-column sampling for the many-light problem", ACM Transactions on Graphics *
YUCHI HUO et al.: "A matrix sampling-and-recovery approach for many-lights rendering", ACM Transactions on Graphics *
TANG YU: "Research and implementation of the illumination model in a 3D simulation drill ***", China Excellent Master's Theses Full-text Database, Information Science and Technology *
JIN SHIHAO: "Efficient rendering of glossy scenes under a many-light rendering framework", China Excellent Master's Theses Full-text Database, Information Science and Technology *

Also Published As

Publication number Publication date
CN108682041B (en) 2021-12-21

Similar Documents

Publication Publication Date Title
CN110097609B (en) Sample domain-based refined embroidery texture migration method
CN109255831A Method for single-view 3D face reconstruction and texture generation based on multi-task learning
CN111833237B (en) Image registration method based on convolutional neural network and local homography transformation
Li et al. Age progression and regression with spatial attention modules
JPWO2006129791A1 (en) Image processing system, three-dimensional shape estimation system, object position / posture estimation system, and image generation system
CN108229276A (en) Neural metwork training and image processing method, device and electronic equipment
US10922852B2 (en) Oil painting stroke simulation using neural network
Cai et al. Multi-objective evolutionary 3D face reconstruction based on improved encoder–decoder network
CN106204701A A rendering method based on light-probe interpolation for dynamically computing indirect specular highlights
CN109410195A (en) A kind of magnetic resonance imaging brain partition method and system
CN111062290A (en) Method and device for constructing Chinese calligraphy style conversion model based on generation confrontation network
CN113744136A (en) Image super-resolution reconstruction method and system based on channel constraint multi-feature fusion
CN116416376A (en) Three-dimensional hair reconstruction method, system, electronic equipment and storage medium
CN104200513A (en) Matrix row-column sampling based multi-light-source rendering method
CN106530383B Face rendering method based on a Hermite-interpolation neural-network regression model
CN117635418B (en) Training method for generating countermeasure network, bidirectional image style conversion method and device
CN114648724A (en) Lightweight efficient target segmentation and counting method based on generation countermeasure network
CN109993701A (en) A method of the depth map super-resolution rebuilding based on pyramid structure
CN109829857A (en) A kind of antidote and device based on the tilted image for generating confrontation network
CN108682041A Method for multi-light-source rendering based on matrix row-column sampling and deep learning
LU500193B1 (en) Low-illumination image enhancement method and system based on multi-expression fusion
DE102022100517A1 (en) USING INTRINSIC SHADOW DENOISE FUNCTIONS IN RAYTRACING APPLICATIONS
Lu et al. Position-dependent importance sampling of light field luminaires
González et al. based ambient occlusion
Abdolhoseini et al. Neuron image synthesizer via Gaussian mixture model and Perlin noise

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant