CN105447906B - Method for relighting rendering based on illumination parameters computed from an image and a model - Google Patents

Method for relighting rendering based on illumination parameters computed from an image and a model Download PDF

Info

Publication number
CN105447906B
CN105447906B CN201510771082.4A CN201510771082A
Authority
CN
China
Prior art keywords
image
illumination
model
msub
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510771082.4A
Other languages
Chinese (zh)
Other versions
CN105447906A (en)
Inventor
耿卫东
黄倩妮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201510771082.4A priority Critical patent/CN105447906B/en
Publication of CN105447906A publication Critical patent/CN105447906A/en
Application granted granted Critical
Publication of CN105447906B publication Critical patent/CN105447906B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/50 - Lighting effects
    • G06T15/506 - Illumination models

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a method for relighting rendering based on illumination parameters computed from an image and a model. The steps of the method are as follows: 1) compute the normal vector of each three-dimensional point using the three-dimensional point cloud model of the scene; 2) compute the light source position and direction vector of the image from the coordinates and normal vectors of the three-dimensional points and the pixel values of the image; 3) assume the illumination model of the image is the Phong model, and compute the parameters of the energy function in the model from the light source information obtained in the preceding steps; 4) compute the shadow and highlight regions of the image; 5) given a target RGB image to be rendered, fuse the scene and illumination information of the original image into the target image and output the final rendering result. In relighting for post-production visual effects in film and television, the method simplifies the relighting process and outputs relighting results quickly, allowing the user to judge preliminarily whether the input image is suitable for later virtual-real compositing. It solves the problem in existing film and television production that a shot inconsistent with post-production requirements cannot be detected in time, which causes reshoots.

Description

Method for relighting rendering based on illumination parameters computed from an image and a model
Technical field
The present invention relates to a relighting rendering method, and more particularly to a method for computing image illumination parameters from a single RGB image and the three-dimensional point cloud model of the image scene and performing relighting rendering.
Background art
Image-based relighting (IBRL) has produced a substantial body of research in graphics and image processing. However, because it is affected by complex illumination variations, such as shadow formation and interference among multiple light sources, most existing research is limited to relighting with known 3D models, or to modeling the 3D geometry first and then relighting according to the ambient light. Existing image-based relighting methods can generally be divided into three major classes by their underlying principle: methods based on bidirectional reflectance functions, methods based on basis functions, and methods based on the plenoptic function.
Compositing techniques in games and film place particular requirements on relighting. Early film shoots usually placed an environment ball in the scene to record the illumination conditions of the surroundings; this is one of the relighting approaches based on reflectance functions. Relighting based on reflectance functions has many limitations (the choice of reflectance model directly affects the relighting result, and not all illumination conditions can be reproduced), but because it requires relatively little processing time it is adopted by most researchers who pursue efficient relighting. The bidirectional reflectance distribution function (BRDF) contains the reflectance information of the illumination in the scene. Accordingly, Oskar et al. [Oskar 2007] first proposed a light source cut technique within precomputed radiance transfer, precomputing the light source cuts together with the bidirectional reflectance function according to visibility and placing the bulk of the computation in a precomputation step, so that relighting rendering of static scenes can be performed at interactive rates. Early relighting was realized with the target light source unknown; when the target light source is controllable, another class of approaches applies. Relighting researchers have proposed various image acquisition devices precisely in order to control the light source position, direction and so on, so that the light source parameters are known and the relighting algorithm is simplified. For example, Xuehong et al. [Xuehong 2014] targeted outdoor scenes and used the same camera to acquire a series of images of the same scene under different illumination conditions, then built a BRDF model to extract the illumination information and perform relighting. Tze et al. [Tze 2009] computed the BRDF from image sequences acquired in the same way and performed relighting, building on the analysis of the BRDF and spherical harmonics under the Phong illumination model proposed by Mahajan et al. [Mahajan 2008].
Relighting based on basis functions mainly targets static scenes. Because the basis images record the appearance of an object or scene under a variety of illumination conditions, a linear combination of a series of acquired basis images can produce a rendering of the object or scene under the target illumination conditions. Since a large number of basis images is needed, relighting based on basis functions is mostly applied where large image collections already exist and where higher demands are placed on the illumination, such as improving face recognition accuracy, post-processing of three-dimensional reconstruction, or video processing. The most recent research is by Amr et al. [Amr 2014], who used a face image training set as the basis images, extracted the spherical harmonics basis functions and illumination parameters of the target face image, compared them with the training data set, and selected the combination images and weights closest to the target image, avoiding the need to reconstruct a 3D face model and to enforce strict illumination conditions, and reducing the unsatisfactory recognition errors that illumination introduces in face recognition.
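The linear-combination idea itself fits in a few lines; the sketch below is purely illustrative (in practice the weights would be derived by matching the target illumination to the basis lights, which is not shown here):

    import numpy as np

    def relight_from_basis(basis_images, weights):
        # basis_images : (B, H, W) stack of images of the same scene under B different lights
        # weights      : (B,) coefficients expressing the target illumination in that basis
        # The relit image is simply the weighted sum of the basis images.
        return np.tensordot(np.asarray(weights, dtype=float),
                            np.asarray(basis_images, dtype=float), axes=1)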
The plenoptic function records the light at any position, in any direction, at any wavelength and at any time, and thus contains seven-dimensional data. The plenoptic function can be used to simulate complex scene effects under multiple or arbitrary light sources. However, also because of the complexity of evaluating the plenoptic function, relatively little research performs relighting with it. Guangwei and Yebin [Guangwei 2009] addressed image sets with multiple views and multiple illuminations, combining multi-view stereo (MVS) with image-based lighting (IBL): a 3D model is reconstructed from the plenoptic images and the object is textured with the illumination maps obtained from the cameras, the new lighting environment map (the light probe gallery of Debevec is used in the article) is decomposed into 31 basis illuminations, and finally the reflected light intensity under the new illumination is synthesized to complete the relighting.
Content of the invention
In view of the above-mentioned deficiencies in the prior art, it is an object of the present invention to provide a method for relighting rendering based on illumination parameters computed from an image and a model.
The object of the present invention is achieved through the following technical solution: a method for relighting rendering based on illumination parameters computed from an image and a model, comprising the following steps:
(1) Read in the RGB image to be processed and the corresponding three-dimensional point cloud model of the image scene; according to the relation, under the light source direction, between the scene point cloud with its normal vectors and the image RGB values, establish a system of linear equations and estimate the direction vector of the light source by least squares;
(2) Assume that the objects in the scene are Lambertian, consider the illumination model, and establish the energy function of the Phong illumination model by combining the light source direction and the normal vectors;
(3) Minimize the energy function with an optimization method, and obtain the ambient light A_i and the diffuse value D_i of each object in the image scene from the partial derivatives of the minimized energy function;
(4) From the ambient light A_i and the diffuse value D_i computed in step 3, compute the shadow and highlight regions of the image and save them as images as an intermediate result;
(5) Compute the illumination direction of the target RGB image, render the three-dimensional model of the original image fused into the target image according to the assumed illumination model, and output the final rendered image.
Further, in step 1, when the normal vectors are unknown, the normal vector $\vec{n}$ is solved as follows: since the three-dimensional point cloud of the whole model contains a large number of points, the point cloud is divided into several parts to prevent memory overflow during computation; for each part, the k neighbouring points are taken (k is 3000 here) and the covariance matrix of the points of that part is computed as $C = \sum_{j=1}^{k}(\vec{x}_j - \vec{p})(\vec{x}_j - \vec{p})^{T}$, where $\vec{x}_j$ is one of the k points of the part and $\vec{p}$ is the point whose normal vector is required; PCA decomposition of the covariance matrix yields the normal vector $\vec{n}$ of the point $\vec{p}$.
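A minimal numerical sketch of this normal estimation step follows (an illustration only, not the patented implementation; the neighbour count k = 3000 follows the text, while the NumPy/SciPy helpers, the function name and the per-point loop are assumptions):

    import numpy as np
    from scipy.spatial import cKDTree  # assumed helper for the nearest-neighbour search

    def estimate_normals(points, k=3000):
        # Estimate a unit normal for every 3D point by PCA on the covariance of its k nearest neighbours.
        points = np.asarray(points, dtype=float)
        tree = cKDTree(points)
        normals = np.empty_like(points)
        k_eff = min(k, len(points))
        for idx, p in enumerate(points):
            _, nbr_idx = tree.query(p, k=k_eff)   # indices of the k neighbouring points of p
            diffs = points[nbr_idx] - p           # differences (x_j - p)
            cov = diffs.T @ diffs                 # covariance matrix C = sum_j (x_j - p)(x_j - p)^T
            _, eigvecs = np.linalg.eigh(cov)      # PCA of C; eigenvalues in ascending order
            normals[idx] = eigvecs[:, 0]          # eigenvector of the smallest eigenvalue = surface normal
        return normals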
Further, in step 1, the light source direction $\vec{l}$ is solved as follows: the input model is obtained by a common purely visual three-dimensional reconstruction method; it records the three-dimensional point cloud of the image scene and, for each three-dimensional point, the coordinates of the corresponding pixel in the image, i.e. every three-dimensional point of the model corresponds to a pixel in the image. Since there may be more than one object in the image scene and the illumination coefficient of each object is different, the image is first segmented before the light source direction is solved, and each distinct object region is separated and denoted i. Assuming there is only one light source in the scene, a simple illumination model $I = \rho\,\vec{l}^{\,T}\vec{n}$ is selected, where I is the RGB value of an image pixel, ρ is the constant illumination coefficient associated with the object, $\vec{l}$ is the light source direction, P is the spatial coordinate of the three-dimensional point, $\vec{n}$ is the normal vector at point P, and T denotes the transpose of a vector. For every pixel of each region in the image, a system of linear equations is established from the above illumination model, and the light source direction $\vec{l}$ is obtained by solving the system.
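As an illustration of how the linear system for one segmented region can be assembled and solved by least squares (a sketch under the single-light-source assumption stated above; the use of scalar intensities, the absorption of the unknown constant ρ into the solved vector, and the NumPy call are choices made here, not details fixed by the patent):

    import numpy as np

    def estimate_light_direction(intensities, normals):
        # Per-pixel model: I = rho * <l, n>. Stacking one equation per pixel of a segmented
        # region gives the overdetermined system N @ (rho * l) = I, solved by least squares.
        N = np.asarray(normals, dtype=float)       # (num_pixels, 3), each row is a normal n^T
        I = np.asarray(intensities, dtype=float)   # (num_pixels,) observed pixel values
        v, *_ = np.linalg.lstsq(N, I, rcond=None)  # v = rho * l
        return v / np.linalg.norm(v)               # unit light source direction l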
Further, the illumination model selected in step 2 is:

$$I = I_a k_a + I_p S_p \left( k_d \langle \vec{l}, \vec{n} \rangle + k_s H_p \right)$$

where I is the RGB value of an image pixel; I_a is the ambient light color and I_p is the brightness of the light source; k_a, k_d and k_s are the ambient, diffuse and specular coefficients respectively; S_p ∈ [0, 1] is the shadow coefficient and H_p is the highlight coefficient; $\vec{l}$ is the light source direction and $\vec{n}$ is the normal vector. The energy function $E(\vec{l})$ is defined as:

$$E(\vec{l}) = \sum_{\vec{r}} \left( I(\vec{r}) - A_i - D_i \langle \vec{l}, \vec{n}(\vec{r}) \rangle \right)^2$$

where $\vec{r}$ is an image pixel, $I(\vec{r})$ is the RGB value at pixel $\vec{r}$, and $\vec{n}(\vec{r})$ is the normal vector of the model three-dimensional point corresponding to pixel $\vec{r}$. Writing ρ for the constant illumination coefficient associated with the object, the ambient light value of each region i in the image is A_i = I_a ρ and the diffuse value is D_i = I_p ρ.
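For a fixed light direction, minimizing $E(\vec{l})$ over A_i and D_i reduces, region by region, to an ordinary linear least-squares fit with design matrix [1, <l, n(r)>]; a brief sketch (the function name and the NumPy solver are illustrative choices):

    import numpy as np

    def fit_ambient_and_diffuse(intensities, normals, light_dir):
        # Minimize E = sum_r (I(r) - A_i - D_i * <l, n(r)>)^2 for one region:
        # setting dE/dA_i = dE/dD_i = 0 gives a small linear system, solved here by lstsq.
        I = np.asarray(intensities, dtype=float)
        ndotl = np.asarray(normals, dtype=float) @ np.asarray(light_dir, dtype=float)
        M = np.column_stack([np.ones_like(ndotl), ndotl])   # columns: [1, <l, n(r)>]
        (A_i, D_i), *_ = np.linalg.lstsq(M, I, rcond=None)
        return A_i, D_i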
Further, the shadow coefficient and the highlight coefficient of the scene in step 4 are computed according to the following scattering model:

Shadow coefficient S_p

Highlight coefficient H_p

where the parameters t_s and t_n are positive thresholds, set by hand, that adjust the smoothness of the image lighting; A_i and D_i are the ambient light and diffuse values of region i in the image, and $I(\vec{r})$ denotes the RGB value of pixel $\vec{r}$ in the image.
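The exact scattering formulas are not reproduced in this text; the sketch below only illustrates one plausible reading, in which each pixel is compared against the shading A_i + D_i <l, n> predicted for its region and the hand-set thresholds t_s and t_n decide membership in the shadow and highlight masks (the thresholding logic, function name and array layout are assumptions, not the patented formulas):

    import numpy as np

    def shadow_highlight_masks(image_gray, region_ids, ndotl, A, D, t_s=0.1, t_n=0.1):
        # image_gray : (H, W) pixel intensities
        # region_ids : (H, W) integer label i of the segmented object region of each pixel
        # ndotl      : (H, W) value of <l, n(r)> for the 3D point behind each pixel
        # A, D       : dicts mapping region id i -> ambient value A_i and diffuse value D_i
        predicted = np.zeros_like(image_gray, dtype=float)
        for i in np.unique(region_ids):
            sel = region_ids == i
            predicted[sel] = A[i] + D[i] * ndotl[sel]   # shading predicted by the model
        residual = image_gray - predicted
        shadow = residual < -t_s      # darker than the model predicts -> shadow region
        highlight = residual > t_n    # brighter than the model predicts -> highlight region
        return shadow, highlight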
Further, in step 5, the final rendering still uses the illumination model that was originally used to solve for the light source direction. Once the light source information of the scene (light source direction, shadow, highlight and so on) is available, an object in that scene can be inserted into another target scene according to the illumination model; it is only necessary to obtain the light source information of the target scene in the same way and substitute it into the illumination model to obtain the final rendering result.
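A compact sketch of this final compositing step, assuming the same Lambertian-style model is simply re-evaluated with the target scene's light direction and A, D values and the re-shaded object is pasted over the target image (the grayscale layout and the function name are illustrative choices, not the patent's procedure):

    import numpy as np

    def relight_into_target(obj_mask, obj_normals, obj_albedo, target_image,
                            target_light_dir, target_A, target_D):
        # obj_mask     : (H, W) boolean mask of the object's pixels in the target frame
        # obj_normals  : (H, W, 3) normals of the object's 3D points projected into that frame
        # obj_albedo   : (H, W) reflectance of the object (e.g. rho recovered from the source image)
        # target_image : (H, W) grayscale target image to render into
        # target_light_dir, target_A, target_D : light direction and A, D estimated in the target scene
        ndotl = np.clip(obj_normals @ np.asarray(target_light_dir, dtype=float), 0.0, None)
        shaded = obj_albedo * (target_A + target_D * ndotl)   # illumination model under the target lighting
        out = target_image.astype(float)
        out[obj_mask] = shaded[obj_mask]                      # paste the re-lit object into the target image
        return out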
The beneficial effects of the invention are as follows: the present invention extracts the model and illumination information from a single RGB image and fuses them into a target RGB image for rendering, quickly and conveniently realizing the basic fusion process of relighting, so that deficiencies in a shot can be found in time during film production and corrected or reshot on the spot. Traditional relighting renders existing images or videos in post-production and cannot discover in real time that an image or video is unsuitable for effects compositing, which leads to reshoots in the later stage, prolongs the whole production process and increases cost. The present invention uses the three-dimensional model of the image scene to compute the illumination parameters of the image; while keeping the whole relighting rendering process fast, it displays the effect after relighting fusion simply and as accurately as possible, provides a reference for on-site shooting, and improves the efficiency of the whole shooting process.
Brief description of the drawings
Fig. 1 is the overall flow chart of the method of the invention;
Fig. 2 is the input original RGB image;
Fig. 3 is the scene three-dimensional point cloud model corresponding to the input image;
Fig. 4 is the solved image shadow region;
Fig. 5 is the solved image highlight region.
Embodiment
The core of the method of the invention is to extract the scene three-dimensional point cloud model corresponding to the input RGB image, use this information to establish an energy function and minimize it so as to obtain the illumination parameters, and finally render the scene model of the original image into the target image.
A specific embodiment is described below; the steps are as follows (see Fig. 1):
(1) Read in a single original RGB image (see Fig. 2) and the corresponding three-dimensional scene point cloud model (see Fig. 3); according to the relation, under the light source direction, between the scene point cloud with its normal vectors and the image RGB values, establish a system of linear equations and estimate the direction vector of the light source by least squares;
(2) Assume that the objects in the scene are Lambertian, consider the illumination model, and establish the energy function of the Phong illumination model by combining the light source direction and the normal vectors;
(3) Minimize the energy function with an optimization method, and obtain the ambient light A_i and the diffuse value D_i of each object in the image scene from the partial derivatives of the minimized energy function;
(4) From the ambient light A_i and the diffuse value D_i computed in step 3, compute the image shadow region (see Fig. 4) and highlight region (see Fig. 5), and save them as images as an intermediate result;
(5) Compute the illumination direction of the target RGB image, render the three-dimensional model of the original image fused into the target image according to the assumed illumination model, and output the final rendered image.

Claims (6)

  1. A method for relighting rendering based on illumination parameters computed from an image and a model, characterized in that the method comprises the following steps:
    (1) reading in a single original RGB image and the corresponding three-dimensional point cloud model of the image scene; according to the relation, under the light source direction, between the scene point cloud with its normal vectors and the image RGB values, establishing a system of linear equations and estimating the direction vector of the light source by least squares;
    (2) assuming that the objects in the scene are Lambertian, considering the illumination model, and establishing the energy function of the Phong illumination model by combining the light source direction and the normal vectors;
    (3) minimizing the energy function with an optimization method, and obtaining the ambient light A_i and the diffuse value D_i of each object in the image scene from the partial derivatives of the minimized energy function;
    (4) computing the shadow coefficient and the highlight coefficient of the image from the ambient light A_i and the diffuse value D_i computed in step (3), and saving them as images as an intermediate result;
    (5) computing the illumination direction of the target RGB image, rendering the three-dimensional model of the original RGB image fused into the target RGB image according to the assumed illumination model, and outputting the final rendered image.
  2. The method for relighting rendering based on illumination parameters computed from an image and a model according to claim 1, characterized in that, in step (1), when the normal vectors are unknown, the normal vector $\vec{n}$ is solved as follows: since the three-dimensional point cloud of the whole model contains a large number of points, the point cloud is divided into several parts to prevent memory overflow during computation; for each part, the k neighbouring points are taken, where k is 3000 here, and the covariance matrix of the points of that part is computed as $C = \sum_{j=1}^{k}(\vec{x}_j - \vec{p})(\vec{x}_j - \vec{p})^{T}$, where $\vec{x}_j$ is one of the k points of the part and $\vec{p}$ is the point whose normal vector is required; PCA decomposition of the covariance matrix yields the normal vector $\vec{n}$ of the point $\vec{p}$.
  3. The method for relighting rendering based on illumination parameters computed from an image and a model according to claim 1, characterized in that, in step (1), the light source direction $\vec{l}$ is solved as follows: the input model is obtained by a common purely visual three-dimensional reconstruction method; it records the three-dimensional point cloud of the image scene and, for each three-dimensional point, the coordinates of the corresponding pixel in the image, i.e. every three-dimensional point of the model corresponds to a pixel in the image; since there may be more than one object in the image scene and the illumination coefficient of each object is different, the image is first segmented before the light source direction is solved, and each distinct object region is separated and denoted i; assuming there is only one light source in the scene, a simple illumination model $I = \rho\,\vec{l}^{\,T}\vec{n}$ is selected, where I is the RGB value of an image pixel, ρ is the constant illumination coefficient associated with the object, $\vec{l}$ is the light source direction, P is the spatial coordinate of the three-dimensional point, $\vec{n}$ is the normal vector at point P, and T denotes the transpose of a vector; a system of linear equations is established from the above illumination model for every pixel of each region in the image, and the light source direction $\vec{l}$ is obtained by solving the system.
  4. The method for relighting rendering based on illumination parameters computed from an image and a model according to claim 1, characterized in that the illumination model selected in step (2) is:
    $$I = I_a k_a + I_p S_p \left( k_d \langle \vec{l}, \vec{n} \rangle + k_s H_p \right)$$
    where I is the RGB value of an image pixel; I_a is the ambient light color and I_p is the brightness of the light source; k_a, k_d and k_s are the ambient, diffuse and specular coefficients respectively; S_p ∈ [0, 1] is the shadow coefficient and H_p is the highlight coefficient; $\vec{l}$ is the light source direction and $\vec{n}$ is the normal vector; the energy function $E(\vec{l})$ is defined as:
    $$E(\vec{l}) = \sum_{\vec{r}} \left( I(\vec{r}) - A_i - D_i \langle \vec{l}, \vec{n}(\vec{r}) \rangle \right)^2$$
    where $\vec{r}$ is an image pixel, $I(\vec{r})$ is the RGB value at pixel $\vec{r}$, and $\vec{n}(\vec{r})$ is the normal vector of the model three-dimensional point corresponding to pixel $\vec{r}$; writing ρ for the constant illumination coefficient associated with the object, the ambient light value of each region i in the image is A_i = I_a ρ and the diffuse value is D_i = I_p ρ.
  5. The method for relighting rendering based on illumination parameters computed from an image and a model according to claim 4, characterized in that the shadow coefficient and the highlight coefficient of the scene in step (4) are computed according to the following scattering model:
    where the parameters t_s and t_n are positive thresholds, set by hand, that adjust the smoothness of the image lighting; A_i and D_i are the ambient light and diffuse values of region i in the image, and $I(\vec{r})$ denotes the RGB value of pixel $\vec{r}$ in the image.
  6. The method for relighting rendering based on illumination parameters computed from an image and a model according to claim 1, characterized in that, in step (5), the final rendering still uses the illumination model originally used to solve for the light source direction; once the light source direction, shadow, highlight and other light source information of the scene are available, an object in that scene can be inserted into another target scene according to the illumination model; it is only necessary to obtain the light source information of the target scene in the same way and substitute it into the illumination model to obtain the final rendering result.
CN201510771082.4A 2015-11-12 2015-11-12 Method for relighting rendering based on illumination parameters computed from an image and a model Expired - Fee Related CN105447906B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510771082.4A CN105447906B (en) 2015-11-12 2015-11-12 Method for relighting rendering based on illumination parameters computed from an image and a model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510771082.4A CN105447906B (en) 2015-11-12 2015-11-12 Method for relighting rendering based on illumination parameters computed from an image and a model

Publications (2)

Publication Number Publication Date
CN105447906A CN105447906A (en) 2016-03-30
CN105447906B true CN105447906B (en) 2018-03-13

Family

ID=55558037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510771082.4A Expired - Fee Related CN105447906B (en) 2015-11-12 2015-11-12 Method for relighting rendering based on illumination parameters computed from an image and a model

Country Status (1)

Country Link
CN (1) CN105447906B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364292A (en) * 2018-03-26 2018-08-03 吉林大学 Illumination estimation method based on several multi-view images

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023296B (en) * 2016-05-27 2018-09-28 华东师范大学 Fluid scene illumination parameter computational methods
CN106204714B (en) * 2016-08-01 2019-02-01 华东师范大学 Video fluid illumination calculation method based on Phong model
CN106570928B (en) * 2016-11-14 2019-06-21 河海大学 Image-based relighting method
CN107424206B (en) * 2017-04-14 2020-09-22 苏州蜗牛数字科技股份有限公司 Interaction method for influencing shadow expression of virtual scene by using real environment
CN107506714B (en) * 2017-08-16 2021-04-02 成都品果科技有限公司 Face image relighting method
CN107909640B (en) * 2017-11-06 2020-07-28 清华大学 Face relighting method and device based on deep learning
CN107944420B (en) * 2017-12-07 2020-10-27 北京旷视科技有限公司 Illumination processing method and device for face image
CN108460841A (en) * 2018-01-23 2018-08-28 电子科技大学 Indoor scene lighting environment estimation method based on a single image
CN108509887A (en) * 2018-03-26 2018-09-07 深圳超多维科技有限公司 A kind of acquisition ambient lighting information approach, device and electronic equipment
CN108682041B (en) * 2018-04-11 2021-12-21 浙江传媒学院 Method for performing multi-light-source rendering based on matrix row and column sampling and deep learning
CN108765537A (en) * 2018-06-04 2018-11-06 北京旷视科技有限公司 A kind of processing method of image, device, electronic equipment and computer-readable medium
CN109618472A (en) * 2018-07-16 2019-04-12 马惠岷 Lamp light control method and system
CN109224448B (en) * 2018-09-25 2021-01-01 北京天马时空网络技术有限公司 Method and device for stream rendering
CN109448098B (en) * 2018-09-29 2023-01-24 北京航空航天大学 Method for reconstructing virtual scene light source based on single night scene image of building
CN109389113B (en) * 2018-10-29 2020-12-15 大连恒锐科技股份有限公司 Multifunctional footprint acquisition equipment
CN109785423B (en) * 2018-12-28 2023-10-03 广州方硅信息技术有限公司 Image light supplementing method and device and computer equipment
CN110009723B (en) * 2019-03-25 2023-01-31 创新先进技术有限公司 Reconstruction method and device of ambient light source
CN111063034B (en) * 2019-12-13 2023-08-04 四川中绳矩阵技术发展有限公司 Time domain interaction method
CN111147745B (en) * 2019-12-30 2021-11-30 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and storage medium
WO2021226862A1 (en) * 2020-05-13 2021-11-18 Shanghaitech University Neural opacity point cloud
CN111798384A (en) * 2020-06-10 2020-10-20 武汉大学 Reverse rendering human face image illumination information editing method
CN111815750A (en) * 2020-06-30 2020-10-23 深圳市商汤科技有限公司 Method and device for polishing image, electronic equipment and storage medium
CN111968216B (en) * 2020-07-29 2024-03-22 完美世界(北京)软件科技发展有限公司 Volume cloud shadow rendering method and device, electronic equipment and storage medium
CN112053338A (en) * 2020-08-31 2020-12-08 浙江商汤科技开发有限公司 Image decomposition method and related device and equipment
CN112258622A (en) * 2020-10-26 2021-01-22 北京字跳网络技术有限公司 Image processing method, image processing device, readable medium and electronic equipment
EP4254319A4 (en) * 2020-12-28 2024-01-03 Huawei Tech Co Ltd Image processing method and apparatus
CN112819940B (en) * 2021-01-29 2024-02-23 网易(杭州)网络有限公司 Rendering method and device and electronic equipment
CN112927342B (en) * 2021-02-22 2022-12-20 中铁二院工程集团有限责任公司 Illumination calculation method and fixed pipeline rendering and programmable pipeline rendering methods
CN112819941B (en) * 2021-03-05 2023-09-12 网易(杭州)网络有限公司 Method, apparatus, device and computer readable storage medium for rendering water surface
CN113920036A (en) * 2021-12-14 2022-01-11 武汉大学 Interactive relighting editing method based on RGB-D image
CN114627246A (en) * 2022-03-25 2022-06-14 广州光锥元信息科技有限公司 Method for simulating 3D (three-dimensional) lighting of image video containing portrait
CN116385614B (en) * 2023-03-29 2024-03-01 深圳海拓时代科技有限公司 3D vision module rendering control system based on visualization

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246600A (en) * 2008-03-03 2008-08-20 北京航空航天大学 Method for real-time generation of augmented reality environments using a spherical panoramic camera
CN103035025A (en) * 2012-12-28 2013-04-10 浙江大学 Material high realistic rendering algorithm based on bidirectional reflectance distribution function (BRDF) measured data

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7262771B2 (en) * 2003-10-10 2007-08-28 Microsoft Corporation Systems and methods for all-frequency relighting using spherical harmonics and point light distributions
WO2009143163A2 (en) * 2008-05-21 2009-11-26 University Of Florida Research Foundation, Inc. Face relighting from a single image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246600A (en) * 2008-03-03 2008-08-20 北京航空航天大学 Method for real-time generation of augmented reality environments using a spherical panoramic camera
CN103035025A (en) * 2012-12-28 2013-04-10 浙江大学 Material high realistic rendering algorithm based on bidirectional reflectance distribution function (BRDF) measured data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Removing shadows from images; G. D. Finlayson et al.; European Conference on Computer Vision; 2002-12-31; Vol. 2353 (No. 1); pp. 823-836 *
Research on relighting methods for optical remote sensing images; Wang Chenhao et al.; Bulletin of Surveying and Mapping; 2014-12-31; pp. 170-173 *
Image-based relighting techniques; Ding Xiaodong; China Master's Theses Full-text Database, Information Science and Technology; 2009-12-15 (No. 12); pp. I138-581 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364292A (en) * 2018-03-26 2018-08-03 吉林大学 Illumination estimation method based on several multi-view images
CN108364292B (en) * 2018-03-26 2021-05-25 吉林大学 Illumination estimation method based on multiple visual angle images

Also Published As

Publication number Publication date
CN105447906A (en) 2016-03-30

Similar Documents

Publication Publication Date Title
CN105447906B (en) Method for relighting rendering based on illumination parameters computed from an image and a model
CN105844695B (en) Illumination modeling method based on real material measurement data
CN109255831A (en) Method for single-view face three-dimensional reconstruction and texture generation based on multi-task learning
Behrendt et al. Realistic real-time rendering of landscapes using billboard clouds
Lu et al. Illustrative interactive stipple rendering
US12002150B2 (en) Systems and methods for physically-based neural face shader via volumetric lightmaps
CN107452048A (en) Global illumination computation method and device
CN103262126B (en) Image processing apparatus, illumination processing device and method thereof
CN108460841A (en) Indoor scene lighting environment estimation method based on a single image
Paulin et al. Review and analysis of synthetic dataset generation methods and techniques for application in computer vision
CN110060335A (en) Virtual-real fusion method for scenes containing mirror objects and transparent objects
Chen et al. Single image based illumination estimation for lighting virtual object in real scene
CN105976423B (en) Lens flare generation method and device
Mirbauer et al. SkyGAN: Towards Realistic Cloud Imagery for Image Based Lighting.
CN113610955A (en) Object rendering method and device and shader
CN103247070A (en) Interactive relighting sense of reality rendering method based on precomputed transfer tensor
Thompson et al. Real-time mixed reality rendering for underwater 360 videos
McGuire et al. A phenomenological scattering model for order-independent transparency
Wang et al. Capturing and rendering geometry details for BTF-mapped surfaces
González et al. based ambient occlusion
CN108447085A (en) Face visual appearance restoration method based on consumer-grade RGB-D cameras
Shi et al. Material Design in Augmented Reality with In-Situ Visual Feedback.
Zheng et al. An extended photometric stereo algorithm for recovering specular object shape and its reflectance properties
Ma et al. A shape-from-shading method based on surface reflectance component estimation
Yang et al. Light Sampling Field and BRDF Representation for Physically-based Neural Rendering

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180313

Termination date: 20181112