CN106570928A - Image-based re-lighting method - Google Patents


Info

Publication number
CN106570928A
Authority
CN
China
Prior art keywords
image
training
artificial neural
neural network
pixel
Prior art date
Legal status
Granted
Application number
CN201610998904.7A
Other languages
Chinese (zh)
Other versions
CN106570928B (en)
Inventor
韦伟
刘惠义
钱苏斌
Current Assignee
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date
Filing date
Publication date
Application filed by Hohai University HHU
Priority to CN201610998904.7A
Publication of CN106570928A
Application granted
Publication of CN106570928B
Expired - Fee Related
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/10: Geometric effects
    • G06T 15/20: Perspective computation
    • G06T 15/205: Image-based rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses an image-based relighting method, belonging to the field of computer graphics. To achieve relighting as accurately as possible from as few samples as possible, quantitative random sampling is performed repeatedly in the space of image samples and in the space of image pixels, and an artificial neural network is trained until the training accuracy of all pixels reaches a set threshold. Because a neural network requires a minimum number of training samples, the Bagging algorithm from ensemble learning is used to average the network outputs for pixels whose training samples are insufficient. The method was tested in simulated three-dimensional scenes, and the results indicate that, compared with the prior art, the method needs less training time, is more robust and, at the same relative error precision, requires fewer image samples, runs faster with good real-time performance, and achieves a higher PSNR (peak signal-to-noise ratio) for the reconstructed scene image.

Description

An image-based relighting method
Technical field
The present invention relates to an image-based relighting method, and belongs to the fields of machine learning and computer graphics.
Background art
Image-based relighting (IBR), also referred to as image-based rendering, aims to compute a light transport matrix from captured images and to render the scene image under new lighting conditions. Its greatest advantage is that no geometric information about the scene is needed: rendering is unaffected by scene complexity, and lighting effects such as reflection, refraction and scattering can still be reproduced. IBR has therefore been a focus of attention in the graphics field since it was proposed.
However, IBR generally requires dense sampling to obtain image samples, which greatly increases both acquisition effort and storage. Using machine learning to achieve accurate relighting from only a small number of samples is thus a problem in urgent need of a solution.
Summary of the invention
The technical problem to be solved by the present invention is to provide an image-based relighting method. By incrementally adding image samples, randomly sampling the pixel space, training a three-layer neural network, and comprehensively applying the Bagging idea from ensemble learning, the method achieves high-precision relighting from a small number of samples.
The present invention adopts the following technical solution to solve the above technical problem:
The present invention provides an image-based relighting method, characterized by comprising the following steps:
Step 1: Collect a group of scene data, including the coordinates LigX, LigY of a point light source and the corresponding image set ImageSet output from a fixed viewpoint; compute the mean values ImgAvg_R, ImgAvg_G, ImgAvg_B of the image set ImageSet over the three channels R, G, B;
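For illustration, the channel means of Step 1 amount to a one-line reduction over the image set. The sketch below is a minimal NumPy version; it assumes that ImgAvg_R, ImgAvg_G, ImgAvg_B are the per-pixel means over all captured images (so that they vary with [Px, Py] and can serve as per-sample input features in Step 3), an interpretation the patent does not state explicitly:

```python
import numpy as np

def average_image(image_set):
    """Per-pixel channel means over the captured image set.

    image_set: float array of shape (num_images, H, W, 3) holding the RGB
    images captured under the different point-light positions.
    Returns an (H, W, 3) array whose channels give ImgAvg_R, ImgAvg_G,
    ImgAvg_B at every pixel."""
    return image_set.mean(axis=0)
```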
Step 2: Randomly sample within the image set ImageSet to form an image subset ImageSubset whose number of image samples is ImageNum;
Step 3: Randomly sample within the pixel space of the image subset ImageSubset to obtain the training sample set of the artificial neural network, specifically:
(1) randomly sample within the pixel space of the image subset ImageSubset to form a pixel point set, where the number of samples is PixNum and the coordinates of a pixel are [Px, Py];
(2) the training sample set consists of two parts corresponding to the input and the output of the artificial neural network respectively, where the input part comprises Px, Py, LigX, LigY, ImgAvg_R, ImgAvg_G, ImgAvg_B, and the output part is the RGB value of the image at position [Px, Py] under light source position [LigX, LigY];
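A minimal sketch of how the training pairs of Step 3 could be assembled, assuming the per-pixel average image from the Step 1 sketch above; the function and variable names are hypothetical, not taken from the patent:

```python
import numpy as np

def build_training_set(image_subset, light_positions, pixels, avg_img):
    """Assemble the (input, output) pairs of Step 3.

    image_subset:    (ImageNum, H, W, 3) sampled images
    light_positions: (ImageNum, 2) light coordinates [LigX, LigY] per image
    pixels:          (PixNum, 2) sampled pixel coordinates [Px, Py]
    avg_img:         (H, W, 3) per-pixel channel means over ImageSet
    """
    X, y = [], []
    for (lig_x, lig_y), img in zip(light_positions, image_subset):
        for px, py in pixels:
            r, g, b = avg_img[py, px]
            X.append([px, py, lig_x, lig_y, r, g, b])  # 7 input attributes
            y.append(img[py, px])                       # RGB at [Px, Py]
    return np.asarray(X, dtype=float), np.asarray(y, dtype=float)
```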
Step 4: Train the artificial neural network with the training sample set of Step 3; after training is completed, mark every pixel whose relative square error is less than or equal to a preset first threshold δ1 with the artificial neural network that completed the training;
Step 5: Randomly sample again among the pixels left unmarked in Step 4 and train a new artificial neural network; repeat until either all pixels in the training sample set are marked or the unmarked pixels no longer satisfy the minimum-sample requirement of artificial neural network training; when the unmarked pixels cannot satisfy the minimum-sample requirement, adopt the idea of Bagging ensemble learning, so that the output for an unmarked pixel is decided jointly by all trained neural networks;
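The joint decision of Step 5 is, per the later optimization scheme, a simple average over all trained networks. A sketch under the assumption that each network exposes a scikit-learn-style predict method:

```python
import numpy as np

def bagging_output(networks, features):
    """Bagging fallback of Step 5: simple average of the predictions of
    all trained networks for one 7-dimensional input vector."""
    features = np.asarray(features).reshape(1, -1)
    preds = [net.predict(features)[0] for net in networks]
    return np.mean(preds, axis=0)  # averaged RGB value
```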
Step 6: Test on the image set ImageSet with the trained artificial neural networks; if the test precision reaches a preset second threshold δ2, save the trained artificial neural networks and execute Step 7; otherwise, increase the image sample number ImageNum and return to Step 2;
Step 7: Reconstruct the scene under a light source at an arbitrary position with the trained neural networks.
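Putting the pieces together, the reconstruction of Step 7 evaluates, for every pixel, the network that pixel was marked with, or the Bagging average when no single network was marked. The sketch below assumes the helpers from the previous sketches and a hypothetical mapping pixel_to_net built during Steps 4 and 5:

```python
import numpy as np

def relight(pixel_to_net, networks, light_pos, avg_img, shape):
    """Reconstruct the scene image for an arbitrary light position (Step 7).

    pixel_to_net: dict mapping (px, py) to the network that pixel was
                  marked with, or None for the Bagging fallback."""
    h, w = shape
    lig_x, lig_y = light_pos
    out = np.zeros((h, w, 3))
    for py in range(h):
        for px in range(w):
            feat = np.array([px, py, lig_x, lig_y, *avg_img[py, px]])
            net = pixel_to_net.get((px, py))
            if net is not None:
                out[py, px] = net.predict(feat.reshape(1, -1))[0]
            else:  # no dedicated network: average all trained networks
                out[py, px] = bagging_output(networks, feat)
    return out
```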
As a further optimization scheme of the present invention, in Step 3 the number of samples satisfies PixNum ≥ Pixmin, where Pixmin = a·Tmin/ImageNum, Tmin is the minimum number of samples needed for artificial neural network training, and a is a coefficient with a ≥ 1.
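A small helper for this bound; note that the formula for Pixmin is reconstructed here from the surrounding definitions (each sampled pixel contributes ImageNum training samples, so PixNum·ImageNum must reach a·Tmin), as the original equation is not reproduced in the text:

```python
import math

def pix_min(t_min, image_num, a=1.0):
    """Reconstructed lower bound on the pixel sample count of Step 3:
    smallest PixNum with PixNum * ImageNum >= a * T_min (a >= 1)."""
    return math.ceil(a * t_min / image_num)
```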
As a further optimization scheme of the present invention, in Step 4 the training sample set is normalized before it is used to train the artificial neural network.
As a further optimization scheme of the present invention, the artificial neural network in Step 4 has 7 input nodes, 2 hidden layers and 3 output nodes, where the two hidden layers have the same number of nodes; the input nodes are Px, Py, LigX, LigY, ImgAvg_R, ImgAvg_G, ImgAvg_B respectively, and the output nodes are the RGB values of the image at position [Px, Py] under light source position [LigX, LigY]; the number of hidden-layer nodes Nhide is determined by experiment.
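A minimal sketch of this 7-Nhide-Nhide-3 architecture using scikit-learn's MLPRegressor; the library choice, the activation and the value Nhide = 20 are illustrative assumptions, since the patent only fixes the layer sizes and determines Nhide experimentally:

```python
from sklearn.neural_network import MLPRegressor

N_hide = 20  # hypothetical value; the patent determines N_hide by experiment
net = MLPRegressor(hidden_layer_sizes=(N_hide, N_hide),  # two equal hidden layers
                   activation='tanh', max_iter=2000)
# X: (n, 7) columns [Px, Py, LigX, LigY, ImgAvg_R, ImgAvg_G, ImgAvg_B],
# y: (n, 3) RGB targets, both normalized beforehand as in Step 4.
# net.fit(X, y)
```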
As a further optimization scheme of the present invention, the minimum number of samples needed for artificial neural network training is Tmin = b[(7+1)×Nhide + (Nhide+1)×Nhide + (Nhide+1)×3], where b is a coefficient with b ≥ 10; the bracketed term counts the weights and biases of the 7-Nhide-Nhide-3 network, so at least b samples are required per free parameter.
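The bound is straightforward to evaluate; a direct transcription of the formula:

```python
def t_min(n_hide, b=10):
    """Minimum training-sample count T_min: b samples per weight/bias of
    the 7 -> n_hide -> n_hide -> 3 network (b >= 10)."""
    n_params = (7 + 1) * n_hide + (n_hide + 1) * n_hide + (n_hide + 1) * 3
    return b * n_params
```

For Nhide = 20, for example, t_min(20) gives 10 × (160 + 420 + 63) = 6430 samples.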
As a further optimization scheme of the present invention, the relative square error of a pixel in Step 4 is RSE = Σj‖Ĩj(Pixi) − Ij(Pixi)‖² / Σj‖Ĩj(Pixi)‖², where Ĩj(Pixi) denotes the actual RGB value of the i-th pixel of the j-th image and Ij(Pixi) denotes the RGB value of the i-th pixel of the j-th image predicted by the artificial neural network.
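A direct implementation of this per-pixel error, assuming the RGB values are gathered into arrays over the training images (the array shapes are my assumption):

```python
import numpy as np

def relative_square_error(actual, predicted):
    """Relative square error of one pixel over all training images.

    actual, predicted: (num_images, 3) actual and network-predicted RGB
    values of this pixel; pixels with RSE <= delta_1 get marked (Step 4)."""
    return np.sum((actual - predicted) ** 2) / np.sum(actual ** 2)
```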
As a further optimization scheme of the present invention, when the unmarked pixels in Step 5 cannot satisfy the minimum-sample requirement of artificial neural network training, the idea of Bagging ensemble learning is adopted, and the output of an unmarked pixel is obtained as the simple average of the outputs of all trained artificial neural networks.
As a further optimization scheme of the present invention, the relative mean square error in Step 6 is RMSE = ΣiΣj‖Ĩj(Pixi) − Ij(Pixi)‖² / ΣiΣj‖Ĩj(Pixi)‖², i.e. the relative square error accumulated over all pixels i and all images j of ImageSet.
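The whole-set test metric follows the same pattern as the per-pixel error; a sketch, again under assumed array shapes:

```python
import numpy as np

def relative_mean_square_error(actual, predicted):
    """Test precision of Step 6 over the full image set.

    actual, predicted: (num_images, H, W, 3) actual and reconstructed
    images; training stops when this reaches the threshold delta_2."""
    return np.sum((actual - predicted) ** 2) / np.sum(actual ** 2)
```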
As a further optimization scheme of the present invention, in Step 6 the image sample number ImageNum is increased according to actual needs.
As a further optimization scheme of the present invention, the image sample number ImageNum is increased by 20 each time.
Compared with the prior art, the above technical solution of the present invention has the following technical effects: the present invention was tested in two simulated three-dimensional scenes, and the results show that the training time is shorter and the robustness is stronger; under the same relative error precision, fewer image samples are needed for relighting, and the PSNR of the reconstructed scene image is higher.
Description of the drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 compares the training error and the training time of the present invention and the prior art on the Dragon and Mitsuba scenes, where (a) is the training error of the Dragon scene, (b) is the training error of the Mitsuba scene, (c) is the training time of the Dragon scene, and (d) is the training time of the Mitsuba scene.
Specific embodiment
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings:
An image-based relighting method of the present invention, as shown in Fig. 1, comprises:
Step 1: Collect a group of scene data (Dragon, Mitsuba), including the coordinates LigX, LigY of a point light source and the corresponding image set ImageSet output from a fixed viewpoint; compute the mean values of the image set over the R, G, B channels to obtain ImgAvg_R, ImgAvg_G, ImgAvg_B; the scene data are given in Table 1.
Table 1. Scene data
Scene | Light source distribution | Image size
Dragon | 31×31 | 64×48
Mitsuba | 21×21 | 64×48
Step 2: Randomly sample within the image set ImageSet to form the image subset ImageSubset with ImageNum image samples.
Step 3: Randomly sample within the pixel space of the image subset ImageSubset to obtain the training sample set needed by the artificial neural network:
(1) randomly sample within the pixel space of the image subset ImageSubset to form a pixel point set, where the number of samples is PixNum and the coordinates of a pixel are [Px, Py];
(2) the training sample set consists of an input part and an output part, where the input attributes comprise LigX, LigY, Px, Py, ImgAvg_R, ImgAvg_G, ImgAvg_B, and the output attribute is the RGB value of the image at position [Px, Py] under light source position [LigX, LigY].
Step 4: Train the artificial neural network with the training sample set; after training is completed, mark every pixel whose relative square error RSE ≤ preset threshold δ1 with the artificial neural network that completed the training.
Step 5: Randomly sample again among the pixels left unmarked in Step 4 and train a new artificial neural network; repeat until either all pixels in the training sample set are marked or the unmarked pixels no longer satisfy the minimum-sample requirement of artificial neural network training; when the unmarked pixels cannot satisfy the minimum-sample requirement, adopt the idea of Bagging ensemble learning, so that all trained networks jointly decide the output.
Step 6: Test on the image set ImageSet with the trained artificial neural networks; if the test precision reaches the preset threshold δ2, save the trained artificial neural networks; otherwise, increase the image sample number ImageNum and restart from Step 2.
Step 7: Reconstruct the scene under an arbitrary light source position with the trained artificial neural networks. An image set ImageSetTest with the same number of images as the training set ImageSet is randomly sampled, and the scene is reconstructed with the trained networks.
As shown in Fig. 2, the method is compared with the technique of Ren et al. in "Image Based Relighting Using Neural Networks", ACM Transactions on Graphics, 2015, 34(4). In Fig. 2, (a) and (b) are the training error plots of the Dragon and Mitsuba scenes respectively, and (c) and (d) are the training time plots of the Dragon and Mitsuba scenes. Fig. 2 makes it apparent that, as the number of image samples increases, the RMSE of the method of the present invention (dashed lines in the figure) declines markedly faster than that of Ren's method, meaning that fewer samples are needed to reach the same precision; likewise, the training time required by the method of the present invention is lower than that of Ren's method.
Table 2 gives the scene reconstruction results on the test data of the two scenes Dragon and Mitsuba, showing that lower RMSE values than Ren's method are obtained using fewer images.
Table 2. Scene reconstruction results
The above are only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto; any transformation or replacement that anyone familiar with the art could conceive within the technical scope disclosed herein shall be covered by the scope of the present invention. Therefore, the scope of protection of the present invention shall be defined by the scope of protection of the claims.

Claims (10)

1. An image-based relighting method, characterized by comprising the following steps:
Step 1: Collect a group of scene data, including the coordinates LigX, LigY of a point light source and the corresponding image set ImageSet output from a fixed viewpoint; compute the mean values ImgAvg_R, ImgAvg_G, ImgAvg_B of the image set ImageSet over the three channels R, G, B;
Step 2: Randomly sample within the image set ImageSet to form an image subset ImageSubset whose number of image samples is ImageNum;
Step 3: Randomly sample within the pixel space of the image subset ImageSubset to obtain the training sample set of the artificial neural network, specifically:
(1) randomly sample within the pixel space of the image subset ImageSubset to form a pixel point set, where the number of samples is PixNum and the coordinates of a pixel are [Px, Py];
(2) the training sample set consists of two parts corresponding to the input and the output of the artificial neural network respectively, where the input part comprises Px, Py, LigX, LigY, ImgAvg_R, ImgAvg_G, ImgAvg_B, and the output part is the RGB value of the image at position [Px, Py] under light source position [LigX, LigY];
Step 4: Train the artificial neural network with the training sample set of Step 3; after training is completed, mark every pixel whose relative square error is less than or equal to a preset first threshold δ1 with the artificial neural network that completed the training;
Step 5: Randomly sample again among the pixels left unmarked in Step 4 and train a new artificial neural network; repeat until either all pixels in the training sample set are marked or the unmarked pixels no longer satisfy the minimum-sample requirement of artificial neural network training; when the unmarked pixels cannot satisfy the minimum-sample requirement, adopt the idea of Bagging ensemble learning, so that the output for an unmarked pixel is decided jointly by all trained neural networks;
Step 6: Test on the image set ImageSet with the trained artificial neural networks; if the test precision reaches a preset second threshold δ2, save the trained artificial neural networks and execute Step 7; otherwise, increase the image sample number ImageNum and return to Step 2;
Step 7: Reconstruct the scene under a light source at an arbitrary position with the trained neural networks.
2. The image-based relighting method according to claim 1, characterized in that in Step 3 the number of samples satisfies PixNum ≥ Pixmin, where Pixmin = a·Tmin/ImageNum, Tmin is the minimum number of samples needed for artificial neural network training, and a is a coefficient with a ≥ 1.
3. The image-based relighting method according to claim 1, characterized in that in Step 4 the training sample set is normalized before it is used to train the artificial neural network.
4. The image-based relighting method according to claim 1, characterized in that the artificial neural network in Step 4 has 7 input nodes, 2 hidden layers and 3 output nodes, where the two hidden layers have the same number of nodes; the input nodes are Px, Py, LigX, LigY, ImgAvg_R, ImgAvg_G, ImgAvg_B respectively, and the output nodes are the RGB values of the image at position [Px, Py] under light source position [LigX, LigY]; the number of hidden-layer nodes Nhide is determined by experiment.
5. The image-based relighting method according to claim 1, 2 or 4, characterized in that the minimum number of samples needed for artificial neural network training is Tmin = b[(7+1)×Nhide + (Nhide+1)×Nhide + (Nhide+1)×3], where b is a coefficient with b ≥ 10.
6. The image-based relighting method according to claim 1, characterized in that the relative square error of a pixel in Step 4 is RSE = Σj‖Ĩj(Pixi) − Ij(Pixi)‖² / Σj‖Ĩj(Pixi)‖², where Ĩj(Pixi) denotes the actual RGB value of the i-th pixel of the j-th image and Ij(Pixi) denotes the RGB value of the i-th pixel of the j-th image predicted by the neural network.
7. The image-based relighting method according to claim 1, characterized in that when the unmarked pixels in Step 5 cannot satisfy the minimum-sample requirement of artificial neural network training, the idea of Bagging ensemble learning is adopted, and the output of an unmarked pixel is obtained as the simple average of the outputs of all trained artificial neural networks.
8. The image-based relighting method according to claim 1, characterized in that the relative mean square error in Step 6 is RMSE = ΣiΣj‖Ĩj(Pixi) − Ij(Pixi)‖² / ΣiΣj‖Ĩj(Pixi)‖².
9. The image-based relighting method according to claim 1, characterized in that in Step 6 the image sample number ImageNum is increased according to actual needs.
10. The image-based relighting method according to claim 9, characterized in that the image sample number ImageNum is increased by 20 each time.
CN201610998904.7A 2016-11-14 2016-11-14 An image-based relighting method Expired - Fee Related CN106570928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610998904.7A CN106570928B (en) 2016-11-14 2016-11-14 An image-based relighting method

Publications (2)

Publication Number Publication Date
CN106570928A 2017-04-19
CN106570928B CN106570928B (en) 2019-06-21

Family

ID=58541876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610998904.7A Expired - Fee Related CN106570928B (en) 2016-11-14 2016-11-14 An image-based relighting method

Country Status (1)

Country Link
CN (1) CN106570928B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140293010A1 (en) * 2009-11-18 2014-10-02 Quang H. Nguyen System for executing 3d propagation for depth image-based rendering
CN104700109A (en) * 2015-03-24 2015-06-10 清华大学 Method and device for decomposing hyper-spectral intrinsic images
CN105447906A (en) * 2015-11-12 2016-03-30 浙江大学 Method for calculating lighting parameters and carrying out relighting rendering based on image and model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PEIRAN REN et al.: "Image Based Relighting Using Neural Networks", ACM Transactions on Graphics *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909640A (en) * 2017-11-06 2018-04-13 清华大学 Face relighting method and device based on deep learning
CN107909640B (en) * 2017-11-06 2020-07-28 清华大学 Face relighting method and device based on deep learning
CN108765540A (en) * 2018-04-26 2018-11-06 河海大学 A relighting method based on images and ensemble learning
CN108765540B (en) * 2018-04-26 2022-04-12 河海大学 Relighting method based on image and ensemble learning
CN110033055A (en) * 2019-04-19 2019-07-19 中共中央办公厅电子科技学院(北京电子科技学院) A complex-object image relighting method based on semantic and material parsing and synthesis

Also Published As

Publication number Publication date
CN106570928B (en) 2019-06-21

Similar Documents

Publication Publication Date Title
CN111292264B (en) Image high dynamic range reconstruction method based on deep learning
Malu et al. Learning photography aesthetics with deep cnns
Cao et al. Ancient mural restoration based on a modified generative adversarial network
CN109961434A (en) Non-reference picture quality appraisement method towards the decaying of level semanteme
CN109472193A (en) Method for detecting human face and device
CN109711401A (en) A kind of Method for text detection in natural scene image based on Faster Rcnn
CN108764250A (en) A method of extracting essential image with convolutional neural networks
DE102021105249A1 (en) MICROTRAINING FOR ITERATIVE REFINEMENT OF A NEURAL NETWORK WITH FEW ADAPTATIONS
CN110992366A (en) Image semantic segmentation method and device and storage medium
CN110532914A (en) Building analyte detection method based on fine-feature study
CN106570928A (en) Image-based re-lighting method
CN111161278A (en) Deep network aggregation-based fundus image focus segmentation method
Liu et al. CT-UNet: Context-transfer-UNet for building segmentation in remote sensing images
CN110503078A (en) A kind of remote face identification method and system based on deep learning
Fu et al. A blind medical image denoising method with noise generation network
CN114973086A (en) Video processing method and device, electronic equipment and storage medium
CN113763300A (en) Multi-focus image fusion method combining depth context and convolution condition random field
Cui et al. Remote sensing image recognition based on dual-channel deep learning network
CN116740547A (en) Digital twinning-based substation target detection method, system, equipment and medium
CN114863450B (en) Image processing method, device, electronic equipment and storage medium
CN113593007B (en) Single-view three-dimensional point cloud reconstruction method and system based on variation self-coding
CN114972937A (en) Feature point detection and descriptor generation method based on deep learning
CN114385883B (en) Contour enhancement method for approximately simulating chapping method in style conversion
Zeng et al. 3D plants reconstruction based on point cloud
Li et al. Compact twice fusion network for edge detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190621

Termination date: 20211114