CN112837425B - Mixed reality illumination consistency adjustment method

Mixed reality illumination consistency adjustment method

Info

Publication number: CN112837425B (application CN202110260083.8A; also published as CN112837425A)
Authority: CN (China)
Prior art keywords: hsv, values, sampling, virtual scene, sampling point
Legal status: Active (granted)
Application number: CN202110260083.8A
Original language: Chinese (zh)
Inventors: 苏虎 (Su Hu), 刘琰 (Liu Yan), 马志千 (Ma Zhiqian)
Original and current assignee: Southwest Jiaotong University
Priority and filing date: 2021-03-10
Publication date of CN112837425A: 2021-05-25
Grant and publication date of CN112837425B: 2022-02-11


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G06T5/00: Image enhancement or restoration
    • G06T5/90: Dynamic range modification of images or parts thereof
    • G06T5/94: Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/10012: Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a mixed reality illumination consistency adjustment method comprising the following steps: design, arrange, and capture sampling points; map the sampling-point coordinates into the virtual scene according to a mapping relation, and perform illumination precomputation for each sampling point; capture a video frame and convert the data at each sampling point to the HSV (hue, saturation, value) model, obtaining the HSV values of the sampling points; adjust the HSV values of the video frame according to the sampling points and the precomputation results; convert the HSV-adjusted video frame to the RGB model and pass it into the virtual scene as a dynamic texture, compositing the video with the virtual scene; if video frame compositing is not finished, enter the next frame cycle; if it is finished, release resources and exit. The invention unifies the brightness of pictures from two different sources well, resolving a key technical problem in video see-through mixed reality.

Description

Mixed reality illumination consistency adjustment method
Technical Field
The invention relates to the technical field of mixed reality, and in particular to a method for adjusting mixed reality illumination consistency.
Background
In a video see-through mixed reality system, real-time video of the real world must be mixed with a three-dimensional rendered scene in virtual space. The lighting of the virtual space is preset according to the simulated environment and is under the system's control. Real-world illumination, however, is determined by the lighting conditions of the user's actual environment and is not under the system's control. The two kinds of illumination are generally not consistent, so the brightness of the real-time video and of the three-dimensional rendered scene differ in the mixed reality environment, producing obvious distortion and seriously degrading the visual effect and the sense of realism.
A patent (application No. CN202011064045.7) discloses a video see-through mixed reality method, system, readable storage medium, and electronic device. The method comprises the following steps: acquire the real camera coordinates, acquire the corresponding virtual camera coordinates, and obtain the mapping between the two; acquire a virtual world coordinate system and derive the real world coordinate system from that mapping; and, for any object in the virtual world coordinate system, obtain its virtual coordinate point and its real coordinate point, then project the object into the field of view of the mixed reality glasses so that it appears at the real coordinate point. Although that patent can obtain the virtual camera coordinates and the virtual world coordinate system from the real camera coordinates, and then obtain the real world coordinate system via the virtual one, its processing is coarse: it cannot unify real-world illumination with virtual-space illumination, and its realism needs improvement.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a mixed reality illumination consistency adjustment method.
The purpose of the invention is realized by the following technical scheme:
The mixed reality illumination consistency adjustment method comprises the following steps:
Step 1: select n brightness sampling patches S1, S2, …, Sn on the photographed object;
Step 2: process each sampling patch to obtain the color values its surface should have under the current virtual-scene illumination, and convert those color values to the hue, saturation, and value of the HSV model;
Step 3: enter the frame loop: the camera captures a frame and transmits it to the computer; repeat steps 1 and 2, converting the pixel colors of the captured frame to the HSV model;
Step 4: identify the sampling patch regions in the captured frame, and compute the mean hue, saturation, and value of each captured sampling patch region;
Step 5: compare the hue, saturation, and value of each virtual-scene sampling patch obtained in step 2 with the corresponding means obtained in step 4 to form a deviation vector; if the modulus of the deviation vector is smaller than the set threshold, keep the sampling point; if it is larger than the threshold, treat the point as a sampling failure and discard it;
Step 6: let m be the number of retained sampling points; if m = 0, the computation for this frame fails and the method jumps directly to step 3; if m > 0, construct a Delaunay triangulation from the m retained sampling points and the four vertices of the video frame, and let t be the number of triangles in the triangulation;
Step 7: adjust the HSV values of all vertices of the triangulation formed in step 6, completing the adjustment of the HSV values of the vertices of the t triangles;
Step 8: adjust the HSV values of the pixels inside the t triangles by bilinear interpolation;
Step 9: convert the HSV-adjusted video frame to the RGB model and pass it into the virtual scene as a dynamic texture;
Step 10: the three-dimensional graphics engine maps the dynamic texture of the video frame into the virtual scene, completing real-time rendering and compositing the virtual scene with the video.
Further, the mixed reality illumination consistency adjustment method comprises the following step: if video frame compositing is not finished, enter the next frame cycle and jump to step 3; if compositing is finished, release resources and exit.
Step 1 specifically comprises: according to the video mixing scheme, select n brightness sampling patches S1, S2, …, Sn on the photographed object; the size of each sampling patch Si must ensure that the number of pixels it occupies in the video frame after being photographed is not lower than a set threshold X, and each sampling patch Si represents the brightness characteristics of a surrounding region Qi.
Step 2 specifically comprises:
Step 21: map the sampling patches from the actual camera's coordinate system into the three-dimensional coordinate system of the virtual scene, according to the mapping between the photographed object and the virtual scene;
Step 22: according to the illumination model preset in the virtual scene, perform illumination precomputation on the sampling patches mapped into the virtual scene to obtain the color values their surfaces would have under the current virtual-scene illumination, and convert these color values to the hue, saturation, and value of the HSV model;
Step 23: denote the mean hue, saturation, and value of the ith sampling patch as (Hci, Sci, Vci); the mean may be an arithmetic mean or a weighted mean.
Step 4 further comprises: denoting the mean hue, saturation, and value of the ith sampling patch in the captured frame as (Hgi, Sgi, Vgi).
The deviation vector in step 5 is calculated as EHSV(i) = (Hci - Hgi, Sci - Sgi, Vci - Vgi), where EHSV(i) is the deviation vector of the ith sampling patch; Hci, Sci, and Vci are respectively the mean hue, saturation, and value of the ith sampling patch mapped into the virtual scene; and Hgi, Sgi, and Vgi are respectively the mean hue, saturation, and value of the identified sampling patch in the captured frame.
Step 7 specifically comprises: first adjust the HSV values of the m sampling points so that the modulus of each sampling point's deviation vector EHSV(i) becomes zero; the HSV adjustments of the four frame vertices directly adopt the HSV adjustment of the sampling point nearest in Euclidean distance, completing the adjustment of the HSV values of the vertices of the t triangles.
Step 8 specifically comprises: linearly interpolate the HSV adjustments of each pair of triangle vertices to adjust the HSV values of the pixels on the corresponding edge; after the three edges are done, interpolate once more between the pixels of each pair of edges to compute the HSV adjustments of the remaining pixels inside the triangle.
The beneficial effects of the invention are as follows:
the invention unifies the brightness of pictures from two different sources well, resolving a key technical problem in video see-through mixed reality.
Drawings
FIG. 1 is a block flow diagram of the present invention.
Detailed Description
To make the technical features, objects, and effects of the present invention more clearly understood, embodiments of the invention are now described with reference to the accompanying drawings.
In this embodiment, as shown in FIG. 1, the mixed reality illumination consistency adjustment method specifically comprises the following steps:
1. Select a number of brightness sampling points on the photographed object according to the video mixing scheme. The sampling points may be distributed uniformly, or non-uniformly according to the characteristics of the photographed object. After setup, n sampling points (patches) S1, S2, …, Sn are obtained. Each sampling point Si characterizes the brightness of a surrounding region Qi. The size of each sampling point must ensure that the number of pixels it occupies in the video frame after being photographed is not lower than the set threshold X. A sampling point may be white, a pure primary color, or another color; if the photographed object has no sampling point of the required size and color, a solid-color label may be affixed.
2. Map the sampling patches from the actual camera's coordinate system into the three-dimensional coordinate system of the virtual scene, according to the mapping between the photographed object and the virtual scene. Then, according to the illumination model preset in the virtual scene, perform illumination precomputation on the mapped sampling patches to obtain the color values their surfaces would have under the current virtual-scene illumination, and convert these color values to the hue, saturation, and value of the HSV model. Denote the mean hue, saturation, and value of the ith sampling patch as (Hci, Sci, Vci); the mean may be an arithmetic mean or a weighted mean.
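By way of illustration, a minimal Python sketch of this precomputation (the per-patch RGB values are assumed to come from the virtual scene's renderer; OpenCV's 8-bit HSV convention is used, with hue in [0, 180)):

```python
import cv2
import numpy as np

def patch_mean_hsv(rgb_pixels: np.ndarray) -> np.ndarray:
    """Arithmetic-mean HSV (Hci, Sci, Vci) of one sampling patch.

    rgb_pixels: (N, 3) uint8 array of the patch's precomputed RGB
    color values under the virtual scene's illumination model.
    """
    rgb = rgb_pixels.reshape(1, -1, 3).astype(np.uint8)
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)  # H in [0, 180), S and V in [0, 255]
    return hsv.reshape(-1, 3).astype(np.float64).mean(axis=0)
```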
3. Enter the frame loop: the camera captures a frame and transmits it to the computer, and the pixel colors of the captured frame are converted to the HSV model.
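A sketch of this capture step with OpenCV (the camera index 0 is an assumption; OpenCV delivers frames in BGR order):

```python
import cv2

cap = cv2.VideoCapture(0)   # camera index 0 is an assumption
ok, frame_bgr = cap.read()  # one frame per iteration of the frame loop
if ok:
    # OpenCV captures in BGR order; convert the whole frame to HSV
    frame_hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
```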
4. Identify the sampling patch regions in the captured frame, and compute the mean hue, saturation, and value of each captured sampling region. Denote the mean hue, saturation, and value of the ith sampling patch as (Hgi, Sgi, Vgi).
5. Compare the (Hgi, Sgi, Vgi) obtained in step 4 with the (Hci, Sci, Vci) obtained in step 2, and let the deviation vector be EHSV(i) = (Hci - Hgi, Sci - Sgi, Vci - Vgi). If the modulus of the deviation vector is smaller than the set threshold, keep the sampling point. If the modulus of EHSV(i) is larger than the threshold, treat the corresponding point Si as a sampling failure and discard it.
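A sketch of this deviation test, assuming the virtual-scene means from step 2 and the measured means from step 4 are stacked as (n, 3) arrays in the same patch order; following the formula in the text, plain component differences are used and hue wrap-around is ignored:

```python
import numpy as np

def retained_indices(hsv_virtual: np.ndarray, hsv_measured: np.ndarray,
                     threshold: float) -> np.ndarray:
    """Step 5: keep the sampling points whose deviation-vector modulus
    is below the threshold.

    hsv_virtual:  (n, 3) means (Hci, Sci, Vci) from step 2
    hsv_measured: (n, 3) means (Hgi, Sgi, Vgi) from step 4
    """
    ehsv = hsv_virtual - hsv_measured          # EHSV(i), plain differences as in the text
    modulus = np.linalg.norm(ehsv, axis=1)     # |EHSV(i)|
    return np.nonzero(modulus < threshold)[0]  # indices of the m retained points
```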
6. Let m be the number of retained sampling points (0 ≤ m ≤ n). If m = 0, the computation for this frame fails and the method jumps directly to step 3. If m > 0, perform a Delaunay triangulation of the m retained sampling points together with the four vertices of the video frame, and let t be the number of triangles in the resulting mesh.
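A sketch of the triangulation using scipy.spatial.Delaunay, with the four frame corners appended after the m retained points (the frame width w and height h are parameters of the sketch):

```python
import numpy as np
from scipy.spatial import Delaunay

def build_mesh(sample_xy: np.ndarray, w: int, h: int) -> Delaunay:
    """Step 6: Delaunay triangulation of the m retained sampling points
    plus the four corners of a w-by-h video frame."""
    corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]], dtype=float)
    tri = Delaunay(np.vstack([sample_xy, corners]))
    # tri.simplices has shape (t, 3): vertex indices of the t triangles
    return tri
```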
7. Adjust the HSV values of all vertices of the triangulation formed in step 6. First adjust the HSV values of the m sampling points so that the modulus of each sampling point's deviation vector EHSV(i) becomes zero. The HSV adjustments of the four frame vertices directly adopt the HSV adjustment of the sampling point nearest in Euclidean distance. This completes the adjustment of the HSV values of the vertices of the t triangles.
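Driving each deviation modulus to zero means the adjustment at a retained sampling point is simply -EHSV(i); a sketch, continuing the vertex ordering of the triangulation sketch above:

```python
import numpy as np

def vertex_adjustments(sample_xy: np.ndarray, corner_xy: np.ndarray,
                       ehsv: np.ndarray) -> np.ndarray:
    """Step 7: the HSV adjustment at each retained sampling point is
    -EHSV(i), which drives its deviation modulus to zero; each frame
    corner copies the adjustment of the Euclidean-nearest sampling point."""
    adj_samples = -ehsv  # (m, 3), deviation vectors of the retained points only
    dists = np.linalg.norm(corner_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    adj_corners = adj_samples[dists.argmin(axis=1)]  # (4, 3)
    # same vertex order as build_mesh: retained points first, then corners
    return np.vstack([adj_samples, adj_corners])
```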
8. Adjust the HSV values of the pixels inside the t triangles by bilinear interpolation. Specifically: linearly interpolate the HSV adjustments of each pair of triangle vertices to adjust the HSV values of the pixels on the corresponding edge; after the three edges are done, interpolate once more between the pixels of each pair of edges to compute the HSV adjustments of the remaining pixels inside the triangle.
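The two-pass scheme (interpolating along the three edges, then between pairs of edges) amounts to barycentric interpolation of the vertex adjustments over each triangle; a sketch under that reading, using SciPy's barycentric transform:

```python
import numpy as np
from scipy.spatial import Delaunay

def per_pixel_adjustment(tri: Delaunay, vertex_adj: np.ndarray,
                         w: int, h: int) -> np.ndarray:
    """Step 8: interpolate the per-vertex HSV adjustments to every pixel
    by barycentric weighting within each triangle."""
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)  # (w*h, 2)
    # The corner vertices guarantee every pixel lies inside some triangle
    simplex = tri.find_simplex(pix)                   # triangle index per pixel
    T = tri.transform[simplex]                        # (N, 3, 2) barycentric transforms
    b = np.einsum('nij,nj->ni', T[:, :2], pix - T[:, 2])
    bary = np.column_stack([b, 1.0 - b.sum(axis=1)])  # (N, 3) barycentric weights
    verts = tri.simplices[simplex]                    # (N, 3) vertex indices
    adj = np.einsum('ni,nik->nk', bary, vertex_adj[verts])
    return adj.reshape(h, w, 3)
```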
9. Convert the HSV-adjusted video frame to the RGB model and pass it into the virtual scene as a dynamic texture.
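A sketch of the adjustment and color-space conversion (OpenCV's 8-bit ranges are assumed: hue wraps in [0, 180), saturation and value clamp to [0, 255]; the final engine upload call is a hypothetical placeholder):

```python
import cv2
import numpy as np

def hsv_adjusted_rgb(frame_hsv: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """Step 9: apply the per-pixel HSV adjustment and convert back to RGB."""
    hsv = frame_hsv.astype(np.float32) + adj
    hsv[..., 0] %= 180                            # OpenCV hue wraps in [0, 180)
    hsv[..., 1:] = np.clip(hsv[..., 1:], 0, 255)  # clamp saturation and value
    rgb = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
    return rgb  # e.g. engine.update_texture(rgb): hypothetical engine call
```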
10. The three-dimensional graphics engine maps the dynamic texture of the video frame into the virtual scene, completing real-time rendering and compositing the virtual scene with the video.
11. If video frame compositing is not finished, enter the next frame cycle and jump to step 3; if compositing is finished, release resources and exit.
While the fundamental principles, principal features, and advantages of the invention have been shown and described, it will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (8)

1. A mixed reality illumination consistency adjustment method, characterized by comprising the following steps:
Step 1: select n brightness sampling points S1, S2, …, Sn on the photographed object;
Step 2: process each sampling point to obtain the color values its surface should have under the current virtual-scene illumination, and convert those color values to the hue, saturation, and value of the HSV model;
Step 3: enter the frame loop: the camera captures a frame and transmits it to the computer; repeat steps 1 and 2, converting the pixel colors of the captured frame to the HSV model;
Step 4: identify the sampling point regions in the captured frame, and compute the mean hue, saturation, and value of each captured sampling point region;
Step 5: compare the hue, saturation, and value of each sampling point under the virtual scene obtained in step 2 with the corresponding means obtained in step 4 to form a deviation vector; if the modulus of the deviation vector is smaller than the set threshold, keep the sampling point; if it is larger than the threshold, treat the point as a sampling failure and discard it;
Step 6: let m be the number of retained sampling points; if m = 0, the computation for this frame fails and the method jumps directly to step 3; if m > 0, construct a Delaunay triangulation from the m retained sampling points and the four vertices of the video frame, and let t be the number of triangles in the triangulation;
Step 7: adjust the HSV values of all vertices of the triangulation formed in step 6, completing the adjustment of the HSV values of the vertices of the t triangles;
Step 8: adjust the HSV values of the pixels inside the t triangles by bilinear interpolation;
Step 9: convert the HSV-adjusted video frame to the RGB model and pass it into the virtual scene as a dynamic texture;
Step 10: the three-dimensional graphics engine maps the dynamic texture of the video frame into the virtual scene, completing real-time rendering and compositing the virtual scene with the video.
2. The mixed reality illumination consistency adjustment method according to claim 1, further comprising: if video frame compositing is not finished, entering the next frame cycle and jumping to step 3; if compositing is finished, releasing resources and exiting.
3. The mixed reality illumination consistency adjustment method according to claim 1, characterized in that step 1 specifically comprises: according to the video mixing scheme, selecting n brightness sampling points S1, S2, …, Sn on the photographed object, wherein the size of each sampling point Si ensures that the number of pixels it occupies in the video frame after being photographed is not lower than a set threshold X, and each sampling point Si represents the brightness characteristics of a surrounding region Qi.
4. The mixed reality illumination consistency adjustment method according to claim 1, characterized in that step 2 specifically comprises:
Step 21: mapping the sampling points from the actual camera's coordinate system into the three-dimensional coordinate system of the virtual scene, according to the mapping between the photographed object and the virtual scene;
Step 22: according to the illumination model preset in the virtual scene, performing illumination precomputation on the sampling points mapped into the virtual scene to obtain the color values their surfaces would have under the current virtual-scene illumination, and converting these color values to the hue, saturation, and value of the HSV model;
Step 23: denoting the mean hue, saturation, and value of the ith sampling point as Hci, Sci, and Vci, wherein the mean may be an arithmetic mean or a weighted mean.
5. The mixed reality illumination consistency adjustment method according to claim 1, characterized in that step 4 further comprises: denoting the mean hue, saturation, and value of the ith sampling point in the captured frame as Hgi, Sgi, and Vgi.
6. The mixed reality illumination consistency adjustment method according to claim 1, characterized in that the deviation vector in step 5 is calculated as EHSV(i) = (Hci - Hgi, Sci - Sgi, Vci - Vgi), wherein EHSV(i) is the deviation vector of the ith sampling point; Hci, Sci, and Vci are respectively the mean hue, saturation, and value of the ith sampling point mapped into the virtual scene; and Hgi, Sgi, and Vgi are respectively the mean hue, saturation, and value of the identified sampling point in the captured frame.
7. The mixed reality illumination consistency adjustment method according to claim 1, characterized in that step 7 specifically comprises: first adjusting the HSV values of the m sampling points so that the modulus of each sampling point's deviation vector EHSV(i) becomes zero; the HSV adjustments of the four frame vertices directly adopt the HSV adjustment of the sampling point nearest in Euclidean distance, completing the adjustment of the HSV values of the vertices of the t triangles.
8. The mixed reality illumination consistency adjustment method according to claim 1, characterized in that step 8 specifically comprises: linearly interpolating the HSV adjustments of each pair of triangle vertices to adjust the HSV values of the pixels on the corresponding edge; after the three edges are done, interpolating once more between the pixels of each pair of edges to compute the HSV adjustments of the remaining pixels inside the triangle.
CN202110260083.8A (priority date 2021-03-10; filing date 2021-03-10): Mixed reality illumination consistency adjustment method. Status: Active. Granted publication: CN112837425B.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110260083.8A | 2021-03-10 | 2021-03-10 | Mixed reality illumination consistency adjustment method

Publications (2)

Publication Number | Publication Date
CN112837425A | 2021-05-25
CN112837425B | 2022-02-11

Family ID: 75930081

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110260083.8A (Active) | 2021-03-10 | 2021-03-10 | Mixed reality illumination consistency adjustment method

Country Status (1)

CN: CN112837425B

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102096941A * | 2011-01-30 | 2011-06-15 | 北京航空航天大学 (Beihang University) | Consistent lighting method under a virtual-real fused environment
CN108986195A * | 2018-06-26 | 2018-12-11 | 东南大学 (Southeast University) | Single-lens mixed reality implementation method combining environment mapping and global illumination rendering

Family Cites Families (4)

Publication number | Priority date | Publication date | Assignee | Title
US8040361B2 * | 2005-04-11 | 2011-10-18 | Systems Technology, Inc. | Systems and methods for combining virtual and real-time physical environments
CN107808409B * | 2016-09-07 | 2022-04-12 | 中兴通讯股份有限公司 (ZTE Corporation) | Method and device for performing illumination rendering in augmented reality, and mobile terminal
CN110691175B * | 2019-08-19 | 2021-08-24 | 深圳市励得数码科技有限公司 | Video processing method and device for simulating camera motion tracking in a studio
CN110866978A * | 2019-11-07 | 2020-03-06 | 辽宁东智威视科技有限公司 | Camera synchronization method in real-time mixed reality video shooting


Also Published As

Publication number | Publication date
CN112837425A | 2021-05-25


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant