CN114494561A - Method for realizing visual domain analysis in WebGL - Google Patents


Info

Publication number
CN114494561A
Authority
CN
China
Prior art keywords
dimensional model
virtual camera
matrix
point
rendering
Prior art date
Legal status
Pending
Application number
CN202210290176.XA
Other languages
Chinese (zh)
Inventor
管永权
郭飞
胡玮
卢浩浩
Current Assignee
Xi'an Tali Technology Co ltd
Original Assignee
Xi'an Tali Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xi'an Tali Technology Co ltd filed Critical Xi'an Tali Technology Co ltd
Priority to CN202210290176.XA priority Critical patent/CN114494561A/en
Publication of CN114494561A publication Critical patent/CN114494561A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method for realizing visual domain (viewshed) analysis in WebGL, belonging to the technical field of computer graphics. The method comprises the following steps: the virtual camera takes a point on the three-dimensional model as a rendering point and renders the three-dimensional model to obtain a depth map; a view frustum is constructed from the inverse of the world transformation matrix and the perspective projection matrix of the virtual camera; the positional relationship between the view frustum and the three-dimensional model is judged in a shader under the viewing angle of the main camera, and if the three-dimensional model is located inside the view frustum, the shader renders the three-dimensional model; otherwise, the depth map is processed to obtain a final output depth value, and whether the three-dimensional model is visible is judged from the magnitude relation between the depth of the rendering point of the three-dimensional model and the final output depth value. With the invention, visual domain analysis is no longer limited to specialized professional software: it can be performed directly in a browser and therefore has all the advantages of a B/S architecture.

Description

Method for realizing visual domain analysis in WebGL
Technical Field
The invention belongs to the technical field of computer graphics, and particularly relates to a method for realizing visual domain (viewshed) analysis in WebGL.
Background
WebGL (Web Graphics Library) is a 3D drawing protocol. The standard combines JavaScript with OpenGL ES 2.0: by adding a JavaScript binding for OpenGL ES 2.0, WebGL can provide hardware-accelerated 3D rendering for the HTML5 Canvas. The WebGL 2.0 specification was released in January 2017. Because WebGL is still not a widely mastered browser technology, a front-end developer who is unfamiliar with computer graphics and OpenGL cannot easily realize some functions through simple calls to the low-level API.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a method for realizing visual domain analysis in WebGL.
In order to achieve the above purpose, the invention provides the following technical scheme:
a method of implementing visual domain analysis in WebGL, comprising the steps of:
constructing a virtual camera;
determining a world transformation matrix of the virtual camera according to the position of the virtual camera, the rotation angle around the y axis and the rotation angle around the x axis;
determining a perspective projection matrix of the virtual camera according to the vertical opening angle, the horizontal opening angle and the detection distance of the virtual camera;
the virtual camera takes one point on the three-dimensional model as a rendering point, and renders the three-dimensional model to obtain a depth map;
constructing a view frustum from the inverse of the world transformation matrix and the perspective projection matrix of the virtual camera;
judging the positional relationship between the view frustum and the three-dimensional model in a shader under the viewing angle of the main camera, and if the three-dimensional model is located inside the view frustum, rendering the three-dimensional model with the shader; otherwise, processing the depth map to obtain a final output depth value, and judging whether the three-dimensional model is visible according to the magnitude relation between the depth of the rendering point of the three-dimensional model and the final output depth value.
Preferably, the world transformation matrix of the virtual camera is obtained according to

M = R2(β)·R1(α)·T

wherein M is the world transformation matrix of the virtual camera;

R2(β) is a first rotation matrix determined by the angle β by which the virtual camera is rotated about the x-axis,

R2(β) =
| 1   0      0     0 |
| 0   cosβ  -sinβ  0 |
| 0   sinβ   cosβ  0 |
| 0   0      0     1 |

R1(α) is a second rotation matrix determined by the angle α by which the virtual camera is rotated about the y-axis,

R1(α) =
|  cosα  0  sinα  0 |
|  0     1  0     0 |
| -sinα  0  cosα  0 |
|  0     0  0     1 |

T is a translation matrix determined by the coordinates (x, y, z) of the virtual camera,

T =
| 1  0  0  x |
| 0  1  0  y |
| 0  0  1  z |
| 0  0  0  1 |

The world transformation matrix M of the virtual camera is then the product of the first rotation matrix, the second rotation matrix and the translation matrix as given above.
Preferably, a perspective projection matrix P of the virtual camera is determined from the vertical opening angle A, the horizontal opening angle B and the detection distance of the virtual camera based on the following formula,

P =
| 2*near/(right-left)  0                    (right+left)/(right-left)  0                      |
| 0                    2*near/(top-bottom)  (top+bottom)/(top-bottom)  0                      |
| 0                    0                    -(far+near)/(far-near)     -2*far*near/(far-near) |
| 0                    0                    -1                         0                      |

wherein near is the closest distance detected by the virtual camera and far is the farthest distance detected by the virtual camera;

top = near*tan(A/2)
bottom = top - height

in which

height = 2*top
left = -0.5*width
right = left + width

in which

width = aspect*height

in which

aspect = tan(B)/tan(A)

Since the frustum defined in this way is symmetric (bottom = -top, left = -right), the perspective projection matrix P simplifies as follows:

P =
| 1/(aspect*tan(A/2))  0           0                       0                      |
| 0                    1/tan(A/2)  0                       0                      |
| 0                    0           -(far+near)/(far-near)  -2*far*near/(far-near) |
| 0                    0           -1                      0                      |
Preferably, the step in which the virtual camera takes a point on the three-dimensional model as a rendering point and renders the three-dimensional model to obtain the depth map includes:

sequentially applying, with the virtual camera, the model transformation, view transformation, projection transformation, normalized device coordinate transformation and viewport transformation to the three-dimensional model according to the following formula;

(X1, Y1, Z1, W1)^T = P · V^-1 · M1 · (x, y, z, 1)^T

wherein P is the perspective projection matrix of the virtual camera, V^-1 is the inverse of the world transformation matrix of the virtual camera, and (x, y, z) are the coordinates of the rendering point;

M1 is the world transformation matrix of the three-dimensional model, obtained by sequentially applying the matrix transformations of translation, rotation and scaling to the three-dimensional model;

calculating the final output depth value from the transformed coordinates of the three-dimensional model,

final output depth value = 0.5*(Z1/W1) + 0.5

wherein the final output depth value is a floating point number between 0 and 1.0;

and encoding the final output depth value and storing it in the four rgba channels to obtain the depth map.
Preferably, the step of constructing the view frustum from the inverse of the world transformation matrix and the perspective projection matrix of the virtual camera includes:

constructing an observation projection transformation matrix A from the inverse of the world transformation matrix and the perspective projection matrix of the virtual camera according to

A = P · M^-1

wherein P is the perspective projection matrix of the virtual camera, M^-1 is the inverse of the world transformation matrix of the virtual camera, and the elements of A are written a00 through a33;

constructing a six-sided view frustum according to the observation projection transformation matrix, the six planes of the view frustum being respectively denoted p1, p2, p3, p4, p5, p6;

wherein the unit normal vector of p1 is normalize(a03-a00, a13-a10, a23-a20), and the distance of p1 from the origin is (a33-a30)/length(a03-a00, a13-a10, a23-a20);

the unit normal vector of p2 is normalize(a03+a00, a13+a10, a23+a20), and the distance of p2 from the origin is (a33+a30)/length(a03+a00, a13+a10, a23+a20);

the unit normal vector of p3 is normalize(a03+a01, a13+a11, a23+a21), and the distance of p3 from the origin is (a33+a31)/length(a03+a01, a13+a11, a23+a21);

the unit normal vector of p4 is normalize(a03-a01, a13-a11, a23-a21), and the distance of p4 from the origin is (a33-a31)/length(a03-a01, a13-a11, a23-a21);

the unit normal vector of p5 is normalize(a03-a02, a13-a12, a23-a22), and the distance of p5 from the origin is (a33-a32)/length(a03-a02, a13-a12, a23-a22);

the unit normal vector of p6 is normalize(a03+a02, a13+a12, a23+a22), and the distance of p6 from the origin is (a33+a32)/length(a03+a02, a13+a12, a23+a22);

in these formulas, the normalize() function returns the unit vector of a vector, and the length() function returns the Euclidean length of a vector.
Preferably, the step of determining the positional relationship between the view frustum and the three-dimensional model in the shader includes:

judging the positional relationship between the three-dimensional model and the view frustum according to the distance from a point on the three-dimensional model to each plane of the view frustum; wherein the distance from a point on the three-dimensional model to a plane of the view frustum is calculated according to the following formula,

D = (x, y, z) · N + distance

wherein D is the distance from the point on the three-dimensional model to the plane of the view frustum, (x, y, z) are the coordinates of the point on the three-dimensional model, N is the unit normal vector of the plane of the view frustum, and distance is the Euclidean distance from the plane of the view frustum to the origin.
Preferably, the step of obtaining the depth of the rendering point of the three-dimensional model comprises:

calculating the UV coordinates of the rendering point according to the following formula;

(X1, Y1, Z1, W1)^T = P · M^-1 · M1 · (x, y, z, 1)^T

wherein P is the perspective projection matrix of the virtual camera, M^-1 is the inverse of the world transformation matrix of the virtual camera, M1 is the world transformation matrix of the three-dimensional model obtained by sequentially applying the matrix transformations of translation, rotation and scaling, (x, y, z) are the coordinates of the rendering point, and (X1, Y1, Z1, W1) are the homogeneous coordinates from which the UV coordinates of the rendering point are derived;

calculating the pixel coordinates of the rendering point on the screen from the UV coordinates of the rendering point as (X1/W1, Y1/W1, Z1/W1);

calculating the depth value of the rendering point of the three-dimensional model from the pixel coordinates of the rendering point,

depth of the rendering point = 0.5*(Z1/W1) + 0.5
Preferably, the step of judging whether the three-dimensional model is visible comprises:
reading the depth map according to the UV coordinates of the rendering point;
sampling the rgba value at the rendering point and decoding it, the decoded rgba value being the final output depth value;
if the depth value of the rendering point of the three-dimensional model is greater than the final output depth value, the three-dimensional model is invisible; otherwise, the three-dimensional model is visible.
The method for realizing visual domain analysis in WebGL has the following beneficial effects: 1. Visual domain analysis is no longer limited to specialized professional software; it can be carried out in an ordinary browser and therefore has all the advantages of the B/S architecture. 2. The method has important application value in navigation, aviation and military fields, for example in siting radar stations and television transmitting stations, in route selection and navigation, and in battlefield deployment, placement of observation posts and laying of communication lines; it can also be used to analyze invisible areas, for example a low-altitude reconnaissance aircraft that must avoid capture by enemy radar as far as possible needs to fly through radar blind areas.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the design thereof, the drawings required for the embodiments will be briefly described below. The drawings in the following description are only some embodiments of the invention and it will be clear to a person skilled in the art that other drawings can be derived from them without inventive effort.
FIG. 1 is a flowchart of a method for implementing visual domain analysis in WebGL according to embodiment 1 of the present invention;
FIG. 2 is a diagram illustrating the rendering result of a three-dimensional model by a shader at a main camera angle according to embodiment 1 of the present invention;
FIG. 3 is a depth map of a three-dimensional model rendered by a shader according to embodiment 1 of the present invention;
FIG. 4 is a diagram showing the rendering result of the virtual camera on the three-dimensional model according to embodiment 1 of the present invention;
fig. 5 is a depth map of a three-dimensional model rendered by a virtual camera according to embodiment 1 of the present invention.
Detailed Description
In order that those skilled in the art will better understand the technical solutions of the present invention and can practice the same, the present invention will be described in detail with reference to the accompanying drawings and specific examples. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Example 1
Referring to FIG. 1, a method for implementing visual domain analysis in WebGL includes the following steps: constructing a virtual camera; determining a world transformation matrix of the virtual camera according to the position of the virtual camera, its rotation angle around the y-axis and its rotation angle around the x-axis; determining a perspective projection matrix of the virtual camera according to the vertical opening angle, the horizontal opening angle and the detection distance of the virtual camera; rendering the three-dimensional model with the virtual camera, taking a point on the three-dimensional model as a rendering point, to obtain a depth map; constructing a view frustum from the inverse of the world transformation matrix and the perspective projection matrix of the virtual camera; judging the positional relationship between the view frustum and the three-dimensional model in a shader under the viewing angle of the main camera, and if the three-dimensional model is located inside the view frustum, rendering the three-dimensional model with the shader; otherwise, processing the depth map to obtain a final output depth value, and judging whether the three-dimensional model is visible according to the magnitude relation between the depth of the rendering point of the three-dimensional model and the final output depth value.
In the present embodiment, the world transformation matrix of the virtual camera is obtained according to the following formula,

M = R2(β)·R1(α)·T

in the formula, M is the world transformation matrix of the virtual camera.

R2(β) is a first rotation matrix determined by the angle β by which the virtual camera is rotated about the x-axis,

R2(β) =
| 1   0      0     0 |
| 0   cosβ  -sinβ  0 |
| 0   sinβ   cosβ  0 |
| 0   0      0     1 |

R1(α) is a second rotation matrix determined by the angle α by which the virtual camera is rotated about the y-axis,

R1(α) =
|  cosα  0  sinα  0 |
|  0     1  0     0 |
| -sinα  0  cosα  0 |
|  0     0  0     1 |

T is a translation matrix determined by the coordinates (x, y, z) of the virtual camera,

T =
| 1  0  0  x |
| 0  1  0  y |
| 0  0  1  z |
| 0  0  0  1 |

The world transformation matrix M of the virtual camera is then the product of the first rotation matrix, the second rotation matrix and the translation matrix as given above.
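As a purely illustrative sketch (not part of the claimed subject matter), the following TypeScript code shows how the three matrices above could be built and composed in the order M = R2(β)·R1(α)·T. The row-major array convention, the standard right-handed rotation matrices and all names (Mat4, buildVirtualCameraWorldMatrix, etc.) are assumptions made for this example.

```typescript
// A 4x4 matrix stored in row-major order as 16 numbers (illustrative convention).
type Mat4 = number[];

// Multiply two 4x4 row-major matrices: returns a * b.
function mul(a: Mat4, b: Mat4): Mat4 {
  const out = new Array(16).fill(0);
  for (let r = 0; r < 4; r++)
    for (let c = 0; c < 4; c++)
      for (let k = 0; k < 4; k++)
        out[r * 4 + c] += a[r * 4 + k] * b[k * 4 + c];
  return out;
}

// First rotation matrix R2(beta): rotation of the camera about the x-axis.
function rotationX(beta: number): Mat4 {
  const c = Math.cos(beta), s = Math.sin(beta);
  return [1, 0,  0, 0,
          0, c, -s, 0,
          0, s,  c, 0,
          0, 0,  0, 1];
}

// Second rotation matrix R1(alpha): rotation of the camera about the y-axis.
function rotationY(alpha: number): Mat4 {
  const c = Math.cos(alpha), s = Math.sin(alpha);
  return [ c, 0, s, 0,
           0, 1, 0, 0,
          -s, 0, c, 0,
           0, 0, 0, 1];
}

// Translation matrix T determined by the camera position (x, y, z).
function translation(x: number, y: number, z: number): Mat4 {
  return [1, 0, 0, x,
          0, 1, 0, y,
          0, 0, 1, z,
          0, 0, 0, 1];
}

// World transformation matrix of the virtual camera: M = R2(beta) * R1(alpha) * T.
function buildVirtualCameraWorldMatrix(
  pos: [number, number, number], alpha: number, beta: number): Mat4 {
  return mul(mul(rotationX(beta), rotationY(alpha)), translation(...pos));
}
```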
In this embodiment, a perspective projection matrix P of the virtual camera is determined from the vertical opening angle A, the horizontal opening angle B and the detection distance of the virtual camera based on the following formula,

P =
| 2*near/(right-left)  0                    (right+left)/(right-left)  0                      |
| 0                    2*near/(top-bottom)  (top+bottom)/(top-bottom)  0                      |
| 0                    0                    -(far+near)/(far-near)     -2*far*near/(far-near) |
| 0                    0                    -1                         0                      |

wherein near is the closest distance detected by the virtual camera and far is the farthest distance detected by the virtual camera;

top = near*tan(A/2),
bottom = top - height,

in which

height = 2*top,
left = -0.5*width,
right = left + width,

in which

width = aspect*height,

in which

aspect = tan(B)/tan(A).

Since the frustum defined in this way is symmetric (bottom = -top, left = -right), the perspective projection matrix P simplifies as follows:

P =
| 1/(aspect*tan(A/2))  0           0                       0                      |
| 0                    1/tan(A/2)  0                       0                      |
| 0                    0           -(far+near)/(far-near)  -2*far*near/(far-near) |
| 0                    0           -1                      0                      |
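As an illustrative sketch under the same assumptions (row-major arrays, angles in radians), the following TypeScript code assembles the perspective projection matrix P of the virtual camera from the opening angles A and B and the detection distances near and far; the function name and the use of the standard OpenGL-style frustum matrix are assumptions for this example, not code from the patent.

```typescript
type Mat4 = number[]; // 4x4 matrix, row-major, 16 numbers

// Perspective projection matrix of the virtual camera from the vertical opening
// angle A, horizontal opening angle B (both in radians) and detection distances.
function buildVirtualCameraProjection(A: number, B: number, near: number, far: number): Mat4 {
  const top = near * Math.tan(A / 2);
  const height = 2 * top;
  const bottom = top - height;          // = -top (symmetric frustum)
  const aspect = Math.tan(B) / Math.tan(A);
  const width = aspect * height;
  const left = -0.5 * width;
  const right = left + width;           // = +0.5 * width

  // Standard OpenGL-style frustum matrix built from left/right/bottom/top/near/far.
  return [
    2 * near / (right - left), 0, (right + left) / (right - left), 0,
    0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0,
    0, 0, -(far + near) / (far - near), -2 * far * near / (far - near),
    0, 0, -1, 0,
  ];
}
```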
FIG. 3 is a depth map obtained by rendering the three-dimensional model with the virtual camera. In this embodiment, the virtual camera takes a point on the three-dimensional model as a rendering point, and the step of rendering the three-dimensional model to obtain the depth map includes:

sequentially applying, with the virtual camera, the model transformation, view transformation, projection transformation, normalized device coordinate transformation and viewport transformation to the three-dimensional model according to the following formula,

(X1, Y1, Z1, W1)^T = P · V^-1 · M1 · (x, y, z, 1)^T

wherein P is the perspective projection matrix of the virtual camera, V^-1 is the inverse of the world transformation matrix of the virtual camera, and (x, y, z) are the coordinates of the rendering point. M1 is the world transformation matrix of the three-dimensional model, obtained by sequentially applying the matrix transformations of translation, rotation and scaling to the three-dimensional model.

In the viewport transformation, the viewport transformation matrix (which maps the normalized device coordinates from the range [-1, 1] to the screen range [0, 1]) is applied to the transformed coordinates, giving the final output depth value

final output depth value = 0.5*(Z1/W1) + 0.5

which is a floating point number between 0 and 1.0. The final output depth value is encoded and stored across the four rgba channels to obtain the depth map. If the depth were stored in a single rgba channel, precision would be lost; the depth value is therefore encoded, stored in all four rgba channels to improve precision, and decoded again when it is used. FIG. 4 is a depth map rendered by a camera inside the scene (the depth map in which the smallest depth value is retained after the depth test).
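The encoding of the depth value into the four rgba channels can be illustrated with the common scheme of splitting the floating-point depth into four 8-bit fractions. The following TypeScript sketch shows one such pack/decode pair; it mirrors what would normally run in a GLSL shader and is an assumed, illustrative encoding rather than the exact encoding used by the invention.

```typescript
// Pack a depth value in [0, 1) into four channels (r, g, b, a, each in [0, 1]).
// Each channel stores successive base-256 "digits" of the depth.
function packDepthToRGBA(depth: number): [number, number, number, number] {
  let r = depth;
  let g = (r * 256) % 1;
  let b = (g * 256) % 1;
  let a = (b * 256) % 1;
  // Remove the part carried by the next channel so each channel is independent.
  r -= g / 256;
  g -= b / 256;
  b -= a / 256;
  return [r, g, b, a];
}

// Recover the depth value from the four channels (the decoding step).
function unpackRGBAToDepth([r, g, b, a]: [number, number, number, number]): number {
  return r + g / 256 + b / (256 * 256) + a / (256 * 256 * 256);
}

// Round-trip example: the recovered value matches the original to about 1/256^4.
const rgba = packDepthToRGBA(0.3731);
console.log(unpackRGBAToDepth(rgba)); // ≈ 0.3731
```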
In this embodiment, the step of constructing the view frustum from the inverse of the world transformation matrix and the perspective projection matrix of the virtual camera includes: constructing an observation projection transformation matrix A from the inverse of the world transformation matrix and the perspective projection matrix of the virtual camera according to

A = P · M^-1

wherein P is the perspective projection matrix of the virtual camera, M^-1 is the inverse of the world transformation matrix of the virtual camera, and the elements of A are written a00 through a33.

FIG. 5 is a schematic diagram of the six planes of the view frustum. In this embodiment, a six-sided view frustum is constructed from the observation projection transformation matrix, its six planes being respectively denoted p1, p2, p3, p4, p5, p6:
The unit normal vector of p1 is normalize(a03-a00, a13-a10, a23-a20), and the distance of p1 from the origin is (a33-a30)/length(a03-a00, a13-a10, a23-a20).

The unit normal vector of p2 is normalize(a03+a00, a13+a10, a23+a20), and the distance of p2 from the origin is (a33+a30)/length(a03+a00, a13+a10, a23+a20).

The unit normal vector of p3 is normalize(a03+a01, a13+a11, a23+a21), and the distance of p3 from the origin is (a33+a31)/length(a03+a01, a13+a11, a23+a21).

The unit normal vector of p4 is normalize(a03-a01, a13-a11, a23-a21), and the distance of p4 from the origin is (a33-a31)/length(a03-a01, a13-a11, a23-a21).

The unit normal vector of p5 is normalize(a03-a02, a13-a12, a23-a22), and the distance of p5 from the origin is (a33-a32)/length(a03-a02, a13-a12, a23-a22).

The unit normal vector of p6 is normalize(a03+a02, a13+a12, a23+a22), and the distance of p6 from the origin is (a33+a32)/length(a03+a02, a13+a12, a23+a22).

In these formulas, the normalize() function returns the unit vector of a vector, and the length() function returns the Euclidean length of a vector.
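The six plane equations above can be illustrated by the following TypeScript sketch, which extracts the frustum planes from a matrix whose element a(i, j) is accessed as a[i][j], using the same element combinations as in the text. The Plane type and the function names are assumptions made for this example.

```typescript
interface Plane {
  normal: [number, number, number]; // unit normal vector of the frustum plane
  distance: number;                 // distance of the plane from the origin
}

function length(x: number, y: number, z: number): number {
  return Math.hypot(x, y, z);
}

// Build one plane from the raw (nx, ny, nz, d) combination of matrix elements.
function makePlane(nx: number, ny: number, nz: number, d: number): Plane {
  const len = length(nx, ny, nz);
  return { normal: [nx / len, ny / len, nz / len], distance: d / len };
}

// Extract the six frustum planes p1..p6 from the observation projection
// transformation matrix A = P * M^-1, element a(i, j) accessed as a[i][j].
function extractFrustumPlanes(a: number[][]): Plane[] {
  return [
    makePlane(a[0][3] - a[0][0], a[1][3] - a[1][0], a[2][3] - a[2][0], a[3][3] - a[3][0]), // p1
    makePlane(a[0][3] + a[0][0], a[1][3] + a[1][0], a[2][3] + a[2][0], a[3][3] + a[3][0]), // p2
    makePlane(a[0][3] + a[0][1], a[1][3] + a[1][1], a[2][3] + a[2][1], a[3][3] + a[3][1]), // p3
    makePlane(a[0][3] - a[0][1], a[1][3] - a[1][1], a[2][3] - a[2][1], a[3][3] - a[3][1]), // p4
    makePlane(a[0][3] - a[0][2], a[1][3] - a[1][2], a[2][3] - a[2][2], a[3][3] - a[3][2]), // p5
    makePlane(a[0][3] + a[0][2], a[1][3] + a[1][2], a[2][3] + a[2][2], a[3][3] + a[3][2]), // p6
  ];
}
```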
Specifically, the step of determining the position relationship between the visual cone and the three-dimensional model in the shader includes: judging the position relation between the three-dimensional model and the visual cone according to the distance from the point on the three-dimensional model to each plane of the visual cone; wherein the distance from a point on the three-dimensional model to a plane of the visual cone is calculated according to the following formula,
Figure BDA0003561500590000111
wherein D is the distance from a point on the three-dimensional model to each plane of the visual cone,
Figure BDA0003561500590000112
the coordinate of a point on the three-dimensional model is N, the unit vector of a normal vector of a plane of the view cone body is N, and the distance is the Euclidean distance between the plane of the view cone body and an origin.
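Continuing the sketch, the signed-distance test described above can be written as follows. Treating a point as lying inside the view frustum when its distance D to every one of the six planes is non-negative assumes inward-pointing plane normals; this convention is an assumption made for illustration.

```typescript
interface Plane {
  normal: [number, number, number]; // unit normal N of the frustum plane
  distance: number;                 // Euclidean distance of the plane from the origin
}

// D = (x, y, z) · N + distance : signed distance of a point to one frustum plane.
function signedDistance(p: [number, number, number], plane: Plane): number {
  const [x, y, z] = p;
  const [nx, ny, nz] = plane.normal;
  return x * nx + y * ny + z * nz + plane.distance;
}

// A point is considered inside the frustum when its signed distance to every
// one of the six planes is non-negative (assuming inward-pointing normals).
function pointInFrustum(p: [number, number, number], planes: Plane[]): boolean {
  return planes.every((plane) => signedDistance(p, plane) >= 0);
}
```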
In this embodiment, the step of obtaining the depth of the rendering point of the three-dimensional model includes: calculating the UV coordinates of the rendering point according to the following formula,

(X1, Y1, Z1, W1)^T = P · M^-1 · M1 · (x, y, z, 1)^T

wherein P is the perspective projection matrix of the virtual camera, M^-1 is the inverse of the world transformation matrix of the virtual camera, M1 is the world transformation matrix of the three-dimensional model, (x, y, z) are the coordinates of the rendering point, and (X1, Y1, Z1, W1) are the homogeneous coordinates from which the UV coordinates of the rendering point are derived.

The pixel coordinates of the rendering point on the screen are calculated from the UV coordinates of the rendering point as (X1/W1, Y1/W1, Z1/W1).

The depth value of the rendering point of the three-dimensional model is calculated from the pixel coordinates of the rendering point as

depth of the rendering point = 0.5*(Z1/W1) + 0.5
Specifically, the step of judging whether the three-dimensional model is visible includes: reading the depth map according to the UV coordinates of the rendering point; sampling the rgba value at the rendering point and decoding it, the decoded rgba value being the final output depth value; if the depth value of the rendering point of the three-dimensional model is greater than the final output depth value, the three-dimensional model is invisible; otherwise, the three-dimensional model is visible.
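Putting the last steps together, the following TypeScript sketch projects a rendering point with the virtual camera's matrices, derives its UV coordinates and depth, samples the depth map and compares the two depths. It is a CPU-side illustration of logic that the invention performs in a fragment shader; the matrix convention, the sampleDepthMap callback (assumed to return the already decoded depth, for example via the unpackRGBAToDepth sketch above) and the function names are assumptions.

```typescript
type Mat4 = number[]; // 4x4 matrix, row-major, 16 numbers
type Vec4 = [number, number, number, number];

// y = A * x for a row-major 4x4 matrix and a column vector.
function transform(a: Mat4, v: Vec4): Vec4 {
  const out: Vec4 = [0, 0, 0, 0];
  for (let r = 0; r < 4; r++)
    for (let c = 0; c < 4; c++) out[r] += a[r * 4 + c] * v[c];
  return out;
}

// Decide whether a rendering point is visible from the virtual camera.
// pvm = P * (camera world matrix)^-1 * M1, i.e. the combined matrix used above.
function isPointVisible(
  pvm: Mat4,
  point: [number, number, number],
  sampleDepthMap: (u: number, v: number) => number, // decoded depth from the map
): boolean {
  const [X1, Y1, Z1, W1] = transform(pvm, [point[0], point[1], point[2], 1]);

  // Perspective division, then map from NDC [-1, 1] to UV / depth in [0, 1].
  const u = 0.5 * (X1 / W1) + 0.5;
  const v = 0.5 * (Y1 / W1) + 0.5;
  const pointDepth = 0.5 * (Z1 / W1) + 0.5;

  const storedDepth = sampleDepthMap(u, v); // final output depth value

  // If the point's depth is greater than the stored depth, something closer to
  // the virtual camera occludes it, so it is invisible; otherwise it is visible.
  return pointDepth <= storedDepth;
}
```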
The above embodiments are only preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, and any simple changes or equivalent substitutions of the technical solutions that can be obviously obtained by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (8)

1. A method for performing visual domain analysis in WebGL, comprising the steps of:
constructing a virtual camera;
determining a world transformation matrix of the virtual camera according to the position of the virtual camera, the rotation angle around the y axis and the rotation angle around the x axis;
determining a perspective projection matrix of the virtual camera according to the vertical opening angle, the horizontal opening angle and the detection distance of the virtual camera;
the virtual camera takes one point on the three-dimensional model as a rendering point, and renders the three-dimensional model to obtain a depth map;
constructing a view frustum from the inverse of the world transformation matrix and the perspective projection matrix of the virtual camera;
judging the positional relationship between the view frustum and the three-dimensional model in a shader under the viewing angle of the main camera, and if the three-dimensional model is located inside the view frustum, rendering the three-dimensional model with the shader; otherwise, processing the depth map to obtain a final output depth value, and judging whether the three-dimensional model is visible according to the magnitude relation between the depth of the rendering point of the three-dimensional model and the final output depth value.
2. The method of claim 1, wherein the world transformation matrix of the virtual camera is obtained according to the following formula,
M=R2(β)·R1(α)·T
wherein M is a world transformation matrix of the virtual camera;
R2(β) is a first rotation matrix determined by the angle β by which the virtual camera is rotated about the x-axis,

R2(β) =
| 1   0      0     0 |
| 0   cosβ  -sinβ  0 |
| 0   sinβ   cosβ  0 |
| 0   0      0     1 |

R1(α) is a second rotation matrix determined by the angle α by which the virtual camera is rotated about the y-axis,

R1(α) =
|  cosα  0  sinα  0 |
|  0     1  0     0 |
| -sinα  0  cosα  0 |
|  0     0  0     1 |

T is a translation matrix determined by the coordinates (x, y, z) of the virtual camera,

T =
| 1  0  0  x |
| 0  1  0  y |
| 0  0  1  z |
| 0  0  0  1 |

and the world transformation matrix M of the virtual camera is the product of the first rotation matrix, the second rotation matrix and the translation matrix as given above.
3. The method of claim 1, wherein the perspective projection matrix P of the virtual camera is determined from the vertical opening angle A, the horizontal opening angle B and the detection distance of the virtual camera based on the following formula,

P =
| 2*near/(right-left)  0                    (right+left)/(right-left)  0                      |
| 0                    2*near/(top-bottom)  (top+bottom)/(top-bottom)  0                      |
| 0                    0                    -(far+near)/(far-near)     -2*far*near/(far-near) |
| 0                    0                    -1                         0                      |

wherein near is the closest distance detected by the virtual camera and far is the farthest distance detected by the virtual camera;

top = near*tan(A/2)
bottom = top - height

in which

height = 2*top
left = -0.5*width
right = left + width

in which

width = aspect*height

in which

aspect = tan(B)/tan(A)

the perspective projection matrix P is simplified as follows:

P =
| 1/(aspect*tan(A/2))  0           0                       0                      |
| 0                    1/tan(A/2)  0                       0                      |
| 0                    0           -(far+near)/(far-near)  -2*far*near/(far-near) |
| 0                    0           -1                      0                      |
4. the method of claim 1, wherein the virtual camera uses a point on the three-dimensional model as a rendering point, and the step of rendering the three-dimensional model to obtain the depth map comprises:
sequentially applying, with the virtual camera, the model transformation, view transformation, projection transformation, normalized device coordinate transformation and viewport transformation to the three-dimensional model according to the following formula;

(X1, Y1, Z1, W1)^T = P · V^-1 · M1 · (x, y, z, 1)^T

wherein P is a perspective projection matrix of the virtual camera, V^-1 is the inverse of the world transformation matrix of the virtual camera, and (x, y, z) are the coordinates of the rendering point;

M1 is the world transformation matrix of the three-dimensional model, obtained by sequentially applying the matrix transformations of translation, rotation and scaling to the three-dimensional model;

calculating the final output depth value from the transformed coordinates of the three-dimensional model,

final output depth value = 0.5*(Z1/W1) + 0.5
wherein, the final output depth value is a floating point number between 0 and 1.0;
and encoding the final output depth value, and storing by using four rgba channels to obtain a depth map.
5. The method of claim 1, wherein the step of constructing a view frustum from an inverse of a world transformation matrix and a perspective projection matrix of the virtual camera comprises:
constructing an observation projective transformation matrix from the inverse of the world transformation matrix and the perspective projection matrix of the virtual camera according to:
A = P · M^-1

wherein P is a perspective projection matrix of the virtual camera, M^-1 is the inverse of the world transformation matrix of the virtual camera, and the elements of the observation projection transformation matrix A are written a00 through a33;

constructing a six-sided view frustum according to the observation projection transformation matrix, wherein the six planes of the view frustum are respectively denoted p1, p2, p3, p4, p5, p6;

wherein the unit normal vector of p1 is normalize(a03-a00, a13-a10, a23-a20), and the distance of p1 from the origin is (a33-a30)/length(a03-a00, a13-a10, a23-a20);

the unit normal vector of p2 is normalize(a03+a00, a13+a10, a23+a20), and the distance of p2 from the origin is (a33+a30)/length(a03+a00, a13+a10, a23+a20);

the unit normal vector of p3 is normalize(a03+a01, a13+a11, a23+a21), and the distance of p3 from the origin is (a33+a31)/length(a03+a01, a13+a11, a23+a21);

the unit normal vector of p4 is normalize(a03-a01, a13-a11, a23-a21), and the distance of p4 from the origin is (a33-a31)/length(a03-a01, a13-a11, a23-a21);

the unit normal vector of p5 is normalize(a03-a02, a13-a12, a23-a22), and the distance of p5 from the origin is (a33-a32)/length(a03-a02, a13-a12, a23-a22);

the unit normal vector of p6 is normalize(a03+a02, a13+a12, a23+a22), and the distance of p6 from the origin is (a33+a32)/length(a03+a02, a13+a12, a23+a22);

in these formulas, the normalize() function returns the unit vector of a vector, and the length() function returns the Euclidean length of a vector.
6. The method of claim 1, wherein the step of determining the positional relationship between the view frustum and the three-dimensional model in the shader comprises:

judging the positional relationship between the three-dimensional model and the view frustum according to the distance from a point on the three-dimensional model to each plane of the view frustum; wherein the distance from a point on the three-dimensional model to a plane of the view frustum is calculated according to the following formula,

D = (x, y, z) · N + distance

wherein D is the distance from the point on the three-dimensional model to the plane of the view frustum, (x, y, z) are the coordinates of the point on the three-dimensional model, N is the unit normal vector of the plane of the view frustum, and distance is the Euclidean distance from the plane of the view frustum to the origin.
7. The method of claim 1, wherein the step of obtaining the depth of the rendering point of the three-dimensional model comprises:
calculating the UV coordinates of the rendering points according to the following formula;
(X1, Y1, Z1, W1)^T = P · M^-1 · M1 · (x, y, z, 1)^T

wherein P is a perspective projection matrix of the virtual camera, M^-1 is the inverse of the world transformation matrix of the virtual camera, M1 is the world transformation matrix of the three-dimensional model obtained by sequentially applying the matrix transformations of translation, rotation and scaling to the three-dimensional model, (x, y, z) are the coordinates of the rendering point, and (X1, Y1, Z1, W1) are the homogeneous coordinates from which the UV coordinates of the rendering point are derived;

calculating the pixel coordinates of the rendering point on the screen from the UV coordinates of the rendering point as (X1/W1, Y1/W1, Z1/W1);

calculating the depth value of the rendering point of the three-dimensional model from the pixel coordinates of the rendering point,

depth of the rendering point = 0.5*(Z1/W1) + 0.5
8. the method of claim 7, wherein the step of determining whether the three-dimensional model is visible comprises:
reading a depth map according to the UV coordinates of the rendering points;
sampling the rgba value at the rendering point and decoding it, the decoded rgba value being the final output depth value;
if the depth value of the rendering point of the three-dimensional model is larger than the final output depth value, the three-dimensional model is invisible; otherwise, the three-dimensional model is visible.
CN202210290176.XA 2022-03-23 2022-03-23 Method for realizing visual domain analysis in WebGL Pending CN114494561A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210290176.XA CN114494561A (en) 2022-03-23 2022-03-23 Method for realizing visual domain analysis in WebGL

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210290176.XA CN114494561A (en) 2022-03-23 2022-03-23 Method for realizing visual domain analysis in WebGL

Publications (1)

Publication Number Publication Date
CN114494561A 2022-05-13

Family

ID=81488834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210290176.XA Pending CN114494561A (en) 2022-03-23 2022-03-23 Method for realizing visual domain analysis in WebGL

Country Status (1)

Country Link
CN (1) CN114494561A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091742A (en) * 2022-12-29 2023-05-09 维坤智能科技(上海)有限公司 Method for displaying and optimizing camera observation points of three-dimensional scene
CN116091742B (en) * 2022-12-29 2024-04-02 维坤智能科技(上海)有限公司 Method for displaying and optimizing camera observation points of three-dimensional scene


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination