CN113240785A - Multi-camera combined rapid ray tracing method, system and application


Info

Publication number
CN113240785A
CN113240785A (application CN202110391588.8A)
Authority
CN
China
Prior art keywords
point
dimensional
target
search
visual angle
Prior art date
Legal status: Granted
Application number
CN202110391588.8A
Other languages
Chinese (zh)
Other versions
CN113240785B (en)
Inventor
李静
代嫣冉
洪世宽
蒋昱麒
聂佳杨
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN202110391588.8A
Publication of CN113240785A
Application granted
Publication of CN113240785B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/06 Ray-tracing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The invention belongs to the technical field of computer graphics and computer vision, and discloses a multi-camera combined fast ray tracing method, system and application. In the target modeling stage, a bounding-box intersection test is first performed on the search rays emitted from the virtual imaging plane to obtain a valid search interval, and a variable step length is then calculated from the derived linear relation between two-dimensional and three-dimensional points. In the target rendering stage, starting from each target surface voxel, occlusion judgment is performed along the direction of the current best view; the same distance-field-guided variable step length is used for the search, the best unoccluded view is selected for rendering and shading, and the scene image under the virtual view is quickly restored. The algorithm is optimized by exploiting the geometric relations among the multiple cameras; the optimization process can be parallelized across independent threads on an OpenGL-CUDA interoperation architecture for real-time rendering and display.

Description

Multi-camera combined rapid ray tracing method, system and application
Technical Field
The invention belongs to the technical field of computer graphics and computer vision, and particularly relates to a multi-camera combined rapid ray tracing method, system and application.
Background
At present, ray tracing (Ray Tracing) is one of the hot topics in computer graphics research. Through computation it can simulate physical phenomena such as light reflection, refraction and shadows in a real scene, thereby vividly restoring a real three-dimensional scene, and it can be widely applied in fields such as film and television production, sports event broadcasting, distance education, online conferencing and medical imaging. However, owing to limitations of the algorithm itself and of hardware computing power, the classical ray tracing method can render natural and vivid scenes but cannot render them in real time as the observation viewpoint changes. As application scenes grow more complex and the resolution of data acquisition devices keeps rising, the running efficiency of ray tracing finds it increasingly difficult to meet the requirements of practical applications. Research on acceleration methods for ray tracing therefore has important theoretical and practical application value.
How to improve the running efficiency of ray tracing algorithms has long been a major concern of both academia and industry, and many acceleration methods have appeared in recent years. These research results fall broadly into two categories: acceleration based on hardware parallel processing and acceleration based on algorithm optimization. Methods based on hardware parallel processing mainly benefit from the continuously improving parallel computing capability of the GPU. In 2012, Wuhucho et al. invented a GPU-based method for constructing a BVH tree for parallel ray tracing; it partitions the BVH space using the GPU's merge characteristics and computes ray-primitive intersection information in parallel to improve ray tracing efficiency. In 2018, Alpine Cold et al. invented a ray tracing optimization method for 3D scenes that partitions the nodes of a traditional KD-Tree and applies different parallel processing schemes on the GPU to different node types, accelerating the rendering process. In the same year, the applicant also proposed a ray tracing optimization method that improves execution efficiency based on OpenGL-CUDA graphics interoperation and achieves real-time display of ray tracing. Acceleration based on hardware parallel processing does show a clear speed-up for specific algorithms and application scenes, but the effect is limited by hardware processing performance and lacks generality. The second category is acceleration based on algorithm optimization, which starts from the algorithm itself and optimizes its key steps. For example, a volume rendering acceleration method based on adaptive bounding-box partitioning, proposed by Royal et al., accelerates the algorithm at the data-processing level with an adaptive bounding-box partitioning strategy built on image segmentation and windowing techniques. In 2017, Li Zeyu et al. adjusted the sampling rate according to the difference between observation viewpoint and image distance, using the partial derivative at the tangent point of the surface normal and the distance ratio between viewpoint and tangent point, to raise the ray tracing speed. In 2020, Aoshan et al. proposed an acceleration method exploiting the similarity between adjacent layers and empty-voxel skipping. On the one hand, these two categories of acceleration are limited by hardware computing power and are specific to particular algorithms; on the other hand, they either give no targeted acceleration guidance to the ray tracing process or degrade imaging quality to some extent. How to fully exploit the geometric relations among multiple views to achieve fast ray tracing is a problem urgently awaiting a solution.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) Existing methods for improving ray tracing efficiency are limited by hardware computing power, and their acceleration is specific to particular algorithms.
(2) Existing methods for improving ray tracing efficiency either give no targeted acceleration guidance to the ray tracing process or degrade imaging quality.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a multi-camera combined rapid ray tracing method, a system and application.
The invention is realized in such a way that a multi-camera combined fast ray tracing method comprises the following steps: a target modeling stage and a target rendering stage;
in the target modeling stage, firstly, bounding box intersection test is carried out on search rays emitted from a virtual imaging plane to obtain an effective search interval, and then a variable step length is calculated by utilizing the derived linear relation between a two-dimensional point and a three-dimensional point;
and in the target rendering stage, starting from each target surface voxel, occlusion judgment is performed along the direction of the current best view; the distance-field-guided variable step length is likewise used for the search, the best unoccluded view is selected for rendering and shading, and the scene image under the virtual view is quickly restored.
Further, the multi-camera combined fast ray tracing method specifically comprises the following steps:
firstly, preparing the data, including the scene pictures, the target segmentation results and the calibrated internal and external parameters of the multiple cameras;
secondly, performing a distance transform on the target segmentation results to obtain, in the image coordinate system, the distance from each point outside the target contour to the nearest target boundary;
thirdly, starting from the virtual viewpoint's two-dimensional imaging plane, emitting search rays equal in number to the image resolution (one per pixel);
fourthly, judging whether the rays emitted from the viewpoint intersect the reconstruction region by a bounding-box intersection test; depending on whether they intersect the bounding box, rays fall into two categories: a ray intersecting the bounding box is a valid ray, otherwise it is invalid; a valid ray intersects the bounding box at two points, a near point and a far point, and the interval between the two points is the valid interval traversed by the ray;
fifthly, projecting the three-dimensional points on the search ray to each view according to the calibrated camera parameters, and judging whether a point is a target surface point by whether its projections lie inside the segmented targets; if it is a target surface point, ending the search and taking the current depth as the depth value of the reconstructed target surface; otherwise, querying the distance field under each corresponding view to obtain the distance to the target boundary nearest the projected pixel, and stepping by that distance along the search direction to obtain a two-dimensional search point under each camera view;
sixthly, back-projecting the two-dimensional search points under each view onto the three-dimensional search ray according to the linear relation between two-dimensional pixel coordinates and three-dimensional world coordinates, obtaining the three-dimensional search step lengths corresponding to the different views, and taking their union, i.e. the maximum, as the next search step length; stepping by this length along the ray search direction gives the next search point; the fifth step is repeated to judge whether the current search point is a target surface point until the valid interval has been traversed, yielding the target three-dimensional surface model;
seventhly, starting from the target three-dimensional surface model, connecting each target surface voxel to the reference views and the virtual view; sorting all reference views by the spatial angle between the virtual view and each reference view: the smaller the angle, the more similar the reference view is to the virtual view, and the better views are preferred for shading;
eighthly, during shading, considering not only the view angle but also the occlusion of the target under that view, and performing occlusion judgment in the sorted reference-view order; calculating the intersection between the ray from the target surface voxel toward the current best view and the bounding box to determine the search range;
ninthly, traversing from the reconstructed target voxel toward the view direction and judging whether each traversed three-dimensional point is a voxel point, using the same criterion as the fifth step; if it is a voxel point, a foreground occluder exists under that view, the search under this view is terminated, and the eighth step is repeated with the next view in the camera order; otherwise, the current search point is continually updated with the distance-field-guided variable step length and tested until it leaves the search interval; the view is then selected to texture-render the target surface model, restoring the picture of the three-dimensional scene under the virtual view.
Furthermore, in the first step the data preparation acquires the basketball scene with multiple cameras to obtain a group of multi-view original images, and segments the target in the original images to obtain segmentation results Mask_n; the targets are the basketball players, and the segmentation may use a traditional image segmentation method or a deep-learning segmentation network. In addition, the camera array needs to be calibrated to obtain the mapping from a three-dimensional world point P = (X, Y, Z, 1)^T to an imaging-plane pixel coordinate p = (u, v, 1)^T:
sp = MP
where s is a scale factor and M is the mapping matrix.
Further, the multi-view target segmentation results of the second step divide the points in each image into two sets, background and foreground. Performing a distance transform on the segmentation results yields a group of distance fields; each distance field records the shortest distance from a point outside the foreground contour to the foreground boundary.
Further, the third step starts from the virtual viewpoint's two-dimensional imaging plane and emits a cluster of search rays connecting the optical center with the imaging-plane pixels:
P(t, u, v) = P_0 + t·n(u, v), t ∈ (0, +∞);
where P_0 is the position of the optical center in the world coordinate system, n(u, v) is the direction vector of the ray cast through pixel (u, v), and t is the search parameter.
Further, in the basketball scene of the fourth step the reconstruction region is often limited to the size of the court or half court; the region can be quantized into a cubic bounding box containing all reconstruction voxels, and a bounding-box intersection test is performed on the emitted rays. Depending on whether they intersect the bounding box, rays fall into two categories: a ray intersecting the bounding box is a valid ray, otherwise it is invalid. A valid ray intersects the bounding box at two points, the near point t_near and the far point t_far, and the interval between the two points is the valid interval traversed by the ray; the ray equation is then modified as:
P(t, u, v) = P_0 + t·n(u, v), t ∈ (t_near, t_far).
further, the fifth step projects three-dimensional points in the ray traversal process to each view angle according to a formula sp ═ MP, and projects the three-dimensional pointsThe shadow point coordinate is (u)n,vn) (ii) a Judging whether the projection point is in the contour of the corresponding view segmentation result target, if so, returning a value of 1, otherwise, returning a value of 0; if the number of cameras of the projection point in the target contour meets the requirement of a threshold value T, the point is considered to hit the target three-dimensional surface:
Figure BDA0003016936230000051
if the projection point does not hit the target three-dimensional surface, inquiring the distance field D under the corresponding view anglenObtaining the profile distance d closest to the projection pointnTraversing along the searching direction by the step length to obtain the next searching point on the two-dimensional image of each visual angle
Figure BDA0003016936230000052
Further, in the sixth step the closest point P_near and the farthest point P_far, their projected two-dimensional pixel coordinates p_near^n and p_far^n, and the next search point p_next^n under each view have been obtained. First, the next search points and search distances on the two-dimensional planes are back-calculated into three-dimensional space, and the union of the three-dimensional search step lengths of all views is taken as the maximum search step length. Using the linear projective relation between three-dimensional and two-dimensional points as the link, the next two-dimensional search point is back-projected onto the search ray; the vector parameter equation of the spatial line is:
P_next = t·P_near + (1 - t)·P_far
Multiplying both ends of the equation by the mapping matrix M_n of each view:
M_n·P_next = t·M_n·P_near + (1 - t)·M_n·P_far
s_next^n·p_next^n = t·s_near^n·p_near^n + (1 - t)·s_far^n·p_far^n
where s_next^n is the depth of the point under the n-th reference camera, an unknown quantity; s_near^n and s_far^n are the known depths of the nearest and farthest points under the n-th reference camera; and p_near^n, p_far^n, p_next^n are all homogeneous pixel coordinates. Expanding coordinate-wise gives the simultaneous system
u_next^n·s_next^n = t·s_near^n·u_near^n + (1 - t)·s_far^n·u_far^n
v_next^n·s_next^n = t·s_near^n·v_near^n + (1 - t)·s_far^n·v_far^n
s_next^n = t·s_near^n + (1 - t)·s_far^n
which is solved for the unknown parameter t (using the u rows; the v rows give the same result):
t = s_far^n·(u_next^n - u_far^n) / [s_near^n·(u_near^n - u_next^n) + s_far^n·(u_next^n - u_far^n)]
Substituting t into the formula P_next = t·P_near + (1 - t)·P_far gives the next three-dimensional search point P_next^n corresponding to the n-th view; its distance from the current search point is D_n. After all views are computed, a cluster of distance intervals along the search ray is obtained, and the union of all intervals gives the current maximum search step length D_max:
D_max = D_1 ∪ D_2 ∪ … ∪ D_N
Stepping by this distance along the ray search direction gives the next search point P_next; by the mapping relation, the distances from the search point on each two-dimensional image to the target boundary are obtained, the distance fields being used. The fifth step is repeated to judge whether the current search point is a target surface point until the valid interval has been traversed, yielding the target three-dimensional surface model;
the seventh step sorts all reference camera views using the spatial angle between the virtual camera and each real camera with respect to the same voxel point as the metric; the views may also be sorted by the orthographic-projection angle of the virtual and real cameras with respect to the same voxel point, or by the normal direction of the target three-dimensional surface;
in the eighth step, model rendering must consider not only the best view but also the occlusion of the target under that view; a traversal from the target surface voxel toward the best view judges whether other voxel points lie in between; first, the intersection between the search ray and the reconstruction-region bounding box is calculated to determine the search range;
the ninth step traverses from the reconstructed voxel toward the current best view and judges whether each traversed three-dimensional point is a voxel point, using the criterion of the fifth step; if the current search point is a voxel point, a foreground occluder exists under that view, the search under this view is terminated, and the eighth step is repeated with the next view in the best-camera order; otherwise the search continues along the search direction, with the variable step length computed by the mapping relation of the sixth step, continually updating and testing the current search point; if no foreground occluder appears before the traversal leaves the search range, the view is selected to texture-render the target surface model; once all voxel points have completed occlusion judgment and rendering, the imaging result of the three-dimensional scene under the virtual view is restored.
Another object of the present invention is to provide a multi-camera combined fast ray tracing system for performing the multi-camera combined fast ray tracing method, the multi-camera combined fast ray tracing system comprising:
the data pre-preparation module is used for realizing data preparation of a scene picture, a target segmentation result and multi-camera calibration internal and external parameters;
the distance transformation module is used for carrying out distance transformation on the target segmentation result to obtain the distance from each point outside the target contour to the nearest boundary of the target under the image coordinate system;
the search ray emitting module is used for emitting search rays equal in number to the image resolution from the virtual viewpoint's two-dimensional imaging plane;
the bounding box intersection testing module is used for judging whether the light rays emitted from the viewpoint are intersected with the reconstruction region or not and carrying out bounding box intersection testing;
the target surface point judging module is used for projecting three-dimensional points on the search light to each visual angle according to the calibrated internal and external parameters of the camera and judging whether the three-dimensional points are target surface points according to whether the three-dimensional points are in the segmentation result target;
the three-dimensional search step length acquisition module is used for back-projecting the two-dimensional search points under each view onto the three-dimensional search ray according to the linear relation between two-dimensional pixel coordinates and three-dimensional world coordinates, obtaining the three-dimensional search step lengths corresponding to the different views and taking their union, i.e. the maximum, as the next search step length; stepping by this length along the ray search direction gives the next search point; whether the current search point is a target surface point is judged until the valid interval has been traversed, yielding the target three-dimensional surface model;
the view angle processing module is used for connecting, starting from the target three-dimensional surface model, each target surface voxel to the reference views and the virtual view, and sorting all reference views by the spatial angle between the virtual view and each reference view: the smaller the angle, the more similar the reference view is to the virtual view, and the better views are preferred for shading;
the search range determining module is used for considering, during shading, not only the view angle but also the occlusion of the target under that view, performing occlusion judgment in the reference-view order, and calculating the intersection between the target three-dimensional surface voxel and the current best view to determine the search range;
and the voxel point judgment module is used for traversing from the reconstructed target voxel toward the view direction and judging whether each traversed three-dimensional point is a voxel point.
Another object of the present invention is to provide a terminal for implementing the multi-camera combined fast ray tracing method, the terminal comprising: a movie and television production terminal, a sports event broadcasting terminal, a distance education terminal, an online conference terminal, or a medical imaging terminal.
Combining all the above technical schemes, the advantages and positive effects of the invention are: the method fully mines the geometric information among multiple views and uses the mapping relation between the two-dimensional images and the three-dimensional scene to guide a variable-step search in space. This reduces the search of invalid space and thereby accelerates the ray tracing algorithm. Because the computation for each ray is independent, real-time rendering and display can be achieved with an OpenGL-CUDA interoperation-based framework.
Drawings
Fig. 1 is a flowchart of a multi-camera combined fast ray tracing method according to an embodiment of the present invention.
FIG. 2 is a schematic structural diagram of a multi-camera combined fast ray tracing system according to an embodiment of the present invention;
in fig. 2: 1. data pre-preparation module; 2. distance transformation module; 3. search ray emitting module; 4. bounding box intersection test module; 5. target surface point judgment module; 6. three-dimensional search step length acquisition module; 7. view angle processing module; 8. search range determination module; 9. voxel point judgment module.
Fig. 3 is a flowchart of an implementation of a multi-camera combined fast ray tracing method according to an embodiment of the present invention.
Fig. 4 is a diagram of an example of data preparation provided by an embodiment of the invention.
Fig. 5 is a schematic diagram for deriving a mapping relationship between a two-dimensional pixel and a three-dimensional world point according to an embodiment of the present invention.
Fig. 6 is a schematic diagram illustrating a reconstruction model and a rendering result according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In view of the problems in the prior art, the present invention provides a multi-camera combined fast ray tracing method, system and application thereof, which are described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the multi-camera combined fast ray tracing method provided by the present invention comprises the following steps:
s101: preparing data, including a scene picture, a target segmentation result and multi-camera calibration internal and external parameters;
s102: performing a distance transform on the target segmentation results to obtain, in the image coordinate system, the distance from each point outside the target contour to the nearest target boundary, referred to in the invention as a distance field;
s103: starting from the virtual viewpoint's two-dimensional imaging plane, emitting search rays equal in number to the image resolution;
s104: judging whether the rays emitted from the viewpoint intersect the reconstruction region by a bounding-box intersection test; depending on whether they intersect the bounding box, rays fall into two categories: a ray intersecting the bounding box is a valid ray, otherwise it is invalid; a valid ray intersects the bounding box at two points, a near point and a far point, and the interval between the two points is the valid interval traversed by the ray;
s105: projecting the three-dimensional points on the search ray to each view according to the calibrated camera parameters, and judging whether a point is a target surface point by whether its projections lie inside the segmented targets; if it is a target surface point, ending the search and taking the current depth as the depth value of the reconstructed target surface; otherwise, querying the distance field under each corresponding view to obtain the distance to the target boundary nearest the projected pixel, and stepping by that distance along the search direction to obtain a two-dimensional search point under each camera view;
s106: back-projecting the two-dimensional search points under each view onto the three-dimensional search ray according to the linear relation between two-dimensional pixel coordinates and three-dimensional world coordinates, obtaining the three-dimensional search step lengths corresponding to the different views and taking their union, i.e. the maximum, as the next search step length; stepping by this length along the ray search direction gives the next search point; S105 is repeated to judge whether the current search point is a target surface point until the valid interval has been traversed, yielding the target three-dimensional surface model;
s107: starting from the target three-dimensional surface model, connecting each target surface voxel to the reference views and the virtual view; sorting all reference views by the spatial angle between the virtual view and each reference view: the smaller the angle, the more similar the reference view is to the virtual view, and the better views are preferred for shading;
s108: during shading, considering not only the view angle but also the occlusion of the target under that view; occlusion judgment is performed in the sorted reference-view order; the intersection between the ray from the target surface voxel toward the current best view and the bounding box is calculated to determine the search range;
s109: traversing from the reconstructed target voxel toward the view direction and judging whether each traversed three-dimensional point is a voxel point, with the same criterion as S105; if it is a voxel point, a foreground occluder exists under that view, the search under this view is terminated, and S108 is repeated with the next view in the camera order; otherwise the current search point is continually updated with the distance-field-guided variable step length and tested until it leaves the search interval; the view is then selected to texture-render the target surface model, restoring the picture of the three-dimensional scene under the virtual view.
Those skilled in the art can also implement the multi-camera combined fast ray tracing method provided by the present invention by using other steps, and the multi-camera combined fast ray tracing method provided by the present invention in fig. 1 is only one specific embodiment.
As shown in fig. 2, the multi-camera combined fast ray tracing system provided by the present invention comprises:
the data pre-preparation module 1 is used for realizing data preparation of a scene picture, a target segmentation result and multi-camera calibration internal and external parameters;
the distance transformation module 2 is used for carrying out distance transformation on the target segmentation result to obtain the distance from each point outside the target contour to the nearest boundary of the target under the image coordinate system;
the search ray emitting module 3 is used for emitting search rays equal in number to the image resolution from the virtual viewpoint's two-dimensional imaging plane;
the bounding box intersection testing module 4 is used for judging whether the light rays emitted from the viewpoint intersect with the reconstruction region or not and carrying out bounding box intersection testing;
the target surface point judging module 5 is used for projecting three-dimensional points on the search light to each view angle according to the calibrated internal and external parameters of the camera, and judging whether the three-dimensional points are target surface points according to whether the three-dimensional points are in the segmented result target or not;
the three-dimensional search step length acquisition module 6 is used for back-projecting the two-dimensional search points under each view onto the three-dimensional search ray according to the linear relation between two-dimensional pixel coordinates and three-dimensional world coordinates, obtaining the three-dimensional search step lengths corresponding to the different views and taking their union, i.e. the maximum, as the next search step length; stepping by this length along the ray search direction gives the next search point; whether the current search point is a target surface point is judged until the valid interval has been traversed, yielding the target three-dimensional surface model;
the view angle processing module 7 is used for connecting, starting from the target three-dimensional surface model, each target surface voxel to the reference views and the virtual view, and sorting all reference views by the spatial angle between the virtual view and each reference view: the smaller the angle, the more similar the reference view is to the virtual view, and the better views are preferred for shading;
the search range determining module 8 is used for considering, during shading, not only the view angle but also the occlusion of the target under that view, performing occlusion judgment in the reference-view order, and calculating the intersection between the target three-dimensional surface voxel and the current best view to determine the search range;
and the voxel point judgment module 9 is used for traversing from the reconstructed target voxel toward the view direction and judging whether each traversed three-dimensional point is a voxel point.
The technical solution of the present invention is further described below with reference to the accompanying drawings.
The system implementing the multi-camera combined rapid ray tracing method uses the two-dimensional distance fields as a guide and the geometric mapping relations among the multiple cameras to compute a variable step length in three-dimensional space, thereby accelerating the ray search. The acceleration applies to both the target modeling and the rendering/shading stages. Specifically, in the target modeling stage a bounding-box intersection test is first performed on the search rays emitted from the virtual imaging plane to obtain a valid search interval; a variable step length is then calculated from the derived linear relation between two-dimensional and three-dimensional points, and stepping by this length along the ray direction greatly accelerates the search. In the target rendering stage, starting from each target surface voxel, occlusion judgment is performed along the direction of the current best view; the same distance-field-guided variable step length is used for the search, the best unoccluded view can be selected for rendering and shading, and the scene image under the virtual view can be quickly restored.
As shown in fig. 3, the multi-camera combined fast ray tracing method provided by the present invention is mainly divided into two parts, namely, model reconstruction and model rendering, and specifically includes the following steps:
step one, the method of the invention needs to prepare the following data: the multi-view target image, the target segmentation result and the multi-camera internal and external calibration parameters are shown in fig. 4. Taking a basketball court as an example, the basketball is shot by multiple camerasData acquisition is carried out on a scene to obtain a group of multi-view original images. Segmenting the target in the original image to obtain a segmentation result MasknIn this example, the target is a basketball player. The segmentation method can adopt a traditional image segmentation method or a segmentation network based on deep learning. In addition, the camera array needs to be calibrated to obtain a three-dimensional world point P ═ (X, Y, Z, 1)TTo imaging plane pixel coordinate p ═ (u, v, 1)TThe mapping relationship of (1):
sp=wP(1)
wherein s is a scale factor and M maps a matrix.
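As an illustration of mapping (1), the following sketch (a hypothetical NumPy helper; the calibration matrix M shown is an assumed example, not data from the patent) projects a homogeneous world point onto the imaging plane, recovering the scale factor s as the third homogeneous component:

```python
import numpy as np

def project(M, P_world):
    # s*p = M*P: M is the 3x4 mapping matrix, P_world = (X, Y, Z, 1)^T
    sp = M @ P_world
    s = sp[2]                      # scale factor (depth along camera axis)
    return np.array([sp[0] / s, sp[1] / s]), s

# Assumed example calibration: intrinsics K and pose [R | t]
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
M = K @ Rt
pixel, depth = project(M, np.array([0.5, -0.2, 3.0, 1.0]))
```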
Step two, the multi-view target segmentation results divide the points in each image into two sets, background and foreground. Performing a distance transform on the segmentation results yields a group of distance fields; each distance field records the shortest distance from a point outside the foreground contour to the foreground boundary.
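A minimal sketch of this distance-field construction, assuming SciPy is available (the helper name is illustrative): applying distance_transform_edt to the inverted foreground mask gives every background pixel its Euclidean distance to the nearest foreground pixel, which is the quantity the method queries.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_distance_field(mask):
    # mask: binary array, 1 = foreground (target), 0 = background.
    # Background pixels get the distance to the nearest target pixel;
    # foreground pixels get 0.
    return distance_transform_edt(mask == 0)

# One distance field D_n per camera view, from the masks Mask_n:
# distance_fields = [build_distance_field(m) for m in masks]
```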
Step three, starting from the virtual viewpoint's two-dimensional imaging plane, a cluster of search rays connecting the optical center with the imaging-plane pixels is emitted:
P(t, u, v) = P_0 + t·n(u, v), t ∈ (0, +∞)    (2)
where P_0 is the position of the optical center in the world coordinate system, n(u, v) is the direction vector of the ray cast through pixel (u, v), and t is the search parameter.
Step four, on the one hand, in football and basketball stadiums the motion region of the targets is limited; on the other hand, the coverage of the camera array is also limited, since a moving target must be observed from a sufficient number of views. The size of the reconstruction region can therefore be known in advance. For example, in a basketball game scenario the reconstruction region is often limited to the court or half court. The region can be quantized into a cubic bounding box containing all reconstruction voxels, and a bounding-box intersection test is performed on the emitted rays. Depending on whether they intersect the bounding box, rays fall into two categories: a ray intersecting the bounding box is a valid ray, otherwise it is invalid. A valid ray intersects the bounding box at two points, the near point t_near and the far point t_far, and the interval between the two points is the valid interval traversed by the ray, so ray equation (2) can be modified as:
P(t, u, v) = P_0 + t·n(u, v), t ∈ (t_near, t_far)    (3)
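The bounding-box test can be realized with the standard slab method; the sketch below is a minimal version (the helper name and API are illustrative) returning the valid interval (t_near, t_far), or None for an invalid ray.

```python
import numpy as np

def intersect_aabb(origin, direction, box_min, box_max):
    # Slab method: clip P(t) = origin + t*direction against an axis-aligned
    # box; relies on IEEE inf semantics for axis-parallel ray components.
    with np.errstate(divide="ignore"):
        inv = 1.0 / direction
    t0 = (box_min - origin) * inv
    t1 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t0, t1))
    t_far = np.min(np.maximum(t0, t1))
    if t_near > t_far or t_far < 0.0:
        return None                 # invalid ray: misses the box
    return max(t_near, 0.0), t_far  # valid interval traversed by the ray
```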
step five, three-dimensional points in the ray traversal process can be projected to each view angle according to the formula (1), and the coordinates of the projection points are (u)n,vn). And judging whether the projection point is in the contour of the corresponding view angle segmentation result target, if so, returning a value of 1, and if not, returning a value of 0. If the number of cameras of the projection point in the target contour meets the requirement of a threshold value T, the point is considered to hit the target three-dimensional surface:
Figure BDA0003016936230000131
if the projection point does not hit the target three-dimensional surface, inquiring the distance field D under the corresponding view anglenObtaining the profile distance d closest to the projection pointn. Traversing along the searching direction by the step length to obtain the next searching point on the two-dimensional image of each visual angle
Figure BDA0003016936230000132
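A sketch of this surface test under threshold rule (4), reusing the hypothetical project helper from step one; masks and distance_fields are the per-view arrays from step two.

```python
import numpy as np

def surface_hit(P_world, proj_mats, masks, T):
    # Count the views whose silhouette contains the projection of P_world;
    # the point hits the visual-hull surface when the count reaches T.
    votes = 0
    for M, mask in zip(proj_mats, masks):
        (u, v), _ = project(M, P_world)
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < mask.shape[0] and 0 <= ui < mask.shape[1]:
            votes += int(mask[vi, ui] > 0)
    return votes >= T

def boundary_distances(P_world, proj_mats, distance_fields):
    # d_n: distance from each projected pixel to the nearest target contour,
    # read from the per-view distance fields (indices clamped to the image).
    ds = []
    for M, D in zip(proj_mats, distance_fields):
        (u, v), _ = project(M, P_world)
        vi = int(np.clip(round(v), 0, D.shape[0] - 1))
        ui = int(np.clip(round(u), 0, D.shape[1] - 1))
        ds.append(D[vi, ui])
    return ds
```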
Step six, as shown in fig. 5, the closest point P_near and the farthest point P_far, their projected two-dimensional pixel coordinates p_near^n and p_far^n, and the next search point p_next^n under each view have been obtained. Because the projection mappings of the views differ, back-projecting the largest two-dimensional step among the views into three dimensions does not necessarily give the longest distance along the search ray; the next search point and search distance on each two-dimensional plane must be back-calculated into three-dimensional space. Because a visual-hull-based reconstruction judges a target point by whether it lies inside all two-dimensional silhouettes, the union of the three-dimensional search step lengths of all views is taken as the maximum search step length. The invention uses the linear projective relation between three-dimensional and two-dimensional points as the link to back-project the next two-dimensional search point onto the search ray. Since the three points P_near, P_far and P_next lie in the same world coordinate system and on the same search ray, the vector parameter equation of the spatial line is:
P_next = t·P_near + (1 - t)·P_far    (5)
Multiplying both ends of the equation by the mapping matrix M_n of each view:
M_n·P_next = t·M_n·P_near + (1 - t)·M_n·P_far
s_next^n·p_next^n = t·s_near^n·p_near^n + (1 - t)·s_far^n·p_far^n    (6)
where s_next^n is the depth of the point under the n-th reference camera, an unknown quantity; s_near^n and s_far^n are the known depths of the nearest and farthest points under the n-th reference camera; and p_near^n, p_far^n, p_next^n are all homogeneous pixel coordinates. Expanding (6) coordinate-wise gives the simultaneous system
u_next^n·s_next^n = t·s_near^n·u_near^n + (1 - t)·s_far^n·u_far^n
v_next^n·s_next^n = t·s_near^n·v_near^n + (1 - t)·s_far^n·v_far^n
s_next^n = t·s_near^n + (1 - t)·s_far^n    (7)
which can be solved for the unknown parameter t (using the u rows; the v rows give the same result):
t = s_far^n·(u_next^n - u_far^n) / [s_near^n·(u_near^n - u_next^n) + s_far^n·(u_next^n - u_far^n)]    (8)
Substituting t into equation (5) gives the next three-dimensional search point P_next^n corresponding to the n-th view; its distance from the current search point is D_n. After all views are computed, a cluster of distance intervals along the search ray is obtained. The union of all intervals gives the current maximum search step length D_max:
D_max = D_1 ∪ D_2 ∪ … ∪ D_N    (9)
Stepping by this distance along the ray search direction gives the next search point P_next. By the mapping relation, the distances from the search point on each two-dimensional image to the target boundary are obtained; the distance fields are used here. Step five is repeated to judge whether the current search point is a target surface point until the valid interval has been traversed, yielding the target three-dimensional surface model, as shown in the target reconstruction model of fig. 6.
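A sketch of this back-projection under the notation above (function names illustrative): solve (8) for t in each view, recover the three-dimensional point from (5), and take the union, i.e. the maximum, of the per-view steps as D_max.

```python
import numpy as np

def solve_t(u_near, u_far, u_next, s_near, s_far):
    # Equation (8): parameter t of P_next = t*P_near + (1 - t)*P_far whose
    # projection in view n has pixel column u_next.
    num = s_far * (u_next - u_far)
    den = s_near * (u_near - u_next) + s_far * (u_next - u_far)
    return num / den

def max_search_step(P_cur, P_near, P_far, views):
    # views: per-view records holding the projected pixel columns and depths
    # of P_near / P_far and the distance-field-derived next 2D point u_next.
    steps = []
    for vw in views:
        t = solve_t(vw["u_near"], vw["u_far"], vw["u_next"],
                    vw["s_near"], vw["s_far"])
        P_next = t * P_near + (1.0 - t) * P_far
        steps.append(np.linalg.norm(P_next - P_cur))
    return max(steps)   # D_max = D_1 ∪ D_2 ∪ ... ∪ D_N  (equation (9))
```

A quick sanity check on (8): u_next = u_near gives t = 1, i.e. P_next = P_near, and u_next = u_far gives t = 0, i.e. P_next = P_far, as expected.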
Step seven, a target three-dimensional surface voxel can be observed by several reference cameras, and the color and texture observed from reference cameras at different views may also differ greatly. To obtain the real view closest in color to the virtual viewpoint, the reference cameras need to be sorted. The invention sorts all reference camera views using the spatial angle between the virtual camera and each real camera with respect to the same voxel point as the metric. Alternatively, the views may be sorted by the orthographic-projection angle of the virtual and real cameras with respect to the same voxel point, or by the normal direction of the target three-dimensional surface.
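A sketch of this ordering (camera centers assumed known from the extrinsics; the helper name is illustrative):

```python
import numpy as np

def sort_reference_views(voxel, virtual_center, ref_centers):
    # Rank reference cameras by the spatial angle, at the voxel point,
    # between the directions to the virtual and to each reference camera;
    # a smaller angle means a more similar view, preferred for shading.
    v = virtual_center - voxel
    v = v / np.linalg.norm(v)
    angles = []
    for i, c in enumerate(ref_centers):
        r = (c - voxel) / np.linalg.norm(c - voxel)
        angles.append((np.arccos(np.clip(np.dot(v, r), -1.0, 1.0)), i))
    return [i for _, i in sorted(angles)]
```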
Step eight, model rendering must consider not only the best view but also the occlusion of the target under that view. A traversal from the target surface voxel toward the best view judges whether other voxel points lie in between. First, the intersection between the search ray and the reconstruction-region bounding box is calculated to determine the search range.
Step nine, the traversal proceeds from the reconstructed voxel toward the current best view, judging whether each traversed three-dimensional point is a voxel point with the same criterion as step five. If the current search point is a voxel point, a foreground occluder exists under that view, the search under this view is terminated, and step eight is repeated with the next view in the best-camera order; otherwise the search continues along the search direction, with the variable step length computed by the mapping relation of step six, continually updating and testing the current search point. If no foreground occluder appears before the traversal leaves the search range, the view is selected to texture-render the target surface model. Once all voxel points have completed occlusion judgment and rendering, the imaging result of the three-dimensional scene under the virtual view is restored, as shown in the target rendering result of fig. 6.
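A sketch of this occlusion test, reusing the hypothetical surface_hit helper from step five and a step_fn wrapping the step-six variable step:

```python
import numpy as np

def view_is_occluded(voxel, cam_center, hit_fn, step_fn, t_max, eps=1e-3):
    # March from the voxel toward the candidate camera; hitting another
    # voxel before leaving the search range means the view is occluded.
    d = cam_center - voxel
    length = np.linalg.norm(d)
    d = d / length
    t = eps                        # start just off the surface voxel
    while t < min(t_max, length):
        P = voxel + t * d
        if hit_fn(P):              # a foreground occluder lies in front
            return True
        t += max(step_fn(P), eps)  # distance-field-guided variable step
    return False                   # unoccluded: usable for texture shading

# Usage sketch: shade each voxel with the first view, in the order given by
# sort_reference_views(...), for which view_is_occluded(...) is False.
```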
In the model reconstruction stage, all operations are performed independently for each emitted ray, so a data structure based on the CUDA parallel computing architecture can be designed and independent threads allocated to accelerate the algorithm. Likewise, in the model rendering stage the steps operate independently on each target three-dimensional voxel, so CUDA parallel acceleration can also be used. In addition, an OpenGL-CUDA interoperation framework can be used to save data-transfer resources, with OpenGL performing real-time interactive display of the imaging results.
The invention fully utilizes the geometric relations among the multiple cameras to realize a fast ray tracing method. Starting from algorithm optimization, it derives the linear mapping relation between two-dimensional pixel points and three-dimensional world points, uses distances on the two-dimensional images as a guide to reduce the search of redundant space, and thereby accelerates the ray tracing algorithm. Moreover, in the design of the invention every step can be assigned independent threads for parallel computation on an OpenGL-CUDA interoperation architecture, with real-time interactive display, further speeding up the algorithm.
It should be noted that the embodiments of the present invention can be realized by hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer-executable instructions and/or embodied in processor control code, such code being provided on a carrier medium such as a disk, CD- or DVD-ROM, programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The apparatus and its modules of the present invention may be implemented by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, or by software executed by various types of processors, or by a combination of hardware circuits and software, e.g. firmware.
The above description is only a specific embodiment of the present invention and is not intended to limit its scope of protection; all modifications, equivalents and improvements made within the spirit and scope of the invention as defined by the appended claims are intended to be covered.

Claims (10)

1. A multi-camera combined fast ray tracing method is characterized in that the multi-camera combined fast ray tracing method comprises the following steps: a target modeling stage and a target rendering stage;
in the target modeling stage, firstly, bounding box intersection test is carried out on search rays emitted from a virtual imaging plane to obtain an effective search interval, and then a variable step length is calculated by utilizing the derived linear relation between a two-dimensional point and a three-dimensional point;
and in the target rendering stage, starting from each target surface voxel, occlusion judgment is performed along the direction of the current best view; the distance-field-guided variable step length is likewise used for the search, the best unoccluded view is selected for rendering and shading, and the scene image under the virtual view is quickly restored.
2. The multi-camera combined fast ray tracing method of claim 1, wherein the multi-camera combined fast ray tracing method comprises the following steps:
firstly, preparing the data, including the scene pictures, the target segmentation results and the calibrated internal and external parameters of the multiple cameras;
secondly, performing a distance transform on the target segmentation results to obtain, in the image coordinate system, the distance from each point outside the target contour to the nearest target boundary;
thirdly, starting from the virtual viewpoint's two-dimensional imaging plane, emitting search rays equal in number to the image resolution;
fourthly, judging whether the rays emitted from the viewpoint intersect the reconstruction region by a bounding-box intersection test; depending on whether they intersect the bounding box, rays fall into two categories: a ray intersecting the bounding box is a valid ray, otherwise it is invalid; a valid ray intersects the bounding box at two points, a near point and a far point, and the interval between the two points is the valid interval traversed by the ray;
fifthly, projecting the three-dimensional points on the search ray to each view according to the calibrated camera parameters, and judging whether a point is a target surface point by whether its projections lie inside the segmented targets; if it is a target surface point, ending the search and taking the current depth as the depth value of the reconstructed target surface; otherwise, querying the distance field under each corresponding view to obtain the distance to the target boundary nearest the projected pixel, and stepping by that distance along the search direction to obtain a two-dimensional search point under each camera view;
sixthly, back-projecting the two-dimensional search points under each view onto the three-dimensional search ray according to the linear relation between two-dimensional pixel coordinates and three-dimensional world coordinates, obtaining the three-dimensional search step lengths corresponding to the different views, and taking their union, i.e. the maximum, as the next search step length; stepping by this length along the ray search direction gives the next search point; the fifth step is repeated to judge whether the current search point is a target surface point until the valid interval has been traversed, yielding the target three-dimensional surface model;
seventhly, starting from the target three-dimensional surface model, connecting each target surface voxel to the reference views and the virtual view; sorting all reference views by the spatial angle between the virtual view and each reference view: the smaller the angle, the more similar the reference view is to the virtual view, and the better views are preferred for shading;
eighthly, during shading, considering not only the view angle but also the occlusion of the target under that view, and performing occlusion judgment in the sorted reference-view order; calculating the intersection between the ray from the target surface voxel toward the current best view and the bounding box to determine the search range;
ninthly, traversing from the reconstructed target voxel toward the view direction and judging whether each traversed three-dimensional point is a voxel point, using the same criterion as the fifth step; if it is a voxel point, a foreground occluder exists under that view, the search under this view is terminated, and the eighth step is repeated with the next view in the camera order; otherwise, the current search point is continually updated with the distance-field-guided variable step length and tested until it leaves the search interval; the view is then selected to texture-render the target surface model, restoring the picture of the three-dimensional scene under the virtual view.
3. The multi-camera combined fast ray tracing method of claim 2, wherein the data preparation of the first step acquires data of the basketball scene with multiple cameras to form a group of multi-view original images, and segments the target in the original images to obtain segmentation results Mask_n; the targets are the basketball players, and the segmentation may use a traditional image segmentation method or a deep-learning segmentation network; in addition, the camera array needs to be calibrated to obtain the mapping from a three-dimensional world point P = (X, Y, Z, 1)^T to an imaging-plane pixel coordinate p = (u, v, 1)^T:
sp = MP
where s is a scale factor and M is the mapping matrix.
4. The multi-camera combined fast ray tracing method of claim 2, wherein the multi-view target segmentation results of the second step divide the points in the image into two sets, background and foreground; performing a distance transform on the segmentation results yields a group of distance fields; the shortest distance from a point outside the foreground contour to the foreground boundary is recorded in the distance field.
5. The multi-camera combined fast ray tracing method of claim 2, wherein the third step emits, starting from the virtual viewpoint's two-dimensional imaging plane, a cluster of search rays connecting the optical center with the imaging-plane pixels:
P(t, u, v) = P_0 + t·n(u, v), t ∈ (0, +∞);
where P_0 is the position of the optical center in the world coordinate system, n(u, v) is the direction vector of the ray cast through pixel (u, v), and t is the search parameter.
6. The multi-camera combined fast ray tracing method of claim 2, wherein the reconstruction region of the basketball scene of the fourth step is limited to the size of the basketball court or half court; the region can be quantized into a cubic bounding box containing all reconstruction voxels, and a bounding-box intersection test is performed on the emitted rays; depending on whether they intersect the bounding box, the rays divide into two classes: a ray that intersects the bounding box is a valid ray, and otherwise it is an invalid ray; a valid ray intersects the bounding box at two points, a near point t_near and a far point t_far, and the segment between these two points is the valid interval traversed by the ray, so the ray equation is modified to:
P(t, u, v) = P_0 + t·n(u, v), t ∈ (t_near, t_far).
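The claim does not fix the test itself; a minimal sketch using the standard slab method would be:

```python
import numpy as np

def intersect_aabb(P0, n, box_min, box_max):
    """Slab test of ray P0 + t*n against an axis-aligned bounding box.

    Returns (t_near, t_far) for a valid ray, or None for an invalid ray
    that misses the reconstruction region.
    """
    with np.errstate(divide="ignore"):
        inv = 1.0 / np.asarray(n, dtype=float)
    t1 = (np.asarray(box_min) - P0) * inv
    t2 = (np.asarray(box_max) - P0) * inv
    t_near = float(np.max(np.minimum(t1, t2)))
    t_far = float(np.min(np.maximum(t1, t2)))
    return None if t_far <= max(t_near, 0.0) else (t_near, t_far)
```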
7. The multi-camera combined fast ray tracing method of claim 2, wherein the fifth step projects the three-dimensional points visited during ray traversal to each view angle according to the formula sp = MP, the coordinates of the projected point being (u_n, v_n); whether the projected point lies inside the target contour of the corresponding view's segmentation result is judged, returning 1 if so and 0 otherwise; if the number of cameras for which the projected point lies inside the target contour reaches the threshold T, the point is considered to hit the target three-dimensional surface:
Σ_{n=1}^{N} 1[(u_n, v_n) ∈ Mask_n] ≥ T;
if the projected point does not hit the target three-dimensional surface, the distance field D_n of the corresponding view angle is queried to obtain the distance d_n from the projected point to the closest contour, and stepping by that length along the search direction gives the next search point p_n^next on the two-dimensional image of each view angle.
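A sketch of this fifth-step hit test (function and variable names are illustrative; cameras holds the per-view matrices M_n, masks and dist_fields the segmentation results and their distance fields):

```python
import numpy as np

def is_surface_point(P, cameras, masks, dist_fields, T):
    """Project P into every view; count mask hits and collect d_n."""
    P_h = np.append(np.asarray(P, dtype=float), 1.0)
    hits, d = 0, []
    for M, mask, dist in zip(cameras, masks, dist_fields):
        sp = M @ P_h
        u, v = sp[:2] / sp[2]
        r, c = int(round(v)), int(round(u))          # image row = v, col = u
        inside = (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
                  and mask[r, c] > 0)
        hits += inside
        # d_n = 0 inside the contour, else distance-field lookup (clamped)
        r_c = min(max(r, 0), dist.shape[0] - 1)
        c_c = min(max(c, 0), dist.shape[1] - 1)
        d.append(0.0 if inside else float(dist[r_c, c_c]))
    return hits >= T, d
```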
8. The multi-camera combined fast ray tracing method of claim 2, wherein the sixth step has already obtained the nearest point P_near and the farthest point P_far, their projected two-dimensional pixel coordinates p_n^near and p_n^far, and the next search point p_n^next at each view angle; first, the next search point and the search distance on each two-dimensional plane are back-calculated into three-dimensional space, and the union of the three-dimensional search step lengths of all view angles is taken as the maximum search step length; the next search point on a two-dimensional plane is back-projected onto the three-dimensional ray through the linear projection relationship between three-dimensional points and two-dimensional points, the vector parameter equation of the spatial line being:
P_next = t·P_near + (1 − t)·P_far;
multiplying both sides of the equation by the mapping matrix M_n of each view angle gives
M_n P_next = t·M_n P_near + (1 − t)·M_n P_far,
that is,
z_n^next·p_n^next = t·z_n^near·p_n^near + (1 − t)·z_n^far·p_n^far,
where z_n^next is the depth of the point P_next under the n-th reference camera, which is unknown, z_n^near and z_n^far are the depths of the nearest and farthest points under the n-th reference camera, which are known, and p_n^next, p_n^near, p_n^far are all homogeneous pixel coordinates; expanding the components and solving the simultaneous equations yields
t = a_n / (a_n + b_n),
where a_n = z_n^far·(u_n^far − u_n^next) and b_n = z_n^near·(u_n^next − u_n^near) are the coefficients to be solved; substituting the solved parameter t into the formula P_next = t·P_near + (1 − t)·P_far gives the next three-dimensional search point P_n^next corresponding to the n-th view angle;
the distance from P_n^next to the current search point is calculated and denoted D_n; after all view angles are calculated, a cluster of distance intervals on the search ray is obtained, and the union of all the distance intervals gives the current maximum search step length D_max:
D_max = D_1 ∪ D_2 ∪ … ∪ D_N;
stepping by this distance along the ray search direction gives the next search point P_next, and from the mapping relationship the distances from its projections to the target boundaries on the two-dimensional images are obtained again from the distance fields; the fifth step is repeated to judge whether the current search point is a target surface point, until the valid interval has been traversed and the target three-dimensional surface model is obtained;
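A sketch of this back-projection step (the coefficient solution above is a reconstruction; here only the u component is used, consistent with the single-equation form, and all names are illustrative):

```python
import numpy as np

def max_search_step(P_cur, P_near, P_far, cameras, u_next):
    """Lift each view's 2-D next search point onto the 3-D ray.

    u_next[n] is the u coordinate of view n's next 2-D search point from
    the fifth step. Solves t = a_n / (a_n + b_n) per view, converts to a
    3-D distance D_n from the current point, and returns their union
    D_max = max_n D_n.
    """
    P_near, P_far = np.asarray(P_near, float), np.asarray(P_far, float)
    D = []
    for M, un in zip(cameras, u_next):
        s_near = M @ np.append(P_near, 1.0)   # z_near * (u_near, v_near, 1)
        s_far = M @ np.append(P_far, 1.0)     # z_far  * (u_far,  v_far,  1)
        z_near, z_far = s_near[2], s_far[2]
        u_near, u_far = s_near[0] / z_near, s_far[0] / z_far
        a = z_far * (u_far - un)              # coefficient a_n
        b = z_near * (un - u_near)            # coefficient b_n
        t = a / (a + b)
        P_n = t * P_near + (1 - t) * P_far    # 3-D point for this view
        D.append(np.linalg.norm(P_n - np.asarray(P_cur, float)))
    return max(D)                             # union of intervals [0, D_n]
```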
the seventh step sorts all the reference camera view angles using, as the measure, the spatial included angle between the virtual camera and each real camera relative to the same voxel point; the view angles may also be sorted by the included angle of the orthographic projections of the virtual and real cameras relative to the same voxel point, or relative to the normal direction of the target three-dimensional surface;
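A sketch of this seventh-step ordering by spatial included angle (camera centers and names are illustrative):

```python
import numpy as np

def rank_views(P, virtual_center, real_centers):
    """Order reference cameras by their angle to the virtual view at P."""
    v = virtual_center - P
    v = v / np.linalg.norm(v)
    angles = []
    for C in real_centers:
        r = C - P
        cos = np.clip(v @ (r / np.linalg.norm(r)), -1.0, 1.0)
        angles.append(np.arccos(cos))         # spatial included angle at P
    return np.argsort(angles)                 # smallest angle = best view
```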
in the eighth step of model rendering, not only the optimal view angle but also the occlusion of the target at that view angle must be considered; a traversal is performed from the target surface voxel point toward the optimal view angle to judge whether other voxel points lie in between; first, the intersection points between the search ray and the reconstruction-region bounding box are calculated to determine the search range;
the ninth step traverses from the reconstructed voxel point toward the current optimal view angle direction and judges whether each traversed three-dimensional point is a voxel point, taking the fifth step as the criterion; if the current search point is a voxel point, an occluding object lies in front of the target at this view angle, so the search at this view angle is terminated and the eighth step is repeated with the next view angle in the optimal camera sequence; otherwise, the search continues along the search direction, the variable step length being computed with the mapping relationship of the sixth step, and the current search point is repeatedly updated and judged; if the traversal leaves the search range without any occluding object appearing, this view angle is selected for texture rendering of the target surface model; once all the voxel points have completed occlusion judgment and rendering coloring, the imaging result of the three-dimensional scene at the virtual view angle is restored.
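Combining the sketches above, an illustrative occlusion march for the eighth and ninth steps (assuming is_surface_point, intersect_aabb, and rank_views as defined earlier; the simple 2-D distance step here stands in for the full sixth-step conversion):

```python
import numpy as np

def first_unoccluded_view(P, ranked, centers, cameras, masks,
                          dist_fields, T, box_min, box_max, eps=1e-3):
    """Return the index of the best view with no occluder in front of P."""
    for idx in ranked:
        n = centers[idx] - P
        n = n / np.linalg.norm(n)
        span = intersect_aabb(P, n, box_min, box_max)
        if span is None:
            return idx                        # ray leaves region at once
        t, t_far = eps, span[1]               # start just off the surface
        occluded = False
        while t < t_far:
            hit, d = is_surface_point(P + t * n, cameras, masks,
                                      dist_fields, T)
            if hit:
                occluded = True               # fore-occluding voxel point
                break
            t += max(max(d), eps)             # distance-field-guided step
        if not occluded:
            return idx                        # render/color with this view
    return None
```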
9. A multi-camera combined fast ray tracing system for performing the multi-camera combined fast ray tracing method of any one of claims 1 to 8, wherein the multi-camera combined fast ray tracing system comprises:
the data preparation module is used for preparing the scene pictures, the target segmentation results, and the calibrated intrinsic and extrinsic parameters of the multiple cameras;
the distance transformation module is used for performing distance transformation on the target segmentation results to obtain, in image coordinates, the distance from each point outside the target contour to the nearest target boundary;
the search ray emission module is used for emitting, from the virtual-viewpoint two-dimensional imaging plane, search rays equal in number to the image resolution;
the bounding-box intersection test module is used for judging, through the bounding-box intersection test, whether the rays emitted from the viewpoint intersect the reconstruction region;
the target surface point judgment module is used for projecting the three-dimensional points on the search rays to each view angle according to the calibrated camera parameters, and judging whether a point is a target surface point according to whether its projections fall inside the segmentation-result targets;
the three-dimensional search step length acquisition module is used for back-projecting the two-dimensional search points of each view angle onto the three-dimensional search ray according to the linear relationship between two-dimensional pixel coordinates and three-dimensional world coordinates, obtaining the three-dimensional search step lengths corresponding to the different view angles, and taking their union as the next search step length; stepping along the ray search direction by this length gives the next search point; the module judges whether the current search point is a target surface point until the valid interval has been traversed, giving the target three-dimensional surface model;
the view angle processing module is used for connecting, on the target three-dimensional surface model, each target surface voxel point to the reference view angles and the virtual view angle, and sorting all the reference view angles with the spatial included angle between the virtual view angle and each reference view angle as the measure; the smaller the spatial included angle, the closer the reference view angle is to the virtual view angle, and the better a choice it is for coloring;
the search range determination module is used for considering, during coloring, not only the view angle but also the occlusion of the target at that view angle, performing occlusion judgment in the sorted reference-view order; the intersection points between the ray from the target three-dimensional surface voxel point toward the current optimal view angle and the bounding box are calculated to determine the search range;
and the voxel point judgment module is used for traversing from the reconstructed target voxel point toward the view angle direction and judging whether the traversed three-dimensional points are voxel points.
10. A terminal for implementing the multi-camera combined fast ray tracing method according to any one of claims 1 to 8, the terminal comprising: a movie and television production terminal, a sports event rebroadcast terminal, a remote education terminal, an online conference terminal, or a medical imaging terminal.
CN202110391588.8A 2021-04-13 2021-04-13 Multi-camera combined rapid ray tracing method, system and application Active CN113240785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110391588.8A CN113240785B (en) 2021-04-13 2021-04-13 Multi-camera combined rapid ray tracing method, system and application


Publications (2)

Publication Number Publication Date
CN113240785A true CN113240785A (en) 2021-08-10
CN113240785B CN113240785B (en) 2024-03-29

Family

ID=77127960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110391588.8A Active CN113240785B (en) 2021-04-13 2021-04-13 Multi-camera combined rapid ray tracing method, system and application

Country Status (1)

Country Link
CN (1) CN113240785B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009097714A1 (en) * 2008-02-03 2009-08-13 Panovasic Technology Co., Ltd. Depth searching method and depth estimating method for multi-viewing angle video image
CN106558092A (en) * 2016-11-16 2017-04-05 北京航空航天大学 A kind of multiple light courcess scene accelerated drafting method based on the multi-direction voxelization of scene
CN110276823A (en) * 2019-05-24 2019-09-24 中国人民解放军陆军装甲兵学院 The integration imaging generation method and system that can be interacted based on ray tracing and in real time

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Qi Shuang; Yu Guangji: "Fast computation of large-scale global illumination with multi-level storage optimization", Bulletin of Surveying and Mapping (测绘通报), no. 03
Zhao Jianwei; Ban Yu; Wang Chao; Yan Shuangsheng: "GPU-based ray tracing algorithm", Ordnance Industry Automation (兵工自动化), no. 05

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681814A (en) * 2022-09-19 2023-09-01 荣耀终端有限公司 Image rendering method and electronic equipment
CN116681814B (en) * 2022-09-19 2024-05-24 荣耀终端有限公司 Image rendering method and electronic equipment

Also Published As

Publication number Publication date
CN113240785B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
Hodaň et al. Photorealistic image synthesis for object instance detection
CN108090947B (en) Ray tracing optimization method for 3D scene
US11302056B2 (en) Techniques for traversing data employed in ray tracing
US7940269B2 (en) Real-time rendering of light-scattering media
US7940268B2 (en) Real-time rendering of light-scattering media
JP4769732B2 (en) A device that realistically displays complex dynamic 3D scenes by ray tracing
US11450057B2 (en) Hardware acceleration for ray tracing primitives that share vertices
CN104361624B (en) The rendering intent of global illumination in a kind of electronic 3-D model
US20230316632A1 (en) Hardware-based techniques applicable for ray tracing for efficiently representing and processing an arbitrary bounding volume
CN110633628B (en) RGB image scene three-dimensional model reconstruction method based on artificial neural network
CN107170037A (en) A kind of real-time three-dimensional point cloud method for reconstructing and system based on multiple-camera
US7209136B2 (en) Method and system for providing a volumetric representation of a three-dimensional object
Mudge et al. Viewpoint quality and scene understanding
CN115170741A (en) Rapid radiation field reconstruction method under sparse visual angle input
Wang et al. Voge: a differentiable volume renderer using gaussian ellipsoids for analysis-by-synthesis
CN113240785B (en) Multi-camera combined rapid ray tracing method, system and application
US20240009226A1 (en) Techniques for traversing data employed in ray tracing
US20220392121A1 (en) Method for Improved Handling of Texture Data For Texturing and Other Image Processing Tasks
Marniok et al. Real-time variational range image fusion and visualization for large-scale scenes using GPU hash tables
US11803998B2 (en) Method for computation of local densities for virtual fibers
Hu et al. Image-based modeling of inhomogeneous single-scattering participating media
CN110889889A (en) Oblique photography modeling data generation method applied to immersive display equipment
Keul et al. Soft shadow computation using precomputed line space visibility information
Bürger et al. GPU Rendering of Secondary Effects.
Huang et al. Traversal fields for ray tracing dynamic scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant