Panoramic prebaking-based quick rendering method and visual imaging system
Technical Field
The invention relates to the visual imaging system of an aviation simulator, and in particular to a quick rendering method based on panoramic prebaking and a corresponding visual imaging system.
Background
At present, the visual imaging system of an aviation simulator adopts a multi-channel distributed real-time rendering mode. Synchronizing special effects such as terrain, three-dimensional clouds, dynamic ocean and wind across the multi-channel views consumes a large amount of CPU and GPU resources. At the same time, the image update rate of the visual imaging system is subject to strict requirements, which can only be met by reducing shadow, mirror-reflection, illumination and texture detail in the virtual scene. As a result, existing visual imaging systems cannot provide a high-fidelity virtual environment.
Disclosure of Invention
Purpose of the invention: to overcome the defects of the prior art, the invention provides a quick rendering method based on panoramic prebaking and a visual imaging system, so as to shorten real-time rendering time and improve rendering fidelity.
Technical scheme: to achieve the above purpose, the invention provides the following technical scheme:
a quick rendering method based on panoramic prebaking comprises the following steps:
(1) the visual imaging system receives task data sent by a flight simulator host in real time, the task data comprising: an LOD strategy file, flight position, flight attitude, and flight history data;
(2) the visual imaging system performs path look-ahead according to the real-time flight position, the real-time flight attitude and the flight history data, predicts the flight path over a future time horizon, and calculates the position and range of a preloading region from the predicted flight path; it then generates an LOD mesh model of the preloading region's position and range using the LOD scheme described in the LOD strategy file, the LOD mesh model being the look-ahead path bounding volume;
(3) the visual imaging system receives a scene scheduling instruction from an instructor station, and caches, from an object database, the static objects and dynamic objects covered by the look-ahead path bounding volume, in the priority order specified by the scene scheduling instruction;
(4) the GPU prebakes the cached data: illumination information matched to the scene is computed from the panoramic high-dynamic-range environment illumination data, and the computed illumination is baked into the model and ground texture maps, yielding terrain, landform and meteorological pictures generated from ground objects and meteorological objects carrying light-map information;
(5) the visual imaging system continues to receive the real-time flight position sent by the flight simulator host; when the aircraft flies into the preloaded region, the GPU is controlled to call the prebaked texture maps directly and render them onto the physical model of the look-ahead path bounding volume.
Specifically, the flight position comprises the longitude, latitude and altitude of the aircraft; the flight attitude comprises: the three attitude angles of the aircraft's direction of travel, its linear velocity, angular acceleration, and flight jitter state.
Specifically, the scene scheduling instruction includes: the scene starting position, meteorological commands, scenario commands, and the flight task commands to be completed by the pilot.
The invention also provides a panoramic-prebaking-based quick rendering visual imaging system for implementing the above method, comprising: a three-channel graphics workstation, and a visual driving unit and a database deployed on the three-channel graphics workstation; the visual driving unit comprises an interactive scheduling engine, a database deployment engine, a scene deployment engine and a rendering engine;
the interactive scheduling engine receives the flight position and flight attitude data transmitted over a local area network by the flight performance simulation computer in the flight simulator host, generates the look-ahead path bounding volume from the received data, and transmits the look-ahead path bounding volume to the database deployment engine and the rendering engine over a local message link;
the database deployment engine receives a scene scheduling instruction from the instructor station, reads the data of the static and dynamic objects covered by the look-ahead path bounding volume from the database and loads it into a designated block in the memory of the visual imaging system, in the priority order specified by the scene scheduling instruction, and then broadcasts the memory address of the look-ahead path bounding volume over the local message link;
the scene deployment engine fetches the data in memory according to the broadcast address, generates scene data for rendering from it, writes the scene data back into the block, and sends the storage address to the rendering engine;
and the rendering engine loads the data stored in the block into the GPU of the visual imaging system for prebaking, and then, when the aircraft flies into the look-ahead path bounding volume, controls the GPU to call the prebaked texture maps directly and render them onto the physical model of the look-ahead path bounding volume.
Further, in the panoramic-prebaking-based quick rendering visual imaging system, the rendering engine also performs the following steps:
according to the data stored in the blocks, the scene to be rendered is divided into near, mid, far and super-far views; the far and super-far views are rendered on a single GPU, the mid view is deployed on its own GPU for prebaked rendering, and the near view is split by frame parity, with consecutive odd and even frames rendered on different GPUs.
Beneficial effects: compared with the prior art, the invention has the following advantages:
By adopting the prebaking technique, the invention reduces the time spent rendering textures in real time, improves the real-time rendering efficiency of the virtual scene, and keeps the image update rate stable at 60 Hz. The panoramic prebaking system offers high realism and performance, and can meet most current and future complex training requirements in terms of scene realism and environment-processing capacity.
Drawings
FIG. 1 is a schematic flow chart according to embodiment 1 of the present invention;
fig. 2 is a functional architecture diagram according to embodiment 2 of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and specific embodiments. It is to be understood that the invention may be embodied in various forms and is not limited to the specific embodiments illustrated; on the contrary, the embodiments shown in the drawings and described below are exemplary and non-limiting.
Where technically feasible, the features of the different embodiments may be combined with each other to form further embodiments within the scope of the invention. The particular examples and embodiments described are non-limiting, and modifications may be made to the structure, steps and sequence set forth above without departing from the scope of the invention.
The invention aims to provide a flight simulator visual system optimization scheme that shortens real-time rendering time and improves rendering fidelity.
In view of this, the invention provides a quick rendering method based on panoramic prebaking and a visual imaging system, described below through two embodiments.
Example 1:
This embodiment provides a quick rendering method based on panoramic prebaking; fig. 1 is a flowchart of the embodiment. As shown, the method comprises the following steps:
step 1, a visual imaging system receives task data sent by a flight simulator host in real time, and the method comprises the following steps: LOD strategy files, flight positions, flight attitudes, and flight history data.
In step 1, the flight position comprises the longitude, latitude and altitude of the aircraft; the flight attitude comprises: the three attitude angles of the aircraft's direction of travel, its linear velocity, angular acceleration, and flight jitter state.
Step 2: the visual imaging system performs path look-ahead according to the real-time flight position, the real-time flight attitude and the flight history data, predicts the flight path over a future time horizon, and calculates the position and range of a preloading region from the predicted flight path; it then generates an LOD mesh model of the preloading region's position and range using the LOD scheme described in the LOD strategy file, the LOD mesh model being the look-ahead path bounding volume.
In this embodiment the horizon is taken as 5 seconds; that is, the generated look-ahead path bounding volume is the spatial range the aircraft will pass through in the next 5 s. The LOD strategy file stores a number of pre-defined LOD (level-of-detail) schemes, which allocate rendering resources according to the position and importance of each node of the object model in the display environment, reducing the face count and detail of unimportant objects to obtain efficient rendering. The look-ahead path bounding volume is obtained by subdividing the spatial model into regular or irregular grids with an LOD technique; the gridded spatial model is the look-ahead path bounding volume. The invention does not limit which LOD technique is used; any prior-art LOD technique falls within the scope of the invention.
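The path look-ahead and preloading-region computation of step 2 can be sketched as simple dead reckoning over the 5 s horizon. The following is an illustrative sketch only, not the patented algorithm: the function name, the constant-heading constant-speed motion model, the local metric frame and the 500 m safety margin are all assumptions made for clarity; a real implementation would also use the angular acceleration and jitter state, and work in geodetic coordinates.

```python
import math

def predict_preload_region(pos, heading_deg, speed_mps, horizon_s=5.0,
                           step_s=0.5, margin_m=500.0):
    """Dead-reckon the flight path over the look-ahead horizon and
    return an axis-aligned bounding box (lo, hi) for the preloading
    region in a local metric frame. pos is (x, y, z) in metres."""
    x, y, z = pos
    heading = math.radians(heading_deg)
    points = [(x, y, z)]
    t = 0.0
    while t < horizon_s:
        t += step_s
        # straight-line extrapolation at constant speed and heading
        points.append((x + speed_mps * t * math.sin(heading),
                       y + speed_mps * t * math.cos(heading),
                       z))
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    zs = [p[2] for p in points]
    # inflate the box by a margin so small prediction errors stay inside
    return ((min(xs) - margin_m, min(ys) - margin_m, min(zs) - margin_m),
            (max(xs) + margin_m, max(ys) + margin_m, max(zs) + margin_m))
```

The returned box is what the LOD strategy file would then be applied to in order to generate the gridded look-ahead path bounding volume.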
Step 3: the visual imaging system receives a scene scheduling instruction from the instructor station, and reads the static and dynamic objects covered by the look-ahead path bounding volume from the object database, caching them in the GPU of the visual imaging system in the priority order specified by the scene scheduling instruction. The scene scheduling instruction comprises the scene starting position, meteorological commands, scenario commands, the flight task commands to be completed by the pilot, and the like.
Step 4: the GPU prebakes the cached data: illumination information matched to the scene is computed from the panoramic high-dynamic-range environment illumination data, and the computed illumination is baked into the model and ground texture maps, yielding terrain, landform and meteorological pictures generated from ground objects and meteorological objects carrying light-map information.
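The prebaking in step 4 amounts to integrating the panoramic environment light against each surface's orientation. The sketch below shows one standard way this is done (cosine-weighted diffuse irradiance gathered from an equirectangular environment map); the function name and the map format are assumptions for illustration, and a production bake would run on the GPU once per light-map texel rather than in Python.

```python
import math

def bake_diffuse_irradiance(env_map, normal):
    """Approximate the diffuse irradiance reaching a surface with the
    given unit normal, by a cosine-weighted sum over an equirectangular
    HDR environment map (a list of rows of RGB tuples)."""
    h = len(env_map)
    w = len(env_map[0])
    total = [0.0, 0.0, 0.0]
    weight = 0.0
    for row in range(h):
        theta = math.pi * (row + 0.5) / h           # polar angle of this row
        sin_t = math.sin(theta)
        for col in range(w):
            phi = 2.0 * math.pi * (col + 0.5) / w   # azimuth of this column
            d = (sin_t * math.cos(phi), math.cos(theta), sin_t * math.sin(phi))
            cos_l = sum(a * b for a, b in zip(d, normal))
            if cos_l <= 0.0:
                continue                             # light arriving from behind
            solid_angle = sin_t                      # texel solid angle, up to constants
            for c in range(3):
                total[c] += env_map[row][col][c] * cos_l * solid_angle
            weight += cos_l * solid_angle
    return tuple(t / weight for t in total) if weight else (0.0, 0.0, 0.0)
```

The baked result per texel is what gets packaged into the light maps that step 5 later renders directly, with no runtime lighting cost.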
Step 5: the visual imaging system continues to receive the real-time flight position sent by the flight simulator host; when the aircraft flies into the preloaded region, the GPU is controlled to call the prebaked texture maps directly and render them onto the physical model of the look-ahead path bounding volume.
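The trigger condition in step 5 reduces to a containment test of the real-time position against the preloaded region. A minimal sketch, assuming the region is represented as the axis-aligned bounding box described above (the representation is an assumption; the patent does not fix one):

```python
def inside_region(pos, region):
    """True once the real-time flight position (x, y, z) has entered
    the preloaded look-ahead region, given as an axis-aligned
    bounding box (lo, hi)."""
    lo, hi = region
    return all(lo[i] <= pos[i] <= hi[i] for i in range(3))
```

When this returns True, the system switches from prebaking to playback: the GPU samples the already-baked texture maps instead of recomputing the lighting.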
Example 2:
This embodiment proposes a visual imaging system for implementing the method of embodiment 1. A typical visual imaging system consists of an aircraft performance simulation system, an instructor station, a three-channel graphics workstation and a display system. The three-channel graphics workstation comprises the workstation hardware plus the visual driving module and database deployed on it. To implement the panoramic prebaking quick rendering method, the three-channel graphics workstation in this embodiment is a GPU graphics workstation with six RTX 2080 graphics cards, 64 GB of host memory, and a 64-bit Windows 10 operating system.
In this embodiment, the visual driving module is designed so that it can perform scene prebaking; its functional architecture is shown in fig. 2 and comprises: an interactive scheduling engine, a database deployment engine, a scene deployment engine and a rendering engine.
1) Interactive scheduling engine
The interactive scheduling engine receives, over a local area network, the flight position (longitude, latitude and altitude) and attitude (the three attitude angles of the aircraft's direction of travel, linear velocity, angular acceleration and flight jitter state) transmitted by the flight performance simulation computer (the host computer) in the flight simulator host, predicts the flight path and view range for the next 5 seconds, and transmits the prediction result to the database deployment engine and the rendering engine over a local message link.
The interactive scheduling engine also acquires the scene scheduling instructions (starting position, meteorological instructions, scenario instructions and task instructions) from the instructor station and forwards them to the database deployment engine.
The interactive scheduling engine also feeds back the current terrain, distance and environmental-field information to the host computer.
The following are sent over the local message link: the longitude, latitude and altitude signals; the three attitude angles of the aircraft's direction of travel, linear velocity, angular acceleration and flight jitter state; the predicted loading-area position and range (the priority, area number, limit number and task-scene number of each object in the area); the rendering-priority LOD position and range; and scene loading instructions.
2) Database deployment engine
The database deployment engine obtains the position and range of the predicted loading area from the interactive scheduling engine, reads the static and dynamic objects covered by the loading area from the solid-state disk array and loads them, in priority order, into designated blocks in memory, then broadcasts the memory address of the area over the local message link.
In the database memory, each type of data is stored in a dedicated memory block in an agreed format: data tied to geographic position is bound to the area number, dynamic objects are bound to the task scene, meteorological environment data is bound to the spatial time sequence, and special effects are bound to events. When a region loading instruction is received, the database deployment engine loads the different region data into the designated memory blocks in priority order.
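The priority-ordered loading described above can be sketched as follows. This is an illustrative model only: the function names, the `(priority, region_id)` request format and the dict standing in for the memory-block table are assumptions, not the patent's data layout.

```python
def load_regions(load_requests, read_from_disk):
    """Load region data into dedicated memory blocks in priority order.
    load_requests: iterable of (priority, region_id) pairs, where a
    lower number means higher priority. read_from_disk: callable that
    maps a region_id to its data (standing in for the SSD array read).
    Returns {region_id: data}, modelling the in-memory block table
    whose addresses are then broadcast on the local message link."""
    memory_blocks = {}
    for priority, region_id in sorted(load_requests):
        if region_id not in memory_blocks:   # skip regions already cached
            memory_blocks[region_id] = read_from_disk(region_id)
    return memory_blocks
```

Sorting on the priority field ensures the highest-priority regions reach memory first, so the scene deployment engine can begin work on them while lower-priority regions are still loading.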
The database deployment engine is divided into a database parallel loading module, a database material management module and a database parameter setting module, whose functions are as follows:
the database parallel loading module: loads data into memory;
the database material management module: manages materials and loads them from disk;
the database parameter setting module: sets the load-range threshold.
3) Scene deployment engine
The scene deployment engine generates scene data for rendering from the data in the memory blocks, including mesh vertex data, texture-map data, texture data and physical data. It generates animation nodes from the motion tracks of moving objects, interpolates data whose time resolution is insufficient, generates special-effect trigger events, and sequences multiple events in time.
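The interpolation of under-sampled motion tracks can be sketched as linear resampling of sparse keyframes onto a fixed timestep. The function name and the `(time, value)` track format are assumptions for illustration; a real deployment engine would interpolate full poses (and might use splines rather than linear segments).

```python
def interpolate_track(keyframes, dt):
    """Linearly resample a sparse motion track (a sorted list of
    (t, value) keyframes) onto a fixed timestep dt, filling in the
    animation nodes where the source data's time resolution is
    insufficient."""
    out = []
    t0, t_end = keyframes[0][0], keyframes[-1][0]
    i = 0
    t = t0
    while t <= t_end + 1e-9:
        # advance to the keyframe segment containing time t
        while i + 1 < len(keyframes) and keyframes[i + 1][0] < t:
            i += 1
        (ta, va) = keyframes[i]
        (tb, vb) = keyframes[min(i + 1, len(keyframes) - 1)]
        alpha = 0.0 if tb == ta else (t - ta) / (tb - ta)
        out.append((t, va + alpha * (vb - va)))
        t += dt
    return out
```

The densified track then yields one animation node per render tick, which is what makes the time-sequenced triggering of multiple events possible.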
The working principle of the scene deployment engine is as follows:
The scene deployment engine locates the corresponding data blocks in memory using the area numbers and content indexes received from the database deployment engine. It then generates the regional terrain and landform model whose vertices carry texture-map materials; generates the corresponding repeated models from vegetation, road and vector data; generates the corresponding water areas and water-land transition zones from water-body vector data; fills in the corresponding models for ground buildings and static vehicles; generates the corresponding meteorological-field data and detailed cloud volume data according to the scenario time sequence; and generates dynamic node updates from the dynamic-node time-sequence motion instructions. All models and data are then written into memory for the rendering engine to call, and the address information is sent to the local message link.
The scene deployment engine comprises the following parts:
a scene deployment sub-engine, for generating the vertices with texture-map information used for rendering within the scene;
a meteorological deployment engine, for generating the meteorological data and three-dimensional cloud volume data in the rendered scene;
a dynamic object deployment engine, for generating the dynamic node set and the special-effect event time-sequence objects for rendering;
a scene parameter setting module, for setting all parameters, algorithms, thresholds and logic within the scene deployment engine.
4) Rendering engine
The rendering engine renders the displayed picture from the viewpoint using the data generated by the scene deployment engine and completes the prebaking; when the aircraft flies into the look-ahead path bounding volume, it controls the GPU to call the prebaked texture maps directly and render them onto the physical model of the look-ahead path bounding volume, and finally sends the rendered picture data over the system bus to the output picture management system.
This embodiment adopts a preferred technical scheme, layered rendering, with the following specific steps:
and the rendering engine splits rendering tasks running in different modules according to the visual range LOD of the viewpoint, calls data contents touched by each LOD according to the level range of the LOD, and renders and generates four layers of pictures of a near view, a middle view, a far view and an ultra-far view in different rendering modules.
The implementation principle is as follows:
and after receiving the address information needing to be processed, the rendering engine generates 10 layers of LOD segmentation according to the distance between the viewpoint and the geographic position of the data range, wherein 1-7 levels are near scenes, 8 levels are medium scenes, 9 levels are far scenes, and 10 levels are ultra-far scenes.
The far- and super-far-view computation is deployed on the output-graphics GPU, which generates the global illumination model and the background panorama from global illumination and astronomical information.
The mid-view computation is deployed on a dedicated mid-view GPU, which prepares, by panoramic prebaking, the mid-view terrain, landform and meteorological pictures generated from the ground objects and meteorological objects carrying light-map information.
The near-view computation is deployed on 4 GPUs, which carry out optical rendering such as reflection, refraction, scattering, shadow projection and emission using RTX real-time ray tracing, and generate the equidistant continuous pictures needed to cover the view-angle range using an odd/even-frame alternate rendering strategy.
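The patent does not spell out how frames map onto the 4 near-view GPUs; one plausible scheme consistent with "consecutive odd and even frames are rendered on different GPUs" is to give even frames to one pair of GPUs and odd frames to the other, rotating within each pair. The sketch below is that assumption made concrete, not the patented assignment:

```python
def assign_near_view_gpu(frame_index, gpus_per_parity=2):
    """Hypothetical odd/even alternate-rendering assignment: even
    frames go to GPUs 0..gpus_per_parity-1, odd frames to the next
    pair, rotating within each pool so that consecutive frames of the
    same parity land on different GPUs."""
    parity = frame_index % 2                      # 0 = even pool, 1 = odd pool
    slot = (frame_index // 2) % gpus_per_parity   # rotate inside the pool
    return parity * gpus_per_parity + slot
```

Under this scheme frames 0..7 land on GPUs 0, 2, 1, 3, 0, 2, 1, 3, so each GPU gets a full frame interval of ray-tracing time while the output stream stays continuous.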
In this embodiment, the rendering engine is divided into:
the far- and super-far-view rendering engine (background rendering engine): generates the background panorama;
the mid- and near-view rendering engine: generates the field-of-view picture;
the rendering dynamic policy management module: dynamically adjusts the distribution of rendering tasks according to predicted system load;
the enhanced-view rendering engine: generates enhanced visual pictures such as radar, low-light night vision and infrared night vision.
Finally, the far, mid and near view pictures are combined into a complete frame on a single image-processing GPU. The frame is cut into several independent images according to the attributes of each channel's virtual camera, corrected and fused by the edge-blending and brightness-fusion software built into the visual management software, and the final image is output to the display devices through the graphics card's DP ports.
The technical features of the embodiments described above may be combined arbitrarily; for brevity, not all possible combinations are described, but any combination involving no contradiction should be considered within the scope of this specification.
The embodiments above express only several implementations of the invention, and their description, while specific and detailed, is not to be construed as limiting the scope of the invention. A person skilled in the art can make several variations and improvements without departing from the inventive concept, all of which fall within the scope of the invention. The protection scope of this patent is therefore subject to the appended claims.