CN111798364A - Panoramic prebaking-based quick rendering method and visual imaging system - Google Patents

Panoramic prebaking-based quick rendering method and visual imaging system

Info

Publication number
CN111798364A
CN111798364A (application CN202010937606.3A)
Authority
CN
China
Prior art keywords
flight
scene
data
imaging system
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010937606.3A
Other languages
Chinese (zh)
Other versions
CN111798364B (en)
Inventor
李吉磊
张雪松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Aircraft Industry Co Ltd
Original Assignee
JIANGSU PUXU SOFTWARE INFORMATION TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by JIANGSU PUXU SOFTWARE INFORMATION TECHNOLOGY CO LTD
Priority to CN202010937606.3A
Publication of CN111798364A
Application granted
Publication of CN111798364B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 - Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/06 - Ray-tracing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a panoramic prebaking-based fast rendering method and a visual imaging system. The visual imaging system receives mission data sent by the flight simulator host in real time, predicts from these data the position and range of the preloading region that the flight path will pass through in the near future, and generates a look-ahead path bounding volume. According to a scene scheduling instruction received from the instructor station, the static and dynamic objects in the object database that are covered by the look-ahead path bounding volume are cached in the GPU of the imaging system for prebaking, yielding terrain and ground objects carrying illumination map information as well as the weather imagery generated by the meteorological objects. When the aircraft flies into the preloading region, the GPU is controlled to directly call the pre-rendered texture maps and render them onto the physical model of the look-ahead path bounding volume. By adopting the prebaking technique, the invention improves the real-time rendering efficiency of the virtual scene and enhances the fidelity of the rendered images while keeping the image update rate stable.

Description

Panoramic prebaking-based quick rendering method and visual imaging system
Technical Field
The invention relates to the visual imaging system of an aviation simulator, and in particular to a panoramic prebaking-based fast rendering method and a visual imaging system.
Background
At present, the visual imaging system of an aviation simulator uses multi-channel distributed real-time rendering. Keeping effects such as terrain, three-dimensional clouds, dynamic ocean and wind synchronized across the channels consumes a large amount of CPU and GPU resources, while the image update rate of the visual system must remain high. Existing systems can therefore only meet the update-rate requirement by reducing shadow, reflection, illumination and texture detail in the virtual scene, and as a result cannot provide a virtual environment of high fidelity.
Disclosure of Invention
The purpose of the invention is as follows: in order to overcome the defects of the prior art, the invention provides a panoramic prebaking-based fast rendering method and a visual imaging system, so as to shorten real-time rendering time and improve rendering fidelity.
The technical solution is as follows: to achieve the above purpose, the invention provides the following technical solution:
A fast rendering method based on panoramic prebaking comprises the following steps:
(1) the visual imaging system receives, in real time, mission data sent by a flight simulator host, the mission data including: an LOD strategy file, flight position, flight attitude and flight history data;
(2) the visual imaging system performs path look-ahead based on the real-time flight position, flight attitude and flight history data, predicts the flight path over a future time window, and computes the position and range of a preloading region from the predicted path; it then generates an LOD mesh model of the preloading region's position and range using the LOD scheme described in the LOD strategy file, this LOD mesh model being the look-ahead path bounding volume;
(3) the visual imaging system receives a scene scheduling instruction from an instructor station and caches the static and dynamic objects in the object database that are covered by the look-ahead path bounding volume, in the priority order specified by the scene scheduling instruction;
(4) the GPU prebakes the cached data: it processes the panoramic high-dynamic-range environment illumination information to generate illumination information matched to the scene, and bakes the computed illumination onto the models and ground texture maps, obtaining terrain and ground objects carrying illumination map information as well as the weather imagery generated by the meteorological objects;
(5) the visual imaging system continues to receive the real-time flight position sent by the flight simulator host; when the aircraft flies into the preloading region, the GPU is controlled to directly call the pre-rendered texture maps and render them onto the physical model of the look-ahead path bounding volume.
Specifically, the flight position comprises the longitude, latitude and altitude of the aircraft; the flight attitude comprises: the three attitude angles of the aircraft's direction of travel, linear velocity, angular acceleration and flight jitter state.
Specifically, the scene scheduling instruction includes: the scene starting position, meteorological commands, scenario commands, and the flight mission commands to be completed by the pilot.
The invention also provides a panoramic prebaking-based fast rendering visual imaging system for implementing the above method, comprising: a three-channel graphics workstation, and a visual driving unit and a database deployed on the three-channel graphics workstation; the visual driving unit comprises an interactive scheduling engine, a database deployment engine, a scene deployment engine and a rendering engine;
the interactive scheduling engine receives the flight position and flight attitude data transmitted over a local area network by the flight performance simulation computer in the flight simulator host, generates the look-ahead path bounding volume from the received data, and transmits it to the database deployment engine and the rendering engine over a local message link;
the database deployment engine receives the scene scheduling instruction from the instructor station, reads the data of the static and dynamic objects covered by the look-ahead path bounding volume from the database and loads them into designated blocks in the memory of the visual imaging system, in the priority order specified by the scene scheduling instruction, and then broadcasts the memory address of the look-ahead path bounding volume over the local message link;
the scene deployment engine fetches the data in memory according to the broadcast address, generates from them the scene data used for rendering, writes the scene data into the blocks, and sends the storage address to the rendering engine;
the rendering engine loads the data stored in the blocks into the GPU of the visual imaging system for prebaking, and then, when the aircraft flies into the look-ahead path bounding volume, controls the GPU to directly call the prebaked texture maps and render them onto the physical model of the look-ahead path bounding volume.
Further, in the panoramic prebaking-based fast rendering visual imaging system, the rendering engine also performs the following steps:
according to the data stored in the blocks, the scene to be rendered is divided into a near view, a middle view, a far view and an ultra-far view; the far and ultra-far views are rendered on a single GPU, the middle view is deployed on its own GPU for prebaked rendering, and the near view is split into odd and even frames, with consecutive odd and even frames rendered on different GPUs.
Advantageous effects: compared with the prior art, the invention has the following advantages:
By adopting the prebaking technique, the invention reduces the time spent rendering textures in real time, improves the real-time rendering efficiency of the virtual scene, and keeps the image update rate stable at 60 Hz. The panoramic prebaking generation system offers high scene realism, strong environment-processing capacity and high performance, and can meet most current and future complex training requirements.
Drawings
FIG. 1 is a schematic flow chart according to embodiment 1 of the present invention;
fig. 2 is a functional architecture diagram according to embodiment 2 of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and specific embodiments. It is to be understood that the present invention may be embodied in various forms, and that there is no intention to limit the invention to the specific embodiments illustrated, but on the contrary, the intention is to cover some exemplary and non-limiting embodiments shown in the attached drawings and described below.
It is to be understood that the features listed above for the different embodiments may be combined with each other to form further embodiments within the scope of the invention, where technically feasible. Furthermore, the particular examples and embodiments of the invention described are non-limiting, and various modifications may be made in the structure, steps, and sequence set forth above without departing from the scope of the invention.
The invention aims to provide a flight simulator visual system optimization scheme that shortens real-time rendering time and improves rendering fidelity.
To this end, the present invention provides a panoramic prebaking-based fast rendering method and a visual imaging system, which are described below through two embodiments.
Example 1:
The present embodiment provides a panoramic prebaking-based fast rendering method; FIG. 1 is a flowchart of this embodiment. As shown, the method comprises the following steps:
Step 1: the visual imaging system receives, in real time, mission data sent by the flight simulator host, the mission data including: an LOD strategy file, flight position, flight attitude and flight history data.
In step 1, the flight position comprises the longitude, latitude and altitude of the aircraft; the flight attitude comprises: the three attitude angles of the aircraft's direction of travel, linear velocity, angular acceleration and flight jitter state.
Step 2: the visual imaging system performs path look-ahead based on the real-time flight position, flight attitude and flight history data, predicts the flight path over a future time window, and computes the position and range of the preloading region from the predicted path; it then generates an LOD mesh model of the preloading region's position and range using the LOD scheme described in the LOD strategy file, and this LOD mesh model is the look-ahead path bounding volume.
In this embodiment the prediction horizon is 5 seconds; that is, the generated look-ahead path bounding volume is the spatial range the aircraft will pass through in the next 5 s. The LOD strategy file stores a number of pre-defined LOD (level-of-detail) schemes, which allocate rendering resources to each object according to the position and importance of the object model's nodes in the displayed environment and reduce the face count and level of detail of unimportant objects so that rendering remains efficient. The look-ahead path bounding volume is obtained by subdividing the spatial model into regular or irregular grids with the LOD technique; the gridded spatial model is the look-ahead path bounding volume. The invention does not restrict the choice of LOD technique, and any prior-art LOD technique falls within its scope.
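By way of non-limiting illustration only, the following Python sketch shows one simple way such a path look-ahead could be computed: the current position and velocity are extrapolated over the 5 s horizon by dead reckoning and expanded into an axis-aligned preload region. The function and field names (for example predict_preload_region and FlightState) and the flat-earth conversion are assumptions made for this sketch and do not appear in the embodiment.

```python
import math
from dataclasses import dataclass

EARTH_RADIUS_M = 6_371_000.0

@dataclass
class FlightState:
    lon_deg: float      # flight position: longitude
    lat_deg: float      # flight position: latitude
    alt_m: float        # flight position: altitude
    heading_deg: float  # one of the attitude angles of the direction of travel
    speed_mps: float    # linear velocity

def predict_preload_region(state: FlightState, horizon_s: float = 5.0,
                           samples: int = 10, margin_m: float = 500.0):
    """Extrapolate the flight path for `horizon_s` seconds and return the
    bounding ranges (lon, lat, alt) of the region to preload."""
    pts = []
    hdg = math.radians(state.heading_deg)
    for i in range(samples + 1):
        d = state.speed_mps * horizon_s * i / samples   # distance travelled so far
        # flat-earth approximation: metres to degrees around the current latitude
        dlat = (d * math.cos(hdg)) / EARTH_RADIUS_M
        dlon = (d * math.sin(hdg)) / (EARTH_RADIUS_M * math.cos(math.radians(state.lat_deg)))
        pts.append((state.lon_deg + math.degrees(dlon),
                    state.lat_deg + math.degrees(dlat),
                    state.alt_m))
    lons, lats, alts = zip(*pts)
    pad_deg = margin_m / 111_000.0                      # rough metres-per-degree padding
    return {"lon": (min(lons) - pad_deg, max(lons) + pad_deg),
            "lat": (min(lats) - pad_deg, max(lats) + pad_deg),
            "alt": (min(alts) - margin_m, max(alts) + margin_m)}

if __name__ == "__main__":
    region = predict_preload_region(
        FlightState(lon_deg=118.8, lat_deg=32.0, alt_m=3000.0,
                    heading_deg=90.0, speed_mps=200.0))
    print(region)   # bounding ranges of the 5 s look-ahead region
```

The resulting bounding ranges would then be gridded by the chosen LOD scheme to form the look-ahead path bounding volume.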
Step 3: the visual imaging system receives a scene scheduling instruction from the instructor station and caches, from the object database into the GPU of the imaging system, the static and dynamic objects covered by the look-ahead path bounding volume, in the priority order specified by the scene scheduling instruction. The scene scheduling instruction includes the scene starting position, meteorological commands, scenario commands, the flight mission commands to be completed by the pilot, and the like.
Step 4: the GPU prebakes the cached data: it processes the panoramic high-dynamic-range environment illumination information to generate illumination information matched to the scene, and bakes the computed illumination onto the models and ground texture maps, obtaining terrain and ground objects carrying illumination map information as well as the weather imagery generated by the meteorological objects.
Step 5: the visual imaging system continues to receive the real-time flight position sent by the flight simulator host; when the aircraft flies into the preloading region, the GPU is controlled to directly call the pre-rendered texture maps and render them onto the physical model of the look-ahead path bounding volume.
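By way of non-limiting illustration of the prebaking in step 4, the following Python sketch shows the general idea of baking lighting into a texture map once so that the real-time pass only samples the baked map. The Lambertian shading term, the single directional light and all names are assumptions for this sketch; the embodiment itself operates on panoramic high-dynamic-range environment illumination and does not prescribe a shading model.

```python
# Offline "bake": evaluate lighting once and multiply it into the surface map,
# so the runtime pass reduces to a plain texture fetch of the baked result.
def bake_lightmap(albedo, normals, light_dir, light_color, ambient=0.2):
    """albedo: H x W x 3 texels in [0, 1]; normals: H x W x 3 unit vectors.
    Returns a baked H x W x 3 texture with lighting already applied."""
    lx, ly, lz = light_dir
    norm = (lx * lx + ly * ly + lz * lz) ** 0.5
    lx, ly, lz = lx / norm, ly / norm, lz / norm
    baked = []
    for row_a, row_n in zip(albedo, normals):
        out_row = []
        for (r, g, b), (nx, ny, nz) in zip(row_a, row_n):
            diffuse = max(0.0, nx * lx + ny * ly + nz * lz)   # Lambertian term
            k = ambient + (1.0 - ambient) * diffuse
            out_row.append((min(1.0, r * k * light_color[0]),
                            min(1.0, g * k * light_color[1]),
                            min(1.0, b * k * light_color[2])))
        baked.append(out_row)
    return baked

albedo  = [[(0.5, 0.4, 0.3)] * 4] * 4          # tiny 4x4 ground texture
normals = [[(0.0, 0.0, 1.0)] * 4] * 4          # flat, up-facing surface
lightmap = bake_lightmap(albedo, normals, light_dir=(0.3, 0.2, 0.9),
                         light_color=(1.0, 0.95, 0.9))
```

At run time, "directly calling the pre-rendered map information" in step 5 then amounts to sampling lightmap instead of recomputing the lighting per frame.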
Example 2:
This embodiment proposes a visual imaging system for implementing the method described in embodiment 1. A typical visual imaging system consists essentially of an aircraft performance simulation system, an instructor station, a three-channel graphics workstation and a display system. The three-channel graphics workstation comprises the workstation hardware together with a visual driver module and a database deployed on that hardware. To implement the panoramic prebaking fast rendering method, the three-channel graphics workstation in this embodiment is a GPU graphics workstation with six RTX 2080 graphics cards, 64 GB of host memory and a 64-bit Windows 10 operating system.
In this embodiment the visual driver module is optimized so that it can perform scene prebaking; its functional architecture, shown in FIG. 2, comprises: an interactive scheduling engine, a database deployment engine, a scene deployment engine and a rendering engine.
1) Interactive scheduling engine
The interactive scheduling engine receives, over a local area network, the flight position (longitude, latitude and altitude signals) and attitude (the three attitude angles of the aircraft's direction of travel, linear velocity, angular acceleration and flight jitter state) transmitted by the flight performance simulation computer (the host computer) in the flight simulator host, predicts the flight path and visual range for the next 5 seconds, and transmits the prediction result to the database deployment engine and the rendering engine over a local message link.
The interactive scheduling engine also acquires the scene scheduling instructions (starting position, meteorological instructions, scenario instructions and mission instructions) from the instructor station and sends them to the database deployment engine.
The interactive scheduling engine additionally feeds the current terrain and distance information, the environmental field and other information back to the host computer.
The following are sent to the local message link: the longitude, latitude and altitude signals, the three attitude angles of the aircraft's direction of travel, the linear velocity, angular acceleration and flight jitter state, the predicted loading-area position and range (with the priority, area number, limit number and task scene number of each object in the area), the rendering-priority LOD position and range, and the scene loading instructions.
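By way of non-limiting illustration, the following Python sketch shows one possible layout for such a message on the local message link, mirroring the items listed above. The field names and the JSON serialization are assumptions made for this sketch; the embodiment does not define a wire format.

```python
import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class PreloadArea:
    area_number: int
    priority: int
    limit_number: int        # cap on objects loaded for this area
    task_scene_number: int
    lon_range: List[float]   # [min, max]
    lat_range: List[float]
    alt_range: List[float]

@dataclass
class SchedulingMessage:
    lon_deg: float
    lat_deg: float
    alt_m: float
    attitude_deg: List[float]        # three angles of the direction of travel
    linear_speed_mps: float
    angular_accel: List[float]
    jitter_state: int
    preload_areas: List[PreloadArea] = field(default_factory=list)
    lod_priority: List[int] = field(default_factory=list)
    scene_load_cmds: List[str] = field(default_factory=list)

def publish(msg: SchedulingMessage) -> bytes:
    """Serialize the message for broadcast on the local (in-host) message link."""
    return json.dumps(asdict(msg)).encode("utf-8")

msg = SchedulingMessage(118.8, 32.0, 3000.0, [90.0, 2.0, 0.5], 200.0,
                        [0.0, 0.1, 0.0], 0,
                        preload_areas=[PreloadArea(12, 1, 200, 3,
                                                   [118.8, 118.9], [32.0, 32.1],
                                                   [2500.0, 3500.0])],
                        lod_priority=[1, 2, 3],
                        scene_load_cmds=["load_region 12"])
payload = publish(msg)
```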
2) Database deployment engine
The database deployment engine obtains the position and range of the predicted loading area from the interactive scheduling engine, reads the static and dynamic objects covered by the loading area from the solid-state disk array and loads them, by priority, into designated blocks in memory, and then broadcasts the memory address of the area over the local message link.
In the database memory, each kind of data is stored in a dedicated memory block in an agreed format: data tied to a geographic position are bound to an area number, dynamic objects are bound to the task scenario, meteorological environment data are bound to a spatio-temporal sequence, and special effects are bound to events. When a region loading instruction is received, the database deployment engine loads the data of the different regions into the designated memory blocks by priority.
The database deployment engine is divided into a database parallel loading module, a database material management module and a database parameter setting module, whose functions are as follows:
database parallel loading module: loads data into memory;
database material management module: manages materials and loads them from the hard disk;
database parameter setting module: sets the loading range threshold.
3) Scene deployment engine
The scene deployment engine generates the scene data used for rendering from the data in the memory blocks, including mesh vertex data, map data, texture data, physical data and the like. It generates animation nodes from the motion tracks of moving objects, interpolates data whose time resolution is insufficient, generates special-effect trigger events, and realizes the time-sequenced invocation of multiple events.
The working principle of the scene deployment engine is as follows:
The scene deployment engine locates the corresponding data blocks in memory according to the area numbers and content indexes received from the database deployment engine. It generates a regional terrain and landform model whose vertices carry the assigned map materials, generates the corresponding instanced models from vegetation, road, vector and similar data, generates the corresponding water areas and water-land transition zones from the water-body vector data, places the corresponding models for ground buildings and static vehicles, generates the corresponding meteorological field data and detailed cloud volume data according to the scenario time sequence, and generates dynamic node updates according to the time-sequenced motion instructions of the dynamic nodes. It then writes all models and data into memory for the rendering engine to call, and sends the address information to the local message link.
The scene deployment engine comprises the following parts:
a scene deployment engine, which generates the vertices with map information used for rendering within the scene;
a meteorological deployment engine, which generates the meteorological data and three-dimensional cloud volume data of the rendered scene;
a dynamic-object deployment engine, which generates the dynamic node set and the special-effect event time-sequence objects used for rendering;
a scene parameter setting module, which sets all parameters, algorithms, thresholds and logic within the scene deployment engine.
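By way of non-limiting illustration of the interpolation mentioned above for data with insufficient time resolution, the following Python sketch linearly interpolates a moving object's coarse track at the times the renderer needs. Linear interpolation and the function name interpolate_track are assumptions for this sketch; the embodiment does not specify the interpolation scheme.

```python
import bisect

def interpolate_track(track, t):
    """track: list of (time_s, (x, y, z)) samples sorted by time.
    Returns the interpolated position at time t."""
    times = [s[0] for s in track]
    if t <= times[0]:
        return track[0][1]
    if t >= times[-1]:
        return track[-1][1]
    i = bisect.bisect_right(times, t)
    t0, p0 = track[i - 1]
    t1, p1 = track[i]
    a = (t - t0) / (t1 - t0)
    return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))

# A 1 Hz track resampled at a renderer sub-step.
track = [(0.0, (0.0, 0.0, 3000.0)), (1.0, (200.0, 0.0, 3000.0))]
print(interpolate_track(track, 0.25))   # (50.0, 0.0, 3000.0)
```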
4) Rendering engine
The rendering engine renders the displayed picture from the viewpoint using the data generated by the scene deployment engine and completes the prebaking; when the aircraft flies into the look-ahead path bounding volume, it controls the GPU to directly call the prebaked texture maps and render them onto the physical model of the look-ahead path bounding volume, and finally sends the rendered picture data to the output picture management system over the system bus.
In this embodiment a preferred technique, layered rendering, is adopted; the specific steps are as follows:
The rendering engine splits the rendering tasks running in different modules according to the viewpoint's visual-range LOD, calls the data content covered by each LOD according to its level range, and renders the four picture layers of near view, middle view, far view and ultra-far view in different rendering modules.
The realization principle is as follows:
and after receiving the address information needing to be processed, the rendering engine generates 10 layers of LOD segmentation according to the distance between the viewpoint and the geographic position of the data range, wherein 1-7 levels are near scenes, 8 levels are medium scenes, 9 levels are far scenes, and 10 levels are ultra-far scenes.
And the operation of the long shot and the ultra long shot is deployed in an output graphics GPU, and a global illumination model and a background panorama are generated according to global illumination and astronomical information.
And the operation of the middle scene is deployed in a single middle scene GPU, and the terrain, the landform and the aerial image picture of the middle scene generated by the ground object and the aerial image object with the illumination mapping information are prepared in a panoramic prebaking mode.
The close-range operation is deployed in 4 GPUs, optical rendering such as reflection, refraction, scattering, projection and emission is realized according to a rendering mode of RTX real-time ray tracing, and equidistant continuous pictures required by a coverage angle range are generated according to a strategy of odd-even frame alternate rendering.
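By way of non-limiting illustration, the following Python sketch maps the 10 LOD levels to GPUs in the manner described above: far and ultra-far view on one output GPU, middle view on a dedicated GPU, and near view rotated across a pool of four GPUs so that consecutive odd and even frames land on different devices. The GPU identifiers are assumptions for this sketch.

```python
NEAR_GPUS = ["gpu2", "gpu3", "gpu4", "gpu5"]   # 4 GPUs for the ray-traced near view

def assign_gpu(lod_level: int, frame_index: int) -> str:
    if 1 <= lod_level <= 7:
        # near view: consecutive (odd/even) frames rotate across the GPU pool
        return NEAR_GPUS[frame_index % len(NEAR_GPUS)]
    if lod_level == 8:
        return "gpu1"    # middle view: prebaked on a dedicated GPU
    return "gpu0"        # far and ultra-far views share the output GPU

for frame in range(4):
    print(frame, [assign_gpu(level, frame) for level in (3, 8, 9, 10)])
```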
In this embodiment, the rendering engine is divided into:
perspective and super-perspective rendering engines (background rendering engines): generating a background panorama;
a medium shot close shot rendering engine: generating a view field picture;
rendering dynamic policy management module: dynamically adjusting the distribution of rendering tasks according to the system load prediction;
an enhanced view rendering engine: and (4) generating enhanced visual pictures such as radar, low-light night vision, infrared night vision and the like.
And finally, combining the far, middle and near scene graphs into a complete frame in a single image processing GPU. And cutting the picture into a plurality of independent image data according to the attribute of each channel virtual camera, carrying out correction fusion through edge adjustment and brightness fusion software built in the vision management software, and outputting a final image to the display equipment through a display card DP port.
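By way of non-limiting illustration, the following Python sketch shows a back-to-front composite of the far, middle and near layers into one frame, as performed on the image-processing GPU. The straight "over" blending operator and the per-texel alpha are assumptions for this sketch; the embodiment does not specify the blending formula.

```python
def composite(layers):
    """layers: list of H x W grids of ((r, g, b), alpha), ordered far -> near."""
    h, w = len(layers[0]), len(layers[0][0])
    out = [[(0.0, 0.0, 0.0) for _ in range(w)] for _ in range(h)]
    for layer in layers:                      # paint far first, near last
        for y in range(h):
            for x in range(w):
                (r, g, b), a = layer[y][x]
                br, bg, bb = out[y][x]
                out[y][x] = (r * a + br * (1 - a),
                             g * a + bg * (1 - a),
                             b * a + bb * (1 - a))
    return out

far  = [[((0.2, 0.3, 0.8), 1.0)]]             # 1x1 example texel per layer
mid  = [[((0.1, 0.6, 0.1), 0.5)]]
near = [[((0.9, 0.9, 0.9), 0.25)]]
print(composite([far, mid, near]))
```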
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not every possible combination is described; nevertheless, any combination of these features that contains no contradiction should be regarded as within the scope of this specification.
The above embodiments describe only several implementations of the present invention, and although the description is specific and detailed, it should not be construed as limiting the scope of the invention. A person skilled in the art can make a number of variations and improvements without departing from the inventive concept, and all of these fall within the protection scope of the invention. The protection scope of this patent shall therefore be defined by the appended claims.

Claims (5)

1. A fast rendering method based on panoramic prebaking, characterized by comprising the following steps:
(1) the visual imaging system receives, in real time, mission data sent by a flight simulator host, the mission data including: an LOD strategy file, flight position, flight attitude and flight history data;
(2) the visual imaging system performs path look-ahead based on the real-time flight position, flight attitude and flight history data, predicts the flight path over a future time window, and computes the position and range of a preloading region from the predicted path; it then generates an LOD mesh model of the preloading region's position and range using the LOD scheme described in the LOD strategy file, this LOD mesh model being the look-ahead path bounding volume;
(3) the visual imaging system receives a scene scheduling instruction from an instructor station and caches the static and dynamic objects in the object database that are covered by the look-ahead path bounding volume, in the priority order specified by the scene scheduling instruction;
(4) the GPU prebakes the cached data: it processes the panoramic high-dynamic-range environment illumination information to generate illumination information matched to the scene, and bakes the computed illumination onto the models and ground texture maps, obtaining terrain and ground objects carrying illumination map information as well as the weather imagery generated by the meteorological objects;
(5) the visual imaging system continues to receive the real-time flight position sent by the flight simulator host; when the aircraft flies into the preloading region, the GPU is controlled to directly call the pre-rendered texture maps and render them onto the physical model of the look-ahead path bounding volume.
2. The panoramic prebaking-based fast rendering method according to claim 1, wherein the flight position comprises the longitude, latitude and altitude of the aircraft, and the flight attitude comprises: the three attitude angles of the aircraft's direction of travel, linear velocity, angular acceleration and flight jitter state.
3. The method according to claim 1, wherein the scene scheduling instruction comprises: a scene starting position, meteorological commands, scenario commands, and the flight mission commands to be completed by the pilot.
4. A panoramic prebaking-based fast rendering visual imaging system for implementing the method of any one of claims 1 to 3, comprising: a three-channel graphics workstation, and a visual driving unit and a database deployed on the three-channel graphics workstation; the visual driving unit comprises an interactive scheduling engine, a database deployment engine, a scene deployment engine and a rendering engine;
the interactive scheduling engine receives the flight position and flight attitude data transmitted over a local area network by the flight performance simulation computer in the flight simulator host, generates the look-ahead path bounding volume from the received data, and transmits it to the database deployment engine and the rendering engine over a local message link;
the database deployment engine receives the scene scheduling instruction from the instructor station, reads the data of the static and dynamic objects covered by the look-ahead path bounding volume from the database and loads them into designated blocks in the memory of the visual imaging system, in the priority order specified by the scene scheduling instruction, and then broadcasts the memory address of the look-ahead path bounding volume over the local message link;
the scene deployment engine fetches the data in memory according to the broadcast address, generates from them the scene data used for rendering, writes the scene data into the blocks, and sends the storage address to the rendering engine;
the rendering engine loads the data stored in the blocks into the GPU of the visual imaging system for prebaking, and then, when the aircraft flies into the look-ahead path bounding volume, controls the GPU to directly call the prebaked texture maps and render them onto the physical model of the look-ahead path bounding volume.
5. The panoramic prebaking-based fast rendering visual imaging system according to claim 4, wherein the rendering engine further performs the following steps:
according to the data stored in the blocks, the scene to be rendered is divided into a near view, a middle view, a far view and an ultra-far view; the far and ultra-far views are rendered on a single GPU, the middle view is deployed on its own GPU for prebaked rendering, and the near view is split into odd and even frames, with consecutive odd and even frames rendered on different GPUs.
CN202010937606.3A 2020-09-09 2020-09-09 Panoramic prebaking-based quick rendering method and visual imaging system Active CN111798364B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010937606.3A CN111798364B (en) 2020-09-09 2020-09-09 Panoramic prebaking-based quick rendering method and visual imaging system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010937606.3A CN111798364B (en) 2020-09-09 2020-09-09 Panoramic prebaking-based quick rendering method and visual imaging system

Publications (2)

Publication Number Publication Date
CN111798364A true CN111798364A (en) 2020-10-20
CN111798364B CN111798364B (en) 2020-12-11

Family

ID=72834112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010937606.3A Active CN111798364B (en) 2020-09-09 2020-09-09 Panoramic prebaking-based quick rendering method and visual imaging system

Country Status (1)

Country Link
CN (1) CN111798364B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581575A (en) * 2021-12-13 2022-06-03 北京市建筑设计研究院有限公司 Model rendering processing method and device and electronic equipment
WO2024027274A1 (en) * 2022-08-03 2024-02-08 腾讯科技(深圳)有限公司 Map scene rendering method and apparatus, server, terminal, computer-readable storage medium, and computer program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281654A (en) * 2008-05-20 2008-10-08 上海大学 Method for processing cosmically complex three-dimensional scene based on eight-fork tree
CN102013189A (en) * 2010-12-16 2011-04-13 成都西麦克虚拟现实电子技术有限公司 General flight simulation engine
CN106446351A (en) * 2016-08-31 2017-02-22 郑州捷安高科股份有限公司 Real-time drawing-oriented large-scale scene organization and scheduling technology and simulation system
CN108447337A (en) * 2018-03-29 2018-08-24 深圳视觉航空科技有限公司 Simulated flight implementation method based on virtual reality
CN111105491A (en) * 2019-11-25 2020-05-05 腾讯科技(深圳)有限公司 Scene rendering method and device, computer readable storage medium and computer equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281654A (en) * 2008-05-20 2008-10-08 上海大学 Method for processing cosmically complex three-dimensional scene based on eight-fork tree
CN102013189A (en) * 2010-12-16 2011-04-13 成都西麦克虚拟现实电子技术有限公司 General flight simulation engine
CN106446351A (en) * 2016-08-31 2017-02-22 郑州捷安高科股份有限公司 Real-time drawing-oriented large-scale scene organization and scheduling technology and simulation system
CN108447337A (en) * 2018-03-29 2018-08-24 深圳视觉航空科技有限公司 Simulated flight implementation method based on virtual reality
CN111105491A (en) * 2019-11-25 2020-05-05 腾讯科技(深圳)有限公司 Scene rendering method and device, computer readable storage medium and computer equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张惠娟 (Zhang Huijuan) et al.: "A DR-prediction-based preloading mechanism for large-scale Web3D scenes", 《***仿真学报》 *
张燕燕 (Zhang Yanyan): "Research and implementation of real-time visualization technology for large-scale real terrain in flight simulators", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581575A (en) * 2021-12-13 2022-06-03 北京市建筑设计研究院有限公司 Model rendering processing method and device and electronic equipment
WO2024027274A1 (en) * 2022-08-03 2024-02-08 腾讯科技(深圳)有限公司 Map scene rendering method and apparatus, server, terminal, computer-readable storage medium, and computer program product

Also Published As

Publication number Publication date
CN111798364B (en) 2020-12-11

Similar Documents

Publication Publication Date Title
AU2018274971B2 (en) Augmented video system providing enhanced situational awareness
EP0282504B1 (en) Digital simulation system for generating realistic scenes
US9626790B1 (en) View-dependent textures for interactive geographic information system
US8907950B2 (en) Driving simulation apparatus, wide-angle camera video simulation apparatus, and image deforming/compositing apparatus
CN112102499A (en) Fused reality system and method
US5598359A (en) Weather effects generator for simulation systems
CN111798364B (en) Panoramic prebaking-based quick rendering method and visual imaging system
EP3754469A1 (en) 3d structure engine-based computation platform
WO2010035266A2 (en) Universal collaborative pseudo-realistic viewer
CN110852952B (en) Large-scale terrain real-time drawing method based on GPU
CN110908510B (en) Application method of oblique photography modeling data in immersive display equipment
JP4953719B2 (en) Method and system for rendering a real-time isometric view of the terrain of a land object traversed by a mobile platform
Feibush et al. Visualization for situational awareness
CN112102466A (en) Location-based platform of multiple 3D engines for delivering location-based 3D content to users
CN117295647A (en) Sensor simulation with unified multi-sensor view
CN112330806B (en) Visual synthesis method and system based on low-power-consumption hardware platform
EP0845130B1 (en) Atmospheric effects simulation
CN111210515A (en) Airborne synthetic vision system based on terrain real-time rendering
JPH0154749B2 (en)
Abásolo et al. From a serious training simulator for ship maneuvering to an entertainment simulator
CN117745937A (en) View synthesis method, device, terminal equipment and storage medium
Mackey NPSNET: hierarchical data structures for real-time three-dimensional visual simulation.
Rowley Computer generated imagery for training simulators
CA2226115C (en) Atmospheric effects simulation
CN117095148A (en) Blue star space environment simulation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 210000 Qinhuai District, Nanjing City, Jiangsu Province, No. 1

Patentee after: Jiangsu puxu Technology Co.,Ltd.

Address before: 210000 Qinhuai District, Nanjing City, Jiangsu Province, No. 1

Patentee before: JIANGSU PUXU SOFTWARE INFORMATION TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20230529

Address after: 723213 Liulin Town, Chenggu County, Hanzhong City, Shaanxi Province

Patentee after: Shaanxi Aircraft Industry Co.,Ltd.

Address before: 210000 Qinhuai District, Nanjing City, Jiangsu Province, No. 1

Patentee before: Jiangsu puxu Technology Co.,Ltd.