CN117745937A - View synthesis method, device, terminal equipment and storage medium - Google Patents


Publication number
CN117745937A
Authority
CN
China
Prior art keywords: information, flight, tile, map, view synthesis
Prior art date
Legal status
Pending
Application number
CN202311711456.4A
Other languages
Chinese (zh)
Inventor
尹莫波
汤海东
黄洽南
Current Assignee
Guangdong Huitian Aerospace Technology Co Ltd
Original Assignee
Guangdong Huitian Aerospace Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Huitian Aerospace Technology Co Ltd
Priority to CN202311711456.4A
Publication of CN117745937A

Landscapes

  • Navigation (AREA)

Abstract

The invention discloses a view synthesis method, a device, a terminal device and a storage medium. Ground feature information and/or flight information is obtained; information processing is performed on the ground feature information and/or the flight information to obtain integrated information; and synthetic rendering processing is performed based on the integrated information to obtain a target flight view. This improves the pilot's situation awareness, reduces the pilot's operation burden, and provides effective support for ultra-low-altitude flight safety, thereby improving the safety of ultra-low-altitude flight and reducing the probability of a low-altitude collision.

Description

View synthesis method, device, terminal equipment and storage medium
Technical Field
The present invention relates to the field of aircraft technologies, and in particular, to a method and apparatus for synthesizing a view, a terminal device, and a storage medium.
Background
A synthetic vision system (SVS) uses computer software to depict the flight track, trend vector and surrounding environment from geographic data and the aircraft's position, heading and attitude information, thereby improving the pilot's situation awareness, improving flight safety, reducing the pilot's operation burden, and reducing the probability of controlled flight into terrain.
The main application scenario of SVS is the commercial aircraft cockpit. Commercial flight belongs to the medium-to-high-altitude scenario (flight height above 4000 meters), so a commercial-cockpit SVS mainly displays terrain, rivers, oceans and similar information. The ultra-low-altitude scenario (flight height below 100 meters) contains more elements that affect flight safety than the medium-to-high-altitude scenario; buildings and trees, for example, all affect flight safety. The SVS of a commercial aircraft cockpit therefore lacks effective support for ultra-low-altitude flight safety.
Therefore, there is a need for a solution that provides effective support for ultra low altitude flight safety.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide a view synthesis method, a device, terminal equipment and a storage medium, which aim to provide effective support for ultra-low altitude flight safety.
In order to achieve the above object, the present invention provides a view synthesis method, including:
obtaining ground feature information and/or flight information;
performing information processing on the ground feature information and/or the flight information to obtain integrated information;
and performing synthetic rendering processing based on the integrated information to obtain the target flight view.
Optionally, the flight information includes at least one of position information, attitude information and flight trend line information, and the step of obtaining ground feature information and/or flight information includes:
requesting the ground feature information from a preset tile map data source; and/or
acquiring at least one of position information, attitude information and flight trend line information of the target aircraft by reading a vehicle-mounted signal.
Optionally, the ground feature information includes at least one of terrain data, vector data and model data, the preset tile map data source includes at least one of a network map tile service, a tile map service and a 3D tile database, and the step of requesting the ground feature information from the preset tile map data source includes at least one of:
requesting and acquiring the terrain data from the network map tile service;
requesting and acquiring the vector data from the tile map service;
requesting and acquiring the model data from the 3D tile database.
Optionally, the integration information includes at least one of map tile information, flight track information and collision information, and the step of performing information processing on the ground feature information and/or the flight information to obtain the integration information includes:
determining the map tile information according to the ground feature information, and determining the flight track information according to the flight information;
and performing collision detection according to the map tile information and the flight track information, and determining collision information.
Optionally, the step of determining the map tile information according to the ground feature information includes:
generating a map grid according to the terrain data in the ground feature information;
generating geographic data of each tile by combining the vector data and/or the model data based on the map grid;
and loading the geographic data of each tile to form the map tile information.
Optionally, the view synthesis method is applied to a synthetic vision system, the synthetic vision system includes a service layer and a framework layer, and the step of performing collision detection and determining collision information includes:
loading scene information corresponding to the synthetic vision system through the service layer;
calculating the flight range of the target aircraft by combining the flight track information based on the scene information;
loading the map tile information through the framework layer, and detecting whether there is a ground feature in the map tile information that collides with the flight range;
and if the map tile information contains a ground feature that collides with the flight range, generating collision information corresponding to the ground feature.
Optionally, the step of performing the synthetic rendering process based on the integrated information to obtain the target flight view includes:
performing route drawing according to the flight track information in the integrated information to generate target route information; and/or performing obstacle drawing according to the map tile information in the integrated information to generate obstacle information; and/or generating a collision early warning according to the collision information in the integrated information;
and integrally rendering at least one of the target route information, the obstacle information and the collision early warning through a preset rendering engine to obtain the target flight view.
In addition, to achieve the above object, the present invention also provides a view synthesis apparatus, including:
the acquisition module is used for acquiring ground object information and/or flight information;
the processing module is used for carrying out information processing on the ground feature information and/or the flight information to obtain integrated information;
and the rendering module is used for carrying out synthetic rendering processing based on the integrated information to obtain a target flight view.
In addition, to achieve the above object, the present invention also provides a terminal device including a memory, a processor, and a view synthesis program stored on the memory and executable on the processor, the view synthesis program, when executed by the processor, implementing the steps of the view synthesis method as described above.
In addition, in order to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a view synthesis program which, when executed by a processor, implements the steps of the view synthesis method as described above.
The embodiment of the invention provides a view synthesis method, a device, a terminal device and a storage medium. Ground feature information and/or flight information is acquired; information processing is performed on the ground feature information and/or the flight information to obtain integrated information; and synthetic rendering processing is performed based on the integrated information to obtain a target flight view. This improves the pilot's situation awareness, reduces the pilot's operation burden, and provides effective support for ultra-low-altitude flight safety, thereby improving the safety of ultra-low-altitude flight and reducing the probability of a low-altitude collision.
Drawings
FIG. 1 is a schematic diagram of functional modules of a terminal device to which a view synthesis apparatus of the present invention belongs;
FIG. 2 is a flow chart of an exemplary embodiment of a view synthesis method according to the present invention;
FIG. 3 is a schematic diagram illustrating a specific flow of step S20 in the embodiment of FIG. 2;
FIG. 4 is a schematic diagram of a geographic environment restoration process according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a collision detection flow in an embodiment of the invention;
fig. 6 is a schematic diagram of a system architecture according to an embodiment of the invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The main solutions of the embodiments of the present invention are: obtaining ground feature information and/or flight information; performing information processing on the ground feature information and/or the flight information to obtain integrated information; and performing synthetic rendering processing based on the integrated information to obtain a target flight view, so as to improve the pilot's situation awareness, reduce the pilot's operation burden, and provide effective support for ultra-low-altitude flight safety, thereby improving the safety of ultra-low-altitude flight and reducing the probability of a low-altitude collision.
Technical terms related to the embodiment of the invention:
the synthetic vision system (Synthetic Vision System, SVS for short) is characterized in that the flight track, the trend vector and the surrounding environment are depicted by using a computer software technology through geographic data, airplane position, heading, attitude information and the like, so that the situation awareness of a pilot is improved, the flight safety is improved, the operation burden of the pilot is reduced, and the probability of collision of controllable flight is reduced.
The main application scene of SVS is commercial aircraft cabin. The aircraft cabin belongs to a medium-high altitude flight scene (the flight height is higher than 4000 meters), the SVS of the aircraft cabin mainly displays information such as terrain, rivers, oceans and the like in a centralized manner, more elements influencing the flight safety are arranged in the scene of ultra-low altitude flight (the flight height is lower than 100 meters) than in the medium-high altitude flight scene, and the construction and the trees are all elements influencing the flight safety, so that the SVS of the commercial aircraft cabin lacks effective support for the ultra-low altitude flight safety.
The invention provides a solution in which a virtual ultra-low-altitude (flight height below 100 meters) spatial environment is generated by computer software to assist the pilot in flying. The virtual space environment restores, from a geographic database, the ground features that endanger ultra-low-altitude flight safety, such as terrain, buildings, rivers, roads and trees, and ground features affecting flight safety are calculated in advance from the flight trend so as to trigger an alarm prompt.
Specifically, referring to fig. 1, fig. 1 is a schematic diagram of the functional modules of a terminal device to which the view synthesis apparatus of the present invention belongs. The view synthesis apparatus may be a device independent of the terminal device that is capable of view synthesis, and it may be carried on the terminal device in the form of hardware or software. The terminal device may be an intelligent mobile terminal with a data processing function, such as a mobile phone or a tablet computer, or may be a fixed terminal device or a server with a data processing function.
In this embodiment, the terminal device to which the view synthesis apparatus belongs at least includes an output module 110, a processor 120, a memory 130, and a communication module 140.
The memory 130 stores an operating system and a view synthesis program. The view synthesis apparatus may obtain ground feature information and/or flight information, perform information processing on them to obtain integrated information, perform synthetic rendering processing based on the integrated information, and store the resulting information, such as the target flight view, in the memory 130. The output module 110 may be a display screen or the like. The communication module 140 may include a Wi-Fi module, a mobile communication module, a Bluetooth module and the like, and communicates with an external device or server through the communication module 140.
Wherein the view synthesis program in the memory 130, when executed by the processor, performs the steps of:
obtaining ground feature information and/or flight information;
performing information processing on the ground feature information and/or the flight information to obtain integrated information;
and performing synthetic rendering processing based on the integrated information to obtain the target flight view.
Further, the view synthesis program in the memory 130, when executed by the processor, further performs the steps of:
requesting the ground feature information from a preset tile map data source; and/or
acquiring at least one of position information, attitude information and flight trend line information of the target aircraft by reading a vehicle-mounted signal.
Further, the view synthesis program in the memory 130, when executed by the processor, further performs the steps of:
requesting and acquiring the terrain data from a network map tile service;
requesting and acquiring the vector data from a tile map service;
requesting and acquiring the model data from a 3D tile database.
Further, the view synthesis program in the memory 130, when executed by the processor, further performs the steps of:
determining the map tile information according to the ground feature information, and determining the flight track information according to the flight information;
and performing collision detection according to the map tile information and the flight track information, and determining collision information.
Further, the view synthesis program in the memory 130, when executed by the processor, further performs the steps of:
generating a map grid according to the terrain data in the ground feature information;
generating geographic data of each tile by combining the vector data and/or the model data based on the map grid;
and loading the geographic data of each tile to form the map tile information.
Further, the view synthesis program in the memory 130, when executed by the processor, further performs the steps of:
loading scene information corresponding to the synthetic vision system through the service layer;
calculating the flight range of the target aircraft by combining the flight track information based on the scene information;
loading the map tile information through the framework layer, and detecting whether there is a ground feature in the map tile information that collides with the flight range;
and if the map tile information contains a ground feature that collides with the flight range, generating collision information corresponding to the ground feature.
Further, the view synthesis program in the memory 130, when executed by the processor, further performs the steps of:
performing route drawing according to the flight track information in the integrated information to generate target route information; and/or performing obstacle drawing according to the map tile information in the integrated information to generate obstacle information; and/or generating a collision early warning according to the collision information in the integrated information;
and integrally rendering at least one of the target route information, the obstacle information and the collision early warning through a preset rendering engine to obtain the target flight view.
According to the above scheme, ground feature information and/or flight information is obtained; information processing is performed on the ground feature information and/or the flight information to obtain integrated information; and synthetic rendering processing is performed based on the integrated information to obtain a target flight view. This improves the pilot's situation awareness, reduces the pilot's operation burden, and provides effective support for ultra-low-altitude flight safety, thereby improving the safety of ultra-low-altitude flight and reducing the probability of a low-altitude collision.
The method embodiments of the present invention are proposed based on, but are not limited to, the above terminal device architecture.
The execution subject of the method of the embodiment may be a view synthesis device or a terminal device, and the embodiment uses the view synthesis device as an example.
Referring to fig. 2, fig. 2 is a flow chart of an exemplary embodiment of a view synthesis method according to the present invention. The view synthesis method comprises the following steps:
Step S10, obtaining ground feature information and/or flight information;
specifically, the embodiment of the invention applies the view synthesis method to a synthesized view system (Synthetic Vision System, SVS), integrates and renders ground objects in an ultra-low-altitude flight environment, and displays the ground objects on a vehicle-to-vehicle system of a flight cabin.
Optionally, the ground feature information acquired in the embodiment of the present invention includes at least one of terrain data, vector data, and model data, and the flight information includes at least one of position information, attitude information, and flight trend line information.
Optionally, the step of acquiring the ground feature information and/or the flight information includes:
requesting the ground feature information from a preset tile map data source; and/or
acquiring at least one of position information, attitude information and flight trend line information of the target aircraft by reading a vehicle-mounted signal.
Optionally, in the embodiment of the invention, information such as the longitude and latitude position, attitude information and flight trend line of the flight cockpit is obtained mainly through vehicle-mounted signals, and this information is used to restore the aircraft's viewing angle and flight trend in the three-dimensional world.
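As an illustration of how such attitude signals might be turned into a viewing direction, the following minimal sketch converts a heading/pitch pair into a camera forward vector in an East-North-Up frame. The function name, the ENU convention and the omission of roll are simplifying assumptions for illustration, not part of the disclosure.

```python
import math

def view_direction(heading_deg: float, pitch_deg: float):
    """Unit forward vector in an East-North-Up frame from heading
    (clockwise from north) and pitch (positive nose-up); roll omitted."""
    h, p = math.radians(heading_deg), math.radians(pitch_deg)
    east = math.cos(p) * math.sin(h)
    north = math.cos(p) * math.cos(h)
    up = math.sin(p)
    return east, north, up
```

A renderer would place the camera at the aircraft's restored position and point it along this vector to reproduce the pilot's view.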
Optionally, the preset tile map data source comprises at least one of a network map tile service, a tile map service, and a 3D tile database.
Optionally, the step of requesting the ground object information from a preset tile map data source includes at least one of:
requesting and acquiring the topographic data from the network map tile service;
requesting and acquiring the vector data from the tile map service;
requesting and retrieving the model data from the 3D tile database.
Alternatively, geographic data provided by a WMTS (Web Map Tile Service), a TMS (Tile Map Service) and a 3D Tiles database may be used to generate, from map tiles, the geographic environment within a range around the flight cockpit.
Step S20, carrying out information processing on the ground feature information and/or the flight information to obtain integrated information;
Further, after the ground feature information and/or the flight information is obtained, information processing can be performed on the ground feature information and/or the flight information to obtain the integrated information.
Alternatively, the embodiment of the invention adopts the tile map technique: the map is divided into small tiles that are loaded and displayed separately, enabling large-scale map data to be presented efficiently in Web map services and other map applications. In this process, the map is divided into small square or rectangular tiles, each representing a small area on the map. Tile maps typically provide different levels of resolution: at higher zoom levels tiles display more detailed information, while at lower zoom levels tiles cover more extensive areas.
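The tile subdivision described above is commonly realized with the Web-Mercator "slippy map" scheme, under which a longitude/latitude pair maps to one tile column/row at each zoom level. The following sketch assumes that scheme; it is an illustration of the general technique, not the patent's required implementation.

```python
import math

def lonlat_to_tile(lon_deg: float, lat_deg: float, zoom: int):
    """WGS84 longitude/latitude -> (column, row) of the covering tile at
    the given zoom level, Web-Mercator slippy-map convention assumed."""
    n = 2 ** zoom                                       # tiles per axis
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

At zoom 0 the whole world is one tile; each additional zoom level quadruples the tile count, which is why only the tiles in view are loaded.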
Alternatively, instead of loading the entire map at once, the map application in embodiments of the present invention loads and displays only the tiles required for the current view. This helps reduce data transmission and improves performance.
Optionally, from the obtained information such as the longitude and latitude position, attitude information and flight trend line of the flight cockpit, the aircraft's viewing angle and flight trend can be restored in the three-dimensional world. The flight range can be calculated from the aircraft's position and flight trend and represented by a three-dimensional object; all ground features within a certain range around the flight cockpit are then screened, so that every ground feature in the flight environment that collides with the flight range can be detected and the collision information determined.
Step S30, performing synthetic rendering processing based on the integrated information to obtain a target flight view.
Further, after information processing is performed on the ground feature information and/or the flight information to obtain the integrated information, synthetic rendering processing is performed on the integrated information to obtain the target flight view.
Optionally, the step of performing the synthetic rendering process based on the integrated information to obtain the target flight view includes:
performing route drawing according to the flight track information in the integrated information to generate target route information; and/or performing obstacle drawing according to the map tile information in the integrated information to generate obstacle information; and/or generating a collision early warning according to the collision information in the integrated information;
and integrally rendering at least one of the target route information, the obstacle information and the collision early warning through a preset rendering engine to obtain the target flight view.
Alternatively, rendering tasks may be performed through a GPU (graphics processing unit) rendering interface such as OpenGL, DirectX or WebGL, or through a rendering engine such as Unity3D, Unreal Engine or Godot. The rendered target flight view may include not only the three-dimensional geographic environment information restored from geographic data, but also the flight range of the current aircraft and the obstacles existing within it, with possible collisions displayed as alarms, which improves the safety of manual flight and reduces the pilot's operation burden. By integrally rendering elements such as buildings, trees, rivers and roads in the ultra-low-altitude geographic environment and performing multi-element collision detection, the reliability of the SVS in the ultra-low-altitude environment is improved.
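The integration step can be sketched as gathering whichever of the three element sets are present into an ordered draw list, which is then handed to whichever rendering engine is used; the function and layer names below are hypothetical, and the actual engine call is abstracted away.

```python
def compose_view(route=None, obstacles=None, warnings=None):
    """Collect route, obstacle and warning elements (any subset may be
    absent, matching the and/or wording) into an ordered draw list."""
    layers = []
    if route is not None:
        layers.append(("route", route))
    if obstacles is not None:
        layers.append(("obstacles", obstacles))
    if warnings:
        layers.append(("warnings", warnings))  # drawn last, on top
    return layers
```

The order encodes a drawing convention: warnings are appended last so they overlay the route and obstacles in the final view.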
In this embodiment, ground feature information and/or flight information is acquired; information processing is performed on the ground feature information and/or the flight information to obtain integrated information; and synthetic rendering processing is performed based on the integrated information to obtain a target flight view. This improves the pilot's situation awareness, reduces the pilot's operation burden, and provides effective support for ultra-low-altitude flight safety, thereby improving the safety of ultra-low-altitude flight and reducing the probability of a low-altitude collision.
Referring to fig. 3, fig. 3 is a specific flowchart of step S20 in the embodiment of fig. 2. The present embodiment is based on the embodiment shown in fig. 2, and in the present embodiment, the step S20 includes:
step S201, determining map tile information according to the ground object information, and determining flight track information according to the flight information;
Alternatively, the embodiment of the invention adopts the tile map technique: the map is divided into small tiles that are loaded and displayed separately, enabling large-scale map data to be presented efficiently in Web map services and other map applications. In this process, the map is divided into small square or rectangular tiles, each representing a small area on the map. Tile maps typically provide different levels of resolution: at higher zoom levels tiles display more detailed information, while at lower zoom levels tiles cover more extensive areas.
Referring to fig. 4, fig. 4 is a schematic diagram of the geographic environment restoration flow in an embodiment of the present invention. As shown in fig. 4, map initialization is performed by reading a map configuration file, and tiles may be calculated and rendered by a map update mechanism to generate tile objects, where the tile objects are registered by the terrain factory, the vector factory and the model factory.
Optionally, the step of determining the map tile information according to the ground feature information includes:
generating a map grid according to the terrain data in the ground feature information;
generating geographic data of each tile by combining the vector data and/or the model data based on the map grid;
and loading the geographic data of each tile to form the map tile information.
Optionally, in embodiments of the present invention, terrain data is requested from the WMTS service by a terrain factory and used to generate a terrain grid. Because WMTS is one of the open standards defined by the Open Geospatial Consortium (OGC), this helps ensure interoperability between different map service providers and developers. A WMTS service divides a map into different tile sets, each containing a series of pre-rendered map tiles; the tile sets typically cover different zoom levels and map projections. WMTS services typically store tiles in a standard image format (e.g., PNG or JPEG) and provide access through a standard URL pattern. WMTS performs well compared with some other map services because it allows a developer to request only the required tiles rather than the entire map, which helps reduce data transmission and increases the loading speed of the map application.
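The "standard URL pattern" and request-only-needed-tiles behavior can be sketched as follows; the template string and function names are invented placeholders for illustration, not a real WMTS endpoint.

```python
def tile_url(template: str, z: int, x: int, y: int) -> str:
    """Fill a RESTful tile URL template of the kind many tile services
    expose; the template itself is a placeholder, not a real endpoint."""
    return template.format(z=z, x=x, y=y)

def tiles_in_view(x0: int, y0: int, x1: int, y1: int):
    """Enumerate only the tile coordinates covering the current view,
    rather than the whole map."""
    return [(x, y) for y in range(y0, y1 + 1) for x in range(x0, x1 + 1)]
```

A client would call `tile_url` once per entry returned by `tiles_in_view`, so the number of HTTP requests grows with the visible area, not with the map.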
Alternatively, vector data is requested from the TMS service through a vector factory; after the terrain grid is generated, the vector data, which may include rivers, roads, trees, tower-type buildings and the like, is generated. Unlike WMTS, TMS uses a different tile coordinate scheme: the tile origin is at the lower-left corner of the map, with the row coordinate increasing upward and the column coordinate increasing rightward. Similar to WMTS, a TMS service is also a RESTful Web service interface: the developer makes a request to the service via HTTP and specifies the desired tile set, zoom level, and the row and column coordinates of the tile in the URL. TMS services typically store tiles in a standard image format (e.g., PNG or JPEG). The TMS standard is maintained by the Open Source Geospatial Foundation (OSGeo) and is not an OGC-certified standard. Furthermore, unlike WMTS, TMS allows developers to use custom tile sets, which gives TMS flexibility in some specific application scenarios. The two differ in how they provide map tile services, but both are intended to deliver map data as tiles over a network for efficient map display in Web applications.
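Because TMS counts rows from the bottom-left corner while the top-left-origin (XYZ/WMTS-style) convention counts from the top, converting between the two row coordinates is a single flip at each zoom level, sketched here (the function name is an illustrative choice):

```python
def flip_tile_row(row: int, zoom: int) -> int:
    """Convert a top-left-origin tile row to the bottom-left-origin row
    used by TMS, and vice versa: the mapping is its own inverse."""
    return (2 ** zoom) - 1 - row
```

Mixing up the two conventions yields vertically mirrored maps, so a client consuming both WMTS and TMS sources must apply this flip for one of them.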
Alternatively, the model data is requested from the 3D Tiles database through the model factory; after the terrain grid is generated, the model data, which may include related models such as buildings, is generated. 3D Tiles uses the concept of tiles to segment a three-dimensional model into small blocks for on-demand loading and display. 3D Tiles is a specification for efficient storage and transmission of large-scale three-dimensional geospatial data. Although the 3D Tiles specification is not itself a database, it can be used together with a database for more efficient data management and retrieval. For example, a PostgreSQL database with PostGIS may be used to store 3D Tiles data and process the geographic information. The 3D Tiles data may also be stored in a cloud database such as Amazon DynamoDB, Google Cloud Firestore or Azure Cosmos DB; these services typically offer good scalability and performance and are suitable for processing large-scale three-dimensional geographic data.
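For illustration, a pared-down tileset of the kind the 3D Tiles specification describes might look like the following Python dict; the bounding-volume values and content URI are invented placeholders, and a real tileset would be loaded from a `tileset.json` file.

```python
# A minimal 3D-Tiles-style tileset: a root tile with a bounding volume,
# a geometric error, and children that can be loaded on demand.
tileset = {
    "asset": {"version": "1.0"},
    "geometricError": 500.0,
    "root": {
        "boundingVolume": {"box": [0, 0, 50, 100, 0, 0, 0, 100, 0, 0, 0, 50]},
        "geometricError": 100.0,
        "refine": "ADD",
        "content": {"uri": "buildings_root.b3dm"},
        "children": [],
    },
}

def content_uris(tile: dict) -> list:
    """Collect the model content URIs of a tile subtree, depth-first."""
    uris = [tile["content"]["uri"]] if "content" in tile else []
    for child in tile.get("children", []):
        uris += content_uris(child)
    return uris
```

A model factory would walk this tree and fetch only the URIs whose bounding volumes intersect the current view, which is what makes on-demand loading possible.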
Optionally, after the geographic data of a single tile is loaded, loading of the map is completed once all tiles have been loaded.
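The per-tile loading described above, with terrain, vector and model factories each contributing a tile's data as in fig. 4, might be sketched as follows; the class names and placeholder return values are illustrative stand-ins, not the actual implementation.

```python
class TerrainFactory:
    def build(self, tile_id):
        return f"mesh:{tile_id}"      # placeholder for a terrain mesh

class VectorFactory:
    def build(self, tile_id):
        return f"vectors:{tile_id}"   # rivers, roads, trees, ...

class ModelFactory:
    def build(self, tile_id):
        return f"models:{tile_id}"    # buildings and other 3D models

def load_tile(tile_id, factories):
    """Assemble one tile's geographic data from the registered factories;
    the map is loaded once every tile has been assembled this way."""
    return {type(f).__name__: f.build(tile_id) for f in factories}
```

Registering the three factories against the map update mechanism, then calling `load_tile` for each tile in view, mirrors the fig. 4 flow of tile-object generation.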
Step S202, performing collision detection according to the map tile information and the flight track information, and determining collision information.
Optionally, the view synthesis method is applied to a synthetic vision system, and the synthetic vision system comprises a service layer and a framework layer.
Optionally, the step of performing collision detection according to the map tile information and the flight track information and determining the collision information includes:
loading scene information corresponding to the synthetic vision system through the service layer;
calculating the flight range of the target aircraft based on the scene information in combination with the flight track information;
loading the map tile information through the framework layer, and detecting whether any ground feature in the map tile information collides with the flight range;
and if the map tile information contains a ground feature that collides with the flight range, generating collision information corresponding to that ground feature.
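The four steps above can be sketched as a simple broad-phase test: extrude the aircraft position along the flight trend line into a three-dimensional box, then test the bounding boxes of nearby ground features against it. All names, thresholds, and the axis-aligned-box simplification below are illustrative; a real SVS would use the rendering engine's own collision primitives:

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box: min/max corners in a local ENU frame (meters)."""
    lo: tuple
    hi: tuple

    def intersects(self, other: "AABB") -> bool:
        # Boxes overlap iff their intervals overlap on every axis.
        return all(self.lo[i] <= other.hi[i] and other.lo[i] <= self.hi[i]
                   for i in range(3))

def flight_range_box(pos, trend, horizon_s=10.0, margin=20.0) -> AABB:
    """Extrude the aircraft position along its velocity ('flight trend line')
    for horizon_s seconds, padded by a safety margin on every axis."""
    end = tuple(p + v * horizon_s for p, v in zip(pos, trend))
    lo = tuple(min(a, b) - margin for a, b in zip(pos, end))
    hi = tuple(max(a, b) + margin for a, b in zip(pos, end))
    return AABB(lo, hi)

def detect_collisions(pos, trend, features: dict) -> list:
    """Return the names of all features whose box intersects the flight range."""
    box = flight_range_box(pos, trend)
    return [name for name, fbox in features.items() if box.intersects(fbox)]
```

In practice the feature set passed in would already be pre-filtered to the tiles surrounding the flight cabin, keeping the per-frame test cheap.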
Referring to fig. 5, fig. 5 is a schematic diagram of the collision detection flow in an embodiment of the present invention. As shown in fig. 5, information such as the longitude and latitude position, attitude information, and flight trend line of the flight cabin is obtained mainly through vehicle-mounted signals, and the viewing angle and flight trend of the aircraft are restored in the three-dimensional world. The flight range can be calculated from the position and flight trend of the aircraft and represented by a three-dimensional object; all ground features within a certain range around the flight cabin are then screened, so that every ground feature in the flight environment that collides with the flight range can be detected.
Optionally, the longitude and latitude position, attitude information, flight trend line, and the like of the flight cabin are all external signals, provided to the SVS via vehicle-mounted Ethernet or CAN signals.
Optionally, the generated collision information can be displayed on the SVS as text, graphics, and color changes, and can also be brought to the pilot's attention by voice announcement and other means, so that the flight track and/or flight parameters of the aircraft can be adjusted according to the collision information to ensure flight safety.
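One illustrative way to drive such text/color/voice output is to grade the collision information by time to impact. The thresholds, level names, and messages below are made-up examples, not values from the patent:

```python
def alert_for(distance_m: float, speed_mps: float):
    """Map distance and closing speed to an alert (level, display color, voice text).
    Time-to-impact thresholds are illustrative assumptions."""
    tti = distance_m / speed_mps if speed_mps > 0 else float("inf")
    if tti < 5:
        return ("WARNING", "red", "Obstacle ahead, pull up")
    if tti < 15:
        return ("CAUTION", "amber", "Obstacle ahead")
    return ("ADVISORY", "white", "")
```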
According to the above scheme, the map tile information is determined from the ground feature information, and the flight track information is determined from the flight information; collision detection is then performed according to the map tile information and the flight track information to determine the collision information. While the various environmental elements affecting flight safety are displayed in a centralized manner, collision information is generated from the feature information within the flight range for risk prompting or early warning, thereby providing effective support for ultra-low-altitude flight safety.
In addition, an embodiment of the present invention further provides a view synthesis device, where the view synthesis device includes:
the acquisition module is used for acquiring ground object information and/or flight information;
the processing module is used for performing information processing on the ground object information and/or the flight information to obtain integrated information;
and the rendering module is used for carrying out synthetic rendering processing based on the integrated information to obtain a target flight view.
Referring to fig. 6, fig. 6 is a schematic diagram of the system architecture in an embodiment of the present invention. As shown in fig. 6, rendering of the flight environment is mainly implemented on a three-dimensional graphics rendering engine and is divided into a framework layer and a service layer.
Optionally, in the service layer, information such as the longitude and latitude position, attitude information, and flight trend line of the flight cabin is obtained mainly through vehicle-mounted signals, and the viewing angle and flight trend of the aircraft are restored in the three-dimensional world. The flight range can be calculated from the position and flight trend of the aircraft and represented by a three-dimensional object; all ground features within a certain range around the flight cabin are then screened, so that every ground feature that collides with the flight range can be detected. The longitude and latitude position, attitude information, flight trend line, and the like of the flight cabin are all external signals, provided to the SVS via vehicle-mounted Ethernet or CAN signals.
Optionally, in the framework layer, the geographic environment within a range around the aircraft cockpit is generated from map tiles based on geographic data provided by the WMTS service (Web Map Tile Service), the TMS service (Tile Map Service), and the 3D Tiles database (a geographic model specification): the WMTS service provides terrain data (raster format); the TMS service provides river, road, tree, tower-type building, and similar data (vector format, including planar extent and altitude information); and the 3D Tiles database provides building model data (three-dimensional models).
Optionally, the map updating mechanism adopted in the embodiment of the invention uses the longitude and latitude of the flight cabin to calculate, with the flight cabin as the center, the number of map tiles within a certain range and their corresponding tile coordinates.
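Under the common Web-Mercator ("slippy map") tiling scheme, the tile block around the cabin follows directly from its longitude and latitude. A minimal sketch (the radius-in-tiles choice is an illustrative assumption; the patent does not fix a particular tiling scheme):

```python
import math

def deg2tile(lat: float, lon: float, zoom: int):
    """Longitude/latitude (degrees) to XYZ tile coordinates (Web Mercator)."""
    n = 1 << zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_r)) / math.pi) / 2.0 * n)
    return x, y

def tiles_around(lat: float, lon: float, zoom: int, radius: int = 2):
    """Tile coordinates of the square block of tiles centered on the cabin.
    Columns wrap around the antimeridian; rows are clamped at the poles."""
    cx, cy = deg2tile(lat, lon, zoom)
    n = 1 << zoom
    return [((cx + dx) % n, min(max(cy + dy, 0), n - 1))
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)]
```

Recomputing this block as the cabin's position changes, and loading only tiles not already cached, gives the centered update behavior described above.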
Optionally, in other embodiments, a flight environment simulation system may be implemented from point cloud data to render a simulation of the real flight environment. This can likewise improve the safety of ultra-low-altitude manual flight, but it increases the hardware burden, in particular the computational demand on the GPU.
Alternatively, another approach to a synthetic vision system for ultra-low-altitude flight is to perform environment sensing with a camera and then render the environment. This approach requires more precise camera equipment and imposes strict latency requirements on the camera image transmission link.
Optionally, compared with the SVS of a traditional medium/high-altitude aircraft cockpit, the embodiment of the invention adds elements such as buildings, trees, and roads, so that potential safety risks in the ultra-low-altitude flight environment are taken into account and the collision probability of manual flight in that environment is reduced.
In this embodiment, various types of ground features are displayed on the SVS in real time from the vector data provided by the tile service and the model data provided by the 3D Tiles service. Smoothness of the SVS is ensured with techniques such as a quadtree algorithm and GPU-accelerated rendering, while ground features around the flight cabin that endanger flight safety are retrieved algorithmically, realizing warning display for ultra-low-altitude manual flight, improving the safety of manual flight, and reducing the pilot's operational burden.
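The quadtree technique mentioned above conventionally refines a tile only while its screen-space error (SSE) remains too large at the current viewing distance. The sketch below shows this standard criterion; it is an illustrative level-of-detail selector with assumed screen and threshold parameters, not the patent's exact algorithm:

```python
import math

def screen_space_error(geometric_error: float, distance: float,
                       screen_height_px: int = 1080,
                       fov_rad: float = math.radians(60)) -> float:
    """Standard perspective SSE: how many pixels the tile's geometric error
    spans on screen at the given viewing distance."""
    return geometric_error * screen_height_px / (2.0 * distance * math.tan(fov_rad / 2.0))

def select_zoom(distance: float, base_error: float = 256.0,
                max_zoom: int = 18, threshold_px: float = 16.0) -> int:
    """Descend the quadtree (halving geometric error each level) until the
    SSE falls below the threshold or the maximum zoom is reached."""
    zoom, err = 0, base_error
    while zoom < max_zoom and screen_space_error(err, distance) > threshold_px:
        zoom, err = zoom + 1, err / 2.0
    return zoom
```

Tiles near the cabin thus receive deep subdivision while distant terrain stays coarse, which is what keeps the frame rate stable.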
For the principle and implementation process of view synthesis in this embodiment, please refer to the above embodiments; they are not repeated here.
In addition, the embodiment of the invention also provides a terminal device, which comprises a memory, a processor and a view synthesis program stored in the memory and capable of running on the processor, wherein the view synthesis program realizes the steps of the view synthesis method when being executed by the processor.
Since the view synthesis program is executed by the processor, it adopts all the technical solutions of the foregoing embodiments and therefore provides at least the beneficial effects brought by those technical solutions, which are not described in detail here.
In addition, the embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a view synthesis program, and the view synthesis program realizes the steps of the view synthesis method when being executed by a processor.
Since the view synthesis program is executed by the processor, it adopts all the technical solutions of the foregoing embodiments and therefore provides at least the beneficial effects brought by those technical solutions, which are not described in detail here.
Compared with the prior art, the view synthesis method, device, terminal equipment, and storage medium provided by the embodiments of the invention obtain ground feature information and/or flight information; perform information processing on the ground feature information and/or flight information to obtain integrated information; and perform synthetic rendering based on the integrated information to obtain a target flight view. This improves the situational awareness of pilots, reduces their operational burden, and provides effective support for ultra-low-altitude flight safety, thereby improving the safety of ultra-low-altitude flight and reducing the probability of low-altitude collisions.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described method may be implemented by software plus a necessary general hardware platform, or by hardware, though in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied, in essence or in the part contributing to the prior art, in the form of a software product stored in a storage medium as above (e.g., ROM/RAM, magnetic disk, optical disk), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a controlled terminal, a network device, etc.) to perform the method of each embodiment of the present application.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structure or equivalent process transformation made using the contents of this specification, or any direct or indirect application in other related technical fields, is likewise included in the scope of protection of the present invention.

Claims (10)

1. A view synthesis method, characterized by comprising the following steps:
obtaining ground object information and/or flight information;
performing information processing on the ground object information and/or the flight information to obtain integrated information;
and carrying out synthetic rendering processing based on the integrated information to obtain the target flight view.
2. The view synthesis method according to claim 1, wherein the flight information includes at least one of position information, attitude information, and flight trend line information, and the step of acquiring the ground object information and/or the flight information includes:
requesting the ground object information from a preset tile map data source; and/or,
and acquiring at least one of position information, attitude information and flight trend line information of the target aircraft by reading the vehicle-mounted signal.
3. The view synthesis method according to claim 2, wherein the ground object information comprises at least one of terrain data, vector data, and model data, the preset tile map data source comprises at least one of a network map tile service, a tile map service, and a 3D tile database, and the step of requesting the ground object information from the preset tile map data source comprises at least one of:
requesting and acquiring the terrain data from the network map tile service;
requesting and acquiring the vector data from the tile map service;
requesting and acquiring the model data from the 3D tile database.
4. The view synthesis method according to claim 3, wherein the integrated information includes at least one of map tile information, flight track information, and collision information, and the step of performing information processing on the ground object information and/or the flight information to obtain the integrated information includes:
determining the map tile information according to the ground object information, and determining the flight track information according to the flight information;
and performing collision detection according to the map tile information and the flight track information, and determining collision information.
5. The view synthesis method according to claim 4, wherein the step of determining the map tile information from the ground object information comprises:
generating a map grid according to the terrain data in the ground object information;
generating geographic data of each tile based on the map grid in combination with the vector data and/or the model data;
and loading the geographic data of each tile to form the map tile information.
6. The view synthesis method according to claim 4, wherein the view synthesis method is applied to a synthetic vision system, the synthetic vision system includes a service layer and a framework layer, and the step of performing collision detection according to the map tile information and the flight track information and determining collision information includes:
loading scene information corresponding to the synthetic vision system through the service layer;
calculating the flight range of the target aircraft based on the scene information in combination with the flight track information;
loading the map tile information through the framework layer, and detecting whether any ground feature in the map tile information collides with the flight range;
and if the map tile information contains a ground feature that collides with the flight range, generating collision information corresponding to that ground feature.
7. The view synthesis method according to claim 4, wherein the step of performing the synthetic rendering process based on the integrated information to obtain the target flight view comprises:
performing route drawing according to the flight track information in the integrated information to generate target route information; and/or performing obstacle drawing according to the map tile information in the integrated information to generate obstacle information; and/or generating a collision early warning according to the collision information in the integrated information;
and integrally rendering at least one of the target route information, the obstacle information, and the collision early warning through a preset rendering engine to obtain the target flight view.
8. A view synthesis apparatus, characterized in that the view synthesis apparatus comprises:
the acquisition module is used for acquiring ground object information and/or flight information;
the processing module is used for performing information processing on the ground object information and/or the flight information to obtain integrated information;
and the rendering module is used for carrying out synthetic rendering processing based on the integrated information to obtain a target flight view.
9. A terminal device comprising a memory, a processor and a view synthesis program stored on the memory and executable on the processor, the view synthesis program when executed by the processor implementing the steps of the view synthesis method according to any of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a view synthesis program, which when executed by a processor, implements the steps of the view synthesis method according to any of claims 1-7.
CN202311711456.4A 2023-12-12 2023-12-12 View synthesis method, device, terminal equipment and storage medium Pending CN117745937A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311711456.4A CN117745937A (en) 2023-12-12 2023-12-12 View synthesis method, device, terminal equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117745937A true CN117745937A (en) 2024-03-22

Family

ID=90260148




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination