CN114998771A - Display method and system for enhancing visual field of aircraft, aircraft and storage medium - Google Patents


Info

Publication number
CN114998771A
CN114998771A (application CN202210838759.1A; granted publication CN114998771B)
Authority
CN
China
Prior art keywords
data
aircraft
target object
target
image data
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number
CN202210838759.1A
Other languages: Chinese (zh)
Other versions: CN114998771B (en)
Inventor
薛松柏 (Xue Songbai)
李德 (Li De)
徐大勇 (Xu Dayong)
郭亮 (Guo Liang)
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Chengdu Wofeitianyu Technology Co., Ltd.
Zhejiang Geely Holding Group Co., Ltd.
Original Assignee
Zhejiang Geely Holding Group Co., Ltd.
Aerofugia Technology (Chengdu) Co., Ltd.
Priority date (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application filed by Zhejiang Geely Holding Group Co., Ltd. and Aerofugia Technology (Chengdu) Co., Ltd.
Priority to CN202210838759.1A
Publication of CN114998771A
Application granted
Publication of CN114998771B
Legal status: Active

Classifications

    • G06V 20/17: Terrestrial scenes taken from planes or by drones
    • G06V 10/40: Extraction of image or video features
    • G06V 10/761: Proximity, similarity or dissimilarity measures
    • G06V 10/806: Fusion of extracted features, i.e. combining data from various sources at the sensor, preprocessing, feature-extraction or classification level
    • G08G 5/0017: Arrangements for implementing traffic-related aircraft activities, e.g. generating, displaying, acquiring or managing traffic information
    • G08G 5/0073: Surveillance aids
    • G08G 5/04: Anti-collision systems
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science; Physics & Mathematics; General Physics & Mathematics; Theoretical Computer Science; Computer Vision & Pattern Recognition; Multimedia; Aviation & Aerospace Engineering; Evolutionary Computation; Health & Medical Sciences; Artificial Intelligence; Computing Systems; Databases & Information Systems; General Health & Medical Sciences; Medical Informatics; Software Systems; Remote Sensing; Image Processing

Abstract

The invention discloses a display method and system for enhancing the visual field of an aircraft, an aircraft, and a storage medium, applied in the technical field of aircraft displays. The method comprises the following steps: acquiring image data, environment data and target object data of the distant real scene of the aircraft; performing data fusion on the image data, the environment data and the target object data to form enhanced visual data; and displaying the enhanced visual data on a display screen of the aircraft. Because the data collected over multiple channels are fused into enhanced visual data before being shown on the aircraft's display screen, the data can be displayed in one place, which eases observation and improves flight safety.

Description

Display method and system for enhancing visual field of aircraft, aircraft and storage medium
Technical Field
The invention relates to the technical field of aircraft display, in particular to a display method and a display system for enhancing the visual field of an aircraft, the aircraft and a storage medium.
Background
In the field of aviation, manned aircraft are developing rapidly. During flight, environmental conditions must be monitored in real time to improve flight safety. Currently, data can be collected around or ahead of the aircraft, including cooperative-target image data, environmental data, terrain data, building data, and the like. However, the different kinds of data are shown at scattered positions around the aircraft, which makes them hard to observe during flight.
Disclosure of Invention
The application aims to solve the problem that data displayed on the aircraft are scattered and hard to observe, by providing a display method and system for enhancing the visual field of an aircraft, an aircraft, and a storage medium.
The application provides a display method for enhancing the visual field of an aircraft, which comprises the following steps:
acquiring image data, environment data and target object data of a remote real scene of the aircraft;
performing data fusion on the image data, the environment data and the target object data to form enhanced visual data;
displaying the enhanced visual data on a display screen of the aircraft.
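By way of illustration, the three claimed steps can be sketched in code. The following Python fragment is a minimal sketch under assumed names (EnhancedView and build_enhanced_view are not part of the disclosure); it reduces fusion to layer composition, while the feature-level fusion modes are detailed later.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class EnhancedView:
    """A fused frame combining the three acquisition channels."""
    image_layer: Any        # pixel-level imagery (visible, infrared, radar, ...)
    environment_layer: Any  # visibility, light intensity, meteorological data
    target_layer: Any       # cooperative / non-cooperative target information

def build_enhanced_view(image_data: Any, environment_data: Any,
                        target_object_data: Any) -> EnhancedView:
    # Fusion is reduced here to layer composition; the feature-level
    # fusion modes are sketched later in the description.
    return EnhancedView(image_data, environment_data, target_object_data)
```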
Optionally, the step of performing data fusion on the image data, the environment data, and the target object data to form enhanced view data includes:
identifying the image data, the environment data, and the target object data;
when a target object is identified in at least one of the image data, the environment data and the target object data, determining that a target object exists within a preset flight range of the aircraft;
and when a target object exists in the preset flight range of the aircraft, performing data fusion on the image data, the environment data and the target object data to form the enhanced visual data.
Optionally, the target object data includes operating state data of the target object; after the step of performing data fusion on the image data, the environment data and the target object data to form enhanced view data, the method further includes:
predicting whether the aircraft will collide with the target object according to the operating state data of the target object and the operating state data of the aircraft;
if not, executing the step of displaying the enhanced visual data on a display screen of the aircraft;
and if so, generating alarm information.
Optionally, the step of performing data fusion on the image data, the environment data, and the target object data to form enhanced view data includes:
extracting target characteristic data in the image data, and performing data fusion on the target characteristic data, the target object data and the environment data to form the enhanced visual data; or,
acquiring instrument data and a flight track of an aircraft, and performing data fusion on the image data, the environment data, the target object data, the instrument data and the flight track to form the enhanced view data.
Optionally, the extracting the target feature data in the image data includes:
determining the intensity of ambient light according to the ambient data;
extracting all target image data matched with the ambient light intensity from the image data;
and fusing all the target image data to obtain the target characteristic data.
Optionally, the step of extracting all target image data matching the ambient light intensity from the image data includes:
extracting feature points in the image data;
determining matched pairs of feature points based on a similarity measure and the feature points;
determining a homography matrix according to the matched characteristic point pairs;
converting the image data based on the homography matrix to obtain the image data after registration processing;
extracting all target image data matched with the ambient light intensity from the image data after the registration processing.
Optionally, the step of fusing all the target image data to obtain the target feature data includes:
extracting feature data in each target image data;
and fusing the characteristic data in each target image data based on a preset fusion mode to obtain the target characteristic data, wherein the preset fusion mode comprises at least one of a logic filtering method, a weighted average method, a mathematical morphology method, an image algebra method, a simulated annealing method, a pyramid image fusion method, a wavelet transformation image fusion method and a multi-scale decomposition method.
Optionally, the performing data fusion on the target feature data, the target object data, and the environment data to form the enhanced view data includes:
determining the type of a target object according to the target object data and the target characteristic data, wherein the type of the target object comprises a cooperative target object or a non-cooperative target object;
extracting all target source data matched with the type of the target object from the target feature data, the environment data and the target object data;
extracting feature data in each target source data;
and fusing the characteristic data to obtain the enhanced visual data.
Optionally, the step of displaying the enhanced view data on a display screen of the aircraft comprises:
acquiring current light intensity;
determining the target display brightness of the display screen according to the light intensity;
and adjusting the brightness of the display screen to the target display brightness.
Optionally, the step of acquiring image data, environment data and target object data of the remote real scene of the aircraft comprises:
determining a cooperative target object closest to the aircraft, and receiving image data, environment data and target object data of the distant real scene of the aircraft acquired by that closest cooperative target object, wherein a first communication relation is established between the cooperative target object and the aircraft, so that data sharing between the cooperative target object and the aircraft is performed based on the first communication relation; and/or,
acquiring image data, environmental data and target object data of the distant real scene of the aircraft based on corresponding sensors on the aircraft; and/or,
and acquiring image data, environment data and target object data of the remote real scene of the aircraft based on an intelligent terminal, wherein a second communication relation is established among the intelligent terminal, the cooperation target object and the aircraft, so that data sharing among the intelligent terminal, the cooperation target object and the aircraft is carried out based on the second communication relation.
In addition, to achieve the above object, the present invention further provides a display system for enhancing the visual field of an aircraft, including: an environment monitoring device, a display screen with adjustable transparency and brightness, and an aircraft.
Furthermore, to achieve the above object, the present invention also provides an aircraft including a memory, a data processor, and an aircraft enhanced-vision display program stored on the memory; when executed by the data processor, the program implements the steps of the display method for enhancing the visual field of an aircraft described above.
In addition, to achieve the above object, the present invention further provides a storage medium having stored thereon an aircraft enhanced vision display program, which when executed by a data processor, implements the steps of the aircraft enhanced vision display method described above.
According to the above technical solution, after the image data, environment data and target object data of the distant real scene of the aircraft are obtained, they are fused to form enhanced visual data combining all of the data, which is then displayed on a display screen of the aircraft.
Drawings
FIG. 1 is a schematic diagram of an enhanced vision display system for an aircraft according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a transparency and brightness adjustable display system according to the present invention;
FIG. 3 is a schematic structural view of the aircraft of the present invention;
FIG. 4 is a schematic flowchart of a first embodiment of a method for displaying enhanced vision on an aircraft according to the present invention.
The objects, features, and advantages of the present application are further described below in connection with the embodiments and with reference to the accompanying drawings; the drawings show a single embodiment and are not a complete description of the invention.
Detailed Description
To solve the technical problem that data displayed on the aircraft are scattered and hard to observe, the application provides a display method for enhancing the visual field of the aircraft. After acquiring image data, environment data and target object data of the real scene at a distance from the aircraft, the method performs data fusion on them to form enhanced visual data combining all of the data, and then displays the enhanced visual data on a display screen of the aircraft.
For a better understanding of the above technical solutions, exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a display system for enhancing a visual field of an aircraft according to an embodiment of the present invention.
It should be noted that fig. 1 is a schematic structural diagram of a hardware operating environment of a display system for enhancing an aircraft view.
As shown in fig. 1, the display system for enhancing the visual field of an aircraft may include: an environment monitoring device, a display screen with adjustable transparency and brightness, and an aircraft.
The aircraft includes a data processor connected to the environment monitoring device and the display screen. After collecting the enhanced-vision source data and the environmental data from the environment monitoring device, the data processor processes them to generate the enhanced visual data and displays it on the display screen.
The environment monitoring device can be used to measure environmental data, including visibility and brightness. It can also be communicatively connected with other systems and receive the environmental visibility and brightness data they transmit.
The display screen is used for displaying the enhanced visual data. Image information including sensor images, air routes, instruments, terrain contour features, building contour features, cooperative-target information, non-cooperative-target information, meteorological data and the like can be fused into enhanced visual data by the general processing equipment and then displayed.
Optionally, the profile of the display screen may be matched to the aerodynamic profile of the aircraft, such as: teardrop, bullet, spherical, parabolic, hyperbolic, etc.
Alternatively, the display screen may be a transparent LCD, a transparent OLED, a laser projection, another type of display with adjustable brightness and transparency, or a combination of the above.
Alternatively, the display screen may be located directly in front of the pilot, or may cover the entire visual range that the pilot can see, such as the transparent portholes on the left and right sides, the area above the pilot, the area of the console where no instrumentation is located, and the like.
Optionally, the display screen is connected with the environment monitoring device, obtains the light intensity collected by it, and automatically adjusts the brightness and transparency of the aircraft's display screen based on that light intensity, so as to enhance the real view outside the cabin.
Optionally, the brightness and transparency of the aircraft's display screen can be adjusted jointly according to its layout position and the light intensity. For example, when the outside light brightness or the light intensity inside the aircraft decreases, the brightness of the enhanced view is increased; when the ambient light brightness or the light intensity inside the aircraft increases, the brightness of the enhanced view is reduced. The transparency can likewise be adjusted according to the layout position of the display screen.
Optionally, the enhanced view is visually matched with the real view outside the cockpit and with the pilot's eyes, so that when the pilot's head moves, the enhanced view is adaptively adjusted according to the head offset and the adjusted view is displayed on the display screen.
Referring to fig. 2, fig. 2 shows the display system with adjustable transparency and brightness of the present application, through which three views can be observed: the scene outside the cabin, the display system itself, and the enhanced visual data. Besides the display screen with adjustable transparency and brightness, the display system comprises a transparent cockpit display screen through which the real scenery outside the cockpit can be observed directly.
Optionally, the display system for enhancing the visual field of the aircraft may further include a camera, a radio frequency (RF) circuit, sensors, an audio circuit, a WiFi module, and the like, the sensors including light sensors, motion sensors and others. Specifically, the light sensors may include an ambient light sensor, which adjusts the brightness of the display screen according to the brightness of ambient light, and a proximity sensor, which can turn off the display screen and/or the backlight. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and, at rest, the magnitude and direction of gravity, and can be used for recognizing the attitude of the aircraft, for vibration-recognition functions, and the like. Of course, the display system may also be equipped with other sensors such as a gyroscope, barometer, hygrometer, thermometer and infrared sensor, which are not described again here.
Those skilled in the art will appreciate that the configuration of the aircraft enhanced vision display system shown in FIG. 1 does not constitute a limitation of the aircraft enhanced vision display system, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
The specific implementation of the display system for enhancing the aircraft's visual field is substantially the same as that of the embodiments of the display method described below, and is not repeated here.
Based on the same inventive concept, the present application also proposes an aircraft, with reference to fig. 3, comprising: a memory, a data processor, and an aircraft enhanced-vision display program stored on the memory and executable on the data processor; when the data processor executes the program, the steps of the display method for enhancing the visual field of an aircraft described above are implemented.
In particular, the aircraft comprises a data processor 1001, for example a CPU, a memory 1005. The aircraft may also include a user interface 1003, a network interface 1004, and a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory such as a disk memory. The memory 1005 may alternatively be a storage device separate from the data processor 1001 described above.
As shown in fig. 3, the memory 1005, as a storage medium, may contain an operating system, a network communication module, a user interface module, and the aircraft enhanced-vision display program. Among these, the operating system is a program that manages and controls the aircraft's hardware and software resources and supports the running of the enhanced-vision display program and other software.
In the aircraft shown in fig. 3, the user interface 1003 is mainly used for connecting to a terminal and exchanging data with it; the network interface 1004 is mainly used for connecting to a background server and exchanging data with it; and the data processor 1001 may be used to invoke the aircraft enhanced-vision display program stored in the memory 1005.
In this embodiment, the aircraft comprises: a memory 1005, a data processor 1001, and an aircraft enhanced vision display program stored on the memory and executable on the data processor.
Those skilled in the art will appreciate that the aircraft structure illustrated in FIG. 3 is not intended to be limiting of aircraft and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
A first embodiment.
As shown in fig. 4, in a first embodiment of the present application, the method for displaying enhanced visual field of an aircraft mainly includes the following steps:
step S110, image data, environment data and target object data of the remote real scene of the aircraft are obtained.
In this embodiment, the aircraft acquires data ahead of or around itself in real time during flight. The acquired data include, but are not limited to, image data, environmental data and target object data. Data about cooperative targets may be acquired by different types of sensors: for example, temperature by a temperature sensor and location by an image sensor. Since the data come from different channels, they need to be fused before display.
The image data is also referred to as pixel-level target image data. The acquired image data may be image data in front of the aircraft or in the surroundings of the aircraft. The image data may be visible light image data, infrared image data, multispectral image data, lidar image data, terrain database data, synthetic aperture radar image data, and the like. The image data may be directly acquired by a corresponding sensor, for example, the lidar image data may be acquired by the lidar sensor, and the infrared image data may be acquired by the infrared sensor. Optionally, the acquired image data may be stored in a local database, and may also be backed up to be stored in the cloud, so that the data is not easily lost. Alternatively, the image data may be acquired through a communication link, for example, acquiring corresponding data from image data acquired and stored in advance by the ground terminal device.
The environment data may be environment data ahead of or around the aircraft, or environment data around a cooperative target, a cooperative target being an aircraft or other device communicatively connected to the aircraft. The environment data may also be environment data inside the aircraft. It includes, but is not limited to, meteorological data, temperature data, humidity data, light intensity, visibility, air quality, wind speed, wind direction, and the like.
Alternatively, the environmental data may be collected by the aircraft's own environmental monitoring device. The environment monitoring device comprises a plurality of types of sensors, and corresponding environment data can be acquired through the sensors of different types, for example, temperature data can be acquired through a temperature sensor; the light intensity can be collected by a photosensitive sensor.
Optionally, the environmental data may also be obtained through a communication link. For example, the aircraft may be communicatively connected with an intelligent terminal such as a mobile phone, and the meteorological data displayed on the terminal transmitted to the aircraft, so that the aircraft obtains the meteorological data.
Optionally, the environmental data outside the aircraft may be acquired by the aircraft's sensors and the intelligent terminal, and the environmental data, image data and target object data may also be acquired by a cooperative target around the aircraft and transmitted to the aircraft. In this process, a first communication relation is established between the aircraft and the cooperative target, based on which data can be shared between them. The aircraft may search for the cooperative target closest to itself and acquire the environmental data, image data and target object data through it; after finding the closest cooperative target, the aircraft obtains from it the environment data, image data and target object data corresponding to the distant real scene. By acquiring and then fusing these data, the vision-enhancement technique can greatly extend the pilot's field of view when the aircraft flies in cloud, rain and other adverse weather conditions.
Optionally, a second communication relationship among the intelligent terminal, the cooperative target and the aircraft may be established, so as to share data among the intelligent terminal, the cooperative target and the aircraft based on the second communication relationship. Optionally, the intelligent terminal may be a ground terminal, or a terminal on an aircraft. The intelligent terminal can acquire image data, environment data and target object data of the remote real scene of the aircraft, which are acquired by the aircraft and the cooperative target in real time. Optionally, the collected data may be stored, preliminarily screened, or integrated on the intelligent terminal. Optionally, the processed data may be sent to the aircraft in real time.
Optionally, according to actual requirements, all three channels may also be used simultaneously: receiving the image data, environment data and target object data of the distant real scene acquired by the cooperative target closest to the aircraft; acquiring them from the corresponding sensors on the aircraft; and acquiring them from the intelligent terminal.
The target object data includes cooperative-target image data and may also include non-cooperative-target image data. A cooperative target may be another aircraft communicatively connected with the aircraft; a non-cooperative target may be an object not communicatively connected with the aircraft, such as a bird, another aircraft, a building or terrain. Optionally, the cooperative-target image data includes, but is not limited to, the target's longitude and latitude, altitude, time, track angle, course inflection points, heading, airspeed, identification information and category information. The target object data can be acquired by corresponding sensors, acquired by a cooperative target and transmitted to the aircraft, or obtained through other communication links such as a base station.
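For illustration, the cooperative-target attributes listed above can be gathered into one record type. This is a hypothetical sketch; the field names and units are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class CooperativeTargetData:
    # Field names and units are illustrative; they mirror the attributes
    # the text lists for cooperative-target image data.
    longitude: float
    latitude: float
    altitude_m: float
    time_s: float               # acquisition time, seconds since epoch
    track_angle_deg: float
    heading_deg: float
    airspeed_mps: float
    identification: str         # identification information, e.g. a broadcast ID
    category: str               # category information (target class)
    course_inflection_points: list = field(default_factory=list)  # (lon, lat) pairs
```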
Optionally, the data acquired by the aircraft may also include flight instruments, navigational routes, navigational maps, video data, terrain data, building data, etc. of the aircraft itself.
Step S120, performing data fusion on the image data, the environment data and the target object data to form enhanced visual data.
In this embodiment, the enhanced view data may be displayed on a display screen, and the enhanced view data is data obtained by superimposing data collected by multiple types or channels. The situation in front of the aircraft or around the aircraft can be visually observed through the enhanced visual data.
Specifically, in the related art, after the aircraft acquires the image data, environmental data and target object data of its distant real scene, these may be shown on several display screens or instruments in the aircraft, each presenting data from a different type or channel, i.e. different data appear on different screens. In the absence of a single display gathering target detections from multiple sensing sources, the pilot must frequently scan screens and instruments at different positions. By adopting multi-target detection and information fusion, enhanced visual data are obtained by fusing the image data, environment data and target object data, and the fused enhanced visual data are then displayed on one display screen in a unified manner. Optionally, since different data have different characteristic features, the features of each kind of data can be extracted and the fusion performed on those features, further improving the precision of the fused enhanced visual data.
Step S130, displaying the enhanced visual data on a display screen of the aircraft.
In this embodiment, the display screen is adjustable in brightness and transparency, so that the pilot can observe the situation outside the cabin through the display screen while it displays superimposed target object information to prompt the pilot and increase flight safety. Optionally, the display position and display mode of the enhanced visual data on the screen may be set or adjusted according to the pilot's working habits; for example, the enhanced real-scene picture can be moved, zoomed and so on. Optionally, a single sensing data source may be selected for display according to the operator's needs, or the fused enhanced visual data may be displayed; the displayed data may combine different data sources and be adjusted automatically or manually for different flight phases, climates, illumination, temperatures, visibility and the like.
According to the above technical solution, after the image data, environment data and target object data of the distant real scene of the aircraft are acquired, they are fused to form enhanced visual data combining all of the data, which is then displayed on the display screen of the aircraft, so that data gathered over multiple channels are presented in one place, easing observation and improving flight safety.
Optionally, step S120 in the first embodiment specifically includes the following steps:
step S121, extracting target feature data in the image data, and performing data fusion on the target feature data, the target object data, and the environment data to form the enhanced view data.
In this embodiment, after the image data, environment data and target object data of the distant real scene of the aircraft are acquired, the acquired data are fused to form enhanced visual data. Specifically, the image data may be acquired and processed to extract the target feature data in it. Alternatively, after the image data undergo processing such as grayscale conversion and binarization, feature detection such as edge detection, corner detection or region detection is performed on the binarized image data, followed by feature extraction to obtain the target feature data in the image data.
In this embodiment, the target feature data includes fixed target outline features such as terrain and buildings, and dynamic features of the moving target, such as moving track, moving direction, target outline size, moving speed, and the like. After extracting the target feature data in the image data, performing data fusion on the target feature data, the target object data and the environment data to form enhanced view data.
Or, step S122, acquiring instrument data and a flight trajectory of the aircraft, and performing data fusion on the image data, the environment data, the target object data, the instrument data and the flight trajectory to form the enhanced view data.
In this embodiment, instrument data and the flight trajectory of the aircraft may also be acquired; the flight trajectory may be determined from the longitude and latitude collected by the position sensor at different times. The image data, environmental data, target object data, instrument data and flight trajectory may then be fused to form the enhanced visual data. With the instrument data and flight trajectory fused in, if a target object exists, the distance between the flight trajectory and the target object can be observed in real time on the display screen, and collision prediction can be performed based on that distance.
Optionally, a fusion mode can be selected according to the pilot's needs, yielding different enhanced visual data. Both fusion modes can also be used at the same time, and during actual display the enhanced visual data from the different fusion modes can be switched between or displayed side by side for comparison.
According to the technical scheme, the technical means of fusing and superposing different data to form the enhanced visual data is adopted, so that the problem of low target detection precision of a single sensor is solved, and the display precision of the data is improved by fusing multi-dimensional data.
Optionally, in step S121 of the first embodiment, the extracting target feature data in the image data specifically includes the following steps:
step S1211, determining the intensity of the ambient light according to the ambient data;
step S1212, extracting all target image data matching the ambient light intensity from the image data;
and step S1213, fusing all the target image data to obtain the target characteristic data.
In this embodiment, environmental data can be collected by the environment monitoring device; these data include visibility, temperature, humidity, light intensity and other information, from which the ambient light intensity can be determined. The target image data referred to above are themselves image data.
In the process of extracting the target feature data from the image data, the target image data needs to be extracted from the image data, and then the extracted target image data is fused to obtain the target feature data. However, since the advantages of different sensors are different, the image data collected under different environmental conditions are also different.
In order to improve the accuracy of the enhanced vision information obtained by the final fusion, optionally, a target sensor may be chosen by combining the current environmental data with the strengths of the different sensors; image data acquired by that target sensor are then treated as target image data, and all the target image data are fused to obtain the target feature data. For example, radar imaging has higher radial precision and performs markedly better than optical imaging in poor light, while multispectral, infrared and visible-light channels can extract key target features, so the fused image information is richer and the target features more salient. After fusion, the contour features of fixed targets such as terrain and buildings are extracted, together with the dynamic features of moving targets, such as movement track, movement direction, target contour size and movement speed.
Optionally, image data may be acquired by different sensors, all target image data matched with the current ambient light intensity are extracted from the image data acquired by all the sensors according to the current ambient light intensity, and then all the matched target image data are subjected to fusion processing, so as to obtain target feature data.
Optionally, all target image data matching the ambient light intensity may be extracted from image data collected from different sensors on the cooperative target or aircraft; all target image data matching the ambient light intensity may also be extracted from image data transmitted from other communication links.
According to the technical scheme, the image data of the image sensor to be adopted is determined according to different ambient light intensities, and is used as the target image data, and then the target image data is fused, so that the technical means of the target characteristic data is obtained, the technical advantages of each sensor can be fully utilized, and the target detection precision is improved.
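As a sketch of this sensor-selection rule, the channel names and lux thresholds below are assumptions rather than values from the disclosure; they merely encode the stated rule of thumb that radar and infrared outperform visible-light imaging when light is poor.

```python
def select_target_sensors(ambient_light_lux: float) -> list[str]:
    """Pick which imaging channels to treat as target image data
    for the current ambient light intensity."""
    if ambient_light_lux < 10:       # night or deep cloud
        return ["radar", "infrared"]
    if ambient_light_lux < 1000:     # dusk, haze, heavy overcast
        return ["radar", "infrared", "multispectral"]
    return ["visible", "multispectral", "infrared"]  # daylight
```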
Optionally, in the process of extracting all target image data matched with the ambient light intensity from the image data, the collected image data needs to be registered first, and then the target image data is extracted from the image data after registration processing. The image registration is a process of matching and superimposing two or more images acquired at different times, different sensors or under different conditions (weather, illuminance, camera position and angle, etc.). Specifically, the registration processing of the acquired image data specifically includes extracting feature points in the image data, and determining matched feature point pairs based on the similarity measurement and the feature points; determining a homography matrix according to the matched characteristic point pairs; converting the image data based on the homography matrix to obtain image data after registration processing; after the image data after the registration processing is obtained, all target image data matching the ambient light intensity are extracted from the image data after the registration processing.
According to the technical scheme, the image data are registered, and then the target image data matched with the ambient light intensity are extracted from the registered image data, so that the precision of the target data is improved.
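The registration sequence just described (feature points, similarity-matched pairs, homography, transformation) can be sketched with OpenCV. The disclosure names no particular detector or matcher, so the use of ORB features with a Hamming-distance brute-force matcher here is an assumption.

```python
import cv2
import numpy as np

def register_image(src_img, ref_img, min_matches=10):
    """Warp src_img into ref_img's frame: extract feature points, match
    pairs by a similarity measure, estimate a homography, then transform."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(src_img, None)
    kp2, des2 = orb.detectAndCompute(ref_img, None)

    # Hamming distance serves as the similarity measure for ORB's binary descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise ValueError("not enough matched feature-point pairs")

    src_pts = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography from the matched pairs, robust to outliers via RANSAC
    homography, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

    h, w = ref_img.shape[:2]
    return cv2.warpPerspective(src_img, homography, (w, h))
```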
Optionally, in step S1213 of the first embodiment, the step of fusing all the target image data to obtain the target feature data specifically includes the following steps:
step S12131 of extracting feature data in each of the target image data;
step S12132, fusing the feature data in each of the target image data based on a preset fusion manner to obtain the target feature data, where the preset fusion manner includes at least one of a logic filtering method, a weighted average method, a mathematical morphology method, an image algebra method, a simulated annealing method, a pyramid image fusion method, a wavelet transform image fusion method, and a multi-scale decomposition method.
In this embodiment, after the target image data is obtained, all the obtained target image data may be fused, so as to obtain the target feature data. The fusing of all the target image data may be extracting feature data in each target image data, and fusing the feature data in each target image data by using a preset fusion mode to obtain the target feature data.
Optionally, feature data in each target image data may be fused based on a logical filtering method to obtain the target feature data. Optionally, feature data in each of the target image data may be fused based on a weighted average method to obtain the target feature data. Optionally, feature data in each of the target image data may be fused based on a mathematical morphology method to obtain the target feature data. Optionally, feature data in each of the target image data may be fused based on an image algebra method to obtain the target feature data. Optionally, feature data in each target image data may be fused based on a simulated annealing method to obtain the target feature data. Optionally, feature data in each of the target image data may be fused based on a pyramid image fusion method to obtain the target feature data. Optionally, feature data in each target image data may be fused based on a wavelet transform image fusion method to obtain the target feature data. Optionally, feature data in each of the target image data may be fused based on a multi-scale decomposition method to obtain the target feature data. The feature data can be fused by adopting the methods, so that the target feature data can be obtained.
According to the technical scheme, different fusion modes can be adopted to fuse the target image data to obtain the target characteristic data, so that the fusion of the image data is realized.
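Of the enumerated fusion modes, the weighted-average method is the simplest to illustrate. The sketch below assumes the inputs are already co-registered arrays of identical shape; it is illustrative, not the disclosed implementation.

```python
import numpy as np

def weighted_average_fusion(feature_maps, weights=None):
    """Fuse co-registered feature maps by weighted averaging.
    With weights=None every map contributes equally."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in feature_maps])
    # np.average normalises the weights, so the fused map stays in range
    return np.average(stack, axis=0, weights=weights)
```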
Optionally, in step S121 in the first embodiment, performing data fusion on the target feature data, the target object data, and the environment data to form the enhanced view data specifically includes the following steps:
step S1214, determining the type of the target object according to the target object data and the target characteristic data;
step S1215 of extracting all target source data matching the type of the target object from the target feature data, the environment data, and the target object data;
step S1216, extracting feature data in each of the target source data;
and step S1217, fusing the characteristic data to obtain the enhanced visual data.
In this embodiment, after the target feature data, the target object data and the environment data are determined, these data are fused to form the enhanced visual data. Because the sensors have different operating ranges, once a target is found within the safe flight range it can be shown on the display system; if several detections appear, it is judged whether they correspond to the same target, and if so, data-level fusion is performed before display. During fusion, because the data sources for a target object are dispersed, the data collected by the sensors best suited to the conditions can be selected for fusion according to each sensor's strengths.
Alternatively, the type of the target object may be determined from pre-acquired target object data and target feature data in the image data. Alternatively, the type of the target object may be a cooperative target object or a non-cooperative target. Optionally, after determining the type of the target object, all target source data matching the target object type are extracted from the target feature data, the environment data and the target object data. Specifically, the relevant data corresponding to the cooperation target may be extracted from the target feature data, the relevant data corresponding to the cooperation target may be extracted from the environment data, the relevant data corresponding to the cooperation target may be extracted from the target object data, and all the extracted data may be integrated to obtain the target source data.
Optionally, after obtaining the target source data, extracting feature data in the target source data, that is, extracting feature data from the target feature data, the environment data, and the target object data that match the type of the target object.
Optionally, after extracting the feature data, performing fusion processing on each feature data extracted from each target source data, thereby obtaining the enhanced view data. Alternatively, the fusion processing may be performed by at least one of a logic filtering method, a weighted average method, a mathematical morphology method, an image algebra method, an analog annealing method, a pyramid image fusion method, a wavelet transform image fusion method, and a multi-scale decomposition method.
For example, if the target is a cooperative target, information such as its course, speed and track can be acquired through an interrogation-response or broadcast mechanism and fused with related data collected by other sensors, so that the position of the cooperative target can be obtained accurately along with the target's feature data and size information; if the target is a non-cooperative target, the target data detected by the radar and the photoelectric detector are fused and then displayed on the display screen.
According to the technical scheme, the target detection precision can be improved by fully utilizing the technical advantages of each sensor.
Optionally, step S130 in the first embodiment specifically includes the following steps:
step S131, acquiring the current light intensity;
step S132, determining the target display brightness of the display screen according to the light intensity;
and step S133, adjusting the brightness of the display screen to the target display brightness.
In this embodiment, the brightness and transparency of the display screen are adjustable and can be adjusted automatically according to the layout position or the brightness of the light, so as to enhance the real visual scene outside the cabin. For example, if the real scene is brighter, the brightness of the enhanced view is reduced; otherwise it is increased. The transparency may also be adjusted according to the layout position of the display system.
Alternatively, the current light intensity may be a light intensity inside the aircraft or a light intensity outside the aircraft. The current light intensity can be obtained by the environment monitoring device. The target display brightness of the display screen corresponding to different light intensities is different. The corresponding relationship between different light intensities and the target display brightness of the display screen can be established in advance according to experience or experiments. And then after the current light intensity is obtained, the target display brightness of the display screen can be searched and determined based on the light intensity and the corresponding relation.
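Such a pre-established correspondence can be pictured as a small calibration table with interpolation between its points. The lux and brightness values below are assumed placeholders (the text only says the mapping is built in advance from experience or experiment); following the rule stated earlier, brighter surroundings map to a dimmer enhanced view.

```python
import numpy as np

# Assumed calibration table: ambient light (lux) -> target brightness (nits)
_LUX_POINTS = [1, 10, 100, 1_000, 10_000, 100_000]
_NIT_POINTS = [900, 700, 500, 350, 200, 100]  # dimmer as ambient light grows

def target_display_brightness(ambient_lux: float) -> float:
    """Look up the target display brightness for the current light
    intensity, interpolating between calibration points."""
    return float(np.interp(ambient_lux, _LUX_POINTS, _NIT_POINTS))
```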
Optionally, after determining the target display brightness of the display screen, adjusting the brightness of the display screen to the target display brightness, thereby adjusting the brightness of the enhanced view data and increasing the corresponding safe flight capability. Optionally, a brightness adjustment function may be provided on the display screen, and by activating the brightness adjustment function, the pilot may adjust the brightness of the display screen to the target display brightness. Optionally, the target display brightness of the display screen may be automatically adjusted after the target display brightness is determined.
According to the above technical solution, a display screen with adjustable transparency and brightness replaces the traditional glass cockpit; the vision-enhancement technique broadens the range of environments the aircraft can operate in, the brightness of the enhanced view is adjusted automatically based on the environment monitoring device, and the corresponding safe-flight capability is improved.
A second embodiment.
Based on the first embodiment, in the second embodiment of the present application, the method for displaying enhanced visual field of an aircraft mainly includes the following steps:
step S110, acquiring image data, environment data and target object data of a remote real scene of the aircraft;
step S221 of recognizing the image data, the environment data, and the target object data;
step S222, when it is recognized that at least one of the image data, the environment data, and the target object data has a target object, determining that a target object exists within a preset flight range of the aircraft;
step S223, when a target object exists in a preset flight range of the aircraft, performing data fusion on the image data, the environment data and the target object data to form the enhanced view data;
Step S130, displaying the enhanced visual data on a display screen of the aircraft.
In this embodiment, after the image data, environment data and target object data of the distant real scene of the aircraft are acquired, whether a target object exists within a preset flight range of the aircraft may be determined by the general processing device or the data processor. Optionally, the preset flight range, which refers to the safe flight range, may be adapted to actual conditions. Optionally, the image data, environment data and target object data may be identified, and when a target object is recognized in at least one of them, it is determined that a target object exists and its position is determined. The distance between the aircraft and the target object is then computed from the aircraft's current position and the target object's position, and whether the target object lies within the preset flight range is determined from that distance. Alternatively, the target object may be judged to be within the preset flight range when the distance is less than or equal to a preset value, and outside it when the distance is greater than the preset value.
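A minimal sketch of this distance-threshold test, assuming positions in a local Cartesian frame and a placeholder preset value:

```python
import math

def target_in_preset_flight_range(aircraft_pos, target_pos,
                                  preset_range_m=5_000.0):
    """Return True when the target lies within the preset (safe) flight
    range. Positions are (east_m, north_m, up_m) tuples; the 5 km default
    is an assumed placeholder, not a value from the disclosure."""
    return math.dist(aircraft_pos, target_pos) <= preset_range_m
```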
Optionally, when a target object exists within the preset flight range of the aircraft, the image data, environmental data and target object data may be fused to form enhanced visual data, which is displayed on the display screen of the aircraft. Optionally, when no target object exists within the preset flight range, the method returns to acquiring the image data, environment data and target object data of the distant real scene, so that target objects are detected in real time. According to this technical solution, target objects can be detected in real time.
A third embodiment.
Based on the first embodiment, in the third embodiment of the present application, the method for displaying the enhanced view of the aircraft mainly includes the following steps:
step S110, acquiring image data, environment data and target object data of a remote real scene of the aircraft;
step S120, carrying out data fusion on the image data, the environment data and the target object data to form enhanced visual data;
step S310, predicting whether the aircraft collides with the target object according to the operation state data of the target object and the operation state data of the aircraft;
if not, executing step S130, and displaying the enhanced visual data on a display screen of the aircraft;
if yes, go to step S320 to generate an alarm message.
In this embodiment, after the acquired data are fused into enhanced visual data, collision prediction can further be performed using the target feature data, reducing collision risk, and the results are displayed uniformly on the display system. Specifically, collision prediction combines the aircraft's own operating-state data, including but not limited to flight route, moving speed and airframe size, with the target object data, for example the target's operating-state data such as flight speed and flight angle. The prediction judges whether a collision risk exists between the target object and the aircraft; if there is no risk, the fused enhanced-vision information is rendered graphically and shown on the display screen. If a collision risk exists, it is shown on the display screen according to the relevant standards and display rules, and the pilot must adjust the flight path in time to avoid collision.
Optionally, when a collision risk may exist, corresponding warning information may be generated and presented in the form of voice, text, vibration, or the like. The warning information may specifically include the position, distance, and type of the target object. Such an alarm prompt reduces the pilot's risk of collision.
Alternatively, the collision risk may be determined from the flight speed and flight angle of the target object and its distance from the aircraft. Optionally, a collision risk may be determined to exist when the distance between the target object and the aircraft is less than a preset distance and the flight speed of the target object is greater than a preset speed; otherwise, it is determined that no collision risk exists.
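A minimal sketch of this simplified rule, together with the warning information described above, might look as follows; the field names and the threshold defaults are assumptions.

    def collision_risk(distance_m: float,
                       target_speed_mps: float,
                       preset_distance: float = 1000.0,
                       preset_speed: float = 50.0) -> bool:
        # Risk exists when the target is closer than the preset distance AND
        # faster than the preset speed; otherwise no risk is assumed.
        return distance_m < preset_distance and target_speed_mps > preset_speed

    def make_alarm(target_position, distance_m: float, target_type: str) -> dict:
        # Alarm information carrying the position, distance and type of the
        # target object, for presentation as voice, text and/or vibration.
        return {
            "position": target_position,
            "distance_m": distance_m,
            "type": target_type,
            "channels": ("voice", "text", "vibration"),
        }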
According to this technical scheme, target characteristic data are extracted after pixel-level fusion, and collision prediction is performed after feature-level data fusion, thereby reducing the collision risk, with the results displayed uniformly on the display system.
While a logical order is shown in the flow chart, in some cases, the steps shown or described may be performed in a different order than presented herein.
Based on the same inventive concept, an embodiment of the present application further provides a computer-readable storage medium storing an aircraft enhanced vision display program; when the program is executed by a data processor, the steps of the aircraft enhanced vision display method described above are implemented, and the same technical effects can be achieved.
Since the storage medium provided in the embodiments of the present application is used to implement the methods of these embodiments, a person skilled in the art can understand its specific structure and possible modifications from the methods described herein, so the details are not repeated. Any storage medium used in the methods of the embodiments of the present application is intended to fall within the scope of protection of the present application.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a data processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the data processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (13)

1. An aircraft enhanced vision display method, comprising:
acquiring image data, environment data and target object data of a remote real scene of the aircraft;
performing data fusion on the image data, the environment data and the target object data to form enhanced view data;
displaying the enhanced view data on a display screen of the aircraft.
2. The aircraft enhanced vision display method according to claim 1, wherein the step of performing data fusion on the image data, the environment data and the target object data to form enhanced view data comprises:
identifying the image data, the environment data, and the target object data;
when at least one of the image data, the environment data and the target object data is identified to have a target object, determining that the target object exists in a preset flight range of the aircraft;
when the target object exists in the preset flight range of the aircraft, performing data fusion on the image data, the environment data and the target object data to form the enhanced view data.
3. The aircraft enhanced vision display method according to claim 1, wherein the target object data includes operating state data of a target object; after the step of performing data fusion on the image data, the environment data and the target object data to form enhanced view data, the method further includes:
predicting whether the aircraft collides with the target object according to the operating state data of the target object and the operating state data of the aircraft;
if not, executing the step of displaying the enhanced view data on a display screen of the aircraft;
if so, generating alarm information.
4. The aircraft enhanced vision display method according to claim 1, wherein the step of performing data fusion on the image data, the environment data and the target object data to form enhanced view data comprises:
extracting target feature data in the image data, and performing data fusion on the target feature data, the target object data and the environment data to form the enhanced view data; or,
acquiring instrument data and a flight track of the aircraft, and performing data fusion on the image data, the environment data, the target object data, the instrument data and the flight track to form the enhanced view data.
5. The aircraft enhanced vision display method according to claim 4, wherein the step of extracting target feature data in the image data comprises:
determining the intensity of ambient light according to the ambient data;
extracting all target image data matched with the ambient light intensity from the image data;
fusing all the target image data to obtain the target feature data.
6. The aircraft enhanced vision display method according to claim 5, wherein the step of extracting from the image data all target image data matched with the ambient light intensity comprises:
extracting feature points in the image data;
determining matched feature point pairs based on a similarity measure and the feature points;
determining a homography matrix according to the matched feature point pairs;
converting the image data based on the homography matrix to obtain the image data after registration processing;
extracting all target image data matched with the ambient light intensity from the image data after the registration processing.
7. The aircraft enhanced vision display method according to claim 5, wherein the step of fusing all the target image data to obtain the target feature data comprises:
extracting feature data in each target image data;
fusing the feature data in each target image data based on a preset fusion mode to obtain the target feature data, wherein the preset fusion mode comprises at least one of a logic filtering method, a weighted average method, a mathematical morphology method, an image algebra method, a simulated annealing method, a pyramid image fusion method, a wavelet transform image fusion method and a multi-scale decomposition method.
8. The aircraft enhanced vision display method according to claim 4, wherein the step of performing data fusion on the target feature data, the target object data and the environment data to form the enhanced view data comprises:
determining the type of a target object according to the target object data and the target feature data, wherein the type of the target object comprises a cooperative target object or a non-cooperative target object;
extracting all target source data matched with the type of the target object from the target feature data, the environment data and the target object data;
extracting feature data in each target source data;
fusing the feature data to obtain the enhanced view data.
9. The aircraft enhanced vision display method according to claim 1, wherein the step of displaying the enhanced view data on a display screen of the aircraft comprises:
acquiring the current light intensity;
determining the target display brightness of the display screen according to the light intensity;
adjusting the brightness of the display screen to the target display brightness.
10. The aircraft enhanced vision display method according to claim 1, wherein the step of acquiring image data, environment data and target object data of the remote real scene of the aircraft comprises:
determining a cooperative target object closest to the aircraft, and receiving image data, environment data and target object data of the remote real scene of the aircraft acquired by that closest cooperative target object, wherein a first communication relation is established between the cooperative target object and the aircraft, so that data sharing between the cooperative target object and the aircraft is performed based on the first communication relation; and/or,
acquiring image data, environment data and target object data of the remote real scene of the aircraft based on corresponding sensors on the aircraft; and/or,
acquiring image data, environment data and target object data of the remote real scene of the aircraft based on an intelligent terminal, wherein a second communication relation is established among the intelligent terminal, the cooperative target object and the aircraft, so that data sharing among the intelligent terminal, the cooperative target object and the aircraft is performed based on the second communication relation.
11. An aircraft, characterized in that it comprises: a memory, a data processor and an aircraft enhanced vision display program stored on the memory and executable on the data processor, the aircraft enhanced vision display program when executed by the data processor implementing the steps of the aircraft enhanced vision display method as claimed in any one of claims 1 to 10.
12. An aircraft enhanced vision display system, comprising: an environmental monitoring device, a display screen with adjustable transparency and brightness, and an aircraft according to claim 11.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores an aircraft enhanced vision display program which, when executed by a data processor, implements the steps of the aircraft enhanced vision display method of any one of claims 1-10.
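By way of illustration only, and not as a limitation of the claims: the feature-point registration recited in claim 6 and the weighted-average fusion option recited in claim 7 could be realized along the following lines using OpenCV. The choice of the ORB detector, the RANSAC reprojection threshold, and the equal fusion weights are assumptions; the claims do not fix any of them.

    import cv2
    import numpy as np

    def register(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
        # Extract feature points in the image data (claim 6, first step).
        orb = cv2.ORB_create()
        kp1, des1 = orb.detectAndCompute(src, None)
        kp2, des2 = orb.detectAndCompute(dst, None)
        # Determine matched feature point pairs by a similarity measure
        # (here, Hamming distance between ORB descriptors).
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        if len(matches) < 4:
            raise ValueError("not enough matched feature point pairs for a homography")
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        # Determine the homography matrix from the matched pairs, then
        # convert (warp) the source image to obtain the registered image data.
        H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)
        h, w = dst.shape[:2]
        return cv2.warpPerspective(src, H, (w, h))

    def weighted_average_fusion(a: np.ndarray, b: np.ndarray, w: float = 0.5) -> np.ndarray:
        # One of the preset fusion modes listed in claim 7; the inputs must
        # share the same size and type after registration.
        return cv2.addWeighted(a, w, b, 1.0 - w, 0.0)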
CN202210838759.1A 2022-07-18 2022-07-18 Display method and system for enhancing visual field of aircraft, aircraft and storage medium Active CN114998771B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210838759.1A CN114998771B (en) 2022-07-18 2022-07-18 Display method and system for enhancing visual field of aircraft, aircraft and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210838759.1A CN114998771B (en) 2022-07-18 2022-07-18 Display method and system for enhancing visual field of aircraft, aircraft and storage medium

Publications (2)

Publication Number Publication Date
CN114998771A true CN114998771A (en) 2022-09-02
CN114998771B CN114998771B (en) 2022-11-01

Family

ID=83021146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210838759.1A Active CN114998771B (en) 2022-07-18 2022-07-18 Display method and system for enhancing visual field of aircraft, aircraft and storage medium

Country Status (1)

Country Link
CN (1) CN114998771B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100247A (en) * 2015-07-29 2015-11-25 重庆赛乐威航空科技有限公司 Low-altitude aircraft meteorological information interaction system
US20190043164A1 (en) * 2017-08-01 2019-02-07 Honeywell International Inc. Aircraft systems and methods for adjusting a displayed sensor image field of view
WO2021063119A1 (en) * 2019-10-01 2021-04-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for image processing, terminal
CN112288879A (en) * 2020-10-29 2021-01-29 中国航空工业集团公司洛阳电光设备研究所 Method for enhancing visual field on board

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LI QINGQING et al.: "Towards Active Vision with UAVs in Marine Search and Rescue: Analyzing Human Detection at Variable Altitudes", 2020 IEEE INTERNATIONAL SYMPOSIUM ON SAFETY, SECURITY, AND RESCUE ROBOTICS (SSRR)
LYNDA J. KRAMER et al.: "Commercial Flight Crew Decision Making During Low-Visibility Approach Operations Using Fused Synthetic and Enhanced Vision Systems", THE INTERNATIONAL JOURNAL OF AVIATION PSYCHOLOGY
YE YAZHOU: "Dynamic three-dimensional complex scene perception and enhanced synthetic vision technology", China Masters' Theses Full-text Database (Engineering Science and Technology II)
SONG LIN: "Research on key technologies of visual navigation for UAVs in flight", China Doctoral Dissertations Full-text Database (Engineering Science and Technology II)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117475806A (en) * 2023-12-28 2024-01-30 深圳康荣电子有限公司 Display screen self-adaptive response method and device based on multidimensional sensing data feedback
CN117475806B (en) * 2023-12-28 2024-03-29 深圳康荣电子有限公司 Display screen self-adaptive response method and device based on multidimensional sensing data feedback

Also Published As

Publication number Publication date
CN114998771B (en) 2022-11-01

Similar Documents

Publication Publication Date Title
US8493412B2 (en) Methods and systems for displaying sensor-based images of an external environment
US7528938B2 (en) Geospatial image change detecting system and associated methods
US7603208B2 (en) Geospatial image change detecting system with environmental enhancement and associated methods
US7630797B2 (en) Accuracy enhancing system for geospatial collection value of an image sensor aboard an airborne platform and associated methods
US8433457B2 (en) Environmental condition detecting system using geospatial images and associated methods
US8654149B2 (en) System and method for displaying enhanced vision and synthetic images
US20100231418A1 (en) Methods and systems for correlating data sources for vehicle displays
KR20170115544A (en) Environmental scene condition detection
EP2618322B1 (en) System and method for detecting and displaying airport approach lights
CN106104667B (en) The windshield and its control method of selection controllable areas with light transmission
CN112329725B (en) Method, device and equipment for identifying elements of road scene and storage medium
IL259680B (en) Aircraft systems and methods for adjusting a displayed sensor image field of view
Nagarani et al. Unmanned Aerial vehicle’s runway landing system with efficient target detection by using morphological fusion for military surveillance system
CN108024070A (en) The method and relevant display system of sensor image are covered on the composite image
CN114998771B (en) Display method and system for enhancing visual field of aircraft, aircraft and storage medium
EP3742118A1 (en) Systems and methods for managing a vision system display of an aircraft
EP3657233B1 (en) Avionic display system
CN116323393A (en) System for detecting foreign matter on runway and method thereof
Nussberger et al. Robust aerial object tracking from an airborne platform
US9979934B1 (en) Automated weather sensing system and method using cameras
Rzucidło et al. Simulation studies of a vision intruder detection system
EP3905223A1 (en) Aircraft display systems and methods for identifying target traffic
US20220049974A1 (en) Video display system and method
US10777013B1 (en) System and method for enhancing approach light display
US20240203142A1 (en) Scanning aid for camera-based searches

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230530

Address after: Building 4, No. 200 Tianfu Fifth Street, Chengdu High tech Zone, China (Sichuan) Pilot Free Trade Zone, Chengdu City, Sichuan Province, 610000, 6th floor, Zone A

Patentee after: Chengdu wofeitianyu Technology Co.,Ltd.

Patentee after: ZHEJIANG GEELY HOLDING GROUP Co.,Ltd.

Address before: 610000 No. 601 and 602, block a, building 5, No. 200, Tianfu Fifth Street, Chengdu hi tech Zone, Chengdu (Sichuan) pilot Free Trade Zone, Sichuan Province

Patentee before: Wofei Changkong Technology (Chengdu) Co.,Ltd.

Patentee before: ZHEJIANG GEELY HOLDING GROUP Co.,Ltd.
