CN111192229A - Airborne multi-mode video image enhancement display method and system - Google Patents

Airborne multi-mode video image enhancement display method and system

Info

Publication number
CN111192229A
Authority
CN
China
Prior art keywords
video frame
image
color video
color
mode
Prior art date
Legal status
Granted
Application number
CN202010003148.6A
Other languages
Chinese (zh)
Other versions
CN111192229B (en)
Inventor
程岳 (Cheng Yue)
李亚晖 (Li Yahui)
韩伟 (Han Wei)
文鹏程 (Wen Pengcheng)
刘作龙 (Liu Zuolong)
余冠锋 (Yu Guanfeng)
Current Assignee
Xian Aeronautics Computing Technique Research Institute of AVIC
Original Assignee
Xian Aeronautics Computing Technique Research Institute of AVIC
Priority date
Filing date
Publication date
Application filed by Xian Aeronautics Computing Technique Research Institute of AVIC
Priority to CN202010003148.6A
Publication of CN111192229A
Application granted
Publication of CN111192229B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/14: Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T3/147: Transformations for image registration using affine transformations
    • G06T5/90: Dynamic range modification of images or parts thereof
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G06T2207/10024: Color image
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the field of airborne graphics and image processing, and relates to an airborne multi-modal video image enhancement display method and system. In the method, the real-time multi-modal video is matched to a pre-recorded color video at sub-pixel accuracy, the two video sources are fused in the image luminance space with saliency-weighted fusion, and the color video's information is then migrated into the multi-modal video in the chrominance and saturation spaces, organically fusing the two types of data. The fused video retains the real-time character of the multi-modal video while gaining the rich color and texture information of the color video that matches what the pilot observes, enhancing the readability of the airborne multi-modal video picture. The invention can improve the pilot's perception of the spatial scene of airport runways and obstacles under low-visibility conditions, thereby reducing typical accidents during approach and landing, such as controlled flight into terrain and runway incursions, and improving aircraft safety.

Description

Airborne multi-mode video image enhancement display method and system
Technical Field
The invention belongs to the field of airborne graphics and image processing, and relates to an airborne multi-modal video image enhancement display method.
Background
A Combined Vision System (CVS) fuses multi-modal video with a three-dimensional digital map to provide the pilot with equivalent-vision video of the airport runway, hazardous terrain, and obstacles during the approach phase. It combines the large field of view, high resolution, and true color of a Synthetic Vision System (SVS) with the real-time multi-modal (long-wave infrared, short-wave infrared, millimeter-wave) video pictures of an Enhanced Vision System (EVS). This virtual-real fused real-time visual information markedly improves the pilot's situational awareness and enhances flight safety.
In a traditional combined vision system, inherent errors in navigation parameters, sensor calibration, and the like leave the synthetic vision image mismatched with the enhanced vision image to some degree, so the multi-modal video image is generally embedded into the synthetic virtual image in picture-in-picture fashion. As a result, the multi-modal portion of the system's output picture presents essentially monochrome real-time sensor information, lacking recognizable true color and detail.
To enrich the multi-modal video content displayed by the combined vision system, virtually generated wireframes, text, and color blocks are mainly used to mark runway and obstacle information. However, because of the same geometric errors, the marking information computed by the system deviates from the actual positions, which can mislead the pilot.
Disclosure of Invention
The invention provides an airborne multi-modal video image enhancement display method, aiming to improve the readability of the multi-modal video in a combined vision system.
The technical scheme of the invention is as follows:
the airborne multi-mode video picture enhancement display method comprises the following steps:
acquiring a clear color video and a real-time multi-mode video shot in a pre-collected and recorded approach process;
intercepting corresponding color video frames which are collected and recorded in advance according to the current airborne positioning information and flight attitude information, and translating and scaling the corresponding color video frames to the scale and position of the current multi-mode video frame for coarse registration;
taking a color video frame as a floating image and a multi-mode video frame as a fixed image, and performing geometric transformation on the floating image through parameter optimization based on an established error energy function related to affine transformation between the floating image and the fixed image to realize optimized registration (obtain a sub-pixel registration relation between the floating image and the fixed image);
performing HSV color space decomposition on the registered color video frame to obtain decomposed chrominance, saturation and brightness components; carrying out image fusion on the brightness component of the color video frame and the multi-mode video frame;
and merging the fused brightness component with the chrominance and saturation component of the color video frame, and outputting a fusion result.
Optionally, the airborne positioning information comprises longitude, latitude, and altitude; the attitude information comprises pitch, roll, and yaw data.
Optionally, the coarse registration locates and queries the position of the pre-recorded color video frames using the current aircraft's longitude, latitude, altitude, pitch, roll, and yaw data output by the integrated navigation system, and then scales and translates the color video frame using the camera intrinsic parameters so that it is coarsely registered with the current real-time multi-modal video frame.
Optionally, the optimized registration is performed by iterative optimization, with the normalized total gradient (NTG) as the error energy function (optimization objective) of the affine transformation, outputting an accurate geometric transformation between the color video frame and the real-time multi-modal video frame; the optimized affine transformation parameters are then applied to the color video frame so that it registers with the real-time multi-modal video frame at sub-pixel accuracy.
Optionally, the image fusion adopts a saliency-based method: a Laplacian operator is applied pixel by pixel to the luminance component image of the color video frame and to the multi-modal video frame to obtain initial saliency images; the initial saliency images are smoothed by guided filtering to output smooth saliency images; and the saliency values are used as weights to compute a pixel-wise weighted average of the color frame's luminance component and the multi-modal video frame, outputting the fused luminance component.
Correspondingly, the invention also provides an airborne multi-modal video picture enhancement display system, comprising:
a video acquisition module for acquiring a pre-recorded clear color video of the approach and a real-time multi-modal video;
an image registration module for selecting the corresponding pre-recorded color video frame according to the current airborne positioning information and flight attitude information, translating and scaling it to the scale and position of the current multi-modal video frame for coarse registration, and then, taking the color video frame as the floating image and the multi-modal video frame as the fixed image, geometrically transforming the floating image according to an error energy function established over the affine transformation between the floating and fixed images, to achieve optimized registration (a sub-pixel registration relation between the floating and fixed images);
a luminance component fusion module for performing HSV color space decomposition on the registered color video frame to obtain the chrominance, saturation, and luminance components, and fusing the luminance component of the color video frame with the multi-modal video frame;
and an image merging output module for merging the fused luminance component with the chrominance and saturation components of the color video frame and outputting the merged result.
Correspondingly, the invention also provides an airborne device comprising a processor and a program memory, wherein the program stored in the program memory, when loaded by the processor, executes the above airborne multi-modal video picture enhancement display method.
The invention has the following advantages:
the invention can improve the space scene perception capability of pilots to airport runways and obstacles under the condition of low visibility (including haze, rain, snow, sand and night vision conditions) by registering and fusing a real-time multi-mode video picture and a pre-recorded clear color video picture, enhancing the color information and texture information of the image and outputting the enhanced video picture which accords with the real color and texture, thereby reducing the typical accidents of controllable flight collision, runway invasion and the like in the approaching landing process of the airplane and improving the safety of the airplane.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention.
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings and examples.
To produce a multi-modal fused output picture with sub-pixel registration accuracy and enhanced color and texture, the airborne multi-modal video picture enhancement display method provided by this embodiment mainly comprises two parts, precise registration and weighted fusion, as shown in FIG. 1.
In the precise registration section:
firstly, the clear color video frames shot in the process of pre-recorded approaching are inquired by utilizing onboard GPS information (longitude, latitude, height) and flight attitude information (pitching, rolling and yawing). The flight approach from 200 feet to 100 feet, all essentially following a fixed flight path, locates the pre-recorded color video frames consistent with the current multimodal video frames via GPS and camera external parameters acquired by inertial navigation devices. Meanwhile, the color video frames can be translated and scaled to the scale and the position of the multi-modal image through the camera internal parameters corresponding to the color video frames and the camera internal parameters corresponding to the multi-modal video, so that a coarse registration effect is obtained. In this case, the positional deviation between the multimodal image and the color image is small, and affine transformation can be considered as an approximation.
Second, with the color video frame as the floating image $f$ and the multi-modal video frame as the fixed image $f_R$, an error energy function is established over the affine transformation between the floating and fixed images. In this embodiment, the normalized total gradient (NTG) is used as this error energy function, i.e.

$$\mathrm{NTG}(f, f_R) = \frac{\lVert \nabla (f - f_R) \rVert_1}{\lVert \nabla f \rVert_1 + \lVert \nabla f_R \rVert_1}$$

where $\nabla$ denotes the gradient operator and $\lVert \cdot \rVert_1$ the L1 norm.
The error energy function is minimized by an iterative optimization method; the affine transformation parameters that minimize it constitute the geometric transformation from the floating image to the fixed image. Finally, applying this geometric transformation to the floating image yields the sub-pixel registration relation between the floating and fixed images.
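As an illustration only, the NTG measure above can be computed as in the following minimal NumPy sketch (the function name, the finite-difference gradient, and the small epsilon guard against division by zero are our choices, not part of the patent):

```python
import numpy as np

def ntg(f, f_r):
    """Normalized total gradient between floating image f and fixed
    image f_r (same-shape float arrays). Smaller is better; the value
    reaches its minimum when the two images are geometrically aligned."""
    def l1_gradient(img):
        gy, gx = np.gradient(img)  # finite-difference gradients along rows/cols
        return np.abs(gx).sum() + np.abs(gy).sum()
    return l1_gradient(f - f_r) / (l1_gradient(f) + l1_gradient(f_r) + 1e-12)
```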
In the weighted fusion part:
firstly, HSV color space decomposition is carried out on the registered color video frame to obtain the decomposed chroma, saturation and brightness image components. Because the multi-modal video frame has only a luminance component, the luminance component of the color video frame is image-fused with the multi-modal video frame. In the process of image fusion, a fusion method based on the significance is adopted. And extracting an initial value of the image saliency by using a Laplace operator, and outputting a fusion weight after performing smooth optimization on the initial saliency map by using a guide filter. And further carrying out pixel-by-pixel weighted average on the color image brightness component and the multi-mode image by utilizing saliency map weighting and outputting a brightness component fusion result. And finally, combining the fused brightness component and the original color video frame chroma and saturation components, and outputting a fusion result.
The color video is a clear video shot in fine weather, with rich, highly recognizable color and texture. By migrating this color and texture, the multi-modal video is markedly enhanced and the output picture gains readability.
As shown in FIG. 1, the specific implementation steps of this embodiment are as follows:
1) Acquire the pre-recorded clear color video of the approach and the real-time multi-modal video.
2) Select the corresponding pre-recorded color video frame according to the current airborne positioning information and flight attitude information, and translate and scale it to the scale and position of the current multi-modal video frame for coarse registration, as sketched below. Specifically: read the aircraft's longitude, latitude, altitude, pitch, roll, and yaw from the onboard integrated navigation equipment. First query the navigation data recorded with the color video using the longitude, latitude, and altitude, and select the time point that matches them; then, taking that time point as the center and extending a window forwards and backwards, match the recorded attitude data using the pitch, roll, and yaw to locate the exact time point. Once the matching time point is located, capture the corresponding color video frame as the frame to be fused. Next, read the intrinsic parameters (focal length, principal point, and distortion) of both the color camera and the multi-modal camera, and scale and translate the color frame so that its focal length and principal point coincide with those of the multi-modal video. The color video frame is now coarsely registered with the multi-modal video frame.
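A minimal sketch of this lookup and coarse alignment, assuming a per-frame navigation log recorded alongside the color video and pinhole intrinsics with distortion already corrected (the function names, the two-stage nearest-neighbour search, and the window size are our illustrative choices, not the patent's):

```python
import numpy as np
import cv2

def locate_recorded_frame(nav_log, lon, lat, alt, pitch, roll, yaw, window=15):
    """nav_log: (N, 6) array of (lon, lat, alt, pitch, roll, yaw) per
    recorded color frame. Find the closest position match first, then
    refine by attitude within +/- window frames around it. In practice
    the position components should be scaled to comparable units."""
    pos_err = np.linalg.norm(nav_log[:, :3] - [lon, lat, alt], axis=1)
    i = int(np.argmin(pos_err))                      # position match
    lo, hi = max(0, i - window), min(len(nav_log), i + window + 1)
    att_err = np.linalg.norm(nav_log[lo:hi, 3:] - [pitch, roll, yaw], axis=1)
    return lo + int(np.argmin(att_err))              # refine by attitude

def coarse_register(color_frame, f_color, c_color, f_mm, c_mm, mm_size):
    """Scale and translate the color frame so its focal length and
    principal point coincide with those of the multi-modal camera.
    f_*: focal lengths in pixels; c_*: principal points (cx, cy);
    mm_size: (width, height) of the multi-modal frame."""
    s = f_mm / f_color                               # scale matching focal lengths
    tx = c_mm[0] - s * c_color[0]                    # translation aligning the
    ty = c_mm[1] - s * c_color[1]                    # principal points
    m = np.float32([[s, 0, tx], [0, s, ty]])
    return cv2.warpAffine(color_frame, m, mm_size)
```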
3) Take the color video frame as the floating image and the multi-modal video frame as the fixed image, and geometrically transform the floating image through parameter optimization of the established error energy function over the affine transformation between them, achieving optimized registration. Specifically: since the coarsely registered color and multi-modal frames approximately satisfy an affine relation, the affine transformation parameters between them are taken as the optimization variables and the error energy constructed from the normalized total gradient as the objective, and the parameters are refined iteratively to obtain accurate affine parameters. Because the two frames are already initially registered, the optimization converges quickly and outputs registration parameters of sub-pixel accuracy. Applying the resulting geometric transformation to the color video frame yields the registered color frame. One possible form of this refinement loop is sketched after this step.
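The patent does not name a specific optimizer, so the sketch below uses SciPy's derivative-free Powell method purely as a stand-in; `ntg()` is the function from the earlier sketch, and both inputs are assumed to be float32 grayscale images of the multi-modal frame's size:

```python
import numpy as np
import cv2
from scipy.optimize import minimize

def refine_affine(color_gray, mm_gray):
    """Refine the six affine parameters by minimizing the NTG between
    the warped (floating) color frame and the (fixed) multi-modal frame.
    Starts from the identity, which coarse registration justifies; a
    continuous optimizer over real-valued parameters gives sub-pixel
    alignment."""
    h, w = mm_gray.shape

    def to_matrix(p):
        return np.float32([[1 + p[0], p[1], p[2]],
                           [p[3], 1 + p[4], p[5]]])

    def cost(p):
        warped = cv2.warpAffine(color_gray, to_matrix(p), (w, h))
        return ntg(warped, mm_gray)

    res = minimize(cost, np.zeros(6), method="Powell")
    return to_matrix(res.x)
```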
4) Perform HSV color space decomposition on the registered color video frame to obtain the chrominance, saturation, and luminance components, and fuse the luminance component with the multi-modal video frame. Specifically: first decompose the geometrically transformed color frame into chrominance, saturation, and luminance, and extract the single-channel color luminance image and the single-channel multi-modal image for fusion. Traverse both images with a Laplacian operator to obtain rough saliency images. Guided-filter the color frame's saliency image using the color luminance image as the guide, outputting the smoothed color saliency image; likewise, guided-filter the multi-modal saliency image using the multi-modal frame as the guide, outputting the smoothed multi-modal saliency image. Then normalize the two smoothed saliency images into per-pixel weights, so that the pixel with the higher saliency value receives the higher weight. Finally, compute the pixel-wise weighted average of the color luminance image and the multi-modal image with these weights to obtain the fused luminance component, as sketched below.
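A sketch following these steps (the guided filter comes from the opencv-contrib module `cv2.ximgproc`; the radius and regularization eps are our illustrative parameters, not values from the patent):

```python
import numpy as np
import cv2

def fuse_luminance(v_color, v_mm, radius=8, eps=0.01):
    """Saliency-weighted fusion of the color frame's luminance (V)
    channel with the multi-modal frame; inputs are float32 in [0, 1]."""
    # 1) rough saliency: magnitude of the Laplacian response
    s_color = np.abs(cv2.Laplacian(v_color, cv2.CV_32F))
    s_mm = np.abs(cv2.Laplacian(v_mm, cv2.CV_32F))
    # 2) smooth each saliency map, guided by its own source image
    s_color = cv2.ximgproc.guidedFilter(v_color, s_color, radius, eps)
    s_mm = cv2.ximgproc.guidedFilter(v_mm, s_mm, radius, eps)
    # 3) normalized per-pixel weights: higher saliency, higher weight
    w = s_color / (s_color + s_mm + 1e-12)
    # 4) pixel-wise weighted average of the two luminance images
    return w * v_color + (1.0 - w) * v_mm
```

Guiding each saliency map by its own source image keeps the fusion weights aligned with that image's edges, which helps the weighted average preserve sharp structure rather than producing halos.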
5) Merge the fused luminance component with the chrominance and saturation components of the color video frame, and output the fused video frame; a sketch of this final merge follows.
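A minimal end-to-end sketch of the decompose-fuse-merge pipeline of steps 4) and 5), reusing `fuse_luminance` from the sketch above and assuming OpenCV's float32 HSV convention (H in [0, 360), S and V in [0, 1]):

```python
import numpy as np
import cv2

def enhance_frame(color_bgr_u8, mm_gray_u8):
    """Fuse a registered 8-bit color frame with an 8-bit multi-modal
    frame: decompose to HSV, fuse V with the multi-modal image, then
    merge the fused V back with the original H and S for display."""
    color = color_bgr_u8.astype(np.float32) / 255.0
    mm = mm_gray_u8.astype(np.float32) / 255.0
    h, s, v = cv2.split(cv2.cvtColor(color, cv2.COLOR_BGR2HSV))
    v_fused = np.clip(fuse_luminance(v, mm), 0.0, 1.0)
    out = cv2.cvtColor(cv2.merge([h, s, v_fused]), cv2.COLOR_HSV2BGR)
    return (out * 255.0).astype(np.uint8)
```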

Claims (10)

1. An airborne multi-modal video picture enhancement display method, characterized by comprising the following steps:
acquiring a pre-recorded clear color video of the approach and a real-time multi-modal video;
selecting, according to the current airborne positioning information and flight attitude information, the corresponding pre-recorded color video frame, and translating and scaling it to the scale and position of the current multi-modal video frame for coarse registration;
taking the color video frame as the floating image and the multi-modal video frame as the fixed image, and geometrically transforming the floating image through parameter optimization of an error energy function established over the affine transformation between the floating and fixed images, to achieve optimized registration;
performing HSV color space decomposition on the registered color video frame to obtain the chrominance, saturation, and luminance components, and fusing the luminance component of the color video frame with the multi-modal video frame;
and merging the fused luminance component with the chrominance and saturation components of the color video frame, and outputting the fusion result.
2. The airborne multi-modal video picture enhancement display method according to claim 1, wherein the airborne positioning information comprises longitude, latitude, and altitude, and the attitude information comprises pitch, roll, and yaw data.
3. The airborne multi-modal video picture enhancement display method according to claim 2, wherein the coarse registration locates and queries the position of the pre-recorded color video frames using the current aircraft's longitude, latitude, altitude, pitch, roll, and yaw data output by the integrated navigation system, and the color video frame is scaled and translated using the camera intrinsic parameters so that it is coarsely registered with the current real-time multi-modal video frame.
4. The airborne multi-modal video picture enhancement display method according to claim 1, wherein the optimized registration is performed by iterative optimization with the normalized total gradient (NTG) as the error energy function of the affine transformation, outputting an accurate geometric transformation between the color video frame and the real-time multi-modal video frame, and the optimized affine transformation parameters are applied to the color video frame so that it registers with the real-time multi-modal video frame at sub-pixel accuracy.
5. The airborne multi-modal video picture enhancement display method according to claim 1, wherein the image fusion adopts a saliency-based method: a Laplacian operator is applied pixel by pixel to the luminance component image of the color video frame and to the multi-modal video frame to obtain initial saliency images; the initial saliency images are smoothed by guided filtering to output smooth saliency images; and the saliency values are used as weights to compute a pixel-wise weighted average of the color video frame's luminance component and the multi-modal video frame, outputting the fused luminance component.
6. An airborne multi-modal video picture enhancement display system, characterized by comprising:
a video acquisition module for acquiring a pre-recorded clear color video of the approach and a real-time multi-modal video;
an image registration module for selecting the corresponding pre-recorded color video frame according to the current airborne positioning information and flight attitude information, translating and scaling it to the scale and position of the current multi-modal video frame for coarse registration, and then, taking the color video frame as the floating image and the multi-modal video frame as the fixed image, geometrically transforming the floating image according to an error energy function established over the affine transformation between the floating and fixed images, to achieve optimized registration;
a luminance component fusion module for performing HSV color space decomposition on the registered color video frame to obtain the chrominance, saturation, and luminance components, and fusing the luminance component of the color video frame with the multi-modal video frame;
and an image merging output module for merging the fused luminance component with the chrominance and saturation components of the color video frame and outputting the merged result.
7. The airborne multi-modal video picture enhancement display system according to claim 6, wherein the coarse registration locates and queries the position of the pre-recorded color video frames using the current aircraft's longitude, latitude, altitude, pitch, roll, and yaw data output by the integrated navigation system, and the color video frame is scaled and translated using the camera intrinsic parameters so that it is coarsely registered with the current real-time multi-modal video frame.
8. The airborne multi-modal video picture enhancement display system according to claim 6, wherein the optimized registration is performed by iterative optimization with the normalized total gradient (NTG) as the error energy function of the affine transformation, outputting an accurate geometric transformation between the color video frame and the real-time multi-modal video frame, and the optimized affine transformation parameters are applied to the color video frame so that it registers with the real-time multi-modal video frame at sub-pixel accuracy.
9. The airborne multi-modal video picture enhancement display system according to claim 6, wherein the image fusion adopts a saliency-based method: a Laplacian operator is applied pixel by pixel to the luminance component image of the color video frame and to the multi-modal video frame to obtain initial saliency images; the initial saliency images are smoothed by guided filtering to output smooth saliency images; and the saliency values are used as weights to compute a pixel-wise weighted average of the color video frame's luminance component and the multi-modal video frame, outputting the fused luminance component.
10. An airborne device comprising a processor and a program memory, wherein, when the program stored in the program memory is loaded by the processor, the airborne multi-modal video picture enhancement display method of claim 1 is performed.
CN202010003148.6A (priority and filing date 2020-01-02) Airborne multi-mode video picture enhancement display method and system. Status: Active. Granted as CN111192229B.

Priority Applications (1)

Application Number: CN202010003148.6A · Priority Date: 2020-01-02 · Filing Date: 2020-01-02 · Title: Airborne multi-mode video picture enhancement display method and system (granted as CN111192229B)

Applications Claiming Priority (1)

Application Number: CN202010003148.6A · Priority Date: 2020-01-02 · Filing Date: 2020-01-02 · Title: Airborne multi-mode video picture enhancement display method and system (granted as CN111192229B)

Publications (2)

Publication Number: CN111192229A, published 2020-05-22 (application publication)
Publication Number: CN111192229B, published 2023-10-13 (granted patent)

Family

ID=70709781

Family Applications (1)

Application Number: CN202010003148.6A · Priority Date: 2020-01-02 · Filing Date: 2020-01-02 · Status: Active · Title: Airborne multi-mode video picture enhancement display method and system (granted as CN111192229B)

Country Status (1)

Country Link
CN (1) CN111192229B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102231206A (en) * 2011-07-14 2011-11-02 浙江理工大学 Colorized night vision image brightness enhancement method applicable to automotive assisted driving system
US9726486B1 (en) * 2011-07-29 2017-08-08 Rockwell Collins, Inc. System and method for merging enhanced vision data with a synthetic vision data
CN104469155A (en) * 2014-12-04 2015-03-25 中国航空工业集团公司第六三一研究所 On-board figure and image virtual-real superposition method
WO2018076732A1 (en) * 2016-10-31 2018-05-03 广州飒特红外股份有限公司 Method and apparatus for merging infrared image and visible light image
CN109509164A (en) * 2018-09-28 2019-03-22 洛阳师范学院 A kind of Multisensor Image Fusion Scheme and system based on GDGF
CN109547710A * 2018-10-10 2019-03-29 Enhanced vision and synthetic vision fusion implementation method
CN109492714A (en) * 2018-12-29 2019-03-19 同方威视技术股份有限公司 Image processing apparatus and its method
CN110443776A * 2019-08-07 2019-11-12 Data registration and fusion method based on an unmanned aerial vehicle pod

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SHU-JIE CHEN: "Normalized Total Gradient: A New Measure for Multispectral Image Registration", pp. 1297-1310 *
WEI GAN: "Infrared and visible image fusion with the use of multi-scale edge-preserving decomposition and guided image filter", Infrared Physics & Technology, pp. 37-51 *
YUE CHENG: "A prototype of Enhanced Synthetic Vision System using short-wave infrared", 2018 IEEE/AIAA 37th Digital Avionics Systems Conference (DASC), pp. 1071-1077 *
QI XIAOQIAN (齐小谦): "A helicopter synthetic vision aided navigation technique", pp. 499-503 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111145362A (en) * 2020-01-02 2020-05-12 中国航空工业集团公司西安航空计算技术研究所 Virtual-real fusion display method and system for airborne comprehensive vision system
CN112419211A (en) * 2020-09-29 2021-02-26 西安应用光学研究所 Night vision system image enhancement method based on synthetic vision
CN112419211B (en) * 2020-09-29 2024-02-02 西安应用光学研究所 Night vision system image enhancement method based on synthetic vision

Also Published As

Publication Number: CN111192229B · Publication Date: 2023-10-13

Similar Documents

Publication Publication Date Title
CN115439424B (en) Intelligent detection method for aerial video images of unmanned aerial vehicle
CN107194989B (en) Traffic accident scene three-dimensional reconstruction system and method based on unmanned aerial vehicle aircraft aerial photography
CN111145362B (en) Virtual-real fusion display method and system for airborne comprehensive vision system
Ribeiro-Gomes et al. Approximate georeferencing and automatic blurred image detection to reduce the costs of UAV use in environmental and agricultural applications
KR101261409B1 (en) System for recognizing road markings of image
CN109341686B (en) Aircraft landing pose estimation method based on visual-inertial tight coupling
CN112364707B (en) System and method for performing beyond-the-horizon perception on complex road conditions by intelligent vehicle
US11113570B2 (en) Systems and methods for automatically generating training image sets for an environment
CN111709994B (en) Autonomous unmanned aerial vehicle visual detection and guidance system and method
CN114413881A (en) Method and device for constructing high-precision vector map and storage medium
CN111192229B (en) Airborne multi-mode video picture enhancement display method and system
CN113066050B (en) Method for resolving course attitude of airdrop cargo bed based on vision
US9726486B1 (en) System and method for merging enhanced vision data with a synthetic vision data
CN207068060U Traffic accident scene three-dimensional reconstruction system based on UAV aerial photography
CN116152342A (en) Guideboard registration positioning method based on gradient
Liu et al. Sensor fusion method for horizon detection from an aircraft in low visibility conditions
WO2021026855A1 (en) Machine vision-based image processing method and device
CN110749323A (en) Method and device for determining operation route
CN112560922A (en) Vision-based foggy-day airplane autonomous landing method and system
CN113253619B (en) Ship data information processing method and device
CN106097270A Road detection platform for desert areas carried on an unmanned aerial vehicle
CN111833384B (en) Method and device for rapidly registering visible light and infrared images
Cheng et al. Infrared Image Enhancement by Multi-Modal Sensor Fusion in Enhanced Synthetic Vision System
JP2003085535A (en) Position recognition method for road guide sign
CN116718165B (en) Combined imaging system based on unmanned aerial vehicle platform and image enhancement fusion method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant