CN114782539A - Visual positioning system and method based on cloud layer observation in cloudy weather - Google Patents
- Publication number
- CN114782539A (application CN202210701077.6A)
- Authority
- CN
- China
- Prior art keywords
- cloud layer
- image
- ground
- moment
- visual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/174—Segmentation; Edge detection involving the use of two or more images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Navigation (AREA)
Abstract
The invention relates to the field of visual positioning and navigation, and in particular to a visual positioning system and method based on cloud layer observation in cloudy weather. The system comprises a ground image acquisition module, a ground image processing module, a ground image matching module, a ground cloud layer movement speed calculation module, a vision sensor, and a visual navigation module. Compared with the prior art, the invention has the following beneficial effects: cloud layer speed information measured by the ground visual observation equipment is transmitted to the mobile platform through a data link; differential processing combines the plane movement of the platform relative to the cloud layer with the cloud layer speed information, eliminating the influence of cloud layer movement; the real east and north displacements of the mobile platform are obtained; and the actual position of the mobile platform in the navigation coordinate system is calculated.
Description
Technical Field
The invention relates to the field of visual positioning and navigation, and in particular to a visual positioning system and method based on cloud layer observation in cloudy weather.
Background
Navigation and positioning of ground and low-altitude mobile platforms currently depend on satellite navigation. Satellite navigation relies on radio signals, which are easily interfered with in harsh environments, so the navigation result of the mobile platform can become unavailable. A technical solution is therefore needed that can replace satellite navigation and provide a stable navigation result in such special environments.
Visual navigation does not depend on radio signals: relative pose motion can be calculated by matching images acquired at different moments, so it can compensate for this shortcoming of satellite navigation. In a city, the features surrounding the mobile platform are distinctive, and the camera position or platform motion can be recovered by observing fixed surrounding feature targets. In scenes where ground features are indistinct, such as lake surfaces or sandy terrain, however, feature-based visual navigation can hardly acquire stable ground reference points for matching and feature calculation, so accurate navigation cannot be achieved.
Because the cloud layer itself moves, the position of the mobile platform cannot be calculated by treating the cloud layer as a fixed reference point. Other means are needed to describe the cloud layer and quantify its movement trend, so that the relative position between the cloud layer and the mobile platform can be converted; in this way the feature information of the cloud layer is used effectively, enabling visual navigation in this specific environment.
Disclosure of Invention
To meet the navigation requirements of scenes with indistinct ground features, such as lake surfaces and sandy terrain, and to improve navigation performance, the invention provides a visual positioning system and method based on cloud layer observation in cloudy weather.
To achieve this purpose, the technical scheme adopted by the invention is as follows. A visual positioning system based on cloud layer observation in cloudy weather comprises:
the ground image acquisition module is used for acquiring cloud layer images from the ground in real time;
the ground image processing module is used for carrying out image difference and binarization processing on the cloud layer image acquired at the current sampling moment and the cloud layer image acquired at the last sampling moment;
the ground image matching module is used for matching or comparing the binarized image at the last sampling moment with the binarized image at the current sampling moment to obtain the pixel displacement of the central point;
the ground cloud layer movement speed calculation module is used for calculating the cloud layer movement speed according to the central point pixel displacement obtained by the ground image matching module;
the vision sensor is used for acquiring cloud layer images from the mobile platform;
the visual navigation module comprises an initial visual navigation module and a visual navigation correction module;
the initial visual navigation module is used for calculating the plane displacement of the visual sensor or the mobile platform relative to the cloud layer according to the two images at the adjacent moments obtained by the visual sensor;
and the visual navigation correction module is used for receiving the cloud layer movement speed information obtained by the ground cloud layer movement speed calculation module and correcting the plane displacement of the vision sensor or mobile platform relative to the cloud layer measured by the initial visual navigation module, thereby obtaining the real displacement of the mobile platform relative to the ground and the actual position of the mobile platform in the navigation coordinate system.
In the ground image processing module, the image sampling and differencing period $T$ is calculated from the ground wind speed $v_w$ as $T = k / v_w$, where $k$ is an empirical constant (taken as 10);
The principle of binarization is as follows:

$g'(x,y) = 255$ if $g(x,y) \ge T_g$, otherwise $g'(x,y) = 0$,

where $g(x,y)$ is the gray level of a pixel in the difference image, $g'(x,y)$ is the processed gray level, and $T_g$ is the binarization threshold.
The ground image matching module specifically comprises:
In the binarized image at the last sampling moment, a feature pattern is selected: a region whose gray value is 255 and whose boundary can be closed is chosen, its outer boundary is taken, and the central point $P_1$ of the region boundary and the number of pixels $S_1$ inside the pattern are calculated; $P_1$ has position coordinates $(x_1, y_1)$;

the central point of the region boundary is selected as the geometric center: the maximum and minimum coordinate values of the region boundary on the horizontal and vertical axes are obtained, and their midpoints are taken as the position coordinates of the central point, i.e. $x_1 = (x_{\max} + x_{\min})/2$ and $y_1 = (y_{\max} + y_{\min})/2$;

the two binarized images are compared, the closed-loop region corresponding to the closable region in the former image is found in the latter image, and the outer boundary of that closed-loop region in the latter image is taken as an independent pattern; the center point of this outer boundary is denoted $P_2$, the number of pixels inside it is denoted $S_2$, and $P_2$ has position coordinates $(x_2, y_2)$;

the coordinate positions of the center points $P_1$ and $P_2$ are compared; the $x$ and $y$ axes of the image coordinate system correspond to the north direction N and the east direction E respectively, and the pixel displacements of the two central points in the $x$ and $y$ directions are obtained as $\Delta x = x_2 - x_1$ and $\Delta y = y_2 - y_1$;

the mean values of the central-point pixel displacements over all matched pattern pairs in the two binarized images are denoted $\overline{\Delta x}$ and $\overline{\Delta y}$.
The ground cloud layer movement speed calculation module specifically comprises:
The $x$ and $y$ axes of the image coordinate system correspond to the north direction N and the east direction E respectively, and the movement speed of the cloud layer at the current sampling moment is calculated as:

$v_N = \dfrac{\overline{\Delta x}\, d\, H}{f\, T}, \qquad v_E = \dfrac{\overline{\Delta y}\, d\, H}{f\, T}, \qquad v = \sqrt{v_N^2 + v_E^2}$

where $v_N$ is the north moving speed of the cloud layer at the current sampling moment, $v_E$ is its east moving speed, and $v$ is its motion speed at the current sampling moment; $H$ is the height of the cloud layer, $f$ is the focal length of the camera, $d$ is the size of a single pixel on the camera imaging plane, and $T$ is the image sampling and differencing period; $\overline{\Delta x}$ and $\overline{\Delta y}$ are the mean $x$- and $y$-axis pixel displacements of the central points of all matched pattern pairs in the two binarized images.
The comparison principle for the central points and pixel counts is:

$|S_2 - S_1| \le \varepsilon_S, \qquad \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} \le \varepsilon_d$

where $\varepsilon_S$ is a threshold on the change in the number of pixels and $\varepsilon_d$ is a distance threshold for the center points; $\varepsilon_d$ is set from the expected pixel displacement $\dfrac{v'\, T\, f}{d\, H}$, where $v'$ is the motion speed of the cloud layer at the last sampling moment, $H$ is the height of the cloud layer, $f$ is the focal length of the camera, $d$ is the size of a single pixel on the camera imaging plane, and $T$ is the image sampling and differencing period.
The initial visual navigation module specifically comprises:
defining a navigation coordinate system $n$; the carrier camera coordinate system at the previous moment is $c_1$ and the carrier camera coordinate system at the later moment is $c_2$;

selecting two images at two adjacent moments, and extracting and matching feature points in the two images;

$p_1 = (u_1, v_1)$ and $p_2 = (u_2, v_2)$ are the pixel coordinates of a matched feature point in the images at the previous and later moments respectively; $u_1$ and $u_2$ are the horizontal-axis values of the matched feature point in the two images, and $v_1$ and $v_2$ are the vertical-axis values;

the cloud layer feature points are assumed to lie on the same plane, so the matched feature points satisfy the homography matrix constraint $p_2 \sim \mathbf{H} p_1$;

the homography matrix is described as:

$\mathbf{H} = \mathbf{K}\left(\mathbf{R} + \dfrac{\mathbf{t}\,\mathbf{n}^{T}}{h}\right)\mathbf{K}^{-1}$

where $\mathbf{K}$ is the camera intrinsic matrix, $\mathbf{n}$ is the unit normal vector of the cloud plane in the carrier camera coordinate system at the previous moment, and $h$ is the distance from the cloud plane to the camera; $\mathbf{R}$ and $\mathbf{t}$ are the rotation matrix and translation vector between the carrier camera coordinate systems at the two adjacent moments, which are to be solved;

an equation system is formed from multiple groups of matched feature points to obtain $\mathbf{H}$, from which $\mathbf{R}$ and $\mathbf{t}$ are recovered; given the rotation relation $\mathbf{R}_{c_1}^{n}$ of the carrier camera coordinate system relative to the navigation coordinate system at the previous moment, the rotation relation $\mathbf{R}_{c_2}^{n}$ at the later moment can be obtained, and the real-time displacement of the camera or mobile platform in the navigation coordinate system follows as $(\Delta N_c, \Delta E_c, \Delta U_c)$;

where $\Delta N_c$ is the north displacement of the camera relative to the cloud layer, $\Delta E_c$ is its east displacement, and $\Delta U_c$ is its displacement in the sky direction relative to the cloud layer.
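The planar homography constraint can be sanity-checked numerically. This NumPy sketch builds $\mathbf{H} = \mathbf{K}(\mathbf{R} + \mathbf{t}\mathbf{n}^{T}/h)\mathbf{K}^{-1}$ from an assumed camera motion and intrinsic matrix (all numeric values are illustrative, not from the patent) and verifies that it maps a cloud-plane point from the first view to the second:

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])        # assumed camera intrinsic matrix
R = np.eye(3)                          # pure translation between the two frames
t = np.array([0.5, 0.2, 0.0])          # inter-frame translation (illustrative)
n = np.array([0.0, 0.0, 1.0])          # cloud-plane unit normal in the first frame
h = 1000.0                             # distance from camera to cloud plane (m)

# Homography induced by the plane: H = K (R + t n^T / h) K^-1
H = K @ (R + np.outer(t, n) / h) @ np.linalg.inv(K)

# A 3-D point lying on the cloud plane in the first camera frame (n . X1 = h).
X1 = np.array([10.0, -5.0, 1000.0])
X2 = R @ X1 + t                        # same point in the second camera frame

p1 = K @ X1; p1 /= p1[2]               # pixel coordinates in the first image
p2 = K @ X2; p2 /= p2[2]               # pixel coordinates in the second image

p2_pred = H @ p1; p2_pred /= p2_pred[2]
# p2_pred agrees with p2 up to numerical precision
```

The check works because $\mathbf{n}^{T}X_1 = h$ for any point on the plane, so $(\mathbf{R} + \mathbf{t}\mathbf{n}^{T}/h)X_1 = \mathbf{R}X_1 + \mathbf{t} = X_2$.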
The visual navigation correction module specifically comprises:
the visual navigation correction module performs differential processing by combining the plane displacement of the mobile platform relative to the cloud layer obtained by the initial visual navigation module with the cloud layer movement speed information obtained by the ground cloud layer movement speed calculation module, obtaining the real north displacement $\Delta N$ and east displacement $\Delta E$ of the mobile platform:

$\Delta N = \Delta N_c + v_N\, \Delta t, \qquad \Delta E = \Delta E_c + v_E\, \Delta t$

The actual position of the mobile platform in the navigation coordinate system is then obtained:

$N_k = N_{k-1} + \Delta N, \qquad E_k = E_{k-1} + \Delta E$

where $(N_k, E_k)$ is the actual position of the mobile platform in the navigation coordinate system at the later moment, $(N_{k-1}, E_{k-1})$ is its actual position at the previous moment, and $\Delta t$ is the time interval between the two adjacent moments.
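A minimal sketch of the differential correction step, assuming the additive sign convention that platform displacement relative to the ground equals its displacement relative to the cloud plus the cloud's drift over the interval (the function name and values are illustrative):

```python
def correct_position(pos_prev, d_n_cloud, d_e_cloud, v_n, v_e, dt):
    """Update the platform position in the navigation frame.

    pos_prev            -- (N, E) actual position at the previous moment
    d_n_cloud/d_e_cloud -- platform displacement relative to the cloud (visual navigation)
    v_n, v_e            -- cloud speed from the ground observation station (m/s)
    dt                  -- interval between the two adjacent moments (s)
    """
    d_n = d_n_cloud + v_n * dt   # real north displacement relative to the ground
    d_e = d_e_cloud + v_e * dt   # real east displacement relative to the ground
    return (pos_prev[0] + d_n, pos_prev[1] + d_e)

# e.g. 2 m north / 1 m east relative to the cloud, cloud drifting 1.5 m/s N and
# 2 m/s E, over a 1 s interval
pos = correct_position((100.0, 200.0), 2.0, 1.0, 1.5, 2.0, 1.0)
# pos == (103.5, 203.0)
```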
The ground visual observation equipment consists of a ground image acquisition module, a ground image processing module, a ground image matching module and a ground cloud layer movement speed calculation module; the visual sensor and the visual navigation module form carrier visual navigation equipment, and the ground visual observation equipment and the carrier visual navigation equipment interact cloud layer movement speed information through a data link.
The invention also provides a visual positioning method based on cloud layer observation in cloudy weather, which adopts the visual positioning system based on cloud layer observation in cloudy weather to perform visual positioning on the mobile platform.
Compared with the prior art, the invention has the following beneficial effects: cloud layer speed information measured by the ground visual observation equipment is transmitted to the mobile platform through a communication link; the visual navigation correction module of the mobile platform performs differential processing, combining the plane movement of the platform relative to the cloud layer output by its initial visual navigation module with the cloud layer speed information sent by the ground visual observation equipment; the influence of cloud layer movement is eliminated, the real east and north displacements of the mobile platform relative to the ground are obtained, and the actual position of the mobile platform (such as an aircraft, an unmanned aerial vehicle, or an automobile) in the navigation coordinate system is calculated.
Drawings
FIG. 1 is a schematic diagram of a visual positioning system based on cloud layer observation in cloudy weather;
FIG. 2 is a schematic flow chart of a visual positioning method based on cloud layer observation in cloudy weather;
FIG. 3 is a schematic diagram of an image coordinate system;
FIG. 4 is a process of observing cloud motion;
fig. 5 is a schematic diagram of two binary images.
Detailed Description
The present invention will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the invention are shown.
The carrier, i.e. the mobile platform, herein may be a mobile device such as a drone, an automobile, an aircraft, etc.
The visual positioning system based on cloud layer observation in cloudy weather consists of ground visual observation equipment, carrier visual navigation equipment and a data link. The ground visual observation equipment is responsible for observing the motion of the cloud layer to obtain the motion data of the cloud layer. The carrier visual navigation equipment is responsible for observing the cloud layer, receiving cloud layer motion data sent by the ground visual observation equipment and calculating motion information of the carrier relative to the cloud layer. The ground visual observation device and the carrier visual navigation device are interconnected through a data link, and the motion information of the cloud layer is interacted, as shown in fig. 1.
A ground visual observation device, comprising: a ground image acquisition module (such as a camera positioned at the ground end) for acquiring cloud layer images from the ground in real time; a ground image processing module for receiving the cloud layer image acquired by the ground image acquisition module and performing image differencing and binarization on the cloud layer images acquired at the current and previous sampling moments; a ground image matching module for matching or comparing the binarized image of the last sampling moment with that of the current sampling moment to obtain the central-point pixel displacement of the matched patterns; and a ground cloud layer movement speed calculation module for calculating the cloud layer movement speed from the central-point pixel displacement obtained by the ground image matching module.
The ground visual observation equipment is positioned in a ground observation station (ground end for short), and the ground image processing module, the ground image matching module and the ground cloud layer movement speed calculation module are all integrated in a computer at the ground end. And the computer at the ground end is responsible for processing the observed cloud layer static information, calculating to obtain cloud layer motion data, and sending the cloud layer motion data to the carrier visual navigation equipment through a data link.
A carrier visual navigation device, comprising: a vision sensor (i.e., a skyward camera located at the carrier end) for capturing cloud images from the mobile platform; the visual navigation module comprises an initial visual navigation module and a visual navigation correction module; the initial visual navigation module is used for calculating the movement distance of the visual sensor or the mobile platform relative to the cloud layer according to the two images of the adjacent sampling moments obtained by the visual sensor; and the visual navigation correction module is used for receiving the cloud layer movement speed information obtained by the ground cloud layer movement speed calculation module, correcting the movement distance of the visual sensor or the mobile platform relative to the cloud layer, which is measured by the initial visual navigation module, and obtaining the actual movement distance of the mobile platform relative to the ground (a navigation coordinate system) and/or the actual position of the mobile platform.
The visual navigation module (including the initial visual navigation module and the visual navigation correction module) is integrated in a processing module (which may be a processing computer or a processor) carried by the carrier.
As the cloud layer moves, a ground observation station is designed to observe it, and the cloud layer movement information is provided to the visual navigation module of the mobile platform for differential processing, so as to obtain the accurate position of the mobile platform.
Fig. 2 shows a schematic flow chart of the visual positioning method based on cloud layer observation in cloudy weather. The specific steps are: the visual equipment of the ground observation station (i.e. the ground visual observation equipment) observes the cloud layer to obtain its movement speed; the carrier platform (i.e. the mobile platform) measures its movement relative to the cloud layer by visual navigation; the cloud layer observation information obtained by the ground station is transmitted to the carrier platform, and the visual navigation correction module corrects the carrier's visual navigation output (i.e. the calculated plane displacement of the carrier); finally, the actual position of the carrier is updated.
The motion trend of the cloud layer is approximately described as $(v_N, v_E)$, where $v_N$ represents the north moving speed of the cloud layer and $v_E$ its east moving speed. If the cloud layer moves north, $v_N$ is positive; if it moves south, $v_N$ is negative; if it moves east, $v_E$ is positive; if it moves west, $v_E$ is negative.
The image coordinate system is shown in fig. 3.
After the camera at the ground end acquires the image, the data are stored in the computer as a two-dimensional array. The image coordinate system takes the upper-left corner of the image as the origin of a rectangular $x$-$y$ pixel image-plane coordinate system, and each pixel point is located by coordinates $(x, y)$. In this method, the ground-end camera is installed so that the $x$ and $y$ axes correspond to the north (N) and east (E) directions respectively.
The observation process of cloud layer movement is shown in fig. 4.
In ground visual observation, one ground observation station covers an area of approximately 5 km by 5 km; multiple observation stations can be deployed in a grid.
At the current sampling moment, each time a cloud layer image is collected, the ground image processing module performs image differencing with the cloud layer image collected at the previous sampling moment (i.e. the two images are subtracted pixel by pixel: the pixel value of the image collected at the previous sampling moment is subtracted from that of the image collected at the current sampling moment, yielding a difference image) followed by binarization, producing the binarized image of the current sampling moment. The ground image matching module compares the binarized image of the current sampling moment with that of the previous sampling moment, and the ground cloud layer movement speed calculation module calculates the motion speed of the cloud layer.
Before acquiring the image at the current sampling moment, it must be judged whether a full sampling period has elapsed; the current image is acquired only when it has.
The following description takes as an example a ground observation station equipped with one camera.
The ground image processing module: image differencing and binarization are performed on the image acquired at the current sampling moment and the image acquired at the previous sampling moment, yielding the binarized image of the current sampling moment. The binarized image of the previous sampling moment is obtained in the same way: by differencing and binarizing the image acquired at the previous sampling moment and the image acquired at the sampling moment before it, these being adjacent sampling moments.
Ground image matching module: the binarized image of the current sampling moment is compared (matched) with that of the previous sampling moment to obtain matched pattern pairs (a matched pattern pair consists of one pattern in the binarized image of the previous sampling moment and one pattern in the binarized image of the current sampling moment). The patterns in each binarized image are numbered and their characteristics (the central point and the total number of pixels) are calculated; the patterns in the two binarized images are matched according to a set principle, the central-point pixel displacement of each matched pattern pair is calculated, and the mean of the central-point pixel displacements over all matched pattern pairs in the two binarized images is taken as a representation of the overall movement trend of the cloud layer in the image coordinate system.
The ground cloud layer movement speed calculation module: the motion speed of the cloud layer is solved from the mean central-point pixel displacement of all matched pattern pairs together with the camera focal length, the cloud layer height, and other information.
The image sampling and differencing period $T$ (in seconds) is calculated as follows, taking into account the ground wind speed $v_w$ (in m/s) at the time: $T = k / v_w$, where the empirical value of $k$ is 10. The sampling period is equal to the differencing period.
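Assuming the period takes the form $T = k / v_w$ with the empirical constant $k = 10$ (the original formula image is missing from the text, so this form is a reconstruction), the computation is a one-line sketch:

```python
def sampling_period(ground_wind_speed_ms: float, k: float = 10.0) -> float:
    """Image sampling/differencing period in seconds.

    Shorter period when the wind (and hence the cloud) moves faster.
    The form T = k / v_w is an assumption reconstructed from the text.
    """
    if ground_wind_speed_ms <= 0:
        raise ValueError("wind speed must be positive")
    return k / ground_wind_speed_ms

# e.g. a 5 m/s ground wind gives a 2 s sampling period
```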
The principle of binarization is as follows, where $g(x,y)$ is the gray level of a pixel in the difference image and $g'(x,y)$ is the processed gray level: $g'(x,y) = 255$ if $g(x,y) \ge T_g$, and $g'(x,y) = 0$ otherwise, $T_g$ being the binarization threshold.
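A minimal NumPy sketch of the differencing-and-binarization step; the threshold value `T_G` is an assumption, since the patent does not state one:

```python
import numpy as np

T_G = 30  # binarization threshold (assumed; the patent leaves it unspecified)

def difference_and_binarize(prev_img: np.ndarray, curr_img: np.ndarray) -> np.ndarray:
    """Subtract the previous cloud image from the current one and binarize.

    Pixels whose absolute gray-level change is >= T_G become 255, others 0.
    """
    diff = np.abs(curr_img.astype(np.int16) - prev_img.astype(np.int16))
    return np.where(diff >= T_G, 255, 0).astype(np.uint8)

# Tiny example: a 4x4 "cloud" patch that shifts one pixel to the right.
prev_img = np.zeros((4, 4), dtype=np.uint8)
prev_img[1:3, 0:2] = 200
curr_img = np.zeros((4, 4), dtype=np.uint8)
curr_img[1:3, 1:3] = 200

binary = difference_and_binarize(prev_img, curr_img)
# The leading and trailing edges of the shifted patch light up in the result.
```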
The principle for selecting the feature pattern in the binarized image is as follows: in the binarized image at the previous sampling moment, a region whose gray value is 255 and whose boundary can be closed (for example, a first closed-loop region) is selected, and its outer boundary is taken as an independent pattern, also called the region boundary, as shown by the irregular circle on the left in FIG. 5. The central point $P_1$ of the region boundary and the number of pixels $S_1$ inside the region boundary are calculated; the region boundary refers to the outer boundary of a closable region with gray value 255, for example the outer boundary of the first closed-loop region. $P_1$ has position coordinates $(x_1, y_1)$, where $x_1$ is the $x$-axis value of the central point $P_1$ and $y_1$ is its $y$-axis value.
The central point of the region boundary is selected as the geometric center: the maximum and minimum coordinate values of the region boundary on the horizontal and vertical axes are obtained, and their midpoints are taken as the position coordinates of the central point. That is, the maximum and minimum horizontal coordinates of the region boundary are obtained and their mean is taken as the horizontal coordinate of the central point; likewise, the maximum and minimum vertical coordinates are obtained and their mean is taken as the vertical coordinate. The horizontal axis is the $x$ axis and the vertical axis is the $y$ axis.
The number of pixels $S_1$ inside the region boundary can be counted by a computer program traversing the interior of the region boundary; it is the total number of pixels inside the boundary. The interior of the region boundary is the whole closable region with gray value 255, excluding its outer boundary.
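A NumPy sketch of the geometric-center rule (midrange of the region's horizontal and vertical extents) together with the pixel count; for simplicity the whole 255-valued region stands in for its boundary, which yields the same extents:

```python
import numpy as np

def region_center_and_count(binary: np.ndarray) -> tuple[tuple[float, float], int]:
    """Geometric center (midrange of x/y extents) and pixel count of the
    255-valued region in a binarized image."""
    xs, ys = np.nonzero(binary == 255)   # row (x) and column (y) indices
    x_c = (xs.max() + xs.min()) / 2.0    # midrange of the horizontal extent
    y_c = (ys.max() + ys.min()) / 2.0    # midrange of the vertical extent
    return (x_c, y_c), int(xs.size)

# A 3x4 block of 255s spanning rows 2..4 and columns 1..4.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:5, 1:5] = 255
center, count = region_center_and_count(img)
# center == (3.0, 2.5), count == 12
```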
The two binarized images are compared (i.e. the binarized image at the previous sampling moment, referred to as the previous image, and the binarized image at the current sampling moment, referred to as the next image). In the next image, the closed-loop region corresponding to the first closed-loop region of the previous image is found (for convenience, the second closed-loop region; it must likewise be a closable region with gray value 255), and the outer boundary of the second closed-loop region is taken as an independent pattern, as shown by the irregular circle on the right in FIG. 5. The center point of the outer boundary of the second closed-loop region is denoted $P_2$ and the number of pixels inside it is denoted $S_2$; $P_2$ has position coordinates $(x_2, y_2)$, where $x_2$ is the $x$-axis value of the central point $P_2$ and $y_2$ is its $y$-axis value.
The central point of the outer boundary of the second closed-loop region and the pixels inside that boundary are calculated in the same way as for the first closed-loop region: the center point is again selected as the geometric center, i.e. the maximum and minimum coordinate values of the second region's outer boundary on the horizontal and vertical axes are obtained and their midpoints taken as the center's position coordinates; and the number of pixels inside the outer boundary is obtained by a program traversing the interior, the interior being the whole second closed-loop region except its outer boundary.
The coordinate positions of the central points $P_1$ and $P_2$ of the patterns formed by the outer boundaries of the first and second closed-loop regions are compared. When formulas (8) and (9) are satisfied, the pixel displacements of the two central points in the $x$ and $y$ directions are obtained as $(\Delta x, \Delta y) = (x_2 - x_1,\; y_2 - y_1)$, where $\Delta x$ is the $x$-axis pixel displacement of the central points of the matched pattern pair in the two binarized images and $\Delta y$ is the $y$-axis pixel displacement.
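A sketch of the pair-matching test, assuming formulas (8) and (9) are a pixel-count-change threshold and a center-distance threshold respectively (the formula images are missing, so the exact forms and the default threshold values here are assumptions):

```python
import math

def regions_match(c1, c2, s1, s2, eps_count=50, eps_dist=20.0):
    """Decide whether two candidate regions form a matched pattern pair.

    c1, c2 -- center points (x, y) in the previous and next binarized images
    s1, s2 -- pixel counts of the two regions
    Both thresholds must hold: |s2 - s1| <= eps_count (pixel-count change)
    and dist(c1, c2) <= eps_dist (center-point displacement).
    """
    count_ok = abs(s2 - s1) <= eps_count
    dist_ok = math.dist(c1, c2) <= eps_dist
    return count_ok and dist_ok
```

In practice `eps_dist` would be derived from the cloud speed at the last sampling moment, as the text describes, rather than fixed.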
In a binarized image, there is generally more than one closable area with a gray value of 255. All closable areas with a gray value of 255 in the binarized image at the previous sampling time are selected and matched or compared with the corresponding areas in the binarized image at the current sampling time, forming several matched graph pairs; for example, the first closed-loop area and the second closed-loop area form the first matched graph pair, and so on. For each matched pair, the two center points and the pixel counts are obtained, and the pixel displacements in the x and y directions are calculated (in the same way as (Δx, Δy)). The displacements of all pairs are then averaged on each axis separately: the x-axis pixel displacements are averaged and recorded as Δx̄, and the y-axis pixel displacements are averaged and recorded as Δȳ.
The center-point pixel displacement mean of all matched graph pairs between the two binarized images is therefore (Δx̄, Δȳ); this mean is also called the average moving pixels of the center points.
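The per-axis averaging over the matched graph pairs might be sketched as follows (the function name is illustrative):

```python
def mean_center_displacement(pairs):
    """Average the x- and y-axis pixel displacements of the center points
    over all matched graph pairs; each pair is ((x1, y1), (x2, y2))."""
    n = len(pairs)
    dx_mean = sum(c2[0] - c1[0] for c1, c2 in pairs) / n
    dy_mean = sum(c2[1] - c1[1] for c1, c2 in pairs) / n
    return dx_mean, dy_mean
```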
By the convention of this method, the x and y axes of the image coordinate system correspond to the north direction (N) and the east direction (E) respectively, and the movement speed of the cloud layer is obtained by calculation.
Here v_N denotes the north moving speed of the cloud layer at the current sampling time, v_E its east moving speed, and v its movement speed (also called the observation speed) at the current sampling time; if no observation speed exists in the initial state, an empirical value of 10-30 m/s is selected. h is the height of the cloud layer, f the focal length of the camera, d the size of a single pixel on the camera imaging plane, and T the image sampling and differencing period; Δx̄ and Δȳ are the x-axis and y-axis pixel displacement means of the center points of all matched graph pairs between the two binarized images. In this paragraph, the camera refers to the camera used by the ground observation station to collect cloud images.
The height h of the cloud layer can be obtained by ground binocular vision or laser height measurement (the ground binocular vision method and the laser ranging method are existing general techniques and are not described further); f and d are the camera's own parameters. When the formulas are used for calculation, all length units are unified to meters (m).
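Given the stated quantities, a hedged sketch of the speed computation of formulas (5)-(7) is the following: it assumes a similar-triangles scaling of the mean pixel displacement (factor d·h/f, metres on the cloud plane per pixel) divided by the sampling period, with the text's axis convention (image x axis toward north, y axis toward east); the patent's exact formulas may differ in detail:

```python
import math

def cloud_speed(dx_mean, dy_mean, h, f, d, T):
    """Assumed reconstruction of formulas (5)-(7): mean centre-point pixel
    displacement scaled to metres on the cloud plane, divided by the
    sampling period T.  Axis convention: x -> north (N), y -> east (E)."""
    scale = d * h / f            # metres on the cloud plane per pixel
    v_n = dx_mean * scale / T    # north speed, m/s
    v_e = dy_mean * scale / T    # east speed, m/s
    return v_n, v_e, math.hypot(v_n, v_e)
```

With h = 1000 m, f = 50 mm, d = 5 um and T = 2 s, each pixel of displacement corresponds to 0.1 m on the cloud plane.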
The comparison principle of the center point and the pixel point (i.e. the matching principle of the first closed-loop area and the second closed-loop area) is as follows:
Here ε_n denotes the threshold for the change in the number of pixel points and is set to 10% of n1; ε_d denotes the distance threshold of the center points. ε_d can be calculated from the motion trend of the cloud layer, and since that trend changes, ε_d is constantly updated. It is solved as follows:
Here v' denotes the movement speed of the cloud layer at the previous sampling time, calculated by the ground cloud-layer movement speed calculation module; if the cloud movement speed has not yet been calculated in the initial state, an empirical value of 10-30 m/s is selected. h is the height of the cloud layer, f the focal length of the camera, d the size of a single pixel on the camera imaging plane, and T the image sampling and differencing period. When the center points and pixel counts of a matched graph pair satisfy formulas (8) and (9) simultaneously, the pixel displacements in the x and y directions calculated for that pair are valid and can be used in computing the center-point pixel displacement mean.
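A hedged sketch of the matching test of formulas (8)-(10): the pixel-count change is limited to 10% of n1, and the center-distance threshold is assumed here to be the pixel travel expected from the previous cloud speed over one period (the exact form of formula (10) may differ):

```python
import math

def expected_pixel_travel(v_prev, h, f, d, T):
    """Assumed form of formula (10): pixels a cloud feature is expected
    to travel in one period T, from the previous cloud speed v_prev (m/s),
    cloud height h (m), focal length f (m) and pixel size d (m)."""
    return v_prev * T * f / (h * d)

def is_match(c1, n1, c2, n2, eps_d, ratio=0.10):
    """Formulas (8)-(9): the pixel-count change stays within ratio*n1
    and the center-point distance stays within eps_d."""
    dist = math.hypot(c2[0] - c1[0], c2[1] - c1[1])
    return abs(n2 - n1) <= ratio * n1 and dist <= eps_d
```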
When ground visual observation is carried out, if the cloud layer movement speed is not calculated in the initial state, selecting 10-30m/s as the initial cloud layer movement speed, and sending the initial cloud layer movement speed to a visual navigation module of the mobile platform in real time;
at the moment 1 (ground end first sampling), a cloud layer image is obtained;
obtaining a cloud layer image at the moment 2 (second sampling at the ground end), and carrying out image difference and binarization on the image acquired at the moment 2 and the image acquired at the moment 1 to obtain a binarized image at the moment 2;
At time 3 (third sampling at the ground end), a cloud layer image is obtained. Image differencing and binarization are performed on the images acquired at time 3 and time 2 to obtain the binarized image at time 3, which is pattern-matched against the binarized image at time 2 to obtain the center point and pixel count of each matched graph pair. Each pair is checked against formulas (8)-(9); the mean of the center-point moving pixels over all pairs that satisfy formulas (8)-(9) is calculated and substituted into formulas (5)-(7) to obtain the movement speed of the cloud layer at time 3, which is sent to the visual navigation module of the mobile platform in real time. In this process ε_d is calculated by formula (10); since the movement speed of the cloud layer at time 2 has not yet been calculated, v' in formula (10) is the set initial cloud movement speed.
At time 4 (fourth sampling at the ground end), a cloud layer image is obtained. Image differencing and binarization are performed on the images acquired at time 4 and time 3 to obtain the binarized image at time 4, which is pattern-matched against the binarized image at time 3 to obtain the center point and pixel count of each matched graph pair. Each pair is checked against formulas (8)-(9); the mean of the center-point moving pixels over all pairs that satisfy formulas (8)-(9) is calculated and substituted into formulas (5)-(7) to obtain the movement speed of the cloud layer at time 4, which is sent to the visual navigation module of the mobile platform in real time. In this process ε_d is calculated by formula (10), with v' in formula (10) being the cloud movement speed at time 3.
At time 5 (fifth sampling at the ground end), a cloud layer image is obtained. Image differencing and binarization are performed on the images acquired at time 5 and time 4 to obtain the binarized image at time 5, which is pattern-matched against the binarized image at time 4 to obtain the center point and pixel count of each matched graph pair. Each pair is checked against formulas (8)-(9); the mean of the center-point moving pixels over all pairs that satisfy formulas (8)-(9) is calculated and substituted into formulas (5)-(7) to obtain the movement speed of the cloud layer at time 5, which is sent to the visual navigation module of the mobile platform in real time. In this process ε_d is calculated by formula (10), with v' in formula (10) being the cloud movement speed at time 4.
Subsequent sampling times follow by analogy.
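The time-1-to-time-5 recurrence above can be sketched as a loop. Here estimate_speed is a hypothetical stand-in for the whole difference/binarize/match/average step; it takes three consecutive raw frames (needed to form the two binarized difference images being matched) plus the previous speed used for the formula-(10) threshold:

```python
def ground_loop(estimate_speed, images, v_init):
    """Ground-station recurrence: the first speed is produced at time 3
    (index 2), seeded with the initial empirical speed v_init; each later
    step reuses the speed from the step before for its match threshold."""
    v_prev = v_init
    speeds = []
    for k in range(2, len(images)):
        v_prev = estimate_speed(images[k - 2], images[k - 1], images[k], v_prev)
        speeds.append(v_prev)   # sent to the mobile platform in real time
    return speeds
```

A stub estimate_speed makes the seeding and chaining of v' easy to see.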
During its movement, the mobile platform observes the cloud layer with a skyward camera it carries, obtaining two images (cloud layer observation images) at adjacent times; the initial visual navigation module then calculates the plane displacement of the camera relative to the cloud layer, i.e., the plane displacement of the mobile platform relative to the cloud layer. The skyward camera is also called the vision sensor. Since the camera is fixed to the mobile platform, the camera's position can be regarded as the platform's position. The specific process is as follows:
A navigation coordinate system is defined as (east, north, up); the coordinate system of the carrier camera at the previous time is denoted c1, and the coordinate system of the carrier camera at the later time is denoted c2.
For two images at two adjacent times (the time interval between them is selected as T_c, namely the sampling period of the mobile platform; the two adjacent times are the previous time and the later time respectively), the SIFT/SURF/ORB method (any one of them) is used to extract the feature points in the two images and match them; a pair of matched feature points is denoted (p1, p2).
Referring to fig. 3, p1 and p2 are the pixel coordinates of a matched feature point in the images at the previous and later times respectively; u1 and u2 are its horizontal-axis values in the two images, and v1 and v2 are its vertical-axis values.
The sampling period of the mobile platform may be the same as or different from that of the ground visual observation. The mobile platform and the ground visual observation equipment sample independently, and may or may not sample at the same times; accordingly, the sampling times of the mobile platform may coincide with or differ from those of the ground visual observation equipment.
Because the distance between the cloud layer and the mobile platform is far greater than the focal length of the camera, the cloud layer feature points are considered to lie on the same plane, and the matched feature points are therefore assumed to satisfy a homography matrix H constraint.
The description of the homography matrix is:
Here K is the camera intrinsic matrix; n_p is the unit normal vector of the cloud plane in the carrier camera coordinate system at the previous time (at a given moment the cloud plane is fixed, but n_p changes as the camera pose changes); n_p^T is its transpose; D is the distance from the cloud plane to the camera. The height difference between the ground observation station and the carrier can be obtained from their barometric pressure difference, and combining it with the cloud height obtained by the observation station yields the distance from the cloud plane to the carrier (i.e., to the camera). R and t are respectively the rotation matrix and translation vector between the carrier camera coordinate systems at the two adjacent times, which are to be solved. In this paragraph, the camera refers to the skyward camera of the mobile platform.
A system of equations is formed from multiple groups of matched feature points to obtain H, from which R and t are recovered. Given the rotation of the carrier camera coordinate system relative to the navigation coordinate system at the previous time, the rotation at the later time can be obtained, and from it the real-time displacement (ΔE, ΔN, ΔU) of the camera (or mobile platform) in the navigation coordinate system.
Here ΔN denotes the north displacement of the camera relative to the cloud layer (i.e., the north displacement of the mobile platform relative to the cloud layer), ΔE the east displacement of the camera relative to the cloud layer (i.e., of the mobile platform), and ΔU the displacement of the camera (i.e., the mobile platform) relative to the cloud layer in the up direction.
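A minimal numeric sketch of the planar-homography model: one common sign convention is assumed here (X2 = R·X1 + t, with the cloud plane satisfying n·X1 = dist in the previous camera frame, giving H = K(R + t·n^T/dist)·K^(-1)); the patent's own sign convention for the quoted formula may differ:

```python
import numpy as np

def homography_from_motion(K, R, t, n, dist):
    """Planar homography induced by camera motion (R, t) relative to a
    plane with unit normal n and distance dist in the previous camera
    frame, under the convention X2 = R @ X1 + t and n @ X1 = dist."""
    K = np.asarray(K, dtype=float)
    H = K @ (np.asarray(R, dtype=float) + np.outer(t, n) / dist) @ np.linalg.inv(K)
    return H / H[2, 2]  # remove the overall scale ambiguity

def project(H, p):
    """Map a pixel through H with homogeneous normalisation."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[0] / q[2], q[1] / q[2]
```

For a pure 0.5 m east translation under a plane 10 m away, a feature at pixel (370, 340) with f = 500 px maps to (395, 340), consistent with the similar-triangles intuition above.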
The cloud speed information measured by the ground visual observation equipment is transmitted to the mobile platform through a communication link (i.e., a data link). The visual navigation correction module of the mobile platform combines the plane displacement of the platform relative to the cloud layer, output by the initial visual navigation module, with the cloud speed information and performs differential processing: the initial visual navigation module gives the platform's motion relative to the cloud layer, the ground visual observation equipment gives the cloud layer's own motion, and removing the cloud layer's own displacement from the camera-measured plane displacement eliminates the influence of cloud motion and yields the platform's real displacement relative to the ground, i.e., its real east displacement ΔE_r and north displacement ΔN_r.
Therefore, the actual position of the mobile platform under the navigation coordinate system is obtained through accumulation.
Here P_k denotes the actual position of the mobile platform in the navigation coordinate system at the later time k; P_(k-1) denotes its actual position in the navigation coordinate system at the previous time k-1; T_c is the time interval between the two adjacent times.
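A hedged sketch of the correction and accumulation of formulas (18)-(19), assuming the measured displacement is the platform's displacement relative to the cloud and the cloud velocity is given in the ground frame (with the opposite measurement convention the sign of the v·T_c term flips):

```python
def correct_and_accumulate(pos_prev, d_rel_e, d_rel_n, v_e, v_n, T_c):
    """Compose the camera-measured displacement relative to the cloud with
    the cloud's own travel v*T_c over the interval, then accumulate the
    corrected step onto the previous position (east, north)."""
    de = d_rel_e + v_e * T_c  # real east displacement of the platform
    dn = d_rel_n + v_n * T_c  # real north displacement of the platform
    return pos_prev[0] + de, pos_prev[1] + dn
```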
The initial position of the mobile platform is known (it can be preset). During carrier visual navigation, the real-time cloud movement speed information sent by the ground visual observation equipment is received.
In the carrier visual navigation, at the first moment (the first sampling of a carrier end), a carrier end camera shoots a cloud layer image; in the process, the displacement of the camera or the mobile platform under the navigation coordinate system is not solved, so that the actual position of the mobile platform under the navigation coordinate system at the first moment is still the initial position of the mobile platform.
At the second moment (the second sampling is carried out on the carrier end), the carrier end camera shoots the cloud layer image again, and the displacement of the camera or the moving platform at the second moment under the navigation coordinate system is obtained according to the method (actually, the real-time displacement of the camera or the moving platform from the first moment to the second moment); then, according to the received latest cloud layer movement speed information, calculating the real east displacement and north displacement (from the first moment to the second moment, the real east displacement and north displacement of the mobile platform) of the mobile platform and the actual position of the mobile platform in the navigation coordinate system at the second moment through formulas (18) and (19); in the process, the actual position of the mobile platform in the navigation coordinate system at the previous moment is the actual position of the mobile platform in the navigation coordinate system at the first moment;
at a third time (sampling at the carrier end for the third time), the carrier end camera shoots the cloud layer image again, and the displacement of the camera or the mobile platform at the third time under the navigation coordinate system is obtained according to the method (actually, the displacement refers to the real-time displacement of the camera or the mobile platform from the second time to the third time); then, according to the received latest cloud layer motion speed information, calculating the real east displacement and the real north displacement (from the second moment to the third moment, the real east displacement and the real north displacement of the mobile platform) of the mobile platform and the actual position of the mobile platform in the navigation coordinate system at the third moment through formulas (18) and (19); in the process, the actual position of the mobile platform in the navigation coordinate system at the previous moment is the actual position of the mobile platform in the navigation coordinate system at the second moment;
at a fourth moment (fourth sampling of the carrier end), the carrier end camera shoots the cloud layer image again, and the displacement of the camera or the mobile platform at the fourth moment under the navigation coordinate system is obtained according to the method (actually, the real-time displacement of the camera or the mobile platform from the third moment to the fourth moment); then, according to the received latest cloud layer motion speed information, calculating the real east displacement and north displacement (from the third moment to the fourth moment, the real east displacement and north displacement of the mobile platform) of the mobile platform and the actual position of the mobile platform in the navigation coordinate system at the fourth moment through formulas (18) and (19); in the process, the actual position of the mobile platform in the navigation coordinate system at the previous moment is the actual position of the mobile platform in the navigation coordinate system at the third moment;
Subsequent times follow in turn.
The ground visual observation is earlier than the carrier visual navigation, and when the real east displacement and the real north displacement of the mobile platform are calculated from the first moment to the second moment in the carrier visual navigation, the latest cloud layer movement speed information received by the visual navigation module can be the movement speed of the cloud layer at the moment 3 of the ground end, the movement speed of the cloud layer at the moment 4 of the ground end, the movement speed of the cloud layer at the moment 5 of the ground end or the movement speed of the cloud layer at the moment behind the ground end.
After the ground visual observation equipment samples every time, the updated cloud layer movement speed is obtained through calculation, and the updated cloud layer movement speed information is immediately sent to the mobile platform in real time. The visual navigation module adopts the received latest cloud layer movement speed when calculating the real east displacement and the north displacement.
By default, when the visual navigation module calculates the real east and north displacements, the latest cloud movement speed it has received consists of v_N (the north moving speed of the cloud layer at the current sampling time at the ground end) and v_E (the east moving speed of the cloud layer at the current sampling time at the ground end), where the current sampling time in ground visual observation refers to the ground-end sampling time corresponding to the latest cloud movement speed information received by the visual navigation module.
The above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the spirit of the corresponding technical solutions of the embodiments of the present invention. The above embodiments are only preferred embodiments of the present invention, and any modifications and changes made according to the present invention should be included in the protection scope of the present invention.
Claims (9)
1. A visual positioning system based on cloud layer observation under cloudy weather is characterized by comprising:
the ground image acquisition module is used for acquiring cloud layer images from the ground in real time;
the ground image processing module is used for carrying out image difference and binarization processing on the cloud layer image acquired at the current sampling moment and the cloud layer image acquired at the last sampling moment;
the ground image matching module is used for matching or comparing the binarized image at the last sampling moment with the binarized image at the current sampling moment to obtain the pixel displacement of the central point;
the ground cloud layer movement speed calculation module is used for calculating the cloud layer movement speed according to the central point pixel displacement obtained by the ground image matching module;
the vision sensor is used for acquiring cloud layer images from the mobile platform;
the visual navigation module comprises an initial visual navigation module and a visual navigation correction module;
the initial visual navigation module is used for calculating the plane displacement of the visual sensor or the mobile platform relative to the cloud layer according to the two images at the adjacent moments obtained by the visual sensor;
and the visual navigation correction module is used for receiving the cloud layer movement speed information obtained by the ground cloud layer movement speed calculation module, correcting the plane displacement of the visual sensor or the mobile platform, which is measured by the initial visual navigation module, relative to the cloud layer to obtain the real displacement of the mobile platform relative to the ground, and obtaining the actual position of the mobile platform under the navigation coordinate system.
2. The visual positioning system based on cloud layer observation in cloudy weather as claimed in claim 1, wherein, in the ground image processing module, the image sampling and differencing period T is calculated from the ground wind speed as follows;
The principle of binarization is as follows:
3. The visual positioning system based on cloud observation in cloudy weather according to claim 1, wherein the ground image matching module specifically comprises:
the selection principle of the characteristic pattern in the binarized image is: in the binarized image at the previous sampling time, an area that has a gray value of 255 and can be closed is selected, its outer boundary is taken, and the center point P1 of the area boundary and the number n1 of pixel points inside the boundary are calculated, P1 having position coordinates (x1, y1);
The selection of the central point of the area boundary adopts a geometric center mode, namely, the maximum coordinate value and the minimum coordinate value of the area boundary on a horizontal axis and a vertical axis are obtained, and the median values are respectively solved and are used as the position coordinate value of the central point;
comparing the two successive binarized images, finding in the latter image the closed-loop area corresponding to the closable area of the former image, and taking the outer boundary of that corresponding closed-loop area in the latter image as an independent graph; the center point of the outer boundary of the closed-loop area in the latter image is denoted P2, the number of pixel points inside that outer boundary is denoted n2, and P2 has position coordinates (x2, y2);
comparing the coordinate positions of the center points P1 and P2; the x and y axes of the image coordinate system correspond to the north direction N and the east direction E respectively, and the pixel displacements (Δx, Δy) of the two center points in the x and y directions are obtained;
4. The visual positioning system based on cloud layer observation in cloudy weather according to claim 1,
the ground cloud layer movement speed calculation module specifically comprises:
the x and y axes of the image coordinate system correspond to the north direction N and the east direction E respectively, and the movement speed of the cloud layer at the current sampling time is calculated;
wherein v_N denotes the north moving speed of the cloud layer at the current sampling time, v_E the east moving speed of the cloud layer at the current sampling time, and v the movement speed of the cloud layer at the current sampling time; h is the height of the cloud layer, f the focal length of the camera, d the size of a single pixel on the camera imaging plane, and T the image sampling and differencing period; Δx̄ and Δȳ are the x-axis and y-axis pixel displacement means of the center points of all matched graph pairs in the two binarized images.
5. The visual positioning system based on cloud observation in cloudy weather as claimed in claim 3, wherein the comparison principle between the central point and the pixel point is as follows:
wherein ε_n denotes the threshold for the change in the number of pixel points, and ε_d denotes the distance threshold of the center points;
wherein v' denotes the movement speed of the cloud layer at the previous sampling time; h is the height of the cloud layer, f the focal length of the camera, d the size of a single pixel on the camera imaging plane, and T the image sampling and differencing period.
6. The visual positioning system based on cloud observation in cloudy weather according to claim 1, wherein the initial visual navigation module specifically comprises:
defining a navigation coordinate system; the coordinate system of the carrier camera at the previous time is denoted c1, and the coordinate system of the carrier camera at the later time is denoted c2;
selecting two images at two adjacent moments, extracting characteristic points in the two images and matching the characteristic points;
p1 and p2 are the pixel coordinates of a matched feature point in the images at the previous and later times respectively; u1 and u2 are its horizontal-axis values in the two images, and v1 and v2 are its vertical-axis values;
the cloud layer feature points are assumed to lie on the same plane, so the matched feature points satisfy a homography matrix H constraint;
the description of the homography matrix is:
wherein K is the camera intrinsic matrix, n_p is the unit normal vector of the cloud plane in the carrier camera coordinate system at the previous time, and D is the distance from the cloud plane to the camera; R and t are respectively the rotation matrix and translation vector between the carrier camera coordinate systems at the two adjacent times, which are to be solved;
forming a system of equations from multiple groups of matched feature points to obtain H, thereby recovering R and t; given the rotation of the carrier camera coordinate system relative to the navigation coordinate system at the previous time, the rotation at the later time can be obtained, and from it the real-time displacement of the camera or mobile platform in the navigation coordinate system;
7. The visual positioning system based on cloud layer observation in cloudy weather according to claim 1,
the visual navigation correction module specifically comprises:
the visual navigation correction module combines the plane displacement of the mobile platform relative to the cloud layer obtained by the initial visual navigation module with the cloud movement speed information obtained by the ground cloud-layer movement speed calculation module and performs differential processing to obtain the real east displacement ΔE_r and north displacement ΔN_r of the mobile platform:
Obtaining the actual position of the mobile platform under a navigation coordinate system:
8. The visual positioning system based on cloud observation in cloudy weather according to claim 1, wherein the ground image acquisition module, the ground image processing module, the ground image matching module and the ground cloud movement speed calculation module form ground visual observation equipment; the visual sensor and the visual navigation module form carrier visual navigation equipment, and the ground visual observation equipment and the carrier visual navigation equipment interact cloud layer movement speed information through a data link.
9. A visual positioning method based on cloud layer observation under cloudy weather is characterized in that the visual positioning system based on cloud layer observation under cloudy weather of any one of claims 1 to 8 is adopted to carry out visual positioning on a mobile platform.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210701077.6A CN114782539B (en) | 2022-06-21 | 2022-06-21 | Visual positioning system and method based on cloud layer observation in cloudy weather |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114782539A true CN114782539A (en) | 2022-07-22 |
CN114782539B CN114782539B (en) | 2022-10-11 |
Family
ID=82421289
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210701077.6A Active CN114782539B (en) | 2022-06-21 | 2022-06-21 | Visual positioning system and method based on cloud layer observation in cloudy weather |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114782539B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130048707A1 (en) * | 2011-08-26 | 2013-02-28 | Qualcomm Incorporated | Identifier generation for visual beacon |
CN103149939A (en) * | 2013-02-26 | 2013-06-12 | 北京航空航天大学 | Dynamic target tracking and positioning method of unmanned plane based on vision |
CN114184200A (en) * | 2022-02-14 | 2022-03-15 | 南京航空航天大学 | Multi-source fusion navigation method combined with dynamic mapping |
CN114419109A (en) * | 2022-03-29 | 2022-04-29 | 中航金城无人***有限公司 | Aircraft positioning method based on visual and barometric information fusion |
CN114547222A (en) * | 2022-02-21 | 2022-05-27 | 智道网联科技(北京)有限公司 | Semantic map construction method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN114782539B (en) | 2022-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6484729B2 (en) | Unmanned aircraft depth image acquisition method, acquisition device, and unmanned aircraft | |
CN108534782B (en) | Binocular vision system-based landmark map vehicle instant positioning method | |
CN102353377B (en) | High altitude long endurance unmanned aerial vehicle integrated navigation system and navigating and positioning method thereof | |
EP3132231B1 (en) | A method and system for estimating information related to a vehicle pitch and/or roll angle | |
CN112987065B (en) | Multi-sensor-integrated handheld SLAM device and control method thereof | |
CN108665499B (en) | Near distance airplane pose measuring method based on parallax method | |
CN110174088A (en) | A kind of target ranging method based on monocular vision | |
CN105976353A (en) | Spatial non-cooperative target pose estimation method based on model and point cloud global matching | |
CN109631911B (en) | Satellite attitude rotation information determination method based on deep learning target recognition algorithm | |
CN110033480A (en) | The airborne lidar for fluorescence target motion vectors estimation method of measurement is taken the photograph based on boat | |
CN113223145B (en) | Sub-pixel measurement multi-source data fusion method and system for planetary surface detection | |
CN113624231B (en) | Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft | |
CN111913190B (en) | Near space dim target orienting device based on color infrared spectrum common-aperture imaging | |
CN110998241A (en) | System and method for calibrating an optical system of a movable object | |
CN109708627B (en) | Method for rapidly detecting space dynamic point target under moving platform | |
WO2017160356A1 (en) | Systems and methods for enhancing object visibility for overhead imaging | |
CN105606123A (en) | Method for automatic correction of digital ground elevation model for low-altitude aerial photogrammetry | |
CN109724586A (en) | A kind of spacecraft relative pose measurement method of fusion depth map and point cloud | |
CN104729482A (en) | Ground tiny target detection system and ground tiny target detection method based on airship | |
CN104154932B (en) | Implementation method of high-dynamic star sensor based on EMCCD and CMOS | |
CN116385504A (en) | Inspection and ranging method based on unmanned aerial vehicle acquisition point cloud and image registration | |
CN114544006B (en) | Low-altitude remote sensing image correction system and method based on ambient illumination condition | |
CN108961319B (en) | Method for analyzing dynamic airplane motion characteristics by double-linear-array TDI space camera | |
CN117523461B (en) | Moving target tracking and positioning method based on airborne monocular camera | |
CN109341685B (en) | Fixed wing aircraft vision auxiliary landing navigation method based on homography transformation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||