CN114782539A - Visual positioning system and method based on cloud layer observation in cloudy weather - Google Patents

Visual positioning system and method based on cloud layer observation in cloudy weather

Info

Publication number
CN114782539A
CN114782539A (application CN202210701077.6A)
Authority
CN
China
Prior art keywords
cloud layer
image
ground
moment
visual
Prior art date
Legal status
Granted
Application number
CN202210701077.6A
Other languages
Chinese (zh)
Other versions
CN114782539B (en)
Inventor
万骏炜
蔡旭阳
Current Assignee
Avic Jincheng Unmanned System Co ltd
Original Assignee
Avic Jincheng Unmanned System Co ltd
Priority date
Filing date
Publication date
Application filed by Avic Jincheng Unmanned System Co ltd
Priority to CN202210701077.6A
Publication of CN114782539A
Application granted
Publication of CN114782539B
Active legal status: Current
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/174 Segmentation; Edge detection involving the use of two or more images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20224 Image subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)

Abstract

The invention relates to the field of visual positioning and navigation, in particular to a visual positioning system and method based on cloud layer observation in cloudy weather. The system includes a ground image acquisition module, a ground image processing module, a ground image matching module, a ground cloud layer movement speed calculation module, a vision sensor and a visual navigation module. Compared with the prior art, the invention has the following beneficial effects: cloud layer speed information measured by the ground visual observation equipment is transmitted to the mobile platform through a data link; differential processing is performed by combining the plane movement of the platform relative to the cloud layer with the cloud layer speed information, which eliminates the influence of cloud layer motion, yields the real east and north displacement of the mobile platform, and allows the actual position of the mobile platform in the navigation coordinate system to be calculated.

Description

Visual positioning system and method based on cloud layer observation in cloudy weather
Technical Field
The invention relates to the field of visual positioning navigation, in particular to a visual positioning system and method based on cloud layer observation in cloudy weather.
Background
Navigation and positioning of ground and low-altitude mobile platforms currently rely on satellite navigation. Satellite navigation depends on radio signals, which are easily interfered with in harsh environments, so the navigation result of the mobile platform can become unavailable. A technical solution is therefore needed that can replace satellite navigation and provide a stable navigation result in such special environments.
Visual navigation does not depend on radio signals: relative pose motion can be calculated by matching images acquired at different moments and used for navigation and positioning, which makes up for the shortcomings of satellite navigation. In a city, the features around the mobile platform are distinct, and the position of the camera, or equivalently the motion of the mobile platform, can be deduced by observing fixed surrounding feature targets. In scenes where ground features are not distinct, such as lake surfaces and sandy terrain, a feature-extraction-based visual navigation method has difficulty acquiring stable feature reference points on the ground for matching and feature calculation, so accurate navigation cannot be achieved.
Because the cloud layer moves, the position of the mobile platform cannot be calculated by taking the cloud layer as a fixed reference point. Other means are needed to describe the cloud layer and quantify its movement trend, so that the relative position between the cloud layer and the moving platform can be converted; in this way the feature information of the cloud layer is used effectively and visual navigation capability in this specific environment is achieved.
Disclosure of Invention
The invention provides a visual positioning system and method based on cloud layer observation in cloudy weather, aiming to meet the navigation requirement in scenes with indistinct ground features such as lake surfaces and sandy terrain and to improve navigation efficiency.
To achieve this purpose, the technical scheme adopted by the invention is as follows: a visual positioning system based on cloud layer observation in cloudy weather comprises:
the ground image acquisition module is used for acquiring cloud layer images from the ground in real time;
the ground image processing module is used for carrying out image difference and binarization processing on the cloud layer image acquired at the current sampling moment and the cloud layer image acquired at the last sampling moment;
the ground image matching module is used for matching or comparing the binarized image at the last sampling moment with the binarized image at the current sampling moment to obtain the pixel displacement of the central point;
the ground cloud layer movement speed calculation module is used for calculating the cloud layer movement speed according to the central point pixel displacement obtained by the ground image matching module;
the vision sensor is used for acquiring cloud layer images from the mobile platform;
the visual navigation module comprises an initial visual navigation module and a visual navigation correction module;
the initial visual navigation module is used for calculating the plane displacement of the visual sensor or the mobile platform relative to the cloud layer according to the two images at the adjacent moments obtained by the visual sensor;
and the visual navigation correction module is used for receiving the cloud layer movement speed information obtained by the ground cloud layer movement speed calculation module, correcting the plane displacement of the visual sensor or the mobile platform, which is measured by the initial visual navigation module, relative to the cloud layer to obtain the real displacement of the mobile platform relative to the ground, and obtaining the actual position of the mobile platform under the navigation coordinate system.
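A minimal structural sketch of how these modules could fit together is given below; it is illustrative only, and all class, function and field names (CloudVelocity, PlaneDisplacement, GroundStation, CarrierNavigator) are hypothetical rather than taken from the patent.

```python
# Illustrative decomposition of the system into ground-side and carrier-side parts.
# All names are hypothetical; the patent only specifies the modules and their roles.
from dataclasses import dataclass

@dataclass
class CloudVelocity:
    v_north: float  # cloud layer speed toward north, m/s
    v_east: float   # cloud layer speed toward east, m/s

@dataclass
class PlaneDisplacement:
    d_north: float  # platform displacement relative to the cloud layer, north, m
    d_east: float   # platform displacement relative to the cloud layer, east, m

class GroundStation:
    """Ground image acquisition, processing, matching and cloud speed calculation."""
    def estimate_cloud_velocity(self, prev_image, curr_image) -> CloudVelocity:
        raise NotImplementedError  # see the detailed sketches later in the description

class CarrierNavigator:
    """Vision sensor plus initial visual navigation and visual navigation correction."""
    def __init__(self, start_east: float, start_north: float):
        self.east, self.north = start_east, start_north

    def update(self, rel: PlaneDisplacement, cloud: CloudVelocity, dt: float):
        # Differential processing: remove the cloud layer's own motion, then accumulate.
        self.east += rel.d_east - cloud.v_east * dt
        self.north += rel.d_north - cloud.v_north * dt
        return self.east, self.north
```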
In the ground image processing module, the image sampling and differencing period T is calculated from the ground wind speed v_w according to formula (1).
The principle of binarization is given by formula (2), in which g is the gray value of a pixel in the difference image and g' is the gray value after processing.
The ground image matching module specifically comprises:
The characteristic pattern is selected in the binarized image by choosing, in the binarized image at the last sampling moment, a region which has a gray value of 255 and can be closed, taking its outer boundary, and calculating the center point P1 of the region boundary and the number of pixels S1 inside the pattern; the position coordinate of P1 is (x1, y1).
The center point of the region boundary is selected as the geometric center, i.e. the maximum and minimum coordinate values of the region boundary on the horizontal axis and the vertical axis are obtained, and the median values are taken respectively as the position coordinates of the center point.
The two binarized images are compared, the closed-loop region in the latter image corresponding to the closable region in the former image is found, and the outer boundary of that closed-loop region in the latter image is taken as an independent pattern; the center point of the outer boundary of the closed-loop region in the latter image is denoted P2, the number of pixels inside that outer boundary is denoted S2, and the position coordinate of P2 is (x2, y2).
The coordinate positions of the center points P1 and P2 are compared. The x and y axes of the image coordinate system correspond to the north direction N and the east direction E respectively, and the pixel displacements (Δx, Δy) of the two center points in the x and y directions are obtained according to formulas (3) and (4).
The mean of the center-point pixel displacements of all matched pattern pairs in the two binarized images is denoted (Δx_mean, Δy_mean).
The ground cloud layer movement speed calculation module specifically comprises:
The x and y axes of the image coordinate system correspond to the north direction N and the east direction E respectively, and the movement speed of the cloud layer at the current sampling moment is calculated according to formulas (5)-(7), in which V_N denotes the north movement speed of the cloud layer at the current sampling moment, V_E denotes the east movement speed of the cloud layer at the current sampling moment, and V denotes the movement speed of the cloud layer at the current sampling moment; h is the cloud layer height, f is the camera focal length, μ is the single pixel size of the camera imaging plane, and T is the image sampling and differencing period; Δx_mean is the x-axis pixel displacement mean of the center points of all matched pattern pairs in the two binarized images, and Δy_mean is the corresponding y-axis pixel displacement mean.
The comparison principle for the center points and the pixel counts is given by formulas (8) and (9): the change in the number of pixels between the matched regions must not exceed the pixel-count change threshold ε_S, and the distance between the center points must not exceed the center-point distance threshold ε_d. The distance threshold ε_d is obtained from formula (10), in which V_prev denotes the movement speed of the cloud layer at the last sampling moment, h is the cloud layer height, f is the camera focal length, μ is the single pixel size of the camera imaging plane, and T is the image sampling and differencing period.
The initial visual navigation module specifically comprises:
A navigation coordinate system n is defined; the carrier camera coordinate system at the previous moment is c1 and the carrier camera coordinate system at the later moment is c2.
Two images at two adjacent moments are selected, and the feature points in the two images are extracted and matched.
A pair of matched feature points is denoted (p1, p2) and given by formulas (11) and (12): p1 and p2 are the pixel coordinates of the matched feature point in the images at the previous moment and the later moment respectively, x_a and x_b are its x-axis values in the two images, and y_a and y_b are its y-axis values in the two images.
The cloud layer feature points are assumed to lie in the same plane, so the matched feature points satisfy the constraint of the homography matrix H (formula (13)).
The homography matrix is described by formula (14), in which K is the camera intrinsic matrix, n_c is the unit normal vector of the cloud layer plane in the carrier camera coordinate system at the previous moment, d is the distance from the cloud layer plane to the camera, and R and t are respectively the rotation matrix and the translation vector between the carrier camera coordinate systems at the two adjacent moments, which are to be solved.
A system of equations is formed from multiple groups of matched feature points to obtain H, from which R and t are recovered. Based on the rotation relation C1 of the carrier camera coordinate system relative to the navigation coordinate system at the previous moment, the rotation relation C2 of the carrier camera coordinate system relative to the navigation coordinate system at the later moment can be obtained, and the real-time displacement t_n of the camera or the mobile platform in the navigation coordinate system can be obtained (formulas (15) and (16)).
The composition of the displacement t_n is given by formula (17), where ΔN denotes the north displacement of the camera relative to the cloud layer, ΔE denotes the east displacement of the camera relative to the cloud layer, and ΔU denotes the displacement of the camera relative to the cloud layer in the up (sky) direction.
The visual navigation correction module specifically comprises:
The visual navigation correction module performs differential processing by combining the plane displacement of the mobile platform relative to the cloud layer obtained by the initial visual navigation module with the cloud layer movement speed information obtained by the ground cloud layer movement speed calculation module, obtaining the real east displacement ΔE_real and the real north displacement ΔN_real of the mobile platform (formula (18)), and then the actual position of the mobile platform in the navigation coordinate system is obtained (formula (19)), where Pos_k denotes the actual position of the mobile platform in the navigation coordinate system at the later moment, Pos_(k-1) denotes the actual position of the mobile platform in the navigation coordinate system at the previous moment, and T_s is the time interval between the two adjacent moments.
The ground visual observation equipment consists of a ground image acquisition module, a ground image processing module, a ground image matching module and a ground cloud layer movement speed calculation module; the visual sensor and the visual navigation module form carrier visual navigation equipment, and the ground visual observation equipment and the carrier visual navigation equipment interact cloud layer movement speed information through a data link.
The invention also provides a visual positioning method based on cloud layer observation in cloudy weather, which adopts the visual positioning system based on cloud layer observation in cloudy weather to perform visual positioning on the mobile platform.
Compared with the prior art, the invention has the following beneficial effects: cloud layer speed information measured by the ground visual observation equipment is transmitted to the mobile platform through a communication link; the visual navigation correction module of the mobile platform performs differential processing on the plane movement position of the platform relative to the cloud layer, solved and output by the platform's own initial visual navigation module, and the cloud layer speed information transmitted by the ground visual observation equipment, which eliminates the influence of cloud layer motion, yields the real east and north displacement of the mobile platform relative to the ground, and allows the actual position of the mobile platform (such as an aircraft, an unmanned aerial vehicle or an automobile) in the navigation coordinate system to be calculated.
Drawings
FIG. 1 is a schematic diagram of a visual positioning system based on cloud layer observation in cloudy weather;
FIG. 2 is a schematic flow chart of a visual positioning method based on cloud layer observation in cloudy weather;
FIG. 3 is a schematic diagram of an image coordinate system;
FIG. 4 is a process of observing cloud motion;
fig. 5 is a schematic diagram of two binary images.
Detailed Description
The present invention will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the invention are shown.
The carrier, i.e. the mobile platform, herein may be a mobile device such as a drone, an automobile, an aircraft, etc.
The visual positioning system based on cloud layer observation in cloudy weather consists of ground visual observation equipment, carrier visual navigation equipment and a data link. The ground visual observation equipment is responsible for observing the motion of the cloud layer to obtain the motion data of the cloud layer. The carrier visual navigation equipment is responsible for observing the cloud layer, receiving cloud layer motion data sent by the ground visual observation equipment and calculating motion information of the carrier relative to the cloud layer. The ground visual observation device and the carrier visual navigation device are interconnected through a data link, and the motion information of the cloud layer is interacted, as shown in fig. 1.
A ground vision observation device comprising: a ground image acquisition module (such as a camera, the camera is positioned at the ground end) and is used for acquiring cloud layer images from the ground in real time; the ground image processing module is used for receiving the cloud layer image acquired by the ground image acquisition module, and carrying out image difference and binarization processing on the cloud layer image acquired at the current sampling moment and the cloud layer image acquired at the previous sampling moment; the ground image matching module is used for matching or comparing the binarized image at the last sampling moment with the binarized image at the current sampling moment to obtain the pixel displacement of the central point of the matched graph in the binarized image; and the ground cloud layer movement speed calculation module is used for calculating the cloud layer movement speed according to the pixel displacement of the central point of the matched graph obtained by the ground image matching module.
The ground visual observation equipment is positioned in a ground observation station (ground end for short), and the ground image processing module, the ground image matching module and the ground cloud layer movement speed calculation module are all integrated in a computer at the ground end. And the computer at the ground end is responsible for processing the observed cloud layer static information, calculating to obtain cloud layer motion data, and sending the cloud layer motion data to the carrier visual navigation equipment through a data link.
A carrier visual navigation device, comprising: a vision sensor (i.e., a skyward camera located at the carrier end) for capturing cloud images from the mobile platform; the visual navigation module comprises an initial visual navigation module and a visual navigation correction module; the initial visual navigation module is used for calculating the movement distance of the visual sensor or the mobile platform relative to the cloud layer according to the two images of the adjacent sampling moments obtained by the visual sensor; and the visual navigation correction module is used for receiving the cloud layer movement speed information obtained by the ground cloud layer movement speed calculation module, correcting the movement distance of the visual sensor or the mobile platform relative to the cloud layer, which is measured by the initial visual navigation module, and obtaining the actual movement distance of the mobile platform relative to the ground (a navigation coordinate system) and/or the actual position of the mobile platform.
The visual navigation module (including the initial visual navigation module and the visual navigation correction module) is integrated in a processing module (which may be a processing computer or a processor) carried by the carrier.
Because the cloud layer moves, a ground observation station is set up to observe the cloud layer, and the cloud layer movement information is provided to the visual navigation module of the mobile platform for differential processing, so that the accurate position of the mobile platform is obtained.
FIG. 2 shows a schematic flow chart of the visual positioning method based on cloud layer observation in cloudy weather. The specific steps are as follows: the visual equipment of the ground observation station (i.e. the ground visual observation equipment) observes the cloud layer and obtains the movement speed of the cloud layer; the carrier platform (i.e. the mobile platform) measures its movement position relative to the cloud layer by visual navigation; the cloud layer observation information obtained by the visual equipment of the ground observation station is transmitted to the carrier platform, and the visual navigation module corrects the visual navigation output of the carrier (i.e. the calculated plane displacement of the carrier); the actual position of the carrier is then updated.
The motion trend of the cloud layer is approximately described by (V_N, V_E), where V_N represents the north movement speed of the cloud layer and V_E represents the east movement speed of the cloud layer. If the cloud layer moves north, V_N is positive; if the cloud layer moves south, V_N is negative; if the cloud layer moves east, V_E is positive; if the cloud layer moves west, V_E is negative.
The image coordinate system is shown in fig. 3.
After the camera at the ground end acquires the relevant image information, the data are stored in the computer as two-dimensional data. The image coordinate system takes the upper left corner of the image as the origin, defining a rectangular pixel image plane coordinate system x-y in which the position of each pixel is given by its coordinates (x, y). In this method the camera at the ground end is installed so that the x and y axes correspond to the north (N) and east (E) directions respectively.
The observation process of cloud layer movement is shown in fig. 4.
In ground visual observation, one ground observation station covers an area of roughly 5 km by 5 km, and multiple observation stations can be deployed in a grid.
At the current sampling moment, each time a cloud layer image is collected, the ground image processing module performs image differencing against the cloud layer image collected at the previous sampling moment (i.e. the pixel value of each pixel of the image collected at the previous sampling moment is subtracted from the pixel value of the corresponding pixel of the image collected at the current sampling moment, giving a difference image) followed by binarization, giving the binarized image of the current sampling moment. The ground image matching module then compares the binarized image of the current sampling moment with the binarized image of the previous sampling moment, and the ground cloud layer movement speed calculation module calculates the movement speed of the cloud layer.
Before acquiring the image at the current sampling moment, whether a sampling period is met needs to be judged, and the image at the current moment is acquired when the sampling period is met.
The following description will take an example in which a ground observation station has a camera.
The ground image processing module: and carrying out image difference and binarization on the image acquired at the current sampling moment and the image acquired at the previous sampling moment to obtain a binarized image at the current sampling moment. The acquisition mode of the binary image at the last sampling moment is as follows: and carrying out image difference and binarization on the image acquired at the last sampling moment and the image acquired at the last two sampling moments to obtain a binarized image at the last sampling moment, wherein the last two sampling moments are previous sampling moments of the last sampling moment, and the last sampling moment and the last two sampling moments are adjacent sampling moments.
Ground image matching module: the binarized image at the current sampling moment is compared (matched) with the binarized image at the previous sampling moment to obtain matched pattern pairs (a matched pattern pair consists of one pattern from the binarized image at the previous sampling moment and one pattern from the binarized image at the current sampling moment). The patterns present in a binarized image are numbered and their features (the center point and the total number of pixels) are calculated; the patterns in the two binarized images are matched according to certain principles, the center-point pixel displacement of each matched pattern pair is calculated, and the mean of the center-point pixel displacements of all matched pattern pairs in the two binarized images is taken as a representation of the overall movement trend of the cloud layer in the image coordinate system.
The ground cloud layer movement speed calculation module: the movement speed of the cloud layer is solved from the mean center-point pixel displacement of all matched pattern pairs, the camera focal length, the cloud layer height and other information.
The image sampling and differencing period T (in seconds) is calculated according to formula (1), taking into account the ground wind speed v_w at the time (in m/s); the constant appearing in formula (1) has an empirical value of 10, and the sampling period is equal to the differencing period T.
The principle of binarization is given by formula (2), in which g is the gray value of a pixel in the difference image and g' is the gray value after processing.
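The image differencing and binarization step can be sketched as follows. The binarization threshold and the assumed relation between the period and the ground wind speed are illustrative assumptions only, since formulas (1) and (2) are published only as images in the original document; cv2.absdiff is used here instead of a signed subtraction for simplicity.

```python
# Hedged sketch of the ground image processing module: frame differencing + binarization.
# The threshold value (30) and the period relation T = k / v_wind (k = 10) are assumptions.
import cv2
import numpy as np

def sampling_period(ground_wind_speed_ms: float, k: float = 10.0) -> float:
    """Assumed form of formula (1): faster wind gives a shorter sampling/differencing period."""
    return k / max(ground_wind_speed_ms, 0.1)   # seconds

def difference_and_binarize(prev_gray: np.ndarray, curr_gray: np.ndarray,
                            threshold: int = 30) -> np.ndarray:
    """Assumed form of formula (2): pixels whose gray change exceeds the threshold become 255."""
    diff = cv2.absdiff(curr_gray, prev_gray)            # per-pixel absolute difference
    _, binary = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return binary
```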
The principle of selecting the characteristic pattern in the binarized image is as follows. In the binarized image at the previous sampling moment, a region which has a gray value of 255 and can be closed (for example, a first closed-loop region) is selected, and its outer boundary is taken as an independent pattern, also called the region boundary, as shown by the irregular circle on the left of FIG. 5. The center point P1 of the region boundary and the number of pixels S1 inside the region boundary are then calculated; the region boundary refers to the outer boundary of a closable region with gray value 255, for example the outer boundary of the first closed-loop region. The position coordinate of P1 is (x1, y1), where x1 is the x-axis value of the center point P1 and y1 is its y-axis value.
The center point of the region boundary is selected as the geometric center, i.e. the maximum and minimum coordinate values of the region boundary on the horizontal axis and the vertical axis are obtained, and their median values are taken respectively as the position coordinates of the center point: the maximum and minimum coordinates of the region boundary on the horizontal axis are obtained and their mean is taken as the horizontal coordinate of the center point, and the maximum and minimum coordinates of the region boundary on the vertical axis are obtained and their mean is taken as the vertical coordinate of the center point. The horizontal axis is the x axis and the vertical axis is the y axis.
The number of pixels S1 inside the region boundary can be counted by the computer program traversing the interior of the region boundary. This number is the total number of pixels inside the region boundary, i.e. everything in the closable region with gray value 255 except its outer boundary.
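The geometric-center and pixel-count computation described above can be sketched as follows; cv2.findContours is used here merely as a convenient way of obtaining the outer boundary of a closed region with gray value 255, which the patent itself does not prescribe.

```python
# Sketch of extracting closed 255-valued regions and computing, for each one, the geometric
# center (median of the min/max coordinates on each axis) and the interior pixel count.
import cv2
import numpy as np

def region_features(binary: np.ndarray, min_area: int = 20):
    """Return a list of (center_xy, pixel_count) for each closed region of value 255."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    features = []
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)                        # min/max coordinates of the boundary
        center = ((x + x + w - 1) / 2.0, (y + y + h - 1) / 2.0)   # median of min and max on each axis
        mask = np.zeros(binary.shape, dtype=np.uint8)
        cv2.drawContours(mask, [cnt], -1, 255, thickness=-1)      # fill the whole region
        cv2.drawContours(mask, [cnt], -1, 0, thickness=1)         # remove the outer boundary itself
        pixel_count = int(np.count_nonzero(mask))                 # interior pixels only
        if pixel_count >= min_area:
            features.append((center, pixel_count))
    return features
```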
The two binarized images (i.e. the binarized image at the previous sampling moment, called the earlier image for short, and the binarized image at the current sampling moment, called the later image for short) are compared, and the closed-loop region in the later image corresponding to the first closed-loop region in the earlier image is found (for convenience of description this is called the second closed-loop region; it likewise must be a closable region with gray value 255). The outer boundary of the second closed-loop region in the later image is taken as an independent pattern, as shown by the irregular circle on the right of FIG. 5. The center point of the outer boundary of the second closed-loop region is calculated and denoted P2, and the number of pixels inside that outer boundary is denoted S2; the position coordinate of P2 is (x2, y2), where x2 is the x-axis value of the center point P2 and y2 is its y-axis value.
The center point of the outer boundary of the second closed-loop region and the pixels inside its boundary are calculated in the same way as for the first closed-loop region.
The center point of the outer boundary of the second closed-loop region is likewise the geometric center: the maximum and minimum coordinate values of the outer boundary on the horizontal axis and the vertical axis are obtained and their median values are taken respectively as the position coordinates of the center point. The number of pixels inside the outer boundary of the second closed-loop region is counted by the program traversing the interior of the outer boundary, i.e. everything in the second closed-loop region except its outer boundary.
The coordinate positions of the center points P1 and P2 of the patterns formed by the outer boundaries of the first closed-loop region and the second closed-loop region are compared. When formulas (8) and (9) are satisfied, the pixel displacements (Δx, Δy) of the two center points in the x and y directions are obtained according to formulas (3) and (4), where Δx is the x-axis pixel displacement of the center points of the matched pattern pair in the two binarized images and Δy is the corresponding y-axis pixel displacement.
There is generally more than one region with gray value 255 that can be closed in a binarized image. All closable regions with gray value 255 in the binarized image at the previous sampling moment are selected and matched or compared with the corresponding regions in the binarized image at the current sampling moment, forming several matched pattern pairs; for example, the first closed-loop region and the second closed-loop region form the first matched pattern pair, and so on. The two center points and the pixel counts of each matched pattern pair are obtained, and for each matched pattern pair the pixel displacement in the x and y directions is calculated (in the same way as (Δx, Δy)). All the x-direction center-point pixel displacements are then averaged and recorded as Δx_mean, and all the y-direction center-point pixel displacements are averaged and recorded as Δy_mean.
The mean center-point pixel displacement of all matched pattern pairs in the two binarized images is denoted (Δx_mean, Δy_mean); this mean is the average number of pixels by which the center points move.
Based on the convention of this method, the x and y axes of the image coordinate system correspond to the north direction (N) and the east direction (E) respectively, and the movement speed of the cloud layer is calculated according to formulas (5)-(7).
In these formulas, V_N denotes the north movement speed of the cloud layer at the current sampling moment, V_E denotes the east movement speed of the cloud layer at the current sampling moment, and V denotes the movement speed of the cloud layer at the current sampling moment (also called the observation speed); if there is no observation speed in the initial state, an empirical value of 10-30 m/s is selected. h is the cloud layer height, f is the camera focal length, μ is the single pixel size of the camera imaging plane, and T is the image sampling and differencing period; Δx_mean is the x-axis pixel displacement mean of the center points of all matched pattern pairs in the two binarized images, and Δy_mean is the corresponding y-axis pixel displacement mean. In this paragraph the camera refers to the camera used by the ground observation station to collect cloud layer images.
The cloud layer height can be obtained by ground binocular vision or by laser height measurement (the ground binocular vision method and the laser ranging method are existing general techniques and are not described further); f and μ are the camera's own parameters. When the formulas are evaluated, all length units are unified to meters (m).
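A plausible form of formulas (5)-(7), consistent with the quantities listed above (mean pixel displacement, pixel size, cloud layer height, focal length and period), is the standard pinhole back-projection sketched below; the exact expressions are published only as images, so the formula shapes shown here are assumptions.

```python
# Hedged reconstruction of formulas (5)-(7): back-project the mean pixel displacement of the
# matched regions onto the cloud plane and divide by the sampling/differencing period.
import math

def cloud_velocity(mean_dx_px: float, mean_dy_px: float,
                   cloud_height_m: float, focal_length_m: float,
                   pixel_size_m: float, period_s: float):
    """Assumed form: ground displacement = pixel displacement * pixel size * height / focal length."""
    scale = pixel_size_m * cloud_height_m / focal_length_m   # metres per pixel at the cloud plane
    v_north = mean_dx_px * scale / period_s                   # image x axis maps to north, cf. (5)
    v_east = mean_dy_px * scale / period_s                    # image y axis maps to east, cf. (6)
    speed = math.hypot(v_north, v_east)                       # magnitude, cf. (7)
    return v_north, v_east, speed
```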
The comparison principle for the center points and the pixel counts (i.e. the matching principle for the first closed-loop region and the second closed-loop region) is given by formulas (8) and (9): the change in the number of pixels between the two regions must not exceed the pixel-count change threshold ε_S, and the distance between the center points must not exceed the center-point distance threshold ε_d. ε_S is 10% of S1.
ε_d can be calculated from the motion trend of the cloud layer; since the motion trend of the cloud layer changes, ε_d is constantly updated. It is solved by formula (10), in which V_prev denotes the movement speed of the cloud layer at the last sampling moment, calculated by the ground cloud layer movement speed calculation module (if no cloud layer movement speed has been calculated in the initial state, an empirical value of 10-30 m/s is selected); h is the cloud layer height, f is the camera focal length, μ is the single pixel size of the camera imaging plane, and T is the image sampling and differencing period.
Only when the center point and the number of pixels of a matched pattern pair satisfy formulas (8) and (9) simultaneously are the pixel displacements of that pair in the x and y directions valid and usable for calculating the center-point pixel displacement mean.
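The pairing test of formulas (8)-(10) can be sketched as follows; the 10% pixel-count tolerance is taken from the text above, while the exact form of the distance threshold ε_d in formula (10) is assumed here to be the pixel displacement expected from the previously estimated cloud speed.

```python
# Sketch of the region matching test: formulas (8)-(9) as threshold checks, plus an assumed
# form of formula (10) deriving the center-distance threshold from the previous cloud speed.
import math

def distance_threshold_px(prev_speed_ms: float, cloud_height_m: float,
                          focal_length_m: float, pixel_size_m: float, period_s: float) -> float:
    """Assumed form of (10): expected center displacement, in pixels, over one period."""
    return prev_speed_ms * period_s * focal_length_m / (cloud_height_m * pixel_size_m)

def regions_match(center1, count1, center2, count2, eps_d_px: float) -> bool:
    """(8): pixel-count change within 10% of count1; (9): center distance within eps_d_px."""
    eps_s = 0.10 * count1
    close_in_size = abs(count2 - count1) <= eps_s
    close_in_space = math.dist(center1, center2) <= eps_d_px
    return close_in_size and close_in_space
```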
When ground visual observation is carried out, if no cloud layer movement speed has been calculated in the initial state, 10-30 m/s is selected as the initial cloud layer movement speed and sent to the visual navigation module of the mobile platform in real time.
At moment 1 (the first sampling at the ground end), a cloud layer image is obtained.
At moment 2 (the second sampling at the ground end), a cloud layer image is obtained, and image differencing and binarization are performed on the image collected at moment 2 and the image collected at moment 1 to obtain the binarized image at moment 2.
At moment 3 (the third sampling at the ground end), a cloud layer image is obtained; image differencing and binarization are performed on the image collected at moment 3 and the image collected at moment 2 to obtain the binarized image at moment 3; pattern matching is performed between the binarized image at moment 3 and the binarized image at moment 2 to obtain the center point and the number of pixels of each matched pattern pair; each matched pattern pair is checked against formulas (8)-(9); the mean center-point moving pixels of all matched pattern pairs satisfying formulas (8)-(9) are calculated and substituted into formulas (5)-(7) to obtain the movement speed of the cloud layer at moment 3, which is sent to the visual navigation module of the mobile platform in real time. In this process ε_d is calculated by formula (10); since the movement speed of the cloud layer at moment 2 has not yet been obtained by calculation, V_prev in formula (10) is the set initial cloud layer movement speed.
At moment 4 (the fourth sampling at the ground end), a cloud layer image is obtained; image differencing and binarization are performed on the image collected at moment 4 and the image collected at moment 3 to obtain the binarized image at moment 4; pattern matching is performed between the binarized image at moment 4 and the binarized image at moment 3 to obtain the center point and the number of pixels of each matched pattern pair; each matched pattern pair is checked against formulas (8)-(9); the mean center-point moving pixels of all matched pattern pairs satisfying formulas (8)-(9) are calculated and substituted into formulas (5)-(7) to obtain the movement speed of the cloud layer at moment 4, which is sent to the visual navigation module of the mobile platform in real time. In this process ε_d is calculated by formula (10), and V_prev in formula (10) is the movement speed of the cloud layer at moment 3.
At moment 5 (the fifth sampling at the ground end), a cloud layer image is obtained; image differencing and binarization are performed on the image collected at moment 5 and the image collected at moment 4 to obtain the binarized image at moment 5; pattern matching is performed between the binarized image at moment 5 and the binarized image at moment 4 to obtain the center point and the number of pixels of each matched pattern pair; each matched pattern pair is checked against formulas (8)-(9); the mean center-point moving pixels of all matched pattern pairs satisfying formulas (8)-(9) are calculated and substituted into formulas (5)-(7) to obtain the movement speed of the cloud layer at moment 5, which is sent to the visual navigation module of the mobile platform in real time. In this process ε_d is calculated by formula (10), and V_prev in formula (10) is the movement speed of the cloud layer at moment 4.
The subsequent moments follow in the same way.
In the moving process of the mobile platform, the cloud layer is observed by means of a camera carried by the mobile platform in the direction of the sky, two images (namely cloud layer observation images) at adjacent moments are obtained, and the initial visual navigation module can be used for calculating the plane displacement of the camera relative to the cloud layer, namely the plane displacement of the mobile platform relative to the cloud layer. The skyward camera is also called a vision sensor. The camera is fixedly connected with the mobile platform, and the position of the camera can be regarded as the position of the mobile platform. The specific process is as follows:
A navigation coordinate system n is defined as (east, north, up); the carrier camera coordinate system at the previous moment is c1 and the carrier camera coordinate system at the later moment is c2.
At two adjacent moments (the time interval between the two adjacent moments is T_s, i.e. the sampling period on the mobile platform; the two adjacent moments are the previous moment and the later moment respectively), the SIFT/SURF/ORB method (any one of them) is used to extract the feature points in the two images and match them. A pair of matched feature points is denoted (p1, p2) and given by formulas (11) and (12). With reference to FIG. 3, p1 and p2 are the pixel coordinates of the matched feature point in the images at the previous moment and the later moment respectively; x_a and x_b are the x-axis values of the matched feature point in the images at the previous and later moments respectively, and y_a and y_b are the corresponding y-axis values.
the sampling period in the mobile platform may be different or the same as the sampling period in the ground vision observation. The mobile platform and the ground visual observation equipment can independently complete sampling, and the sampling can be completed at the same time or at different times, so that the sampling time of the mobile platform can be the same as or different from that of the ground visual observation equipment.
Because the distance between the cloud layer and the mobile platform is far greater than the camera focal length, the cloud layer feature points are considered to lie in the same plane, so the matched feature points are assumed to satisfy the constraint of the homography matrix H (formula (13)).
The homography matrix is described by formula (14), in which K is the camera intrinsic matrix, n_c is the unit normal vector of the cloud layer plane in the carrier camera coordinate system at the previous moment (n_c is fixed at a given moment and changes as the pose of the camera changes), n_c^T is its transpose, and d is the distance from the cloud layer plane to the camera. The height difference between the ground observation station and the carrier can be obtained from their barometric altitude difference; combined with the cloud layer height obtained by the ground observation station, the distance from the cloud layer plane to the carrier (i.e. the distance from the cloud layer plane to the camera) can then be obtained. R and t are respectively the rotation matrix and the translation vector between the carrier camera coordinate systems at the two adjacent moments, which are to be solved. In this paragraph the camera refers to the skyward camera of the mobile platform.
A system of equations is formed from multiple groups of matched feature points to obtain H, from which R and t are recovered. Based on the rotation relation C1 of the carrier camera coordinate system relative to the navigation coordinate system at the previous moment, the rotation relation C2 of the carrier camera coordinate system relative to the navigation coordinate system at the later moment can be obtained, and the real-time displacement t_n of the camera or the mobile platform in the navigation coordinate system can be obtained (formulas (15) and (16)).
The composition of the displacement t_n is given by formula (17), where ΔN denotes the north displacement of the camera relative to the cloud layer (i.e. the north displacement of the mobile platform relative to the cloud layer), ΔE denotes the east displacement of the camera relative to the cloud layer (i.e. the east displacement of the mobile platform relative to the cloud layer), and ΔU denotes the displacement of the camera relative to the cloud layer in the up (sky) direction (i.e. the up displacement of the mobile platform relative to the cloud layer).
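A sketch of recovering R and t from the matched feature points and converting them into a displacement in the navigation coordinate system is given below. cv2.decomposeHomographyMat is used as a stand-in for forming the equation system and recovering R and t; it returns up to four candidate solutions, from which one is selected here by comparing the plane normal with the expected skyward direction, and the recovered translation is assumed to be scaled by the inverse plane distance, so it is rescaled by d. The attitude C1 of the camera at the previous moment is assumed to come from the platform's attitude source, and the frame conventions standing in for formulas (15) and (16) are assumptions.

```python
# Hedged sketch of the initial visual navigation step: homography from matched points,
# decomposition into candidate (R, t, n) sets, selection of one candidate, and rotation of
# the rescaled translation into the navigation frame using the previous attitude C1.
import cv2
import numpy as np

def relative_displacement_nav(pts_prev, pts_curr, K, C1, plane_distance_m, expected_normal=None):
    """Return the camera displacement in the navigation frame between the two adjacent moments."""
    H, _ = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)    # constraint of (13)/(14)
    _, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)             # candidate (R, t, n) sets
    if expected_normal is None:
        expected_normal = np.array([0.0, 0.0, 1.0])                   # cloud plane roughly facing the camera
    # Pick the candidate whose plane normal is closest to the expected skyward normal.
    best = max(range(len(normals)),
               key=lambda i: float(np.dot(normals[i].ravel(), expected_normal)))
    R, t = Rs[best], ts[best]
    t_metric = t.ravel() * plane_distance_m     # assumed: decomposition yields t scaled by 1/d
    t_nav = C1 @ t_metric                       # rotate into the navigation frame, cf. (16)
    C2 = C1 @ R.T                               # attitude at the later moment, cf. (15) (assumed convention)
    return t_nav, C2
```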
The cloud layer speed information measured by the ground visual observation equipment is transmitted to the mobile platform through the communication link (i.e. the data link). The visual navigation correction module of the mobile platform performs differential processing on the plane movement position of the platform relative to the cloud layer, solved and output by the platform's own initial visual navigation module, and the cloud layer speed information: the initial visual navigation module calculates the motion of the mobile platform relative to the cloud layer, the ground visual observation equipment calculates the motion of the cloud layer itself, and the cloud layer's own displacement is then subtracted from the plane displacement of the mobile platform's camera relative to the cloud layer. This eliminates the influence of the cloud layer motion and yields the real displacement of the mobile platform relative to the ground, namely the real east displacement ΔE_real and north displacement ΔN_real of the mobile platform (formula (18)).
The actual position of the mobile platform in the navigation coordinate system is then obtained by accumulation (formula (19)), where Pos_k denotes the actual position of the mobile platform in the navigation coordinate system at the later moment, Pos_(k-1) denotes the actual position of the mobile platform in the navigation coordinate system at the previous moment, and T_s is the time interval between the two adjacent moments.
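A plausible reading of formulas (18) and (19), consistent with the surrounding description (subtract the cloud layer's own displacement over the interval, then accumulate onto the previous position), is sketched below; the exact published expressions are images, so the forms shown are assumptions.

```python
# Hedged sketch of the visual navigation correction module, standing in for formulas (18)-(19).
def correct_and_accumulate(pos_prev_e: float, pos_prev_n: float,
                           d_east_rel: float, d_north_rel: float,
                           v_cloud_east: float, v_cloud_north: float, dt_s: float):
    """Remove the cloud layer's own motion over dt_s, then accumulate onto the previous position."""
    d_east_real = d_east_rel - v_cloud_east * dt_s      # east component of (18), assumed form
    d_north_real = d_north_rel - v_cloud_north * dt_s   # north component of (18), assumed form
    pos_e = pos_prev_e + d_east_real                     # accumulation of (19), assumed form
    pos_n = pos_prev_n + d_north_real
    return pos_e, pos_n
```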
The initial position of the mobile platform is certain (pre-settable). And in the carrier visual navigation, receiving real-time cloud layer movement speed information sent by the ground visual observation equipment.
In the carrier visual navigation, at the first moment (the first sampling of a carrier end), a carrier end camera shoots a cloud layer image; in the process, the displacement of the camera or the mobile platform under the navigation coordinate system is not solved, so that the actual position of the mobile platform under the navigation coordinate system at the first moment is still the initial position of the mobile platform.
At the second moment (the second sampling at the carrier end), the carrier-end camera captures the cloud layer image again, and the displacement of the camera or mobile platform in the navigation coordinate system at the second moment is obtained by the method above (in fact, the real-time displacement of the camera or mobile platform from the first moment to the second moment). Then, using the latest received cloud layer movement speed information, the real east and north displacements of the mobile platform (from the first moment to the second moment) and the actual position of the mobile platform in the navigation coordinate system at the second moment are calculated through formulas (18) and (19); in this step, the actual position of the mobile platform in the navigation coordinate system at the previous moment is its actual position at the first moment.

At the third moment (the third sampling at the carrier end), the carrier-end camera captures the cloud layer image again, and the displacement of the camera or mobile platform in the navigation coordinate system at the third moment is obtained by the same method (the real-time displacement from the second moment to the third moment). Then, using the latest received cloud layer movement speed information, the real east and north displacements of the mobile platform (from the second moment to the third moment) and the actual position of the mobile platform in the navigation coordinate system at the third moment are calculated through formulas (18) and (19); in this step, the actual position at the previous moment is the actual position at the second moment.

At the fourth moment (the fourth sampling at the carrier end), the carrier-end camera captures the cloud layer image again, and the displacement of the camera or mobile platform in the navigation coordinate system at the fourth moment is obtained by the same method (the real-time displacement from the third moment to the fourth moment). Then, using the latest received cloud layer movement speed information, the real east and north displacements of the mobile platform (from the third moment to the fourth moment) and the actual position of the mobile platform in the navigation coordinate system at the fourth moment are calculated through formulas (18) and (19); in this step, the actual position at the previous moment is the actual position at the third moment.
This process is then repeated for each subsequent moment, as illustrated by the sketch below.
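For illustration only (this sketch is not part of the patent text), the per-moment correction and accumulation described above can be written as follows in Python; the sample displacement values, the sampling interval T and the function name are assumptions made purely for the example:

def correct_and_accumulate(position, t_E, t_N, v_E, v_N, T):
    """Apply formulas (18)-(19): remove the cloud layer's own motion from the
    camera-relative plane displacement, then accumulate the position."""
    dE = t_E - v_E * T                      # real east displacement, formula (18)
    dN = t_N - v_N * T                      # real north displacement, formula (18)
    return dE, dN, (position[0] + dE, position[1] + dN)   # formula (19)

position = (0.0, 0.0)     # preset initial position (east, north), metres
T = 1.0                   # time interval between adjacent carrier-end moments, s (assumed)
samples = [               # hypothetical (t_E, t_N, v_E, v_N) for moments 2, 3 and 4
    (3.2, 1.1, 0.8, 0.2),
    (3.0, 1.3, 0.8, 0.2),
    (2.9, 1.2, 0.7, 0.3),
]
for t_E, t_N, v_E, v_N in samples:
    dE, dN, position = correct_and_accumulate(position, t_E, t_N, v_E, v_N, T)
    print(dE, dN, position)
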
The ground visual observation starts earlier than the carrier visual navigation. Therefore, when the real east and north displacements of the mobile platform from the first moment to the second moment are calculated in the carrier visual navigation, the latest cloud layer movement speed information received by the visual navigation module may correspond to ground-end moment 3, ground-end moment 4, ground-end moment 5, or a later ground-end moment.
After each sampling, the ground visual observation equipment calculates the updated cloud layer movement speed and immediately sends the updated information to the mobile platform in real time. When calculating the real east and north displacements, the visual navigation module uses the most recently received cloud layer movement speed.
By default, when the visual navigation module calculates the real east and north displacements, the latest received cloud layer movement speeds are $v_N$ (the north moving speed of the cloud layer at the current sampling moment at the ground end) and $v_E$ (the east moving speed of the cloud layer at the current sampling moment at the ground end), where the current sampling moment in the ground visual observation refers to the sampling moment of the ground-end terminal corresponding to the latest cloud layer movement speed information received by the visual navigation module.
The above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the spirit of the corresponding technical solutions of the embodiments of the present invention. The above embodiments are only preferred embodiments of the present invention, and any modifications and changes made according to the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A visual positioning system based on cloud layer observation under cloudy weather is characterized by comprising:
the ground image acquisition module is used for acquiring cloud layer images from the ground in real time;
the ground image processing module is used for carrying out image difference and binarization processing on the cloud layer image acquired at the current sampling moment and the cloud layer image acquired at the last sampling moment;
the ground image matching module is used for matching or comparing the binarized image at the last sampling moment with the binarized image at the current sampling moment to obtain the pixel displacement of the central point;
the ground cloud layer movement speed calculation module is used for calculating the cloud layer movement speed according to the central point pixel displacement obtained by the ground image matching module;
the vision sensor is used for acquiring cloud layer images from the mobile platform;
the visual navigation module comprises an initial visual navigation module and a visual navigation correction module;
the initial visual navigation module is used for calculating the plane displacement of the visual sensor or the mobile platform relative to the cloud layer according to the two images at the adjacent moments obtained by the visual sensor;
and the visual navigation correction module is used for receiving the cloud layer movement speed information obtained by the ground cloud layer movement speed calculation module, correcting the plane displacement of the visual sensor or the mobile platform, which is measured by the initial visual navigation module, relative to the cloud layer to obtain the real displacement of the mobile platform relative to the ground, and obtaining the actual position of the mobile platform under the navigation coordinate system.
2. The visual positioning system based on cloud layer observation in cloudy weather according to claim 1, wherein, in the ground image processing module, the image sampling and differencing period $T_1$ is calculated from the ground wind speed $v_w$ according to formula (1);

the principle of binarization is

$g' = \begin{cases} 255, & g \ge g_{th} \\ 0, & g < g_{th} \end{cases}$  (2)

where $g$ is the gray scale of a pixel in the difference image, $g'$ is the processed gray scale, and $g_{th}$ is the binarization threshold.
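Illustrative sketch only, not part of the claims: the differencing and binarization step of claim 2 expressed with OpenCV. The file names and the threshold value 30 are assumptions; the patent specifies the threshold only through formula (2).

import cv2

# Cloud images captured from the ground at the last and current sampling moments
# (hypothetical file names; in the real system they come from the ground camera).
prev_img = cv2.imread("cloud_prev.png", cv2.IMREAD_GRAYSCALE)
curr_img = cv2.imread("cloud_curr.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(curr_img, prev_img)                 # image difference
# Binarization in the spirit of formula (2): gray levels above the threshold
# become 255, all others become 0. The threshold 30 is an assumed value.
_, binary = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
cv2.imwrite("cloud_binary.png", binary)
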
3. The visual positioning system based on cloud observation in cloudy weather according to claim 1, wherein the ground image matching module specifically comprises:
the selection principle of the characteristic pattern in the binarized image is as follows: in the binarized image at the last sampling moment, a region whose gray value is 255 and whose boundary is closed is selected, its outer boundary is taken, and the central point $P_1$ of the region boundary and the number of pixels $S_1$ inside the region boundary are calculated; the position coordinate of $P_1$ is $(x_1, y_1)$;

the central point of the region boundary is selected as the geometric center, namely the maximum and minimum coordinate values of the region boundary on the horizontal and vertical axes are obtained, and their respective median values are taken as the position coordinates of the central point;

the former and latter binarized images are compared, the closed region in the latter image corresponding to the closed region in the former image is found, and the outer boundary of that corresponding closed region in the latter image is taken as an independent pattern; the central point of the outer boundary of the closed region in the latter image is denoted $P_2$, the number of pixels inside that outer boundary is denoted $S_2$, and the position coordinate of $P_2$ is $(x_2, y_2)$;

the central points $P_1$ and $P_2$ are compared; the $x$ and $y$ axes of the image coordinate system correspond to the north direction N and the east direction E, respectively, and the pixel displacements $\Delta x$ and $\Delta y$ of the two central points in the $x$ and $y$ directions are

$\Delta x = x_2 - x_1$  (3)

$\Delta y = y_2 - y_1$  (4)

the mean values of the central-point pixel displacements of all matched pattern pairs in the two binarized images are expressed as $\overline{\Delta x}$ and $\overline{\Delta y}$.
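Illustrative sketch only, not part of the claims: extracting the outer boundary, geometric centre and pixel count of a 255-region with OpenCV and computing the centre displacement of formulas (3)-(4). The two synthetic binary images and the assumption that the first region in each image forms a matched pair are made up for the example (pair selection in the patent follows claim 5).

import cv2
import numpy as np

def region_features(binary_img):
    """Outer boundaries of closed 255-regions with geometric centre and pixel count."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    feats = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        centre = (x + (w - 1) / 2.0, y + (h - 1) / 2.0)    # median of min/max coordinates
        mask = np.zeros(binary_img.shape, np.uint8)
        cv2.drawContours(mask, [c], -1, 255, -1)           # fill the region
        feats.append((centre, int(np.count_nonzero(mask))))  # (central point, pixel count)
    return feats

# Binarized images at the last and current sampling moments (synthetic examples).
prev_bin = np.zeros((100, 100), np.uint8); prev_bin[20:40, 20:40] = 255
curr_bin = np.zeros((100, 100), np.uint8); curr_bin[25:45, 30:50] = 255
(p1, s1), (p2, s2) = region_features(prev_bin)[0], region_features(curr_bin)[0]
dx, dy = p2[0] - p1[0], p2[1] - p1[1]     # formulas (3) and (4)
print(dx, dy, s1, s2)
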
4. The visual positioning system based on cloud layer observation in cloudy weather according to claim 1, wherein the ground cloud layer movement speed calculation module specifically comprises:

the $x$ and $y$ axes of the image coordinate system correspond to the north direction N and the east direction E, respectively, and the movement speed of the cloud layer at the current sampling moment is calculated as

$v_N = \dfrac{H\, d\, \overline{\Delta x}}{f\, T_1}$  (5)

$v_E = \dfrac{H\, d\, \overline{\Delta y}}{f\, T_1}$  (6)

$v = \sqrt{v_N^2 + v_E^2}$  (7)

where $v_N$ denotes the north moving speed of the cloud layer at the current sampling moment, $v_E$ denotes the east moving speed of the cloud layer at the current sampling moment, and $v$ denotes the movement speed of the cloud layer at the current sampling moment; $H$ is the height of the cloud layer, $f$ is the focal length of the camera, $d$ is the single pixel size of the camera imaging plane, and $T_1$ is the image sampling and differencing period; $\overline{\Delta x}$ is the $x$-axis pixel displacement mean of the central points of all matched pattern pairs in the two binarized images, and $\overline{\Delta y}$ is the corresponding $y$-axis pixel displacement mean.
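Illustrative numeric sketch only, not part of the claims: formulas (5)-(7) evaluated for made-up values; the cloud height, focal length, pixel size, period and pixel-displacement means below are assumptions, not data from the patent.

import math

H = 1500.0        # cloud layer height above the ground camera, m (assumed)
f = 0.016         # camera focal length, m (assumed)
d = 5.5e-6        # single pixel size of the imaging plane, m (assumed)
T1 = 2.0          # image sampling and differencing period, s (assumed)
dx_mean = 12.0    # mean x-axis (north) pixel displacement of matched centres
dy_mean = 4.0     # mean y-axis (east) pixel displacement of matched centres

v_N = H * d * dx_mean / (f * T1)      # formula (5)
v_E = H * d * dy_mean / (f * T1)      # formula (6)
v = math.hypot(v_N, v_E)              # formula (7)
print(v_N, v_E, v)                    # approx. 3.09, 1.03, 3.26 m/s
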
5. The visual positioning system based on cloud layer observation in cloudy weather as claimed in claim 3, wherein the comparison principle for the central points and the numbers of pixels is as follows:

$|S_2 - S_1| \le \varepsilon_S$  (8)

$\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} \le \varepsilon_d$  (9)

where $\varepsilon_S$ denotes the threshold for the change in the number of pixels and $\varepsilon_d$ denotes the distance threshold of the central point;
$\varepsilon_d = \dfrac{v'\, f\, T_1}{H\, d}$  (10)

where $v'$ denotes the movement speed of the cloud layer at the last sampling moment, $H$ is the height of the cloud layer, $f$ is the focal length of the camera, $d$ is the single pixel size of the camera imaging plane, and $T_1$ is the image sampling and differencing period.
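Illustrative sketch only, not part of the claims: a matching predicate in the spirit of formulas (8)-(10); the function name and every numeric value are assumptions.

import math

def regions_match(p1, s1, p2, s2, eps_s, eps_d):
    """True if the two regions satisfy the pixel-count and distance thresholds
    of formulas (8) and (9)."""
    close_in_size = abs(s2 - s1) <= eps_s        # formula (8)
    close_in_space = math.dist(p1, p2) <= eps_d  # formula (9)
    return close_in_size and close_in_space

# eps_d can be tied to the expected pixel motion of the cloud layer over one
# period, in the spirit of formula (10); v_last, H, f, d, T1 are assumed values.
v_last, H, f, d, T1 = 3.0, 1500.0, 0.016, 5.5e-6, 2.0
eps_d = v_last * T1 * f / (H * d)                # expected pixel displacement per period
print(regions_match((30.0, 29.5), 400, (35.0, 31.0), 415, eps_s=50, eps_d=eps_d))
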
6. The visual positioning system based on cloud observation in cloudy weather according to claim 1, wherein the initial visual navigation module specifically comprises:
defining a navigation coordinate system $n$, the carrier camera coordinate system at the previous moment as $c_1$, and the carrier camera coordinate system at the later moment as $c_2$;

selecting two images at two adjacent moments, extracting feature points in the two images and matching them;

a pair of matched feature points is represented as $p_1$ and $p_2$:

$p_1 = [u_1,\ v_1]^T$  (11)

$p_2 = [u_2,\ v_2]^T$  (12)

where $p_1$ and $p_2$ are the pixel coordinates of the matched feature point in the image at the previous moment and in the image at the later moment, respectively, $u_1$ and $u_2$ are the image $x$-axis values of the matched feature point in the two images, and $v_1$ and $v_2$ are the corresponding image $y$-axis values;

the cloud layer feature points are assumed to lie on the same plane, so the matched feature points satisfy the homography matrix $\mathbf{H}$ constraint (in homogeneous pixel coordinates)

$p_2 \simeq \mathbf{H}\, p_1$  (13)

the homography matrix is described as

$\mathbf{H} = K \left( R + \dfrac{t\, n_1^T}{h} \right) K^{-1}$  (14)

where $K$ is the camera intrinsic matrix, $n_1$ is the unit normal vector of the cloud layer plane in the carrier camera coordinate system at the previous moment, $h$ is the distance from the cloud layer plane to the camera, and $R$ and $t$ are the rotation matrix and translation vector between the carrier camera coordinate systems at the two adjacent moments, which are to be solved;

an equation set is formed from multiple groups of matched feature points to obtain $\mathbf{H}$, thereby recovering $R$ and $t$.
Based on the rotation relation $R^{n}_{c_1}$ of the carrier camera coordinate system relative to the navigation coordinate system at the previous moment, the rotation relation $R^{n}_{c_2}$ of the carrier camera coordinate system relative to the navigation coordinate system at the later moment is obtained, and the real-time displacement $t^{n}$ of the camera or mobile platform in the navigation coordinate system is then obtained:

$R^{n}_{c_2} = R^{n}_{c_1}\, R^{T}$  (15)

$t^{n} = -R^{n}_{c_2}\, t$  (16)

The displacement $t^{n}$ comprises the following components:

$t^{n} = [t_N,\ t_E,\ t_U]^{T}$  (17)

where $t_N$ represents the north displacement of the camera relative to the cloud layer, $t_E$ is the east displacement of the camera relative to the cloud layer, and $t_U$ is the displacement of the camera relative to the cloud layer in the sky direction.
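Illustrative sketch only, not part of the claims: one way to realize the homography step of claim 6 with OpenCV. The intrinsic matrix, the image file names and the choice of ORB features are assumptions (the claim only requires that feature points be extracted and matched); decomposeHomographyMat returns several candidate solutions, from which the physically consistent one would be selected.

import cv2
import numpy as np

# Hypothetical camera intrinsics and two carrier-end cloud images at adjacent moments.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
img1 = cv2.imread("carrier_cloud_prev.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("carrier_cloud_curr.png", cv2.IMREAD_GRAYSCALE)

# Extract and match feature points (formulas (11)-(12)); ORB is an assumed choice.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Planar cloud assumption: fit the homography of formulas (13)-(14) and decompose it.
H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
print(n_solutions)   # candidate (R, t, n) sets; t is recovered up to the plane distance
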
7. The visual positioning system based on cloud layer observation in cloudy weather according to claim 1, wherein the visual navigation correction module specifically comprises:

the visual navigation correction module performs differential processing by combining the plane displacement of the mobile platform relative to the cloud layer obtained by the initial visual navigation module with the cloud layer movement speed information obtained by the ground cloud layer movement speed calculation module, to obtain the real east displacement $\Delta E$ and north displacement $\Delta N$ of the mobile platform:

$\Delta E = t_E - v_E T, \qquad \Delta N = t_N - v_N T$  (18)

and the actual position of the mobile platform in the navigation coordinate system is obtained as

$P_k = P_{k-1} + [\Delta E,\ \Delta N]^T$  (19)

where $t_E$ and $t_N$ are the east and north displacements of the camera relative to the cloud layer measured by the initial visual navigation module, $v_E$ and $v_N$ are the east and north moving speeds of the cloud layer obtained by the ground cloud layer movement speed calculation module, $P_k$ denotes the actual position of the mobile platform in the navigation coordinate system at the later moment, $P_{k-1}$ denotes the actual position of the mobile platform in the navigation coordinate system at the previous moment, and $T$ is the time interval between two adjacent moments.
8. The visual positioning system based on cloud layer observation in cloudy weather according to claim 1, wherein the ground image acquisition module, the ground image processing module, the ground image matching module and the ground cloud layer movement speed calculation module form the ground visual observation equipment; the visual sensor and the visual navigation module form the carrier visual navigation equipment; and the ground visual observation equipment and the carrier visual navigation equipment exchange cloud layer movement speed information through a data link.
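Illustrative sketch only, not part of the claims: a toy in-process stand-in for the data-link exchange of claim 8, in which the ground side publishes the newest cloud layer movement speed and the carrier side always reads the latest received value (as described in the embodiment above); the class and method names are assumptions.

import threading

class CloudSpeedLink:
    """Keeps only the most recently published cloud layer movement speed."""
    def __init__(self):
        self._lock = threading.Lock()
        self._latest = None                     # (v_north, v_east) in m/s
    def publish(self, v_north, v_east):         # called by the ground observation side
        with self._lock:
            self._latest = (v_north, v_east)
    def latest(self):                           # called by the visual navigation module
        with self._lock:
            return self._latest

link = CloudSpeedLink()
link.publish(0.2, 0.8)        # hypothetical ground-end update
print(link.latest())          # carrier uses the most recently received speed
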
9. A visual positioning method based on cloud layer observation under cloudy weather is characterized in that the visual positioning system based on cloud layer observation under cloudy weather of any one of claims 1 to 8 is adopted to carry out visual positioning on a mobile platform.
CN202210701077.6A 2022-06-21 2022-06-21 Visual positioning system and method based on cloud layer observation in cloudy weather Active CN114782539B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210701077.6A CN114782539B (en) 2022-06-21 2022-06-21 Visual positioning system and method based on cloud layer observation in cloudy weather

Publications (2)

Publication Number Publication Date
CN114782539A true CN114782539A (en) 2022-07-22
CN114782539B CN114782539B (en) 2022-10-11

Family

ID=82421289

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210701077.6A Active CN114782539B (en) 2022-06-21 2022-06-21 Visual positioning system and method based on cloud layer observation in cloudy weather

Country Status (1)

Country Link
CN (1) CN114782539B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130048707A1 (en) * 2011-08-26 2013-02-28 Qualcomm Incorporated Identifier generation for visual beacon
CN103149939A (en) * 2013-02-26 2013-06-12 北京航空航天大学 Dynamic target tracking and positioning method of unmanned plane based on vision
CN114184200A (en) * 2022-02-14 2022-03-15 南京航空航天大学 Multi-source fusion navigation method combined with dynamic mapping
CN114419109A (en) * 2022-03-29 2022-04-29 中航金城无人***有限公司 Aircraft positioning method based on visual and barometric information fusion
CN114547222A (en) * 2022-02-21 2022-05-27 智道网联科技(北京)有限公司 Semantic map construction method and device and electronic equipment

Also Published As

Publication number Publication date
CN114782539B (en) 2022-10-11

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant