CN111220156B - Navigation method based on city live-action - Google Patents
- Publication number: CN111220156B (application CN201811411955.0A)
- Authority: CN (China)
- Prior art keywords: image, point, points, live, matching
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G01C21/20 — Instruments for performing navigational calculations
- G01C21/343 — Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
- G01C21/3602 — Input other than that of destination using image analysis, e.g. detection of road signs, lanes, buildings, real preceding vehicles using a camera
- G01C21/3623 — Destination input or retrieval using a camera or code reader, e.g. for optical or magnetic codes
- G01S19/14 — Receivers specially adapted for specific applications
- Y02A30/60 — Planning or developing urban green infrastructure
Abstract
A navigation method based on urban live-action relates to the field of live-action navigation and comprises the following steps: data acquisition; data processing; data storage; and data retrieval. Its advantages: live-action navigation data can be captured with an ordinary camera, which is simple and fast and avoids the time-consuming, labor-intensive three-dimensional modelling required by three-dimensional navigation. A user can search for an intended destination in the live-action navigation system and browse it in advance, replacing an on-site visit. The real scene of a place can be seen directly, allowing accurate and rapid positioning with good intuitiveness and accuracy, and opening a brand-new way of reading maps. By combining live action with navigation, the method helps users determine where they are, recognise landmark buildings in advance, choose a travel path, and see the world at eye level, which is closer to human perception habits.
Description
Technical Field
The invention relates to the field of live-action navigation, in particular to a method in which live-action images are acquired in accordance with human visual habits. It improves the visual effect, builds a new visual experience, can cover all attribute information, and meets users' demands for detail and intuitiveness. The live-action map is fully consistent with the scene on site; the navigation method based on urban live-action provides users with a map service carrying more detailed information and more realistic, accurate pictures.
Background
Investigation and statistics show that navigation electronic maps in China are currently mainly two-dimensional. Two-dimensional navigation maps are popular with the general public but have several disadvantages. First, a traditional two-dimensional map is a line drawing to which attribute information must be added, yet the points of interest of a city carry so much attribute information that adding it all is impractical. Second, in terms of human perception, people normally observe the world in side and frontal views, whereas a traditional two-dimensional navigation map is a vertical projection, which does not match human perception habits. Finally, a two-dimensional navigation map carries no terrain information and is, in short, not intuitive enough. As urban road networks grow more complex, these shortcomings become apparent: a driver on a complicated road must pick the correct road ahead while reading the guidance route on the navigation screen, and a wrong choice brings unnecessary trouble to the journey. This has driven the development of three-dimensional navigation maps.
A three-dimensional navigation view is visibly drawn from the real scene. A three-dimensional map requires a three-dimensional live-action model so that terrain and landforms can be displayed clearly and intuitively and real road scenes and driving routes can be simulated in the navigation product, giving the driver an immersive feeling. However, manual modelling is complex and time-consuming, and the large data volume of a three-dimensional map places high demands on data storage and transmission technology.
To solve these problems, many researchers have studied navigation-map technology and its applications. The most prominent result is live-action navigation, a novel navigation mode that associates live-action images with a two-dimensional vector map.
A professional mobile mapping system for panoramic maps typically costs around one million yuan, whereas the sequential image data for live-action navigation can be collected and produced with simple equipment built around a single-lens reflex camera; although the equipment is simple, it can achieve the effect of a professional panoramic camera. With live-action navigation, the real scene of a place can be seen directly, allowing accurate and rapid positioning with good intuitiveness and accuracy, and opening a brand-new way of reading maps. By combining live action with navigation, the method helps users determine where they are, recognise landmark buildings in advance, choose a travel path, and see the world at eye level, which is closer to human perception habits.
Disclosure of Invention
The embodiment of the invention provides a navigation method based on urban live-action, which acquires live-action images in accordance with human visual habits, improves the visual effect, creates a new visual experience, and lets users know outdoor scenery without going out. A live-action image can cover all attribute information, meeting users' demands for detail and intuitiveness. The live-action map is fully consistent with the scene on site, so users receive a map service with more detailed information and more realistic, accurate pictures. While achieving the same effect as a three-dimensional electronic map, the method avoids its problems of large model data size and poor transmission performance.
Meanwhile, live-action navigation data can be captured with an ordinary camera, which is simple and fast and removes the time-consuming, labor-intensive three-dimensional modelling required by three-dimensional navigation. A user can search for an intended destination in the live-action navigation system and browse it in advance, replacing an on-site visit. The real scene of a place can be seen directly, allowing accurate and rapid positioning with good intuitiveness and accuracy, and opening a brand-new way of reading maps. By combining live action with navigation, the method helps users determine where they are, recognise landmark buildings in advance, choose a travel path, and see the world at eye level, which is closer to human perception habits.
The invention provides a navigation method based on urban live-action, which comprises the following steps:
data acquisition: live-action image data are captured by the acquisition equipment, and GNSS data are acquired by the GPS positioning equipment;
data processing: two adjacent images among the collected live-action images are retrieved; feature points of the two live-action images are extracted and initially matched to obtain an initial matching point set; the matching points are screened under the spatial geometric constraints between feature points to obtain a new matching point set, and a transformation matrix is calculated; the acquired second image is perspective-transformed with the obtained transformation matrix; the two images are blended by a weighted average to obtain a wide-view image; the live-action image data and the timestamps in the GNSS coordinate file are read, the shooting time of each image is converted to the GNSS time system with a time registration formula, the images are associated with position coordinates of the same time, and the data are stored;
data storage: the live-action sequence images processed by the system are stored in a database, the associated position points and image data are stored, and their storage paths are updated in the database;
data retrieval: the position-point coordinates of the current GPS position are acquired; the coordinates of the nearest image point in the database are obtained from the GPS position-point coordinates with a distance calculation formula, and the image associated with that image point is retrieved; a translation vector and a rotation matrix are obtained from the current position.
A navigation method based on urban live-action, wherein the data processing comprises the steps of:
image stitching: two adjacent views among the acquired live-action images are retrieved; feature points of the two live-action images are extracted with the SIFT operator, a local feature description algorithm based on scale space, and initially matched to obtain an initial matching point set; the matching points are screened under the spatial geometric constraints between feature points to obtain a new matching point set, and a transformation matrix is calculated; the acquired second image is perspective-transformed with the obtained transformation matrix; the two images are blended by a weighted average to obtain a wide-view image;
position association: the live-action image data and the timestamps in the GNSS coordinate file are read, the shooting time of each image is converted to the GNSS time system with a time registration formula, the images are associated with position coordinates of the same time, and the data are stored.
A navigation method based on urban live-action, wherein the image stitching comprises the steps of:
calling live-action image 1: the left view of two adjacent images among the acquired live-action images is retrieved;
calling the adjacent live-action image 2: the right view of the two adjacent images is retrieved;
feature point extraction: feature points of the two live-action images are extracted with the SIFT operator, a local feature description algorithm based on scale space;
feature point matching: the extracted feature points are initially matched to obtain an initial matching point set; the matching points are screened under the spatial geometric constraints between feature points to obtain a new matching point set, and a transformation matrix is calculated;
solving a transformation matrix: from the new matching point pairs, the homography transformation matrix H is solved from the linear system:
AH = b
where each of the four matching point pairs (xi, yi) and (xi', yi'), i = 1, ..., 4, contributes the two rows
[xi  yi  1  0  0  0  -xi·xi'  -yi·xi']
[0  0  0  xi  yi  1  -xi·yi'  -yi·yi']
to the 8×8 coefficient matrix A; b = (x1', y1', x2', y2', x3', y3', x4', y4')^T is the vector of target coordinates; and H = (h11, h12, h13, h21, h22, h23, h31, h32)^T holds the eight unknown entries of the homography, with h33 = 1;
(x1, y1) and (x1', y1') -- the coordinates of a new matching point pair;
(x2, y2) and (x2', y2') -- the coordinates of a new matching point pair;
(x3, y3) and (x3', y3') -- the coordinates of a new matching point pair;
(x4, y4) and (x4', y4') -- the coordinates of a new matching point pair;
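As a sketch of the solving step above: each of the four matching point pairs contributes two linear equations, giving an 8×8 system AH = b whose solution fills the homography up to h33 = 1. The patent does not name a solver, so plain `numpy.linalg.solve` stands in for it here; the row layout is the standard direct-linear form implied by the coordinate definitions.

```python
import numpy as np

def solve_homography(src, dst):
    """Solve A.H = b for the 8 homography parameters from 4 point pairs.

    src, dst: sequences of four (x_i, y_i) and (x_i', y_i') coordinates.
    Returns the 3x3 homography matrix with h33 fixed to 1.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])  # row mapping to x'
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])  # row mapping to y'
        b.extend([xp, yp])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pt):
    """Map a point through H using homogeneous coordinates."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

With four corners of the unit square translated by (5, -3), the recovered matrix is the corresponding translation homography, and any further point maps consistently through it.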
image transformation: performing perspective transformation on the acquired second image according to the acquired transformation matrix;
image mosaic: the two images are blended by a weighted average. Let M1 and M2 be the two images to be stitched and M the mosaic image; the weight is determined by the distance from the pixel to the boundary of the overlap region, and the gray value of each point in the mosaic image is:
f(x, y) = f1(x, y), for (x, y) in M1 only;
f(x, y) = d1·f1(x, y) + d2·f2(x, y), for (x, y) in the overlap of M1 and M2;
f(x, y) = f2(x, y), for (x, y) in M2 only;
where:
f1(x, y), f2(x, y) and f(x, y) -- the gray values of the three images at pixel (x, y);
d1 and d2 -- the weights, generally taken as di = 1/width, where width is the width of the overlap region, with d1 + d2 = 1 and 0 < d1, d2 < 1;
Obtaining a wide viewing angle image: an image of a wide viewing angle is obtained by image mosaicing.
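A minimal sketch of the weighted-average mosaic, assuming two grayscale strips whose overlap is a known number of columns; the linear fall-off of d1 and d2 toward the overlap boundary is one common realisation of the distance rule stated above, with d1 + d2 = 1 everywhere:

```python
import numpy as np

def blend_overlap(f1, f2, width):
    """Weighted-average mosaic of two grayscale images: the last `width`
    columns of f1 overlap the first `width` columns of f2.
    """
    h, w1 = f1.shape
    out = np.zeros((h, w1 + f2.shape[1] - width), float)
    out[:, :w1 - width] = f1[:, :-width]       # region covered by f1 only
    out[:, w1:] = f2[:, width:]                # region covered by f2 only
    d1 = np.linspace(1.0, 0.0, width)          # weight for f1, fades out
    d2 = 1.0 - d1                              # weight for f2, fades in
    out[:, w1 - width:w1] = d1 * f1[:, -width:] + d2 * f2[:, :width]
    return out
```

The result is the wide-view image: f1's gray values on its exclusive side, f2's on the other, and a smooth transition across the overlap.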
The navigation method based on urban live-action, wherein the feature point extraction comprises the steps of:
constructing a Gaussian difference scale space: a difference-of-Gaussian scale space is generated by convolving the image with Gaussian kernels of different scales and subtracting adjacent layers:
D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ)
where:
I(x, y) -- the image;
(x, y) -- the spatial coordinates;
G(x, y, σ) -- the Gaussian function;
σ -- the scale factor;
k -- the constant multiplicative factor between adjacent scales;
extreme point detection: a pyramid is built from the differences of adjacent scale-space layers, and each sample is compared with its 26 neighbours -- the 8 surrounding points in its own layer plus the 9 corresponding points in each of the adjacent scales above and below -- to find the local maxima and minima;
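The 26-neighbour comparison can be sketched as follows, assuming the difference-of-Gaussian layers are stacked into one array indexed (scale, row, column); the function interface is illustrative, not taken from the patent:

```python
import numpy as np

def is_scale_space_extremum(dog, s, y, x):
    """Check whether DoG sample (s, y, x) is a strict maximum or minimum
    among its 26 neighbours: 8 in its own layer plus 9 in each adjacent
    scale, as the extreme-point detection step describes.

    dog: array of shape (num_scales, H, W) of difference-of-Gaussian layers.
    """
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]  # 3x3x3 neighbourhood
    centre = dog[s, y, x]
    unique = np.count_nonzero(cube == centre) == 1      # centre value unique
    return unique and (centre == cube.max() or centre == cube.min())
```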
positioning key points: the detected local extreme points are further screened to remove unstable and falsely detected ones. The Gaussian pyramid is built from down-sampled images, and an extreme point extracted from a down-sampled layer is mapped back to its exact position in the original image. The screening uses the Taylor expansion of the difference-of-Gaussian function about the candidate point:
D(X) = D + (∂D/∂X)^T X + (1/2) X^T (∂²D/∂X²) X
where:
D(X) -- the value of the difference-of-Gaussian function at a local extreme point X = (x, y, σ)^T of a feature point in the three-dimensional scale space;
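The screening step can be illustrated with the standard refinement derived from the expansion above: setting the derivative of D(X) to zero gives the sub-pixel offset X̂ = -(∂²D/∂X²)⁻¹ · ∂D/∂X, and a candidate whose offset exceeds half a sample spacing in any dimension is treated as unstable. The gradient and Hessian are taken as given, and the 0.5 cut-off is the usual choice rather than a value stated in the text:

```python
import numpy as np

def refine_offset(grad, hessian):
    """Sub-pixel offset of a candidate extremum in (x, y, sigma).

    Solving dD/dX = 0 for the quadratic model gives offset = -H^-1 g.
    Returns the offset and a stability flag (|offset| <= 0.5 everywhere).
    """
    offset = -np.linalg.solve(hessian, grad)
    stable = bool(np.all(np.abs(offset) <= 0.5))
    return offset, stable
```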
allocating the main direction to the key points: a direction parameter is assigned to each key point using the distribution of gradients in its neighbourhood:
m(x, y) = sqrt((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²)
θ(x, y) = tan^-1((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)))
where:
L(x, y) -- the pixel gray of the Gaussian-smoothed image at the feature point;
m(x, y) -- the gradient modulus;
θ(x, y) -- the gradient direction;
generating a feature point descriptor: each key point is described with a 4×4 grid of 16 seed points; a gradient histogram over 8 directions is computed in each block, and the histograms of all blocks are concatenated to obtain a 128-dimensional feature point descriptor.
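The gradient modulus and direction formulas above can be sketched directly; `arctan2` replaces tan^-1(dy/dx) to avoid division by zero when the horizontal difference vanishes, an implementation detail the text leaves open:

```python
import numpy as np

def gradient_mag_theta(L, x, y):
    """Gradient modulus m(x, y) and direction theta(x, y) at pixel (x, y)
    of a Gaussian-smoothed image L, using the central differences from the
    orientation-assignment formulas. L is indexed L[row, col] = L[y, x].
    """
    dx = L[y, x + 1] - L[y, x - 1]   # horizontal central difference
    dy = L[y + 1, x] - L[y - 1, x]   # vertical central difference
    m = np.hypot(dx, dy)             # sqrt(dx^2 + dy^2)
    theta = np.arctan2(dy, dx)       # direction in (-pi, pi]
    return m, theta
```

On a gray ramp increasing along x, the direction comes out as 0 and the modulus as the ramp slope times two, matching the central-difference definition.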
A navigation method based on urban live-action, wherein the feature point matching comprises the steps of:
initial matching of feature points: the feature points extracted from the two adjacent images are initially matched with the BBF (best-bin-first) algorithm;
arbitrarily selecting three matching point pairs: three pairs are drawn at random from the initially matched feature point pairs;
acquiring the included angles of the three matching point pairs: the four inclination angles defined by the three matching point pairs are computed with the vector angle formula:
θ1 = tan^-1((y2 - y1) / (x2 - x1)),  θ2 = tan^-1((y3 - y1) / (x3 - x1))
θ1' = tan^-1((y2' - y1') / (x2' - x1')),  θ2' = tan^-1((y3' - y1') / (x3' - x1'))
where:
θ1, θ2 -- the inclination angles of the lines joining two points in the first image;
θ1', θ2' -- the inclination angles of the lines joining the corresponding points in the second image;
(x1, y1), (x1', y1') -- the coordinates of one matching point pair;
(x2, y2), (x2', y2') -- the coordinates of one matching point pair;
(x3, y3), (x3', y3') -- the coordinates of one matching point pair;
judging the included-angle differences: a new matching point pair is accepted according to the geometric constraint relation among the feature points:
C = ((Δθ1 ≤ Δθ) && (Δθ2 ≤ Δθ))
where:
Δθ1 -- the absolute value of the angle difference θ1 - θ2;
Δθ2 -- the absolute value of the angle difference θ1' - θ2';
Δθ -- a set threshold;
C -- the acceptance condition for a matching point pair.
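A sketch of the angle-based screening, under one reading of the constraint: the inclination of each line joining matched points should be preserved between the two images, so corresponding angles are compared across images. The 5-degree threshold Δθ is an assumed value, since the text leaves it open:

```python
import math

def screen_matches(p1, p2, p3, q1, q2, q3, max_diff=math.radians(5)):
    """Screen three candidate matches (p_i in image 1, q_i in image 2)
    with the inclination-angle constraint C = (d1 <= max_diff) and
    (d2 <= max_diff), comparing corresponding line inclinations.
    """
    def incl(a, b):
        # inclination angle of the line from a to b
        return math.atan2(b[1] - a[1], b[0] - a[0])
    theta1, theta2 = incl(p1, p2), incl(p1, p3)      # angles in image 1
    theta1p, theta2p = incl(q1, q2), incl(q1, q3)    # angles in image 2
    d1 = abs(theta1 - theta1p)
    d2 = abs(theta2 - theta2p)
    return d1 <= max_diff and d2 <= max_diff
```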
A navigation method based on urban live-action, wherein the position association comprises the steps of:
acquiring image timestamp 1: the original live-action image data are obtained from the acquisition equipment, and the timestamp in the live-action image data is extracted;
acquiring GNSS positioning-data timestamp 2: position data are acquired through the GNSS module, and the timestamp in the position data is extracted and converted with the formula:
GPST = T_bj - 8h + n
where:
GPST -- GPS time;
T_bj -- Beijing time;
8h -- the 8-hour offset between Beijing time and UTC;
n -- the accumulated leap-second correction (5 seconds in 1989, 11 seconds in 1996, 13 seconds in 2002, and 17 seconds by 2017);
judging whether the two timestamps are equal: the timestamp in the image data is compared with the timestamp in the position data; if they are equal, the associated position and image data are acquired; if not, the image timestamp and the positioning-data timestamp are acquired again and compared anew;
acquiring the associated position and image data: image data and position coordinates with the same time are associated, yielding image data carrying a position stamp.
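A minimal sketch of the time registration and association, with timestamps taken as seconds of day. The leap-second count n is passed in as a parameter; the default of 18 s (the GPS-UTC difference in force since the start of 2017) is an assumption for current data:

```python
def beijing_to_gpst(t_bj_seconds, leap_seconds=18):
    """Convert a Beijing-time timestamp (seconds) to GPS time with
    GPST = T_bj - 8h + n, where n is the leap-second correction."""
    return t_bj_seconds - 8 * 3600 + leap_seconds

def associate(images, fixes):
    """Pair each (timestamp, image) with the (timestamp, coord) GNSS fix
    whose GPS timestamp is equal, as in the position-association step."""
    by_time = {t: coord for t, coord in fixes}
    return [(img, by_time[beijing_to_gpst(t)])
            for t, img in images if beijing_to_gpst(t) in by_time]
```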
A navigation method based on urban live-action, wherein the data retrieval comprises the steps of:
acquiring the coordinates of the current position point: the position-point coordinates of the current GPS position are acquired;
acquiring the nearest-neighbor image: the coordinates of the nearest image point in the database are obtained from the GPS position-point coordinates with a distance calculation formula, and the image associated with that image point is retrieved;
performing view transformation on the image with the current position: a translation vector and a rotation matrix are obtained from the current position.
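A sketch of the nearest-neighbour retrieval: planar Euclidean distance over projected coordinates stands in for the unspecified distance calculation formula, and the in-memory point list is a stand-in for the database query:

```python
import math

def nearest_image(gps, points):
    """Return the ((x, y), image_id) entry nearest the current GPS fix.

    points: list of ((x, y), image_id) pairs in a projected (metre) plane.
    """
    return min(points, key=lambda p: math.dist(gps, p[0]))
```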
The navigation method based on urban live-action, wherein the view transformation of the image with the current position comprises the steps of:
obtaining a translation vector: the relative relation between two-dimensional plane points is obtained from the geodetic coordinates of the nearest point, the next-nearest point, and the position image point, together with the Y axis of the image-acquisition coordinate system and the Z axis of the geodetic coordinate system; the translation vector is then obtained from the linear equation through the nearest and next-nearest points;
acquiring a rotation matrix: the normal vector of the scene plane is obtained from the equation of the space plane; the image-acquisition coordinate system of the nearest point is taken as the current coordinate system, and the distance from the coordinate origin to the scene plane and the rotation angle of the original image about the Y axis of the image-acquisition coordinate system are obtained; from that rotation angle, the rotation matrix of the original image about the Y axis of the image-acquisition coordinate system is obtained.
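The rotation about the Y axis of the image-acquisition coordinate system can be written down directly once the angle from the step above is known; the angle is assumed to be given here:

```python
import numpy as np

def rotation_about_y(angle_rad):
    """Rotation matrix about the Y axis of the image-acquisition frame,
    used to swing the stored view toward the current position."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])
```

A quarter-turn maps the X axis onto the negative Z axis, and the matrix is orthogonal, as any rotation matrix must be.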
It can be seen from this that the navigation method based on urban live-action in the embodiment of the invention has the following advantages: live-action navigation data can be captured with an ordinary camera, which is simple and fast and avoids the time-consuming, labor-intensive three-dimensional modelling required by three-dimensional navigation. A user can search for an intended destination in the live-action navigation system and browse it in advance, replacing an on-site visit. The real scene of a place can be seen directly, allowing accurate and rapid positioning with good intuitiveness and accuracy, and opening a brand-new way of reading maps. By combining live action with navigation, the method helps users determine where they are, recognise landmark buildings in advance, choose a travel path, and see the world at eye level, which is closer to human perception habits. The method acquires live-action images in accordance with human visual habits, improves the visual effect, creates a new visual experience, and lets users know outdoor scenery without going out. The live-action image can cover all attribute information, meeting users' demands for detail and intuitiveness. The live-action map is fully consistent with the scene on site, providing users with a map service that carries more detailed information and more realistic, accurate pictures. While achieving the same effect as a three-dimensional electronic map, the method avoids its problems of large model data size and poor transmission performance.
Drawings
Fig. 1 is an overall flow diagram of the navigation method based on urban live-action provided in an embodiment of the invention;
Fig. 2 is a schematic flow chart of the data processing step in the navigation method based on urban live-action according to an embodiment of the invention;
Fig. 3 is a schematic flow chart of the image stitching step in the navigation method based on urban live-action according to an embodiment of the invention;
Fig. 4 is a schematic flow chart of the feature point extraction step in the navigation method based on urban live-action according to an embodiment of the invention;
Fig. 5 is a schematic flow chart of the feature point matching step in the navigation method based on urban live-action according to an embodiment of the invention;
Fig. 6 is a schematic flow chart of the position association step in the navigation method based on urban live-action according to an embodiment of the invention;
Fig. 7 is a schematic flow chart of the data retrieval step in the navigation method based on urban live-action according to an embodiment of the invention;
Fig. 8 is a schematic flow chart of the step of performing view transformation on an image with the current position in the navigation method based on urban live-action according to an embodiment of the invention.
Detailed Description
For a better understanding of the present invention, reference is now made to the following examples, which are illustrated in the accompanying drawings. The examples illustrate the invention but do not limit it.
Example 1:
As shown in fig. 1, a navigation method based on urban live-action comprises the following steps:
data acquisition: live-action image data are captured by the acquisition equipment, and GNSS data are acquired by the GPS positioning equipment;
data processing: two adjacent images among the collected live-action images are retrieved; feature points of the two live-action images are extracted and initially matched to obtain an initial matching point set; the matching points are screened under the spatial geometric constraints between feature points to obtain a new matching point set, and a transformation matrix is calculated; the acquired second image is perspective-transformed with the obtained transformation matrix; the two images are blended by a weighted average to obtain a wide-view image; the live-action image data and the timestamps in the GNSS coordinate file are read, the shooting time of each image is converted to the GNSS time system with a time registration formula, the images are associated with position coordinates of the same time, and the data are stored;
data storage: the live-action sequence images processed by the system are stored in a database, the associated position points and image data are stored, and their storage paths are updated in the database;
data retrieval: the position-point coordinates of the current GPS position are acquired; the coordinates of the nearest image point in the database are obtained from the GPS position-point coordinates with a distance calculation formula, and the image associated with that image point is retrieved; a translation vector and a rotation matrix are obtained from the current position.
As shown in fig. 2, in the navigation method based on urban live-action, the data processing comprises the following steps:
image stitching: two adjacent views among the acquired live-action images are retrieved; feature points of the two live-action images are extracted with the SIFT operator, a local feature description algorithm based on scale space, and initially matched to obtain an initial matching point set; the matching points are screened under the spatial geometric constraints between feature points to obtain a new matching point set, and a transformation matrix is calculated; the acquired second image is perspective-transformed with the obtained transformation matrix; the two images are blended by a weighted average to obtain a wide-view image;
position association: the live-action image data and the timestamps in the GNSS coordinate file are read, the shooting time of each image is converted to the GNSS time system with a time registration formula, the images are associated with position coordinates of the same time, and the data are stored.
As shown in fig. 3, a navigation method based on urban live-action, the image stitching includes the following steps:
calling a live-action image 1: calling left views in two adjacent images in the acquired live-action image 1;
calling the adjacent live-action image 2: right views in two adjacent images in the acquired live-action image 1 are called;
feature point extraction: extracting feature points of two live-action images through a scale space-based local feature description algorithm of a SIFT operator;
feature point matching: extracting image feature points, carrying out initial matching on the feature points, obtaining an initial matching point set, screening the matching points through space geometric relation constraint among the feature points, obtaining a new matching point set, and calculating a transformation matrix;
solving a transformation matrix: according to the new matching point pairs, a homography transformation matrix H is calculated, and the calculation formula is as follows:
AH=b
wherein:
A -- the coefficient matrix assembled from the coordinates of the matching points;
b -- the vector of target point coordinates;
(x1, y1) and (x1', y1') -- the coordinates of a first new matching point pair;
(x2, y2) and (x2', y2') -- the coordinates of a second new matching point pair;
(x3, y3) and (x3', y3') -- the coordinates of a third new matching point pair;
(x4, y4) and (x4', y4') -- the coordinates of a fourth new matching point pair;
Image transformation: performing perspective transformation on the acquired second image according to the acquired transformation matrix;
image mosaic: the two images are embedded by the weighted average method. Let M1 and M2 be the two images to be stitched and M the mosaic image; the weights are determined by the distance from a pixel point to the boundary of the overlap region, and the gray value of each point in the mosaic image is:
f(x, y) = f1(x, y) in the region covered only by M1; f(x, y) = d1*f1(x, y) + d2*f2(x, y) in the overlap region; f(x, y) = f2(x, y) in the region covered only by M2
in the above formula:
f1(x, y), f2(x, y) and f(x, y) -- the gray values of the three images at the pixel point (x, y);
d1 and d2 -- the weights; they generally change in steps of 1/width, where width is the width of the overlap region, with d1 + d2 = 1 and 0 < d1, d2 < 1;
Obtaining a wide viewing angle image: an image of a wide viewing angle is obtained by image mosaicing.
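The stitching steps above, in particular solving AH = b from the four new matching point pairs, can be sketched as follows. This is a minimal numpy illustration, not the patent's exact formulation: the two-rows-per-pair layout of A and the normalization h33 = 1 are assumptions about the unreproduced matrix equation.

```python
import numpy as np

def solve_homography(src, dst):
    # Build AH = b from four matching point pairs: each pair (x, y) -> (x', y')
    # contributes two rows of A; the ninth entry h33 is fixed to 1.
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    # Perspective transformation of a single point by H
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]
```

With H in hand, the second image is warped point by point (or with an inverse mapping over the output grid) before the weighted-average embedding step.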
As shown in fig. 4, a navigation method based on urban live-action, the feature point extraction includes the following steps:
constructing a Gaussian differential scale space: a Gaussian difference scale space is generated by convolving the image with Gaussian difference kernels of different scales, and the formula is as follows:
D(x,y,σ) = (G(x,y,kσ) − G(x,y,σ)) * I(x,y) = L(x,y,kσ) − L(x,y,σ)
wherein:
I(x, y) -- the input image;
(x, y) -- the spatial coordinates;
G(x, y, σ) -- the Gaussian function;
σ -- the scale factor;
k -- the constant multiplicative factor between adjacent scales;
and (3) extreme point detection: a pyramid is constructed by taking differences of adjacent layers of the Gaussian scale space; each sample point is compared with its eight neighbours at the same scale and with the nine points at each of the corresponding upper and lower adjacent scales, 26 points in all, to find the maxima and minima;
Positioning key points: the detected local extreme points are further screened to remove unstable and falsely detected extrema. A Gaussian pyramid is constructed from downsampled images, and an extreme point extracted in a downsampled image corresponds to an exact position in the original image. The scale-space function at a candidate point is expanded as:
D(x) = D + (∂D/∂x)^T x + (1/2) x^T (∂²D/∂x²) x
wherein:
D(x) -- the value of the scale-space function at a local extreme point x of a feature point in the three-dimensional scale space;
the main direction is allocated to the key points: a direction parameter is allocated to each key point by using the distribution characteristics of the gradients in the key point's neighbourhood, and the allocation formulas are as follows:
m(x,y) = sqrt((L(x+1,y) − L(x−1,y))² + (L(x,y+1) − L(x,y−1))²)
θ(x,y) = tan⁻¹((L(x,y+1) − L(x,y−1)) / (L(x+1,y) − L(x−1,y)))
wherein:
L(x, y) -- the gray value of the pixel at the feature point;
m(x, y) -- the gradient magnitude;
θ(x, y) -- the gradient direction;
generating a characteristic point descriptor: each key point is described by a 4×4 grid of 16 seed points; gradient information in 8 directions is calculated in each sub-block, and the gradient direction histograms of all sub-blocks are merged to obtain a 128-dimensional feature point descriptor.
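The scale-space construction above can be sketched as below: blur the image at scales σ and kσ and subtract, giving D = L(kσ) − L(σ). This is a minimal numpy illustration; the kernel radius of 3σ and the defaults σ0 = 1.6, k = √2 are conventional assumptions, not values stated in the text.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian convolution: L(x, y, sigma) = G(sigma) * I(x, y)
    r = max(1, int(3 * sigma + 0.5))
    t = np.arange(-r, r + 1)
    g = np.exp(-t**2 / (2 * sigma**2)); g /= g.sum()
    pad = np.pad(img, r, mode='edge')
    rows = np.apply_along_axis(np.convolve, 1, pad, g, 'valid')
    return np.apply_along_axis(np.convolve, 0, rows, g, 'valid')

def dog_space(img, sigma0=1.6, k=2**0.5, layers=4):
    # D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma) over a stack of scales
    L = [gaussian_blur(img, sigma0 * k**i) for i in range(layers + 1)]
    return [L[i + 1] - L[i] for i in range(layers)]
```

Extreme points are then sought in each D layer against its 26 neighbours across scale, as described in the extreme point detection step.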
As shown in fig. 5, a navigation method based on urban live-action, the feature point matching includes the following steps:
initializing and matching characteristic points: initializing and matching the two adjacent images by utilizing a BBF algorithm on the extracted feature points;
Three matching point pairs are arbitrarily selected: randomly extracting three pairs of matching point pairs from the initialized matching characteristic point pairs;
acquiring included angles of three pairs of matching point pairs: the four inclination angles determined by the three matching point pairs are obtained with the vector angle formula; the calculation formulas are as follows:
θ1 = arctan((y2 − y1)/(x2 − x1)), θ2 = arctan((y3 − y1)/(x3 − x1))
θ1' = arctan((y2' − y1')/(x2' − x1')), θ2' = arctan((y3' − y1')/(x3' − x1'))
wherein:
θ1, θ2 -- the inclination angles of the two point connecting lines on the first image;
θ1', θ2' -- the inclination angles of the two point connecting lines on the second image;
(x1, y1), (x1', y1') -- the coordinates of a first matching point pair;
(x2, y2), (x2', y2') -- the coordinates of a second matching point pair;
(x3, y3), (x3', y3') -- the coordinates of a third matching point pair;
judging the difference value of the included angle: a new matching point pair is obtained according to the geometric constraint relation among the feature points, and the constraint relation is as follows:
C = ((Δθ1 ≤ Δθ) && (Δθ2 ≤ Δθ))
wherein:
Δθ1 -- the absolute value of the difference θ1 − θ1' between the first pair of corresponding angles;
Δθ2 -- the absolute value of the difference θ2 − θ2' between the second pair of corresponding angles;
Δθ -- the set threshold;
C -- the constraint result; a candidate triple that satisfies C is kept as new matching point pairs.
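A minimal sketch of this screening step follows. One assumption is made explicit: Δθ1 and Δθ2 are read as the cross-image differences |θ1 − θ1'| and |θ2 − θ2'|, which matches the observation elsewhere in the text that adjacent frames differ by only a small translation and rotation, so line inclinations are approximately preserved.

```python
import math

def inclination(p, q):
    # Inclination angle of the line through points p and q
    return math.atan2(q[1] - p[1], q[0] - p[0])

def satisfies_constraint(tri1, tri2, delta=math.radians(2.0)):
    """tri1: three candidate feature points on image 1; tri2: their matches
    on image 2.  Keep the triple when both cross-image angle differences
    stay under the threshold delta (the set Δθ)."""
    t1, t2 = inclination(tri1[0], tri1[1]), inclination(tri1[0], tri1[2])
    t1p, t2p = inclination(tri2[0], tri2[1]), inclination(tri2[0], tri2[2])
    return abs(t1 - t1p) <= delta and abs(t2 - t2p) <= delta
```

Triples drawn at random from the initial match set are tested this way; pairs that repeatedly fail are discarded as mismatches.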
As shown in fig. 6, a navigation method based on city reality, the position association includes the following steps:
acquiring an image timestamp 1: acquiring original live-action image data through acquisition equipment, and extracting a time stamp in attribute information of the live-action image data;
Acquiring a GNSS positioning data timestamp 2: position data are acquired through the GNSS module, the time stamp in the attribute information of the acquired position data is extracted, and time transformation is performed on the time stamp; the transformation formula is as follows:
GPST = T_bj − 8h + n
wherein:
GPST -- GPS time;
n -- the accumulated leap-second correction: 5 seconds in 1989, 11 seconds in 1996, 13 seconds in 2002, and 17 seconds by 2017;
8h -- the 8-hour offset between Beijing time and UTC;
T_bj -- Beijing time;
a determination is made as to whether the two timestamps are equal: the time stamp in the image data is compared with the time stamp in the position data; if the two time stamps are equal, the associated position and image data are acquired; if they are not equal, the image time stamp and the positioning data time stamp are acquired again and the judgment is repeated;
acquiring the associated position and image data: and correlating the image data with the same time with the position coordinates to obtain the image data with the position stamp.
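The time registration formula GPST = T_bj − 8h + n can be sketched as below. The dict keys and the fixed n = 17 s (the 2017 value from the table above, assumed constant over one acquisition session) are illustrative assumptions.

```python
from datetime import datetime, timedelta

def beijing_to_gpst(t_bj, leap_seconds=17):
    # GPST = T_bj - 8h + n  (n = 17 s by 2017 per the table above)
    return t_bj - timedelta(hours=8) + timedelta(seconds=leap_seconds)

def associate(images, positions):
    # Pair every image whose converted timestamp equals a GNSS record's GPST
    by_time = {p['gpst']: p for p in positions}
    return [(img, by_time[beijing_to_gpst(img['t_bj'])])
            for img in images if beijing_to_gpst(img['t_bj']) in by_time]
```

Images left unpaired correspond to the "not equal" branch above and wait for the next timestamp comparison.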
As shown in fig. 7, a navigation method based on city reality, the data call includes the following steps:
acquiring coordinates of a current position point: acquiring position point coordinates of the GPS according to the current GPS position;
acquiring a nearest neighbor image: the nearest neighbor image point coordinates in the database are acquired from the GPS position point coordinates through a distance calculation formula, and the image related to that image point is acquired;
Performing view transformation on the image by using the current position: and acquiring a translation vector and a rotation matrix according to the current position.
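A minimal sketch of the data-call lookup above, under one stated assumption: plane Euclidean distance over projected coordinates stands in for the text's distance calculation formula, and the record layout is hypothetical.

```python
import math

def nearest_image_point(gps_xy, image_points):
    # image_points: list of (x, y, image_id) records from the database;
    # return the record closest to the current GPS position point
    return min(image_points,
               key=lambda p: math.hypot(p[0] - gps_xy[0], p[1] - gps_xy[1]))
```

The image_id of the returned record is then used to fetch the related live-action image before the view transformation step.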
As shown in fig. 8, a navigation method based on urban live-action, wherein the performing view transformation on an image by using a current position includes the following steps:
obtaining a translation vector: acquiring a relative relation between two-dimensional plane points through the nearest point geodetic coordinates, the next nearest point geodetic coordinates, the position image point geodetic coordinates, the Y axis of an image acquisition coordinate system and the Z axis of the geodetic coordinate system, and acquiring a translation vector through a linear equation of the nearest point and the next nearest point;
acquiring a rotation matrix: acquiring a normal vector of a plane through an equation of the space plane, setting an image acquisition coordinate system of a nearest point as a current coordinate system, and acquiring a distance from a coordinate origin to a scene plane and a rotation angle of an original image around a Y axis of the image acquisition coordinate system; and obtaining a Y-axis rotation matrix of the original image around the image acquisition coordinate system through the rotation angle of the original image around the Y-axis of the image acquisition coordinate system.
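The rotation part of the view transformation can be sketched as a rotation about the Y axis of the image acquisition coordinate system, followed by the translation vector. This is a minimal numpy illustration; the axis convention and right-handed sign choice are assumptions.

```python
import numpy as np

def rot_y(angle):
    # Rotation matrix of the original image about the Y axis of the
    # image acquisition coordinate system; angle in radians
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

def transform_view(points, R, t):
    # Apply rotation matrix R, then translation vector t, to 3-D points
    return points @ R.T + t
```

The angle passed to rot_y is the rotation of the original image about the Y axis obtained in the step above, and t is the translation vector from the nearest-point and next-nearest-point line equation.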
The following description is given of a specific embodiment:
as shown in fig. 1, a navigation method based on urban live-action includes the following steps:
and (3) data acquisition: when road conditions are good, live-action image data are collected by a camera fixed on a collection vehicle; the camera is mounted on a tripod, which guarantees that the camera can rotate in a plane parallel to the ground, so that, with the camera as the center, live-action images of the required azimuths can be shot by rotation. When road conditions are poor, a collector carries the image acquisition device on his or her back. To obtain the geographic coordinates of the camera while shooting images, a positioning module is fixed under the collecting camera, and GNSS data are collected through the GPS positioning equipment;
And (3) data processing: two adjacent images in the collected live-action images are called; extracting feature points of the two live-action images, carrying out initial matching of the feature points, obtaining an initial matching point set, screening the matching points according to space geometric relation constraint among the feature points, obtaining a new matching point set, and calculating a transformation matrix; performing perspective transformation on the acquired second image according to the acquired transformation matrix; embedding the two images by adopting a weighted average method to obtain a wide-view image; respectively reading live-action image data and a timestamp in a GNSS coordinate file, converting shooting time of an image into a GNSS time coordinate system according to a time registration formula, correlating the image with the same time with position coordinates, and storing the data;
and (3) data storage: storing the live-action sequence image data processed by the system into a database, storing the correlated position points and the image data into the database, and updating the storage path of the position points and the image data into the database;
and (3) data calling: acquiring position point coordinates of the GPS according to the current GPS position; acquiring nearest neighbor image point coordinates in a database according to the GPS position point coordinates through a distance calculation formula, and acquiring the image related to that image point; and acquiring a translation vector and a rotation matrix according to the current position.
As shown in fig. 2, a navigation method based on city reality, the data processing includes the following steps:
and (3) image stitching: two adjacent views in the acquired live-action image are called; extracting feature points of two live-action images through a scale space-based local feature description algorithm by a SIFT operator, carrying out initial matching of the feature points to obtain an initial matching point set, screening matching points through space geometric relation constraint among the feature points, obtaining a new matching point set, and calculating a transformation matrix; performing perspective transformation on the acquired second image according to the acquired transformation matrix; embedding the two images by adopting a weighted average method to obtain a wide-view image;
position association: and respectively reading the live-action image data and the time stamp in the GNSS coordinate file, converting the shooting time of the image into the GNSS time coordinate system according to a time registration formula, correlating the image with the same time with the position coordinate, and storing the data.
As shown in fig. 3, a navigation method based on urban live-action, the image stitching includes the following steps:
calling a live-action image 1: calling left views in two adjacent images in the acquired live-action image 1;
Calling the adjacent live-action image 2: right views in two adjacent images in the acquired live-action image 1 are called;
feature point extraction: extracting feature points of two live-action images through a scale space-based local feature description algorithm of a SIFT operator;
feature point matching: extracting image feature points, carrying out initial matching on the feature points, obtaining an initial matching point set, screening the matching points through space geometric relation constraint among the feature points, obtaining a new matching point set, and calculating a transformation matrix;
solving a transformation matrix: according to the new matching point pairs, a homography transformation matrix H is calculated, and the calculation formula is as follows:
AH=b
wherein:
A -- the coefficient matrix assembled from the coordinates of the matching points;
b -- the vector of target point coordinates;
(x1, y1) and (x1', y1') -- the coordinates of a first new matching point pair;
(x2, y2) and (x2', y2') -- the coordinates of a second new matching point pair;
(x3, y3) and (x3', y3') -- the coordinates of a third new matching point pair;
(x4, y4) and (x4', y4') -- the coordinates of a fourth new matching point pair;
image transformation: performing perspective transformation on the acquired second image according to the acquired transformation matrix;
image mosaic: the two images are embedded by the weighted average method. Let M1 and M2 be the two images to be stitched and M the mosaic image; the weights are determined by the distance from a pixel point to the boundary of the overlap region, and the gray value of each point in the mosaic image is:
f(x, y) = f1(x, y) in the region covered only by M1; f(x, y) = d1*f1(x, y) + d2*f2(x, y) in the overlap region; f(x, y) = f2(x, y) in the region covered only by M2
in the above formula:
f1(x, y), f2(x, y) and f(x, y) -- the gray values of the three images at the pixel point (x, y);
d1 and d2 -- the weights; they generally change in steps of 1/width, where width is the width of the overlap region, with d1 + d2 = 1 and 0 < d1, d2 < 1;
Obtaining a wide viewing angle image: an image of a wide viewing angle is obtained by image mosaicing.
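The weighted-average embedding over the overlap region can be sketched as below. This is a minimal numpy illustration; the linear d1 ramp in steps of 1/width across the overlap is the usual reading of the weight rule above, applied here to two equally sized overlap strips.

```python
import numpy as np

def blend_overlap(f1, f2):
    # f(x, y) = d1*f1 + d2*f2 with d1 decreasing linearly across the
    # overlap width (steps of 1/width) and d1 + d2 = 1 at every column
    width = f1.shape[1]
    d1 = (width - np.arange(width)) / width   # 1, (width-1)/width, ..., 1/width
    return d1 * f1 + (1.0 - d1) * f2
```

Columns near the M1 side keep the gray values of f1 and columns near the M2 side approach f2, which removes the visible seam.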
As shown in fig. 4, a navigation method based on urban live-action, the feature point extraction includes the following steps:
constructing a Gaussian differential scale space: a Gaussian difference scale space is generated by convolving the image with Gaussian difference kernels of different scales, and the formula is as follows:
D(x,y,σ) = (G(x,y,kσ) − G(x,y,σ)) * I(x,y) = L(x,y,kσ) − L(x,y,σ)
wherein:
I(x, y) -- the input image;
(x, y) -- the spatial coordinates;
G(x, y, σ) -- the Gaussian function;
σ -- the scale factor;
k -- the constant multiplicative factor between adjacent scales;
and (3) extreme point detection: a pyramid is constructed by taking differences of adjacent layers of the Gaussian scale space; each sample point is compared with its eight neighbours at the same scale and with the nine points at each of the corresponding upper and lower adjacent scales, 26 points in all, to find the maxima and minima;
Positioning key points: the detected local extreme points are further screened to remove unstable and falsely detected extrema. A Gaussian pyramid is constructed from downsampled images, and an extreme point extracted in a downsampled image corresponds to an exact position in the original image. The scale-space function at a candidate point is expanded as:
D(x) = D + (∂D/∂x)^T x + (1/2) x^T (∂²D/∂x²) x
wherein:
D(x) -- the value of the scale-space function at a local extreme point x of a feature point in the three-dimensional scale space;
the main direction is allocated to the key points: a direction parameter is allocated to each key point by using the distribution characteristics of the gradients in the key point's neighbourhood, and the allocation formulas are as follows:
m(x,y) = sqrt((L(x+1,y) − L(x−1,y))² + (L(x,y+1) − L(x,y−1))²)
θ(x,y) = tan⁻¹((L(x,y+1) − L(x,y−1)) / (L(x+1,y) − L(x−1,y)))
wherein:
L(x, y) -- the gray value of the pixel at the feature point;
m(x, y) -- the gradient magnitude;
θ(x, y) -- the gradient direction;
generating a characteristic point descriptor: each key point is described by a 4×4 grid of 16 seed points; gradient information in 8 directions is calculated in each sub-block, and the gradient direction histograms of all sub-blocks are merged to obtain a 128-dimensional feature point descriptor.
As shown in fig. 5, a navigation method based on urban live-action, the feature point matching includes the following steps:
initializing and matching characteristic points: the extracted feature points of the two adjacent images are initially matched with the BBF (best-bin-first) algorithm. First, a kd-tree is built over the descriptor data points in k-dimensional space (two-dimensional (x, y), three-dimensional (x, y, z), and so on): each layer divides the space into two subspaces, the division dimension being the dimension with the largest variance among the feature point descriptors; a point whose value on that dimension is smaller than the splitting value is placed in the left subtree of the node, otherwise in the right subtree, and the comparison process is repeated in turn with the left and right subtrees as new root nodes until no instance remains in either region. Second, during search, the nodes that may need to be backtracked into are added to a priority queue keyed by the distance from the target point to each node's splitting hyperplane, and the node with the shortest distance (highest priority) is expanded next, until the queue is empty.
The acquisition frequency of the live-action images is high, so the translation and rotation between images of adjacent sequences change little; the spatial geometric relations of the feature points in the overlap region are therefore approximately the same on adjacent frames, and spatial geometric constraints are used to screen out the mismatches that the large initial matching error leaves behind.
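The kd-tree construction and best-bin-first search described above can be sketched as follows. This is a minimal pure-Python illustration; the median split and the max_checks search budget are assumptions of the sketch, not details given in the text.

```python
import heapq

def _var(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

def build_kdtree(pts):
    # Split on the axis with the largest variance at each level, as above
    if not pts:
        return None
    axis = max(range(len(pts[0])), key=lambda a: _var([p[a] for p in pts]))
    pts = sorted(pts, key=lambda p: p[axis])
    m = len(pts) // 2
    return {'pt': pts[m], 'axis': axis,
            'left': build_kdtree(pts[:m]), 'right': build_kdtree(pts[m + 1:])}

def bbf_nearest(tree, q, max_checks=200):
    # Best-bin-first: expand nodes ordered by hyperplane distance to q
    best, best_d = None, float('inf')
    heap, tie, checks = [(0.0, 0, tree)], 1, 0
    while heap and checks < max_checks:
        _, _, node = heapq.heappop(heap)
        if node is None:
            continue
        checks += 1
        d = sum((a - b) ** 2 for a, b in zip(node['pt'], q))
        if d < best_d:
            best, best_d = node['pt'], d
        diff = q[node['axis']] - node['pt'][node['axis']]
        near, far = ((node['left'], node['right']) if diff < 0
                     else (node['right'], node['left']))
        heapq.heappush(heap, (0.0, tie, near)); tie += 1
        heapq.heappush(heap, (abs(diff), tie, far)); tie += 1
    return best
```

With a generous max_checks the search is exact; SIFT matching typically caps it much lower to trade accuracy for speed.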
Three matching point pairs are arbitrarily selected: randomly extracting three pairs of matching point pairs from the initialized matching characteristic point pairs;
acquiring included angles of three pairs of matching point pairs: the four inclination angles determined by the three matching point pairs are obtained with the vector angle formula; the calculation formulas are as follows:
θ1 = arctan((y2 − y1)/(x2 − x1)), θ2 = arctan((y3 − y1)/(x3 − x1))
θ1' = arctan((y2' − y1')/(x2' − x1')), θ2' = arctan((y3' − y1')/(x3' − x1'))
wherein:
θ1, θ2 -- the inclination angles of the two point connecting lines on the first image;
θ1', θ2' -- the inclination angles of the two point connecting lines on the second image;
(x1, y1), (x1', y1') -- the coordinates of a first matching point pair;
(x2, y2), (x2', y2') -- the coordinates of a second matching point pair;
(x3, y3), (x3', y3') -- the coordinates of a third matching point pair;
judging the difference value of the included angle: a new matching point pair is obtained according to the geometric constraint relation among the feature points, and the constraint relation is as follows:
C = ((Δθ1 ≤ Δθ) && (Δθ2 ≤ Δθ))
wherein:
Δθ1 -- the absolute value of the difference θ1 − θ1' between the first pair of corresponding angles;
Δθ2 -- the absolute value of the difference θ2 − θ2' between the second pair of corresponding angles;
Δθ -- the set threshold;
C -- the constraint result; a candidate triple that satisfies C is kept as new matching point pairs.
As shown in fig. 6, a navigation method based on city reality, the position association includes the following steps:
acquiring an image timestamp 1: acquiring original live-action image data through acquisition equipment, and extracting a time stamp in the live-action image data;
acquiring a GNSS positioning data timestamp 2: position data are acquired through the GNSS module, the time stamp in the acquired position data is extracted, and time transformation is performed on the time stamp; the transformation formula is as follows:
GPST = T_bj − 8h + n
wherein:
GPST -- GPS time;
n -- the accumulated leap-second correction: 5 seconds in 1989, 11 seconds in 1996, 13 seconds in 2002, and 17 seconds by 2017;
8h -- the 8-hour offset between Beijing time and UTC;
T_bj -- Beijing time;
a determination is made as to whether the two timestamps are equal: the time stamp in the image data is compared with the time stamp in the position data; if the two time stamps are equal, the associated position and image data are acquired; if they are not equal, the image time stamp and the positioning data time stamp are acquired again and the judgment is repeated;
acquiring the associated position and image data: and correlating the image data with the same time with the position coordinates to obtain the image data with the position stamp.
And (3) data storage: the live-action sequence image data processed by the system are stored in a file database, and the storage paths of the associated spatial position points and image data are updated into a MySQL database. Fields 'primary key', 'IM', 'street', 'forward/reverse', 'longitude', 'latitude', 'elevation' and 'photo name' are set in the MySQL table to store the spatial position points of the images and their association relation with the images; the content of the file database and the MySQL database is matched through the photo name, and acquired point-of-interest attribute information is stored into the MySQL database in time.
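A minimal sketch of the storage table follows, with SQLite standing in for MySQL. The column types and the sample row are assumptions; the field names follow a subset of the list above (the unclear 'IM' field is omitted), and the photo name serves as the join key between the file database and the relational table.

```python
import sqlite3

# In-memory SQLite stand-in for the MySQL table described above
conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE live_action (
    id         INTEGER PRIMARY KEY,   -- 'primary key'
    street     TEXT,                  -- 'street'
    direction  TEXT,                  -- 'forward/reverse'
    longitude  REAL,
    latitude   REAL,
    elevation  REAL,
    photo_name TEXT UNIQUE            -- join key to the file database
)""")
conn.execute(
    "INSERT INTO live_action (street, direction, longitude, latitude,"
    " elevation, photo_name) VALUES (?, ?, ?, ?, ?, ?)",
    ('Example Street', 'forward', 116.40, 39.91, 43.5, 'IMG_0001.jpg'))
row = conn.execute(
    "SELECT longitude, latitude FROM live_action WHERE photo_name = ?",
    ('IMG_0001.jpg',)).fetchone()
```

The data-calling step then queries this table by position to find the nearest stored image point and resolves the photo name to a file in the file database.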
As shown in fig. 7, a navigation method based on city reality, the data call includes the following steps:
acquiring coordinates of a current position point: acquiring position point coordinates of the GPS according to the current GPS position;
acquiring a nearest neighbor image: the nearest neighbor image point coordinates in the database are acquired from the GPS position point coordinates through a distance calculation formula, and the image related to that image point is acquired;
performing view transformation on the image by using the current position: and acquiring a translation vector and a rotation matrix according to the current position.
As shown in fig. 8, a navigation method based on urban live-action, wherein the performing view transformation on an image by using a current position includes the following steps:
obtaining a translation vector: acquiring a relative relation between two-dimensional plane points through the nearest point geodetic coordinates, the next nearest point geodetic coordinates, the position image point geodetic coordinates, the Y axis of an image acquisition coordinate system and the Z axis of the geodetic coordinate system, and acquiring a translation vector through a linear equation of the nearest point and the next nearest point;
acquiring a rotation matrix: acquiring a normal vector of a plane through an equation of the space plane, setting an image acquisition coordinate system of a nearest point as a current coordinate system, and acquiring a distance from a coordinate origin to a scene plane and a rotation angle of an original image around a Y axis of the image acquisition coordinate system; and obtaining a Y-axis rotation matrix of the original image around the image acquisition coordinate system through the rotation angle of the original image around the Y-axis of the image acquisition coordinate system.
It can be seen from this that the navigation method based on city live-action of the embodiment of the invention has the following advantages. Live-action navigation data can be acquired with an ordinary camera, which is simple and fast and omits the time-consuming and labor-intensive three-dimensional modeling step of three-dimensional navigation. An intended destination can be searched in the live-action navigation system and browsed in advance, replacing a field investigation. The real scene of the corresponding place can be seen intuitively for accurate and fast positioning, giving good intuitiveness and accuracy and creating a brand-new way of reading the map. By combining live action with navigation, the method helps users know their own position, recognize landmark buildings in advance, determine a travel path, and see the world at eye level, closer to human perception habits. The live-action images are acquired according to human visual habits, which improves the visual effect, creates a new visual experience, and lets users know outdoor scenery without going out. The live-action images can carry full attribute information, meeting users' demands for detail and intuitiveness. Because the live-action map is fully consistent with the field scene, it provides users with a map service whose information is more detailed and whose pictures are more real and accurate. While achieving the same effect as a three-dimensional electronic map, the method avoids the problems of large model data volume and poor transmission performance found in three-dimensional electronic maps.
Although embodiments of the present invention have been described by way of example, those of ordinary skill in the art will appreciate that numerous modifications and variations can be made without departing from the spirit of the invention, and it is intended that the appended claims encompass such modifications and variations.
Claims (6)
1. The navigation method based on the urban live-action is characterized by comprising the following steps:
and (3) data acquisition: image acquisition is carried out on the live-action image data through acquisition equipment; acquiring GNSS data through GPS positioning equipment;
and (3) data processing: two adjacent images in the collected live-action images are called; extracting feature points of the two live-action images, carrying out initial matching of the feature points, obtaining an initial matching point set, screening the matching points according to space geometric relation constraint among the feature points, obtaining a new matching point set, and calculating a transformation matrix; performing perspective transformation on the acquired second image according to the acquired transformation matrix; embedding the two images by adopting a weighted average method to obtain a wide-view image; respectively reading live-action image data and a timestamp in a GNSS coordinate file, converting shooting time of an image into a GNSS time coordinate system according to a time registration formula, correlating the image with the same time with position coordinates, and storing the data;
And (3) data storage: storing the live-action sequence image data processed by the system into a database, storing the correlated position points and the image data into the database, and updating the storage path of the position points and the image data into the database;
and (3) data calling: acquiring position point coordinates of the GPS according to the current GPS position; acquiring nearest neighbor image point coordinates in a database according to the GPS position point coordinates through a distance calculation formula, and acquiring the image related to that image point; obtaining a translation vector and a rotation matrix according to the current position;
the data processing comprises the following steps:
and (3) image stitching: two adjacent views in the acquired live-action image are called; extracting feature points of two live-action images through a scale space-based local feature description algorithm by a SIFT operator, carrying out initial matching of the feature points to obtain an initial matching point set, screening matching points through space geometric relation constraint among the feature points, obtaining a new matching point set, and calculating a transformation matrix; performing perspective transformation on the acquired second image according to the acquired transformation matrix; embedding the two images by adopting a weighted average method to obtain a wide-view image;
position association: respectively reading live-action image data and a timestamp in a GNSS coordinate file, converting shooting time of an image into a GNSS time coordinate system according to a time registration formula, correlating the image with the same time with position coordinates, and storing the data;
The data call comprises the following steps:
acquiring coordinates of a current position point: acquiring position point coordinates of the GPS according to the current GPS position;
acquiring a nearest neighbor image: the nearest neighbor image point coordinates in the database are acquired from the GPS position point coordinates through a distance calculation formula, and the image related to that image point is acquired;
performing view transformation on the image by using the current position: and acquiring a translation vector and a rotation matrix according to the current position.
2. The navigation method based on city reality according to claim 1, characterized in that the image stitching comprises the steps of:
calling a live-action image 1: calling left views in two adjacent images in the acquired live-action image 1;
calling the adjacent live-action image 2: right views in two adjacent images in the acquired live-action image 1 are called;
feature point extraction: extracting feature points of two live-action images through a scale space-based local feature description algorithm of a SIFT operator;
feature point matching: extracting image feature points, carrying out initial matching on the feature points, obtaining an initial matching point set, screening the matching points through space geometric relation constraint among the feature points, obtaining a new matching point set, and calculating a transformation matrix;
Solving a transformation matrix: according to the new matching point pairs, a homography transformation matrix H is calculated, and the calculation formula is as follows:
AH=b
wherein:
A -- the coefficient matrix assembled from the coordinates of the matching points;
b -- the vector of target point coordinates;
(x1, y1) and (x1', y1') -- the coordinates of a first new matching point pair;
(x2, y2) and (x2', y2') -- the coordinates of a second new matching point pair;
(x3, y3) and (x3', y3') -- the coordinates of a third new matching point pair;
(x4, y4) and (x4', y4') -- the coordinates of a fourth new matching point pair;
image transformation: performing perspective transformation on the acquired second image according to the acquired transformation matrix;
image mosaic: the two images are embedded by the weighted average method. Let M1 and M2 be the two images to be stitched and M the mosaic image; the weights are determined by the distance from a pixel point to the boundary of the overlap region, and the gray value of each point in the mosaic image is:
f(x, y) = f1(x, y) in the region covered only by M1; f(x, y) = d1*f1(x, y) + d2*f2(x, y) in the overlap region; f(x, y) = f2(x, y) in the region covered only by M2
in the above formula:
f1(x, y), f2(x, y) and f(x, y) -- the gray values of the three images at the pixel point (x, y);
d1 and d2 -- the weights; they generally change in steps of 1/width, where width is the width of the overlap region, with d1 + d2 = 1 and 0 < d1, d2 < 1;
Obtaining a wide viewing angle image: an image of a wide viewing angle is obtained by image mosaicing.
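The homography solution AH = b from four point pairs and the weighted-average embedding of claim 2 can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: `solve_homography` and `blend_overlap` are our names, and the linear-ramp weighting is one common realization of the distance-to-boundary weights.

```python
import numpy as np

def solve_homography(src_pts, dst_pts):
    """Solve AH = b for the 8 homography parameters from 4 point pairs
    (standard DLT form with h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def blend_overlap(f1, f2):
    """Weighted-average mosaic f = d1*f1 + d2*f2 over the overlap region,
    with d1 + d2 = 1 and weights falling off linearly across its width."""
    width = f1.shape[1]
    d1 = np.linspace(1.0, 0.0, width)   # full weight for image 1 at its own edge
    d2 = 1.0 - d1
    return d1[None, :] * f1 + d2[None, :] * f2
```

With the four correspondences equal on both sides, the solved H reduces to the identity, which is a quick sanity check on the assembled system.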
3. The navigation method based on city live-action according to claim 2, characterized in that the feature point extraction comprises the following steps:
Constructing a Gaussian differential scale space: and generating a Gaussian difference scale space by utilizing the Gaussian difference kernels with different scales and image convolution, wherein the formula is as follows:
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)=L(x,y,kσ)-L(x,y,σ)
wherein:
I(x, y) - the input image;
(x, y) - spatial coordinates;
G(x, y, σ) - the Gaussian function;
σ - the scale factor;
k - the multiplicative factor between adjacent scales;
extreme point detection: the pyramid is constructed by differencing adjacent levels of the Gaussian scale space; each sample point is compared with its 8 neighbours at the same scale and the 9 corresponding points in each of the two adjacent scales above and below, 26 points in total, to detect the maxima and minima;
positioning key points: the detected local extreme points are further screened to remove unstable and falsely detected extreme points; since the Gaussian pyramid is constructed from downsampled images, the extreme points extracted in a downsampled image are mapped back to their exact positions in the original image; the scale-space function is fitted by the Taylor expansion:
D(X) = D + (∂D/∂X)^T X + (1/2) X^T (∂²D/∂X²) X
wherein:
D(X) - the value of the scale-space function at a local extreme point X = (x, y, σ)^T of a feature point in the three-dimensional scale space;
assigning a principal direction to the key points: a direction parameter is assigned to each key point using the distribution of gradients in its neighbourhood, wherein the assignment formulas are:
m(x, y) = sqrt((L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2)
θ(x, y) = tan^(-1)((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)))
wherein:
L(x, y) - the pixel gray value at the feature point's scale;
m(x, y) - the gradient magnitude;
θ(x, y) - the gradient direction;
generating feature point descriptors: each key point is described with a 4×4 array of 16 seed points; gradient information in 8 directions is calculated in each sub-block, and the gradient orientation histograms of all blocks are concatenated to obtain a 128-dimensional feature point descriptor.
4. The navigation method based on city live-action according to claim 2, characterized in that the feature point matching comprises the following steps:
initializing and matching characteristic points: initializing and matching the two adjacent images by utilizing a BBF algorithm on the extracted feature points;
three matching point pairs are arbitrarily selected: randomly extracting three pairs of matching point pairs from the initialized matching characteristic point pairs;
acquiring the included angles of the three matching point pairs: the four inclination angles of the three matching point pairs are calculated with the vector angle formula:
θ1 = tan^(-1)((y2 - y1) / (x2 - x1))
θ2 = tan^(-1)((y3 - y1) / (x3 - x1))
θ1' = tan^(-1)((y2' - y1') / (x2' - x1'))
θ2' = tan^(-1)((y3' - y1') / (x3' - x1'))
wherein:
θ1 - the inclination angle of a line between two points in image 1;
θ2 - the inclination angle of a line between two points in image 1;
θ1' - the inclination angle of a line between two points in image 2;
θ2' - the inclination angle of a line between two points in image 2;
(x1, y1), (x1', y1') - the coordinates of a matching point pair;
(x2, y2), (x2', y2') - the coordinates of a matching point pair;
(x3, y3), (x3', y3') - the coordinates of a matching point pair;
judging the included angle differences: new matching point pairs are obtained according to the geometric constraint relation among the feature points, the constraint being:
C = ((Δθ1 ≤ Δθ) && (Δθ2 ≤ Δθ))
wherein:
Δθ1 - the absolute value of the angle difference θ1 - θ2;
Δθ2 - the absolute value of the angle difference θ1' - θ2';
Δθ - a set threshold;
C - true when the point pairs are accepted as matching point pairs.
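The angle-constraint screening of claim 4 can be illustrated as follows. This is a sketch of our reading of the constraint C; the pairing of the three points into the two lines p1p2 and p1p3, and the function names, are assumptions, not the patent's code.

```python
import math

def inclination(p, q):
    # Inclination angle of the line joining points p and q.
    return math.atan2(q[1] - p[1], q[0] - p[0])

def constraint_c(p1, p2, p3, q1, q2, q3, d_theta=0.1):
    """C = ((dtheta1 <= d_theta) && (dtheta2 <= d_theta)), with
    dtheta1 = |theta1 - theta2| taken in image 1 and
    dtheta2 = |theta1' - theta2'| taken in image 2."""
    theta1, theta2 = inclination(p1, p2), inclination(p1, p3)
    theta1p, theta2p = inclination(q1, q2), inclination(q1, q3)
    return abs(theta1 - theta2) <= d_theta and abs(theta1p - theta2p) <= d_theta
```

In a RANSAC-style loop, triples of candidate matches would be drawn at random and kept only when the constraint holds in both images.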
5. The navigation method based on city live-action according to claim 1, characterized in that the location association comprises the following steps:
acquiring an image timestamp 1: acquiring original live-action image data through acquisition equipment, and extracting a time stamp in the live-action image data;
acquiring a GNSS positioning data timestamp 2: acquiring position data through the GNSS module, extracting the timestamp from the acquired position data, and converting the timestamp by the transformation formula:
GPST = T_bj - 8h + n
wherein:
GPST - GPS time;
T_bj - Beijing time;
8h - 8 hours, the offset of Beijing time from UTC;
n - the accumulated leap seconds: 5 s in 1989, 11 s in 1996, 13 s in 2002, and 17 s by 2017;
judging whether the two timestamps are equal: the timestamp in the image data is compared with the timestamp in the position data; if the two timestamps are equal, the associated position and image data are acquired; if not, the image timestamp and the positioning-data timestamp are acquired again before judging once more;
Acquiring the associated position and image data: and correlating the image data with the same time with the position coordinates to obtain the image data with the position stamp.
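The time conversion GPST = T_bj - 8h + n and the equal-timestamp association of claim 5 can be sketched as below. The record formats and function names are our assumptions; the leap-second values are the ones quoted in the claim.

```python
from datetime import datetime, timedelta

def beijing_to_gpst(t_bj, n):
    """GPST = T_bj - 8h + n: subtract the UTC+8 offset of Beijing time,
    then add the accumulated leap seconds n (e.g. 17 by 2017 per the claim)."""
    return t_bj - timedelta(hours=8) + timedelta(seconds=n)

def associate(image_records, gnss_records):
    """Join image records (timestamp, image) with GNSS records
    (timestamp, position) whose timestamps are equal, yielding
    position-stamped image data."""
    positions = dict(gnss_records)
    return [(ts, img, positions[ts]) for ts, img in image_records
            if ts in positions]
```

Images whose timestamp has no GNSS counterpart simply fall through, matching the claim's "acquire again and judge once more" branch.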
6. The navigation method based on city live-action according to claim 1, characterized in that performing view transformation on the image using the current position comprises the following steps:
obtaining a translation vector: acquiring a relative relation between two-dimensional plane points through the nearest point geodetic coordinates, the next nearest point geodetic coordinates, the position image point geodetic coordinates, the Y axis of an image acquisition coordinate system and the Z axis of the geodetic coordinate system, and acquiring a translation vector through a linear equation of the nearest point and the next nearest point;
acquiring a rotation matrix: acquiring a normal vector of a plane through an equation of the space plane, setting an image acquisition coordinate system of a nearest point as a current coordinate system, and acquiring a distance from a coordinate origin to a scene plane and a rotation angle of an original image around a Y axis of the image acquisition coordinate system; and obtaining a Y-axis rotation matrix of the original image around the image acquisition coordinate system through the rotation angle of the original image around the Y-axis of the image acquisition coordinate system.
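The Y-axis rotation matrix of claim 6 can be written out explicitly. This is a sketch: the right-handed convention and the helper `view_transform` are assumptions, and obtaining the rotation angle from the scene-plane geometry is taken as given.

```python
import numpy as np

def rotation_about_y(alpha):
    """Rotation matrix of the original image about the Y axis of the
    image-acquisition coordinate system, by angle alpha in radians."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

def view_transform(points, R, t):
    """Apply the rotation matrix R and then the translation vector t
    to an (N, 3) array of scene points."""
    return points @ R.T + t
```

Composing this rotation with the translation vector obtained from the nearest and next-nearest points gives the view transformation applied to the called live-action image.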
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811411955.0A CN111220156B (en) | 2018-11-25 | 2018-11-25 | Navigation method based on city live-action |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111220156A CN111220156A (en) | 2020-06-02 |
CN111220156B true CN111220156B (en) | 2023-06-23 |
Family
ID=70827583
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811411955.0A Active CN111220156B (en) | 2018-11-25 | 2018-11-25 | Navigation method based on city live-action |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111220156B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102426019A (en) * | 2011-08-25 | 2012-04-25 | 航天恒星科技有限公司 | Unmanned aerial vehicle scene matching auxiliary navigation method and system |
CN106447585A (en) * | 2016-09-21 | 2017-02-22 | 武汉大学 | Urban area and indoor high-precision visual positioning system and method |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2950791C (en) * | 2013-08-19 | 2019-04-16 | State Grid Corporation Of China | Binocular visual navigation system and method based on power robot |
CN104748746B (en) * | 2013-12-29 | 2017-11-03 | 刘进 | Intelligent machine attitude determination and virtual reality loaming method |
CN104833368A (en) * | 2015-05-12 | 2015-08-12 | 寅家电子科技(上海)有限公司 | Live-action navigation system and method |
CN105371847B (en) * | 2015-10-27 | 2018-06-29 | 深圳大学 | A kind of interior real scene navigation method and system |
CN106679648B (en) * | 2016-12-08 | 2019-12-10 | 东南大学 | Visual inertia combination SLAM method based on genetic algorithm |
CN108564647B (en) * | 2018-03-30 | 2019-08-30 | 王乐陶 | A method of establishing virtual three-dimensional map |
Also Published As
Publication number | Publication date |
---|---|
CN111220156A (en) | 2020-06-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
EE01 | Entry into force of recordation of patent licensing contract | ||
Application publication date: 20200602 Assignee: Tianjin survey and Design Institute Group Co.,Ltd. Assignor: STARGIS (TIANJIN) TECHNOLOGY DEVELOPMENT Co.,Ltd. Contract record no.: X2023980054666 Denomination of invention: A Navigation Method Based on Urban Realistic Scenery Granted publication date: 20230623 License type: Common license|Cross license Record date: 20231228 |