CN117419713A - Navigation method based on augmented reality, computing device and storage medium - Google Patents


Info

Publication number
CN117419713A
Authority
CN
China
Prior art keywords: model, dimensional, information, points, position base
Prior art date
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number
CN202311133402.4A
Other languages
Chinese (zh)
Inventor
陈永昊
朱恩予
李仁杰
Current Assignee (the listed assignee may be inaccurate): Ropt Technology Group Co ltd
Original Assignee: Ropt Technology Group Co ltd
Priority date (the priority date is an assumption and is not a legal conclusion)
Application filed by: Ropt Technology Group Co ltd
Priority application: CN202311133402.4A
Publication: CN117419713A
Legal status: Pending

Classifications

    • G06T19/003 — Navigation within 3D models or images
    • G01C21/005 — Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/20 — Instruments for performing navigational calculations
    • G01C21/3804 — Creation or updating of map data
    • G06T19/006 — Mixed reality
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Navigation (AREA)

Abstract

The application provides an augmented-reality-based navigation method, computing device, and storage medium, relating to the technical field of augmented reality. The method comprises the following steps: selecting a plurality of position base points according to the geographic information and key-place distribution information of a target area; generating a road network model of the target area according to the given actual road connectivity and the plurality of position base points; drawing an augmented reality three-dimensional model comprising a close view and a distant view, and determining the position, three-dimensional pose, and relative size with which the model is mapped into the real environment according to the environmental feature points of the live-action image and the three-dimensional position information of the position base points; determining a target path according to the user's starting position and destination; and matching the augmented reality three-dimensional model to the target path, displaying route identifiers, direction identifiers, and/or route overview information in the distant-view direction of the target path in a live-action guidance picture. The scheme can provide intuitive, accurate, and clear road guidance for users in real time from the two dimensions of close view and distant view.

Description

Navigation method based on augmented reality, computing device and storage medium
Technical Field
The application relates to the technical field of augmented reality, in particular to a navigation method, computing equipment and storage medium based on augmented reality.
Background
In some scenic areas, owing to complex geographic structure and intricate road networks, visitors are prone to getting lost and may not know which attraction they should head to or how to reach their destination. This not only causes trouble and inconvenience for visitors but also degrades their touring experience.
At present, guidance inside scenic areas takes three forms: signboards, maps, and human guides:
1. Signboard: signboard placement is often unclear or incomplete, making it difficult for visitors to recognize and understand accurately. Sometimes the position of a signboard is not obvious enough and is easily overlooked or misread, causing confusion for visitors.
2. Map: in complex scenic environments, a map may be inaccurate or too brief. Visitors may encounter inconsistent routes, missing detail, or untimely updates, leaving them confused when selecting and following a route.
3. Human guide: this approach is limited by human resources and time constraints. A guide cannot provide personalized guidance for every visitor at the same time, and a guide's knowledge reserve and accuracy have certain limitations.
In summary, the above modes of physical guide signs and mobile-terminal map navigation in scenic areas suffer from road guidance that is non-intuitive, inaccurate, unclear, and untimely, making it difficult for visitors to find the correct route quickly.
Disclosure of Invention
The application provides a navigation method, computing device, and storage medium based on augmented reality, which can provide intuitive, accurate, and clear road guidance for users in real time from the two dimensions of close view and distant view.
To achieve the above object, in a first aspect, the present application provides an augmented reality-based navigation method, the method comprising:
S1, selecting a plurality of position base points according to geographic information of a target area and distribution information of selected key places, wherein the position base points comprise: first-type position base points corresponding to key places with three-dimensional entity attributes in the target area, and second-type position base points corresponding to blank areas between the key places;
S2, generating a road network model of the target area according to the given actual road connectivity of the target area and the plurality of position base points, wherein the road network model is used for describing the connectivity relation among the plurality of position base points;
S3, drawing an augmented reality three-dimensional model for providing virtual guidance information, wherein the virtual guidance information comprises: at least one of route identifiers and direction identifiers in the close-view path, and route overview information in the distant-view direction;
S4, carrying out pose estimation and pose tracking on the augmented reality three-dimensional model according to the live-action image of the target area and the three-dimensional position information of the plurality of position base points, and determining the position, three-dimensional pose, and relative size with which the augmented reality three-dimensional model is mapped into the real environment of the target area;
S5, determining a target path from the starting position to a destination in the road network model according to the starting position of the user and the destination selected by the user; matching the augmented reality three-dimensional model to the target path, and displaying at least one of route identifiers, direction identifiers, and route overview information of the target path in the distant-view direction of the user's live-action guidance picture.
In one possible implementation manner, the step S2 includes:
for a first position base point and a second position base point of the plurality of position base points, judging according to the given actual road connectivity:
if the first position base point can reach the second position base point without passing through any other position base point, constructing a path pointing from the first position base point to the second position base point in the road network model, and marking the two as a pair of adjacent direct points;
if the first position base point must pass through other position base points to reach the second position base point, not constructing a path pointing from the first position base point to the second position base point in the road network model.
In one possible implementation, the augmented reality three-dimensional model includes a path guidance model providing route identifiers and direction identifiers in the close-view path, and a high-altitude guidance model providing route overview information; the step S3 includes:
S31, carrying out feature point matching, pose estimation, depth perception, and pose tracking on the path guidance model according to the environmental feature points in the live-action image of the target area, and determining the position, three-dimensional pose, and relative size with which the path guidance model is mapped into the real environment of the target area;
S32, carrying out pose estimation and pose tracking on the high-altitude guidance model according to the three-dimensional position information of the plurality of position base points, and determining the position, three-dimensional pose, and relative size with which the high-altitude guidance model is mapped into the real environment of the target area.
In one possible implementation manner, the step S31 includes:
feature point matching: identifying the live-action image of the target area and determining the environmental feature points, wherein the feature points comprise corner points and/or edges; acquiring feature points of the path guidance model, matching the environmental feature points with the model feature points, and determining the position and relative size of the path guidance model in the real environment;
pose estimation: determining the relative three-dimensional pose of a preset virtual camera and the path guidance model according to the correspondence between the environmental feature points and the feature points of the path guidance model, wherein the virtual camera is used to simulate shooting of the path guidance model;
depth perception: determining the size of the dispersion range of the environmental feature points based on the viewing-angle information of the virtual camera and the position information of the environmental feature points, and adjusting the relative size of the path guidance model according to the size of that dispersion range, so that the size of the path guidance model changes with its distance from the virtual camera;
pose tracking: based on the relative three-dimensional pose of the path guidance model, simulating the motion of the virtual camera according to multiple frames of real-time images acquired in real time, tracking the path guidance model across those frames, and updating the position, three-dimensional pose, and relative size of the path guidance model in real time.
In one possible implementation manner, the step S32 includes:
acquiring three-dimensional position information of the plurality of position base points, wherein the three-dimensional position information comprises: longitude and latitude coordinates and altitude;
calculating the relative azimuth, elevation angle, and relative distance between the virtual camera and the high-altitude guidance model according to the three-dimensional position information and the longitude-latitude coordinates and altitude of a preset virtual camera;
adjusting the relative size of the high-altitude guidance model in a preset proportion according to the relative distance; determining the viewing-angle center point of the virtual camera according to the relative azimuth and elevation angle, and adjusting the relative three-dimensional pose of the high-altitude guidance model according to the viewing-angle center point so that the high-altitude guidance model is perpendicular to the line of sight of the virtual camera;
based on the relative three-dimensional pose of the high-altitude guidance model, simulating the motion of the virtual camera according to multiple frames of real-time images acquired in real time, tracking the high-altitude guidance model across those frames, and updating the position, three-dimensional pose, and relative size of the high-altitude guidance model in real time.
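The azimuth/elevation/distance computation above can be sketched as follows. The patent does not specify the formulas; this sketch assumes a local flat-earth approximation (adequate at scenic-area scales), and the function and tuple layout are illustrative:

```python
import math

def camera_to_model_geometry(cam, model):
    """Relative bearing, elevation angle and straight-line distance from a
    virtual camera to a high-altitude guidance model.

    `cam` and `model` are (lat_deg, lon_deg, altitude_m) tuples. A local
    tangent-plane (flat-earth) approximation is used.
    """
    R = 6371000.0  # mean Earth radius, metres
    lat1, lon1, h1 = cam
    lat2, lon2, h2 = model
    # Project the latitude/longitude deltas onto a local tangent plane.
    north = math.radians(lat2 - lat1) * R
    east = math.radians(lon2 - lon1) * R * math.cos(math.radians(lat1))
    up = h2 - h1
    horiz = math.hypot(north, east)
    bearing = math.degrees(math.atan2(east, north)) % 360.0  # 0 deg = due north
    elevation = math.degrees(math.atan2(up, horiz))          # angle above horizon
    distance = math.sqrt(horiz ** 2 + up ** 2)
    return bearing, elevation, distance
```

The relative distance can then drive the preset-proportion size adjustment, and the bearing/elevation pair fixes the viewing-angle center point.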
In one possible embodiment, the route overview information includes at least one of a name of the destination, a route distance between the destination and a current location of the user, and a time required to reach the destination.
In one possible implementation, the high altitude guidance model is used to: any position base point is indicated according to a preset target pattern, and the route overview information corresponding to the position base point is displayed in the form of an information billboard;
the adjusting the relative three-dimensional pose of the high-altitude guidance model according to the viewing-angle center point so that the high-altitude guidance model is perpendicular to the line of sight of the virtual camera comprises:
adjusting the three-dimensional relative pose of the information billboard in the high-altitude guidance model according to the viewing-angle center point, so that the information billboard is always perpendicular to the line of sight of the virtual camera.
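Keeping the billboard perpendicular to the camera's line of sight is classic billboarding. A minimal 2D sketch (assuming positions in a local ground plane; names are illustrative, not from the patent) computes the yaw to re-apply each frame:

```python
import math

def billboard_yaw(cam_xy, board_xy):
    """Yaw (degrees) that turns an information billboard so its front face is
    perpendicular to the virtual camera's line of sight.

    Positions are (x, y) metres in a local ground plane; the board's front
    normal is rotated to point from the board back toward the camera.
    """
    dx = cam_xy[0] - board_xy[0]
    dy = cam_xy[1] - board_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0  # 0 deg = facing +y

# As the virtual camera moves between frames, recompute and re-apply the yaw
# so the billboard text stays readable from the user's viewpoint.
```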
In one possible implementation manner, after the step S1, the method further includes:
acquiring three-dimensional position information and buffer distances of the plurality of position base points;
for each position base point, taking a three-dimensional position corresponding to the three-dimensional position information as a circle center, taking the buffer distance as a radius, and determining a circular area as a buffer area of the position base point in the road network model;
and when the user is positioned in the buffer area of the position base point, the position base point is used as the starting position of the user.
In a second aspect, a computing device is provided, the computing device comprising a memory and a processor, the memory storing at least one program, the at least one program being executable by the processor to implement the augmented reality based navigation method as provided in the first aspect.
In a third aspect, there is provided a computer-readable storage medium having stored therein at least one program that is executed by a processor to implement the augmented reality based navigation method as provided in the first aspect.
The technical scheme provided by the application at least comprises the following technical effects:
the road network structure of complex areas such as scenic spots is modeled accurately and efficiently. On the basis of the road network model, virtual guidance information is drawn using augmented reality technology. When a user needs road guidance, a real-time guidance function is provided, accurately indicating the user's position and the destination to head for; the virtual guidance information is displayed accurately, vividly, and intuitively from the two dimensions of close view and distant view in the live-action guidance picture on the user's device, helping the user quickly find the correct route and improving the efficiency of the guidance service.
Drawings
Fig. 1 is a flow chart of an augmented reality-based navigation method provided in accordance with an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of an implementation environment provided herein according to an exemplary embodiment;
fig. 3 is a schematic hardware structure of a computing device according to an exemplary embodiment provided in the present application.
Detailed Description
To further illustrate the embodiments, the present application provides the accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments and together with the description, serve to explain the principles of the embodiments. With reference to these matters, one of ordinary skill in the art would understand other possible embodiments and the advantages of the present application. The components in the figures are not drawn to scale and like reference numerals are generally used to designate like components. The term "at least one" in this application means one or more, the term "plurality" in this application means two or more, for example, a plurality of location base points means two or more.
The present application will now be further described with reference to the drawings and detailed description.
Aiming at the problems that guidance modes such as physical guide signs and mobile-terminal map navigation in scenic areas are unclear, non-intuitive, and untimely, this application provides a navigation method that can accurately and efficiently model the road network structure of complex areas such as scenic spots, draw virtual guidance information on the basis of the road network model using augmented reality technology, and, when a user needs navigation, display the virtual guidance information accurately, vividly, and intuitively in a live-action guidance picture, helping the user quickly find the correct route and improving the efficiency of the guidance service.
Fig. 1 is a flow chart of a navigation method based on augmented reality according to an exemplary embodiment of the present application. As shown in fig. 1, the method includes the following steps S1 to S5.
S1, selecting a plurality of position base points according to geographic information of a target area and distribution information of a selected key place.
In the embodiment of the application, the position base point is selected as a node of the road network model according to actual conditions such as the topography, the key place distribution and the like of the target area. The key places are preselected places such as entrances of scenic spots, squares, tourist centers, ticket selling places, intersections, scenic spots, toilets, restaurants and the like.
Wherein, the position base points include: first-type position base points corresponding to key places with three-dimensional entity attributes in the target area, and second-type position base points corresponding to blank areas between the key places.
The target area may be an area including a plurality of reachable places and a plurality of traffic roads, such as a scenic spot or an urban area. The geographic information describes the topography and topography of the target area. The key location distribution information describes a distribution of each key location within the target area, and illustratively, the key location distribution information describes a distribution of each key location in the form of position coordinates.
Specifically, the first type of location base points have physical features in the real world as references and possess entity attributes, such as scenic-area entrances, squares, visitor centers, ticket offices, intersections, attractions, toilets, and restaurants. Further, considering that key places in the target area may be sparsely distributed, the first-type base points alone may not fully cover the target area; second-type base points are therefore selected in the blank areas between key places as a supplement. The second type of base points have no real-world physical feature as a reference and serve only as a supplement. Alternatively, the midpoint of the line between two key places may be taken as a second-type base point; or an area of preset size may be randomly selected within the blank area between the buffer areas of two key places to serve as a buffer area (the buffer area is defined below), with the center point of that buffer area as the second-type base point.
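The midpoint option above can be sketched as follows (the function name and tuple layout are illustrative, not from the patent; a simple arithmetic midpoint suffices at scenic-area scales):

```python
def second_type_base_point(place_a, place_b):
    """Midpoint of the line between two key places, used as a second-type
    location base point in the blank area between them.

    Points are (lat, lon, altitude) tuples.
    """
    return tuple((a + b) / 2.0 for a, b in zip(place_a, place_b))
```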
Through the technical scheme, the position points with guiding value can be selected according to the real topography of the target area and the road path condition to serve as the construction basis of the road network model, and the accuracy and the practicability of the road network model are ensured.
After determining a plurality of location base points in step S1, information supplementation is further performed on each of the location base points that have been selected. The supplementing process comprises the following steps:
three-dimensional position information and buffer distances of a plurality of position base points are obtained; and determining a circular area as a buffer area of the position base point in the road network model by taking the three-dimensional position corresponding to the three-dimensional position information as a circle center and the buffer distance as a radius aiming at each position base point. The three-dimensional position information includes: longitude, latitude, altitude (elevation).
When the user is located in the buffer area of the location base point, the location base point is used as the starting location of the user. Therefore, when the road guidance is carried out, the starting point position of the user can be determined according to the position of the user and the position conditions between the buffer areas of the base points of the positions.
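A minimal sketch of this buffer-area check, assuming each base point carries a name, a (lat, lon) position, and a buffer distance in metres (all field names are illustrative, not from the patent):

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between (lat, lon) pairs in degrees."""
    R = 6371000.0
    la1, lo1, la2, lo2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def starting_base_point(user_pos, base_points):
    """Return the name of the first base point whose circular buffer contains
    the user, or None if the user is outside every buffer.

    `base_points` is a list of dicts with 'name', 'pos' (lat, lon) and
    'buffer_m' (the buffer distance, i.e. the circle radius) keys.
    """
    for bp in base_points:
        if haversine_m(user_pos, bp["pos"]) <= bp["buffer_m"]:
            return bp["name"]
    return None
```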
S2, generating a road network model of the target area according to the given actual road connectivity of the target area and the plurality of position base points.
The road network model is used for describing the connectivity relation among the plurality of position base points.
In the embodiment of the application, generating the road network model includes generating corresponding paths according to the connectivity relations between the position base points; specifically, the paths can be generated according to the actual road connectivity.
In one possible implementation, for a first position base point and a second position base point of the plurality of position base points, judging according to the given actual road connectivity: if the first position base point can reach the second position base point without passing through any other position base point, a path pointing from the first position base point to the second position base point is constructed in the road network model, and the two are marked as a pair of adjacent direct points; if the first position base point must pass through other position base points to reach the second position base point, no path pointing from the first position base point to the second position base point is constructed in the road network model.
Specifically, take selected position base points A, B, and C as an example. If position base point A is adjacent to position base point B, and according to the actual road connectivity of the target area A can reach B directly without passing through any other position base point, then B is an adjacent direct point of A, and a path pointing from A to B is constructed in the road network model. If position base point A is adjacent to position base point C, but according to the actual road connectivity A cannot reach C directly and must pass through other position base points, then C is not an adjacent direct point of A, and no path between A and C is constructed in the road network model.
According to the above process, the adjacent direct points of all position base points can be selected and the corresponding paths drawn, completing the construction of the road network model.
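The construction rule above can be sketched as an adjacency list, where only pairs of adjacent direct points receive a path (function and variable names are illustrative; bidirectional footpaths are an assumption):

```python
def build_road_network(base_points, direct_links):
    """Adjacency-list road network model.

    `base_points` is a list of base-point names; `direct_links` lists pairs
    (u, v) that, per the actual road connectivity, can reach each other
    without passing through any other base point ("adjacent direct points").
    Pairs that would require an intermediate base point are simply omitted,
    so no path is created for them.
    """
    network = {p: set() for p in base_points}
    for u, v in direct_links:
        network[u].add(v)
        network[v].add(u)  # scenic-area footpaths assumed walkable both ways
    return network
```

With the A/B/C example above, `build_road_network(["A", "B", "C"], [("A", "B"), ("B", "C")])` yields a path A–B but none between A and C.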
Through the above process, a plurality of position base points with three-dimensional position attributes can be selected across the whole target area, and the connection state between base points can be set according to the real topography and road connectivity of the target area, accurately forming the overall road network structure of the target area and providing a data basis for accurate and intuitive road guidance.
And S3, drawing an augmented reality three-dimensional model for providing virtual guide information.
In the embodiment of the application, any three-dimensional drawing software may be used to draw the augmented reality three-dimensional model, for example, the three-dimensional drawing software may be C4D (Cinema 4D).
Wherein the virtual guide information includes: at least one of route identification in the close-up path, direction identification, and route overview information in the distant view direction.
In an embodiment of the application, the augmented reality three-dimensional model includes a path guidance model providing route identifiers and direction identifiers in the close-view path, and a high-altitude guidance model providing route overview information. Both are three-dimensional models built according to the real scene to be augmented: the path guidance model augments the close-view path, while the high-altitude guidance model augments position base points (key places) in the distant-view direction. The path guidance model is displayed at a lower height and a shorter distance from the user; the high-altitude guidance model is displayed at a greater height and a longer distance from the user.
In one possible implementation, the route guidance model is used to display route identifications and direction identifications attached to close-up routes on a live-action screen, for example, an arrow guiding the route is displayed on the route in the live-action screen, the arrow pointing in the forward direction.
In one possible embodiment, the route overview information includes at least one of a name of the destination, a route distance between the destination and a current location of the user, and a time required to reach the destination. The route overview information may be obtained by invoking a geographic information system (Geographic Information System, GIS) service.
In one possible implementation, the high-altitude guidance model is used to indicate any position base point with a preset target pattern and to display the route overview information corresponding to that base point in the form of an information billboard. The high-altitude guidance model is constructed according to the three-dimensional position coordinates of the plurality of position base points; when a position base point appears in the distant-view direction of the live-action picture shot by the user's smart device, the model indicates that base point with the preset target pattern and displays its route overview information as an information billboard. The target pattern is, for example, a hot-air-balloon pattern or an airship pattern.
Through the technical scheme, the augmented reality three-dimensional model adapting to the target area is accurately constructed, virtual guide information of two dimensions of a close view and a distant view can be provided, the interestingness of the road guide is enriched, and the intuitiveness of the road guide is improved.
And S4, carrying out pose estimation and pose tracking on the augmented reality three-dimensional model according to the live-action image of the target area and the three-dimensional position information of the plurality of position base points, and determining the position, three-dimensional pose, and relative size with which the augmented reality three-dimensional model is mapped into the real environment of the target area.
After the construction of the augmented reality three-dimensional model (the two guidance models described above) is completed, the model is projected into the real environment, and two different pose-confirmation methods are adopted for the two guidance models respectively, described below as step S31 and step S32.
And S31, carrying out feature point matching, pose estimation, depth perception, and pose tracking on the path guidance model according to the environmental feature points in the live-action image of the target area, and determining the position, three-dimensional pose, and relative size with which the path guidance model is mapped into the real environment of the target area.
In one possible embodiment, step S31 includes steps A-E described below.
A. Monitoring feature points of the real environment. For example, a smartphone camera may be used to capture a live-action image of the real environment of the target area.
B. Feature point matching. Illustratively, a given machine vision algorithm is used to recognize the captured live-action image of the target area and determine the environmental feature points, which include corner points and/or edges; the feature points of the path guidance model are then obtained, the environmental feature points are matched against them, and the position and relative size of the path guidance model in the real environment are determined. These environmental feature points can also be used for localization and tracking in subsequent steps.
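As a minimal sketch of this matching step (the patent names no specific algorithm; ORB-style binary descriptors, the brute-force search, and the ratio-test threshold below are all assumptions), nearest-neighbour Hamming matching with Lowe's ratio test might look like:

```python
import numpy as np

def match_descriptors(env_desc, model_desc, ratio=0.75):
    """Match ORB-style binary descriptors (uint8 rows) by Hamming distance.

    Returns (env_idx, model_idx) pairs that pass Lowe's ratio test;
    model_desc must contain at least two descriptors.
    """
    matches = []
    for i in range(len(env_desc)):
        # Hamming distance from env descriptor i to every model descriptor.
        dists = np.unpackbits(env_desc[i] ^ model_desc, axis=1).sum(axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Keep the match only if it is clearly better than the runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The surviving pairs give the 2D/3D correspondences used for localization in the subsequent steps.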
C. Pose estimation. Illustratively, the relative three-dimensional pose between a preset virtual camera and the path guidance model is determined from the correspondence between the environmental feature points and the feature points of the path guidance model, where the virtual camera is used to simulate the shooting of the path guidance model. That is, once the relative three-dimensional pose is known, it can be determined which face of the path guidance model should face the virtual camera and in which pose it is presented.
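One way to realise this pose estimation, sketched here under the assumption that the matched feature points are available as 3D coordinates in both the model frame and the environment frame, is the Kabsch algorithm (the patent does not specify an algorithm; with 2D-3D correspondences a PnP solver such as OpenCV's `solvePnP` would be the usual choice instead):

```python
import numpy as np

def estimate_pose(model_pts, env_pts):
    """Kabsch: rigid rotation R and translation t with env ≈ model @ R.T + t,
    given row-wise corresponding 3D points in both frames."""
    mc, ec = model_pts.mean(axis=0), env_pts.mean(axis=0)
    H = (model_pts - mc).T @ (env_pts - ec)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ec - R @ mc
    return R, t
```

The recovered (R, t) tells the renderer which face of the model the virtual camera sees and at what orientation.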
D. Depth perception. Illustratively, the size of the dispersion range of the environmental feature points is determined from the view-angle information of the virtual camera and the position information of the environmental feature points, and the relative size of the path guidance model is adjusted according to that dispersion range so that its size changes with the distance between the model and the virtual camera. In other words, by adjusting the relative size of the path guidance model, it is rendered on the screen of the user's display device at an appropriate distance scale: the model becomes smaller when farther from the camera and larger when closer. This yields a more realistic guidance effect.
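A hedged sketch of the dispersion-based scaling (the linear relation between pixel spread and model scale is an assumption; the patent only states that the model size should follow the distance implied by the feature-point dispersion):

```python
import numpy as np

def dispersion_scale(env_points_px, base_spread_px):
    """Scale factor from the pixel dispersion of matched feature points:
    a wider spread means the viewed region is closer to the camera,
    so the model is rendered proportionally larger."""
    spread = env_points_px.std(axis=0).mean()   # average per-axis std dev
    return spread / base_spread_px
```

`base_spread_px` is a calibration value (the spread observed at the reference distance); it is illustrative, not part of the original disclosure.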
E. Pose tracking. Illustratively, based on the relative three-dimensional pose of the path guidance model, the motion of the virtual camera is simulated from the multiple frames of live-action images acquired in real time, the path guidance model is tracked across those frames, and its position, three-dimensional pose and relative size are updated in real time. Iterative updating keeps the path guidance model consistent with the real environment, so that its position, three-dimensional pose and size are accurately mapped into the real world and the user sees, on the screen of the display device, a precise projection of the virtual guide information provided by the path guidance model in the real environment.
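The frame-by-frame update can be sketched as a small tracker that smooths each new measurement against the previous state (the exponential-smoothing factor and the flat state tuple are illustrative assumptions; a real system would obtain the per-frame measurement from SLAM or optical-flow tracking):

```python
class ModelTracker:
    """Per-frame update of the model's position/pose/scale parameters with
    exponential smoothing, a stand-in for the iterative updating the
    tracking step describes."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha   # weight of the newest measurement
        self.state = None    # smoothed parameters from the last frame

    def update(self, measured):
        """measured: tuple of scalar parameters estimated from this frame."""
        if self.state is None:
            self.state = tuple(measured)
        else:
            self.state = tuple(self.alpha * m + (1 - self.alpha) * s
                               for m, s in zip(measured, self.state))
        return self.state
```

Smoothing suppresses per-frame estimation jitter so the overlaid model does not visibly shake against the live-action picture.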
S32, performing pose estimation and pose tracking on the high-altitude guidance model according to the three-dimensional position information of the plurality of position base points, and determining the position, three-dimensional pose and relative size with which the high-altitude guidance model is mapped into the real environment of the target area.
Considering that the high-altitude guidance model is displayed at a greater height, its surroundings are mostly sky from the shooting angle of a user on the ground, and few real-environment feature points are available for matching. A coordinate-positioning method is therefore adopted for the high-altitude guidance model. In one possible embodiment, step S32 includes steps 1-4 described below.
Step 1, acquiring the three-dimensional position information of the plurality of position base points, including longitude and latitude coordinates and altitude. The absolute coordinates of the position base points serve as the calibration in this step.
Step 2, calculating the relative azimuth, elevation angle and relative distance between the virtual camera and the high-altitude guidance model according to the three-dimensional position information of the plurality of position base points and the longitude, latitude and altitude of a preset virtual camera.
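Step 2 can be sketched as follows, using a flat local-ENU approximation around the camera (an assumption adequate for the short ranges inside a scenic area; the patent does not prescribe the geodetic math):

```python
import math

EARTH_R = 6_371_000.0  # mean Earth radius, metres

def camera_to_model(cam, model):
    """cam/model = (lat_deg, lon_deg, alt_m).

    Returns (azimuth_deg clockwise from north, elevation_deg,
    slant_distance_m) from the camera to the model anchor point."""
    lat0 = math.radians(cam[0])
    north = math.radians(model[0] - cam[0]) * EARTH_R
    east = math.radians(model[1] - cam[1]) * EARTH_R * math.cos(lat0)
    up = model[2] - cam[2]
    horiz = math.hypot(east, north)
    azimuth = math.degrees(math.atan2(east, north)) % 360.0
    elevation = math.degrees(math.atan2(up, horiz))
    return azimuth, elevation, math.sqrt(horiz ** 2 + up ** 2)
```

The azimuth and elevation locate the model in the camera's field of view; the slant distance drives the scaling in step 3.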
Step 3, adjusting the relative size of the high-altitude guidance model at a preset scale according to the relative distance; determining the view-angle center point of the virtual camera from the relative azimuth and elevation angle, and adjusting the relative three-dimensional pose of the high-altitude guidance model according to that center point so that the model is perpendicular to the line of sight of the virtual camera.
In one possible implementation, the high-altitude guidance model is used to indicate any position base point with a preset target pattern and to display the route overview information corresponding to that base point in the form of an information billboard. In this case, step 3 specifically includes: adjusting, according to the view-angle center point, the relative three-dimensional pose of the information billboard in the high-altitude guidance model so that the billboard always remains perpendicular to the line of sight of the virtual camera.
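The billboard orientation in step 3 can be sketched as computing the yaw and pitch that turn the board's normal back toward the camera (the coordinate convention — x east, y north, z up — and the angle definitions are assumptions for illustration):

```python
import math

def billboard_angles(cam_pos, board_pos):
    """Yaw (about the vertical axis, degrees) and pitch (degrees) so that
    the information board's normal points along the camera's line of
    sight, keeping the board perpendicular to it."""
    dx, dy, dz = (c - b for c, b in zip(cam_pos, board_pos))
    yaw = math.degrees(math.atan2(dx, dy))               # heading toward camera
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return yaw, pitch
```

Re-evaluating these angles every frame is what keeps the billboard facing the user as the camera moves.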
Step 4, based on the relative three-dimensional pose of the high-altitude guidance model, simulating the motion of the virtual camera from the multiple frames of live-action images acquired in real time, tracking the high-altitude guidance model across those frames, and updating its position, three-dimensional pose and relative size in real time.
In some embodiments, the preset virtual cameras used in step S31 and step S32 perform pose estimation and pose tracking on their respective models from different perspectives. For example, the virtual camera in step S31 performs pose estimation and pose tracking on the path guidance model from an eye-level perspective, while the virtual camera in step S32 performs them on the high-altitude guidance model from an upward-looking perspective.
S5, determining a target path from the starting position to the destination in the road network model according to the starting position of the user and the destination selected by the user; matching the augmented reality three-dimensional model to the target path, and displaying, in the user's live-action guidance picture, at least one of the route identifier and direction identifier of the target path and the route overview information in the distant-view direction.
In the embodiment of the application, the starting position of the user is determined by the buffer area in which the user's current position falls: when the user is located in the buffer area of a position base point, that base point is taken as the starting position. The user's destination may be any key place in the target area or a location in a blank area between key places; correspondingly, the destination is the position base point corresponding to the buffer area in which the selected destination lies.
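The buffer-area test for the starting position can be sketched with a haversine distance check (the tuple layout of a base point and the function names are illustrative, not part of the original disclosure):

```python
import math

EARTH_R = 6_371_000.0  # mean Earth radius, metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_R * math.asin(math.sqrt(a))

def starting_base_point(user_latlon, base_points):
    """base_points: iterable of (name, lat, lon, buffer_m). Returns the
    first base point whose circular buffer contains the user, else None."""
    for name, lat, lon, buf in base_points:
        if haversine_m(user_latlon[0], user_latlon[1], lat, lon) <= buf:
            return name
    return None
```

The same check, applied to the selected destination, yields the destination base point.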
In the embodiment of the application, the path guidance model and the high-altitude guidance model are three-dimensional models built from the real scene to be augmented: the path guidance model augments the close-view path, while the high-altitude guidance model augments the position base points (key places) in the distant-view direction.
In one possible implementation, the path guidance model in the augmented reality three-dimensional model is matched to the target path according to the live-action picture acquired in real time by the user's display device, and the route identifier and direction identifier attached to the close-view path are displayed in the live-action guidance picture; for example, a guidance arrow is drawn on a route in the live-action picture, pointing in the direction of travel.
In one possible embodiment, the route overview information includes at least one of the name of the destination, the route distance between the destination and the user's current position, and the time required to reach the destination. The route overview information may be obtained by invoking a geographic information system (Geographic Information System, GIS) service.
In one possible implementation, the high-altitude guidance model in the augmented reality three-dimensional model is matched to the target path according to the live-action picture acquired in real time by the user's display device; a position base point in the distant-view direction is indicated in the live-action guidance picture with the preset target pattern, and the route overview information corresponding to that base point is displayed as an information billboard. That is, when a position base point lies in the distant-view direction of the live-action picture captured by the user's smart device, the high-altitude guidance model can mark it with the preset target pattern and display its route overview information in billboard form.
It should be noted that the live-action guidance picture may simultaneously contain the route and direction identifiers for the close-view path and the route overview information in the distant-view direction; which elements are matched and displayed depends on what appears in the live-action picture acquired by the user's device.
In one possible implementation, the user's display device supports simultaneous localization and mapping (Simultaneous Localization And Mapping, SLAM) and can acquire images, localize, and map the surrounding environment in real time, thereby supporting real-time display of the live-action guidance picture.
Through the technical scheme above, the road network structure of a complex area such as a scenic spot can be modeled accurately and efficiently. On top of the road network model, augmented reality is used to draw virtual guide information, providing a real-time guidance function whenever the user needs directions: the user's position and intended destination are indicated precisely, and the virtual guide information is displayed accurately, vividly and intuitively in the live-action guidance picture of the user's device, helping the user quickly find the correct route and improving the efficiency of the guidance service. In addition, intuitive, accurate and clear road guidance can be provided in real time in both the close-view and distant-view dimensions.
In one possible implementation, personalized and interactive functions are provided in the live-action guidance picture. Specifically, personalized navigation content may be offered according to the user's interests and preferences, for example customized navigation information for different age groups, language preferences, or visitors with special needs. Interactive elements such as games and puzzles may also be displayed in the live-action guidance picture to increase visitors' engagement and the entertainment value of the navigation.
In one possible implementation, the navigation content displayed in the guidance interface may be updated in real time. For example, detailed information such as rich attraction introductions, historical and cultural background, and story narration can be provided so that visitors understand the scenic spot better. At the same time, administrators can update the navigation content in real time to adapt to changes in the scenic spot and to temporary events.
In conclusion, the technical scheme provided by the application can effectively improve navigation service quality and user satisfaction. It delivers more accurate, clear and convenient navigation information, helps visitors find destinations more easily, and reduces instances of getting lost and wasted time. By improving the navigation experience, providing personalized services, increasing interactivity, and allowing information to be updated at any time, it brings further advantages and benefits.
FIG. 2 is a schematic diagram of an implementation environment provided herein according to an exemplary embodiment. As shown in FIG. 2, the solution provided in the embodiment of the present application can be executed by a computing device, which, following steps S1 to S5 above, provides the display device used by the user with a live-action guidance picture containing virtual guide information along the user's target path. The display device shows the live-action guidance picture in real time, ultimately achieving intuitive destination guidance through an augmented reality live-action picture and providing an immersive guidance experience. The virtual guide information is combined with the actual environment and displayed through a mobile phone, tablet, AR glasses or similar device, so that visitors can immersively experience the scenery and the history and culture of the scenic area.
The computing device may be a single physical server, a server cluster or distributed file system formed by multiple physical servers, or a cloud server cluster providing basic cloud computing services such as cloud storage, cloud services, cloud databases, cloud computing, cloud functions, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (content delivery network, CDN), big data, and artificial intelligence platforms; this is not limited in this application. The display device may be a smartphone, tablet, augmented reality/virtual reality device (AR/VR glasses), etc.
The augmented-reality-based navigation method provided by the application can be executed by a computing device. FIG. 3 is a schematic diagram of the hardware structure of a computing device provided in an embodiment of the present application. As shown in FIG. 3, the computing device includes a processor 301, a memory 302, a bus 303, and a computer program stored in the memory 302 and runnable on the processor 301. The processor 301 includes one or more processing cores; the memory 302 is connected to the processor 301 through the bus 303 and stores program instructions. When the processor executes the computer program, all or part of the steps in the foregoing method embodiments provided in the present application are implemented.
Further, as an executable scheme, the computing device may be a computer unit, which may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The computer unit may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the structure described above is merely an example of a computer unit and does not constitute a limitation; the unit may include more or fewer components, combine certain components, or use different components. For example, the computer unit may further include input/output devices, network access devices, buses, and the like, which are not limited in this embodiment.
Further, as an implementation, the processor may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor; it is the control center of the computer unit and connects the various parts of the entire unit using various interfaces and lines.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the computer unit by running or executing the computer program and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and at least one application required for a function, while the data storage area may store data created according to the use of the device. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The present application also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the methods described above in the embodiments of the present application.
If the modules/units integrated in the computer unit are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a software distribution medium, and so on. It should be noted that the content of the computer-readable medium may be increased or decreased as required by legislation and patent practice in the relevant jurisdiction.
While this application has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the application as defined by the appended claims.

Claims (10)

1. An augmented reality-based navigation method, the method comprising:
S1, selecting a plurality of position base points according to geographic information of a target area and distribution information of selected key places, wherein the position base points comprise: first-type position base points corresponding to key places having three-dimensional entity attributes in the target area, and second-type position base points corresponding to blank areas between the key places;
S2, generating a road network model of the target area according to given actual road connectivity conditions of the target area and the plurality of position base points, wherein the road network model is used for describing the connectivity relationships among the plurality of position base points;
S3, drawing an augmented reality three-dimensional model for providing virtual guide information, wherein the virtual guide information comprises: at least one of a route identifier and a direction identifier in the close-view path, and route overview information in the distant-view direction;
S4, performing pose estimation and pose tracking on the augmented reality three-dimensional model according to the live-action image of the target area and the three-dimensional position information of the plurality of position base points, and determining the position, three-dimensional pose and relative size with which the augmented reality three-dimensional model is mapped into the real environment of the target area;
S5, determining a target path from the starting position to a destination in the road network model according to the starting position of the user and the destination selected by the user; and matching the augmented reality three-dimensional model to the target path, and displaying, in the user's live-action guidance picture, at least one of the route identifier and direction identifier of the target path and the route overview information in the distant-view direction.
2. The navigation method according to claim 1, wherein the step S2 comprises:
judging, for a first position base point and a second position base point of the plurality of position base points, according to the given actual road connectivity conditions:
if the first position base point can reach the second position base point without passing through other position base points, constructing a path pointing to the second position base point from the first position base point in the road network model, and marking the first position base point and the second position base point as a pair of adjacent straight-through points;
If the first location base point passes other location base points to reach the second location base point, a path pointing from the first location base point to the second location base point is not constructed in the road network model.
3. The navigation method of claim 1, wherein the augmented reality three-dimensional model includes a path guidance model providing the route identifier and direction identifier in the close-view path, and a high-altitude guidance model providing the route overview information; the step S3 includes:
S31, performing feature point matching, pose estimation, depth perception and pose tracking on the path guidance model according to the environmental feature points in the live-action image of the target area, and determining the position, three-dimensional pose and relative size with which the path guidance model is mapped into the real environment of the target area;
S32, performing pose estimation and pose tracking on the high-altitude guidance model according to the three-dimensional position information of the plurality of position base points, and determining the position, three-dimensional pose and relative size with which the high-altitude guidance model is mapped into the real environment of the target area.
4. A navigation method according to claim 3, wherein the step S31 comprises:
Feature point matching: identifying the live-action image of the target area and determining the environmental feature points, which include corner points and/or edges; acquiring feature points of the path guidance model, matching the environmental feature points against them, and determining the position and relative size of the path guidance model in the real environment;
Pose estimation: determining the relative three-dimensional pose between a preset virtual camera and the path guidance model according to the correspondence between the environmental feature points and the feature points of the path guidance model, wherein the virtual camera is used to simulate the shooting of the path guidance model;
Depth perception: determining the size of the dispersion range of the environmental feature points based on the view-angle information of the virtual camera and the position information of the environmental feature points, and adjusting the relative size of the path guidance model according to that dispersion range so that the size of the path guidance model changes with the distance between the path guidance model and the virtual camera;
Pose tracking: based on the relative three-dimensional pose of the path guidance model, simulating the motion of the virtual camera according to multiple frames of live-action images acquired in real time, tracking the path guidance model across those frames, and updating the position, three-dimensional pose and relative size of the path guidance model in real time.
5. A navigation method according to claim 3, wherein the step S32 comprises:
acquiring three-dimensional position information of the plurality of position base points, wherein the three-dimensional position information comprises: longitude and latitude coordinates and altitude;
calculating the relative azimuth, elevation angle and relative distance between the virtual camera and the high-altitude guidance model according to the three-dimensional position information and the longitude, latitude and altitude of a preset virtual camera;
adjusting the relative size of the high-altitude guidance model at a preset scale according to the relative distance; determining the view-angle center point of the virtual camera according to the relative azimuth and elevation angle, and adjusting the relative three-dimensional pose of the high-altitude guidance model according to the view-angle center point so that the high-altitude guidance model is perpendicular to the line of sight of the virtual camera;
based on the relative three-dimensional pose of the high-altitude guidance model, simulating the motion of the virtual camera according to multiple frames of live-action images acquired in real time, tracking the high-altitude guidance model across those frames, and updating the position, three-dimensional pose and relative size of the high-altitude guidance model in real time.
6. The navigation method of claim 5, wherein the route overview information includes at least one of a name of a destination, a route distance between the destination and a current location of a user, and a time required to reach the destination.
7. The navigation method of claim 5, wherein the high-altitude guidance model is used to: indicate any position base point according to a preset target pattern, and display the route overview information corresponding to the position base point in the form of an information billboard;
the adjusting the relative three-dimensional pose of the high-altitude guidance model according to the view-angle center point so that the high-altitude guidance model is perpendicular to the line of sight of the virtual camera comprises:
adjusting, according to the view-angle center point, the relative three-dimensional pose of the information billboard in the high-altitude guidance model so that the information billboard always remains perpendicular to the line of sight of the virtual camera.
8. The navigation method according to claim 1, wherein after step S1, the method further comprises:
acquiring three-dimensional position information and buffer distances of the plurality of position base points;
for each position base point, taking a three-dimensional position corresponding to the three-dimensional position information as a circle center, taking the buffer distance as a radius, and determining a circular area as a buffer area of the position base point in the road network model;
and when the user is positioned in the buffer area of the position base point, the position base point is used as the starting position of the user.
9. A computing device comprising a memory and a processor, the memory storing at least one program, the at least one program being executable by the processor to implement the augmented reality based navigation method of any one of claims 1 to 8.
10. A computer-readable storage medium, wherein at least one program is stored in the storage medium, the at least one program being executed by a processor to implement the augmented reality based navigation method of any one of claims 1 to 8.
CN202311133402.4A 2023-09-05 2023-09-05 Navigation method based on augmented reality, computing device and storage medium Pending CN117419713A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311133402.4A CN117419713A (en) 2023-09-05 2023-09-05 Navigation method based on augmented reality, computing device and storage medium


Publications (1)

Publication Number Publication Date
CN117419713A true CN117419713A (en) 2024-01-19

Family

ID=89527290



Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117853694A (en) * 2024-03-07 2024-04-09 河南百合特种光学研究院有限公司 Virtual-real combined rendering method of continuous depth



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination