CN117739995A - System and method for realizing navigation and space-time backtracking based on shooting and live-action map - Google Patents


Info

Publication number
CN117739995A
Authority
CN
China
Prior art keywords
grid
user
image
mode
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410189446.7A
Other languages
Chinese (zh)
Other versions
CN117739995B (en)
Inventor
姚树元
王向春
陈庆聪
陈健
张旭东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Kingtop Information Technology Co Ltd
Original Assignee
Xiamen Kingtop Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Kingtop Information Technology Co Ltd
Priority to CN202410189446.7A
Publication of CN117739995A
Application granted
Publication of CN117739995B
Legal status: Active
Anticipated expiration


Landscapes

  • Navigation (AREA)

Abstract

The invention provides a system and method for realizing navigation and space-time backtracking based on captured images and a live-action (real-scene) three-dimensional map. The system comprises a grid division module, an information association module, a positioning correction module, a navigation module, and a space-time backtracking module. The grid division module divides the three-dimensional space that a user can photograph into non-overlapping grids. The information association module associates images captured by the user with the time, spatial position, environment, event, social, health, and emotion information at the moment of capture. The positioning correction module assists the user in correcting positioning within the live-action three-dimensional map by means of captured images. The navigation module helps the user plan routes in the live-action three-dimensional map based on images received from other users. The space-time backtracking module helps the user trace back the data associated with captured images on the basis of the live-action three-dimensional map.

Description

System and method for realizing navigation and space-time backtracking based on shooting and live-action map
Technical Field
The invention relates to the field of navigation, and in particular to a system and method for realizing navigation and space-time backtracking based on captured images and live-action maps.
Background
Mobile terminals are now commonly equipped with positioning and camera components. When a user takes a photograph or video with such a device, the resulting image file contains latitude and longitude information; by transmitting the image file and decoding the position information, navigation between two positions becomes possible. Patent application 2007101601763 describes this technology in detail, but it is relatively dated. An image file can in fact carry not only position information but also various kinds of space-time data collected by the terminal, and that related information can be traced back from the image. Electronic maps for navigation have meanwhile entered the live-action (real-scene) three-dimensional stage: a live-action three-dimensional map presents the geographic environment more realistically, its data can serve as a reference and sample against captured image data to assist positioning correction, and its elevation information enables more refined navigation.
Disclosure of Invention
In order to improve on the older technology, namely to perform navigation and to trace back image-related information through captured images within a live-action three-dimensional map, a system and method for realizing navigation and space-time backtracking based on captured images and live-action maps is designed.
The technical scheme adopted by the invention is a system for realizing navigation and space-time backtracking based on captured images and live-action maps:
The system comprises a grid division module, an information association module, a positioning correction module, a navigation module, and a space-time backtracking module.
The grid division module divides the three-dimensional space that a user can photograph into non-overlapping grids, as follows:
A1. Inside a building, grid division takes each room as the basic grid unit, with grid boundaries aligned to room boundaries. The grid code takes the form I + address code + room code, where I denotes an indoor grid. The address code is precise to the house, in the form administrative division code + street/courtyard code + building/entrance code + unit/house code. The room code consists of six digits: the first three give the room's order from north to south, and the last three give its order from west to east.
A2. Outside buildings, the space within a street courtyard is uniformly divided into three-dimensional grids. The grid code takes the form O + address code + grid-ordering code, where O denotes an outdoor grid. The address code is precise to the street courtyard, in the form administrative division code + street/courtyard code. The grid-ordering code consists of nine digits: the first three give the grid's order from south to north, the middle three its order from west to east, and the last three its order from bottom to top.
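As an illustration of the coding scheme in A1 and A2, the following minimal Python sketch builds both code forms. Only the room code (six digits) and the grid-ordering code (nine digits) are fixed by the text above; the address-code values used here are invented for the example.

```python
def indoor_grid_code(address_code: str, room_ns: int, room_we: int) -> str:
    """A1: I + address code + six-digit room code
    (first three digits: order north to south; last three: west to east)."""
    return f"I{address_code}{room_ns:03d}{room_we:03d}"

def outdoor_grid_code(address_code: str, sn: int, we: int, bt: int) -> str:
    """A2: O + address code + nine-digit grid-ordering code
    (south to north, west to east, bottom to top, three digits each)."""
    return f"O{address_code}{sn:03d}{we:03d}{bt:03d}"

# Hypothetical address codes, for illustration only.
print(indoor_grid_code("110101001002003004", room_ns=2, room_we=5))
# -> I110101001002003004002005
print(outdoor_grid_code("110101001", sn=12, we=7, bt=3))
# -> O110101001012007003
```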
The information association module associates images captured by the user with the time, spatial position, environment, event, social, health, and emotion information at the moment of capture, as follows:
B1. When an image is generated, the time, spatial position, environment, and health data collected by the terminal are recorded. The spatial position data is the code of the grid in which the image was taken; environmental data include weather, temperature, and humidity; health data include heart rate, blood pressure, exercise, sleep, blood oxygen saturation, and respiration data.
The user supplements event data and social data. Event data include the event type and participants; event types include business activities, celebrations, cultural activities, public welfare activities, daily life, travel and adventure, sports events, artistic activities, natural landscapes, disaster events, and accident events, and the user can create other event types. Social data include person information and person relationships.
Emotion data are derived by inference from the following four forms of analysis:
(1) inferring the user's emotional state from physiological data collected by the user's terminal device, including heart rate and respiratory rate, where an elevated heart rate is associated with anxiety and anger, and changes in respiration with tension and relaxation;
(2) inferring the user's emotion from the language structure, vocabulary, and emoticons used in text, social media, and chat conversations;
(3) analyzing the user's voice in social communication and inferring the emotional state from speech features such as tone, speaking rate, and volume changes;
(4) analyzing the user's facial expressions in video social interactions to infer the emotional state.
B2. Images are classified by shooting purpose:
(1) images mainly recording environmental information are classified as environment images;
(2) images mainly recording event information are classified as event images;
(3) images mainly recording social information are classified as social images;
(4) images mainly recording health information are classified as health images;
(5) images mainly recording emotion information are classified as emotion images.
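One possible way to realize the association described in B1 and B2 is a per-image record such as the sketch below; the field names and the category enumeration are assumptions for illustration, not structures prescribed by the invention.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional

class ImageCategory(Enum):
    ENVIRONMENT = "environment"   # B2 (1)
    EVENT = "event"               # B2 (2)
    SOCIAL = "social"             # B2 (3)
    HEALTH = "health"             # B2 (4)
    EMOTION = "emotion"           # B2 (5)

@dataclass
class ImageRecord:
    path: str                      # the image file itself
    taken_at: datetime             # time recorded when the image is generated
    grid_code: str                 # spatial position: code of the grid (A1/A2)
    environment: dict              # weather, temperature, humidity
    health: dict                   # heart rate, blood pressure, sleep, SpO2, ...
    category: ImageCategory        # classification by shooting purpose (B2)
    event: Optional[dict] = None   # user-supplied event type and participants
    social: Optional[dict] = None  # user-supplied person info and relationships
    emotion: Optional[str] = None  # state inferred by the four analyses in B1
```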
The positioning correction module assists the user in correcting positioning in the live-action three-dimensional map by means of captured images, as follows:
C1. Let the positioning error of the mobile terminal's positioning component be ΔR, and let the original grid associated with the user's captured image be Q.
C2. Take the geometric centroid of grid Q as the center of a sphere, and take the spatial range within distance ΔR of that center as the comparison space; grids inside the comparison space, together with grids partially overlapping it, are classified as grids to be compared.
C3. Locate each grid to be compared in the live-action three-dimensional map according to its corresponding real-world position. At the centroid of each grid to be compared, collect live-action three-dimensional map images in six directions, straight up, straight down, due east, due west, due north, and due south, as comparison images. A comparison image is coded as grid code + azimuth code, with azimuth codes U (up), D (down), E (east), W (west), N (north), and S (south).
C4. Compare the user's captured image with the comparison images and select the comparison image P with the highest degree of overlap.
C5. When the grid code associated with the user's captured image differs from the grid code associated with image P, replace the former with the latter.
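The correction loop of C1 to C5 can be sketched as follows. The helpers for enumerating candidate grids, rendering the six map views, and scoring image overlap are assumed interfaces; the text does not prescribe their implementations.

```python
DIRECTIONS = ["U", "D", "E", "W", "N", "S"]  # up, down, east, west, north, south

def correct_grid_code(user_image, user_grid, delta_r, grid_index, renderer, overlap_score):
    """C1-C5: return the grid code of the best-matching comparison image.

    Assumed interfaces (not prescribed by the text):
      grid_index.candidates(center, radius) -> grids inside, or partially
          overlapping, the sphere of radius delta_r around the centroid;
      renderer.view(grid, direction)        -> live-action 3D map image taken
          at the grid centroid looking in `direction`;
      overlap_score(a, b)                   -> similarity score, higher is better.
    """
    center = user_grid.centroid()                        # C2: sphere center
    candidates = grid_index.candidates(center, delta_r)  # C2: grids to be compared

    best_code, best_score = user_grid.code, float("-inf")
    for grid in candidates:                              # C3: six views per grid
        for d in DIRECTIONS:                             # comparison image code = grid code + d
            s = overlap_score(user_image, renderer.view(grid, d))
            if s > best_score:                           # C4: keep the best overlap
                best_code, best_score = grid.code, s
    return best_code                                     # C5: adopt its grid code
```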
The navigation module helps the user plan routes in the live-action three-dimensional map based on images received from other users, as follows:
D1. Let the elevation of the starting point of a road segment in the live-action three-dimensional map be H0. Traveling along the road in a given direction for a distance J, let the elevation of the highest point within that distance be Hi, and define the gradient g = (Hi - H0)/J. The segment is classified as a flat road when -0.5% ≤ g < 0.5%; as a gentle-slope road when -3% ≤ g < -0.5% or 0.5% ≤ g < 3%; as a medium-slope road when -6% ≤ g < -3% or 3% ≤ g < 6%; and as a steep-slope road when g ≥ 6% or g < -6%.
D2. Take the end point of each classified segment as a new starting point and continue dividing by the method of D1 until the entire route is divided.
D3. Gentle-slope, medium-slope, and steep-slope roads are subdivided: traveling along the road in a given direction, a stretch whose elevation runs from low to high is an uphill section, and a stretch whose elevation runs from high to low is a downhill section. A gentle-slope road is thus divided into gentle uphill and gentle downhill sections, a medium-slope road into medium uphill and medium downhill sections, and a steep-slope road into steep uphill and steep downhill sections.
Set the cost coefficients of the gentle uphill, gentle downhill, medium uphill, medium downhill, steep uphill, and steep downhill sections to x1, x2, x3, x4, x5, and x6 respectively; when the mode of transport is driving a motor vehicle, 1 < x2 < x1 < x4 < x3 < x6 < x5.
Set the difficulty coefficients of the same six section types to y1, y2, y3, y4, y5, and y6 respectively. When the mode of transport is driving a motor vehicle, the difficulty coefficients are y11, y21, y31, y41, y51, and y61, with 1 < y21 < y11 < y41 = y31 < y61 < y51; when the mode of transport is cycling, the difficulty coefficients are y12, y22, y32, y42, y52, and y62, with 1 < y12 < y32 < y52 and 1 < y22 < y42 < y62.
When the mode of transport is public transportation or walking, cost and difficulty coefficients are not used.
D4. A navigation route is generated from the user's position and the position contained in the images received from other users, over the four passable road types for driving a motor vehicle, cycling, public transportation, and walking. Let the path lengths of the flat road, gentle uphill, gentle downhill, medium uphill, medium downhill, steep uphill, and steep downhill sections along a route be J0, J1, J2, J3, J4, J5, and J6 respectively.
(1) When driving a motor vehicle is selected:
to select the navigation route by lowest cost, compute Z = min(J0 + x1×J1 + x2×J2 + x3×J3 + x4×J4 + x5×J5 + x6×J6);
to select the navigation route by lowest driving difficulty, use the motor-vehicle difficulty coefficients and compute V1 = min(J0 + y11×J1 + y21×J2 + y31×J3 + y41×J4 + y51×J5 + y61×J6);
to select the navigation route by shortest path, compute M = min(J0 + J1 + J2 + J3 + J4 + J5 + J6).
(2) When cycling is selected:
to select the navigation route by lowest difficulty, use the bicycle difficulty coefficients and compute V2 = min(J0 + y12×J1 + y22×J2 + y32×J3 + y42×J4 + y52×J5 + y62×J6);
to select the navigation route by shortest path, compute M as in (1) of D4.
(3) When public transportation is selected, the navigation route is selected by shortest path, computing M as in (1) of D4.
(4) When walking is selected, the navigation route is selected by shortest path, computing M as in (1) of D4.
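A compact sketch of the gradient classification in D1 and the route selection in D4 follows. The coefficient values are invented but satisfy the inequalities stated above, and `candidate_routes` stands in for whatever route enumerator the map layer provides.

```python
def classify_gradient(h0: float, hi: float, j: float) -> str:
    """D1: classify a stretch by its gradient g = (Hi - H0) / J."""
    g = (hi - h0) / j
    if -0.005 <= g < 0.005:
        return "flat"
    if -0.03 <= g < -0.005 or 0.005 <= g < 0.03:
        return "gentle"
    if -0.06 <= g < -0.03 or 0.03 <= g < 0.06:
        return "medium"
    return "steep"

# Illustrative coefficients only; they satisfy the stated inequalities.
# Index order: [flat, gentle up, gentle down, medium up, medium down, steep up, steep down].
X_CAR  = [1.0, 1.3, 1.1, 1.7, 1.5, 2.5, 2.0]  # cost: 1 < x2 < x1 < x4 < x3 < x6 < x5
Y_CAR  = [1.0, 1.3, 1.1, 1.6, 1.6, 2.6, 2.0]  # driving difficulty, with y31 = y41
Y_BIKE = [1.0, 1.4, 1.2, 2.0, 1.5, 3.0, 1.8]  # 1 < y12 < y32 < y52; 1 < y22 < y42 < y62

def pick_route(candidate_routes, weights=None):
    """D4: each route is its section lengths [J0..J6]; weights=None gives the
    shortest path M, otherwise the weighted sum Z, V1, or V2 depending on the
    coefficient set passed in."""
    if weights is None:
        return min(candidate_routes, key=sum)                         # M
    return min(candidate_routes,
               key=lambda r: sum(l * w for l, w in zip(r, weights)))  # Z / V1 / V2
```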
The space-time backtracking module helps the user trace back the data associated with captured images on the basis of the live-action three-dimensional map, as follows:
E1. An interactive layer is added on top of the live-action three-dimensional map; only the distribution of grids with associated image data is displayed in this layer.
E2. The grids in the interactive layer are filled with color:
(1) a grid in which environment images are the most numerous is filled green;
(2) a grid in which event images are the most numerous is filled red;
(3) a grid in which social images are the most numerous is filled yellow;
(4) a grid in which health images are the most numerous is filled orange;
(5) a grid in which emotion images are the most numerous is filled purple.
E3. Selecting a color-filled grid in the live-action three-dimensional map opens image backtracking. There are five backtracking modes: location transition, personal growth, role change, health trend, and emotion journey:
(1) in location-transition backtracking, all environment images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while soft, natural music plays, helping the user recall building development and urban-rural change;
(2) in personal-growth backtracking, all event images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while uplifting music plays, helping the user recall growth and change across different events;
(3) in role-change backtracking, all social images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while bright, lively music plays, helping the user understand how their role and influence in their social circle have changed;
(4) in health-trend backtracking, all health images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while a health-trend analysis report is generated;
(5) in emotion-journey backtracking, all emotion images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while music matching the emotion plays, helping the user recall emotional states and changes.
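The replay logic shared by the five backtracking modes in E3 might look like the sketch below, reusing the hypothetical ImageRecord from the earlier sketch; the grid-flash and music calls are placeholder interfaces, not APIs defined by the invention.

```python
import random

def play_backtrack(records, category, map_ui, music_player, track):
    """E3: replay all images of one category from oldest to newest.

    `records` is an iterable of the hypothetical ImageRecord above;
    map_ui.flash_grid, map_ui.show_image, and music_player.play are
    placeholder interfaces.
    """
    selected = sorted((r for r in records if r.category == category),
                      key=lambda r: r.taken_at)        # oldest to newest
    music_player.play(track)                           # mode-specific music, or a report
    for rec in selected:
        t = random.uniform(0.1, 0.9)                   # any t with 0 < t < 1 second
        map_ui.flash_grid(rec.grid_code, seconds=t)    # flash before the image appears
        map_ui.show_image(rec.path)
```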
A method for realizing navigation and space-time backtracking based on captured images and live-action maps comprises the following steps:
Step 1. Divide the three-dimensional space that a user can photograph into non-overlapping grids, as follows:
A1. Inside a building, grid division takes each room as the basic grid unit, with grid boundaries aligned to room boundaries. The grid code takes the form I + address code + room code, where I denotes an indoor grid. The address code is precise to the house, in the form administrative division code + street/courtyard code + building/entrance code + unit/house code. The room code consists of six digits: the first three give the room's order from north to south, and the last three give its order from west to east.
A2. Outside buildings, the space within a street courtyard is uniformly divided into three-dimensional grids. The grid code takes the form O + address code + grid-ordering code, where O denotes an outdoor grid. The address code is precise to the street courtyard, in the form administrative division code + street/courtyard code. The grid-ordering code consists of nine digits: the first three give the grid's order from south to north, the middle three its order from west to east, and the last three its order from bottom to top.
Step 2. Associate the image captured by the user with the time, spatial position, environment, event, social, health, and emotion information, as follows:
B1. When an image is generated, the time, spatial position, environment, and health data collected by the terminal are recorded. The spatial position data is the code of the grid in which the image was taken; environmental data include weather, temperature, and humidity; health data include heart rate, blood pressure, exercise, sleep, blood oxygen saturation, and respiration data.
The user supplements event data and social data. Event data include the event type and participants; event types include business activities, celebrations, cultural activities, public welfare activities, daily life, travel and adventure, sports events, artistic activities, natural landscapes, disaster events, and accident events, and the user can create other event types. Social data include person information and person relationships.
Emotion data are derived by inference from the following four forms of analysis:
(1) inferring the user's emotional state from physiological data collected by the user's terminal device, including heart rate and respiratory rate, where an elevated heart rate is associated with anxiety and anger, and changes in respiration with tension and relaxation;
(2) inferring the user's emotion from the language structure, vocabulary, and emoticons used in text, social media, and chat conversations;
(3) analyzing the user's voice in social communication and inferring the emotional state from speech features such as tone, speaking rate, and volume changes;
(4) analyzing the user's facial expressions in video social interactions to infer the emotional state.
B2. Images are classified by shooting purpose:
(1) images mainly recording environmental information are classified as environment images;
(2) images mainly recording event information are classified as event images;
(3) images mainly recording social information are classified as social images;
(4) images mainly recording health information are classified as health images;
(5) images mainly recording emotion information are classified as emotion images.
Step 3. The user corrects positioning in the live-action three-dimensional map through captured images, as follows:
C1. Let the positioning error of the mobile terminal's positioning component be ΔR, and let the original grid associated with the user's captured image be Q.
C2. Take the geometric centroid of grid Q as the center of a sphere, and take the spatial range within distance ΔR of that center as the comparison space; grids inside the comparison space, together with grids partially overlapping it, are classified as grids to be compared.
C3. Locate each grid to be compared in the live-action three-dimensional map according to its corresponding real-world position. At the centroid of each grid to be compared, collect live-action three-dimensional map images in six directions, straight up, straight down, due east, due west, due north, and due south, as comparison images. A comparison image is coded as grid code + azimuth code, with azimuth codes U (up), D (down), E (east), W (west), N (north), and S (south).
C4. Compare the user's captured image with the comparison images and select the comparison image P with the highest degree of overlap.
C5. When the grid code associated with the user's captured image differs from the grid code associated with image P, replace the former with the latter.
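The text does not specify how the degree of overlap in C4 is measured. As one plausible stand-in, the sketch below scores overlap by ORB feature matching with OpenCV; this is an assumed implementation, not the method claimed here.

```python
import cv2

def overlap_score(path_a: str, path_b: str) -> float:
    """A possible overlap measure for C4: fraction of cross-checked ORB matches.

    The similarity metric is left unspecified by the text; ORB matching is an
    assumed stand-in, not the claimed method."""
    a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    ka, da = orb.detectAndCompute(a, None)
    kb, db = orb.detectAndCompute(b, None)
    if da is None or db is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    good = [m for m in matcher.match(da, db) if m.distance < 50]  # empirical cutoff
    return len(good) / max(len(ka), len(kb), 1)
```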
Step 4. The user receives captured images sent by other users and performs route planning in the live-action three-dimensional map, as follows:
D1. Let the elevation of the starting point of a road segment in the live-action three-dimensional map be H0. Traveling along the road in a given direction for a distance J, let the elevation of the highest point within that distance be Hi, and define the gradient g = (Hi - H0)/J. The segment is classified as a flat road when -0.5% ≤ g < 0.5%; as a gentle-slope road when -3% ≤ g < -0.5% or 0.5% ≤ g < 3%; as a medium-slope road when -6% ≤ g < -3% or 3% ≤ g < 6%; and as a steep-slope road when g ≥ 6% or g < -6%.
D2. Take the end point of each classified segment as a new starting point and continue dividing by the method of D1 until the entire route is divided.
D3. Gentle-slope, medium-slope, and steep-slope roads are subdivided: traveling along the road in a given direction, a stretch whose elevation runs from low to high is an uphill section, and a stretch whose elevation runs from high to low is a downhill section. A gentle-slope road is thus divided into gentle uphill and gentle downhill sections, a medium-slope road into medium uphill and medium downhill sections, and a steep-slope road into steep uphill and steep downhill sections.
Set the cost coefficients of the gentle uphill, gentle downhill, medium uphill, medium downhill, steep uphill, and steep downhill sections to x1, x2, x3, x4, x5, and x6 respectively; when the mode of transport is driving a motor vehicle, 1 < x2 < x1 < x4 < x3 < x6 < x5.
Set the difficulty coefficients of the same six section types to y1, y2, y3, y4, y5, and y6 respectively. When the mode of transport is driving a motor vehicle, the difficulty coefficients are y11, y21, y31, y41, y51, and y61, with 1 < y21 < y11 < y41 = y31 < y61 < y51; when the mode of transport is cycling, the difficulty coefficients are y12, y22, y32, y42, y52, and y62, with 1 < y12 < y32 < y52 and 1 < y22 < y42 < y62.
When the mode of transport is public transportation or walking, cost and difficulty coefficients are not used.
D4. A navigation route is generated from the user's position and the position contained in the images received from other users, over the four passable road types for driving a motor vehicle, cycling, public transportation, and walking. Let the path lengths of the flat road, gentle uphill, gentle downhill, medium uphill, medium downhill, steep uphill, and steep downhill sections along a route be J0, J1, J2, J3, J4, J5, and J6 respectively.
(1) When driving a motor vehicle is selected:
to select the navigation route by lowest cost, compute Z = min(J0 + x1×J1 + x2×J2 + x3×J3 + x4×J4 + x5×J5 + x6×J6);
to select the navigation route by lowest driving difficulty, use the motor-vehicle difficulty coefficients and compute V1 = min(J0 + y11×J1 + y21×J2 + y31×J3 + y41×J4 + y51×J5 + y61×J6);
to select the navigation route by shortest path, compute M = min(J0 + J1 + J2 + J3 + J4 + J5 + J6).
(2) When cycling is selected:
to select the navigation route by lowest difficulty, use the bicycle difficulty coefficients and compute V2 = min(J0 + y12×J1 + y22×J2 + y32×J3 + y42×J4 + y52×J5 + y62×J6);
to select the navigation route by shortest path, compute M as in (1) of D4.
(3) When public transportation is selected, the navigation route is selected by shortest path, computing M as in (1) of D4.
(4) When walking is selected, the navigation route is selected by shortest path, computing M as in (1) of D4.
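As a worked example of the selection criteria in D4, consider two hypothetical candidate routes scored with the illustrative cost coefficients from the earlier sketch; all numbers are invented for demonstration.

```python
# Two hypothetical routes; section lengths in km: [J0, J1, J2, J3, J4, J5, J6].
route_a = [2.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]   # longer, nearly flat
route_b = [0.5, 0.0, 0.0, 0.5, 0.5, 0.5, 0.5]   # shorter, hillier

X_CAR = [1.0, 1.3, 1.1, 1.7, 1.5, 2.5, 2.0]     # illustrative cost coefficients

def weighted(route, coeffs):
    return sum(l * c for l, c in zip(route, coeffs))

print(weighted(route_a, X_CAR))    # 2.0 + 1.3 + 1.1 = 4.4
print(weighted(route_b, X_CAR))    # 0.5 + 0.85 + 0.75 + 1.25 + 1.0 = 4.35 -> lowest cost Z
print(sum(route_a), sum(route_b))  # shortest-path M: 4.0 vs 2.5 -> route_b wins both here
```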
Step 5. The user traces back the data associated with captured images based on the live-action three-dimensional map, as follows:
E1. An interactive layer is added on top of the live-action three-dimensional map; only the distribution of grids with associated image data is displayed in this layer.
E2. The grids in the interactive layer are filled with color:
(1) a grid in which environment images are the most numerous is filled green;
(2) a grid in which event images are the most numerous is filled red;
(3) a grid in which social images are the most numerous is filled yellow;
(4) a grid in which health images are the most numerous is filled orange;
(5) a grid in which emotion images are the most numerous is filled purple.
E3. Selecting a color-filled grid in the live-action three-dimensional map opens image backtracking. There are five backtracking modes: location transition, personal growth, role change, health trend, and emotion journey:
(1) in location-transition backtracking, all environment images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while soft, natural music plays, helping the user recall building development and urban-rural change;
(2) in personal-growth backtracking, all event images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while uplifting music plays, helping the user recall growth and change across different events;
(3) in role-change backtracking, all social images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while bright, lively music plays, helping the user understand how their role and influence in their social circle have changed;
(4) in health-trend backtracking, all health images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while a health-trend analysis report is generated;
(5) in emotion-journey backtracking, all emotion images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while music matching the emotion plays, helping the user recall emotional states and changes.
The system and method of the invention have the following advantages:
(1) Space is divided into grids according to image-shooting characteristics, grid codes are associated with images, and images are mapped into the live-action three-dimensional map by grid position. Comparison images are produced from live-action three-dimensional views taken from viewpoints within grids of the map, the comparison image P most similar to the user's captured image is selected, and, when the grid code associated with the user's image differs from that associated with image P, the former is replaced with the latter. This corrects the real-world spatial position associated with the user's image, and the idea is novel;
(2) The elevation information in the live-action three-dimensional map is fully exploited for precise path planning, and lowest-cost and lowest-difficulty route selection formulas are created for each type of vehicle, which is a marked inventive step;
(3) The user traces back image-related data by viewing the color-filled grids in the live-action three-dimensional map, with immersive musical accompaniment or generated analysis reports according to the type of data traced, which is an ingenious conception.
Additional features and advantages of the invention will be set forth in the description which follows, or may be learned by practice of the invention.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, like reference numerals being used to refer to like parts throughout the several views.
Fig. 1 is a flow chart of the method of the present invention.
Detailed Description
The system and method for realizing navigation and space-time backtracking based on captured images and live-action maps are described in further detail below with reference to the accompanying drawings and embodiments.
The technical scheme adopted by the invention is a system for realizing navigation and space-time backtracking based on captured images and live-action maps:
The system comprises a grid division module, an information association module, a positioning correction module, a navigation module, and a space-time backtracking module.
The grid division module divides the three-dimensional space that a user can photograph into non-overlapping grids, as follows:
A1. Inside a building, grid division takes each room as the basic grid unit, with grid boundaries aligned to room boundaries. The grid code takes the form I + address code + room code, where I denotes an indoor grid. The address code is precise to the house, in the form administrative division code + street/courtyard code + building/entrance code + unit/house code. The room code consists of six digits: the first three give the room's order from north to south, and the last three give its order from west to east.
A2. Outside buildings, the space within a street courtyard is uniformly divided into three-dimensional grids. The grid code takes the form O + address code + grid-ordering code, where O denotes an outdoor grid. The address code is precise to the street courtyard, in the form administrative division code + street/courtyard code. The grid-ordering code consists of nine digits: the first three give the grid's order from south to north, the middle three its order from west to east, and the last three its order from bottom to top.
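Complementing the earlier construction sketch, the following sketch decodes a grid code back into its components. Only the trailing six-digit room code (indoor) and nine-digit ordering code (outdoor) are fixed by the text, so the remainder is treated as an opaque address code.

```python
def parse_grid_code(code: str) -> dict:
    """Decode a grid code from A1/A2; the address portion is left opaque."""
    if code.startswith("I"):
        address, room = code[1:-6], code[-6:]
        return {"kind": "indoor", "address": address,
                "room_ns": int(room[:3]), "room_we": int(room[3:])}
    if code.startswith("O"):
        address, order = code[1:-9], code[-9:]
        return {"kind": "outdoor", "address": address, "sn": int(order[:3]),
                "we": int(order[3:6]), "bt": int(order[6:])}
    raise ValueError("grid code must start with I or O")

print(parse_grid_code("O110101001012007003"))
# {'kind': 'outdoor', 'address': '110101001', 'sn': 12, 'we': 7, 'bt': 3}
```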
The information association module associates images captured by the user with the time, spatial position, environment, event, social, health, and emotion information at the moment of capture, as follows:
B1. When an image is generated, the time, spatial position, environment, and health data collected by the terminal are recorded. The spatial position data is the code of the grid in which the image was taken; environmental data include weather, temperature, and humidity; health data include heart rate, blood pressure, exercise, sleep, blood oxygen saturation, and respiration data.
The user supplements event data and social data. Event data include the event type and participants; event types include business activities, celebrations, cultural activities, public welfare activities, daily life, travel and adventure, sports events, artistic activities, natural landscapes, disaster events, and accident events, and the user can create other event types. Social data include person information and person relationships.
Emotion data are derived by inference from the following four forms of analysis:
(1) inferring the user's emotional state from physiological data collected by the user's terminal device, including heart rate and respiratory rate, where an elevated heart rate is associated with anxiety and anger, and changes in respiration with tension and relaxation;
(2) inferring the user's emotion from the language structure, vocabulary, and emoticons used in text, social media, and chat conversations;
(3) analyzing the user's voice in social communication and inferring the emotional state from speech features such as tone, speaking rate, and volume changes;
(4) analyzing the user's facial expressions in video social interactions to infer the emotional state.
B2. Images are classified by shooting purpose:
(1) images mainly recording environmental information are classified as environment images;
(2) images mainly recording event information are classified as event images;
(3) images mainly recording social information are classified as social images;
(4) images mainly recording health information are classified as health images;
(5) images mainly recording emotion information are classified as emotion images.
The positioning correction module assists the user in correcting positioning in the live-action three-dimensional map by means of captured images, as follows:
C1. Let the positioning error of the mobile terminal's positioning component be ΔR, and let the original grid associated with the user's captured image be Q.
C2. Take the geometric centroid of grid Q as the center of a sphere, and take the spatial range within distance ΔR of that center as the comparison space; grids inside the comparison space, together with grids partially overlapping it, are classified as grids to be compared.
C3. Locate each grid to be compared in the live-action three-dimensional map according to its corresponding real-world position. At the centroid of each grid to be compared, collect live-action three-dimensional map images in six directions, straight up, straight down, due east, due west, due north, and due south, as comparison images. A comparison image is coded as grid code + azimuth code, with azimuth codes U (up), D (down), E (east), W (west), N (north), and S (south).
C4. Compare the user's captured image with the comparison images and select the comparison image P with the highest degree of overlap.
C5. When the grid code associated with the user's captured image differs from the grid code associated with image P, replace the former with the latter.
The navigation module helps the user plan routes in the live-action three-dimensional map based on images received from other users, as follows:
D1. Let the elevation of the starting point of a road segment in the live-action three-dimensional map be H0. Traveling along the road in a given direction for a distance J, let the elevation of the highest point within that distance be Hi, and define the gradient g = (Hi - H0)/J. The segment is classified as a flat road when -0.5% ≤ g < 0.5%; as a gentle-slope road when -3% ≤ g < -0.5% or 0.5% ≤ g < 3%; as a medium-slope road when -6% ≤ g < -3% or 3% ≤ g < 6%; and as a steep-slope road when g ≥ 6% or g < -6%.
D2. Take the end point of each classified segment as a new starting point and continue dividing by the method of D1 until the entire route is divided.
D3. Gentle-slope, medium-slope, and steep-slope roads are subdivided: traveling along the road in a given direction, a stretch whose elevation runs from low to high is an uphill section, and a stretch whose elevation runs from high to low is a downhill section. A gentle-slope road is thus divided into gentle uphill and gentle downhill sections, a medium-slope road into medium uphill and medium downhill sections, and a steep-slope road into steep uphill and steep downhill sections.
Set the cost coefficients of the gentle uphill, gentle downhill, medium uphill, medium downhill, steep uphill, and steep downhill sections to x1, x2, x3, x4, x5, and x6 respectively; when the mode of transport is driving a motor vehicle, 1 < x2 < x1 < x4 < x3 < x6 < x5.
Set the difficulty coefficients of the same six section types to y1, y2, y3, y4, y5, and y6 respectively. When the mode of transport is driving a motor vehicle, the difficulty coefficients are y11, y21, y31, y41, y51, and y61, with 1 < y21 < y11 < y41 = y31 < y61 < y51; when the mode of transport is cycling, the difficulty coefficients are y12, y22, y32, y42, y52, and y62, with 1 < y12 < y32 < y52 and 1 < y22 < y42 < y62.
When the mode of transport is public transportation or walking, cost and difficulty coefficients are not used.
D4. A navigation route is generated from the user's position and the position contained in the images received from other users, over the four passable road types for driving a motor vehicle, cycling, public transportation, and walking. Let the path lengths of the flat road, gentle uphill, gentle downhill, medium uphill, medium downhill, steep uphill, and steep downhill sections along a route be J0, J1, J2, J3, J4, J5, and J6 respectively.
(1) When driving a motor vehicle is selected:
to select the navigation route by lowest cost, compute Z = min(J0 + x1×J1 + x2×J2 + x3×J3 + x4×J4 + x5×J5 + x6×J6);
to select the navigation route by lowest driving difficulty, use the motor-vehicle difficulty coefficients and compute V1 = min(J0 + y11×J1 + y21×J2 + y31×J3 + y41×J4 + y51×J5 + y61×J6);
to select the navigation route by shortest path, compute M = min(J0 + J1 + J2 + J3 + J4 + J5 + J6).
(2) When cycling is selected:
to select the navigation route by lowest difficulty, use the bicycle difficulty coefficients and compute V2 = min(J0 + y12×J1 + y22×J2 + y32×J3 + y42×J4 + y52×J5 + y62×J6);
to select the navigation route by shortest path, compute M as in (1) of D4.
(3) When public transportation is selected, the navigation route is selected by shortest path, computing M as in (1) of D4.
(4) When walking is selected, the navigation route is selected by shortest path, computing M as in (1) of D4.
The space-time backtracking module helps the user trace back the data associated with captured images on the basis of the live-action three-dimensional map, as follows:
E1. An interactive layer is added on top of the live-action three-dimensional map; only the distribution of grids with associated image data is displayed in this layer.
E2. The grids in the interactive layer are filled with color:
(1) a grid in which environment images are the most numerous is filled green;
(2) a grid in which event images are the most numerous is filled red;
(3) a grid in which social images are the most numerous is filled yellow;
(4) a grid in which health images are the most numerous is filled orange;
(5) a grid in which emotion images are the most numerous is filled purple.
E3. Selecting a color-filled grid in the live-action three-dimensional map opens image backtracking. There are five backtracking modes: location transition, personal growth, role change, health trend, and emotion journey:
(1) in location-transition backtracking, all environment images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while soft, natural music plays, helping the user recall building development and urban-rural change;
(2) in personal-growth backtracking, all event images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while uplifting music plays, helping the user recall growth and change across different events;
(3) in role-change backtracking, all social images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while bright, lively music plays, helping the user understand how their role and influence in their social circle have changed;
(4) in health-trend backtracking, all health images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while a health-trend analysis report is generated;
(5) in emotion-journey backtracking, all emotion images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds (0 < t < 1), while music matching the emotion plays, helping the user recall emotional states and changes.
As shown in Fig. 1, a method for realizing navigation and space-time backtracking based on captured images and live-action maps comprises the following steps:
Step 1. Divide the three-dimensional space that a user can photograph into non-overlapping grids, as follows:
A1. Inside a building, grid division takes each room as the basic grid unit, with grid boundaries aligned to room boundaries. The grid code takes the form I + address code + room code, where I denotes an indoor grid. The address code is precise to the house, in the form administrative division code + street/courtyard code + building/entrance code + unit/house code. The room code consists of six digits: the first three give the room's order from north to south, and the last three give its order from west to east.
A2. Outside buildings, the space within a street courtyard is uniformly divided into three-dimensional grids. The grid code takes the form O + address code + grid-ordering code, where O denotes an outdoor grid. The address code is precise to the street courtyard, in the form administrative division code + street/courtyard code. The grid-ordering code consists of nine digits: the first three give the grid's order from south to north, the middle three its order from west to east, and the last three its order from bottom to top.
Step 2. Associate the image captured by the user with the time, spatial position, environment, event, social, health, and emotion information, as follows:
B1. When an image is generated, the time, spatial position, environment, and health data collected by the terminal are recorded. The spatial position data is the code of the grid in which the image was taken; environmental data include weather, temperature, and humidity; health data include heart rate, blood pressure, exercise, sleep, blood oxygen saturation, and respiration data.
The user supplements event data and social data. Event data include the event type and participants; event types include business activities, celebrations, cultural activities, public welfare activities, daily life, travel and adventure, sports events, artistic activities, natural landscapes, disaster events, and accident events, and the user can create other event types. Social data include person information and person relationships.
Emotion data are derived by inference from the following four forms of analysis:
(1) inferring the user's emotional state from physiological data collected by the user's terminal device, including heart rate and respiratory rate, where an elevated heart rate is associated with anxiety and anger, and changes in respiration with tension and relaxation;
(2) inferring the user's emotion from the language structure, vocabulary, and emoticons used in text, social media, and chat conversations;
(3) analyzing the user's voice in social communication and inferring the emotional state from speech features such as tone, speaking rate, and volume changes;
(4) analyzing the user's facial expressions in video social interactions to infer the emotional state.
B2. Images are classified by shooting purpose:
(1) images mainly recording environmental information are classified as environment images;
(2) images mainly recording event information are classified as event images;
(3) images mainly recording social information are classified as social images;
(4) images mainly recording health information are classified as health images;
(5) images mainly recording emotion information are classified as emotion images.
Step 3. The user corrects positioning in the live-action three-dimensional map through captured images, as follows:
C1. Let the positioning error of the mobile terminal's positioning component be ΔR, and let the original grid associated with the user's captured image be Q.
C2. Take the geometric centroid of grid Q as the center of a sphere, and take the spatial range within distance ΔR of that center as the comparison space; grids inside the comparison space, together with grids partially overlapping it, are classified as grids to be compared.
C3. Locate each grid to be compared in the live-action three-dimensional map according to its corresponding real-world position. At the centroid of each grid to be compared, collect live-action three-dimensional map images in six directions, straight up, straight down, due east, due west, due north, and due south, as comparison images. A comparison image is coded as grid code + azimuth code, with azimuth codes U (up), D (down), E (east), W (west), N (north), and S (south).
C4. Compare the user's captured image with the comparison images and select the comparison image P with the highest degree of overlap.
C5. When the grid code associated with the user's captured image differs from the grid code associated with image P, replace the former with the latter.
Step 4, the user receives the shot images sent by other users, and the route planning is realized in the live-action three-dimensional map, and the specific mode is as follows:
D1, let the elevation of the starting point of a road segment in the live-action three-dimensional map be H0; travelling a distance J along one direction of the road, let the elevation of the highest point within that distance be Hi; the road is classified as a flat road when -0.5% ≤ (Hi-H0)/J < 0.5%; as a gentle slope road when -3% ≤ (Hi-H0)/J < -0.5% or 0.5% ≤ (Hi-H0)/J < 3%; as a medium slope road when -6% ≤ (Hi-H0)/J < -3% or 3% ≤ (Hi-H0)/J < 6%; and as a steep slope road when (Hi-H0)/J ≥ 6% or (Hi-H0)/J < -6%;
D2, taking the end point of each type of road as a new starting point, and continuing to divide the route according to the classification method of D1 until all routes are divided;
D3, subdivide the gentle slope, medium slope and steep slope roads: travelling along one direction of the road, sections whose elevation runs from low to high are uphill sections and sections whose elevation runs from high to low are downhill sections; a gentle slope road is thus divided into gentle slope ascending and gentle slope descending sections, a medium slope road into medium slope ascending and medium slope descending sections, and a steep slope road into steep slope ascending and steep slope descending sections (a classification sketch follows);
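D1's grade thresholds translate directly into code. The sketch below classifies one directed road segment using exactly those thresholds; the function name and return labels are illustrative.

def classify_segment(h0: float, hi: float, j: float) -> str:
    """Classify a directed road segment by its grade (Hi - H0) / J, per D1/D3.
    h0: start elevation; hi: highest-point elevation over distance j (same length units)."""
    grade = (hi - h0) / j
    g = abs(grade)
    if g < 0.005:
        return "flat"
    kind = "gentle" if g < 0.03 else ("medium" if g < 0.06 else "steep")
    direction = "ascending" if grade > 0 else "descending"   # D3 subdivision
    return f"{kind} slope, {direction}"

print(classify_segment(h0=100.0, hi=104.0, j=100.0))   # grade 4% -> "medium slope, ascending"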
Set the cost coefficients of the gentle slope ascending, gentle slope descending, medium slope ascending, medium slope descending, steep slope ascending and steep slope descending road sections as x1, x2, x3, x4, x5 and x6 respectively; when the traffic mode is driving a motor vehicle, 1 < x2 < x1 < x4 < x3 < x6 < x5;
Set the difficulty coefficients of the gentle slope ascending, gentle slope descending, medium slope ascending, medium slope descending, steep slope ascending and steep slope descending road sections as y1, y2, y3, y4, y5 and y6 respectively; when the traffic mode is driving a motor vehicle, the difficulty coefficients are y11, y21, y31, y41, y51 and y61 respectively, with 1 < y21 < y11 < y41 = y31 < y61 < y51; when the traffic mode is riding a bicycle, the difficulty coefficients are y12, y22, y32, y42, y52 and y62 respectively, with 1 < y12 < y32 < y52 and 1 < y22 < y42 < y62.
When the traffic mode is public transportation or walking, no cost coefficients or difficulty coefficients are involved;
D4, generate navigation routes from the user's position and the position contained in the received images shot by other users, over roads passable by the four traffic modes of driving a motor vehicle, riding a bicycle, public transportation and walking; let the lengths of the flat road, gentle slope ascending, gentle slope descending, medium slope ascending, medium slope descending, steep slope ascending and steep slope descending sections of a route be J0, J1, J2, J3, J4, J5 and J6 respectively (a cost-computation sketch follows this list);
(1) when the mode of transportation for driving the motor vehicle is selected,
selecting the navigation route with the lowest cost, calculated as Z = min(J0 + x1×J1 + x2×J2 + x3×J3 + x4×J4 + x5×J5 + x6×J6);
selecting a navigation route according to the lowest driving difficulty, and selecting a difficulty coefficient corresponding to the motor vehicle, wherein the calculation mode is v1=min (J0+y11×J1+y21×J2+y31×J3+y41×J4+y51×J5+y61×J6);
selecting a navigation route according to the shortest path, wherein the calculation mode is M=min (J0+J1+J2+J3+J4+J5+J6);
(2) when the mode of transportation of the bicycle is selected,
selecting a navigation route according to the lowest difficulty, and selecting a difficulty coefficient corresponding to a bicycle, wherein the calculation mode is v2=min (J0+y12×J1+y22×J2+y32×J3+y42×J4+y52×J5+y62×J6);
The navigation route is selected according to the shortest path, and the calculation mode is the same as the mode of selecting the navigation route according to the shortest path in the step (1) in the step D4;
(3) when a mode of transportation for public transportation is selected,
the navigation route is selected according to the shortest path, and the calculation mode is the same as the mode of selecting the navigation route according to the shortest path in the step (1) in the step D4;
(4) when the mode of transportation of walking is selected,
the navigation route is selected according to the shortest path, calculated in the same way as in (1) of D4.
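Each selection rule in D4 is a weighted sum over the seven per-segment length totals J0..J6, minimized over candidate routes. The sketch below evaluates the three metrics for a set of candidate routes; the coefficient values are placeholders that merely respect the orderings stated above, not values from this disclosure.

# Placeholder coefficients; only the orderings in the text are prescribed,
# the magnitudes here are assumptions.
X_CAR  = (1.4, 1.1, 2.0, 1.6, 3.0, 2.4)   # x1..x6:  1 < x2 < x1 < x4 < x3 < x6 < x5
Y_CAR  = (1.3, 1.1, 1.6, 1.6, 2.5, 2.0)   # y11..y61: 1 < y21 < y11 < y41 = y31 < y61 < y51
Y_BIKE = (1.2, 1.1, 1.8, 1.5, 3.0, 2.2)   # y12..y62: 1 < y12 < y32 < y52, 1 < y22 < y42 < y62

def weighted_length(j, coeffs):
    """j = (J0, J1, ..., J6); a coefficient of 1 is implied for the flat-road length J0."""
    return j[0] + sum(c * ji for c, ji in zip(coeffs, j[1:]))

def shortest(j):
    """M-style metric: plain sum of all segment lengths."""
    return sum(j)

candidates = [(500, 120, 80, 0, 0, 0, 0),      # mostly flat with gentle slopes
              (450, 0, 0, 100, 60, 30, 20)]    # shorter but hillier
best_by_cost            = min(candidates, key=lambda j: weighted_length(j, X_CAR))   # Z
best_by_car_difficulty  = min(candidates, key=lambda j: weighted_length(j, Y_CAR))   # v1
best_by_bike_difficulty = min(candidates, key=lambda j: weighted_length(j, Y_BIKE))  # v2
best_by_length          = min(candidates, key=shortest)                              # M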
Step 5, the user backtracks the data related to the shot images based on the live-action three-dimensional map, as follows:
E1, add an interactive layer on top of the live-action three-dimensional map; only grids that have associated image data are displayed in the interactive layer;
E2, fill the grids in the interactive layer with colors:
(1) grids in which environment images are the most numerous are filled green;
(2) grids in which event images are the most numerous are filled red;
(3) grids in which social images are the most numerous are filled yellow;
(4) grids in which health images are the most numerous are filled orange;
(5) grids in which emotion images are the most numerous are filled purple;
E3, clicking a color-filled grid in the live-action three-dimensional map opens image backtracking; there are five backtracking modes: position transition, personal growth, role change, health trend and emotion process (a playback sketch follows this list):
(1) when backtracking in position transition mode, all environment images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; soft, natural music plays at the same time, helping the user recall building development and urban-rural change;
(2) when backtracking in personal growth mode, all event images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; inspiring music plays at the same time, helping the user recall growth and change across different events;
(3) when backtracking in role change mode, all social images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; bright, lively music plays at the same time, helping the user see how their role and influence within their social circle have changed;
(4) when backtracking in health trend mode, all health images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; a health trend analysis report is generated at the same time;
(5) when backtracking in emotion process mode, all emotion images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; music matching the recorded emotion plays at the same time, helping the user recall emotional states and changes.
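Step 5's replay is a timeline player over a single image class. The sketch below orders the records oldest-to-newest and pairs each with its grid flash; the record fields and the flash/show callbacks are assumptions standing in for the map UI, which this disclosure does not specify.

def backtrack(records, mode, flash, show, t=0.5):
    """Replay the images of one class oldest-to-newest (step E3), with 0 < t < 1 second.
    flash(grid_code, seconds) and show(record) are hypothetical UI callbacks."""
    wanted = {"position transition": "environment image",
              "personal growth": "event image",
              "role change": "social image",
              "health trend": "health image",
              "emotion process": "emotion image"}[mode]
    assert 0 < t < 1
    for rec in sorted((r for r in records if r["class"] == wanted),
                      key=lambda r: r["timestamp"]):
        flash(rec["grid_code"], t)   # the associated color-filled grid blinks first
        show(rec)                    # then the image itself is reproduced

demo = [{"class": "environment image", "timestamp": "2023-05-01", "grid_code": "O+...+001"},
        {"class": "environment image", "timestamp": "2021-03-14", "grid_code": "O+...+002"}]
backtrack(demo, "position transition",
          flash=lambda code, secs: print(f"flash {code} for {secs}s"),
          show=lambda r: print("show image taken", r["timestamp"]))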
In this specification, the expression min[f(x)] denotes the smallest value of the bracketed expression, taken over all candidate routes.
The present invention is not limited to the above embodiments; any change or substitution readily conceivable by a person skilled in the art within the technical scope of the present invention is intended to fall within the scope of the present invention.

Claims (2)

1. A system for realizing navigation and space-time backtracking based on shooting and live-action maps is characterized in that:
the system comprises a grid dividing module, an information association module, a positioning correction module, a navigation module and a space-time backtracking module;
the grid division module divides a three-dimensional space which can be shot by a user into non-overlapping grids, and the specific mode is as follows:
A1, within a building, grid division takes a room of the building as the basic grid unit, with grid boundaries aligned to room boundaries; the coding mode is I + address code + room code, wherein I denotes a grid inside a building; the address code is precise to the dwelling and takes the form administrative division code + street/courtyard code + building/door code + unit/dwelling code; the room code consists of six digits, the first three digits giving the order of the rooms from north to south and the last three digits giving the order of the rooms from west to east;
A2, outside buildings, the space within a street courtyard is uniformly divided into a number of three-dimensional grids; the coding mode is O + address code + grid ordering code, wherein O denotes a grid outside a building; the address code is precise to the street courtyard and takes the form administrative division code + street/courtyard code; the grid ordering code consists of nine digits, the first three digits giving the grid order from south to north, the middle three digits the grid order from west to east, and the last three digits the grid order from bottom to top;
the information association module is used for associating the images shot by the user with the time, space position, environment, event, social contact, health and emotion information, and the specific modes are as follows:
b1, when an image is generated, recording time, space position, environment and health data acquired by a terminal, wherein the space position data is the code of a grid where the image is positioned; the environmental data comprises weather, temperature and humidity data; health data includes heart rate, blood pressure, exercise, sleep, blood oxygen saturation, and respiration data;
The user autonomously supplements event data and social data, wherein the event data comprises the event type and the participants; event types include business activities, celebrations, cultural activities, public welfare activities, daily life, travel and adventure, sports events, artistic activities, natural landscapes, disaster events and accident events, and users may also create other event types; the social data comprises person information and interpersonal relationships;
The emotion data is inferred in the following four ways:
(1) inferring the user's emotional state from physiological data collected on the user's terminal device, the physiological data comprising heart rate and respiratory rate;
(2) inferring the user's emotion from the sentence structure, vocabulary and emoticons in the user's text, social media and chat expressions;
(3) analyzing the user's voice in social communication and inferring the emotional state from vocal features such as tone, speaking rate and volume changes;
(4) analyzing the user's facial expressions in video-based social interaction to infer the emotional state;
B2, classifying images according to the purpose of shooting:
(1) images mainly used for recording environment information are classified as environment images;
(2) images mainly used for recording event information are classified as event images;
(3) images mainly used for recording social information are classified as social images;
(4) images mainly used for recording health information are classified as health images;
(5) images mainly used for recording emotion information are classified as emotion images;
the positioning correction module is used for assisting a user to realize positioning correction in the live-action three-dimensional map through the shot image, and the specific mode is as follows:
C1, let the positioning error of the mobile terminal's positioning element be ΔR, and let the grid originally associated with the image shot by the user be Q;
C2, take the geometric centroid of grid Q as the center of a sphere, and take the spatial range within distance ΔR of that center as the comparison space; grids inside the comparison space, and grids that partially overlap it, are classified as grids to be compared;
C3, locate each grid to be compared in the live-action three-dimensional map according to its corresponding real-world position, and at the centroid of each such grid collect six images from the live-action three-dimensional map, looking directly up, directly down, due east, due west, due north and due south; these serve as contrast images, coded as grid code + azimuth code, where the azimuth codes are U (up), D (down), E (east), W (west), N (north) and S (south);
C4, compare the image shot by the user with the contrast images and select the contrast image P with the highest degree of overlap;
C5, when the grid code associated with the image shot by the user differs from the grid code of the grid in which image P is located, change the grid code associated with the image shot by the user to that of image P's grid;
The navigation module is used for helping a user to realize route planning in a live-action three-dimensional map by receiving images shot by other users, and the specific mode is as follows:
D1, let the elevation of the starting point of a road segment in the live-action three-dimensional map be H0; travelling a distance J along one direction of the road, let the elevation of the highest point within that distance be Hi; the road is classified as a flat road when -0.5% ≤ (Hi-H0)/J < 0.5%; as a gentle slope road when -3% ≤ (Hi-H0)/J < -0.5% or 0.5% ≤ (Hi-H0)/J < 3%; as a medium slope road when -6% ≤ (Hi-H0)/J < -3% or 3% ≤ (Hi-H0)/J < 6%; and as a steep slope road when (Hi-H0)/J ≥ 6% or (Hi-H0)/J < -6%;
d2, taking the end point of each type of road as a new starting point, and continuing to divide the route according to the classification method of D1 until all routes are divided;
D3, subdivide the gentle slope, medium slope and steep slope roads: travelling along one direction of the road, sections whose elevation runs from low to high are uphill sections and sections whose elevation runs from high to low are downhill sections; a gentle slope road is thus divided into gentle slope ascending and gentle slope descending sections, a medium slope road into medium slope ascending and medium slope descending sections, and a steep slope road into steep slope ascending and steep slope descending sections;
Set the cost coefficients of the gentle slope ascending, gentle slope descending, medium slope ascending, medium slope descending, steep slope ascending and steep slope descending road sections as x1, x2, x3, x4, x5 and x6 respectively; when the traffic mode is driving a motor vehicle, 1 < x2 < x1 < x4 < x3 < x6 < x5;
Set the difficulty coefficients of the gentle slope ascending, gentle slope descending, medium slope ascending, medium slope descending, steep slope ascending and steep slope descending road sections as y1, y2, y3, y4, y5 and y6 respectively; when the traffic mode is driving a motor vehicle, the difficulty coefficients are y11, y21, y31, y41, y51 and y61 respectively, with 1 < y21 < y11 < y41 = y31 < y61 < y51; when the traffic mode is riding a bicycle, the difficulty coefficients are y12, y22, y32, y42, y52 and y62 respectively, with 1 < y12 < y32 < y52 and 1 < y22 < y42 < y62;
when the traffic mode is public transportation or walking, no cost coefficients or difficulty coefficients are involved;
D4, generate navigation routes from the user's position and the position contained in the received images shot by other users, over roads passable by the four traffic modes of driving a motor vehicle, riding a bicycle, public transportation and walking; let the lengths of the flat road, gentle slope ascending, gentle slope descending, medium slope ascending, medium slope descending, steep slope ascending and steep slope descending sections of a route be J0, J1, J2, J3, J4, J5 and J6 respectively;
(1) When the mode of transportation for driving the motor vehicle is selected,
selecting the navigation route with the lowest cost, calculated as Z = min(J0 + x1×J1 + x2×J2 + x3×J3 + x4×J4 + x5×J5 + x6×J6);
selecting a navigation route according to the lowest driving difficulty, and selecting a difficulty coefficient corresponding to the motor vehicle, wherein the calculation mode is v1=min (J0+y11×J1+y21×J2+y31×J3+y41×J4+y51×J5+y61×J6);
selecting a navigation route according to the shortest path, wherein the calculation mode is M=min (J0+J1+J2+J3+J4+J5+J6);
(2) when the mode of transportation of the bicycle is selected,
selecting a navigation route according to the lowest difficulty, and selecting a difficulty coefficient corresponding to a bicycle, wherein the calculation mode is v2=min (J0+y12×J1+y22×J2+y32×J3+y42×J4+y52×J5+y62×J6);
the navigation route is selected according to the shortest path, and the calculation mode is the same as the mode of selecting the navigation route according to the shortest path in the step (1) in the step D4;
(3) when a mode of transportation for public transportation is selected,
the navigation route is selected according to the shortest path, and the calculation mode is the same as the mode of selecting the navigation route according to the shortest path in the step (1) in the step D4;
(4) when the mode of transportation of walking is selected,
the navigation route is selected according to the shortest path, and the calculation mode is the same as the mode of selecting the navigation route according to the shortest path in the step (1) in the step D4;
The space-time backtracking module is used for helping a user backtrack the related data of the shot images based on the live-action three-dimensional map, and the specific mode is as follows:
E1, add an interactive layer on top of the live-action three-dimensional map; only grids that have associated image data are displayed in the interactive layer;
E2, fill the grids in the interactive layer with colors:
(1) grids in which environment images are the most numerous are filled green;
(2) grids in which event images are the most numerous are filled red;
(3) grids in which social images are the most numerous are filled yellow;
(4) grids in which health images are the most numerous are filled orange;
(5) grids in which emotion images are the most numerous are filled purple;
E3, clicking a color-filled grid in the live-action three-dimensional map opens image backtracking; there are five backtracking modes: position transition, personal growth, role change, health trend and emotion process:
(1) when backtracking in position transition mode, all environment images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; soft, natural music plays at the same time, helping the user recall building development and urban-rural change;
(2) when backtracking in personal growth mode, all event images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; inspiring music plays at the same time, helping the user recall growth and change across different events;
(3) when backtracking in role change mode, all social images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; bright, lively music plays at the same time, helping the user see how their role and influence within their social circle have changed;
(4) when backtracking in health trend mode, all health images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; a health trend analysis report is generated at the same time;
(5) when backtracking in emotion process mode, all emotion images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; music matching the recorded emotion plays at the same time, helping the user recall emotional states and changes.
2. A method for realizing navigation and space-time backtracking based on shooting and live-action maps is characterized by comprising the following steps:
step 1, dividing a three-dimensional space which can be shot by a user into non-overlapping grids, wherein the specific mode is as follows:
A1, within a building, grid division takes a room of the building as the basic grid unit, with grid boundaries aligned to room boundaries; the coding mode is I + address code + room code, wherein I denotes a grid inside a building; the address code is precise to the dwelling and takes the form administrative division code + street/courtyard code + building/door code + unit/dwelling code; the room code consists of six digits, the first three digits giving the order of the rooms from north to south and the last three digits giving the order of the rooms from west to east;
A2, outside buildings, the space within a street courtyard is uniformly divided into a number of three-dimensional grids; the coding mode is O + address code + grid ordering code, wherein O denotes a grid outside a building; the address code is precise to the street courtyard and takes the form administrative division code + street/courtyard code; the grid ordering code consists of nine digits, the first three digits giving the grid order from south to north, the middle three digits the grid order from west to east, and the last three digits the grid order from bottom to top;
Step 2, associating the image shot by the user with the time, space position, environment, event, social contact, health and emotion information, wherein the specific mode is as follows:
b1, when an image is generated, recording time, space position, environment and health data acquired by a terminal, wherein the space position data is the code of a grid where the image is positioned; the environmental data comprises weather, temperature and humidity data; health data includes heart rate, blood pressure, exercise, sleep, blood oxygen saturation, and respiration data;
The user autonomously supplements event data and social data, wherein the event data comprises the event type and the participants; event types include business activities, celebrations, cultural activities, public welfare activities, daily life, travel and adventure, sports events, artistic activities, natural landscapes, disaster events and accident events, and users may also create other event types; the social data comprises person information and interpersonal relationships;
The emotion data is inferred in the following four ways:
(1) inferring the user's emotional state from physiological data collected on the user's terminal device, the physiological data comprising heart rate and respiratory rate;
(2) inferring the user's emotion from the sentence structure, vocabulary and emoticons in the user's text, social media and chat expressions;
(3) analyzing the user's voice in social communication and inferring the emotional state from vocal features such as tone, speaking rate and volume changes;
(4) analyzing the user's facial expressions in video-based social interaction to infer the emotional state;
B2, classifying images according to the purpose of shooting:
(1) images mainly used for recording environment information are classified as environment images;
(2) images mainly used for recording event information are classified as event images;
(3) images mainly used for recording social information are classified as social images;
(4) images mainly used for recording health information are classified as health images;
(5) images mainly used for recording emotion information are classified as emotion images;
Step 3, the user performs positioning correction in the live-action three-dimensional map using the shot image, as follows:
C1, let the positioning error of the mobile terminal's positioning element be ΔR, and let the grid originally associated with the image shot by the user be Q;
C2, take the geometric centroid of grid Q as the center of a sphere, and take the spatial range within distance ΔR of that center as the comparison space; grids inside the comparison space, and grids that partially overlap it, are classified as grids to be compared;
C3, locate each grid to be compared in the live-action three-dimensional map according to its corresponding real-world position, and at the centroid of each such grid collect six images from the live-action three-dimensional map, looking directly up, directly down, due east, due west, due north and due south; these serve as contrast images, coded as grid code + azimuth code, where the azimuth codes are U (up), D (down), E (east), W (west), N (north) and S (south);
C4, compare the image shot by the user with the contrast images and select the contrast image P with the highest degree of overlap;
C5, when the grid code associated with the image shot by the user differs from the grid code of the grid in which image P is located, change the grid code associated with the image shot by the user to that of image P's grid;
Step 4, the user receives images shot by other users and performs route planning in the live-action three-dimensional map, as follows:
D1, let the elevation of the starting point of a road segment in the live-action three-dimensional map be H0; travelling a distance J along one direction of the road, let the elevation of the highest point within that distance be Hi; the road is classified as a flat road when -0.5% ≤ (Hi-H0)/J < 0.5%; as a gentle slope road when -3% ≤ (Hi-H0)/J < -0.5% or 0.5% ≤ (Hi-H0)/J < 3%; as a medium slope road when -6% ≤ (Hi-H0)/J < -3% or 3% ≤ (Hi-H0)/J < 6%; and as a steep slope road when (Hi-H0)/J ≥ 6% or (Hi-H0)/J < -6%;
D2, taking the end point of each type of road as a new starting point, and continuing to divide the route according to the classification method of D1 until all routes are divided;
D3, subdivide the gentle slope, medium slope and steep slope roads: travelling along one direction of the road, sections whose elevation runs from low to high are uphill sections and sections whose elevation runs from high to low are downhill sections; a gentle slope road is thus divided into gentle slope ascending and gentle slope descending sections, a medium slope road into medium slope ascending and medium slope descending sections, and a steep slope road into steep slope ascending and steep slope descending sections;
Set the cost coefficients of the gentle slope ascending, gentle slope descending, medium slope ascending, medium slope descending, steep slope ascending and steep slope descending road sections as x1, x2, x3, x4, x5 and x6 respectively; when the traffic mode is driving a motor vehicle, 1 < x2 < x1 < x4 < x3 < x6 < x5;
Set the difficulty coefficients of the gentle slope ascending, gentle slope descending, medium slope ascending, medium slope descending, steep slope ascending and steep slope descending road sections as y1, y2, y3, y4, y5 and y6 respectively; when the traffic mode is driving a motor vehicle, the difficulty coefficients are y11, y21, y31, y41, y51 and y61 respectively, with 1 < y21 < y11 < y41 = y31 < y61 < y51; when the traffic mode is riding a bicycle, the difficulty coefficients are y12, y22, y32, y42, y52 and y62 respectively, with 1 < y12 < y32 < y52 and 1 < y22 < y42 < y62;
When the traffic mode is public transportation or walking, no cost coefficients or difficulty coefficients are involved;
D4, generate navigation routes from the user's position and the position contained in the received images shot by other users, over roads passable by the four traffic modes of driving a motor vehicle, riding a bicycle, public transportation and walking; let the lengths of the flat road, gentle slope ascending, gentle slope descending, medium slope ascending, medium slope descending, steep slope ascending and steep slope descending sections of a route be J0, J1, J2, J3, J4, J5 and J6 respectively;
(1) when the mode of transportation for driving the motor vehicle is selected,
selecting the navigation route with the lowest cost, calculated as Z = min(J0 + x1×J1 + x2×J2 + x3×J3 + x4×J4 + x5×J5 + x6×J6);
selecting a navigation route according to the lowest driving difficulty, and selecting a difficulty coefficient corresponding to the motor vehicle, wherein the calculation mode is v1=min (J0+y11×J1+y21×J2+y31×J3+y41×J4+y51×J5+y61×J6);
selecting a navigation route according to the shortest path, wherein the calculation mode is M=min (J0+J1+J2+J3+J4+J5+J6);
(2) when the mode of transportation of the bicycle is selected,
selecting a navigation route according to the lowest difficulty, and selecting a difficulty coefficient corresponding to a bicycle, wherein the calculation mode is v2=min (J0+y12×J1+y22×J2+y32×J3+y42×J4+y52×J5+y62×J6);
The navigation route is selected according to the shortest path, and the calculation mode is the same as the mode of selecting the navigation route according to the shortest path in the step (1) in the step D4;
(3) when a mode of transportation for public transportation is selected,
the navigation route is selected according to the shortest path, and the calculation mode is the same as the mode of selecting the navigation route according to the shortest path in the step (1) in the step D4;
(4) when the mode of transportation of walking is selected,
the navigation route is selected according to the shortest path, and the calculation mode is the same as the mode of selecting the navigation route according to the shortest path in the step (1) in the step D4;
Step 5, the user backtracks the data related to the shot images based on the live-action three-dimensional map, as follows:
E1, add an interactive layer on top of the live-action three-dimensional map; only grids that have associated image data are displayed in the interactive layer;
E2, fill the grids in the interactive layer with colors:
(1) grids in which environment images are the most numerous are filled green;
(2) grids in which event images are the most numerous are filled red;
(3) grids in which social images are the most numerous are filled yellow;
(4) grids in which health images are the most numerous are filled orange;
(5) grids in which emotion images are the most numerous are filled purple;
E3, clicking a color-filled grid in the live-action three-dimensional map opens image backtracking; there are five backtracking modes: position transition, personal growth, role change, health trend and emotion process:
(1) when backtracking in position transition mode, all environment images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; soft, natural music plays at the same time, helping the user recall building development and urban-rural change;
(2) when backtracking in personal growth mode, all event images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; inspiring music plays at the same time, helping the user recall growth and change across different events;
(3) when backtracking in role change mode, all social images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; bright, lively music plays at the same time, helping the user see how their role and influence within their social circle have changed;
(4) when backtracking in health trend mode, all health images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; a health trend analysis report is generated at the same time;
(5) when backtracking in emotion process mode, all emotion images are replayed in order from oldest to newest; before each image appears, its associated color-filled grid flashes on the live-action three-dimensional map for t seconds, 0 < t < 1; music matching the recorded emotion plays at the same time, helping the user recall emotional states and changes.
CN202410189446.7A 2024-02-20 2024-02-20 System and method for realizing navigation and space-time backtracking based on shooting and live-action map Active CN117739995B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410189446.7A CN117739995B (en) 2024-02-20 2024-02-20 System and method for realizing navigation and space-time backtracking based on shooting and live-action map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410189446.7A CN117739995B (en) 2024-02-20 2024-02-20 System and method for realizing navigation and space-time backtracking based on shooting and live-action map

Publications (2)

Publication Number Publication Date
CN117739995A true CN117739995A (en) 2024-03-22
CN117739995B CN117739995B (en) 2024-06-21

Family

ID=90277787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410189446.7A Active CN117739995B (en) 2024-02-20 2024-02-20 System and method for realizing navigation and space-time backtracking based on shooting and live-action map

Country Status (1)

Country Link
CN (1) CN117739995B (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101231642A (en) * 2007-08-27 2008-07-30 中国测绘科学研究院 Space-time database administration method and system
CN101464154A (en) * 2007-12-21 2009-06-24 英业达股份有限公司 Navigation method and navigation apparatus using the same
US20130073387A1 (en) * 2011-09-15 2013-03-21 Stephan HEATH System and method for providing educational related social/geo/promo link promotional data sets for end user display of interactive ad links, promotions and sale of products, goods, and/or services integrated with 3d spatial geomapping, company and local information for selected worldwide locations and social networking
CN105797349A (en) * 2016-03-17 2016-07-27 深圳市智游人科技有限公司 Live-action running device, method and system
CN109475294A * 2016-05-06 2019-03-15 斯坦福大学托管董事会 Mobile and wearable video capture and feedback platforms for therapy of mental disorders
CN106027936A (en) * 2016-07-19 2016-10-12 姚前 Video recording method and device
CN107590173A (en) * 2017-07-28 2018-01-16 武汉市测绘研究院 Backtracking and the control methods online of two-dimension time-space geography information
CN109714563A (en) * 2017-10-25 2019-05-03 北京航天长峰科技工业集团有限公司 A kind of overall view monitoring system based on critical position
CN110378293A (en) * 2019-07-22 2019-10-25 泰瑞数创科技(北京)有限公司 A method of high-precision map is produced based on outdoor scene threedimensional model
CN110580269A (en) * 2019-09-06 2019-12-17 中科院合肥技术创新工程院 public safety event-oriented spatio-temporal data dynamic evolution diagram generation method and dynamic evolution system thereof
CN113283669A (en) * 2021-06-18 2021-08-20 南京大学 Intelligent planning travel investigation method and system combining initiative and passive
CN113596400A (en) * 2021-07-28 2021-11-02 上海六梓科技有限公司 Method for backtracking track
CN113743772A (en) * 2021-09-01 2021-12-03 厦门精图信息技术有限公司 KingMap GIS-based smart city management method, system and equipment
CN116561085A (en) * 2022-01-28 2023-08-08 华为技术有限公司 Picture sharing method and electronic equipment
CN114937210A (en) * 2022-06-21 2022-08-23 西安瑞特森信息科技有限公司 Forest region natural resource multi-period image data comparison analysis method
CN115481212A (en) * 2022-09-26 2022-12-16 福州市勘测院有限公司 Building space-time coding method considering logical building
CN115543964A (en) * 2022-10-20 2022-12-30 北斗伏羲中科数码合肥有限公司 Space object history backtracking method and device based on space-time grid
CN116989820A (en) * 2023-09-27 2023-11-03 厦门精图信息技术有限公司 Intelligent navigation system and method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHANG-HYEON JOH et al., "A Position-Sensitive Sequence-Alignment Method Illustrated for Space–Time Activity-Diary Data", Environment and Planning A: Economy and Space, vol. 33, no. 2, 28 February 2001, pages 313-338 *
PEIZHONG YANG et al., "SCPM-CR: A Novel Method for Spatial Co-location Pattern Mining with Coupling Relation Consideration", 2022 IEEE 38th International Conference on Data Engineering (ICDE), 2 August 2022, pages 1503-1504 *
ZHANG Pengcheng et al., "Design and Implementation of Interface as a Service for the Smart Guangzhou Spatio-temporal Cloud Platform", Geomatics & Spatial Information Technology, vol. 42, no. 5, 31 May 2019, pages 1-3 *
FANG Mengjing, ZHENG Yudan, XIA Zhaoxuan, YAN Hai, SHAO Feng, "Spatio-temporal Variation of Tourist Sentiment Based on Weibo Big Data: A Case Study of Hangzhou Xixi National Wetland Park", Journal of Southwest University (Natural Science Edition), vol. 42, no. 3, 20 March 2020, pages 156-164 *

Also Published As

Publication number Publication date
CN117739995B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
CN108829852B (en) Personalized tour route recommendation method
Butler Memoryscape: How audio walks can deepen our sense of place by integrating art, oral history and cultural geography
US20050192025A1 (en) Method and apparatus for an interactive tour-guide system
CN109120653A (en) A kind of multi-medium data recommended method and device
CN104769970A (en) Method and apparatus for providing an application engine based on real-time commute activity
CN109636679A (en) A kind of interactive tour schedule planing method based on artificial intelligence
CN106225799A (en) Travel information dynamic vehicle navigation system and method
CA2666535A1 (en) Method, system and computer program for detecting and monitoring human activity utilizing location data
CN103906993A (en) Method and apparatus for constructing a road network based on point-of-interest (poi) information
Lin et al. Exploring virtual geographic environments
CN107270922A (en) A kind of traffic accident space-location method based on POI indexes
US20210173415A1 (en) Method and apparatus for providing dynamic obstacle data for a collision probability map
Neuhaus Emergent spatio-temporal dimensions of the city
CN116664346A (en) Scenic spot operation management system and method based on action track
CN106897382A (en) The vehicle-mounted content service system of adaptability and devices and methods therefor
Speed Developing a sense of place with locative media: An “Underview Effect”
CN117739995B (en) System and method for realizing navigation and space-time backtracking based on shooting and live-action map
CN110532464B (en) Tourism recommendation method based on multi-tourism context modeling
CN111367902A (en) Track visual analysis method based on OD data
Linturi et al. Helsinki Arena 2000-Augmenting a real city to a virtual one
US20220100796A1 (en) Method, apparatus, and system for mapping conversation and audio data to locations
CN113704625A (en) Interest point recommendation method based on time and geographic position
CN107860385A (en) Offer method and device, server apparatus and the computer-readable recording medium of indoor navigation service
Benita et al. 3D-4D visualisation of IoT data from Singapore’s National Science Experiment
Fonseca et al. The EXPO’98 CD-ROM: a multimedia system for environmental exploration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant