CN117707368A - Moving track fitting method and electronic equipment - Google Patents

Moving track fitting method and electronic equipment

Info

Publication number
CN117707368A
Authority
CN
China
Prior art keywords
track
point
geographic coordinate
coordinate position
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311131173.2A
Other languages
Chinese (zh)
Inventor
曹世超
杨伟康
肖鹏
鄂华君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202311131173.2A priority Critical patent/CN117707368A/en
Publication of CN117707368A publication Critical patent/CN117707368A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9538 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Navigation (AREA)

Abstract

The application discloses a moving track fitting method and electronic equipment, and relates to the technical field of terminals. The electronic equipment acquires data of a first movement track and data of a second movement track; according to the data of the first movement track and the data of the second movement track, each track point on the first movement track is fitted to a corresponding position on the second movement track to obtain the fitted first movement track; and the electronic equipment displays the fitted first movement track and the second movement track in comparison according to the timestamps of the track points. In this way, the movement track of the challenging user can be fitted to the movement track of the challenged user, the path tracks of the two users presented on the electronic device coincide completely, and the user is not misled into thinking that the two users did not move along the same path.

Description

Moving track fitting method and electronic equipment
Technical Field
The application relates to the technical field of terminals, in particular to a moving track fitting method and electronic equipment.
Background
With the rapid development of mobile devices such as mobile phones and wearable devices, mobile devices can provide more and more functions and services for users. For example, a mobile device may provide navigation functions, sports and health functions, and the like. When using these functions, a user may select a path on the mobile device and move along that path. The mobile device may collect the user's real-time position and present to the user where on the selected path that real-time position lies.
How to accurately present the real-time location of a user to the user is a problem to be solved.
Disclosure of Invention
The embodiment of the application provides a moving track fitting method and electronic equipment, which can accurately present the real-time position of a user on a preset path.
In order to achieve the above purpose, the embodiments of the present application adopt the following technical solutions:
In a first aspect, a method for fitting a movement track is provided and applied to an electronic device. The electronic device acquires data of a first movement track and data of a second movement track; the first movement track and the second movement track each comprise at least one track point, and each track point is represented as a geographic coordinate position at a time t, where t is the timestamp of the track point. The electronic device fits each track point on the first movement track to a corresponding position on the second movement track according to the data of the first movement track and the data of the second movement track, and acquires the fitted first movement track; the electronic device then displays the fitted first movement track and the second movement track in comparison according to the timestamps of the track points.
In this method, one movement track is fitted to another movement track. For example, the movement track of the challenging user is fitted to the movement track of the challenged user. In this way, the path tracks of the challenging user and the challenged user presented on the electronic device coincide completely, and the user is not misled into thinking that the two users did not move along the same path.
With reference to the first aspect, in one possible implementation manner, the fitting, by the electronic device, of each track point on the first movement track to a corresponding position on the second movement track includes:
the electronic equipment acquires a first coordinate sequence (a sequence formed by the geographic coordinate positions of each track point on the first moving track) according to the data of the first moving track; the electronic equipment acquires a second coordinate sequence (a sequence formed by the geographic coordinate positions of each track point on the second moving track) according to the data of the second moving track; the electronic equipment maps the geographic coordinate position of each track point on the first moving track to the corresponding geographic coordinate position on the second moving track to obtain a mapped first coordinate sequence; and the electronic equipment adds a corresponding timestamp to each geographic coordinate position in the mapped first coordinate sequence to obtain a fitted first moving track.
In the method, the fitting of the track points is performed according to the geographic coordinate position of each track point.
In one possible implementation, the mapping, by the electronic device, of the geographic coordinate position of each track point on the first movement track to the corresponding geographic coordinate position on the second movement track includes:
The electronic equipment maps the geographic coordinate position of a first track point on the first moving track to the geographic coordinate position of a second track point on the second moving track; the first track point is any track point on the first moving track; the ratio of the distance from the starting point of the second moving track to the second track point to the total length of the second moving track is equal to the ratio of the distance from the starting point of the first moving track to the first track point to the total length of the first moving track.
The mapping, by the electronic device, of the geographic coordinate position of the first track point on the first movement track to the geographic coordinate position of the second track point on the second movement track includes: the electronic device acquires a first geographic coordinate position in the first coordinate sequence (the geographic coordinate position of the first track point on the first movement track); the electronic device acquires the distance from the geographic coordinate position of the starting track point of the first coordinate sequence to the first geographic coordinate position; the electronic device acquires the distance from the geographic coordinate position of the starting track point of the first coordinate sequence to the geographic coordinate position of the last track point of the first coordinate sequence; the electronic device acquires the distance from the geographic coordinate position of the starting track point of the second coordinate sequence to the geographic coordinate position of the last track point of the second coordinate sequence; then, a first distance length is obtained according to the following formula: d2 = d1 / D1 × D2; where d2 denotes the first distance length, d1 denotes the distance from the geographic coordinate position of the starting track point of the first coordinate sequence to the first geographic coordinate position, D1 denotes the distance from the geographic coordinate position of the starting track point of the first coordinate sequence to the geographic coordinate position of the last track point of the first coordinate sequence, and D2 denotes the distance from the geographic coordinate position of the starting track point of the second coordinate sequence to the geographic coordinate position of the last track point of the second coordinate sequence; the electronic device then acquires the geographic coordinate position of the second track point on the second movement track according to the first distance length; the geographic coordinate position of the second track point is the geographic coordinate position reached after travelling the first distance length from the starting geographic coordinate position of the second coordinate sequence.
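As a non-limiting illustration of the proportional mapping described above, the sketch below computes d2 = d1 / D1 × D2 and then walks along the second coordinate sequence until that distance is reached. The helper names, the planar Euclidean distance and the linear interpolation between neighbouring track points are assumptions of this sketch, not prescribed by the embodiment; real longitude/latitude values would use a geodesic distance (e.g. the haversine formula) instead.

```python
import math

def _dist(p, q):
    # straight-line distance between two (x, y) coordinate pairs (sketch only)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def _cumulative(points):
    # distance travelled from the first point of the sequence to each point
    cum = [0.0]
    for a, b in zip(points, points[1:]):
        cum.append(cum[-1] + _dist(a, b))
    return cum

def _position_at(points, cum, d):
    # geographic position reached after travelling distance d from the start
    for i in range(1, len(points)):
        if cum[i] >= d:
            seg = cum[i] - cum[i - 1]
            r = 0.0 if seg == 0 else (d - cum[i - 1]) / seg
            (ax, ay), (bx, by) = points[i - 1], points[i]
            return (ax + r * (bx - ax), ay + r * (by - ay))
    return points[-1]

def map_point_completed(first_seq, second_seq, i):
    # map the i-th point of the first coordinate sequence onto the second one
    cum1, cum2 = _cumulative(first_seq), _cumulative(second_seq)
    d1, D1, D2 = cum1[i], cum1[-1], cum2[-1]
    d2 = 0.0 if D1 == 0 else d1 / D1 * D2   # d2 = d1 / D1 x D2
    return _position_at(second_seq, cum2, d2)
```

In practice the cumulative-distance arrays would be computed once per track, and because d2 is non-decreasing from one track point to the next, the scan along the second sequence can resume where the previous one stopped, keeping the overall work roughly linear in the number of track points.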
In this method, in the scenario in which the challenging user's movement has finished, the proportion of the distance from the start of the second movement track to the mapped track point over the total length of the second movement track is equal to the proportion of the distance from the start of the first movement track to the original track point over the total length of the first movement track. Track point fitting is thus performed according to the proportion of each track point along its movement track. When the first movement track includes n track points, the computational complexity is O(n). Compared with mapping each coordinate point of the first coordinate sequence onto the second coordinate sequence with a dynamic time warping (DTW) algorithm, the computational complexity and the amount of calculation are greatly reduced.
In one possible implementation, the mapping, by the electronic device, of the geographic coordinate position of each track point on the first movement track to the corresponding geographic coordinate position on the second movement track includes:
the electronic equipment maps the geographic coordinate position of a first track point on the first moving track to the geographic coordinate position of a second track point on the second moving track; the first track point is any track point on the first moving track; the ratio of the distance from the starting point of the second moving track to the second track point to the total length of the second moving track is equal to the ratio of the distance from the starting point of the first moving track to the first track point to the total length of the second moving track.
The mapping, by the electronic device, of the geographic coordinate position of the first track point on the first movement track to the geographic coordinate position of the second track point on the second movement track includes: the electronic device acquires a first geographic coordinate position in the first coordinate sequence, the first geographic coordinate position being the geographic coordinate position of the first track point on the first movement track; the electronic device acquires the distance from the geographic coordinate position of the starting track point of the first coordinate sequence to the first geographic coordinate position; a first distance length is then obtained according to the following formula: d2 = d1; where d2 denotes the first distance length and d1 denotes the distance from the geographic coordinate position of the starting track point of the first coordinate sequence to the first geographic coordinate position; the electronic device acquires the geographic coordinate position of the second track point on the second movement track according to the first distance length; the geographic coordinate position of the second track point is the geographic coordinate position reached after travelling the first distance length from the starting geographic coordinate position of the second coordinate sequence.
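For this in-progress scenario only the distance formula changes (d2 = d1). A sketch reusing the assumed helpers from the previous example:

```python
def map_point_in_progress(first_seq, second_seq, i):
    # while the challenging user is still moving, the total length D1 of the
    # first track is unknown, so the distance already covered is used directly
    cum1, cum2 = _cumulative(first_seq), _cumulative(second_seq)
    d2 = cum1[i]                            # d2 = d1
    return _position_at(second_seq, cum2, d2)
```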
In this method, in the scenario during the challenging user's movement, a complete movement track of the challenging user has not yet been formed, and the total length of the challenged user's movement track (the second movement track) is taken as the total length of the challenging user's movement track (the first movement track). The proportion of the distance from the start of the second movement track to the mapped track point over the total length of the second movement track is then equal to the proportion of the distance from the start of the first movement track to the original track point over that same total length. Track point fitting is performed according to the proportion of each track point along its movement track. When the first movement track includes n track points, the computational complexity is O(n). Compared with mapping each coordinate point of the first coordinate sequence onto the second coordinate sequence with a DTW algorithm, the computational complexity and the amount of calculation are greatly reduced.
In one possible implementation, the mapping, by the electronic device, of the geographic coordinate position of each track point on the first movement track to the corresponding geographic coordinate position on the second movement track includes:
the electronic device maps the geographic coordinate position of a first track point on the first movement track to the geographic coordinate position of a second track point on the second movement track; the first track point is any track point on the first movement track, and the second track point is the track point on the second movement track closest to the first track point.
In this method, the geographic coordinate position of a track point A on the challenging user's movement track (the first movement track) is compared with the geographic coordinate position of every track point on the challenged user's movement track (the second movement track); the track point on the challenged user's movement track closest to track point A (track point B) is obtained, and track point A is fitted to track point B. In this method, when the challenging user's movement track (the first movement track) includes m track points and the challenged user's movement track (the second movement track) includes n track points, m × n distance calculations are required; the algorithmic complexity is O(n²).
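A brute-force sketch of this nearest-point variant, again using the assumed _dist helper from the earlier example (the embodiment does not prescribe these function names):

```python
def map_track_nearest(first_seq, second_seq):
    # for every track point A of the first track, scan every track point of the
    # second track and keep the closest one: m x n distance evaluations
    mapped = []
    for a in first_seq:
        nearest = min(second_seq, key=lambda b: _dist(a, b))
        mapped.append(nearest)
    return mapped
```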
With reference to the first aspect, in one possible implementation manner, the first movement track and the second movement track are generated by two users moving respectively based on the same path.
With reference to the first aspect, in a possible implementation manner, the method further includes: the electronic device obtains data of the second movement track from the server.
In a second aspect, an electronic device is provided, which has the functionality to implement the method of the first aspect. The functions can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
In a third aspect, an electronic device is provided, comprising: a processor, a memory, and a display; the display is for displaying a user interface of an electronic device, the memory is for storing computer-executable instructions, the processor executing the computer-executable instructions stored by the memory when the electronic device is operating to cause the electronic device to perform the method as described in any one of the first aspects.
In a fourth aspect, there is provided an electronic device comprising: a processor; the processor is configured to perform the method according to any of the first aspects above according to instructions in a memory after being coupled to the memory and reading the instructions in the memory.
In a fifth aspect, there is provided a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of any of the first aspects above.
In a sixth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any of the first aspects above.
In a seventh aspect, there is provided an apparatus (e.g. the apparatus may be a system-on-a-chip) comprising a processor for supporting an electronic device to implement the functions referred to in the first aspect above. In one possible design, the apparatus further includes a memory for storing program instructions and data necessary for the electronic device. When the device is a chip system, the device can be formed by a chip, and can also comprise the chip and other discrete devices.
The technical effects of any one of the design manners of the second aspect to the seventh aspect may be referred to the technical effects of the different design manners of the first aspect, and will not be repeated here.
Drawings
Fig. 1 is a schematic view of a scene to which the moving track fitting method provided in the embodiment of the present application is applicable;
Fig. 2 is a schematic view of a scenario to which the moving track fitting method provided in the embodiment of the present application is applicable;
Fig. 3 is a schematic diagram of an example of a scenario of a moving track fitting method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of an example of a scenario of a moving track fitting method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of an example of a scenario of a moving track fitting method according to an embodiment of the present application;
Fig. 6 is a schematic diagram of an example of a scenario of a moving track fitting method according to an embodiment of the present application;
Fig. 7 is a schematic diagram of an example of a scenario of a moving track fitting method according to an embodiment of the present application;
Fig. 8 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
Fig. 9 is a schematic block diagram related to a moving track fitting method according to an embodiment of the present application;
Fig. 10 is a schematic diagram of module interaction in a moving track fitting method according to an embodiment of the present application;
Fig. 11 is a schematic diagram of module interaction in a moving track fitting method according to an embodiment of the present application;
Fig. 12 is a flowchart of a method for fitting each track point on a movement track of a challenging user to a movement track of a challenged user according to an embodiment of the present application;
Fig. 13A is a schematic diagram of an example of a scenario of a moving track fitting method according to an embodiment of the present application;
Fig. 13B is a schematic diagram of an example of a scenario of a moving track fitting method according to an embodiment of the present application;
Fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 15 is a schematic diagram of a chip system according to an embodiment of the present application.
Detailed Description
In the description of the embodiments of the present application, the terminology used in the embodiments below is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in the specification of this application and the appended claims, the singular forms "a", "an" and "the" are intended to include plural forms such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the embodiments below, "at least one" and "one or more" mean one, two or more than two. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise. The term "coupled" includes both direct and indirect connections, unless stated otherwise. The terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
Currently, various popular sports and health applications (apps) may provide a sports competition ranking function. That is, a plurality of users perform exercise such as running, walking or riding based on the same path, and the results are ranked according to how long each user takes to complete the same exercise. For example, as shown in fig. 1 (a), after the user completes one exercise, the mobile device may obtain the path track of the user's current exercise, including the path start point, the path end point, each track point the path passes through, and so on; it may also obtain the corresponding movement information of the user during the exercise, such as movement time, average pace and calorie consumption. Furthermore, the mobile device can store the path track and the movement information of the current exercise locally, and can upload them to a cloud server. Thus, other users can also acquire the path track from the cloud server. Other users, or the user who uploaded the path, can move based on the path track and upload the movement information of moving on that path to the cloud server. The mobile device may obtain the movement information of these users from the cloud server, and rank the results according to each user's movement information (movement time, average pace, etc.). Illustratively, as shown in (b) of fig. 1, two users respectively move based on the path whose starting point is "central park east gate" and whose ending point is "D1 south gate"; the mobile device acquires the movement information of the two users moving based on the path, and ranks them according to their movement information (movement time, average pace, etc.) on the path. For example, as shown in fig. 1 (b), the mobile phone displays that the movement time (movement duration) of user 1 is 20 minutes 51 seconds with an average pace of 4 minutes 34 seconds, and the movement time of user 2 is 24 minutes 56 seconds with an average pace of 4 minutes 37 seconds. With this function, the users do not need to agree to move at the same time; they can participate in the competition ranking as long as they move based on the same road section.
In one scenario, the mobile device presents at least one path to the user that is selectable. The user may select one of the paths and perform the movement based on the selected path. The mobile device receives a selection operation of a user on one path and determines the path as a preset path. The user can move based on the preset path, and the mobile equipment acquires the moving track, the moving information and the like of the user in the moving process.
The user (the challenging user) may select any other user who has exercised based on the preset path as the challenge object (the challenged user), and the mobile device ranks the exercise results of the two users based on the movement information of the challenging user and the challenged user.
In some embodiments, the mobile device may display the real-time ranking of the challenging user and the challenged user moving on the preset path. For example, the mobile device may present dynamic trajectories in the time dimension based on the positions of the challenging user and the challenged user during their movements. In this way, the user can dynamically and intuitively view the positions of himself and the challenge object at the same moment. It should be noted that, in the embodiments of the present application, the same moment is not the same absolute time but the same relative time during the movement. That is, the t-th second after the challenging user starts moving and the t-th second after the challenged user starts moving are regarded as the same moment. For example, the 1st second of the challenging user's movement and the 1st second of the challenged user's movement are the same moment.
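A small illustrative sketch of this relative-time pairing (record layout and function names are assumptions, not taken from the embodiment): positions recorded t seconds after each user's own start are returned together.

```python
def positions_at_same_moment(challenging_track, challenged_track, t):
    # each track is a list of (x, y, t) records, where t counts seconds since
    # that user's own start; matching on t pairs the two users at the same
    # relative moment even if they moved on different days
    def position_at(track, t):
        return next(((x, y) for x, y, ts in track if ts == t), None)
    return position_at(challenging_track, t), position_at(challenged_track, t)
```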
Illustratively, referring to FIG. 2, the mobile device is exemplified by a cell phone. The cell phone 100 displays an "outdoor running" interface 101 on which a user can set running-related parameters. Illustratively, the "outdoor running" interface 101 includes a path selection page 102, and the path selection page 102 is used for displaying information of at least one path available for selection by the user. The user may select one of the paths and move based on the selected path. Illustratively, the path selection page 102 displays a path with a start point of "center park east gate" and an end point of "D1 south gate". The path selection page 102 also includes a "go to challenge" button 103 that the user can click to view information for the path. For example, in response to the user's click operation on the "go to challenge" button 103, the mobile phone 100 displays an interface 104, and the interface 104 is used for displaying information of the path "center park east gate-D1 south gate". Illustratively, the interface 104 includes the start point name "center park east gate" of the path, the end point name "D1 south gate", the start point's specific location "Shenzhen city ford field lotus street", and the like. The interface 104 also includes the athletic performance of users who have exercised based on the path. Illustratively, the interface 104 includes an information page 105, and the information page 105 is used for displaying the athletic performance of users who have exercised based on the path "center park east gate-D1 south gate". As shown in fig. 2, in response to the user's click operation on the bar at the top of the information page 105, the mobile phone 100 displays more information items in the information page 105. Illustratively, the information page 105 includes a ranking of athletic performance for each user who has exercised based on the path "center park east gate-D1 south gate". A "challenge TA" button is displayed beside each user's athletic performance. The user may click a "challenge TA" button to select the user corresponding to that button as the challenge object (the challenged user).
In one example, as shown in fig. 3 (a), the mobile phone 100 receives the user's click operation on a "challenge TA" button, determines the user corresponding to that button as the challenged user, and displays a real-time information interface 106, where the real-time information interface 106 is used for displaying real-time information during the challenging user's movement; for example, the length of the path track the challenging user has covered, the movement duration, the real-time positions of the challenging user and the challenged user at the t-th second after starting to move, the real-time ranking of the challenging user and the challenged user, and the like. Illustratively, as shown in fig. 3 (b), the challenging user has currently run 2.54 km (the length of the covered path track is 2.54 km), the challenging user has run for 1 minute and 51 seconds (the movement duration is 00:01:51), the real-time position of the challenging user is position 1061, the real-time position of the challenged user is position 1062, and the challenging user is 20 seconds behind the challenged user. In this example, the mobile phone 100 may dynamically display the movement trajectories of the challenging user and the challenged user while the challenging user moves based on the preset path.
In another example, after the challenging user finishes the movement, the challenging user can view on the mobile phone the dynamic track of the movement process displayed in the time dimension. For example, after the challenging user completes one movement based on the path "center park east gate-D1 south gate", the user may click "road segment score" to view the relevant information of the current movement process. As shown in fig. 4 (a), the mobile phone 100 displays a "road segment score" page 107, where the "road segment score" page 107 includes information such as the pace of each kilometer and the movement duration during the user's current movement. The "road segment score" page 107 includes a "dynamic track" button 108. In response to the user's click operation on the "dynamic track" button 108, the mobile phone 100 displays a dynamic track display page 109. The dynamic track display page 109 is used for dynamically displaying the movement tracks of the challenging user and the challenged user at each moment of the movement process. Illustratively, as shown in fig. 4 (b), when the movement mileage of the challenging user reaches 3.45 km, the real-time position of the challenging user is position 1091, and the real-time position of the challenged user is position 1092; as shown in fig. 4 (c), when the movement mileage of the challenging user reaches 5.03 km, the real-time position of the challenging user is position 1093, and the real-time position of the challenged user is position 1094. In this example, after the challenging user's movement based on the preset path is completed, the mobile phone 100 may dynamically display the movement trajectories of the challenging user and the challenged user.
In both examples, the mobile phone 100 dynamically displays the movement trajectories of the challenging user and the challenged user; that is, when the mobile phone 100 displays the real-time position of the challenging user at each moment of the movement process, it also displays the real-time position of the challenged user at that moment. In this way, the user can conveniently perceive the real-time ranking of the two users during one movement.
In this scenario, the mobile device marks the real-time positions of the challenging user and the challenged user on the map in the time dimension for convenient viewing by the user.
In practical applications, although the challenging user and the challenged user move based on the same preset path, positioning may be inaccurate owing to the hardware limitations of the mobile devices, so the movement tracks of the two users may not completely coincide.
It will be appreciated that a user's movement track may generally be represented as a sequence of track points; the i-th track point may be represented as Pi = (x, y, t), where (x, y) is the geographic coordinate position of the track point Pi at time t (e.g., t seconds after the start of movement), x represents a longitude value, y represents a latitude value, and the time t is the timestamp of the i-th track point. In one implementation, the geographic coordinate positions of the track points, i.e., the (x, y) values, may be acquired through a global navigation satellite system (GNSS). The GNSS may include the global positioning system (GPS), GLONASS, the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS) and/or a satellite-based augmentation system (SBAS), etc. Illustratively, the geographic coordinate position of the i-th track point is ("x": 39.997761, "y": 116.478928).
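A minimal sketch (type and field names are assumptions) of the track-point representation described above, with the coordinate values of the example repeated:

```python
from dataclasses import dataclass

@dataclass
class TrackPoint:
    x: float   # "x" coordinate value of the geographic position (see definition above)
    y: float   # "y" coordinate value of the geographic position
    t: int     # timestamp, e.g. seconds elapsed since the start of the movement

# the i-th track point of a movement track, using the example values above
p_i = TrackPoint(x=39.997761, y=116.478928, t=1)

# a movement track is simply a time-ordered sequence of such track points
track = [p_i]
```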
In one example, the challenging user and the challenged user move based on the same preset path. During the movement of the challenging user, the mobile device carried by the challenging user may acquire the challenging user's real-time position (geographic coordinate position) through GNSS according to a certain rule (e.g., periodically). Likewise, during the movement of the challenged user, the mobile device carried by the challenged user may acquire the challenged user's real-time position (geographic coordinate position) through GNSS according to a certain rule (e.g., periodically). In this way, the movement track of the challenging user can be drawn as track 1 according to the collected track point sequence of the challenging user's movement, and the movement track of the challenged user can be drawn as track 2 according to the collected track point sequence of the challenged user's movement.
For example, as shown in fig. 5, due to hardware limitations (e.g., GPS positioning inaccuracy), track 1 and track 2 do not completely coincide even though the challenging user and the challenged user move based on the same preset path.
In the scenario where the mobile device displays the real-time ranking of the two users, if the movement tracks of the challenging user and the challenged user displayed on the mobile device deviate from each other, the users will feel that the two users did not move based on the same path, which reduces the users' experience.
In one example, as shown in fig. 6 (a), the mobile phone 100 draws the movement tracks of both the challenging user and the challenged user. Because the movement tracks of the challenging user and the challenged user do not completely coincide, the users may feel that the two users did not move based on the same path.
In another example, as shown in fig. 6 (b), the mobile phone 100 draws the path track of the current movement according to the movement track of the challenged user, and marks the real-time positions of the challenging user and the challenged user at each moment. Because the movement tracks of the challenging user and the challenged user do not completely coincide, the real-time position of the challenging user may not lie on the preset path track, so that the users feel that the two users did not move based on the same path.
The embodiments of the present application provide a movement track fitting method, which can fit one movement track to another movement track. For example, the movement track of the challenging user is fitted to the movement track of the challenged user. That is, for each track point on the challenging user's movement track, the corresponding position on the challenged user's movement track is obtained, and the track point is fitted to that corresponding position.
For example, as shown in fig. 7, the mobile phone 100 draws the path track of the current movement according to the movement track of the challenged user, and marks the positions of the challenging user and the challenged user (the coordinate positions of the track points on the map) at each moment. At a certain moment, the movement mileage of the challenging user is 3.50 km, and the real-time position of the challenging user is position a. According to the movement track fitting method provided in the embodiments of the present application, position a is fitted to position b on the challenged user's movement track. In this way, the path tracks of the challenging user and the challenged user coincide completely, and the users are not misled into thinking that the two users did not move based on the same path.
The moving track fitting method provided by the embodiment of the application can be applied to any electronic equipment with computing capability. For example, the electronic device may be a mobile device. The mobile device may include a mobile phone, a wearable device (e.g., a smart watch, a smart bracelet, etc.), a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (personal digital assistant, PDA), a vehicle-mounted device, a virtual reality device, etc., which the embodiments of the present application do not impose any limitation.
For example, please refer to fig. 8, which illustrates a schematic structure of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, an audio module 130, a speaker 130A, a microphone 130B, a display screen 140, a communication module 150, a power module 160, an input device 170, a sensor module 180, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate components or may be integrated in one or more processors. In some embodiments, the electronic device 100 may also include one or more processors 110.
Wherein the controller is the neural and command center of the electronic device 100. The operation control signal can be generated according to the instruction operation code and the time sequence signal to finish the control of instruction fetching and instruction execution.
An operating system of the electronic device 100 may be run on the application processor for managing hardware and software resources of the electronic device 100. Such as managing and configuring memory, prioritizing system resources, controlling input and output devices, operating networks, managing file systems, managing drivers, etc. The operating system may also be used to provide an operator interface for a user to interact with the system. Various types of software, such as drivers, applications (apps), etc., may be installed in the operating system.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. The NPU can implement applications such as intelligent cognition of the electronic device 100.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include inter-integrated circuit (inter-integrated circuit, I2C) interfaces, inter-integrated circuit audio (integrated circuit sound, I2S) interfaces, pulse code modulation (pulse code modulation, PCM) interfaces, universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interfaces, mobile industry processor interfaces (mobile industry processor interface, MIPI), general-purpose input/output (GPIO) interfaces, SIM card interfaces, and/or USB interfaces, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store one or more computer programs, including instructions. The processor 110 may cause the electronic device 100 to execute the application running method provided in some embodiments of the present application, as well as various applications, data management, and the like, by executing the above-described instructions stored in the internal memory 121. The internal memory 121 may include a code storage area and a data storage area. Wherein the data storage area may store data created during use of the electronic device 100, etc. In addition, the internal memory 121 may include high-speed random access memory, and may also include nonvolatile memory, such as one or more disk storage units, flash memory units, universal flash memory (universal flash storage, UFS), and the like. In some embodiments, the processor 110 may cause the electronic device 100 to perform the application running methods provided in the embodiments of the present application, as well as other applications and data management, by executing instructions stored in the internal memory 121, and/or instructions stored in a memory provided in the processor 110.
The electronic device 100 may implement audio functions through an audio module 130, a speaker 130A, a microphone 130B, an application processor, and the like. Such as music playing, recording, etc. The audio module 130 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 130 may also be used to encode and decode audio signals. In some embodiments, the audio module 130 may be disposed in the processor 110, or a portion of the functional modules of the audio module 130 may be disposed in the processor 110.
Speaker 130A, also known as a "horn," is used to convert audio electrical signals into sound signals.
Microphone 130B, also referred to as a "mic", is used to convert sound signals into electrical signals. The user may speak near microphone 130B, inputting a sound signal to microphone 130B.
The communication function of the electronic device 100 can be realized by the antenna 1, the antenna 2, the communication module 150, and the like.
The communication module 150 may provide solutions for wireless communication applied to the electronic device 100, including cellular, Wi-Fi, Bluetooth (BT), wireless data transmission modules (e.g., 433 MHz, 868 MHz, 915 MHz), etc. The communication module 150 may be one or more devices integrating at least one communication processing module. The communication module 150 receives electromagnetic waves via the antenna 1 or the antenna 2, filters and frequency-modulates the electromagnetic wave signals, and transmits the processed signals to the processor 110. The communication module 150 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 1 or the antenna 2. For example, the electronic device 100 may obtain a source database and a destination database through the communication module 150; the source database is a database before data integration, and the target database is a database after data integration. The data of the source database are extracted and integrated to form the target database.
The electronic device 100 implements display functions through a GPU, a display screen 140, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 140 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 140 is used to display images, videos, and the like. The display screen 140 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 140, N being a positive integer greater than 1. In the embodiments of the present application, the display screen 140 may be used to display a UI and receive the user's operations on the UI. For example, the display screen 140 may display difference data of the source database and the destination database acquired according to the method provided in the embodiments of the present application.
In some embodiments, a pressure sensor, a touch sensor, etc. is provided on the display screen 140. The pressure sensor is used for sensing a pressure signal and can convert the pressure signal into an electric signal. When a touch operation is applied to the display screen 140, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor. The touch sensor, also referred to as a "touch panel," may form a touch screen, also referred to as a "touch screen," with the display screen 140. The touch sensor is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may also be provided through the display screen 140.
The power module 160 may be used to power various components included in the electronic device 100. In some embodiments, the power module 160 may be a battery, such as a rechargeable battery.
The input device 170 may include a keyboard, a mouse, etc. The keyboard is used to input English letters, numerals, punctuation marks, etc. into the electronic device 100, thereby giving commands and inputting data to the electronic device 100. The mouse is an indicator used for positioning coordinates on the display system of the electronic device 100 and for inputting instructions to the electronic device 100. The input device 170 may be connected to the electronic device 100 in a wired manner, for example, through a GPIO interface or a USB interface. The input device 170 may also be connected to the electronic device 100 in a wireless manner, for example, through Bluetooth or infrared.
The sensor module 180 may include a fingerprint sensor, a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
The moving track fitting method provided by the embodiment of the application is described in detail below by taking the sports competition ranking function of the sports health App as an example with reference to the accompanying drawings.
Illustratively, referring to FIG. 9, a sports and health App is installed on the mobile device 100, and the App may provide the sports competition ranking functionality, e.g., the functionality illustrated in FIGS. 1-4. In one example, the App may provide a motion interface, a motion recording interface, a dynamic track interface, and the like. The motion interface is used for displaying a user interface during the user's movement; illustratively, the motion interface is the interface shown in FIG. 3. The motion recording interface is used for displaying relevant information about the movement process after the user completes the movement; illustratively, the motion recording interface is the interface shown in fig. 4 (a). The dynamic track interface is used for displaying the dynamic movement track during one movement of the user; illustratively, the dynamic track interface is the interface shown in fig. 4 (b) and (c). The App also comprises a data processing module and a fitting module; the data processing module is used for acquiring and processing track point data, and the fitting module is used for fitting the track points of one movement track onto another movement track. The mobile device 100 further includes a database, configured to store the data needed during execution of the movement track fitting method provided in the embodiments of the present application, the data generated by it, and the like. In one implementation, the mobile device 100 may communicate with the server 200 in a wired or wireless manner. For example, the mobile device 100 may send the track point list data of the movement track generated during the user's movement to the server 200, and the mobile device 100 may obtain from the server 200 the track point list data of any path track stored on the server 200 or of a movement track generated during another user's movement.
In one embodiment, in conjunction with fig. 9, fig. 10 shows a schematic flow chart of a method for fitting a moving track according to an embodiment of the present application.
S301, the mobile device acquires movement track data of the challenged user from the server.
The mobile device may determine the preset path based on a user operation; both the challenging user and the challenged user move based on this path. Illustratively, referring to FIG. 2, the information page 105 includes path information for the path "center park east gate-D1 south gate" and a ranking of athletic performance for each user who has exercised based on that path. A "challenge TA" button is displayed beside each user's athletic performance. The user may click a "challenge TA" button to select the user corresponding to that button as the challenge object (the challenged user). Upon receiving the user's click operation on a "challenge TA" button on the information page 105, the mobile device determines that the path "center park east gate-D1 south gate" displayed on the information page 105 is the preset path. The user's click on the "challenge TA" button on the information page 105 is also the user's selection operation on a challenged user.
Illustratively, upon receiving a user selection operation on a challenged user (e.g., a user clicking operation on a "challenge TA" button, see fig. 3), the mobile device determines the challenged user based on the selection operation on the challenged user. After the mobile device determines the challenged user, a first request is sent to a server to request to acquire the movement track data of the challenged user. The movement trajectory data is data of all trajectory points included in the movement trajectory, and may be a set of trajectory point sequences, for example. The server sends a first response to the mobile device, the first response including movement trajectory data of the challenged user. The mobile device may obtain movement trace data of the challenged user based on the first response.
S302, the mobile equipment stores movement track data of the challenged user in a database.
S303, after the challenging user starts to move, the data processing module acquires the movement track data of the challenging user in real time, and acquires the movement track data of the challenged user.
In one implementation, the mobile device obtains the user's motion state through sensors, and periodically collects the challenging user's real-time position after determining that the challenging user has started to move. For example, the mobile device collects the user's real-time position once per second. In this way, the real-time movement track data of the challenging user can be acquired.
Illustratively, the user starts to move for 1 second, and the real-time position of the user is acquired as (X a 1,Y a 1) The method comprises the steps of carrying out a first treatment on the surface of the The user starts to move and then the user's real-time position is collected as (X a 2,Y a 2) The method comprises the steps of carrying out a first treatment on the surface of the The user starts to move and then the user's real-time position is collected as (X a 3,Y a 3) … …; the data processing module obtains the real-time movement track data of the challenged user as (X) a 1,Y a 1,1),(X a 2,Y a 2,2),(X a 3,Y a 3,3),……。
The data processing module may obtain movement trace data of the challenged user from the database. Exemplary, the movement trace data of the challenged user is (X b 1,Y b 1,1),(X b 2,Y b 2,2),(X b 3,Y b 3,3),……,(X b i,Y b i,i),……。
S304, the data processing module sends the movement track data of the challenging user and the movement track data of the challenged user to the fitting module.
S305, the fitting module fits each track point on the movement track of the challenging user to the movement track of the challenged user, and obtains the fitted movement track of the challenging user.
Illustratively, the first data in the challenging user movement track data is (X_a1, Y_a1, 1), indicating that the geographic coordinate position of the 1st track point is (X_a1, Y_a1). The fitting module fits this track point to the 1st track point in the challenged user movement track data, so that the first data in the challenging user movement track data is updated to (X_b1, Y_b1, 1).
The second data in the challenging user movement track data is (X_a2, Y_a2, 2), indicating that the geographic coordinate position of the 2nd track point is (X_a2, Y_a2). The fitting module fits this track point to the 2nd track point in the challenged user movement track data, so that the second data in the challenging user movement track data is updated to (X_b2, Y_b2, 2).
The third data in the challenging user movement track data is (X_a3, Y_a3, 3), indicating that the geographic coordinate position of the 3rd track point is (X_a3, Y_a3). The fitting module fits this track point to the 2nd track point in the challenged user movement track data, so that the third data in the challenging user movement track data is updated to (X_b2, Y_b2, 3).
The i-th data in the challenging user movement track data is (X_ai, Y_ai, i), indicating that the geographic coordinate position of the i-th track point is (X_ai, Y_ai). The fitting module fits this track point to a corresponding track point in the challenged user movement track data, so that the i-th data in the challenging user movement track data is updated to (X_bi', Y_bi', i). Referring to fig. 7, the real-time position of the challenging user is position a, and the geographic coordinate position of position a is (X_ai, Y_ai); the real-time position of the challenging user after fitting is position b, and the geographic coordinate position of position b is (X_bi', Y_bi').
And so on, the fitting module fits each track point on the challenging user movement track to the challenged user movement track.
Furthermore, the fitting module can also send the fitted movement track data of the challenging user to the data processing module.
S306, displaying, on the motion interface, the challenged user movement track and the fitted challenging user movement track in comparison according to the timestamps.
The motion interface displays the challenged user movement track according to the challenged user movement track data, and displays the challenging user movement track according to the fitted challenging user movement track data.
In one example, during the motion of the challenging user, at the i-th second of the motion, the position of the challenging user is displayed on the map at the geographic coordinate position (X_bi', Y_bi'), and the position of the challenged user is displayed on the map at the geographic coordinate position (X_bi, Y_bi).
Illustratively, the motion interface displays a real-time information interface 106 as shown in fig. 3 (b).
S307, when the motion duration of the challenging user in the current motion process is an integer multiple of M minutes, storing the challenging user movement track data generated in the latest M minutes into the database.
Optionally, M (M > 0) minutes are preconfigured as one period, and the challenging user movement track data is saved every M minutes. In this way, segmented storage of the challenging user movement track data can be achieved, saving storage resources in the database.
S308, when the motion of the challenging user ends, storing the remaining unsaved challenging user movement track data into the database.
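For illustration, the following is a minimal Python sketch of the segmented saving described in S307 and S308. The database interface (a simple save(points) call), the M value of 5 minutes, and the one-point-per-second feed are assumptions made for this sketch, not part of the embodiment.

```python
class SegmentedTrackSaver:
    # Buffers challenging user track points, flushes them every M minutes (S307)
    # and once more when the motion ends (S308).
    def __init__(self, database, m_minutes=5):
        self.database = database               # assumed to provide save(points)
        self.segment_seconds = m_minutes * 60  # one period of M minutes
        self.buffer = []                       # points not yet persisted

    def on_track_point(self, point, elapsed_seconds):
        self.buffer.append(point)
        # save the points generated in the latest M minutes whenever the motion
        # duration is an integer multiple of M minutes
        if elapsed_seconds % self.segment_seconds == 0:
            self.flush()

    def flush(self):
        # also called once when the motion ends, to save the remaining points
        if self.buffer:
            self.database.save(self.buffer)
            self.buffer = []
```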
Thus, the mobile device acquires the movement track data of the challenging user moving based on the preset path. Optionally, the mobile device may further send the movement track data of the challenging user moving based on the preset path to the server for storage.
After the motion of the challenging user ends, the user can view the dynamic tracks of the challenging user and the challenged user displayed according to the time dimension.
As shown in fig. 11, in an exemplary embodiment, in conjunction with fig. 9 and fig. 10, the method for fitting a moving track provided in the embodiment of the present application further includes:
S401, in response to the user's operation of viewing the motion record, the mobile device displays the motion record interface.
Illustratively, referring to fig. 4, in response to a user clicking on the "road segment score" button, the handset 100 displays a "road segment score" page 107. The motion record interface is the "road segment score" page 107.
In one implementation, after receiving the user's operation of viewing the motion record, the data processing module acquires the movement track data of the challenging user in the current motion process from the database; the data processing module also sends the movement track data of the challenging user in the current motion process to the motion record interface. Thus, the motion record interface can display the movement track of the challenging user in the current motion process.
S402, the mobile device receives an operation of viewing the dynamic track by a user on the motion record page.
Illustratively, referring to fig. 4, the "road segment score" page 107 includes information such as the pace of each kilometer and the motion duration during the current motion of the user. The "road segment score" page 107 includes a "dynamic track" button 108. The user can click the "dynamic track" button 108 to view the movement tracks of the challenging user and the challenged user in the current motion process of the challenging user, dynamically displayed according to the time dimension.
S403, after receiving the operation of checking the dynamic track by the user, the mobile device enters a dynamic track interface.
Illustratively, referring to FIG. 4, in response to a user clicking on the "dynamic track" button 108, the handset 100 displays a dynamic track presentation page 109.
S404, after entering a dynamic track interface, the data processing module acquires movement track data of the challenged user.
In one implementation, if movement track data of the challenged user exists in the database of the mobile device, the data processing module obtains the movement track data of the challenged user from the database. If the movement track data of the challenged user does not exist in the database of the mobile device, the mobile device sends a first request to a server to request to acquire the movement track data of the challenged user; the server sends a first response to the mobile device, wherein the first response comprises the movement track data of the challenged user; the mobile device can acquire movement track data of the challenged user according to the first response; the mobile device stores the movement track data of the challenged user in a database; the data processing module acquires the movement track data of the challenged user from the database.
S405, the data processing module sends the movement track data of the challenging user and the movement track data of the challenged user to the fitting module.
S406, the fitting module fits each track point on the movement track of the challenging user to the movement track of the challenged user, and obtains the fitted movement track of the challenging user.
Furthermore, the fitting module can also send the fitted movement track data of the challenging user to the data processing module.
S407, displaying, on the dynamic track interface, the challenged user movement track and the fitted challenging user movement track in comparison according to the timestamps.
The dynamic track interface displays the challenged user movement track according to the challenged user movement track data, and displays the challenging user movement track according to the fitted challenging user movement track data.
In one example, the challenged user movement track and the fitted challenging user movement track are displayed in comparison according to the timestamps, based on the time dimension. For example, the challenged user movement track comprises p track points, the fitted challenging user movement track comprises q track points, and p < q. In the 1st second, the 1st track point of the challenged user movement track and the 1st track point of the fitted challenging user movement track are displayed; in the 2nd second, the 2nd track point of the challenged user movement track and the 2nd track point of the fitted challenging user movement track are displayed; ……; in the p-th second, the p-th track point of the challenged user movement track and the p-th track point of the fitted challenging user movement track are displayed; in the (p+1)-th second, the p-th track point of the challenged user movement track and the (p+1)-th track point of the fitted challenging user movement track are displayed; ……; in the q-th second, the p-th track point of the challenged user movement track and the q-th track point of the fitted challenging user movement track are displayed.
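As a minimal sketch of this comparison display, the point shown for each track at the t-th second can be selected as below; the function name and the per-second indexing are illustrative assumptions.

```python
# At the t-th second show the t-th track point of each track; once a track has
# run out of points, keep showing its last point.
def points_to_display(t, challenged_track, fitted_challenging_track):
    p = len(challenged_track)             # challenged user track has p points
    q = len(fitted_challenging_track)     # fitted challenging user track has q points
    challenged_point = challenged_track[min(t, p) - 1]
    challenging_point = fitted_challenging_track[min(t, q) - 1]
    return challenged_point, challenging_point
```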
Illustratively, the dynamic trajectory interface displays a dynamic trajectory presentation page 109 as shown in fig. 4 (b) or (c).
The following describes the method for fitting each track point on the movement track of the challenging user to the movement track of the challenged user in the embodiment of the present application.
For example, as shown in fig. 12, the method may include:
S501, obtaining movement track data of the challenging user and movement track data of the challenged user.
In one example, a user movement track is represented as a sequence of track points, where the i-th track point is denoted Pi = (x, y, t); t is a timestamp, and (x, y, t) indicates that the geographic coordinate position of the track point Pi at time t (for example, t seconds after the start of the motion) is (x, y), where x represents a longitude value and y represents a latitude value.
Alternatively, in other examples, the geographic coordinate location of the trajectory point may be represented as (x, y, z), where z represents the altitude value.
In one implementation, the challenging user movement track data and the challenged user movement track data may be pre-processed, e.g., denoised, interpolated, normalized, etc.
S502, acquiring a first coordinate sequence according to the movement track data of the challenging user, and acquiring a second coordinate sequence according to the movement track data of the challenged user.
The first coordinate sequence is the sequence formed by the geographic coordinate positions of the track points on the challenging user movement track, and the second coordinate sequence is the sequence formed by the geographic coordinate positions of the track points on the challenged user movement track. In the embodiment of the present application, the geographic coordinate positions are exemplified in the form of coordinate points.
Illustratively, the challenging user movement track data is (X_a1, Y_a1, 1), (X_a2, Y_a2, 2), (X_a3, Y_a3, 3), ……, (X_ai, Y_ai, i), ……; wherein the geographic coordinate position of the 1st track point is (X_a1, Y_a1), the geographic coordinate position of the 2nd track point is (X_a2, Y_a2), the geographic coordinate position of the 3rd track point is (X_a3, Y_a3), ……, and the geographic coordinate position of the i-th track point is (X_ai, Y_ai), ……. The first coordinate sequence obtained is (X_a1, Y_a1), (X_a2, Y_a2), (X_a3, Y_a3), ……, (X_ai, Y_ai), …….
The challenged user movement track data is (X_b1, Y_b1, 1), (X_b2, Y_b2, 2), (X_b3, Y_b3, 3), ……, (X_bi, Y_bi, i), ……; wherein the geographic coordinate position of the 1st track point is (X_b1, Y_b1), the geographic coordinate position of the 2nd track point is (X_b2, Y_b2), the geographic coordinate position of the 3rd track point is (X_b3, Y_b3), ……, and the geographic coordinate position of the i-th track point is (X_bi, Y_bi), ……. The second coordinate sequence obtained is (X_b1, Y_b1), (X_b2, Y_b2), (X_b3, Y_b3), ……, (X_bi, Y_bi), …….
The geographic coordinate position of a track point is (x, y), where x represents the longitude value of the track point and y represents the latitude value of the track point. For example, as shown in fig. 13A, the track point A on track 1 is located at position A, where the longitude value of position A is 22.573495 and the latitude value is 114.059827, so the geographic coordinate position of track point A is (22.573495, 114.059827). The track point B on track 2 is located at position B, where the longitude value of position B is 22.573492 and the latitude value is 114.059822, so the geographic coordinate position of track point B is (22.573492, 114.059822).
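To make the data layout concrete, the following is a minimal Python sketch of S501/S502, assuming each track point is stored as an (x, y, t) tuple as described above; the coordinate values and variable names are illustrative only.

```python
# Example track data: each element is (x, y, t).
challenging_track = [(0, 0, 1), (1, 1, 2), (2, 2, 3), (3, 3, 4)]
challenged_track = [(0, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 4)]

def coordinate_sequence(track):
    # Drop the timestamp, keeping only the geographic coordinate position (x, y).
    return [(x, y) for (x, y, _t) in track]

first_sequence = coordinate_sequence(challenging_track)    # challenging user movement track
second_sequence = coordinate_sequence(challenged_track)    # challenged user movement track
```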
And S503, mapping each coordinate point in the first coordinate sequence to the second coordinate sequence to obtain a mapped first coordinate sequence.
In some embodiments, a dynamic time warping (dynamic time warping, DTW) algorithm may be employed to map each coordinate point in the first coordinate sequence onto the second coordinate sequence. DTW is a method for measuring the similarity between two time series; it can automatically perform elastic matching on the time axis.
In one example, a method of mapping each coordinate point in a first coordinate sequence to a second coordinate sequence using a DTW algorithm may include:
s5031, acquiring a distance matrix according to the first coordinate sequence and the second coordinate sequence.
It should be noted that, for convenience of description, the following embodiment is described by taking as an example that the challenging user movement track is track 1, with a corresponding first coordinate sequence of (0, 0), (1, 1), (2, 2), (3, 3); and that the challenged user movement track is track 2, with a corresponding second coordinate sequence of (0, 0), (2, 1), (3, 2), (4, 3).
In one implementation, the Euclidean distance between each pair of coordinate points in the first coordinate sequence and the second coordinate sequence is calculated, or another distance measure is adopted to calculate the distance between each pair of coordinate points in the first coordinate sequence and the second coordinate sequence; the distances between the pairs of coordinate points form a distance matrix. If the number of coordinate points in the first coordinate sequence is m and the number of coordinate points in the second coordinate sequence is n, the distance matrix is m×n.
The Euclidean distance is calculated as:
d(x, y) = sqrt((x1 - x2)^2 + (y1 - y2)^2)    formula (1);
where sqrt represents the square root and ^2 represents the square.
Illustratively, according to the first coordinate sequence: (0, 0), (1, 1), (2, 2), (3, 3); the second coordinate sequence is: (0, 0), (2, 1), (3, 2), (4, 3); the corresponding distance matrix D is obtained as follows:
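The distance matrix D itself appears as a figure in the original publication. As a minimal Python sketch using the example sequences above, it can be computed from formula (1) as follows.

```python
import math

# Example sequences from the description: track 1 (challenging user) and
# track 2 (challenged user).
seq1 = [(0, 0), (1, 1), (2, 2), (3, 3)]   # first coordinate sequence
seq2 = [(0, 0), (2, 1), (3, 2), (4, 3)]   # second coordinate sequence

def euclidean(p, q):
    # formula (1): d = sqrt((x1 - x2)^2 + (y1 - y2)^2)
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

# m x n distance matrix D; D[i][j] is the distance between the i-th coordinate
# point of the first sequence and the j-th coordinate point of the second sequence.
D = [[euclidean(p, q) for q in seq2] for p in seq1]
for row in D:
    print([round(v, 3) for v in row])
```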
s5032, calculating an optimal path by using dynamic programming according to the distance matrix D.
Dynamic programming is a general method for solving optimization problems: the problem is broken down into a series of sub-problems, and the optimal solution is found starting from the simplest sub-problem. In the embodiment of the present application, an optimal path needs to be obtained, which defines the best match between track 1 and track 2.
In one example, the method of calculating the best path using dynamic programming is as follows:
traversing the distance matrix D, and calculating the accumulated distance of each matrix element from the matrix elements (1, 1) to (m, n) by adopting a formula (2) to obtain an accumulated distance matrix C.
cumulative_distance(i, j) = distance(i, j) + min(cumulative_distance(i-1, j), cumulative_distance(i, j-1), cumulative_distance(i-1, j-1))    formula (2);
where cumulative_distance(i, j) represents the accumulated distance at matrix element (i, j), and distance(i, j) represents the distance between the i-th coordinate point of the first coordinate sequence and the j-th coordinate point of the second coordinate sequence.
Illustratively, the cumulative distance matrix C calculated from the distance matrix D is:
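The accumulated distance matrix C also appears as a figure in the original. Continuing the previous sketch (it assumes the distance matrix D computed there), formula (2) can be implemented as:

```python
def accumulate(D):
    # Build the accumulated distance matrix C from the distance matrix D by
    # dynamic programming, following formula (2); elements in the first row and
    # first column have only one possible predecessor.
    m, n = len(D), len(D[0])
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            if i == 0 and j == 0:
                prev = 0.0
            elif i == 0:
                prev = C[i][j - 1]
            elif j == 0:
                prev = C[i - 1][j]
            else:
                prev = min(C[i - 1][j], C[i][j - 1], C[i - 1][j - 1])
            C[i][j] = D[i][j] + prev
    return C

C = accumulate(D)
```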
s5033, backtracking the optimal path.
First, a minimum accumulated distance path P is calculated according to formula (3).
P[i][j] = argmin{ P[i-1][j-1], P[i-1][j], P[i][j-1] } + 1    formula (3);
where P[i][j] represents the end point of the minimum accumulated distance path from the first point to the (i, j)-th point, and argmin represents the index of the minimum value. The boundary conditions are P[0][0] = 0, P[i][0] = i, and P[0][j] = j.
And secondly, calculating a mapping relation M according to the minimum accumulated distance path P.
1. An empty mapping M is initialized.
2. Starting from P[n-1][m-1], the minimum index in formula (3) is traced back to P[0][0].
3. For each point (i, j) on the path, the mapping relation M[i] is set to j, indicating that the i-th point in the first coordinate sequence is mapped to the j-th point in the second coordinate sequence.
Illustratively, the values of the mapping relationship M are as follows:
s5034, mapping each coordinate point in the first coordinate sequence to the second coordinate sequence according to the mapping relation M, and obtaining a mapped first coordinate sequence.
Illustratively, the mapped first coordinate sequence is (0, 0), (2, 1), (3, 2), (4, 3).
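As a sketch of S5033 and S5034 (continuing the sketches above and reusing their seq1, seq2 and C), the warping path can be traced back through the accumulated distance matrix and turned into the mapping M and the mapped first coordinate sequence. The tie-breaking rule used when several predecessors share the same accumulated distance is an assumption of this sketch.

```python
def backtrack_mapping(C):
    # Trace the minimum accumulated distance path from the bottom-right corner of
    # C back to (0, 0), then record, for each index i of the first sequence, the
    # index j of the second sequence it is matched to.
    i, j = len(C) - 1, len(C[0]) - 1
    path = [(i, j)]
    while i > 0 or j > 0:
        candidates = []
        if i > 0 and j > 0:
            candidates.append((C[i - 1][j - 1], i - 1, j - 1))
        if i > 0:
            candidates.append((C[i - 1][j], i - 1, j))
        if j > 0:
            candidates.append((C[i][j - 1], i, j - 1))
        _, i, j = min(candidates)   # smallest accumulated distance wins
        path.append((i, j))
    path.reverse()
    mapping = {}
    for i, j in path:
        mapping[i] = j              # if several j match one i, the last one is kept
    return mapping

M = backtrack_mapping(C)
mapped_seq1 = [seq2[M[i]] for i in range(len(seq1))]
print(mapped_seq1)   # [(0, 0), (2, 1), (3, 2), (4, 3)] for the example sequences
```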
In this embodiment, the geographic coordinate position of a track point A in the challenging user movement track is compared, by traversal, with the geographic coordinate position of every track point in the challenged user movement track, so as to obtain the track point on the challenged user movement track that is closest to track point A (track point B), and track point A is fitted to track point B. In this method, when the challenging user movement track comprises m track points and the challenged user movement track comprises n track points, the amount of calculation is m×n distance computations; the algorithm complexity is O(n²).
In other embodiments, a proportional mapping method is used to map each coordinate point in the first coordinate sequence to the second coordinate sequence.
The following describes an example of mapping a coordinate point a in the first coordinate sequence (corresponding to a track point A in the challenging user movement track) onto the second coordinate sequence.
S503a, obtaining the distance d1 that the challenging user moves from the first coordinate point in the first coordinate sequence (corresponding to the starting point of the challenging user movement track) to the coordinate point a.
S503b, acquiring a total distance D.
During the motion of the challenging user, the challenging user movement track is a real-time movement track generated while the challenging user is moving; the challenging user has not completed the motion, so the total distance D adopts the total length D2 of the challenged user movement track. D2 is the distance from the first coordinate point in the second coordinate sequence to the last coordinate point in the second coordinate sequence, i.e. the distance that the challenged user moves from the starting point of the challenged user movement track to its end point.
After the motion of the challenging user has ended, the challenging user movement track is the movement track generated by the completed motion process, and the total distance D is the total length D1 of the challenging user movement track. D1 is the distance from the first coordinate point in the first coordinate sequence to the last coordinate point in the first coordinate sequence, i.e. the distance that the challenging user moves from the starting point of the challenging user movement track to its end point.
S503c, obtaining a scale factor R, where the scale factor R is the ratio of d1 to the total distance D.
In one implementation, R = d1/D.
In the scene during the motion of the challenging user, R = d1/D2.
In the scene after the motion of the challenging user has ended, R = d1/D1.
S503d, determining a distance d2 according to the scale factor R and the total length D2 of the challenged user movement track.
In one implementation, d2 = R × D2. That is, the ratio of d1 to the total distance D is equal to the ratio of d2 to D2; i.e. d1/D = d2/D2.
In the scene during the motion of the challenging user, d2 = d1.
In the scene after the motion of the challenging user has ended, d2 = d1/D1 × D2.
S503e, determining the coordinate point reached after moving the distance d2 from the first coordinate point in the second coordinate sequence (corresponding to the starting point of the challenged user movement track) as the coordinate point b. The coordinate point b corresponds to the track point B in the challenged user movement track.
S503f, mapping the coordinate point a to the coordinate point b, i.e. fitting the track point A in the challenging user movement track to the track point B in the challenged user movement track.
For example, referring to fig. 13B, the challenging user movement track is track 1, and the challenged user movement track is track 2. The geographic coordinate position of track point A on track 1 is the coordinate point a. The distance traveled from the start of track 1 to track point A is d1, and the total length of track 1 is D1. Track 2 has a total length D2. It can be determined that d2 = d1/D1 × D2. After moving a distance d2 along track 2 from the start point of track 2, track point B is reached, and the geographic coordinate position of track point B is the coordinate point b. The ratio of the distance between track point A and the track 1 starting point to the total length of track 1 is equal to the ratio of the distance between track point B and the track 2 starting point to the total length of track 2.
By repeating S503a to S503f, each coordinate point in the first coordinate sequence may be mapped to the second coordinate sequence, and the mapped first coordinate sequence may be obtained.
Illustratively, the first coordinate sequence is (0, 0), (1, 1), (2, 2), (3, 3), and the second coordinate sequence is (0, 0), (1, 1), (2, 1), (3, 2), (4, 3), (5, 4), (6, 5). Mapping (0, 0) in the first coordinate sequence to (0, 0) in the second coordinate sequence; mapping (1, 1) in the first coordinate sequence to (2, 1) in the second coordinate sequence; mapping (2, 2) in the first coordinate sequence to (4, 3) in the second coordinate sequence; the (3, 3) in the first coordinate sequence is mapped to the (6, 5) in the second coordinate sequence. I.e. the mapped first coordinate sequence is (0, 0), (2, 1), (4, 3), (6, 5).
In this embodiment, each coordinate point in the first coordinate sequence is mapped onto the second coordinate sequence by the proportional mapping method; when the first coordinate sequence includes n coordinate points, the computational complexity is O(n). Compared with mapping each coordinate point in the first coordinate sequence onto the second coordinate sequence by the DTW algorithm, the computational complexity is greatly reduced and the amount of calculation is reduced.
S504, acquiring a fitted challenging user movement track according to the mapped first coordinate sequence.
A corresponding timestamp is added to each coordinate point in the mapped first coordinate sequence to obtain the fitted challenging user movement track.
Illustratively, the challenging user movement track includes the track points (0, 0, 1), (1, 1, 2), (2, 2, 3), (3, 3, 4); the corresponding first coordinate sequence is (0, 0), (1, 1), (2, 2), (3, 3); the mapped first coordinate sequence is (0, 0), (2, 1), (4, 3), (6, 5); and the fitted challenging user movement track includes the track points (0, 0, 1), (2, 1, 2), (4, 3, 3), (6, 5, 4).
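Putting S503a-S503f and S504 together, the following sketch reproduces the worked example above. Snapping to the coordinate point of the second sequence whose arc length is closest to d2, and the one-second timestamps, are choices made for this sketch rather than requirements of the embodiment.

```python
import math

def arc_lengths(seq):
    # Cumulative distance travelled along the track at each coordinate point.
    lengths = [0.0]
    for (x1, y1), (x2, y2) in zip(seq, seq[1:]):
        lengths.append(lengths[-1] + math.hypot(x2 - x1, y2 - y1))
    return lengths

def proportional_map(seq1, seq2, finished=True):
    # Map each coordinate point of the first sequence (challenging user) onto the
    # second sequence (challenged user) so that d1/D = d2/D2 (S503a-S503f).
    len1, len2 = arc_lengths(seq1), arc_lengths(seq2)
    D1, D2 = len1[-1], len2[-1]
    D = D1 if finished else D2            # total distance per S503b
    mapped = []
    for d1 in len1:
        d2 = d1 / D * D2 if D > 0 else 0.0
        # coordinate point b: the point of the second sequence reached after d2
        j = min(range(len(seq2)), key=lambda k: abs(len2[k] - d2))
        mapped.append(seq2[j])
    return mapped

seq1 = [(0, 0), (1, 1), (2, 2), (3, 3)]
seq2 = [(0, 0), (1, 1), (2, 1), (3, 2), (4, 3), (5, 4), (6, 5)]
mapped = proportional_map(seq1, seq2)                          # [(0, 0), (2, 1), (4, 3), (6, 5)]
fitted = [(x, y, t + 1) for t, (x, y) in enumerate(mapped)]    # S504: add timestamps
print(fitted)                                                  # [(0, 0, 1), (2, 1, 2), (4, 3, 3), (6, 5, 4)]
```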
According to the movement track fitting method provided above, one movement track is fitted to another movement track. For example, the movement track of the challenging user is fitted to the movement track of the challenged user. In this way, the path tracks of the challenging user and the challenged user coincide completely, and the user is not given the misunderstanding that the two users did not move based on the same path.
In the embodiment of the present application, the movement track of the challenging user is fitted to the movement track of the challenged user. In other embodiments, the movement track of the challenged user may instead be fitted to the movement track of the challenging user. For the specific implementation, reference may be made to the method for fitting the challenging user movement track to the challenged user movement track. Details are not described again in the embodiments of the present application.
It may be understood that, in order to implement the above-mentioned functions, the electronic device (mobile device) provided in the embodiments of the present application includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The embodiment of the application may divide the functional modules of the electronic device according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
In one example, please refer to fig. 14, which shows a possible structural schematic diagram of the electronic device involved in the above embodiment. The electronic device 900 includes: a processing unit 910 and a storage unit 920.
The processing unit 910 is configured to control and manage an operation of the electronic device 900. The storage unit 920 is used for storing program codes and data of the electronic device 900, and the processing unit 910 invokes the program codes stored in the storage unit 920 to perform the steps in the above method embodiments.
Of course, the unit modules in the above-described electronic device 900 include, but are not limited to, the above-described processing unit 910 and storage unit 920. For example, the electronic device 900 may further include a display unit, a communication unit, a power supply unit, and the like. The display unit is used for displaying a user interface of the electronic device 900. The communication unit is used for the electronic device 900 to communicate with other electronic devices; for example, the electronic device 900 receives trajectory information, motion information, and the like of a path from other devices through a communication unit. The power supply unit is used to power the electronic device 900.
The processing unit 910 may be a processor or controller, such as a central processing unit (central processing unit, CPU), a graphics processor (graphics processing unit, GPU), a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, transistor logic device, hardware components, or any combination thereof. The storage unit 920 may be a memory. The display unit may be a display screen or the like.
For example, the processing unit 910 is a processor (such as the processor 110 shown in fig. 8), the storage unit 920 may be a memory (such as the internal memory 121 shown in fig. 8), and the display unit may be a display screen (such as the display screen 140 shown in fig. 8). The electronic device 900 provided in the embodiment of the present application may be the electronic device 100 shown in fig. 8. Wherein the processors, memory, display screen, etc. may be coupled together, for example, via a bus. The processor invokes the memory-stored program code to perform the steps in the method embodiments above.
Embodiments of the present application also provide a system-on-a-chip (SoC) including at least one processor 1001 and at least one interface circuit 1002, as shown in fig. 15. The processor 1001 and the interface circuit 1002 may be interconnected by wires. For example, interface circuit 1002 may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, the interface circuit 1002 may be used to send signals to other devices (e.g., the processor 1001 or a touch screen of an electronic apparatus). The interface circuit 1002 may, for example, read instructions stored in a memory and send the instructions to the processor 1001. The instructions, when executed by the processor 1001, may cause the electronic device to perform the various steps of the embodiments described above. Of course, the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
Embodiments of the present application also provide a computer-readable storage medium including computer instructions that, when executed on an electronic device described above, cause the electronic device to perform the functions or steps of the method embodiments described above.
Embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the functions or steps of the method embodiments described above.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A method for fitting a movement track, which is applied to an electronic device, and is characterized by comprising the following steps:
the electronic equipment acquires data of a first moving track and data of a second moving track; the first moving track and the second moving track respectively comprise at least one track point; each track point is expressed as a geographic coordinate position at a moment t, and the moment t is a timestamp of the track point;
the electronic equipment fits each track point on the first moving track to a corresponding position on the second moving track according to the data of the first moving track and the data of the second moving track, and the fitted first moving track is obtained;
and the electronic equipment compares and displays the fitted first moving track and the second moving track according to the time stamp of the track point.
2. The method of claim 1, wherein the electronic device fitting each track point on the first movement track to a corresponding location on the second movement track comprises:
The electronic equipment acquires a first coordinate sequence according to the data of the first moving track; the first coordinate sequence is a sequence formed by the geographic coordinate positions of each track point on the first moving track;
the electronic equipment acquires a second coordinate sequence according to the data of the second moving track; the second coordinate sequence is a sequence formed by the geographic coordinate positions of each track point on the second moving track;
the electronic equipment maps the geographic coordinate position of each track point on the first moving track to the corresponding geographic coordinate position on the second moving track to obtain a mapped first coordinate sequence;
and the electronic equipment adds a corresponding time stamp to each geographic coordinate position in the mapped first coordinate sequence to obtain a fitted first moving track.
3. The method of claim 2, wherein the mapping the geographic coordinate position of each track point on the first movement track to the corresponding geographic coordinate position on the second movement track by the electronic device comprises:
the electronic equipment maps the geographic coordinate position of a first track point on the first moving track to the geographic coordinate position of a second track point on the second moving track; the first track point is any track point on the first moving track;
The ratio of the distance from the starting point of the second moving track to the second track point to the total length of the second moving track is equal to the ratio of the distance from the starting point of the first moving track to the first track point to the total length of the first moving track.
4. The method of claim 3, wherein the electronic device mapping the geographic coordinate location of the first trajectory point on the first movement trajectory to the geographic coordinate location of the second trajectory point on the second movement trajectory comprises:
the electronic equipment acquires a first geographic coordinate position in the first coordinate sequence; the first geographic coordinate position is the geographic coordinate position of a first track point on the first moving track;
the electronic equipment obtains the distance from the geographic coordinate position of the first track point in the first coordinate sequence to reach the first geographic coordinate position;
the electronic equipment obtains the distance from the geographic coordinate position of the first track point in the first coordinate sequence to the geographic coordinate position of the last track point in the first coordinate sequence;
the electronic equipment obtains the distance from the geographic coordinate position of the first track point in the second coordinate sequence to the geographic coordinate position of the last track point in the second coordinate sequence;
Acquiring a first distance length according to the following formula;
d2=d1/D1*D2;
wherein D2 represents a first distance length, D1 represents a distance from a geographic coordinate position of a first track point in the first coordinate sequence to the first geographic coordinate position, D1 represents a distance from a geographic coordinate position of the first track point in the first coordinate sequence to a geographic coordinate position of a last track point in the first coordinate sequence, and D2 represents a distance from a geographic coordinate position of the first track point in the second coordinate sequence to a geographic coordinate position of the last track point in the second coordinate sequence;
the electronic equipment acquires the geographic coordinate position of a second track point on the second moving track according to the first distance length; the geographic coordinate position of the second track point is a geographic coordinate position reached after the first distance length passes from the first geographic coordinate position in the second coordinate sequence.
5. The method of claim 2, wherein the mapping the geographic coordinate position of each track point on the first movement track to the corresponding geographic coordinate position on the second movement track by the electronic device comprises:
The electronic equipment maps the geographic coordinate position of a first track point on the first moving track to the geographic coordinate position of a second track point on the second moving track; the first track point is any track point on the first moving track;
the ratio of the distance from the starting point of the second moving track to the second track point to the total length of the second moving track is equal to the ratio of the distance from the starting point of the first moving track to the first track point to the total length of the second moving track.
6. The method of claim 5, wherein the electronic device mapping the geographic coordinate location of a first trajectory point on the first movement trajectory to the geographic coordinate location of a second trajectory point on the second movement trajectory comprises:
the electronic equipment acquires a first geographic coordinate position in the first coordinate sequence; the first geographic coordinate position is the geographic coordinate position of a first track point on the first moving track;
the electronic equipment obtains the distance from the geographic coordinate position of the first track point in the first coordinate sequence to reach the first geographic coordinate position;
Acquiring a first distance length according to the following formula;
d2=d1;
wherein d2 represents a first distance length, and d1 is a distance from a geographic coordinate position of a first track point in the first coordinate sequence to the first geographic coordinate position;
the electronic equipment acquires the geographic coordinate position of a second track point on the second moving track according to the first distance length; the geographic coordinate position of the second track point is a geographic coordinate position reached after the first distance length passes from the first geographic coordinate position in the second coordinate sequence.
7. The method of claim 2, wherein the mapping the geographic coordinate position of each track point on the first movement track to the corresponding geographic coordinate position on the second movement track by the electronic device comprises:
the electronic equipment maps the geographic coordinate position of a first track point on the first moving track to the geographic coordinate position of a second track point on the second moving track; the first track point is any track point on the first moving track, and the second track point is the track point with the nearest distance between the second moving track and the first track point.
8. The method of any of claims 1-7, wherein the first movement trajectory and the second movement trajectory are generated by two users moving on the same path, respectively.
9. The method according to any one of claims 1-8, further comprising:
and the electronic equipment acquires the data of the second movement track from a server.
10. An electronic device, comprising: the device comprises a processor, a display screen and a memory; the memory has stored therein one or more computer programs, the one or more computer programs comprising instructions, which when executed by the electronic device, cause the electronic device to perform the method of any of claims 1-9.
11. A computer-readable storage medium comprising computer instructions; the computer instructions, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-9.
CN202311131173.2A 2023-08-31 2023-08-31 Moving track fitting method and electronic equipment Pending CN117707368A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311131173.2A CN117707368A (en) 2023-08-31 2023-08-31 Moving track fitting method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311131173.2A CN117707368A (en) 2023-08-31 2023-08-31 Moving track fitting method and electronic equipment

Publications (1)

Publication Number Publication Date
CN117707368A true CN117707368A (en) 2024-03-15

Family

ID=90152190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311131173.2A Pending CN117707368A (en) 2023-08-31 2023-08-31 Moving track fitting method and electronic equipment

Country Status (1)

Country Link
CN (1) CN117707368A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105528187A (en) * 2014-10-24 2016-04-27 虹映科技股份有限公司 moving image system and method
CN108114453A (en) * 2016-11-30 2018-06-05 巨码科技股份有限公司 The movement technique and system of usage history record
CN112734801A (en) * 2020-12-30 2021-04-30 深圳市爱都科技有限公司 Motion trail display method, terminal device and computer readable storage medium
CN112948515A (en) * 2021-02-07 2021-06-11 张帆 Track mapping method, device, equipment and storage medium based on positioning technology
WO2021213451A1 (en) * 2020-04-23 2021-10-28 华为技术有限公司 Track playback method, and related apparatus
WO2022068887A1 (en) * 2020-09-30 2022-04-07 华为技术有限公司 Method for displaying motion track, and electronic device
CN114363821A (en) * 2020-09-30 2022-04-15 华为技术有限公司 Trajectory rectification method and system


Similar Documents

Publication Publication Date Title
US10220258B2 (en) Method and device for providing workout guide information
US11830375B2 (en) Driving analysis and instruction device
US9569898B2 (en) Wearable display system that displays a guide for a user performing a workout
CN110147705A (en) A kind of vehicle positioning method and electronic equipment of view-based access control model perception
US20110250937A1 (en) Race participant tracking via wireless positioning technology and near real time reporting of location and pertinent race metrics to the participant and optionally to other individuals or publication on the internet
CN111897996B (en) Topic label recommendation method, device, equipment and storage medium
US9020918B2 (en) Information registration device, information registration method, information registration system, information presentation device, informaton presentation method, informaton presentaton system, and program
CN105229490A (en) Use the positional accuracy of satellite visibility data for promoting
CN115136101A (en) System and method for deep learning based pedestrian dead reckoning for externally aware sensor enabled devices
US10488222B2 (en) Mobile device control leveraging user kinematics
US9788164B2 (en) Method and apparatus for determination of kinematic parameters of mobile device user
JP2017009341A (en) Data analysis device, data analysis method and data analysis program
US20220157032A1 (en) Multi-modality localization of users
CN113205515A (en) Target detection method, device and computer storage medium
KR102578119B1 (en) Smart glasses operation method interworking to mobile device
CN117707368A (en) Moving track fitting method and electronic equipment
WO2022252337A1 (en) Encoding method and apparatus for 3d map, and decoding method and apparatus for 3d map
US20180164110A1 (en) Ranking system, server, ranking method, ranking program, recording medium, and electronic apparatus
US10650037B2 (en) Enhancing information in a three-dimensional map
JP2023027548A (en) Device, method, and program for processing information
CN104321617A (en) Mobile system and method for marking location
CN112911363A (en) Track video generation method, terminal device and computer-readable storage medium
CN117723074A (en) Method for monitoring deviation of movement track from reference path track and electronic equipment
CN117725137A (en) Similar path matching method and electronic equipment
WO2022252237A1 (en) Methods and apparatus for encoding and decoding 3d map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination