SE1451095A1 - A system and method for a client device comprised in a virtual navigation system - Google Patents


Info

Publication number
SE1451095A1
Authority
SE
Sweden
Prior art keywords
video stream
user
simulated
area
display
Prior art date
Application number
SE1451095A
Other languages
Swedish (sv)
Other versions
SE538303C2 (en)
Inventor
Göran Garvner
Original Assignee
Signup Software Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Signup Software Ab filed Critical Signup Software Ab
Priority to SE1451095A priority Critical patent/SE538303C2/en
Publication of SE1451095A1 publication Critical patent/SE1451095A1/en
Publication of SE538303C2 publication Critical patent/SE538303C2/en

Links

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/36 - Input/output arrangements for on-board computers
    • G01C21/3626 - Details of the output of route guidance instructions
    • G01C21/3647 - Guidance involving output of stored or live camera images or video streams
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects
    • G06T15/20 - Perspective computation
    • G06T15/205 - Image-based rendering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/003 - Navigation within 3D models or images

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Automation & Control Theory (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

ABSTRACT This invention relates to a system and method for a client device comprised in a virtual navigation system, said method comprising: sending a video stream request (140) from a client device (120) to a video server (110) providing pre-recorded video streams (150, 160), wherein said video stream request (140) comprises spatial data, wherein said spatial data comprises a simulated location (2) of a user, simulated user location direction and simulated user speed information; receiving a first pre-recorded video stream (150) from said video server (110), wherein said first received pre-recorded video stream (150) comprises angular and scaling information; generating a simulated video stream (8) based on said first received pre-recorded video stream (150) and displaying said simulated video stream (8) on a display (130) connected to said client device (120) by means of client software (40) in said user device (4), wherein said client software (40) is arranged to dynamically process said first received pre-recorded video stream (150) based on said spatial data to display an adapted real time view.

Description

A SYSTEM AND METHOD FOR A CLIENT DEVICE COMPRISED IN A VIRTUAL NAVIGATION SYSTEM TECHNICAL FIELD This invention relates to a system and method for a client device comprised in a virtual navigation system, said method comprising: sending a video stream request from a client device to a video server providing pre-recorded video streams, wherein said video stream request comprises spatial data, wherein said spatial data comprises simulated user location, simulated user direction and simulated user speed information; receiving a first pre-recorded video stream from said video server, wherein said first received pre-recorded video stream comprises angular and scaling information; and generating a simulated video stream based on said first received pre-recorded video stream and displaying said simulated video stream on a display connected to said client device.
Generally, embodiments of the invention relate to the technical field of displaying zero or more photos, each followed by the next photo in line (a video stream), in an environment wherein a user can traverse a road or path, and wherein oscillation and non-recorded environment may be calculated virtually and presented by the invention.
BACKGROUND Virtual and non-virtual environments depicting real world scenes are useful for navigation for a wide demographic.
Typically, a visual orientation method is presented as a map or a photo taken from a bird's-eye view. Another method is a ground-level view wherein users can walk around a city at ground level; this method is achieved through large amounts of photos taken, stored and then placed in order to visually represent a real 3D world, wherein the user can navigate by looking around, up and down, as well as zooming. This is achieved by having zero or more photos that can be played in a row over time, i.e. video. An example of having zero photos can be when the video stops and no more photos are being played; an error may occur wherein no more photos are being sent, and/or the video may reach an area for which no photos are stored on the server. The most common navigation methods are used on electronic systems, for example a computer, handheld devices such as smart phones and tablets, GPS systems etc.
Most such electronic systems are dependent upon the Internet to provide data for navigation in a virtual system, as it can contain a large amount of data and can be updated more easily compared to a system with no Internet connection, a so-called closed system.
The Internet is one of the most complex systems in existence and is therefore prone to complex faults. It is also dependent on a strong data stream or data signal to provide the user with a satisfactory experience and understanding of the navigation data provided.
There exist navigation systems where the user can move according to available photos or a video stream. However, with these known systems one cannot travel in a video stream sequence without undesired restrictions, which fails to provide a more realistic experience for the user.
Examples of existing navigation systems are: Google MAPS, which relates to a method of viewing a scene from a bird's perspective and further comprises STREET VIEW, wherein a user can traverse at street level by means of photos taken in a 360 degree view. A user can e.g. click and drag to view surroundings. The user stays still until a new point is clicked in the scene, whereupon the user is standing on the point that was clicked. It does not embody free navigation but rather zooming in the scene or a continual photo/video stream without clicking new navigation points. US2013090850(A1) to Mays et al. relates to a navigation system method that can include automatically processing machine instructions to create a visual travel guide for the predetermined route, the visual travel guide comprising a second set of machine instructions adapted to cause an information device to render: an identification of a destination of the predetermined route; a plurality of videos, each video corresponding to a road intersection located approximately on the predetermined route, each video adapted to substantially reproduce a view of a driver of an automobile approaching the road intersection; a plurality of textual descriptions associated with the plurality of videos; at least one of an identification of an origin of the predetermined route and an identification of an approach to the destination; and an advertisement associated with at least a portion of the predetermined route. There is therefore a need for new methods for improving data transmitted and presented to the user.
SUMMARY OF THE INVENTION According to one aspect this invention proposes a new method for simulating movements outside of recorded video streams, and the transition between different video streams, which allows the user, for example, to enter stores wherein said stores have web stores where the user can make purchases, or to round corners or leave the pre-recorded paths, where the user at all times may experience the travel as if only one video stream is viewed even though one, two or several video streams are used simultaneously to generate that experience. This is achieved by means of the features defined in the independent claims.
This invention also proposes a new method for simulating and enabling a choice to make a movement into an alternate virtual path, e.g. away from one first pre-recorded path that the user follows into a transversal optional path that crosses said first path.
The invention also relates to a web platform based on real world video where users may easily travel to and explore locations across the world. Locations may be historical sites, museums, shopping malls or any other place of interest. The user experience is presented as video using compilations of seamless multi-angle video streams allowing the user to navigate freely. The technology may further be incorporated with QR-tags or locational access points for starting audio and video presentations, or the user may decide to follow a guide in a virtual guided tour, but still have the opportunity to look around during the tour.
The intended main application of the invention is in an economic eco-system of tourism actors, private and public museums, NPOs and governmental institutions as well as schools, educational operators, shopping malls, and commercial actors.
Very few people can afford to travel all over the world and experience the wonders of nature, architecture and culture. At the same time, companies, museums and commercial stakeholders need ways of communication that are cost effective and have merit over long distances and/or over varying time zones. The invention may offer a cost effective solution to the above. Furthermore, it may offer users the opportunity to visit hard-to-reach or potentially hazardous sites. As an example of use of the invention, it is foreseen that school children may attend a guided tour of Machu Picchu and then be able to "have a look at it" on their own and go where they want to go. The learning experience becomes more interesting and vibrant, at a small cost, and the students get a chance to experience the wonders of the world from home or school.
A further possibility is to make business presentations of large factory locations for customers and clients, e.g. Boeing may want to invite pilots across the world on a guided tour of the main assembly building, indeed the largest building in the world, to look at production procedures and implementation of safety features or procedures. Apart from offering the general guided tour, attendants may use locational triggers to move around "freely" and have specific presentations. The same would apply to e.g. a museum, where similar technology using MP3 recorders exists but you actually need to go there physically. With the invention it is possible, wherever, whenever.
Further one may imagine following the building process of a new skyscraper, not on a glossy pamphlet, but on video whenever you want to, which may be of great interest for future tenants, and others.
Local shops and businesses can gain the ability to get in touch with virtual customers and attract new customer groups by advertising and by integrating web shops. This is accomplished by connecting web shops with a specific location as shown in the virtual interface and where the user can make actual purchases meaning users may move from merely observing the shop to actually purchasing real items. Each location may also be connected to information resources enabling public and private organizations to offer guided tours (e.g. museums or city tours) or other information (e.g. directions and company information) related to each location.
In figure 4 there is shown a coarse grained conceptual model according to a preferred embodiment of the invention, presenting that a plurality of technical concepts are used that should be put into relation with the general concepts regarding the invention, thereby connecting methods of the invention and hardware design with the concepts they are supposed to model. The invention aims to give the sense of "having been there" by providing 3D views and movies of different locations with a high degree of interactivity, where the visitor can choose where to go, how to move, and what to see, just as if the visitor were on site instead of watching a screen. This is complemented by an economic eco-system of museums, organizations and other stakeholders providing content and potentially admittance fees from visitors. Furthermore, the technical platform will also be complemented with rentable surface area links that provide access to web shops and other relevant information.
Preferred embodiments of the invention are presented in the dependent claims.
BRIEF DESCRIPTION OF DRAWINGS The present invention will be further explained by means of exemplifying embodiments and with reference to the accompanying figures, in which:
FIG 1 shows a schematic block diagram of basic features according to the invention,
FIG 2 shows an example of using the invention when traversing an intersection,
FIG 3 shows a preferred embodiment of a display according to the invention,
FIG 4 shows a coarse grained conceptual exemplary model of the invention, describing how different concepts used may be related.
FIG 5 shows a proposed subsystem partitioning (which might change during development) capable of supporting all use cases.
FIG 6 shows a coarse grained initial design of the Content Management System.
FIG 7 shows how three different types of components can interact: the content worker, the request broker and the data synchronizer.
FIG 8 shows the overall design of the system, comprising a plurality of equipment and methods/modules to produce a recording.
FIG 9 shows a proposed design for managing and synchronizing the camera sensors programmatically.
FIG 10 shows a recording management system.
DETAILED DESCRIPTION Embodiments of the claimed invention relate to methods and navigation systems for adding, storing and presenting a virtual world depicting real world scenes on a display 130.
Other embodiments of the claimed invention relate to computer readable mediums on which non-transitory information is stored for presenting photos and/or videos of real world scenes. Photos or videos can be taken in various manners, e.g. by traversing by foot, car, or other means.
Fig 1 shows a block diagram presenting the most basic features of a system according to the invention. It presents a setting up arrangement 101 including means for photographing/filming places/scenes and storing a plurality of pre-recorded video streams, comprising angular and scaling information, as data files 150, 160 on a non-transitory medium 102 connected to a server 110. The server 110 may be connected, e.g. via the internet, with a user device 4 having a client device 120 typically comprising client software stored thereon, with a video stream generator 40. The client device 120 may via a request 140 access zero, one or more data files 150, 160 from said server 110. A video stream processor module 115 is arranged to dynamically process the received data 150. Based on spatial data regarding the simulated position of the user, in relation to the angular and scaling information of the data files, the video stream generator 40 adapts the data to present on a display 130 an adapted real time view.
An embodiment comprises a method for a client device comprised in a virtual navigation system, said method comprising: sending a video stream request 140 from a client device 120 to a video server 110 configured to provide pre-recorded video streams 150, 160, wherein said video stream request 140 comprises spatial data, wherein said spatial data comprises a simulated location 2 of a user, simulated user location direction and simulated user speed information; receiving a first pre-recorded video stream 150 from said video server 110, wherein said first received pre-recorded video stream 150 comprises angular and scaling information; generating a simulated video stream 8 based on said first received pre-recorded video stream 150 by means of a video stream generator 40 in said user device 4 and displaying said simulated video stream 8 on a display 130 connected to said client device 120, wherein said first received pre-recorded video stream 150 is dynamically processed based on said spatial data to display an adapted real time view.
Another embodiment comprises a virtual navigation system, comprising: a user device 4 including a client device 120 and a video server 110 having video streams 150, 160 stored thereon; a video stream request module 112 comprised in said client device 120 arranged to send a video stream request 140 from said client device 120 to said video server 110 being configured to provide pre-recorded video streams 150, 160, wherein said video stream request 140 comprises spatial data, wherein said spatial data comprises a simulated location 2 of a user, simulated user location direction and simulated user speed information; a video stream receiver 114 configured to receive a first pre-recorded video stream 150 from said video server 110, wherein said first received pre-recorded video stream 150 comprises angular and scaling information; a video stream generator 40 configured to generate a simulated video stream 8 based on said first received pre-recorded video stream 150 and arranged to display said simulated video stream 8 on a display 130 connected to said client device 120 by means of said video stream generator 40 in said user device 4, wherein said video stream generator 40 includes a video stream processor module configured to dynamically process said first received pre-recorded video stream 150 based on said spatial data to display an adapted real time view.
The invention may be used in various situations, e.g. for virtually going from one point to another or to several others, and is especially useful when travelers have no prior knowledge of the traversed distance. Moreover, the system enables traversing a distance continually in real time without any user interaction except choosing navigation points.
While running the client device 120, processing is performed dynamically, by means of computer executable means, wherein computer executable means can for example be computer language code, e.g. JavaScript and Ajax, to continuously send positional, directional, and speed data 140 to the server 110 to enable sending the right data files 150, 160 (e.g. a cut scene) to the client device 120. The invention may allow the user to move outside the recorded route, e.g. by using 360 degree film sequences that have been recorded in all directions simultaneously.
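As a concrete illustration of the continuous positional/directional/speed reporting described above, the sketch below builds the spatial-data payload of a request 140 and sends it on a timer. The field names, the polling interval and the `send` callback are illustrative assumptions; the patent does not specify a wire format.

```javascript
// Hypothetical sketch of the spatial-data request (140) a client might send.
// Field names are illustrative, not taken from the patent.
function buildStreamRequest(state) {
  return {
    location: { lat: state.lat, lon: state.lon }, // simulated location (2)
    direction: state.headingDeg,                  // simulated user direction
    speed: state.speed                            // simulated user speed
  };
}

// The client could poll on a timer and hand each payload to a transport
// function (e.g. an Ajax POST to the video server 110).
function startPolling(state, send, intervalMs = 500) {
  return setInterval(() => send(buildStreamRequest(state)), intervalMs);
}
```

In a browser, `send` would typically wrap an Ajax call so the server can keep streaming the correct data files 150, 160 as the simulated position changes.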
In the preferred embodiment the video stream generator 40 of the invention uses HTML5 capabilities and scripting, which i.a. may enable the user to continually traverse a distance with 360 degree film and image zoom, and also travel alongside traversed paths.
By transferring the image display settings for future frames, the claimed invention may simulate the user traversing a dynamic path without being bound to a rigid recorded path, implemented by use of photos and/or film sequences of the files 150, 160 sent from the server 110 to be dynamically processed in real-time by the video stream generator 40 of the client device 120. Fig 2 shows an example when using the invention in an intersection 7, where each circle 1A, 1B defines an end 1B and/or a start point 1A for a video stream of a data file 150, 160. Dashed circles 2, 2' present examples of how a user may virtually be positioned at an offset 5 from the actual recorded position, e.g. along a line 150, 160 from one point 1A to another 1B. The virtual positions represented by the dashed circles 2, 2' are preferably determined by an algorithm contained in the video stream generator 40 of the client device (an example of an appropriate algorithm will be described in more detail below). Further preferred embodiments of the invention include computing process related methods and an algorithm for handling a corner turn 6 in a virtual manner that gives a real life sensation to the user, i.e. by dynamically processing data and displaying a view that is not the actual position in the recorded video.
The client device 120, having received pre-recorded video sequences 150, 160 wherein angles and scaling are known, may create a virtual position 2, 2' of a specific frame and scale of the image(s) received, based on input from one or more files 150, 160, so that it appears correctly and undistorted even though the center of the image does not correspond to the center position 1 of the pre-recorded file. Said virtual position 2, 2' may be dynamically created by means of computer generated code, e.g. HTML5 and JavaScript. Also other computer languages or platforms, e.g. Adobe Flash, may be used.
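The patent gives no formula for re-framing a frame for an off-center virtual position, but one plausible sketch, assuming a simple pinhole-camera model, is to shift the crop window of the frame in proportion to the lateral offset 5 divided by the scene depth. The focal length, depth values and clamping limit below are assumptions for illustration only.

```javascript
// Illustrative only: shift (in pixels) of the crop window used to display a
// pre-recorded frame from a virtual position offset laterally from the
// recorded centerline. Assumes a pinhole projection; all parameters are
// hypothetical, not specified in the patent.
function reframeForOffset(lateralOffsetM, depthM, focalLengthPx, frameWidthPx) {
  // Project the lateral offset onto the image plane.
  const shiftPx = focalLengthPx * (lateralOffsetM / depthM);
  // Clamp so the crop window stays inside the recorded (e.g. 360 degree) frame.
  const maxShift = frameWidthPx / 4;
  return Math.max(-maxShift, Math.min(maxShift, shiftPx));
}
```

Consistent with the quality remark below, the larger the offset relative to the frame width, the more the crop must be clamped or upscaled, degrading the displayed scene.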
The quality of the displayed scene may get lower the further away from the recording path the user chooses to locate the virtual position 2, 2'.
According to a modified "add on", 3D modeling may be considered for implementation when a video sequence may not be directly used, and is instead analyzed from different angles and saved to a 3D environment modelling tool, e.g. to achieve an elevated simulation of traversing outside the path, e.g. if there is a desire to display the back of objects, e.g. poles and houses. Hence, in that case there is no actual photo/video used for creating the virtual image but a totally virtually created view.
When the user approaches an intersection 7, virtually travelling based on a first data file 150, the client device 120 has sent, or will send, a request 140 to the server 110, given the chosen direction and speed, requesting at least one other data file 160 whose endpoints 1A, 1B are near or meet an end/start point 1A, 1B of the current video stream sequence, concurring with a choice made by the user. This may be performed dynamically by arranging for predetermined direction choice requests, i.e. when a user (virtually) approaching an end point 1A, 1B is located in an intersection 7, the client 120 may request a direction selection from the user, such that the server 110 may send the correct data file 160. The server 110 will upon receipt of such a request 140 then send, or prepare to send, a new data file 160 corresponding to the chosen direction through the intersection 7. Requests can either be sent to the client 120 as described above or the client 120 can access the desired information as part of the server's 110 response to the call position (e.g. by a JavaScript Ajax call).
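The endpoint matching described above, finding another data file 160 whose start point 1A is near the current sequence's end point 1B, could be sketched as a simple distance filter. The path representation, the 2D coordinates and the tolerance are illustrative assumptions.

```javascript
// Sketch of endpoint matching at a junction (7): given the current path's end
// point (1B), find pre-recorded paths whose start point (1A) lies within a
// tolerance. Coordinates and tolerance are hypothetical.
function dist(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

function adjacentPaths(currentEnd, allPaths, toleranceM = 5) {
  return allPaths.filter(p => dist(p.start, currentEnd) <= toleranceM);
}
```

The candidate list produced this way could then be offered to the user as the direction selection, with the chosen path's data file requested from the server.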
The system is set to trigger supply of a direction selection data set to said user device 4 upon detection that the virtual travel approaches an intersection 9, wherein the trigger function may be based on a predetermined distance from the intersection 9 and/or a predetermined calculated time in advance of arrival to the intersection 9. Preferably said direction selection data set is displayed on the display 130 to display at least one direction selection data item 134A, 134B, facilitating easy input of a selection data item 134A, 134B to the system, e.g. to make a turn.
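The distance- and/or time-based trigger just described could be sketched as follows; the threshold values and the combination with logical OR are illustrative assumptions, since the patent leaves them as "predetermined".

```javascript
// A possible trigger for supplying the direction selection data set: fire when
// either the remaining distance or the estimated time of arrival at the
// intersection drops below a threshold. Thresholds are assumptions.
function shouldPromptForDirection(distToIntersectionM, speedMps,
                                  minDistM = 30, minTimeS = 5) {
  const etaS = speedMps > 0 ? distToIntersectionM / speedMps : Infinity;
  return distToIntersectionM <= minDistM || etaS <= minTimeS;
}
```

At higher simulated speeds the time criterion fires earlier than the distance criterion, giving the user the same reaction window regardless of travel speed.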
Direction selection data contains information about the intersection, e.g. its position, the number of paths available, and the directions in which they are located. When the client 120 produces a request 140 for direction selection, it shows, e.g. through JavaScript and HTML5 script, direction control members 134A, 134B (see also Fig 3) displaying for the user the different roads through the intersection 7, and allows the user to select one direction, for example by clicking on one of the directions 134A, 134B on a touch sensitive display 130, or by using keystrokes on a keyboard, or a mouse, or other means.
While the user makes a direction selection the video stream sequence continues to appear. When the user is in the intersection 7 the video stream sequences 150, 160 may be dynamically processed by the client device to display an imaginary view, preferably curved 6 in a manner that gives a real life sensation. The client 120 will then calculate the cornering to be displayed by calculating a circle sector 6 between the user's present position and a target position well into the new video stream sequence, and allow the user to move according to the circle sector 6 and adjust the direction, position and viewed angle in unison. Said corner calculation is preferably executed by the video stream generator 40 of the user device 4 but may also be sent, e.g. through an Ajax call and JavaScript, to the server 110 in a corner path sequence. Using the corner path sequence the server 110 will continue to send data files related to the first video stream sequence 150 and also regarding the new video stream sequence 160 to the client 120. The client 120 receives new and previous video stream sequences simultaneously. With the corner calculation and the new and previous video stream sequences, the client 120 uses computer executable code, e.g. JavaScript and HTML5, in real-time to create a unique user cut scene where the frames from the new and the previous video stream sequence are merged and displayed to the user.
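The circle-sector cornering above can be sketched as interpolation along a circular arc between the user's position on the old path and the target position on the new one. The arc parametrization below is a minimal geometric illustration; the actual center, radius and easing used by the invention are not specified in the patent.

```javascript
// Hedged sketch of the circle sector (6) cornering: intermediate virtual
// positions are sampled along a circular arc. Center, radius and angles would
// be derived from the old and new path geometry; here they are inputs.
function arcPoint(center, radius, startAngle, endAngle, t) {
  // t runs from 0 (user's present position) to 1 (target on the new sequence)
  const a = startAngle + (endAngle - startAngle) * t;
  return { x: center.x + radius * Math.cos(a),
           y: center.y + radius * Math.sin(a) };
}
```

Sampling `t` per displayed frame yields a smooth sweep of position (and, by the same parameter, heading), which is what allows the merged cut scene to avoid an abrupt change of direction.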
Further, the user may set the velocity to zero, whereby the user will make an active choice before any video stream sequence starts again, and may instead spin on the spot or zoom in on any images, etc. connected to said spot.
Accordingly, at intersections 7, the invention provides a technique that may make corner turns fluid, e.g. without a lowering of frames over time in the video stream sequence, rather than getting an abrupt change of direction. If the user is supposed to traverse straight forward, the sequence continues with the user's position transferred to the new video stream sequence. If the new video stream sequence's recorded center 1 goes along a different line than the chosen path 6, this will be compensated by the simulated lateral movement 5 in such a manner that the user will not notice any offset jump, i.e. the invention dynamically overlaps off-centered video stream sequences such that an undisturbed view is presented on the display 130.
In fig 3 there is shown a preferred embodiment of a display 130 for a "Visitor" in connection with the invention. Within the display area there is a first larger area 134 that preferably presents a view in color of the path that the user is traveling, e.g. as seen when driving a car. Within that area 134 of the display, directional arrows 134A, 134B may fade in and out during movement along the path, comprising at least one arrow 134B indicating possibilities to make turns away from a straight forward drive along the chosen path. Preferably there is also a directional arrow 134A that may be chosen to indicate that the user desires to continue straight forward. Further, preferably within that major display area 134, there is a travel speed controller 131, whereby the user may choose how fast the traveling shall be. In the preferred embodiment the travel speed controller 131 comprises an arrow symbol 131A having a movable slider 131B thereon, whereby moving the slider 131B will increase or decrease the speed. Moreover, the display 130 contains a second area 132 containing a navigation map wherein a map of the area is shown, e.g. in a bird's perspective, displaying roads 132A of a limited area and also a symbol 132B identifying the position of the user within that area. Furthermore, there is also preferably a symbol 132C that indicates the direction of movement of the user. A third area 133 within the display 130 may preferably display user instructions, e.g. quick help, etc. Finally, preferably there is a fourth display area 135 displaying local information connected to the virtual position of the user, e.g. informing of upcoming events, nearby places of special interest, advertisements etc.
The table below further refines this concept image with use cases, i.e. actions that the "Visitor" may want to do. The table contains four columns, where the first column is a unique identifier used to identify a specific use case. The second column provides the name of the use case and is retrieved from the use case image above. The third column contains a brief description of the use case and the fourth column shows in which iteration the use case primarily should be addressed according to a preferred iteration plan.

Identifier | Name | Description | Iteration
VUC01 | Search for Area | The actor searches for an area to visit. | 2
VUC02 | Navigate to Area | The actor selects an Area or location to visit and is brought to that location. | 2
VUC03 | Pan Navigation Map | The actor pans the navigation map. | 4
VUC04 | Choose Location on Navigation Map | The actor uses the Navigation Map to choose a Location to visit. | 4
VUC05 | Change Travel Speed | The actor changes the speed at which it is travelling through the Photo Path. | 1
VUC06 | Change Viewing Direction | This use case describes the courses of action taken when the visitor changes in which direction to view for the Displayed Video View. | 1
VUC07 | Choose Path at Junction | The actor is traveling through a Photo Path and is approaching a Junction and makes a choice on which path to take at the Junction. | 1
VUC08 | Join Guided Tour | The actor joins a predesigned tour with a travel path and textual and/or auditory information about what is observed. |
VUC09 | Visit Licensed Area | The actor visits a Licensed Area. |
VUC10 | Visit Surface Area Link | The actor enters a Surface Area Link. | 3

In fig 4, an exemplary model is outlined describing how different concepts used may be related. Concepts in rectangles denote entities E1-E11, and concepts C20-C70 in ellipses denote attributes of entities. Lines between entities denote relations ("knows about each other") and a line between an entity and an attribute means that the entity "has a" attribute. An arrow denotes specialization between entities and can be read as "is a", so that an entity pointing at another entity is a specialization of that entity, i.e. contains further restrictions or fewer restrictions.
At the ends of the associations are cardinalities, where '1' means "must have exactly one" and 'n' means "zero, one or more". For example, in fig 4, an Area E1 can know about zero, one or more Area Slices E2, and an Area Slice E2 must know about exactly one Area E1, no more and no less.
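The 1-to-n cardinality between Area E1 and Area Slice E2 can be sketched as two small classes. The class members and method names are illustrative; the patent only defines the conceptual relation, not an implementation.

```javascript
// Minimal sketch of the Area (E1) / Area Slice (E2) cardinalities from fig 4:
// an Area holds zero, one or more slices ('n'), and each slice refers back to
// exactly one Area ('1'). Member names are hypothetical.
class Area {
  constructor(name) {
    this.name = name;
    this.slices = [];       // 'n' side: zero, one or more Area Slices
  }
  addSlice(slice) {
    slice.area = this;      // '1' side: the slice knows exactly one Area
    this.slices.push(slice);
    return slice;
  }
}

class AreaSlice {
  constructor() {
    this.area = null;       // set when attached to its owning Area
  }
}
```

The same pattern would extend to the other fig 4 relations, e.g. a Photo Path E3 holding zero or more Photos.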
An Area E1, in this terminology, describes a location covering a certain geographic area, such as a city, part of a city, or some location that a user of the invention may want to visit. An Area E1 may be adjacent to other areas (not shown in fig 4), but two areas cannot cover the same geographic location. There are two types of areas: Unlicensed Areas E10, which are free for all users to visit, and Licensed Areas E11, to which access must be obtained, e.g. purchased.
An Area E1 is (e.g. partially) covered by zero or more Photo Paths E3 (e.g. 150, 160 in fig 3), i.e. some kind of video sequence with a start location 1A and an end location 1B (see fig 3) that can be played to users of the invention. Photo Paths E3 can be adjacent to other Photo Paths by having the same start location 1A or end location 1B as other Photo Paths' start or end locations. This overlap is in this document denoted a Junction J (see fig 3).
Each Photo Path E3 stems from one Recording which is the result of using cameras and other sensors to create video from a geographic area. Each Recording can generate one or more Photo Paths E3.
Each Photo Path E3 contains zero or more Photos. A Photo is a snapshot of a specific location and can be thought of as a photo from a camera taken at a specific geographic coordinate.
An Area E1 can also contain zero, one or more Area Slices E2. An Area Slice E2 is contained in an Area E1 and can be thought of as a part of that area covering a specific volume, i.e. it has a position and an extension in width, height and depth. There are two types of Area Slices E2: Linkable Area Slices E20 and Surface Area Link slices E21. A Linkable Area Slice E20 is an Area Slice that can be purchased, in the sense that someone can get a license to place their own information on that slice E20, and by the purchase it is transformed into a Surface Area Link E21.
A Surface Area Link E21 is an Area Slice purchased by a user in order to display banners, commercials, messages and links to web shops or other external web pages. When the license to use the Area Slice E2 expires, the Surface Area Link E21 is transformed back into a Linkable Area Slice E20.
To one or more Photo Paths E3 a Tour E7 can be connected. A Tour E7 is a part of an area that has additional textual information and comments connected to the Photo Paths E3. The textual information is called Tour Information and is associated with specific Photos in a Photo Path E3 so that the information changes as the user travels through the Area E1. In a preferred embodiment each Photo Path E3 and Photo can take part in many tours, allowing for some information to be free, for instance, and other information to be purchasable.
In a preferred embodiment there are five distinct actors involved that are embodied by users, which are listed below.
Visitor: A user with the purpose of visiting one or more Areas. The role of the Visitor is to move around through the Photo Paths.
Commercial Visitor: A user visiting with the purpose of making or managing purchases of, for example, banners, ads, linked web shops and access to Licensed Areas.
The role of the Commercial Visitor is to make and manage purchases.
Administrator: A user working with the purpose of supporting Commercial Visitors and Visitors, e.g. handling support requests, account information, etc.
Content Recorder: A user that generates Recordings for future conversion to Photo Paths and addition into the invention system.
Content Editor: A user responsible for managing, adding and deleting the viewable contents of the invention: Recordings, Photo Paths, Photos, etc.
In a preferred embodiment the role of the Commercial Visitor is to make and manage purchases. These can be to purchase, manage and revoke licenses to visit certain Licensed Areas, Surface Area Links and to join Tours. The licenses may be tied to the specific Commercial Visitor and therefore a few use cases for creating and managing user accounts are identified. Furthermore, if something goes wrong, a Commercial Visitor must be able to get support.
The table below contains four columns, where the first column is a unique identifier used to identify a specific use case. The second column provides the name of the use case and is retrieved from the use case image above. The third column contains a brief description of the use case and the fourth column shows in which iteration the use case primarily may be addressed according to a preferred iteration plan.
CUCO1 Search for Linkable Area: The use case describes the actions the actor has to perform in order to find Linkable Areas that can be turned into Surface Area Links. (Iteration 3)
CUCO2 Turn Linkable Area into Surface Area Link: This use case describes how a Linkable Area is turned into a Surface Area Link. (Iteration 3)
CUCO3 Manage Surface Area Link: This use case describes the actions of changing information in a Surface Area Link. (Iteration 3)
CUCO4 List Own Surface Area Links: This use case describes the actions for listing own Surface Area Links. (Iteration 3)
CUCO5 Delete Surface Area Link: This use case describes the actions of deleting a Surface Area Link (by revoking the license). (Iteration 3)
CUCO6 Create Account: This use case describes the actions necessary for creating a commercial visitor account. (Iteration 3)
CUCO7 Account Login: This use case describes the actions necessary to log into an account. (Iteration 3)
CUCO8 Account Logout: This use case describes the actions necessary to log out from an account. (Iteration 3)
CUCO9 Manage Account: This use case describes the actions necessary for changing information in a commercial visitor account. (Iteration 3)
CUC10 Request Support: This use case describes the actions for the actor to request support. (Iteration 3)
CUC11 List Tours: This use case describes the actions for listing tours that have been purchased.
CUC12 List Licensed Areas: This use case describes the actions for listing Licensed Areas that have been purchased.
In a preferred embodiment the role of the Administrator is to help Commercial Visitors and to administrate the registered accounts. Due to the sensitivity of the tasks, the administrator must also be able to identify itself, which is why there must be administrator accounts in addition to the Commercial Visitor accounts. Here, the Administrator can manage a Commercial Visitor account by impersonating that visitor, which entails that once the Administrator is impersonating the Commercial Visitor it gets access to the use cases of that Commercial Visitor regarding purchases and account management.
The table below contains four columns, where the first column is a unique identifier used to identify a specific use case for an administrator. The second column provides the name of the use case and is retrieved from the use case image above. The third column contains a brief description of the use case and the fourth column shows in which iteration the use case primarily should be addressed according to a preferred iteration plan.
AUCO1 Account Login: The administrator logs into an administrator account. (Iteration 3)
AUCO2 Account Logout: The administrator logs out from an administrator account. (Iteration 3)
AUCO3 Manage Account: The administrator changes account information, e.g. passwords and email addresses. (Iteration 3)
AUCO4 Impersonate Commercial Visitor: The administrator logs into a Commercial Visitor account. (Iteration 3)
AUCO5 Search Commercial Visitors: The Administrator searches for one or more Commercial Visitor accounts. (Iteration 3)
AUCO6 Deactivate Commercial Visitor Account: The administrator deactivates a commercial visitor account so that it is impossible to log in to that account. (Iteration 3)
AUCO7 List Area Slices: The administrator retrieves a list of Surface Area Links and Linkable Areas according to criteria: belonging to a Commercial Visitor, in a specific Area. (Iteration 3)
AUCO8 Manage Support Request: The administrator selects a Support Request and views or updates the information stored about that request. (Iteration 3)
AUCO9 List Support Requests: The administrator retrieves a list of open Support Requests. (Iteration 3)
AUC10 Close Support Request: The administrator closes a Support Request. (Iteration 3)
In a preferred embodiment the Content Recorder is responsible for creating Recordings for later retrieval by Content Editors. The table below contains four columns, where the first column is a unique identifier used to identify a specific use case for a content recorder. The second column provides the name of the use case and is retrieved from the use case image above. The third column contains a brief description of the use case and the fourth column shows in which iteration the use case primarily should be addressed according to a preferred iteration plan.
RUCO1 Record: The content recorder makes a recording. (Iteration 2)
RUCO2 Convert: The content recorder converts a recording to a preprocessed format ready to be integrated into the contents of the invention. (Iteration 2)
In a preferred embodiment the role of the Content Editor is to manage the data stored in the databases 102: adding and managing Recordings and their transformation to Areas E1, Photo Paths E3 and Photos E5, and to add and manage Tours E7, Licensed Areas E11 and Area Slices E2.
The table below contains four columns, where the first column is a unique identifier used to identify a specific use case for a content editor. The second column provides the name of the use case and is retrieved from the use case image above. The third column contains a brief description of the use case and the fourth column shows in which iteration the use case primarily may be addressed according to a preferred iteration plan.
EUCO1 Add Recording: A recording prepared by the Content Recorder (see RUCO2) is converted and added to the stored Photo Paths E3 for an Area E1. (Iteration 2)
EUCO2 Modify Recording: Information in a recording or Photo Path E3 or Photo E5 is altered. (Iteration 6)
EUCO3 Delete Recording: The Photo Paths E3 from a recording are made inaccessible for Visitors. (Iteration 6)
EUCO4 Search for Recordings: The Content Editor searches for one or many Recordings based on search criteria: Area, Date. The retrieved recordings are viewed as a list or as overlays on a 2D map. (Iteration 6)
EUCO5 Add Tour: The Content Editor attaches textual information to Photos E5 in one or more Photo Paths E3 that is to be presented to a Visitor, thereby creating a purchasable guided Tour E7.
EUCO6 Modify Tour: The Content Editor changes the information in a Tour E7.
EUCO7 Delete Tour: The Content Editor removes a Tour E7.
EUCO8 Search for Tour: The Content Editor searches for one or many Tours according to criteria: is in a specific Area E1 or a specific Photo Path E3, or Tour name E7. The retrieved Tours E7 are presented as a textual list or as overlays on a map.
EUCO9 Add Licensed Area: The Content Editor adds one or more Photo Paths E3 to be a Licensed Area E11.
EUC10 Modify Licensed Area: The Content Editor modifies information in a Licensed Area E11.
EUC11 Delete Licensed Area: The Content Editor deletes a Licensed Area E11, either by removing all Photo Paths E3 or by making it a normal Area E10.
EUC12 Search for Licensed Areas: The Content Editor searches for one or more Licensed Areas according to some criterion, Area Name. The result is presented in a textual list.
EUC13 Add Linkable Area: The Content Editor creates a Linkable Area E3.
EUC14 Delete Linkable Area: The Content Editor removes a Linkable Area E3.
EUC15 Search for Area Slices: The Content Editor searches for and retrieves a list of Area Slices E2, that can be both Linkable Areas E20 and Surface Area Links E21, according to criteria: belonging to an Area E1, or of a specific Area Slice type E2. (Iteration 3)
According to a preferred embodiment the processing priority is that the Visitors always have the highest priority, so that their experience is as good as possible. The second highest priority is for the Commercial Visitors. The third is the Administrator's, since they will have to handle support requests that may be urgent. The lowest priority is adding and managing content.
According to a preferred embodiment the number of actors is quite limited, with clear-cut responsibilities and roles. Further according to a preferred embodiment the inventive system may be partitioned largely so that each subsystem is responsible for handling one specific actor. This makes further design and extensions easier, since side effects of additional functions and use cases only affect the current actor and not the others. Furthermore, new use cases can easily be assigned to a specific subsystem. The information that is common for the different actors may preferably be gathered in separate subsystems that can never be accessed by the actors themselves but are guarded by the actor-specific subsystems and have clear-cut interfaces.
In fig. 5 there is presented a preferred subsystem partitioning capable of supporting all use cases, comprising the recording system 101, having a recording managing system RMS connected thereto and a content management system CMS as a subsystem to the recording managing system RMS. A sub visitor managing system VMS is connected to the content management system CMS, as is a sub account managing system AMS. Further, there is an administrator management system connected to the account managing system AMS, as well as a commercial visitor managing system CVMS. It is evident for the skilled person that the design may easily be varied/changed depending on requirements/desires.
The preferred system according to the invention may have seven subsystems. Each of these subsystems is presented in greater detail in relation to some preferred design criteria in subsequent sections. Again, it is evident for the skilled person that the design may easily be varied/changed depending on requirements/desires.
Some of the proposed subsystems may be executed on the server 110 of the invention, but some of the subsystems may also be executed on the client device 120, such as HTML interpretation and JavaScript, etc.
The main responsibility of the Content Management System may be to store and make available data that can be accessed by or have influence on a "Visitor". It contains Photos E5, Photo Paths E3, Recordings and other data that might be accessed.
In a preferred embodiment, in order to handle scalability and performance, the stored data should be easy to access, and if the workload in the system is too high, then adding new servers should preferably directly decrease the workload. Adding new servers shall preferably be easy and preferably not require the entire system to be shut down. Also, the servers should preferably as much as possible contain the same data, so that a Visitor gets the same experience whichever server is used. This is achieved by data synchronization that preferably should be automatized.
In fig. 6 there is shown a coarse-grained preferred embodiment of a design of a Content Management System (CMS). There are three different types of components: the Content Workers CW, the Request Broker RB and the Data Synchronizer DS. How they may interact is illustrated in the sequence diagram in fig. 7.
The sequence diagram in fig. 7 outlines two exemplary cases C1, C2. A first contact S1 is made by an actor to the invention subsystem, which goes to the Request Broker RB, including a request for a server 110 to work with. The Request Broker RB returns S3 the address to an available server 110, which is connected.
If no content changes are to be made on the server 110, then the request S1' is sent to the server 110, which responds S4 to the request.
The second case C2 outlines a sequence where a content change is requested (such as adding a new Photo Path, Area, Photo etc.). That requested change is an add-data request S1'' sent to the server, which in turn contacts S5 the Data Synchronizer DS, reporting which change it has performed. The Data Synchronizer DS acknowledges S6 this change to the server and the server 110 acknowledges S7 the change to the actor. After this, and while the rest of the system is still running, the Data Synchronizer DS starts ordering S8 the rest of the servers 110 to perform the change, which one by one acknowledge S9 the change when it has been performed. This way a content change can be propagated throughout the system without having to halt all servers 110.
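The broker/synchronizer interplay described above can be sketched in JavaScript (the client language the document itself names). All class and method names below are illustrative assumptions, not part of the actual system; the sketch only mirrors the S1-S9 message flow of fig. 7.

```javascript
// Illustrative sketch of the fig. 7 sequence; all names are assumptions.

class RequestBroker {
  constructor(servers) { this.servers = servers; this.next = 0; }
  // S1/S3: hand out the address of an available server, round-robin.
  getServer() {
    const server = this.servers[this.next];
    this.next = (this.next + 1) % this.servers.length;
    return server;
  }
}

class DataSynchronizer {
  constructor() { this.servers = []; }
  register(server) { this.servers.push(server); }
  // S5/S6: acknowledge the reported change, then S8/S9: propagate it to
  // the remaining servers while the rest of the system keeps running.
  reportChange(origin, change) {
    for (const server of this.servers) {
      if (server !== origin) server.applyChange(change);
    }
    return "ack";
  }
}

class ContentServer {
  constructor(sync) { this.data = []; this.sync = sync; sync.register(this); }
  applyChange(change) { this.data.push(change); return "ack"; }
  // S1''/S7: accept a content change and report it to the synchronizer.
  requestChange(change) {
    this.applyChange(change);
    this.sync.reportChange(this, change);
    return "ack";
  }
}

const sync = new DataSynchronizer();
const servers = [new ContentServer(sync), new ContentServer(sync)];
const broker = new RequestBroker(servers);

broker.getServer().requestChange("add Photo Path");
// After propagation both servers hold the same data.
console.log(servers.map(s => s.data.length)); // [ 1, 1 ]
```

The point of the sketch is that the change is applied at one server first and acknowledged to the actor before the synchronizer fans it out, so no server has to be halted.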
The Request Broker RB is responsible for load balancing. The first request from a newly arrived Actor (Visitor Management System, Account Management System or Content Management System) is preferably a request for a server address from the Request Broker RB, and the Request Broker should provide the Actor with the address to one of the servers 110. All further contact from the Actor should be directed to and managed by the designated server 110.
The Content Worker CW preferably contains all data for paths and recordings and is responsible for providing the Visitors with data files 150, 160. Scaling the system should merely be the task of adding more Content Workers to the system and replicating the data stored on the other workers. The Content Workers should not know about each other. Each Content Worker serves zero, one or more Actors with data, but an Actor should only know one worker at a time.
The Data Synchronizer DS preferably manages the data synchronization between the Content Workers. If a change in one Content Worker, e.g. from the Content Manager, takes place, that change should as fast as possible be propagated to the other Content Workers. As can also be seen in the figure above, if a change is commanded to a Content Worker, then that change is reported from the Content Worker to the Data Synchronizer, which acknowledges the change. Thereafter, one at a time or all simultaneously (e.g. by broadcast), the reported change is commanded to and subsequently acknowledged by all the other Content Workers. A check may be made that all the Content Workers have changed, e.g. that each Content Worker responds to the Data Synchronizer that the change is inserted and committed correctly.
The Data Synchronizer DS may also be responsible for ensuring that a newly added Content Worker gets synchronized with the other Content Workers.
The content management system CMS preferably interfaces three other subsystems: the Recording Management System RMS, the Visitor Management System VMS and the Account Management System AMS.
The Recording Management System RMS interface preferably contains functions for managing the contents of the Content Management System: Recordings, Photo Paths, Licensed Areas and Tours.
The Visitor Management System VMS interface preferably contains functions for retrieval of Photo Paths. It cannot change any information in the Content Management System.
The Account Management System interface contains functions that access and manage purchased items such as Surface Area Links E21 and unlocked Licensed Areas E20.
The responsibility of the Account Management System AMS may be to store and make available data that can be accessed or modified by or have influence on a Commercial Visitor. It may contain account information and information on services purchased by the Commercial Visitor. It also may handle support requests.
The design for this system may basically be the same as the design of the Content Management System (see fig. 6). Since potentially lots of Commercial Visitors may want to access and manage their accounts simultaneously this subsystem may face the same scalability and performance issues.
There are three different types of components: the Account Worker, the Request Broker and the Data Synchronizer, wherein the Content Workers of fig. 6 are exchanged for Account Workers. How they interact is similar to the sequence shown in fig. 7.
The responsibility of the Content Recorder System is preferably to serve the Content Recorder actor by providing the use case functionality, i.e. providing cameras, sensors, storage medium, etc. and a GUI related to that actor.
This system 101 contains all the equipment and methods necessary to produce a recording (apart from a possible vehicle). Primarily, the methods, typically implemented by means of computer program portions, reside on a laptop that is part of the content recorder system and which is used to synchronize sensors and store the recordings.
The Content Recorder System 101 preferably comprises two components, i.e. a recording synchronizer RS and a recording converter RC, as shown in fig. 8.
The first component RS, the Recording Synchronizer, may coordinate and synchronize the data sampling from all used sensors 50, such as the gps 51, camera 52, compass 53, etc. It ensures that the sampled data gets stored in a correct way and warns about and recovers from detected errors. This component RS runs when video is recorded by moving around in an area E1.
The second component, the Conversion Manager RC, is responsible for converting the sensor samples to data adapted for insertion into the Recording Management System RMS. The choice is made to place this initial conversion onto a recorder system device 54, since the conversion may require lots of processing power and is therefore performed offline from the servers of the system.
The different sensors 50, etc. are connected (in some way) to the device 54 and are sampled through the Recording Synchronization module RS to produce one or more files 56 that together constitute an initial Recording 55. Once the Recording 55 has been made, the Content Recorder CR can order conversion, which means that the recording is processed through a Conversion Manager and may be inserted into an intermediate database 59 in a format that is prepared for easy insertion into the Recording Management System RMS. In connection with fig. 9 there is shown an exemplary embodiment of computer program implemented methods used to summarize a recording onto a storage medium. The camera should preferably have the possibility to generate its own file.
The recording files may have, at least, time stamped entries so that entries in different recording files can be matched and synchronized.
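As a minimal sketch of such matching, assuming each recording file is a time-sorted array of entries carrying a millisecond time stamp `t` (an illustrative format, not one prescribed by the document), entries from different files can be paired by nearest time stamp:

```javascript
// Illustrative sketch: pair entries from two recording files by time stamp.

// Find the entry in `entries` whose time stamp is closest to `t`.
function nearestEntry(entries, t) {
  let best = entries[0];
  for (const e of entries) {
    if (Math.abs(e.t - t) < Math.abs(best.t - t)) best = e;
  }
  return best;
}

// Match every GPS sample to the camera frame closest in time.
function synchronize(gpsSamples, frames) {
  return gpsSamples.map(g => ({ gps: g, frame: nearestEntry(frames, g.t) }));
}

const gps = [{ t: 0, pos: "A" }, { t: 1000, pos: "B" }];
const frames = [{ t: 10, id: 1 }, { t: 510, id: 2 }, { t: 990, id: 3 }];
console.log(synchronize(gps, frames).map(m => m.frame.id)); // [ 1, 3 ]
```

The same nearest-time-stamp pairing applies to any further sensor file, e.g. the compass recording.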
Figure 9 presents a design for managing and synchronizing the sensors programmatically, if that is possible given the available sensors, based on three modules: a Controller module 60, a Business module 61 and a Resource module 62.
The Controller module 60 is responsible for controlling and monitoring the execution of the recording and also for error handling and system recovery.
The Work Manager 61 may be responsible for loading and initializing all other components. It also may provide an interface to the person doing the recording. It can start and stop a recording, monitor and display progress and also perform system recovery if critical errors occur.
The Business module 61 may be responsible for performing and synchronizing the actual work that has to be performed, and preferably includes a data manager 610 and a recording manager 611.
The Data Manager 610 may be responsible for providing a standardized interface to allow access to the data to be converted in a format that supports the recording process. In fig. 9 there are presented two potential sources: movie (Movie Manager) 612 and position (Position Manager, gps and compass) 613, but the design allows for more sources to be added easily.
The Resource module 62 may be responsible for being an interface to the external components, such as camera 621, gps reader 622, compass reader, file reader 623, database reader, etc. It preferably contains two main resource types: InData Resource 625 and OutData Resource 626.
The InData Resource 625 may be responsible for communicating with external equipment, such as the camera (Camera Resource), gps (GPS Resource) and compass (Compass Resource, not included in the picture), and provides an interface for the data to become accessible to the Data Manager 610.
The OutData Resource 626 may be responsible for providing an interface to the recording storage, which may be a file 623 (File Resource) or a local database 624 (DB Resource). It may handle the storage once the necessary synchronization, conversion and processing has taken place.
The exemplary design presented in fig. 9 may be used for a computer program implemented method package for converting recordings to a local repository. As stated above, this may be done to reduce processing and conversion requirements on the main system, thereby making the integration faster and less resource consuming in relation to the main system.
In order to save performance capacity on the central servers 110, the user device 120 may have the same, or similar, structure as the central database 110. Thanks to such a solution it may be possible to preprocess the recording, transform it to the correct formats and perform error checks before uploading it to the server 110, making the uploading more of a task of merging the data rather than converting the data. Emphasis can then be put on identifying junctions rather than performing all steps at once and only then being able to detect whether the recording has worked.
A challenge is how to convert a recording into rapidly processable Photo Paths E3 that incorporate all sensors used, whichever they are, and where the end user may alter the travelling speed dynamically during display. This may be achieved in many different ways. Below, three such possible options are identified, i.e. clock driven, video driven or GPS driven.
In a clock driven embodiment a central clock is used to govern the conversion process. A clock starts at 0 and is gradually stepped forward at millisecond resolution, e.g. 40ms/step. For each step the system will find the matching positions in the recordings from the sensors and store that data.
An advantage of this embodiment is that the different sensors are easily synchronized in a standardized and extensible way.
A drawback may be that the conversion quality is dependent on the travelling conditions. Slower speed increases the stored detail, but also the amount of data that has to be processed, so that the travel conditions strongly affect the quality of the Photo Paths.
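A minimal sketch of the clock-driven conversion, under the assumption that each sensor recording is an array of `{ t, value }` samples with millisecond time stamps (names and structures are illustrative, not taken from the document):

```javascript
// Illustrative sketch: a virtual clock stepped at 40 ms governs conversion.

function clockDrivenConvert(sensors, durationMs, stepMs = 40) {
  const photoPath = [];
  for (let clock = 0; clock <= durationMs; clock += stepMs) {
    const entry = { t: clock };
    // For each sensor, pick the most recent sample at or before `clock`.
    for (const [name, samples] of Object.entries(sensors)) {
      const sample = [...samples].reverse().find(s => s.t <= clock);
      entry[name] = sample ? sample.value : null;
    }
    photoPath.push(entry);
  }
  return photoPath;
}

const sensors = {
  gps: [{ t: 0, value: [59.33, 18.06] }, { t: 80, value: [59.34, 18.06] }],
  compass: [{ t: 0, value: 90 }],
};
const path = clockDrivenConvert(sensors, 120);
console.log(path.length); // 4 entries, for clock = 0, 40, 80 and 120 ms
```

The sketch makes the stated drawback visible: the number of entries is fixed by the clock, so the spatial density of the stored data depends entirely on how fast the recording vehicle moved.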
In a video driven embodiment the recording from the camera is used to control the conversion process. The first priority of the conversion process is to create high quality video. The frames used to create that video are matched to recordings of gps positions and other sensors.
An advantage is that the resulting Photo Path has good quality and is optimized for performance.
A drawback may be that the process of extracting frames from the camera recording may be unreliable and may require a lot of manual work in order to identify the frames at a rate of 24/second while simultaneously taking into account different travel speeds, traffic lights and other travelling conditions.
In a GPS driven embodiment the Photo Path is created based on the recording of gps positions. At regular intervals (at meter resolution or finer) based on the recorded gps positions, the time stamp may be read, and that time stamp is used to extract a frame from the camera recording and all other sensors.
The advantage of this is that the conversion process from the recording to the Photo Path can be automatized, creating Photos with reasonable quality.
A drawback may be that the gps positioning system is unreliable at the required resolution, so that two points a meter apart in the real world may end up in the wrong order and ten meters apart according to the gps file.
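The GPS-driven variant can be sketched as follows; the flat-earth distance approximation, the one-meter interval and all names are illustrative assumptions rather than details from the document:

```javascript
// Illustrative sketch: emit a Photo every `intervalM` meters of GPS track,
// using the time stamp of that position to pick the camera frame.

function distanceM(a, b) {
  // Flat-earth approximation, adequate at meter scale.
  const dLat = (b.lat - a.lat) * 111320;
  const dLon = (b.lon - a.lon) * 111320 * Math.cos((a.lat * Math.PI) / 180);
  return Math.hypot(dLat, dLon);
}

function gpsDrivenConvert(gpsTrack, frames, intervalM = 1) {
  const photos = [];
  let last = null;
  for (const p of gpsTrack) {
    if (last === null || distanceM(last, p) >= intervalM) {
      // Extract the camera frame closest in time to this position.
      const frame = frames.reduce((a, b) =>
        Math.abs(b.t - p.t) < Math.abs(a.t - p.t) ? b : a);
      photos.push({ t: p.t, lat: p.lat, lon: p.lon, frame: frame.id });
      last = p;
    }
  }
  return photos;
}

const track = [
  { t: 0, lat: 59.330000, lon: 18.060000 },
  { t: 500, lat: 59.330018, lon: 18.060000 }, // roughly 2 m further north
];
const frames = [{ t: 0, id: 1 }, { t: 480, id: 2 }];
console.log(gpsDrivenConvert(track, frames).map(p => p.frame)); // [ 1, 2 ]
```

The stated drawback also shows here: if the gps file reports positions out of order or with meter-level jitter, the emitted Photos inherit that error.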
The Content Editor Subsystem is an interface that preferably includes functions for transmitting the recordings to the Content Editor so that they can be modified and added to the Content Management System CMS and made available to the Visitor Management System VMS.
The responsibility of the Recording Management System RMS may be to serve the Content Editor actor by providing the use case functionality and GUI related to that actor.
These subsystems are preferably web based, e.g. using PHP, HTML5 and/or JavaScript to implement the subsystems. CodeIgniter or a similar framework may be used as a basis for the system design, since it contains a predefined, best practice design using the right languages and has good support for database access, interfacing with other systems and graphics display.
In order to have really high quality recordings that can be converted into high quality Photo Paths and Photos, the sensors shall preferably be synchronized regarding measurement rates, so that recordings can be started and stopped relatively simultaneously for all sensors and the samples can be matched afterwards (e.g. through time stamps), so that, for example, a specific Photo from the 360-camera can be pinpointed to a specific GPS coordinate and a specific compass direction. Thanks to such synchronization it will be known where a photo was taken and in which direction it was taken, which entails that a movie of good quality can be reconstructed in a way usable for the invention.
The sensors shall preferably be constructed in such a way that data can be extracted from them either by continuously polling from an external synchronizer or by, afterwards, extracting a sequence of measurements corresponding to the start and stop of the recording.
There shall preferably be an automatized way of deciding when a junction should be created, i.e. when two different Photo Paths meet and split into different directions, and of separating this from when two recordings take place on the same road but a few meters apart (for example recording the same place twice, but moving in a different lane).
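One conceivable automatized test, sketched here purely as an illustration: treat two Photo Path end points as forming a Junction only if they coincide within a small threshold, so that parallel recordings a few meters apart stay separate. The threshold value and the coordinate math are assumptions, not figures from the document:

```javascript
// Illustrative sketch: end points form a Junction only within `joinM` meters;
// nearby parallel recordings (e.g. another lane) remain separate paths.

function isJunction(endA, endB, joinM = 1.5) {
  // Flat-earth distance approximation, adequate at meter scale.
  const dLat = (endB.lat - endA.lat) * 111320;
  const dLon = (endB.lon - endA.lon) * 111320 *
    Math.cos((endA.lat * Math.PI) / 180);
  return Math.hypot(dLat, dLon) <= joinM;
}

const endOfPath1 = { lat: 59.33000, lon: 18.06000 };
const startOfPath2 = { lat: 59.33001, lon: 18.06000 }; // about 1.1 m away
const otherLane = { lat: 59.33004, lon: 18.06000 };    // about 4.5 m away

console.log(isJunction(endOfPath1, startOfPath2)); // true
console.log(isJunction(endOfPath1, otherLane));    // false
```

In practice the threshold would have to be tuned against the gps jitter discussed above, since the same jitter that reorders track points can also push a genuine junction outside the threshold.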
Once a recording has been converted and incorporated into the database 110 it shall preferably be possible to smoothly play this video footage on a client screen.
Given current development in web platforms (for instance HTML5), it is considered relatively easy to achieve the latter by using playback on PCs or smartphones. HTML5 has several hardware accelerated routines that lend themselves well to the invention. What may be used is HTML5 together with JavaScript to operate the controls and video playback at a rate that is good enough on computers with normal capacity (where normal in this case refers to what is available to a majority of the intended end users). Initial prototyping of HTML5 and JavaScript performance suggests that this is well within reach.
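As an illustration of speed-controlled playback on such a platform, the mapping from a chosen travel speed to the Photo shown at a given wall-clock time can be sketched as a pure function; all names and the meters-per-Photo spacing are assumptions, not part of the described system:

```javascript
// Illustrative sketch: which Photo to display at wall-clock time `tSeconds`
// for a user travelling at `speed` m/s along Photos spaced `metersPerPhoto`
// apart; the index is clamped at the end of the Photo Path.

function frameForTime(photos, metersPerPhoto, speed, tSeconds) {
  const travelled = speed * tSeconds;                 // meters covered so far
  const index = Math.floor(travelled / metersPerPhoto);
  return photos[Math.min(index, photos.length - 1)];
}

const photos = ["p0", "p1", "p2", "p3"];
// Photos one meter apart, user travelling at 2 m/s:
console.log(frameForTime(photos, 1, 2, 0));   // p0
console.log(frameForTime(photos, 1, 2, 1));   // p2
console.log(frameForTime(photos, 1, 2, 10));  // p3 (clamped at path end)
```

In a browser, a function of this kind would be driven from a timer or animation callback, with `speed` changed on the fly to realize the Change Travel Speed use case.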
The invention is not limited to the embodiments described above, but may be varied within the scope of the appended claims. For instance, the skilled person realizes that it may also encompass a computer-readable medium on which is stored non-transitory information adapted to control a processor/processing unit to perform any of the steps or functions of the invention described herein, and further a computer program product comprising code portions configured to control a processor to perform any of the steps or functions of the invention described herein. Finally, it is evident for the skilled person that many of the aspects of the invention may be executed as stand-alone aspects, i.e. not limited to a combination as described above in the context of a preferred embodiment, e.g. that the basic technology of the invention may also be used for making "non-flow" turns (e.g. an abrupt 90 degree turn), etc., and that such an aspect may be made the subject of protection of its own, e.g. in the form of divisional application(s).
According to an embodiment of the invention, there is provided a computer-readable medium on which is stored non-transitory information adapted to control a processor to perform any of the steps or functions of the method embodiments described herein.
According to an embodiment of the invention, there is provided a computer program 10 product comprising code portions adapted to control a processor to perform any of the steps or functions of the method embodiments described herein.

Claims (10)

CLAIMS
1. A method for a client device comprised in a virtual navigation system, said method comprising:
- sending a video stream request (140) from a client device (120) to a video server (110) configured to provide pre-recorded video streams (150, 160), wherein said video stream request (140) comprises spatial data, wherein said spatial data comprises a simulated location (2) of a user, simulated user location direction and simulated user speed information;
- receiving a first pre-recorded video stream (150) from said video server (110), wherein said first received pre-recorded video stream (150) comprises angular and scaling information;
- generating a simulated video stream (8) based on said first received pre-recorded video stream (150) by means of a video stream generator (40) in said user device (4) and displaying said simulated video stream (8) on a display (130) connected to said client device (120),
wherein said first received pre-recorded video stream (150) is dynamically processed based on said spatial data to display an adapted real time view, wherein each one of said pre-recorded video streams (150, 160) defines a photo path (E3) with a start location (1A) and an end location (1B) within a geographic area (E1), characterized in that a geographic area (E1) contains at least one Area Slice (E2), having a defined position and extension, the extension being defined in at least two directions.
2. A method according to claim 1, wherein said spatial data together with said angular and scaling information is processed by said video stream generator (40) to adapt said simulated video from a chosen virtual position (2) of a user, enabling a view offset (5) from the actual recorded position of the video stream (150).
3.
A method according to claim 1 or 2, wherein, further a second pre-recorded video stream (160) from said video server (110), is received by said client device and that said video stream generator (40) is arranged to dynamically process both of said video streams (150, 160) to adapt said simulated video to display an adapted real time view based on both video streams (150, 160). 4. A method according to claim 1, 2 or 3, whereby said user device (4) receivs a direction selection data set upon detection that the user device (4) approaches an 28 end (1B) of a pre-recorded video stream. (150,160), wherein preferably said direction selection data set is displayed on said display (130) to display a direction selection data item (134A, 134B) fascilitating input of a selection data item (134A, 134B) to the system. 5. A method according to any preceeding claim, a Photo Path (E3) stems from one recording, and more preferred each recording generates a plurality of Photo Paths (E3). 6. A method according to claim 1 or 5, wherein a geographic area (El) contains a plurality of Area Slices (E2), wherein the extension preferably being defined in three directions, i.e. in width, height and depth, within said geographic area (El), wherein preferably there are at least two types of Area Slices (E2), non-assigned Area Slices (E20) and assigned Area slices (E21), wherein an assigned Area slice (E21) is an Area Slice in connection with which an external user may connect information and/or a link, being automatically displayed onto said display (130). 7. A method according to any preceeding claim, wherein said display (130) comprises a plurality of display areas (132-135), including a first larger area 134. presenting a view from the simulated position (2) of the user of the path (150, 160) that the user is traveling, wherein preferably a further display area 135. 
displays information, preferably including textual information, connected to the virtual position (2) of the user, wherein preferably the display (130) incased a further display area (132) displaying a navigation map, preferably identifying the position of the user within that area and/or a further display area (133) displaOng user instructions. 8. A method according to any preceeding claim, wherein there are a pluralty of different distinct actors, having different roles connected to the system, including at least a user, an administrator and a a content editor, wherein the content editor has a role to manage data stored in a data basels (102), by adding and managing recordings and. their transformation to the server (110), and wherein preferably the user is given the highest priority for use of the system, from a capacity perspective, Wherein preferably the inventive system is partitioned having at least one subsystem being responsible to handle one specific actor, and preferably whwrein information that is common for the 29 different actors is gathered in at least one separate subsystems that can never be accessed by the actors themselves but are guarded by the actor-specific subsystems by means of clear-cut interfaces. 9. A method according to any preceeding claim, including a Content Management System (MS) having three different types of components, i.e. the Content Worker (CW), a Request Broker (RB) and a Data Synchronizer (DS), wherein preferably The Request Broker (RB) is responsible for load balancing, by means handling a first request from a newly arrived Actor to provide said Actor with an address to a server (110) and all further contact with said Actor is managed directly by said server (110). 10. A virtual navigation system, comprising;
1. a user device (4) including a client device (120) and a video server (110) having video streams (150, 160) stored thereon;
2. a video stream request module (112) comprised in said client device (120) arranged to send a video stream request (140) from said client device (120) to said video server (110) being configured to provide pre-recorded video streams (150, 160) wherein said video stream request (140) comprises spatial data, wherein said spatial data comprises a simulated location (2) of a user, simulated user location direction and simulated user speed information;
3. A video stream receiver (114) configured to receive a first pre-recorded video stream (150) from said video server (110), wherein said first received pre- recorded video stream (150) comprises angular and scaling information; - A video stream generator (40) configured to generate a simulated video stream (8) based on said first received pre-recorded video stream (150) and arranged to display said simulated video stream (8) on a display (130) connected to said client device (120) by means of said video stream generator (40) in said user device (4), wherein said video stream generator(40) includes a video stream processor module (115) configured to dynamically process said first received pre-recorded video stream (150) based on said spatial data to display an adapted real time view, and the processor further being configured to perform the steps or functions of claims 1, and wherein preferably said video stream generator(40) configured to include process also a second pre-recorded video stream (160) to dynamically process both of said video streams (150, 160) to adapt said simulated video to display an adapted real time view based on both video streams (150, 160), and more preferred wherein the processor further being configured to perform any of the steps or functions of claims 2-13, or a computer system having a processor being configured to perform any of the steps or functions of claims 1-13. 1/6 ---102 160
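The load-balancing behaviour attributed to the Request Broker (RB) in claim 9 — answering only the first request from a newly arrived Actor with a server address, after which all further contact goes directly to that server — can be sketched as follows. The round-robin assignment policy and the address strings are illustrative assumptions; the claim does not prescribe a particular balancing strategy:

```python
from itertools import cycle

class RequestBroker:
    """Sketch of the Request Broker (RB): hands each newly arrived
    Actor a server address on its first request; subsequent traffic
    bypasses the broker and goes directly to the assigned server."""

    def __init__(self, server_addresses):
        self._servers = cycle(server_addresses)  # round-robin policy (assumed)
        self._assigned = {}                      # actor id -> server address

    def first_request(self, actor_id):
        # Only the first request from a new Actor triggers an assignment;
        # repeated calls simply return the already assigned server.
        if actor_id not in self._assigned:
            self._assigned[actor_id] = next(self._servers)
        return self._assigned[actor_id]
```

With two servers, successive new Actors are spread across them, and a returning Actor keeps its original server, matching the claim's "all further contact with said Actor is managed directly by said server".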

Priority Applications (1)

Application Number Priority Date Filing Date Title
SE1451095A SE538303C2 (en) 2013-09-19 2014-09-18 A system and method for a client device comprised in a virtual navigation system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE1351080 2013-09-19
SE1451095A SE538303C2 (en) 2013-09-19 2014-09-18 A system and method for a client device comprised in a virtual navigation system

Publications (2)

Publication Number Publication Date
SE1451095A1 true SE1451095A1 (en) 2015-03-20
SE538303C2 SE538303C2 (en) 2016-05-03

Family

ID=52876110

Family Applications (1)

Application Number Title Priority Date Filing Date
SE1451095A SE538303C2 (en) 2013-09-19 2014-09-18 A system and method for a client device comprised in a virtual navigation system

Country Status (1)

Country Link
SE (1) SE538303C2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170052035A1 (en) * 2015-08-21 2017-02-23 Nokia Technologies Oy Location based service tools for video illustration, selection, and synchronization
US11709070B2 (en) * 2015-08-21 2023-07-25 Nokia Technologies Oy Location based service tools for video illustration, selection, and synchronization

