CN108450035B - Navigating through a multi-dimensional image space - Google Patents


Info

Publication number
CN108450035B
CN108450035B
Authority
CN
China
Prior art keywords
image
sweep
display
overlay
dimensional space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201680053204.9A
Other languages
Chinese (zh)
Other versions
CN108450035A (en)
Inventor
Scott Edward Dillard
Humberto Castaneda
Su Chuin Leong
Michael Cameron Jones
Christopher Gray
Evan Hardesty Parker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Publication of CN108450035A publication Critical patent/CN108450035A/en
Application granted granted Critical
Publication of CN108450035B publication Critical patent/CN108450035B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G06F3/04886 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Navigation (AREA)

Abstract

Aspects of the present disclosure generally relate to providing a user with an image navigation experience. For example, a first image (332) of a multi-dimensional space is provided with an overlay line (502) indicating a direction in which the space extends into the first image, such that a second image is connected to the first image along the direction of the overlay line. User input is received indicating a sweep across a portion of the display. When the sweep occurs at least partially within an interaction zone (602) that defines an area surrounding the overlay line within which the user can interact with the space, the sweep indicates a request to display an image that is different from the first image. The second image is selected and provided for display based on the sweep and a connection map connecting the first image and the second image along the direction of the overlay line.

Description

Navigating through a multi-dimensional image space
Cross Reference to Related Applications
This application is a continuation of U.S. patent application No. 14/972,843, filed December 17, 2015, the disclosure of which is incorporated herein by reference.
Background
Various systems allow a user to view images sequentially, for example, in time or space. In some examples, these systems may provide a navigation experience of a remote location or a location of interest. Some systems allow users to feel as if they are rotating within the virtual world by clicking on an edge of a displayed portion of a panorama and making the panorama appear to "move" in the direction of the clicked edge.
Disclosure of Invention
One aspect of the present disclosure provides a computer-implemented method of navigating a multidimensional space. The method comprises the following steps: providing, by one or more processors, a first image of a multidimensional space for display on a display of a client computing device and an overlay line extending across a portion of the first image and indicating a direction in which the multidimensional space extends into the first image such that a second image is connected to the first image along the direction of the overlay line; receiving, by the one or more processors, user input indicating a sweep across a portion of the display, the sweep defined by a start pixel and an end pixel of the display; determining, by the one or more computing devices, based on the start pixel and the end pixel, that the sweep has occurred at least partially within an interaction region of the first image, the interaction region defining an area around the overlay line within which the user can interact with the multi-dimensional space; determining, by the one or more processors, that the sweep indicates a request to display an image different from the first image when the sweep occurs at least partially within the interaction zone; selecting, by the one or more computing devices, the second image based on a starting point of the sweep, an ending point of the sweep, and a connection map connecting the first image and the second image along the direction of the overlay line when the sweep indicates a request to display the image different from the first image; and providing, by the one or more computing devices, the second image for display on the display to provide a sensation of movement in the multi-dimensional space.
In one example, the method also includes: providing a transition image for display between the first image and the second image, the transition image being provided as a thumbnail image having less detail than the first image and the second image. In another example, the method also includes: providing instructions to fade out the overlay line after a threshold period of time has elapsed without any user action on the overlay line. In this example, after fading out the overlay line, the method comprises: receiving a second user input on the display; and providing instructions to redisplay the overlay line in response to the second user input. In another example, the method also includes: determining a direction and a magnitude of the sweep based on the start pixel of the sweep and the end pixel of the sweep, and selecting the second image is further based on the direction and magnitude.
In another example, the method also includes: providing the second image with a second overlay line extending across a portion of the second image and indicating a direction in which the multi-dimensional space extends into the second image, such that a third image is connected to the second image in the connection map along the direction of the second overlay line. In this example, the method includes: receiving a second user input indicating a second sweep; determining that the second sweep is within a threshold angle of perpendicular to the direction in which the multi-dimensional space extends into the second image; and translating across the multi-dimensional space of the second image when the second sweep is within the threshold angle of perpendicular to the direction in which the multi-dimensional space extends into the second image. Alternatively, the method also includes: receiving a second user input indicating a second sweep; determining that the second sweep is within a threshold angle of perpendicular to the direction in which the multi-dimensional space extends into the second image; and changing an orientation within the second image when the second sweep is within the threshold angle of perpendicular to the direction in which the multi-dimensional space extends into the second image. In another alternative, the method also includes: receiving a second user input indicating a second sweep; determining that the second sweep is within a threshold angle of perpendicular to the direction in which the multi-dimensional space extends into the second image; and, when the second sweep is within the threshold angle of perpendicular to the direction in which the multi-dimensional space extends into the second image, switching from the second image to a third image located on a second connection map adjacent to the connection map, the second and third images having no direct connection in the connection map. In this example, the method also includes: providing a third overlay line for display with the second image, the third overlay line representing a second navigation path proximate to a current view of the second image, the third overlay line being provided such that the third overlay line and the second overlay line cross each other when displayed with the second image. Further, the method comprises: receiving a second user input along the third overlay line, the second user input indicating a request to transition from an image along the second overlay line to an image along the third overlay line; and providing, in response to the second user input, a third image for display, the third image being arranged in the connection map along the third overlay line. In addition, the method comprises: selecting a set of images for serial display as a transition between the second image and the third image based on connections between images in the connection map; and providing the set of images for display on the display. Further, prior to providing the set of images, the method also includes: filtering the set of images to remove at least one image based on a connection between two images of the set of images in a second connection map different from the first connection map, such that a filtered set of images is provided for display as the transition between the second image and the third image.
Another aspect of the present disclosure provides a system. The system includes one or more computing devices, each having one or more processors. The one or more computing devices are configured to: providing a first image of a multi-dimensional space for display on a display of a client computing device and an overlay line extending across a portion of the first image and indicating a direction in which the multi-dimensional space extends into the first image such that a second image is connected to the first image along the direction of the overlay line; receiving user input indicating a sweep across a portion of the display, the sweep defined by a start pixel and an end pixel of the display; determining, based on the start pixel and the end pixel, that the sweep has occurred at least partially within an interaction region of the first image, the interaction region defining an area around the overlay line where the user can interact with the multi-dimensional space; determining that the sweep indicates a request to display an image different from the first image when the sweep occurs at least partially within the interaction zone; when the sweep indicates a request to display the image different from the first image, selecting the second image based on a start point of the sweep, an end point of the sweep, and a connection map connecting the first image and the second image along the direction of the overlay line; and providing the second image for display on the display so as to provide a sensation of movement in the multi-dimensional space.
In one example, the one or more computing devices are further configured to provide a transition image for display between the first image and the second image, the transition image being provided as a thumbnail image having less detail than the first image and the second image. In another example, the one or more computing devices are further configured to provide instructions to fade out the overlay line after a threshold period of time has elapsed without any user action on the overlay line. In this example, the one or more computing devices are further configured to, after fading out the overlay line, receive a second user input on the display and provide instructions to redisplay the overlay line in response to the second user input. In another example, the one or more computing devices are further configured to determine a direction and magnitude of the sweep based on the start pixel of the sweep and the end pixel of the sweep and select the second image further based on the direction and magnitude. In another example, the one or more computing devices are further configured to provide the second image with a second overlay line that extends across a portion of the second image and indicates a direction in which the multi-dimensional space extends into the second image, such that a third image is connected to the second image in the connection map along the direction of the second overlay line.
Another aspect of the disclosure provides a non-transitory computer readable storage device storing computer readable instructions of a program. The instructions, when executed by one or more processors, cause the one or more processors to perform a method. The method comprises the following steps: providing a first image of a multi-dimensional space for display on a display of a client computing device and an overlay line extending across a portion of the first image and indicating a direction in which the multi-dimensional space extends into the first image such that a second image is connected to the first image along the direction of the overlay line; receiving user input indicating a sweep across a portion of the display, the sweep defined by a start pixel and an end pixel of the display; determining, based on the start pixel and the end pixel, that the sweep has occurred at least partially within an interaction region of the first image, the interaction region defining an area around the overlay line where the user can interact with the multi-dimensional space; determining that the sweep indicates a request to display an image different from the first image when the sweep occurs at least partially within the interaction zone; selecting the second image based on a start point of the sweep, an end point of the sweep, and a connection map connecting the first image and the second image along the direction of the overlay line when the sweep indicates a request to display the image different from the first image; and providing the second image for display on the display so as to provide a sensation of movement in the multi-dimensional space.
Drawings
Fig. 1 is a functional diagram of an example system in accordance with aspects of the present disclosure.
Fig. 2 is a schematic diagram of the example system of fig. 1.
FIG. 3 is an example representation of images and data in accordance with aspects of the present disclosure.
Fig. 4 is a representation of example image maps in accordance with aspects of the present disclosure.
Fig. 5 is an example client computing device and screen shot in accordance with aspects of the present disclosure.
FIG. 6 is a representation of an example client computing device, screen shot, and data in accordance with aspects of the present disclosure.
FIG. 7 is a representation of an example client computing device and data in accordance with aspects of the present disclosure.
Fig. 8A and 8B are examples of user input in accordance with aspects of the present disclosure.
FIG. 9 is a representation of example data in accordance with aspects of the present disclosure.
FIG. 10 is another example client computing device and screen shot in accordance with aspects of the present disclosure.
FIG. 11 is another example client computing device and screen shot in accordance with aspects of the present disclosure.
FIG. 12 is yet another example client computing device and screen shot in accordance with aspects of the present disclosure.
FIG. 13 is another representation of example data in accordance with aspects of the present disclosure.
FIG. 14 is another representation of example data in accordance with aspects of the present disclosure.
Fig. 15A is another example client computing device and screen shot in accordance with aspects of the present disclosure.
Fig. 15B is an example representation of an image map in accordance with aspects of the present disclosure.
Fig. 16 is a flow diagram in accordance with aspects of the present disclosure.
Detailed Description
Overview
The present technology relates to an interface for enabling a user to navigate within a multi-dimensional environment from a first-person or third-person view. In some examples, the environment may include a three-dimensional model rendered by mapping images to the model, or a series of geo-located images (e.g., images initially associated with orientation and location information) connected using information identifying the two-dimensional or three-dimensional relationships of the images to each other.
To provide "realistic" motion in a multi-dimensional space, the interface may allow continuous motion, intuitive turns, looking around the scene, and moving forward and backward. For example, reference lines may be displayed to indicate to the user the direction in which the user may "traverse" the multi-dimensional space using touch and/or motion controls. By comparing sweeps in different directions relative to the line, the interface can readily identify when and in which direction the user is trying to move, as opposed to when the user is simply trying to change orientation and look around.
In order to provide the interface, multiple geo-located images must be available. In addition to being associated with geographic location information, images may also be connected to each other in one or more image graphs. The graphs may be generated using various techniques, including manual and automatic linking based on the location of and distance between images, the manner in which the images were captured (such as images captured by a camera as it is moved forward), and other methods of identifying the best image of a set of images to connect to any given point or orientation in the imagery.
One or more server computing devices may access the one or more graphs to provide images for display to a user. For example, a client computing device of a user may send a request for an image of an identified location. The one or more server computing devices may access the one or more image maps to identify an image corresponding to the location. This image may then be provided to the user's client computing device.
In addition to providing images, the one or more server computing devices may also provide instructions to the client computing device for displaying the navigation overlay. This overlay may be represented as a line indicating to the user the direction in which the user may move in the multi-dimensional space represented by the image. The line itself may actually correspond to a connection between the image currently being viewed by the user and the other images in the one or more image maps. As an example, this line may correspond to a road along which the camera is moved in order to capture the image identified in the image map.
The user may then navigate through the multidimensional space using the line. For example, the lines may be used to suggest interaction regions to the user for moving from the image to a different image in one or more image maps. If the user sweeps within the interaction region of the line and generally parallel to the line or within some small angular difference from the line, the user's client computing device may "move" around in the multi-dimensional space by transitioning to a new image according to the characteristics of the sweep and the image map. In other examples, the sweep may be identified as a request to rotate the view within the current image or to translate in the current image.
Characteristics of the sweep including direction, magnitude and speed can be used to define how the view will change. The magnitude of the sweep or the pixel length can be used to determine how far to move the view forward or backward. Furthermore, the speed of sweeping (number of pixels per second) and even the acceleration of sweeping can be used to determine how fast the view appears to traverse the multi-dimensional space.
The direction may be determined by back projecting (converting from two to three dimensions) the current and previous screen coordinates onto the y = z plane in terms of Normalized Device Coordinates (NDC) and then projecting down onto the ground plane again. This allows vertical display movement to be mapped to forward movement in a multi-dimensional space, while horizontal display movement is mapped to lateral movement in a predictable manner independent of scene geometry or horizon.
As an example, a line may appear when a user taps a view represented on a display of a client computing device. The user may sweep along portions of the display. Using the initial position or pixels of the sweep and other positions along the sweep, the speed of the sweep can be determined. Once the sweep is complete or the user's finger is off the display, a transition animation (such as zooming and fading into a new image) is displayed to transition to the new image.
Because the view will typically traverse multiple images in order to reach the image identified based on the characteristics of the sweep, the full resolution of these multiple images may not actually be displayed. Instead, a lower resolution version of each image (such as a thumbnail image) may be displayed as part of the transition between the current image and the identified image. When the identified image itself is displayed, it may be displayed at full resolution. This may save time and processing power.
Characteristics of the lines such as opacity, width, color, and location may be varied to allow a user to more easily understand the navigation of the multi-dimensional space. Additionally, to further reduce the interference of the line with the user's exploration of the multidimensional space, the line may not be shown when the user is not interacting with the interface. As an example, a line may be faded in or faded out as desired based on whether the user is interacting with the display.
In some examples, the connection map may branch, such as where there is an intersection of two roads or, more precisely, an intersection in one or more of the image maps. To keep the overlay simple and easy to understand, only lines directly connected to the current line within a short distance of the current viewpoint may be superimposed on the image. In these branch regions, such as at traffic intersections where more than one road crosses another, the user may want to change from one line to another. However, driving forward and then making a 90-degree turn in the middle of the intersection may feel unnatural. In this regard, one or more image maps may be used to cut across the corner of the intersection between two lines by displaying, as a transition between the two lines, images that are not on either line.
The features described herein allow a user to explore inside a multidimensional space while following particular predetermined motion paths, while at the same time preventing the user from becoming "trapped" or moving in an inefficient manner. Further, the system can distinguish between when the user is trying to move and when the user is simply trying to look around. Other systems require multiple types of input in order to distinguish between these types of movements. These systems may also require the user to point to a particular location in order to move to that location. This is less intuitive than allowing the user to sweep in order to move in a multi-dimensional space, and it does not allow continuous motion, as the user must tap or click an arrow each time he or she wants to move.
Further aspects, features and advantages of the present disclosure will be appreciated when considered with reference to the following description of the embodiments and the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Furthermore, the following description is not limiting; the scope of the present technology is defined by the appended claims and equivalents. Although certain processes in accordance with example embodiments are illustrated in the figures as occurring in a linear fashion, this is not a requirement unless explicitly stated herein. The different processes may be performed in a different order or simultaneously. Steps may also be added or omitted unless stated otherwise.
Example System
Fig. 1 and 2 include an example system 100 that can implement the features described above. It should not be taken as limiting the scope of the disclosure or the usefulness of the features described herein. In this example, system 100 may include computing devices 110, 120, 130, and 140 and storage system 150. Each of the computing devices 110 may contain one or more processors 112, memory 114, and other components typically found in general purpose computing devices. The memory 114 of each of the computing devices 110, 120, 130, and 140 may store information accessible by the one or more processors 112, including instructions 116 executable by the one or more processors 112.
The memory may also include data 118 that may be retrieved, manipulated or stored by the processor. The memory may be of any non-transitory type capable of storing information accessible by the processor, such as a hard drive, memory card, ROM, RAM, DVD, CD-ROM, writable and read-only memory.
The instructions 116 may be any set of instructions to be executed directly by one or more processors, such as machine code, or indirectly by one or more processors, such as scripts. In this regard, the terms "instructions," "applications," "steps," and "programs" may be used interchangeably herein. The instructions may be stored in an object code format directly processed by the processor, or in any other computing device language, including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. The function, method, and routine of the instructions are described in more detail below.
The data 118 may be retrieved, stored, or modified by the one or more processors 112 in accordance with the instructions 116. For example, although the subject matter described herein is not limited by any particular data structure, data may be stored in computer registers, in a relational database or XML document as a table with many different fields and records. The data may also be formatted in any computing device readable format such as, but not limited to, binary values, ASCII, or Unicode. Further, the data may include any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories, such as other network locations, or information used by the function to compute the relevant data.
The one or more processors 112 may be any conventional processor, such as a commercially available CPU. Alternatively, the processor may be a dedicated component, such as an application specific integrated circuit ("ASIC") or other hardware-based processor. Although not necessary, one or more of the computing devices 110 may include dedicated hardware components to perform specific computing processes, such as decoding video, matching video frames to images, distorting video, encoding distorted video, etc., faster or more efficiently.
Although fig. 1 functionally illustrates the processors, memories, and other elements of the computing device 110 as being within the same block, the processors, computers, computing devices, or memories may actually comprise multiple processors, computers, computing devices, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different housing than the housing of the computing device 110. Thus, references to a processor, computer, computing device, or memory are to be understood as including references to a processor, computer, computing device, or memory that may or may not operate in parallel. For example, the computing devices 110 may include server computing devices operating as a load balancing server farm, a distributed system, and so forth. Still further, while some of the functionality described below is indicated as occurring on a single computing device having a single processor, various aspects of the subject matter described herein may be implemented by multiple computing devices, e.g., communicating information over network 180.
Each of the computing devices 110 may be located at a different node of the network 180 and may be capable of communicating directly and indirectly with other nodes of the network 180. Although only a few computing devices are depicted in fig. 1 and 2, it should be appreciated that a typical system may include a large number of connected computing devices, with each different computing device located at a different node of the network 180. The network 180 and intermediate nodes described herein may be interconnected using various protocols and systems, such that the network may be part of the internet, world wide web, a particular intranet, a wide area network, or a local network. The network may utilize standard communication protocols (such as Ethernet, WiFi, and HTTP), protocols proprietary to one or more companies, and various combinations of the above. Although certain advantages are obtained when information is transmitted or received as noted above, other aspects of the subject matter described herein are not limited to any particular manner of transmission of information.
By way of example, each of computing devices 110 may include a web server capable of communicating with storage system 150 and computing devices 120, 130, and 140 via a network. For example, one or more of server computing devices 110 may use network 180 to communicate information to a user (such as user 220, 230, or 240) and present the information to the user on a display (such as display 122, 132, or 142 of computing devices 120, 130, and 140). In this regard, computing devices 120, 130, and 140 may be considered client computing devices and may perform all or some of the features described herein.
Each of the client computing devices 120, 130, and 140 may be configured with one or more processors, memories, and instructions as described above, similar to the server computing device 110. Each client computing device 120, 130, or 140 may be a personal computing device intended for use by a user 220, 230, 240, and have all of the components typically used in connection with a personal computing device, such as a Central Processing Unit (CPU), memory (e.g., RAM and internal hard drives) that stores data and instructions, a display such as display 122, 132, or 142 (e.g., a monitor having a screen, a touch screen, a projector, a television, or other device operable to display information), and a user input device 124 (e.g., a mouse, a keyboard, a touch screen, or a microphone). The client computing device may also include a camera for recording video streams, speakers, a network interface device, and all components for connecting these elements to each other.
Although the client computing devices 120, 130, and 140 may each comprise a full-sized personal computing device, they may alternatively comprise mobile computing devices capable of wirelessly exchanging data with a server over a network such as the internet. By way of example only, the client computing device 120 may be a mobile phone or a device such as a wireless-enabled PDA, a tablet PC, or a netbook capable of obtaining information via the internet. In another example, the client computing device 130 may be a head-mounted computing system. As examples, the user may input information using a keyboard, a keypad, a microphone, visual signals via a camera, or a touch screen.
As with the memory 114, the storage system 150 may be any type of computerized storage capable of storing information accessible by the server computing device 110, such as a hard drive, memory card, ROM, RAM, DVD, CD-ROM, writable and read-only memory. Further, storage system 150 may comprise a distributed storage system in which data is stored on multiple different storage devices that may be physically located at the same or different geographic locations. Storage system 150 may be connected to computing devices via network 180 as shown in fig. 1 and/or may be directly connected to any of computing devices 110, 120, 130, and 140 (not shown).
The storage system 150 may store images and related information such as image identifiers, orientation and position of the images, orientation and position of the cameras that captured the images, and intrinsic camera settings (such as focal length, zoom, etc.). In addition to being associated with orientation and position information, images may be connected to each other in one or more image maps. In other words, from any given image, the image maps may indicate which other images are connected to that image and in which direction.
The maps may be generated using various techniques, including manual and automatic linking based on the location of and distance between images, the manner in which the images were captured (such as images captured by a camera as it is moved forward), and other methods of identifying the best image of a set of images to connect to any given point or orientation in the imagery. For example, as shown in example 300 of fig. 3, images may be captured by maneuvering a vehicle along a road 302. In this example, the roadway includes a plurality of lanes 304, 306, 308. At one time, a vehicle traveling in lane 304 captures a series of images 310 of features in the environment surrounding the vehicle, such as road 302 and buildings 330, 332, 334, 336, 338. At another time, the same or a different vehicle traveling in lane 306 (and later switching to lane 308 between images 340 and 342) captures a series of images 320. As the one or more cameras that captured the series of images are moved forward, each image may be associated with information regarding the position of the camera at the time the image was captured, the orientation of the camera at the time the image was captured (e.g., relative to orientation indicator 360), timestamp information indicating the date and time the image was captured, and so on. The camera's change in position over time and along the series of images can be used to "link" pairs of images together via lines 350, 352. Again, the position and orientation of the camera and the timestamp information of the images may be used to perform this linking manually or automatically based on the change in position of the camera over time. The links then create the connections of the image maps, that is, information describing the relative orientation between each adjacent pair of images. In this example, the result of the linking is two image maps corresponding to the series of images 310 and 320 and the connections between those images.
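As a minimal sketch of how such linking might be represented in software, the following chains sequentially captured, geo-located images into a connection map. The names (`ImageNode`, `link_series`), the flat-earth bearing approximation, and the choice of fields are illustrative assumptions, not part of the disclosure.

```python
import math
from dataclasses import dataclass, field


@dataclass
class ImageNode:
    image_id: str
    lat: float
    lng: float
    heading_deg: float   # orientation of the camera when the image was captured
    timestamp: float     # capture time in seconds since epoch
    neighbors: dict = field(default_factory=dict)  # image_id -> bearing toward that image


def bearing_deg(a: ImageNode, b: ImageNode) -> float:
    """Approximate bearing from node a to node b (flat-earth approximation)."""
    dy = b.lat - a.lat
    dx = (b.lng - a.lng) * math.cos(math.radians(a.lat))
    return math.degrees(math.atan2(dx, dy)) % 360.0


def link_series(series: list) -> None:
    """Link consecutive captures of one drive into a connection map by storing
    the relative direction of each adjacent pair (cf. lines 350 and 352)."""
    ordered = sorted(series, key=lambda n: n.timestamp)
    for prev, nxt in zip(ordered, ordered[1:]):
        prev.neighbors[nxt.image_id] = bearing_deg(prev, nxt)
        nxt.neighbors[prev.image_id] = bearing_deg(nxt, prev)
```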
The maps themselves may have different images and connections. Fig. 4 is an example of image maps 410, 420, and 450. In this example, image maps 410 and 420 may be generated so that they represent only images and connections along a street or grid corresponding to roads, paths, or the like, while more complex maps may pair such a grid with images and connections that are not limited to the grid. For example, image maps 410 and 420 may be generated from the image series 310 and 320 as discussed above. The image map 420 thus includes only images 402-430 relating to the road 302 or the mesh of roads in the map. Other connection maps may then identify connections from the images in a first map to images that are not in the first map. For example, image map 450 includes connections between images 402, 404, 406, and 408 of image series 320 and images 442, 444, 446, and 448 of image series 310. Further, image map 450 includes connections to images 452 and 454, which may be images from within building 330 that are not included in either of image series 310 or 320. Again, these connections may be drawn manually or automatically as discussed above.
Example method
As previously discussed, the following operations need not be performed in the exact order described below. Rather, as mentioned above, various operations may be processed in a different order or simultaneously, and operations may be added or omitted.
As an example, a client computing device may provide a user with an image navigation experience. The client computing device may do so by communicating with one or more server computing devices to retrieve and display images. One or more server computing devices may access the one or more graphs to provide images for display to a user. For example, a client computing device of a user may send a request for an image of an identified location. The one or more server computing devices may access the one or more image maps to identify an image corresponding to the location. This image may then be provided to the user's client computing device.
In addition to providing images, the one or more server computing devices may provide instructions to the client computing device for displaying a navigation overlay. This overlay may be represented as a line indicating to the user the direction in which the user may move in the multi-dimensional space. For example, fig. 5 is an example view 500 of an image 444 of image map 310 displayed on display 122 of computing device 120. An overlay line 502 is provided here, indicating the direction in which the user can move into the image 444. The line itself may actually correspond to the connections between the image 444 currently being viewed by the user and the other images in the image map 310. As can be seen in this example, image 444 includes portions of buildings 332, 334, 336, and 338.
The user may then navigate through the multidimensional space using the line. For example, the lines may be used to suggest interaction regions to the user for moving from the image to a different image in one or more image maps. As shown in example 600 of fig. 6, interaction zone 602 extends beyond the outer edge of line 502. Although the interactive area 602 is shown on the display 122 in fig. 6, this interactive area need not actually be displayed to the user and may be "hidden" in order to reduce clutter on the display. Because the interaction zone 602 extends beyond the outer edge of the line, this may allow some room for error when the user attempts to sweep his or her finger (or stylus). For example, referring to example 700 of fig. 7, region 702 corresponds to display 122. In this example, region 704 refers to an area of display 122 that is outside of interaction zone 602. Again, these areas may not necessarily be displayed to the user, and fig. 7 is used for purposes of illustrating the examples more clearly below.
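A sketch of how the hit test for such an interaction zone might look, assuming the overlay line is approximated by a single screen-space segment and the zone by a fixed pixel margin (the 40-pixel margin and the function names are illustrative only):

```python
import math


def dist_point_to_segment(p, a, b):
    """Distance in pixels from point p to segment a-b; all arguments are (x, y) tuples."""
    ax, ay = a
    bx, by = b
    px, py = p
    abx, aby = bx - ax, by - ay
    ab_len_sq = abx * abx + aby * aby
    if ab_len_sq == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab_len_sq))
    return math.hypot(px - (ax + t * abx), py - (ay + t * aby))


def sweep_in_interaction_zone(sweep_points, line_start, line_end, margin_px=40):
    """True if the sweep passes at least partially within the interaction zone,
    i.e. within margin_px pixels of the displayed overlay line."""
    return any(dist_point_to_segment(p, line_start, line_end) <= margin_px
               for p in sweep_points)
```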
If the user sweeps within the interaction zone of the line and generally parallel to the line, or within some small angular difference from it, the client computing device may "move" around in the multidimensional space. For example, as shown in examples 800A and 800B of fig. 8A and 8B, a user may use a finger (or stylus) to interact with the interaction zone 602 of the line by sweeping generally parallel to the line 502 in the direction of arrows 804 or 806. Because the sweep may not be exactly parallel to line 502, as shown in example 900 of fig. 9, small angular differences (θ1), such as 10 or 15 degrees, from the direction of line 502 within interaction zone 602 may be ignored or considered to be "generally" parallel.
Returning to fig. 8A, as the user sweeps in the direction of arrow 804, a new image may be displayed that appears to move the user within the multi-dimensional space. For example, in response to the user sweeping within the interaction zone 602 generally parallel to the line 502, the display may transition to another image of the image map 310. As shown in example 1000 of fig. 10, the display transitions to image 470 of image map 310. In this example, the display appears to have moved the user further down lane 304 and generally along line 502, so that building 336 is no longer visible. This again allows the user to feel that he or she is moving in three-dimensional space even though he or she is viewing a two-dimensional image. The selection of the image to be displayed may depend on the characteristics of the sweep and the one or more image maps, as discussed below.
If the user sweeps outside this interaction zone 602, or too many pixels away from the line 502, or within some specified angular difference of the line (e.g., greater than θ1 but less than 90-θ1), then rather than transitioning to a new image, the user's client computing device may change the orientation of the view of the current image, or in other words rotate the view within the current image. For example, if the user sweeps his or her finger across the display of example 600 from the left side of display 122 toward the right side of display 122 at an angle greater than θ1 but less than 90-θ1 from line 502, rather than generally moving along line 502, the display may rotate within image 444. As shown in example 1100 of fig. 11, the display rotates from the position of example 600 within image 444 and more of building 332 is displayed. This gives the user the sense that he or she has rotated or changed his or her orientation, as opposed to moving forward or backward in the multi-dimensional space. Thus, the interaction zone allows the client computing device to distinguish between requests to move around (i.e., move forward or backward) and requests to change the user's orientation.
Further, if the user sweeps, within or outside the interaction zone, generally perpendicular to the direction of line 502 or within a small angular difference θ1 of the direction perpendicular to line 502 (i.e., more than 90-θ1 from the direction parallel to line 502), this may indicate that the user wishes to pan (move sideways) in the current image. For example, if the user were to sweep his or her finger across the display of example 600 from the left side of display 122 toward the right side of display 122 at an angle greater than 90-θ1 from the direction parallel to line 502, rather than generally moving along line 502 or rotating within image 444, the display may pan within the current image. In this regard, if there is more than one line according to the one or more image maps, this movement may actually cause the view to "jump" to a different line. For example, from image 444, the display may jump to image 404 of image map 420, as shown in example 1200 of FIG. 12. Here, the user appears to have moved from a point within lane 304 to a point within lane 306 by moving between images 444 and 404. At this point, a new overlay line 1202 corresponding to the image map 420 may be provided for display with the image 404.
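Putting these cases together, a simplified classifier might look like the following sketch, where theta1_deg stands in for the angular tolerance θ1 discussed above and the function name is hypothetical:

```python
import math


def classify_sweep(start_px, end_px, line_dir_deg, in_zone, theta1_deg=15.0):
    """Classify a sweep as 'move', 'rotate', or 'pan' from its angle relative
    to the overlay line, roughly following the rules described above."""
    dx = end_px[0] - start_px[0]
    dy = end_px[1] - start_px[1]
    sweep_deg = math.degrees(math.atan2(dy, dx))
    # Smallest angle between the sweep and the line, folded into [0, 90].
    delta = (sweep_deg - line_dir_deg) % 180.0
    delta = min(delta, 180.0 - delta)
    if delta <= theta1_deg and in_zone:
        return "move"    # roughly parallel and inside the zone: transition to a new image
    if delta >= 90.0 - theta1_deg:
        return "pan"     # roughly perpendicular: translate sideways (possibly to another line)
    return "rotate"      # otherwise: change orientation within the current image
```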
Characteristics of the sweep including direction, magnitude, and speed can be used to define how the view will change. For example, while the line is being displayed, if the user sweeps along the interaction zone, the result may be movement in a direction (within the image map) toward the start point of the sweep. In this regard, dragging down as shown in FIG. 8A advances the view, and dragging up as shown in FIG. 8B moves the view backward.
The magnitude of the sweep, or its length in pixels, may be used to determine how far forward or backward the view is to be moved. For example, if the sweep does not cross a threshold minimum number of pixels, the result may be no movement. If the sweep satisfies the threshold minimum number of pixels, the movement in the multi-dimensional space may correspond to the number of pixels the sweep spans. However, because the view is drawn in perspective, the relationship between distance in pixels and distance in the multi-dimensional space can be exponential (as opposed to linear), as the plane on which the line appears tilts toward the vanishing point in the image. In this regard, the distance in pixels may be converted to a distance in the multi-dimensional space. The image along the line that, according to the one or more image maps, is closest to that distance from the original image may be identified as the image to which the view will transition.
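One possible sketch of this selection, using an illustrative non-linear pixel-to-distance mapping as a stand-in for the perspective effect (the constants and names are assumptions, not values from the disclosure):

```python
def select_target_image(sweep_px, images_ahead, min_px=20, metres_per_px=0.05, growth=1.5):
    """Map the sweep length in pixels to a distance in the multi-dimensional space
    and pick the connected image nearest that distance.

    images_ahead is a list of (image_id, distance_in_metres) pairs along the
    overlay line, ordered outward from the current image.  The power-law mapping
    stands in for the non-linear, perspective-driven relationship between pixel
    distance and world distance; all constants are illustrative.
    """
    if sweep_px < min_px or not images_ahead:
        return None  # below the minimum threshold: no movement
    target_m = metres_per_px * (sweep_px ** growth)
    return min(images_ahead, key=lambda item: abs(item[1] - target_m))[0]
```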
Furthermore, the speed of the sweep (in pixels per second), and even the acceleration of the sweep, can be used to determine how fast the view appears to traverse the multi-dimensional space. For example, the movement may initially correspond to the speed of the sweep, but this speed may slow down and stop at the image identified from the magnitude of the sweep. Where the distance determined based on the magnitude of the sweep falls between two images, the speed (or acceleration) of the sweep may be used to determine which image to identify as the image to which the view will transition. For example, if the speed (or acceleration) is relatively high, or greater than some threshold speed (or acceleration), the image farther from the original image may be selected. Meanwhile, if the speed (or acceleration) is relatively low, or below the threshold speed (or acceleration), the image closer to the original image may be selected. In yet another example, where the speed (or acceleration) meets some other threshold, the view may appear to continuously traverse the multi-dimensional space by transitioning between images along a line in the one or more image maps according to the speed of the sweep until the user taps the display. This tap may slow the movement to a stop, or stop it immediately on the current or next image. In yet another example, the speed of a sweep that is generally perpendicular to the line may be translated into slower movement through the multi-dimensional space than the same speed of a sweep that is generally parallel to the line. Still further, the speed of the sweep may be determined based on where the sweep occurs. In this regard, the speed may be determined by measuring meters per pixel at a point on the screen halfway between the bottom of the screen and the horizon, which is intuitively the "average" screen position of the sweep. Thus, if the user drags his or her finger at this point, the pixels on the ground will move at the same speed as the user's finger.
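For example, tie-breaking between two candidate images and deciding whether to start a continuous traversal might be sketched as follows; the threshold values and names are illustrative assumptions:

```python
def pick_between(nearer_id, farther_id, speed_px_per_s, speed_threshold=800.0):
    """When the computed distance lands between two connected images, a faster
    sweep favours the farther image and a slower sweep the nearer one."""
    return farther_id if speed_px_per_s >= speed_threshold else nearer_id


def traversal_mode(speed_px_per_s, fling_threshold=2000.0):
    """A sufficiently fast sweep may start a continuous traversal along the line
    that only slows or stops when the user taps the display."""
    return "continuous" if speed_px_per_s >= fling_threshold else "single_move"
```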
The direction may be determined by back projecting (converting from two to three dimensions) the current and previous screen coordinates onto the y = z plane in terms of Normalized Device Coordinates (NDC) and then projecting down onto the ground plane again. For example, example 1300 of fig. 13 is a diagram illustrating the relative direction (indicated by arrow 1302) of the plane of a screen 1304 of the computing device 120 with respect to a ground plane 1306. Between these two planes is an NDC plane 1308. Fig. 14 depicts the direction of the finger sweep, shown by arrow 1404 on the plane of the screen 1304, relative to the movement shown by arrow 1406 along the ground plane 1306. Here, the user's finger initiates a sweep at point A on the screen 1304 and ends the sweep at point B on the screen. These points are converted to corresponding points B' and A' on the NDC plane 1308. From the NDC plane, these points are projected to points C and D on the ground plane 1306. This allows vertical display movement to be mapped to forward movement in the multi-dimensional space and horizontal display movement to be mapped to lateral movement in the multi-dimensional space, in a predictable manner independent of scene geometry or horizon.
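A simplified stand-in for this computation is sketched below. It does not reproduce the y = z NDC-plane construction exactly; instead it unprojects the start and end pixels to rays and intersects them with the ground plane, which yields the same kind of screen-to-ground mapping. The matrix conventions (y-up world, OpenGL-style clip space) and all names are assumptions.

```python
import numpy as np


def screen_to_ndc(px, py, width, height):
    """Convert a screen pixel to normalized device coordinates in [-1, 1]."""
    return np.array([2.0 * px / width - 1.0, 1.0 - 2.0 * py / height])


def ndc_to_ground(ndc_xy, inv_view_proj):
    """Cast a ray through the NDC point and intersect it with the ground plane
    y = 0, returning (x, z) ground coordinates.  Assumes the pixel lies below
    the horizon so the ray actually reaches the ground."""
    near = inv_view_proj @ np.array([ndc_xy[0], ndc_xy[1], -1.0, 1.0])
    far = inv_view_proj @ np.array([ndc_xy[0], ndc_xy[1], 1.0, 1.0])
    near, far = near[:3] / near[3], far[:3] / far[3]
    direction = far - near
    t = -near[1] / direction[1]          # solve near.y + t * direction.y == 0
    hit = near + t * direction
    return np.array([hit[0], hit[2]])


def sweep_ground_motion(start_px, end_px, width, height, inv_view_proj):
    """Ground-plane displacement produced by a sweep: motion is toward the
    start point, so dragging down (toward the user) maps to moving forward."""
    a = ndc_to_ground(screen_to_ndc(*start_px, width, height), inv_view_proj)
    b = ndc_to_ground(screen_to_ndc(*end_px, width, height), inv_view_proj)
    return a - b
```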
As an example, a line may appear when a user taps the view represented on the display of the client computing device. The user may then sweep along portions of the display. Using the initial position or pixels of the sweep and other positions along the sweep, the speed of the sweep can be determined. Once the sweep has been completed or the user's finger has left the display, a transition animation (such as zooming and fading into a new image) is displayed to transition to the new image. As an example, if the speed is less than or equal to a threshold, the next image along the line may be displayed as the new view. If the speed is greater than the threshold, the time and on-display location of the end of the sweep are determined. The image closest to this location along the line is identified as the target image based on the one or more image maps. In this example, the transition animation between images along the line continues until the target image is reached. In some examples, the images displayed during the transition animation and the target image may be retrieved in real time from local memory, or retrieved by identifying location information and requesting the images from one or more server computing devices while the transition animation is being played.
Since the view will typically traverse multiple images in order to reach the image identified based on the characteristics of the sweep, the full resolution of these multiple images may not actually be displayed. Instead, a lower resolution version of each image (such as a thumbnail image) may be displayed as part of the transition between the current image and the identified image. When the identified image itself is displayed, it may be displayed at full resolution. This may save time and processing power.
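A trivial sketch of such a fetch plan, with hypothetical resolution labels:

```python
def plan_transition_fetch(path_image_ids):
    """For a transition that passes through several images, request low-detail
    thumbnails for the intermediate frames and full resolution only for the
    final target image, to save time and processing power."""
    if not path_image_ids:
        return []
    *intermediate, target = path_image_ids
    return [(image_id, "thumbnail") for image_id in intermediate] + [(target, "full_resolution")]
```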
The characteristics of the lines may be altered to allow the user to more easily understand navigation through the multidimensional space. For example, the opacity of a line may be adjusted to allow the user to see more or fewer features below the line, thereby reducing the impact of the line on the user's ability to visually explore the multi-dimensional space. In this regard, the width of the line may correspond to the width of the road over which the line is overlaid. Similarly, the width of the line and the interaction zone may be adjusted to prevent the line from taking up too much of the display, while at the same time making the line thick enough for the user to be able to interact with it using his or her finger. The color of the line (e.g., blue) may be selected to complement the current view or to allow the line to stand out from the current view.
The position of the lines is not always the same as that of the connections in the one or more image maps. For example, where a line is displayed to correspond to the width of a road, the corresponding connections in the one or more image maps may not actually run down the middle of the road, such that the line does not correspond perfectly to the one or more image maps. In areas where the geometry of the connections is tortuous, the connections may be fit to straighter lines for the overlay. In this regard, the line may not pass through the center of each image, but may have a smooth appearance when overlaid on the view.
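The description does not specify how the connection geometry is smoothed; as one illustrative stand-in, a simple corner-cutting pass such as Chaikin's algorithm could be applied to the connection polyline before it is drawn as an overlay.

```python
def chaikin_smooth(points, iterations=2):
    """Smooth a polyline of (x, y) vertices by repeatedly cutting its corners."""
    for _ in range(iterations):
        smoothed = [points[0]]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        smoothed.append(points[-1])
        points = smoothed
    return points

# A sharp zig-zag becomes a gentler curve suitable for drawing over the imagery.
print(chaikin_smooth([(0, 0), (10, 0), (10, 10)], iterations=1))
```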
In some examples, the connection map may branch, such as where two roads cross each other, that is, at an intersection in the one or more image maps. Example 1500A of fig. 15 depicts an example of a branch region where two lines 1502 and 1504 cross each other. In this example, if the user's sweep travels along one line or the other, the resulting motion follows the corresponding branch according to the one or more image maps, and the view adjusts so that the user continues to face forward along that line. By showing the branches, the available directions of movement are immediately apparent to the user. However, in order to keep the overlay simple and easy to understand, only lines directly connected to the current line within a short distance of the current viewpoint may appear overlaid on the imagery. Showing only lines pointing in directions the user can actually travel within a predetermined distance prevents the user from being presented with conflicting or confusing information about where he or she may go in the multidimensional space. This is particularly important in areas with complex geometry or complex navigation.
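By way of illustration only, limiting the overlay to nearby branches might look like the following; the image-map structure and the distance value are assumptions for the example.

```python
import math

def visible_branches(image_map, current_node, max_distance=25.0):
    """Return connected nodes whose junction lies close to the current viewpoint.

    image_map: {node_id: {"pos": (x, y), "connections": [node_id, ...]}}
    """
    cx, cy = image_map[current_node]["pos"]
    return [n for n in image_map[current_node]["connections"]
            if math.hypot(image_map[n]["pos"][0] - cx,
                          image_map[n]["pos"][1] - cy) <= max_distance]
```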
In these branch regions, such as at traffic intersections where two or more roads cross each other, the user may want to change from one line to another. However, driving forward and then making a 90-degree turn in the middle of the intersection may feel unnatural. In this regard, the one or more image maps may be used to cut the corner of the intersection between two lines by displaying, as a transition, images that are not on either line. Fig. 15B depicts image map 1500B with lines 1512 and 1514 corresponding to lines 1504 and 1502, respectively. In the example of fig. 15A, the display 122 is displaying image E, facing in the direction of image G. If the user were to sweep within the interaction zone around line 1502, this would correspond to the location of image H on image map 1500B. Because this is a branch region, rather than transitioning to image F, then to image G, and then to image H, the display may appear to transition directly from image E to image H along path 1516. Alternatively, only a single image may be skipped, and the display may appear to transition from image E to image F and then along path 1518 to image H.
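By way of illustration only, the corner-cutting transition can be thought of as a short, bounded search over the connection map from the current image to the first image on the target line; the identifiers below mirror the E/F/G/H labels used in the example, and the structure of the map is an assumption.

```python
from collections import deque

def corner_cut_path(current_image, target_image, connection_map, max_hops=2):
    """Breadth-first search bounded to a couple of hops across the connection map."""
    queue = deque([[current_image]])
    while queue:
        path = queue.popleft()
        if path[-1] == target_image:
            return path
        if len(path) > max_hops:
            continue
        for neighbor in connection_map.get(path[-1], []):
            if neighbor not in path:
                queue.append(path + [neighbor])
    return None

# With a map where E connects directly to H (as along path 1516),
# this returns ["E", "H"] rather than walking E -> F -> G -> H.
print(corner_cut_path("E", "H", {"E": ["F", "H"], "F": ["G", "H"]}))
```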
To further reduce the interference of the line with the user's exploration of the multiple dimensions, the line may not be shown when the user is not interacting with the interface. This allows the user to see the entire 3D scene. For example, if the user taps and drags (or clicks and drags, or touches and drags) around any other part of the scene, he or she can look around the scene while the line is not visible. If the user taps once on the image, a line may appear. Effects such as flashing the line, highlighting it and then dimming it, thickening and then thinning it, or quickly making it more or less opaque and then returning it to normal opacity may be used to indicate the interactive nature of the line to the user. If the user makes a dragging motion within the interaction zone, the line may appear and the imagery may appear to transition along the line, even if the line was not visible when the drag began. After some predetermined period of time without input received by the client computing device (such as 2 seconds or more or less), or a second or more or less after a single tap on the display, the line may fade until it disappears, again to reduce the effect of the line on the imagery.
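A minimal sketch of this show/fade behaviour, assuming the client polls a visibility helper every frame; the class, the method names, and any timing beyond the 2-second figure mentioned above are illustrative assumptions.

```python
import time

class OverlayLineVisibility:
    FADE_AFTER_S = 2.0  # "2 seconds or more or less" in the description

    def __init__(self):
        self.visible = False
        self.last_interaction = 0.0

    def on_tap(self):
        """A single tap shows the line (a real UI might also pulse or flash it)."""
        self.visible = True
        self.last_interaction = time.monotonic()

    def on_drag_in_interaction_zone(self):
        """Dragging counts as interaction, so the line stays (or becomes) visible."""
        self.visible = True
        self.last_interaction = time.monotonic()

    def tick(self):
        """Called every frame; hides the line once the user has been idle."""
        if self.visible and time.monotonic() - self.last_interaction > self.FADE_AFTER_S:
            self.visible = False  # a real UI would animate the opacity down instead
```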
The interface may also provide types of navigation other than moving between images along the lines corresponding to the grid. For example, as noted above, a single tap may cause a line to appear. A single tap in this example may not cause the view to change; instead, the view may appear to remain motionless. A double tap, by contrast, may bring the user to an image in the one or more image maps that is connected to the current image at or near the point of the double tap. Additionally, if the user is currently facing nearly perpendicular to the line and sweeps generally perpendicular to the line, the view may appear to remain perpendicular to the line, allowing the user to "sweep" sideways along the road. Further, a pinch gesture may zoom in or out on a particular image without actually causing a transition to a new image.
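By way of illustration only, the different gestures described above could be dispatched as follows; the Gesture type and the returned action strings are assumptions standing in for whatever touch framework and rendering code the client actually uses.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Gesture:
    kind: str                                     # "single_tap", "double_tap", "pinch", "sweep"
    position: Optional[Tuple[float, float]] = None
    scale: float = 1.0
    facing_perpendicular_to_line: bool = False

def handle_gesture(g: Gesture) -> str:
    if g.kind == "single_tap":
        return "show the overlay line; keep the view still"
    if g.kind == "double_tap":
        return f"jump to the connected image nearest {g.position}"
    if g.kind == "pinch":
        return f"zoom the current image by {g.scale}x without changing images"
    if g.kind == "sweep" and g.facing_perpendicular_to_line:
        return "pan sideways along the road, staying perpendicular to the line"
    if g.kind == "sweep":
        return "move forward along the overlay line"
    return "ignore"

print(handle_gesture(Gesture("double_tap", position=(320.0, 540.0))))
```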
Although the above examples refer to lines, various other overlays can be used to provide the user with an indication of the navigable areas of the multiple dimensions. For example, in addition to or in conjunction with a toggle switch for switching between looking around (changing orientation) and moving through the multiple dimensions, a plurality of scroll bars may be placed on the map to provide visual feedback to the user as he or she moves through the multiple dimensions. In another example, instead of discrete lines, wider lines that appear to blend laterally into the view may be used. In yet another alternative, interaction zones may be suggested using a disk or a short, arrow-like indication that does not necessarily appear to extend far into the scene.
Fig. 16 is an example flow diagram 1600 of some of the aspects described above, which may be performed by one or more server computing devices, such as the server computing device 110. In this example, a first image of a multidimensional space is provided for display on a display of a client computing device at block 1602. The first image is provided with an overlay line extending across a portion of the first image and indicating a direction in which the multi-dimensional space extends into the first image, such that a second image is connected to the first image along the direction of the overlay line. User input indicating a sweep across a portion of the display is received at block 1604. The sweep is defined by a start pixel and an end pixel of the display. Based on the start pixel and the end pixel, it is determined at block 1606 that the sweep has occurred at least partially within an interaction zone of the first image. The interaction zone defines an area around the overlay line in which a user can interact with the multi-dimensional space. When the sweep has occurred at least partially within the interaction zone, it is determined at block 1608 that the sweep indicates a request to display an image different from the first image. When the sweep indicates a request to display an image different from the first image, the second image is selected at block 1610 based on the starting point of the sweep, the ending point of the sweep, and a connection map connecting the first image and the second image along the direction of the overlay line. The second image is provided for display on the display at block 1612 in order to provide a sensation of movement in the multi-dimensional space.
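By way of illustration only, the blocks of flow diagram 1600 could be exercised end to end roughly as follows; the InteractionZone type, the in-memory connection map, and the forward/backward convention are assumptions, not the patent's data model.

```python
from dataclasses import dataclass

@dataclass
class InteractionZone:
    # Axis-aligned bounds (in display pixels) of the zone around the overlay line.
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, p):
        return self.x0 <= p[0] <= self.x1 and self.y0 <= p[1] <= self.y1

def handle_sweep(connection_map, current_image, zone, start_px, end_px):
    """Return the image to display next, or None if the sweep should be ignored."""
    # Block 1606: the sweep must fall at least partly inside the interaction zone.
    if not (zone.contains(start_px) or zone.contains(end_px)):
        return None
    # Blocks 1608 and 1610: a sweep inside the zone requests a different image; pick
    # the neighbour connected along the overlay line, moving forward for an upward
    # sweep and backward for a downward one (a simplifying assumption).
    forward = end_px[1] < start_px[1]  # screen y grows downward
    neighbours = connection_map[current_image]
    return neighbours["forward"] if forward else neighbours["backward"]

# Block 1612: the returned image would then be provided for display.
print(handle_sweep({"E": {"forward": "F", "backward": "D"}}, "E",
                   InteractionZone(400, 0, 680, 1920), (540, 1500), (540, 900)))  # -> F
```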
Most of the above alternative examples are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. By way of example, the preceding operations need not be performed in the precise order described above; rather, the various steps may be handled in a different order or simultaneously. Steps may also be omitted unless otherwise stated. Furthermore, the provision of the examples described herein, as well as clauses phrased as "such as," "including," and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, each example is intended to illustrate only one of many possible embodiments. In addition, the same reference numbers in different drawings may identify the same or similar elements.
INDUSTRIAL APPLICABILITY
The present technology enjoys wide industrial applicability, including, but not limited to, providing interfaces that enable a user to navigate within a multi-dimensional environment in a first- or third-person view.

Claims (17)

1. A method of navigating a multi-dimensional space, the method comprising:
providing, by one or more processors, a first image of a multidimensional space for display on a display of a client computing device and an overlay line extending across a portion of the first image and indicating a direction in which the multidimensional space extends into the first image such that a second image is connected to the first image along the direction of the overlay line;
receiving, by the one or more processors, user input indicative of a sweep across a portion of the display, the sweep defined by a start pixel and an end pixel of the display;
determining, by the one or more processors, based on the start pixel and the end pixel, that the sweep occurred at least partially within an interaction region of the first image, the interaction region defining an area around the overlay line in which the user can interact with the multi-dimensional space;
determining, by the one or more processors, that the sweep indicates a request to display an image different from the first image when the sweep occurs at least partially within the interaction zone;
selecting, by the one or more processors, the second image based on a starting point of the sweep, an ending point of the sweep, and a connection map connecting the first image and the second image along the direction of the overlay line when the sweep indicates a request to display the image different from the first image;
providing the second image with a second overlay line extending across a portion of the second image and indicating a direction in which the multi-dimensional space extends into the second image, such that a third image is connected to the second image in the connection map along the direction of the second overlay line;
receiving a second user input indicating a second sweep;
determining that the second sweep is within a threshold angle perpendicular to the direction that the multi-dimensional space extends into the second image; and
switching from the second image to a fourth image located on a second connection map adjacent to the connection map in a direction perpendicular to the direction in which the multi-dimensional space extends into the second image when the second sweep is within a threshold angle perpendicular to the direction in which the multi-dimensional space extends into the second image, the second and fourth images being neither connected to each other in the connection map nor connected to each other in the second connection map.
2. The method of claim 1, further comprising providing a transition image for display between the first image and the second image, the transition image being provided as a thumbnail image having less detail than the first image and the second image.
3. The method of claim 1, further comprising providing instructions to fade out the overlay after a threshold period of time has elapsed without any user action on the overlay.
4. The method of claim 3, further comprising, after fading out the overlay:
receiving a second user input on the display; and
providing instructions to redisplay the overlay in response to the second user input.
5. The method of claim 1, further comprising:
determining a direction and a magnitude of the sweep based on the start pixel of the sweep and the end pixel of the sweep, and
wherein selecting the second image is further based on the direction and magnitude.
6. The method of claim 1, further comprising:
receiving a third user input indicative of a third sweep;
determining that the third sweep is within a threshold angle perpendicular to the direction that the multi-dimensional space extends into the fourth image; and
translating across the multi-dimensional space of the fourth image when the third sweep is within a threshold angle perpendicular to the direction that the multi-dimensional space extends into the fourth image.
7. The method of claim 1, further comprising:
receiving a third user input indicative of a third sweep;
determining that the third sweep is within a threshold angle perpendicular to the direction that the multi-dimensional space extends into the fourth image; and
changing an orientation within the fourth image when the third sweep is within a threshold angle perpendicular to the direction in which the multi-dimensional space extends into the fourth image.
8. The method of claim 1, further comprising: providing a third overlay line for display with the second image, the third overlay line representing a second navigation path proximate to a current view of the second image, the third overlay line being provided such that the third overlay line and the second overlay line cross each other when displayed with the second image.
9. The method of claim 8, further comprising:
receiving a second user input along the third overlay line, the second user input indicating a request to transition from an image along the second overlay line to an image along the third overlay line; and
providing, for display, a fifth image in response to the second user input, the fifth image arranged in the connection map along the third overlay line.
10. The method of claim 9, further comprising:
selecting a set of images for serial display as a transition between the second image and the fifth image based on connections between images in the connection map; and
providing the set of images for display on the display.
11. The method of claim 10, further comprising, prior to providing the set of images, filtering the set of images based on connections between two images of the set of images in a third connection map different from the connection map to remove at least one image such that a filtered set of images is provided for display as the transition between the second image and the fifth image.
12. A system comprising one or more computing devices, each computing device having one or more processors, the one or more computing devices configured to:
providing a first image of a multi-dimensional space for display on a display of a client computing device and an overlay line extending across a portion of the first image and indicating a direction in which the multi-dimensional space extends into the first image such that a second image is connected to the first image along the direction of the overlay line;
receiving user input indicative of a sweep across a portion of the display, the sweep defined by a start pixel and an end pixel of the display;
determining, based on the start pixel and the end pixel, that the sweep has occurred at least partially within an interaction region of the first image, the interaction region defining an area around the overlay line where the user can interact with the multi-dimensional space;
determining that the sweep indicates a request to display an image different from the first image when the sweep occurs at least partially within the interaction zone;
selecting the second image based on a start point of the sweep, an end point of the sweep, and a connection map connecting the first image and the second image along the direction of the overlay line when the sweep indicates a request to display the image different from the first image;
providing the second image with a second overlay line extending across a portion of the second image and indicating a direction in which the multi-dimensional space extends into the second image, such that a third image is connected to the second image in the connection map along the direction of the second overlay line;
receiving a second user input indicating a second sweep;
determining that the second sweep is within a threshold angle perpendicular to the direction that the multi-dimensional space extends into the second image; and
switching from the second image to a fourth image located on a second connection map adjacent to the connection map in a direction perpendicular to the direction in which the multi-dimensional space extends into the second image when the second sweep is within a threshold angle perpendicular to the direction in which the multi-dimensional space extends into the second image, the second and fourth images being neither connected to each other in the connection map nor connected to each other in the second connection map.
13. The system of claim 12, wherein the one or more computing devices are further configured to provide a transition image for display between the first image and the second image, the transition image being provided as a thumbnail image having less detail than the first image and the second image.
14. The system of claim 12, wherein the one or more computing devices are further configured to provide instructions to fade out the overlay after a threshold period of time has elapsed without any user action on the overlay.
15. The system of claim 14, wherein the one or more computing devices are further configured to, after fading out the overlay:
receiving a second user input on the display; and
providing instructions to redisplay the overlay in response to the second user input.
16. The system of claim 12, wherein the one or more computing devices are further configured to:
determining a direction and a magnitude of the sweep based on the start pixel of the sweep and the end pixel of the sweep, and
wherein selecting the second image is further based on the direction and magnitude.
17. A non-transitory computer-readable storage device storing computer-readable instructions of a program, which when executed by one or more processors, cause the one or more processors to perform a method, the method comprising:
providing a first image of a multi-dimensional space for display on a display of a client computing device and an overlay line extending across a portion of the first image and indicating a direction in which the multi-dimensional space extends into the first image such that a second image is connected to the first image along the direction of the overlay line;
receiving user input indicative of a sweep across a portion of the display, the sweep defined by a start pixel and an end pixel of the display;
determining, based on the start pixel and the end pixel, that the sweep has occurred at least partially within an interaction region of the first image, the interaction region defining an area around the overlay line where the user can interact with the multi-dimensional space;
when the sweep has occurred at least partially within the interaction zone, determining that the sweep indicates a request to display an image different from the first image;
selecting the second image based on a start point of the sweep, an end point of the sweep, and a connection map connecting the first image and the second image along the direction of the overlay line when the sweep indicates a request to display the image different from the first image;
providing the second image with a second overlay line extending across a portion of the second image and indicating a direction in which the multi-dimensional space extends into the second image, such that a third image is connected to the second image in the connection map along the direction of the second overlay line;
receiving a second user input indicating a second sweep;
determining that the second sweep is within a threshold angle perpendicular to the direction that the multi-dimensional space extends into the second image; and
switching from the second image to a fourth image located on a second connection map adjacent to the connection map in a direction perpendicular to the direction in which the multi-dimensional space extends into the second image when the second sweep is within a threshold angle perpendicular to the direction in which the multi-dimensional space extends into the second image, the second and fourth images being neither connected to each other in the connection map nor connected to each other in the second connection map.
CN201680053204.9A 2015-12-17 2016-12-13 Navigating through a multi-dimensional image space Active CN108450035B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US14/972,843 US10217283B2 (en) 2015-12-17 2015-12-17 Navigation through multidimensional images spaces
US14/972,843 2015-12-17
PCT/US2016/066342 WO2017106170A2 (en) 2015-12-17 2016-12-13 Navigation through multidimensional images spaces

Publications (2)

Publication Number Publication Date
CN108450035A CN108450035A (en) 2018-08-24
CN108450035B true CN108450035B (en) 2023-01-03

Family

ID=58765892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680053204.9A Active CN108450035B (en) 2015-12-17 2016-12-13 Navigating through a multi-dimensional image space

Country Status (8)

Country Link
US (1) US10217283B2 (en)
EP (1) EP3338057A2 (en)
JP (2) JP6615988B2 (en)
KR (2) KR102230738B1 (en)
CN (1) CN108450035B (en)
DE (1) DE112016004195T5 (en)
GB (1) GB2557142B (en)
WO (1) WO2017106170A2 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100974155B1 (en) * 2003-12-11 2010-08-04 인터내셔널 비지네스 머신즈 코포레이션 Data transfer error checking
USD780777S1 (en) 2014-04-22 2017-03-07 Google Inc. Display screen with graphical user interface or portion thereof
US9934222B2 (en) 2014-04-22 2018-04-03 Google Llc Providing a thumbnail image that follows a main image
USD781317S1 (en) 2014-04-22 2017-03-14 Google Inc. Display screen with graphical user interface or portion thereof
US9972121B2 (en) * 2014-04-22 2018-05-15 Google Llc Selecting time-distributed panoramic images for display
CN115361482A (en) * 2017-04-28 2022-11-18 索尼公司 Information processing device, information processing method, information processing program, image processing device, and image processing system
RU2740445C2 (en) * 2018-09-14 2021-01-14 Общество с ограниченной ответственностью "НАВИГАТОРСПАС" Method of converting 2d images to 3d format
KR20200070320A (en) * 2018-09-25 2020-06-17 구글 엘엘씨 Dynamic re-styling of digital maps
JP6571859B1 (en) * 2018-12-26 2019-09-04 Amatelus株式会社 VIDEO DISTRIBUTION DEVICE, VIDEO DISTRIBUTION SYSTEM, VIDEO DISTRIBUTION METHOD, AND VIDEO DISTRIBUTION PROGRAM
CN109821237B (en) * 2019-01-24 2022-04-22 腾讯科技(深圳)有限公司 Method, device and equipment for rotating visual angle and storage medium
US11675494B2 (en) * 2020-03-26 2023-06-13 Snap Inc. Combining first user interface content into second user interface

Family Cites Families (155)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2040273C (en) * 1990-04-13 1995-07-18 Kazu Horiuchi Image displaying system
JP2900632B2 (en) 1991-04-19 1999-06-02 株式会社日立製作所 Digital map processing device and digital map display method
US5247356A (en) 1992-02-14 1993-09-21 Ciampa John A Method and apparatus for mapping and measuring land
US5396583A (en) 1992-10-13 1995-03-07 Apple Computer, Inc. Cylindrical to planar image mapping using scanline coherence
US5528263A (en) * 1994-06-15 1996-06-18 Daniel M. Platzker Interactive projected video image display system
US5559707A (en) 1994-06-24 1996-09-24 Delorme Publishing Company Computer aided routing system
EP0807352A1 (en) 1995-01-31 1997-11-19 Transcenic, Inc Spatial referenced photography
US6097393A (en) 1996-09-03 2000-08-01 The Takshele Corporation Computer-executed, three-dimensional graphical resource management process and system
US6199015B1 (en) 1996-10-10 2001-03-06 Ames Maps, L.L.C. Map-based navigation system with overlays
JP4332231B2 (en) 1997-04-21 2009-09-16 ソニー株式会社 Imaging device controller and imaging system
US6597818B2 (en) 1997-05-09 2003-07-22 Sarnoff Corporation Method and apparatus for performing geo-spatial registration of imagery
US6285317B1 (en) 1998-05-01 2001-09-04 Lucent Technologies Inc. Navigation system with three-dimensional display
JP4119529B2 (en) 1998-06-17 2008-07-16 オリンパス株式会社 Virtual environment generation method and apparatus, and recording medium on which virtual environment generation program is recorded
US6687753B2 (en) 1998-06-25 2004-02-03 International Business Machines Corporation Method and system for providing three-dimensional graphics over computer networks
US6477268B1 (en) 1998-11-17 2002-11-05 Industrial Technology Research Institute Producing transitions between vistas
FR2788617B1 (en) 1999-01-15 2001-03-02 Za Production METHOD FOR SELECTING AND DISPLAYING A DIGITAL FILE TYPE ELEMENT, STILL IMAGE OR MOVING IMAGES, ON A DISPLAY SCREEN
US6346938B1 (en) 1999-04-27 2002-02-12 Harris Corporation Computer-resident mechanism for manipulating, navigating through and mensurating displayed image of three-dimensional geometric model
US6611615B1 (en) 1999-06-25 2003-08-26 University Of Iowa Research Foundation Method and apparatus for generating consistent image registration
US6609128B1 (en) 1999-07-30 2003-08-19 Accenture Llp Codes table framework design in an E-commerce architecture
US7574381B1 (en) 1999-08-06 2009-08-11 Catherine Lin-Hendel System and method for constructing and displaying active virtual reality cyber malls, show rooms, galleries, stores, museums, and objects within
US6563529B1 (en) 1999-10-08 2003-05-13 Jerry Jongerius Interactive system for displaying detailed view and direction in panoramic images
CA2388260C (en) 1999-10-19 2009-01-27 American Calcar Inc. Technique for effective navigation based on user preferences
US6515664B1 (en) 1999-11-12 2003-02-04 Pixaround.Com Pte Ltd Fast single-pass cylindrical to planar projection
US6717608B1 (en) 1999-12-31 2004-04-06 Stmicroelectronics, Inc. Motion estimation for panoramic digital camera
US6885392B1 (en) 1999-12-31 2005-04-26 Stmicroelectronics, Inc. Perspective correction for preview area of panoramic digital camera
US6771304B1 (en) 1999-12-31 2004-08-03 Stmicroelectronics, Inc. Perspective correction device for panoramic digital camera
US20020010734A1 (en) 2000-02-03 2002-01-24 Ebersole John Franklin Internetworked augmented reality system and method
US6606091B2 (en) 2000-02-07 2003-08-12 Siemens Corporate Research, Inc. System for interactive 3D object extraction from slice-based medical images
US20030210228A1 (en) 2000-02-25 2003-11-13 Ebersole John Franklin Augmented reality situational awareness system and method
US6895126B2 (en) 2000-10-06 2005-05-17 Enrico Di Bernardo System and method for creating, storing, and utilizing composite images of a geographic location
US20020063725A1 (en) 2000-11-30 2002-05-30 Virtuallylisted Llc Method and apparatus for capturing and presenting panoramic images for websites
US7006707B2 (en) 2001-05-03 2006-02-28 Adobe Systems Incorporated Projecting images onto a surface
JP2003006680A (en) 2001-06-20 2003-01-10 Zenrin Co Ltd Method for generating three-dimensional electronic map data
US7509241B2 (en) 2001-07-06 2009-03-24 Sarnoff Corporation Method and apparatus for automatically generating a site model
US20040234933A1 (en) 2001-09-07 2004-11-25 Dawson Steven L. Medical procedure training system
US7096428B2 (en) 2001-09-28 2006-08-22 Fuji Xerox Co., Ltd. Systems and methods for providing a spatially indexed panoramic video
US7274380B2 (en) 2001-10-04 2007-09-25 Siemens Corporate Research, Inc. Augmented reality system
JP2003141562A (en) 2001-10-29 2003-05-16 Sony Corp Image processing apparatus and method for nonplanar image, storage medium, and computer program
JP4610156B2 (en) 2001-11-13 2011-01-12 アルスロン株式会社 A system for grasping the space of roads, rivers, and railways using panoramic images
JP2003172899A (en) 2001-12-05 2003-06-20 Fujitsu Ltd Display device
US20030110185A1 (en) 2001-12-10 2003-06-12 Rhoads Geoffrey B. Geographically-based databases and methods
US7411594B2 (en) 2002-01-15 2008-08-12 Canon Kabushiki Kaisha Information processing apparatus and method
CN1162806C (en) 2002-03-07 2004-08-18 上海交通大学 Shooting, formation, transmission and display method of road overall view image tape
US7348963B2 (en) * 2002-05-28 2008-03-25 Reactrix Systems, Inc. Interactive video display system
JP4487775B2 (en) 2002-10-15 2010-06-23 セイコーエプソン株式会社 Panorama composition processing of multiple image data
US7424133B2 (en) 2002-11-08 2008-09-09 Pictometry International Corporation Method and apparatus for capturing, geolocating and measuring oblique images
US6836723B2 (en) 2002-11-29 2004-12-28 Alpine Electronics, Inc Navigation method and system
KR100464342B1 (en) 2003-04-14 2005-01-03 삼성전자주식회사 To-can type optical module
US7526718B2 (en) 2003-04-30 2009-04-28 Hewlett-Packard Development Company, L.P. Apparatus and method for recording “path-enhanced” multimedia
US6968973B2 (en) 2003-05-31 2005-11-29 Microsoft Corporation System and process for viewing and navigating through an interactive video tour
KR100703444B1 (en) 2003-06-03 2007-04-03 삼성전자주식회사 Device and method for downloading and displaying a images of global position information in navigation system
JP4321128B2 (en) 2003-06-12 2009-08-26 株式会社デンソー Image server, image collection device, and image display terminal
US7197170B2 (en) 2003-11-10 2007-03-27 M2S, Inc. Anatomical visualization and measurement system
JP2005149409A (en) 2003-11-19 2005-06-09 Canon Inc Image reproduction method and apparatus
CA2455359C (en) 2004-01-16 2013-01-08 Geotango International Corp. System, computer program and method for 3d object measurement, modeling and mapping from single imagery
JP4437677B2 (en) 2004-03-01 2010-03-24 三菱電機株式会社 Landscape display device
US20050195096A1 (en) 2004-03-05 2005-09-08 Ward Derek K. Rapid mobility analysis and vehicular route planning from overhead imagery
US20060098010A1 (en) 2004-03-09 2006-05-11 Jeff Dwyer Anatomical visualization and measurement system
WO2005104039A2 (en) 2004-03-23 2005-11-03 Google, Inc. A digital mapping system
DE102004020861B4 (en) 2004-04-28 2009-10-01 Siemens Ag Method for the reconstruction of projection data sets with dose-reduced section-wise spiral scanning in computed tomography
US7746376B2 (en) 2004-06-16 2010-06-29 Felipe Mendoza Method and apparatus for accessing multi-dimensional mapping and information
US7460953B2 (en) 2004-06-30 2008-12-02 Navteq North America, Llc Method of operating a navigation system using images
US20060002590A1 (en) 2004-06-30 2006-01-05 Borak Jason M Method of collecting information for a geographic database for use with a navigation system
US20080189031A1 (en) 2007-02-06 2008-08-07 Meadow William D Methods and apparatus for presenting a continuum of image data
JP4130428B2 (en) 2004-09-02 2008-08-06 ザイオソフト株式会社 Image processing method and image processing program
US8195386B2 (en) 2004-09-28 2012-06-05 National University Corporation Kumamoto University Movable-body navigation information display method and movable-body navigation information display unit
JP2006105640A (en) 2004-10-01 2006-04-20 Hitachi Ltd Navigation system
US7529552B2 (en) 2004-10-05 2009-05-05 Isee Media Inc. Interactive imaging for cellular phones
JP2008520052A (en) 2004-11-12 2008-06-12 モク3, インコーポレイテッド Method for transition between scenes
FR2879791B1 (en) 2004-12-16 2007-03-16 Cnes Epic METHOD FOR PROCESSING IMAGES USING AUTOMATIC GEOREFERENCING OF IMAGES FROM A COUPLE OF IMAGES TAKEN IN THE SAME FOCAL PLAN
US7369136B1 (en) * 2004-12-17 2008-05-06 Nvidia Corporation Computing anisotropic texture mapping parameters
US7580952B2 (en) 2005-02-28 2009-08-25 Microsoft Corporation Automatic digital image grouping using criteria based on image metadata and spatial information
US7391899B2 (en) 2005-03-31 2008-06-24 Harris Corporation System and method for three dimensional change detection and measurement of a scene using change analysis
US7466244B2 (en) 2005-04-21 2008-12-16 Microsoft Corporation Virtual earth rooftop overlay and bounding
US7777648B2 (en) 2005-04-21 2010-08-17 Microsoft Corporation Mode information displayed in a mapping application
US20060241859A1 (en) 2005-04-21 2006-10-26 Microsoft Corporation Virtual earth real-time advertising
US20070210937A1 (en) 2005-04-21 2007-09-13 Microsoft Corporation Dynamic rendering of map information
US8103445B2 (en) 2005-04-21 2012-01-24 Microsoft Corporation Dynamic map rendering as a function of a user parameter
US8044996B2 (en) 2005-05-11 2011-10-25 Xenogen Corporation Surface construction using combined photographic and structured light information
US7554539B2 (en) 2005-07-27 2009-06-30 Balfour Technologies Llc System for viewing a collection of oblique imagery in a three or four dimensional virtual scene
US20070035563A1 (en) 2005-08-12 2007-02-15 The Board Of Trustees Of Michigan State University Augmented reality spatial interaction and navigational system
EP1920423A2 (en) 2005-09-01 2008-05-14 GeoSim Systems Ltd. System and method for cost-effective, high-fidelity 3d-modeling of large-scale urban environments
JP4853149B2 (en) 2005-09-14 2012-01-11 ソニー株式会社 Image processing apparatus, image display apparatus, image processing method, program, and recording medium
US7980704B2 (en) 2005-09-14 2011-07-19 Sony Corporation Audiovisual system including wall-integrated audiovisual capabilities
US8160400B2 (en) 2005-11-17 2012-04-17 Microsoft Corporation Navigating images using image based geometric alignment and object based controls
US8081827B2 (en) 2006-02-28 2011-12-20 Ricoh Co., Ltd. Compressed data image object feature extraction, ordering, and delivery
US7778491B2 (en) 2006-04-10 2010-08-17 Microsoft Corporation Oblique image stitching
US10042927B2 (en) 2006-04-24 2018-08-07 Yeildbot Inc. Interest keyword identification
US7310606B2 (en) 2006-05-12 2007-12-18 Harris Corporation Method and system for generating an image-textured digital surface model (DSM) for a geographical area of interest
WO2007146967A2 (en) 2006-06-12 2007-12-21 Google Inc. Markup language for interactive geographic information system
US7712052B2 (en) 2006-07-31 2010-05-04 Microsoft Corporation Applications of three-dimensional environments constructed from images
US20080027985A1 (en) 2006-07-31 2008-01-31 Microsoft Corporation Generating spatial multimedia indices for multimedia corpuses
US20080043020A1 (en) * 2006-08-18 2008-02-21 Microsoft Corporation User interface for viewing street side imagery
US8611673B2 (en) 2006-09-14 2013-12-17 Parham Aarabi Method, system and computer program for interactive spatial link-based image searching, sorting and/or displaying
US8041730B1 (en) 2006-10-24 2011-10-18 Google Inc. Using geographic data to identify correlated geographic synonyms
US8542884B1 (en) 2006-11-17 2013-09-24 Corelogic Solutions, Llc Systems and methods for flood area change detection
US8498497B2 (en) 2006-11-17 2013-07-30 Microsoft Corporation Swarm imaging
MX2009005351A (en) 2006-11-20 2009-06-08 Satori Pharmaceuticals Inc Compounds useful for treating neurodegenerative disorders.
US20080143709A1 (en) 2006-12-14 2008-06-19 Earthmine, Inc. System and method for accessing three dimensional information from a panoramic image
US7877707B2 (en) * 2007-01-06 2011-01-25 Apple Inc. Detecting and interpreting real-world and security gestures on touch and hover sensitive devices
US8593518B2 (en) 2007-02-01 2013-11-26 Pictometry International Corp. Computer system for continuous oblique panning
US20080309668A1 (en) 2007-05-17 2008-12-18 Igor Borovikov Image processing method and apparatus
CA2958728C (en) * 2007-05-25 2019-04-30 Google Inc. Rendering, viewing and annotating panoramic images, and applications thereof
US7990394B2 (en) 2007-05-25 2011-08-02 Google Inc. Viewing and navigating within panoramic images, and applications thereof
KR100837283B1 (en) * 2007-09-10 2008-06-11 (주)익스트라스탠다드 Mobile device equipped with touch screen
US8326048B2 (en) 2007-10-04 2012-12-04 Microsoft Corporation Geo-relevance for images
US8049750B2 (en) * 2007-11-16 2011-11-01 Sportvision, Inc. Fading techniques for virtual viewpoint animations
KR100979910B1 (en) 2008-01-29 2010-09-06 (주)멜파스 Touchscreen panel having partitioned transparent electrode structure
RU2460187C2 (en) 2008-02-01 2012-08-27 Рокстек Аб Transition frame with inbuilt pressing device
CA2707246C (en) 2009-07-07 2015-12-29 Certusview Technologies, Llc Automatic assessment of a productivity and/or a competence of a locate technician with respect to a locate and marking operation
US20090210277A1 (en) * 2008-02-14 2009-08-20 Hardin H Wesley System and method for managing a geographically-expansive construction project
JP5039922B2 (en) 2008-03-21 2012-10-03 インターナショナル・ビジネス・マシーンズ・コーポレーション Image drawing system, image drawing server, image drawing method, and computer program
US8350850B2 (en) 2008-03-31 2013-01-08 Microsoft Corporation Using photo collections for three dimensional modeling
US20090327024A1 (en) 2008-06-27 2009-12-31 Certusview Technologies, Llc Methods and apparatus for quality assessment of a field service operation
US8805110B2 (en) 2008-08-19 2014-08-12 Digimarc Corporation Methods and systems for content processing
JP2010086230A (en) 2008-09-30 2010-04-15 Sony Corp Information processing apparatus, information processing method and program
US20100085350A1 (en) 2008-10-02 2010-04-08 Microsoft Corporation Oblique display with additional detail
US8243997B2 (en) 2008-10-16 2012-08-14 The Curators Of The University Of Missouri Detecting geographic-area change using high-resolution, remotely sensed imagery
US8457387B2 (en) 2009-03-13 2013-06-04 Disney Enterprises, Inc. System and method for interactive environments presented by video playback devices
US9189124B2 (en) * 2009-04-15 2015-11-17 Wyse Technology L.L.C. Custom pointer features for touch-screen on remote client devices
KR20100122383A (en) 2009-05-12 2010-11-22 삼성전자주식회사 Method and apparatus for display speed improvement of image
US8274571B2 (en) 2009-05-21 2012-09-25 Google Inc. Image zooming using pre-existing imaging information
US10440329B2 (en) * 2009-05-22 2019-10-08 Immersive Media Company Hybrid media viewing application including a region of interest within a wide field of view
US9213780B2 (en) 2009-06-26 2015-12-15 Microsoft Technology Licensing Llc Cache and index refreshing strategies for variably dynamic items and accesses
JP5513806B2 (en) * 2009-08-07 2014-06-04 株式会社 ミックウェア Linked display device, linked display method, and program
CA2754159C (en) 2009-08-11 2012-05-15 Certusview Technologies, Llc Systems and methods for complex event processing of vehicle-related information
US20110082846A1 (en) 2009-10-07 2011-04-07 International Business Machines Corporation Selective processing of location-sensitive data streams
US8159524B2 (en) 2009-11-09 2012-04-17 Google Inc. Orthorectifying stitched oblique imagery to a nadir view, and applications thereof
US8730309B2 (en) * 2010-02-23 2014-05-20 Microsoft Corporation Projectors and depth cameras for deviceless augmented reality and interaction
EP2545540B1 (en) * 2010-03-12 2016-03-09 Telefonaktiebolaget LM Ericsson (publ) Cellular network based assistant for vehicles
US8660355B2 (en) 2010-03-19 2014-02-25 Digimarc Corporation Methods and systems for determining image processing operations relevant to particular imagery
US8640020B2 (en) * 2010-06-02 2014-01-28 Microsoft Corporation Adjustable and progressive mobile device street view
US20110310088A1 (en) 2010-06-17 2011-12-22 Microsoft Corporation Personalized navigation through virtual 3d environments
US8488040B2 (en) 2010-06-18 2013-07-16 Microsoft Corporation Mobile and server-side computational photography
JP5842000B2 (en) 2010-06-30 2016-01-13 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Zoom in the displayed image
JP5514038B2 (en) * 2010-08-16 2014-06-04 株式会社 ミックウェア Street view automatic output device, map automatic output device, street view automatic output method, map automatic output method, and program
JP5732218B2 (en) 2010-09-21 2015-06-10 任天堂株式会社 Display control program, display control device, display control system, and display control method
US8957909B2 (en) 2010-10-07 2015-02-17 Sensor Platforms, Inc. System and method for compensating for drift in a display of a user interface state
US8669976B1 (en) * 2010-10-12 2014-03-11 Google Inc. Selecting and verifying textures in image-based three-dimensional modeling, and applications thereof
US8577177B2 (en) 2010-11-11 2013-11-05 Siemens Aktiengesellschaft Symmetric and inverse-consistent deformable registration
EP2474950B1 (en) * 2011-01-05 2013-08-21 Softkinetic Software Natural gesture based user interface methods and systems
US8612465B1 (en) 2011-04-08 2013-12-17 Google Inc. Image reacquisition
JP2012222674A (en) 2011-04-12 2012-11-12 Sony Corp Image processing apparatus, image processing method, and program
AU2011202609B2 (en) 2011-05-24 2013-05-16 Canon Kabushiki Kaisha Image clustering method
US20120320057A1 (en) * 2011-06-15 2012-12-20 Harris Corporation Communications system including data server storing 3d geospatial model and mobile electronic device to display 2d images based upon the 3d geospatial model
JP5800386B2 (en) * 2011-07-08 2015-10-28 株式会社 ミックウェア Map display device, map display method, and program
EP2740104B1 (en) 2011-08-02 2016-12-28 ViewsIQ Inc. Apparatus and method for digital microscopy imaging
US9153062B2 (en) 2012-02-29 2015-10-06 Yale University Systems and methods for sketching and imaging
KR101339570B1 (en) 2012-05-30 2013-12-10 삼성전기주식회사 Method and apparatus for sensing touch input
US8965696B2 (en) * 2012-06-05 2015-02-24 Apple Inc. Providing navigation instructions while operating navigation application in background
US9418672B2 (en) * 2012-06-05 2016-08-16 Apple Inc. Navigation application with adaptive instruction text
KR20150034255A (en) * 2012-07-15 2015-04-02 애플 인크. Disambiguation of multitouch gesture recognition for 3d interaction
JP6030935B2 (en) * 2012-12-04 2016-11-24 任天堂株式会社 Information processing program, display control apparatus, display system, and display method
US9046996B2 (en) 2013-10-17 2015-06-02 Google Inc. Techniques for navigation among multiple images
KR20150099255A (en) * 2014-02-21 2015-08-31 삼성전자주식회사 Method for displaying information and electronic device using the same

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101319908A (en) * 2007-06-07 2008-12-10 索尼株式会社 Navigation device and map scroll processing method
CN102460421A (en) * 2009-06-05 2012-05-16 微软公司 Scrubbing variable content paths
CN102741797A (en) * 2009-12-01 2012-10-17 诺基亚公司 Method and apparatus for transforming three-dimensional map objects to present navigation information
US8941685B1 (en) * 2011-03-08 2015-01-27 Google Inc. Showing geo-located information in a 3D geographical space
US8817067B1 (en) * 2011-07-29 2014-08-26 Google Inc. Interface for applying a photogrammetry algorithm to panoramic photographic images
CN104335012A (en) * 2012-06-05 2015-02-04 苹果公司 Voice instructions during navigation
CN103150759A (en) * 2013-03-05 2013-06-12 腾讯科技(深圳)有限公司 Method and device for dynamically enhancing street view image
CN105164683A (en) * 2014-01-31 2015-12-16 谷歌公司 System and method for geo-locating images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Smart Navigation for Street View UI; Luke Wroblewski; http://swww.lukew.com/ff/entry.asp?830; 2009-06-06; pp. 1-2 *
利用SOSO街景地图快速生成导航路线 [Quickly generating a navigation route using the SOSO street view map]; 一江春水; 《电脑迷》 (PC Fan); 2013-09-01 (No. 9); p. 75 *

Also Published As

Publication number Publication date
WO2017106170A3 (en) 2017-08-17
GB2557142A (en) 2018-06-13
KR102230738B1 (en) 2021-03-22
JP2020053058A (en) 2020-04-02
GB201803712D0 (en) 2018-04-25
KR20180038503A (en) 2018-04-16
JP2018538588A (en) 2018-12-27
JP6818847B2 (en) 2021-01-20
KR20200083680A (en) 2020-07-08
GB2557142B (en) 2022-02-23
WO2017106170A2 (en) 2017-06-22
EP3338057A2 (en) 2018-06-27
DE112016004195T5 (en) 2018-08-02
US20170178404A1 (en) 2017-06-22
KR102131822B1 (en) 2020-07-08
JP6615988B2 (en) 2019-12-04
US10217283B2 (en) 2019-02-26
CN108450035A (en) 2018-08-24

Similar Documents

Publication Publication Date Title
CN108450035B (en) Navigating through a multi-dimensional image space
US11099654B2 (en) Facilitate user manipulation of a virtual reality environment view using a computing device with a touch sensitive surface
US20230306688A1 (en) Selecting two-dimensional imagery data for display within a three-dimensional model
CA2818695C (en) Guided navigation through geo-located panoramas
US20180007340A1 (en) Method and system for motion controlled mobile viewing
JP2014504384A (en) Generation of 3D virtual tour from 2D images
US9128612B2 (en) Continuous determination of a perspective
WO2013181032A2 (en) Method and system for navigation to interior view imagery from street level imagery
JP2012518849A (en) System and method for indicating transitions between street level images
WO2005069170A1 (en) Image file list display device
EP2788974B1 (en) Texture fading for smooth level of detail transitions in a graphics application
US9025007B1 (en) Configuring stereo cameras
US20180032536A1 (en) Method of and system for advertising real estate within a defined geo-targeted audience
CN104599310A (en) Three-dimensional scene cartoon recording method and device
Tatzgern et al. Exploring Distant Objects with Augmented Reality.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant