GB2556068A - Data interaction device - Google Patents

Data interaction device

Info

Publication number
GB2556068A
GB2556068A (application number GB1619417.7A / GB201619417A)
Authority
GB
United Kingdom
Prior art keywords
data
visualisation
user
field
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1619417.7A
Other versions
GB2556068A8 (en)
Inventor
Martin Paul
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chartify It Ltd
Original Assignee
Chartify It Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chartify It Ltd filed Critical Chartify It Ltd
Priority to GB1619417.7A
Publication of GB2556068A
Publication of GB2556068A8
Legal status: Withdrawn


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/20: Drawing from basic elements, e.g. lines or circles
    • G06T 11/206: Drawing of charts or graphs
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Displaying simultaneously 305 a plurality of field elements, each indicative of a field of data, in association with a plurality of visualisation elements, each representative of a visualisation scheme against which to plot data; selecting, in response to a user's choice of visualisation element, a field of data and a visualisation scheme 307; and displaying data plotted based on the visualisation scheme 309. The virtual environment may be an augmented, virtual, or mixed-reality space. A user may choose a second field element indicative of a second field of data; a second visualisation scheme may automatically be selected based on a characteristic of the second field of data. The device may comprise a hand tracking device and gesture detection module to detect user intentions by identifying hand gestures 311. The device may respond to the detection of hand gestures by modifying the displayed data 313, for example by adjusting the axes or by panning, resizing or translating the displayed information. The visualisation schemes may be a variety of graphs or charts (Figs. 5A-C).

Description

(71) Applicant(s):
Chartify.it Ltd
Riverbank Close, Deeping St James, Peterborough, PE6 8TL, United Kingdom

(72) Inventor(s):
Paul Martin

(74) Agent and/or Address for Service:
CMS Cameron McKenna Nabarro Olswang LLP, Cannon Place, 78 Cannon Street, London, EC4N 6AF, United Kingdom

(56) Documents Cited:
EP 2990924 A1; US 20160012154 A1; US 20130080444 A1; US 20100325564 A1; US 20080022562 A1; US 5461708 A1; US 20150135113 A1; US 20110115814 A1; US 20080115049 A1; www.virtualitics.com, accessed 27/04/17, first available 02/02/17 (via https://web.archive.org/web/20170202094235/www.virtualitics.com)

(58) Field of Search:
INT CL G06F, G06T
Other: WPI, EPODOC, INTERNET

(54) Title of the Invention: Data interaction device
Abstract Title: A data interaction device for facilitating user interaction with data in a virtual environment
[Drawings: Figures 1 to 23. At least one drawing originally filed was informal and the print reproduced here is taken from a later filed formal copy.]
DATA INTERACTION DEVICE
TECHNICAL FIELD
[01] This disclosure relates to a data interaction device, a data interaction system and a method of facilitating user-interaction with data in a virtual environment.
BACKGROUND
[02] Typically, a conventional system for allowing a user to interact with data generates a range of charts from an input data set and displays these charts to the user. The user is then required to select which chart they wish to see in greater detail.
[03] This conventional approach may be appropriate if there are only one or two fields (or variables) in the data, or if there is only a small number of possible charts to be generated. However, this approach becomes problematic as the number of fields grows and as the number of different possible charts increases. Generating each different possible chart requires processing capacity for each chart to be generated. Therefore, as the number of fields and possible charts grows, the data processing burden grows as well. In addition, presenting an increased number of possible charts to a user occupies an increased amount of display space within the display area.
[04] To provide an illustrative example of this technical problem, if a data set comprises ten different fields, there would be forty-five different possible combinations of pairs of fields from the ten fields. Therefore, there would be at least forty-five different possible scatter charts to display before even considering the different ways of presenting these charts or presenting different types of chart other than scatter charts. Generating forty-five different scatter charts represents a significant processing burden, and displaying forty-five scatter charts will occupy a large amount of available display area.
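As a check on the arithmetic in the example above, the pair count is the number of unordered two-field combinations. A minimal Python sketch, purely illustrative and not part of the disclosed system, reproduces it:

```python
from math import comb

# Number of distinct unordered pairs that can be drawn from ten fields:
# C(10, 2) = 10 * 9 / 2 = 45, so at least forty-five scatter charts would
# need to be generated under the conventional approach.
num_fields = 10
print(comb(num_fields, 2))  # 45
```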
[05] While the conventional approach may be convenient when only a small selection of charts exist, the approach becomes impractical from a data processing and display area perspective as the number of different possible charts increases. In addition, the conventional approach of displaying a large number of possible charts to a user is impractical in a virtual environment, such as a virtual, augmented or mixed reality environment.
[06] In order to review the many different charts, the user may be required to perform either panning or scrolling gestures. These gestures are more complicated for a user to perform in a virtual reality environment than in a conventional 2D display where the user can operate a mouse in relation to a 2D display for pointing and clicking.
[07] US 9,202,297 describes a process where previews of possible data visualisations are displayed to a user. This process requires a representation of each possible data visualisation to be generated. This process may waste processing effort particularly where some of the data visualisations are not relevant to the user. In the situation in which there are too many available visualisations, the display can present a scrolling representation of the available visualisations. This process may be unsuitable for a system with limited graphics processing capabilities and may take a long time to scroll through all of the available options.
[08] Therefore, it would be desirable to provide a system for facilitating user-interaction with data in a virtual environment which minimises data processing requirements and makes efficient use of the display area within a virtual environment.
[09] In addition, it would be desirable to provide a system for facilitating user-interaction with data, which is more appropriate for implementation within a virtual, augmented or mixed reality environment and increases the speed at which data is visualised and manipulated. Furthermore, it would be desirable to increase the speed at which further computational analysis can be requested.
SUMMARY
[010] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
[011] According to an aspect of the invention there is provided a data interaction device for facilitating user-interaction with data in a virtual environment comprising: a visualisation module arranged to display, in the virtual environment, a plurality of field elements to a user, each field element indicative of one of a plurality of fields in the data; and arranged to simultaneously display each field element in association with a plurality of visualisation elements, each visualisation element responsive to a user interaction to indicate a visualisation scheme against which to plot data; a selection module, responsive to user input, arranged to select a first visualisation element associated with a first field element based on at least one user interaction with the elements, to identify a first field and a first visualisation scheme; and a display module arranged to display, in the virtual environment, at least a part of the data by plotting data of the first field based on the first visualisation scheme within a data display area.
[012] According to another aspect of the invention there is provided a method of facilitating user-interaction with data in a virtual environment, the method comprising: displaying in the virtual environment a plurality of field elements to a user, each field element indicative of one of a plurality of fields in the data; simultaneously displaying each field element in association with a plurality of visualisation elements, each visualisation element responsive to a user interaction to indicate a visualisation scheme against which to plot data; selecting a first visualisation element associated with a first field element based on at least one user interaction with the elements, to identify a first field and a first visualisation scheme; and displaying, in the virtual environment, at least a part of the data by plotting data of the first field based on the first visualisation scheme within a data display area.
[013] According to another aspect of the invention there is provided a computer program comprising code portions which when loaded and run on a computer cause the computer to execute a method as described herein.
[014] According to another aspect of the invention there is provided a data interaction system comprising: a user interface device arranged to receive inputs from a user; a display device arranged to display a virtual environment for facilitating user interactions with data; and a visualisation module arranged to display in the virtual environment a plurality of field elements to a user, each field element indicative of one of a plurality of fields in the data; and arranged to simultaneously display each field element in association with a plurality of visualisation elements, each visualisation element responsive to a user interaction to indicate a visualisation scheme against which to plot data; a selection module, responsive to user inputs, arranged to select a first visualisation element associated with a first field element based on at least one user interaction with the elements, to identify a first field and a first visualisation scheme; and a display module arranged to display, in the virtual environment, at least a part of the data by plotting data of the first field based on the first visualisation scheme within a data display area.
[015] In another example, there is provided a data interaction device according to any one of the preceding claims, the device further comprising: a hand tracking device for tracking hand gestures of a user in three dimensions; and a gesture detection module arranged to detect a user interaction by identifying a hand gesture.
[016] In another example, there is provided a method comprising tracking hand gestures of a user in three dimensions using a hand tracking device; and detecting a user interaction by identifying a hand gesture.
BRIEF DESCRIPTION OF THE DRAWINGS
[017] Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:
[018] Figure 1 illustrates a schematic representation of the general architecture of a data interaction system;
[019] Figure 2 illustrates an example of a virtual environment generated by the data interaction system;
[020] Figure 3 illustrates a high-level flow diagram of a method for facilitating user interaction with data;
[021] Figure 4 illustrates a compact display for generating data visualisations;
[022] Figures 5A-C illustrate a set of visualisation schemes;
[023] Figures 6A-C illustrate another set of visualisation schemes;
[024] Figure 7 illustrates another visualisation scheme;
[025] Figures 8A-D illustrate another set of visualisation schemes;
[026] Figure 9 illustrates the user performing a translation gesture;
[027] Figure 10 illustrates the user performing a panning gesture;
[028] Figure 11 illustrates the user performing a domain changing gesture;
[029] Figure 12 illustrates the user performing a resizing gesture;
[030] Figure 13 illustrates the user performing a slicing gesture in the x dimension;
[031] Figure 14 illustrates the user performing another slicing gesture in the z dimension;
[032] Figure 15 illustrates the user performing a pointing gesture;
[033] Figures 16A-B illustrate the user performing an x-axis adjustment gesture;
[034] Figures 17A-B illustrate the user performing a z-axis adjustment gesture;
[035] Figure 18 illustrates the user performing a decomposition gesture;
[036] Figures 19A-C illustrate the user performing a pattern recognition gesture;
[037] Figures 20A-B illustrate the user performing a menu display gesture;
[038] Figures 21A-B illustrate the user performing a detailed menu display gesture;
[039] Figure 22 illustrates examples of changing the density of data displayed in the virtual environment; and
[040] Figure 23 illustrates an example of the virtual environment in response to the decomposition gesture.
DETAILED DESCRIPTION
[041] Described below is an arrangement in which field elements are displayed in association with visualisation elements within a virtual environment. Each field element can indicate a field in the data set and each visualisation element can indicate a visualisation scheme against which to plot data in response to a user interaction. This provides a compact display menu from which the user can generate a vast number of different possible charts. This compact display menu avoids the processing burden of generating all of the different possible charts to display to a user for selection, and minimises the amount of the display area that is required in order to display all of the different possible charts.
[042] A visualisation scheme can be identified based on the selection of a visualisation element and a field be identified based on the association of the selected visualisation element with its respective field element. Thus, the compact display menu may require, at a minimum, only a single user interaction with the elements displayed.
[043] For example, the user may wish to display a first field in accordance with a first visualisation scheme. In order to do this, all that is required is for the user to perform a single gesture to select a first visualisation element. Since the first visualisation element is associated with the first field, it is possible for the method to learn from this context that the user desires to plot a display based on data of the first field. In addition, the system learns from the user's selection of the first visualisation element that the data of the first field should be plotted based on a first visualisation scheme indicated by the first visualisation element.
[044] The virtual environment may be an augmented, mixed or virtual reality environment or a two-dimensional representation of the augmented, mixed or virtual reality environment. In addition to the compact display menu being able to avoid burdensome processing and excessive display area occupation, it is able to achieve this without using conventional mechanisms that require panning and scrolling. Panning and scrolling within an augmented or virtual reality environment can sometimes lead to a loss in situational or contextual awareness. Thus, the compact display menu provides a system for facilitating user-interaction with data that is more appropriate for use in an augmented or virtual reality environment, since only a single tapping gesture may be required instead of the user being required to scroll or pan through a multitude of options.
[045] In the arrangement, each field element may be responsive to a user interaction to select one of a plurality of fields in the data. For instance, the user may wish to display the first field in accordance with the first visualisation scheme; however, the user may also wish to see data of a second field in greater detail, without specifying exactly how this data is to be displayed. In order to do this, all that is required is for the user to perform two gestures: one to select the first visualisation element and another to select the second field element. Selecting the first visualisation element indicates that the data of the first field should be plotted based on a first visualisation scheme indicated by the first visualisation element, as explained previously. In addition, selecting the second field element indicates that the second field should be displayed also. However, in this case the visualisation scheme for the second field is determined for the user automatically depending on a characteristic of the second field. For example, suppose the second field relates to a CO2 level monitored over time. Thus, a characteristic of the second field is that it represents continuous data. In this case, the visualisation scheme that is determined is that the CO2 level should be plotted as a line chart against a horizontal axis.
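To make the automatic fallback concrete, the following is a minimal Python sketch of how a default visualisation scheme might be chosen from a field characteristic. The function name, category labels and returned scheme names are assumptions made for illustration, not the selection module's actual logic.

```python
def default_scheme(field_kind: str) -> str:
    """Pick a fallback visualisation scheme for a field the user selected
    without specifying how it should be displayed (illustrative only)."""
    if field_kind == "continuous":      # e.g. CO2 level sampled over time
        return "line_chart_vs_time"
    if field_kind == "categorical":     # e.g. sensor location
        return "bar_chart_of_counts"
    if field_kind == "geographic":      # e.g. postcodes or coordinates
        return "map"
    return "table"                      # safe default for anything else

print(default_scheme("continuous"))  # line_chart_vs_time
```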
[046] Figure 1 is a schematic representation of the general architecture of a data interaction system 1. The data interaction system 1 comprises a computing device 3, or data interaction device, with a display module 5 which is arranged to generate a virtual environment for displaying to a user. The system 1 comprises a virtual-reality (VR) headset 7 or a display device 9 for displaying the virtual environment to the user.
[047] An augmented reality (AR) headset, or any other type of device for displaying an immersive virtual environment within which the user can interact with data, may be provided instead of the VR headset 7. The display device 9 may be, for example, a monitor for displaying three-dimensional (3D) images or, indeed, a monitor for displaying two-dimensional (2D) images.
[048] The VR headset 7 and/or the display device 9 may be connected to the display module 5 of the computing device 3 via a device interface 11. This connection may be a wired or a wireless connection, for instance, the VR headset 7 may be connected to the computing device 3 via Bluetooth® or Wi-Fi®.
[049] The system 1 further comprises user interface devices 13, 15. In one example, these user interface devices 13, 15 may be hand tracking devices for tracking the motion and pose of a user's hands, such as the LEAP MOTION® controller. However, it will be appreciated that any other suitable type of user interface device may be used. For example, a touch-screen device may be used instead of or in combination with the hand tracking device. In another example, a camera is used for tracking the motion and pose of a user's hands.
[050] The user interface devices 13, 15 are connected to the computing device 3 via the device interface 11. In addition, the user interface devices 13, 15 are connected to a gesture detection module 17 at the computing device, which receives inputs from the user interface devices 13, 15 in order to detect user gestures.
[051] Referring to Figure 2, the display module 5 at the computing device 3 generates a display of a virtual environment 19, which is displayed to the user via the VR/AR headset 7 or via the display device 9. In the example where the VR/AR headset 7 is provided, the virtual environment 19 is displayed as an immersive 3D environment. However, in the example where the display device 9 is provided, the virtual environment 19 may be displayed as a two-dimensional representation of the immersive 3D environment.
[052] The display module 5 at the computing device 3 is arranged to present display areas 20, or slates, within the virtual environment. The display areas 20 are areas within the virtual environment upon which data may be displayed, for instance in the form of charts.
[053] The user interface device 13, 15 tracks the motion of the user's hands and displays an avatar of the user's hands 21, 23 within the virtual environment 19. Thus, the user is able to perform gestures with their hands in order to interact with the virtual environment 19.
[054] Referring to Figure 3, there is provided a method 300 of facilitating user interaction with data within the virtual environment 19 using the data interaction system 1.
[055] In STEP 301, the computing device 3 receives data. In one example, the data set is pre-stored at a storage device 25 at the computing device 3. In another example, the data is communicated or transmitted to the computing device 3 from a server 27 or from another suitable input device 29. Then, the data may be received at a communication interface 31 at the computing device 3 and stored at the storage device 25.
[056] The data received in STEP 301 may be structured data or unstructured data. Data points within structured data may be organised to be explicitly associated with a number of different fields or variables. For instance, the structured data may be organised into a table of fields and values linked with each field. On the other hand, data points within unstructured data may not be organised at all.
[057] In STEP 303, the data received in STEP 301 is received at a data processing module 33 at the computing device 3. In this step, the data processing module 33 analyses the data to determine the fields or the types of fields directly or indirectly represented in or by the data. Data fields can be identified at different levels based on data processing. In one example, the data fields are identified directly based on the number of lines and the occurrences of a particular character. In another example, data fields are identified indirectly based on the occurrences or range of character blocks/words and the number of blocks. In another example, data fields are identified indirectly by a range of dates/times or occurrences of phrases, states, categories and their associated values. In the following example, the data comprises a data set comprising CO2 measurements obtained from sensors located in various places in the United Kingdom at different dates and times.
[058] In the example where the data is structured, the data set may comprise a table of three fields: date/time, CO2 level and sensor location. On the other hand, the data may be less well structured, for instance the data may have a schema but may not be organised into a table and data fields may need to be determined indirectly as explained above. In this case, the data may comprise values associated with the fields without specific reference to the fields themselves. For this type of data set, the data processing module 33 may determine the fields from the data itself. For instance, the sensor location values may be represented as post codes. Therefore, the data processing module 33 can determine that the postcode values relate to a location field. The field displayed to the user may be post code. The data processing module 33 may translate data from one type to another. For instance, the data processing module 33 may translate post codes into geographic co-ordinates i.e. latitude and longitude.
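As an illustration of identifying fields indirectly from raw values, the sketch below guesses a field type from its values and recognises UK-style post codes as a location field. The regular expression, function name and type labels are illustrative assumptions; a real system would likely use more robust detection and an external lookup to translate post codes into coordinates.

```python
import re

# Rough UK postcode pattern, used only to illustrate how a column of raw
# string values might be recognised as a "location" field.
POSTCODE_RE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$", re.I)

def infer_field_type(values: list[str]) -> str:
    """Guess the type of a data field from its raw values (illustrative)."""
    if all(POSTCODE_RE.match(v.strip()) for v in values):
        return "location (post code)"
    if all(v.replace(".", "", 1).isdigit() for v in values):
        return "numeric"
    return "text"

print(infer_field_type(["PE6 8TL", "EC4N 6AF"]))  # location (post code)
print(infer_field_type(["401.2", "399.8"]))       # numeric
```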
[059] In STEP 305, a visualisation module 35 operates with the display module 5 to generate a display of a compact display menu from which it is possible for the user to generate a variety of different possible charts, or other data visualisations, that can be generated from the received data.
[060] Referring to Figure 4, the visualisation module 35 generates a compact display 37, which may occupy only a small area within the virtual environment 19. The compact display 37 comprises a plurality of field elements 39a-c. Each one of the field elements 39a-c indicates one of a plurality of fields in the data. In this example, FIELD A indicates the date/time field, FIELD B indicates the CO2 level field, and FIELD C indicates the sensor location field. Each field element 39a-c, in the compact display 37 is responsive to a user interaction to select a field indicated by that field element and is responsive to a user interaction to indicate a visualisation scheme.
[061] The compact display 37 comprises a plurality of visualisation elements including a set of horizontal visualisation elements 41a-c, a set of vertical visualisation elements 42a-c and a set of spatial visualisation elements 43a-c.
[062] Each one of the visualisation elements 41a-c, 42a-c, 43a-c is associated with one of the field elements 39a-c. Specifically, a first set of visualisation elements 41a-43a is associated with the FIELD A field element 39a, a second set of visualisation elements 41b-43b is associated with the FIELD B field element 39b, and a third set of visualisation elements 41c-43c is associated with the FIELD C field element 39c. In the example shown in Figure 4, the visualisation elements are associated with a respective field element by being shown alongside the field element.
[063] Each visualisation element 41a-c, 42a-c, 43a-c is responsive to a user interaction to indicate a visualisation scheme against which to plot data. Each vertical visualisation element 42a-c responds to a user interaction to indicate that data of a particular field should be displayed or plotted in reference to a vertical dimension, such as a vertical or y axis. Each horizontal visualisation element 41a-c responds to a user interaction to indicate that data of a particular field should be displayed or plotted in reference to a horizontal dimension, such as a horizontal or x axis. Each spatial visualisation element 43a-c responds to a user interaction to indicate that data of a particular field should be displayed or plotted in reference to a coordinate space of at least two dimensions, such as a 3D Euclidean space.
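The compact display described above can be pictured as a small data structure: one row per field, each row carrying the three visualisation elements. The sketch below is a hypothetical representation, with names chosen for illustration rather than taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class FieldElement:
    """One row of the compact display: a field element plus its three
    associated visualisation elements (horizontal, vertical, spatial)."""
    name: str
    visualisation_elements: tuple = ("horizontal", "vertical", "spatial")

# The whole menu for the CO2 example is just three rows, however many
# charts could ultimately be generated from it.
compact_display = [
    FieldElement("date/time"),        # FIELD A
    FieldElement("CO2 level"),        # FIELD B
    FieldElement("sensor location"),  # FIELD C
]
for row in compact_display:
    print(row.name, row.visualisation_elements)
```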
[064] In STEP 307 fields and visualisation schemes are identified based on user interactions with the compact display 37. Then, in STEP 309 the display module 5 generates a display of at least a part of the data in the virtual environment 19. The display module 5 displays the data by plotting data in a data display area 20 based on the visualisation schemes and fields identified by the user.
[065] In one example, the user selects the horizontal visualisation element 41a associated with the FIELD A element 39a and the vertical visualisation element 42b associated with the FIELD B element 39b by pointing a finger at these elements. In the following examples, the user may push their finger towards an element in order to select the desired element. The gesture detection module 17 recognises these gestures to inform a selection module 45 at the computing device 3 that the user has selected these visualisation elements 41a, 42b.
[066] Since the horizontal visualisation element 41a is associated with the FIELD A element 39a, the selection module 45 determines that the user wishes for field A to be displayed in accordance with a horizontal visualisation scheme. In this case, field A is the date/time field, so the selection module 45 decides that date/time values should be represented horizontally with an appropriate date/time scale. For example, if dates are stored in milliseconds in the system then, since these values are identified as date/time values, rather than displaying a number the scale displays a date value (e.g. 2016-11-10) where the range of the scale is over years. However, if the user changes the scale after creation, then the time values may be displayed on the scale.
[067] Since the vertical visualisation element 42b is associated with the FIELD B element 39b, the selection module 45 determines that the user wishes for field B to be displayed in accordance with a vertical visualisation scheme. In this case, field B is the CO2 level field, so the selection module 45 decides that the CO2 levels should be represented vertically.
[068] Since the selection module 45 has determined that date/time values should be represented horizontally and that the CO2 levels should be represented vertically, the selection module 45 determines that date/time vs CO2 level should be plotted on an x-y graph, with date/time being plotted on the x-axis and the CO2 level being plotted on the y-axis.
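A minimal sketch of how this decision might be expressed in code is shown below. The mapping of selections to an x-y chart specification follows the example above, but the function and key names are illustrative assumptions rather than the patent's API.

```python
def resolve_selection(selections: dict[str, str]) -> dict:
    """Turn user selections (field name -> visualisation element) into a
    simple chart specification (illustrative names only)."""
    spec = {}
    for field_name, element in selections.items():
        if element == "horizontal":
            spec["x_axis"] = field_name
        elif element == "vertical":
            spec["y_axis"] = field_name
        elif element == "spatial":
            spec["space"] = field_name
    return spec

# Selecting the horizontal element of FIELD A and the vertical element of
# FIELD B yields an x-y plot of date/time against CO2 level.
print(resolve_selection({"date/time": "horizontal", "CO2 level": "vertical"}))
# {'x_axis': 'date/time', 'y_axis': 'CO2 level'}
```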
[069] Referring to Figures 5A-C there are a number of possible charts that could be displayed, each in accordance with a different visualisation scheme. Figure 5A shows a continuous line chart of date/time vs CO2 level, Figure 5B shows an irregular line chart of date/time vs CO2 level and Figure 5C shows a scatter plot of date/time vs CO2 level.
[070] In one example, the data processing module 33 determines that the CO2 level readings are continuous and regularly sampled. Therefore, the selection module 45 identifies the visualisation scheme described with reference to Figure 5A, as the most appropriate visualisation scheme for displaying the data.
[071] In another example, the data processing module 33 determines that the CO2 level readings are mostly regularly sampled, but with clear gaps between regular samples. Therefore, the selection module 45 identifies the visualisation scheme described with reference to Figure 5B, as the most appropriate visualisation scheme for displaying the data.
[072] In a further example, the data processing module 33 determines that the CO2 level readings are sampled at irregular intervals. Therefore, the selection module 45 identifies the visualisation scheme described with reference to Figure 5C, as the most appropriate visualisation scheme for displaying the data.
[073] Referring to Figures 5A-C, it can be seen that the visualisation elements are responsive to a user interaction to indicate a plurality of visualisation schemes against which to plot data. In this case, identifying the most appropriate visualisation scheme may be based on a characteristic of one of the fields. In Figures 5A-C, the data is aggregated such that if there are multiple sensor readings at the same time and/or at the same or different location then these multiple sensor readings are aggregated into a single reading. For instance, the median, mean, maximum or minimum of these readings may be determined and displayed, so that the location information is not shown in the charts.
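One plausible way to choose between the schemes of Figures 5A-C is to classify the spacing of the sample times. The sketch below does this with arbitrary thresholds; it is an assumption about how such a classification could work, not the disclosed algorithm.

```python
def classify_sampling(timestamps: list[float], tolerance: float = 0.1) -> str:
    """Classify sample times as regular, regular-with-gaps or irregular,
    mirroring the choice between Figs. 5A, 5B and 5C (thresholds here are
    arbitrary assumptions)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    typical = sorted(gaps)[len(gaps) // 2]          # median gap
    irregular = [g for g in gaps if abs(g - typical) > tolerance * typical]
    if not irregular:
        return "continuous line chart"              # Fig. 5A
    if all(g > typical for g in irregular):
        return "line chart with gaps"               # Fig. 5B
    return "scatter plot"                           # Fig. 5C

print(classify_sampling([0, 1, 2, 3, 4, 5]))        # continuous line chart
print(classify_sampling([0, 1, 2, 6, 7, 8]))        # line chart with gaps
print(classify_sampling([0, 0.4, 1.9, 2.2, 5.0]))   # scatter plot
```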
[074] In this example, when the data is displayed to the user the data display area may be displayed along with a selectable option to provide negative feedback regarding the visualisation scheme selected. The user may select the negative feedback option in order to provide a negative feedback input associated with the first visualisation scheme to a feedback module 46 at the computing device. In response, the selection module 45 may identify a different visualisation scheme based on the negative feedback input. In another example, the feedback mechanism may be invisible to the user. In this case, the feedback module 46 determines a negative feedback input if the user creates a new chart or requests a different representation within a predefined time period. In this case the negative feedback input is implied by user activity. Therefore, an indication may be displayed that there are alternative representations available, hinting to the user to open the compact display menu. After a time period of the user not unlocking the compact display menu the indication will automatically hide, although the option to change visualisation will still be available later.
[075] Negative feedback inputs from users of different computing devices 3 can be stored in a database at the server 27. The selection module 45 can use the information in this database to make more accurate identifications of the most appropriate visualisation scheme. In this way, a visualisation scheme can be identified based on a characteristic of the first field and negative feedback inputs.
[076] Each one of a plurality of visualisation schemes may be associated with an initial score. Each time a negative feedback input is received from the user in connection with a visualisation scheme, the score associated with that visualisation scheme is reduced. Then, when the selection module 45 identifies a visualisation scheme, this identification may be based on the score associated with each visualisation scheme, such that visualisation schemes with higher scores are favoured over those with lower scores.
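The scoring mechanism described above might be sketched as follows; the initial score, penalty size and data structure are illustrative assumptions.

```python
# Every candidate scheme starts with the same score (assumed value).
scores = {"continuous_line": 1.0, "broken_line": 1.0, "scatter": 1.0}

def record_negative_feedback(scheme: str, penalty: float = 0.2) -> None:
    """Reduce the score of a scheme the user rejected, either explicitly or
    by quickly requesting a different representation."""
    scores[scheme] = max(0.0, scores[scheme] - penalty)

def pick_scheme(candidates: list[str]) -> str:
    """Prefer candidate schemes with the highest remaining score."""
    return max(candidates, key=lambda s: scores[s])

record_negative_feedback("continuous_line")
print(pick_scheme(["continuous_line", "broken_line"]))  # broken_line
```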
[077] In another example of the user interacting with the compact display 37, the user selects the horizontal visualisation element 41a associated with the FIELD A element 39a and the vertical visualisation element 42b associated with the FIELD B element 39b by pointing a finger at these elements, as described in detail above. However, in this example the user also selects the FIELD C element 39c itself.
[078] The gesture detection module 17 recognises these gestures to inform a selection module 45 at the computing device 3 that the user has selected the visualisation elements 41a, 42b, 39c.
[079] As before, the selection module 45 determines that date/time values should be represented horizontally, and that the CO2 levels should be represented vertically. In addition, the selection module 45 determines that the data of FIELD C should be displayed as well, so that it is not necessary to aggregate the data in this field as in Figures 5A-C. FIELD C relates to sensor location data. However, in this example the user has not provided an indication of a visualisation scheme for the sensor location data.
[080] As explained above, the selection module 45 is able to determine that date/time vs CO2 level should be plotted on an x-y graph, with date/time being plotted against the x-axis and the CO2 level being plotted against the y-axis. However, the selection module 45 also determines that, since the sensor locations are discrete data values, the sensor locations should be presented through stylistic attributes, such as colour, line colour, line weight and line pattern, in the x-y graph.
[081] Referring to Figures 6A-C there are a number of possible charts that could be displayed, each in accordance with a different visualisation scheme. These charts are similar to those described with reference to Figures 5A-C. However in Figures 6A and 6B, data corresponding with each sensor location is independently represented with a different style of line. In Figure 6C, data corresponding with each sensor location is independently represented using a different style of marker. Instead of different styles of line or marker, different colours may be used to represent the unique series of sensor locations.
[082] In a similar manner to that described above, the data processing module 33 determines that the CO2 level readings are continuous and regularly sampled. Therefore, the selection module 45 identifies the visualisation scheme described with reference to Figure 6A, as the most appropriate visualisation scheme for displaying the data.
[083] The data processing module 33 may determine that the CO2 level readings are mostly regularly sampled, but with clear gaps between regular samples. Therefore, the selection module 45 identifies the visualisation scheme described with reference to Figure 6B, as the most appropriate visualisation scheme for displaying the data.
[084] The data processing module 33 may determine that the CO2 level readings are sampled at irregular intervals. Therefore, the selection module 45 identifies the visualisation scheme described with reference to Figure 6C, as the most appropriate visualisation scheme for displaying the data.
[085] In another example of the user interacting with the compact display 37, the user selects the horizontal visualisation element 41a associated with the FIELD A element 39a and the vertical visualisation element 42c associated with the FIELD C element 39c by pointing a finger at these elements, as described in detail above. In this example the user also selects the FIELD B element 39b itself.
[086] The gesture detection module 17 recognises these gestures to inform a selection module 45 at the computing device 3 that the user has selected the visualisation elements 41a, 42c, 39b. In response, the selection module 45 determines that date/time values should be represented horizontally, and that the sensor locations should be represented vertically as categories. In addition, the selection module 45 determines that the data of FIELD B should be displayed as well. In this case, FIELD B relates to CO2 levels. However, in this example the user has not provided an indication of a visualisation scheme for the CO2 levels.
[087] In this example, the selection module 45 determines that date/time vs sensor location should be plotted on an x-y graph, with date/time being plotted against the x-axis and sensor location being plotted on a categorical y-axis. Since there are only three sensor locations and these locations are discrete, the selection module 45 determines that each sensor location is to be displayed as a bar. The selection module 45 also determines that the CO2 level data can be represented by a colour map on each bar in the chart. The resulting representation of the data is illustrated in Figure 7.
[088] In another example of the user interacting with the compact display 37, the user selects only the horizontal visualisation element 41c associated with the FIELD C element 39c. In response, the selection module 45 determines that sensor location data should be represented horizontally. However, in this example the user has not indicated any further instruction for the visualisation scheme for the sensor locations.
[089] In this example, the selection module 45 determines that the sensor locations should be plotted on a bar chart, with sensor location being plotted against the x-axis and the number of readings being plotted against the y-axis. The resulting representation of the data is illustrated in Figure 8A.
[090] In another example of the user interacting with the compact display 37, the user selects only the FIELD C element 39c itself. In response, the selection module 45 determines that sensor location data should be represented horizontally. However, in this example, the user has not indicated any visualisation scheme for the sensor locations whatsoever besides indicating that sensor location data should be represented.
[091] In this example, the selection module 45 determines that the sensor locations should be plotted on a stacked bar chart with a single bar. The number of readings for each location as a percentage of the total number of readings is displayed as a proportion of the bar. The resulting representation of the data is illustrated in Figure 8B.
[092] In a further example of the user interacting with the compact display 37, the user selects only the spatial visualisation element 43c associated with the FIELD C element 39c. In response, the selection module 45 determines that sensor location data should be represented spatially.
[093] In this example, since the sensor locations represent geographical data, the selection module 45 determines that the sensor locations should be displayed on a map. The resulting representation of the data is illustrated in Figure 8C.
[094] In a further example of the user interacting with the compact display 37, the user selects the spatial visualisation element 43c associated with the FIELD C element 39c and the user also selects the field element FIELD C 39c. In response, the selection module 45 determines that sensor location data should be represented spatially, but with additional information relating to the number of sensor readings aggregated across the temporal domain displayed also.
[095] In this example, since the sensor locations represent geographical data, the selection module 45 determines that the sensor locations should be displayed on a map. In addition, the number of readings for each location, as a percentage of the total number of readings, is displayed as a pie chart at each sensor location on the map. The resulting representation of the data is illustrated in Figure 8D.
[096] Each of the field elements 39a-c and the visualisation elements 41a-c, 42a-c, 43a-c may respond to a user gesture to output a code indicative of the user selection. These codes provide a mechanism for capturing, storing, retrieving and communicating user inputs. Each of the field elements 39a-c generates the Greek small letter sigma (σ) as a symbol in response to a user input. This symbol can be represented in Unicode by U+03C3. Each of the horizontal visualisation elements 41a-c generates the left right arrow symbol (↔) in response to a user input. This symbol can be represented in Unicode by U+2194. Each of the vertical visualisation elements 42a-c generates the up down arrow symbol (↕) in response to a user input. This symbol can be represented in Unicode by U+2195. Each of the spatial visualisation elements 43a-c generates the globe with meridians symbol (🌐) in response to a user input. This symbol can be represented in Unicode by U+1F310.
[097] In this way, the compact display menu 37 is able to communicate using a simple language that uses Unicode symbols to convey user inputs. This may be particularly useful if processing of user gestures is carried out at the server 27 rather than at the computing device 3. In this example, the visualisation module 35, the display module 5, the gesture detection module 17, the selection module 45, the data processing module 33, the feedback module 46 and the storage device 25 are located on the server 27. In this example, the device interface 11 operates with the communication interface 31 to transmit user inputs to the server 27. In this example, the functionality described with reference to Figure 3 is carried out at the server 27 rather than at the computing device 3.
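A small sketch of how selections could be encoded with the Unicode symbols listed above is given below; the exact message format (field label followed by symbol) is an assumption made for illustration.

```python
# Unicode symbols used to encode user selections, as listed above.
SYMBOLS = {
    "field":      "\u03C3",     # σ  Greek small letter sigma
    "horizontal": "\u2194",     # ↔  left right arrow
    "vertical":   "\u2195",     # ↕  up down arrow
    "spatial":    "\U0001F310", # 🌐 globe with meridians
}

def encode_selection(field_label: str, element: str) -> str:
    """Encode a single selection as '<field label><symbol>' so it can be
    stored or sent to a server in a compact, text-only form (assumed format)."""
    return f"{field_label}{SYMBOLS[element]}"

print(encode_selection("A", "horizontal"))  # A↔
print(encode_selection("B", "vertical"))    # B↕
```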
[098] In STEP 311 the gesture detection module 17 is arranged to detect a number of pre-defined user gestures. Each of these gestures causes a different instruction to be performed in relation to the display area 20 generated in STEP 309. Each of these gestures is described in detail below with reference to Figures 9-21. Each of these gestures is described in reference to a right hand 50 and a left hand 52. However, it will be appreciated that these gestures could be performed and detected with the right hand 50 performing the gestures of the left hand 52 and vice versa.
[099] Referring to Figure 9, the user may perform a translation gesture by forming a fist with one of their hands 50. The translation gesture is input via the user interface device 13, 15 and, where a virtual reality environment is displayed, an avatar version of the user's fist is replicated by the display module 5 within the virtual environment. Detection of the translation gesture provides an instruction to move a selected object within the virtual environment 19. An object is selected by the user forming a replicated version of their fist in proximity to the object, such as by forming their fist whilst their hand or associated avatar is in contact with the object. For instance, the user may form the fist a short distance in front of a display area 20. This may be more appropriate for the virtual environment which is displayed on a 2D display, since depth is more difficult for the user to perceive in this environment. In another example, an object may be selected by the user forming a fist with their hand in proximity to the object, only if the user is not interacting with another object at that time. For instance, the user may reach through a display area to select a different display area behind it. Then, the user may form a fist to select the display area behind in order to activate translation. If the user pulls the display area through the display area in front, then the display area in front will not be selected.
In this example, the object closest to the fist when the fist is formed is chosen for translation.
[0100] In another example, the user forms the fist by grabbing a side of the display area 20 in the virtual environment. This may be more appropriate for the 3D immersive version of the virtual environment. In the example where an augmented or mixed reality environment is displayed, it is not necessary to display an avatar of the user's hand 50, except for calibration purposes; instead, the user is able to see their own hand performing the gesture.
[0101] Once the gesture detection module 17 has detected the translation gesture, movement of the fist within the virtual environment may be detected also. In this case, the gesture detection module 17 and the display module 5 cause the selected data display area 20 to be responsive to movement of the fist, so that the display area 20 follows the movement of the fist. Once the display area 20 is in the desired location, the user may release their hand 50 from forming a fist in order to deactivate the translation in respect of the virtual object. Thus, in the following description reference to the user's hands may refer to their actual hands in an augmented or mixed reality environment or a virtual representation of their hands in a virtual reality environment. In the example where the user actively grabs the display, the mapping of motion should be 1:1; for instance, movement of the fist by 10 cm in the physical world should move the display area 10 cm in the virtual environment. However, in the example where the user does not grab the slate directly, the translation gesture movement can be amplified for improved user experience and speed. For instance, 10 cm of physical movement could cause 20 cm of movement of the display area in the virtual environment. In addition, there are three selection modes controlled via a menu with selection options, where the options comprise a “replace mode”, an “additive mode” and a “subtractive mode”. The default mode is the replace mode where touching a display area will deselect all other display areas but select the touched display area. In the additive mode, touching a display area will select that display area as well as maintain previously selected display areas. In the subtractive mode, touching a display area will deselect it from a set of previously selected display areas. The additive and subtractive modes are particularly useful when either performing composition/decomposition or performing the same operation on multiple slates simultaneously.
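The three selection modes lend themselves to a simple set-based sketch; the function below is illustrative only and uses assumed names rather than the patent's API.

```python
def apply_selection(mode: str, selected: set[str], touched: str) -> set[str]:
    """Update the set of selected display areas when one is touched,
    under the three selection modes described above."""
    if mode == "replace":       # default: only the touched area stays selected
        return {touched}
    if mode == "additive":      # keep previous selections, add the new one
        return selected | {touched}
    if mode == "subtractive":   # remove the touched area from the selection
        return selected - {touched}
    raise ValueError(f"unknown selection mode: {mode}")

sel = {"slate1", "slate2"}
print(apply_selection("replace", sel, "slate3"))     # only slate3 selected
print(apply_selection("additive", sel, "slate3"))    # all three slates selected
print(apply_selection("subtractive", sel, "slate2")) # only slate1 selected
```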
[0102] The user may perform a data density gesture by moving their hand rapidly in a particular direction, or in other words by performing a swiping gesture. The user may perform a swiping gesture in a vertical direction to rotate the display area upwards or downwards. The user may perform a swiping gesture in a horizontal direction to rotate the display area left or right. The data display area is responsive to the data density gesture to change the data density of the data plotted.
[0103] Referring to Figure 22, initially a line chart 60 is displayed to the user in the display area 20. The user may perform a data density-reducing gesture by interacting with the data display area 20. In this case the user rotates the display area downwards, and the data display area is responsive to the data density-reducing gesture to reduce the data density of the data plotted. For example, the data display area may disappear and a new display area may appear in the same location, facing the user, with the updated chart displayed. In this case, the line chart changes to a display of a table 62 showing the raw data to the user. The table 62 illustrates the data less densely than in the line chart 60. Thus, the user has reduced the density of the data displayed by performing the data density-reducing gesture.
[0104] The user may perform another data density gesture by performing a data density-increasing gesture. In this case, the user rotates the display area 20 from left to right. The display area 20 is responsive to the data density-increasing gesture to increase the data density of the data plotted. In this case, the line chart 60 changes to a display of a box plot 61 of the same data displayed in the line chart 60. The box plot 61 illustrates the data more densely than in the line chart 60. Thus, the user has increased the density of the data displayed by performing the data density-increasing gesture.
[0105] The user may decrease the density of the data displayed by the box plot 61 by rotating the display area 20 downwards. In response, the display area 20 displays the minimum, maximum, mean, median, first quartile and third quartile data that was displayed in the box plot 61. Outliers may be presented in the box plot 61 depending on the type of box plot visualised. For instance, if the user was shown a box plot with outliers then outliers will be shown in the table. However, if the user decided to change the boxplot style to ignore outliers then outliers would also not be shown on the table view.
[0106] The user may increase the density of the data displayed by the line chart 60 in another way by rotating the display of the line chart upwards to remove field B as a variable from the line chart 60. This displays a different line chart 64, where field B is simply replaced by the number of aggregated samples of field A.
[0107] The user may decrease the density of the data displayed in the chart 64 by rotating the display of the chart 64 from right to left. This displays a table 65 of the raw data that was displayed in the chart 64.
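One way to picture the data density gestures is as movement along an ordered list of representations. The sketch below flattens the behaviour described above into a single least-to-most-dense ordering, which is a simplifying assumption (the patent distinguishes vertical and horizontal rotations) made purely for illustration.

```python
# Representations ordered from least to most dense; the ordering itself is an
# assumed simplification of the behaviour described in the text.
DENSITY_LEVELS = ["table", "line chart", "box plot", "aggregated count chart"]

def change_density(current: str, gesture: str) -> str:
    """Move to a denser or less dense representation in response to a
    density-increasing or density-reducing gesture."""
    i = DENSITY_LEVELS.index(current)
    if gesture == "increase" and i < len(DENSITY_LEVELS) - 1:
        return DENSITY_LEVELS[i + 1]
    if gesture == "reduce" and i > 0:
        return DENSITY_LEVELS[i - 1]
    return current  # already at the end of the scale

print(change_density("line chart", "reduce"))    # table
print(change_density("line chart", "increase"))  # box plot
```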
[0108] Referring to Figure 10, the user may perform a panning gesture by opening one of their hands and facing the palm of their hand away from themselves. This panning gesture is input via user interface device 13, 15 and detected by the gesture detection module 17.
[0109] Detection of the panning gesture provides an instruction to move or scroll through data displayed within the data display area 20 rather than moving the data display area itself. In this case the data within the display area is responsive to movement of the palm, so that the data within the display moves and scrolls to follow the palm. In one example, the user touches the display area with their palm. In this case, the mapping of motion should be 1:1; for instance, movement of the palm by 10 cm in the physical world should move the data in the display area 10 cm in the virtual environment. However, in the example where the user does not touch the data in the display area directly, the panning gesture movement can be amplified for improved user experience/speed. A scale setting for mapping motion may be set by the user. For instance, a scale setting of 1:0.5 may be set where more precision is required. On the other hand, a scale setting of 1:2 may be set where rapid navigation is required. Once the data is in the desired location, the user may turn their palm away from the data in order to cease moving or scrolling.
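A sketch of the motion mapping for panning, with the 1:1 touch case and the user-configurable scale setting, might look like this; the function name and default scale are assumptions.

```python
def pan_offset(palm_movement_cm: float, touching: bool, scale: float = 2.0) -> float:
    """Map physical palm movement to movement of the data in the display
    area: 1:1 when the palm touches the display area, otherwise scaled by a
    user-configurable setting (amplified or attenuated)."""
    return palm_movement_cm if touching else palm_movement_cm * scale

print(pan_offset(10.0, touching=True))     # 10.0 (1:1 mapping)
print(pan_offset(10.0, touching=False))    # 20.0 (1:2 setting, rapid navigation)
print(pan_offset(10.0, False, scale=0.5))  # 5.0  (1:0.5 setting, more precision)
```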
[0110] Referring to Figure 11, the user may perform a domain changing gesture by forming a fist with one of their hands and opening another one of their hands to face the palm of that hand away from themselves. The domain changing gesture may be described as the translation gesture modified with the panning gesture. This gesture may be performed by the user in the proximity of the display area 20. This domain changing gesture is input via user interface devices 13, 15 and detected by the gesture detection module 17.
[0111] Detection of the domain changing gesture provides an instruction to the display module 5, to change the domain in which the data is displayed. A number of domain options are presented to the user in response to the domain changing gesture. In one example, the user moves data from the time domain into the Fourier domain using the domain changing gesture. The user may select one of the domain options by moving the fist within the virtual environment towards that domain option. Once the domain option has been selected the data within the display area 20 is re-plotted to be displayed in the selected domain.
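The time-domain to Fourier-domain example could, for instance, be realised with a standard fast Fourier transform. The sketch below assumes uniformly sampled data and uses NumPy's real FFT; this is one possible implementation, not necessarily the one used by the device, and the sample rate is an illustrative parameter.

```python
# Hedged sketch: re-plotting time-domain samples in the Fourier domain.
# Assumes uniformly sampled data; sample_rate_hz is an illustrative parameter.
import numpy as np


def to_fourier_domain(samples: np.ndarray, sample_rate_hz: float):
    """Return (frequencies, magnitudes) for a real-valued time series."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return freqs, np.abs(spectrum)


t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
freqs, mags = to_fourier_domain(signal, sample_rate_hz=256.0)
print(freqs[np.argmax(mags)])  # dominant frequency, approximately 5 Hz
```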
[0112] Referring to Figure 12, the user may perform a resizing gesture by forming a fist with both of their hands. This gesture may be performed in the proximity of a virtual object, such as the display area 20, to change the size of the object within the virtual environment. The user may select the virtual object by touching it before resizing. Once the virtual object has been selected, it is not necessary for the resizing gesture to be performed in proximity to the virtual object; instead, the resizing gesture can be performed anywhere in the virtual environment. This resizing gesture is input via user interface device 13, 15 and detected by the gesture detection module 17.
[0113] Detection of the resizing gesture provides an instruction to the display module 5 to change the size of a selected object. Once the user has made a fist with their hands, the user may move the fists in relation to one another within the virtual environment. The object responds to movement of the fists relative to one another to alter its size. Thus, the size of the object is dependent on the movement of the fists relative to one another. When the resizing gesture is performed in proximity to the object the mapping of motion to resizing should be 1:1. However, when the object is selected and the resizing gesture is performed away from the object then the scaling may be based on user preference, as discussed in reference to the panning gesture. In one example, the user moves their fists towards one another to reduce the size of the object. In another example, the user moves their fists away from one another to increase the size of the object.
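One plausible reading of this behaviour is that the object's size scales with the change in separation between the two fists, damped or amplified by the user's motion-scale preference when the gesture is performed away from the object. The sketch below illustrates that reading only; the function names and units are assumptions.

```python
# Hedged sketch: resizing a virtual object from the relative movement of two fists.
# Assumes size scales with the ratio of current to initial fist separation,
# optionally damped or amplified by a user preference when gesturing away from the object.
import math


def fist_separation(fist_a: tuple[float, float, float], fist_b: tuple[float, float, float]) -> float:
    return math.dist(fist_a, fist_b)


def resized(initial_size: float, initial_sep: float, current_sep: float, scale: float = 1.0) -> float:
    """Return the new object size; scale=1.0 corresponds to 1:1 mapping in proximity."""
    ratio = current_sep / initial_sep
    return initial_size * (1.0 + (ratio - 1.0) * scale)


start = fist_separation((0, 0, 0), (0.30, 0, 0))   # fists 30 cm apart
now = fist_separation((0, 0, 0), (0.45, 0, 0))     # fists moved apart to 45 cm
print(resized(initial_size=1.0, initial_sep=start, current_sep=now))  # object grows to 1.5x
```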
[0114] The user may perform a slicing gesture by directing a palm of one of their hands away from their body, such that the length of the palm is substantially parallel with the x, y, or z axis. Referring to Figure 13, the user may perform a slicing gesture by placing a palm of one of their hands parallel to the sagittal plane of the user's body. The sagittal plane of the user's body is the anatomical plane that divides the user's body into left and right sides. Here the length of the user's palm is parallel with the sagittal plane. This gesture may be performed through a virtual object, such as the display area 20, to identify the value at the given position or, if used in conjunction with the modifier gesture, to select a range across the axis on which to activate a filter. The exact value may be identified by snapping to the nearest scale value to ease usability. Thus, the user may select a portion of data aligned with a chosen axis, or in other words the user can select a slice of the data. Alternatively, if a 3D object is displayed, this will generate a display of a cross-section of the object. This slicing gesture is input via user interface device 13, 15 and detected by the gesture detection module 17.
[0115] Detection of the slicing gesture, if performed on a 3D object, generates a display of a cross-section of the object. The cross-section display represents a cross-section through the object in a plane parallel and in-line with the palm of the hand. Alternatively, if performed on a 2D object such as a line chart, a mapping line is generated displaying the closest snapped x-axis value and the corresponding y-axis value. The user may perform the slicing gesture along with a modifier gesture, which in this case is the palm facing away from the user. This causes an area of the object to be selected, which is defined by the motion of the palm performing the slicing gesture. It may then be possible for this to trigger filtering across the selected area, which may alter the underlying data and cause display updates on related charts.
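For the 2D case, snapping the slice to the nearest plotted x-axis value and reading off the corresponding y value might look like the following sketch; the data and function names are illustrative, and the snapping rule (nearest plotted point) is an assumption consistent with the text.

```python
# Hedged sketch: snapping a slicing gesture to the nearest x-axis value on a 2D chart
# and returning the corresponding y value for the mapping line.
import bisect


def slice_chart(x_values: list[float], y_values: list[float], palm_x: float) -> tuple[float, float]:
    """Snap palm_x to the nearest plotted x value and return (snapped_x, y)."""
    i = bisect.bisect_left(x_values, palm_x)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(x_values)]
    nearest = min(candidates, key=lambda j: abs(x_values[j] - palm_x))
    return x_values[nearest], y_values[nearest]


xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [10.0, 12.0, 9.0, 15.0, 11.0]
print(slice_chart(xs, ys, palm_x=2.3))  # snaps to x=2.0 -> (2.0, 9.0)
```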
[0116] In the example illustrated in Figure 14, the palm is parallel with the coronal plane of the body of the user. The coronal plane of the user's body is the anatomical plane that divides the user's body into front and back sides. Here the length of the user's palm is parallel with the coronal plane. This generates a display of a cross section of the x-y plane. This may occur at the closest z-value to which the scale can snap. In another example (not shown), the user may perform the slicing gesture by placing a palm of one of their hands parallel to the transverse plane of the user's body. The transverse plane of the user's body is the anatomical plane that divides the user's body into head and tail portions. Here the length of the user's palm is parallel with the transverse plane. This generates a display of a cross section of the x-z plane. This may occur at the closest y-value to which the scale can snap.
[0117] Referring to Figure 15, the user may perform a pointing gesture by pointing a finger of one of their hands towards a virtual object in the virtual environment. This gesture may be used to select objects if none are currently selected within the virtual environment either by physically touching or from a distance. If an object is selected then prodding the object at the given location reveals further information at that point. This pointing gesture is input via user interface device 13, 15 and detected by the gesture detection module 17.
[0118] In another example, the user may perform an area selection gesture by pointing a finger of one of their hands towards a virtual object in the virtual environment in conjunction with a modifier gesture, by opening the palm of their other hand away from themselves. The user may then draw an area by moving their pointed finger. Then, in response to the user folding their fingers over the palm of the modifier hand, the points within the area may be selected to allow for filtering operations within the selected area.
[0119] Referring to Figures 16A-B and Figures 17A-B, the user may perform an axis adjustment gesture by pointing a finger of one of their hands towards a first part of an axis of a chart and a finger of the other hand towards a second part of the axis. Alternatively, the user may perform an axis adjustment gesture by pointing a finger on each hand perpendicular to the plane that they wish to adjust. This avoids the user necessarily having to point to a specific part of an axis within the virtual environment. In the example in Figure 16A the user's fingers point to two different parts on the x-axis of the chart. However, in the example in Figure 17A the user's fingers point to two different parts on the z-axis of the chart. The axis adjustment gesture is input via the user interface 13, 15 and detected by the gesture detection module 17.
[0120] Once the user has used two fingers, one on each hand, to initiate the axis adjustment gesture, the user may move one finger in relation to the other finger. The axis is adjusted in response to movement of the first and second fingers. For instance, Figure 16B illustrates the user moving their fingers away from each other in the direction of the x-axis. The effect of the movement depends on the mode of the axis; for example, in a min/max mode such movement causes the display of the chart to zoom in on the x-axis, by presenting a reduced range of values along the x-axis. However, in a step/interval size mode the reverse would occur, as the interval size would be increased. In another example in a min/max mode, the user may move their fingers towards each other in the direction of the x-axis. This causes the display of the chart to zoom out from the x-axis, by presenting an increased range of values along the x-axis.
[0121] Figure 17B illustrates the user moving their fingers away from each other in the direction of the z-axis. This causes the display of the chart to zoom in on the z-axis, by presenting a reduced range of values along the z-axis. In another example, the user may move their fingers towards each other in the direction of the z-axis. This causes the display of the chart to zoom out from the z-axis, by presenting an increased range of values along the z-axis. In another example (not shown), the user may adjust the y-axis by performing the gestures described above, but in relation to the y-axis. In the axis adjustment examples described above, re-scaling can be determined based on the currently set scale. For instance, where the scale shows linear intervals, a change in distance between fingers determines the size of the interval between tick marks on the scale. The values at the tick marks on the scale may be snapped to appropriate values, for instance to the nearest ten units. If the scale shows linear intervals with a minimum and a maximum, then the minimum is set based on the change in distance of the left finger, and the maximum is set based on the change in distance of the right finger from the starting position of the gesture. Moving both fingers changes both the minimum and the maximum by equal amounts, whilst maintaining a static central value. The axis may respond to movement of the fingers once they have moved over a particular threshold distance. An alternative scale may be selected by the user moving their fingers through a plane perpendicular to the axis, whilst maintaining a relatively consistent finger separation distance. In this way, the scale may be changed to a linear interval, linear min/max, logarithmic interval, logarithmic min/max or exponential scale, or another scale appropriate in the context of the data.
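For the linear min/max mode, one consistent reading of the above is that the change in finger separation rescales the visible range about a static centre, with the new limits snapped to round values such as the nearest ten units. The sketch below illustrates that reading only; the threshold behaviour and scale switching are omitted and the function names are assumptions.

```python
# Hedged sketch: min/max-mode axis adjustment driven by the change in finger separation.
# Fingers moving apart zoom in (reduced range); fingers moving together zoom out.
# New limits are snapped to the nearest ten units; threshold and scale switching omitted.

def snap(value: float, step: float = 10.0) -> float:
    return round(value / step) * step


def adjust_min_max(axis_min: float, axis_max: float,
                   initial_sep: float, current_sep: float) -> tuple[float, float]:
    """Shrink or grow the visible range about its centre, keeping the centre static."""
    centre = (axis_min + axis_max) / 2.0
    half_range = (axis_max - axis_min) / 2.0
    # Stretching the axis (larger finger separation) shrinks the visible range.
    new_half = half_range * (initial_sep / current_sep)
    return snap(centre - new_half), snap(centre + new_half)


# Fingers start 20 cm apart and move to 50 cm apart along the x-axis: zoom in.
print(adjust_min_max(0.0, 100.0, initial_sep=0.20, current_sep=0.50))  # (30.0, 70.0)
```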
[0122] Referring to Figure 18, the user may perform a decomposition gesture by placing both palms parallel to the coronal plane of the user one behind the other, and moving their palms apart. The decomposition gesture is input via the user interface 13, 15 and detected by the gesture detection module 17.
[0123] Detection of the decomposition gesture generates a decomposed version of the data displayed in the currently selected display area 20. Figure 23 illustrates an example of the virtual environment in response to the decomposition gesture. The first display area 20A is the representation of the data before decomposition. When the user performs the decomposition gesture in relation to the first display area 20A, this generates a first decomposition of the data in a first decomposed display area 20B, a second decomposition of the data in a second decomposed display area 20C, and further decompositions in further display areas. The number of display areas is selected by the distance between the hands.
[0124] In one example, the user is able to relocate the data in the first decomposed display area 20B by using the translation gesture to remove the first decomposed display area 20B from the stack of display areas. Then, the user is able to recombine the data in the first display area 20A and the second decomposed display area 20C into a single representation of the data by first selecting both and then performing a decomposition gesture with the palms moving towards each other. This results in an additive composition in which the noise from the first decomposed display area 20B is removed. Further such composition and decomposition methods are selectable via a menu. For example, in the above case, a series composition would combine each of the selected display areas into a single display area on the same scale but indicated as different series.
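The published text does not name a particular decomposition method, so the sketch below uses a simple moving-average trend/noise split purely as an assumed example of decomposing a series into component display areas and additively recombining a subset of them (here, leaving the noise component out).

```python
# Hedged sketch: decomposing a series into components and recombining a subset additively.
# The moving-average split into trend and noise is an assumed example; per the text,
# the number of components would be chosen by the distance between the hands.
import numpy as np


def decompose(series: np.ndarray, window: int = 5) -> dict[str, np.ndarray]:
    kernel = np.ones(window) / window
    trend = np.convolve(series, kernel, mode="same")
    noise = series - trend
    return {"trend": trend, "noise": noise}


def recompose(components: dict[str, np.ndarray], keep: list[str]) -> np.ndarray:
    """Additive composition of the selected components (noise left out here)."""
    return sum(components[name] for name in keep)


t = np.linspace(0, 4 * np.pi, 200)
series = np.sin(t) + 0.2 * np.random.default_rng(0).normal(size=t.size)
parts = decompose(series)
denoised = recompose(parts, keep=["trend"])  # original with the noise component removed
print(denoised.shape)
```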
[0125] Referring to Figures 19A-C, the user may perform a pattern recognition gesture by forming a fist with one of their hands and by directing a palm of their other hand parallel to their sagittal plane. The user initiates the pattern recognition gesture by performing the gesture shown in Figure 19A. This gesture is input via the user interface 13, 15 and detected by the gesture detection module 17.
[0126] Once the pattern recognition gesture has been initiated, the user may move the fist, for instance up or down, in order to select a pattern recognition algorithm from a range of pattern recognition algorithms displayed in response to initiating the gesture. Once the fist is in the correct position for the user's desired pattern recognition algorithm the user may move their palm in relation to the fist. This movement selects a parameter for applying the selected pattern recognition algorithm.
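One way such a selection scheme could be realised is to index a displayed list of algorithms by the vertical position of the fist and to map the palm's displacement relative to the fist onto a parameter range. The algorithm names, ranges and function names in the sketch below are illustrative assumptions only.

```python
# Hedged sketch: choosing a pattern recognition algorithm from fist height and a
# parameter from palm movement relative to the fist. Algorithm names are illustrative.

ALGORITHMS = ["peak detection", "linear trend fit", "k-means clustering", "outlier detection"]


def select_algorithm(fist_height_normalised: float) -> str:
    """Map a fist height in [0, 1] onto the displayed list of algorithms."""
    index = min(int(fist_height_normalised * len(ALGORITHMS)), len(ALGORITHMS) - 1)
    return ALGORITHMS[index]


def select_parameter(palm_offset_cm: float, min_value: float = 1.0, max_value: float = 20.0) -> float:
    """Map palm displacement (0-30 cm assumed) onto the algorithm's parameter range."""
    fraction = max(0.0, min(palm_offset_cm / 30.0, 1.0))
    return min_value + fraction * (max_value - min_value)


print(select_algorithm(0.6))   # "k-means clustering"
print(select_parameter(15.0))  # mid-range parameter value, 10.5
```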
[0127] Referring to Figure 20A and Figure 20B, the user may perform a menu display gesture by opening one of their hands towards themselves in order to face one of their palms towards themselves. This gesture is input via the user interface 13, 15 and detected by the gesture detection module 17. In response to detection of the menu display gesture, the display module 5 causes a menu to be displayed in the virtual environment. The menu comprises a number of selectable options with each option being displayed at a tip of each digit of the user's hand. The user may then select an option by touching a digit with a digit of the other hand in order to select the option displayed in association with that digit. This gesture for selecting an option is illustrated in Figure 20B. In this way, haptic feedback is provided to the user when the user touches one of their fingers to select an option.
[0128] Referring to Figure 21A and Figure 21B, the user may perform a detailed menu display gesture by opening both of their hands towards themselves in order to face both of their palms towards themselves. This gesture is input via the user interface 13, 15 and detected by the gesture detection module 17. In response to the detection of the detailed menu display gesture, the display module 5 causes a detailed menu to be displayed in the virtual environment. The detailed menu comprises a number of selectable options with each option being displayed extending away from the tip of each digit of one of the hands towards the corresponding digit of the user's other hand. The user may then select an option by pointing towards one of the options displayed with a finger of one of their hands. In order to do this, the user moves one of their hands whilst keeping the other open as illustrated in Figure 21B.
[0129] It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages.
[0130] The methods described herein may be performed by software in machine readable form on a tangible storage medium e.g. in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. Examples of tangible (or non-transitory) storage media include disks, thumb drives, memory cards etc and do not include propagated signals. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously. This acknowledges that firmware and software can be valuable, separately tradable commodities. It is intended to encompass software, which runs on or controls “dumb” or standard hardware, to carry out the desired functions. It is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
[0131] The term 'computer' or 'computing device' is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realise that such processing capabilities are incorporated into many different devices and therefore the term 'computer' or 'computing device' includes PCs, servers, mobile telephones, personal digital assistants and many other devices.
[0132] Those skilled in the art will realise that storage devices utilised to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realise that by utilising conventional techniques known to those skilled in the art that all, or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.
[0133] Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
[0134] It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages.
[0135] Any reference to 'an' item refers to one or more of those items. The term 'comprising' is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.
[0136] The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought. Any of the modules described above may be implemented in hardware or software, as individual modules or modules integrated with other modules.
[0137] It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the scope of this invention.

Claims (58)

1. A data interaction device for facilitating user-interaction with data in a virtual environment comprising:
a visualisation module arranged to display, in the virtual environment, a plurality of field elements to a user, each field element indicative of one of a plurality of fields in the data; and arranged to simultaneously display each field element in association with a plurality of visualisation elements, each visualisation element responsive to a user interaction to indicate a visualisation scheme against which to plot data;
a selection module, responsive to user input, arranged to select a first visualisation element associated with a first field element based on at least one user interaction with the elements, to identify a first field and a first visualisation scheme; and a display module arranged to display, in the virtual environment, at least a part of the data by plotting data of the first field based on the first visualisation scheme within a data display area.
2. The data interaction device according to claim 1 wherein the first visualisation scheme is identified based on the selection of the first visualisation element and the first field is identified based on the association of the first visualisation element with the first field element.
3. The data interaction device according to claim 1 or claim 2, wherein the virtual environment is an augmented, virtual or mixed reality environment or a two-dimensional representation of the augmented, virtual or mixed reality environment.
4. The data interaction device according to any one of the preceding claims wherein each field element is responsive to a user interaction to select one of a plurality of fields in the data; and the selection module is arranged to select a second field element based on at least one user interaction to identify a second field and a second visualisation scheme; and the display module is arranged to display at least a part of the data based on the first field, first visualisation scheme, second field and second visualisation scheme identified, by plotting data of the first field based on the first visualisation scheme and plotting data of the second field based on the second visualisation scheme.
5. The data interaction device according to claim 4 wherein the second field is identified based on the selection of the second field element and the second field is used to identify the second visualisation scheme based on a characteristic of the second field.
6. The data interaction device according to any one of the preceding claims, wherein the selection module is arranged to select a third visualisation element associated with a third field element based on at least one user interaction with the elements, to identify a third field and a third visualisation scheme; and the display module is arranged to display at least a part of the data based on the first field, first visualisation scheme, third field and third visualisation scheme, by plotting data of the first field based on the first visualisation scheme and data of the third field based on the third visualisation scheme.
7. The data interaction device according to claim 6 wherein the third visualisation scheme is identified based on the selection of the third visualisation element and the third field is identified based on the association of the third visualisation element with the third field element.
8. The data interaction device according to claim 6 or claim 7, wherein the display module is arranged to display at least a part of the data based on the first field, first visualisation scheme, second field, second visualisation scheme, third field and third visualisation scheme, by plotting data of the first field based on the first visualisation scheme, plotting data of the second field based on the second visualisation scheme and plotting data of the third field based on the third visualisation scheme.
9. The data interaction device according to any one of the preceding claims wherein at least one of the visualisation elements is responsive to a user interaction to indicate a plurality of visualisation schemes against which to plot data; and wherein the selection module is arranged to identify a visualisation scheme based on a characteristic of the field associated with the visualisation scheme.
10. The data interaction device according to claim 9 further comprising:
a feedback module arranged to receive a negative feedback input associated with the identified visualisation scheme; and the selection module is arranged to identify a different visualisation scheme based on the negative feedback input.
11. The data interaction device according to claim 9 or claim 10 wherein identifying the visualisation scheme is based on a characteristic of the associated field and negative feedback inputs.
12. The data interaction device according to any one of the preceding claims wherein at least one of the visualisation elements is responsive to a user interaction to indicate a vertical visualisation scheme for data to be plotted in a vertical dimension.
13. The data interaction device according to any one of the preceding claims wherein at least one of the visualisation elements is responsive to a user interaction to indicate a horizontal visualisation scheme for data to be plotted in a horizontal dimension.
14. The data interaction device according to any one of the preceding claims wherein at least one of the visualisation elements is responsive to a user interaction to indicate a spatial visualisation scheme for data to be plotted in a vertical and a horizontal dimension or a depth dimension.
15. The data interaction device according to any one of the preceding claims, the device further comprising:
a hand tracking device for tracking hand gestures of a user in three dimensions; and a gesture detection module arranged to detect user interactions by identifying hand gestures.
16. The data interaction device according to claim 15, wherein the gesture detection module is arranged to:
detect a data density gesture of a user interacting with the data display area, wherein the data display area is responsive to the data density gesture to change the data density of the data plotted.
17. The data interaction device according to claim 15 or claim 16, wherein the gesture detection module is arranged to:
detect a data density-reducing gesture of a user interacting with the data display area, wherein the data display area is responsive to the data density-reducing gesture to reduce the data density of the data plotted.
18. The data interaction device according to any one of claims 15 to 17, wherein the gesture detection module is arranged to:
detect a data density-increasing gesture of a user interacting with the data display area, wherein the data display area is responsive to the data density-increasing gesture to increase the data density of the data plotted.
19. The data interaction device according to any one of claims 15 to 18, wherein the gesture detection module is arranged to:
detect a selection of a display area;
detect a translation gesture of a user for moving the selected display area around the virtual environment;
wherein the translation gesture comprises the user making a fist with one of their hands and moving the fist within the virtual environment;
wherein the data display area is responsive to movement of the fist to follow the movement of the fist.
20. The data interaction device according to any one of claims 15 to 19, wherein the gesture detection module is arranged to:
detect a domain changing gesture of a user for changing the domain in which the data is displayed;
wherein the domain changing gesture comprises the user facing the palm of one of their hands away from themselves and making a fist with the other of their hands in proximity of the data display area;
wherein a number of domain options are presented to the user in response to the domain changing gesture; and one of the domain options is selected in response to moving the fist within the virtual environment towards that domain option.
21. The data interaction device according to any one of claims 15 to 20, wherein the gesture detection module is arranged to:
detect a panning gesture of a user for panning or scrolling through the data displayed;
wherein the panning gesture comprises the user facing the palm of one of their hands away from themselves;
wherein the data within the display area is responsive to movement of the palm to follow the movement of the palm.
22. The data interaction device according to any one of claims 15 to 21, wherein the gesture detection module is arranged to:
detect a selection of a display area;
detect a resizing gesture of a user for changing the size of the selected display area;
wherein the resizing gesture comprises the user making a fist with both of their hands and moving the fists in relation to one another within the virtual environment;
wherein the data display area is responsive to movement of the fists to alter the size of the data display area dependent on the movement of the fists relative to one another.
23. The data interaction device according to any one of claims 15 to 22, wherein the gesture detection module is arranged to:
detect a slicing gesture of a user for generating a display of a cross-section;
wherein the slicing gesture comprises the user directing a palm of one of their hands parallel to one of the anatomical planes;
wherein a portion or a cross section of the data is selected in response to the slicing gesture.
24. The data interaction device according to any one of claims 15 to 23, wherein the gesture detection module is arranged to:
detect an area selection gesture of a user for selecting an area within the virtual environment;
wherein the area selection gesture comprises the user facing the palm of one of their hands away from themselves and pointing a finger away from themselves;
wherein an area is selected in response to movement of the finger.
25. The data interaction device according to any one of claims 15 to 24, wherein the gesture detection module is arranged to:
detect an axis adjustment gesture of a user for adjusting an axis of a chart in the display area;
wherein the axis adjustment gesture comprises positioning a finger of one hand and a finger of the other hand in relation to an axis and moving one finger in relation to the other finger;
wherein the scale of the axis is changed in response to movement of the fingers.
26. The data interaction device according to any one of claims 15 to 25, wherein the gesture detection module is arranged to:
detect a pattern recognition gesture of a user for applying a pattern recognition algorithm to the data in the display area;
wherein the pattern recognition gesture comprises:
the user making a fist with one of their hands and directing a palm of the other one of their hands parallel to the sagittal plane of the user's body;
wherein a pattern recognition algorithm is selectable in response to movement of the fist and a parameter for the pattern recognition algorithm is selectable in response to movement of the palm.
27. The data interaction device according to any one of claims 15 to 26, wherein the gesture detection module is arranged to:
detect a menu display gesture of a user for generating a display of a menu within the virtual environment;
wherein the menu display gesture comprises the user opening one of their hands towards themselves;
wherein menu options are displayed against each digit of the user's hand, each menu option selectable in response to the user pointing towards that option with a finger of the other hand.
28. The data interaction device according to any one of claims 15 to 27, wherein the gesture detection module is arranged to:
detect a detailed menu display gesture of a user for generating a display of a detailed menu within the virtual environment;
wherein the detailed menu display gesture comprises opening both hands towards themselves;
wherein at least one menu option is displayed extending from a digit of one of the hands towards a digit on the other hand, each menu option selectable in response to the user pointing towards the menu option with a finger of one hand.
29. A method of facilitating user-interaction with data in a virtual environment, the method comprising:
displaying in the virtual environment a plurality of field elements to a user, each field element indicative of one of a plurality of fields in the data;
simultaneously displaying each field element in association with a plurality of visualisation elements, each visualisation element responsive to a user interaction to indicate a visualisation scheme against which to plot data;
selecting a first visualisation element associated with a first field element based on at least one user interaction with the elements, to identify a first field and a first visualisation scheme; and displaying, in the virtual environment, at least a part of the data by plotting data of the first field based on the first visualisation scheme within a data display area.
30. The method according to claim 29 wherein the first visualisation scheme is identified based on the selection of the first visualisation element and the first field is identified based on the association of the first visualisation element with the first field element.
31. The method according to claim 29 or claim 30, wherein the virtual environment is an augmented, virtual or mixed reality environment or a two-dimensional representation of the augmented, virtual or mixed reality environment.
32. The method according to any one of claims 29 to 31 wherein each field element is responsive to a user interaction to select one of a plurality of fields in the data; and the method further comprises:
selecting a second field element based on at least one user interaction to identify a second field and a second visualisation scheme; and displaying at least a part of the data based on the first field, first visualisation scheme, second field and second visualisation scheme identified, by plotting data of the first field based on the first visualisation scheme and plotting data of the second field based on the second visualisation scheme.
33. The method according to claim 32 wherein the second field is identified based on the selection of the second field element and the second field is used to identify the second visualisation scheme based on a characteristic of the second field.
34. The method according to any one of claims 29 to 33, the method further comprising:
selecting a third visualisation element associated with a third field element based on at least one user interaction with the elements, to identify a third field and a third visualisation scheme; and displaying at least a part of the data based on the first field, first visualisation scheme, third field and third visualisation scheme, by plotting data of the first field based on the first visualisation scheme and data of the third field based on the third visualisation scheme.
35. The method according to claim 34 wherein the third visualisation scheme is identified based on the selection of the third visualisation element and the third field is identified based on the association of the third visualisation element with the third field element.
36. The method according to claim 34 or claim 35, the method further comprising:
displaying at least a part of the data based on the first field, first visualisation scheme, second field, second visualisation scheme, third field and third visualisation scheme, by plotting data of the first field based on the first visualisation scheme, plotting data of the second field based on the second visualisation scheme and plotting data of the third field based on the third visualisation scheme.
37. The method according to any one of claims 29 to 36 wherein at least one of the visualisation elements is responsive to a user interaction to indicate a plurality of visualisation schemes against which to plot data; and the method comprises identifying a visualisation scheme based on a characteristic of the field associated with the visualisation scheme.
38. The method according to claim 37 further comprising:
receiving a negative feedback input associated with the identified visualisation scheme; and identifying a different visualisation scheme based on the negative feedback input.
39. The method according to claim 37 or claim 38 wherein identifying the visualisation scheme is based on a characteristic of the associated field and negative feedback inputs.
40. The method according to any one of claims 29 to 39 wherein at least one of the visualisation elements is responsive to a user interaction to indicate a vertical visualisation scheme for data to be plotted in a vertical dimension.
41. The method according to any one of claims 29 to 40 wherein at least one of the visualisation elements is responsive to a user interaction to indicate a horizontal visualisation scheme for data to be plotted in a horizontal dimension.
42. The method according to any one of claims 29 to 41 wherein at least one of the visualisation elements is responsive to a user interaction to indicate a spatial visualisation scheme for data to be plotted in a vertical and a horizontal dimension or a depth dimension.
43. The method according to any one of claims 29 to 42, the method further comprising:
tracking hand gestures of a user in three dimensions using a hand tracking device; and detecting user interactions by identifying hand gestures.
44. The method according to claim 43, the method further comprising:
detecting a data density gesture of a user interacting with the data display area, wherein the data display area is responsive to the data density gesture to change the data density of the data plotted.
45. The method according to claim 43 or claim 44, the method further comprising:
detecting a data density-reducing gesture of a user interacting with the data display area, wherein the data display area is responsive to the data density-reducing gesture to reduce the data density of the data plotted.
46. The method according to any one of claims 43 to 45, the method further comprising:
detecting a data density-increasing gesture of a user interacting with the data display area, wherein the data display area is responsive to the data density-increasing gesture to increase the data density of the data plotted.
47. The method according to any one of claims 43 to 46, the method further comprising:
selecting a display area;
detecting a translation gesture of a user for moving the selected display area around the virtual environment;
wherein the translation gesture comprises the user making a fist and moving the fist within the virtual environment;
wherein the data display area is responsive to movement of the fist to follow the movement of the fist.
48. The method according to any one of claims 43 to 47, the method further comprising:
selecting a display area;
detecting a domain changing gesture of a user for changing the domain in which the data in the selected display area is displayed;
wherein the domain changing gesture comprises the user facing the palm of one of their hands away from themselves and making a fist;
wherein a number of domain options are presented to the user in response to the domain changing gesture; and one of the domain options is selected in response to moving the fist within the virtual environment towards that domain option.
49. The method according to any one of claims 43 to 48, the method further comprising:
detecting a panning gesture of a user for panning or scrolling through the data displayed;
wherein the panning gesture comprises the user facing the palm of one of their hands away from themselves;
wherein the data within the display area is responsive to movement of the palm to follow the movement of the palm.
50. The method according to any one of claims 43 to 49, the method further comprising:
selecting a display area;
detecting a resizing gesture of a user for changing the size of the selected display area;
wherein the resizing gesture comprises the user making a fist with both of their hands and moving the fists in relation to one another within the virtual environment;
wherein the data display area is responsive to movement of the fists to alter the size of the data display area dependent on the movement of the fists relative to one another.
51. The method according to any one of claims 43 to 50, the method further comprising:
detecting a slicing gesture of a user for generating a display of a cross-section;
wherein the slicing gesture comprises the user directing a palm of one of their hands parallel to one of the anatomical planes;
wherein a portion or a cross section of the data is selected in response to the slicing gesture.
52. The method according to any one of claims 43 to 51, the method further comprising:
detecting an area selection gesture of a user for selecting an area within the virtual environment;
wherein the area selection gesture comprises the user facing the palm of one of their hands away from themselves and pointing a finger away from themselves;
wherein an area is selected in response to movement of the finger.
53. The method according to any one of claims 43 to 52, the method further comprising:
detecting an axis adjustment gesture of a user for adjusting an axis of a chart in the display area;
wherein the axis adjustment gesture comprises positioning a finger of one hand and a finger of the other hand in relation to an axis and moving one finger in relation to the other finger;
wherein the scale of the axis is adjusted in response to movement of the fingers.
54. The method according to any one of claims 43 to 53, the method further comprising:
detecting a pattern recognition gesture of a user for applying a pattern recognition algorithm to the data in the display area;
wherein the pattern recognition gesture comprises:
the user making a fist with one of their hands and directing a palm of the other one of their hands parallel to the sagittal plane of the user's body;
wherein a pattern recognition algorithm is selectable in response to movement of the fist and a parameter for the pattern recognition algorithm is selectable in response to movement of the palm.
55. The method according to any one of claims 43 to 54, the method further comprising:
detecting a menu display gesture of a user for generating a display of a menu within the virtual environment;
wherein the menu display gesture comprises the user opening one of their hands towards themselves;
wherein menu options are displayed against each digit of the user's hand, each menu option selectable in response to the user pointing towards that option with a finger of the other hand.
56. The method according to any one of claims 43 to 55, the method further comprising:
detecting a detailed menu display gesture of a user for generating a display of a detailed menu within the virtual environment;
wherein the detailed menu display gesture comprises opening both hands towards themselves;
wherein at least one menu option is displayed extending from a digit of one of the hands towards a digit on the other hand, each menu option selectable in response to the user pointing towards the menu option with a finger of one hand.
57. A computer program comprising code portions which when loaded and run on a computer cause the computer to execute a method according to any of claims 29 to 56.
58. A data interaction system comprising:
a user interface device arranged to receive inputs from a user;
a display device arranged to display a virtual environment for facilitating user interactions with data; and a visualisation module arranged to display in the virtual environment a plurality of field elements to a user, each field element indicative of one of a plurality of fields in the data; and arranged to simultaneously display each field element in association with a plurality of visualisation elements, each visualisation element responsive to a user interaction to indicate a visualisation scheme against which to plot data;
a selection module, responsive to user inputs, arranged to select a first visualisation element associated with a first field element based on at least one user interaction with the elements, to identify a first field and a first visualisation scheme; and a display module arranged to display, in the virtual environment, at least a part of the data by plotting data of the first field based on the first visualisation scheme within a data display area.

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1619417.7A GB2556068A (en) 2016-11-16 2016-11-16 Data interation device


Publications (2)

Publication Number Publication Date
GB2556068A true GB2556068A (en) 2018-05-23
GB2556068A8 GB2556068A8 (en) 2018-06-27

Family

ID=62043343

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1619417.7A Withdrawn GB2556068A (en) 2016-11-16 2016-11-16 Data interation device

Country Status (1)

Country Link
GB (1) GB2556068A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114625255A (en) * 2022-03-29 2022-06-14 北京邮电大学 Free-hand interaction method for visual view construction, visual view construction device and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5461708A (en) * 1993-08-06 1995-10-24 Borland International, Inc. Systems and methods for automated graphing of spreadsheet information
US20080022562A1 (en) * 2006-07-31 2008-01-31 John Robert Manis Shoe static outsole structrue connected to rotary midsole structrue
US20080115049A1 (en) * 2006-11-14 2008-05-15 Microsoft Corporation Dynamically generating mini-graphs to represent style and template icons
US20100325564A1 (en) * 2009-06-19 2010-12-23 Microsoft Corporation Charts in virtual environments
US20110115814A1 (en) * 2009-11-16 2011-05-19 Microsoft Corporation Gesture-controlled data visualization
US20130080444A1 (en) * 2011-09-26 2013-03-28 Microsoft Corporation Chart Recommendations
US20150135113A1 (en) * 2013-11-08 2015-05-14 Business Objects Software Ltd. Gestures for Manipulating Tables, Charts, and Graphs
US20160012154A1 (en) * 2014-07-08 2016-01-14 Kyocera Document Solutions Inc. Information processing system that organizes and clearly presents mass data to user, information processing methods, and recording medium
EP2990924A1 (en) * 2014-08-22 2016-03-02 Business Objects Software Ltd. Gesture-based on-chart data filtering


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
www.virtualitics.com, Accessed: 27/04/17, First available: 02/02/17 (via https://web.archive.org/web/20170202094235/www.virtualitics.com) *


Also Published As

Publication number Publication date
GB2556068A8 (en) 2018-06-27


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)