TW201502959A - Enhanced canvas environments - Google Patents

Enhanced canvas environments

Info

Publication number
TW201502959A
Authority
TW
Taiwan
Prior art keywords
gesture
interaction
items
response
item
Prior art date
Application number
TW103106787A
Other languages
Chinese (zh)
Inventor
Anton Oguzhan Alford Andrews
Frederick David Jones
Original Assignee
Microsoft Corp
Priority date
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Publication of TW201502959A publication Critical patent/TW201502959A/en


Abstract

Systems, methods, and software are disclosed herein for facilitating enhanced canvas presentation environments. In an implementation, a user interacts with a touch-enabled display system capable of displaying items on a canvas. In response to a gesture made by the user with respect to an item being displayed, a format-specific interaction model is identified based on a format associated with the item. A response to the gesture may then be determined using the interaction model and the response rendered for display.

Description

Enhanced canvas environments

[Related Applications]

This application claims the benefit of U.S. Provisional Patent Application No. 61/771,900, entitled "ENHANCED CANVAS ENVIRONMENTS" and filed on March 3, 2013, and claims priority to a U.S. patent application, also entitled "ENHANCED CANVAS ENVIRONMENTS" and filed on March 3, 2013, both of which are incorporated herein by reference in their entirety.

Aspects of the present disclosure relate to computing hardware and software technology, and in particular to enhanced digital canvas technology.

Digital whiteboards provide a canvas with which users can interact to create illustrations, take notes, and otherwise work in a manner similar to how they would use a non-digital whiteboard. Various applications, such as Microsoft® OneNote® and Microsoft® Lync®, include digital whiteboards on which users can collect notes, store documents or photos, and which they can treat as a workspace. A digital pen may be used to produce content such as drawings and doodles, which can be saved much like word-processing documents or other such authored items.

Many digital whiteboards also include the ability to embed items of various types on the canvas. For example, videos, photos, and authored items can be placed on the canvas alongside other content created with a digital pen. Interacting with any given item typically first requires selecting the item, or some other user input that transitions the item to an active state. The user can then interact using controls or gestures specific to that particular item or item type. At the canvas level, therefore, every item is interacted with in the same way as any other item until the item is activated.

How a user is allowed to interact with items at the canvas level can be regarded as an interaction model. At the canvas level, interaction with every item is typically governed by the same interaction model for each item. Thus, regardless of item type, the same gesture or command is used to manipulate an item as is used to manipulate any other item. Moreover, under a given item's interaction model, the item must be selected or activated before item-specific interaction can begin. In other words, for any item that is not active on the canvas, the user must issue a preliminary, or first, interaction that selects or activates the item. Once the item is activated, the user can carry out subsequent interactions according to that item's interaction model.

In a familiar scenario, a canvas may include several embedded objects or items, such as a word-processing document, photos, video, and text. A user can interact with each item using controls specific to the item's type, but only after the item has been activated. For example, a word-processing document must first be brought into focus by touch before the user can manipulate it according to its associated interaction model. Likewise, a picture must be brought into focus before it can be manipulated according to its interaction model.

In fact, which interaction model is used to interpret a gesture or other command made with respect to a particular item is normally determined only after the item has been selected or activated. Thus, when an item is selected, its interaction model is loaded and can then be applied to any subsequent gestures. The interaction model for a particular item may be implemented by loading all or part of the application associated with the active item. For example, when a word-processing item is activated, components of the word-processing application capable of interpreting gestures according to the interaction model specific to word-processing documents are loaded.

Provided herein are systems, methods, and software for facilitating enhanced canvas presentation environments. In an implementation, a user interacts with a touch-enabled display system capable of displaying items on a canvas. In response to a gesture made by the user with respect to a displayed item, a format-specific interaction model is identified based on a format associated with the item. A response to the gesture may then be determined using the interaction model, and the response rendered for display. In this manner, different interaction models can apply to items and objects that vary with respect to their type or format, even though those items share the same active or inactive state.
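The format-keyed model lookup described above can be sketched as a simple dispatch table. All names here (`Item`, `MODEL_REGISTRY`, the gesture and response strings) are illustrative assumptions; the disclosure does not define a concrete API.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Item:
    format: str  # e.g. "video", "photo", "document"

# Each interaction model maps a gesture to a response description.
InteractionModel = Dict[str, str]

# Hypothetical registry of format-specific interaction models.
MODEL_REGISTRY: Dict[str, InteractionModel] = {
    "video": {"swipe_right": "show timeline", "pinch": "zoom"},
    "photo": {"swipe_right": "next photo in gallery", "pinch": "zoom"},
}

def respond_to_gesture(item: Item, gesture: str) -> str:
    # The model is identified only after the gesture arrives, based on
    # the format associated with the item -- no prior activation step.
    model = MODEL_REGISTRY[item.format]
    return model.get(gesture, "no-op")

print(respond_to_gesture(Item("video"), "swipe_right"))  # show timeline
print(respond_to_gesture(Item("photo"), "swipe_right"))  # next photo in gallery
```

Note that the same `swipe_right` gesture yields a different response for each format, which is the point of format-specific interaction models.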

This Summary is provided to introduce, in simplified form, a selection of concepts that are further described below in the Detailed Description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

100‧‧‧Canvas environment

101‧‧‧User

102‧‧‧Arm

103‧‧‧Wall

105‧‧‧Floor

106‧‧‧Computing system

107‧‧‧Multi-format canvas

111‧‧‧Document

113‧‧‧Picture

115‧‧‧Gallery

117‧‧‧Video

121‧‧‧Rightward swipe gesture

122‧‧‧Swipe gesture

123‧‧‧Time bar

201-205‧‧‧Method steps

300‧‧‧Interaction model

400‧‧‧Interaction model

800‧‧‧Computing system

801‧‧‧Processing system

803‧‧‧Storage system

805‧‧‧Software

807‧‧‧Communication interface

809‧‧‧User interface

811‧‧‧Display interface

900‧‧‧Interaction model

Many aspects of the present disclosure can be better understood with reference to the following drawings. While several implementations are described in connection with these drawings, the disclosure is not limited to the implementations disclosed herein. On the contrary, the intent is to cover all alternatives, modifications, and equivalents.

FIG. 1 illustrates an enhanced canvas environment in an implementation.

FIG. 2 illustrates an enhanced canvas process in an implementation.

FIG. 3 illustrates an interaction model in an implementation.

FIG. 4 illustrates an interaction model in an implementation.

FIG. 5 illustrates an operational scenario in an implementation.

FIG. 6 illustrates an operational scenario in an implementation.

FIG. 7 illustrates an operational scenario in an implementation.

FIG. 8 illustrates a computing system in an implementation.

FIG. 9 illustrates an interaction model in an implementation.

FIG. 10 illustrates an interaction model in an implementation.

Implementations disclosed herein provide an enhanced canvas environment in which a user may interact with a variety of content in a variety of ways. In particular, various format-specific interaction models are supported for items having different formats. For example, a user may interact with an item having a first format, in which case an interaction model specific to that format is used when responding to gestures made with respect to that item. The user may also interact with another item having a different format, in which case a different interaction model, specific to the different format, is used when responding to gestures made with respect to that item.

In addition, implementations described herein provide for identifying an item's associated interaction model after a gesture has been made, rather than identifying the interaction model used to interpret gestures made with respect to the item before the gesture is made. A gesture may be made with respect to a photo, a word-processing document, text or digital ink, or any other type of item. In response to the gesture, a model specific to the item's type or format is identified. The gesture is then interpreted based on that interaction model, and a response is rendered accordingly.

An enhanced canvas environment may be provided in the context of a note-taking application (such as OneNote®) experienced on a laptop, desktop, or tablet computer, as well as on a mobile phone, gaming system, or any other suitable computing device. In addition, an enhanced canvas environment may be provided through enhanced computing systems, such as those having very large touch screens or arrays of touch screens. In some implementations, a large screen may generally be the size of a desk, table, or other workspace with which a user can interact through gestures and other modes of interaction.

In some implementations, other devices may be used to further enhance interaction with content and other items. For example, a speech recognition device may be used to capture spoken words so that gestures made with respect to items on the canvas can be interpreted in view of what is spoken. In another example, a motion capture device may be used to capture a user's motion as a gesture is made, to further enhance processing of the gesture. It may be appreciated that these techniques can be used individually or together in an integrated fashion. Thus, both speech and motion may be considered when identifying a response to a gesture.

Turning now to the drawings, FIG. 1 illustrates enhanced canvas environment 100. Enhanced canvas environment 100 includes user 101, situated in a space defined at least by floor 105 and wall 103, which is merely exemplary for purposes of illustration. Computing system 106 drives display system 109, which is resident in the space. In some implementations, display system 109 may hang on a wall. In other implementations, display system 109 may be arranged or deployed in the form of a table or desk. In still other implementations, display system 109 may be situated at an angle, somewhat like a drafting table or desk.

In operation, user 101 interacts with multi-format canvas 107 displayed by display system 109. In this illustration, multi-format canvas 107 includes various content items, such as document 111, picture 113, gallery 115, and video 117. User 101 can make various gestures with respect to any of these items, and responses to those gestures are determined based on an interaction model specific to at least one of the items. The various interaction models may have some aspects in common. In addition, the interaction models for some formats may be identical to those for other formats. However, at least two formats will have interaction models that differ from each other.

The different interaction models allow user 101 to interact differently with each item using gestures that may be the same for each. For example, a rightward swipe gesture made with respect to a video may invoke a video timeline that can be navigated. In contrast, a swipe gesture made with respect to a map may move the focus of the map. In yet another example, a swipe gesture made with respect to a photo may change the photo to another photo from a digital photo gallery.

The various gestures made by user 101 are captured by display system 109, and gesture information representative of the gestures is communicated to computing system 106. Enhanced canvas process 200, illustrated in FIG. 2, describes the process carried out by computing system 106 when provided with such gesture information. Computing system 106 processes the gesture information to identify an interaction model to use when responding to a gesture (step 201). Once the interaction model is identified, computing system 106 determines a response to the gesture based on the model (step 203). Computing system 106 renders the response (step 205) and drives display system 109 to display the response. Various example interaction models and illustrative operational scenarios are described in more detail with respect to FIGS. 3-7 to further illustrate enhanced canvas process 200.

FIG. 3 illustrates interaction model 300, which applies to content items in a manner specific to each item based on its format. A touch gesture, represented by the circular symbol in the middle of item 118, may result in one of four actions. A gesture in the upward direction results in contextual options being drawn and displayed. A gesture in the downward direction may result in the surfacing (rendering and display) of an agent capable of taking various actions, such as performing queries on the user's behalf. A swipe to the left may result in one contextual action specific to the item's format type, while a swipe to the right may result in another contextual action specific to the item's format type. Accordingly, similar swipes may result in different actions for content items having different formats.
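A minimal sketch of this four-direction model follows. The up/down actions come from the description; the left/right action strings are hypothetical placeholders, since the disclosure only says they are format-specific.

```python
# Hypothetical left/right contextual actions per format (assumed values).
FORMAT_SIDE_ACTIONS = {
    "video": {"left": "contextual action A", "right": "reveal timeline"},
    "photo": {"left": "previous photo", "right": "next photo"},
}

def model_300(direction: str, item_format: str) -> str:
    # Up and down behave the same for every format...
    if direction == "up":
        return "draw contextual options"
    if direction == "down":
        return "summon agent"
    # ...while left/right swipes dispatch on the item's format.
    return FORMAT_SIDE_ACTIONS[item_format][direction]

print(model_300("up", "video"))     # draw contextual options
print(model_300("right", "video"))  # reveal timeline
print(model_300("right", "photo"))  # next photo
```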

FIG. 4 illustrates another interaction model 400, which generally applies to content items in the same way for every item regardless of format. In particular, interaction model 400 describes what may occur when a user performs a long-press gesture on any item. Such a gesture results in the surfacing of four specific options: share, undo, close, and magnify. The share option, when selected, allows the user to share item 118 with others. The undo option, when selected, allows a past action to be undone. The close option, when selected, closes the subject item. The magnify option, when selected, launches a more detailed view of, or an investigation into, the subject item.
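Because this model is format-independent, it reduces to a single handler that surfaces the same four options for any item. The function shape and identifiers are assumptions for illustration; only the option names come from the description.

```python
# The four options of interaction model 400, per the description.
LONG_PRESS_OPTIONS = ("share", "undo", "close", "magnify")

def on_long_press(item_id: str) -> dict:
    # Regardless of the pressed item's format, the same four options
    # are overlaid around it.
    return {"item": item_id, "options": list(LONG_PRESS_OPTIONS)}

result = on_long_press("gallery-photo-3")
print(result["options"])  # ['share', 'undo', 'close', 'magnify']
```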

FIGS. 5 and 6 pertain to an operational scenario that illustrates how an interaction model, such as interaction model 300, becomes specific when applied with respect to a particular item. In FIG. 5, user 101 extends arm 102 and performs rightward swipe gesture 121. Swipe gesture 121 is made with respect to video 117. Swiping to the right results in time bar 123 being rendered as an overlay on video 117. User 101 may then navigate to various portions of video 117 by touching time bar 123. Alternatively, a sideways swipe at any point on the video results in a scrubbing action, whereby the video is advanced or rewound in proportion to the distance of the user's swipe. Swiping up or down results in the video's volume being adjusted up or down in proportion to the distance of the user's swipe. This may occur while a control surface, such as a video-scrubbing interface or a volume control, is displayed, or it may occur without any additional user interface being displayed, with only the scrubbing or volume adjustment taking place.
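The proportional scrubbing and volume behavior can be sketched as follows. The scale factors and clamping bounds are illustrative assumptions, not values from the disclosure; the only claimed behavior is proportionality to swipe distance.

```python
def scrub(position_s: float, drag_dx_px: float, secs_per_px: float = 0.5) -> float:
    """Advance (drag right) or rewind (drag left) playback in
    proportion to the horizontal drag distance, clamped at the start."""
    return max(0.0, position_s + drag_dx_px * secs_per_px)

def adjust_volume(volume: float, drag_dy_px: float, step_per_px: float = 0.002) -> float:
    """Raise or lower volume in proportion to the vertical drag
    distance, clamped to the [0.0, 1.0] range."""
    return min(1.0, max(0.0, volume + drag_dy_px * step_per_px))

print(scrub(60.0, 40))    # 80.0  (40 px right -> +20 s)
print(scrub(60.0, -200))  # 0.0   (clamped at the start of the video)
print(adjust_volume(0.5, 100))   # roughly 0.7
print(adjust_volume(0.9, 1000))  # 1.0 (clamped at full volume)
```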

Referring now to FIG. 6, user 101 makes another swipe gesture 122, again in the rightward direction. In this case, however, a new photo has replaced the picture initially displayed in gallery 115 in FIG. 5, indicating that swipe gesture 122 triggered a scroll function across gallery 115. This is because the content subject to swipe gesture 122 is of a different item type than the video subject to swipe gesture 121. Accordingly, an interaction model different from the one used with respect to swipe gesture 121 is used to determine and render the response to swipe gesture 122. Some gestures may nevertheless accomplish the same result across multiple formats, such as a pinch gesture, where formats would all benefit from the same gesture resulting in zooming of map content and picture content alike.

FIG. 7 illustrates an operational scenario involving a long-press gesture. In FIG. 7, user 101 extends his arm 102 and makes a long-press gesture with respect to a picture in gallery 115. The long-press gesture triggers four options rendered as an overlay on the picture in gallery 115. The four options correspond to those of the interaction model discussed with respect to FIG. 4, namely the share, magnify, undo, and close options. User 101 may then select any of the options or may proceed with other interactions.

FIGS. 9 and 10 illustrate another interaction model 900. Interaction model 900 describes how a user may interact with a free-form drawing, or other type of drawing, on multi-format canvas 107 to erase all or part of the drawing. In this scenario, the user has created a drawing using some suitable drawing mechanism, such as a pen or stylus used to draw on the display. A single-touch interaction made across the drawing results in erasure of the corresponding portion of the drawing. This single-touch interaction is analogous to drawing on a traditional, non-digital whiteboard with erasable ink markers and then selectively wiping away part of the drawing with a finger, such as a segment of a shape, line, or curve. In contrast, a multi-touch interaction made across the drawing results in complete erasure of the shapes or lines intersected by the motion. This multi-touch interaction is analogous to erasing a traditional whiteboard with a whole hand or an eraser. The gestures can be distinguished by detecting a single finger for the single-touch interaction and multiple fingers for the multi-touch interaction. The gestures may be accomplished in any direction or path and need not be linear or aligned to any particular axis. In this manner, smaller or larger portions of a drawing can be erased in ways analogous to those used on traditional erasable-ink whiteboards.
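The finger-count dispatch of interaction model 900 can be sketched as follows. The stroke/segment data structures are assumptions; the disclosure specifies only the behavior (partial erase for one finger, whole-stroke erase for multiple fingers).

```python
def erase(strokes, touched, finger_count):
    """strokes: dict mapping stroke id -> set of segment ids.
    touched: (stroke_id, segment_id) pairs crossed by the drag."""
    if finger_count == 1:
        # Partial erase: remove only the crossed segments, like wiping
        # part of a line away with a fingertip.
        for sid, seg in touched:
            strokes[sid].discard(seg)
    else:
        # Whole-stroke erase: drop every intersected stroke entirely,
        # like erasing with a whole hand or an eraser.
        for sid in {sid for sid, _ in touched}:
            strokes.pop(sid, None)
    return strokes

drawing = {"circle": {0, 1, 2, 3}, "line": {0, 1}}
erase(drawing, [("circle", 1)], finger_count=1)
print(drawing)  # {'circle': {0, 2, 3}, 'line': {0, 1}}
erase(drawing, [("circle", 0)], finger_count=3)
print(drawing)  # {'line': {0, 1}}
```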

Various aspects may be appreciated from the foregoing discussion. In at least one implementation, multi-format canvas interaction may be experienced through a considerably large touch/pen display user interface that makes content primary and keeps control surfaces hidden until accessed. Three gestures and four actions may be supported in some scenarios. Content may be surfaced without any borders, chrome, or other adornment. When a user wishes to interact, a natural gesture, such as a pinch to zoom, triggers rendering of the response to the pinch. A multi-format canvas can thus have multiple content types rendered and active within the canvas without needing to display controls, such as buttons or sliders, associated with that content.

In addition, gestures in a multi-format canvas may enable input via voice commands. For example, a multi-finger long-press gesture may signal that a voice command is coming, or indicate the content that the command will target. Speech directed at a specific region of the user interface can be handled differently than speech directed at the computing system as a whole. For example, speech directed at a particular section of video content may be processed using a grammar that pertains only to video, such as "fast forward," or the grammar may be altered so that relevant video commands are selected as matches with higher frequency. The content of the selected item may also be used to adjust how recognized speech is interpreted. Touch context may also be considered when interpreting speech, for example by recognizing that a "print picture" command refers to the touched picture rather than other pictures that may be displayed at the time. Furthermore, if content such as an instant-messaging conversation is selected, speech may be directed to the remote instant-messaging user as a voice conversation rather than being interpreted locally as a command.
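Restricting the recognition grammar to the touched item's format, as described above, can be sketched like this. The grammar contents and function names are assumptions; the disclosure only gives "fast forward" and "print picture" as examples.

```python
# Hypothetical per-format command grammars.
GRAMMARS = {
    "video": {"fast forward", "pause", "rewind"},
    "picture": {"print picture", "share picture"},
}

def resolve_command(utterance: str, touched_format: str):
    # Only the grammar for the touched item's format is consulted,
    # which both narrows the recognition search space and ties the
    # command to the touched item rather than to the whole system.
    grammar = GRAMMARS.get(touched_format, set())
    return utterance if utterance in grammar else None

print(resolve_command("fast forward", "video"))    # fast forward
print(resolve_command("fast forward", "picture"))  # None
```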

In at least one implementation, three general gestures may be recognized. First, a swipe-up gesture launches a contextual menu specific to the content that is the subject of the gesture. For example, an upward swipe focused on text may surface information discovered through a Bing® search, other controls, or other metadata. A second gesture, a downward swipe, may launch a voice-controlled agent that understands natural-language queries about the content's context. For example, while a video is being watched, a downward swipe that occurs with the spoken query "When is the next showing?" may launch the agent. The agent takes as input the context of the movie being displayed and knows to search for and display results pertaining only to that movie. Alternatively, the user may intend something more specific, such as a circled portion of an engineering drawing, asking "Find sensors that fit here." By circling and performing the downward swipe gesture, the agent is summoned and integrates the spoken query when searching for sensors. A more detailed discussion of multi-format canvases is provided in Appendix A and Appendix B.

A long press on content may lead to four basic operations: share, close, undo, and magnify. With respect to sharing, content can be shared in a variety of ways, such as by email, sharing services, micro-blogs, and the like. Closing closes the content item. Undo allows past actions to be undone, while magnifying leads to a contextual search that can surface relevant people, content, actions, or data. For example, a contextual search may identify when an expert associated with a piece of scientific research is present in the user's organization, and present a communication user-interface element to allow the expert to be contacted for advice. The magnify function may also summarize results in a particular field, such as by showing how a topic is covered in the literature.

FIG. 8 illustrates computing system 800, which is representative of any computing device, system, or collection of systems suitable for implementing computing system 106 illustrated in FIG. 1. Examples of computing system 800 include general-purpose computers, desktop computers, laptop computers, tablet computers, workstations, virtual computers, or any other type of suitable computing system, combination of systems, or variation thereof.

Computing system 800 includes processing system 801, storage system 803, software 805, communication interface 807, user interface 809, and display interface 811. Computing system 800 may optionally include additional devices, features, or functionality not discussed here for purposes of brevity. For example, computing system 106 may in some scenarios include integrated sensor devices and functionality, such as when the computing system is integrated with a sensor system.

Processing system 801 is operatively coupled with storage system 803, communication interface 807, user interface 809, and display interface 811. Processing system 801 loads and executes software 805 from storage system 803. When executed by computing system 800 in general, and by processing system 801 in particular, software 805 directs computing system 800 to operate as described herein for enhanced canvas process 200, as well as any variations thereof or other functionality described herein.

Referring still to FIG. 8, processing system 801 may comprise a microprocessor and other circuitry that retrieves and executes software 805 from storage system 803. Processing system 801 may be implemented within a single processing device but may also be distributed across multiple processing devices or sub-systems that cooperate in executing program instructions. Examples of processing system 801 include general-purpose central processing units, application-specific processors, and logic devices, as well as any other type of processing device, combination, or variation thereof.

Storage system 803 may comprise any computer-readable storage media readable by processing system 801 and capable of storing software 805. Storage system 803 may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Examples of storage media include random-access memory, read-only memory, magnetic disks, optical disks, flash memory, virtual memory and non-virtual memory, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other suitable storage media. In no case may the storage media be a propagated signal. In addition to storage media, in some implementations storage system 803 may also include communication media over which software 805 may be communicated internally or externally. Storage system 803 may be implemented as a single storage device but may also be implemented across multiple storage devices or sub-systems co-located or distributed relative to each other. Storage system 803 may comprise additional elements, such as a controller, capable of communicating with processing system 801.

Software 805 may be implemented in program instructions and, among other functions, when executed by computing system 800 in general or by processing system 801 in particular, may direct computing system 800 or processing system 801 to operate as described herein for enhanced canvas process 200. Software 805 may include additional processes, programs, or components, such as operating-system software or other application software. Software 805 may also comprise firmware or some other form of machine-readable processing instructions executable by processing system 801.

In general, software 805 may, when loaded into processing system 801 and executed, transform computing system 800 overall from a general-purpose computing system into a special-purpose computing system customized to facilitate enhanced presentation environments for each implementation as described herein. Indeed, encoding software 805 on storage system 803 may transform the physical structure of storage system 803. The specific transformation of the physical structure may depend on various factors in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the storage media of storage system 803 and whether the computer storage media are characterized as primary or secondary storage.

For example, if the computer storage media are implemented as semiconductor-based memory, software 805 may transform the physical state of the semiconductor memory when the program is encoded therein, such as by transforming the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory. A similar transformation may occur with respect to magnetic or optical media. Other transformations of physical media are possible without departing from the scope of the present description, with the foregoing examples provided only to facilitate this discussion.

It should be understood that computing system 800 is generally intended to represent a computing system with which software 805 is deployed and executed in order to implement enhanced canvas process 200 (and variations thereof). However, computing system 800 may also represent any computing system on which software 805 may be staged and from which software 805 may be distributed, transported, downloaded, or otherwise provided to yet another computing system for deployment and execution, or for yet additional distribution.

Referring again to the various implementations described above, through the operation of computing system 800 employing software 805, transformations may be performed with respect to enhanced canvas environment 100. As an example, in one state, items may be rendered and displayed on display system 109. When user 101 interacts with multi-format canvas 107 in a particular manner, such as by making a touch gesture, computing system 106 (in communication with display system 109) may render a response to the gesture for display by display system 109, thereby transforming multi-format canvas 107 to a second, different state.

Referring again to FIG. 8, communication interface 807 may include communication connections and devices that allow for communication between computing system 800 and other computing systems (not shown) over a communication network or collection of networks (not shown) or over the air. Examples of connections and devices that together allow for inter-system communication may include network interface cards, antennas, power amplifiers, RF circuitry, transceivers, and other communication circuitry. The connections and devices may communicate over communication media, such as metal, glass, air, or any other suitable communication media, to exchange communications with other computing systems or networks of systems. The aforementioned communication media, networks, connections, and devices are well known and need not be discussed at length here.

Optional user interface 809 may include a mouse, a keyboard, a voice input device, a touch input device for receiving touch gestures from a user, a motion input device for detecting non-touch gestures and other motions by a user, and other comparable input devices and associated processing elements capable of receiving user input from a user. Output devices such as a display, speakers, haptic devices, and other types of output devices may also be included in user interface 809. The aforementioned user interface components are well known and need not be discussed at length here.

Display interface 811 may include various connections and devices that allow for communication between computing system 800 and a display system over a communication link or collection of links or over the air. For example, computing system 106 may communicate with display system 109 by way of a display interface. Examples of connections and devices that together allow for inter-system communication may include various display ports, graphics cards, display cabling and connectors, and other circuitry. Display interface 811 communicates rendered responses to the display system for display, such as video and other images. In some implementations, the display system may be capable of accepting user input in the form of touch gestures, in which case display interface 811 may also be capable of receiving information corresponding to such gestures. The aforementioned connections and devices are well known and need not be discussed at length here.

As can be appreciated from the foregoing discussion, in at least one implementation a suitable computing system may execute software to facilitate enhanced canvas environments. When executing the software, in response to a gesture associated with an item displayed on a display surface, the computing system may identify an interaction model specific to a format of the item. The computing system may then identify a response to the gesture per the interaction model and render the response with respect to the item on the display surface.

To identify the interaction model, the computing system may identify the format of the item and select the interaction model from various interaction models associated with different formats. In some scenarios, the format of the item is one of those various formats.
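The format-to-model selection described above can be sketched as a simple lookup. This is a minimal illustration only; the format names and placeholder model classes below are assumptions and do not come from the disclosure.

```python
# Sketch (assumed names) of selecting an interaction model specific to
# an item's format from models associated with different formats.

class VideoModel:
    """Placeholder interaction model for video items."""

class DocumentModel:
    """Placeholder interaction model for document items."""

class GalleryModel:
    """Placeholder interaction model for photo-gallery items."""

# The various interaction models, keyed by the formats they serve.
MODELS_BY_FORMAT = {
    "video": VideoModel(),
    "document": DocumentModel(),
    "gallery": GalleryModel(),
}

def identify_interaction_model(item_format):
    # The item's format is assumed to be one of the keyed formats.
    return MODELS_BY_FORMAT[item_format]
```

A registry like this keeps the gesture-handling code format-agnostic: adding a new item format only means registering one more model.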

The computing system may render various items on the display surface. Each item may have an active state that is the same as that of every other item. In some cases, the items may be rendered in the context of a user interface. The user interface may be considered to include a foreground and a background. Accordingly, the active state may indicate that each item is active in either the foreground or the background of the user interface.

Each of the interaction models may define different directional gestures as corresponding to different responses from which the response is identified. In some implementations, at least some of the responses are unique to each interaction model, while at least some others are shared across the interaction models. Examples of the directional gestures may include a swipe-right gesture, a swipe-left gesture, a swipe-up gesture, and a swipe-down gesture. A first portion of the responses may correspond to the swipe-right and swipe-left gestures, while a second portion may correspond to the swipe-up and swipe-down gestures.
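One way to read the shared/unique split above is as two lookup tables consulted per gesture. The sketch below assumes illustrative gesture and response names (none appear in the disclosure): swipe-up/swipe-down responses are shared across every model, while swipe-right/swipe-left responses are unique to each model.

```python
# Responses shared across every interaction model (assumed names).
SHARED_RESPONSES = {
    "swipe_up": "bring_to_foreground",
    "swipe_down": "send_to_background",
}

# Responses unique to each format-specific interaction model.
UNIQUE_RESPONSES = {
    "video":    {"swipe_right": "seek_forward", "swipe_left": "seek_backward"},
    "document": {"swipe_right": "next_page",    "swipe_left": "previous_page"},
}

def identify_response(item_format, gesture):
    """Return the response a model defines for a directional gesture:
    model-unique mappings first, falling back to the shared ones."""
    unique = UNIQUE_RESPONSES.get(item_format, {})
    return unique.get(gesture, SHARED_RESPONSES.get(gesture))
```

Under this split, the same swipe-up gesture promotes any item regardless of format, while a swipe-right means "seek" on a video but "turn the page" on a document.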

In at least one implementation, the computing system may render a drawing in a user interface that includes a multi-format canvas. In response to a single-touch interaction made across the drawing, the computing system may render an erasure of only a portion of the drawing. In response to a multi-touch interaction made across the drawing, the computing system may render an erasure of the entire drawing. An example of the single-touch interaction includes a single digit dragged down across the drawing. An example of the multi-touch interaction includes at least three digits dragged down across the drawing. The erasure of only the portion of the drawing may include an erased vertical strip across the drawing corresponding to a path across the drawing made by the single-touch interaction.
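Representing the drawing as a grid of painted cells, the single-touch versus multi-touch erasure behavior might be sketched as follows. The grid representation, the three-finger threshold branch, and the strip width are assumptions made for illustration, not details of the disclosure.

```python
def apply_erase(drawing, finger_count, path_column, band_width=1):
    """Erase per the touch interaction: a single digit dragged down the
    drawing erases only a vertical strip along its path; at least three
    digits dragged across erase the drawing in its entirety."""
    rows, cols = len(drawing), len(drawing[0])
    if finger_count >= 3:
        # Multi-touch interaction: erase the entire drawing.
        return [[0] * cols for _ in range(rows)]
    # Single-touch interaction: erase a vertical strip centered on the
    # column traced by the dragged digit.
    lo = max(0, path_column - band_width)
    hi = min(cols, path_column + band_width + 1)
    return [[0 if lo <= c < hi else cell for c, cell in enumerate(row)]
            for row in drawing]
```

Note that a two-finger drag falls through to the single-touch branch here; the description only distinguishes one digit from three or more, so how an intermediate touch count behaves is an assumption.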

The functional block diagrams, operational sequences, and flow diagrams provided in the figures are representative of exemplary architectures, environments, and methodologies for performing novel aspects of the disclosure. While, for purposes of simplicity of explanation, the methodologies included herein may be in the form of a functional diagram, operational sequence, or flow diagram, and may be described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance therewith, occur in a different order and/or concurrently with other acts shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all acts illustrated in a methodology may be required for a novel implementation.

The included descriptions and figures depict specific implementations to teach those skilled in the art how to make and use the best mode. For the purpose of teaching inventive principles, some conventional aspects have been simplified or omitted. Those skilled in the art will appreciate that the features described above may be combined in various ways to form multiple implementations. As a result, the invention is not limited to the specific implementations described above, but only by the claims and their equivalents.

100‧‧‧Canvas environment
101‧‧‧User
102‧‧‧Arm
103‧‧‧Wall
105‧‧‧Floor
106‧‧‧Computing system
107‧‧‧Multi-format canvas
111‧‧‧Document
113‧‧‧Picture
115‧‧‧Gallery
117‧‧‧Video
121‧‧‧Swipe-right gesture
123‧‧‧Touch timeline

Claims (20)

1. An apparatus comprising: one or more computer-readable storage media; and program instructions stored on the one or more computer-readable storage media that, when executed by a processing system, direct the processing system to at least: in response to a gesture associated with an item displayed on a display surface, identify an interaction model specific to a format of the item; identify, per the interaction model, a response with which to respond to the gesture; and render the response with respect to the item on the display surface.

2. The apparatus of claim 1, wherein to identify the interaction model, the program instructions direct the processing system to identify the format of the item and select the interaction model from a plurality of interaction models associated with a plurality of formats, wherein the format comprises one of the plurality of formats.

3. The apparatus of claim 2, wherein the program instructions further direct the processing system to render a plurality of items on the display surface, each of the items having an active state that is the same as that of every other of the items, and wherein the items include the item associated with the gesture.

4. The apparatus of claim 3, wherein the program instructions further direct the processing system to render a user interface in which the items are displayed, wherein the user interface comprises a foreground and a background, and wherein the active state indicates that each of the items is active in either the foreground or the background of the user interface.
5. The apparatus of claim 4, wherein each of the interaction models defines a plurality of directional gestures as corresponding to a plurality of responses from which to identify the response, wherein at least a first portion of the responses is unique to each of the interaction models, and wherein at least a second portion of the responses is shared across each of the interaction models.

6. The apparatus of claim 5, wherein the directional gestures comprise a swipe-right gesture, a swipe-left gesture, a swipe-up gesture, and a swipe-down gesture.

7. The apparatus of claim 6, wherein the first portion of the responses corresponds to the swipe-right gesture and the swipe-left gesture, and wherein the second portion of the responses corresponds to the swipe-up gesture and the swipe-down gesture.

8. The apparatus of claim 1, further comprising a display system configured to accept the gesture through a touch interface and display the response to the gesture, and the processing system configured to execute the program instructions.
9. One or more computer-readable storage media having stored thereon program instructions for facilitating enhanced canvas environments that, when executed by a computing system, direct the computing system to at least: render a drawing in a user interface that includes a multi-format canvas; in response to a single-touch interaction made across the drawing, render an erasure of only a portion of the drawing; and in response to a multi-touch interaction made across the drawing, render an erasure of an entirety of the drawing.

10. The computer-readable storage media of claim 9, wherein the single-touch interaction comprises a single digit dragged across the drawing.

11. The computer-readable storage media of claim 10, wherein the multi-touch interaction comprises at least three digits dragged across the drawing.

12. The computer-readable storage media of claim 11, wherein the erasure of only the portion of the drawing comprises an erased vertical strip across the drawing corresponding to a path across the drawing made by the single-touch interaction.

13. A method for facilitating enhanced canvas environments, comprising: in response to a gesture associated with an item displayed on a display surface, identifying an interaction model specific to a format of the item; identifying, per the interaction model, a response to the gesture; and rendering the response with respect to the item on the display surface.
14. The method of claim 13, wherein identifying the interaction model comprises identifying the format of the item and selecting the interaction model from a plurality of interaction models associated with a plurality of formats, wherein the format comprises one of the plurality of formats.

15. The method of claim 14, further comprising rendering a plurality of items on the display surface, each of the items having an active state that is the same as that of every other of the items, and wherein the items include the item associated with the gesture.

16. The method of claim 15, further comprising rendering a user interface in which the items are displayed, wherein the user interface comprises a foreground and a background, and wherein the active state indicates that each of the items is active in either the foreground or the background of the user interface.

17. The method of claim 16, wherein each of the interaction models defines a plurality of directional gestures as corresponding to a plurality of responses from which to identify the response, wherein at least a first portion of the responses is unique to each of the interaction models, and wherein at least a second portion of the responses is shared across each of the interaction models.

18. The method of claim 17, wherein the directional gestures comprise a swipe-right gesture, a swipe-left gesture, a swipe-up gesture, and a swipe-down gesture.
19. The method of claim 18, wherein the first portion of the responses corresponds to the swipe-right gesture and the swipe-left gesture, and wherein the second portion of the responses corresponds to the swipe-up gesture and the swipe-down gesture.

20. The method of claim 13, further comprising accepting, in a display system, the gesture through a touch interface and displaying the response to the gesture.
TW103106787A 2013-03-03 2014-02-27 Enhanced canvas environments TW201502959A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US201361771900P 2013-03-03 2013-03-03

Publications (1)

Publication Number Publication Date
TW201502959A true TW201502959A (en) 2015-01-16

Family

ID=52718431

Family Applications (1)

Application Number Title Priority Date Filing Date
TW103106787A TW201502959A (en) 2013-03-03 2014-02-27 Enhanced canvas environments

Country Status (1)

Country Link
TW (1) TW201502959A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107728852A (en) * 2017-11-14 2018-02-23 苏州数艺网络科技有限公司 Interactive wall device based on electrically conductive ink


Similar Documents

Publication Publication Date Title
TWI609317B (en) Smart whiteboard interactions
JP5726916B2 (en) Multi-screen reduction and enlargement gestures
CN102754352B (en) Method and apparatus for providing information of multiple applications
US20050015731A1 (en) Handling data across different portions or regions of a desktop
EP3183640B1 (en) Device and method of providing handwritten content in the same
EP2608007A2 (en) Method and apparatus for providing a multi-touch interaction in a portable terminal
EP3491506B1 (en) Systems and methods for a touchscreen user interface for a collaborative editing tool
US20130132878A1 (en) Touch enabled device drop zone
US10146341B2 (en) Electronic apparatus and method for displaying graphical object thereof
KR20110081040A (en) Method and apparatus for operating content in a portable terminal having transparent display panel
EP2965181B1 (en) Enhanced canvas environments
US11379112B2 (en) Managing content displayed on a touch screen enabled device
US9927973B2 (en) Electronic device for executing at least one application and method of controlling said electronic device
US9372622B2 (en) Method for recording a track and electronic device using the same
MX2014002955A (en) Formula entry for limited display devices.
US20130127745A1 (en) Method for Multiple Touch Control Virtual Objects and System thereof
US20160132478A1 (en) Method of displaying memo and device therefor
US10970476B2 (en) Augmenting digital ink strokes
KR102551568B1 (en) Electronic apparatus and control method thereof
US20130205201A1 (en) Touch Control Presentation System and the Method thereof
KR20190141122A (en) How to Navigate a Panel of Displayed Content
TW201502959A (en) Enhanced canvas environments
JP5213794B2 (en) Information processing apparatus and information processing method
US10345957B2 (en) Proximity selector
US10552022B2 (en) Display control method, apparatus, and non-transitory computer-readable recording medium