TWI757872B - Augmented reality system and augmented reality display method integrated with motion sensor - Google Patents

Augmented reality system and augmented reality display method integrated with motion sensor

Info

Publication number
TWI757872B
Authority
TW
Taiwan
Prior art keywords
display
head
virtual object
frame
augmented reality
Prior art date
Application number
TW109131893A
Other languages
Chinese (zh)
Other versions
TW202213063A (en)
Inventor
黃詠証
Original Assignee
宏碁股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 宏碁股份有限公司 filed Critical 宏碁股份有限公司
Priority to TW109131893A priority Critical patent/TWI757872B/en
Application granted granted Critical
Publication of TWI757872B publication Critical patent/TWI757872B/en
Publication of TW202213063A publication Critical patent/TW202213063A/en

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

An augmented reality system and an augmented reality display method integrated with a motion sensor are provided. The augmented reality system includes a head-mounted-display apparatus and a computing apparatus. The head-mounted-display apparatus includes an image capturing device and a motion sensor. The i-th environmental frame is captured by the image capturing device, and a display position of a virtual object is determined according to the i-th environmental frame. A depth distance between a display and the head-mounted-display apparatus is obtained. Motion sensing data of the head-mounted-display apparatus is generated by the motion sensor. Before the (i+1)-th environmental frame is captured, the display position of the virtual object is adjusted according to the motion sensing data and the depth distance. The virtual object is displayed by the head-mounted-display apparatus according to the adjusted display position. The virtual object is displayed as being anchored to a screen bezel of the display.

Description

整合動作感測器的擴增實境系統與擴增實境顯示方法 Augmented reality system and augmented reality display method integrating a motion sensor

本發明是有關於一種擴增實境設備,且特別是有關於一種整合動作感測器的擴增實境系統與擴增實境顯示方法。 The present invention relates to an augmented reality device, and more particularly, to an augmented reality system and an augmented reality display method integrating motion sensors.

隨著科技的發展,擴增實境(Augmented Reality,AR)技術的應用越來越多,AR技術將虛擬的資訊應用到真實世界。 With the development of technology, the application of Augmented Reality (AR) technology is increasing, and AR technology applies virtual information to the real world.

另一方面,隨著資訊處理量的增加,單一螢幕的筆記型電腦已逐漸無法滿足工作者的需求。一般而言,位於辦公室內的使用者,可將筆記型電腦連接至另一台桌上型顯示器,以使用多螢幕顯示功能來提昇工作效率。但是,在外辦公的使用者無法隨身攜帶體積龐大的桌上型顯示器,因而較難以享受多螢幕顯示功能帶來的便利。 On the other hand, with the increase in the amount of information processing, single-screen notebook computers have gradually been unable to meet the needs of workers. Generally speaking, users in the office can connect the laptop to another desktop monitor to use the multi-screen display function to improve work efficiency. However, users who work outside cannot carry the bulky desktop monitors with them, so it is difficult to enjoy the convenience brought by the multi-screen display function.

有鑑於此，本發明提出一種整合動作感測器的擴增實境系統與擴增實境顯示方法，其可即時反應於使用者的操作狀態透過頭戴式顯示裝置顯示位於顯示器的顯示邊框旁邊的虛擬物件。 In view of this, the present invention proposes an augmented reality system and an augmented reality display method that integrate a motion sensor, which can respond in real time to the user's operating state and, through the head-mounted display device, display virtual objects located next to the display frame of a display.

本發明實施例提供一種擴增實境系統,其包括頭戴式顯示裝置以及計算機裝置。頭戴式顯示裝置包括影像擷取裝置與動作感測器,並用以顯示一虛擬物件。前述虛擬物件疊合顯示為錨定於一顯示器的顯示邊框上。計算機裝置連接頭戴式顯示裝置,並包括儲存裝置與處理器。前述處理器耦接儲存裝置,並經配置以執行下列步驟。透過影像擷取裝置擷取第i幀環境影像,並依據第i幀環境影像決定虛擬物件的顯示位置,其中i為大於0的整數。獲取顯示器相對於頭戴式顯示裝置的深度距離。藉由動作感測器產生頭戴式顯示裝置的動作感測資料。在影像擷取裝置擷取第(i+1)幀環境影像之前,依據動作感測資料與深度距離調整虛擬物件的顯示位置。藉由頭戴式顯示裝置依據調整後的顯示位置顯示虛擬物件。 Embodiments of the present invention provide an augmented reality system including a head-mounted display device and a computer device. The head-mounted display device includes an image capturing device and a motion sensor, and is used for displaying a virtual object. The aforesaid virtual objects are superimposed and displayed to be anchored on a display frame of a display. The computer device is connected to the head-mounted display device, and includes a storage device and a processor. The aforementioned processor is coupled to the storage device and configured to perform the following steps. Capture the ith frame of environment image through the image capture device, and determine the display position of the virtual object according to the ith frame of the environment image, wherein i is an integer greater than 0. Get the depth distance of the display relative to the HMD. The motion sensing data of the head-mounted display device is generated by the motion sensor. Before the image capturing device captures the (i+1)th frame of the environment image, the display position of the virtual object is adjusted according to the motion sensing data and the depth distance. The virtual objects are displayed by the head-mounted display device according to the adjusted display position.

本發明實施例提供一種擴增實境顯示方法，包括下列步驟。透過頭戴式顯示裝置上的影像擷取裝置擷取第i幀環境影像，並依據第i幀環境影像決定虛擬物件的顯示位置，其中i為大於0的整數。獲取顯示器相對於頭戴式顯示裝置的深度距離。藉由頭戴式顯示裝置上的動作感測器產生頭戴式顯示裝置的動作感測資料。在影像擷取裝置擷取第(i+1)幀環境影像之前，依據動作感測資料與深度距離調整虛擬物件的該顯示位置。藉由頭戴式顯示裝置依據調整後的顯示位置顯示虛擬物件。虛擬物件疊合顯示為錨定於顯示器的顯示邊框上。 An embodiment of the present invention provides an augmented reality display method, which includes the following steps. An i-th frame of an environment image is captured through an image capture device on a head-mounted display device, and a display position of a virtual object is determined according to the i-th frame of the environment image, where i is an integer greater than 0. A depth distance of a display relative to the head-mounted display device is obtained. Motion sensing data of the head-mounted display device is generated by a motion sensor on the head-mounted display device. Before the image capture device captures the (i+1)-th frame of the environment image, the display position of the virtual object is adjusted according to the motion sensing data and the depth distance. The virtual object is displayed by the head-mounted display device according to the adjusted display position. The virtual object is superimposed and displayed as anchored on a display frame of the display.

基於上述，於本發明的實施例中，可藉由頭戴式顯示裝置顯示虛擬物件來實現多螢幕顯示功能。並且，在擷取下一幀環境影像定位出虛擬物件的顯示位置之前，可透過頭戴式顯示裝置的動作感測資料來動態調整虛擬物件的顯示位置，好讓使用者可觀看到穩定地相連於主顯示器之顯示邊框上的虛擬物件。藉此，虛擬物件的錨定顯示不僅可讓使用者感受到多螢幕功能的便利，並且可提昇使用者觀看虛擬物件的觀看體驗。 Based on the above, in the embodiments of the present invention, a multi-screen display function can be realized by displaying virtual objects on the head-mounted display device. In addition, before the next frame of the environment image is captured to locate the display position of the virtual object, the display position can be dynamically adjusted based on the motion sensing data of the head-mounted display device, so that the user sees a virtual object that remains stably attached to the display frame of the main display. In this way, the anchored display of the virtual object not only lets the user enjoy the convenience of a multi-screen function, but also improves the user's viewing experience.

為讓本發明的上述特徵和優點能更明顯易懂,下文特舉實施例,並配合所附圖式作詳細說明如下。 In order to make the above-mentioned features and advantages of the present invention more obvious and easy to understand, the following embodiments are given and described in detail with the accompanying drawings as follows.

10:擴增實境系統 10: Augmented reality system

110:頭戴式顯示裝置 110: Head-mounted display device

120:計算機裝置 120: Computer device

111:影像擷取裝置 111: Image capture device

112:顯示器 112: Display

130:顯示器 130: Display

122:儲存裝置 122: Storage Device

123:處理器 123: Processor

E_L:左顯示邊框 E_L: left display border

E_T:上顯示邊框 E_T: top display border

E_R:右顯示邊框 E_R: Display border on the right

V_T、V_R、V_L:虛擬物件 V_T, V_R, V_L: virtual objects

S310~S350、S410~S433:步驟 S310~S350, S410~S433: Steps

圖1是依照本發明一實施例的擴增實境系統的示意圖。 FIG. 1 is a schematic diagram of an augmented reality system according to an embodiment of the present invention.

圖2A至圖2C是依照本發明一實施例的擴增實境系統的應用情境圖。 2A to 2C are application scenario diagrams of an augmented reality system according to an embodiment of the present invention.

圖3是依照本發明一實施例的擴增實境顯示方法的流程圖。 FIG. 3 is a flowchart of an augmented reality display method according to an embodiment of the present invention.

圖4是依照本發明一實施例的調整虛擬物件之顯示位置的流程圖。 FIG. 4 is a flowchart of adjusting the display position of a virtual object according to an embodiment of the present invention.

本發明的部份實施例接下來將會配合附圖來詳細描述,以下的描述所引用的元件符號,當不同附圖出現相同的元件符號將視為相同或相似的元件。這些實施例只是本發明的一部份,並未揭示所有本發明的可實施方式。更確切的說,這些實施例只是本發明的專利申請範圍中的方法與系統的範例。 Some embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Element symbols quoted in the following description will be regarded as the same or similar elements when the same element symbols appear in different drawings. These examples are only a part of the invention and do not disclose all possible embodiments of the invention. Rather, these embodiments are merely exemplary of methods and systems within the scope of the present invention.

圖1是依照本發明一實施例的擴增實境系統的示意圖。請參照圖1,擴增實境系統10包括頭戴式顯示裝置110以及計算機裝置120,其可為單一整合系統或分離式系統。具體而言,擴增實境(AR)系統10中的頭戴式顯示裝置110與計算機裝置120可以實作成一體式(all-in-one,AIO)頭戴式顯示器。於另一實施例中,計算機裝置120可實施為電腦系統,並經由有線傳輸介面或是無線傳輸介面與頭戴式顯示裝置110相連。舉例而言,擴增實境系統10可實施為一體式的AR眼鏡,或者實施為經由通訊介面相連結的AR眼鏡與電腦系統。 FIG. 1 is a schematic diagram of an augmented reality system according to an embodiment of the present invention. Referring to FIG. 1 , the augmented reality system 10 includes a head-mounted display device 110 and a computer device 120 , which can be a single integrated system or a separate system. Specifically, the head-mounted display device 110 and the computer device 120 in the augmented reality (AR) system 10 can be implemented as an all-in-one (AIO) head-mounted display. In another embodiment, the computer device 120 can be implemented as a computer system, and is connected to the head-mounted display device 110 via a wired transmission interface or a wireless transmission interface. For example, the augmented reality system 10 may be implemented as an integrated AR glasses, or implemented as AR glasses and a computer system connected through a communication interface.

擴增實境系統10用於向使用者提供擴增實境內容。需特別說明的是,擴增實境系統10中的頭戴式顯示裝置110用以顯示虛擬物件,且此虛擬物件會顯示為錨定於真實場景中的顯示器130的顯示邊框上。顯示器130例如為筆記型電腦、平板電腦或智慧型手機的顯示螢幕或桌上型顯示器,本發明對此不限制。換言之,當使用者配戴頭戴式顯示裝置110觀看真實場景中的顯示器130時,擴增實境系統10所提供的虛擬物件可作為輔助螢幕。 The augmented reality system 10 is used to provide augmented reality content to a user. It should be noted that the head-mounted display device 110 in the augmented reality system 10 is used to display virtual objects, and the virtual objects are displayed as anchored on the display frame of the display 130 in the real scene. The display 130 is, for example, a display screen of a notebook computer, a tablet computer or a smart phone or a desktop display, which is not limited in the present invention. In other words, when the user wears the head-mounted display device 110 to watch the display 130 in the real scene, the virtual object provided by the augmented reality system 10 can be used as an auxiliary screen.

頭戴式顯示裝置110包括影像擷取裝置111、顯示器112以及動作感測器113。影像擷取裝置111用以擷取環境影像並且包括具有透鏡以及感光元件的攝像鏡頭。感光元件用以感測進入透鏡的光線強度，進而產生影像。感光元件可以例如是電荷耦合元件(charge coupled device,CCD)、互補性氧化金屬半導體(complementary metal-oxide semiconductor,CMOS)元件或其他元件，本發明不在此設限。在一實施例中，影像擷取裝置111固定設置於頭戴式顯示裝置110上，並用於拍攝位於頭戴式顯示裝置110前方的實際場景。舉例而言，當使用者配戴頭戴式顯示裝置110時，影像擷取裝置111可位於使用者雙眼之間或位於某一眼外側而朝使用者前方的實際場景進行拍攝動作。 The head-mounted display device 110 includes an image capture device 111, a display 112, and a motion sensor 113. The image capture device 111 is used to capture environment images and includes a camera lens having a lens and a photosensitive element. The photosensitive element is used to sense the intensity of light entering the lens so as to generate an image. The photosensitive element may be, for example, a charge coupled device (CCD), a complementary metal-oxide semiconductor (CMOS) element, or another element; the invention is not limited in this regard. In one embodiment, the image capture device 111 is fixedly disposed on the head-mounted display device 110 and is used to capture the actual scene in front of the head-mounted display device 110. For example, when the user wears the head-mounted display device 110, the image capture device 111 may be located between the user's eyes or outside one of the eyes, and captures the actual scene in front of the user.

顯示器112是具有一定程度的光線穿透性的顯示裝置，使用者觀看時能夠呈現出相對於觀看者另一側的實際場景。顯示器112可以液晶、有機發光二極體、電子墨水或是投影方式等顯示技術顯示虛擬物件，其具有半透明或是透明的光學鏡片。因此，使用者透過顯示器112所觀看到的內容將會是疊加虛擬物件的擴增實境場景。在一實施例中，顯示器112可實作為擴增實境眼鏡的鏡片。 The display 112 is a display device with a certain degree of light transmittance, so that when the user looks through it, the actual scene on its other side is presented to the viewer. The display 112 can display virtual objects using display technologies such as liquid crystal, organic light-emitting diode, electronic ink, or projection, and has translucent or transparent optical lenses. Therefore, the content viewed by the user through the display 112 is an augmented reality scene with superimposed virtual objects. In one embodiment, the display 112 may be implemented as the lenses of augmented reality glasses.

動作感測器113例如是六軸感測器(可進行方向及加速度的感測),可使用的感測器種類包括重力感測器(g-sensor)、陀螺儀(gyroscope)、加速度計(accelerometer)、電子羅盤(Electronic Compass)、測高儀(altitude meter)或是其他適合的動作感測器或上述感測器的組合搭配。 The motion sensor 113 is, for example, a six-axis sensor (which can sense direction and acceleration). The types of sensors that can be used include a gravity sensor (g-sensor), a gyroscope (gyroscope), an accelerometer ( accelerometer), electronic compass (Electronic Compass), altimeter (altitude meter) or other suitable motion sensor or combination of the above sensors.

然而,除了影像擷取裝置111、顯示器112以及動作感測器113之外,頭戴式顯示裝置110更可包括未繪示於圖1的元件,像是揚聲器、控制器以及各式通信介面等等,本發明對此不限制。 However, in addition to the image capture device 111 , the display 112 and the motion sensor 113 , the head-mounted display device 110 may further include components not shown in FIG. 1 , such as speakers, controllers, and various communication interfaces. etc., the present invention is not limited thereto.

另一方面,計算機裝置120可包括儲存裝置122,以及處理器123。儲存裝置122用以儲存資料與供處理器123存取的程式碼(例如作業系統、應用程式、驅動程式)等資料,其可以例如是任意型式的固定式或可移動式隨機存取記憶體(random access memory,RAM)、唯讀記憶體(read-only memory,ROM)、快閃記憶體(flash memory)或其組合。 On the other hand, the computer device 120 may include a storage device 122 , and a processor 123 . The storage device 122 is used to store data and data such as code (such as operating system, application program, driver) for the processor 123 to access, which can be, for example, any type of fixed or removable random access memory ( random access memory (RAM), read-only memory (ROM), flash memory (flash memory), or a combination thereof.

處理器123耦接儲存裝置122,例如是中央處理單元(central processing unit,CPU)、應用處理器(application processor,AP),或是其他可程式化之一般用途或特殊用途的微處理器(microprocessor)、數位訊號處理器(digital signal processor,DSP)、影像訊號處理器(image signal processor,ISP)、圖形處理器(graphics processing unit,GPU)或其他類似裝置、積體電路及其組合。處理器123可存取並執行記錄在儲存裝置122中的程式碼與軟體元件,以實現本發明實施例中的擴增實境顯示方法。 The processor 123 is coupled to the storage device 122, such as a central processing unit (CPU), an application processor (AP), or other programmable general-purpose or special-purpose microprocessors (microprocessors). ), digital signal processor (DSP), image signal processor (ISP), graphics processing unit (GPU) or other similar devices, integrated circuits and combinations thereof. The processor 123 can access and execute the program codes and software components recorded in the storage device 122, so as to realize the augmented reality display method in the embodiment of the present invention.

為了方便說明，以下將以計算機裝置120實施為包括內建的顯示器130且經由習知通訊介面與頭戴式顯示裝置110相連的電腦系統為範例進行說明。具體而言，於一實施例中，計算機裝置120可將相關的AR內容提供予頭戴式顯示裝置110，再由頭戴式顯示裝置110呈現予使用者觀看。舉例而言，計算機裝置120實施為筆記型電腦、智慧型手機、平板電腦、電子書、遊戲機等具有顯示功能的電子裝置，本發明並不對此限制。顯示器130可以是液晶顯示器(Liquid Crystal Display,LCD)、發光二極體(Light Emitting Diode,LED)顯示器、有機發光二極體(Organic Light Emitting Diode,OLED)等各類型的顯示器，本發明對此不限制。 For the convenience of description, the following takes as an example the computer device 120 implemented as a computer system that includes a built-in display 130 and is connected to the head-mounted display device 110 via a conventional communication interface. Specifically, in one embodiment, the computer device 120 can provide the relevant AR content to the head-mounted display device 110, and the head-mounted display device 110 then presents it for the user to view. For example, the computer device 120 may be implemented as an electronic device with a display function, such as a notebook computer, a smart phone, a tablet computer, an e-book reader, or a game console, which is not limited in the present invention. The display 130 may be any type of display, such as a liquid crystal display (LCD), a light-emitting diode (LED) display, or an organic light-emitting diode (OLED) display; the invention is not limited in this regard.

圖2A至圖2C是依照本發明一實施例的擴增實境系統的應用情境圖。請參照圖2A至圖2C,當使用者在配戴頭戴式顯示裝置110的情況下觀看真實場景中的顯示器130時,影像擷取裝置111會朝顯示器130拍攝環境影像。需說明的是,顯示器130之螢幕邊緣或螢幕角落呈現有輔助標記,而計算機裝置120可依據環境影像的輔助標記來定位顯示器130的顯示邊框,並依據定位結果決定虛擬物件的顯示參數,像是顯示邊界、顯示尺寸或顯示位置等等,致使虛擬物件可呈現為錨定於顯示器130的顯示邊框上。於一實施例中,輔助標記可以實施為貼附於顯示器130上的貼紙,或由顯示器130自行顯示輔助標記。 2A to 2C are application scenario diagrams of an augmented reality system according to an embodiment of the present invention. Referring to FIGS. 2A to 2C , when the user watches the display 130 in a real scene while wearing the head-mounted display device 110 , the image capturing device 111 captures an environment image toward the display 130 . It should be noted that auxiliary marks are displayed on the edge or corner of the screen of the display 130, and the computer device 120 can locate the display frame of the display 130 according to the auxiliary marks of the environment image, and determine the display parameters of the virtual objects according to the positioning results, such as Display boundaries, display sizes, or display positions, etc., such that virtual objects may appear anchored to the display bezels of display 130 . In one embodiment, the auxiliary mark may be implemented as a sticker attached to the display 130 , or the display 130 displays the auxiliary mark by itself.
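
As a rough illustration of how corner markers such as C1-C4 could be turned into display parameters for the anchored panels, the following Python sketch computes the display's bounding box in image coordinates and places three virtual panels flush against its left, top, and right edges. The function name, the rectangle convention, and the fixed panel width are assumptions for illustration only, not details taken from the patent.

```python
def panel_rects_from_corners(c1, c2, c3, c4, panel_width=200):
    """Derive the display's bounding box from corner markers C1..C4 and place
    virtual panels V_L, V_T, V_R flush against its left, top, and right bezels.

    Each corner is an (x, y) pixel coordinate; each returned rect is (x, y, w, h).
    """
    xs = [p[0] for p in (c1, c2, c3, c4)]
    ys = [p[1] for p in (c1, c2, c3, c4)]
    left, right, top, bottom = min(xs), max(xs), min(ys), max(ys)
    width, height = right - left, bottom - top
    return {
        "V_L": (left - panel_width, top, panel_width, height),  # anchored to the left bezel
        "V_T": (left, top - panel_width, width, panel_width),   # anchored to the top bezel
        "V_R": (right, top, panel_width, height),               # anchored to the right bezel
    }
```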

基此，當使用者透過頭戴式顯示裝置110的顯示器112觀看真實場景中的顯示器130時，使用者可看到疊加虛擬物件的實際場景，且虛擬物件顯示為錨定於顯示器130的顯示邊框上。更具體而言，虛擬物件會顯示為固定連接於顯示器130的上側、左側或右側，而不會遮蔽到顯示器130的顯示內容。 Accordingly, when the user views the display 130 in the real scene through the display 112 of the head-mounted display device 110, the user sees the actual scene with superimposed virtual objects, and the virtual objects are displayed as anchored on the display frame of the display 130. More specifically, the virtual objects are displayed as fixedly connected to the upper side, the left side, or the right side of the display 130 without obscuring the display content of the display 130.

如圖2A至圖2C的範例所示，當使用者透過頭戴式顯示裝置110的顯示器112觀看顯示器130時，使用者可看到自顯示器130的上顯示邊框E_T、左顯示邊框E_L與右顯示邊框E_R向外展開的虛擬物件V_T、V_R、V_L。虛擬物件V_T、V_R、V_L可用以提供各種資訊給使用者，例如是視窗、文件、影像、桌面或執行應用程式生成的視覺輸出等等。因此，當使用者透過頭戴式顯示裝置110觀看顯示器130時，可享受到多螢幕顯示功能帶來的便利。然而，圖2A至圖2C僅為一示範說明，本發明對於虛擬物件的數量與其錨定的顯示邊框並不限制。 As shown in the examples of FIGS. 2A to 2C, when the user views the display 130 through the display 112 of the head-mounted display device 110, the user can see virtual objects V_T, V_R, and V_L extending outward from the upper display frame E_T, the left display frame E_L, and the right display frame E_R of the display 130. The virtual objects V_T, V_R, and V_L can be used to provide various kinds of information to the user, such as windows, documents, images, desktops, or visual output generated by running applications. Therefore, when the user views the display 130 through the head-mounted display device 110, the user can enjoy the convenience brought by the multi-screen display function. However, FIGS. 2A to 2C are only an exemplary illustration, and the present invention does not limit the number of virtual objects or the display frames to which they are anchored.

此外，參照圖2A與圖2C可知，用以定位出顯示器130之顯示邊框的輔助標記可包括位於顯示器130之螢幕邊緣的標記線B1、B2、B3。參照圖2B可知，用以定位出顯示器130之顯示邊框的輔助標記可包括位於顯示器130之多個螢幕角落的多個標記點C1、C2、C3、C4。亦即，計算機裝置120可藉由辨識環境影像中的標記線B1、B2、B3或標記點C1、C2、C3、C4來定位出顯示器130於AR座標系統下的位置資訊與深度資訊，從而決定虛擬物件V_T、V_R、V_L的顯示位置。 In addition, as shown in FIGS. 2A and 2C, the auxiliary marks used to locate the display frame of the display 130 may include marking lines B1, B2, and B3 located at the screen edges of the display 130. As shown in FIG. 2B, the auxiliary marks used to locate the display frame of the display 130 may include a plurality of marking points C1, C2, C3, and C4 located at the screen corners of the display 130. That is, the computer device 120 can locate the position information and depth information of the display 130 in the AR coordinate system by identifying the marking lines B1, B2, and B3 or the marking points C1, C2, C3, and C4 in the environment image, so as to determine the display positions of the virtual objects V_T, V_R, and V_L.

需說明的是，影像擷取裝置111可定時地拍攝環境影像(例如以30Hz的擷取幀率來產生環境影像)，而計算機裝置120可依據環境影像重複地計算顯示器130於AR座標系統下的位置資訊與深度資訊，而持續更新虛擬物件V_T、V_R、V_L的顯示參數。藉此，在滿足顯示虛擬物件V_T、V_R、V_L之條件的情況下，即便使用者的位置改變或其頭部轉動，虛擬物件V_T、V_R、V_L依然可顯示為錨定於顯示器130的顯示邊框上。 It should be noted that the image capture device 111 can capture environment images periodically (for example, generating environment images at a capture frame rate of 30 Hz), and the computer device 120 can repeatedly calculate the position information and depth information of the display 130 in the AR coordinate system according to the environment images, thereby continuously updating the display parameters of the virtual objects V_T, V_R, and V_L. In this way, as long as the conditions for displaying the virtual objects V_T, V_R, and V_L are satisfied, the virtual objects V_T, V_R, and V_L remain displayed as anchored on the display frame of the display 130 even if the user's position changes or the user's head turns.

值得一提的是，於一實施例中，影像擷取裝置111可依序先後擷取第i幀環境影像與第(i+1)幀環境影像，其中i為大於0的整數。計算機裝置120依據第i幀環境影像決定虛擬物件的顯示位置之後，在依據第(i+1)幀環境影像決定虛擬物件的顯示位置之前，計算機裝置120可依據動作感測器113所產生的動作感測資料動態調整虛擬物件的顯示位置，以避免虛擬物件的顯示受限於影像擷取幀率而出現不順暢的視覺感受。 It is worth mentioning that, in one embodiment, the image capture device 111 may capture the i-th frame of the environment image and the (i+1)-th frame of the environment image in sequence, where i is an integer greater than 0. After the computer device 120 determines the display position of the virtual object according to the i-th frame of the environment image and before it determines the display position according to the (i+1)-th frame of the environment image, the computer device 120 may dynamically adjust the display position of the virtual object according to the motion sensing data generated by the motion sensor 113, so as to prevent the display of the virtual object from appearing unsmooth due to being limited by the image capture frame rate.
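
The interleaving described above might be organized as in the following sketch, in which marker-based localization runs once per captured frame and motion-based corrections are applied to the anchored position in between frames. The camera, IMU, and renderer objects and the helper callables are placeholders injected by the caller; none of these names come from the patent.

```python
def run_anchor_loop(camera, imu, renderer, locate_anchor, estimate_depth, adjust_position):
    """Alternate marker-based anchoring (per frame) with IMU-based adjustment (between frames).

    locate_anchor(frame)            -> display position of the virtual object (S310)
    estimate_depth(frame)           -> depth distance of the display (S320)
    adjust_position(pos, motion, d) -> position updated from motion data and depth (S340)
    """
    while True:
        frame = camera.capture()            # i-th environment frame
        position = locate_anchor(frame)     # S310: anchor from the auxiliary marks
        depth = estimate_depth(frame)       # S320: display-to-HMD depth distance
        renderer.draw(position)             # S350: show the anchored virtual object
        # S330/S340: until frame (i+1) arrives, refine the position from motion data only.
        for motion_sample in imu.samples_until_next_frame():
            position = adjust_position(position, motion_sample, depth)
            renderer.draw(position)
```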

以下即搭配擴增實境系統10的各元件列舉實施例,以說明擴增實境顯示方法的詳細步驟。 The following examples are listed with the elements of the augmented reality system 10 to illustrate the detailed steps of the augmented reality display method.

圖3是依照本發明一實施例的擴增實境顯示方法的流程圖。請參照圖1與圖3，本實施例的方式適用於上述實施例中的擴增實境系統10，以下即搭配擴增實境系統10中的各項元件說明本實施例之擴增實境顯示方法的詳細步驟。 FIG. 3 is a flowchart of an augmented reality display method according to an embodiment of the present invention. Referring to FIG. 1 and FIG. 3, the method of this embodiment is applicable to the augmented reality system 10 of the above embodiment. The detailed steps of the augmented reality display method of this embodiment are described below in conjunction with the elements of the augmented reality system 10.

於步驟S310，處理器123透過頭戴式顯示裝置110上的影像擷取裝置111擷取第i幀環境影像，並依據第i幀環境影像決定虛擬物件的顯示位置。環境影像為位於使用者周遭的實際場景的影像。詳細來說，實際場景的影像關聯於影像擷取裝置111的視野範圍。於一實施例中，影像擷取裝置111可依據一擷取幀率擷取環境影像。於一實施例中，影像擷取裝置111可透過有線傳輸介面或無線傳輸介面將環境影像傳送至計算機裝置120。基於前述可知，處理器123可依據第i幀環境影像中的輔助標記定位出顯示器130於AR座標系統中的位置資訊，並據以決定虛擬物件的顯示位置。 In step S310, the processor 123 captures the i-th frame of the environment image through the image capture device 111 on the head-mounted display device 110, and determines the display position of the virtual object according to the i-th frame of the environment image. The environment image is an image of the actual scene around the user. In detail, the image of the actual scene is associated with the field of view of the image capture device 111. In one embodiment, the image capture device 111 may capture environment images at a capture frame rate. In one embodiment, the image capture device 111 may transmit the environment images to the computer device 120 through a wired or wireless transmission interface. Based on the foregoing, the processor 123 can locate the position information of the display 130 in the AR coordinate system according to the auxiliary mark in the i-th frame of the environment image, and determine the display position of the virtual object accordingly.

於一實施例中，當處理器123自第i幀環境影像辨識出顯示器130上的輔助標記，虛擬物件便可持續顯示於顯示器130的周圍。於一實施例中，反應於第i幀環境影像中輔助標記位於第i幀環境影像中的預定範圍內，處理器123依據第i幀環境影像決定虛擬物件的顯示位置，並控制頭戴式顯示裝置110基於顯示位置顯示虛擬物件。反應於第i幀環境影像中輔助標記未位於第i幀環境影像中的預定範圍內，處理器123控制頭戴式顯示裝置110不顯示虛擬物件。舉圖2A為例，第i幀環境影像可劃分為尺寸相同的左區塊與右區塊。當第i幀環境影像中的標記線B3位於第i幀環境影像的左區塊(即預定範圍)內時，代表使用者往顯示器130右側的方向觀看，因而處理器123可控制頭戴式顯示裝置110基於虛擬物件的顯示位置顯示虛擬物件V_R。換言之，當使用者頭部轉動至一定程度時，虛擬物件才會顯示於顯示器130的一側。 In one embodiment, once the processor 123 recognizes the auxiliary mark on the display 130 from the i-th frame of the environment image, the virtual object can be continuously displayed around the display 130. In one embodiment, in response to the auxiliary mark in the i-th frame of the environment image being located within a predetermined range of that frame, the processor 123 determines the display position of the virtual object according to the i-th frame of the environment image, and controls the head-mounted display device 110 to display the virtual object based on the display position. In response to the auxiliary mark not being located within the predetermined range of the i-th frame of the environment image, the processor 123 controls the head-mounted display device 110 not to display the virtual object. Taking FIG. 2A as an example, the i-th frame of the environment image can be divided into a left block and a right block of the same size. When the marking line B3 in the i-th frame of the environment image is located within the left block (i.e., the predetermined range) of the frame, it means that the user is looking toward the right side of the display 130, so the processor 123 can control the head-mounted display device 110 to display the virtual object V_R based on the display position of the virtual object. In other words, the virtual object is displayed on one side of the display 130 only when the user's head has turned to a certain degree.
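
A minimal sketch of the left-block check from the FIG. 2A example is given below; it assumes the pixel x-coordinate of the detected marking line B3 is already known, and the half-image split is taken from the example rather than being a required design.

```python
def side_panel_to_show(marker_x: float, image_width: int):
    """Decide whether to show the right-hand panel V_R for the i-th environment frame.

    If the marking line B3 appears in the left half of the frame (the predetermined
    range in the FIG. 2A example), the user is looking toward the right side of the
    display, so V_R should be displayed; otherwise no side panel is shown.
    """
    in_left_block = marker_x < image_width / 2
    return "V_R" if in_left_block else None
```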

於步驟S320，處理器123獲取顯示器130相對於頭戴式顯示裝置110的深度距離。於一實施例中，藉由訊號發射器(未繪示)主動發出光源、紅外線、超音波、雷射等作為訊號搭配時差測距技術(time-of-flight,ToF)，處理器123可獲取顯示器130相對於頭戴式顯示裝置110的深度距離。於一實施例中，可透過影像擷取裝置111與另一影像感測器以不同視角同時擷取其前方的兩張影像，以利用兩張影像的視差來計算顯示器130相對於頭戴式顯示裝置110的深度距離。 In step S320, the processor 123 obtains the depth distance of the display 130 relative to the head-mounted display device 110. In one embodiment, by having a signal transmitter (not shown) actively emit a light source, infrared light, ultrasound, a laser, or the like as a signal in combination with time-of-flight (ToF) ranging, the processor 123 can obtain the depth distance of the display 130 relative to the head-mounted display device 110. In one embodiment, the image capture device 111 and another image sensor may simultaneously capture two images of the scene in front of them from different viewing angles, and the disparity between the two images is used to calculate the depth distance of the display 130 relative to the head-mounted display device 110.

值得一提的是,於一實施例中,處理器123可依據第i幀環境影像中輔助標記的成像尺寸,估測顯示器130相對於頭戴式顯示裝置110的深度距離。具體而言,當影像擷取裝置111與輔助標記相距標準深度並拍攝輔助標記時,輔助標記於此拍攝影像中的尺寸即為標準尺寸。標準尺寸與標準深度可依據事前校正程序而產生,並記錄於儲存裝置122中。因此,藉由比較第i幀環境影像中輔助標記的成像尺寸以及標準尺寸,處理器123可估測出取顯示器130相對於頭戴式顯示裝置110的深度距離。 It is worth mentioning that, in one embodiment, the processor 123 can estimate the depth distance of the display 130 relative to the head-mounted display device 110 according to the imaging size of the auxiliary marker in the ith frame of the environment image. Specifically, when the image capturing device 111 is at a standard depth from the auxiliary mark and shoots the auxiliary mark, the size of the auxiliary mark in the captured image is the standard size. The standard size and standard depth can be generated according to a pre-calibration procedure and recorded in the storage device 122 . Therefore, the processor 123 can estimate the depth distance of the display 130 relative to the head-mounted display device 110 by comparing the imaging size of the auxiliary marker in the ith frame of the environment image with the standard size.

舉例而言,當輔助標記為位於顯示器130之多個螢幕角落的多個標記點(如圖2B所示的標記點C1~C4),則輔助標記的成像尺寸包括標記點的直徑。當輔助標記為位於顯示器130之螢幕邊緣的多個標記線(如圖2A所示的標記線B1~B3),則輔助標記的成像尺寸包括標記線的長度。 For example, when the auxiliary mark is a plurality of mark points located at the corners of a plurality of screens of the display 130 (the mark points C1 - C4 shown in FIG. 2B ), the imaging size of the auxiliary mark includes the diameter of the mark point. When the auxiliary mark is a plurality of marking lines located at the edge of the screen of the display 130 (marking lines B1-B3 shown in FIG. 2A ), the imaging size of the auxiliary mark includes the length of the marking line.

於一實施例中，處理器123可獲取輔助標記的標準尺寸與標準深度之間的比例值。接著，處理器123可依據比例值與第i幀環境影像中輔助標記的成像尺寸，估測顯示器130相對於頭戴式顯示裝置110的深度距離。舉例而言，處理器123可先獲取標記線的標準長度與標準深度之間的比例值，再依據此比例值與第i幀環境影像中標記線的成像長度計算顯示器130相對於頭戴式顯示裝置110的深度距離。處理器123可以下列公式(1)計算顯示器130相對於頭戴式顯示裝置110的深度距離。 In one embodiment, the processor 123 may obtain the ratio between the standard size of the auxiliary mark and a standard depth. Then, the processor 123 may estimate the depth distance of the display 130 relative to the head-mounted display device 110 according to the ratio and the imaging size of the auxiliary mark in the i-th frame of the environment image. For example, the processor 123 may first obtain the ratio between the standard length of the marking line and the standard depth, and then calculate the depth distance of the display 130 relative to the head-mounted display device 110 according to this ratio and the imaged length of the marking line in the i-th frame of the environment image. The processor 123 may calculate the depth distance of the display 130 relative to the head-mounted display device 110 using the following formula (1).

d = (D1/X1)*x 公式(1)，其中，d為顯示器130相對於頭戴式顯示裝置110的深度距離；D1為標準深度；X1為標準長度；x為第i幀環境影像中輔助標記的成像長度。 d = (D1/X1)*x Formula (1), where d is the depth distance of the display 130 relative to the head-mounted display device 110, D1 is the standard depth, X1 is the standard length, and x is the imaged length of the auxiliary mark in the i-th frame of the environment image.
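
A literal transcription of formula (1) as a small Python helper is shown below; the calibration constants are illustrative values, not figures from the patent.

```python
def depth_from_marker_length(imaged_length_px: float,
                             standard_length_px: float = 120.0,  # X1: marker length at calibration
                             standard_depth_cm: float = 50.0) -> float:  # D1: calibration depth
    """Estimate the display-to-HMD depth distance d = (D1 / X1) * x per formula (1)."""
    if imaged_length_px <= 0:
        raise ValueError("auxiliary mark not detected in this frame")
    return (standard_depth_cm / standard_length_px) * imaged_length_px
```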

接著，於步驟S330，處理器123可藉由動作感測器113產生頭戴式顯示裝置110的動作感測資料。動作感測資料可包括速度、加速度、角加速度、壓力、磁力等等感測資料。於一實施例中，動作感測資料可包括對應於三座標軸向(X軸、Y軸與Z軸)的加速度與角速度。 Next, in step S330, the processor 123 may generate motion sensing data of the head-mounted display device 110 through the motion sensor 113. The motion sensing data may include sensing data such as velocity, acceleration, angular acceleration, pressure, and magnetic force. In one embodiment, the motion sensing data may include accelerations and angular velocities corresponding to the three coordinate axes (the X-axis, the Y-axis, and the Z-axis).

於步驟S340，在影像擷取裝置111擷取第(i+1)幀環境影像之前，處理器123可依據動作感測資料與深度距離調整虛擬物件的顯示位置。詳細而言，處理器123可依據動作感測器113產生的動作感測資料計算出頭戴式顯示裝置110相對於三座標軸向上的移動量與旋轉量。舉例而言，處理器123可積分動作感測資料中的角速度而計算出旋轉角度，或者處理器123可對動作感測資料中的加速度進行兩次積分而獲取移動距離。於是，於一實施例中，處理器123可依據頭戴式顯示裝置110的移動量來調整虛擬物件的顯示位置。於一實施例中，處理器123可依據頭戴式顯示裝置110的旋轉量與顯示器130的深度距離來調整虛擬物件的顯示位置。基此，反應於使用者的頭部轉動或移動，處理器123會更新虛擬物件於AR座標系統中的顯示位置，而因此一併更新輸出畫面中虛擬物件的顯示位置。 In step S340, before the image capture device 111 captures the (i+1)-th frame of the environment image, the processor 123 may adjust the display position of the virtual object according to the motion sensing data and the depth distance. In detail, the processor 123 can calculate the amount of movement and the amount of rotation of the head-mounted display device 110 with respect to the three coordinate axes according to the motion sensing data generated by the motion sensor 113. For example, the processor 123 may integrate the angular velocity in the motion sensing data to calculate a rotation angle, or the processor 123 may integrate the acceleration in the motion sensing data twice to obtain a movement distance. Thus, in one embodiment, the processor 123 may adjust the display position of the virtual object according to the amount of movement of the head-mounted display device 110. In one embodiment, the processor 123 may adjust the display position of the virtual object according to the amount of rotation of the head-mounted display device 110 and the depth distance of the display 130. Accordingly, in response to the turning or movement of the user's head, the processor 123 updates the display position of the virtual object in the AR coordinate system, and therefore also updates the display position of the virtual object in the output image.
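
A rough sketch of how rotation and movement increments could be obtained from the motion sensing data by integration is shown below; it assumes gyroscope readings in rad/s and gravity-compensated accelerometer readings in m/s² sampled at a fixed interval, and uses simple Euler integration purely for illustration.

```python
import numpy as np

def integrate_motion(gyro_samples, accel_samples, dt):
    """Integrate IMU samples collected between two environment frames.

    gyro_samples  : (N, 3) angular velocities about the X/Y/Z axes in rad/s
    accel_samples : (N, 3) linear accelerations along the X/Y/Z axes in m/s^2
    dt            : sampling interval in seconds
    Returns (rotation_angles, displacement), each an array of shape (3,).
    """
    gyro = np.asarray(gyro_samples, dtype=float)
    accel = np.asarray(accel_samples, dtype=float)
    rotation_angles = gyro.sum(axis=0) * dt       # single integration of angular velocity
    velocity = np.cumsum(accel, axis=0) * dt      # first integration of acceleration
    displacement = velocity.sum(axis=0) * dt      # second integration gives the movement
    return rotation_angles, displacement
```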

於步驟S350,藉由頭戴式顯示裝置110依據調整後的顯示位置顯示虛擬物件。至少一虛擬物件顯示為錨定於顯示器130的至少一顯示邊框上。詳細而言,透過利用已知的幾何向量投影演算法,處理器123可依據虛擬物件於AR座標系統中的顯示位置產生提供給頭戴式顯示裝置110的輸出畫面。基此,當頭戴式顯示裝置110依據處理器123提供的輸出畫面進行顯示時,使用者可看到錨定於顯示器130之顯示邊框上的虛擬物件。虛擬物件的顯示位置會依據動作感測資料而調整,但視覺上虛擬物件不會反應使用者頭部的移動或轉動而與顯示器130的顯示邊框分離。 In step S350, the virtual object is displayed by the head-mounted display device 110 according to the adjusted display position. At least one virtual object is displayed anchored on at least one display frame of the display 130 . Specifically, by using a known geometric vector projection algorithm, the processor 123 can generate an output image provided to the head-mounted display device 110 according to the display position of the virtual object in the AR coordinate system. Based on this, when the head-mounted display device 110 displays the output image provided by the processor 123 , the user can see the virtual objects anchored on the display frame of the display 130 . The display position of the virtual object will be adjusted according to the motion sensing data, but visually the virtual object will not be separated from the display frame of the display 130 in response to the movement or rotation of the user's head.

圖4是依照本發明一實施例的調整虛擬物件之顯示位置的流程圖。請參照圖1與圖4,圖4所示的流程為圖3步驟S340的一實施例。本實施例的方式適用於上述實施例中的擴增實境系統10,以下即搭配擴增實境系統10中的各項元件說明本實施例之擴增實境顯示方法的詳細步驟。 FIG. 4 is a flowchart of adjusting the display position of a virtual object according to an embodiment of the present invention. Please refer to FIG. 1 and FIG. 4 . The process shown in FIG. 4 is an embodiment of step S340 in FIG. 3 . The method of this embodiment is applicable to the augmented reality system 10 in the above-mentioned embodiment. The following describes the detailed steps of the augmented reality display method of this embodiment in combination with various elements in the augmented reality system 10 .

於步驟S410,處理器123依據動作感測資料計算頭戴式顯示裝置110的旋轉量。旋轉量可以是單位時間內的旋轉角度。於一實施例中,此旋轉量可包括相對於第一座標軸旋轉的傾仰(pitch)旋轉量以及相對於第二座標軸旋轉的偏擺(yaw)旋轉量。例如:傾仰旋轉量可用以表示使用者上下擺動頭部時的旋轉角度,而偏擺旋轉量可用以表示使用者左右搖動頭部時的旋轉角度。 In step S410, the processor 123 calculates the rotation amount of the head-mounted display device 110 according to the motion sensing data. The rotation amount may be a rotation angle per unit time. In one embodiment, the rotation may include a pitch rotation relative to the first coordinate axis and a yaw rotation relative to the second coordinate axis. For example, the tilt rotation amount can be used to represent the rotation angle when the user swings the head up and down, and the yaw rotation amount can be used to represent the rotation angle when the user shakes the head left and right.

於步驟S420，處理器123依據旋轉量與深度距離計算一位置變化量。詳細而言，基於頭戴式顯示裝置110的旋轉量以及顯示器130的深度距離，處理器123可計算出虛擬物件於一參考座標系統中的絕對位置變化量，例如是AR座標系統中的絕對位置變化量。於一實施例中，處理器123可將旋轉量與深度距離代入預設函式而產生絕對位置變化量。於是，於步驟S430，處理器123可依據位置變化量調整虛擬物件的顯示位置。 In step S420, the processor 123 calculates a position change amount according to the rotation amount and the depth distance. In detail, based on the rotation amount of the head-mounted display device 110 and the depth distance of the display 130, the processor 123 can calculate the absolute position change of the virtual object in a reference coordinate system, for example, the absolute position change in the AR coordinate system. In one embodiment, the processor 123 may substitute the rotation amount and the depth distance into a preset function to generate the absolute position change. Then, in step S430, the processor 123 may adjust the display position of the virtual object according to the position change amount.
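
One plausible form of the preset function is the small-angle relation in which a head rotation of θ makes an object viewed at depth d appear to shift laterally by roughly d·tan θ; this particular choice is an assumption for illustration and not the patent's stated function.

```python
import math

def position_change(yaw_rad: float, pitch_rad: float, depth: float):
    """Approximate the absolute position change of the anchored virtual object.

    A head rotation of yaw/pitch radians makes a point at the given depth appear to
    shift laterally/vertically by about depth * tan(angle), so the anchored object's
    display position must be moved by the same amount to stay on the display bezel.
    """
    dx = depth * math.tan(yaw_rad)    # horizontal shift from yaw (left/right head turn)
    dy = depth * math.tan(pitch_rad)  # vertical shift from pitch (up/down head tilt)
    return dx, dy
```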

於此，步驟S430可實施為子步驟S431~子步驟S433。於子步驟S431，處理器123判斷位置變化量是否大於一門檻值。此門檻值可依據實際需求而設置，本發明對此不限制。若位置變化量未大於門檻值，於子步驟S432，處理器123不調整虛擬物件的顯示位置。若位置變化量大於門檻值，於子步驟S433，處理器123依據位置變化量調整虛擬物件的顯示位置。於一實施例中，處理器123可依據位置變化量將基於第i幀環境影像所計算出來的先前顯示位置調整為當前顯示位置，以控制頭戴式顯示裝置110基於虛擬物件的當前顯示位置進行顯示。此外，藉由判斷位置變化量是否大於門檻值的條件設置，可避免動作感測的誤差影響虛擬物件的顯示穩定度。 Here, step S430 may be implemented as sub-steps S431 to S433. In sub-step S431, the processor 123 determines whether the position change amount is greater than a threshold value. The threshold value can be set according to actual requirements, which is not limited in the present invention. If the position change amount is not greater than the threshold value, in sub-step S432 the processor 123 does not adjust the display position of the virtual object. If the position change amount is greater than the threshold value, in sub-step S433 the processor 123 adjusts the display position of the virtual object according to the position change amount. In one embodiment, the processor 123 may adjust the previous display position, which was calculated based on the i-th frame of the environment image, to a current display position according to the position change amount, so as to control the head-mounted display device 110 to display the virtual object based on its current display position. In addition, by checking whether the position change amount is greater than the threshold value, errors in motion sensing can be prevented from affecting the display stability of the virtual object.
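
Sub-steps S431 to S433 can be sketched as follows, with the threshold chosen arbitrarily for illustration since the patent leaves its value to practical requirements.

```python
def update_display_position(previous_position, position_change, threshold=0.005):
    """Apply the position change only when it exceeds the threshold (S431-S433).

    previous_position, position_change : (x, y, z) tuples in AR-coordinate units
    threshold                          : minimum change worth applying (assumed value)
    """
    magnitude = sum(c * c for c in position_change) ** 0.5
    if magnitude <= threshold:        # S432: treat small changes as sensor noise
        return previous_position
    # S433: shift the previous display position by the computed change.
    return tuple(p + c for p, c in zip(previous_position, position_change))
```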

綜上所述，於本發明實施例中，當使用者配戴頭戴式顯示裝置觀看主顯示器時，即便頭戴式顯示裝置動態移動，但頭戴式顯示裝置所呈現的虛像物件與實際場景中主顯示器的顯示邊框可達到良好的對齊貼合。藉此，使用者可透過虛擬物件獲取更多的資訊量，並享優良舒適的觀看體驗。此外，在依據下一幀環境影像定位出虛擬物件的顯示位置之前，透過依據動作感測資料調整虛擬物件的顯示位置，可加強虛擬物件的顯示流暢度。 To sum up, in the embodiments of the present invention, when the user wears the head-mounted display device to view the main display, the virtual objects presented by the head-mounted display device remain well aligned with the display frame of the main display in the actual scene even if the head-mounted display device moves dynamically. In this way, the user can obtain more information through the virtual objects and enjoy an excellent, comfortable viewing experience. In addition, before the display position of the virtual object is located according to the next frame of the environment image, adjusting the display position of the virtual object according to the motion sensing data improves the smoothness with which the virtual object is displayed.

雖然本發明已以實施例揭露如上,然其並非用以限定本發明,任何所屬技術領域中具有通常知識者,在不脫離本發明的精神和範圍內,當可作些許的更動與潤飾,故本發明的保護範圍當視後附的申請專利範圍所界定者為準。 Although the present invention has been disclosed above by the embodiments, it is not intended to limit the present invention. Anyone with ordinary knowledge in the technical field can make some changes and modifications without departing from the spirit and scope of the present invention. Therefore, The protection scope of the present invention shall be determined by the scope of the appended patent application.

10:擴增實境系統 10: Augmented reality system

110:頭戴式顯示裝置 110: Head-mounted display device

120:計算機裝置 120: Computer device

130:顯示器 130: Display

111:影像擷取裝置 111: Image capture device

112:顯示器 112: Display

113:動作感測器 113: Motion Sensor

122:儲存裝置 122: Storage Device

123:處理器 123: Processor

Claims (14)

一種擴增實境系統,包括: 一頭戴式顯示裝置,包括影像擷取裝置與動作感測器,並用以顯示一虛擬物件,其中該虛擬物件疊合顯示為錨定於一顯示器的顯示邊框上;以及 一計算機裝置,連接該頭戴式顯示裝置,並包括: 一儲存裝置;以及 一處理器,耦接該儲存裝置,經配置以: 透過該影像擷取裝置擷取第i幀環境影像,並依據該第i幀環境影像決定該虛擬物件的顯示位置,其中i為大於0的整數; 獲取該顯示器相對於該頭戴式顯示裝置的深度距離; 藉由該動作感測器產生該頭戴式顯示裝置的動作感測資料; 在該影像擷取裝置擷取第(i+1)幀環境影像之前,依據該動作感測資料與該深度距離調整虛擬物件的該顯示位置;以及 藉由該頭戴式顯示裝置依據調整後的該顯示位置顯示該虛擬物件。 An augmented reality system comprising: a head-mounted display device, comprising an image capture device and a motion sensor, and used for displaying a virtual object, wherein the virtual object is superimposed and displayed to be anchored on a display frame of a display; and a computer device connected to the head-mounted display device and comprising: a storage device; and a processor, coupled to the storage device, configured to: Capture the i-th frame of the environment image through the image capture device, and determine the display position of the virtual object according to the i-th frame of the environment image, wherein i is an integer greater than 0; obtaining the depth distance of the display relative to the head-mounted display device; generating motion sensing data of the head-mounted display device by the motion sensor; before the image capturing device captures the (i+1)th frame of the environment image, adjusting the display position of the virtual object according to the motion sensing data and the depth distance; and The virtual object is displayed by the head-mounted display device according to the adjusted display position. 如請求項1所述的擴增實境系統,其中該處理器經配置以: 依據該第i幀環境影像中一輔助標記的成像尺寸,估測該顯示器相對於該頭戴式顯示裝置的該深度距離。 The augmented reality system of claim 1, wherein the processor is configured to: According to the imaging size of an auxiliary marker in the ith frame of the environment image, the depth distance of the display relative to the head-mounted display device is estimated. 如請求項2所述的擴增實境系統,其中該處理器經配置以: 獲取該輔助標記的標準尺寸與一標準深度之間的比例值;以及 依據該比例值與該第i幀環境影像中該輔助標記的成像尺寸,估測該顯示器相對於該頭戴式顯示裝置的該深度距離。 The augmented reality system of claim 2, wherein the processor is configured to: obtaining a ratio between a standard size of the auxiliary marker and a standard depth; and The depth distance of the display relative to the head-mounted display device is estimated according to the ratio value and the imaging size of the auxiliary marker in the i-th frame of environment image. 如請求項2所述的擴增實境系統,其中該輔助標記包括位於該顯示器之多個螢幕角落的多個標記點或位於該顯示器之螢幕邊緣的標記線,而該輔助標記的該成像尺寸包括該些標記點的直徑或該標記線的長度。The augmented reality system of claim 2, wherein the auxiliary marker comprises a plurality of marker points located at corners of a plurality of screens of the display or marker lines located at an edge of a screen of the display, and the imaging size of the auxiliary marker Including the diameter of the marked points or the length of the marked line. 如請求項1所述的擴增實境系統,其中該處理器經配置以: 依據該動作感測資料計算該頭戴式顯示裝置的一旋轉量; 依據該旋轉量與該深度距離計算一位置變化量;以及 依據該位置變化量調整該虛擬物件的該顯示位置。 The augmented reality system of claim 1, wherein the processor is configured to: calculating a rotation amount of the head-mounted display device according to the motion sensing data; calculating a position change amount according to the rotation amount and the depth distance; and The display position of the virtual object is adjusted according to the position change amount. 
如請求項5所述的擴增實境系統,其中該處理器經配置以: 判斷該位置變化量是否大於一門檻值; 若該位置變化量未大於該門檻值,不調整該虛擬物件的顯示位置;以及 若該位置變化量大於該門檻值,依據該位置變化量調整該虛擬物件的顯示位置。 The augmented reality system of claim 5, wherein the processor is configured to: Determine whether the position change is greater than a threshold value; If the position change amount is not greater than the threshold value, the display position of the virtual object is not adjusted; and If the position change amount is greater than the threshold value, the display position of the virtual object is adjusted according to the position change amount. 如請求項1所述的擴增實境系統,其中該處理器經配置以: 反應於該第i幀環境影像中一輔助標記位於該第i幀環境影像中的預定範圍內,依據該第i幀環境影像決定該虛擬物件的該顯示位置,並控制該頭戴式顯示裝置基於該顯示位置顯示該虛擬物件。 The augmented reality system of claim 1, wherein the processor is configured to: In response to the fact that an auxiliary marker in the i-th frame of environment image is located within a predetermined range in the i-th frame of environment image, the display position of the virtual object is determined according to the i-th frame of environment image, and the head-mounted display device is controlled based on The display position displays the virtual object. 一種擴增實境顯示方法,包括: 透過一頭戴式顯示裝置上的影像擷取裝置擷取第i幀環境影像,並依據該第i幀環境影像決定一虛擬物件的顯示位置,其中i為大於0的整數; 獲取顯示器相對於該頭戴式顯示裝置的深度距離; 藉由該頭戴式顯示裝置上的動作感測器產生該頭戴式顯示裝置的動作感測資料; 在該影像擷取裝置擷取第(i+1)幀環境影像之前,依據該動作感測資料與該深度距離調整該虛擬物件的該顯示位置;以及 藉由該頭戴式顯示裝置依據調整後的該顯示位置顯示該虛擬物件,其中該虛擬物件疊合顯示為錨定於該顯示器的顯示邊框上。 An augmented reality display method, comprising: capturing an i-th frame of an environment image through an image capture device on a head-mounted display device, and determining a display position of a virtual object according to the i-th frame of the environment image, wherein i is an integer greater than 0; obtaining the depth distance of the display relative to the head-mounted display device; generating motion sensing data of the head-mounted display device by a motion sensor on the head-mounted display device; before the image capturing device captures the (i+1)th frame of the environment image, adjusting the display position of the virtual object according to the motion sensing data and the depth distance; and The virtual object is displayed by the head-mounted display device according to the adjusted display position, wherein the virtual object is superimposed and displayed to be anchored on the display frame of the display. 如請求項8所述的擴增實境顯示方法,其中獲取該顯示器相對於該頭戴式顯示裝置的該深度距離的步驟包括: 依據該第i幀環境影像中一輔助標記的成像尺寸,估測該顯示器相對於該頭戴式顯示裝置的該深度距離。 The augmented reality display method according to claim 8, wherein the step of obtaining the depth distance of the display relative to the head-mounted display device comprises: According to the imaging size of an auxiliary marker in the ith frame of the environment image, the depth distance of the display relative to the head-mounted display device is estimated. 如請求項9所述的擴增實境顯示方法,其中依據該第i幀環境影像中該輔助標記的該成像尺寸,估測該顯示器相對於該頭戴式顯示裝置的該深度距離的步驟包括: 獲取該輔助標記的標準尺寸與一標準深度之間的比例值;以及 依據該比例值與該第i幀環境影像中該輔助標記的成像尺寸,估測該顯示器相對於該頭戴式顯示裝置的該深度距離。 The augmented reality display method according to claim 9, wherein the step of estimating the depth distance of the display relative to the head-mounted display device according to the imaging size of the auxiliary marker in the i-th frame of environment image comprises the following steps: : obtaining a ratio between a standard size of the auxiliary marker and a standard depth; and The depth distance of the display relative to the head-mounted display device is estimated according to the ratio value and the imaging size of the auxiliary marker in the i-th frame of environment image. 
如請求項9所述的擴增實境顯示方法,其中該輔助標記包括位於該顯示器之多個螢幕角落的多個標記點或位於該顯示器之螢幕邊緣的標記線,而該輔助標記的該成像尺寸包括該些標記點的直徑或該標記線的長度。The augmented reality display method as claimed in claim 9, wherein the auxiliary mark comprises a plurality of mark points located at corners of a plurality of screens of the display or mark lines located at a screen edge of the display, and the imaging of the auxiliary mark Dimensions include the diameter of the marked points or the length of the marked line. 如請求項8所述的擴增實境顯示方法,其中在該影像擷取裝置擷取該第(i+1)幀環境影像之前,依據該動作感測資料與該深度距離調整該虛擬物件的該顯示位置的步驟包括: 依據該動作感測資料計算該頭戴式顯示裝置的一旋轉量; 依據該旋轉量與該深度距離計算一位置變化量;以及 依據該位置變化量調整該虛擬物件的該顯示位置。 The augmented reality display method as claimed in claim 8, wherein before the image capturing device captures the (i+1)th frame of the environment image, the virtual object is adjusted according to the motion sensing data and the depth distance. The steps for displaying the location include: calculating a rotation amount of the head-mounted display device according to the motion sensing data; calculating a position change amount according to the rotation amount and the depth distance; and The display position of the virtual object is adjusted according to the position change amount. 如請求項12所述的擴增實境顯示方法,其中依據該位置變化量調整該虛擬物件的顯示位置的步驟包括: 判斷該位置變化量是否大於一門檻值; 若該位置變化量未大於該門檻值,不調整該虛擬物件的顯示位置;以及 若該位置變化量大於該門檻值,依據該位置變化量調整該虛擬物件的顯示位置。 The augmented reality display method according to claim 12, wherein the step of adjusting the display position of the virtual object according to the position change comprises: Determine whether the position change is greater than a threshold value; If the position change amount is not greater than the threshold value, the display position of the virtual object is not adjusted; and If the position change amount is greater than the threshold value, the display position of the virtual object is adjusted according to the position change amount. 如請求項8所述的擴增實境顯示方法,其中透過該頭戴式顯示裝置上的該影像擷取裝置擷取該第i幀環境影像,並依據該第i幀環境影像決定該虛擬物件的該顯示位置的步驟包括: 反應於該第i幀環境影像中一輔助標記位於該第i幀環境影像中的預定範圍內,依據該第i幀環境影像決定該虛擬物件的該顯示位置,並控制該頭戴式顯示裝置基於該顯示位置顯示該虛擬物件。 The augmented reality display method as claimed in claim 8, wherein the i-th frame of the environment image is captured by the image capture device on the head-mounted display device, and the virtual object is determined according to the i-th frame of the environment image The steps for this display position include: In response to the fact that an auxiliary marker in the i-th frame of environment image is located within a predetermined range in the i-th frame of environment image, the display position of the virtual object is determined according to the i-th frame of environment image, and the head-mounted display device is controlled based on The display position displays the virtual object.
TW109131893A 2020-09-16 2020-09-16 Augmented reality system and augmented reality display method integrated with motion sensor TWI757872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109131893A TWI757872B (en) 2020-09-16 2020-09-16 Augmented reality system and augmented reality display method integrated with motion sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW109131893A TWI757872B (en) 2020-09-16 2020-09-16 Augmented reality system and augmented reality display method integrated with motion sensor

Publications (2)

Publication Number Publication Date
TWI757872B true TWI757872B (en) 2022-03-11
TW202213063A TW202213063A (en) 2022-04-01

Family

ID=81710594

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109131893A TWI757872B (en) 2020-09-16 2020-09-16 Augmented reality system and augmented reality display method integrated with motion sensor

Country Status (1)

Country Link
TW (1) TWI757872B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI638310B (en) * 2017-09-13 2018-10-11 宏達國際電子股份有限公司 Head mounted display system and image display method thereof
TWI694357B (en) * 2018-10-21 2020-05-21 未來市股份有限公司 Method of virtual user interface interaction based on gesture recognition and related device
JP2020115274A (en) * 2019-01-17 2020-07-30 株式会社アルファコード Virtual space image display control device, virtual space image display control program
JP2020123260A (en) * 2019-01-31 2020-08-13 株式会社日立製作所 Head-mounted display device and virtual space display control method
TWI702351B (en) * 2019-06-04 2020-08-21 陳柏伸 Linear sliding block processing jig mechanism
TWI727421B (en) * 2019-09-16 2021-05-11 藏識科技有限公司 Mixed reality system

Also Published As

Publication number Publication date
TW202213063A (en) 2022-04-01

Similar Documents

Publication Publication Date Title
EP3469458B1 (en) Six dof mixed reality input by fusing inertial handheld controller with hand tracking
JP6333801B2 (en) Display control device, display control program, and display control method
JP6177872B2 (en) I / O device, I / O program, and I / O method
WO2014128749A1 (en) Shape recognition device, shape recognition program, and shape recognition method
EP3248045A1 (en) Augmented reality field of view object follower
JP6250024B2 (en) Calibration apparatus, calibration program, and calibration method
TWI704376B (en) Angle of view caliration method, virtual reality display system and computing apparatus
EP3847530B1 (en) Display device sharing and interactivity in simulated reality (sr)
JP6250025B2 (en) I / O device, I / O program, and I / O method
JPWO2016051431A1 (en) I / O device, I / O program, and I / O method
TWI757872B (en) Augmented reality system and augmented reality display method integrated with motion sensor
US11836842B2 (en) Moving an avatar based on real-world data
CN114253389B (en) Augmented reality system integrating motion sensor and augmented reality display method
KR20180055637A (en) Electronic apparatus and method for controlling thereof
TW202213994A (en) Augmented reality system and display brightness adjusting method thereof
CN112308906B (en) Visual angle correction method, virtual reality display system and computing device
US11380071B2 (en) Augmented reality system and display method for anchoring virtual object thereof
KR102542641B1 (en) Apparatus and operation method for rehabilitation training using hand tracking
US11838486B1 (en) Method and device for perspective correction using one or more keyframes
US20240070931A1 (en) Distributed Content Rendering
JP6503407B2 (en) Content display program, computer device, content display method, and content display system
CN118318219A (en) Augmented reality display with eye image stabilization
JP2024040034A (en) Processor, image processing apparatus, spectacle-type information display device, image processing method, and image processing program