TWI828527B - Method for automatic guide and electronic apparatus - Google Patents

Method for automatic guide and electronic apparatus

Info

Publication number
TWI828527B
Authority
TW
Taiwan
Prior art keywords
event
script
events
dimensional space
space model
Prior art date
Application number
TW112102570A
Other languages
Chinese (zh)
Other versions
TW202431085A (en)
Inventor
王士麒
柯政安
賴昱州
Original Assignee
王一互動科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 王一互動科技有限公司
Priority to TW112102570A
Application granted
Publication of TWI828527B
Publication of TW202431085A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

A method for automatic guide and an electronic apparatus are provided. The method includes: activating a three-dimensional space model; obtaining a task script corresponding to the three-dimensional space model, wherein the task script includes a plurality of events; and executing the events based on a navigation order of the events on a time track of the task script. For each event belonging to a first category having a point name, a real-time rendering procedure is executed and a corresponding action is performed based on corresponding parameter setting data. For each event belonging to a second category without a point name, a corresponding action is performed based on corresponding parameter setting data.

Description

Method for automatic navigation and electronic apparatus

The present invention relates to a technique for establishing a virtual space, and in particular to a method and an electronic apparatus for automatic navigation of a three-dimensional space model.

With the advancement of technology and the rise of the Internet, more and more websites allow the public to obtain the information they need through network connections. In an era that demands novelty and change, the traditional approach of browsing photos accompanied by text has become increasingly unable to satisfy users. Three-dimensional navigation technology has therefore been developed: by building a realistic environment, it gives users an immersive experience. Automatic navigation is one of the directions that remains to be studied.

The present invention provides a method for automatic navigation and an electronic apparatus that realize an automatic navigation function.

The method for automatic navigation of the present invention is executed by a processor. The method includes: activating a three-dimensional space model, wherein a plurality of anchor points included in the three-dimensional space model respectively correspond to a plurality of panoramic images, and each anchor point has a corresponding point name; obtaining a task script corresponding to the three-dimensional space model, wherein the task script includes a plurality of events; and executing the events based on a navigation order of the events on a time track of the task script. For each event belonging to a first category having a point name, a real-time rendering procedure is executed to place a designated panoramic image, taken from the panoramic images and corresponding to the point name, into the three-dimensional space model as a texture for display in a user interface, and a corresponding action is performed based on corresponding parameter setting data. For each event belonging to a second category without a point name, a corresponding action is performed based on corresponding parameter setting data.

In an embodiment of the invention, the method for automatic navigation further includes creating the task script through a script creation module. The script creation module provides a visual editing interface for creating the events included in the task script, and the visual editing interface includes a shot editing function, a text editing function, a multimedia file editing function, and an interaction trigger editing function.

In an embodiment of the invention, in the step of creating the task script through the script creation module, a shot script event is created through the shot editing function by: determining the lens position of a virtual camera by setting a point name and, based on the lens position, placing the panoramic image corresponding to one of the anchor points into the three-dimensional space model as a texture for display in a preview screen; determining lens parameters of the virtual camera through the preview screen or a parameter setting field, wherein the preview screen receives an operation track and the parameter setting field receives numerical input of the lens parameters; and setting a time section of the shot script event on the time track.

In an embodiment of the invention, in the step of creating the task script through the script creation module, a navigation text event is created through the text editing function by: receiving text content through a text field; and setting a time section of the navigation text event on the time track.

In an embodiment of the invention, in the step of creating the task script through the script creation module, a multimedia file event is created through the multimedia file editing function by: setting a multimedia file path through a file selection interface; and setting a time section of the multimedia file event on the time track.

In an embodiment of the invention, in the step of creating the task script through the script creation module, an interaction trigger event is created through the interaction trigger editing function by: setting an interactive element number through an element selection interface; and setting a time section of the interactive element number on the time track.

In an embodiment of the invention, each event has a corresponding event index, and after obtaining the task script corresponding to the three-dimensional space model, the method further includes: executing a task distribution program to distribute a corresponding handler for each event; generating a corresponding execution instruction group for each event by using the handler; associating the corresponding event index with the execution instruction group and storing the event index in a pending queue, where the pending queue is ordered according to the navigation order; and, in response to a distribution failure of at least one of the events, issuing a warning signal and terminating the task distribution program.

In an embodiment of the invention, when executing the events, a group of event indexes having the same order is taken out of the pending queue based on the navigation order, and the execution instruction groups associated with that group of event indexes are taken out and executed through the corresponding handlers.

In an embodiment of the invention, each event may be a shot script event, a navigation text event, a multimedia file event, or an interaction trigger event. The parameter setting data of a shot script event includes lens parameters of the virtual camera, the parameter setting data of a navigation text event includes text content, the parameter setting data of a multimedia file event includes a multimedia file path, and the parameter setting data of an interaction trigger event includes an interactive element number.

In an embodiment of the invention, the method for automatic navigation further includes: triggering an interactive element in response to the three-dimensional space model being activated in the user interface and a user operation being received through the three-dimensional space model; and, for one or more remaining events that follow the event corresponding to the interactive element in the time track, executing the remaining events based on the navigation order.

In an embodiment of the invention, the method for automatic navigation further includes: triggering an interactive element and interrupting the task script in response to the three-dimensional space model being activated in the user interface and a user operation being received through the three-dimensional space model; obtaining another task script corresponding to the interactive element; and executing a further plurality of events included in the other task script based on another navigation order of those events on another time track.

The electronic apparatus of the present invention includes a storage device and a processor. The storage device includes: a three-dimensional space model including a plurality of anchor points, each of which has a corresponding point name; and a database storing a plurality of panoramic images respectively corresponding to the anchor points and a task script corresponding to the three-dimensional space model, wherein the task script includes a plurality of events. The processor is coupled to the storage device and is configured to execute the method for automatic navigation described above.

Based on the above, the present disclosure enhances the interactivity between the creator of a three-dimensional space model and its users, allowing users to understand, through the automatic navigation function, the ideas that the creator of the three-dimensional space model intends to convey.

FIG. 1 is a block diagram of an automatic navigation system for a three-dimensional space model according to an embodiment of the invention. Referring to FIG. 1, the automatic navigation system includes an electronic apparatus 100A and a client device 100B. The electronic apparatus 100A is, for example, a web server. The electronic apparatus 100A and the client device 100B can be connected by wired or wireless communication technology. The client device 100B is, for example, an electronic device with computing capability and network connectivity used by a user, such as a smartphone, tablet, notebook computer, or personal computer.

The electronic apparatus 100A includes a processor 110, a communication element 120, and a storage device 130. The processor 110 is coupled to the communication element 120 and the storage device 130.

The processor 110 is, for example, a central processing unit (CPU), a graphics processing unit (GPU), or another programmable microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), programmable logic device (PLD), or similar device.

The communication element 120 may be a chip or circuit using local area network (LAN) technology, wireless LAN (WLAN) technology, or mobile communication technology. An example of a LAN is Ethernet; an example of a WLAN is Wi-Fi. Mobile communication technologies include, for example, the Global System for Mobile Communications (GSM) and third-generation (3G), fourth-generation (4G), and fifth-generation (5G) mobile communication technologies.

The storage device 130 is, for example, any type of fixed or removable random access memory, read-only memory, flash memory, hard disk, another similar device, or a combination of these devices. The storage device 130 stores a database 131, a three-dimensional space model 133, a front-end program 135, and a back-end program 137. The front-end program 135 and the back-end program 137 each include, for example, one or more code snippets that, after being installed, are executed by the processor 110 to carry out the method for automatic navigation described below. The front-end program 135 is responsible for the content presented by the browser 160 of the client device 100B. The back-end program 137 is responsible for the parts edited by an administrator.

The database 131 stores a plurality of panoramic images respectively corresponding to a plurality of anchor points included in the three-dimensional space model 133, as well as task scripts corresponding to the three-dimensional space model 133. A panoramic image can be produced, for example, with three-dimensional animation software, or captured with a camera. The virtual camera of the three-dimensional animation software can simulate a real-world camera: the three-dimensional scene is captured as photos from multiple angles, and this group of photos is then post-processed into a panoramic image.

FIG. 2 is a schematic diagram of a three-dimensional space model including a plurality of anchor points according to an embodiment of the invention. Referring to FIG. 2, the three-dimensional space model 133 is, for example, generated for a space by a three-dimensional model designer according to customer requirements using software such as a three-dimensional modeling tool. After the three-dimensional space model 133 is generated, the designer sets a plurality of anchor points P1 to P29 in the three-dimensional space model 133 according to customer requirements, sets a corresponding panoramic image for each of the anchor points P1 to P29, and stores them in the database 131. In addition, at least one interactive position may be reserved in advance in the three-dimensional space model 133 as needed, so that interactive elements can be dynamically placed there. An interactive element is, for example, a multimedia file, an interactive interface, or a button. In the embodiment shown in FIG. 2, the three-dimensional space model 133 is preset with 19 interactive positions E1 to E19. This is only an example; the number of anchor points and the number of reserved interactive positions are not limited.

The client device 100B includes a processor 140, a communication element 150, and a browser 160. The functions of the processor 140 and the communication element 150 are the same as or similar to those of the processor 110 and the communication element 120 of the electronic apparatus 100A, respectively; see the related descriptions above, which are not repeated here. The browser 160 is an application for accessing websites. When the user requests a web page from a particular website through the browser 160, the browser 160 obtains the related files from the web server and then displays the web page content in the browser 160 on the display of the client device 100B.

FIG. 3 is a flowchart of a method for automatic navigation according to an embodiment of the invention. Referring to FIG. 1 and FIG. 2, in step S305, the three-dimensional space model 133 is activated. Specifically, taking as an example the client device 100B connecting to the electronic apparatus 100A to open a three-dimensional web page (user interface), the electronic apparatus 100A activates the three-dimensional space model 133 after receiving the connection notification. In the electronic apparatus 100A, the processor 110 executes tasks through the front-end program 135 according to the task script, interactive operations, and so on.

After the three-dimensional space model 133 is executed and before any operation command acting on the three-dimensional space model 133 is detected, the processor 110 first moves the lens position of the virtual camera to a preset position (for example, anchor point P3, although the disclosure is not limited thereto) through the front-end program 135, and places the panoramic image corresponding to anchor point P3 into the three-dimensional space model 133 as a texture, so that the display of the browser 160 of the client device 100B presents the three-dimensional space model 133 with the panoramic image placed in it.

The processor 110, through the front-end program 135, places the designated panoramic image into the three-dimensional space model 133 as a texture in real time by rendering. In general, a three-dimensional model is covered with textures, and the process of arranging textures onto the three-dimensional model is called texture mapping. After texture mapping, the three-dimensional model becomes more detailed and looks more realistic. In this embodiment, the panoramic image is used as the texture-mapping material placed into the three-dimensional space model 133. In addition to texture mapping, surface normals can be adjusted to achieve lighting effects, and some surfaces can use bump mapping and other three-dimensional rendering techniques.
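As an illustration (the patent does not name a rendering library), a minimal sketch of placing a panoramic image into the scene as a texture, assuming a three.js-style renderer in the front-end program:

```typescript
import * as THREE from "three";

// Minimal sketch: the panorama for the designated anchor point is loaded and mapped
// onto the inside of a large sphere that surrounds the virtual camera, which is a
// common way to place a panoramic image into a 3D scene as a texture.
function showPanorama(scene: THREE.Scene, panoramaUrl: string): THREE.Mesh {
  const texture = new THREE.TextureLoader().load(panoramaUrl);
  const geometry = new THREE.SphereGeometry(500, 60, 40);
  geometry.scale(-1, 1, 1); // flip so the texture is visible from inside the sphere
  const material = new THREE.MeshBasicMaterial({ map: texture });
  const panorama = new THREE.Mesh(geometry, material);
  scene.add(panorama);
  return panorama;
}
```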

During rendering, the processor 110, through the front-end program 135, uses the virtual camera to establish the positional relationship with each object in the three-dimensional space model 133 and thereby obtains a plurality of rendering parameters. In the real world, an object is seen because light reflects off the object into the eye; the position of the virtual camera is the position of the "eye". Rendering parameters such as the reflection index corresponding to each pixel between the virtual camera and the three-dimensional space model 133 can be computed by ray tracing. Therefore, after a designated anchor point (a movement destination) is obtained, the processor 110 further moves the virtual camera to the designated anchor point to obtain the rendering parameters corresponding to that anchor point.

In this embodiment, after the three-dimensional space model 133 is uploaded to the electronic apparatus 100A and before the three-dimensional space model 133 is executed, the processor 110 may first execute the back-end program 137. Through the back-end program 137, an anchor-point mapping procedure is executed to establish, for each of the anchor points P1 to P29, a corresponding point name (the name or number of the anchor point), thereby obtaining a name mapping table.
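A minimal sketch of what the anchor-point mapping procedure could produce; the anchor identifiers and the P1, P2, ... naming scheme here are illustrative assumptions:

```typescript
// Each anchor point of the uploaded model is given a point name, producing the name
// mapping table used later to resolve the point names in shot script events.
function buildNameMappingTable(anchorIds: string[]): Map<string, string> {
  const table = new Map<string, string>(); // point name -> anchor identifier
  anchorIds.forEach((id, index) => table.set(`P${index + 1}`, id));
  return table;
}

// Example: buildNameMappingTable(["anchor-a", "anchor-b"]) yields P1 -> "anchor-a", P2 -> "anchor-b".
```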

Next, in step S310, the processor 110 obtains the task script corresponding to the three-dimensional space model 133. After the mapping table has been established, the processor 110 can create the task script through the back-end program 137 based on the mapping table. A task script is a combination of events designed for a specific task. For example, the specific task may be one of the following: website page navigation, automatic walking through the space, visual explanation of the website, gaze guidance, voice narration, commentary on the model from various angles, and so on.

Specifically, the task script includes a plurality of events and the navigation order of these events. Events that can be created include, for example, shot script events, navigation text events, multimedia file events, and interaction trigger events. Each event further includes corresponding parameter setting data. For example, the parameter setting data of a shot script event includes lens parameters of the virtual camera, the parameter setting data of a navigation text event includes text content, the parameter setting data of a multimedia file event includes a multimedia file path, and the parameter setting data of an interaction trigger event includes an interactive element number.
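For illustration, the task script and its four event types could be modelled roughly as follows; all type and field names are assumptions, not identifiers from the patent:

```typescript
// Minimal sketch of the task-script data model described above.
interface LensParameters {
  horizontalAngle: number; // degrees
  pitchAngle: number;      // degrees
  fovScale: number;        // field-of-view zoom ratio
}

interface BaseEvent {
  eventIndex: string;                          // e.g. "001"
  timeSection: { start: number; end: number }; // position on the time track, in seconds
}

// First category: carries a point name and triggers the real-time rendering procedure.
interface ShotScriptEvent extends BaseEvent {
  kind: "shot";
  pointName: string;        // destination anchor point, e.g. "P24"
  viaPointNames?: string[]; // optional waypoints, e.g. ["P26"]
  moveSeconds?: number;     // lens movement time
  parameters: LensParameters;
}

// Second category: no point name, only an action driven by parameter setting data.
interface NavigationTextEvent extends BaseEvent { kind: "text"; textContent: string; }
interface MultimediaFileEvent extends BaseEvent { kind: "media"; filePath: string; }
interface InteractionTriggerEvent extends BaseEvent { kind: "trigger"; interactiveElementId: string; }

type TaskEvent = ShotScriptEvent | NavigationTextEvent | MultimediaFileEvent | InteractionTriggerEvent;

interface TaskScript {
  name: string;
  events: TaskEvent[]; // kept in navigation order on the time track
}
```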

An administrator can perform actions such as anchor-point mapping and task-script creation through the back-end program 137 of the electronic apparatus 100A. The front-end program 135 can then provide automatic navigation content to the browser 160 of the client device 100B, so that the automatic navigation of the three-dimensional space model 133 is presented through the browser 160.

Next, in step S315, the events are executed based on the navigation order of the events of the task script on the time track. Here, for each event belonging to the first category having a point name, the front-end program 135 executes a real-time rendering procedure and performs a corresponding action based on the corresponding parameter setting data. The real-time rendering procedure includes: based on the point name, the front-end program 135 takes a designated panoramic image out of the plurality of panoramic images corresponding to the three-dimensional space model 133 and places the designated panoramic image into the three-dimensional space model 133 as a texture for display in the user interface (shown in the browser 160). For example, the front-end program 135 uses the name mapping table to determine which anchor point the point name of the target event corresponds to, and then obtains the corresponding panoramic image. In addition, for each event belonging to the second category without a point name, the front-end program 135 performs a corresponding action based on the corresponding parameter setting data.

For example, shot script events belong to the first category having a point name. Navigation text events, multimedia file events, and interaction trigger events belong to the second category without a point name.
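Continuing the illustrative model above, a minimal sketch of how execution could branch on the two categories; renderPanorama and performAction are hypothetical stand-ins for the front-end program's internals:

```typescript
// Assumes the TaskEvent type and the name mapping table from the earlier sketches.
function renderPanorama(anchorId: string): void {
  console.log(`real-time rendering with the panorama of anchor ${anchorId}`);
}

function performAction(ev: TaskEvent): void {
  console.log(`performing ${ev.kind} action for event ${ev.eventIndex}`);
}

function executeEvent(ev: TaskEvent, nameMap: Map<string, string>): void {
  if (ev.kind === "shot") {
    // First category: the event carries a point name, so resolve it and re-render.
    const anchorId = nameMap.get(ev.pointName);
    if (anchorId === undefined) throw new Error(`Unknown point name: ${ev.pointName}`);
    renderPanorama(anchorId);
    performAction(ev); // e.g. move the lens according to the parameter setting data
  } else {
    // Second category: no point name, only the action defined by the parameter data.
    performAction(ev); // show navigation text, play the media file, or arm a trigger
  }
}
```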

In addition, each event in the task script has a corresponding event index. After obtaining the task script, the processor 110 further executes a task distribution procedure through the front-end program 135 to distribute a corresponding handler for each event. The front-end program 135 then uses the handler to generate a corresponding execution instruction group for each event. Afterwards, the front-end program 135 associates the corresponding event index with the execution instruction group and stores the event index in a pending queue, where the pending queue is ordered according to the navigation order.

Because different events need to be triggered (executed) by different software (applications), the handler corresponding to each event must be determined. For example, the handler corresponding to a multimedia file event is a multimedia player, and the handler corresponding to a shot script event is a virtual camera application.

Furthermore, in response to a distribution failure of at least one of the events, the front-end program 135 issues a warning signal and terminates the task distribution procedure. For example, if the back-end program 137 creates a customized event but the front-end program 135 has not yet established a corresponding handler for that customized event, the front-end program 135 will fail to distribute the task for that customized event.
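A minimal sketch of the task distribution step under the same illustrative model; the handler names, queue layout, and failure path are assumptions for illustration, not the patent's actual implementation:

```typescript
// Assumes the TaskScript/TaskEvent types from the earlier sketch.
type ExecutionInstructionGroup = { handler: string; instructions: string[] };

function pickHandler(ev: TaskEvent): string | undefined {
  switch (ev.kind) {
    case "shot": return "virtual-camera-app";
    case "media": return "multimedia-player";
    case "text": return "text-overlay";
    case "trigger": return "interaction-manager";
    default: return undefined; // e.g. a customized event with no registered handler
  }
}

function buildInstructions(ev: TaskEvent): string[] {
  // Placeholder: a real implementation would translate parameter setting data into
  // concrete commands (rendering, lens movement, playback, ...).
  return [`execute:${ev.kind}`];
}

function distributeTasks(script: TaskScript): Map<string, ExecutionInstructionGroup> {
  const pendingQueue = new Map<string, ExecutionInstructionGroup>(); // insertion order = navigation order
  for (const ev of script.events) {
    const handler = pickHandler(ev);
    if (handler === undefined) {
      // Distribution failure: warn and terminate the whole distribution procedure.
      console.warn(`No handler for event ${ev.eventIndex}; aborting task distribution`);
      throw new Error("task distribution failed");
    }
    pendingQueue.set(ev.eventIndex, { handler, instructions: buildInstructions(ev) });
  }
  return pendingQueue;
}
```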

In addition, an interactive element is triggered in response to the three-dimensional space model 133 being activated in the user interface and a user operation being received through the three-dimensional space model 133.

In one embodiment, after the interactive element is triggered, the front-end program 135 executes, based on the navigation order, the remaining events that follow the event corresponding to the interactive element in the time track. For example, suppose the task script includes events 001 to 010 in navigation order, and the interactive element corresponds to event 005. After detecting that the interactive element has been triggered, the front-end program 135 executes events 005 to 010 in sequence.

In another embodiment, a corresponding task script can be set for each interactive element. After the interactive element is triggered, the front-end program 135 obtains another task script corresponding to the interactive element and executes the further events included in that other task script based on another navigation order of those events on another time track.
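A minimal sketch of the two interactive-element behaviours described in these embodiments, again using the illustrative types; the scriptForElement lookup is an assumed structure:

```typescript
// Assumes the TaskScript/TaskEvent types from the earlier sketch.
function onInteractiveElementTriggered(
  current: TaskScript,
  triggeredEventIndex: string,
  scriptForElement: Map<string, TaskScript>,
  elementId: string
): TaskEvent[] {
  const dedicated = scriptForElement.get(elementId);
  if (dedicated !== undefined) {
    // Second embodiment: interrupt the current script and run the element's own script.
    return dedicated.events;
  }
  // First embodiment: continue with the remaining events from the corresponding event onward.
  const start = current.events.findIndex(e => e.eventIndex === triggeredEventIndex);
  return start >= 0 ? current.events.slice(start) : [];
}
```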

In one embodiment, when receiving a connection request, the front-end program 135 can directly perform automatic navigation based on a preset task script. For example, after the client device 100B opens the three-dimensional web page (user interface) through the browser 160 and the three-dimensional space model 133 is displayed, the processor 110 automatically executes steps S310 to S315 without the user performing any action, so that the automatic navigation content is shown on the three-dimensional web page.

In another embodiment, the three-dimensional space model 133 can also be set to a one-click tour mode. For example, an automatic navigation option (button) is provided on the three-dimensional web page (user interface) used to present the three-dimensional space model 133. After the client device 100B opens the three-dimensional web page through the browser 160 and the three-dimensional space model 133 is displayed, the processor 110 automatically executes steps S310 to S315 in response to the user clicking the automatic navigation option (button) of the three-dimensional space model 133, so that the automatic navigation content can be viewed.

Moreover, during automatic navigation, the user may at any time perform a user operation (for example, a click) on an interactive element of the three-dimensional space model 133 displayed through the user interface. When the front-end program 135 triggers the corresponding interactive element in response to a user operation received from the user interface, it interrupts the task script currently being executed, obtains another task script corresponding to the triggered interactive element, and performs automatic navigation based on that other task script instead.

For example, in a situation where the current automatic navigation content is the panoramic image corresponding to anchor point P4 and interactive position E4, at which an interactive element (for example, a "button") is placed, is displayed, if the user clicks the "button" at interactive position E4, the front-end program 135 interrupts the current task script, takes out the other task script corresponding to the "button" at interactive position E4, and continues the automatic navigation based on that other task script.

A task script includes a plurality of events, and each event represents an action. For example, as shown in Table 1, a task script includes a first shot script event, a second shot script event, a third shot script event, and a multimedia file event.

Table 1

| Event | Corresponding action | Parameter setting data |
| --- | --- | --- |
| First shot script event | Screen dynamically moves to the first destination | Initial point name: P2; lens movement time: 10 seconds; first destination point name: P24; waypoint point name: P26 |
| Second shot script event | Turn | Lens angle: θ1 |
| Third shot script event | Move toward the second destination | Lens movement time: 5 seconds; second destination point name: P29 |
| Multimedia file event | Play an audio file | Multimedia file path: ":/root/aaa/xxx" |
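For illustration only, the task script of Table 1 could be expressed with the illustrative types sketched earlier roughly as follows; the time sections and the 45° placeholder for θ1 are assumptions, since the table does not specify them:

```typescript
// The Table 1 task script as data, using the assumed TaskScript/TaskEvent model.
const table1Script: TaskScript = {
  name: "example-tour",
  events: [
    { kind: "shot", eventIndex: "001", timeSection: { start: 0, end: 10 },
      pointName: "P24", viaPointNames: ["P26"], moveSeconds: 10, // starts from P2 (initial lens position)
      parameters: { horizontalAngle: 0, pitchAngle: 0, fovScale: 1 } },
    { kind: "shot", eventIndex: "002", timeSection: { start: 10, end: 12 },
      pointName: "P24", // stays at P24 while turning; 45° stands in for the symbolic θ1
      parameters: { horizontalAngle: 45, pitchAngle: 0, fovScale: 1 } },
    { kind: "shot", eventIndex: "003", timeSection: { start: 12, end: 17 },
      pointName: "P29", moveSeconds: 5, // move to the second destination
      parameters: { horizontalAngle: 45, pitchAngle: 0, fovScale: 1 } },
    { kind: "media", eventIndex: "004", timeSection: { start: 17, end: 20 },
      filePath: ":/root/aaa/xxx" }, // play the audio file
  ],
};
```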

Table 2 lists the event index, handler, and execution instruction group corresponding to each of the four events described in Table 1. This is only an example and is not a limitation.

Table 2

| Event index | Event | Handler | Execution instruction group |
| --- | --- | --- | --- |
| 001 | First shot script event | Virtual camera application | First execution instruction group: real-time rendering instruction; lens movement instruction (movement trajectory P2→P26→P24, movement time 10 seconds) |
| 002 | Second shot script event | Virtual camera application | Second execution instruction group: lens turning instruction |
| 003 | Third shot script event | Virtual camera application | Third execution instruction group: lens movement instruction (movement trajectory P24→P29, movement time 5 seconds); real-time rendering instruction |
| 004 | Multimedia file event | Multimedia player | Fourth execution instruction group: playback instruction |

The front-end program 135 uses the task distribution program to assign the first, second, and third shot script events to the virtual camera application, and uses the virtual camera application to generate the corresponding first, second, and third execution instruction groups based on the corresponding parameter setting data. The front-end program 135 uses the task distribution program to assign the multimedia file event to the multimedia player, and uses the multimedia player to generate the fourth execution instruction group based on the corresponding parameter setting data.

Afterwards, the following actions are performed in sequence through the task distribution program. After the event index "001" is associated with the first execution instruction group and the point names P2, P26, and P24, the event index "001" is stored in the pending queue. After the event index "002" is associated with the second execution instruction group, the event index "002" is stored in the pending queue. After the event index "003" is associated with the third execution instruction group and the point name "P29", the event index "003" is stored in the pending queue. After the event index "004" is associated with the fourth execution instruction group, the event index "004" is stored in the pending queue.

Afterwards, when the target event is selected, the front-end program 135 first takes the event index "001" out of the pending queue. Based on the first execution instruction group associated with event index "001" and the point name "P2", it sets the initial lens position of the virtual camera at anchor point P2 and takes out the panoramic image corresponding to anchor point P2 to perform the real-time rendering procedure. Then, based on the lens movement instruction, it moves the lens position from anchor point P2 to anchor point P26 within the 10-second window, takes out the panoramic image corresponding to anchor point P26 to perform the real-time rendering procedure, continues moving the lens position to anchor point P24, and then takes out the panoramic image corresponding to anchor point P24 to perform the real-time rendering procedure.

Next, the front-end program 135 takes the event index "002" out of the pending queue and, based on the second execution instruction group associated with event index "002", instructs the virtual camera to switch its lens angle by θ1 degrees.

Afterwards, the front-end program 135 takes the event index "003" out of the pending queue and, based on the third execution instruction group associated with event index "003" and the point name "P29", moves the lens position from anchor point P24 to anchor point P29 within 5 seconds according to the lens movement instruction, and then takes out the panoramic image corresponding to anchor point P29 to perform the real-time rendering procedure.

Then, the front-end program 135 takes the event index "004" out of the pending queue and, based on the fourth execution instruction group associated with event index "004", instructs the multimedia player to play the multimedia file under the multimedia path ":/root/aaa/xxx".
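A minimal sketch of consuming the pending queue in navigation order, matching the walk-through above; runWithHandler is a hypothetical stand-in for handing an instruction group to its handler:

```typescript
// Assumes the ExecutionInstructionGroup type and the pendingQueue produced by the
// distributeTasks sketch above.
function runWithHandler(handler: string, instructions: string[]): void {
  console.log(`[${handler}]`, instructions.join("; "));
}

function runPendingQueue(pendingQueue: Map<string, ExecutionInstructionGroup>): void {
  for (const [eventIndex, group] of pendingQueue) { // Map preserves insertion order
    console.log(`executing event ${eventIndex}`);   // e.g. 001 → render + move P2→P26→P24
    runWithHandler(group.handler, group.instructions);
  }
}
```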

FIG. 4 is a schematic diagram of a display interface for task scripts according to an embodiment of the invention. Referring to FIG. 4, the display interface 400 lists a plurality of task scripts stored in the database 131 of the electronic apparatus 100A. From the display interface 400, the type of each task script, the number of events it includes, and whether each task script has been published for use by the front-end program 135 can be seen. In addition, the display interface 400 provides editing functions for the user to view, edit, or delete a selected task script.

The storage device 130 of the electronic apparatus 100A further includes a script creation module. In one embodiment, the script creation module is provided by the back-end program 137, and the task script is created through the script creation module. In one embodiment, the script creation module provides a visual editing interface for creating the plurality of events included in the task script. An example is given below.

FIG. 5A to FIG. 5E are schematic diagrams of a script editing interface according to an embodiment of the invention. In this embodiment, the script editing interface 500 is a visual editing interface. The script editing interface 500 can be divided into blocks 510, 520, 530, and 540. Block 510 provides four editing functions for the user to choose from: a shot editing function 511, a text editing function 512, a multimedia file editing function 513, and an interaction trigger editing function 514. Block 520 displays the editing item page of the selected function. Block 530 displays a preview screen. Block 540 displays four editing tracks 541 to 544 corresponding to the four editing functions. These four editing tracks correspond to the same time track.

Here, FIG. 5A and FIG. 5B correspond to the shot editing function 511, FIG. 5C corresponds to the text editing function 512, FIG. 5D corresponds to the multimedia file editing function 513, and FIG. 5E corresponds to the interaction trigger editing function 514.

In FIG. 5A, the shot editing function 511 in block 510 is selected, and the user adds two shot script events 521 and 522 in block 520. Each shot script event has an edit option (to enter event editing) and a delete option (to delete the corresponding event). The following description uses clicking the edit option of shot script event 521 as an example: after the edit option of shot script event 521 is clicked, the script editing interface 500 enters the screen shown in FIG. 5B.

In FIG. 5B, the user can edit the shot script event 521 in block 520 and insert the edited shot script event 521 into the editing track 541. Events e1 and e2 in the editing track 541 correspond to shot script events 521 and 522, respectively. Events e1 and e2 correspond to time sections tp1 and tp2 on the time track. The shot editing function 511 includes a field 521-1 and a field 521-2.

Referring to FIG. 5B, a point name is selected (set) in field 521-1 (for example, point name A) to determine the lens position of the virtual camera, and based on the selected lens position, the panoramic image Img-1 corresponding to the anchor point with point name A is placed into the three-dimensional space model 133 as a texture and displayed in the preview screen of block 530. In addition, the lens parameters of the virtual camera can be determined through field 521-2 or the preview screen. For example, numerical input of the lens parameters is received through field 521-2, or an operation track is received through the preview screen. Specifically, the user can drag, move, or rotate the displayed pre-rendered three-dimensional space model 133 directly on the preview screen to set the lens parameters. The lens parameters include a horizontal angle, a pitch angle, and a field-of-view (FOV) zoom ratio. Afterwards, the corresponding event e1 is added to the editing track 541, and the time section tp1 of event e1 on the time track is set.
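As an illustration of how an operation track on the preview screen could be turned into lens parameters (the patent does not specify the mapping), a sketch with assumed sensitivity constants:

```typescript
// Assumes the LensParameters type from the earlier sketch. DragTrack and the
// constants below are illustrative assumptions only.
interface DragTrack { dxPixels: number; dyPixels: number; wheelDelta: number; }

function dragToLensParameters(prev: LensParameters, drag: DragTrack): LensParameters {
  const DEG_PER_PIXEL = 0.25;
  const ZOOM_PER_WHEEL = 0.001;
  return {
    horizontalAngle: prev.horizontalAngle + drag.dxPixels * DEG_PER_PIXEL,
    pitchAngle: Math.max(-90, Math.min(90, prev.pitchAngle + drag.dyPixels * DEG_PER_PIXEL)),
    fovScale: Math.max(0.5, Math.min(2, prev.fovScale - drag.wheelDelta * ZOOM_PER_WHEEL)),
  };
}
```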

In FIG. 5C, the text editing function 512 in block 510 is selected. The user adds three navigation text events 523, 524, and 525 in block 520, with the text content received through text fields, and inserts the edited navigation text events 523, 524, and 525 into the editing track 542. Events e3, e4, and e5 in the editing track 542 correspond to navigation text events 523, 524, and 525, respectively. Events e3, e4, and e5 correspond to time sections tp3, tp4, and tp5 on the time track.

In FIG. 5D, the multimedia file editing function 513 in block 510 is selected. The user adds two multimedia file events 526 and 527 in block 520, sets the multimedia file paths through a file selection interface, and inserts the edited multimedia file events 526 and 527 into the editing track 543. Events e6 and e7 in the editing track 543 correspond to multimedia file events 526 and 527, respectively. Events e6 and e7 correspond to time sections tp6 and tp7 on the time track. Here, the script creation module can also automatically adjust the playback duration of a multimedia file event so that it matches its time section.

In FIG. 5E, the interaction trigger editing function 514 in block 510 is selected. The user adds two interaction trigger events 528 and 529 in block 520, sets the interactive element numbers through an element selection interface, and inserts the edited interaction trigger events 528 and 529 into the editing track 544. Events e8 and e9 in the editing track 544 correspond to interaction trigger events 528 and 529, respectively. Events e8 and e9 correspond to time sections tp8 and tp9 on the time track.

Accordingly, the user can quickly create and edit task scripts through the script editing interface 500.

In summary, the present invention uses the positioning characteristics of the three-dimensional space model to pre-set the navigation order to be presented and the interactive behavior with the three-dimensional space model, so that the front-end program can follow the arranged itinerary to provide a guided experience. Accordingly, users can understand, through the automatic navigation function, the ideas that the creator of the three-dimensional space model intends to convey.

100A: electronic apparatus
100B: client device
110, 140: processor
120, 150: communication element
130: storage device
131: database
133: three-dimensional space model
135: front-end program
137: back-end program
160: browser
400: display interface
500: script editing interface
510~540: blocks
511: shot editing function
512: text editing function
513: multimedia file editing function
514: interaction trigger editing function
521, 522: shot script events
523, 524, 525: navigation text events
526, 527: multimedia file events
528, 529: interaction trigger events
541~544: editing tracks
e1~e9: events
E1~E19: interactive positions
P1~P29: anchor points
tp1~tp9: time sections
S305~S315: steps of the method for automatic navigation

FIG. 1 is a block diagram of an automatic navigation system for a three-dimensional space model according to an embodiment of the invention.
FIG. 2 is a schematic diagram of a three-dimensional space model including a plurality of anchor points according to an embodiment of the invention.
FIG. 3 is a flowchart of a method for automatic navigation according to an embodiment of the invention.
FIG. 4 is a schematic diagram of a display interface of a task script list according to an embodiment of the invention.
FIG. 5A to FIG. 5E are schematic diagrams of a script editing interface according to an embodiment of the invention.

S305~S315: steps of the method for automatic navigation

Claims (20)

一種自動導覽的方法,利用一處理器來執行,該方法包括:啟動一立體空間模型,其中該立體空間模型所包括的多個定位點分別與多個環景圖相對應,每一該些定位點具有對應的一點位名稱;響應於該立體空間模型的啟動,將一虛擬相機的一鏡頭位置移動至一預設位置,並執行一即時渲染程序以將該預設位置所對應的一預設環景圖作為材質置入該立體空間模型而顯示於一使用者介面;在以該預設環景圖作為材質置入該立體空間模型而顯示於該使用者介面之後,取得對應於該立體空間模型的一任務腳本,其中該任務腳本包括多個事件;以及基於該任務腳本的該些事件位於一時間軌道的一導覽順序,執行該些事件,其中,執行該些事件包括:響應於執行屬於具有該點位名稱的第一類別的每一事件,將該虛擬相機的該鏡頭位置移動至對應該點位名稱對應的該些定位點中的一指定定位點,並執行該即時渲染程序以將自該些環景圖中取出的對應於該指定定位點的一指定環景圖作為材質重新置入該立體空間模型而顯示於該使用者介面中,並且基於對應的參數設定資料執行對應的動作;以及 響應於執行屬於不具有該點位名稱的第二類別的每一事件,基於對應的參數設定資料執行對應的動作。 An automatic navigation method is executed using a processor. The method includes: activating a three-dimensional space model, wherein multiple positioning points included in the three-dimensional space model correspond to multiple panoramic views respectively, and each of the The anchor point has a corresponding point name; in response to the activation of the three-dimensional space model, a lens position of a virtual camera is moved to a preset position, and a real-time rendering program is executed to convert a preset position corresponding to the preset position. Assume that the panoramic view is used as a material to insert the three-dimensional space model and display it in a user interface; after using the default panoramic view as a material to insert the three-dimensional space model and display it in the user interface, obtain the corresponding three-dimensional space model. A mission script of the space model, wherein the mission script includes a plurality of events; and a navigation sequence based on the events of the mission script located in a time track, executing the events, wherein executing the events includes: responding to Execute each event belonging to the first category with the point name, move the lens position of the virtual camera to a specified anchor point among the anchor points corresponding to the point name, and execute the real-time rendering program A specified panoramic image corresponding to the specified positioning point extracted from the panoramic images is re-inserted into the three-dimensional space model as a material and displayed in the user interface, and the corresponding parameter setting data is performed based on the corresponding actions; and In response to executing each event belonging to the second category that does not have the point name, a corresponding action is executed based on the corresponding parameter setting data. 如請求項1所述的自動導覽的方法,更包括:透過一腳本建立模組建立該任務腳本,其中該腳本建立模組提供一可視化編輯介面來建立該任務腳本所包括的該些事件,該可視化編輯介面包括一鏡頭編輯功能、一文字編輯功能、一多媒體檔案編輯功能以及一互動觸發編輯功能。 The method of automatic navigation as described in claim 1 further includes: creating the task script through a script creation module, wherein the script creation module provides a visual editing interface to create the events included in the task script, The visual editing interface includes a lens editing function, a text editing function, a multimedia file editing function and an interactive trigger editing function. 如請求項2所述的自動導覽的方法,其中透過該腳本建立模組建立該任務腳本的步驟包括:透過該鏡頭編輯功能建立一鏡頭腳本事件,包括:經由設定點位名稱來決定該虛擬相機的該鏡頭位置,並基於該鏡頭位置,將該些定位點的其中一者所對應的環景圖作為材質置入該立體空間模型而顯示於一預覽畫面中;透過該預覽畫面或一參數設定欄位,決定該虛擬相機的一鏡頭參數,其中該預覽畫面用以接收一操作軌跡,該參數設定欄位用以接收該鏡頭參數的數值輸入;以及在該時間軌道上設定該鏡頭腳本事件的時間區段。 The method of automatic navigation as described in claim 2, wherein the step of creating the task script through the script creation module includes: creating a lens script event through the lens editing function, including: determining the virtual point name by setting a point name. 
Based on the lens position of the camera, the panoramic view corresponding to one of the anchor points is placed as a material into the three-dimensional space model and displayed in a preview screen; through the preview screen or a parameter A setting field is used to determine a lens parameter of the virtual camera, where the preview screen is used to receive an operation track, and the parameter setting field is used to receive a numerical input of the lens parameter; and to set the lens script event on the time track time period. 如請求項2所述的自動導覽的方法,其中透過該腳本建立模組建立該任務腳本的步驟包括:透過該文字編輯功能建立一導覽文字事件,包括:透過一文字欄位接收一文字內容;以及在該時間軌道上設定該導覽文字事件的時間區段。 The method of automatic navigation as described in claim 2, wherein the step of creating the task script through the script creation module includes: creating a navigation text event through the text editing function, including: receiving a text content through a text field; And set the time section of the navigation text event on the time track. 如請求項2所述的自動導覽的方法,其中透過該腳本建立模組建立該任務腳本的步驟包括:透過該多媒體檔案編輯功能建立一多媒體檔案事件,包括:透過一檔案選擇介面來設定一多媒體檔案路徑;以及在該時間軌道上設定該多媒體檔案事件的時間區段。 The method of automatic navigation as described in claim 2, wherein the step of creating the task script through the script creation module includes: creating a multimedia file event through the multimedia file editing function, including: setting a multimedia file event through a file selection interface. The multimedia file path; and setting the time section of the multimedia file event on the time track. 如請求項2所述的自動導覽的方法,其中透過該腳本建立模組建立該任務腳本的步驟包括:透過該互動觸發編輯功能建立一互動觸發事件,包括:透過一元件選擇介面來設定一互動元件編號;以及在該時間軌道上設定該互動元件編號的時間區段。 The method of automatic navigation as described in request item 2, wherein the step of creating the task script through the script creation module includes: creating an interactive trigger event through the interactive trigger editing function, including: setting an interactive trigger event through an element selection interface. The interactive element number; and the time period in which the interactive element number is set on the time track. 如請求項1所述的自動導覽的方法,其中每一該些事件具有對應的一事件索引,而在取得對應於該立體空間模型的該任務腳本之後,更包括:執行一任務分發程式,以針對每一該些事件,分發對應的一處理程式;利用該處理程式針對每一該些事件產生對應的一執行指令組;將對應的該事件索引與該執行指令組相關連,並將該事件索引儲存至一待執行隊列,其中該待執行隊列是基於該導覽順序來進行儲存;以及響應於該些事件至少其中一個的分發失敗,發出一警示訊號,並終止該任務分發程式。 The method of automatic navigation as described in claim 1, wherein each of the events has a corresponding event index, and after obtaining the task script corresponding to the three-dimensional space model, it further includes: executing a task distribution program, For each of the events, a corresponding handler is distributed; the handler is used to generate a corresponding execution instruction group for each of the events; the corresponding event index is associated with the execution instruction group, and the The event index is stored in a pending execution queue, wherein the pending execution queue is stored based on the navigation sequence; and in response to a distribution failure of at least one of the events, a warning signal is issued and the task dispatcher is terminated. 
如請求項7所述的自動導覽的方法,其中基於該任務腳本的該些事件位於該時間軌道的該導覽順序,執行該些事件的步驟包括:基於該導覽順序,自該待執行隊列中取出具有相同順序的一組事件索引;以及取出該組事件索引相關聯的執行指令組,並透過對應的處理程式來執行。 The method of automatic navigation as described in request item 7, wherein the events based on the task script are located in the navigation sequence of the time track, and the step of executing the events includes: based on the navigation sequence, from the to-be-executed A group of event indexes with the same sequence are retrieved from the queue; and an execution instruction group associated with the event index group is retrieved and executed through a corresponding handler. 如請求項1所述的自動導覽的方法,其中每一該些事件為:一鏡頭腳本事件、一導覽文字事件、一多媒體檔案事件以及一互動觸發事件中的其中一個,該鏡頭腳本事件的參數設定資料包括該虛擬相機的鏡頭參數,該導覽文字事件的參數設定資料包括文字內容,該多媒體檔案事件的參數設定資料包括多媒體檔案路徑,該互動觸發事件的參數設定資料包括一互動元件編號。 The method of automatic navigation as described in claim 1, wherein each of the events is one of: a lens script event, a navigation text event, a multimedia file event and an interactive trigger event, and the lens script event The parameter setting data includes the lens parameters of the virtual camera, the parameter setting data of the navigation text event includes text content, the parameter setting data of the multimedia file event includes the multimedia file path, and the parameter setting data of the interactive trigger event includes an interactive element number. 如請求項1所述的自動導覽的方法,更包括:響應於在該使用者介面啟動該立體空間模型,並透過該立體空間模型接收到的一使用者操作,觸發一互動元件;以及針對在該時間軌道中接續於與該互動元件對應的事件之後的一或多個剩餘事件,基於該導覽順序,執行該或該些剩餘事件。 The method of automatic navigation as described in claim 1 further includes: in response to activating the three-dimensional space model in the user interface and receiving a user operation through the three-dimensional space model, triggering an interactive element; and One or more remaining events following the event corresponding to the interactive element in the time track are executed based on the navigation sequence. 如請求項1所述的自動導覽的方法,更包括:響應於在該使用者介面啟動該立體空間模型,並透過該立體空間模型接收到的一使用者操作,觸發一互動元件並中斷該任務 腳本;取得對應於該互動元件的另一任務腳本;以及基於該另一任務腳本所包括的另外多個事件位於另一時間軌道的另一導覽順序,執行另外該些事件。 The method of automatic navigation as described in claim 1 further includes: in response to activating the three-dimensional space model in the user interface and receiving a user operation through the three-dimensional space model, triggering an interactive element and interrupting the three-dimensional space model. Task Script; obtain another task script corresponding to the interactive element; and execute other events based on another navigation sequence of another plurality of events included in the other task script located in another time track. 一種電子裝置,包括:一儲存裝置,包括:一立體空間模型,包括多個定位點,其中每一該些定位點具有對應的一點位名稱;以及一資料庫,儲存了分別與該些定位點相對應的多個環景圖、一預設位置對應的一預設環景圖以及與該立體空間模型對應的一任務腳本,其中該任務腳本包括多個事件;以及一處理器,耦接至該儲存裝置,且經配置以:啟動該立體空間模型;響應於該立體空間模型的啟動,將一虛擬相機的一鏡頭位置移動至該預設位置,並執行一即時渲染程序以將該預設位置所對應的該預設環景圖作為材質置入該立體空間模型而顯示於一使用者介面;在以該預設環景圖作為材質置入該立體空間模型而顯示於該使用者介面之後,取得對應於該立體空間模型的該任務腳本;基於該任務腳本的該些事件位於一時間軌道的一導覽順序,執行該些事件,其中執行該些事件包括: 響應於執行屬於具有該點位名稱的第一類別的每一事件,將該虛擬相機的該鏡頭位置移動至對應該點位名稱對應的該些定位點中的一指定定位點,並執行該即時渲染程序以將自該些環景圖中取出的對應於該指定定位點的一指定環景圖作為材質重新置入該立體空間模型而顯示於該使用者介面中,並且基於對應的參數設定資料執行對應的動作;以及響應於執行屬於不具有該點位名稱的第二類別的每一事件,基於對應的參數設定資料執行對應的動作。 An electronic device includes: a storage device, including: a three-dimensional space model including a plurality of positioning points, each of which has a corresponding point name; and a database that stores the corresponding positioning points respectively. 
12. An electronic device, comprising: a storage device, comprising: a three-dimensional space model comprising a plurality of anchor points, wherein each of the anchor points has a corresponding point name; and a database storing a plurality of panoramic views respectively corresponding to the anchor points, a preset panoramic view corresponding to a preset position, and a task script corresponding to the three-dimensional space model, wherein the task script comprises a plurality of events; and a processor coupled to the storage device and configured to: start the three-dimensional space model; in response to the start of the three-dimensional space model, move a lens position of a virtual camera to the preset position and execute a real-time rendering program to place the preset panoramic view corresponding to the preset position into the three-dimensional space model as a material for display in a user interface; after the preset panoramic view has been placed into the three-dimensional space model as a material and displayed in the user interface, obtain the task script corresponding to the three-dimensional space model; and execute the events based on a navigation order of the events on a time track of the task script, wherein executing the events comprises: in response to executing each event belonging to a first category having the point name, moving the lens position of the virtual camera to a designated anchor point corresponding to the point name among the anchor points, executing the real-time rendering program to re-place a designated panoramic view, taken from the panoramic views and corresponding to the designated anchor point, into the three-dimensional space model as a material for display in the user interface, and performing a corresponding action based on corresponding parameter setting data; and in response to executing each event belonging to a second category not having the point name, performing a corresponding action based on corresponding parameter setting data.

13. The electronic device as described in claim 12, wherein the storage device further comprises a script creation module, and the processor is configured to create the task script through the script creation module, wherein the script creation module provides a visual editing interface for creating the events included in the task script, and the visual editing interface comprises a lens editing function, a text editing function, a multimedia file editing function, and an interactive trigger editing function.

14. The electronic device as described in claim 13, wherein the processor is configured to create a lens script event through the lens editing function, comprising: determining the lens position of the virtual camera by setting a point name, and, based on the lens position, placing the panoramic view corresponding to one of the anchor points into the three-dimensional space model as a material for display in a preview screen; determining a lens parameter of the virtual camera through the preview screen or a parameter setting field, wherein the preview screen is configured to receive an operation track and the parameter setting field is configured to receive a numerical input of the lens parameter; and setting a time section of the lens script event on the time track.
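Claim 12's run-time behaviour (render the preset panorama at start-up, then walk the events in navigation order, re-rendering only for first-category events that carry a point name) can be pictured with the short sketch below. The Viewer interface and every function name are assumptions for illustration, not the claimed device's actual interfaces.

    // Hypothetical viewer sketch for the behaviour recited in claim 12.
    interface Viewer {
      moveCamera(position: string): void;        // move the virtual camera's lens position
      renderPanorama(panoramaId: string): void;  // real-time rendering: place a panoramic view as material
      performAction(event: GuideEvent): void;    // act on the event's parameter setting data
    }

    function startGuidedTour(viewer: Viewer, presetPosition: string,
                             presetPanorama: string, events: GuideEvent[]): void {
      // On start of the three-dimensional space model: preset position and preset panorama first.
      viewer.moveCamera(presetPosition);
      viewer.renderPanorama(presetPanorama);

      // Then execute the events in navigation order (assumed pre-sorted by time section start).
      for (const event of events) {
        if (event.kind === "lensScript") {
          // First category: has a point name, so move the camera and re-render that anchor's panorama.
          viewer.moveCamera(event.pointName);
          viewer.renderPanorama(event.pointName);
        }
        // Both categories: perform the action given by the event's parameter setting data.
        viewer.performAction(event);
      }
    }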
15. The electronic device as described in claim 13, wherein the processor is configured to create a navigation text event through the text editing function, comprising: receiving a text content through a text field; and setting a time section of the navigation text event on the time track.

16. The electronic device as described in claim 13, wherein the processor is configured to create a multimedia file event through the multimedia file editing function, comprising: setting a multimedia file path through a file selection interface; and setting a time section of the multimedia file event on the time track.

17. The electronic device as described in claim 13, wherein the processor is configured to create an interactive trigger event through the interactive trigger editing function, comprising: setting an interactive element number through an element selection interface; and setting a time section of the interactive element number on the time track.

18. The electronic device as described in claim 12, wherein each of the events has a corresponding event index, and the processor is configured to: execute a task distribution program to distribute a corresponding handler for each of the events; generate, by the handler, a corresponding execution instruction group for each of the events; associate the corresponding event index with the execution instruction group, and store the event index into a pending-execution queue, wherein the pending-execution queue is stored based on the navigation order; and in response to a distribution failure of at least one of the events, issue a warning signal and terminate the task distribution program.

19. The electronic device as described in claim 18, wherein the processor is configured to: take out, from the pending-execution queue and based on the navigation order, a group of event indexes having the same order; and take out the execution instruction group associated with the group of event indexes and execute it through the corresponding handler.

20. The electronic device as described in claim 12, wherein the processor is configured to: in response to the three-dimensional space model being started in the user interface and a user operation being received through the three-dimensional space model, trigger an interactive element; and after triggering the interactive element, either (a) for one or more remaining events that follow the event corresponding to the interactive element on the time track, execute the remaining event or events based on the navigation order, or (b) obtain another task script corresponding to the interactive element and execute a further plurality of events included in the other task script based on another navigation order of the further events on another time track.
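Claims 10, 11, and 20 cover the two possible reactions to a user operation that triggers an interactive element: keep playing the remaining events of the current time track, or interrupt the current task script and run another task script bound to that element. A small sketch of that branch, again with assumed names only:

    // Hypothetical handling of an interactive element trigger (claims 10, 11 and 20).
    interface GuideController {
      remainingEvents(afterIndex: number): GuideEvent[];           // events after the triggering one on the time track
      loadTaskScript(elementNumber: number): GuideEvent[] | null;  // another task script bound to the element, if any
      play(events: GuideEvent[]): void;                            // execute events in their navigation order
      interrupt(): void;                                           // stop the current task script
    }

    function onInteractiveElementTriggered(ctrl: GuideController, elementNumber: number,
                                           triggeredIndex: number): void {
      const otherScript = ctrl.loadTaskScript(elementNumber);
      if (otherScript !== null) {
        ctrl.interrupt();        // branch (b): interrupt the current task script...
        ctrl.play(otherScript);  // ...and run the other task script
      } else {
        // branch (a): continue with the remaining events of the current time track
        ctrl.play(ctrl.remainingEvents(triggeredIndex));
      }
    }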
TW112102570A 2023-01-19 2023-01-19 Method for automatic guide and electronic apparatus TWI828527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW112102570A TWI828527B (en) 2023-01-19 2023-01-19 Method for automatic guide and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW112102570A TWI828527B (en) 2023-01-19 2023-01-19 Method for automatic guide and electronic apparatus

Publications (2)

Publication Number Publication Date
TWI828527B true TWI828527B (en) 2024-01-01
TW202431085A TW202431085A (en) 2024-08-01

Family

ID=90458976

Family Applications (1)

Application Number Title Priority Date Filing Date
TW112102570A TWI828527B (en) 2023-01-19 2023-01-19 Method for automatic guide and electronic apparatus

Country Status (1)

Country Link
TW (1) TWI828527B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201624272A (en) * 2014-12-31 2016-07-01 群邁通訊股份有限公司 Recording and playing script system and method
CN106598574A (en) * 2016-11-25 2017-04-26 腾讯科技(深圳)有限公司 Method and device for page rendering
CN107168780A (en) * 2017-04-06 2017-09-15 北京小鸟看看科技有限公司 Loading method, equipment and the virtual reality device of virtual reality scenario
CN114510626A (en) * 2020-10-28 2022-05-17 秀铺菲公司 System and method for providing enhanced media
TW202225941A (en) * 2020-11-03 2022-07-01 美商視野公司 Virtually viewing devices in a facility
CN113359994A (en) * 2021-06-24 2021-09-07 福州大学 Teaching content configuration and interaction scheme implementation method suitable for AR education application
CN114579226A (en) * 2021-12-10 2022-06-03 国网浙江省电力有限公司宁波供电公司 Lightweight human-computer interaction terminal system
TWM626899U (en) * 2021-12-17 2022-05-11 王一互動科技有限公司 Electronic apparatus for presenting three-dimensional space model

Similar Documents

Publication Publication Date Title
Nebeling et al. The trouble with augmented reality/virtual reality authoring tools
US10984601B2 (en) Data visualization objects in a virtual environment
US20230298285A1 (en) Augmented and virtual reality
JP5524965B2 (en) Inspection in geographic information system
TW201814438A (en) Virtual reality scene-based input method and device
US10039982B2 (en) Artist-directed volumetric dynamic virtual cameras
CN110581947A (en) Taking pictures within virtual reality
TW201816554A (en) Interaction method and device based on virtual reality
US10191612B2 (en) Three-dimensional virtualization
KR101989089B1 (en) Method and system for authoring ar content by collecting ar content templates based on crowdsourcing
US9489759B1 (en) File path translation for animation variables in an animation system
US20220068029A1 (en) Methods, systems, and computer readable media for extended reality user interface
US10216863B2 (en) Program generation method, program generation apparatus, and storage medium
JP2023517367A (en) Virtual scene data processing method, device, electronic device and program
CN112150602A (en) Model image rendering method and device, storage medium and electronic equipment
KR101949493B1 (en) Method and system for controlling play of multimeida content
Lee et al. A component based framework for mobile outdoor ar applications
TWM626899U (en) Electronic apparatus for presenting three-dimensional space model
US11418857B2 (en) Method for controlling VR video playing and related apparatus
TWI828527B (en) Method for automatic guide and electronic apparatus
WO2019105062A1 (en) Content display method, apparatus, and terminal device
KR20210157742A (en) Method and system for providing web content in virtual reality environment
US20190311424A1 (en) Product visualization system and method for using two-dimensional images to interactively display photorealistic representations of three-dimensional objects based on smart tagging
WO2011150702A1 (en) Method for displaying contacts in instant messenger and instant messaging client
TWI799012B (en) Electronic apparatus and method for presenting three-dimensional space model