TW202420034A - The agent of metaverse go outside. - Google Patents
- Publication number
- TW202420034A (Application number TW111141424A)
- Authority
- TW
- Taiwan
- Prior art keywords
- user
- robot
- camera
- metaverse
- control software
- Prior art date
Abstract
Description
The technical field of this invention is metaverse-related applications.
Current metaverse development focuses mostly on building and operating portals and virtual worlds, and the metaverse has stagnated as a result. Building metaverse content requires constructing 3D models and designing interactive content, and wearable devices are not yet widespread, so the usable content is extremely limited. Moreover, objects built from 3D models lack the refinement and practical utility of physical objects, so very little real work can be done inside the metaverse.
This invention approaches the problem from the opposite direction: it lets people in the metaverse step from the virtual world back into the real world, serving as an exit from the metaverse. Users keep their virtual identity and avatar while carrying out real-world activities such as visiting museums and art exhibitions, browsing boutique streets, teaching, or even performing. After entering the metaverse, users can not only participate in any virtual activity but also pass through various exits into different real-world spaces (each acting like a sub-universe), where, appearing as their virtual avatar, they intelligently control a real-world agent to interact with real people and things.
Furthermore, building the metaverse by 3D modeling takes a long time, produces results far inferior to the real thing, and is expensive. Using agents to move through the real world instead yields a realistic, highly immersive experience at low cost, and it establishes an interactive channel between the metaverse and the real world: real-world people can also see into the metaverse through the agent, forming a two-way channel.
Imagine visiting the British Museum or the Palace Museum in the metaverse: modeling them could take decades or even centuries, and a 3D model still falls far short of the original. With agents, that time shrinks to a few months, letting metaverse users tour a grand museum. Once agents are deployed across tens of thousands of real spaces, the metaverse gains tens of thousands of exits leading into different sub-universes. If the objects and buildings in each sub-universe comprise tens of thousands of real components, that amounts to hundreds of millions of additional metaverse elements; and if each sub-universe hosts dozens or even hundreds of participants, metaverse users can interact at any time with hundreds of thousands or even millions of real-world people, rapidly expanding metaverse participation.
Figure 1 is a schematic diagram of this invention. The robot and network platform consist of five main parts: carrier chassis 1 carries the robot's equipment along with the wheels and motors for locomotion; support frame 2 holds display LED fan 3 and camera gimbal 4; and computer software 5 connects the user's device to the robot.
Figure 2 is the structural diagram of this invention. Carrier chassis 1 carries the robot's equipment and the wheels and motors for locomotion: wheels 1-2 support and move the chassis, and motors 1-3 drive the wheels so that the chassis moves. The chassis also carries speaker 1-4, which plays the user's voice, and network controller 1-5, which connects to the network so the user can control the robot and exchange data; computer software 5 is the software layer that relays control and data between the user's equipment (AR/VR headset, mobile phone, computer, and so on) and network controller 1-5. Distance sensor 1-6 measures the distance and position of nearby people or objects in the real environment so the robot avoids collisions, and battery 1-7 powers the entire robot, supplying the devices on chassis 1 as well as display LED fan 3 and camera gimbal 4. Support frame 2 carries display LED fan 3 and camera gimbal 4. Display LED fan 3 consists of LED light strip 3-1 and fan motor 3-2: as the motor spins, the LEDs on the strip flash in color and brightness according to data received through network controller 1-5, and through persistence of vision onlookers see the image the user transmits as a three-dimensional display without wearing 3D glasses. Camera gimbal 4 consists of camera 4-1, gimbal motor 4-2, and microphone 4-3. Camera 4-1 is mounted on gimbal motor 4-2, so when the motor rotates the camera's viewing angle moves with it. By turning their head in VR glasses or a headset, or by using a handheld device or phone, the user transmits the desired gimbal angle, steering the camera's viewpoint to see the real-world scene and, through microphone 4-3, to hear real-world sound.
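The persistence-of-vision principle behind display LED fan 3 can be sketched in code. This is a minimal illustration, not the patent's implementation: the LED count, angular resolution, and frame format are all assumed. It resamples a square image into one color column per blade angle, which is what the spinning strip would show at that instant.

```python
import math

NUM_LEDS = 64     # LEDs along light strip 3-1 (assumed count)
NUM_ANGLES = 360  # angular resolution: one column per degree of rotation

def frame_to_columns(frame, num_leds=NUM_LEDS, num_angles=NUM_ANGLES):
    """Resample a square RGB frame (frame[y][x]) into polar columns.

    Each column holds the colors the LED strip must display as the fan
    blade sweeps through that angle; persistence of vision fuses the
    columns into a complete, frameless image for the onlooker.
    """
    size = len(frame)
    cx = cy = (size - 1) / 2.0
    columns = []
    for a in range(num_angles):
        theta = 2 * math.pi * a / num_angles
        column = []
        for led in range(num_leds):
            # LED index 0 sits at the hub, the last LED at the blade tip.
            r = (led / (num_leds - 1)) * (size / 2.0 - 1)
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy - r * math.sin(theta)))
            column.append(frame[y][x])
        columns.append(column)
    return columns
```

In a real device, a hall sensor or encoder on fan motor 3-2 would report the current blade angle, and the controller would push the matching column to the strip on each pass.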
Figure 3 shows how the invention is controlled in actual operation. The agent is a robot operating in a real venue. Through the network platform, the user connects to the agent with any Internet-capable device such as a computer, mobile phone, or VR glasses. The image of the user's virtual avatar in the metaverse is displayed on the agent's display LED fan, while the camera and microphone on the agent send the captured video and audio back to the user through the network controller, making the user feel present in the venue.
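The two-way flow in Figure 3 can be sketched as a simple message relay. All message types and names here are illustrative assumptions, not part of the patent: avatar frames and drive/gimbal commands flow from the user to the robot, and captured video and audio flow back.

```python
import json

class AgentRelay:
    """Minimal sketch of the Figure 3 message flow (names are illustrative)."""

    def __init__(self, user_send, robot_send):
        self.user_send = user_send    # callable delivering data to the user's device
        self.robot_send = robot_send  # callable delivering data to network controller 1-5

    def from_user(self, raw):
        # Avatar imagery and control commands go downstream to the agent.
        msg = json.loads(raw)
        if msg["type"] in ("avatar_frame", "drive", "gimbal"):
            self.robot_send(raw)
        else:
            raise ValueError(f"unknown user message type: {msg['type']}")

    def from_robot(self, raw):
        # Captured media returns upstream so the user feels present in the venue.
        msg = json.loads(raw)
        if msg["type"] in ("video", "audio"):
            self.user_send(raw)
        else:
            raise ValueError(f"unknown robot message type: {msg['type']}")
```

A production system would carry the media streams over a real-time transport rather than JSON messages, but the routing responsibility of the network platform is the same.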
Figure 4 shows different forms of carrier chassis 1. The chassis can be a simple vehicle-like mobile platform, or a humanoid robot with arms, capable of both locomotion and hand movements; on such a chassis, display LED fan 3 only needs to show the user's head, not the whole body.
Figure 5 shows another form of carrier chassis 1: a drone, which lets the agent move in three-dimensional space.
1: Carrier chassis
1-2: Wheels
1-3: Motor
1-4: Speaker
1-5: Network controller
1-6: Distance sensor
1-7: Battery
2: Support frame
3: Display LED fan
3-1: LED light strip
3-2: Fan motor
4: Camera gimbal
4-1: Camera
4-2: Gimbal motor
4-3: Microphone
5: Computer software
To make the above and other features, advantages, and embodiments of this invention easier to understand, the attached drawings are described as follows:
Figure 1 is a schematic diagram of this invention;
Figure 2 is the structural diagram of this invention;
Figure 3 shows the control method of this invention in actual operation;
Figure 4 shows one alternative form of carrier chassis 1;
Figure 5 shows a second alternative form of carrier chassis 1;
The implementation of this invention is shown in Figure 2. Carrier chassis 1 carries the robot's equipment along with the wheels and motors for locomotion. At least two wheels 1-2 support and move the chassis, and motors 1-3 drive the wheels so that the chassis moves; the wheels can be ordinary wheels or omnidirectional wheels, or can be arranged as tracks for use on uneven ground. The chassis also carries speaker 1-4, which plays the user's voice, and network controller 1-5, which connects to the network so the user can control the robot and exchange data; the controller may use Wi-Fi or a 4G/5G/6G cellular connection. Through computer software 5, the agent's control signals and data travel over the Internet, linking to various metaverse platforms and carrying audio and video between the user side and the robot side.

Distance sensor 1-6 can be ultrasonic, infrared, a laser rangefinder, or a camera. It detects the distance and position of nearby people or objects in the real environment so the robot avoids collisions, and it can also locate the agent within the venue, enabling self-driving routes and automatic return-to-dock charging. Battery 1-7 powers the entire robot, supplying the devices on chassis 1 as well as display LED fan 3 and camera gimbal 4. Support frame 2 carries display LED fan 3 and camera gimbal 4. Display LED fan 3 consists of LED light strip 3-1 and fan motor 3-2: as the motor spins, the LEDs on the strip flash in color and brightness according to data received through network controller 1-5, and through persistence of vision onlookers see the image the user transmits as a three-dimensional display without wearing 3D glasses. This differs from ordinary screens such as LCD or OLED panels: an ordinary screen has a bezel, so every image appears inside a frame, and the unlit parts of the screen remain black and visible to the viewer, who simply sees a whole screen. The image formed here by the rotating LED strip and persistence of vision has no frame, and the unlit regions are fully transparent, so the viewer sees the physical objects behind the image. The resulting glasses-free 3D effect resembles a ghost-like apparition, and with a mobile chassis carrying the image around the venue, it is like a cartoon character jumping into the real world to interact with real people.

Camera gimbal 4 consists of camera 4-1, gimbal motor 4-2, and microphone 4-3. Camera 4-1 is mounted on gimbal motor 4-2, so when the motor rotates the camera's viewing angle moves with it. By turning their head in VR glasses or a headset, or by using a handheld device or phone, the user transmits the desired gimbal angle through computer software 5, steering the camera's viewpoint to see the real-world scene and, through microphone 4-3, hear real-world sound, while also controlling the robot's movement, such as moving forward and backward.
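The control side described above can be sketched as two small functions: mapping headset orientation onto gimbal motor 4-2 targets, and gating forward drive commands with distance sensor 1-6 so the agent never collides with nearby people or objects. The angle limits and stop distance are illustrative assumptions, not values from the patent.

```python
def headset_to_gimbal(yaw_deg, pitch_deg, pan_range=(-170, 170), tilt_range=(-45, 45)):
    """Clamp headset yaw/pitch into the gimbal's mechanical range (assumed limits)."""
    clamp = lambda v, lo, hi: max(lo, min(hi, v))
    return clamp(yaw_deg, *pan_range), clamp(pitch_deg, *tilt_range)

def drive_command(forward, stop_distance_m, sensed_distance_m):
    """Gate a forward drive command with the distance sensor reading.

    If an obstacle is closer than the stop threshold, halt instead of
    driving forward; reversing away from the obstacle stays allowed.
    """
    if forward > 0 and sensed_distance_m < stop_distance_m:
        return 0.0
    return forward
```

In practice the same sensor readings could also feed the self-driving routes and return-to-dock behavior mentioned above, but that requires venue mapping beyond this sketch.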
1: Carrier chassis
2: Support frame
3: Display LED fan
4: Camera gimbal
5: Computer software
Claims (7)
Publications (1)
Publication Number | Publication Date |
---|---|
TW202420034A true TW202420034A (en) | 2024-05-16 |