CN111741287B - Method for triggering content by using position information of MR glasses - Google Patents

Method for triggering content by using position information of MR glasses

Info

Publication number
CN111741287B
CN111741287B (Application CN202010659833.4A)
Authority
CN
China
Prior art keywords
live
action
glasses
position coordinates
scenic spots
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010659833.4A
Other languages
Chinese (zh)
Other versions
CN111741287A (en
Inventor
沈重
张鲲
周晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Xinyan Collaborative Positioning And Navigation Research Institute Co ltd
Original Assignee
Nanjing Xinyan Collaborative Positioning And Navigation Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Xinyan Collaborative Positioning And Navigation Research Institute Co ltd filed Critical Nanjing Xinyan Collaborative Positioning And Navigation Research Institute Co ltd
Priority to CN202010659833.4A priority Critical patent/CN111741287B/en
Publication of CN111741287A publication Critical patent/CN111741287A/en
Application granted granted Critical
Publication of CN111741287B publication Critical patent/CN111741287B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a method for triggering content with MR (mixed reality) glasses using position information, comprising the following steps: setting, in the MR glasses, the geographic position coordinates at which each virtual scene triggers its playback content; integrating into the MR glasses an optical camera in communication with the operations back end and a cloud server, using the camera to dynamically and intermittently photograph the live scene in front of the user, and uploading the photos to the cloud server; and analyzing, by image recognition, the geographic position coordinates of the scenic spots in the current live-scene photo, the local MCU of the MR glasses then triggering the playback content bound to those coordinates. The technical scheme dispenses with peripheral devices, simplifies what the user must wear, and triggers content imperceptibly, making human-machine interaction friendlier and more convenient; it saves the cost of deploying and maintaining scan codes; and it effectively avoids the inaccurate triggering that afflicts voice recognition.

Description

Method for triggering content by using position information of MR glasses
Technical Field
The invention relates to the design of operating modes for intelligent tour-guide equipment in tourist attractions, and in particular to a method by which MR (mixed reality) glasses trigger content using position information; it belongs to the field of Internet of Things applications.
Background
With the continuing development of material and cultural life, more and more cultural-tourism offerings are being promoted to a broad public. After busy daily work, people plan trips for the major holidays, relaxing while broadening their knowledge. Throughout a trip, whether boating or visiting exhibition halls, tourists cannot do without guided introductions to the scenic spots; without them, they skim over or miss the cultural character of the most important sights.
Currently, shared MR glasses are being adopted by more and more scenic areas and visitors as a new generation of wearable tour-guide device. In practice, however, existing equipment shows many deficiencies. For example: some MR glasses composite a desktop-style graphical menu into the virtual scene, which must be operated by selecting and clicking with a handheld remote controller; some call up a playback content package from the cloud server only after a two-dimensional code is recognized; others drive the desktop menu with voice commands.
Clearly, these triggering methods have drawbacks. First, the MR glasses must be tethered to a peripheral such as a handheld remote or a pressure glove, and the interaction remains a flat two-dimensional interface lacking dynamic interest; the peripheral is bulky and fragile and occupies the user's hands, so everyday actions such as reaching into a backpack become a real burden. Second, the two-dimensional-code stickers posted at scenic spots are easily defaced or damaged, and if the linked network resource fails, the MR glasses cannot fetch the playback content. Third, voice commands demand a high standard of Mandarin; local dialects and differences in users' age and vocal health reduce recognition accuracy, inevitably delaying the triggering of content or, worse, preventing it altogether.
Disclosure of Invention
In view of the above deficiencies of the prior art, the object of the present invention is to propose a method by which MR glasses trigger content using position information.
The technical solution by which the invention achieves this object is as follows: a method for MR glasses to trigger content using position information: setting, in the MR glasses, the geographic position coordinates at which each virtual scene triggers its playback content; integrating into the MR glasses an optical camera in communication with the operations back end and a cloud server, using the camera to dynamically and intermittently photograph the live scene in front of the user, and uploading the photos to the cloud server; and analyzing, by image recognition, the geographic position coordinates of the scenic spots in the current live-scene photo, the local MCU of the MR glasses then triggering the playback content bound to those coordinates.
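The steps just stated form a capture, recognize, trigger loop. A minimal sketch follows; all function and parameter names are invented for illustration, since the patent specifies no concrete APIs:

```python
# Illustrative end-to-end sketch of the claimed flow; camera, recognizer,
# and player are stand-ins, since the patent specifies no concrete APIs.
def trigger_once(capture_photo, recognize_coords, content_table, play):
    """One intermittent cycle: photograph, upload/recognize, trigger.

    capture_photo() -> photo; recognize_coords(photo) -> list of geographic
    coordinates (the cloud-side image recognition); content_table maps a
    coordinate to its playback content; play(content) presents it.
    """
    photo = capture_photo()                 # live-scene photo ahead of the user
    triggered = []
    for coord in recognize_coords(photo):   # coordinates of recognized scenic spots
        content = content_table.get(coord)  # content bound at setup time
        if content is not None:
            play(content)                   # local MCU triggers the playback
            triggered.append(coord)
    return triggered
```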
Further, the optical camera, controlled by its MCU, runs in a dynamic intermittent photographing mode to acquire live-scene photos: it pauses photographing for 3 s to 10 s whenever playback content in the virtual scene of the MR glasses is triggered, and after photographing resumes it dynamically adjusts the pause duration according to the current playback state and whether the geographic position coordinates analyzed from the newly acquired photo match those from the previous one.
Further, if after photographing resumes the coordinates from two successive photo analyses agree and the scenic spot lies in the central area of the photo, the playback content is retained (or the whole segment replayed) and a slow photographing mode is adopted, with the pause extended beyond 3 s.
Further, if the coordinates from two successive analyses agree but the scenic spot lies at the edge of the photo, the playback content is retained and the photographing pause is shortened.
Further, if the coordinates from two successive analyses differ, or the scenic spot has disappeared from the photo, playback is interrupted and the pause is shortened to under 1 s.
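The pause-adjustment rules above can be condensed into one decision function. The over-3 s slow mode and under-1 s fast mode come from the text; the concrete 5.0 / 1.5 / 0.5 s values and all names are illustrative assumptions:

```python
# Hypothetical decision function for the pause adjustment; the >3 s slow
# mode and <1 s fast mode follow the text, while the exact 5.0/1.5/0.5
# values and all names are illustrative assumptions.
def next_interval(prev_coords, new_coords, position):
    """Return (action, pause_seconds) after a resumed photo is analyzed.

    prev_coords / new_coords: scenic-spot coordinate from the previous and
    the new photo (None if no spot is visible); position: 'center' or
    'edge' of the spot within the new live-scene photo.
    """
    if new_coords is None or new_coords != prev_coords:
        # Spot changed or disappeared: interrupt playback, search quickly.
        return ("interrupt", 0.5)           # fast mode, under 1 s
    if position == "center":
        # Same spot, still centered: keep (or replay) the content, slow mode.
        return ("keep", 5.0)                # slow mode, over 3 s
    # Same spot drifting to the edge: keep the content, shorten the pause.
    return ("keep", 1.5)
```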
Further, the playback content has two modes, title presentation and multimedia presentation. When the analysis of a live-scene photo yields the geographic position coordinates of two or more scenic spots, the spot closest to the center of the photo triggers the multimedia presentation, while the remaining spots simultaneously trigger title presentations.
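Selecting the scenic spot nearest the photo's center for the multimedia presentation, with titles for the rest, might look like the following; the names and the pixel-coordinate representation are assumptions, not from the patent:

```python
# Illustrative selection of presentation modes among several recognized
# scenic spots; the (name, (x, y)) pixel representation is an assumption.
import math

def assign_presentations(spots, photo_center):
    """spots: list of (name, (x, y)) positions within the photo.

    The spot nearest the photo's center gets the multimedia presentation;
    every other spot gets a title-only presentation.
    """
    if not spots:
        return {}
    nearest, _ = min(spots, key=lambda s: math.dist(s[1], photo_center))
    return {name: ("multimedia" if name == nearest else "title")
            for name, _ in spots}
```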
Further, while a multimedia presentation is playing, the photographing pause is extended to 3 s to 10 s and playback continues; the live-scene photo taken after photographing resumes is then analyzed. If every scenic spot has disappeared from the photo, playback stops, the virtual scene is restored, and the pause is shortened. If the analysis yields the coordinates of exactly one scenic spot, the playback content at those coordinates is triggered. If it yields two or more spots and the spot carrying the multimedia presentation is still the one closest to the photo's center, that presentation is retained and the remaining spots continue to trigger title presentations. If it yields two or more spots but the spot carrying the multimedia presentation has moved from the center toward the edge, that presentation is switched to a title, the spot now closest to the center triggers the multimedia presentation, and the remaining spots synchronously trigger titles.
The invention's scheme of triggering content from position information has outstanding substantive features and marks clear progress: it dispenses with peripheral devices and simplifies what the user wears; because the geographic coordinates of scenic spots are recognized from the photographed live scene, content is triggered without any touch, making human-machine interaction friendlier and more convenient; the high cost of deploying scan codes across a scenic area and maintaining them periodically is saved; and the scheme effectively avoids the inaccurate triggering that afflicts voice commands exposed to ambient noise and recognition error.
Detailed Description
The following detailed description of embodiments is provided to aid understanding of the technical solution of the invention and to clarify its scope of protection.
Given the dated design of existing methods by which MR glasses in tourist attractions trigger playback content, with their objective problems of inconvenient carrying, complex operation, and a heavy burden on the user's experience, the applicant's designers, drawing on years of experience developing intelligent terminals and scene-reproduction equipment, innovatively propose a method for MR glasses to trigger content using position information, improving the intelligence of human-machine interaction through greater hardware integration and deep software development on the glasses.
The innovation mainly concerns the way the playback of scenic-spot introductions is triggered within the virtual scene of the MR glasses. By way of background, the virtual scene in MR glasses is a picture highly similar to the real scene seen without such equipment, and it follows the real scene synchronously. The difference is that the virtual scene extends readily into multiple dimensions: using virtual imaging and data exchange with a cloud server, it can present multiple nested layers of imagery. Put intuitively: a visitor touring with the naked eye perceives only the live scene ahead, and must consult a mobile phone or a human guide to learn the origins and details of what they see; wearing MR glasses, the current live scene serves as the background of the visual field while an introduction to the scenic spot in focus is presented on the top layer, giving the user a relaxed and unencumbered visit. The features are summarized as: setting, in the MR glasses, the geographic position coordinates at which each virtual scene triggers its playback content; integrating into the MR glasses an optical camera in communication with the operations back end and a cloud server, photographing the live scene in front of the user dynamically and intermittently, and uploading the photos to the cloud server; and analyzing, by image recognition, the geographic position coordinates of the scenic spots in the current photo, the local MCU of the MR glasses then triggering the playback content at those coordinates.
To make the outlined features concrete: setting the geographic position coordinates at which virtual scenes trigger playback content rests on scaling the virtual scene to the actual scenic area; a chosen position in the area (such as its center or an exit) is defined as the coordinate origin, so that every actual scenic spot naturally receives a unique geographic position coordinate. The virtual scene in the MR glasses tracks the user's viewing angle in real time, so each live-scene photo taken by the optical camera pairs with the current picture of the virtual scene. The uploaded photos are analyzed by the cloud server's processing capacity using mature image-recognition techniques; the geographic coordinates of the scenic spots shown in the photo are obtained and signaled to the operations back end or the glasses' local MCU, and once position-triggered interaction is configured in the virtual scene, playback content can be triggered dynamically and accurately.
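Assigning each scenic spot a unique geographic coordinate from its offset relative to the chosen origin could be sketched as follows, under a flat-earth approximation that is adequate over a scenic area a few kilometres across; the function name and approach are illustrative, not from the patent:

```python
# Illustrative coordinate assignment: each scenic spot gets a unique
# geographic coordinate from its metre offset relative to the chosen
# origin (e.g. the scenic-area center). Flat-earth approximation;
# nothing here is specified by the patent.
import math

METRES_PER_DEG_LAT = 111_320.0  # approximate length of one degree of latitude

def to_geo_coordinate(origin_lat, origin_lon, east_m, north_m):
    lat = origin_lat + north_m / METRES_PER_DEG_LAT
    lon = origin_lon + east_m / (METRES_PER_DEG_LAT * math.cos(math.radians(origin_lat)))
    return (round(lat, 6), round(lon, 6))
```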
On this basis the optical camera is made dynamic and intermittent; that is, it does not photograph the live scene at a fixed frequency. For example, while playback content in the virtual scene is active, photographing is paused for 3 s to 10 s or even longer, so that a sudden turn of the user's head does not restart the clip from the beginning or cut it off and disrupt the introduction. After photographing resumes, the pause duration is adjusted dynamically according to the current playback state and whether the coordinates analyzed from the new photo match those from the previous one, covering mainly, but not only, the following three situations.
First, after the pause ends and photographing resumes, the camera uploads the new photo to the cloud server, which processes it and compares the coordinates from the two analyses. If the coordinates agree and the scenic spot still lies in the central area of the photo, the user's focus has not shifted, and playback is retained; or, for a user absorbed in the spot, the whole segment is replayed to reinforce the impression. In this case the local MCU of the glasses extends the photographing pause into a slow mode of 3 s or more.
Second, the procedure after resuming is the same, except that the coordinates from the two analyses agree while the scenic spot lies at the edge of the photo: the user's focus is beginning to shift and will soon leave the spot, so playback continues, but the camera's photographing pause is shortened.
Third, the procedure is again the same, except that the coordinates from the two analyses differ, or the scenic spot has disappeared from the photo: the user's focus has shifted, either to a new scenic spot or to an empty view of the background. In the former case the content for the new spot is triggered; in the latter the MR glasses interrupt playback at once. The pause is shortened to a fast photographing mode of under 1 s while the next scenic spot of interest is sought.
Building on this position-triggered embodiment, it is natural to expect most photos to contain two or more scenic spots at once, so the playback content of any spot in the virtual scene is given two modes: title presentation and multimedia presentation. In detail, when a photo yields the coordinates of two or more scenic spots, the spot closest to the photo's center triggers the multimedia presentation and the remaining spots simultaneously trigger titles. As the number of spots grows, so do the picture layers and the complexity of the virtual scene; but by locating and ranking the geographic coordinates, a detailed multimedia introduction of the spot in focus is presented while the surrounding spots show only their titles, retaining a basic geographic identity and shifting position or prominence with the user's interest.
Naturally, once a multimedia presentation is active, playback continues over the extended 3 s to 10 s pause, the photo taken after photographing resumes is analyzed, and each case is handled on its merits. If every scenic spot has disappeared from the photo, no spot remains in the user's view, so playback stops, the live virtual scene is restored, and the pause is shortened. If exactly one spot's coordinates are found, the spot density around the user is low, and the content at those coordinates is triggered. If two or more spots are found and the one carrying the multimedia presentation is still closest to the photo's center, the user's focus remains on the original spot, so the presentation is retained and the other spots keep their titles. If two or more spots are found but the one carrying the multimedia presentation has moved from the center toward the edge, the spots ahead have multiplied and the user's focus has shifted: that presentation is switched to a title, the spot now closest to the center triggers the multimedia presentation, and the remaining spots trigger titles.
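The four re-analysis branches described in this paragraph can be condensed into one decision function; this is a hypothetical sketch, and the action names and (name, (x, y)) representation are invented:

```python
# Hypothetical condensation of the four re-analysis branches; action
# names and the (name, (x, y)) spot representation are invented.
import math

def reanalyze_after_multimedia(current, spots, photo_center):
    """current: spot whose multimedia presentation is playing.

    spots: list of (name, (x, y)) found in the fresh photo.
    Returns (action, assignments) per the four cases in the text.
    """
    if not spots:
        return ("stop_and_restore", {})          # all spots gone: stop playback
    if len(spots) == 1:
        return ("trigger", {spots[0][0]: "multimedia"})
    nearest, _ = min(spots, key=lambda s: math.dist(s[1], photo_center))
    assignments = {name: ("multimedia" if name == nearest else "title")
                   for name, _ in spots}
    if nearest == current:
        return ("keep", assignments)             # focus unchanged
    return ("switch", assignments)               # old show demoted to a title
```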
In summary, from the feature description and detailed embodiments of this scheme for MR glasses to trigger content using position information, the scheme exhibits prominent substantive features and significant progress: it dispenses with external equipment and simplifies what the user wears; the geographic coordinates of scenic spots are recognized from photographed live scenes, so content is triggered without any touch, making human-machine interaction friendlier and more convenient; the high cost of deploying scan codes across a scenic area and maintaining them periodically is saved; and the inaccurate triggering that afflicts voice commands exposed to ambient noise and recognition error is effectively avoided.
In addition to the above embodiments, the invention may have other embodiments; any technical solution formed by equivalent substitution or equivalent transformation falls within the claimed scope of the invention.

Claims (3)

1. A method for MR glasses to trigger content using position information, in which geographic position coordinates at which virtual scenes trigger playback content are set in the MR glasses, and an optical camera in communication with an operations back end and a cloud server is integrated into the MR glasses, characterized in that: the optical camera, controlled by a local MCU, photographs dynamically and intermittently to obtain live-scene photos directly in front of the user and uploads them to the cloud server; the cloud server analyzes, by image recognition, the geographic position coordinates of the scenic spots in the current photo; photographing is paused for 3 s to 10 s when playback content in the virtual scene of the MR glasses is triggered, and after photographing resumes the pause duration is adjusted dynamically according to the current playback state and whether the coordinates analyzed from the newly acquired photo match those from the previous one;
if, after photographing resumes, the coordinates from two successive photo analyses agree and the scenic spot lies in the central area of the photo, the playback content is retained or the whole segment replayed, and a slow photographing mode is adopted with the pause extended beyond 3 s;
if the coordinates from two successive analyses agree and the scenic spot lies at the edge of the photo, the playback content is retained and the photographing pause is shortened;
if the coordinates from two successive analyses differ, or the scenic spot has disappeared from the photo, playback is interrupted and the pause is shortened to under 1 s.
2. The method for MR glasses to trigger content using position information according to claim 1, characterized in that: the playback content has two modes, title presentation and multimedia presentation; when analysis of a live-scene photo yields the geographic position coordinates of two or more scenic spots, the spot closest to the center of the photo triggers the multimedia presentation and the remaining spots simultaneously trigger title presentations.
3. The method for MR glasses to trigger content using position information according to claim 2, characterized in that: while a multimedia presentation is active, the photographing pause is extended to 3 s to 10 s and playback continues, and the photo taken after photographing resumes is analyzed; if every scenic spot has disappeared from the photo, playback stops, the virtual scene is restored, and the pause is shortened; if the coordinates of exactly one scenic spot are obtained, the playback content at those coordinates is triggered; if the coordinates of two or more spots are obtained and the spot carrying the multimedia presentation is still closest to the photo's center, the presentation is retained and the remaining spots continue to trigger titles; and if the coordinates of two or more spots are obtained and the spot carrying the multimedia presentation has moved from the center toward the edge, the presentation is switched to a title, the spot now closest to the center triggers the multimedia presentation, and the remaining spots trigger titles.
CN202010659833.4A 2020-07-10 2020-07-10 Method for triggering content by using position information of MR glasses Active CN111741287B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010659833.4A CN111741287B (en) 2020-07-10 2020-07-10 Method for triggering content by using position information of MR glasses

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010659833.4A CN111741287B (en) 2020-07-10 2020-07-10 Method for triggering content by using position information of MR glasses

Publications (2)

Publication Number Publication Date
CN111741287A CN111741287A (en) 2020-10-02
CN111741287B true CN111741287B (en) 2022-05-17

Family

ID=72655965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010659833.4A Active CN111741287B (en) 2020-07-10 2020-07-10 Method for triggering content by using position information of MR glasses

Country Status (1)

Country Link
CN (1) CN111741287B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114185431B (en) * 2021-11-24 2024-04-02 安徽新华传媒股份有限公司 Intelligent media interaction method based on MR technology

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101765054A (en) * 2009-10-27 2010-06-30 李勇 Mobile voice intelligent guide service system and method
CN102210136A (en) * 2009-09-16 2011-10-05 索尼公司 Device, method, and program for processing image
CN103530352A (en) * 2013-10-10 2014-01-22 浙江大学 Device and method for obtaining scenic spot information in real time based on smart watch
CN104539723A (en) * 2015-01-12 2015-04-22 曹振祥 Virtual guide system based on scenic spot feature point positioning
CN104598589A (en) * 2015-01-20 2015-05-06 惠州Tcl移动通信有限公司 Intelligent tourist method and system based on image identification
CN105955252A (en) * 2016-04-12 2016-09-21 江苏理工学院 Intelligent voice tour guide robot and path optimizing method thereof
CN106896940A (en) * 2017-02-28 2017-06-27 杭州乐见科技有限公司 Virtual objects are presented effect control method and device
CN107358639A (en) * 2017-07-25 2017-11-17 上海传英信息技术有限公司 A kind of photo display method and photo display system based on intelligent terminal
CN107403395A (en) * 2017-07-03 2017-11-28 深圳前海弘稼科技有限公司 Intelligent tour method and intelligent tour device
CN107688392A (en) * 2017-09-01 2018-02-13 广州励丰文化科技股份有限公司 A kind of control MR heads show the method and system that equipment shows virtual scene
CN107704078A (en) * 2017-09-11 2018-02-16 广州慧玥文化传播有限公司 The method and system of MR patterns are realized based on optical alignment
CN207181824U (en) * 2017-09-14 2018-04-03 呼伦贝尔市瑞通网络信息咨询服务有限公司 Explain AR equipment in scenic spot
WO2018076912A1 (en) * 2016-10-28 2018-05-03 捷开通讯(深圳)有限公司 Virtual scene adjusting method and head-mounted intelligent device
CN109712247A (en) * 2018-12-10 2019-05-03 浙江工业大学 Outdoor scene training system based on mixed reality technology

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080112193A (en) * 2005-12-30 2008-12-24 스티븐 케이스 Genius adaptive design
JP5477059B2 (en) * 2010-03-04 2014-04-23 ソニー株式会社 Electronic device, image output method and program
KR101838033B1 (en) * 2011-11-25 2018-03-15 삼성전자 주식회사 Method and apparatus for providing image photography of a user device
CN104219647B (en) * 2014-05-23 2019-06-21 华为技术有限公司 Wireless channel control method, traffic package transaction and recommendation methods, and related devices
CN106791385A (en) * 2016-12-09 2017-05-31 深圳创维-Rgb电子有限公司 Viewing method, apparatus and system based on virtual reality technology

Also Published As

Publication number Publication date
CN111741287A (en) 2020-10-02

Similar Documents

Publication Publication Date Title
KR102555443B1 (en) Matching content to a spatial 3d environment
US20200227089A1 (en) Method and device for processing multimedia information
WO2022068537A1 (en) Image processing method and related apparatus
JP6574937B2 (en) COMMUNICATION SYSTEM, CONTROL METHOD, AND STORAGE MEDIUM
JP2022111133A (en) Image processing device and control method for the same
US11636644B2 (en) Output of virtual content
CN108319171B (en) Dynamic projection method and device based on voice control and dynamic projection system
WO2014181380A1 (en) Information processing device and application execution method
WO2021244457A1 (en) Video generation method and related apparatus
WO2022095788A1 (en) Panning photography method for target user, electronic device, and storage medium
CN112887584A (en) Video shooting method and electronic equipment
JP2013533668A (en) Method for determining key video frames
CN110322760B (en) Voice data generation method, device, terminal and storage medium
CN115918108B (en) Method for determining function switching entrance and electronic equipment
CN113382154A (en) Human body image beautifying method based on depth and electronic equipment
WO2021185296A1 (en) Photographing method and device
CN114415907B (en) Media resource display method, device, equipment and storage medium
WO2021115483A1 (en) Image processing method and related apparatus
CN110572716A (en) Multimedia data playing method, device and storage medium
CN113497890B (en) Shooting method and equipment
US9179031B2 (en) Content acquisition apparatus and storage medium
CN111741287B (en) Method for triggering content by using position information of MR glasses
CN106033588A (en) Control system for image ordering and restaurant scene rendering and working method thereof
WO2022037479A1 (en) Photographing method and photographing system
CN111541889B (en) Method for using sight line triggering content by MR glasses

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant