CN114185433A - Intelligent glasses system based on augmented reality and control method - Google Patents

Intelligent glasses system based on augmented reality and control method

Info

Publication number
CN114185433A
Authority
CN
China
Prior art keywords
scene
data
intelligent glasses
user
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111474128.8A
Other languages
Chinese (zh)
Inventor
王雪燕
黄正宗
王亮
陈霖
蔡雍稚
张玉江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Kedun Technology Co ltd
Original Assignee
Zhejiang Kedun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Kedun Technology Co ltd filed Critical Zhejiang Kedun Technology Co ltd
Priority to CN202111474128.8A priority Critical patent/CN114185433A/en
Publication of CN114185433A publication Critical patent/CN114185433A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An augmented reality-based smart glasses system and a control method thereof comprise a plurality of VR/AR/MR smart glasses access devices, a server, and a plurality of multilayer internet areas based on the server. The VR/AR/MR smart glasses access devices are connected to the server through wireless communication, and a plurality of virtual-world slices run on the server; a specific number of slices can be superimposed to form a new slice, and a user selects a slice to project through a VR/AR/MR smart glasses access device. The access device acquires data from the selected slice to complete the information retrieval and information interaction functions, and uploads data to the slice to complete the information publication and information annotation functions.

Description

Intelligent glasses system based on augmented reality and control method
Technical Field
The application relates to the technical field of augmented reality, in particular to an intelligent glasses system based on augmented reality and a control method.
Background
VR glasses built around the metaverse are currently riding a wave of enthusiasm, and more and more companies and enterprises are announcing metaverse initiatives and developing related products. In the metaverse, VR glasses mainly serve as immersive, fully virtual products. But precisely because they create a fully virtual world platform, users easily become absorbed in the virtual world and escape from the real one, which tends to reduce everyone's productivity and output.
Meanwhile, real life frequently presents situations where realizing a concept or design is expensive and slow to build; lacking interactivity and fun in the real world, more and more people choose to roam the internet instead, and the high travel and search costs of the real world push more and more people to shop and order takeout online. All of these problems arise because the real world and the virtual world are not interconnected.
A system and method that fuse the virtual world and the real world on a single platform can therefore both keep users from sinking into the virtual world and reduce the realization cost of the real world while improving its interest, interactivity, and relevance. Such a technology, system, and its applications can let people genuinely experience real life while enjoying the convenience of virtual life.
Disclosure of Invention
An augmented reality-based smart glasses system comprises a plurality of VR/AR/MR smart glasses access devices, a server, and a plurality of multilayer internet areas based on the server. The VR/AR/MR smart glasses access devices are connected to the server through wireless communication, and a plurality of virtual-world slices run on the server; a specific number of slices can be superimposed to form a new slice, and a user selects a slice to project through a VR/AR/MR smart glasses access device. The access device acquires data from the selected slice to complete the information retrieval and information interaction functions, and uploads data to the slice to complete the information publication and information annotation functions.
The slices are presented as multiple scenes in the application software of the VR/AR/MR smart glasses access device. The multiple scenes are divided into different functional scenes and different theme scenes. The functional scenes include a message-leaving scene, an authoring scene, an interaction scene, a biological scene, a shopping scene, a retrieval scene, a push scene, a joint scene, a design scene, an annotation scene, a friend-making scene, a navigation scene, and a live-broadcast scene; the theme scenes include scenes corresponding to different characters, games, movies, and animations. The scenes are visually presented on the VR/AR/MR smart glasses singly or in combination.
The VR/AR/MR smart glasses access device comprises a smart glasses body and an interactive control device that establish a data connection with each other. Through the interactive control device, the user switches among the multiple scenes on the smart glasses body; the virtual imaging on the smart glasses body contains scene tags carrying ordinal numbers.
Furthermore, a scene tag may be provided with subordinate tags, and scene switching and drill-down are performed through the interactive control device.
Furthermore, for each of the multiple scenes the data background of the VR/AR/MR smart glasses stores a usage index, at least one feature standard, and a set of AR parameters, and the scene tags are sorted from high to low by usage index.
The usage rate is calculated as

U(i) = t(i) / Σj t(j)

where t(i) is the usage duration of the i-th scene and the denominator Σj t(j) is the total duration for which the user has used the smart glasses.
An augmented reality-based smart glasses system control method comprises the following steps:
S1, establishing a user database to store the user's feature data;
S2, updating the user's feature data periodically, and sorting scenes from high to low according to the feature data.
The processing of the feature data comprises the following steps:
S3, classifying the feature data of all users in the background;
S4, establishing user portraits from the classified user feature data, where each class of user portrait corresponds to a certain interval of user feature data and to a certain scene ordering;
S5, classifying a new user into a user portrait according to the user's operations, and executing the scene ordering corresponding to that portrait.
An augmented reality-based smart glasses system scene application method comprises the following steps:
S6, switching to or selecting the augmented reality interface of the design scene through the interface on the smart glasses;
S7, virtually modeling design elements on a base frame through a port; when modeling is complete, saving and uploading the structural data of the design elements and their position data relative to the base frame to the cloud, and delivering the design element data to the smart glasses through the cloud server;
S8, acquiring the actual image in the field of view through the sensing module, recognizing the base frame in the actual image through the system, letting the user select design elements through the motion capture device, and completing the visual presentation of the design elements through the imaging device.
Drawings
FIG. 1 is a block diagram of the hardware logic of the smart glasses system of the present application;
FIG. 2 is an internal logic diagram of embodiment one of the present application;
FIG. 3 is an external presentation of embodiment one of the present application;
FIG. 4 is an external frame diagram of embodiment two of the present application;
FIG. 5 is an internal logic diagram of embodiment two of the present application;
FIG. 6 is an external frame diagram of embodiment three of the present application;
FIG. 7 is an internal logic diagram of embodiment three of the present application;
FIG. 8 is an external frame diagram of embodiment four of the present application;
FIG. 9 is an external frame diagram of embodiment six of the present application;
fig. 10 is an external frame diagram of embodiment ten of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
As shown in fig. 1, the hardware of an augmented reality-based smart glasses system includes a plurality of VR/AR/MR smart glasses access devices, a server, and a plurality of multilayer internet areas based on the server. The VR/AR/MR smart glasses access devices connect to the server through wireless communication, and the server runs the multilayer internet areas. Each internet area can be regarded as a slice of a virtual world; a specific number of slices can be superimposed and combined into a new slice, and a user can select a slice to project through a VR/AR/MR smart glasses access device. When a VR/AR/MR smart glasses access device acquires data from an internet area, i.e., performs the information retrieval and information interaction functions: the server screens and classifies the real-time data uploaded by the access device against the target information data corresponding to the selected slice and retains the specific data; the access device and the slice on the server complete the association and interaction of the specific information data; and upon confirmation of the specific information data, the slice on the server transmits the corresponding multidimensional data to the access device for projection. The VR/AR/MR smart glasses access device contains several different types of sensors and several data input devices. When the access device uploads data to an internet area, i.e., performs the information publication and information annotation functions: the sensors capture the current environment information and the wearer's motion information, the data input devices accept input, and the resulting data are uploaded to the server; the slice on the server classifies and screens the data and stores what passes screening; the stored data can then be retrieved and interacted with under that slice of the internet area, and projected as specific information data to any VR/AR/MR smart glasses access device that has established the corresponding association and interaction with the server.
Also based on fig. 1, another augmented reality-based smart glasses system hardware includes a plurality of smart glasses access devices, a smartphone, a server, and a plurality of multilayer internet areas based on the server. The smart glasses access device establishes a data channel with the smartphone through a Bluetooth connection, and the smartphone connects to the server through wireless communication. The server runs a plurality of multilayer internet areas; each internet area can be regarded as a slice of a virtual world, a specific number of slices can be superimposed and combined into a new slice, and the user can select a slice through the smartphone to display that slice's information data on the APP interface. The smart glasses contain a front-facing sensing module. When the smartphone acquires data from an internet area, i.e., performs the information retrieval and information interaction functions: the sensing module acquires video information data in real time and transmits it over the Bluetooth connection to the smartphone APP; the APP client establishes a data connection with the internet area through wireless communication; the server screens and classifies the real-time video information data uploaded by the APP client against the target information data corresponding to the selected slice and retains the specific data; the smartphone and the slice on the server complete the association and interaction of the specific video information data; and upon confirmation of the specific information data, the slice on the server transmits the corresponding multidimensional data to the smartphone for display in the APP client. When the smartphone uploads data to an internet area, i.e., performs the information publication and information annotation functions: data are input through the smartphone and uploaded to the server; the slice on the server classifies and screens the data and stores what passes screening; the stored data can then be retrieved and interacted with under that slice of the internet area and presented as specific information data on the smartphone's APP interface.
In embodiment one, as shown in fig. 2, the software of the augmented reality-based smart glasses system comprises multiple scenes divided into different functional scenes, including but not limited to a message-leaving scene, an authoring scene, an interaction scene, a biological scene, a shopping scene, a retrieval scene, a push scene, a joint scene, a design scene, an annotation scene, a friend-making scene, a navigation scene, and a live-broadcast scene; each functional scene is visually presented on the smart glasses by switching or superposition. Each functional scene may in turn be divided into different theme scenes, including but not limited to scenes corresponding to different characters, games, movies, and animations. The theme scenes are the slices described above; slices can be superimposed, and a specific number of slices can be combined into a new slice, i.e., theme scenes can be visually presented on the smart glasses individually or in combination. If the number of theme scenes is N, the number of scene combinations available to the user is bounded by (2^N - 1).
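As a concrete illustration of that count, the following minimal Python sketch (all names are illustrative, not from the patent) enumerates the non-empty slice combinations for N theme scenes; for N = 3 it yields 2^3 - 1 = 7 superpositions:

    from itertools import combinations

    def slice_combinations(scenes):
        # Yield every non-empty superposition of theme-scene slices.
        # For N scenes this produces 2**N - 1 combinations.
        for r in range(1, len(scenes) + 1):
            for combo in combinations(scenes, r):
                yield combo

    scenes = ["game", "movie", "animation"]
    combos = list(slice_combinations(scenes))
    assert len(combos) == 2 ** len(scenes) - 1  # 7 for N = 3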
As shown in fig. 3, the hardware of the augmented reality-based smart glasses system in embodiment one includes a smart glasses body 101 and an interactive control device 102 that establish a data connection. Through the interactive control device 102 the user switches among the virtual scenes on the smart glasses body 101; the virtual imaging on the smart glasses body 101 contains scene tags: scene one 201, scene two 202, scene three 203, scene four 204, and so on, enabling visual switching between scenes. Further, a scene tag may have subordinate tags for expanding the sub-scenes of a given scene; the interactive control device 102 performs scene switching and drill-down.
Further, each scene in the data background corresponds to a usage rate, at least one feature standard, and a set of AR parameters. Scene one, scene two, scene three, scene four, and so on are ordered from the highest corresponding usage rate to the lowest. The usage rate may be calculated as

U(i) = t(i) / Σj t(j)

where t(i) is the usage duration of the i-th scene and the denominator Σj t(j) is the total duration for which the user has used the smart glasses.
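A minimal sketch of this ranking in Python (function and scene names are illustrative only):

    def usage_rates(durations):
        # U(i) = t(i) / sum_j t(j); empty input yields an empty ranking.
        total = sum(durations.values())
        return {scene: t / total for scene, t in durations.items()} if total else {}

    def rank_scene_tags(durations):
        # Order scene tags from highest to lowest usage rate.
        rates = usage_rates(durations)
        return sorted(rates, key=rates.get, reverse=True)

    durations = {"scene one": 120.0, "scene two": 45.0, "scene three": 300.0}  # minutes
    print(rank_scene_tags(durations))  # ['scene three', 'scene one', 'scene two']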
Furthermore, to adapt gradually to the user's habits, the data update service works as follows:
S52, establishing a user database to store the user's feature data, such as the usage duration, usage rate, or usage frequency of each scene;
S53, updating the user's feature data periodically, and sorting scenes from high to low according to the feature data.
Furthermore, to better match user needs with personalized, customized services, the specific method is as follows:
S54, classifying the feature data of all users in the background;
S55, establishing user portraits from the classified user feature data, where each class of user portrait corresponds to a certain interval of user feature data and to a certain scene ordering;
S56, classifying a new user into a user portrait according to the user's operations, and executing the scene ordering corresponding to that portrait.
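The portrait lookup can be pictured with a short Python sketch (portrait names, feature keys, and intervals are hypothetical, chosen only to illustrate the interval-matching idea):

    from dataclasses import dataclass

    @dataclass
    class UserPortrait:
        name: str
        intervals: dict    # feature name -> (low, high) interval
        scene_order: list  # scene ranking executed for this portrait

    def classify_user(features, portraits):
        # Return the first portrait whose every interval contains the user's value.
        for p in portraits:
            if all(lo <= features.get(k, 0) <= hi for k, (lo, hi) in p.intervals.items()):
                return p
        return None

    portraits = [
        UserPortrait("heavy shopper", {"daily_minutes": (60, 1440), "shopping_rate": (0.3, 1.0)},
                     ["shopping", "retrieval", "push"]),
        UserPortrait("casual user", {"daily_minutes": (0, 60), "shopping_rate": (0.0, 0.3)},
                     ["message", "navigation", "shopping"]),
    ]
    user = {"daily_minutes": 95, "shopping_rate": 0.45}
    match = classify_user(user, portraits)
    print(match.scene_order if match else "no portrait matched")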
Furthermore, autonomous switching and selection of scenes can be realized as follows:
S57, analyzing the detected ambient environment data to obtain at least one main feature of the surroundings;
S58, comparing the feature standard of each scene against the main feature; if a match is found, automatically switching to the scene whose feature standard matches the main feature.
In autonomous scene switching and selection, the feature standard may be set, for example, to a portrait occupying one third or more of the full image, or to another ratio set by the manufacturer, such as one quarter or one fifth.
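A minimal sketch of that trigger, assuming a segmented camera frame in which person pixels are labeled 1 (the scene names follow the examples in this application; everything else is an illustrative assumption):

    import numpy as np

    def portrait_ratio(mask):
        # Fraction of pixels labeled as 'person' in a segmented frame.
        return float((mask == 1).sum()) / mask.size if mask.size else 0.0

    def autoswitch(mask, scene_standards, current_scene):
        # Switch to the first scene whose feature standard the frame satisfies.
        ratio = portrait_ratio(mask)
        for scene, threshold in scene_standards.items():
            if ratio >= threshold:
                return scene
        return current_scene

    mask = np.zeros((480, 640), dtype=np.uint8)
    mask[:, :260] = 1                        # a person fills ~40% of the frame
    standards = {"friend-making": 1 / 3}     # manufacturer-set feature standard
    print(autoswitch(mask, standards, "navigation"))  # -> friend-making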
In embodiment two, the visual scene superposition state of the augmented reality-based smart glasses system under the message-leaving function superimposes the visual imaging of voice data and text data onto the real scene passing through the lenses of the smart glasses. The smart glasses system hardware realizing the message-leaving function is shown in fig. 4 and comprises a smart glasses body I 301, an imaging device I 401, a sensing module I 402, a voice input device I 403, a text input device I 404, a positioning module I 405, a communication module I 406, and a server I 407. The imaging device I 401, sensing module I 402, voice input device I 403, text input device I 404, positioning module I 405, and communication module I 406 each establish a data connection with the smart glasses body I 301, and the communication module I 406 and the server I 407 are connected by remote communication. The sensing module I 402 may be a camera or a laser radar.
As shown in fig. 5, the control method of the augmented reality-based smart glasses system with the message-leaving function in embodiment two is as follows:
S1, switching to or selecting the augmented reality interface of the message-leaving scene through the interface on the smart glasses;
S2, acquiring real-time GPS information and image information through the positioning module and the sensing module on the smart glasses;
S3, uploading the GPS information and image information to the server through the communication module, and matching them against the GPS information and image information attached to the voice data and text data in the historical data stored on the server;
S4, after a successful match, the server returns the correspondingly matched voice data and text data from the historical data, which are received through the communication module of the smart glasses and displayed through the imaging device of the smart glasses.
Furthermore, to reduce the number of matching operations and speed up the matching of historical data against real-time data, the historical data are preprocessed as follows:
S5, dividing the area into campuses according to one or more of the GPS information and image information attached to the voice data and text data in the historical data, and determining the corresponding GPS and/or image information range of each campus;
S6, classifying the voice data and text data in the historical data according to the determined ranges, completing a partition of the data with the campus as the unit.
The campus division in S5 may specifically be:
S51, primarily dividing the campuses by GPS information, so that campus 1, campus 2, campus 3, campus 4, and so on each correspond to a GPS range;
S52, secondarily dividing each campus by the image information acquired by the sensing module: identifying and extracting image feature quantities or markers and labeling each of them, e.g., campus 1 marker points 1-3, campus 2 marker points 1-3, and campus 3 marker points 1-3.
The preprocessed data are then matched as follows:
S7, matching the voice data and text data in the server's historical data against the GPS information and image information of each campus block according to their attached GPS and image information;
S8, if a match succeeds, moving the voice data or text data into the matching campus block;
S9, matching the real-time GPS information and real-time image information from the smart glasses against the GPS information and image information of each campus block;
S10, if a match succeeds, virtually imaging the voice data and text data of that campus block onto the smart glasses.
Further, to keep the historical data up to date, real-time data are captured as follows:
S11, acquiring voice data and text data through the voice input device and text input device of the smart glasses, together with the attached GPS information and image information acquired in real time through the positioning module and the sensing module;
S12, uploading the voice data and text data with their attached GPS information and image information to the historical database on the server.
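The two-stage campus matching can be sketched in Python as follows (the block structure, field names, and coordinates are hypothetical; the patent specifies only the GPS-range and marker stages):

    from dataclasses import dataclass, field

    @dataclass
    class CampusBlock:
        campus_id: int
        marker_id: int
        gps_box: tuple                                 # (lat_min, lat_max, lon_min, lon_max)
        messages: list = field(default_factory=list)   # stored voice/text entries

    def in_box(lat, lon, box):
        lat_min, lat_max, lon_min, lon_max = box
        return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

    def match_block(lat, lon, marker, blocks):
        # Stage 1: GPS range; stage 2: recognized marker point.
        for b in blocks:
            if in_box(lat, lon, b.gps_box) and b.marker_id == marker:
                return b
        return None

    blocks = [CampusBlock(1, 1, (30.00, 30.01, 120.00, 120.01), ["hello from campus 1"])]
    hit = match_block(30.005, 120.005, marker=1, blocks=blocks)
    print(hit.messages if hit else [])   # messages to project onto the lenses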
In embodiment three, the visual scene superposition state of the augmented reality-based smart glasses system under the authoring function superimposes the visual imaging of image data onto the real scene passing through the lenses of the smart glasses. The smart glasses system hardware realizing the authoring function is shown in fig. 6 and comprises a smart glasses body II 501, an operating handle 502, an imaging device II 504, a sensing module II 503, a positioning module II 505, a communication module II 506, and a server II 507. The operating handle 502, imaging device II 504, sensing module II 503, positioning module II 505, and communication module II 506 each establish a data connection with the smart glasses body II 501, and the communication module II 506 and the server II 507 are connected by remote communication. The sensing module II 503 may be a camera or a laser radar.
In embodiment three, the control method of the augmented reality-based smart glasses system with the authoring function comprises a display layer and an authoring layer. The display-layer method is essentially the same as in embodiment two and, as shown in fig. 7, proceeds as follows:
S13, switching to or selecting the augmented reality display-layer interface of the authoring scene through the interface on the smart glasses;
S14, acquiring real-time GPS information and image information through the positioning module and the sensing module on the smart glasses;
S15, uploading the GPS information and image information to the server through the communication module, and matching them against the GPS information and image information attached to the three-dimensional and two-dimensional image data in the historical data stored on the server;
S16, after a successful match, the server returns the correspondingly matched three-dimensional and two-dimensional image data, which are received through the communication module of the smart glasses and displayed through the imaging device of the smart glasses.
Furthermore, to reduce the number of matching operations and speed up the matching of historical data against real-time data, the historical data are preprocessed in the same way as in embodiment two.
The authoring-layer method is as follows:
S23, switching to or selecting the augmented reality authoring-layer interface of the authoring scene through the interface on the smart glasses;
S24, the user constructs three-dimensional or two-dimensional image data through the operating handle of the smart glasses; the constructed real-time image data are imaged on the lenses through the imaging device, the current GPS information is obtained from the positioning module, the current image information of the real scene is obtained from the sensing module, and the GPS information and image information are attached to the three-dimensional or two-dimensional image data;
S25, uploading the packaged data to the server through the communication module, from which the three-dimensional or two-dimensional image data can be delivered to other smart glasses for imaging display.
In embodiment three the three-dimensional and two-dimensional image data are produced with the smart glasses and the operating handle, but they may also be constructed through other ports, such as a PC or mobile terminal. The image data can be placed at the corresponding position by entering position coordinates, or by opening the position-information image through a client and dragging the image data onto it. Furthermore, the three-dimensional and two-dimensional image data can be stored in an open-source authoring library from which they can be dragged and copied during authoring at any port, facilitating secondary creation.
In embodiment four, the visual scene superposition state of the augmented reality-based smart glasses system superimposes the visual imaging of image data onto the real scene passing through the lenses of the smart glasses. The smart glasses system hardware realizing the interaction function is shown in fig. 8 and comprises a smart glasses body III 701, an imaging device III 601, a sensing module III 602, a positioning module III 605, a communication module III 606, a motion capture device III 603, a server III 607, and an image modeling and trigger setting port III 702. The imaging device III 601, sensing module III 602, positioning module III 605, communication module III 606, and motion capture device III 603 each establish a data connection with the smart glasses body III 701; the communication module III 606 and the server III 607 are connected by remote communication; and the image modeling and trigger setting port III 702 establishes a data connection with the server III 607. The sensing module III 602 may be a camera or a laser radar.
In embodiment four, the control method of the augmented reality-based smart glasses system with the interaction function comprises the following steps:
S29, switching to or selecting the augmented reality interface of the interaction scene through the interface on the smart glasses;
S30, acquiring real-time GPS information and image information through the positioning module and the sensing module on the smart glasses;
S31, uploading the real-time GPS information and image information to the server through the communication module, and matching them against the GPS positions and feature-region positions set for the two-/three-dimensional image/video data produced at the development port and stored on the server;
S32, after a successful match, the server returns the correspondingly matched two-/three-dimensional image/video data, which are received through the communication module of the smart glasses and displayed through the imaging device, completing the static/dynamic visual display of the data;
S33, the user operates and interacts with the presented image/video data; the motion capture device senses the user's operations, and if a trigger condition set at the development port is met, the image/video data are visually presented according to the variables set at the development port.
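The trigger step S33 can be pictured with a tiny Python sketch (the gesture name and the returned display variable are hypothetical placeholders for whatever the development port configures):

    def handle_interaction(gesture, triggers):
        # Return the display variable configured for the first matching trigger.
        for trigger in triggers:
            if gesture == trigger["gesture"]:
                return trigger["variable"]
        return None   # no trigger condition met; imagery stays unchanged

    triggers = [{"gesture": "tap", "variable": "door_open_animation"}]
    print(handle_interaction("tap", triggers))   # -> door_open_animation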
The scene of embodiment four may be further combined with other theme scenes to realize the construction of the joint scene.
In embodiment five, the visual scene superposition state of the augmented reality-based smart glasses system under the biological function superimposes the visual imaging of image data onto the real scene passing through the lenses of the smart glasses; the hardware of the biologically functional smart glasses system is the same as in embodiment four.
In embodiment five, the control method of the augmented reality-based smart glasses system with the biological function comprises:
S34, switching to or selecting the augmented reality interface of the biological scene through the interface on the smart glasses;
S35, acquiring real-time GPS information and image information through the positioning module and the sensing module on the smart glasses;
S36, uploading the real-time GPS information and image information to the server through the communication module, and matching them against the GPS positions and feature-region positions set for the biological video data produced at the development port and stored on the server;
S37, after a successful match, the server returns the correspondingly matched biological video data, which are received through the communication module of the smart glasses and displayed through the imaging device, completing the dynamic visual display of the data;
S38, the user operates and interacts with the presented biological video data; the motion capture device senses the user's operations, and if a trigger condition set at the development port is met, the biological video data are visually presented according to the feedback variables set at the development port.
The actual characteristic data are constructed as follows:
S341, constructing the invariant data among the actual characteristic data and building a visualization model of the invariant data;
S342, constructing the variable data among the actual characteristic data, defining a formula for the variable data as a function of time, and applying the formula to the visualization model through a transfer function to set the model's quantitative change;
S343, setting a threshold for the variable data and building the visualization model used once the threshold is exceeded; when the value of the variable data crosses the set threshold, the visualization model switches accordingly, completing the setting of the model's qualitative change.
The biological data may be actual characteristic data of a living being, and the actual characteristic data formula is as follows:
F(θ_i) = f_0(θ_i(0)) + f_1(θ_i(t, s, ...)) + f_2(θ_i(m, n, ...)), i = 0, 1, 2, 3, ..., n
where θ_i are the parameter indexes of the visualization model; f_0(θ_i(0)) is the invariant data of the visualization model; f_1(θ_i(t, s, ...)) is the variable data of the visualization model; f_2(θ_i(m, n, ...)) is the feedback data of the visualization model; t, s, ... are parameters that change with time and affect the visualization model, such as time, climate, lighting, and food intake; and m, n, ... are trigger parameters sensed by the sensors, such as feeding amount, watering amount, and interaction amount.
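A minimal numeric sketch of this decomposition in Python, assuming a logarithmic growth curve for f_1 and a linear feeding response for f_2 (both curves, the threshold, and the model names are illustrative assumptions, not from the patent):

    import math

    def visual_model_state(theta0, t, feed_amount, threshold=10.0):
        # F(theta) = f0 + f1(t, ...) + f2(m, ...): invariant data, time-driven
        # quantitative change, and sensor-triggered feedback, per the formula above.
        f0 = theta0                       # invariant data
        f1 = 0.5 * math.log1p(t)          # assumed growth over elapsed time t
        f2 = 0.1 * feed_amount            # assumed feedback from feeding amount
        value = f0 + f1 + f2
        # crossing the threshold flips the model to its qualitative-change form
        model = "adult_model" if value > threshold else "juvenile_model"
        return value, model

    print(visual_model_state(theta0=8.0, t=3600.0, feed_amount=25.0))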
In embodiment six, the visual scene superposition state of the augmented reality-based smart glasses system under the shopping function superimposes the visual imaging of related information data onto the real scene passing through the lenses of the smart glasses. The smart glasses system hardware realizing the shopping function is shown in fig. 9 and comprises a smart glasses body IV 801, an imaging device IV 901, a sensing module IV 902, a communication module IV 906, and a server IV 907. The imaging device IV 901, sensing module IV 902, and communication module IV 906 each establish a data connection with the smart glasses body IV 801, and the communication module IV 906 and the server IV 907 are connected by remote communication. The sensing module IV 902 may be a camera or a laser radar. Furthermore, to capture the position where the user's gaze rests, display details there, and shield irrelevant information data, a rear camera IV 905 is provided and establishes a data connection with the smart glasses body IV 801. Furthermore, to complete the operating experience of a product, a motion capture device IV 903 is provided and establishes a data connection with the smart glasses body IV 801. Furthermore, to complete the virtual modeling of a product, an image modeling and trigger setting port IV 802 is provided and establishes a data connection with the server IV 907.
In embodiment six, the control method of the augmented reality-based smart glasses system with the shopping function comprises:
S39, switching to or selecting the augmented reality interface of the shopping scene through the interface on the smart glasses;
S40, acquiring an image within the field of view through the sensing module of the smart glasses, recognizing the image algorithmically, and identifying the corresponding item category;
S41, retrieving information related to the item category from the network database and the built-in database, and imaging the related information through the imaging device of the smart glasses.
The related information in S41 may be an encyclopedia introduction of the item and shopping links from the various platforms. Further, the shopping links are sorted by the value of an indicator, which may be the item's price.
The related information in S41 may also be obtained by annotating the item through another PC port, mobile port, or smart glasses port, applying text annotations or link annotations to the image data.
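The link-sorting step can be sketched in a few lines of Python (platform names, URLs, and prices are fabricated placeholders; the price indicator follows the example above):

    def sort_shopping_links(links, indicator="price"):
        # Order platform shopping links by the chosen indicator value.
        return sorted(links, key=lambda link: link[indicator])

    links = [
        {"platform": "shop A", "url": "https://a.example/item", "price": 19.9},
        {"platform": "shop B", "url": "https://b.example/item", "price": 15.5},
    ]
    for link in sort_shopping_links(links):
        print(link["platform"], link["price"])   # cheapest platform listed first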
In embodiment seven, the visual scene superposition state of the augmented reality-based smart glasses system under the retrieval function superimposes the visual imaging of related retrieval data onto the real scene passing through the lenses of the smart glasses; the hardware of the smart glasses system realizing the retrieval function is the same as in embodiment six. Furthermore, to give embodiments six and seven an information-screening capability, a rear camera may be disposed on the smart glasses.
In embodiment seven, the control method of the augmented reality-based smart glasses system with the retrieval function comprises:
S42, switching to or selecting the augmented reality interface of the retrieval scene through the interface on the smart glasses;
S43, acquiring an image within the field of view through the sensing module of the smart glasses, recognizing the image algorithmically, identifying feature points in the image to lock onto feature objects, and uploading the feature-object data to the server through the communication module;
S44, returning the information related to the feature objects stored in the network database and the built-in database on the server to the smart glasses through the communication module, and imaging the related information through the imaging device of the smart glasses.
The related information may include multidimensional data such as text, voice, and image data uploaded for the feature object by other users, as well as multidimensional data such as text, voice, and image data uploaded for the feature object through other ports.
In embodiment eight, the visual scene superposition state of the augmented reality-based smart glasses system under the push function superimposes the visual imaging of related push data onto the real scene passing through the lenses of the smart glasses; the hardware of the smart glasses system realizing the push function is the same as in the preceding embodiment, except that no positioning module is required.
In embodiment eight, the control method of the augmented reality-based smart glasses system with the push function comprises:
S52, switching to or selecting the augmented reality interface of the push scene through the interface on the smart glasses;
S53, acquiring images within the field of view through the sensing module of the smart glasses, recognizing the video content to locate it, uploading the information data of the specifically located content to the server through the communication module according to the system settings, and returning built-in advertisements, information, and content peripherals to the smart glasses through the server;
S54, visually presenting the pushed advertisements, information, and content peripherals on the smart glasses through the imaging module.
Furthermore, to increase the interest and interactivity of the video content, a comment block may be added so that smart glasses users can discuss the video content with one another.
In embodiment nine, the visual scene superposition state of the augmented reality-based smart glasses system under the design function superimposes the visual imaging of related design-element data onto the real scene passing through the lenses of the smart glasses; the hardware of the smart glasses system realizing the design function is the same as in embodiment four, except that no positioning module is required.
In embodiment nine, the control method of the augmented reality-based smart glasses system with the design function comprises:
S55, switching to or selecting the augmented reality interface of the design scene through the interface on the smart glasses;
S56, virtually modeling design elements on a base frame through a port; when modeling is complete, saving and uploading the structural data of the design elements and their position data relative to the base frame to the cloud, and delivering the design element data to the smart glasses through the cloud server;
S57, acquiring the actual image in the field of view through the sensing module, recognizing the base frame in the actual image through the system, letting the user select design elements through the motion capture device, and completing the visual presentation of the design elements through the imaging device.
The base frame described in S56 and S57 may be a human skeleton or outline, or the frame and outline of a garment.
The visual presentation of the design elements by the imaging device in S57 follows one of two schemes (a positioning sketch follows this list):
Scheme one, the position of the imaged design element on the recognized base frame of the actual image is confirmed and locked through the stored position data of the design element relative to the base frame;
Scheme two, the design element is manually moved to the specified position through the motion capture device.
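A minimal sketch of scheme one in Python, assuming a 2-D pixel frame origin (e.g., a detected head keypoint), a detected-to-modeled scale factor, and a stored element offset (all values are hypothetical):

    def anchor_design_element(frame_origin, frame_scale, rel_offset):
        # Place a design element using its offset recorded relative to the
        # base frame at modeling time, scaled to the detected frame size.
        dx, dy = rel_offset
        return (frame_origin[0] + dx * frame_scale,
                frame_origin[1] + dy * frame_scale)

    # a hat element modeled 40 px above the head keypoint
    print(anchor_design_element(frame_origin=(320, 180), frame_scale=1.25,
                                rel_offset=(0, -40)))   # -> (320.0, 130.0)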
In embodiment ten, the visual scene superposition state of the augmented reality-based smart glasses system under the annotation function superimposes the visual imaging of related annotation data onto the real scene passing through the lenses of the smart glasses. The smart glasses system hardware with the annotation function is shown in fig. 10 and comprises a smart glasses body V 1001, an input device V 1002, an imaging device V 1004, a sensing module V 1003, a positioning module V 1005, a communication module V 1006, and a server V 1007. The input device V 1002, imaging device V 1004, sensing module V 1003, positioning module V 1005, and communication module V 1006 each establish a data connection with the smart glasses body V 1001, and the communication module V 1006 and the server V 1007 are connected by remote communication. The sensing module V 1003 may be a camera or a laser radar; the input device V 1002 may accept voice input or text input.
In embodiment ten, the control method of the augmented reality-based smart glasses system with the annotation function comprises:
S45, switching to or selecting the augmented reality interface of the annotation scene through the interface on the smart glasses;
S46, acquiring an image within the field of view through the sensing module of the smart glasses, recognizing the image algorithmically, identifying object outlines in the image, acquiring the user's operations through the sensing module, and completing the selection of the annotated object according to those operations;
S47, after annotation, whenever the sensing module senses the annotated object, the smart glasses system automatically follows its path and stores its final position coordinate before it leaves the sensing area of the sensing module; meanwhile, the sensing module senses in real time whether the annotated object has reappeared in the sensing area, and if so, the system resumes following its path and updates the stored final position coordinate before the object disappears again, repeating this cycle;
S48, acquiring an instruction to find an annotated object through the input device, retrieving the final position coordinate of the object stored in the background, forming a guidance path from that coordinate and the real-time position coordinate of the smart glasses, generating a guidance mark from the path, and visually imaging the guidance mark through the imaging device.
The specific method by which the sensing module senses the annotated object may be:
S471, extracting the feature points of the annotated object through the sensing module at annotation time and storing the corresponding data;
S472, when the sensing module senses a candidate object, extracting its feature points and comparing them against the stored feature points of the annotated object; if they match, the system is considered to have sensed the annotated object.
The match in S472 may specifically mean that the comparison similarity exceeds a set threshold.
The position coordinates in S47 may be obtained through the positioning module.
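A minimal Python sketch of this feature matching and last-seen tracking, assuming feature points reduced to a descriptor vector compared by cosine similarity (the descriptor, the 0.8 threshold, and the coordinates are illustrative assumptions):

    import numpy as np

    class AnnotatedObject:
        # Track an annotated object by feature similarity and remember
        # its final coordinate before it leaves the sensing area.
        def __init__(self, features, threshold=0.8):
            self.features = np.asarray(features, dtype=float)
            self.threshold = threshold
            self.last_position = None

        def matches(self, candidate):
            a, b = self.features, np.asarray(candidate, dtype=float)
            sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
            return sim >= self.threshold   # similarity vs. set threshold (S472)

        def observe(self, candidate, position):
            # Update the stored position while the object stays in view (S47).
            if self.matches(candidate):
                self.last_position = position
                return True
            return False

    keys = AnnotatedObject(features=[0.9, 0.1, 0.4])
    keys.observe([0.88, 0.12, 0.41], position=(30.0051, 120.0049))
    print(keys.last_position)   # coordinate used to build the guidance path (S48)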
In the friend-making function, the visual scene superposition state of the augmented reality-based smart glasses system superimposes the visual imaging of related comprehensive data onto the real scene passing through the lenses of the smart glasses. Smart glasses with the friend-making function combine one or more of the above embodiments to complete the data interaction between smart glasses, including image data, text data, and voice data.
In the navigation function, the visual scene superposition state of the augmented reality-based smart glasses system superimposes the visual imaging of related position data onto the real scene passing through the lenses of the smart glasses. Smart glasses with the navigation function combine one or more of the above embodiments to complete the data interaction between smart glasses, including position data and image data.
In the live-broadcast function, the visual scene superposition state of the smart glasses system superimposes the visual imaging of related video data onto the real scene passing through the lenses of the smart glasses. Smart glasses with the live-broadcast function combine one or more of the above embodiments to complete the data interaction between smart glasses, including position data and image data.
The server accessed by the smart glasses system may be a centrally deployed server or distributed, edge-deployed servers, with no limit on their number. If distributed servers are used, they may be placed at various locations; the smart glasses can access a distributed server through several spatial sensing modes, such as GPS sensing, network sensing, and radar sensing, and distributed servers may be placed in public spaces such as buses, shops, schools, hospitals, public institutions, and enterprises.
The device through which the equipment in the above embodiments acquires human operations may be an image sensor, radar sensor, touch sensor, key sensor, voice sensor, or any other sensor capable of capturing human behavior.
In the above embodiments, a scene is entered through manual selection; a scene can also be entered automatically by recognizing whether a scene is configured for the current area. Further, if multiple scenes are recognized, an algorithm can learn the user's habits to complete the scene selection and enter the scene automatically.
The smart glasses protected by this invention may be single-function smart glasses with a single function/single scene, or multi-function smart glasses with multiple functions/multiple scenes; multi-function smart glasses combine two or more single functions/single scenes, including both hardware combinations and function combinations.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (9)

1. An augmented reality-based smart glasses system, characterized by comprising a plurality of VR/AR/MR smart glasses access devices, a server, and a plurality of multilayer internet areas based on the server, wherein the VR/AR/MR smart glasses access devices are connected to the server through wireless communication; a plurality of virtual-world slices run on the server; a specific number of slices can be superimposed to form a new slice; and a user selects a slice to project through a VR/AR/MR smart glasses access device; the VR/AR/MR smart glasses access device acquires data from the selected slice to complete the information retrieval and information interaction functions, and uploads data to the slice to complete the information publication and information annotation functions.
2. The smart glasses system of claim 1, wherein the slices are presented as multiple scenes in the application software of the VR/AR/MR smart glasses access device; the multiple scenes are divided into different functional scenes and different theme scenes; the functional scenes comprise a message-leaving scene, an authoring scene, an interaction scene, a biological scene, a shopping scene, a retrieval scene, a push scene, a joint scene, a design scene, an annotation scene, a friend-making scene, a navigation scene, and a live-broadcast scene; the theme scenes comprise scenes corresponding to different characters, games, movies, and animations; and the scenes are visually presented on the VR/AR/MR smart glasses singly or in combination.
3. The smart glasses system of claim 2, wherein the VR/AR/MR smart glasses access device comprises a smart glasses body and an interactive control device; the smart glasses body and the interactive control device establish a data connection; a user switches among the multiple scenes on the smart glasses body through the interactive control device; and the virtual imaging on the smart glasses body includes scene tags carrying ordinal numbers.
4. The smart glasses system of claim 3, wherein the scene tags are provided with subordinate tags, and scene switching and drill-down are performed through the interactive control device.
5. The smart glasses system of claim 3, wherein for each of the multiple scenes the data background of the VR/AR/MR smart glasses stores a usage index, at least one feature standard, and a set of AR parameters, and the scene tags are sorted from high to low by usage index.
6. The smart glasses system of claim 5, wherein the usage rate is calculated as

U(i) = t(i) / Σj t(j)

where t(i) is the usage duration of the i-th scene and the denominator Σj t(j) is the total duration for which the user has used the smart glasses.
7. An augmented reality-based smart glasses system control method, controlling the smart glasses system of claim 5, comprising the steps of:
S1, establishing a user database to store the user's feature data;
S2, updating the user's feature data periodically, and sorting scenes from high to low according to the feature data.
8. The control method of claim 7, wherein the processing of the feature data comprises the steps of:
S3, classifying the feature data of all users in the background;
S4, establishing user portraits from the classified user feature data, where each class of user portrait corresponds to a certain interval of user feature data and to a certain scene ordering;
S5, classifying a new user into a user portrait according to the user's operations, and executing the scene ordering corresponding to that portrait.
9. An augmented reality-based smart glasses system scene application method, controlling the design scene application of claim 2, comprising the steps of:
S6, switching to or selecting the augmented reality interface of the design scene through the interface on the smart glasses;
S7, virtually modeling design elements on a base frame through a port; when modeling is complete, saving and uploading the structural data of the design elements and their position data relative to the base frame to the cloud, and delivering the design element data to the smart glasses through the cloud server;
S8, acquiring the actual image in the field of view through the sensing module, recognizing the base frame in the actual image through the system, letting the user select design elements through the motion capture device, and completing the visual presentation of the design elements through the imaging device.
CN202111474128.8A 2021-12-02 2021-12-02 Intelligent glasses system based on augmented reality and control method Pending CN114185433A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111474128.8A CN114185433A (en) 2021-12-02 2021-12-02 Intelligent glasses system based on augmented reality and control method


Publications (1)

Publication Number Publication Date
CN114185433A (en) 2022-03-15

Family

ID=80603401

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111474128.8A Pending CN114185433A (en) 2021-12-02 2021-12-02 Intelligent glasses system based on augmented reality and control method

Country Status (1)

Country Link
CN (1) CN114185433A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110199525A * 2017-01-18 2019-09-03 PCMS Holdings, Inc. System and method for selecting scenes using browsing history in an augmented reality interface
CN110507992A * 2019-08-28 2019-11-29 Tencent Technology (Shenzhen) Co., Ltd. Technical support method, apparatus, device, and storage medium in a virtual scene
CN111524240A * 2020-05-11 2020-08-11 Vivo Mobile Communication Co., Ltd. Scene switching method and device, and augmented reality equipment
CN111917768A * 2020-07-30 2020-11-10 Tencent Technology (Shenzhen) Co., Ltd. Virtual scene processing method and device, and computer readable storage medium
CN112346572A * 2020-11-11 2021-02-09 Nanjing Mengyu 3D Technology Co., Ltd. Method, system, and electronic device for realizing virtual-real fusion
CN112569599A * 2020-12-24 2021-03-30 Tencent Technology (Shenzhen) Co., Ltd. Control method and device for virtual objects in a virtual scene, and electronic equipment


Similar Documents

Publication Publication Date Title
CN103460256B (en) In Augmented Reality system, virtual image is anchored to real world surface
CN110235120A (en) System and method for the conversion between media content item
TWI615776B (en) Method and system for creating virtual message onto a moving object and searching the same
US20170034112A1 (en) System relating to 3d, 360 degree or spherical for refering to and/or embedding posts, videos or digital media within other posts, videos, digital data or digital media and posts within anypart of another posts, videos, digital data or digital media
CN107239203A (en) A kind of image management method and device
CN105122790A (en) Operating environment with gestural control and multiple client devices, displays, and users
US20140129370A1 (en) Chroma Key System and Method for Facilitating Social E-Commerce
CN101142595A (en) Album generating apparatus, album generating method and computer readable medium
CN113766296B (en) Live broadcast picture display method and device
TWI617930B (en) Method and system for sorting a search result with space objects, and a computer-readable storage device
JP6517293B2 (en) Location based spatial object remote management method and location based spatial object remote management system
US20140095349A1 (en) System and Method for Facilitating Social E-Commerce
US20200001182A1 (en) Device and method for providing a game based on a lesson path on a knowledge map
US20140309925A1 (en) Visual positioning system
US11526931B2 (en) Systems and methods for digital mirror
TWI642002B (en) Method and system for managing viewability of location-based spatial object
CN112989214A (en) Tourism information display method and related equipment
CN114119171A (en) MR/AR/VR shopping and retrieval scene control method, mobile terminal and readable storage medium
CN116319862A (en) System and method for intelligently matching digital libraries
KR102346137B1 (en) System for providing local cultural resources guidnace service using global positioning system based augmented reality contents
TW201823929A (en) Method and system for remote management of virtual message for a moving object
US20220198771A1 (en) Discovery, Management And Processing Of Virtual Real Estate Content
CN114935972A (en) MR/AR/VR labeling and searching control method, mobile terminal and readable storage medium
CN114185433A (en) Intelligent glasses system based on augmented reality and control method
CN114967908A (en) MR/AR/VR interaction and biological scene control method, mobile terminal and readable storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20220315)