CN116258836B - Method for dynamically generating a meta-universe based on multimodal data


Info

Publication number
CN116258836B
CN116258836B
Authority
CN
China
Prior art keywords
display area
audience
universe
exhibition
universe display
Prior art date
Legal status
Active
Application number
CN202310014994.1A
Other languages
Chinese (zh)
Other versions
CN116258836A (en)
Inventor
陈思琦
Current Assignee
Shikong Shanghai Brand Planning Co ltd
Original Assignee
Shikong Shanghai Brand Planning Co ltd
Priority date
Filing date
Publication date
Application filed by Shikong Shanghai Brand Planning Co ltd filed Critical Shikong Shanghai Brand Planning Co ltd
Priority to CN202310014994.1A
Publication of CN116258836A
Application granted
Publication of CN116258836B
Status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Hardware Design (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses a method for dynamically generating a meta-universe based on multimodal data, comprising the following steps: acquiring an exhibition explanation flow and segmenting it; acquiring the number of audience members currently in the virtual exhibition and, for each audience member, the time of entry and the segment of the explanation flow of the first universe display area of the virtual exhibition to which that entry time belongs; detecting whether an interrupt event occurs and acquiring the corresponding interrupt information; when an interrupt event occurs, judging, in combination with a preset first rule, whether to generate a second universe display area; and, if so, determining the position coordinate point of the second universe display area, the volume at which each audience member in the virtual exhibition hears it, the time at which it opens, and the explanation segment with which it starts. The invention combines multimodal data with deep-learning techniques to generate meta-universe regions effectively, thereby improving user experience.

Description

Method for dynamically generating a meta-universe based on multimodal data
Technical Field
The invention relates to the field of meta-universe generation, in particular to a method for dynamically generating a meta-universe based on multimodal data.
Background
The meta-universe is a persistent, decentralized online three-dimensional virtual environment. Through devices such as virtual reality glasses, augmented reality glasses, mobile phones, personal computers and game consoles, it allows a user to enter an artificial virtual world and perform various actions there.
Constrained by time and labor costs, most enterprises can participate in only one or two trade seminars or exhibitions each year. A virtual exhibition based on the meta-universe has no such limitation: tens to thousands of virtual exhibitions can be launched at almost any moment, enterprise information can be released to the market immediately, and the timeliness of enterprise or brand publicity is improved.
When participating in a virtual exhibition, audience members often enter at different times or are interested in different content, so they lose attention and interest in the exhibition and eventually leave, which is a considerable loss to the exhibition organizer. A meta-universe-based virtual exhibition is far more flexible: a second virtual exhibition area can even be generated in real time according to the audience's attention, attracting more attention and achieving higher audience retention and better user experience. However, if meta-universe areas are generated continuously merely to retain the audience, user experience will also suffer. How to generate meta-universe areas appropriately has therefore become an important research topic in the meta-universe field.
Referring to related published technical solutions, US20110014985A1 proposes a system and method for transmitting and managing content across multiple virtual spaces, enabling content sharing among several meta-universes; CN114998558A provides a method and device for constructing meta-universe scenes, enabling scene reuse and generation. Neither solution is optimized for the scenario of generating meta-universe exhibitions.
Disclosure of Invention
Aiming at at least one of the above technical problems, the invention provides a method for dynamically generating a meta-universe based on multimodal data.
An embodiment of the invention provides a method for dynamically generating a meta-universe based on multimodal data, comprising the following steps:
acquiring an exhibition explanation flow and segmenting the exhibition explanation flow;
acquiring the number of audience members currently in the virtual exhibition, the time at which each audience member entered, and the segment of the explanation flow of the first universe display area of the virtual exhibition to which that entry time belongs;
detecting whether an interrupt event occurs, and acquiring the interrupt information corresponding to the interrupt event;
when an interrupt event occurs, judging, in combination with a preset first rule, whether to generate a second universe display area; if so, determining the position coordinate point of the second universe display area, the volume at which audience members in the virtual exhibition hear it, the time at which it opens, and the explanation segment with which it starts;
and generating the second universe display area.
Preferably, the virtual exhibition includes a first universe display area;
an audience member entering the virtual exhibition will be assigned to the first universe display area.
Preferably, the interrupt events include: an event in which an audience member sends question information to the host, an event in which more than a preset number of new audience members enter the exhibition, an event in which more than a preset number of audience members leave the exhibition, and an event in which the host changes the explanation topic;
the interrupt events further include: an interrupt event triggered at a preset interval.
Preferably, when the breaking event occurs, determining whether to generate the second universe display area according to a preset first rule includes:
the preset first rule includes: clustering all audiences according to the attribute of all current audiences through a clustering algorithm to obtain n audience clusters, traversing each audience cluster, obtaining the attention condition of each audience in the audience clusters to a virtual exhibition, judging the number of audiences with the attention not in the virtual exhibition in the audience clusters, judging the audience cluster as a negative audience cluster when the number is higher than a third threshold, and judging that a second universe display area needs to be generated;
The preset first rule may further include: and obtaining the attention condition of each audience to the virtual exhibition, judging the number of the audience, the attention of which is not in the virtual exhibition, in the audience, and judging that the second universe display area needs to be generated when the number is higher than a fourth threshold value.
Preferably, a virtual exhibition plan is obtained, according to the virtual exhibition plan, the ordinate of the coordinate position of the audience with the largest ordinate in the virtual exhibition is obtained, n abscissa coordinates of the second universe display area are combined and preset to form a preselected coordinate point set, and a preselected coordinate point with the lowest sum of cosine distances of each audience which is not in the virtual exhibition in the whole field in the preselected coordinate point set is selected to be used as the position coordinate point of the second universe display area;
the determining the size of the sound of the second cosmic display region received by the audience includes: traversing each audience in the whole field, judging the attention condition of the audience to the virtual exhibition, if the attention condition of the audience is already in the first universe display area, adjusting the size of the sound of the second universe display area received by the audience to be 0, and if the attention condition of the audience is not in the first universe display area, adjusting the size of the sound of the second universe display area received by the audience by combining with a preset weight;
The preset weight is used for adjusting the sound of the second universe display area received by the audience, and the method comprises the following steps: according to the cosine distance between the position coordinate point of the second universe display area and the position coordinate point of the audience, combining with a preset weight, so that the larger the cosine distance is, the smaller the sound is;
the adjusting the preset weight to adjust the sound of the second universe display area received by the audience may also include: according to the time length that the attention of the audience is not in the first universe display area, the longer the time length is, the louder the sound is by combining with the preset weight;
determining a section of a second universe display area starting explanation flow according to the breaking event and breaking information corresponding to the breaking event, specifically including:
according to the breaking event, when the breaking event is that the audience sends a question information event to a host, judging the cosine distance between a characteristic point in a cluster where the question audience is located and the cluster center of the negative audience cluster, if the cosine distance is lower than a preset fifth threshold value, judging which section in the exhibition explanation flow the intention of the question audience belongs to according to the breaking information corresponding to the breaking event, and setting the section of the second universe display area opening explanation flow as the section;
According to the breaking event, when the breaking event is the event that the audience leaves the exhibition beyond the preset number, determining which audience cluster the audience leaving the exhibition respectively belongs to, taking the audience cluster with the largest number of audience leaving the exhibition, and setting the section of the second universe display area starting explanation flow as the section of the exhibition explanation flow corresponding to the cluster center attribute feature exhibition purpose of the audience cluster and the exhibition explanation flow with the exhibition focus theme intention information;
according to the breaking event, when the breaking event is the event that more than the preset number of new audiences enter the exhibition, setting the section of the second universe display area starting explanation flow as the first section of the exhibition explanation flow;
according to the breaking event, when the breaking event is the automatic triggering event of the preset interval time, audience members with full-field attention not in a virtual exhibition are obtained, the exhibition purposes and the exhibition attention theme intention information of the audience members are counted, and the exhibition explanation flow segments corresponding to the exhibition purposes and the exhibition attention theme intention information with the highest occurrence number are taken out and used as the segments of the second universe display area opening explanation flow;
the determining the time point when the second universe display region is opened comprises the following steps: and sending prompt information for opening the second universe display area to the host, so that the host decides whether to open the second universe display area, and when the host returns the information for opening the second universe display area, taking the time point as the time point for opening the second universe display area.
Preferably, after the second universe display area is generated, a data set for classifying whether to open the second universe display area is produced according to the audience attracted by it, and a second universe display area opening classification model is trained on that data set;
the feature values of the opening classification data set specifically include: the number of audience members currently in the virtual exhibition; the time at which each audience member entered and the segment of the explanation flow of the first universe display area to which that time belongs; each audience member's gender, age, occupation and attendance purpose; the topic-of-interest intention information; the type of interrupt event that occurred and its interrupt information; the attention state of each audience member; the segment position or content currently being explained in the first universe display area; and the coordinate position of each audience member in the virtual exhibition plan;
the predicted value of the opening classification data set is either: opening the second universe display area is effective, or opening the second universe display area is not effective;
the second universe display area opening classification model is trained on this classification data set using an artificial neural network classification method;
the preset first rule further includes: acquiring, for the current virtual exhibition, the feature values required by the opening classification model, inputting them into the model to predict whether to open the second universe display area, and judging that the second universe display area needs to be generated if the prediction is that opening it would be effective.
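The following is a minimal illustrative sketch (not part of the patent text) of training such an opening classification model with an artificial neural network; the use of scikit-learn's MLPClassifier, the feature dimensionality and the randomly generated placeholder data are assumptions made only for illustration.

    # Sketch: neural-network classifier predicting whether opening the second
    # universe display area would be effective (1) or not effective (0).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Placeholder feature vectors standing in for the encoded feature values listed
    # above (audience count, entry segments, demographics, interrupt event, attention
    # states, current segment, plan coordinates, ...). Real data would come from logs.
    X_train = np.random.rand(200, 32)
    y_train = np.random.randint(0, 2, size=200)   # 1 = opening was effective

    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)

    x_now = np.random.rand(1, 32)                 # encoded state of the current exhibition
    if clf.predict(x_now)[0] == 1:
        print("generate the second universe display area")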
Preferably, after the second universe display area is generated, a second universe display area attention prediction data set is produced according to the audience attracted by the second universe display area, and a second universe display area attention prediction model is trained on that data set;
the predicted value of the attention prediction data is, after the second universe display area is opened, the number of audience members in the negative audience cluster whose attention was not on the virtual exhibition but who were attracted by the second universe display area;
or
the number of all audience members whose attention was not on the virtual exhibition but who were attracted after the second universe display area was opened;
the feature values of the second universe display area attention prediction data set specifically include: the number of audience members currently in the virtual exhibition; the time at which each audience member entered and the segment of the explanation flow of the first universe display area to which that time belongs; each audience member's gender, age, occupation, attendance purpose and topic-of-interest intention information; the type of interrupt event that occurred and its interrupt information; the attention state of each audience member; the segment position or content currently being explained in the first universe display area; the coordinate position of each audience member in the virtual exhibition plan; the determined position coordinate point of the second universe display area; the volume at which each audience member in the virtual exhibition hears the second universe display area; and the time at which the second universe display area opens and the explanation segment with which it starts;
the second universe display area attention prediction model is trained on the attention prediction data set using a regression algorithm;
determining the position coordinate point of the second universe display area, the volume at which audience members in the virtual exhibition hear it, the time at which it opens and the explanation segment with which it starts may further include:
traversing the possible position coordinate points, opening times and explanation segments of the second universe display area; for each candidate, inputting into the attention prediction model feature values such as the number of audience members currently in the virtual exhibition, the time at which each audience member entered and the corresponding segment of the first universe display area's explanation flow, each audience member's gender, age, occupation, attendance purpose and topic-of-interest intention information, the type of interrupt event and its interrupt information, the attention state of each audience member, the segment position or content currently being explained in the first universe display area, and the coordinate position of each audience member in the virtual exhibition plan; and taking the candidate position coordinate point, opening time, explanation segment and per-audience volume for which the predicted number of attracted audience members is highest as the parameters with which the second universe display area is generated.
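As a hedged sketch of the regression-based attention prediction and the traversal of candidate parameters described above (the choice of GradientBoostingRegressor, the feature encoding, the candidate grids and the placeholder training data are all assumptions for illustration only):

    # Sketch: predict the number of attracted audience members for each candidate
    # (position, opening delay, starting segment) and keep the best-scoring candidate.
    from itertools import product
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Placeholder historical data: 12 context features + 4 candidate-parameter features.
    regressor = GradientBoostingRegressor().fit(
        np.random.rand(300, 16),
        np.random.randint(0, 30, size=300),       # attracted-audience counts
    )

    def encode(context, position, open_delay, segment):
        # Placeholder encoding: append the candidate parameters to the context features.
        return np.concatenate([context, [position[0], position[1], open_delay, segment]])

    def best_candidate(context, positions, open_delays, segments):
        candidates = list(product(positions, open_delays, segments))
        scores = [regressor.predict([encode(context, *c)])[0] for c in candidates]
        return candidates[int(np.argmax(scores))]

    print(best_candidate(np.random.rand(12),
                         positions=[(0, 100), (150, 100), (300, 100)],
                         open_delays=[0, 30, 60],
                         segments=[0, 1, 2]))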
The beneficial effects of the invention are as follows:
1. After an interrupt event occurs, the attention of each audience member in each audience cluster is judged with the help of a clustering algorithm. This avoids the computational cost of constantly recomputing every audience member's attention; because attention changes continuously, such computation would otherwise be expensive, so it is triggered only by interrupt events that are representative of changes in attention. Furthermore, clustering the audience makes it easier to characterize the portion of the audience that has lost attention, which greatly helps in choosing the detailed parameters of the subsequently generated second universe display area, such as its explanation content, and improves the user experience of the audience cluster that has lost attention.
2. The second universe display area is generated in an area away from the audience, avoiding confusion and disturbance to existing audience members. At the same time, the sum of cosine distances from each candidate coordinate point to every audience member whose attention is not on the virtual exhibition is computed, and the candidate with the lowest sum is taken as the coordinate point of the second universe display area, so that it is generated as close as possible to the inattentive audience members, attracts their attention more easily, and achieves the effect of retaining the audience.
3. The multimodal data cover one or more of numerical, text, speech, image and other series-type data, making the model predictions more accurate.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following description will briefly explain the drawings that are required to be used in the embodiments of the present application. It is obvious that the drawings described below are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a flow diagram of a method for dynamically generating a meta-universe based on multimodal data in accordance with an embodiment of the present application;
FIG. 2 is an exemplary diagram of a virtual exhibition plan in a method for dynamically generating a meta-universe based on multimodal data in accordance with an embodiment of the present application;
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In the description of the present application, it should be understood that the terms "center," "longitudinal," "transverse," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate an orientation or positional relationship based on that shown in the drawings, merely for convenience of description and to simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be configured and operated in a particular orientation, and thus should not be construed as limiting the present application. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In this application, the term "exemplary" is used to mean "serving as an example, instance, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes have not been shown in detail to avoid obscuring the description of the present application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
It should be noted that, because the method in the embodiments of the present application is executed on electronic devices, the objects processed by each electronic device exist in the form of data or information; for example, a time is essentially time information. It should be understood that in the subsequent embodiments, sizes, numbers, positions and the like all refer to the corresponding data processed by the electronic device, which is not described in detail herein.
S1, acquiring an exhibition explanation flow, and segmenting the exhibition explanation flow, wherein the method specifically comprises the following steps of:
presetting an exhibition explanation flow at a server, wherein the exhibition explanation flow can be set by a host according to the understanding of the current exhibition;
the segmentation of the exhibition explanation flow includes the host setting up the segments according to his or her understanding of the current exhibition, for example: if the exhibition hosted mainly explains a certain razor, the host may segment the explanation flow into an explanation of the razor's functions, an explanation of the razor's production process, an explanation of the razor's sales cooperation channels, and so on;
setting up the segmentation of the exhibition explanation flow includes: segmenting a preset text document of the explanation content, thereby segmenting the exhibition explanation flow; further, the segment currently being explained can be determined from the host's current explanation, for example: obtaining the speech-recognition text of the content the host is currently explaining, and computing the text similarity between that recognition result and each segment of the explanation content document to identify the current segment of the exhibition explanation flow (see the sketch after the alternatives below);
or the preset PPT document of the explanation content is segmented, thereby segmenting the exhibition explanation flow;
or preset time periods of the explanation content are used as segments, thereby segmenting the exhibition explanation flow;
or the intention topic content of the preset explanation content is used: different intention topics are preset for each segment, thereby segmenting the exhibition explanation flow; further, the segment currently being explained can be determined from the host's current explanation, for example: obtaining the intention recognition result of the content the host is currently explaining and matching it against the intention topics preset for each segment to identify the current segment of the exhibition explanation flow.
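The text-similarity matching mentioned in the first alternative above can be sketched as follows (an illustrative, non-authoritative example: the segment texts, the recognized utterance and the use of scikit-learn's TF-IDF vectorizer are assumptions, since the patent does not prescribe a specific similarity measure or library):

    # Sketch: identify the current segment of the exhibition explanation flow by
    # comparing the host's speech-recognition text with each preset segment document.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    segment_docs = [                       # placeholder text documents, one per segment
        "functions of the razor: blades, waterproofing, battery life ...",
        "production process of the razor: materials, assembly, quality control ...",
        "sales cooperation channels of the razor: distributors, online stores ...",
    ]

    def current_segment(asr_text: str) -> int:
        """Return the index of the segment whose document is most similar to the speech."""
        vectorizer = TfidfVectorizer().fit(segment_docs)
        seg_vectors = vectorizer.transform(segment_docs)
        query_vector = vectorizer.transform([asr_text])
        similarities = cosine_similarity(query_vector, seg_vectors)[0]
        return int(similarities.argmax())

    print(current_segment("next let us look at how the razor is assembled and inspected"))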
S2, obtaining the number of audiences entering a virtual exhibition at present and the time point of each audience entering the virtual exhibition, wherein the entering time point belongs to the segmentation position of the explanation flow of the first universe display area of the virtual exhibition, and specifically comprises the following steps:
when an event that a spectator joins a virtual exhibition occurs each time, storing the number of spectators currently participating in the virtual exhibition at a server, and simultaneously acquiring the segmentation positions of the explanation flows of the first universe display areas of the virtual exhibition in the time period according to the method of the step 1 based on the time point that the spectators join the virtual exhibition, and storing the time point that each spectator joins the virtual exhibition and the segmentation positions of the explanation flows of the exhibition corresponding to the time period at the server;
Further, the segment of the explanation flow to which each audience member's entry time belongs, combined with the segment currently being explained in the first universe display area, can be used to determine which segments of the explanation flow each audience member has already watched;
the virtual exhibition is an online virtual exhibition based on the meta-universe or virtual reality technology; a user can log into it through a meta-universe system and enter the exhibition as a virtual avatar; the meta-universe system can be developed and deployed on a meta-universe development framework such as Metaverse landscape or XREngine;
in the description of this embodiment, the virtual exhibition represents a virtual exhibition in which one exhibitor displays all or some of its products; for example, the exhibitor's booth in a real exhibition is 3D-modelled and integrated into a meta-universe development framework to form the virtual exhibition;
the virtual exhibition further includes a first universe display area, which is the area where the host stands while explaining the exhibition content; the host can hold the product being explained and explain it according to the exhibition explanation flow, and audience members can gather in front of the first universe display area to listen to the host's explanation;
an audience member entering the virtual exhibition is placed in front of the first universe display area and listens to the host's explanation;
s3, detecting whether a breaking event occurs or not, and acquiring breaking information corresponding to the breaking event, wherein the method specifically comprises the following steps:
the interrupt event includes: the audience sends a question information event to the host, a new audience entering the exhibition event exceeding a preset number, an audience leaving the exhibition event exceeding a preset number, and the host changes the explanation theme event;
acquisition of the event in which an audience member sends question information to the host includes: detecting that an audience member's terminal device sends question information to the host's terminal device, which triggers the event;
the terminal devices of the audience and the host may be smartphones, VR head-mounted displays, PCs and the like;
the new audience entering the exhibition event exceeding a preset number comprises: according to the method of the step 2, whether a new audience enters a virtual exhibition is detected, the number of new audience entering is recorded in a preset time period, and when the number exceeds a preset first threshold value, the event that the new audience enters the exhibition exceeding the preset number is triggered;
the preset first threshold may be 10;
the event in which more than a preset number of audience members leave the exhibition includes: detecting whether an audience member leaves the virtual exhibition, recording the number of audience members who leave within a preset time period, and triggering the event when that number exceeds a preset second threshold;
the preset second threshold may be 5;
the presenter alters the explanation topic event, including: acquiring and judging the section of the exhibition explanation flow to which the explanation content of the host belongs at present by the method of the step 1, and triggering the host to change the explanation theme event when the explanation content of the host changes the section of the exhibition explanation flow;
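Purely as an illustration of the threshold-triggered enter/leave interrupt events above, the following sketch counts entries and departures within a sliding time window; the window length, event names and data structures are assumptions, and the thresholds reuse the example values 10 and 5 from the text:

    # Sketch: trigger interrupt events when more than a preset number of audience
    # members enter or leave the virtual exhibition within a preset time period.
    import time
    from collections import deque
    from typing import Optional

    WINDOW_SECONDS = 60      # assumed length of the preset time period
    ENTER_THRESHOLD = 10     # preset first threshold
    LEAVE_THRESHOLD = 5      # preset second threshold

    enter_times: deque = deque()
    leave_times: deque = deque()

    def _count_recent(times: deque, now: float) -> int:
        while times and now - times[0] > WINDOW_SECONDS:
            times.popleft()                    # drop events outside the window
        return len(times)

    def on_audience_enter() -> Optional[str]:
        now = time.time()
        enter_times.append(now)
        if _count_recent(enter_times, now) > ENTER_THRESHOLD:
            return "INTERRUPT_NEW_AUDIENCE_ENTERED"
        return None

    def on_audience_leave() -> Optional[str]:
        now = time.time()
        leave_times.append(now)
        if _count_recent(leave_times, now) > LEAVE_THRESHOLD:
            return "INTERRUPT_AUDIENCE_LEFT"
        return None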
the interrupt information corresponding to the interrupt event comprises:
the interrupt information corresponding to the event in which an audience member sends question information to the host includes: applying an intention recognition algorithm to the question information sent from the audience member's terminal device to the host's terminal device, and taking the recognized intention of the question as the corresponding interrupt information;
the interrupt information corresponding to the new audience entering the exhibition event exceeding the preset number comprises the following steps: the number of new audiences entering the virtual exhibition in a preset time period;
the interrupt information corresponding to the events of the audiences leaving the exhibition exceeding the preset number comprises the following steps: the number of spectators leaving the virtual exhibition within a preset period of time;
the interrupt information corresponding to the event in which the host changes the explanation topic includes: the segment of the exhibition explanation flow to which the host's new explanation content belongs;
in another embodiment of the invention, an interrupt event may also be triggered at a preset interval, for example every 1 second;
triggering interrupt events at a preset interval means the subsequent steps are carried out even when no other interrupt event occurs, so that loss of audience attention is detected comprehensively and the second universe display area can be generated in time to retain the audience's attention;
s4, when the breaking event occurs, judging whether a second universe display area is generated or not by combining a preset first rule, and if necessary, entering a step 5, wherein the step comprises the following steps:
the preset first rule includes: clustering all audiences according to the attribute of all current audiences through a clustering algorithm to obtain n audience clusters, traversing each audience cluster, obtaining the attention condition of each audience in the audience clusters to a virtual exhibition, judging the number of audiences with the attention not in the virtual exhibition in the audience clusters, judging the audience cluster as a negative audience cluster when the number is higher than a third threshold, and judging that a second universe display area needs to be generated;
determining the negative audience cluster further includes: when more than one audience cluster has a number of inattentive audience members above the third threshold, taking the cluster with the highest such number as the negative audience cluster and sorting the other clusters in descending order of that number, in preparation for generating a third universe display area, a fourth universe display area, and so on;
in this embodiment, the terms second, third, fourth universe display area and so on are used merely to distinguish multiple generated and opened universe display areas; their generation methods are the same, the terms are used for descriptive purposes only, and they are not to be construed as indicating or implying relative importance or the number of technical features. Even where only the second universe display area is mentioned below, the technical features and effects can be considered the same for the third and fourth universe display areas;
handling the negative audience cluster further includes: traversing it and, using the method of step S2, determining which segments of the explanation flow each audience member in the cluster has already watched, so as to judge whether the content matching that member's attendance purpose and topic-of-interest intention information has already been watched; if so, the member is removed from the negative audience cluster;
the number of clusters n is generally 2 to 5;
the clustering method includes inputting each audience member's attributes into a k-means clustering algorithm to obtain n audience clusters; the input attributes of each audience member are the time at which the member entered the virtual exhibition, the segment of the explanation flow to which that entry time belongs, and the member's gender, age, occupation, attendance purpose and topic-of-interest intention information;
the gender, age, occupation, attendance purpose and topic-of-interest intention information are filled in by the audience member when entering the virtual exhibition and stored in the backend;
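A minimal sketch of the k-means clustering of audience attributes described above; the encoding of categorical attributes with one-hot vectors and the example records are assumptions (the patent names the attributes but not an encoding), and scikit-learn's KMeans exposes the cluster_centers_ attribute referred to later in step S5.4:

    # Sketch: cluster audience members by their attributes with k-means (n = 2..5).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Hypothetical records: numeric part = (entry segment index, age),
    # categorical part = (gender, occupation, attendance purpose / topic of interest).
    numeric = np.array([[0, 34], [1, 29], [0, 45], [2, 52]], dtype=float)
    categorical = [["f", "buyer", "functions"], ["m", "dealer", "channels"],
                   ["f", "engineer", "process"], ["m", "buyer", "functions"]]

    X = np.hstack([
        StandardScaler().fit_transform(numeric),
        OneHotEncoder(sparse_output=False).fit_transform(categorical),   # scikit-learn >= 1.2
    ])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_)            # audience cluster assigned to each member
    print(kmeans.cluster_centers_)   # cluster centers used later for the negative cluster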
obtaining an audience member's attention state with respect to the virtual exhibition includes the following steps: capturing an image of the user's expression through the VR head-mounted display worn by the audience member, judging the expression type with an expression recognition algorithm, and judging that the member's attention is not on the virtual exhibition when the expression is impatient,
or
obtaining the direction the user is facing through the VR head-mounted display worn by the audience member, and judging that the member's attention is not on the virtual exhibition when the angle between the facing direction and the direction towards the host exceeds a preset value; the preset angle may be 90 degrees;
judging an audience member's attention state relies on the VR head-mounted display; audience members who enter the virtual exhibition without wearing one are excluded from the judgment and from the counts;
judging an audience member's attention state may further include: using the method of step S2 to determine which segments of the explanation flow the member has already watched, so as to judge whether the content matching the member's attendance purpose and topic-of-interest intention information has already been watched; if so, the member is not counted among those whose attention is not on the virtual exhibition;
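The facing-direction criterion above can be sketched as follows (assuming 2D plan coordinates and a facing vector reported by the VR head-mounted display; the function and variable names are illustrative, not taken from the patent):

    # Sketch: attention is judged to be off the exhibition when the angle between the
    # viewer's facing direction and the direction towards the host exceeds 90 degrees.
    import math

    ANGLE_THRESHOLD_DEG = 90.0    # preset angle from the text

    def attention_on_exhibition(viewer_pos, facing_dir, host_pos) -> bool:
        to_host = (host_pos[0] - viewer_pos[0], host_pos[1] - viewer_pos[1])
        dot = facing_dir[0] * to_host[0] + facing_dir[1] * to_host[1]
        norm = math.hypot(*facing_dir) * math.hypot(*to_host)
        if norm == 0:
            return True               # degenerate case: treat as attentive
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        return angle <= ANGLE_THRESHOLD_DEG

    # Viewer at (120, 40) facing straight "up" the plan, host at (150, 280):
    print(attention_on_exhibition((120, 40), (0, 1), (150, 280)))   # True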
After an interrupt event occurs, the attention of each audience member in each audience cluster is judged with the help of the clustering algorithm, which avoids the computational cost of constantly recomputing every audience member's attention; because attention changes continuously, such computation would otherwise be expensive, so it is triggered only by interrupt events that are representative of changes in attention. Furthermore, clustering the audience makes it easier to characterize the portion of the audience that has lost attention, which greatly helps in choosing the detailed parameters of the subsequently generated second universe display area, such as its explanation content, and improves the user experience of the audience cluster that has lost attention;
in another embodiment of the invention, the preset first rule may further include: obtaining the attention state of every audience member with respect to the virtual exhibition, judging the number of audience members whose attention is not on the virtual exhibition, and judging that the second universe display area needs to be generated when that number exceeds a fourth threshold;
if the audience is not clustered by a clustering algorithm, the number of audience members whose attention is not on the virtual exhibition is judged directly to decide whether to generate the second universe display area; this reduces the complexity of the step, but the overall characteristics of the inattentive audience cannot be accurately captured;
S5, determining the position coordinate point of the second universe display area when it is generated, the volume at which audience members in the virtual exhibition hear it, the time at which it opens, and the explanation segment with which it starts, specifically including:
obtaining a virtual exhibition plan; according to the plan, obtaining the ordinate of the audience member with the largest ordinate in the virtual exhibition and combining it with n preset abscissas of the second universe display area to form a preselected coordinate point set; selecting the preselected coordinate point with the lowest sum of cosine distances to every audience member in the venue whose attention is not on the virtual exhibition as the position coordinate point of the second universe display area; determining the volume at which each audience member hears the second universe display area in combination with the member's attention state; determining the segment at which the second universe display area starts its explanation flow according to the interrupt event and its interrupt information; and determining the time at which the second universe display area opens according to the host's feedback;
S5.1, acquiring a virtual exhibition plan, which is a plan view of the virtual exhibition seen from above; the coordinate of the first universe display area is taken as the midpoint of the plan's abscissa axis and the top of its ordinate axis, and the coordinate position of each audience member in the virtual exhibition can be read from the plan;
the abscissa axis and the ordinate axis of the virtual exhibition plan are preset according to the circumstances of the virtual exhibition;
S5.2, according to the virtual exhibition plan, obtaining the ordinate Ymax of the audience member with the largest ordinate in the virtual exhibition, and taking Ymax + a as the ordinate of the second universe display area, where a is a preset value, so that the second universe display area is not mixed in with the audience; presetting n abscissas for the second universe display area and combining each with the ordinate Ymax + a to form n preselected coordinate points, which constitute the preselected coordinate point set; computing, for each preselected coordinate point, the cosine distance to the coordinates of every audience member in the venue whose attention is not on the virtual exhibition, and summing them to obtain the sum of cosine distances for that point; and taking the preselected coordinate point with the lowest sum as the position coordinate point of the second universe display area;
for example, after computing the cosine distances between the first preselected coordinate point and the positions of the audience members whose attention is not on the virtual exhibition, the distances might be 5, 10 and 15, giving a sum of cosine distances of 30 for the first preselected coordinate point;
the n abscissas of the second universe display area are preset, for example: taking the full abscissa range from 0 to 300, 6 abscissas are preset: the first is 0, the second 60, the third 120, the fourth 180, the fifth 240 and the sixth 300;
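As an illustrative sketch of S5.2 (the offset value a, the example positions and the use of SciPy's cosine distance are assumptions; the patent names cosine distance but not a library):

    # Sketch: choose the position coordinate point of the second universe display area
    # as the preselected point with the lowest sum of cosine distances to the
    # audience members whose attention is not on the virtual exhibition.
    from scipy.spatial.distance import cosine

    PRESET_XS = [0, 60, 120, 180, 240, 300]   # the six preset abscissas from the example
    A = 20                                    # assumed value of the preset offset "a"

    def pick_position(inattentive_positions, y_max):
        candidates = [(x, y_max + A) for x in PRESET_XS]
        def distance_sum(c):
            return sum(cosine(c, p) for p in inattentive_positions)
        return min(candidates, key=distance_sum)

    # Hypothetical plan coordinates of inattentive audience members:
    print(pick_position([(40, 35), (230, 60), (250, 30)], y_max=80))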
as shown in fig. 2, an example of the virtual exhibition plan, the first universe display area is located at the middle of the abscissa axis and at the top of the ordinate axis; arrows 1 and 2 represent audience members in the virtual exhibition, arrow 1 representing members whose attention is on the virtual exhibition and arrow 2 representing members whose attention is not; the direction of each arrow is the direction the member is facing; the five second universe display areas at the bottom are the preselected coordinate points at which the second universe display area may be placed; the walls of the virtual exhibition surround the area;
Through these technical features, the second universe display area is generated in an area away from the audience, avoiding confusion and disturbance to existing audience members; at the same time, the sum of cosine distances from each candidate point to every audience member in the venue whose attention is not on the virtual exhibition is computed, and the candidate with the lowest sum is taken as the coordinate point of the second universe display area, so that it is generated as close as possible to the inattentive audience members, attracts their attention more easily, and achieves the effect of retaining the audience; moreover, because the second universe display area is generated at Ymax + a, i.e. behind the audience, audience members whose attention is still on the virtual exhibition are unlikely to notice it being generated and are therefore not disturbed;
in another embodiment of the invention, following the method of step S5, the preselected coordinate point with the lowest sum of cosine distances to the audience members of the negative audience cluster whose attention is not on the virtual exhibition is selected as the position coordinate point of the second universe display area, so that it is generated as close as possible to those members; this embodiment, built on step S4, addresses more specifically the inattentive members of the negative audience cluster;
in another embodiment of the invention, after the second universe display area has been generated, a third universe display area, a fourth universe display area and so on may need to be opened; following the method of steps S1 to S5 and the descending-order sorting of the other audience clusters, the next cluster in the order is taken as the negative audience cluster on which the new universe display area depends, with the rest of the step identical to S5.1 to S5.2; the ordinate of the third universe display area may further be determined as follows: denote the ordinate of the second universe display area as Ytwo and take Ytwo - b as the ordinate of the third universe display area, where b is a preset value, so that the third universe display area is not mixed up with the second and the virtual exhibition is not extended indefinitely in the Ymax + a direction;
s5.3, determining the sound size of the second universe display area received by the audience by combining the attention condition of the audience to the virtual exhibition, wherein the method comprises the following steps:
traversing every audience member in the venue and judging, by the method of step S4, the member's attention state with respect to the virtual exhibition; if the member's attention is already on the first universe display area, the volume of the second universe display area heard by that member is set to 0; if the member's attention is not on the first universe display area, the volume of the second universe display area heard by that member is adjusted in combination with a preset weight;
the preset weight may adjust the volume according to the cosine distance between the position coordinate point of the second universe display area and the position coordinate point of the audience member, the larger the distance the lower the volume, for example: if the cosine distance between an audience member's position coordinate point and the position coordinate point of the second universe display area is 10 and the preset weight formula makes the volume the reciprocal of the distance, the volume is 1/10 decibel;
the preset weight may also adjust the volume according to how long the audience member's attention has been away from the first universe display area, the longer the time the louder the volume, for example: if a member's attention has been away from the first universe display area for 10 minutes and the preset weight is 1 decibel per minute of inattention, the volume is 10 decibels;
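A minimal sketch of the two volume-weighting options above (reciprocal of cosine distance, or proportional to the time attention has been away from the first universe display area); the weight constants simply reproduce the examples in the text and are otherwise assumptions:

    # Sketch: volume of the second universe display area heard by one audience member.
    from scipy.spatial.distance import cosine

    def volume_by_distance(viewer_pos, area_pos, weight=1.0):
        """Larger cosine distance -> lower volume (reciprocal rule from the text)."""
        d = cosine(viewer_pos, area_pos)
        return weight / d if d > 0 else weight

    def volume_by_inattention(minutes_inattentive, db_per_minute=1.0):
        """Longer inattention -> louder volume (1 decibel per minute in the text's example)."""
        return db_per_minute * minutes_inattentive

    def volume_for_viewer(attending_first_area, viewer_pos, area_pos):
        if attending_first_area:
            return 0.0            # attention already on the first universe display area
        return volume_by_distance(viewer_pos, area_pos)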
s5.4, determining a section of a second universe display area opening explanation flow according to the interrupt event and interrupt information corresponding to the interrupt event, wherein the section specifically comprises the following steps:
Acquiring the cluster center of the negative audience cluster and its attribute features, including the attendance purpose and topic-of-interest intention information, and judging from them which segment of the exhibition explanation flow the negative audience cluster cares about most;
the cluster center of the negative audience cluster may be obtained from the cluster_centers_ attribute of the k-means implementation in the sklearn toolkit;
the segment of the exhibition explanation flow that the negative audience cluster cares about most is determined by matching the cluster center's attendance purpose and topic-of-interest intention information, using the method of step S1, against the segments of the preset text document of the explanation content or against the preset intention topics of each segment of the exhibition explanation flow;
according to the interrupt event: when the event is an audience member sending question information to the host, the cosine distance between the questioner's feature point in its cluster and the cluster center of the negative audience cluster is judged; if it is below a preset fifth threshold, the segment of the exhibition explanation flow to which the questioner's intention, i.e. the interrupt information corresponding to the interrupt event, belongs is determined, and the second universe display area is set to start its explanation flow at that segment; further, if that segment is the one currently being explained in the first universe display area or another universe display area, the opening of the second universe display area is cancelled;
an audience member who sends question information to the host can be taken to have doubts about the exhibition and to have failed to understand part of it; because the clustering algorithm assigns each member's feature point a position according to the member's attributes, when the cosine distance between the questioner's feature point and the cluster center of the negative audience cluster is below the preset fifth threshold, the questioner can be considered representative of the negative audience cluster, and the second universe display area is therefore set to start its explanation flow directly at the segment to which the questioner's intention belongs;
according to the interrupt event: when the event is more than a preset number of audience members leaving the exhibition, determining which of the audience clusters obtained in step S4 each departing member belongs to, taking the cluster with the most departing members, and setting the second universe display area to start its explanation flow at the segment of the exhibition explanation flow corresponding to that cluster center's attendance purpose and topic-of-interest intention information;
According to the breaking event: when the breaking event is the host changing the explanation theme, a request to select the segment from which the second universe display area starts its explanation flow is sent to the host; the segment is set to the one the host selects, and if the host chooses not to open the second universe display area, its opening is cancelled;
According to the breaking event: when the breaking event is an event of more than the preset number of new audience members entering the exhibition, the segment from which the second universe display area starts its explanation flow is set to the first segment of the exhibition explanation flow;
According to the breaking event: when the breaking event is the automatic trigger of the preset interval time, the audience members in the whole exhibition whose attention is not on the virtual exhibition are obtained, their exhibition purposes and exhibition focus theme intention information are counted, and the exhibition explanation flow segment corresponding to the most frequent exhibition purpose and exhibition focus theme intention information is taken as the segment from which the second universe display area starts its explanation flow; further, if that segment is the segment currently being explained in the first universe display area or another universe display area, the opening of the second universe display area is cancelled;
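A hedged sketch of this counting step follows; the record format, the purpose/topic labels and the mapping from a (purpose, topic) pair to a flow segment are all hypothetical stand-ins.

    from collections import Counter

    # (purpose, focus_topic) for each inattentive audience member -- placeholder data
    inattentive_audience = [
        ("procurement", "smart factory"),
        ("research", "digital twin"),
        ("procurement", "smart factory"),
        ("media", "digital twin"),
        ("procurement", "smart factory"),
    ]

    # hypothetical mapping from (purpose, topic) to a segment of the explanation flow
    SEGMENT_OF = {
        ("procurement", "smart factory"): 3,
        ("research", "digital twin"): 5,
        ("media", "digital twin"): 5,
    }

    most_common_pair, _ = Counter(inattentive_audience).most_common(1)[0]
    start_segment = SEGMENT_OF[most_common_pair]
    print("second display area starts at segment", start_segment)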
The audience's exhibition purpose and exhibition focus theme intention information may further include: if no preset exhibition purpose or exhibition focus theme intention information exists, the audience terminal can be prompted in real time to send question information, from which the exhibition purpose and exhibition focus theme intention information are obtained;
The second universe display area opening its explanation flow further includes: using virtual digital human technology and the historical action, voice and expression data of the host while explaining in the first universe display area, a virtual host is generated to carry out the explanation automatically;
Generating a virtual host for automatic explanation through virtual digital human technology, based on the historical action, voice and expression data of the host explaining in the first universe display area, further includes: judging whether historical data exists for the explanation flow started by the second universe display area; if not, the host is transferred to the second universe display area for live explanation and the first universe display area is switched to the virtual host; meanwhile, if the first universe display area is itself giving a first-time explanation for which no historical data exists, the opening of the second universe display area is cancelled;
Virtual digital human technology is well known in the art and is not described in detail herein;
S5.5, determining the time point at which the second universe display area is opened according to feedback from the host, comprising: sending prompt information about opening the second universe display area to the host, so that the host decides whether to open it; when the host returns information to open the second universe display area, that time point is taken as the opening time point of the second universe display area;
S5.6, generating the second universe display area based on the above content;
In some embodiments of the present application, after the second universe display area is opened according to the method of steps S1-S5, a classification data set on whether to open the second universe display area is made according to the audience attracted by the second universe display area, and a second universe display area opening classification model is trained on that data set;
Making the classification data set on whether to open the second universe display area, after the second universe display area is opened according to the method of steps S1-S5 and according to the audience it attracts, includes:
After the second universe display area is opened according to the method of steps S1-S5, the audience members whose attention is not on the virtual exhibition within the negative audience cluster obtained by the method of step S4 are acquired; the number of those members whose attention is attracted after the second universe display area opens is determined, and the proportion of attracted members among the inattentive members of the negative audience cluster is computed; when this proportion is higher than a sixth threshold, the opening of the second universe display area is considered effective and this opening is taken as a positive sample of the whether-to-open data set; otherwise the opening is considered ineffective and is taken as a negative sample;
or
Determining, among the audience members acquired by the method of step S4 whose attention is not on the virtual exhibition in the whole venue, the number whose attention is attracted after the second universe display area opens, and computing the proportion of attracted members among all inattentive members; when the proportion is higher than the sixth threshold the opening is considered effective and is taken as a positive sample of whether to open the second universe display area; otherwise the opening is considered ineffective and is taken as a negative sample;
The judging method here is the attention judging method of step S4, except that the attention target is the second universe display area, and it is not described in detail again;
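For illustration only, a minimal sketch of labelling one opening as a positive or negative sample follows; SIXTH_THRESHOLD and the counts are placeholder values.

    SIXTH_THRESHOLD = 0.4                       # hypothetical preset sixth threshold

    inattentive_before = 25                     # audience not attending to the exhibition
    attracted_after = 12                        # of those, how many the new area attracted

    ratio = attracted_after / inattentive_before
    label = 1 if ratio > SIXTH_THRESHOLD else 0 # 1 = opening was effective (positive sample)
    print("sample label:", label)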
The feature values of the whether-to-open classification data set comprise multiple kinds of multi-modal data, specifically including: the number of audience members currently in the virtual exhibition, the time point at which each member entered the virtual exhibition, the segment of the explanation flow of the first universe display area at that time point, each member's gender, age, occupation, exhibition purpose and exhibition focus theme intention information, the type of the breaking event that occurred and its breaking information, each member's attention state, the segment position or content currently being explained in the first universe display area, and each member's coordinate position in the virtual exhibition plan;
The multi-modal data cover one or more of numerical, text, voice, image and other types of data, making the model's predictions more accurate;
The predicted value (label) of the whether-to-open classification data set includes: opening the second universe display area is effective, or opening the second universe display area is ineffective;
According to the method of steps S1-S5, each time the second universe display area is opened, the audience it attracts and the above feature values are collected to build the whether-to-open classification data set; the data volume of this data set may exceed 1000 samples;
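For illustration only, one record of the data set might look like the sketch below; the field names and encodings are hypothetical, since the method only fixes which kinds of multi-modal information the record carries.

    sample = {
        "num_viewers": 87,                       # audience currently in the exhibition
        "entry_segments": [1, 1, 2, 4],          # segment each viewer entered at (truncated)
        "current_segment": 4,                    # segment the first area is explaining
        "viewer_profiles": [                     # gender / age / occupation / purpose / topic
            ("F", 34, "engineer", "procurement", "smart factory"),
            ("M", 41, "buyer", "procurement", "digital twin"),
        ],
        "break_event": "question_to_host",       # type of the breaking event
        "break_info": "How does the twin sync?", # breaking information (text)
        "attention_flags": [1, 0, 1, 0],         # per-viewer attention on the exhibition
        "viewer_coords": [(3.0, 2.5), (8.1, 0.4)],  # positions in the exhibition plan
        "label": 1,                              # 1 = opening was effective, 0 = not
    }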
Training the second universe display area opening classification model on the whether-to-open classification data set includes:
training the second universe display area opening classification model with the whether-to-open classification data set, based on an artificial neural network classification method;
An artificial neural network (Artificial Neural Network) classification algorithm is a supervised learning algorithm. Common classification algorithms include: logistic regression (Logistic Regression, LR), K-nearest neighbor (K-Nearest Neighbor, KNN), the naive Bayes model (Naive Bayesian Model, NBM), the hidden Markov model (Hidden Markov Model), the support vector machine (Support Vector Machine), decision trees (Decision Tree), neural networks (Neural Network), and ensemble learning (AdaBoost).
An artificial neural network is a mathematical model that simulates how neurons process information. The network consists of multiple layers; neurons within the same layer do not exchange data, while neurons in adjacent layers are interconnected, forming the "neural network". Data propagate forward along the network, and error information propagates backward against it.
Artificial neural networks include several models: BP networks, radial basis function (RBF) networks, Hopfield networks, stochastic neural networks (Boltzmann machines), competitive neural networks (Hamming networks, self-organizing map networks), and the like. Neural networks still have drawbacks such as slow convergence, heavy computation, long training time and poor interpretability.
The second universe display area opening classification model is a classification model: after the feature values are input into the model, a suggestion on whether the second universe display area needs to be opened is obtained, so that the decision is made automatically. This model extends the method of step S4, allowing the second universe display area to be generated more accurately and automatically and improving the user experience;
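As a hedged sketch: the description only states that an artificial-neural-network classifier is trained on the data set; scikit-learn's MLPClassifier is used below as one possible realisation, and X and y are random stand-ins for the encoded multi-modal feature values and the effective/ineffective labels described above.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((1000, 12))                  # >= 1000 encoded multi-modal feature rows
    y = rng.integers(0, 2, 1000)                # 1 = opening effective, 0 = ineffective

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print("validation accuracy:", clf.score(X_test, y_test))

    # At run time the preset first rule feeds the current feature values to the
    # trained model and opens the second display area only if it predicts "effective".
    should_open = bool(clf.predict(X_test[:1])[0])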
In some embodiments of the present application, after the second universe display area is opened according to the method of steps S1-S5, a second universe display area attention prediction data set is made according to the audience whose attention it attracts, and a second universe display area attention prediction model is trained on that data set;
After the second universe display area is opened according to the method of steps S1-S5, the audience members whose attention is not on the virtual exhibition within the negative audience cluster obtained by the method of step S4 are acquired, and the number of those members whose attention is attracted after the opening is used as the predicted value (target) of the second universe display area attention prediction data;
or
Determining, among the audience members acquired by the method of step S4 whose attention is not on the virtual exhibition, the number whose attention is attracted after the second universe display area opens, and using that number as the predicted value (target) of the second universe display area attention prediction data;
The feature values of the second universe display area attention prediction data set comprise multiple kinds of multi-modal data, specifically including: the number of audience members currently in the virtual exhibition, the time point at which each member entered the virtual exhibition, the segment of the explanation flow of the first universe display area at that time point, each member's gender, age, occupation, exhibition purpose and exhibition focus theme intention information, the type of the breaking event that occurred and its breaking information, each member's attention state, the segment position or content currently being explained in the first universe display area, each member's coordinate position in the virtual exhibition plan, the determined position coordinate point of the second universe display area, the determined sound level of the second universe display area received by each audience member in the virtual exhibition, and the determined opening time point and explained segment content of the second universe display area;
The multi-modal data cover one or more of numerical, text, voice, image and other types of data, making the model's predictions more accurate;
According to the method of steps S1-S5, each time the second universe display area is opened, the audience it attracts and the above feature values and target values are collected to build the second universe display area attention prediction data set; the data volume of this data set may exceed 1000 samples;
Training the second universe display area attention prediction model on the second universe display area attention prediction data set includes:
training the second universe display area attention prediction model by combining the second universe display area attention prediction data set based on a regression algorithm of machine learning;
the machine-learning regression algorithm may be linear regression, logistic regression, or the like;
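A minimal training sketch follows; scikit-learn's MLPRegressor is used here as one possible choice (the text also allows linear regression and similar), and X and y are random stand-ins for the encoded features and the attracted-viewer counts.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.random((1000, 16))                   # encoded multi-modal feature rows
    y = rng.integers(0, 40, 1000).astype(float)  # attracted-viewer count per opening

    reg = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=800, random_state=0)
    reg.fit(X, y)

    predicted_attracted = reg.predict(X[:1])[0]
    print("predicted number of attracted viewers:", round(predicted_attracted))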
In some embodiments of the present application, the preset first rule of step S4 includes: acquiring, in the current virtual exhibition, the feature values required by the second universe display area opening classification model, inputting them into the model, and predicting whether to open the second universe display area; if the prediction is that opening is effective, it is judged that the second universe display area needs to be generated;
The feature values required by the second universe display area opening classification model have already been described in step S5 and are not repeated here;
In some embodiments of the present application, determining in step S5 the position coordinate point of the second universe display area, the sound level of the second universe display area received by audience members in the virtual exhibition, and the opening time point and explained segment content of the second universe display area includes:
traversing the possible position coordinate points, opening time points and explained segment contents of the second universe display area, and inputting, together with each candidate, feature values such as the number of audience members currently in the virtual exhibition, the time point at which each member entered the virtual exhibition, the segment of the explanation flow of the first universe display area, each member's gender, age, occupation, exhibition purpose and exhibition focus theme intention information, the type of the breaking event that occurred and its breaking information, each member's attention state, the segment position or content currently being explained in the first universe display area, and each member's coordinate position in the virtual exhibition plan, into the second universe display area attention prediction model; the candidate position coordinate point, opening time point and explained segment content for which the predicted number of attracted audience members is highest are used to generate the second universe display area, together with the corresponding sound level of the second universe display area received by each audience member in the virtual exhibition;
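For illustration only, a self-contained sketch of this traversal follows; the candidate sets are placeholders, and predict_attracted() is a hypothetical stand-in for calling the trained attention prediction model with the encoded exhibition state and one candidate.

    from itertools import product

    candidate_positions = [(2.0, 9.0), (6.0, 9.0), (10.0, 9.0)]   # placeholder coordinates
    candidate_delays = [0, 120, 300]                               # seconds until opening
    candidate_segments = [1, 3, 5]                                 # placeholder segments

    def predict_attracted(position, open_delay, segment):
        # Stand-in for the trained attention prediction model: in the real method
        # this would encode the exhibition state plus the candidate and call predict().
        return -abs(position[0] - 6.0) - open_delay / 300 + segment * 0.1

    best = max(
        product(candidate_positions, candidate_delays, candidate_segments),
        key=lambda c: predict_attracted(*c),
    )
    print("chosen position, opening delay and segment:", best)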
It should be noted that, unless otherwise specified, when a feature is referred to as being "fixed" or "connected" to another feature, it may be directly or indirectly fixed or connected to the other feature. Further, the descriptions of the upper, lower, left, right, etc. used in this disclosure are merely with respect to the mutual positional relationship of the various components of this disclosure in the drawings. As used in this disclosure, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In addition, unless defined otherwise, all technical and scientific terms used in this example have the same meaning as commonly understood by one of ordinary skill in the art. The terminology used in the description of the embodiments is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used in this embodiment includes any combination of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element of the same type from another. For example, a first element could also be termed a second element, and, similarly, a second element could also be termed a first element, without departing from the scope of the present disclosure. The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed.
It should be appreciated that embodiments of the invention may be implemented or realized by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer readable storage medium configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, in accordance with the methods and drawings described in the specific embodiments. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Furthermore, the operations of the processes described in the present embodiments may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes (or variations and/or combinations thereof) described in this embodiment may be performed under control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications), by hardware, or combinations thereof, that collectively execute on one or more processors. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable computing platform, including, but not limited to, a personal computer, mini-computer, mainframe, workstation, network or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and so forth. Aspects of the invention may be implemented in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optical read and/or write storage medium, RAM, ROM, etc., such that it is readable by a programmable computer, which when read by a computer, is operable to configure and operate the computer to perform the processes described herein. Further, the machine readable code, or portions thereof, may be transmitted over a wired or wireless network. When such media includes instructions or programs that, in conjunction with a microprocessor or other data processor, implement the steps described above, the invention described in this embodiment includes these and other different types of non-transitory computer-readable storage media. The invention also includes the computer itself when programmed according to the methods and techniques of the present invention.
The computer program can be applied to the input data to perform the functions described in this embodiment, thereby converting the input data to generate output data that is stored to the non-volatile memory. The output information may also be applied to one or more output devices such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including specific visual depictions of physical and tangible objects produced on a display.
The present invention is not limited to the above embodiments, but can be modified, equivalent, improved, etc. by the same means to achieve the technical effects of the present invention, which are included in the spirit and principle of the present invention. Various modifications and variations are possible in the technical solution and/or in the embodiments within the scope of the invention.

Claims (7)

1. A method for dynamically generating a meta-universe based on multimodal data, comprising:
acquiring an exhibition explanation flow and segmenting the exhibition explanation flow;
acquiring the number of audience members currently entering a virtual exhibition, the time point at which each audience member enters the virtual exhibition, and the segment position of the explanation flow of a first universe display area of the virtual exhibition to which each entry time point belongs;
Detecting whether a breaking event occurs or not, and acquiring breaking information corresponding to the breaking event;
when the breaking event occurs, judging whether to generate a second universe display area in combination with a preset first rule; if so, determining the position coordinate point of the second universe display area when it is generated, determining the sound level of the second universe display area received by audience members in the virtual exhibition, and determining the opening time point of the second universe display area and the explained segment content;
and generating the second universe display region.
2. The method for dynamically generating metauniverse based on multi-modal data according to claim 1, wherein:
the virtual exhibition comprises a first universe showing area;
an audience member entering the virtual exhibition is assigned to the first universe display area.
3. The method for dynamically generating metauniverse based on multi-modal data according to claim 1, wherein:
the breaking event includes: an event of an audience member sending question information to the host, an event of more than a preset number of new audience members entering the exhibition, an event of more than a preset number of audience members leaving the exhibition, and an event of the host changing the explanation theme;
The breaking event further includes: a breaking event triggered automatically at a preset interval time.
4. The method for dynamically generating metauniverse based on multi-modal data according to claim 1, wherein: when the breaking event occurs, judging whether to generate a second universe display area according to a preset first rule, wherein the method comprises the following steps:
the preset first rule includes: clustering all audiences according to the attribute of all current audiences through a clustering algorithm to obtain n audience clusters, traversing each audience cluster, obtaining the attention condition of each audience in the audience clusters to a virtual exhibition, judging the number of audiences with the attention not in the virtual exhibition in the audience clusters, judging the audience cluster as a negative audience cluster when the number is higher than a third threshold, and judging that a second universe display area needs to be generated;
the preset first rule may further include: and obtaining the attention condition of each audience to the virtual exhibition, judging the number of the audience, the attention of which is not in the virtual exhibition, in the audience, and judging that the second universe display area needs to be generated when the number is higher than a fourth threshold value.
5. The method for dynamically generating a meta-universe based on multimodal data as recited in claim 4, wherein: obtaining a virtual exhibition plan, obtaining the ordinate of the coordinate position of the audience with the largest ordinate in the virtual exhibition according to the virtual exhibition plan, combining n abscissa coordinates of the preset second universe display area to form a preselected coordinate point set, and selecting a preselected coordinate point with the lowest sum of cosine distances between the preselected coordinate point set and each audience which is not in the virtual exhibition in the whole field as the position coordinate point of the second universe display area;
The determining the size of the sound of the second cosmic display region received by the audience includes: traversing each audience in the whole field, judging the attention condition of the audience to the virtual exhibition, if the attention condition of the audience is already in the first universe display area, adjusting the size of the sound of the second universe display area received by the audience to be 0, and if the attention condition of the audience is not in the first universe display area, adjusting the size of the sound of the second universe display area received by the audience by combining with a preset weight;
the preset weight is used for adjusting the sound of the second universe display area received by the audience, and the method comprises the following steps: according to the cosine distance between the position coordinate point of the second universe display area and the position coordinate point of the audience, combining with a preset weight, so that the larger the cosine distance is, the smaller the sound is;
the adjusting the preset weight to adjust the sound of the second universe display area received by the audience may also include: according to the time length that the attention of the audience is not in the first universe display area, the longer the time length is, the louder the sound is by combining with the preset weight;
determining a section of a second universe display area starting explanation flow according to the breaking event and breaking information corresponding to the breaking event, specifically including:
According to the breaking event, when the breaking event is that the audience sends a question information event to a host, judging the cosine distance between a characteristic point in a cluster where the question audience is located and the cluster center of the negative audience cluster, if the cosine distance is lower than a preset fifth threshold value, judging which section in the exhibition explanation flow the intention of the question audience belongs to according to the breaking information corresponding to the breaking event, and setting the section of the second universe display area opening explanation flow as the section;
according to the breaking event, when the breaking event is the event of more than the preset number of audience members leaving the exhibition, determining which audience cluster each departing member belongs to, taking the audience cluster with the largest number of departing members, and setting the segment from which the second universe display area starts its explanation flow as the exhibition explanation flow segment corresponding to the exhibition purpose and exhibition focus theme intention information of that cluster's center attribute features;
according to the breaking event, when the breaking event is the event that more than the preset number of new audiences enter the exhibition, setting the section of the second universe display area starting explanation flow as the first section of the exhibition explanation flow;
According to the breaking event, when the breaking event is the automatic triggering event of the preset interval time, audience members with full-field attention not in a virtual exhibition are obtained, the exhibition purposes and the exhibition attention theme intention information of the audience members are counted, and the exhibition explanation flow segments corresponding to the exhibition purposes and the exhibition attention theme intention information with the highest occurrence number are taken out and used as the segments of the second universe display area opening explanation flow;
the determining the time point when the second universe display region is opened comprises the following steps: and sending prompt information for opening the second universe display area to the host, so that the host decides whether to open the second universe display area, and when the host returns the information for opening the second universe display area, taking the time point as the time point for opening the second universe display area.
6. The method for dynamically generating metauniverse based on multi-modal data as recited in claim 5, wherein: after the second universe display area is generated, a classification data set on whether to open the second universe display area is made according to the audience attracted by the second universe display area, and a second universe display area opening classification model is trained on the whether-to-open classification data set;
the feature values of the whether-to-open classification data set specifically include: the number of audience members currently in the virtual exhibition, the time point at which each member entered the virtual exhibition, the segment of the explanation flow of the first universe display area at that time point, each member's gender, age, occupation, exhibition purpose and exhibition focus theme intention information, the type of the breaking event that occurred and its breaking information, each member's attention state, the segment position or content currently being explained in the first universe display area, and each member's coordinate position in the virtual exhibition plan;
the predicted value of the whether-to-open classification data set includes: opening the second universe display area is effective, or opening the second universe display area is ineffective;
training the second universe display area opening classification model with the whether-to-open classification data set, based on an artificial neural network classification method;
the preset first rule further includes: and acquiring a characteristic value required by the second universe display area starting classification model in the current virtual exhibition, inputting the characteristic value into the second universe display area starting classification model, predicting whether the second universe display area is started, and judging that the second universe display area needs to be generated if the prediction result is that the second universe display area is started effectively.
7. The method for dynamically generating metauniverse based on multi-modal data as recited in claim 5, wherein: after the second universe display area is generated, a second universe display area attention prediction data set is manufactured according to the condition of audiences attracted by the second universe display area, and a second universe display area attention prediction model is trained according to the second universe display area attention prediction data set;
after the second universe display area is opened, acquiring the audience members whose attention is not on the virtual exhibition within the negative audience clusters, and using the number of those members whose attention is attracted after the opening as the predicted value of the second universe display area attention prediction data;
or
determining, among the audience members whose attention is not on the virtual exhibition, the number whose attention is attracted after the second universe display area opens, and using that number as the predicted value of the second universe display area attention prediction data;
the feature values of the second universe display area attention prediction data set specifically include: the number of audience members currently in the virtual exhibition, the time point at which each member entered the virtual exhibition, the segment of the explanation flow of the first universe display area at that time point, each member's gender, age, occupation, exhibition purpose and exhibition focus theme intention information, the type of the breaking event that occurred and its breaking information, each member's attention state, the segment position or content currently being explained in the first universe display area, each member's coordinate position in the virtual exhibition plan, the determined position coordinate point of the second universe display area, the determined sound level of the second universe display area received by each audience member in the virtual exhibition, and the determined opening time point and explained segment content of the second universe display area;
Training the second universe display area attention prediction model by combining the second universe display area attention prediction data set based on a regression algorithm;
the determining to generate the position coordinate point of the second universe display area in the second universe display area, determining the size of the sound of the second universe display area received by the audience in the virtual exhibition, determining the starting time point of the second universe display area and the explained segmentation content, and may further include:
traversing the possible position coordinate points, opening time points and explained segment contents of the second universe display area, and inputting, together with each candidate, feature values such as the number of audience members currently in the virtual exhibition, the time point at which each member entered the virtual exhibition, the segment of the explanation flow of the first universe display area, each member's gender, age, occupation, exhibition purpose and exhibition focus theme intention information, the type of the breaking event that occurred and its breaking information, each member's attention state, the segment position or content currently being explained in the first universe display area, and each member's coordinate position in the virtual exhibition plan, into the second universe display area attention prediction model; the candidate position coordinate point, opening time point and explained segment content for which the predicted number of attracted audience members is highest are used to generate the second universe display area, together with the corresponding sound level of the second universe display area received by each audience member in the virtual exhibition.
CN202310014994.1A 2023-01-06 2023-01-06 Method for dynamically generating meta universe based on multi-mode data Active CN116258836B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310014994.1A CN116258836B (en) 2023-01-06 2023-01-06 Method for dynamically generating meta universe based on multi-mode data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310014994.1A CN116258836B (en) 2023-01-06 2023-01-06 Method for dynamically generating meta universe based on multi-mode data

Publications (2)

Publication Number Publication Date
CN116258836A CN116258836A (en) 2023-06-13
CN116258836B true CN116258836B (en) 2024-04-02

Family

ID=86678602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310014994.1A Active CN116258836B (en) 2023-01-06 2023-01-06 Method for dynamically generating meta universe based on multi-mode data

Country Status (1)

Country Link
CN (1) CN116258836B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118278448A (en) * 2024-05-31 2024-07-02 南京维赛客网络科技有限公司 Method, system and storage medium for non-inductive switching between AI reception digital person and true person

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101206742A (en) * 2006-12-18 2008-06-25 张新新 Network virtual exhibition method and system
CN107122824A (en) * 2017-05-04 2017-09-01 中兴耀维科技江苏有限公司 A kind of method that enterprise's exhibition room is explained online
CN112346562A (en) * 2020-10-19 2021-02-09 深圳市太和世纪文化创意有限公司 Immersive three-dimensional virtual interaction method and system and electronic equipment
KR102404585B1 (en) * 2021-10-20 2022-06-02 주식회사 페트라인텔리전스 Metaverse Medical Exhibition Platform
CN114780892A (en) * 2022-03-31 2022-07-22 武汉古宝斋文化艺术品有限公司 Online exhibition and display intelligent interaction management system based on artificial intelligence


Also Published As

Publication number Publication date
CN116258836A (en) 2023-06-13

Similar Documents

Publication Publication Date Title
US11847858B2 (en) Vehicle occupant engagement using three-dimensional eye gaze vectors
US20200142999A1 (en) Classification and moderation of text
US11501161B2 (en) Method to explain factors influencing AI predictions with deep neural networks
CN110263213B (en) Video pushing method, device, computer equipment and storage medium
CN109299384A (en) Scene recommended method, apparatus and system, storage medium
KR102501496B1 (en) Method, system, and computer program for providing multiple models of federated learning using personalization
CN111967971A (en) Bank client data processing method and device
CN116258836B (en) Method for dynamically generating meta universe based on multi-mode data
CN113966247A (en) Predictive data preloading
US11935276B2 (en) System and method for subjective property parameter determination
Sreenivas et al. Group based emotion recognition from video sequence with hybrid optimization based recurrent fuzzy neural network
US20240033644A1 (en) Automatic detection of prohibited gaming content
US11238518B2 (en) Customized predictive financial advisory for a customer
US20240126810A1 (en) Using interpolation to generate a video from static images
US11775813B2 (en) Generating a recommended target audience based on determining a predicted attendance utilizing a machine learning approach
CN114402355A (en) Personalized automatic video cropping
WO2022235599A1 (en) Generation and implementation of dedicated feature-based techniques to optimize inference performance in neural networks
US20230342108A1 (en) Enhanced computing device representation of audio
WO2019186427A1 (en) Method and system for reporter entity delivery
US20230135135A1 (en) Predicting outcomes of interest
Hadas et al. Using unsupervised incremental learning to cope with gradual concept drift
US20240144079A1 (en) Systems and methods for digital image analysis
US20230316470A1 (en) Method for correcting image by device and device therefor
KR102603424B1 (en) Method, Apparatus, and Computer-readable Medium for Determining Image Classification using a Neural Network Model
US20240020907A1 (en) User authentication and automatic capture of training data

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240308

Address after: Room 901, No. 2, Lane 288, Qianfan Road, Xinqiao Town, Songjiang District, Shanghai 201612

Applicant after: Shikong (Shanghai) brand planning Co.,Ltd.

Country or region after: China

Address before: Shop 102, 126 Nanzhou North Road, Haizhu District, Guangzhou, Guangdong 510000

Applicant before: Guangzhou Yujing Technology Co.,Ltd.

Country or region before: China

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant