CN111385337B - Cross-space interaction method, device, equipment, server and system

Info

Publication number
CN111385337B
CN111385337B
Authority
CN
China
Prior art keywords
interactive
interaction
input data
devices
user input
Prior art date
Legal status
Active
Application number
CN201811644516.4A
Other languages
Chinese (zh)
Other versions
CN111385337A (en)
Inventor
孙东方
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201811644516.4A
Publication of CN111385337A
Application granted
Publication of CN111385337B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/14: Session management
    • H04L 67/141: Setup of application sessions

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Embodiments of the present application provide a cross-space interaction method, apparatus, device, server and system. Based on at least two pieces of user input data collected by at least two interactive devices that have established communication connections with it, the server generates interactive messages respectively corresponding to the at least two interactive devices and sends each interactive device its corresponding interactive message, which the device then outputs. The technical solution provided by the embodiments of the present application achieves cross-space multi-user interaction, simplifies the interaction mode and enriches the interaction experience.

Description

Cross-space interaction method, device, equipment, server and system
Technical Field
Embodiments of the present application relate to the field of computer application technology, and in particular to a cross-space interaction method, a cross-space interaction apparatus, an interactive device, a server and a cross-space interaction system.
Background
With the continuous development of society and the steady improvement of public facilities, users have access to more and more offline activity venues, and the demand for communication and interaction among users keeps growing. Most current user interaction schemes rely on a social platform and require the users to first become friends with each other, so the interaction experience is limited to a single mode and the interaction procedure is cumbersome.
Disclosure of Invention
Embodiments of the present application provide a cross-space interaction method, apparatus, device, server and system, so as to achieve cross-space multi-user interaction, meet users' interaction needs, simplify the interaction mode and enrich the interaction experience.
In a first aspect, an embodiment of the present application provides a cross-space interaction method, including:
determining at least two interactive devices establishing communication connection with a server; wherein the at least two interactive devices are arranged at different geographical locations;
acquiring user input data respectively acquired by the at least two interactive devices;
generating interactive messages respectively corresponding to the at least two interactive devices based on at least two user input data corresponding to the at least two interactive devices;
and sending, to each of the at least two interactive devices, its corresponding interactive message.
In a second aspect, an embodiment of the present application provides a cross-space interaction method, including:
the method comprises the steps that first interaction equipment collects first user input data;
sending the first user input data to a server, so that the server generates a first interactive message based on the first user input data and second user input data respectively collected by at least one second interactive device; the first interactive device and the at least one second interactive device are deployed at different geographic locations;
receiving the first interaction message sent by the server;
and outputting the first interactive message.
In a third aspect, an embodiment of the present application provides a cross-space interaction apparatus, including:
the device determining module is used for determining at least two interactive devices which establish communication connection with the server; wherein the at least two interactive devices are arranged at different geographical locations;
the data acquisition module is used for acquiring user input data respectively acquired by the at least two pieces of interactive equipment;
the message generating module is used for generating interactive messages corresponding to the at least two interactive devices respectively based on at least two user input data corresponding to the at least two interactive devices;
and the interaction triggering module is used for sending, to each of the at least two interactive devices, its corresponding interactive message.
In a fourth aspect, an embodiment of the present application provides a cross-space interaction apparatus, including:
the data acquisition module is used for acquiring first user input data;
the data sending module is used for sending the first user input data to a server, so that the server generates a first interactive message based on the first user input data and second user input data respectively collected by at least one second interactive device; the first interactive device and the at least one second interactive device are deployed at different geographic locations;
the message receiving module is used for receiving the first interactive message sent by the server;
and the message output module is used for outputting the first interactive message.
In a fifth aspect, an embodiment of the present application provides a server, including a processing component and a storage component;
the storage component stores one or more computer instructions, which are invoked and executed by the processing component;
the processing component is to:
determining at least two interactive devices which establish communication connection with the server; wherein the at least two interactive devices are arranged at different geographical locations;
acquiring user input data respectively acquired by the at least two interactive devices;
generating interactive messages respectively corresponding to the at least two interactive devices based on at least two user input data corresponding to the at least two interactive devices;
and sending, to each of the at least two interactive devices, its corresponding interactive message.
In a sixth aspect, an embodiment of the present application provides an interactive device, including a processing component, a storage component, and a detection component;
the storage component stores one or more computer instructions, which are invoked and executed by the processing component;
the processing component is to:
collecting first user input data through the detection component;
sending the first user input data to a server, so that the server generates a first interactive message based on the first user input data and second user input data respectively collected by at least one second interactive device; the first interactive device and the at least one second interactive device are deployed at different geographic locations;
receiving the first interactive message sent by the server;
and outputting the first interactive message.
In a seventh aspect, an embodiment of the present application provides a cross-space interaction system, including multiple interaction devices according to the sixth aspect and a server according to the fifth aspect; wherein the plurality of interactive devices are deployed at different geographic locations.
In the embodiments of the present application, the server can generate, based on at least two pieces of user input data respectively collected by at least two interactive devices, interactive messages respectively corresponding to the at least two interactive devices, and send each interactive device its corresponding interactive message. The at least two interactive devices output the interactive messages so that the users can view and understand them. Cross-space multi-user interaction is thereby achieved and users' interaction needs are met without the users having to become friends with each other, which simplifies the interaction mode and enriches the interaction experience.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic structural diagram illustrating an embodiment of a cross-space interaction system provided by the present application;
FIG. 2 is a flow chart illustrating one embodiment of a cross-space interaction method provided by the present application;
FIGS. 3a to 3c are schematic diagrams illustrating gestures performed by users in a practical application according to an embodiment of the present application;
FIG. 4 is a flow chart illustrating a further embodiment of a cross-space interaction method provided by the present application;
FIG. 5 is a schematic structural diagram illustrating an embodiment of a cross-space interaction device provided in the present application;
fig. 6 shows a schematic structural diagram of an embodiment of a server provided in the present application;
FIG. 7 is a schematic structural diagram illustrating a cross-space interaction device according to yet another embodiment of the present disclosure;
fig. 8 shows a schematic structural diagram of an embodiment of an interactive device provided in the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Some of the flows described in the specification, the claims and the above figures contain operations that appear in a particular order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein or in parallel. The operation numbers, such as 101 and 102, are merely used to distinguish different operations and do not by themselves represent any order of execution. In addition, the flows may include more or fewer operations, and these operations may be performed sequentially or in parallel. It should also be noted that the terms "first", "second" and the like herein are used to distinguish different messages, devices, modules and the like; they neither represent a sequential order nor require that the "first" and "second" items be of different types.
According to the technical scheme, cross-space interaction of a plurality of users in a plurality of off-line geographic positions is achieved, a new interaction mode is provided, interaction and communication among strangers can be achieved, and user interaction experience is improved.
Most existing multi-user cross-space interaction schemes rely on a social platform and require the users to become friends with each other: a communication account must be registered, and the interaction is initiated based on both parties' communication accounts, so the interaction experience is limited to a single mode and the interaction procedure is cumbersome. In view of this, the inventor arrived at the technical solution of the present application through a series of studies.
In the embodiments of the present application, the cross-space interaction system consists of a plurality of interactive devices and a server, where the interactive devices are arranged at different geographic locations and can collect user input data. Based on at least two pieces of user input data respectively collected by at least two interactive devices, the server can generate interactive messages respectively corresponding to the at least two interactive devices and send each interactive device its corresponding interactive message. The at least two interactive devices output the interactive messages so that the users can view and understand them, thereby achieving cross-space multi-user interaction.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic structural diagram of an embodiment of a cross-space interaction system provided in the embodiment of the present application, where the cross-space interaction system may include a server 101 and a plurality of interaction devices 102 (two interaction devices are shown in fig. 1) respectively disposed at different geographic locations;
the plurality of interactive devices 102 establish network connections with the server 101, respectively.
The server 101 may determine at least two interactive devices 102 from the plurality of interactive devices 102 with which communication connections are established; acquire user input data respectively collected by the at least two interactive devices 102; generate, based on the at least two pieces of user input data corresponding to the at least two interactive devices 102, interactive messages respectively corresponding to the at least two interactive devices; and send each of the at least two interactive devices 102 its corresponding interactive message;
each interactive device 102 is used for collecting user input data and uploading the data to the server 101; and receiving the interactive message sent by the server 101 and outputting the interactive message.
The technical solution of the embodiments of the present application is first described in detail from the perspective of the server.
Fig. 2 is a flowchart of an embodiment of a cross-space interaction method provided in an embodiment of the present application, where the method may include the following steps:
201: and determining at least two interactive devices which establish communication connection with the server.
Wherein the at least two interactive devices are arranged at different geographical locations.
Alternatively, the geographic location may refer to an off-line service location, such as a mall, hotel, supermarket, or the like.
The at least two interactive devices are interactive devices that have an interaction relationship with one another, and the interaction relationship may be determined in any of the following ways.
As an alternative, the at least two interactive devices may be at least two interactive devices that establish a communication connection with the server and are in an awake state.
That is, a plurality of interactive devices in the awake state at the same time can be considered to have an association relationship.
As another alternative, the at least two interactive devices may be at least two interactive devices for which the user login is successful.
Thus, optionally, the method may further comprise:
receiving a user login request sent by any interactive device;
and when the user login request passes the verification, determining that the user login corresponding to any one of the interactive devices is successful.
The login request may include a user account and a user password, and the verification pass may mean that the user password is the same as a stored password corresponding to the user account.
In addition, the cross-space interaction system of the embodiment of the application can establish an association relationship with a third-party system, so that the user account in the user login request can be a registered account of the third-party system; the server may specifically invoke a third-party system to verify the user login request, and when the verification passes, it is determined that the user login corresponding to any one of the interactive devices is successful.
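As an illustration of this login path, the following is a minimal sketch in Python; the third-party verification call (third_party.verify_credentials) and the session bookkeeping are hypothetical names used only for the example and are not taken from the patent text.

```python
# Minimal sketch of the login check described above. The verification call on
# the third-party system and the session map are illustrative only.
def handle_login_request(device_id, account, password, third_party, sessions):
    """Mark the user login on `device_id` as successful if the third-party
    system confirms the credentials; otherwise reject the request."""
    if third_party.verify_credentials(account, password):  # hypothetical API
        sessions[device_id] = account  # the device is now bound to this account
        return True
    return False
```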
In addition, the at least two interactive devices may be at least two interactive devices that are successfully logged in by the user and are in an awake state.
That is, a plurality of interactive devices that the user successfully logs in can be considered to have an association relationship.
As another alternative, the server may send an interactive invitation request to each of the plurality of interactive devices in the awake state;
the at least two interactive devices may then be the at least two interactive devices that return interactive response requests.
An interactive device receiving the interactive invitation request may output it to prompt the user to participate in the interaction. The interactive invitation request may include the geographic location of the device, user-related information of the user using the device, and the like. The user-related information may be entered by the user on the screen, obtained by recognizing user image data, or obtained from the user's history, and may include, for example, name, gender, occupation and avatar.
The interactive response request may be generated by the interactive apparatus in response to a user confirmation operation for the interactive offer request.
And if at least two interactive devices send the interactive response requests, subsequent operations can be executed to finish the interaction.
That is, the plurality of interactive devices from which the interactive response requests are received can be considered to have an association relationship.
Optionally, the interactive invitation requests may be sent only to interactive devices that are in the awake state and are not currently interacting with other interactive devices, so as to avoid disturbing users who are already interacting.
In addition, to further improve the accuracy, as yet another alternative, the determining at least two interactive devices that establish a communication connection with the server may include:
generating, for any interactive device that has established a communication connection with the server, interaction selection information according to the user-related information corresponding to that device;
sending the interaction selection information of that interactive device to the other interactive devices, excluding the device itself;
determining, according to the user selection request sent by each interactive device, at least one interactive device hit by that interactive device;
determining at least two interactive devices which hit each other;
The user selection request sent by each interactive device may be generated by that interactive device in response to a user selection operation on a plurality of pieces of interaction selection information. The interaction selection information of an interactive device may include the user-related information corresponding to that device, and may also include information such as the device's geographic location, so as to help the user get to know the other users; in addition, it may include a selection control for the user to select.
Optionally, the user-related information may be user image data collected by any one of the interactive apparatuses.
Each of the plurality of interactive devices may receive the interaction selection information of the other interactive devices (excluding its own). Each interactive device can therefore receive a plurality of pieces of interaction selection information, and its user determines, based on them, the users with whom he or she wants to interact, triggering the user selection request by performing a selection operation. The at least one interactive device hit by each interactive device can then be determined according to the interaction selection information selected by the user.
Thus, the at least two interactive apparatuses may refer to at least two interactive apparatuses that hit each other.
That is, a plurality of interactive apparatuses that hit each other can be considered to have an association relationship.
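Purely as an illustration of the "hit each other" rule (not the patent's literal implementation), the sketch below pairs up devices whose users selected one another; the function name and data layout are assumptions for the example.

```python
# Keep only pairs of interactive devices whose users selected each other.
def mutual_hits(selections):
    """selections maps a device id to the set of device ids its user selected."""
    pairs = set()
    for dev, chosen in selections.items():
        for other in chosen:
            if dev in selections.get(other, set()):
                pairs.add(frozenset((dev, other)))
    return pairs

# Example: A and B chose each other; C chose A, but A did not choose C back.
print(mutual_hits({"A": {"B"}, "B": {"A", "C"}, "C": {"A"}}))
# -> {frozenset({'A', 'B'})}
```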
Optionally, the interaction selection information of any interactive device may be generated specifically according to the user-related information corresponding to any interactive device in the awake state.
In addition, as another alternative, since the plurality of interactive devices in the cross-space interaction system are deployed at different geographic locations and an association relationship between them may be preset, the determining at least two interactive devices that establish a communication connection with the server may include:
at least two interactive devices which are connected with the server in a communication mode and are associated with the geographic position are determined.
The association relationship of the geographic locations may be preset, for example, the interaction device located in a certain mall in china may be associated with the geographic location of the interaction device located on a certain street in the united states, so that at least two interaction devices associated with the geographic location have an association relationship.
In order to ensure that the interaction can be realized, at least two interaction devices which are associated with each other in geographic positions and are in an awakening state can be determined.
In the above description, each interactive device may switch to the wake-up state when detecting that there is a user in its sensing range, or when receiving a user switching request, etc.
An interactive device being in the awake state indicates that there is a user interaction demand at that device. In practical applications, since the interactive devices are deployed at public geographic locations, any user may use them.
Therefore, as an alternative, the method may further include:
and when it is detected that a user is present within the sensing range of any interactive device, switching that interactive device to the awake state.
The user input data collected by the interactive device may specifically be user input data collected within its sensing range.
Whether a user is present within the sensing range of an interactive device can be detected as follows:
recognizing whether a user is present in the image captured by the interactive device within its sensing range.
As another alternative, the method may further include:
and when detecting that any interactive equipment receives a user switching request, switching the any interactive equipment to an awakening state.
The interactive device may provide a switching control, and the user switching request may be generated by a user touching the switching control. The switching control may be a physical control set for the interactive device, or may be a virtual control displayed in the display screen.
In addition, whether any interactive device has received a user switching request can be determined by identifying whether a switching instruction exists in the environmental voice data collected by that interactive device; if a switching instruction exists, the interactive device can be considered to have received a user switching request.
The interactive device can also output switching prompt information, which may include switching instruction keywords, for example "please speak: hey, hello, interact", and which is used to prompt the user to input a switching instruction by voice.
In addition, in order to reduce the processing pressure of the server, any interactive device may be switched to the wake-up state when receiving a state switching request of any interactive device.
The state switching request can be generated when any interactive device detects that a user is present within its sensing range, recognizes that the collected environmental voice data includes a switching instruction, or responds to a user trigger operation on a switching control.
Specifically, any interactive device may be switched from a standby state to an awake state.
As yet another embodiment, the method may further include:
and when it is detected that the user within the sensing range of any interactive device has left, or that the interactive device has received a user cancel request, switching that interactive device from the awake state to the standby state.
In addition, the method may further include:
and when any interactive device is in the standby state, outputting predetermined content on that interactive device. The predetermined content may be content related to the geographic location of the interactive device, or marketing information and the like.
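The wake-up and standby switching described above can be summarised as a small state machine. The sketch below is a simplified illustration: the boolean flags stand in for the sensor, touch and voice inputs mentioned in the text, and the names are assumptions for the example.

```python
# Simplified state machine for one interactive device: it wakes up when a user
# appears in its sensing range or a switching request arrives, and returns to
# standby when the user leaves or a cancel request arrives.
STANDBY, AWAKE = "standby", "awake"

def next_state(current, user_in_range, switch_requested, cancel_requested):
    if current == STANDBY and (user_in_range or switch_requested):
        return AWAKE
    if current == AWAKE and (not user_in_range or cancel_requested):
        return STANDBY
    return current

# In standby the device would output the predetermined content (for example,
# location-related content or marketing information) instead of collecting input.
```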
202: and acquiring user input data respectively acquired by the at least two interactive devices.
The user input data may be obtained by voice capture or image capture or screen detection by the interactive apparatus, and thus may include one or more of user voice data, user image data, and user screen input data.
The user image data may include one or more of user pose information (gestures, line gestures, standing gestures, etc.) and physiological characteristic information (faces or facial expressions, etc.), among others.
The user screen input data may include a corresponding request generated by operating a corresponding screen control or user-provided text data or the like.
Wherein, the user input data can be collected and uploaded by the interactive equipment in real time.
203: and generating interactive messages respectively corresponding to the at least two interactive devices based on the at least two user input data corresponding to the at least two interactive devices.
204: sending, to each of the at least two interactive devices, its corresponding interactive message.
As an optional implementation, for any one of the at least two interactive devices, the user input data collected by the other interactive devices (i.e. the at least two interactive devices excluding that one) may be used as the interactive message of that interactive device.
For example, when the at least two interactive devices include a first interactive device and a second interactive device, the user input data acquired by the first interactive device may be used as the interactive message corresponding to the second interactive device, and the user input data acquired by the second interactive device may be used as the interactive message corresponding to the first interactive device.
The user input data may comprise voice data, image data, text data and the like, so that real-time communication between the user at the first interactive device and the user at the second interactive device can be achieved. Of course, the interactive message of an interactive device may also include the user input data collected by that device itself. Since the interactive message of an interactive device may include a plurality of pieces of user input data, if the user input data includes displayable data such as images or text, the interactive device may divide its display screen into a plurality of display areas to display the displayable content of the different pieces of user input data separately.
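A minimal sketch of this simplest message-generation option follows; the function name and the data layout are illustrative only, not taken from the patent.

```python
# Each device's interactive message is the user input data collected by the
# other participating devices (kept as-is here, without further processing).
def build_messages(inputs):
    """inputs maps a device id to the user input data that device collected."""
    return {
        dev: [data for other, data in inputs.items() if other != dev]
        for dev in inputs
    }

# With two devices, each one simply receives what the other collected.
print(build_messages({"dev_1": "voice clip from mall A", "dev_2": "voice clip from street B"}))
# -> {'dev_1': ['voice clip from street B'], 'dev_2': ['voice clip from mall A']}
```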
In practical applications, the interactive devices may be deployed at any geographic locations. If the at least two interactive devices are deployed in different countries, then in order to enable interaction between users speaking different languages, in some embodiments, when the interactive messages include user voice data, the sending of the corresponding interactive messages to the at least two interactive devices includes:
for any interactive device, identifying a source language type of user voice data collected by any interactive device;
if the user voice data in the interactive message corresponding to that interactive device is in a language different from the source language, translating the user voice data in the interactive message corresponding to that interactive device into the source language;
and sending the interactive message after the translation corresponding to any interactive equipment.
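A hedged sketch of this translation step is given below; detect_language() and translate() are placeholders for whatever speech-recognition and translation services the server actually uses, so they are passed in as parameters rather than presented as concrete APIs.

```python
# Translate the voice items of an outgoing interactive message into the
# recipient device's source language whenever their languages differ.
def localize_voice_items(voice_items, target_language, detect_language, translate):
    localized = []
    for item in voice_items:
        if detect_language(item) == target_language:
            localized.append(item)  # already in the recipient's language
        else:
            localized.append(translate(item, target_language))
    return localized
```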
Of course, there may be other implementations of generating the interactive messages respectively corresponding to the at least two interactive devices based on the at least two pieces of user input data. For example, the interactive message corresponding to any one interactive device may be a processing result obtained by the server performing corresponding processing on the acquired user input data, or a processing result obtained by the server performing corresponding processing on the at least two pieces of user input data together, and so on, which will be described in detail in the following embodiments.
In this embodiment, based on at least two pieces of user input data respectively collected by at least two interactive devices, interactive messages respectively corresponding to the at least two interactive devices can be generated and sent to the respective devices. The at least two interactive devices output the interactive messages so that the users can view and understand them, thereby achieving cross-space multi-user interaction: when a user is at a certain geographic location, he or she can interact with users at other geographic locations through the interactive device at that location, which improves the user interaction experience.
In some embodiments, before the obtaining the user input data respectively collected by the at least two interactive devices, the method may further include:
generating interactive tasks corresponding to the at least two interactive devices respectively;
sending respective interactive tasks to the at least two interactive devices so that the at least two interactive devices can output respective interactive tasks;
the acquiring of the user input data respectively acquired by the at least two interactive devices comprises:
and acquiring user input data which are input by users corresponding to the at least two interactive devices according to respective interactive tasks.
The interactive task may be, for example, a game task. The interactive task is used for indicating user operation. Therefore, after each interactive device outputs the respective interactive task, the user can execute input operation according to the interactive task, and each interactive device can acquire and obtain user input data.
The interaction tasks corresponding to the at least two interaction devices may be the same or different.
For ease of understanding, the interactive task is explained below by taking the case where the at least two interactive devices are two devices, interactive device A and interactive device B, as an example.
The interactive device A is assumed to correspond to an interactive task A and a user A, and the interactive device B corresponds to an interactive task B and a user B; the user a and the user B may be any users.
For example, interactive task A may request that user A perform a corresponding action according to a specified keyword, and interactive task A may include the keyword and the task requirement. Interactive task B may, for example, request that user B guess, from user A's behavior, the specified keyword corresponding to the action, and interactive task B may also include a task requirement.
The user input data of the interactive device a may be user image data including user behavior information, and the user input data of the interactive device B may be user voice data.
For another example, interactive task A and interactive task B may be the same, requiring user A and user B to pose corresponding shapes with gestures or limbs according to a predetermined shape, so that the two users' postures can be spliced together to form the predetermined shape. In this case the interactive task may include an image of the predetermined shape, the task requirement, and the like.
In some embodiments, the generating, based on the at least two user input data corresponding to the at least two interactive devices, an interactive message corresponding to each of the at least two interactive devices may include:
judging whether at least two user input data corresponding to the at least two interactive devices meet an interactive condition;
and if the at least two pieces of user input data meet the interaction condition, generating interaction messages respectively corresponding to the at least two pieces of interaction equipment.
As an alternative, the user input data comprises user image data; the determining whether the at least two pieces of user input data corresponding to the at least two pieces of interactive equipment satisfy the interaction condition may include:
identifying a user gesture in user image data acquired by each interactive device;
and judging whether the gesture shapes obtained by splicing the user gestures respectively corresponding to the at least two interactive devices are consistent with the preset shape.
The interactive tasks corresponding to the at least two interactive devices may each include the preset shape and are used to prompt the users of the at least two interactive devices to perform corresponding operations, so that the spliced user postures form the preset shape.
For example, as shown in the schematic diagrams of FIGS. 3a to 3c, the preset shape may be the one shown in FIG. 3a. If the at least two interactive devices are two devices, the users corresponding to the two devices may use gestures, limbs or the like to pose the shapes shown in FIG. 3b and FIG. 3c respectively, which together form the preset shape shown in FIG. 3a. In practical applications, the interactive task may include an image of the predetermined shape as shown in FIG. 3a.
The interactive messages respectively corresponding to the at least two interactive devices may be, for example, notification messages indicating that the interaction condition has been satisfied.
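As a rough illustration of this interaction condition (a real system would compare recognized gesture shapes in image space; here the shapes are reduced to simple labels, and all names are assumptions for the example):

```python
# The gestures recognized on the participating devices are "spliced" and the
# combined shape is compared with the preset shape from the interactive task.
def interaction_condition_met(preset_shape, recognized_gestures, splice):
    """recognized_gestures holds one recognized gesture per device;
    splice() combines them into a single shape descriptor."""
    return splice(recognized_gestures) == preset_shape

# Toy example: a heart shape split into a left half and a right half.
def splice(parts):
    return "heart" if set(parts) == {"heart-left", "heart-right"} else None

print(interaction_condition_met("heart", ["heart-left", "heart-right"], splice))  # True
```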
In some embodiments, in order to increase the interaction probability and make the interaction more interesting, the generating the interactive messages respectively corresponding to the at least two interactive devices if the at least two pieces of user input data satisfy the interaction condition includes:
if the at least two pieces of user input data satisfy the interaction condition, generating benefit claim prompt information;
the sending of the corresponding interactive messages to the at least two interactive devices includes:
sending the benefit claim prompt information to the at least two interactive devices;
and the method further comprises:
receiving a benefit claim request sent by any interactive device, wherein the benefit claim request includes a user account;
determining a target benefit corresponding to that interactive device;
and assigning the target benefit to the user account.
In practical applications, if the at least two users corresponding to the at least two interactive devices input user input data according to their respective interactive tasks, and the at least two pieces of user input data satisfy the interaction condition, benefit claim prompt information can be generated; that is, a corresponding benefit is issued to the users to encourage them to interact. The benefit may include coupons, discount vouchers, cash vouchers, electronic card packages and the like for offline or online consumption.
The benefit claim prompt information is sent to the at least two interactive devices as the interactive message corresponding to each of them.
Thus, if the server receives a benefit claim request sent by any interactive device, it can determine the benefit corresponding to that interactive device and assign the benefit to the user account.
The benefit claim request can be triggered by the user, and the user account can be provided by the user; the user account may be a login account of a third-party system associated with the cross-space interaction system, and the third-party system may be an online transaction system, an online payment system, a social networking system or the like.
The benefit corresponding to an interactive device may, for example, be randomly allocated from a preset benefit pool.
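A minimal sketch of this benefit-claim handling follows, assuming a preset pool from which a benefit is drawn at random; all names and the data layout are illustrative, not taken from the patent.

```python
import random

# Handle a benefit claim request: draw a benefit at random from the preset
# pool and assign it to the user account carried in the request.
def handle_benefit_claim(request, benefit_pool, account_benefits):
    benefit = random.choice(benefit_pool)  # e.g. coupon, discount or cash voucher
    account_benefits.setdefault(request["account"], []).append(benefit)
    return benefit

# Example usage with a toy pool.
ledger = {}
print(handle_benefit_claim({"device": "dev_1", "account": "user_123"},
                           ["coupon", "discount voucher", "cash voucher"], ledger))
print(ledger)
```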
In addition, in some embodiments, before obtaining the user input data respectively collected by the at least two interactive devices, the method may further include:
sending target content to the at least two interactive devices, and outputting the target content by the at least two interactive devices;
the obtaining of the user input data respectively collected by the at least two pieces of interactive equipment may be:
acquiring the user operation information on the target content collected by each of the at least two interactive devices;
the generating of the interactive messages respectively corresponding to the at least two interactive devices based on the at least two user input data corresponding to the at least two interactive devices may be:
updating the target content based on at least two pieces of user operation information;
and taking the updated target content as the interactive messages respectively corresponding to the at least two interactive devices.
The target content can be sent in an interactive task corresponding to at least two interactive devices.
The target content may be changed based on user operations; for example, the shape, color, size, included objects and the like of the target content may be changed.
the user operation information is determined by identifying and obtaining user behavior information from user image data, or determined according to user screen input data and the like.
For convenience of description, among at least two interactive devices having an interaction relationship, any one of them is referred to as the "first interactive device" and the other interactive devices are referred to as "second interactive devices", so that the at least two interactive devices having an interaction relationship consist of the first interactive device and at least one second interactive device. Fig. 4 is a flowchart illustrating a cross-space interaction method according to another embodiment of the present application, which may include the following steps:
401: first user input data is collected.
402: and sending the first user input data to a server.
The server may generate a first interaction message based on the first user input data and second user input data respectively acquired by at least one second interaction device.
In addition, the server may further generate a second interactive message corresponding to each second interactive device based on the first user input data and the at least one piece of second user input data.
403: and receiving the first interactive message sent by the server.
404: and outputting the first interactive message.
The first interactive device and the at least one second interactive device are respectively deployed at different geographic positions. The at least one second interactive apparatus may refer to an interactive apparatus having an interactive relationship with the first interactive apparatus.
Optionally, if the first interactive device and the at least one second interactive device are all in the awake state, they may be considered to have an interaction relationship;
or, if the first interactive device and the at least one second interactive device have all sent interactive response requests based on the server's interactive invitation request, they may be considered to have an interaction relationship;
or, if the first interactive device and the at least one second interactive device hit each other based on interaction selection, they may be considered to have an interaction relationship;
or, if the geographic location of the first interactive device is associated with that of the at least one second interactive device, they may be considered to have an interaction relationship;
or, if users have successfully logged in on both the first interactive device and the at least one second interactive device, they may be considered to have an interaction relationship.
In some embodiments, prior to the collecting the first user input data, the method further comprises:
when detecting that a user exists in the sensing range of the first interactive equipment or receiving a user switching request, requesting to switch the first interactive equipment to an awakening state;
the collecting first user input data comprises:
and acquiring first user input data when the first interactive equipment is in an awakening state.
In some embodiments, prior to the collecting the first user input data, the method further comprises:
outputting an interactive invitation request sent by a server;
and in response to a user confirmation operation for the interactive invitation request, sending an interactive response request to the server, so that the server determines the first interactive device and at least one second interactive device that have sent interactive response requests.
In some embodiments, prior to the collecting the first user input data, the method further comprises:
respectively outputting interaction selection information aiming at a plurality of interaction devices sent by a server;
and in response to a user selection operation on the plurality of pieces of interaction selection information, sending a user selection request to the server, so that the server determines at least one interactive device hit by the first interactive device and determines, from that at least one interactive device, at least one second interactive device that also hits the first interactive device.
In some embodiments, prior to the collecting the first user input data, the method further comprises:
receiving a first interaction task issued by a server;
collecting the first user input data comprises:
and collecting first user input data input by a user according to the first interaction task.
The server can also generate second interactive tasks respectively corresponding to at least one second interactive device, and respectively issue the corresponding second interactive tasks to the at least one second interactive device, and the second interactive device can collect second user input data input by the user according to the corresponding second interactive tasks.
The first interaction task and the second interaction task of each of the at least one second interaction device may be the same or different.
In some embodiments, the first interactive message comprises benefit claim prompt information;
the method may further comprise:
and in response to a user trigger operation on the benefit claim prompt information, sending a benefit claim request to the server, so that the server determines a target benefit corresponding to the first interactive device and assigns the target benefit to a user account associated with the first interactive device.
The user trigger operation may include an operation of entering a user account, so the user account may be carried in the benefit claim request.
Further, in some embodiments, the first interactive message may include at least one second user input data.
If the second user input data includes displayable content such as user image data, outputting the first interactive message may specifically be:
dividing a display screen into at least one display area;
displaying at least one second user input data in the at least one display area; wherein one display area is used for displaying one second user input data.
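A simple sketch of this output step follows, assuming the display areas are laid out side by side; the layout policy and the function name are assumptions for the example, since the text does not specify how the screen is divided.

```python
# Split the screen into one display area per piece of displayable second user
# input data; each area is returned as an (x, y, width, height) rectangle.
def layout_display_areas(screen_width, screen_height, items):
    if not items:
        return []
    area_width = screen_width // len(items)
    return [(i * area_width, 0, area_width, screen_height) for i in range(len(items))]

# Two remote users' image data on a 1920x1080 screen -> two 960-pixel-wide areas.
print(layout_display_areas(1920, 1080, ["image_from_user_A", "image_from_user_B"]))
# -> [(0, 0, 960, 1080), (960, 0, 960, 1080)]
```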
Fig. 5 is a schematic structural diagram illustrating an embodiment of a cross-space interaction apparatus provided in the present application, where the apparatus may include:
the device determining module 501 is configured to determine at least two interactive devices that establish a communication connection with a server; wherein the at least two interactive devices are arranged at different geographical locations;
a data obtaining module 502, configured to obtain user input data respectively collected by the at least two pieces of interactive equipment;
a message generating module 503, configured to generate, based on at least two user input data corresponding to the at least two interactive devices, interactive messages corresponding to the at least two interactive devices, respectively;
an interaction triggering module 504, configured to send respective corresponding interaction messages to the at least two interaction devices.
As an optional manner, the device determining module is specifically configured to determine at least two interactive devices in an awake state.
As another optional manner, the device determination module is specifically configured to send an interactive invitation request to each of the plurality of interactive devices in the awake state;
at least two interactive devices sending the interactive response requests are determined.
As another optional mode, the device determination module is specifically configured to generate the interaction selection information of any interactive device according to the user-related information corresponding to that device; send the interaction selection information of that interactive device to the other interactive devices, excluding the device itself; determine, according to the user selection request sent by each interactive device, at least one interactive device hit by that interactive device; and determine at least two interactive devices that hit each other.
As a further alternative, the device determination module is specifically configured to determine at least two interactive devices associated with a geographic location.
In some embodiments, the apparatus may further comprise:
the first state switching module is used for switching any interactive device to the awake state when it is detected that a user is present within the sensing range of that interactive device or that the interactive device has received a user switching request.
Optionally, any interactive device may be switched from a standby state to a wake-up state.
Furthermore, the apparatus may further include:
and the second state switching module is used for switching any interactive device from the awake state to the standby state when it is detected that the user within the sensing range of that interactive device has left or that the interactive device has received a user cancel request.
The apparatus may further include:
and the display triggering module is used for outputting the preset content in any interactive equipment when the any interactive equipment is in a standby state.
The predetermined content may be predetermined content related to a geographic location of any interactive device, and may also be some marketing information, etc.
In some embodiments, the apparatus may further comprise:
the task allocation module is used for generating interactive tasks corresponding to the at least two interactive devices respectively; sending respective interaction tasks to the at least two interaction devices;
the data acquisition module is specifically used for acquiring user input data which are acquired by the at least two interactive devices and input according to respective interactive tasks.
In some embodiments, the message generation module is specifically configured to determine whether at least two pieces of user input data corresponding to the at least two pieces of interaction equipment satisfy an interaction condition; and if the at least two pieces of user input data meet the interaction condition, generating interaction messages respectively corresponding to the at least two pieces of interaction equipment.
Optionally, the user input data comprises user image data; the message generation module judges whether at least two user input data corresponding to the at least two interactive devices meet an interaction condition, specifically, a user gesture in user image data acquired by each interactive device is recognized; and judging whether the gesture shapes obtained by splicing the user gestures respectively corresponding to the at least two interactive devices are consistent with the preset shape or not.
In some embodiments, when the at least two pieces of user input data satisfy the interaction condition, the message generation module may generate the interactive messages respectively corresponding to the at least two interactive devices specifically by generating benefit claim prompt information;
the apparatus may further include:
the benefit allocation module is used for receiving a benefit claim request sent by any interactive device, wherein the benefit claim request includes a user account; determining a target benefit corresponding to that interactive device; and assigning the target benefit to the user account.
In some embodiments, the message generation module may be specifically configured to, for any one of the at least two interactive devices, use the user input data of the other interactive devices (excluding that one) as the interactive message of that interactive device.
In some embodiments, the user input data comprises user voice data;
the interaction triggering module may be specifically configured to, for any one interactive device, identify the source language of the user voice data collected by that interactive device;
if the user voice data in the interactive message of that interactive device is in a language different from the source language, translate the user voice data in the interactive message into the source language;
and send the translated interactive message to that interactive device.
In some embodiments, the apparatus may further comprise:
the content sending module is used for sending target content to the at least two interactive devices and outputting the target content by the at least two interactive devices respectively;
the data acquisition module may be specifically configured to acquire the users' operation information on the target content collected by the at least two interactive devices;
the message generation module is specifically configured to update the target content based on at least two pieces of user operation information; and taking the updated target content as the interactive messages respectively corresponding to the at least two interactive devices.
The cross-space interaction device shown in fig. 5 can execute the cross-space interaction method shown in the embodiment shown in fig. 2, and the implementation principle and the technical effect are not repeated. The specific manner in which each module and unit of the cross-space interaction device in the above embodiments perform operations has been described in detail in the embodiments related to the method, and will not be described in detail herein.
In one possible design, the cross-space interaction apparatus of the embodiment shown in fig. 5 may be implemented as a server, as shown in fig. 6, which may include a storage component 601 and a processing component 602;
the storage component 601 stores one or more computer instructions for the processing component 602 to invoke for execution.
The processing component 602 is configured to:
determining at least two interactive devices which establish communication connection with a server; wherein the at least two interactive devices are arranged at different geographical locations;
acquiring user input data respectively acquired by the at least two interactive devices;
generating interactive messages respectively corresponding to the at least two interactive devices based on at least two user input data corresponding to the at least two interactive devices;
and sending the interaction messages corresponding to the interaction devices to the at least two interaction devices.
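The four steps above can be read as one server-side loop. The sketch below only illustrates their ordering; the registry, condition, generator, and transport collaborators, and their method names, are assumptions for the example rather than anything defined by this application.

```python
class CrossSpaceServer:
    """Illustrative ordering of the server-side steps; collaborators are injected."""

    def __init__(self, registry, condition, generator, transport):
        self.registry = registry      # knows which devices are connected and awake
        self.condition = condition    # callable: list of user inputs -> bool
        self.generator = generator    # callable: {device_id: input} -> {device_id: message}
        self.transport = transport    # callable: (device_id, message) -> None

    def run_once(self) -> None:
        # 1. determine at least two connected interactive devices
        devices = self.registry.connected_devices()
        if len(devices) < 2:
            return
        # 2. acquire the user input data each device collected
        inputs = {d: self.registry.collect_input(d) for d in devices}
        # 3. generate per-device interactive messages when the interaction condition holds
        if not self.condition(list(inputs.values())):
            return
        messages = self.generator(inputs)
        # 4. send each device the interactive message corresponding to it
        for device_id, message in messages.items():
            self.transport(device_id, message)
```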
The processing component 602 may include one or more processors for executing computer instructions to perform all or part of the steps of the method described above. Of course, the processing component may also be implemented as one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described method.
The storage component 601 is configured to store various types of data to support operations at the server. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
Of course, the server may also include other components, such as input/output interfaces, communication components, and the like. The input/output interface provides an interface between the processing component and a peripheral interface module, which may be an output device, an input device, or the like. The communication component is configured to facilitate wired or wireless communication between the server and the interactive devices.
The server shown in fig. 6 may be the server 101 in the cross-space interactive system shown in fig. 1.
An embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a computer, the cross-space interaction method in the embodiment shown in fig. 2 can be implemented.
Fig. 7 is a schematic structural diagram of another embodiment of a cross-space interaction apparatus according to an embodiment of the present application, where the apparatus includes:
a data collecting module 701, configured to collect first user input data;
a data sending module 702, configured to send the first user input data to a server, so that the server generates a first interaction message based on the first user input data and second user input data respectively collected by at least one second interactive device; the first interactive device and the at least one second interactive device are deployed at different geographic locations;
a message receiving module 703, configured to receive the first interaction message sent by the server;
a message output module 704, configured to output the first interactive message.
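Read together, modules 701-704 form a collect-send-receive-output pipeline on the device side. The following sketch is illustrative only; the detector, server, and renderer objects and their method names are assumptions, not an API specified by this application.

```python
class InteractiveDeviceClient:
    """Toy device-side pipeline mirroring modules 701-704."""

    def __init__(self, device_id, detector, server, renderer):
        self.device_id = device_id
        self.detector = detector    # e.g. camera/microphone wrapper
        self.server = server        # object exposing send_input() and fetch_message()
        self.renderer = renderer    # display and/or audio output wrapper

    def interact_once(self) -> None:
        first_input = self.detector.collect()                  # data collecting module 701
        self.server.send_input(self.device_id, first_input)    # data sending module 702
        message = self.server.fetch_message(self.device_id)    # message receiving module 703
        self.renderer.output(message)                          # message output module 704
```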
In some embodiments, the apparatus may further comprise:
a state switching request module, configured to request switching the first interactive device to an awake state when a user is detected within the sensing range of the first interactive device or when a user switching request is received;
the data acquisition module may be specifically configured to acquire first user input data when the first interactive device is in an awake state.
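As a rough illustration of this wake-state gating (the state names and methods are assumptions for the example):

```python
class WakeGate:
    """Collects first user input data only while the device is awake."""

    ASLEEP, AWAKE = "asleep", "awake"

    def __init__(self) -> None:
        self.state = self.ASLEEP

    def on_event(self, presence_detected: bool, switch_requested: bool) -> None:
        # either a user in the sensing range or an explicit switch request wakes the device
        if presence_detected or switch_requested:
            self.state = self.AWAKE

    def maybe_collect(self, collect):
        # collect() is only invoked in the awake state; otherwise nothing is collected
        return collect() if self.state == self.AWAKE else None
```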
In some embodiments, the apparatus may further comprise:
a first output module, configured to output an interaction invitation request sent by the server;
and, in response to a user confirmation operation on the interaction invitation request, send an interaction response request to the server, so that the server determines the first interactive device and the at least one second interactive device from among the devices that sent interaction response requests.
In some embodiments, the apparatus may further comprise:
a second output module, configured to output the interaction selection information, sent by the server, for each of a plurality of interactive devices;
and, in response to a user selection operation on one or more pieces of the interaction selection information, send a user selection request to the server, so that the server determines at least one interactive device hit by the first interactive device and, from that at least one interactive device, determines the at least one second interactive device that also hits the first interactive device.
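The mutual-selection step above amounts to keeping only the device pairs whose users selected each other. A minimal sketch, assuming each device reports its user's selections as a set of device identifiers:

```python
from typing import Dict, List, Set, Tuple


def mutual_hits(selections: Dict[str, Set[str]]) -> List[Tuple[str, str]]:
    """selections: device_id -> device_ids selected by that device's user."""
    pairs: List[Tuple[str, str]] = []
    for a, chosen in selections.items():
        for b in chosen:
            # keep the pair once, and only when b also selected a
            if a < b and a in selections.get(b, set()):
                pairs.append((a, b))
    return pairs
```

For example, mutual_hits({"dev-1": {"dev-2"}, "dev-2": {"dev-1"}, "dev-3": {"dev-1"}}) returns [("dev-1", "dev-2")], since only dev-1 and dev-2 hit each other.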
In some embodiments, the apparatus may further comprise:
a task receiving module, configured to receive a first interaction task issued by the server;
the data acquisition module may be specifically configured to collect first user input data entered by the user according to the first interaction task.
In some embodiments, the first interactive message comprises a rights-and-interests acquisition prompt;
the apparatus may further include:
a rights-and-interests acquisition module, configured to, in response to a user trigger operation on the rights-and-interests acquisition prompt, send a rights-and-interests acquisition request to the server, so that the server determines the target rights-and-interests corresponding to the first interactive device and assigns the target rights-and-interests to the user account associated with the first interactive device.
The cross-space interaction apparatus shown in fig. 7 can execute the cross-space interaction method of the embodiment shown in fig. 4; its implementation principle and technical effect are not repeated here. The specific manner in which each module and unit of the cross-space interaction apparatus performs its operations has been described in detail in the method embodiments and will not be described again here.
In one possible design, the cross-space interaction apparatus of the embodiment shown in fig. 7 may be implemented as the interactive device 102 shown in fig. 1. As shown in fig. 8, the interactive device may include a storage component 801, a processing component 802, and a detection component 803;
the storage component 801 stores one or more computer instructions for execution invoked by the processing component 802.
The processing component 802 is configured to: collect first user input data through the detection component 803;
send the first user input data to a server, so that the server generates a first interaction message based on the first user input data and second user input data respectively collected by at least one second interactive device; the first interactive device and the at least one second interactive device are deployed at different geographic locations;
receive the first interaction message sent by the server;
and output the first interactive message.
The processing component 802 may include one or more processors for executing computer instructions to perform all or part of the steps of the method described above. Of course, the processing component may also be implemented as one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described method.
The storage component 801 is configured to store various types of data to support operations at the interactive device. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
The detection component 803 may include, for example, an image capture component and/or an audio capture component. The image capture component may specifically be a camera, and the audio capture component may be a microphone or the like.
In addition, the interactive device can also comprise a display component, an audio output component and the like.
The display component may be an electroluminescent (EL) element, a liquid crystal display or a micro-display of similar construction, or a retina direct-projection or similar laser-scanned display.
The audio output component may be a speaker or the like.
When the first interactive message comprises displayable data, the first interactive message can be displayed through a display component;
when the first interactive message comprises audio data, the first interactive message can be output through the audio output component.
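A small sketch of this output routing, assuming for illustration that the first interactive message is a dictionary with optional text, image, and audio fields (the message format is not fixed by this application):

```python
from typing import Any, Callable, Dict


def output_message(
    message: Dict[str, Any],            # e.g. {"text": ..., "image": ..., "audio": ...}
    show: Callable[[Any], None],        # display component
    play: Callable[[Any], None],        # audio output component (e.g. speaker)
) -> None:
    # displayable data goes to the display component
    displayable = message.get("text") or message.get("image")
    if displayable is not None:
        show(displayable)
    # audio data goes to the audio output component
    if message.get("audio") is not None:
        play(message["audio"])
```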
Naturally, the interactive device may also comprise other components, such as input/output interfaces, communication components, etc. The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc. The communication component is configured to facilitate wired or wireless communication between the interactive device and the server, and the like.
An embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a computer, the cross-space interaction method in the embodiment shown in fig. 4 can be implemented.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (24)

1. A cross-space interaction method is characterized by comprising the following steps:
determining at least two interactive devices which establish communication connection with a server; wherein the at least two interactive devices are arranged at different geographical locations;
acquiring user input data respectively acquired by the at least two interactive devices;
generating interactive messages respectively corresponding to the at least two interactive devices based on at least two user input data corresponding to the at least two interactive devices;
sending respective corresponding interaction messages to the at least two pieces of interaction equipment;
the generating, based on the at least two user input data corresponding to the at least two interactive devices, the interactive messages corresponding to the at least two interactive devices respectively comprises:
judging whether at least two user input data corresponding to the at least two interactive devices meet an interactive condition; and if the at least two pieces of user input data meet the interaction condition, generating interaction messages respectively corresponding to the at least two pieces of interaction equipment.
2. The method of claim 1, wherein the determining at least two interactive devices that establish communication connections with the server comprises:
and determining at least two interactive devices which are in communication connection with the server and are in an awakening state.
3. The method according to claim 1, wherein the determining at least two interactive devices that establish communication connection with the server comprises:
respectively sending interactive invitation requests to a plurality of interactive devices which are in communication connection with the server and are in an awakening state;
and determining at least two interactive devices sending the interactive response requests.
4. The method of claim 1, wherein the determining at least two interactive devices that establish communication connections with the server comprises:
generating interaction selection information of any one interactive device according to user related information corresponding to any one interactive device which establishes communication connection with a server;
sending the interaction selection information of the any one interactive device to the other interactive devices excluding the any one interactive device;
determining at least one interactive device hit by each interactive device according to a user selection request sent by each interactive device;
and determining at least two interactive devices that hit each other.
5. The method according to claim 1, wherein the determining at least two interactive devices that establish communication connection with the server comprises:
determining at least two interactive devices that establish communication connections with the server and whose geographic locations are associated with each other.
6. The method of claim 2 or 3, further comprising:
and when it is detected that a user exists within the sensing range of any interactive device, switching the any interactive device to an awakening state.
7. The method of claim 2 or 3, further comprising:
and when detecting that any interactive equipment receives a user switching request, switching the any interactive equipment to an awakening state.
8. The method of claim 1, wherein before the obtaining the user input data collected by the at least two interactive devices, the method further comprises:
generating interaction tasks corresponding to the at least two pieces of interaction equipment respectively;
sending respective interaction tasks to the at least two interaction devices;
the acquiring of the user input data respectively acquired by the at least two interactive devices comprises:
and acquiring user input data which are acquired by the at least two interactive devices and input according to respective interactive tasks.
9. The method of claim 1, wherein the user input data comprises user image data; the judging whether the at least two user input data corresponding to the at least two interactive devices meet the interaction condition comprises:
identifying a user gesture in user image data acquired by each interactive device;
and judging whether the gesture shape obtained by splicing the user gestures respectively corresponding to the at least two interactive devices is consistent with a preset shape.
10. The method of claim 1, wherein if the at least two pieces of user input data satisfy the interaction condition, generating the interaction messages corresponding to the at least two pieces of interaction equipment respectively comprises:
if the at least two user input data meet the interaction condition, generating rights-and-interests acquisition prompt information;
the sending the interaction messages respectively corresponding to the at least two interactive devices comprises:
sending the rights-and-interests acquisition prompt information to the at least two interactive devices;
the method further comprises the following steps:
receiving a rights-and-interests acquisition request sent by any interactive device; wherein the rights-and-interests acquisition request includes a user account;
determining a target rights-and-interests corresponding to the any one interactive device;
assigning the target rights-and-interests to the user account.
11. The method of claim 1, wherein the generating the interactive messages corresponding to the at least two interactive devices based on the at least two user input data corresponding to the at least two interactive devices comprises:
and for any one interactive device, taking the user input data of the other interactive devices, excluding the any one interactive device, among the at least two interactive devices as the interactive message of the any one interactive device.
12. The method of claim 11, wherein the user input data comprises user voice data;
the sending the interaction messages respectively corresponding to the at least two interactive devices comprises:
for any interactive device, identifying a source language of the user voice data collected by the any interactive device;
if the user voice data in the interactive message of the any interactive device is in a language different from the source language, translating the user voice data in the interactive message of the any interactive device into the source language;
and sending the translated interactive message of the any interactive device to the any interactive device.
13. The method of claim 1, wherein before the obtaining the user input data collected by the at least two interactive devices, the method further comprises:
sending target content to the at least two interactive devices, where the target content is output by each of the at least two interactive devices;
the acquiring of the user input data respectively acquired by the at least two interactive devices comprises:
acquiring user operation information of the target content acquired by the at least two interactive devices respectively;
the generating, based on the at least two user input data corresponding to the at least two interactive devices, the interactive messages corresponding to the at least two interactive devices respectively comprises:
updating the target content based on at least two pieces of user operation information;
and taking the updated target content as the interactive messages respectively corresponding to the at least two interactive devices.
14. A cross-space interaction method is characterized by comprising the following steps:
collecting, by a first interactive device, first user input data;
sending the first user input data to a server, so that the server generates a first interaction message based on the first user input data and second user input data respectively collected by at least one second interactive device; the first interactive device and the at least one second interactive device are deployed at different geographic locations;
receiving the first interaction message sent by the server;
and outputting the first interactive message.
15. The method of claim 14, wherein prior to said collecting first user input data, the method further comprises:
when detecting that a user exists in the sensing range of the first interactive equipment or receiving a user switching request, requesting to switch the first interactive equipment to an awakening state;
the collecting first user input data comprises:
and acquiring first user input data when the first interactive equipment is in an awakening state.
16. The method of claim 14, wherein prior to the collecting first user input data, the method further comprises:
outputting an interactive invitation request sent by a server;
and in response to a user confirmation operation on the interaction invitation request, sending an interaction response request to the server, so that the server determines the first interactive device and the at least one second interactive device that sent interaction response requests.
17. The method of claim 14, wherein prior to the collecting first user input data, the method further comprises:
respectively outputting interaction selection information aiming at a plurality of interaction devices sent by a server;
and in response to a user selection operation on the plurality of pieces of interaction selection information, sending a user selection request to the server, so that the server determines at least one interactive device hit by the first interactive device and determines, from the at least one interactive device, the at least one second interactive device that hits the first interactive device.
18. The method of claim 14, wherein prior to said collecting first user input data, the method further comprises:
receiving a first interaction task issued by a server;
collecting the first user input data comprises:
and collecting first user input data input by a user according to the first interaction task.
19. The method of claim 14, wherein the first interactive message comprises a rights-and-interests acquisition prompt;
the method further comprises the following steps:
and in response to a user trigger operation on the rights-and-interests acquisition prompt, sending a rights-and-interests acquisition request to the server, so that the server determines a target rights-and-interests corresponding to the first interactive device and assigns the target rights-and-interests to a user account associated with the first interactive device.
20. A cross-space interaction device, comprising:
the device determining module is used for determining at least two interactive devices which establish communication connection with the server; wherein the at least two interactive devices are arranged at different geographical locations;
the data acquisition module is used for acquiring user input data respectively acquired by the at least two pieces of interactive equipment;
the message generation module is used for generating interactive messages respectively corresponding to the at least two interactive devices based on at least two pieces of user input data corresponding to the at least two interactive devices;
the interaction triggering module is used for sending, to the at least two interactive devices, the interaction messages respectively corresponding thereto;
the message generation module may be specifically configured to determine whether at least two user input data corresponding to the at least two pieces of interaction equipment satisfy an interaction condition; and if the at least two pieces of user input data meet the interaction condition, generating interaction messages respectively corresponding to the at least two pieces of interaction equipment.
21. A cross-space interaction device, comprising:
the data acquisition module is used for acquiring first user input data;
the data sending module is used for sending the first user input data to a server, so that the server generates a first interaction message based on the first user input data and second user input data respectively collected by at least one second interaction device; the first interactive device and the at least one second interactive device are respectively deployed at different geographic locations;
the message receiving module is used for receiving the first interactive message sent by the server;
and the message output module is used for outputting the first interactive message.
22. A server is characterized by comprising a processing component and a storage component;
the storage component stores one or more computer instructions; the one or more computer instructions to be invoked for execution by the processing component;
the processing component is to:
determining at least two interactive devices which establish communication connection with the server; wherein the at least two interactive devices are arranged at different geographical locations;
acquiring user input data respectively acquired by the at least two interactive devices;
generating interactive messages respectively corresponding to the at least two interactive devices based on at least two user input data corresponding to the at least two interactive devices;
sending the interaction messages corresponding to the interaction devices to the at least two interaction devices;
the generating, based on the at least two user input data corresponding to the at least two interactive devices, the interactive messages corresponding to the at least two interactive devices respectively comprises:
judging whether at least two user input data corresponding to the at least two interactive devices meet an interactive condition; and if the at least two pieces of user input data meet the interaction condition, generating interaction messages respectively corresponding to the at least two pieces of interaction equipment.
23. An interactive device, characterized by comprising a processing component, a storage component, and a detection component;
the storage component stores one or more computer instructions; the one or more computer instructions to be invoked for execution by the processing component;
the processing component is to:
collecting first user input data through the detection component;
sending the first user input data to a server, so that the server generates a first interaction message based on the first user input data and second user input data respectively collected by at least one second interaction device; the first interactive device and the at least one second interactive device are respectively deployed at different geographic locations;
receiving the first interaction message sent by the server;
and outputting the first interactive message.
24. A cross-space interaction system, comprising a plurality of interactive devices according to claim 23 and a server according to claim 22; wherein the plurality of interactive devices are deployed at different geographic locations.
CN201811644516.4A 2018-12-29 2018-12-29 Cross-space interaction method, device, equipment, server and system Active CN111385337B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811644516.4A CN111385337B (en) 2018-12-29 2018-12-29 Cross-space interaction method, device, equipment, server and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811644516.4A CN111385337B (en) 2018-12-29 2018-12-29 Cross-space interaction method, device, equipment, server and system

Publications (2)

Publication Number Publication Date
CN111385337A CN111385337A (en) 2020-07-07
CN111385337B true CN111385337B (en) 2023-04-07

Family

ID=71216785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811644516.4A Active CN111385337B (en) 2018-12-29 2018-12-29 Cross-space interaction method, device, equipment, server and system

Country Status (1)

Country Link
CN (1) CN111385337B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090276355A1 (en) * 2007-09-23 2009-11-05 Foundation For Lives And Minds, Inc. Method and networked system of interactive devices and services offered for use at participating social venues to facilitate mutual discovery, self-selection, and interaction among users

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102202011A (en) * 2011-05-23 2011-09-28 宋健 Method and system for realizing social network based on drive-by random chat
CN104901873A (en) * 2015-06-29 2015-09-09 曾劲柏 Social networking system based on scenes and motions
CA2990716A1 (en) * 2015-06-30 2017-01-05 10353744 Canada Ltd. Method for establishing interaction relationship, and interaction device
CN106326274A (en) * 2015-06-30 2017-01-11 深圳市银信网银科技有限公司 Method and interaction device for establishing interaction relationship
CN106981000A (en) * 2016-10-13 2017-07-25 阿里巴巴集团控股有限公司 Interaction, method of ordering and system under many people's lines based on augmented reality
WO2018095439A1 (en) * 2016-11-25 2018-05-31 腾讯科技(深圳)有限公司 Method, apparatus and storage medium for information interaction
CN107608517A (en) * 2017-09-25 2018-01-19 艾亚(北京)科技有限公司 A kind of scene interaction dating system and method based on geographical position
CN108092880A (en) * 2017-12-13 2018-05-29 安徽跟屁虫科技有限公司 A kind of personalized integrated system based on NIWO social software systems

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ananda Maiti; Andrew D. Maxwell; Alexander A. Kist. Using marker based augmented reality and natural user interface for interactive remote experiments. 2017 4th Experiment@International Conference (exp.at'17). 2017. *
Research on the interaction mechanism and laws of information movement in virtual communities; Guan Jun; China Doctoral Dissertations Full-text Database, Information Science and Technology Section; 20150831; pp. I141-2 *

Also Published As

Publication number Publication date
CN111385337A (en) 2020-07-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant