CN113126875B - Virtual gift interaction method and device, computer equipment and storage medium - Google Patents

Virtual gift interaction method and device, computer equipment and storage medium

Info

Publication number
CN113126875B
CN113126875B (granted publication); application CN202110431257.2A; prior publication CN113126875A
Authority
CN
China
Prior art keywords: gift, object identifier, target, sliding operation, sliding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110431257.2A
Other languages
Chinese (zh)
Other versions
CN113126875A (en)
Inventor
庄宇轩
孙静
Current Assignee
Guangzhou Boguan Information Technology Co Ltd
Original Assignee
Guangzhou Boguan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Boguan Information Technology Co Ltd filed Critical Guangzhou Boguan Information Technology Co Ltd
Priority to CN202110431257.2A
Publication of CN113126875A
Application granted
Publication of CN113126875B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the application disclose a virtual gift interaction method and device, computer equipment, and a storage medium. A gift area containing at least one gift identifier is provided in a graphical user interface; in response to a gift selection operation on the gift area, a target gift identifier is determined from the at least one gift identifier; in response to a first sliding operation continuous with the gift selection operation, an initial object identifier is selected from at least one object identifier; in response to a second sliding operation continuous with the first sliding operation, at least one target object identifier is determined from the at least one object identifier according to the second sliding operation and the initial object identifier; and in response to a gift giving operation continuous with the second sliding operation, the virtual gift corresponding to the target gift identifier is given to the user corresponding to the at least one target object identifier. The operation of giving a virtual gift is thereby simplified, improving both the efficiency of gift giving and the interactive experience.

Description

Virtual gift interaction method and device, computer equipment and storage medium
Technical Field
The application relates to the technical field of live broadcast, in particular to an interaction method and device of virtual gifts, computer equipment and a storage medium.
Background
With advances in network technology and the popularity of smartphones, live webcasting has gradually become a new form of social networking, including live video streaming, live voice streaming, and the like. During a voice live broadcast, one anchor may speak, or the anchor and several users may speak in turn, and viewers watching the broadcast can give a virtual gift on the live platform to an anchor or user they like. To present virtual gifts to one or more users in a live room, a viewer must click a gift-sending button in the live interface to call up a virtual gift display panel, then designate the object to receive the gifts, and finally select the number of gifts before the presentation is carried out. In researching and practicing the prior art, the inventors of the present application found that this existing process requires a series of cumbersome operations from the user, reducing both the interactive experience and the efficiency of gift giving.
Disclosure of Invention
The embodiment of the application provides an interaction method and device for virtual gifts, computer equipment and a storage medium, which simplify the operation of giving the virtual gifts by a user and improve the efficiency and the interactive experience of giving the virtual gifts.
The embodiment of the application provides an interactive method of virtual gifts, which comprises the following steps:
providing a gift area in the graphical user interface, the gift area including at least one gift identification;
determining a target gift identification from the at least one gift identification in response to a gift selection operation for the gift area;
selecting an initial object identifier from the at least one object identifier in response to a first sliding operation that is continuous with the gift selecting operation;
responding to a second sliding operation continuous with the first sliding operation, and determining at least one target object identifier from the at least one object identifier according to the second sliding operation and the initial object identifier;
and in response to the gift giving operation continuous with the second sliding operation, giving the virtual gift corresponding to the target gift identification to the user corresponding to the at least one target object identification.
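Taken together, the five claimed steps describe one uninterrupted gesture. The following Python sketch models that flow as a small state machine; all class, method, and identifier names are hypothetical, since the patent does not prescribe an implementation:

```python
from enum import Enum, auto

class GiftFlowState(Enum):
    IDLE = auto()            # gift area shown, nothing selected yet
    GIFT_SELECTED = auto()   # target gift identifier chosen
    MODE_TRIGGERED = auto()  # first sliding operation picked the initial object
    TARGETS_CHOSEN = auto()  # second sliding operation picked the target objects
    GIVEN = auto()           # gift giving operation completed

class GiftGestureFlow:
    """Tracks one continuous touch gesture through the five claimed steps."""

    def __init__(self):
        self.state = GiftFlowState.IDLE
        self.target_gift = None
        self.initial_object = None
        self.target_objects = []

    def select_gift(self, gift_id):
        assert self.state == GiftFlowState.IDLE
        self.target_gift = gift_id
        self.state = GiftFlowState.GIFT_SELECTED

    def first_slide(self, initial_object_id):
        assert self.state == GiftFlowState.GIFT_SELECTED
        self.initial_object = initial_object_id
        self.state = GiftFlowState.MODE_TRIGGERED

    def second_slide(self, object_ids):
        assert self.state == GiftFlowState.MODE_TRIGGERED
        self.target_objects = list(object_ids)
        self.state = GiftFlowState.TARGETS_CHOSEN

    def give(self):
        assert self.state == GiftFlowState.TARGETS_CHOSEN
        self.state = GiftFlowState.GIVEN
        return self.target_gift, self.target_objects
```

Each transition corresponds to one "continuous" operation in the claims; releasing the touch mid-flow would reset the machine in a real implementation.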
Correspondingly, the embodiment of the present application further provides an interaction device for virtual gifts, including:
a display unit for displaying a gift area in the graphical user interface, the gift area including at least one gift identification;
a first determination unit for determining a target gift identification from the at least one gift identification in response to a gift selection operation for the gift area;
a selection unit configured to select an initial object identifier from the at least one object identifier in response to a first sliding operation that is continuous with the gift selection operation;
a second determining unit, configured to determine, in response to a second sliding operation that is continuous with the first sliding operation, at least one target object identifier from the at least one object identifier according to the second sliding operation and the initial object identifier;
and the presenting unit is used for responding to the gift presenting operation continuous to the second sliding operation and presenting the virtual gift corresponding to the target gift identification to the user corresponding to the at least one target object identification.
Optionally, the selecting unit is further configured to:
triggering a single gift mode in response to the first sliding operation that is continuous with the gift selecting operation;
setting a main broadcasting user identification corresponding to a main broadcasting user to present a target display state based on the single gift mode;
and determining the anchor user identifier presenting the target display state as the initial object identifier.
Optionally, the graphical user interface includes a single-person mode trigger area, and the selecting unit is further configured to:
responding to the first sliding operation continuous with the gift selection operation, and acquiring a final touch point of the first sliding operation;
and if the final touch point of the first sliding operation is located in the single mode trigger area, triggering the single gift mode.
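The trigger-area check above amounts to a hit test on the final touch point. A minimal sketch follows; the rectangular shape is an assumption for illustration, since the patent does not fix the area's geometry:

```python
def in_trigger_area(point, area):
    """Hit-test the final touch point of the first sliding operation
    against a trigger area given as a rectangle (left, top, right, bottom).
    The rectangular shape is an assumed simplification."""
    x, y = point
    left, top, right, bottom = area
    return left <= x <= right and top <= y <= bottom
```

The same check serves both the single-person and the multi-person mode trigger areas, with different rectangles.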
Optionally, the selecting unit is further configured to:
responding to the first sliding operation continuous with the gift selection operation, and acquiring the position of an initial touch point of the first sliding operation and the position of a final touch point of the first sliding operation;
acquiring the moving distance from the position of the initial touch point to the position of the final touch point;
and if the moving distance is greater than the first preset distance and less than the second preset distance, triggering a single gift giving mode.
Optionally, the second determining unit is further configured to:
responding to the second sliding operation continuous with the first sliding operation, and acquiring a touch point of the second sliding operation;
when the extension line from the center point of the gift area through the touch point passes through an object identifier other than the initial object identifier, canceling the target display state of the initial object identifier;
as the touch point of the second sliding operation moves, setting the object identifier passed by the extension line through the moved touch point and the center point to present the target display state, with only one object identifier presenting the target display state on the graphical user interface at a time;
and determining the object identifier last presenting the target display state as the target object identifier.
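The extension-line selection can be approximated by comparing the angle of the ray from the gift area's center point through the touch point against the angular positions of the object identifiers. In the sketch below, the angular tolerance is an illustrative stand-in for hit-testing an identifier's actual on-screen bounds:

```python
import math

def object_on_extension_line(center, touch_point, objects, tolerance_deg=15.0):
    """Return the id of the object identifier that the ray from `center`
    through `touch_point` passes through, or None if the ray misses all.

    `objects` maps object id -> screen position. The angular tolerance is
    an assumed stand-in for the identifier's real bounding region.
    """
    ray = math.atan2(touch_point[1] - center[1], touch_point[0] - center[0])
    best, best_diff = None, tolerance_deg
    for obj_id, pos in objects.items():
        ang = math.atan2(pos[1] - center[1], pos[0] - center[0])
        # wrapped angular difference between identifier direction and ray
        diff = abs(math.degrees(
            math.atan2(math.sin(ang - ray), math.cos(ang - ray))))
        if diff < best_diff:
            best, best_diff = obj_id, diff
    return best  # at most one identifier is highlighted at a time
```

Because the function returns at most one id, the claimed invariant that only one object identifier presents the target display state at a time holds by construction.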
Optionally, the giving unit is further configured to:
acquiring a first sliding track and a first sliding rate of the gift giving operation in response to the gift giving operation continuing from the second sliding operation, and acquiring a length of the first sliding track;
if the first sliding track is along the extension line, the first sliding speed is greater than a first preset speed, and the length of the first sliding track is greater than a first preset length, the virtual gift corresponding to the target gift identification is presented to the user corresponding to the at least one target object identification.
Optionally, the selecting unit is further configured to:
triggering a multi-gift mode in response to the first sliding operation continuing with the gift selecting operation;
setting a main broadcast user identification corresponding to a main broadcast user to present a target display state based on the multi-person gift mode;
and determining the anchor user identifier presenting the target display state as the initial object identifier.
Optionally, the graphical user interface includes a multi-user mode trigger area, and the selection unit is further configured to:
responding to the first sliding operation continuous with the gift selection operation, and acquiring a final touch point of the first sliding operation;
and if the final touch point of the first sliding operation is located in the multi-person mode trigger area, triggering the multi-person gift mode.
Optionally, the selecting unit is further configured to:
responding to the first sliding operation continuous with the gift selection operation, and acquiring the position of an initial touch point of the first sliding operation and the position of a final touch point of the first sliding operation;
acquiring the moving distance from the position of the initial touch point to the position of the final touch point;
and if the moving distance is greater than a second preset distance, triggering the multi-person gift giving mode.
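The two distance thresholds can be sketched as follows. The concrete pixel values are illustrative assumptions: the patent only requires that the single-person gift mode fire when the moving distance lies between a first and a second preset distance, and the multi-person mode fire when it exceeds the second:

```python
import math

def trigger_mode(start, end, first_preset=40.0, second_preset=120.0):
    """Classify the first sliding operation by its moving distance.

    `start`/`end` are the initial and final touch point positions;
    threshold values (in pixels) are illustrative only.
    """
    distance = math.dist(start, end)
    if first_preset < distance < second_preset:
        return "single"   # single-person gift giving mode
    if distance > second_preset:
        return "multi"    # multi-person gift giving mode
    return None           # slide too short: no mode triggered
```

This distance-based variant and the trigger-area variant described earlier are alternative embodiments of the same mode selection.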
Optionally, the graphical user interface includes a permutation zone of the at least one object identifier, and the second determining unit is further configured to:
responding to the second sliding operation continuous with the first sliding operation, and acquiring a touch point of the second sliding operation;
mapping the touch point of the second sliding operation in the arrangement area of the at least one object identifier;
when the touch point of the second sliding operation is coincident with the at least one object identifier, setting the object identifier coincident with the touch point of the second sliding operation to be changed from a first display state to the target display state;
and when the touch point of the second sliding operation stops moving on the graphical user interface, determining at least one object identifier presenting the target display state as the target object identifier.
Optionally, the method further includes:
and when the object identifier presenting the target display state is superposed with the touch point of the second sliding operation, setting the object identifier presenting the target display state to be changed into the first display state.
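The overlap behavior above amounts to toggling an identifier's display state each time the touch point of the second sliding operation coincides with it. A minimal sketch with hypothetical names:

```python
def toggle_on_overlap(selected, object_id):
    """Multi-person gift mode: an identifier the touch point overlaps
    flips state. A first overlap moves it from the first display state to
    the target display state; overlapping it again while highlighted
    reverts it to the first display state."""
    updated = set(selected)
    if object_id in updated:
        updated.discard(object_id)   # target state -> first display state
    else:
        updated.add(object_id)       # first display state -> target state
    return updated
```

When the touch point stops moving, the identifiers still in the set are the target object identifiers.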
Optionally, the giving unit is further configured to:
acquiring a second sliding track and a second sliding rate of the gift giving operation in response to the gift giving operation continuing from the second sliding operation, and acquiring a length of the second sliding track;
if the second sliding track is vertically upward along the graphical user interface, the second sliding speed is greater than a second preset speed, and the length of the second sliding track is greater than a second preset length, the virtual gift corresponding to the target gift identification is presented to the user corresponding to the at least one target object identification.
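The three conditions above (track direction, sliding rate, track length) can be checked together. The thresholds and the vertical tolerance below are illustrative assumptions, as the patent leaves the preset values unspecified:

```python
import math

def validate_giving_swipe(track, duration_s,
                          preset_rate=300.0, preset_length=80.0,
                          vertical_tolerance_deg=20.0):
    """Validate a gift giving swipe: the track must be roughly vertically
    upward, longer than a preset length, and faster than a preset rate.
    Screen coordinates are assumed, so y decreases toward the top."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    length = math.dist((x0, y0), (x1, y1))
    if length <= preset_length:
        return False
    # 90 degrees corresponds to a swipe straight up the screen
    angle = math.degrees(math.atan2(-(y1 - y0), x1 - x0))
    if abs(angle - 90.0) > vertical_tolerance_deg:
        return False
    return length / duration_s > preset_rate
```

The single-person variant described earlier differs only in the direction test, which checks the track against the extension line instead of the vertical.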
Similarly, an embodiment of the present application further provides a computer device, including:
a memory for storing a computer program;
a processor for performing the steps of any one of the interactive methods of the virtual gift.
Furthermore, an embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of any one of the interaction methods of the virtual gift.
The embodiments of the application provide a virtual gift interaction method and device, computer equipment, and a storage medium. A gift area is provided in a graphical user interface; after selecting the target gift identifier to be given in the gift area, the user can directly perform continuous operations without switching interfaces, conveniently and quickly select the target object identifier, and complete the giving of the corresponding virtual gift. Because the entire giving process takes place in the same graphical user interface, the user's operations are simplified, and both the efficiency and the interactive experience of gift giving are improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a system diagram of an interactive method for virtual gifts provided by an embodiment of the present application;
fig. 2 is a schematic flowchart of an interaction method of a virtual gift provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a graphical user interface provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a single-person mode trigger area and a multi-person mode trigger area provided by an embodiment of the present application;
FIG. 5 is a schematic illustration of a ray in a gift area provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a multi-user mode operation area provided by an embodiment of the present application;
fig. 7 is another flow chart of an interaction method of a virtual gift provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of an interactive device for virtual gifts provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are merely some, rather than all, of the embodiments of the application. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the application.
The embodiments of the application provide a virtual gift interaction method and device, computer equipment, and a storage medium. Specifically, the virtual gift interaction method of the embodiments may be executed by computer equipment, which may be a terminal, a server, or the like. The terminal may be a device such as a smartphone, tablet computer, notebook computer, touch screen, game console, personal computer (PC), or personal digital assistant (PDA), and may further run a client, such as a game application client, a browser client carrying a game program, or an instant messaging client. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network services, big data, and artificial intelligence platforms.
For example, when the virtual gift interaction method is operated on a terminal, a terminal device stores a live application program and is used for presenting scenes in a live picture. The terminal device is used for interacting with a user through a graphical user interface, for example, downloading and installing a live application program through the terminal device and running the live application program. The manner in which the terminal device provides the graphical user interface to the user may include a variety of ways, for example, the graphical user interface may be rendered for display on a display screen of the terminal device or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting a graphical user interface including a live screen and a screen presenting a virtual gift, and receiving an operation instruction generated by a user acting on the graphical user interface, and a processor for running the live application, generating a graphical user interface, responding to the operation instruction, and controlling display of the graphical user interface on the touch display screen.
For example, when the virtual gift interaction method runs on a server, it can be implemented as cloud live broadcasting, a live broadcast mode based on cloud computing. In cloud live broadcasting, the body that runs the live application is separated from the body that presents the live picture: storage and execution of the virtual gift interaction method are completed on a cloud live broadcast server, while picture presentation is completed at a cloud live broadcast client. The client is mainly used for receiving and sending live data and presenting the live picture; for example, it may be a user-side display device with data transmission capability, such as a mobile terminal, a television, a computer, a palmtop computer, or a personal digital assistant, while the device that processes the live data is the cloud live broadcast server. During a broadcast, the user operates the cloud live broadcast client to send operation instructions to the cloud live broadcast server; the server runs the live program according to the instructions, encodes and compresses data such as the live picture, and returns the data over the network to the client, which finally decodes the data and outputs the live picture.
Referring to fig. 1, fig. 1 is a system diagram of the virtual gift interaction method according to an embodiment of the present application. The system may include at least one terminal 101 and at least one live broadcast server 102. A terminal 101 held by a user may be connected to the live broadcast server through different networks 103; for example, the network 103 may be a wireless or wired network, and the wireless network may be a wireless local area network (WLAN), a local area network (LAN), a cellular network, or a 2G, 3G, 4G, or 5G network. The terminal 101 is configured to generate a live room creation request or a live room join request in response to the user's touch operation, send the request to the live broadcast server 102 to enter the live room, and then display a graphical user interface in the live application, the graphical user interface including at least one object identifier. A gift area containing at least one gift identifier is then provided in the graphical user interface; in response to a gift selection operation on the gift area, a target gift identifier is determined from the at least one gift identifier; in response to a first sliding operation continuous with the gift selection operation, an initial object identifier is selected from the at least one object identifier; in response to a second sliding operation continuous with the first sliding operation, at least one target object identifier is determined according to the second sliding operation and the initial object identifier; and in response to a gift giving operation continuous with the second sliding operation, the virtual gift corresponding to the target gift identifier is given to the user corresponding to the at least one target object identifier.
The live broadcast server 102 is configured to receive a live broadcast room creation request or a live broadcast room join request sent by a terminal, obtain live broadcast data corresponding to a live broadcast room according to the live broadcast room creation request or the live broadcast room join request, and send the live broadcast data to the terminal 101, so that the terminal 101 serving as a live broadcast audience and the terminal 101 serving as a live broadcast anchor can obtain the live broadcast data and participate in live broadcast together.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
The embodiment will be described from the perspective of an interactive device of a virtual gift, which may be specifically integrated in a terminal device, and the terminal device may include a smart phone, a notebook computer, a tablet computer, a personal computer, and the like.
The method for interacting virtual gifts provided by the embodiment of the present application may be executed by a processor of a terminal device, as shown in fig. 2, a specific flow of the method for interacting virtual gifts mainly includes steps 201 to 205, which are described in detail as follows:
step 201, displaying a gift area in the graphical user interface, wherein the gift area comprises at least one gift identifier.
In the embodiment of the application, when a user starts the live broadcast application on a terminal device and joins any live room provided by the application, a graphical user interface is displayed in the live room. The graphical user interface comprises a public screen area, an arrangement area for object identifiers, and an interaction area. The arrangement area can display the object identifier of at least one user; an object identifier may be the user's avatar in the live room, the user's nickname, or a combination of the two. The public screen area can display messages such as activity notifications in the live room, comments posted by users, and/or broadcasts of gifts given to the anchor. The interaction area includes a comment publishing area, an application control, a virtual gift-giving identifier, a setting control, and the like.
The user can input comment information in the comment publishing area, and after clicking send, the comment is displayed in the public screen area. By touching the application control, the user can view the avatars of audience members whose object identifiers are not displayed in the arrangement area of the voice live room, or apply to have the user's own object identifier displayed there. After the room owner approves the display request submitted by a user, the corresponding object identifier is displayed in the arrangement area, and users whose object identifiers are displayed there can speak in the voice live room. When the user clicks the setting control, functional attributes of the voice live room can be changed.
In the embodiment of the application, the gift area is an area generated on the graphical user interface after the user clicks the virtual gift-giving identifier in the graphical user interface. The gift area may be located at the bottom of the graphical user interface. The virtual gift-giving identifier may be a gift-style icon and/or text conveying the meaning of a gift; as shown in fig. 3, control 304 is the virtual gift-giving identifier. The gift area comprises at least one gift identifier, each representing a virtual gift that a user can give to the anchor or other users; a gift identifier may be a two-dimensional or three-dimensional image of a real gift and/or the gift's name rendered in various text styles.
In some embodiments of the present application, the size, shape, and/or position of the gift identifiers and object identifiers in the graphical user interface are not limited and can be set flexibly according to the actual situation. To better distinguish the two kinds of identifiers, the gift identifiers may be grouped into one area of the interface, namely the gift area of step 201, and the object identifiers grouped into another, namely the arrangement area of object identifiers that the graphical user interface of step 201 further comprises. Likewise, to better perform the selection of object identifiers and avoid interference between gift identifiers and object identifiers during selection, the gift area, the object identifiers, and the operation area can be placed at different positions in the graphical user interface.
The arrangement of the gift identifiers within the gift area is likewise not limited and can be set flexibly; for example, the gift identifiers may be arranged around a common circle center to form an arc within the gift area, or arranged in rows with the same number of identifiers per row. Fig. 3 is a schematic diagram of the graphical user interface: the gift area 301 is located at the bottom of the terminal device's display, the gift identifiers surround a common circle center forming an arc within the gift area 301, and the arrangement area 302 of object identifiers is located at the top of the display.
For example, after the terminal executes a voice live broadcast application, a voice live broadcast interface, that is, a graphical user interface, is generated. When a user in the live broadcast room wants to present a virtual gift available in the live broadcast application to the anchor, the user can click the gift-style icon to trigger the live broadcast application to provide the gift area on the graphical user interface.
Step 202, in response to a gift selection operation for the gift area, determining a target gift identifier from the at least one gift identifier.
In one or more embodiments of the present application, after triggering the graphical user interface in the live broadcast application, the user may select, in the gift area, the gift identifier to be presented to the anchor and/or other users as the target gift identifier, thereby triggering the presentation flow of the target virtual gift; the user then selects, from the object identifiers, the identifier of the intended recipient as the target object identifier, and the virtual gift corresponding to the target gift identifier is presented to the target object identifier. The gift selection operation by which the user selects the target gift identifier is not limited: the user may tap and/or long-press a gift identifier to make it the target gift identifier, or may slide the gift identifiers within the gift area until the gift identifier to be given reaches the center of the gift area, whereupon the terminal device takes the gift identifier located at the center of the gift area as the target gift identifier. For example, if the gift identifiers are arranged around a common circle center to form an arc in the gift area, the user may move the gift identifier to be presented to the position directly above the circle center; the gift identifier 401 located directly above the circle center is then the target gift identifier, as shown in fig. 4.
In the embodiment of the application, if the user wants to present virtual gifts corresponding to a plurality of target gift identifiers to the anchor and/or other users, the user may perform the related operations multiple times; the operations related to giving a plurality of virtual gifts are not limited. For example, after completing one virtual gift-giving flow, the user may select a target gift identifier again and repeat the flow until it has been performed as many times as the number of virtual gifts the user intends to give. Alternatively, after selecting the target gift identifier, the user may long-press the target gift identifier to set the number of virtual gifts to be presented.
Step 203, selecting an initial object identifier from the at least one object identifier in response to a first sliding operation that is continuous with the gift selection operation.
In the embodiment of the application, after the user selects the target gift identifier, a first sliding operation continuous with the gift selection operation may be performed directly to determine the initial object identifier; in the common case, the object to which the user most wants to present a virtual gift is the anchor of the live broadcast room. To distinguish the object identifier selected by the user from the unselected object identifiers, the initial object identifier may be set to present a target display state. The target display state may be a steady (always-on) display, a blinking display, or the like.
In this embodiment of the application, when the user is in the single-gift mode, step 203, "selecting an initial object identifier from the at least one object identifier in response to a first sliding operation that is continuous with the gift selection operation", may specifically be: triggering the single-gift mode in response to the first sliding operation continuous with the gift selection operation; setting, based on the single-gift mode, the anchor user identifier corresponding to the anchor to present the target display state; and determining the anchor user identifier presenting the target display state as the initial object identifier.
In this embodiment, when the user is in the multi-person gift mode, step 203 may likewise be: triggering the multi-person gift mode in response to the first sliding operation continuous with the gift selection operation; setting, based on the multi-person gift mode, the anchor user identifier corresponding to the anchor to present the target display state; and determining the anchor user identifier presenting the target display state as the initial object identifier.
In one embodiment of the present application, after determining the target gift identifier to be given, the user may choose to present the virtual gift to the anchor and/or other users in either the single-gift mode or the multi-person gift mode. In the single-gift mode the user presents the virtual gift to the user corresponding to one object identifier; in the multi-person gift mode the user presents the virtual gift to the users corresponding to at least two object identifiers. To better identify which gift mode the user has selected, the single-gift mode and the multi-person gift mode can be assigned different trigger areas, namely a single-person mode trigger area and a multi-person mode trigger area, and whether the single-gift mode or the multi-person gift mode is triggered is determined from the characteristics of the first sliding operation performed by the user.
In the embodiment of the present application, whichever gift mode the user selects, the selected mode is entered by performing the corresponding mode-triggering operation in the corresponding area; therefore the "graphical user interface" of step 201 further includes a single-person mode trigger area and a multi-person mode trigger area. The single-person mode trigger area comprises the area whose distance from a first position is within a first preset distance, the multi-person mode trigger area comprises the area whose distance from the first position is greater than the first preset distance and less than or equal to a second preset distance, and the first position comprises the center position of the gift area. The first preset distance differs from the second preset distance; their sizes and specific values are not limited and can be set flexibly according to the actual situation. Fig. 4 is a schematic diagram of the single-person mode trigger area and the multi-person mode trigger area, where area 402 is the single-person mode trigger area and area 403 is the multi-person mode trigger area.
In the embodiment of the application, the single-person mode trigger area and the multi-person mode trigger area may or may not be displayed in the graphical user interface, and their sizes and/or shapes may be the same or different. If the two areas are displayed in the graphical user interface, their display effects can be made different, for example by using different display colors and/or display brightness, so that the user can clearly distinguish the two areas.
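The annular trigger areas described above amount to a simple distance check against the gift-area center. The following is an illustrative sketch only, not part of the disclosed method: it assumes Euclidean screen distance from the first position, and the function name and all threshold values are hypothetical.

```python
import math

def classify_trigger_zone(touch, center, d1, d2):
    """Classify a touch point into a mode trigger zone.

    Hypothetical reading of the areas above: points within the first
    preset distance d1 of the first position (the gift-area center)
    fall in the single-person mode trigger area; points farther than
    d1 but within the second preset distance d2 fall in the
    multi-person mode trigger area; anything farther hits neither.
    """
    dist = math.hypot(touch[0] - center[0], touch[1] - center[1])
    if dist <= d1:
        return "single"
    if dist <= d2:
        return "multi"
    return None
```

With a gift-area center of (200, 600) and hypothetical thresholds d1 = 120 and d2 = 240, a final touch point at (200, 520) would land in the single-person zone and one at (200, 420) in the multi-person zone.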
In this embodiment, if the user selects the single-gift mode, step 203, "selecting an initial object identifier from the at least one object identifier in response to a first sliding operation continuous with the gift selection operation", may include performing a related operation in the graphical user interface to trigger the single-gift mode. The operation that triggers the single-gift mode may be implemented through the following steps S2021 to S2022:
Step S2021: in response to the first sliding operation continuous with the gift selection operation, acquiring the final touch point of the first sliding operation.
Step S2022: if the final touch point of the first sliding operation is located in the single-person mode trigger area, triggering the single-gift mode.
In the embodiment of the application, when the final touch point of the first sliding operation performed by the user lies in the single-person mode trigger area, the live broadcast application enters the single-gift mode. A ray may be generated in the graphical user interface at this moment, and from the displayed ray the user can confirm that the single-gift mode has been entered successfully. The width, length and extending direction of the ray are not limited and can be set flexibly according to the actual situation: the ray may extend outward from the target gift identifier selected for presentation, or from the center position of the gift area. Fig. 5 is a schematic diagram of a ray in the graphical user interface, in which ray 501 extends outward from the gift identifier 401.
In this embodiment of the application, whether the single-gift mode is entered may also be determined from a characteristic of the first sliding operation performed by the user. Another method of triggering the single-gift mode is: in response to the first sliding operation continuous with the gift selection operation, acquiring the position of the initial touch point and the position of the final touch point of the first sliding operation; acquiring the moving distance from the position of the initial touch point to the position of the final touch point; and if the moving distance is greater than the first preset distance and less than the second preset distance, triggering the single-gift mode. The moving distance may be the distance from the initial touch point to the final touch point along the track of the first sliding operation, or the straight-line distance between the two points on the graphical user interface.
In this embodiment, if the user selects the multi-person gift mode, step 203, "selecting an initial object identifier from the at least one object identifier in response to a first sliding operation continuous with the gift selection operation", may include performing a related operation in the graphical user interface to trigger the multi-person gift mode. Specifically, triggering the multi-person gift mode may be: in response to the first sliding operation continuous with the gift selection operation, acquiring the final touch point of the first sliding operation; and if the final touch point of the first sliding operation is located in the multi-person mode trigger area, triggering the multi-person gift mode.
In an embodiment of the present application, the determination for triggering the multi-person gift mode may also be: in response to the first sliding operation continuous with the gift selection operation, acquiring the position of the initial touch point and the position of the final touch point of the first sliding operation; acquiring the moving distance from the position of the initial touch point to the position of the final touch point; and if the moving distance is greater than the second preset distance, triggering the multi-person gift mode. As before, the moving distance may be the distance from the initial touch point to the final touch point along the track of the first sliding operation, or the straight-line distance between the two points on the graphical user interface.
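The two readings of "moving distance" above (distance along the sliding track versus straight-line distance) can be sketched as follows. This is a hedged illustration only: all names and threshold values are hypothetical, and touch points are assumed to be sampled as (x, y) pairs.

```python
import math

def path_length(points):
    """Distance travelled along the sliding track (a polyline through
    the sampled (x, y) touch points)."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def straight_distance(points):
    """Straight-line distance from the initial to the final touch point."""
    return math.dist(points[0], points[-1])

def triggered_mode(points, d1, d2, use_track=True):
    """Apply the hypothetical thresholds described above: a moving
    distance greater than d2 triggers the multi-person gift mode, one
    greater than d1 (but not d2) triggers the single-gift mode, and a
    shorter move triggers neither."""
    moved = path_length(points) if use_track else straight_distance(points)
    if moved > d2:
        return "multi"
    if moved > d1:
        return "single"
    return None
```

Note that a back-and-forth slide such as (0, 0) → (100, 0) → (0, 0) has a track length of 200 but a straight-line distance of 0, so the two readings of "moving distance" can classify the same gesture differently.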
Step 204, in response to a second sliding operation continuous with the first sliding operation, determining at least one target object identifier from the at least one object identifier according to the second sliding operation and the initial object identifier.
In the embodiment of the application, in response to the first sliding operation performed by the user, the terminal may automatically trigger the single-gift mode or the multi-person gift mode according to the type of the first sliding operation and automatically select the initial object identifier for the user. When the target object to which the user wants to present the virtual gift corresponds to the initial object identifier, the user may directly perform the gift-giving operation, presenting the virtual gift corresponding to the target gift identifier to the user corresponding to the initial object identifier. When the intended target object identifier is not the automatically assigned initial object identifier, the user may perform a second sliding operation to reselect the target object identifier.
In this embodiment of the application, when the single-gift mode is triggered, the user may select one object identifier as the target object identifier. In this case, step 204 may be: in response to the second sliding operation continuous with the first sliding operation, acquiring the touch point of the second sliding operation; when the extension line through the touch point and the center point of the gift area passes through an object identifier other than the initial object identifier, canceling the target display state of the initial object identifier; as the touch point of the second sliding operation moves, setting the object identifier through which the extension line of the moved touch point and the center point passes to present the target display state, such that only one object identifier on the graphical user interface presents the target display state at a time; and determining the object identifier that last presented the target display state as the target object identifier.
In this embodiment, if a ray extending outward from the target gift identifier is displayed in the graphical user interface after the single-gift mode is triggered, the ray may be set to rotate along with the second sliding operation, and the object identifier through which the ray passes is determined as the target object identifier. For example, if the gift identifiers form an arc in the gift area, the user may perform the second sliding operation along the direction of the arc. When the second sliding operation stops, a first included angle between the line connecting the user's finger with the arc center and the vertical direction of the terminal device screen is acquired; the ray is rotated so that a second included angle between the ray and the vertical direction equals the first included angle, which gives the extending direction of the ray after the second sliding operation; and the object identifier through which the rotated ray passes is determined as the target object identifier.
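The angle-matching step can be illustrated as below: the first included angle is computed from the finger's final touch point and the arc center, and the object identifier whose angular position is nearest to the rotated ray is chosen. This is a sketch under stated assumptions, not the patented implementation: the angular tolerance, the angle convention (degrees from vertical, with screen y growing downward) and all names are hypothetical.

```python
import math

def angle_from_vertical(point, center):
    """First included angle, in degrees, between the line from the arc
    center to the touch point and the vertical direction of the screen.
    Screen y is assumed to grow downward, so 'up' is negative y."""
    dx = point[0] - center[0]
    dy = center[1] - point[1]  # flip so that upward is positive
    return math.degrees(math.atan2(dx, dy))

def pick_target_by_ray(touch, center, object_angles, tolerance=10.0):
    """Rotate the ray to the first included angle and pick the object
    identifier whose angular position on the arc is closest, within a
    hypothetical angular tolerance; return None if none is close enough."""
    ray_angle = angle_from_vertical(touch, center)
    best = min(object_angles, key=lambda oid: abs(object_angles[oid] - ray_angle))
    if abs(object_angles[best] - ray_angle) <= tolerance:
        return best
    return None
```

For instance, a final touch point up and to the right of the arc center at 45° would select an object identifier placed at 45° on the arrangement arc.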
In this embodiment of the application, when the multi-person gift mode is triggered, the user may select a plurality of object identifiers as target object identifiers. In this case, step 204 may be: in response to the second sliding operation continuous with the first sliding operation, acquiring the touch point of the second sliding operation; mapping the touch point of the second sliding operation into the arrangement area of the at least one object identifier; when the touch point of the second sliding operation coincides with an object identifier, setting that object identifier to change from a first display state to the target display state; and when the touch point of the second sliding operation stops moving on the graphical user interface, determining the at least one object identifier presenting the target display state as the target object identifier. An object identifier presenting the target display state may be displayed steadily, as an always-on identifier, or may blink continuously.
For example, in the multi-person gift mode the user selects a plurality of object identifiers as target object identifiers, so the object identifiers touched during the second sliding operation can be set as the target gift objects; to let the user clearly see which object identifiers have been touched, the touched object identifiers can be displayed in the always-on manner.
In the embodiment of the present application, the size of a touched object identifier may also be changed to distinguish it from the untouched object identifiers.
In this embodiment, in order to determine the plurality of target object identifiers selected by the user from the second sliding operation, a multi-person mode operation area for performing the second sliding operation may be provided on the graphical user interface. A first position of each touch point of the second sliding operation within the multi-person mode operation area is acquired, and a first relationship between the first position and the multi-person mode operation area is determined; a second position is then mapped into the arrangement area of the object identifiers such that the second relationship between the second position and the arrangement area equals the first relationship. In this way the touch points of the second sliding operation on the terminal device screen correspond one-to-one with positions in the arrangement area of the object identifiers, from which the object identifiers touched by the second sliding operation are obtained. For example, as shown in fig. 6, the multi-person mode operation area 601 may be located in the middle of the graphical user interface.
In an embodiment of the application, since the second sliding operation performed in the multi-person mode operation area may be complex, the object identifiers in the arrangement area may instead be mapped one by one into the multi-person mode operation area, which makes it easier to determine the object identifiers touched by the second sliding operation.
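The one-to-one mapping between the multi-person mode operation area and the arrangement area amounts to expressing the touch point in normalized coordinates of one rectangle and re-expressing it in the other. A minimal sketch, assuming both areas are axis-aligned rectangles given as (x, y, width, height); the function name is illustrative:

```python
def map_point(p, src_rect, dst_rect):
    """Map a touch point from the multi-person mode operation area onto
    the arrangement area of the object identifiers, so that the two
    rectangles correspond point-for-point. Both rectangles are given as
    (x, y, width, height)."""
    sx, sy, sw, sh = src_rect
    dx, dy, dw, dh = dst_rect
    u = (p[0] - sx) / sw   # normalized horizontal position in the source
    v = (p[1] - sy) / sh   # normalized vertical position in the source
    return (dx + u * dw, dy + v * dh)
```

The inverse mapping (arrangement area into the operation area), mentioned above as an alternative, is the same function with the rectangle arguments swapped.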
In the embodiment of the present application, so that the user can perform the first sliding operation and the second sliding operation more conveniently and quickly, the two operations may be made continuous. That is, if the user performs the first and second sliding operations by touching the terminal device screen with a finger, the finger does not leave the screen between the two operations.
In some embodiments of the present application, if the user wants to cancel a selected target object identifier, the cancellation may be performed as follows: when an object identifier presenting the target display state coincides again with the touch point of the second sliding operation, that object identifier is set to change back to the first display state. The first display state may be the state of the object identifier before the user selected it. For example, if an object identifier to which the user does not want to give a gift is among the always-on identifiers, its always-on display may be canceled, preventing the virtual gift from being given to an unintended object: when the touch point of the second sliding operation coincides with the object identifier displayed always-on, the always-on display of that identifier is eliminated.
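The highlight/cancel behaviour above is a toggle on the set of selected identifiers. A sketch only, assuming the caller invokes it once each time the mapped touch point newly coincides with an identifier (not once per rendered frame); the name is hypothetical:

```python
def update_selection(selected, touched_id):
    """Toggle the display state of an object identifier during the second
    sliding operation: the first coincidence switches it to the target
    display state (highlighted), and a later coincidence with an already
    highlighted identifier cancels it back to the first display state."""
    if touched_id in selected:
        selected.discard(touched_id)   # cancel the target display state
    else:
        selected.add(touched_id)       # present the target display state
    return selected
```

When the touch point stops moving, whatever remains in the set is taken as the target object identifiers.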
Step 205, in response to the gift giving operation continuous with the second sliding operation, giving a virtual gift corresponding to the target gift identifier to the user corresponding to the at least one target object identifier.
In the embodiment of the application, after the user has selected the target object identifier to receive the virtual gift, the gift-giving operation may be performed; the gift-giving operation may be a sliding operation, upon which the terminal device presents the virtual gift corresponding to the selected gift identifier to the user corresponding to the target object identifier.
In some embodiments of the application, if the gift-giving operation is a sliding operation, then when the user has selected the single-gift mode, a first sliding track and a first sliding rate of the gift-giving operation are acquired, together with the length of the first sliding track; if the first sliding rate is greater than a first preset rate and the length of the first sliding track is greater than a first preset length, the virtual gift corresponding to the target gift identifier is given to the user corresponding to the target object identifier. Further, it may also be required that the offset of the first sliding track from the ray is less than a first offset threshold.
In some embodiments of the application, when the user has selected the multi-person gift mode and has determined the target object identifiers, the user may perform the gift-giving operation so that the terminal device presents the virtual gift corresponding to the gift identifier to the users corresponding to the target object identifiers. The gift-giving operation may be a sliding operation whose start point and end point are not limited. To determine that the sliding operation performed by the user is valid, its validity may be judged in terms of sliding rate, sliding distance and/or an offset threshold. After the user performs the gift-giving operation, a second sliding track and a second sliding rate of the operation are acquired, together with the length of the second sliding track; if the second sliding track runs vertically upward along the graphical user interface, the second sliding rate is greater than a second preset rate, and the length of the second sliding track is greater than a second preset length, the virtual gift corresponding to the target gift identifier is given to the users corresponding to the at least one target object identifier. Further, it may also be required that the offset of the second sliding track from the vertical upward direction of the graphical user interface is less than the first offset threshold. The second preset rate, the second preset length and the first offset threshold are not limited and can be set flexibly according to the actual situation.
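The validity checks for the vertically-upward gift-giving swipe (rate, track length, direction and horizontal offset) might be combined as sketched below. All threshold parameters are hypothetical, and screen coordinates are assumed to have y increasing downward, so an upward swipe ends at a smaller y than it starts:

```python
import math

def swipe_is_valid(track, duration_s, min_rate, min_length, max_offset):
    """Check a vertically-upward gift-giving swipe against hypothetical
    thresholds: fast enough, long enough, heading upward, and staying
    within max_offset horizontally of a vertical path through its start."""
    length = sum(math.dist(a, b) for a, b in zip(track, track[1:]))
    rate = length / duration_s if duration_s > 0 else float("inf")
    goes_up = track[-1][1] < track[0][1]
    # horizontal deviation from a perfectly vertical path through the start
    offset = max(abs(p[0] - track[0][0]) for p in track)
    return rate > min_rate and length > min_length and goes_up and offset < max_offset
```

A slow swipe, or one that drifts sideways beyond the offset threshold, would be rejected rather than triggering the gift presentation.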
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
According to the virtual gift interaction method described above, a gift area is provided in the graphical user interface; after selecting in the gift area the target gift identifier to be presented, the user can directly perform continuous operations, without switching interfaces, to conveniently and quickly select the target object identifier and complete the presentation of the virtual gift corresponding to the target gift identifier. Because the entire presentation flow is performed in the same graphical user interface, the operations required of the user are simplified, and the efficiency of presenting virtual gifts and the interactive experience are improved.
Referring to fig. 7, fig. 7 is another flow chart of a virtual gift interaction method according to an embodiment of the present application. Taking as an example the case in which the user realizes the interaction method by touching the terminal device screen with a finger, the specific flow of the method may be as follows:
Step 701, executing the live broadcast application, and generating a graphical user interface.
For example, the live broadcast application may be a voice live broadcast, game live broadcast and/or video entertainment live broadcast application, among others.
Step 702, clicking the virtual gift-giving identifier in the live broadcast application, and displaying the gift area.
For example, when a user in the live broadcast room wants to give a virtual gift available in the live broadcast application to the anchor and/or other users, the user may click the gift-style icon, triggering the live broadcast application to generate the gift area. The graphical user interface may contain gift identifiers and object identifiers.
Step 703, selecting the target gift identifier in response to a moving operation on the gift identifiers.
For example, if the gift identifiers are arranged around a common circle center to form an arc in the gift area, the user may move the gift identifier to be presented to the position directly above the circle center; the gift identifier 401 located directly above the circle center is then the target gift identifier, as shown in fig. 4.
Step 704, selecting an initial object identifier from the at least one object identifier in response to a first sliding operation that is continuous with the moving operation.
For example, after the user selects the target gift identifier, a first sliding operation continuous with the gift selection operation may be performed directly to determine the initial object identifier; in the common case, the object to which the user most wants to present a virtual gift is the anchor of the live broadcast room.
Step 705, determining to trigger the single-gift mode according to the first sliding operation, and displaying a ray extending from the target gift identifier in the graphical user interface.
For example, if the user selects the single-gift mode, the user's finger moves from the target gift identifier into the single-person mode trigger area during the first sliding operation. When the finger touches the single-person mode trigger area, the single-gift flow is entered, and a ray extending from the target gift identifier is displayed in the graphical user interface.
Step 706, in response to a second sliding operation continuous with the first sliding operation, rotating the ray, and determining the object identifier through which the ray passes as the target object identifier.
For example, if the gift identifiers form an arc in the gift area, the user may perform the second sliding operation along the direction of the arc in the single-person mode operation area, and the ray slides along with the finger on the screen. When the second sliding operation stops, a first included angle between the line connecting the user's finger with the arc center and the vertical direction of the terminal device screen is acquired; the ray is rotated so that a second included angle between the ray and the vertical direction equals the first included angle, which gives the extending direction of the ray after the second sliding operation; and the object identifier through which the rotated ray passes is determined as the target object identifier.
Step 707, in response to a gift-giving operation along the direction of the ray, giving the virtual gift corresponding to the target gift identifier to the user corresponding to the target object identifier.
For example, the user slides along the ray from the position where the second sliding operation stopped, whereupon the virtual gift corresponding to the target gift identifier is given to the user corresponding to the target object identifier.
Step 708, determining to trigger the multi-person gift mode according to the first sliding operation, and displaying the multi-person mode operation area in the graphical user interface.
For example, if the user selects the multi-person gift mode, the user's finger moves from the target gift identifier into the multi-person mode trigger area. When the finger touches the multi-person mode trigger area, the multi-person gift flow is entered, and the multi-person mode operation area is displayed in the graphical user interface.
Step 709, mapping the second sliding operation of the user in the multi-person mode operation area to the arrangement area of the object identifiers.
Step 710, in response to a second sliding operation in the multi-person mode operation area, setting at least two touched object identifiers to be displayed always-on, and taking the always-on identifiers as the target object identifiers.
For example, the user may perform the second sliding operation in the multi-person mode operation area, following the positions of the object identifiers in the arrangement area, until the object identifiers to which the user wants to give the gift have been touched.
Step 711, in response to a gift-giving operation vertically upward along the screen, giving the virtual gift corresponding to the target gift identifier to the users corresponding to the target object identifiers.
For example, the user slides vertically upward along the screen from the position where the second sliding operation stopped, whereupon the virtual gift corresponding to the target gift identifier is given to the users corresponding to the target object identifiers.
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
According to the virtual gift interaction method described above, a gift area is provided in the graphical user interface; after selecting in the gift area the target gift identifier to be presented, the user can directly perform continuous operations, without switching interfaces, to conveniently and quickly select the target object identifier and complete the presentation of the virtual gift corresponding to the target gift identifier. Because the entire presentation flow is performed in the same graphical user interface, the operations required of the user are simplified, and the efficiency of presenting virtual gifts and the interactive experience are improved.
In order to better implement the virtual gift interaction method of the embodiments of the present application, an embodiment of the present application further provides a virtual gift interaction device. Referring to fig. 8, fig. 8 is a schematic structural diagram of a virtual gift interaction device according to an embodiment of the present application. The virtual gift interaction device may include a display unit 801, a first determination unit 802, a selection unit 803, a second determination unit 804, and a gifting unit 805.
A display unit 801, configured to display a gift area in the graphical user interface, where the gift area includes at least one gift identifier;
a first determining unit 802 for determining a target gift identification from the at least one gift identification in response to a gift selection operation for the gift area;
a selecting unit 803, configured to select an initial object identifier from the at least one object identifier in response to a first sliding operation consecutive to the gift selecting operation;
a second determining unit 804, configured to determine, in response to a second sliding operation that is continuous with the first sliding operation, at least one target object identifier from the at least one object identifier according to the second sliding operation and the initial object identifier;
a gifting unit 805 configured to gift the virtual gift corresponding to the target gift identifier to the user corresponding to the at least one target object identifier in response to a gift gifting operation that is continuous with the second sliding operation.
Optionally, the selecting unit 803 is further configured to:
triggering a single gift mode in response to the first sliding operation that is continuous with the gift selecting operation;
setting an anchor user identifier corresponding to an anchor user to present a target display state based on the single gift mode;
and determining the anchor user identifier presenting the target display state as the initial object identifier.
Optionally, the graphical user interface includes a single-person mode trigger area, and the selecting unit 803 is further configured to:
responding to the first sliding operation continuous with the gift selection operation, and acquiring a final touch point of the first sliding operation;
and if the final touch point of the first sliding operation is located in the single-person mode trigger area, triggering the single gift mode.
Optionally, the selecting unit 803 is further configured to:
responding to the first sliding operation which is continuous with the gift selection operation, and acquiring the position of an initial touch point of the first sliding operation and the position of a final touch point of the first sliding operation;
acquiring the moving distance from the position of the initial touch point to the position of the final touch point;
and if the moving distance is greater than the first preset distance and less than the second preset distance, triggering a single gift giving mode.
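The two distance thresholds above (together with the multi-person condition described later, where the distance exceeds the second preset distance) could be sketched as a simple mode-selection helper. The pixel values are illustrative assumptions; the text only names "first" and "second" preset distances:

```python
import math

# Hypothetical thresholds standing in for the "preset" distances in the text.
FIRST_PRESET = 80.0    # px
SECOND_PRESET = 200.0  # px

def gift_mode(initial_point, final_point):
    """Map the travel distance of the first sliding operation to a gift mode:
    between the two presets -> single gift mode, beyond the second preset ->
    multi-person gift mode, otherwise no mode is triggered."""
    distance = math.dist(initial_point, final_point)
    if FIRST_PRESET < distance < SECOND_PRESET:
        return "single"
    if distance > SECOND_PRESET:
        return "multi"
    return None  # slide too short: no gift mode triggered
```

This keeps the two modes mutually exclusive, which matches the text: a medium-length first slide selects the single gift mode, a long one selects the multi-person gift mode.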
Optionally, the second determining unit 804 is further configured to:
responding to the second sliding operation continuous with the first sliding operation, and acquiring a touch point of the second sliding operation;
when the extension line through the touch point and the center point of the gift area passes through an object identifier other than the initial object identifier, canceling the target display state of the initial object identifier;
along with the movement of the touch point of the second sliding operation, setting the object identifier passed by the extension line through the moved touch point and the central point to present the target display state, wherein only one object identifier on the graphical user interface presents the target display state at a time;
and determining the object identifier presenting the target display state for the last time as the target object identifier.
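The extension-line selection above is essentially a ray cast from the gift-area center through the touch point. A minimal geometric sketch follows; the hit radius and all coordinates are assumptions for illustration, and returning only the nearest hit mirrors the rule that one identifier presents the target display state at a time:

```python
import math

def selected_identifier(center, touch, identifiers, radius=40.0):
    """Return the id of the object identifier hit by the ray that starts at
    `center` (the gift-area center) and passes through the current `touch`
    point. `identifiers` maps id -> (x, y) icon center; `radius` is an
    assumed hit radius around each icon."""
    dx, dy = touch[0] - center[0], touch[1] - center[1]
    norm = math.hypot(dx, dy)
    if norm == 0:
        return None  # touch at the center: direction undefined
    ux, uy = dx / norm, dy / norm  # unit direction of the extension line
    best = None
    for ident, (px, py) in identifiers.items():
        # Project the icon center onto the ray; negative t means behind it.
        t = (px - center[0]) * ux + (py - center[1]) * uy
        if t < 0:
            continue
        # Perpendicular distance from the icon center to the ray.
        qx, qy = center[0] + t * ux, center[1] + t * uy
        if math.hypot(px - qx, py - qy) <= radius and (best is None or t < best[0]):
            best = (t, ident)
    return best[1] if best else None
```

As the touch point moves, recomputing this on each move event makes the highlighted identifier track the direction of the user's finger relative to the gift area.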
Optionally, the giving unit 805 is further configured to:
in response to the gift giving operation continuous with the second sliding operation, acquiring a first sliding track and a first sliding rate of the gift giving operation, and acquiring the length of the first sliding track;
if the first sliding track is along the extension line, the first sliding speed is greater than a first preset speed, and the length of the first sliding track is greater than a first preset length, the virtual gift corresponding to the target gift identification is presented to the user corresponding to the at least one target object identification.
Optionally, the selecting unit 803 is further configured to:
triggering a multi-gift mode in response to the first sliding operation continuing with the gift selecting operation;
setting an anchor user identifier corresponding to an anchor user to present the target display state based on the multi-person gift mode;
and determining the anchor user identifier presenting the target display state as the initial object identifier.
Optionally, the graphical user interface includes a multi-user mode trigger area, and the selecting unit 803 is further configured to:
responding to the first sliding operation continuous with the gift selection operation, and acquiring a final touch point of the first sliding operation;
and if the final touch point of the first sliding operation is located in the multi-person mode trigger area, triggering the multi-person gift mode.
Optionally, the selecting unit 803 is further configured to:
responding to the first sliding operation continuous with the gift selection operation, and acquiring the position of an initial touch point of the first sliding operation and the position of a final touch point of the first sliding operation;
acquiring the moving distance from the position of the initial touch point to the position of the final touch point;
and if the moving distance is greater than a second preset distance, triggering the multi-person gift mode.
Optionally, the graphical user interface includes a permutation zone of the at least one object identifier, and the second determining unit 804 is further configured to:
responding to the second sliding operation continuous with the first sliding operation, and acquiring a touch point of the second sliding operation;
mapping the touch point of the second sliding operation in the arrangement area of the at least one object identifier;
when the touch point of the second sliding operation is coincident with the at least one object identifier, setting the object identifier coincident with the touch point of the second sliding operation to be changed from a first display state to the target display state;
and when the touch point of the second sliding operation stops moving on the graphical user interface, determining at least one object identifier presenting the target display state as the target object identifier.
Optionally, the second determining unit 804 is further configured to:
and when an object identifier that presents the target display state coincides again with the touch point of the second sliding operation, setting that object identifier to change back to the first display state.
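The multi-person selection described here behaves like a toggle: touching an unselected identifier highlights it, and touching a highlighted one reverts it. A small sketch of that logic (names are illustrative, not from the patent):

```python
def update_selection(selected, touched_id):
    """Toggle logic for the multi-person gift mode: when the touch point of
    the second sliding operation coincides with an identifier, an unselected
    identifier changes to the target display state and an already-selected
    one reverts to the first display state. `selected` is the set of
    currently highlighted identifier ids; a new set is returned."""
    out = set(selected)
    if touched_id is None:
        return out  # touch point coincides with no identifier
    if touched_id in out:
        out.discard(touched_id)  # second coincidence: back to first display state
    else:
        out.add(touched_id)      # first coincidence: present target display state
    return out
```

When the second sliding operation stops, whatever remains in the set is the group of target object identifiers that receive the gift.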
Optionally, the giving unit 805 is further configured to:
acquiring a second sliding track and a second sliding rate of the gift giving operation in response to the gift giving operation continuing from the second sliding operation, and acquiring a length of the second sliding track;
if the second sliding track is vertically upward along the graphical user interface, the second sliding speed is greater than a second preset speed, and the length of the second sliding track is greater than a second preset length, the virtual gift corresponding to the target gift identification is presented to the user corresponding to the at least one target object identification.
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
According to the virtual gift interaction device provided by the embodiment of the present application, the display unit 801 provides a gift area in the graphical user interface. After the target gift identifier to be given is selected in the gift area through the first determination unit 802, continuous operations can be performed directly, without switching the operation interface, through the selection unit 803 and the second determination unit 804 to conveniently and quickly select the target object identifier(s) of the intended recipients, and giving of the virtual gift corresponding to the target gift identifier is completed through the gifting unit 805. Because the entire giving process is performed within the same graphical user interface, the operations required of the user are simplified, and both the efficiency of giving virtual gifts and the interactive experience are improved.
Correspondingly, the embodiment of the application also provides a computer device, which can be a terminal, and the terminal can be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game machine, a personal computer, a personal digital assistant and the like. As shown in fig. 9, fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 900 includes a processor 901 having one or more processing cores, a memory 902 having one or more computer-readable storage media, and a computer program stored on the memory 902 and executable on the processor. The processor 901 is electrically connected to the memory 902. Those skilled in the art will appreciate that the computer device configurations illustrated in the figures are not meant to be limiting of computer devices and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The processor 901 is a control center of the computer apparatus 900, connects various parts of the entire computer apparatus 900 by various interfaces and lines, performs various functions of the computer apparatus 900 and processes data by running or loading software programs and/or modules stored in the memory 902 and calling data stored in the memory 902, thereby monitoring the computer apparatus 900 as a whole.
In this embodiment of the application, the processor 901 in the computer device 900 loads instructions corresponding to processes of one or more application programs into the memory 902, and the processor 901 runs the application programs stored in the memory 902, so as to implement various functions as follows:
providing a gift area in the graphical user interface, the gift area including at least one gift identification; determining a target gift identification from the at least one gift identification in response to a gift selection operation for the gift area; selecting an initial object identifier from the at least one object identifier in response to a first swipe operation that is continuous with the gift selection operation; responding to a second sliding operation continuous with the first sliding operation, and determining at least one target object identifier from the at least one object identifier according to the second sliding operation and the initial object identifier; and in response to the gift giving operation continuous with the second sliding operation, giving the virtual gift corresponding to the target gift identification to the user corresponding to the at least one target object identification.
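The chain of functions the processor implements — gift selection, first slide, second slide, gift gesture — forms one continuous interaction and could be modeled as a small state machine. This is a sketch under assumed event and state names; none of these identifiers come from the patent:

```python
class GiftGesture:
    """Minimal state machine for the continuous gift gesture: gift selection
    -> first slide (mode + initial identifier) -> second slide (targets)
    -> vertically-upward gift gesture. Event names are illustrative."""

    def __init__(self):
        self.state = "idle"
        self.gift = None
        self.targets = []

    def on_event(self, event, payload=None):
        if self.state == "idle" and event == "gift_selected":
            self.gift, self.state = payload, "mode_select"
        elif self.state == "mode_select" and event == "first_slide_done":
            # payload: the initial object identifier chosen by the first slide
            self.targets, self.state = [payload], "target_select"
        elif self.state == "target_select" and event == "second_slide_done":
            # payload: the target object identifiers chosen by the second slide
            self.targets, self.state = list(payload), "ready"
        elif self.state == "ready" and event == "gift_gesture":
            result = (self.gift, self.targets)
            self.state, self.gift, self.targets = "idle", None, []
            return result  # gift and recipients, ready to be given
        return None
```

Only the final `gift_gesture` event produces a result, reflecting that the gift is given only when the whole continuous operation completes; lifting the finger early would simply reset the machine.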
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Optionally, as shown in fig. 9, the computer apparatus 900 further includes: touch-sensitive display screen 903, radio frequency circuit 904, audio circuit 905, input unit 906 and power 907. The processor 901 is electrically connected to the touch display 903, the radio frequency circuit 904, the audio circuit 905, the input unit 906 and the power supply 907. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 9 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components.
The touch screen 903 may be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display screen 903 may include a display panel and a touch panel. The display panel may be used, among other things, to display information entered by or provided to a user and various graphical user interfaces of the computer device, which may be made up of graphics, text, icons, video, and any combination thereof. Alternatively, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The touch panel may be used to collect touch operations of a user on or near the touch panel (for example, operations of the user on or near the touch panel using any suitable object or accessory such as a finger, a stylus pen, and the like), and generate corresponding operation instructions, and the operation instructions execute corresponding programs. Alternatively, the touch panel may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 901, and can receive and execute commands sent by the processor 901. The touch panel may cover the display panel, and when the touch panel detects a touch operation on or near the touch panel, the touch panel transmits the touch operation to the processor 901 to determine the type of the touch event, and then the processor 901 provides a corresponding visual output on the display panel according to the type of the touch event. 
In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 903 to realize the input and output functions. However, in some embodiments, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display screen 903 may also serve as a part of the input unit 906 to implement an input function.
In this embodiment of the application, a processor 901 executes a live application to generate a graphical user interface on the touch display 903, where the graphical user interface includes a gift identifier and an object identifier, and according to a gift selection operation, a first sliding operation, a second sliding operation, and a gift presentation operation that are performed continuously by a user in the graphical user interface, the processor 901 may complete a presentation object for selecting a virtual gift and implement a presentation process.
The radio frequency circuit 904 may be configured to transceive radio frequency signals to establish wireless communication with a network device or other computer device via wireless communication, and to transceive signals with the network device or other computer device.
The audio circuit 905 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. The audio circuit 905 can transmit an electrical signal converted from received audio data to the speaker, where it is converted into a sound signal and output; conversely, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 905 and converted into audio data. After the audio data is processed by the processor 901, it is sent to another computer device through the radio frequency circuit 904, or output to the memory 902 for further processing. The audio circuit 905 may also include an earbud jack to provide communication between peripheral headphones and the computer device.
The input unit 906 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
A power supply 907 is used to power the various components of the computer device 900. Optionally, the power supply 907 may be logically connected to the processor 901 through a power management system, so as to implement functions of managing charging, discharging, power consumption management, and the like through the power management system. Power supply 907 may also include any component such as one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown in fig. 9, the computer device 900 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
As can be seen from the above, the computer device provided in this embodiment may provide a gift area in the graphical user interface. After the user selects in the gift area the target gift identifier to be given, the user can directly perform continuous operations without switching the operation interface, conveniently and quickly select the target object identifier(s) of the intended recipients, and complete giving the virtual gift corresponding to the target gift identifier. Because the entire giving process is performed within the same graphical user interface, the operations required of the user are simplified, and both the efficiency of giving virtual gifts and the interactive experience are improved.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium, in which a plurality of computer programs are stored, and the computer programs can be loaded by a processor to execute the steps in any of the virtual gift interaction methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
providing a gift area in the graphical user interface, the gift area including at least one gift identification; determining a target gift identification from the at least one gift identification in response to a gift selection operation for the gift area; selecting an initial object identifier from the at least one object identifier in response to a first swipe operation that is continuous with the gift selection operation; responding to a second sliding operation continuous with the first sliding operation, and determining at least one target object identifier from the at least one object identifier according to the second sliding operation and the initial object identifier; and in response to the gift giving operation continuous with the second sliding operation, giving the virtual gift corresponding to the target gift identification to the user corresponding to the at least one target object identification.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the computer program stored in the storage medium can execute the steps in any of the virtual gift interaction methods provided in the embodiments of the present application, beneficial effects that can be achieved by any of the virtual gift interaction methods provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described again here.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The virtual gift interaction method and device, computer device, and storage medium provided by the embodiments of the present application have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the descriptions of the embodiments are only intended to help in understanding the technical solutions and core ideas of the present invention. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (15)

1. An interactive method for virtual gifts, which is applied to a computer device for providing a graphical user interface, wherein the graphical user interface comprises at least one object identifier, and the method comprises the following steps:
providing a gift area in the graphical user interface, the gift area including at least one gift identification;
determining a target gift identification from the at least one gift identification in response to a gift selection operation for the gift area;
in response to a first sliding operation continuous with the gift selecting operation, determining a target gift mode from a plurality of preset gift modes based on a touch position of the first sliding operation on the graphical user interface, and selecting an initial object identifier from the at least one object identifier based on the first sliding operation, wherein the preset gift modes comprise a single gift mode and a multi-gift mode;
responding to a second sliding operation continuous with the first sliding operation, and determining at least one target object identifier from an initial object identifier and at least one object identifier selected by the second sliding operation according to the target gift mode;
and in response to the gift giving operation continuous with the second sliding operation, giving the virtual gift corresponding to the target gift identification to the user corresponding to the at least one target object identification.
2. The method of claim 1, wherein the responding to a first swipe operation that is continuous with the gift selection operation, determining a target gift mode from a plurality of preset gift modes based on a touch location of the first swipe operation on the graphical user interface, and selecting an initial object identification from the at least one object identification based on the first swipe operation, wherein the preset gift modes include a single gift mode and a multi-gift mode, comprises:
triggering a single gift mode in response to the first sliding operation that is continuous with the gift selecting operation;
setting an anchor user identifier corresponding to an anchor user to present a target display state based on the single gift mode;
and determining the anchor user identifier presenting the target display state as the initial object identifier.
3. The method of claim 2, wherein the graphical user interface includes a single-person mode trigger region that triggers a single gift mode in response to the first swipe operation that is continuous with the gift selection operation, comprising:
responding to the first sliding operation continuous with the gift selection operation, and acquiring a final touch point of the first sliding operation;
and if the final touch point of the first sliding operation is located in the single-person mode trigger area, triggering the single gift mode.
4. The method of claim 2, wherein triggering a single gift mode in response to the first swipe operation that is continuous with the gift selection operation comprises:
responding to the first sliding operation continuous with the gift selection operation, and acquiring the position of an initial touch point of the first sliding operation and the position of a final touch point of the first sliding operation;
acquiring the moving distance from the position of the initial touch point to the position of the final touch point;
and if the moving distance is greater than the first preset distance and less than the second preset distance, triggering a single gift giving mode.
5. The method of any of claims 2 to 4, wherein determining at least one target object identifier from the initial object identifier and at least one object identifier selected by the second swipe operation according to the target gift mode in response to a second swipe operation that is continuous with the first swipe operation comprises:
responding to the second sliding operation which is continuous with the first sliding operation, and acquiring a touch point of the second sliding operation;
when the extension line through the touch point and the center point of the gift area passes through an object identifier other than the initial object identifier, canceling the target display state of the initial object identifier;
along with the movement of the touch point of the second sliding operation, setting the object identifier passed by the extension line through the moved touch point and the central point to present the target display state, wherein only one object identifier on the graphical user interface presents the target display state at a time;
and determining the object identifier presenting the target display state for the last time as the target object identifier.
6. The method of claim 5, wherein the gifting, to the user corresponding to the at least one target object identification, the virtual gift corresponding to the target gift identification in response to the gift-gifting operation that is consecutive to the second swiping operation comprises:
acquiring a first sliding track and a first sliding rate of the gift giving operation in response to the gift giving operation continuing from the second sliding operation, and acquiring a length of the first sliding track;
if the first sliding track is along the extension line, the first sliding speed is greater than a first preset speed, and the length of the first sliding track is greater than a first preset length, the virtual gift corresponding to the target gift identification is presented to the user corresponding to the at least one target object identification.
7. The method of claim 1, wherein the responding to a first swipe operation that is continuous with the gift selection operation, determining a target gift mode from a plurality of preset gift modes based on a touch location of the first swipe operation on the graphical user interface, and selecting an initial object identification from the at least one object identification based on the first swipe operation, wherein the preset gift modes include a single gift mode and a multi-gift mode, comprises:
triggering a multi-gift mode in response to the first sliding operation continuing with the gift selecting operation;
setting an anchor user identifier corresponding to an anchor user to present the target display state based on the multi-person gift mode;
and determining the anchor user identifier presenting the target display state as the initial object identifier.
8. The method of claim 7, wherein the graphical user interface includes a multi-mode trigger region that triggers a multi-gift mode in response to the first swipe operation that is continuous with the gift selection operation, comprising:
responding to the first sliding operation continuous with the gift selection operation, and acquiring a final touch point of the first sliding operation;
and if the final touch point of the first sliding operation is located in the multi-person mode trigger area, triggering the multi-person gift mode.
9. The method of claim 7, wherein triggering a multi-gift mode in response to the first swipe operation that is continuous with the gift selection operation comprises:
responding to the first sliding operation continuous with the gift selection operation, and acquiring the position of an initial touch point of the first sliding operation and the position of a final touch point of the first sliding operation;
acquiring the moving distance from the position of the initial touch point to the position of the final touch point;
and if the moving distance is greater than a second preset distance, triggering the multi-person gift giving mode.
10. The method of any of claims 7 to 9, wherein the graphical user interface includes a permutation zone of the at least one object identifier, and wherein determining at least one target object identifier from the initial object identifier and the at least one object identifier selected by the second swipe operation in accordance with the target gift mode in response to a second swipe operation that is continuous with the first swipe operation comprises:
responding to the second sliding operation continuous with the first sliding operation, and acquiring a touch point of the second sliding operation;
mapping the touch point of the second sliding operation in the arrangement area of the at least one object identifier;
when the touch point of the second sliding operation is coincident with the at least one object identifier, setting the object identifier coincident with the touch point of the second sliding operation to be changed from a first display state to the target display state;
and when the touch point of the second sliding operation stops moving on the graphical user interface, determining that at least one object identifier presenting the target display state is the target object identifier.
11. The method of claim 10, further comprising:
and when the object identifier presenting the target display state is superposed with the touch point of the second sliding operation, setting the object identifier presenting the target display state to be changed into the first display state.
12. The method of claim 10, wherein the gifting the virtual gift corresponding to the target gift identification to the user corresponding to the at least one target object identification in response to a gift-gifting operation that is consecutive to the second swiping operation comprises:
acquiring a second sliding track and a second sliding rate of the gift giving operation in response to the gift giving operation continuing from the second sliding operation, and acquiring a length of the second sliding track;
if the second sliding track is vertically upward along the graphical user interface, the second sliding speed is greater than a second preset speed, and the length of the second sliding track is greater than a second preset length, the virtual gift corresponding to the target gift identification is presented to the user corresponding to the at least one target object identification.
13. A virtual gift interaction device, applied to a computer device for providing a graphical user interface, wherein the graphical user interface comprises at least one object identifier, and the interaction device comprises:
a display unit, configured to display a gift area in the graphical user interface, the gift area comprising at least one gift identifier;
a first determining unit, configured to determine a target gift identifier from the at least one gift identifier in response to a gift selection operation on the gift area;
a selection unit, configured to determine, in response to a first sliding operation that is continuous with the gift selection operation, a target gift mode from a plurality of preset gift modes based on a touch position of the first sliding operation on the graphical user interface, and to select an initial object identifier from the at least one object identifier based on the first sliding operation, wherein the preset gift modes comprise a single-person gift mode and a multi-person gift mode;
a second determining unit, configured to determine, in response to a second sliding operation that is continuous with the first sliding operation, at least one target object identifier from the initial object identifier and at least one object identifier selected by the second sliding operation according to the target gift mode;
and a presenting unit, configured to present a virtual gift corresponding to the target gift identifier to a user corresponding to the at least one target object identifier in response to a gift presenting operation that is continuous with the second sliding operation.
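The five units of claim 13 map naturally onto one object whose methods correspond to the successive operations of the claimed flow. The sketch below is purely illustrative: the class, method names, gift identifiers, and mode strings are hypothetical, and the real device would of course be wired to an actual GUI rather than plain lists.

```python
class VirtualGiftInteractionDevice:
    """Hypothetical composition of the five units in claim 13."""

    def __init__(self):
        self.gift_area = ["rose", "rocket", "crown"]  # at least one gift identifier
        self.target_gift = None
        self.gift_mode = None       # "single" or "multi" (the preset gift modes)
        self.selected_objects = []

    def display_gift_area(self):
        """Display unit: show the gift area in the graphical user interface."""
        return self.gift_area

    def select_gift(self, gift_id):
        """First determining unit: the gift selection operation fixes
        the target gift identifier."""
        assert gift_id in self.gift_area
        self.target_gift = gift_id

    def first_slide(self, mode, initial_object):
        """Selection unit: the first sliding operation picks the target
        gift mode and an initial object identifier."""
        self.gift_mode = mode
        self.selected_objects = [initial_object]

    def second_slide(self, more_objects):
        """Second determining unit: the second sliding operation extends
        the selection; only the multi-person mode keeps extra targets."""
        if self.gift_mode == "multi":
            self.selected_objects.extend(more_objects)

    def present(self):
        """Presenting unit: give the virtual gift to every user
        corresponding to a target object identifier."""
        return [(obj, self.target_gift) for obj in self.selected_objects]
```

In the single-person mode, `second_slide` leaves the selection at the initial object identifier, matching the claim's distinction between the two preset gift modes.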
14. A computer device, comprising:
a memory for storing a computer program;
a processor, configured to implement the steps of the virtual gift interaction method according to any one of claims 1 to 12 when executing the computer program.
15. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the virtual gift interaction method according to any one of claims 1 to 12.
CN202110431257.2A 2021-04-21 2021-04-21 Virtual gift interaction method and device, computer equipment and storage medium Active CN113126875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110431257.2A CN113126875B (en) 2021-04-21 2021-04-21 Virtual gift interaction method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113126875A CN113126875A (en) 2021-07-16
CN113126875B true CN113126875B (en) 2022-08-16

Family

ID=76778725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110431257.2A Active CN113126875B (en) 2021-04-21 2021-04-21 Virtual gift interaction method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113126875B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596504A (en) * 2021-08-05 2021-11-02 广州方硅信息技术有限公司 Live broadcast room virtual gift presenting method and device and computer equipment
CN114625466B (en) * 2022-03-15 2023-12-08 广州歌神信息科技有限公司 Interactive execution and control method and device for online singing hall, equipment, medium and product

Citations (1)

Publication number Priority date Publication date Assignee Title
CN111010585A (en) * 2019-12-06 2020-04-14 广州华多网络科技有限公司 Virtual gift sending method, device, equipment and storage medium

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN106709762A (en) * 2016-12-26 2017-05-24 乐蜜科技有限公司 Virtual gift recommendation method, virtual gift recommendation device used in direct broadcast room, and mobile terminal
CN116761007A (en) * 2017-12-29 2023-09-15 广州方硅信息技术有限公司 Method for giving virtual gift to multicast live broadcasting room and electronic equipment
CN108848399A (en) * 2018-07-12 2018-11-20 广州趣丸网络科技有限公司 A kind of information interacting method and device based on more wheat positions room
CN111083505B (en) * 2019-11-19 2021-12-28 广州方硅信息技术有限公司 Live broadcast room virtual gift interaction method, electronic equipment and device
CN111147877B (en) * 2019-12-27 2022-04-12 广州方硅信息技术有限公司 Virtual gift presenting method, device, equipment and storage medium
CN114071177B (en) * 2021-11-16 2023-09-26 网易(杭州)网络有限公司 Virtual gift sending method and device and terminal equipment


Also Published As

Publication number Publication date
CN113126875A (en) 2021-07-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant