CN115767002A - Information display method, device, equipment and medium - Google Patents

Information display method, device, equipment and medium

Info

Publication number
CN115767002A
CN115767002A (application CN202211466089.1A)
Authority
CN
China
Prior art keywords
participant
interface
target
real
window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211466089.1A
Other languages
Chinese (zh)
Inventor
杨昕 (Yang Xin)
谢庆麟 (Xie Qinglin)
盛典 (Sheng Dian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202211466089.1A
Publication of CN115767002A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses an information display method, apparatus, device, and medium. In a real-time voice interaction scene, when a first participant triggers a corresponding operation on a real-time voice interaction interface, a target interaction window is displayed. The target interaction window is a window presented when the real-time voice interaction interface is not running in the foreground, and it displays related content of the real-time voice interaction scene together with a switching control. In response to the first participant triggering the switching control, the content presented in the target interaction window is switched between a first interface and a second interface, where one of the first interface and the second interface is a target participant interface. In other words, a participant can switch the content displayed in the target interaction window by operating the switching control, which meets the needs of different participants and improves the participant's experience.

Description

Information display method, device, equipment and medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an information display method, apparatus, device, and medium.
Background
To meet meeting requirements in different scenarios, users can hold remote audio and video conferences through a conferencing application, enabling efficient meetings anytime and anywhere. In some scenarios, a participant may only listen in on the conference, i.e., attend the meeting while handling other tasks. In these scenarios, the participant can trigger the display of a minimized floating window corresponding to the meeting window.
However, the current minimized floating window displays only a single kind of content and therefore conveys little information, so it cannot meet participants' needs.
Disclosure of Invention
In view of this, embodiments of the present application provide an information display method, apparatus, device, and medium, so that a participant can switch the content displayed in a target interaction window according to their needs, improving the participant's experience.
To achieve this purpose, the technical solutions provided by the present application are as follows:
in a first aspect of the present application, there is provided an information display method, including:
in a real-time voice interaction scene, in response to a trigger operation by a first participant on a real-time voice interaction interface, displaying a target interaction window, wherein the target interaction window is a window presented when the real-time voice interaction interface is not running in the foreground, and the target interaction window displays related content of the real-time voice interaction scene and a switching control;
and in response to a trigger operation by the first participant on the switching control, controlling the content presented in the target interaction window to switch between a first interface and a second interface, wherein one of the first interface and the second interface is a target participant interface.
In a second aspect of the present application, there is provided an information display apparatus comprising:
a display unit, configured to display, in a real-time voice interaction scene, a target interaction window in response to a trigger operation by a first participant on a real-time voice interaction interface, wherein the target interaction window is a window presented when the real-time voice interaction interface is not running in the foreground, and the target interaction window displays related content of the real-time voice interaction scene and a switching control;
and a switching unit, configured to control, in response to a trigger operation by the first participant on the switching control, the content presented in the target interaction window to switch between a first interface and a second interface, wherein one of the first interface and the second interface is a target participant interface.
In a third aspect of the present application, there is provided an electronic device comprising: a processor and a memory;
the memory for storing instructions or computer programs;
the processor is configured to execute the instructions or the computer program in the memory to cause the electronic device to perform the method of the first aspect.
In a fourth aspect of the present application, there is provided a computer-readable storage medium having stored therein instructions that, when executed on a device, cause the device to perform the method of the first aspect.
In a fifth aspect of the application, a computer program product is provided, the computer program product comprising computer programs/instructions which, when executed by a processor, implement the method of the first aspect.
Therefore, the application has the following beneficial effects:
in the application, in a real-time voice interaction scene, when a first participant triggers corresponding operation on a real-time voice interaction interface, a target interaction window is displayed, the target interaction window is a window displayed when the real-time voice interaction interface is in non-foreground operation, and related contents and a switching control in the real-time voice interaction scene are displayed on the target interaction window. And controlling the content presented by the target interactive window to switch between the first interface and the second interface in response to the triggering operation of the first participant on the switching control. Wherein one of the first interface and the second interface is a target participant interface. Namely, the content displayed by the target interactive window can be switched through the trigger operation of the participant on the switching control in the target interactive window, the requirement that the participant switches different participant interfaces in the target interactive window is met, and the use experience of the participant is improved.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1a is a schematic view of a real-time voice interaction scene according to an embodiment of the present application;
FIG. 1b is a schematic diagram of a target interaction window according to an embodiment of the present application;
FIG. 2 is a flowchart of an information display method provided by an embodiment of the present application;
FIG. 3a is a schematic diagram of switching from a shared content interface to a target participant interface according to an embodiment of the present application;
FIG. 3b is another schematic diagram of switching from a shared content interface to a target participant interface according to an embodiment of the present application;
FIG. 3c is a schematic diagram of switching from a target participant interface to a shared content interface according to an embodiment of the present application;
fig. 4 is a structural diagram of an information display device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Currently, in a real-time voice interaction scenario, displaying a minimized floating window on the display screen is supported. The minimized floating window is the window displayed after the interaction window is completely covered or minimized; that is, it is the window presented when the real-time voice interaction window is not running in the foreground. In a sharing scenario, i.e., a scenario in which shared content exists, the minimized window may display a shared content interface; in a non-sharing scenario, the minimized window may display, for example, the video frame of the current speaker. Generally, the minimized floating window is small, so in a sharing scenario its display of the shared content only lets the participant recognize that sharing is currently in progress, and conveys little further information.
As shown in fig. 1a, in a video conference scene, shared content is displayed in the conference window, and the dashed-frame region indicated by reference numeral 101 is the region where the shared content is displayed. When the participant triggers the minimize control in the upper right corner of the conference window, a minimized floating window is displayed on the current display screen and presents the current shared content. As shown in fig. 1b, reference numeral 102 indicates another application interface displayed on the display screen 100, and reference numeral 103 indicates the minimized floating window.
In some scenarios, a participant wants to switch to displaying the video frame, to highlight speaker information and to observe the speaker's video state, such as body language and expressions. However, in the current sharing scenario, the minimized window only supports displaying the shared content and cannot display a video frame.
On this basis, an embodiment of the present application provides an information display method that adds a switching function to the target interaction window, so that the content presented in the target interaction window can be switched, meeting participants' needs.
In order to facilitate understanding of the technical solutions provided by the embodiments of the present application, the following description will be made with reference to the accompanying drawings.
Referring to fig. 2, which is a flowchart of an information display method provided by an embodiment of the present application, the method may be performed by a real-time voice interaction client, and the client may be installed on an electronic device. The electronic device may be a device with a communication function, such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a vehicle-mounted terminal, a wearable electronic device, an all-in-one machine, or a smart home device, or it may be a device simulated by a virtual machine or an emulator. As shown in fig. 2, the method may include the following steps:
s201: and under the real-time voice interaction scene, responding to the triggering operation of the first participant on the real-time voice interaction interface, and displaying a target interaction window.
The manner of initiating the real-time voice interaction is not limited. For example, the real-time voice interaction may be, for example, an online conference initiated by a participant through a conference client, or a voice call or a video call initiated by a participant through an instant messaging client. No matter which real-time voice interaction scene is adopted, the first participant can trigger related operations on the real-time voice interaction interface to display the target interaction window. The target interactive window is a window presented when the real-time voice interactive interface is in non-foreground operation, and related contents and switching controls in a real-time voice interactive scene are displayed. And the switching control is used for switching the content displayed in the target interactive window.
When a participant (e.g., the first participant) participating in the real-time voice interaction triggers the minimize function at the interactive interface, a target interaction window, such as the target interaction window 103 shown in FIG. 1b, will be displayed on the display screen.
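For ease of understanding, the following is a minimal TypeScript sketch of S201 under stated assumptions: when the minimize function is triggered, the target interaction window is shown and initialized to the shared content interface if shared content exists. All type and function names are hypothetical illustrations, not part of the claimed method.

```typescript
// Hypothetical sketch of S201; names are illustrative, not from the patent.
type InterfaceKind = "sharedContent" | "targetParticipant";

interface TargetInteractionWindow {
  visible: boolean;        // whether the floating window is shown
  current: InterfaceKind;  // which of the two interfaces it presents
}

// Invoked when the first participant triggers the minimize control on the
// real-time voice interaction interface.
function onMinimizeTriggered(hasSharedContent: boolean): TargetInteractionWindow {
  return {
    visible: true,
    // In a sharing scenario the window initially presents the shared content.
    current: hasSharedContent ? "sharedContent" : "targetParticipant",
  };
}

console.log(onMinimizeTriggered(true)); // { visible: true, current: "sharedContent" }
```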
S202: In response to a trigger operation by the first participant on the switching control, control the content presented in the target interaction window to switch between the first interface and the second interface.
In this embodiment, after the first participant triggers the switching control in the target interaction window, the content in the target interaction window is switched. Specifically, the content presented in the target interaction window switches between the first interface and the second interface, where one of the first interface and the second interface is the target participant interface.
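A minimal sketch of this switching step, assuming the same hypothetical names as the sketch above: triggering the switching control simply flips the window to the other of its two interfaces.

```typescript
// Hypothetical sketch of S202; names are illustrative, not from the patent.
type InterfaceKind = "sharedContent" | "targetParticipant";

// One of the two interfaces is always the target participant interface;
// triggering the switching control flips to the other interface.
function onSwitchControlTriggered(current: InterfaceKind): InterfaceKind {
  return current === "sharedContent" ? "targetParticipant" : "sharedContent";
}

console.log(onSwitchControlTriggered("sharedContent")); // "targetParticipant"
```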
In some embodiments, the target participant interface includes a real-time video frame corresponding to the target participant or an identification of the target participant. Specifically, if the target participant has enabled the video function in the real-time voice interaction scene, the target participant interface presents the video frame corresponding to the target participant; if the target participant has not enabled the video function in the real-time voice interaction scene, the target participant interface displays the participant identification corresponding to the target participant. The participant identification may include the participant's avatar, the participant's name, and the like.
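As an illustration of this embodiment, the following hypothetical sketch chooses what the target participant interface presents based on whether the target participant has enabled video; all names are assumptions.

```typescript
// Hypothetical sketch; names are illustrative, not from the patent.
interface Participant {
  name: string;
  avatarUrl: string;
  videoEnabled: boolean; // whether the participant has turned the camera on
}

type ParticipantView =
  | { kind: "video"; participant: Participant }                  // real-time video frame
  | { kind: "identification"; name: string; avatarUrl: string }; // avatar and name

function renderTargetParticipantInterface(p: Participant): ParticipantView {
  // Video function enabled: present the real-time video frame.
  if (p.videoEnabled) return { kind: "video", participant: p };
  // Otherwise present the participant identification (avatar and name).
  return { kind: "identification", name: p.name, avatarUrl: p.avatarUrl };
}
```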
In some embodiments, in a real-time voice interaction and sharing scenario, a participant engaged in a voice interaction may browse content shared by other participants through a real-time voice interaction interface. In this scenario, the other of the first interface and the second interface is a shared content interface. That is, if the real-time voice interaction scene has shared content, the content presented in the target interaction window can be switched between the target participant interface and the shared content interface.
In one case, if the target interaction window displays the shared content interface, the content presented in the target interaction window is switched to the target participant interface in response to the first participant triggering the switching control.
In this embodiment, if the currently minimized interaction window displays shared content, then after the first participant triggers the switching operation in the target interaction window, the target participant interface is displayed in the target interaction window.
For example, as shown in fig. 3a, the target interaction window displays the shared content and the switching control 301. When the participant triggers the switching control 301, the content displayed in the target interaction window is switched; because the target participant has enabled the video function, the real-time video frame of the target participant is displayed in the target interaction window.
For example, as shown in fig. 3b, the target interaction window displays the shared content and the switching control 301. When the participant triggers the switching control 301, the content displayed in the target interaction window is switched; because the target participant has not enabled the video function, the avatar and name of the target participant are displayed in the target interaction window.
In another case, if the target interaction window displays the target participant interface, the shared content interface is displayed in the target interaction window in response to the participant triggering the switching control.
In this embodiment, if the current target interaction window displays the target participant interface, then after the first participant triggers the switching control in the target interaction window, the shared content interface is displayed in the target interaction window.
It should be noted that when the content displayed in the target interaction window is switched from the target participant interface to the shared content interface, if the shared content interface has changed, the changed shared content interface is displayed in the target interaction window. That is, while the target interaction window supports switchable display of shared content and video frames, the sharing function is not interrupted and remains in the sharing state throughout.
For example, as shown in fig. 3c, the target interaction window displays the target participant interface and the switching control 302. When the participant triggers the switching control 302, the content displayed in the target interaction window is switched, and the shared content interface is displayed in the target interaction window.
As can be seen from fig. 3a to 3c, when the content presented in the target interaction window differs, the icon corresponding to the switching control may differ as well. In addition, when the first participant performs a preset operation on the switching control, a prompt message can be displayed to inform the participant that the content can be switched. The preset operation may be a hover operation. For example, in the scenario of FIG. 3a or FIG. 3b, the prompt "switch to target participant interface" may be displayed when the participant hovers the mouse over the switching control 301; in the scenario of FIG. 3c, the prompt "switch to shared content interface" may be displayed when the participant hovers the mouse over the switching control 302.
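The hover prompt can be derived from the currently presented interface; the following hypothetical sketch shows one way to do so, with the prompt strings taken from the example above.

```typescript
// Hypothetical sketch; names are illustrative, not from the patent.
type InterfaceKind = "sharedContent" | "targetParticipant";

// The prompt tells the participant what the switching control will do next,
// so it names the interface the window would switch to.
function switchControlPrompt(current: InterfaceKind): string {
  return current === "sharedContent"
    ? "switch to target participant interface"
    : "switch to shared content interface";
}

console.log(switchControlPrompt("sharedContent")); // "switch to target participant interface"
```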
The shared content is shared by a second participant participating in the real-time voice interaction, and the second participant may be the first participant or another participant in the real-time voice interaction. Specifically, if the second participant initiates sharing in a first sharing manner, the first participant may include the second participant; that is, in the first sharing manner, the sharer can also trigger the display of the target interaction window. If the second participant initiates sharing in a second sharing manner, the first participant does not include the second participant; that is, in the second sharing manner, the sharer cannot trigger the display of the target interaction window, and only a sharee can. It is understood that the first sharing manner and the second sharing manner may be set as required before or during the real-time voice interaction. In some application scenarios, the first sharing manner may be, for example, document sharing (e.g., sharing the storage address of a cloud document among the participants of the real-time voice interaction), and the second sharing manner may include desktop sharing, window sharing, and the like (e.g., sharing among the participants by means of a real-time multimedia stream).
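As an illustration of this rule, the following hypothetical sketch gates who may trigger the target interaction window by sharing manner; the mode names are assumptions standing in for the first and second sharing manners.

```typescript
// Hypothetical sketch; names are illustrative, not from the patent.
// "documentSharing" stands in for the first sharing manner and
// "screenSharing" for the second sharing manner described above.
type SharingMode = "documentSharing" | "screenSharing";

function canTriggerTargetWindow(
  mode: SharingMode,
  participantId: string,
  sharerId: string
): boolean {
  const isSharer = participantId === sharerId;
  // In the second sharing manner, the sharer cannot trigger the window;
  // only sharees can.
  if (mode === "screenSharing" && isSharer) return false;
  // In the first sharing manner, the sharer is also allowed.
  return true;
}

console.log(canTriggerTargetWindow("screenSharing", "u1", "u1"));   // false
console.log(canTriggerTargetWindow("documentSharing", "u1", "u1")); // true
```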
In some embodiments, the target participant may be the focus participant or the currently speaking participant in the real-time voice interaction scene. The focus participant can be set by the first participant or by a participant with a preset identity in the real-time voice interaction scene. The focus participant can be understood as the participant whom the local participant (i.e., the first participant), or all participants in the real-time voice interaction, wish to focus on. Alternatively or additionally, if the focus participant is set by the first participant, the setting may take effect only on the client corresponding to the first participant; if the focus participant is set by a participant with a preset identity, the setting may take effect for participants within a preset range of the real-time voice interaction. Here, the participant with the preset identity may be, for example, the conference host, whose focus-participant setting takes effect for all participants in the real-time interaction.
That is, in a real-time voice interaction scene, if the participant triggering the switching control has a corresponding focus participant, the target participant is the focus participant regardless of whether the focus participant is the currently speaking participant, and the target participant interface is the focus participant interface; if there is no corresponding focus participant, the target participant is the speaking participant in the real-time voice interaction scene, and the target participant interface is the currently speaking participant's interface.
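A hypothetical sketch of this selection rule, under the assumption that the client tracks an optional focus participant and the current speaker:

```typescript
// Hypothetical sketch; names are illustrative, not from the patent.
interface InteractionState {
  focusParticipantId?: string;   // set by the first participant or the host
  speakingParticipantId: string; // the participant currently speaking
}

// The focus participant takes precedence regardless of who is speaking;
// without one, the current speaker is the target participant.
function resolveTargetParticipant(state: InteractionState): string {
  return state.focusParticipantId ?? state.speakingParticipantId;
}

console.log(resolveTargetParticipant({ speakingParticipantId: "u2" })); // "u2"
```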
It can be seen that in a real-time voice interaction scene, when the first participant triggers a corresponding operation on the real-time voice interaction interface, a target interaction window is displayed. The target interaction window is the window presented when the real-time voice interaction interface is not running in the foreground, and it displays related content of the real-time voice interaction scene and a switching control. In response to the first participant triggering the switching control, the content presented in the target interaction window is switched between the first interface and the second interface, where one of the first interface and the second interface is the target participant interface. That is, a participant can switch the content displayed in the target interaction window by operating the switching control, meeting the needs of different participants and improving the participant's experience.
Based on the above method embodiments, embodiments of the present application provide an information display device and an electronic device, which will be described below with reference to the accompanying drawings.
Referring to fig. 4, which is a structural diagram of an information display device according to an embodiment of the present application, as shown in fig. 4, the device includes: a display unit 401 and a switching unit 402.
The display unit 401 is configured to display, in a real-time voice interaction scene, a target interaction window in response to a trigger operation by a first participant on a real-time voice interaction interface, wherein the target interaction window is a window presented when the real-time voice interaction interface is not running in the foreground, and the target interaction window displays related content of the real-time voice interaction scene and a switching control;
the switching unit 402 is configured to control, in response to a trigger operation by the first participant on the switching control, the content presented in the target interaction window to switch between a first interface and a second interface, wherein one of the first interface and the second interface is a target participant interface.
In some embodiments, the target participant interface includes a real-time video frame corresponding to a target participant and/or an identification of the target participant.
In some embodiments, if the target participant starts a video function in the real-time voice interaction scene, the target participant interface is a real-time video picture corresponding to the target participant;
and if the target participant does not start the video function in the real-time voice interaction scene, the target participant interface is the identifier corresponding to the target participant.
In some implementations, the other of the first interface and the second interface is a shared content interface.
In some embodiments, the switching unit is specifically configured to: if the target interaction window displays the shared content interface, display the target participant interface in the target interaction window in response to a trigger operation by the first participant on the switching control; and if the target interaction window displays the target participant interface, display the shared content interface in the target interaction window in response to a trigger operation by the first participant on the switching control.
In some embodiments, the target participant is a focus participant or a current speaking participant in the real-time voice interaction scenario.
In some embodiments, the focus participant is set by the first participant or a participant in the real-time voice interaction having a preset identity.
It should be noted that for the specific implementation of each unit in this embodiment, reference may be made to the relevant description in the foregoing method embodiment. The division of units in the embodiments of the present application is schematic and is only a logical functional division; there may be other division manners in actual implementation. The functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. For example, in the above embodiment, the processing unit and the sending unit may be the same unit or different units. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
Referring to fig. 5, a schematic structural diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the electronic device 500 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 501 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage apparatus 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device 500. The processing apparatus 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
The electronic device provided by the embodiment of the disclosure and the method provided by the embodiment belong to the same inventive concept, and technical details which are not described in detail in the embodiment can be referred to the embodiment, and the embodiment has the same beneficial effects as the embodiment.
The disclosed embodiments provide a computer storage medium having stored thereon a computer program that, when executed by a processor, implements the methods provided by the above-described embodiments.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the method.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit or module does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the system or the device disclosed by the embodiment, the description is simple because the system or the device corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "At least one of the following" or similar expressions refers to any combination of these items, including any combination of single or plural items. For example, at least one of a, b, or c may represent: a; b; c; "a and b"; "a and c"; "b and c"; or "a and b and c", where a, b, and c may each be singular or plural.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An information display method, characterized in that the method comprises:
in a real-time voice interaction scene, in response to a trigger operation by a first participant on a real-time voice interaction interface, displaying a target interaction window, wherein the target interaction window is a window presented when the real-time voice interaction interface is not running in the foreground, and the target interaction window displays related content of the real-time voice interaction scene and a switching control;
and in response to a trigger operation by the first participant on the switching control, controlling the content presented in the target interaction window to switch between a first interface and a second interface, wherein one of the first interface and the second interface is a target participant interface.
2. The method of claim 1, wherein the target participant interface comprises a real-time video frame corresponding to a target participant and/or an identification of the target participant.
3. The method of claim 2, wherein if the target participant initiates a video function in the real-time voice interaction scenario, the target participant interface is a real-time video frame corresponding to the target participant;
and if the target participant does not start the video function in the real-time voice interaction scene, the target participant interface is the identifier corresponding to the target participant.
4. The method of claim 1, wherein the other of the first interface and the second interface is a shared content interface.
5. The method of claim 4, wherein the controlling, in response to the trigger operation by the first participant on the switching control, the content presented in the target interaction window to switch between the first interface and the second interface comprises:
if the target interaction window displays the shared content interface, displaying the target participant interface in the target interaction window in response to the trigger operation by the first participant on the switching control;
and if the target interaction window displays the target participant interface, displaying the shared content interface in the target interaction window in response to the trigger operation by the first participant on the switching control.
6. The method of claim 1, wherein the target participant is a focused participant or a current speaking participant in the real-time voice interaction scenario.
7. The method of claim 6, wherein the participant in focus is set by the first participant or a participant in the real-time voice interaction scenario with a preset identity.
8. An information display apparatus, characterized in that the apparatus comprises:
a display unit, configured to display, in a real-time voice interaction scene, a target interaction window in response to a trigger operation by a first participant on a real-time voice interaction interface, wherein the target interaction window is a window presented when the real-time voice interaction interface is not running in the foreground, and the target interaction window displays related content of the real-time voice interaction scene and a switching control;
and a switching unit, configured to control, in response to a trigger operation by the first participant on the switching control, the content presented in the target interaction window to switch between a first interface and a second interface, wherein one of the first interface and the second interface is a target participant interface.
9. An electronic device, characterized in that the device comprises: a processor and a memory;
the memory for storing instructions or computer programs;
the processor to execute the instructions or computer program in the memory to cause the electronic device to perform the method of any of claims 1-7.
10. A computer-readable storage medium having stored therein instructions that, when executed on a device, cause the device to perform the method of any one of claims 1-7.
CN202211466089.1A 2022-11-22 2022-11-22 Information display method, device, equipment and medium Pending CN115767002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211466089.1A CN115767002A (en) 2022-11-22 2022-11-22 Information display method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN115767002A (en) 2023-03-07

Family

ID=85335059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211466089.1A Pending CN115767002A (en) 2022-11-22 2022-11-22 Information display method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115767002A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination