CN112764603A - Message display method and device and electronic equipment

Message display method and device and electronic equipment

Info

Publication number
CN112764603A
Authority
CN
China
Prior art keywords
input
message
image
target
user
Prior art date
Legal status
Granted
Application number
CN202011626373.1A
Other languages
Chinese (zh)
Other versions
CN112764603B (en)
Inventor
张孝东 (Zhang Xiaodong)
刘红邦 (Liu Hongbang)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202011626373.1A
Publication of CN112764603A
Application granted
Publication of CN112764603B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L 51/046 Interoperability with other network applications or services

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a message display method, a message display device and electronic equipment, belongs to the technical field of communication, and can solve the problem that a required voice message cannot be quickly found from a chat record in the related art. The method comprises the following steps: receiving a first input of a user to a first interface; in response to a first input, displaying a first image, the first image being: an image of a target area of the first interface, the target area being determined based on a first input, the first image including N message identifiers, each of the message identifiers indicating one of N voice messages of the first interface; receiving a second input of the first image by the user; and in response to the second input, displaying a target object in the first image in a first display mode, wherein the target object is used for indicating a target message in the N voice messages, the target message comprises target content, the target content is search content determined based on the second input, and N is an integer greater than 1.

Description

Message display method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a message display method and device and electronic equipment.
Background
With the rapid development of terminal technology and mobile internet technology, the variety of instant social application programs is increasing.
At present, a user can chat with other users through an instant social application program anytime and anywhere, and the voice chat mode is particularly popular with users due to its convenience and speed. However, voice chat has the following problem: when a user wants to find a needed voice message in the history message records, the user can only tap the voice messages one by one to trigger the electronic device to play the corresponding voice content, and then judge, according to the played content, whether each message is the one needed, which may be time-consuming.
Thus, in the related art, a desired voice message cannot be quickly found in the chat records.
Disclosure of Invention
The embodiment of the application aims to provide a message display method, a message display device and electronic equipment, and can solve the problem that a required voice message cannot be quickly found from a chat record in the related art.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a message display method, where the method includes: receiving a first input of a user to a first interface; in response to a first input, displaying a first image, the first image being: an image of a target area of the first interface, the target area being determined based on a first input, the first image including N message identifiers, each of the message identifiers indicating one of N voice messages of the first interface; receiving a second input of the first image by the user; and in response to the second input, displaying a target object in the first image in a first display mode, wherein the target object is used for indicating a target message in the N voice messages, the target message comprises target content, the target content is search content determined based on the second input, and N is an integer greater than 1.
In a second aspect, an embodiment of the present application provides a message display apparatus, including: the device comprises a receiving module and a display module; the receiving module is used for receiving a first input of a user to the first interface; the display module is used for responding to the first input received by the receiving module and displaying a first image, wherein the first image is as follows: an image of a target area of the first interface, the target area being determined based on a first input, the first image including N message identifiers, each of the message identifiers indicating one of N voice messages of the first interface; the receiving module is also used for receiving second input of the first image displayed by the display module by a user; the display module is further configured to display, in response to the second input received by the receiving module, a target object in the first image in a first display manner, where the target object is used to indicate a target message in the N voice messages, the target message includes target content, the target content is search content determined based on the second input, and N is an integer greater than 1.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, the first input of the user to the first interface can be received; in response to a first input, displaying a first image, the first image being: an image of a target area of the first interface, the target area being determined based on a first input, the first image including N message identifiers, each of the message identifiers indicating one of N voice messages of the first interface; receiving a second input of the first image by the user; and in response to the second input, displaying a target object in the first image in a first display mode, wherein the target object is used for indicating a target message in the N voice messages, the target message comprises target content, the target content is search content determined based on the second input, and N is an integer greater than 1. Through this scheme, the user triggers, through the first input, the electronic device to display the first image comprising the target area of the first interface (the area comprising the N voice messages, which includes the area where the target message is located), and then the user can trigger, through the second input, the electronic device to display the target message (the voice message required by the user) in the first image in the first display mode, so that the operation process of searching for a voice message can be simplified, the user's time is saved, and the voice message required by the user can be quickly found in the chat records.
Drawings
Fig. 1 is a flowchart of a message display method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a message display device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one type, and the number of objects is not limited; for example, a first object may be one or more than one. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise specified, "a plurality" means two or more, for example, a plurality of processing units means two or more processing units; plural elements means two or more elements, and the like.
The message display method, the message display device, and the electronic device provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The message display method provided by the embodiment of the application can be applied to a scenario of searching for historical voice messages. Specifically, the first input of a user to a first interface can be received; in response to a first input, displaying a first image, the first image being: an image of a target area of the first interface, the target area being determined based on a first input, the first image including N message identifiers, each of the message identifiers indicating one of N voice messages of the first interface; receiving a second input of the first image by the user; and in response to the second input, displaying a target object in the first image in a first display mode, wherein the target object is used for indicating a target message in the N voice messages, the target message comprises target content, the target content is search content determined based on the second input, and N is an integer greater than 1. Through this scheme, the user triggers, through the first input, the electronic device to display the first image comprising the target area of the first interface (the area comprising the N voice messages, which includes the area where the target message is located), and then the user can trigger, through the second input, the electronic device to display the target message (the voice message required by the user) in the first image in the first display mode, so that the operation process of searching for a voice message can be simplified, the user's time is saved, and the voice message required by the user can be quickly found in the chat records.
Referring to fig. 1, an embodiment of the present application provides a message display method. The message display method provided by the embodiment of the present application is described below by taking an electronic device as the execution subject. The method may include steps 201 to 204 described below.
Step 201, the electronic device receives a first input of a user to a first interface.
Step 202, the electronic device displays a first image in response to a first input.
The first image is an image of a target area of the first interface, where the target area is determined based on the first input, the first image includes N message identifiers, each message identifier is used to indicate one of N voice messages of the first interface, and N is an integer greater than 1.
It can be understood that, in this embodiment of the application, the first interface may be a group chat window in an instant social application (a chat window including at least three users), or may be a private chat window in an instant social application (a chat window between two users).
It can be understood that, in the embodiment of the present application, the first input is used to trigger the electronic device to determine the target area and generate the first image. In addition to the N message identifiers indicating the N voice messages, the first image may (but need not) include at least one text identifier, each text identifier indicating one text message in the target area of the first interface. The first image may further include contact avatar information and at least one piece of time information, where each piece of time information indicates the sending time of a message; this may be determined according to actual usage requirements and is not limited in the embodiment of the present application.
Optionally, the first input may be a click input of the user on the target area, a sliding input of the user on the target area, a combined key input for the target area, a switch input for the target area, or another feasible input, which may be determined according to actual use requirements and is not limited in the embodiment of the present application.
Illustratively, the click input may be any number of click inputs, such as a single click input, a double click input, or the like; the slide input may be a slide input in any direction or a multi-finger input, for example, an upward slide input, a downward slide input, a clockwise slide input, a counterclockwise slide input, a two-finger slide input, a three-finger slide input, or the like; the combination key input may be, for example, a combination input of a power key and a volume key; the switch input may be, for example, an input of clicking a screen capture switch (of the control center region), or the like.
Optionally, the target area may be determined according to a time range of message sending. For example, if the user needs the electronic device to generate a first image of the messages in a target time period, the user may input the time range corresponding to the target time period to determine the target area. The target area may also be determined according to message indices. For example, if the user needs the electronic device to generate a first image of the messages between the fifth voice message and the twentieth voice message (inclusive), the user may input the index of the start message (the fifth voice message) and the index of the end message (the twentieth voice message) to determine the target area. The target area may also be determined according to an input selecting a start message and an input selecting an end message. For example, the user clicks the start message and the end message respectively, which triggers the electronic device to determine the area between the start message and the end message as the target area and to generate the first image.
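By way of illustration only, the three ways of delimiting the target area described above could be sketched as follows. This is not part of the disclosed method; the ChatMessage and TargetArea types and the helper functions are assumptions introduced for the example.

```kotlin
import java.time.Instant

// Hypothetical chat-history model; the embodiment does not prescribe a data structure.
data class ChatMessage(val index: Int, val sentAt: Instant, val isVoice: Boolean)

// The target area as a contiguous slice of the chat history of the first interface.
data class TargetArea(val messages: List<ChatMessage>)

// Variant 1: delimit the target area by the sending-time range supplied with the first input.
fun byTimeRange(history: List<ChatMessage>, from: Instant, to: Instant): TargetArea =
    TargetArea(history.filter { it.sentAt in from..to })

// Variant 2: delimit the target area by start and end message indices, inclusive
// (e.g. from the fifth to the twentieth voice message).
fun byIndexRange(history: List<ChatMessage>, startIndex: Int, endIndex: Int): TargetArea =
    TargetArea(history.filter { it.index in startIndex..endIndex })

// Variant 3: delimit the target area by an explicitly selected start message and end message.
fun bySelection(history: List<ChatMessage>, start: ChatMessage, end: ChatMessage): TargetArea {
    val range = minOf(start.index, end.index)..maxOf(start.index, end.index)
    return TargetArea(history.filter { it.index in range })
}
```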
Illustratively, a user has many records of voice messages in a social chat interface (first interface), and the user needs to find the location of a target voice message (target message) including target content in the social chat interface and the accurate content of the target voice message. The user can perform screen capture (or long screen capture) input on the chat records (target areas) in the target time period, trigger the electronic device to perform screen capture operation on the target areas, generate a first image, and display the first image.
Step 203, the electronic device receives a second input of the first image by the user.
And step 204, the electronic equipment responds to the second input and displays the target object in the first image in a first display mode.
The target object is used for indicating a target message in the N voice messages, the target message comprises target content, and the target content is search content determined based on a second input.
It should be noted that, in the case that the first image further includes at least one text identifier (text message), the electronic device may further display, in response to the second input, a target text identifier in the first display manner, where the target text identifier is an identifier that includes the target content among the at least one text identifier; this may be determined according to actual use requirements and is not limited in the embodiment of the present application.
It can be understood that the target message is at least one voice message of the N voice messages, which is not limited in this embodiment of the application.
It can be understood that the target object may be at least one of the N message identifiers (that is, the target object is the message identifier corresponding to the target message), or may be the text content obtained by translating at least one of the N voice messages (that is, among the N text contents into which the N voice messages are translated, the text content obtained by translating the target message). This may be determined according to actual usage requirements and is not limited in the embodiment of the present application.
It is to be understood that the target message includes the target content, that is, each message in the target message includes the target content. The target content may be key information or another mark, which is not limited in the embodiment of the present application.
It can be understood that, in the embodiment of the present application, the first image includes an option or a control having a search function. Optionally, the first image may have a search box with a search function, and the second input may be an input in which the user enters the target content in the search box and triggers the electronic device to search; the first image may also have a voice input control with a search function, and the second input may be an input in which the user enters the target content by voice and triggers the electronic device to search; the first image may also have another option or control with a search function, which is not limited in the embodiment of the present application.
Optionally, the first display manner may be bold display, display in a specific color, underlined display, display with a specific mark, display with a positioning cursor, or the like, which is not limited in the embodiment of the present application. Displaying the target message in the first image in the first display manner can be understood as marking the target message in the first image by bold display, a specific color, underlining, a specific mark, or the like. In this way, the user can clearly see which messages in the first image include the target content.
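As a non-authoritative sketch of step 204: the ImageEntry type, the reliance on recognized text, and the case-insensitive substring match below are all assumptions; the embodiment only requires that messages containing the target content be shown in the first display manner.

```kotlin
// Hypothetical entry of the first image: one message identifier plus its recognized
// text content, if the voice message has already been translated.
data class ImageEntry(val messageId: Int, val recognizedText: String?)

// Possible renderings; the description lists bold, specific-color, underline,
// specific-mark, and positioning-cursor display as examples of the first display manner.
enum class DisplayMode { NORMAL, HIGHLIGHTED }

// Mark every entry whose content contains the search content taken from the second input.
fun markTargets(entries: List<ImageEntry>, targetContent: String): Map<ImageEntry, DisplayMode> =
    entries.associateWith { entry ->
        val matches = entry.recognizedText?.contains(targetContent, ignoreCase = true) == true
        if (matches) DisplayMode.HIGHLIGHTED else DisplayMode.NORMAL
    }
```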
In the embodiment of the application, a first input of a user to the first interface can be received; in response to a first input, displaying a first image, the first image being: an image of a target area of the first interface, the target area being determined based on a first input, the first image including N message identifiers, each of the message identifiers indicating one of N voice messages of the first interface; receiving a second input of the first image by the user; and in response to the second input, displaying a target object in the first image in a first display mode, wherein the target object is used for indicating a target message in the N voice messages, the target message comprises target content, the target content is search content determined based on the second input, and N is an integer greater than 1. Through this scheme, the user triggers, through the first input, the electronic device to display the first image comprising the target area of the first interface (the area comprising the N voice messages, which includes the area where the target message is located), and then the user can trigger, through the second input, the electronic device to display the target message (the voice message required by the user) in the first image in the first display mode, so that the operation process of searching for a voice message can be simplified, the user's time is saved, and the voice message required by the user can be quickly found in the chat records.
Optionally, in the step 204, the target message in the first image may still be a voice message, or may also be text content after the voice message is translated (after the target message including the target content is found in response to the second input, the target message is translated into the text content, and the text content corresponding to the target message is displayed in the first image).
Optionally, in this embodiment of the present application, before step 203, the user may also trigger the electronic device to translate the voice message into text content by inputting.
Illustratively, after the step 203, the message display method provided by the embodiment of the present application may further include the following steps 205 to 206.
Step 205, the electronic device receives a third input from the user.
Step 206, the electronic device, in response to the third input, updates the N message identifiers in the first image to N text contents, where each text content is the content corresponding to one voice message.
Optionally, the third input may be a click input of the user on a "speech translation" option in the first interface, a slide input of the user on the "speech translation" option in the first interface, or another feasible input, which is not limited in this embodiment of the application.
For a detailed description of the click input and the slide input, reference may be made to the description of the first input in step 201, which is not repeated here.
It is understood that, in response to the third input, the electronic device extracts and recognizes the N voice messages indicated by the N message identifiers in the first image, and then replaces the N message identifiers in the first image with the text contents corresponding to the voice messages.
Optionally, before the step 206, the message display method provided in the embodiment of the present application may further include the following step 206 a.
Step 206a, the electronic device responds to the third input, acquires the N voice messages from the storage area corresponding to the first interface, and translates the N voice messages to obtain the N text contents.
For example, the electronic device may determine the N voice messages based on the first input (for the determination method, reference may be made to the description of determining the target area in step 202), then obtain the N voice messages from the storage area corresponding to the first interface, and translate the N voice messages to obtain the N text contents.
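A minimal sketch of step 206a, assuming a generic speech recognizer and a storage lookup keyed by message identifier. The SpeechRecognizer and VoiceStore interfaces are illustrative placeholders, not APIs named by the embodiment.

```kotlin
// Any on-device or cloud speech-recognition engine could sit behind this interface.
fun interface SpeechRecognizer {
    fun transcribe(audio: ByteArray): String
}

// Lookup of the raw voice data in the storage area corresponding to the first interface.
fun interface VoiceStore {
    fun load(messageId: Int): ByteArray
}

// Obtain the N voice messages indicated by the N message identifiers and translate each
// into text content; the caller then replaces each identifier in the first image with its text.
fun translateVoiceMessages(
    messageIds: List<Int>,
    store: VoiceStore,
    recognizer: SpeechRecognizer
): Map<Int, String> =
    messageIds.associateWith { id -> recognizer.transcribe(store.load(id)) }
```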
In the embodiment of the application, translating the voice messages into the corresponding text contents allows the user to understand the chat content more intuitively and provides a better user experience, and the user can share the first image (the first image after the voice messages are translated), for example to a friend circle or to chat contacts, or save it on the electronic device.
Optionally, the first image further includes M text identifiers, each text identifier is used to indicate one text message of the target area, and in the first image, the display modes of the N text contents are different from the display modes of the M text identifiers; wherein M is a positive integer.
Illustratively, the display mode may include at least one of: font color, font size, font type, bold display, underline display, etc., and the embodiments of the present application are not limited. For example, the font color of the text content corresponding to the N voice messages is different from the font color of the M text identifiers (text messages).
It can be understood that, in the embodiment of the present application, when the first image further includes text messages, the display manner of the N text contents in the first image is different from the display manner of the M text identifiers, so that the user can easily distinguish which text in the first image is translated from voice messages and which is original text messages.
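To illustrate the point above: the TextStyle type and the concrete colors below are assumptions; the embodiment only requires that the two kinds of text be visibly distinguishable.

```kotlin
// Hypothetical rendering attributes, limited to the examples named in the description
// (font color, font size, bold, underline).
data class TextStyle(
    val color: String,
    val sizeSp: Int,
    val bold: Boolean = false,
    val underlined: Boolean = false
)

// Translated voice content and original text messages use different styles, so the user
// can tell which text in the first image came from speech recognition.
val translatedVoiceStyle = TextStyle(color = "#1A73E8", sizeSp = 14)
val originalTextStyle = TextStyle(color = "#202124", sizeSp = 14)

fun styleFor(isTranslatedVoice: Boolean): TextStyle =
    if (isTranslatedVoice) translatedVoiceStyle else originalTextStyle
```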
Optionally, in this embodiment of the application, after the step 206, the user may save the first image or share the first image.
Illustratively, after the step 206, the message display method provided by the embodiment of the present application may further include the following steps 207 to 208.
Step 207, the electronic device receives a fourth input from the user.
And step 208, the electronic equipment responds to the fourth input and sends the first image to the target application program.
Optionally, the fourth input may be a click input of the user on the first image, a sliding input of the user on the first image, or another feasible input, which is not limited in this embodiment of the application.
For a detailed description of the click input and the slide input, reference may be made to the description of the first input in step 201, which is not repeated here.
For example, the fourth input may be an input in which the user clicks a "share" option for the first image and then clicks the identifier of the target application.
In the embodiment of the application, the first image after the voice message is translated is shared, and the user does not need to input the chat content in the first image again in the target application program, so that the operation process of sharing the chat content in the first image in the target application program by the user can be simplified, and the time is saved.
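On Android, for example, sending the first image to a target application could look roughly like the sketch below, assuming the image has already been saved and exposed through a content Uri (e.g. via FileProvider). The embodiment does not tie the sharing feature to any particular platform API.

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri

// Send the first image to the target application chosen by the fourth input.
fun shareFirstImage(context: Context, imageUri: Uri, targetPackage: String? = null) {
    val send = Intent(Intent.ACTION_SEND).apply {
        type = "image/png"
        putExtra(Intent.EXTRA_STREAM, imageUri)
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
        // If the user picked one specific target application, restrict the send to it.
        targetPackage?.let { setPackage(it) }
    }
    context.startActivity(Intent.createChooser(send, "Share chat image"))
}
```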
Optionally, in this embodiment of the application, after the step 204, the user may trigger the electronic device to locate a certain message (first message) in the first image in the first interface by inputting, or may trigger the electronic device to copy a certain message in the first image by inputting.
Illustratively, after the step 204, the message display method provided by the embodiment of the present application may further include the following steps 209 to 210.
Step 209, the electronic device receives a fifth input of the user to the first object in the first image.
Wherein the first object is used to indicate a first message in the first interface.
Optionally, the first message may be a message in the target message, or may be a message in the first image other than the target message, which is not limited in this embodiment of the application. The first object in the first image may be one of the N message identifiers, the text content corresponding to a message identifier, or one of the M text identifiers (a text message), which is not limited in this embodiment of the present application.
Optionally, the fifth input may be a click input of the user on the first message, a slide input of the user on the first message, or another feasible input, which is not limited in the embodiment of the present application.
For a detailed description of the click input and the slide input, reference may be made to the description of the first input in step 201, which is not repeated here.
Step 210, the electronic device, in response to the fifth input, performs at least one of the following: displaying the first interface, where the first message in the first interface is displayed in a second display manner; and copying the first message in the first image.
It is to be appreciated that, if the fifth input is used to locate the first message in the first interface, the electronic device displays, in response to the fifth input, the first message in the first interface in the second display manner (i.e., quickly jumps to the first message in the first interface). If the fifth input is used to copy the first message in the first image, the electronic device copies the first message in the first image in response to the fifth input.
It is understood that, for the description of the second display manner, reference may be made to the description of the first display manner in step 204, and details are not described herein again. In the embodiment of the present application, the second display manner may be the same as or different from the first display manner, and the embodiment of the present application is not limited.
In this embodiment of the present application, a first message in the first interface is displayed in a second display manner (the first interface is displayed, and the first message in the first interface is displayed in the second display manner), where the first message in the first interface may be a message in a voice format or a message in a text format (in this case, the first message may be a text message originally or a text message translated from a voice message), and this embodiment of the present application is not limited.
It is understood that, because there is an association (a one-to-one correspondence or mapping) between each message in the first image and the corresponding message in the target area of the first interface, the electronic device may, in response to the fifth input, jump to the first message in the first interface according to the association.
In the embodiment of the application, through the fifth input, the first message can be quickly positioned in the first interface, or the first message can be quickly copied.
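A sketch of the association used above, under the assumption that each object in the first image stores the identifier of its source message; the ImageObject and ChatLocator names are illustrative, not part of the disclosure.

```kotlin
// An object in the first image and the identifier of the message it was captured from.
data class ImageObject(val messageId: Int)

// One-to-one mapping from message identifiers back to positions in the first interface.
class ChatLocator(
    private val positionById: Map<Int, Int>,
    private val contentById: Map<Int, String>
) {
    // Fifth input used for positioning: return the position to scroll to in the first
    // interface; the caller then shows that message in the second display manner.
    fun positionOf(obj: ImageObject): Int? = positionById[obj.messageId]

    // Fifth input used for copying: hand the message content to a clipboard abstraction.
    fun copy(obj: ImageObject, clipboard: (String) -> Unit): Boolean {
        val content = contentById[obj.messageId] ?: return false
        clipboard(content)
        return true
    }
}
```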
Fig. 2 shows a schematic diagram of a possible structure of the message display device according to the embodiment of the present application. As shown in fig. 2, the message display apparatus 300 may include: a receiving module 301 and a display module 302; the receiving module 301 is configured to receive a first input of a first interface from a user; the display module 302 is configured to display a first image in response to a first input received by the receiving module 301, where the first image is: an image of a target area of the first interface, the target area being determined based on a first input, the first image including N message identifiers, each of the message identifiers indicating one of N voice messages of the first interface; the receiving module 301 is further configured to receive a second input of the first image displayed by the displaying module 302 from the user; the display module 302 is further configured to display, in response to the second input received by the receiving module 301, a target object in the first image in a first display manner, where the target object is used to indicate a target message in the N voice messages, the target message includes target content, the target content is search content determined based on the second input, and N is an integer greater than 1.
Optionally, the message display apparatus 300 further includes: an update module 303; the receiving module 301 is further configured to receive a third input from the user before the second input from the user to the first image is received; the updating module 303 is configured to update the N message identifiers in the first image into N pieces of text content in response to a third input received by the receiving module 301, where each piece of text content is a content corresponding to one voice message.
Optionally, the message display apparatus 300 further includes: an acquisition and translation module 304; the obtaining and translating module 304 is configured to obtain the N voice messages from the storage area corresponding to the first interface before the N message identifiers in the first image are updated to N text contents, and translate the N voice messages to obtain the N text contents.
Optionally, the first image further includes M text identifiers, and each text identifier is used to indicate one text message of the target area; in the first image, the display mode of the N text contents is different from that of the M text marks; wherein M is a positive integer.
Optionally, the message display apparatus 300 further includes: a sending module 305; the receiving module 301 is further configured to receive a fourth input from the user after the N message identifiers in the first image are updated to N pieces of text content; the sending module 305 is configured to send the first image to the target application in response to the fourth input received by the receiving module 301.
Optionally, the message display apparatus 300 further includes: a copy module 306; the receiving module 301 is further configured to receive a fifth input of the first object in the first image from the user after the target object in the first image is displayed in the first display manner, where the first object is used to indicate a first message in the first interface; the display module 302 is further configured to display a first interface in response to the fifth input received by the receiving module 301, where a first message in the first interface is displayed in a second display manner; alternatively, the copying module 306 is configured to copy the first message in the first image in response to a fifth input received by the receiving module 301.
It should be noted that, as shown in fig. 2, modules that are necessarily included in the message display apparatus 300 are illustrated by solid line boxes, such as a receiving module 301 and a display module 302; the modules that may or may not be included in the message display apparatus 300 are illustrated by dashed boxes, such as an update module 303, an acquisition and translation module 304, a sending module 305, and a copy module 306.
The embodiment of the application provides a message display device, which can receive a first input of a user to a first interface; in response to a first input, displaying a first image, the first image being: an image of a target area of the first interface, the target area being determined based on a first input, the first image including N message identifiers, each of the message identifiers indicating one of N voice messages of the first interface; receiving a second input of the first image by the user; and in response to the second input, displaying a target object in the first image in a first display mode, wherein the target object is used for indicating a target message in the N voice messages, the target message comprises target content, the target content is search content determined based on the second input, and N is an integer greater than 1. Through this scheme, the user triggers, through the first input, the electronic device to display the first image comprising the target area of the first interface (the area comprising the N voice messages, which includes the area where the target message is located), and then the user can trigger, through the second input, the electronic device to display the target message (the voice message required by the user) in the first image in the first display mode, so that the operation process of searching for a voice message can be simplified, the user's time is saved, and the voice message required by the user can be quickly found in the chat records.
The message display device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The message display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The message display device provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 3, an electronic device 400 is further provided in this embodiment of the present application, and includes a processor 401, a memory 402, and a program or an instruction stored in the memory 402 and executable on the processor 401, where the program or the instruction is executed by the processor 401 to implement each process of the foregoing message display method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application. The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and the like.
Those skilled in the art will appreciate that the electronic device 500 may further include a power supply (e.g., a battery) for supplying power to various components, and the power supply may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The user input unit 507 is configured to receive a first input of the first interface by the user; a display unit 506, configured to display a first image in response to a first input, the first image being: an image of a target area of the first interface, the target area being determined based on a first input, the first image including N message identifiers, each of the message identifiers indicating one of N voice messages of the first interface; a user input unit 507, further configured to receive a second input of the first image by the user; the display unit 506 is further configured to display, in response to the second input, a target object in the first image in the first display manner, where the target object is used to indicate a target message in the N voice messages, the target message includes target content, the target content is search content determined based on the second input, and N is an integer greater than 1.
Optionally, the user input unit 507 is further configured to receive a third input from the user before the second input from the user to the first image is received; a processor 510, configured to update the N message identifications in the first image to N pieces of text content in response to a third input, where each piece of text content is a content corresponding to one of the voice messages.
Optionally, the processor 510 is further configured to, before the N message identifiers in the first image are updated to N text contents, obtain the N voice messages from the storage area corresponding to the first interface, and translate the N voice messages to obtain the N text contents.
Optionally, the first image further includes M text identifiers, each text identifier is used to indicate one text message of the target area, and in the first image, the display modes of the N text contents are different from the display modes of the M text identifiers; wherein M is a positive integer.
Optionally, the user input unit 507 is further configured to receive a fourth input from the user after the N message identifiers in the first image are updated to N pieces of text content, each piece of text content being a content corresponding to one voice message; processor 510 is further configured to send the first image to the target application in response to a fourth input.
Optionally, the user input unit 507 is further configured to receive a fifth input of the user to the first object in the first image after the target object in the first image is displayed in the first display manner, where the first object is used to indicate a first message in the first interface; the display unit 506 is further configured to display a first interface in response to a fifth input, where a first message in the first interface is displayed in a second display manner; alternatively, the processor 510 is further configured to copy the first message in the first image in response to a fifth input.
The electronic device provided by the embodiment of the application can receive the first input of the user to the first interface; in response to a first input, displaying a first image, the first image being: an image of a target area of the first interface, the target area being determined based on a first input, the first image including N message identifiers, each of the message identifiers indicating one of N voice messages of the first interface; receiving a second input of the first image by the user; and in response to the second input, displaying a target object in the first image in a first display mode, wherein the target object is used for indicating a target message in the N voice messages, the target message comprises target content, the target content is search content determined based on the second input, and N is an integer greater than 1. Through this scheme, the user triggers, through the first input, the electronic device to display the first image comprising the target area of the first interface (the area comprising the N voice messages, which includes the area where the target message is located), and then the user can trigger, through the second input, the electronic device to display the target message (the voice message required by the user) in the first image in the first display mode, so that the operation process of searching for a voice message can be simplified, the user's time is saved, and the voice message required by the user can be quickly found in the chat records.
It should be understood that, in the embodiment of the present application, the radio frequency unit 501 may be used for receiving and sending signals during a message sending and receiving process or a call process, and specifically, receives downlink data from a base station and then processes the received downlink data to the processor 510; in addition, the uplink data is transmitted to the base station. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system. The electronic device provides wireless broadband internet access to the user via the network module 502, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media. The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the electronic apparatus 500 (e.g., a call signal reception sound, a message reception sound, etc.). The input Unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042, and the Graphics processor 5041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 507 includes a touch panel 5071 and other input devices 5072. A touch panel 5071, also referred to as a touch screen. The touch panel 5071 may include two parts of a touch detection device and a touch controller. Other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in further detail herein. The memory 509 may be used to store software programs as well as various data including, but not limited to, application programs and operating systems. Processor 510 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned message display method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above-mentioned message display method embodiment, and can achieve the same technical effect, and is not described here again to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for displaying messages, the method comprising:
receiving a first input of a user to a first interface;
in response to the first input, displaying a first image, the first image being: an image of a target area of the first interface, the target area being determined based on the first input, the first image including N message identifiers, each message identifier indicating one of N voice messages of the first interface;
receiving a second input of the first image by the user;
and in response to the second input, displaying a target object in the first image in a first display mode, wherein the target object is used for indicating a target message in the N voice messages, the target message comprises target content, the target content is search content determined based on the second input, and N is an integer greater than 1.
2. The method of claim 1, wherein prior to receiving a second input by the user to the first image, the method further comprises:
receiving a third input of the user;
and in response to the third input, updating the N message identifiers in the first image to N pieces of text content, wherein each piece of text content is the content corresponding to one voice message.
3. The method of claim 2, wherein prior to the updating of the N message identifiers in the first image to N pieces of text content, the method further comprises:
and acquiring the N voice messages from a storage area corresponding to the first interface, and translating the N voice messages to obtain the N text contents.
4. The method according to claim 2, wherein the first image further comprises M text identifiers, each text identifier being used for indicating one text message of the target area;
in the first image, the display modes of the N text contents are different from the display modes of the M text identifiers;
wherein M is a positive integer.
5. The method of claim 2, wherein after updating the N message identifiers in the first image to N pieces of text content, the method further comprises:
receiving a fourth input from the user;
in response to the fourth input, sending the first image to a target application.
6. The method according to any one of claims 1 to 5, wherein after displaying the target object in the first image in the first display mode, the method further comprises:
receiving a fifth input of a user to a first object in the first image, the first object being used for indicating a first message in the first interface;
in response to the fifth input, performing at least one of:
displaying the first interface, wherein the first message in the first interface is displayed in a second display mode;
the first message is duplicated.
7. A message display apparatus, the apparatus comprising a receiving module and a display module, wherein:
the receiving module is configured to receive a first input from a user on a first interface;
the display module is configured to display a first image in response to the first input received by the receiving module, wherein the first image is an image of a target area of the first interface, the target area is determined based on the first input, and the first image includes N message identifiers, each message identifier indicating one of N voice messages in the first interface;
the receiving module is further configured to receive a second input from the user on the first image displayed by the display module;
the display module is further configured to display, in response to the second input received by the receiving module, a target object in the first image in a first display mode, wherein the target object indicates a target message among the N voice messages, the target message comprises target content, the target content is search content determined based on the second input, and N is an integer greater than 1.
8. The apparatus of claim 7, further comprising an updating module;
the receiving module is further configured to receive a third input from the user before the second input on the first image is received;
the updating module is configured to update, in response to the third input received by the receiving module, the N message identifiers in the first image to N pieces of text content, wherein each piece of text content corresponds to one of the voice messages.
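Claims 7 and 8 recast the method as an apparatus built from a receiving module, a display module and an updating module. Purely as an illustration of that decomposition, and reusing the types and functions from the sketch after claim 1, the modules could be wired roughly as follows; the interfaces are invented for this sketch and are not the patent's own component definitions.

```kotlin
// Invented module interfaces mirroring the apparatus claims.
interface ReceivingModule {
    fun onFirstInput(handler: (List<VoiceMessage>) -> Unit)    // first input selects the target area
    fun onSecondInput(handler: (String) -> Unit)               // second input carries search content
    fun onThirdInput(handler: () -> Unit)                      // third input requests text conversion
}

interface DisplayModule {
    fun showFirstImage(image: FirstImage)
    fun markTargetObject(target: MessageIdentifier)            // first display mode
}

interface UpdatingModule {
    // Replace the N message identifiers in the first image with N pieces of text content.
    fun replaceIdentifiersWithText(image: FirstImage, texts: List<String>): FirstImage
}

class MessageDisplayApparatus(
    receiving: ReceivingModule,
    private val display: DisplayModule,
    private val updating: UpdatingModule
) {
    private var currentImage: FirstImage? = null

    init {
        receiving.onFirstInput { messagesInTargetArea ->
            currentImage = onFirstInput(messagesInTargetArea, "first_image.png")
            currentImage?.let(display::showFirstImage)
        }
        receiving.onThirdInput {
            currentImage = currentImage?.let { image ->
                updating.replaceIdentifiersWithText(image, image.identifiers.map { transcribe(it.message) })
            }
        }
        receiving.onSecondInput { searchContent ->
            currentImage?.let { image -> onSecondInput(image, searchContent)?.let(display::markTargetObject) }
        }
    }
}
```
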
9. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the message display method according to any one of claims 1 to 6.
10. A readable storage medium, on which a program or instructions are stored, wherein the program or instructions, when executed by a processor, implement the steps of the message display method according to any one of claims 1 to 6.
CN202011626373.1A 2020-12-31 2020-12-31 Message display method and device and electronic equipment Active CN112764603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011626373.1A CN112764603B (en) 2020-12-31 2020-12-31 Message display method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112764603A 2021-05-07
CN112764603B CN112764603B (en) 2022-05-06

Family

ID=75699339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011626373.1A Active CN112764603B (en) 2020-12-31 2020-12-31 Message display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112764603B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017021672A (en) * 2015-07-14 2017-01-26 村田機械株式会社 Search device
US20170154450A1 (en) * 2015-11-30 2017-06-01 Le Shi Zhi Xin Electronic Technology (Tianjin) Limited Multimedia Picture Generating Method, Device and Electronic Device
CN107798143A (en) * 2017-11-24 2018-03-13 珠海市魅族科技有限公司 A kind of information search method, device, terminal and readable storage medium storing program for executing
CN109388319A (en) * 2018-10-19 2019-02-26 广东小天才科技有限公司 A kind of screenshot method, screenshot device, storage medium and terminal device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327173A (en) * 2022-01-04 2022-04-12 维沃移动通信有限公司 Information processing method and device and electronic equipment
WO2023131043A1 (en) * 2022-01-04 2023-07-13 维沃移动通信有限公司 Information processing method and apparatus, and electronic device

Also Published As

Publication number Publication date
CN112764603B (en) 2022-05-06

Similar Documents

Publication Publication Date Title
CN113259222B (en) Message processing method and device and electronic equipment
CN113141293B (en) Message display method and device and electronic equipment
CN113179204B (en) Message withdrawal method and device and electronic equipment
CN112702261B (en) Information display method and device and electronic equipment
CN113518026A (en) Message processing method and device and electronic equipment
CN112486444A (en) Screen projection method, device, equipment and readable storage medium
CN113259221A (en) Message display method and device and electronic equipment
CN112162802A (en) Message reply method and device and electronic equipment
CN112947807A (en) Display method and device and electronic equipment
CN112671635A (en) Sending method, sending device and electronic equipment
CN106095128B (en) Character input method of mobile terminal and mobile terminal
CN112764603B (en) Message display method and device and electronic equipment
EP4351117A1 (en) Information display method and apparatus, and electronic device
CN112383666B (en) Content sending method and device and electronic equipment
CN116108119A (en) Position information acquisition method and device
CN112269510B (en) Information processing method and device and electronic equipment
CN112399010B (en) Page display method and device and electronic equipment
CN114398128A (en) Information display method and device
CN113342241A (en) Target character selection method and device, electronic equipment and storage medium
CN113126780A (en) Input method, input device, electronic equipment and readable storage medium
CN112637407A (en) Voice input method and device and electronic equipment
CN112579537A (en) File searching method, file searching device, touch pen and electronic equipment
CN112448884A (en) Content saving method and device
CN111857463A (en) New message reminding method and device, electronic equipment and medium
CN111966265A (en) Page display method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant