CN114967992A - Information interaction method, label viewing method and device


Info

Publication number
CN114967992A
CN114967992A (application CN202110217727.5A)
Authority
CN
China
Prior art keywords
label
tag
user
target
viewing
Prior art date
Legal status
Pending
Application number
CN202110217727.5A
Other languages
Chinese (zh)
Inventor
赵立悦
沈博文
张练
张晓明
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202110217727.5A (CN114967992A)
Priority to JP2023552165A (JP2024515424A)
Priority to PCT/CN2022/077874 (WO2022179598A1)
Publication of CN114967992A
Priority to US18/456,062 (US20240061959A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484 Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • G06F 3/0485 Scrolling or panning
    • G06F 3/0487 Interaction techniques using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose an information interaction method, a label viewing method, and a label viewing device. In the information interaction method, a user can add a label to a target object. Accordingly, the label that the user wants to add to the target object can be determined according to the label adding operation triggered by the user on the target object, and the correspondence between the target label and the information block of the target object is then obtained. When the user wants to view the objects corresponding to a certain label, the objects corresponding to that label (including the target object) can be found and displayed according to this correspondence. Adding a label to the target object based on the label adding operation thus distinguishes the target object from other objects, enables the user to quickly find a required object, and allows the user to manage objects flexibly.

Description

Information interaction method, label viewing method and device
Technical Field
The application relates to the field of computers, in particular to an information interaction method, a label viewing method and a label viewing device.
Background
With the development of computer and Internet technologies, a wide variety of software applications have emerged one after another. To meet users' requirements, software functions have become increasingly rich. At present, a user can send text messages through software, upload files to a cloud server through software, and set schedule reminders through software, which brings great convenience to users in both work and life.
However, as users keep using the software, more and more data accumulates, and different types of data are mixed together, making it difficult for users to find data of a specific type or with specific attributes. Therefore, there is a need to manage different types of data efficiently.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present application provide an information interaction method, a tag viewing method, and an apparatus.
In a first aspect, an embodiment of the present application provides a method for information interaction, including:
receiving a label adding operation triggered by a user on a target object;
and generating a corresponding relation according to the tag adding operation, wherein the corresponding relation comprises a corresponding relation between a target tag and the target object.
In a second aspect, an embodiment of the present application provides a tag viewing method, including:
receiving a label viewing operation triggered by a user on a label viewing control;
and displaying at least one label according to the label viewing operation.
In a third aspect, an embodiment of the present application provides an information interaction apparatus, including:
the receiving module is used for receiving a label adding operation triggered by a user on a target object;
and the generating module is used for generating a corresponding relation according to the label adding operation, wherein the corresponding relation comprises a corresponding relation between a target label and the target object.
In a fourth aspect, an embodiment of the present application provides a tag viewing apparatus, including:
the receiving module is used for receiving the label viewing operation triggered by the user on the label viewing control;
and the display module is used for displaying at least one label according to the label viewing operation.
In a fifth aspect, an embodiment of the present application provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for information interaction or the method for tag viewing according to any of the embodiments of the present application.
In a sixth aspect, the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the information interaction method or the label viewing method according to any of the embodiments of the present application.
In the embodiment of the application, the user can add a label to the target object. Accordingly, the label that the user wants to add to the target object can be determined according to the label adding operation triggered by the user on the target object, and the correspondence between the target label and the target object is then obtained. When the user wants to view the objects corresponding to a certain label, the objects corresponding to that label (including the target object) can be found and displayed according to this correspondence. Adding a label to the target object based on the label adding operation thus distinguishes the target object from other objects, enables the user to quickly find a required object, and allows the user to manage objects flexibly.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an information interaction method according to an embodiment of the present application;
Fig. 2-a is a schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 2-b is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 2-c is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 3-a is a schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 3-b is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 4 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 5 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 6 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 7 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 8 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 9 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 10 is yet another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 11 is a schematic flowchart of a tag viewing method according to an embodiment of the present application;
Fig. 12-a is a schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 12-b is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 13 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 14 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 15 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 16 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 17 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 18-a is a schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 18-b is yet another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 19-a is a schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 19-b is yet another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 20 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 21 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 22 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 23 is another schematic diagram of a display interface of a client according to an embodiment of the present application;
Fig. 24 is a schematic structural diagram of an information interaction apparatus according to an embodiment of the present application;
Fig. 25 is a schematic structural diagram of a tag viewing apparatus according to an embodiment of the present application;
Fig. 26 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present application will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present application are for illustration purposes only and are not intended to limit the scope of the present application.
It should be understood that the various steps recited in the method embodiments of the present application may be performed in a different order and/or in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present application is not limited in this respect.
The term "including" and variations thereof as used herein is intended to be open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present application are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" in this application are intended to be illustrative rather than limiting; those skilled in the art should understand that they mean "one or more" unless the context clearly indicates otherwise.
The data generated by software increases over time. When a user wants to select a desired object from this data, the user has to check the data items one by one to determine whether each is the desired object, which makes it difficult to find the desired object quickly. For example, if the user wants to find the chat records related to item A among the chat records, the user needs to browse a large amount of chat data to locate them. As can be seen, in the conventional technology, the speed at which a user finds a target object is slow.
To alleviate this problem, the related art provides a favorites (collection) function. The user can collect data that may be needed later, which improves the speed of finding a target object to some extent. However, this approach displays all of the collected objects; when many objects have been collected, finding a target object is still slow. To solve these problems in the prior art, embodiments of the present application provide an information interaction method and a label viewing method, which are described in detail below with reference to the accompanying drawings of the specification.
Fig. 1 is a schematic flow diagram of an information interaction method provided in an embodiment of the present application, where the present embodiment is applicable to a scenario in which a tag is added to a target object, and the method may be executed by a tag adding processing device, and the device may be implemented in a software manner and integrated in a client of a user. The client may be integrated in a Personal Computer (PC) terminal or a mobile terminal. As shown in fig. 1, the method specifically includes the following steps:
s101: and the client receives the label adding operation triggered by the target object by the user.
When the user adds the label to the target object, the user can operate on the client, so that the label adding operation for the target object is triggered. The client may receive a user-triggered tag addition operation.
In this embodiment of the present application, the target object may be any operation object capable of adding a tag, such as a cloud document, a schedule, a task, an Instant Messaging (IM) message, and an Instant Messaging group. The cloud document may be a document stored in a server or a file in other format. The user may download the cloud document from the server to the local, or may open the cloud document by accessing a storage address of the cloud document stored in the server (e.g., open in a web browser). An instant messaging message is a message sent or received by a user through instant messaging software. The instant messaging messages include group chat instant messaging messages and single chat instant messaging messages. The group chat instant messaging message is a message sent by an instant messaging user in an instant messaging group, and the single chat instant messaging message is a message sent by one instant messaging user to another instant messaging user through the single chat. The instant communication group comprises at least three instant communication users, and the message sent in the instant communication group can be seen by any instant communication user in the group. A single chat refers to a message sent between two instant messaging users that cannot be seen by a third instant messaging user.
The label adding operation may be a sliding operation, or any feasible trigger operation on the label adding control, such as clicking the label adding control. A sliding operation refers to a sliding track on the screen that is triggered by the user and captured by the client. When the screen of the client is a non-touch screen, the sliding operation may be an operation in which the user controls a cursor or mouse pointer through an external device (e.g., a mouse, a tablet, etc.) to move on the screen and generate a sliding track; when the screen of the client is a touch screen, the sliding operation may be an operation in which the user moves a finger on the screen to generate a sliding track.
When the sliding track is a preset track, the client may determine that the user triggers a tag addition operation. For example, assuming that the screen of the client is a touch screen and the preset track is a circle, the client may record a sliding track of a finger of the user. If the sliding track is detected to be circular, the client may determine that the user has triggered a tag addition operation.
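As an illustration only (this sketch is not part of the original disclosure), the following TypeScript snippet shows one possible way a client could check whether a recorded sliding track approximates a preset circular track before treating it as a label adding operation; the function name, point format, and thresholds are all assumptions.

```typescript
// Hypothetical sketch: detecting whether a recorded sliding track is roughly circular,
// so the client can treat it as the preset track that triggers a tag adding operation.
// Names and thresholds are illustrative assumptions, not taken from the patent.
interface Point { x: number; y: number; }

function isRoughlyCircular(track: Point[], maxRadiusDeviation = 0.25): boolean {
  if (track.length < 8) return false; // too few samples to judge the shape

  // Centroid of the sampled track points.
  const cx = track.reduce((s, p) => s + p.x, 0) / track.length;
  const cy = track.reduce((s, p) => s + p.y, 0) / track.length;

  // Distance of each point from the centroid.
  const radii = track.map(p => Math.hypot(p.x - cx, p.y - cy));
  const meanR = radii.reduce((s, r) => s + r, 0) / radii.length;
  if (meanR === 0) return false;

  // All radii should stay close to the mean radius for a circle-like track.
  const uniformRadius = radii.every(r => Math.abs(r - meanR) / meanR <= maxRadiusDeviation);

  // The track should sweep most of a full turn around the centroid.
  const angles = track.map(p => Math.atan2(p.y - cy, p.x - cx));
  let swept = 0;
  for (let i = 1; i < angles.length; i++) {
    let d = angles[i] - angles[i - 1];
    if (d > Math.PI) d -= 2 * Math.PI;
    if (d < -Math.PI) d += 2 * Math.PI;
    swept += d;
  }
  const coversCircle = Math.abs(swept) >= 1.6 * Math.PI; // roughly one full turn

  return uniformRadius && coversCircle;
}
```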
The label adding control is used for triggering a label adding operation. When the user triggers the tag addition control, for example, clicking or long-pressing the tag addition control, the client may determine that the user triggered the tag addition operation.
In the embodiment of the application, a user can trigger a tag adding operation on a target object in an interface for displaying the target object, and can also trigger the tag adding operation on the target object in a tag display interface for displaying the tag. As described separately below.
First, the case where the user triggers a tag adding operation on the target object in an interface that displays the target object is introduced. Depending on the target object, the client may display the label adding control while displaying the target object, or may hide the label adding control. When the client hides the label adding control, the user can make the client display it through a selection operation or a display operation. A selection operation means that the user selects the target object, for example by clicking or long-pressing it. A display operation means that the user clicks a display control shown by the client. For the description of the display operation, reference may be made to the embodiments shown in fig. 2, fig. 3, and fig. 7; for the description of the selection operation, reference may be made to the embodiments shown in fig. 4, fig. 5, and fig. 6, which are described later and not detailed here.
After the user triggers the tag adding operation, the client can acquire the target tag. In the embodiment of the present application, the target tag may be created by a user who triggers the tag addition operation, or may be created by another user different from the user. Before obtaining the target tag, the client may determine whether the user has the right to add the target tag to the target object. For example, assume that user A created tag X and user B has the right to add tag X. Then, when the user B wants to add the tag X to the target object, the client may determine that the user B has the authority to add the tag X, thereby adding the tag X to the target object.
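For illustration, a minimal sketch of such a permission check is given below; it is not part of the original disclosure, and the Tag structure (creatorId, sharedWith) is an assumed data model.

```typescript
// Hypothetical permission model: a tag can be added by its creator and by any
// user the creator has shared it with. This is an illustrative assumption,
// not a structure defined by the patent.
interface Tag {
  id: string;
  name: string;
  creatorId: string;
  sharedWith: Set<string>; // user ids allowed to use this tag
}

function canAddTag(tag: Tag, userId: string): boolean {
  return tag.creatorId === userId || tag.sharedWith.has(userId);
}

// Example: user B adds tag X created by user A.
const tagX: Tag = { id: "x", name: "X", creatorId: "userA", sharedWith: new Set(["userB"]) };
console.log(canAddTag(tagX, "userB")); // true: user B may add tag X to a target object
```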
To obtain the target tag, the client may present a plurality of candidate tags that may be added, but the user only wants to add one or a few candidate tags for the target object. Then after triggering the tag adding operation, the user may perform a selecting operation on the target tags, that is, select one or more tags from the candidate tags as the target tags. Correspondingly, the client can receive the selection operation triggered by the user after receiving the label adding operation triggered by the user, so that the target label is determined according to the requirement of the user, and the target label is added to the target object conveniently. Optionally, for a detailed description of the selecting operation, reference may be made to the description of the embodiments described later, and details are not described here. The candidate tags may include user-created tags and may also include tags created by other users.
In this embodiment of the present application, at least one candidate tag displayed by the client may be arranged according to a preset rule. The preset rule may include any one or more of a tag creation time, a tag access time, and a matching degree with the keyword.
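The following sketch (not part of the original disclosure) shows one hypothetical way of ordering candidate tags by combining keyword match, access time, and creation time; the weighting and field names are assumptions.

```typescript
// Hypothetical ordering of candidate tags. The preset rule in the text may combine
// creation time, access time, and keyword match; the weighting below is illustrative.
interface CandidateTag {
  name: string;
  createdAt: number;      // epoch milliseconds
  lastAccessedAt: number; // epoch milliseconds
}

function matchScore(tagName: string, keyword: string): number {
  if (!keyword) return 0;
  const name = tagName.toLowerCase();
  const kw = keyword.toLowerCase();
  if (name === kw) return 2;       // exact match ranks highest
  if (name.includes(kw)) return 1; // partial match ranks next
  return 0;
}

function sortCandidates(tags: CandidateTag[], keyword: string): CandidateTag[] {
  return [...tags].sort((a, b) =>
    // primary: keyword match, secondary: most recently accessed, tertiary: newest
    matchScore(b.name, keyword) - matchScore(a.name, keyword) ||
    b.lastAccessedAt - a.lastAccessedAt ||
    b.createdAt - a.createdAt
  );
}
```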
In some possible implementations, the target tag may not be included in the one or more candidate tags presented by the client; in that case, the user may, through the tag adding operation, create a new tag as the target tag. In this way, the user can create tags autonomously, regardless of which tags the client presents. Optionally, for a detailed description of this operation, reference may be made to the embodiments shown in fig. 9 and fig. 10, which are not repeated here.
The following describes a case where a user triggers a tag adding operation on a target object in a tag display interface.
The label display interface is used for displaying one or more labels, and the one or more labels can correspond to the same or different target objects, so that a user can manage the labels. For the description of the label display interface, reference may be made to the description of the embodiment shown in fig. 17, which is not described herein again.
Because the tag display interface may display a plurality of tags, before the tag adding operation is triggered, the user may determine the target tag corresponding to the tag adding operation first, for example, the user may select and click the target tag from the plurality of tags displayed in the tag display interface. Accordingly, the client can jump to the label detail page corresponding to the target label based on the click operation of the user.
In the tag detail page corresponding to the target tag, the user may select one or more target objects to which the target tag needs to be added, thereby triggering tag adding operations on the target tags.
In one possible implementation, the user may trigger an adding operation, through which the user controls the client to display at least one candidate object, and then selects one or more of the displayed candidate objects as the target object. Accordingly, the client can receive the adding operation triggered by the user for the target tag and display at least one candidate object according to the adding operation.
Specifically, as shown in fig. 2-a, when the user wants to add an object under the target tag "meeting", the user can trigger an adding operation by clicking the add control 210 in the display area. After receiving the user-triggered adding operation, the client may display the interface shown in fig. 2-b.
Fig. 2-b includes an add operation area 220. The add operation area 220 includes a title display area 221, an add control 222, a search box 223, a tag field display area 224, a candidate object display area 225, and a target object display area 226.
The title display area 221 is used to prompt the user that the purpose of the add operation area 220 is to select target objects to which the "meeting" tag should be added; the add control 222 is used to trigger a label adding operation; the search box 223 is used to receive a keyword input by the user and search for target objects corresponding to the keyword; the tag field display area 224 is used to display the tag field from which the objects corresponding to the summary information originate. For a detailed description of the tag field and the tag field display area 224, reference may be made to the embodiment shown in fig. 20, which is not repeated here. In the embodiments shown in fig. 2-b and fig. 2-c, the displayed tag field is instant messaging messages, i.e., the objects displayed in the candidate object display area 225 and the target object display area 226 belong to instant messaging messages.
The candidate display area 225 is used to display at least one candidate. In the embodiment shown in fig. 2-b, the candidate objects are displayed in the form of cards. The candidate display area 225 includes a first candidate card 225-1, a second candidate card 225-2, and a third candidate card 225-3 that is not fully displayed. The candidate card may include any one or more of basic information of the candidate, a belonging user of the candidate, a receiving time of the candidate, and an information block of the candidate. For the description of the candidate object card, reference may be made to the description of the canvas card hereinafter, which will not be described in detail here.
The target object display area 226 is used to display at least one target object, and may be displayed in the same manner as the candidate object display area 225. In the embodiment shown in FIG. 2-b, target object display area 226 also includes a statistics display area 226-1. The statistical information display area 226-1 is used to display the statistical information of the target objects, for example, the number of target objects that the user has selected may be displayed.
In the embodiment of the application, the user can optionally select one or more candidate objects as the target object. For example, in the interface shown in fig. 2-b, the user may use any one of the candidate objects displayed in the candidate object card as the target object. In an embodiment of the present application, the user may determine the candidate object as the target object by clicking on the checkbox of the candidate object card, for example, may click on the checkbox 225-4 to determine the candidate object as the target object. Optionally, before the candidate object is not selected as the target object, the checkbox corresponding to the candidate object may be white.
After the user determines the candidate object presented in the first candidate object card 225-1 as a target object, the interface displayed by the client may be as shown in fig. 2-c. Based on the user's selection operation, the client can determine that the instant messaging message "afternoon meeting" sent by leader D is a target object to which the "meeting" label is to be added. The client may then add a target object card 226-2 in the target object display area 226 to display basic information of the instant messaging message "afternoon meeting", and adjust the display content of the statistical information display area 226-1. Accordingly, the contents displayed in the first candidate object card 225-1 and the second candidate object card 225-2 in the candidate object display area 225 may also change. Optionally, the color of the checkbox of a target object may differ from the color of the checkbox of a candidate object. For example, in the embodiment shown in fig. 2-c, the checkbox 226-3 of the target object is black.
After selecting one or more target objects from the candidate objects, the user may trigger a label addition operation by clicking on the label addition control 222. Accordingly, the client may continue to execute S102 after receiving the tag addition operation triggered by the user. For the description of the rest of fig. 2-a, fig. 2-b and fig. 2-c, reference may be made to the embodiment shown in fig. 20, which is not described herein again. It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
S102: and the client generates a corresponding relation according to the label adding operation and generates a corresponding relation.
After receiving a tag adding operation triggered by a user on a target object, the client may generate a corresponding relationship between the target object and the target tag according to the tag adding operation.
In an embodiment of the present application, the correspondence includes a correspondence between a target tag and an information block (block) of a target object. Wherein the information block is generated based on the target object. Specifically, the information block may include summary information of the target object for briefly describing the target object. For example, when the target object is a cloud document, the information block of the target object may include summary information of the cloud document, and the summary information of the cloud document may include information of a title, an author, a release time, and the like of the cloud document; when the target object is an instant messaging message, the summary information of the target object can be a sender of the instant messaging message and the first n words of the instant messaging message, wherein n is an integer greater than or equal to 1.
The information block may further include presentation information of the target object, where the presentation information is used to present the target object. For example, when the target object is a cloud document, the presentation information of the target object may include a link to the cloud document, through which the entire content of the cloud document can be displayed; when the target object is an instant messaging message, the presentation information of the target object may include the entire content of the instant messaging message corresponding to the target tag (so that it can be displayed without opening an instant messaging software window), or include a link to the instant messaging message, through which the entire content of the instant messaging message corresponding to the target tag can be displayed in the instant messaging software window.
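As a purely illustrative sketch (not part of the original disclosure), an information block could be modeled as follows; the field names and the deep-link format are assumptions.

```typescript
// Hypothetical shape of an information block: summary information that briefly
// describes the target object plus presentation information used to open or display it.
type InfoBlock =
  | {
      kind: "cloud_document";
      summary: { title: string; author: string; publishedAt: string };
      presentation: { link: string };               // link used to open the full document
    }
  | {
      kind: "im_message";
      summary: { sender: string; preview: string }; // first n characters of the message
      presentation: { messageLink: string };        // link that opens the message in the IM window
    };

// Example: an information block for the IM message "afternoon meeting" sent by leader D.
const block: InfoBlock = {
  kind: "im_message",
  summary: { sender: "Leader D", preview: "afternoon meeting" },
  presentation: { messageLink: "im://message/12345" }, // hypothetical deep link
};
```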
In this embodiment of the application, after receiving the label adding operation triggered by the user and before generating the correspondence between the information block and the tag, the client may determine the information block of the target object according to the target object, for example by extracting summary information of the target object and/or providing uniform structural encapsulation for the target object. After converting the target object into the information block, the client may establish the correspondence between the information block and the tag.
In some possible implementations, the correspondence may also include a correspondence between the target tag, the information block of the target object, and information of the user that triggered the tag addition operation. The information of the user may include one or more of a user name, a tag addition time, and the like. In this way, when displaying the target label or the target object, the user can see both the target label and the target object, and also can see which user added the target label for the target object, and at what time.
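A hypothetical shape for such a correspondence record, including the user who added the tag and the time of addition, might look like the following; all field names are assumptions and this is not part of the original disclosure.

```typescript
// Hypothetical correspondence record linking a tag to an information block, optionally
// carrying who added the tag and when. Field names are illustrative assumptions.
interface Correspondence {
  tagId: string;
  block: { summaryText: string; openUri: string };  // simplified information block
  addedBy?: { userName: string; addedAt: string };  // ISO timestamp of the tag adding operation
}

function buildCorrespondence(
  tagId: string,
  block: { summaryText: string; openUri: string },
  userName?: string,
): Correspondence {
  return {
    tagId,
    block,
    ...(userName ? { addedBy: { userName, addedAt: new Date().toISOString() } } : {}),
  };
}
```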
In the embodiment of the present application, the correspondence may be generated by the client, or may be generated by the server under the control of the client. When the correspondence is generated by the client, the client may perform S103-1 (not shown in the figure). When the correspondence is generated by the server under the control of the client, the client may perform S103-2 (not shown in the figure).
S103-1: the client generates the corresponding relation.
If the correspondence is generated by the client, the client may store the correspondence between the target tag and the information block of the target object. After generating the corresponding relationship, the client may send the corresponding relationship to the server, so that other clients or servers may know the corresponding relationship between the target tag and the information block of the target object in some application scenarios.
S103-2: and the client sends the target object and the target label to the server.
If the correspondence is generated by the server, the client may send the target object and the target tag to the server. After receiving the target object sent by the client, the server can determine the information block of the target object according to the target object and establish the corresponding relationship between the information block of the target object and the target tag.
It should be noted that sending the target object and the target tag from the client to the server is only one possible implementation provided by the embodiment of the present application. In some other implementations, the client may also obtain and send the target tag and the information block of the target object to the server, so that the server generates a corresponding relationship between the target tag and the information block of the target object. The embodiments of the present application do not limit this.
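For illustration only, the following sketch shows how a client might hand the target object and target tag to a server in the implementation where the server generates the correspondence; the endpoint and payload shape are assumptions, not part of the original disclosure.

```typescript
// Hypothetical client-side request: the client sends the target object and target tag
// so the server can build the information block and store the correspondence.
async function submitTagAddition(
  serverUrl: string,
  targetObject: { domain: string; objectId: string },
  targetTagId: string,
): Promise<void> {
  const response = await fetch(`${serverUrl}/tag-correspondences`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ targetObject, targetTagId }),
  });
  if (!response.ok) {
    throw new Error(`Failed to add tag: HTTP ${response.status}`);
  }
}
```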
In some possible implementations, after determining the target tag, the client may display a prompt indicating that adding the target tag to the target object is complete. After seeing the prompt message, the user can know that the client adds the target label to the target object. Optionally, the prompt message may be an icon or a text message.
In some possible implementations, the user may trigger a tag view operation after adding a target tag to the target object. After receiving a tag viewing operation for a target object triggered by a user, the client can display the target tag according to the viewing operation. In this way, the user can see the tag added to the target object, and thereby determine whether an erroneous tag has been added to the target object.
In some possible implementations, the client may also display a content viewing control after generating the correspondence. When the user clicks the content viewing control, or triggers a viewing operation on the content viewing control in another manner, the client may look up the correspondence according to the viewing operation, and determine and display the information block corresponding to the target label, for example the information block of the target object. When the information block includes summary information of an object, the summary information of the object corresponding to the target label may be displayed.
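A minimal sketch of such a lookup, assuming the correspondences are available as a simple in-memory list, is given below; it is illustrative only and not part of the original disclosure.

```typescript
// Hypothetical lookup used when the user triggers a viewing operation on the content
// viewing control: find all information blocks whose correspondence references the tag.
interface StoredCorrespondence {
  tagId: string;
  block: { summaryText: string; openUri: string };
}

function blocksForTag(store: StoredCorrespondence[], tagId: string) {
  return store
    .filter(c => c.tagId === tagId)
    .map(c => c.block); // summaries are then rendered; openUri jumps to the target object
}
```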
In response to a user's trigger operation on an information block of a target object, the client may jump to an interface including the target object or display the target object. As a possible implementation manner, in response to a triggering operation of the user on the summary information of the target object, the client may jump to an interface of the target object or display the target object according to the aforementioned presentation information of the target object.
For other methods for displaying the tag of the target object, reference may be made to the description of the embodiment shown in fig. 11, which will not be described herein again.
In addition, in the embodiment of the application, the user can also delete the target tag of the target object. Specifically, the user may trigger a delete operation on a target tag of the target object. After receiving the deletion operation, the client may delete the target tag of the target object according to the deletion operation, for example, may delete the correspondence between the target tag and the target object.
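For illustration (not part of the original disclosure), deleting a tag from a target object can be modeled as removing the matching correspondence record, as in the following sketch with assumed field names.

```typescript
// Hypothetical deletion: removing a tag from a target object deletes the matching
// correspondence record rather than the object itself.
interface CorrespondenceRecord {
  tagId: string;
  objectId: string;
}

function deleteTagFromObject(
  store: CorrespondenceRecord[],
  tagId: string,
  objectId: string,
): CorrespondenceRecord[] {
  return store.filter(c => !(c.tagId === tagId && c.objectId === objectId));
}
```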
In the embodiment of the application, the user can add a label to the target object. Correspondingly, the client can determine the label which the user wants to add to the target object according to the label adding operation triggered by the user to the target object, and further obtain the corresponding relation between the target label and the information block of the target object. Therefore, when a user wants to view an object corresponding to a certain label, the client can search and display the object (including the target object) corresponding to the label according to the corresponding relation. Therefore, the client can add the label to the target object based on the label adding operation, so that the target object is distinguished from other objects, a user can quickly find out a required object, and the user can flexibly manage the object.
In addition, in the embodiment of the present application, the correspondence includes a correspondence between the tag and the information block of the target object. Therefore, compared with the traditional collection method, the client can also display information such as summary information of the target object to the user according to the label and the information block of the target object. Thus, the amount of information that the user can see is increased, so that the user can find a required object more quickly.
Further, in the embodiment of the present application, the target object may be any one of operation objects capable of adding a tag, such as a cloud document, a schedule, a task, an instant messaging message, and an instant messaging group. That is, the user may add the same tag to target objects of different tag domains and may view target objects belonging to different tag domains of the tag. Therefore, cross-label domain distribution of labels is realized, and the flexibility degree of management is improved. Further description of the tag domain is provided below and will not be described further herein.
In the embodiment of the present application, target objects belonging to different tag domains may all be extracted as information blocks, and correspondences between the information blocks and tags may be generated. Extracting target objects of different tag domains into information blocks thus provides a uniform, homogeneous model for heterogeneous information entities when they circulate across business domains and business applications, and supports scenarios of cross-business information aggregation and secondary consumption. That is, information blocking uniformly encapsulates objects (i.e., heterogeneous information entities) from different tag domains, so that the operations required for tagging can be performed uniformly without attending to the different content structures of different domains. Without such encapsulation, heterogeneous entities from different domains would incur higher integration costs, because there would be no homogenization solution.
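The following sketch (illustrative only, not part of the original disclosure) shows what such uniform encapsulation could look like: heterogeneous entities from two assumed tag domains are converted into one uniform information block shape, so the tagging logic never touches domain-specific structures.

```typescript
// Hypothetical homogenization of heterogeneous entities from different tag domains
// (cloud document, instant messaging message, ...) into one uniform information block.
interface UniformBlock {
  domain: "cloud_document" | "im_message" | "schedule";
  objectId: string;
  summaryText: string; // brief description shown next to the tag
  openUri: string;     // used to jump to or display the original object
}

interface CloudDocument { docId: string; title: string; author: string; url: string; }
interface ImMessage { messageId: string; sender: string; text: string; deepLink: string; }

function fromCloudDocument(doc: CloudDocument): UniformBlock {
  return {
    domain: "cloud_document",
    objectId: doc.docId,
    summaryText: `${doc.title} (${doc.author})`,
    openUri: doc.url,
  };
}

function fromImMessage(msg: ImMessage, previewLength = 20): UniformBlock {
  return {
    domain: "im_message",
    objectId: msg.messageId,
    summaryText: `${msg.sender}: ${msg.text.slice(0, previewLength)}`,
    openUri: msg.deepLink,
  };
}
```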
As described above, the target object may be any one of operation objects capable of adding tags, such as a cloud document, a schedule, a task, an instant messaging message, and an instant messaging group. In the embodiment of the application, when the target objects are different, the way in which the user triggers the tag addition control is also different. In the following, the method of adding a user trigger tag is introduced by taking a target object as a cloud document, a schedule, a task, an instant messaging message and an instant messaging group as examples.
Firstly, a method for triggering a tag adding operation by a user when a target object is a cloud document is introduced. When the cloud document is displayed through the client, the client can display the name, the title, the text and the operation control of the cloud document on a screen.
In a first possible implementation, the operation controls displayed on the client screen may include a label addition control. When the user clicks the tag addition control or performs other triggering actions on the tag addition control, the client may determine that the user has triggered a tag addition operation on the cloud document being displayed.
In a second possible implementation, the label adding control is hidden in the screen, and the user can control the client to display the label adding control by triggering the display operation.
Specifically, as shown in fig. 3-a, when the client displays the cloud document, the display area may include a name display area 310, a title display area 320, a body display area 330, and an operation control set 340. The name display area 310 is used to display the name of the cloud document, the title display area 320 is used to display the body title of the cloud document, and the body display area 330 is used to display the body of the cloud document. The operation control set 340 includes a display operation control 341, a minimize operation control 342, and a close operation control 343, and does not include a label adding control.
The close operation control 343 may be used to close the cloud document; for example, when the user clicks the close operation control 343, the client may close the cloud document. The minimize operation control 342 may be used to minimize the cloud document; for example, when the user clicks the minimize operation control 342, the client may minimize the displayed cloud document. The display operation control 341 is used to display hidden operation controls. In an embodiment of the present application, the hidden operation controls include a label adding control.
When the user wants to add a label to the cloud document, the user may click the display operation control 341 or perform another triggering action on the display operation control 341. After detecting that the display operation control 341 has been triggered, the client may display the hidden operation controls on the screen. As shown in fig. 3-b, when the user clicks the display operation control 341, the client may display the hidden control set 350. The hidden control set 350 includes a label addition control 351, a copy control 352, a cut control 353, a delete control 354, and a jump control 355. The label addition control 351 is used to trigger a label adding operation; the copy control 352 is used to copy the cloud document; the cut control 353 is used to copy the cloud document to the clipboard and delete the original cloud document; the delete control 354 is used to delete the cloud document; and the jump control 355 is used to jump to other interfaces where the user can perform other operations on the cloud document.
Optionally, the label adding control may have prompt information thereon for prompting the user that the control is used for adding a label. As shown in fig. 3-b, the prompt information of the label addition control 351 is "add label", and the prompt information of the copy control 352 is "copy".
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
The following describes a method for triggering a tag adding operation by a user when a target object is a schedule. When the calendar is displayed through the client, the client can use the calendar as a background, and adds a label at a position corresponding to the schedule on the calendar according to the time information corresponding to the schedule, so as to prompt the user of the schedule information corresponding to the time.
In a first possible implementation, the client may display a label addition control on the screen. When the user clicks the label adding control or performs other triggering actions on the label adding control, the client may determine that the user triggers a label adding operation on the schedule being displayed.
In a second possible implementation, the label adding control is hidden in the screen, and the user can make the client display the label adding control by triggering a selection operation.
Specifically, as shown in fig. 4, when the client displays the schedule, a display mode control 410, a calendar display area 420, and an operation control set 430 may be included in the display area 400. The display mode control 410 is used to switch the division unit of the calendar. For example, in the embodiment shown in fig. 4, the dates are divided in units of weeks, and the schedule of one week is displayed in the display area 400. When the user switches the display mode through the display mode control 410, the client may also display the corresponding schedule in units of days or months.
Specifically, when dates are divided in units of weeks, the calendar display area 420 includes a plurality of schedule display areas 421, a date labeling area 422, and a time labeling area 423. The schedule display area 421 includes at least one schedule display element. Each schedule display element corresponds to a date and a time period and is used to display the schedule for that time period on that date. Optionally, schedule display elements in the same column correspond to different time periods within the same date, and schedule display elements in the same row correspond to the same time period on different dates. The date labeling area 422 can be used to label the date corresponding to each column of schedule display elements, and the time labeling area 423 is used to label the time period corresponding to each row of schedule display elements.
As shown in fig. 4, the schedule display area 421 includes 7 columns of schedule display elements corresponding to seven days, i.e., January 11, 2021 through January 17, 2021. The schedule display element in the first column and first row corresponds to the time period 9:00-10:00 on January 11, 2021. This schedule display element contains the schedule "item A", indicating that the schedule for 9:00-10:00 on January 11, 2021 is item A.
When the user wants to add a label to a schedule, the user can select the schedule display element corresponding to that schedule, thereby making the client display the label adding control. For example, when the user wants to add a label to the schedule "meeting at 13:00 on the afternoon of January 13, 2021", the user can select schedule display element 421-1, e.g., by clicking schedule display element 421-1 on the client screen, thereby triggering a selection operation. After receiving the selection operation triggered by the user, the client may display a hidden interface that includes the label adding control. As shown in fig. 5, after the user selects schedule display element 421-1, the client may display a hidden interface 500, where the hidden interface 500 includes an operation control set 510, a schedule information display area 520, a reminder time display area 530, and a label addition area 540.
The schedule information display area can be used to display basic information of the schedule; for example, in the embodiment shown in fig. 5, the schedule information display area 520 shows that the subject of the schedule is a meeting and the time is 13:00-14:00 on the afternoon of January 13, 2021. The reminder time display area can be used to display the reminder time of the schedule, i.e., how long before the schedule the client reminds the user; for example, in the embodiment shown in fig. 5, the reminder time display area 530 shows that the client will remind the user five minutes before the schedule starts. The label addition area 540 includes a label display area 541 and a label addition control 542. The label display area 541 can be used to display the existing labels of the schedule, and the label addition control 542 can be used to trigger a label adding operation.
For the embodiment shown in fig. 4 and 5, when the user wants to add a label to the schedule "meeting" corresponding to the schedule display element 421-1, the user may select the schedule display element 421-1 first and then click the label adding control 542 on the hidden interface.
Of course, in some other implementations, the user may also control the client to display the control addition control by triggering the display operation, which is not described herein again.
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
The following describes a method for a user to trigger a tag addition operation when a target object is an instant messaging message.
As can be seen from the above description, the instant messaging message includes one or more of a group chat instant messaging message and a single chat instant messaging message, and the instant messaging message is taken as a group chat instant messaging message for example.
In a first possible implementation, the client may display a label addition control on the screen. When the user clicks the tag addition control or performs other triggering actions on the tag addition control, the client may determine that the user triggers the tag addition operation on the displayed communication group message.
In a second possible implementation, the label adding control is hidden in the screen, and the user can control the client to display the label adding control by triggering and selecting operation. As described in detail below.
When the user views the group chat instant messaging message on the client, the client may display the group chat instant messaging message and related information of the instant messaging group. Specifically, as shown in fig. 6, when the client displays the group chat instant messenger message, a title display area 610, a message display area 620, and an operation area 640 may be included in the display area 600. The title display area 610 is used for displaying the basic information of the instant messaging group. For example, in the embodiment shown in FIG. 6, the title display area 610 displays the group name "work exchange group" and the group avatar of the instant messaging group. Optionally, title display area 610 may also include a jump control 611 for jumping to other interfaces. For a detailed description of the jump control 611, reference may be made to the following description of the embodiments, which are not repeated herein.
Message display area 620 may be used to display group chat instant messaging messages. Optionally, the client may display the message text and the corresponding user information together when displaying the group chat instant messaging message. For example, in the embodiment shown in FIG. 6, group chat instant messaging message 621 includes avatar display area 621-1, logo display area 621-2, and text display area 621-3. Wherein, the avatar display area 621-1 is used for displaying an avatar of a user sending the group chat instant messaging message; the identification display area 621-2 is used to display an identification of a user who sends the group chat instant messaging message, which may be, for example, an ID or a nickname of the user. The text display area 621-3 is used to display the text of the group chat instant messaging message issued by the user, and in the embodiment shown in fig. 6, the text of the group message 621 is "notice".
The operation area 640 may include an input box 641 and a send control 642. The input box 641 is configured to input a group chat instant messenger message, and the sending control 642 is configured to send the group chat instant messenger message to be sent in the input box 641.
If the user needs to tag a group chat instant messaging message, the user can select the group chat instant messaging message. After receiving the selection operation on the group chat instant messaging message, the client can display a control set. In some possible implementations, the control set may include a label addition control. Correspondingly, the user can add a label to the selected group chat instant messaging message by triggering the label addition control. Of course, the user may also select multiple instant messaging messages at a time and add labels to those messages. In this case, the client displays the control set after receiving the user's selection operation on the plurality of instant messaging messages.
In some other implementations, the set of controls does not include a label addition control. The user may control the client to display the label addition control by triggering a display operation. As described in detail below.
As shown in fig. 6, when the user wants to add a label to the group chat instant messaging message 630, the user may first select the group chat instant messaging message 630, for example, by clicking the position corresponding to the group chat instant messaging message 630 on the screen. In response to the user's selection operation, the client may display a set of controls 640. The set of controls 640 includes a reply control 641, a like control 642, and a display operation control 643. The reply control 641 is used for replying to the message; when the user clicks the reply control 641, the user may reply to the group chat instant messaging message 630. The like control 642 is used to add a like identification to the group chat instant messaging message 630; when the user clicks the like control 642, the client may add a like identification to the group chat instant messaging message 630. The display operation control 643 is used to display hidden operation controls; when the user clicks the display operation control 643, the client may display the hidden control set 650 to the user.
The hidden control set 650 includes a label addition control 651, a delete control 652, a multi-selection control 653, and a jump control 654. The label addition control 651 is used to trigger a label addition operation; the delete control 652 is configured to delete the chat record of the group chat instant messaging message 630 stored locally on the client; the multi-selection control 653 is used to select other group chat instant messaging messages while the group chat instant messaging message 630 remains selected; and the jump control 654 is used to jump to another interface so that the user can perform other operations on the group chat instant messaging message 630.
Optionally, the label adding control may have prompt information for prompting the user that the label adding control is used to trigger the label adding operation. As shown in fig. 6, the prompt information of the label addition control 651 is "add label", and the prompt information of the delete control 652 is "delete".
When the instant messaging message is a single chat instant messaging message, the user may control the client to display the tag adding control and trigger the tag adding operation by using the same or similar process, which is not described herein again.
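For illustration only, the interaction described above (select a message, reveal the hidden control set, then trigger the label addition control) can be modeled by the following TypeScript sketch. All type and function names here are assumptions made for this description and are not elements of the disclosed client.

```typescript
// Minimal sketch (assumed names): modeling the two-level control set revealed
// when a user selects a group chat instant messaging message.
type MessageControl =
  | "reply" | "like" | "showMore"            // shown right after selection
  | "addLabel" | "delete" | "multiSelect" | "jump"; // hidden until "showMore"

interface ControlSetState {
  visibleControls: MessageControl[];
  hiddenControls: MessageControl[];
  hiddenRevealed: boolean;
}

// Build the control state for a newly selected message.
function onMessageSelected(): ControlSetState {
  return {
    visibleControls: ["reply", "like", "showMore"],
    hiddenControls: ["addLabel", "delete", "multiSelect", "jump"],
    hiddenRevealed: false,
  };
}

// Reveal the hidden control set when the display operation control is triggered.
function onShowMore(state: ControlSetState): ControlSetState {
  return { ...state, hiddenRevealed: true };
}

// Decide whether a clicked control should start the label addition flow.
function triggersLabelAddition(state: ControlSetState, clicked: MessageControl): boolean {
  const reachable = state.hiddenRevealed
    ? [...state.visibleControls, ...state.hiddenControls]
    : state.visibleControls;
  return clicked === "addLabel" && reachable.includes(clicked);
}

// Usage: select a message, reveal the hidden controls, then click "addLabel".
let state = onMessageSelected();
state = onShowMore(state);
console.log(triggersLabelAddition(state, "addLabel")); // true
```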
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
The following describes a method for a user to trigger a tag addition operation when a target object is an instant messaging group.
In a first possible implementation, the client may display a label adding control in an interface including the instant messaging group, so that a user may trigger a label adding operation through the label adding control. Here, the interface including the instant messaging group may be, for example, an interface displaying a group chat instant messaging message, or may be an information flow interface including the instant messaging group, which is not limited herein.
In a second possible implementation, the chat interface of the instant messaging group does not include a tag addition control; in this case, the user may control the client to display the tag addition control by triggering a display operation. For example, in the embodiment shown in fig. 6, the chat interface 600 of the instant messaging group does not include a tag addition control for adding a tag to the group, and the user may click the jump control 611 to control the client to display a setting interface of the instant messaging group, where the setting interface includes the tag addition control for triggering a tag addition operation.
As shown in fig. 7, when the setting interface of the instant messaging group is displayed, a title display area 710, a position adjustment control 720, an avatar adjustment control 730, a name adjustment control 740, and a label addition control 750 may be included in the display area 700 of the client.
The title display area 710 is used for displaying basic information and basic operation controls of the setting interface. For example, in the embodiment shown in fig. 7, the title display area 710 displays the name "set-work exchange group" of the setting interface, indicating that the setting interface is used to set the related information of the instant messaging group whose group name is "work exchange group".
The position adjustment control 720 is used for adjusting the relative position of the interface where the user is currently located and the display area 700. As shown in fig. 7, the position adjustment control 720 includes a slide slot 721 and a slider control 722. The slider control 722 can move up and down in the slide slot 721 to adjust the relative position of the setting interface and the display area 700.
The avatar adjustment control 730 can be used to display the avatar of the instant messaging group and also can be used to adjust the avatar of the instant messaging group. For example, when the user wants to modify the group avatar of the instant messaging group, i.e., "work exchange group", the user may click on the position corresponding to the avatar adjustment control 730 on the client screen, thereby triggering the avatar replacement operation to adjust the group avatar of the instant messaging group.
The name adjustment control 740 can be used to display the group name of the instant messaging group, and can also be used to adjust the name of the instant messaging group. As shown in fig. 7, the name adjustment control 740 includes annotation information 741, a name display area 742, and an edit control 743. The annotation information 741 is used to prompt the user that the content displayed in the name display area 742 is the name of the instant messaging group; the name display area 742 is used to display the name of the instant messaging group; and the edit control 743 is used to trigger a name change operation. For example, when the user wants to modify the group name of the instant messaging group, i.e., "work exchange group", the user may click the edit control 743 in the name adjustment control 740 on the client screen to trigger the name change operation, so as to adjust the group name of the instant messaging group.
The tag adding control 750 may be configured to add a tag to the instant messaging group, display the tag corresponding to the instant messaging group, and adjust the tag corresponding to the instant messaging group. As shown in fig. 7, the tag adding control 750 includes annotation information 751, a tag display area 752, and an editing control 753. The annotation information 751 is used to prompt the user that the content displayed in the tag display area 752 is the tag corresponding to the instant messaging group. The tag display area 752 is used to display the tag corresponding to the instant messaging group. Optionally, when the instant messaging group has no tag, the tag display area 752 may display prompt information indicating that there is currently no tag. The editing control 753 can be used to trigger a tag addition operation. Optionally, when the instant messaging group already has a tag, the editing control 753 may be further configured to trigger a tag deletion operation or a tag change operation.
In some possible implementations, the editing control 753 can include an indication message prompting the user that the editing control 753 can be used to trigger a tag addition operation, or prompting the user that the editing control 753 can be used to trigger a tag deletion operation or a tag change operation. For example, when the instant messaging group has no tag, the indication message of the editing control 753 may be "add", prompting the user that the editing control 753 can be used to trigger a tag addition operation; when the instant messaging group already has a tag, the indication message of the editing control 753 may be "add", prompting the user that the editing control 753 can be used to trigger a tag addition operation, a tag change operation, or a tag deletion operation.
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
In the embodiment of the application, after receiving the tag adding operation triggered by the user, the client can also receive the selection operation triggered by the user, and determine the target tag according to the selection operation.
Before receiving a selection operation triggered by the user, the client can display at least one candidate tag according to the tag adding operation, so that the user can browse the candidate tags on the screen of the client and select a target tag from the candidate tags. For example, the user may click the location on the client screen corresponding to the target tag. After detecting that the position corresponding to any candidate tag is clicked, the client may determine that the candidate tag is the target tag selected by the user. The client may display the at least one candidate tag in a list format. The user can browse the list displayed by the client and select one or more candidate tags from the list as target tags according to actual requirements. In this way, the client can determine the target tag corresponding to the target object according to the click operation of the user, so as to add the target tag to the target object.
In an actual application scenario, the number of candidate tags may be large, resulting in a long time for a user to select a target tag from the candidate tags. To address this problem, the client may display a tag input box and determine a target tag according to the user's input.
Specifically, as shown in fig. 8-a, when the client displays the tag input box, a title display area 810, an operation control set 820, a tag input box 830, a label adding control 840, and a candidate tag display area 850 may be included in the display area 800. The title display area 810 is used for displaying basic information of the interface. For example, in the embodiment shown in fig. 8-a, the title display area 810 may display "determine target tag", indicating that the interface is used to determine a target tag from among the candidate tags. The operation control set 820 may be similar to the operation control set 240 in fig. 2 and will not be described here. The tag input box 830 is used to receive a keyword input by the user. The label adding control 840 is used to select a target tag.
The candidate tag display area 850 is used to display at least one candidate tag matching the keyword input by the user in the tag input box 830. When there are multiple candidate tags, the candidate tags may be sorted in descending order of matching degree with the keyword. As shown in fig. 8-a, four candidate tags, tag 1, tag 2, tag 3, and tag 4, are displayed in the candidate tag display area 850. In this case, tag 1 may be the candidate tag with the highest matching degree with the keyword, and the matching degree of tag 2 is lower than that of tag 1 but higher than those of tag 3 and tag 4. Of course, in some other implementations, the client may also sort the candidate tags by tag creation time or tag access time, which is not limited in this embodiment of the present application.
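For illustration only, the following TypeScript sketch shows one way such candidate ranking could be performed. The scoring rule (exact match > prefix match > substring match) and all identifiers are assumptions made for this description; the embodiment only requires ordering by matching degree, creation time, or access time.

```typescript
// Minimal sketch (assumed data shapes): ranking candidate tags for the keyword
// typed in the tag input box.
interface Tag {
  name: string;
  createdAt: number;    // epoch milliseconds
  lastAccessed: number; // epoch milliseconds
}

// A simple matching degree: exact match > prefix match > substring match > no match.
function matchScore(tag: Tag, keyword: string): number {
  const name = tag.name.toLowerCase();
  const kw = keyword.toLowerCase();
  if (name === kw) return 3;
  if (name.startsWith(kw)) return 2;
  if (name.includes(kw)) return 1;
  return 0;
}

type SortRule = "match" | "createdAt" | "lastAccessed";

function rankCandidates(tags: Tag[], keyword: string, rule: SortRule = "match"): Tag[] {
  const matched = tags.filter((t) => matchScore(t, keyword) > 0);
  return matched.sort((a, b) => {
    if (rule === "match") return matchScore(b, keyword) - matchScore(a, keyword);
    if (rule === "createdAt") return b.createdAt - a.createdAt;
    return b.lastAccessed - a.lastAccessed;
  });
}

// Usage: candidates shown in the candidate tag display area, best match first.
const candidateTags: Tag[] = [
  { name: "project A", createdAt: 1, lastAccessed: 5 },
  { name: "project", createdAt: 2, lastAccessed: 4 },
  { name: "work", createdAt: 3, lastAccessed: 6 },
];
console.log(rankCandidates(candidateTags, "project").map((t) => t.name)); // ["project", "project A"]
```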
Optionally, the label addition control 840 can be located in the candidate label display area. In this way, when the user clicks any one of the candidate tags in the candidate tag display area, the tag adding control 840 is triggered, and the client receives the selection operation of the user on the target tag.
If the number of candidate tags is small, the client may also display all of the candidate tags. Accordingly, the user can select, through a selection operation, the tag to be added to the target object from the tags displayed on the client.
Specifically, the client may display the candidate tags in the form of cards. As shown in fig. 8-b, when the client displays all candidate tags, the candidate tag display area 850 may include a tag card 851, a tag card 852, a tag card 853, a tag card 854, a tag card 855, and a tag card 856. Each tag card has an independent boundary and includes name information, which is used to display the name of the tag corresponding to the tag card. For example, in the embodiment shown in fig. 8-b, the tag name of the tag card 851 is "work".
The user can determine, according to the name information of the tag cards, which tags are to be added, and select the corresponding tag cards. After selecting the tag cards, the user may trigger the label adding control 840, so that the client determines the tags that the user wants to add to the target object.
In the embodiment of the application, after a user selects a certain tag card, the client can change the background color of the tag card. For example, in the embodiment shown in fig. 8-b, the background color of label cards 854 and 855 has changed, indicating that the user wants to add two labels, "work" and "project a" to the target object.
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
The above embodiments describe a method for information interaction, in which a user may add a tag to a target object. The following describes how the label is obtained, that is, the above label can be obtained by the label creation method described below.
In the embodiment of the application, a user can create a label through a label creation control in a label creation interface, and can also create the label when the label is added.
First, the case where a user creates a label through a label creation control in a label creation interface is introduced.
When a user creates a label in the label creation interface, the user may first trigger the client to jump to the label creation interface, for example, the user may click a jump control or input a web address to jump to the label creation interface. After jumping to the label creation interface, the user can input the label name in the interface, and control the client or the server to create the label by triggering the label creation control.
Specifically, as shown in fig. 9, the label creation interface 900 includes a title display area 910, a label name input box 920, a label details input box 930, and a label creation control 940. The title display area 910 is used for displaying basic information of the interface. For example, in the embodiment shown in fig. 9, the title display area 910 may display "New Tag", indicating that the interface is used to create a label; the label name input box 920 is used for receiving a label name input by the user; the label details input box 930 is used for receiving label details input by the user; and the label creation control 940 is used to trigger a label creation operation.
After the user enters the label name in the label name entry box 920, the label creation control 940 may be triggered, thereby triggering the label creation operation. After receiving a tag creation operation triggered by a user, the client may obtain the content input by the user in the tag name input box 920 as the name of the newly created tag. Optionally, the user can also input the content in the tag detail input box 930 as a detailed description of the newly created tag.
Optionally, as shown in fig. 9, before the user inputs any data, the label name input box 920 and the label details input box 930 may display prompt information for prompting the user what information needs to be input in each input box.
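For illustration only, the label creation flow above (name taken from the name input box, details optional) could be handled as in the following TypeScript sketch. The field names and validation rule are assumptions of this description, not the disclosed implementation.

```typescript
// Minimal sketch (assumed data shape): handling the label creation operation
// triggered from the label creation interface.
interface CreatedTag {
  name: string;
  details?: string;
  creatorId: string;
  createdAt: number;
}

function createTag(nameInput: string, detailsInput: string, creatorId: string): CreatedTag {
  const name = nameInput.trim();
  if (name.length === 0) {
    // The name input box is required; the details input box is optional.
    throw new Error("tag name must not be empty");
  }
  const details = detailsInput.trim();
  return {
    name,
    details: details.length > 0 ? details : undefined,
    creatorId,
    createdAt: Date.now(),
  };
}

// Usage: the content of the name input box becomes the new label's name.
const newTag = createTag("function update description", "notes for release docs", "user-42");
console.log(newTag.name); // "function update description"
```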
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
The method of creating a label when adding a label is described below.
As can be seen from the foregoing description, when selecting a target tag from the candidate tags, the user may input a keyword into the tag input box and find the target tag in the candidate tag display area. However, if the candidate tags do not include the target tag, the target tag cannot be found in the candidate tag display area. For this reason, the user can create a new tag in the process of adding a tag. Optionally, the user may jump to another interface through a jump control to create the tag, or may create the tag on the current interface. This is described in detail below.
In a first possible implementation, a display interface of the client for displaying the label input box includes a jump control, and the jump control is used for jumping to the label creation interface. Then, after the user finds that the candidate tag does not include the target tag, the user may trigger the jump control, thereby creating a new tag in the tag creation interface.
In a second possible implementation, the candidate label display area includes a label creation control through which a user can create a new label.
Specifically, as shown in fig. 10, when the client displays the tag input box, the display area 1000 includes a title display area 1010, an operation control set 1020, a tag input box 1030, a label adding control 1040, and a candidate tag display area 1050. The title display area 1010, the operation control set 1020, the tag input box 1030, and the label adding control 1040 are similar to the corresponding areas or controls in fig. 8 and are not described herein again.
The candidate tag display area 1050 includes a first prompt information display area 1051, a first candidate tag display area 1052, a second candidate tag display area 1053, a second prompt information display area 1054, and a tag creation control 1055. The first prompt information display area 1051 is used to prompt the user that the content displayed in the first candidate tag display area 1052 and the second candidate tag display area 1053 is candidate tags. The first candidate tag display area 1052 may be used to display the candidate tag with the highest degree of matching with the keyword input by the user, and the second candidate tag display area 1053 may be used to display the candidate tag with the second highest degree of matching. The second prompt information display area 1054 is used to prompt the user that the tag creation control 1055 is used to create a new tag. The tag creation control 1055 is used to trigger a tag creation operation. Optionally, the tag creation control 1055 may include tag preview information, that is, the name of the tag to be created, which is consistent with the keyword entered by the user in the tag input box 1030.
When the user enters a keyword in the tag input box 1030 and there is no candidate tag that completely matches the keyword, the client may display the second prompt information display area 1054 and the tag creation control 1055 in the candidate tag display area 1050. If none of the existing candidate tags meets the user's requirement for the target tag and a new tag needs to be created, the user may click the tag creation control 1055, thereby triggering a tag creation operation to control the client or the server to create a new tag, where the name of the new tag is the keyword input by the user in the tag input box 1030.
Optionally, in some possible implementations, the user may also create a new tag not through the tag creation control 1055 but through the label adding control 1040. Specifically, the client may receive the keyword input by the user through the tag input box 1030 and display a tag consistent with the keyword as a candidate tag. When the label adding control 1040 is triggered, the client may add the tag consistent with the keyword to the target object and store the tag as a tag created by the user.
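For illustration only, the decision of whether to offer tag creation when no candidate completely matches the keyword could look like the following TypeScript sketch. All names are assumptions of this description.

```typescript
// Minimal sketch (assumed names): show the tag creation control only when no
// existing candidate exactly matches the keyword; the new tag's name is the keyword.
interface ExistingTag {
  name: string;
}

function hasExactMatch(candidates: ExistingTag[], keyword: string): boolean {
  const kw = keyword.trim().toLowerCase();
  return candidates.some((t) => t.name.toLowerCase() === kw);
}

// Returns the name of the tag to create, or null if an exact match already exists.
function tagToCreate(candidates: ExistingTag[], keyword: string): string | null {
  const kw = keyword.trim();
  if (kw.length === 0 || hasExactMatch(candidates, kw)) return null;
  return kw; // shown as preview text on the tag creation control
}

const existing = [{ name: "work" }, { name: "project A" }];
console.log(tagToCreate(existing, "project B")); // "project B" -> offer tag creation
console.log(tagToCreate(existing, "work"));      // null -> only candidate selection needed
```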
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the previous embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
In the embodiment of the application, the creator of the target tag can also set the operation authority of the target tag, so that users with different operation authorities can perform different operations on the target tag or on the target object having the target tag. The operation authority may include an editing authority and/or a viewing authority. The editing authority comprises any one or more of an authority to modify the target tag, an authority to modify the target object having the target tag, and an authority to modify the association relationship between the target tag and the target object. The viewing authority comprises any one or more of an authority as to whether the target tag can be viewed when viewing the target object, an authority as to whether the target object can be viewed, and an authority as to whether all target objects having the target tag can be viewed.
Specifically, the user may trigger the authority setting operation on the target tag on the tag viewing interface, or trigger the authority setting operation on the target tag after the target tag is added to the target object. Accordingly, the client can receive the authority setting operation and determine the operation authority of the target tag according to the authority setting operation.
The operation authority may include three levels: all members in the organization can edit; only the added collaborators can edit while other members in the organization can view; and only the collaborators can edit or view. The organization refers to the team to which the user belongs, for example, the company, department, or project group to which the user belongs, and the collaborators may be added by the user, for example, any one or more other users selected by the user within the organization.
The three operation authorities are described in detail below.
When the user sets the operation authority of the target tag so that all members in the organization can edit, the client can receive the editing operation of any member in the organization, and edit and modify the target tag or the target object having the target tag based on the editing operation of that member. Optionally, the editing operation may further include deleting a tag of the target object, or adding the target tag to the target object.
When the user sets the operation authority of the target tag so that only the collaborators can edit and other members can view, the client can acquire the information of the collaborators determined by the user. When receiving an editing request from another user, the client may compare the information of that user with the information of the collaborators; if the two match, the client may edit and modify the target tag or the target object based on the editing operation of that user. If the two do not match and the other user is a member of the organization, the client can display the target tag or the target object but forbid that user from editing it.
When the user sets the operation authority of the target tag so that only the collaborators can edit or view, the client may not display the target object or the target tag to a user who is neither the creator nor a collaborator of the target tag. For example, the target tag that the target object has may be hidden when the target object is displayed.
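For illustration only, the three permission levels described above could be checked as in the following TypeScript sketch. Identifiers such as PermissionLevel, canEdit, and canView are assumptions of this description, not the disclosed API.

```typescript
// Minimal sketch (assumed names) of the three operation-authority levels.
type PermissionLevel =
  | "org-edit"     // all members of the organization can edit
  | "collab-edit"  // only collaborators edit, other org members can view
  | "collab-only"; // only collaborators can edit or view

interface TagPermission {
  level: PermissionLevel;
  creatorId: string;
  collaboratorIds: string[];
  orgMemberIds: string[];
}

function isCollaborator(p: TagPermission, userId: string): boolean {
  return userId === p.creatorId || p.collaboratorIds.includes(userId);
}

function canEdit(p: TagPermission, userId: string): boolean {
  if (p.level === "org-edit") return p.orgMemberIds.includes(userId);
  return isCollaborator(p, userId);
}

function canView(p: TagPermission, userId: string): boolean {
  if (p.level === "collab-only") return isCollaborator(p, userId);
  return p.orgMemberIds.includes(userId) || isCollaborator(p, userId);
}

// Usage: an org member who is not a collaborator can view but not edit a "collab-edit" tag.
const perm: TagPermission = {
  level: "collab-edit",
  creatorId: "u1",
  collaboratorIds: ["u2"],
  orgMemberIds: ["u1", "u2", "u3"],
};
console.log(canEdit(perm, "u3"), canView(perm, "u3")); // false true
```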
In addition, in the embodiment of the present application, the creator of the target tag may also delete the target tag. Specifically, the user may trigger the deletion operation of the target tag. After receiving the deletion operation, the client may delete the target tag according to the deletion operation, for example, may delete a correspondence between the target tag and any one object. For a detailed description of the deletion operation, reference may be made to the description of the tag viewing method section below, and details thereof are not described here.
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
The method for information interaction provided by the embodiment of the application is introduced above. Correspondingly, the embodiment of the application also provides a corresponding tag viewing method, which is described in detail below.
Referring to fig. 11, fig. 11 is a schematic flowchart of a tag viewing method provided in an embodiment of the present application, where the method specifically includes the following steps:
S1101: The client receives a label viewing operation triggered by the user through the label viewing control.
When the user wants to view the label, the user can trigger a label viewing operation on the label viewing control on the client. Accordingly, the client may receive a tag viewing operation triggered by the user. In the embodiment of the application, the label viewing control may be related to a certain target object or may be unrelated to the target object. The following is a description for each of these two cases.
In a first possible implementation, the label viewing control is related to the target object, that is, the label viewing operation is used to view a label corresponding to the target object. Then, after the label viewing control is triggered, the label viewing operation received by the client is a label viewing operation for the target object, and the displayed label is also a label possessed by the target object. In this way, the user can see the label that the target object has.
Optionally, the label viewing control associated with the target object may be located on the interface where the target object is located. The client displays the label viewing control while displaying the target object. Then, the user may trigger the corresponding label viewing operation when viewing the target object. Correspondingly, the client receives the label viewing operation for the target object, so that the label of the target object is displayed to the user. In this way, the user can control the client, through the label viewing control, to display the label of the target object when viewing the target object.
Of course, the client may also display the labels of the target object in the interface for displaying the target object, that is, display the target object and the labels of the target object simultaneously. In this case, the label viewing control may be a control for jumping to the interface displaying the target object. That is, after the user clicks the label viewing control on another interface (an interface that does not include the target object), the client may jump to the interface displaying the target object and the labels of the target object. Thus, the user can view the target object and the labels of the target object on one interface at the same time.
As can be seen from the foregoing description, in the embodiment of the present application, the target object may be any one of the operation objects to which a label can be added, such as a cloud document, a schedule, a task, an instant messaging message, and an instant messaging group. In the embodiment of the application, when the target objects are different, the way in which the user triggers the label viewing control is also different. For the description of this part, reference may be made to the description of the corresponding embodiments in fig. 12 to 17, which will not be repeated herein.
In a second possible implementation, the label viewing control is not directly related to a certain target object, but is related to the labels themselves. Accordingly, the label viewing operation is used to view the labels themselves, not the labels that a certain target object has. That is, after the label viewing control is triggered by the user, the client may jump to a label display interface, where the label display interface is used to display one or more labels, and the one or more labels may correspond to the same or different target objects. In this way, the user can manage the labels.
For the description of this part, reference may be made to the description of the corresponding embodiments in fig. 18 to fig. 21, which will not be described herein again.
S1102: The client displays at least one label according to the label viewing operation.
After receiving the tag viewing operation, the client may display at least one tag according to the tag viewing operation, so that the user can view the tag. Alternatively, the client may display the creator information of the tag at the same time as displaying the tag. Creator information may include the creator's name, contact, team, job title, etc.
Optionally, when the client displays a plurality of labels, the client may sort the labels according to a preset rule, and then display the plurality of labels in sequence. The preset rule may include any one or more of tag access time ordering, tag creation time ordering and tag priority. For example, when the preset rule is that the tags are sorted according to the creation time, the client may obtain the creation time of each tag in at least one tag, and sort the tags according to the order of the creation time from late to early. The priority of the tags may be determined based on whether the tags are collected.
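For illustration only, the preset sorting rules just listed could be applied as in the following TypeScript sketch. Treating collected (favorited) labels as higher priority is one reading of the priority rule and, like all identifiers here, is an assumption of this description.

```typescript
// Minimal sketch (assumed data shape): ordering labels before display
// according to one of the preset rules.
interface DisplayTag {
  name: string;
  createdAt: number;     // epoch milliseconds
  lastAccessed: number;  // epoch milliseconds
  collected: boolean;    // whether the user has collected (favorited) the label
}

type PresetRule = "accessTime" | "creationTime" | "priority";

function sortForDisplay(tags: DisplayTag[], rule: PresetRule): DisplayTag[] {
  const copy = [...tags];
  switch (rule) {
    case "accessTime":
      return copy.sort((a, b) => b.lastAccessed - a.lastAccessed);
    case "creationTime":
      // Order from latest to earliest creation time, as in the example above.
      return copy.sort((a, b) => b.createdAt - a.createdAt);
    case "priority":
      // Collected labels first, then by most recent access.
      return copy.sort(
        (a, b) => Number(b.collected) - Number(a.collected) || b.lastAccessed - a.lastAccessed
      );
  }
}

const demoTags: DisplayTag[] = [
  { name: "meeting", createdAt: 3, lastAccessed: 1, collected: false },
  { name: "work", createdAt: 1, lastAccessed: 3, collected: true },
  { name: "project A", createdAt: 2, lastAccessed: 2, collected: false },
];
console.log(sortForDisplay(demoTags, "priority").map((t) => t.name)); // ["work", "project A", "meeting"]
```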
As described above, the tag viewing operation may be used to view the tags corresponding to a target object, or to view all tags. When the tag viewing operation is used to view the tags corresponding to a target object, the client may display at least one tag corresponding to the target object; when the tag viewing operation is used to view all tags, the client may display all tags that the user can see. Optionally, when the tag viewing operation is used to view all tags, the client may further modify at least one displayed tag; for example, the tag may be subjected to operations such as viewing details, deleting, and grouping. For details of this part, reference may be made to the following description, which is not repeated herein.
In the embodiment of the application, all the tags displayed by the client may include tags created by the user, and tags created by other users for which the user has viewing permission. The following description takes as an example the case where the client displays tags created by the user and tags created by other users for which the user has viewing permission. After receiving the tag viewing operation, the client triggers the server to screen out, from all the stored tags, the tags created by the user and the tags created by other users. Then, the server can judge, one by one, whether the user has the viewing permission corresponding to the tags created by other users, so as to screen out the tags for which the user has the corresponding viewing permission, and send the tags created by the user and the tags created by other users for which the user has viewing permission to the client.
In some possible implementations, when the user does not have viewing permission for a tag that the user wants to view, the user may apply to the user who created the tag for viewing permission for that tag. Specifically, assuming that user X does not have viewing permission for tag A, user X may trigger a permission application operation on tag A. After receiving the operation triggered by the user, the client can generate a permission application request according to the permission application operation and send the permission application request to user Y, who created tag A. The permission application request is used to apply for viewing permission for tag A. After seeing the permission application request, user Y can approve the request, so that user X obtains viewing permission for tag A and can see tag A on the client. Of course, user Y may also reject the permission application request, so that user X cannot see tag A.
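For illustration only, the visibility screening and the permission application flow described above could be sketched as follows in TypeScript. All data shapes and function names are assumptions of this description.

```typescript
// Minimal sketch (assumed shapes): tags the user may see, plus a permission
// application request for a tag the user cannot see.
interface StoredTag {
  id: string;
  name: string;
  creatorId: string;
  viewerIds: string[]; // users granted viewing permission
}

function visibleTags(all: StoredTag[], userId: string): StoredTag[] {
  return all.filter((t) => t.creatorId === userId || t.viewerIds.includes(userId));
}

interface PermissionRequest {
  tagId: string;
  applicantId: string; // user X applying for viewing permission
  approverId: string;  // user Y, the tag's creator
  status: "pending" | "approved" | "rejected";
}

function buildPermissionRequest(tag: StoredTag, applicantId: string): PermissionRequest {
  return { tagId: tag.id, applicantId, approverId: tag.creatorId, status: "pending" };
}

function resolveRequest(req: PermissionRequest, tag: StoredTag, approve: boolean): void {
  req.status = approve ? "approved" : "rejected";
  if (approve) tag.viewerIds.push(req.applicantId); // applicant can now see the tag
}

// Usage: user X requests viewing permission for a tag created by user Y.
const tagA: StoredTag = { id: "A", name: "project A", creatorId: "userY", viewerIds: [] };
const request = buildPermissionRequest(tagA, "userX");
resolveRequest(request, tagA, true);
console.log(visibleTags([tagA], "userX").length); // 1
```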
In some possible implementations, the user may filter the tags. Specifically, the user may trigger a screening triggering operation on the screening control. The client can receive the screening triggering operation triggered by the user, trigger the screening operation according to the screening triggering operation, and display the screening result.
Optionally, the screening operation may include screening conditions. Accordingly, the screening operation triggered by the screening trigger operation may screen the tags according to the screening conditions. Specifically, the filtering condition may include an attribute value condition of at least one attribute. The filtering operation may filter the tags presented on the target interface according to the attribute value of the at least one attribute.
In the embodiment of the present application, the at least one attribute may include any one of ownership by a user, ownership by another user, and unlimited ownership. The label with the attribute of being owned by the user is the label created by the user; the label with the attribute of being owned by other users is the label created by other users; tags having the attribute of unlimited attribution include both user-created tags and tags created by other users.
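For illustration only, the attribution attribute described above could be applied as a filter as in the following TypeScript sketch; the names are assumptions of this description.

```typescript
// Minimal sketch (assumed names): applying the attribution filter to the tags
// already shown on the tag display interface.
type Attribution = "mine" | "others" | "unlimited";

interface ShownTag {
  name: string;
  creatorId: string;
}

function filterByAttribution(tags: ShownTag[], userId: string, attr: Attribution): ShownTag[] {
  switch (attr) {
    case "mine":
      return tags.filter((t) => t.creatorId === userId); // tags created by the user
    case "others":
      return tags.filter((t) => t.creatorId !== userId); // tags created by other users
    case "unlimited":
      return tags;                                        // both kinds
  }
}

const shown: ShownTag[] = [
  { name: "meeting", creatorId: "me" },
  { name: "project A", creatorId: "someone-else" },
];
console.log(filterByAttribution(shown, "me", "mine").map((t) => t.name)); // ["meeting"]
```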
The following describes in detail the manner in which the user triggers the tag viewing operation, according to whether the tag viewing control is related to the target object.
First, a case where the label viewing control is related to the target object, that is, the label viewing control is used to view a label that the target object has, is described.
As described above, the target object may be any one of operation objects capable of adding tags, such as a cloud document, a schedule, a task, an instant messaging message, and an instant messaging group. In the embodiment of the application, when the target objects are different, the manner of the user triggering the tag viewing operation may be different. In the following, the manner of the user triggering the tag viewing operation is introduced by taking the target object as a cloud document, a schedule, an instant messaging message and an instant messaging group as examples.
It should be noted that, after receiving the tag viewing operation, the client may display all tags that the target object has, and may also display a tag that is created by a user and/or a tag that is created by another user and that the user has viewing permissions, from among all tags that the target object has. The following description will be given taking an example in which the client displays all tags included in the target object.
Firstly, a method for triggering a tag viewing operation by a user when a target object is a cloud document is introduced.
In a first possible implementation, the client may display the tags that the cloud document has while displaying the cloud document. Accordingly, the label viewing control may be a control for viewing a label of the cloud document. After the control is triggered, the client can jump to an interface displaying the cloud document and display the cloud document and the tags of the cloud document.
As shown in fig. 12-a, when the client displays the cloud document, a name display area 1210, a title display area 1220, a body display area 1230, and an operation control set 1240 may be included in the display area. Among them, the title display area 1220 includes a tag display area 1221 for displaying tags that the cloud document has.
Optionally, the client may display the identification information when displaying the tag, so as to prompt the user that the displayed content is the tag. For example, in the embodiment shown in FIG. 12-a, a pound sign "#" may be displayed before each tag. In this way, when the user sees the cloud document, the user may determine that the tags corresponding to the cloud document include two tags, that is, "software project" and "function update description".
For the rest of fig. 12-a, reference may be made to the description of the embodiment shown in fig. 2 and 3, which is not repeated here.
In a second possible implementation, the client may display a tag viewing control corresponding to the cloud document while displaying the cloud document. For example, in the embodiment shown in fig. 12-b, the title display area 1220 includes a tag viewing control 1222. The user may trigger a tag display operation by triggering the tag viewing control 1222. Accordingly, after receiving the tag display operation, the client may display the tags of the cloud document. Optionally, in some other implementations, the client may also hide the tag viewing control and display it based on a display operation or a selection operation triggered by the user. For the description of the display operation and the selection operation, reference may be made to the above description, and details are not repeated here.
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
The following describes a method for triggering a tag viewing operation by a user when a target object is a schedule.
In a first possible implementation, the client may display the calendar and the tags that the calendar has at the same time. Accordingly, the label viewing control can be a control for viewing a schedule. After the control is triggered, the client can jump to a schedule display interface and display the schedule and the labels of the schedule.
In a second possible implementation, the client may display the label viewing control corresponding to the schedule while displaying the schedule, and/or display the label viewing control corresponding to the schedule based on a display operation or a selection operation triggered by the user on the interface displaying the schedule. The following description will take an example in which the client displays a tab viewing control corresponding to a schedule based on a selection operation triggered by a user.
As shown in FIG. 13, the user may trigger a selection operation by clicking on the schedule display element 1310. After the user triggers the selection operation, the client may display a hidden interface 1320 that includes an operation control set 1321, a schedule information display area 1322, a reminder time display area 1323, and a label display area 1324. The label display area 1324 may be used to display a label corresponding to the schedule. In the embodiment shown in fig. 13, the tags corresponding to the schedule include two tags of "item a" and "function update description".
For the rest of fig. 13, reference may be made to the description of the embodiment shown in fig. 5 and 6, which is not repeated here.
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
The following describes a method for triggering the tag viewing operation by the user when the target object is an instant messaging message.
In a first possible implementation, the client may display the instant messaging message and the tag that the instant messaging message has at the same time. Accordingly, the label viewing control can be a control for viewing instant messaging messages. After the control is triggered, the client can jump to an interface for displaying the instant messaging message and display the instant messaging message and a label of the instant messaging message.
In a second possible implementation, the client may display the instant messaging message and simultaneously display the label display control corresponding to the instant messaging message, and/or display the label display control corresponding to the instant messaging message based on a display operation triggered by the user on an interface displaying the instant messaging message or a selection operation. The following description will take an example in which the client displays a tag display control corresponding to the instant messaging message based on a selection operation and a display operation triggered by a user.
As shown in fig. 14, when the user wants to view the tab of the group chat IM message 1410, the user may select the group chat IM message 1410. The client may display control set 1420 according to the user's selected operation, where control set 1420 includes display operation control 1421. The user can trigger a display operation through the display operation control 1421, and control the client to display a hidden control set 1430, where the hidden control set 1430 includes a label viewing control 1431.
Upon triggering the tab view control 1431, the client may display a tab view interface that includes the tab that the group chat IM message 1410 has. Alternatively, the tag viewing interface may be a pop-up dialog box, and the client may display the tag viewing interface in the interface displaying the group chat IM message. Optionally, the tag viewing interface may also be a separate interface, and the client may jump to another interface to display tags of the group chat IM message.
Note that the label viewing control 1431 in fig. 14 may be the same control as the label adding control 651 in fig. 6.
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
The following describes a method for triggering a tag viewing operation by a user when a target object is an instant messaging group.
In a first possible implementation, the client may display the instant messaging group and simultaneously display the tags of the instant messaging group. Accordingly, the label viewing control can be a control for viewing the instant messaging group. After the control is triggered, the client can jump to an interface for displaying the instant messaging group and display the instant messaging group and the label of the instant messaging group.
As shown in fig. 15, when the client displays an instant messaging group, the title display area 1510 may include a tag display area 1511. The tag display area 1511 may be used to display tags corresponding to the instant messaging group. In the embodiment shown in fig. 15, the instant messaging group "work exchange group" has tags including two tags "item a" and "work group".
In a second possible implementation, the client may display the tag viewing control corresponding to the instant messaging group while displaying the instant messaging group, and/or display the tag viewing control corresponding to the instant messaging group based on a display operation triggered by the user on an interface displaying the instant messaging group or a selection operation. The following description will take an example in which the client displays the instant messaging group and simultaneously displays the tag viewing control corresponding to the instant messaging group.
As shown in fig. 16, when the client displays an instant messaging group, the title display area 1610 may include a jump control 1611, and the jump control 1611 can function as a tag viewing control. After the user triggers the jump control 1611, the client may receive a tag viewing operation and display the tags of the instant messaging group "work exchange group". Optionally, the client may display the tags of the instant messaging group on the current interface, or may jump to another interface to display the tags of the instant messaging group.
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
The above describes the scenario in which the label viewing control is used to view the labels of a target object. The following describes the scenario in which the label viewing control is not directly associated with a target object, that is, the label viewing control is used to view the labels themselves.
When the label viewing control is not directly used to view a label that a certain target object has, the label viewing control may be used to jump to the label display interface. In the label display interface, the client may display at least one label that the user may view (for example, a label for which the user has viewing permission). By operating any one of the at least one label, the user can view the related information of the objects corresponding to that label, adjust the related information of the label, and adjust the correspondence between the label and the objects.
In the embodiment of the application, the label viewing control can be located on the initial interface of the client, or on any other interface. Accordingly, the user can trigger the label viewing operation on the initial interface of the client or on any other interface.
The explanation will be given by taking an example of a user triggering a tag viewing operation in a group chat IM message interface. As shown in fig. 17, when the client displays a group chat IM message interface, a title display area 1710, a message display area 1720, and a control display area 1730 may be included in the display area. The title display area 1710 is used to display basic information of the client, and for example, in fig. 17, the title display area 1710 displays the name "software X" of software running on the client. Message display area 1720 is used to display an instant messaging group chat message. For details of the message display area 1720, reference may be made to the above description, which is not repeated here.
The control display area 1730 is used to display at least one operation control. In the embodiment illustrated in fig. 17, the control display area 1730 includes a message viewing control 1731, an address book viewing control 1732, and a label viewing control 1733. The message viewing control 1731 is configured to cause the client to display the instant messaging group chat message after being triggered; in the embodiment illustrated in fig. 17, the message viewing control 1731 is in a triggered state. The address book viewing control 1732 is used for causing the client to display the address book after being triggered; in the embodiment shown in fig. 17, the address book viewing control 1732 is in an untriggered state. The label viewing control 1733 is used to cause the client to display at least one label after being triggered; in the embodiment illustrated in fig. 17, the label viewing control 1733 is in an untriggered state.
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
After the user triggers the label viewing control, the client can receive the label viewing operation and jump to the label display interface according to the label viewing operation. In the label display interface, the client may display at least one label. As can be appreciated from the foregoing description, the at least one label may include labels created by the user and labels created by other users for which the user has viewing permission. The user may be the user who is logged in to the client or the user who currently operates the client.
As shown in fig. 18-a, when the client displays the label display interface, a title display area 1810, a control display area 1820, a search control 1830, a sorting information display area 1840, and a label display area 1850 may be included within the display area.
The title display area 1810 is used for displaying basic information of the client. The control display area 1820 includes a label viewing control 1821; in the embodiment shown in fig. 18-a, the label viewing control 1821 is in a triggered state. The search control 1830 is used to receive a keyword input by the user and search for a label matching the keyword. The sorting information display area 1840 is used to display the sorting rule of the label cards in the label display area 1850. In the embodiment shown in fig. 18-a, the sorting information display area 1840 displays the content "closest", indicating that the sorting rule of the labels in the label display area 1850 is sorting by label access time, with the most recently accessed label at the top.
The label display area 1850 may be used to display one or more label cards, and each label card may display information relating to one label. In the embodiment shown in fig. 18-a, the label display area 1850 includes a first label card 1851, a second label card 1852, a third label card 1853, and an incompletely displayed fourth label card 1854. The first label card 1851 includes a color identification area 1851-1 and a label name display area 1851-2. The color identification area 1851-1 displays a color used to represent the creator of the label shown by the first label card 1851, and the label name display area 1851-2 displays the name of that label. In the embodiment shown in fig. 18-a, the color corresponding to the first label card 1851 is black, indicating that the label "meeting" currently displayed by the first label card 1851 was created by the user currently logged in to the client. In other examples, the color displayed in the color identification area 1851-1 may also be used to represent other content, which is not specifically limited in the embodiment of the present application.
As can be seen from the foregoing description, the user may filter at least one of the tags displayed by the client. In the embodiment of the application, the label display interface may include a filtering control, and a user may trigger a filtering triggering operation on the filtering control, so as to control the client to filter the label.
As shown in FIG. 18-b, a filter control 1860 may also be included in the label display interface, filter control 1860 including a current property display area 1861 and a property change control 1862. The current property display area 1861 is used to display the current filtering conditions. In the embodiment shown in fig. 18-b, the condition for the client to filter the tags is "don't limit attribution", which indicates that the tags displayed by the client include the tags created by the user and the tags created by other users. The property change control 1862 is used to change the filter conditions of the client.
When the user wants to screen out tags with other properties from the displayed tags, the user can trigger the property change control 1862. In response to the user's operation, the client can display the property display area 1863. The property display area 1863 includes a first property filtering control 1863-1 and a second property filtering control 1863-2. When the first property filtering control 1863-1 is triggered, the client can filter out the tags created by the user; when the second property filtering control 1863-2 is triggered, the client can filter out the tags created by other users.
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
When the number of tags is large, the time required for the user to find a desired tag on the tag display interface may be long. To this end, the user may trigger a favorites operation on the tag display interface to collect any one or more tags. In this way, when the user wants to view a collected tag, the user can quickly find it on the collection interface.
In some possible implementations, the user may trigger the favorites operation through a favorites control. Specifically, as shown in fig. 19-a, a tag card displayed on the tag display interface may include a favorites control. For example, the first tag card 1851 includes a first favorites control 1851-3, the second tag card 1852 includes a second favorites control 1852-3, and the third tag card 1853 includes a third favorites control 1853-3.
When the user wants to collect a tag, the user can click the favorites control on the tag card that displays the tag. For example, assuming the user wants to collect the tag "meeting", the user can click the first favorites control 1851-3 in the first tag card 1851. In response to the first favorites control 1851-3 being clicked, the client may determine that the user has triggered a favorites operation on the tag "meeting", thereby collecting the tag "meeting".
Optionally, after collecting the target tag, the client may adjust the color of the favorites control in the tag card where the target tag is located. As shown in fig. 19-b, after the tag "meeting" is collected, the client may adjust the color of the first favorites control 1851-3 from white to black to prompt the user that the tag "meeting" shown on the first tag card 1851 has been collected.
After collecting any one or more tags, the user may trigger a viewing operation on the collected tags, for example, the viewing operation on the collected tags may be triggered on a home page of the client through a viewing control, or the viewing operation on the collected tags may be triggered in an interface where the collected objects are displayed on the client. After receiving the viewing operation, the client may display the collected tags.
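For illustration only, the collect-and-view flow above could be modeled by the following TypeScript sketch. The collected flag stands in for the color change of the favorites control; the names are assumptions of this description.

```typescript
// Minimal sketch (assumed shape): collecting (favoriting) a tag from its tag
// card and listing the collected tags for the collection view.
interface FavTag {
  name: string;
  collected: boolean; // favorites control drawn filled (e.g. black) when true
}

function toggleFavorite(tags: FavTag[], name: string): FavTag[] {
  return tags.map((t) => (t.name === name ? { ...t, collected: !t.collected } : t));
}

function collectedTags(tags: FavTag[]): FavTag[] {
  return tags.filter((t) => t.collected);
}

// Usage: collect the tag "meeting", then list the collected tags.
let favDemo: FavTag[] = [
  { name: "meeting", collected: false },
  { name: "work", collected: false },
];
favDemo = toggleFavorite(favDemo, "meeting");
console.log(collectedTags(favDemo).map((t) => t.name)); // ["meeting"]
```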
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
In the tag viewing interface, the client displays to the user one or more tags that the user can see. When the user wants to further view the objects corresponding to a certain tag and other information of those objects, the user can trigger a viewing operation on that tag. As can be seen from the above description, after a tag is added to a target object, the client or the server may generate the correspondence between the information block of the target object and the tag. Then, upon receiving the tag viewing operation, the client may display the information blocks of the objects having the tag.
In an embodiment of the present application, the information block may include summary information. For convenience of explanation, the following embodiments take the case where the client displays summary information as an example. It should be noted that, when the information block of the target object includes other information of the target object, the client may also display that information.
When the information block of the target object includes summary information of the target object, the client may display summary information of at least one object having the tag. For example, assuming that a user triggers a viewing operation on a first tag, the client may jump to a tag detail interface corresponding to the first tag according to the viewing operation, where the tag detail interface includes summary information of at least one object.
To help the user distinguish different types of objects, in the embodiment of the present application the client may classify the at least one object by tag domain and then display the summary information according to that classification. The tag domain indicates the source or category of an object; alternatively, the tag domain may be the type of the object. For example, if object A is a cloud document, the tag domain corresponding to object A is the cloud document. As discussed above, the tagged target object may be an operation object such as a cloud document, a schedule, a task, an instant messaging message, or an instant messaging group. Accordingly, the tag domain may include any one or more of cloud document, schedule, task, instant messaging message, and instant messaging group.
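As a non-limiting sketch of this classification step, the following TypeScript fragment groups the objects of one tag by their tag domain before display; the TaggedObject shape and the groupByDomain helper are assumptions introduced here.

```typescript
// Group the objects under one tag by the tag domain they originate from.
interface TaggedObject {
  id: string;
  domain: "cloud document" | "schedule" | "task" | "message" | "group";
  summary: string;
}

function groupByDomain(objects: TaggedObject[]): Map<string, TaggedObject[]> {
  const byDomain = new Map<string, TaggedObject[]>();
  for (const obj of objects) {
    const bucket = byDomain.get(obj.domain) ?? [];
    bucket.push(obj);
    byDomain.set(obj.domain, bucket);
  }
  return byDomain;
}

const grouped = groupByDomain([
  { id: "m1", domain: "message", summary: "a meeting this afternoon" },
  { id: "d1", domain: "cloud document", summary: "meeting minutes" },
]);
console.log([...grouped.keys()]); // ["message", "cloud document"]
```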
In the embodiment of the present application, when the client displays summary information of at least one object having the same tag, the client may display the summary information in a list manner or a canvas card manner. Each will be described in detail below.
First, a case where the client displays summary information in a list manner is described.
In the case where the client displays the summary information in a list manner, the client may sequentially display the summary information of at least one object having a tag. Specifically, as shown in fig. 20, when the client displays the summary information in the form of a list, the display area may include a title display area 2010, a control display area 2020, a display manner switching control 2030, and a label display area 2040. The descriptions of the title display area 2010 and the control display area 2020 can be referred to above, and are not described here again.
The display mode switching control 2030 is used to switch the mode in which the client displays the summary information, for example from the list mode to the canvas card mode or from the canvas card mode to the list mode. In the embodiment shown in fig. 20, the display mode switching control 2030 includes a canvas card display control 2031 and a list display control 2032; the canvas card display control 2031 is in an un-triggered state and the list display control 2032 is in a triggered state, which indicates that the client currently displays summary information in the list mode.
The label display area 2040 includes a label switching control 2041, a label name display area 2042, a label domain display area 2043, and at least one summary information display area.
The label switching control 2041 is used to switch to other labels; when the user wants to view other labels, the label switching control 2041 may be triggered. The tag name display area 2042 is used to display the name of a tag. For example, in the embodiment shown in fig. 20, the user can determine from the tag name display area 2042 that the currently displayed summary information belongs to the tag "meeting".
The label domain display area 2043 may be used to display the tag domain from which the objects corresponding to the summary information originate, and may also be used to switch to the summary information of objects originating from other tag domains. As shown in fig. 20, the label domain display area 2043 includes a first label domain name 2043-1, a second label domain name 2043-2, and labeling information 2043-3. The first label domain name 2043-1 is "message", which indicates that objects from this tag domain are instant messaging messages; the second label domain name 2043-2 is "group", which indicates that objects from this tag domain are instant messaging groups; the labeling information 2043-3 is used to mark the tag domain to which the summary information currently displayed by the client belongs. In the embodiment shown in fig. 20, the labeling information 2043-3 is associated with the first label domain name 2043-1, which indicates that the objects corresponding to the summary information currently displayed by the client originate from the "instant messaging message" tag domain.
Of course, in some possible implementations, the client does not classify summary information by tag domain; correspondingly, the label display area 2040 then does not include the label domain display area 2043.
In this embodiment, the tag display area 2040 may include one or more summary display areas, each of which may be used to display the summary information of one object. The following description takes, as an example, the case in which the tag display area 2040 includes the summary display area 2044.
As shown in fig. 20, when the summary information displayed by the client is summary information of an instant messaging message, the summary display area 2044 may include at least one of a summary avatar display area 2044-1, a summary content display area 2044-2, and an object information display area 2044-3. The summary avatar display area 2044-1 is used to display the avatar of the user who sent the instant messaging message. The summary content display area 2044-2 may display the specific content of the instant messaging message. Optionally, when the content of the instant messaging message is relatively long, the summary content display area 2044-2 may display only a portion of the instant messaging message, such as the first ten words of the message. The object information display area 2044-3 is used to display other information of the instant messaging message; for example, in the embodiment shown in fig. 20, the time at which the instant messaging message was sent is displayed in the object information display area 2044-3. Of course, the object information display area 2044-3 may also display other information, such as the instant messaging group to which the instant messaging message belongs.
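By way of example only, a truncation step like the following could produce the shortened summary content mentioned above; the helper below is a sketch under the assumption that roughly the first ten characters are kept, and it is not the client's actual formatting logic.

```typescript
// Build the summary content of an IM message card, truncating long messages.
interface ImMessage {
  sender: string;
  content: string;
  sentAt: Date;
}

function summarize(message: ImMessage, maxChars = 10): string {
  return message.content.length > maxChars
    ? message.content.slice(0, maxChars) + "…" // keep only the beginning of a long message
    : message.content;
}

const msg: ImMessage = {
  sender: "leader A",
  content: "a meeting this afternoon, please be on time",
  sentAt: new Date(),
};
console.log(summarize(msg)); // "a meeting …"
```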
Optionally, the tag display area may further include indication information for prompting the user about what is displayed in each part of the summary display areas. For example, in the embodiment shown in fig. 20, the tag display area 2040 includes two pieces of indication information, "message content" and "most recently sent". The "message content" indication prompts the user that the corresponding position in the summary display areas below displays the content of the instant messaging messages. The "most recently sent" indication prompts the user that the summary information displayed in the summary display areas below is arranged in order of the sending time of the corresponding instant messaging messages.
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
The following describes the case where the client displays summary information in the form of canvas cards. In this case, the client may set a canvas view display area in the display area. Specifically, as shown in fig. 21, when the client displays the summary information in the form of canvas cards, the display area may include a display scale adjustment control 2110, a display mode switching control 2120, a search control 2130, a label domain display area 2140, a canvas view 2150, and at least one canvas card. For the description of the other parts in fig. 21, reference may be made to the foregoing embodiments, which are not repeated here.
The display scale adjustment control 2110 is used to adjust the display scale of the canvas view 2150, including a zoom-out control 2111, a zoom-in control 2112, and a scale display area 2113. The zoom-out control 2111 is used for zooming out the display scale of the canvas view 2150, the zoom-in control 2112 is used for zooming in the display scale of the canvas view 2150, and the scale display area 2113 is used for displaying the current display scale of the canvas view 2150.
The display mode switching control 2120 is used to switch the mode in which the client displays summary information. In the embodiment shown in fig. 21, the display mode switching control 2120 includes a canvas card display control 2121 and a list display control 2122; the canvas card display control 2121 is in a triggered state and the list display control 2122 is in an un-triggered state, which indicates that the client currently displays summary information in the canvas card mode.
The search control 2130 is used to search for corresponding objects and/or summary information according to a keyword input by the user. When the user wants to find certain summary information or objects by keyword, the keyword can be entered in the search control 2130. Accordingly, the client may find the objects and/or summary information that match the keyword.
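The keyword matching behind the search control could, for instance, be a simple substring filter over the displayed summaries, as in the hypothetical sketch below; the disclosure does not prescribe any particular matching algorithm.

```typescript
// Filter the displayed summaries by a user-entered keyword (case-insensitive substring match).
interface SummaryEntry {
  objectId: string;
  summary: string;
}

function search(entries: SummaryEntry[], keyword: string): SummaryEntry[] {
  const needle = keyword.trim().toLowerCase();
  if (needle.length === 0) return entries; // empty keyword -> no filtering
  return entries.filter((entry) => entry.summary.toLowerCase().includes(needle));
}

const entries: SummaryEntry[] = [
  { objectId: "msg-1", summary: "a meeting this afternoon" },
  { objectId: "msg-2", summary: "quarterly report draft" },
];
console.log(search(entries, "meeting")); // [{ objectId: "msg-1", summary: "a meeting this afternoon" }]
```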
For a detailed description of the label domain display area 2140, reference is made to the above description, which is not repeated here.
The canvas view 2150 may be used to group pieces of information displayed in the canvas card according to a user's operation. The canvas view 2150 is described in detail in the following description of the embodiments shown in fig. 22 and 23, and will not be described here.
In an embodiment of the present application, the display area of the client may include one or more canvas cards, each for displaying the summary information corresponding to one object. The following description takes the case where the display area includes the canvas card 2160 as an example.
As shown in fig. 21, the canvas card 2160 includes a summary avatar display area 2161, a summary content display area 2162, and an object information display area 2163. The summary avatar display area 2161 is used to display the avatar of the user who sent the instant messaging message. The summary content display area 2162 may display the specific content of the instant messaging message. The object information display area 2163 is used to display other information of the instant messaging message; for example, in the embodiment shown in fig. 21, the time at which the instant messaging message was sent is displayed in the object information display area 2163. It should be noted that fig. 21 is only an example for facilitating understanding of the canvas card and is not used to limit the specific style of the canvas card.
In the embodiment of the application, separation areas can be arranged between different canvas cards to distinguish the summary information corresponding to different objects. As shown in fig. 21, the canvas card 2160 has a background color that differs from that of the surrounding area. In this way, the user can identify the canvas card 2160 that displays the summary information according to its color.
As can be seen from the foregoing description, the creator of a tag can set viewing rights and/or editing rights for the tag. Accordingly, after receiving a tag viewing operation, the client may determine whether the currently logged-in user has the viewing permission for the tag. If the user does not have the viewing permission for the tag, the client may not display the information blocks of the objects having the tag. If the user has only the viewing permission but not the editing permission for the tag, the client does not respond to editing operations by the user. For example, in the embodiment shown in fig. 21, assuming the user has only the viewing permission for the tag "meeting" and not the editing permission, the client may display the canvas view shown in fig. 21 but prohibit the user from grouping information blocks in the canvas view, and the like.
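Purely for illustration, the permission gate described above might look like the following TypeScript sketch; the Permission type and the helper names are assumptions, not the disclosed access-control model.

```typescript
// No view right -> show nothing; view right without edit right -> show the canvas
// view but ignore grouping (editing) operations.
type Permission = "none" | "view" | "edit";

function canDisplay(permission: Permission): boolean {
  return permission === "view" || permission === "edit";
}

function canGroup(permission: Permission): boolean {
  return permission === "edit";
}

const userPermission: Permission = "view"; // e.g. the user may view "meeting" but not edit it
console.log(canDisplay(userPermission)); // true  -> display the canvas view
console.log(canGroup(userPermission));   // false -> ignore grouping operations
```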
It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
The above introduces how the client displays, to the user, a tag and the summary information of at least one object having the tag. Accordingly, in the interface that displays the summary information, the user can operate on the tag or on the summary information. In the embodiment of the application, the operations performed by the user on the tag or the summary information may include a view-details operation, a delete operation, and a grouping operation, each described in detail below.
The view details operation is first described.
In an actual application scenario, the client may be limited by the size of the display area and unable to display all of an object's information or all of its summary information, so the user may not be able to find the content of interest in the information displayed by the client.
Thus, the client may associate the summary information of an object with the object itself, or with an interface corresponding to the object. In this way, when the user wants to view the detailed information of the object, the client may display the object or the interface corresponding to the object according to the view-details operation triggered by the user.
For example, assume that the objects displayed by the client include a first object. The client may record, in advance, the correspondence between the summary information of the first object and the first object, or the correspondence between the summary information of the first object and the interface associated with the first object. When the user wants to view the detailed information of the first object, the user can trigger an operation on the summary information of the first object. In response to the triggering operation, the client may jump to the interface associated with the first object or display the first object.
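A minimal sketch of this view-details flow, assuming the client keeps a map from each piece of summary information to the route of the interface associated with its object, is shown below; registerSummary, onSummaryClicked, and the route strings are all hypothetical.

```typescript
// Map each summary to the interface associated with its object and follow it on click.
const summaryToRoute = new Map<string, string>(); // summary id -> interface route

function registerSummary(summaryId: string, route: string): void {
  summaryToRoute.set(summaryId, route);
}

// Called when the user triggers a view-details operation on a summary.
function onSummaryClicked(summaryId: string, openRoute: (route: string) => void): void {
  const route = summaryToRoute.get(summaryId);
  if (route !== undefined) {
    openRoute(route); // jump to the interface associated with the object
  }
}

registerSummary("sum-1", "/im/group-42/message-7");
onSummaryClicked("sum-1", (route) => console.log("navigate to", route)); // navigate to /im/group-42/message-7
```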
The deletion operation will be described below.
When a user wants to delete a certain tag of an object, the user can trigger a deletion operation on the tag. Correspondingly, the client can delete the association relationship between the object and the tag according to the received deletion operation. Optionally, the user may trigger a delete operation on the first object in the tag detail page of the first tag, and the client receives this delete operation. Since the current interface corresponds to the first tag, the client may determine that the deletion operation concerns the first object and the first tag, and accordingly delete the association relationship between the first object and the first tag.
In the embodiment of the application, the user can also trigger a deletion operation on the tag itself, thereby controlling the client to delete the correspondence between the tag and all of its objects. For example, in the tag display interface shown in fig. 18, the user may trigger a delete operation on the first tag. After receiving the deletion operation, the client may delete the first tag and clear the correspondence between the first tag and all of the objects. Optionally, the user may control the client to display a tag deletion control by right-clicking or long-pressing the first tag 1851, and trigger the deletion operation through the tag deletion control.
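The two deletion scopes described above can be pictured with the following hypothetical sketch, which reuses a tag-to-objects map: one operation removes a single association, the other removes the tag together with all of its associations.

```typescript
// Illustrative deletion of tag-object correspondences.
const tagToObjects = new Map<string, Set<string>>(); // tag name -> ids of objects carrying it

// Delete operation triggered on one object in the tag detail page of `tag`.
function removeObjectFromTag(tag: string, objectId: string): void {
  tagToObjects.get(tag)?.delete(objectId);
}

// Delete operation triggered on the tag itself: the tag and all its correspondences are removed.
function deleteTag(tag: string): void {
  tagToObjects.delete(tag);
}

tagToObjects.set("meeting", new Set(["msg-1", "msg-2"]));
removeObjectFromTag("meeting", "msg-1");
console.log([...(tagToObjects.get("meeting") ?? [])]); // ["msg-2"]
deleteTag("meeting");
console.log(tagToObjects.has("meeting")); // false
```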
The grouping operation is described below.
In an actual application scenario, different objects may have the same tag. When the number of such objects is large, it may be difficult for the user to directly find a desired object. To address this problem, the user may trigger a grouping operation. Accordingly, the client can group any one or more of the objects having the same tag according to the grouping operation. In this way, objects with the same characteristics are placed in one group, so that the user can find a desired object more quickly.
Optionally, when generating a group, the client may obtain a tag group name. Using the tag group name, the client (or server) can generate a correspondence among the tag, the grouped object, and the tag group name. Thus, based on the tag group name, the user or the client can determine the group to which an object belongs. For example, assuming that the user groups object X and object Y under tag A into one group, the user can set the tag group name of the group to "first group". Accordingly, the client or server may record the correspondence among tag A, the first group, and object X, and the correspondence among tag A, the first group, and object Y.
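One possible way to record the correspondence among the tag, the tag group name, and the grouped objects is sketched below; the triple list is an assumption for illustration rather than the storage format of the disclosure.

```typescript
// Record (tag, group name, object) triples when objects are grouped.
interface GroupRecord {
  tag: string;
  groupName: string;
  objectId: string;
}

const groupRecords: GroupRecord[] = [];

function addToGroup(tag: string, groupName: string, objectId: string): void {
  groupRecords.push({ tag, groupName, objectId });
}

function objectsInGroup(tag: string, groupName: string): string[] {
  return groupRecords
    .filter((record) => record.tag === tag && record.groupName === groupName)
    .map((record) => record.objectId);
}

addToGroup("A", "first group", "X");
addToGroup("A", "first group", "Y");
console.log(objectsInGroup("A", "first group")); // ["X", "Y"]
```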
For a detailed description of the client obtaining the tag group name, reference may be made to the following, which is not described herein again.
The following describes a process of grouping objects, taking a case where a client displays summary information in a canvas card format as an example.
When a user wants to group objects, the user can drag the canvas card where the summary information corresponding to the objects is located into the canvas view. The client can determine the group to which the object corresponding to the summary information in the canvas card belongs according to the position of the canvas card in the canvas view. For example, the client may determine the group to which the object corresponding to the summary information in the canvas card belongs according to the distance between the canvas card and other canvas cards.
For example, when a user wants to add an object to a group, such as adding the canvas card 2160 of fig. 21 to the group "meeting X", the user may select the canvas card 2160 and drag it into the canvas view 2150. If the client detects that the canvas card 2160 is newly added to the canvas view 2150 and that the boundary of the canvas card 2160 does not overlap the boundary of any existing group area, the client can determine that the user is creating a new group for the canvas card 2160, and display a new canvas card corresponding to the canvas card 2160 in the canvas view 2150. The display area of the client can then be as shown in fig. 22, with the canvas view 2150 including a first group display area 2151. The first group display area 2151 is bounded by the boundary 2151-1 and includes a name display area 2151-2 and a canvas card 2151-3.
The name display area 2151-2 is used to display the group name corresponding to the first group display area 2151. In the embodiment shown in fig. 22, the group name of the first group display area 2151 is "meeting X". Optionally, the client may display a name input box when creating the group "meeting X", and use the data input by the user in the name input box as the tag group name of the group. The canvas card 2151-3 is a new canvas card corresponding to the canvas card 2160 and may display the same or similar summary information as the canvas card 2160. It can be seen that, in the embodiment shown in fig. 22, the canvas card 2151-3 indicates that the instant messaging message "a meeting this afternoon" sent by leader A, under the tag "meeting", belongs to the group "meeting X".
In an embodiment of the application, the client may determine the group of a dragged canvas card according to whether the dragged canvas card overlaps a group display area inside the canvas view. For example, if the user drags the canvas card to a position that overlaps the first group display area 2151, this indicates that the user is adding the new canvas card to the group "meeting X". If the user drags the canvas card to a position in the canvas view 2150 that does not overlap the first group display area 2151, this indicates that the user is not adding the new canvas card to the group "meeting X".
Alternatively, the client may determine whether the canvas card overlaps the group display area by determining whether the boundary of the canvas card intersects the boundary of the group display area. For example, if the boundary of the canvas card dragged into the canvas view 2150 intersects the boundary 2151-1, the client may determine that the canvas card overlaps the first group display area 2151; if the boundary of the canvas card dragged into the canvas view 2150 does not intersect the boundary 2151-1, the client may determine that the canvas card does not overlap the first group display area 2151.
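As a sketch only, the overlap decision can be implemented as a standard axis-aligned rectangle test, which treats both boundary intersection and full containment as overlap; the Rect shape and the coordinates below are assumptions.

```typescript
// Two axis-aligned rectangles overlap iff they are not separated on either axis.
interface Rect {
  x: number;      // left edge
  y: number;      // top edge
  width: number;
  height: number;
}

function overlaps(card: Rect, groupArea: Rect): boolean {
  const separatedHorizontally =
    card.x + card.width < groupArea.x || groupArea.x + groupArea.width < card.x;
  const separatedVertically =
    card.y + card.height < groupArea.y || groupArea.y + groupArea.height < card.y;
  return !separatedHorizontally && !separatedVertically;
}

const draggedCard: Rect = { x: 120, y: 80, width: 200, height: 60 };
const groupBoundary: Rect = { x: 100, y: 50, width: 400, height: 300 };
console.log(overlaps(draggedCard, groupBoundary)); // true -> add the card to this group
```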
Accordingly, after the canvas card 2160 is moved into the canvas view 2150, the client may display the canvas card 2220 in the position where the canvas card 2160 was originally displayed. The sending time of the instant messaging message corresponding to the canvas card 2220 is second only to that of the instant messaging message corresponding to the canvas card 2160.
After a number of user operations, the display interface of the client may be as shown in fig. 23, including a grouping label 2310, a canvas card 2320, a canvas card 2330, and a canvas view 2340. The grouping label 2310 is used to prompt the user that the canvas cards 2320 and 2330 on the right side of the display area have not yet been grouped. The canvas cards 2320 and 2330 are used to display summary information of objects having the tag "meeting".
In the embodiment shown in fig. 23, the canvas view 2340 includes a first group 2341 and a second group 2342; the boundary of the first group 2341 is 2341-1, and the boundary of the second group 2342 is 2342-1. The boundary of a group is the outer border of that group within the canvas view, and the canvas cards within a boundary belong to the same group. As can be seen from fig. 23, the tag "meeting" includes at least two groups, one with the tag group name "meeting X" and the other with the tag group name "meeting Y". The group "meeting X" includes the instant messaging message "a meeting this afternoon" sent by leader A and the instant messaging message "the subject of this afternoon's meeting is …" sent by leader A. The group "meeting Y" includes the instant messaging message "a meeting at ten tomorrow morning" sent by leader B, the instant messaging message "the subject of tomorrow morning's meeting is …" sent by colleague C, and the instant messaging message "the location of tomorrow morning's meeting is …" sent by colleague D.
Of course, in some possible implementations, the user may also divide multiple sub-groups in one group, where each sub-group may include one or more objects. The specific partitioning method is similar to the foregoing method, and is not described herein again. It should be noted that, without contradiction, the method described in the present embodiment and the method described in the foregoing embodiment may be combined. Moreover, the combined scheme is also covered in the protection scope of the embodiment of the present disclosure.
Fig. 24 is a schematic structural diagram of an information interaction apparatus provided in an embodiment of the present application, where this embodiment may be applied to a case where a target tag is added to a target object in a client, and the apparatus specifically includes: a receiving module 2410 and a generating module 2420.
The receiving module 2410 is configured to receive a tag adding operation triggered by a user on a target object. The generating module 2420 is configured to generate a corresponding relationship according to the tag adding operation, where the corresponding relationship includes a corresponding relationship between a target tag and the target object.
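As a purely illustrative sketch of the apparatus of fig. 24, the two modules could be modeled as plain classes as follows; the class and method names are assumptions and do not limit the apparatus.

```typescript
// Hypothetical model of the receiving module 2410 and the generating module 2420.
interface TagAddOperation {
  userId: string;
  targetObjectId: string;
  targetTag: string;
}

class GeneratingModule {
  readonly correspondences: Array<{ tag: string; objectId: string }> = [];

  // Generates the correspondence between the target tag and the target object.
  generateCorrespondence(op: TagAddOperation): void {
    this.correspondences.push({ tag: op.targetTag, objectId: op.targetObjectId });
  }
}

class ReceivingModule {
  constructor(private readonly generator: GeneratingModule) {}

  // Receives a tag adding operation triggered by the user on a target object.
  onTagAddOperation(op: TagAddOperation): void {
    this.generator.generateCorrespondence(op);
  }
}

const generating = new GeneratingModule();
const receiving = new ReceivingModule(generating);
receiving.onTagAddOperation({ userId: "u1", targetObjectId: "doc-1", targetTag: "meeting" });
console.log(generating.correspondences); // [{ tag: "meeting", objectId: "doc-1" }]
```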
The information interaction device provided by the embodiment of the disclosure can execute the information interaction method provided by any embodiment of the disclosure, and has the functional modules and beneficial effects corresponding to executing the information interaction method. It should be noted that, in the embodiment of the apparatus for information interaction, the included units and modules are only divided according to functional logic, but the division is not limited thereto as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present disclosure.
Fig. 25 is a schematic structural diagram of a tag viewing apparatus provided in an embodiment of the present application, where the embodiment may be applied to a case of viewing a tag in a client, and the apparatus specifically includes: a receiving module 2510 and a display module 2520.
The receiving module 2510 is configured to receive a label viewing operation triggered by the user on the label viewing control; the display module 2520 is configured to display at least one label according to the label viewing operation.
The tag viewing apparatus provided by the embodiment of the disclosure can execute the tag viewing method provided by any embodiment of the disclosure, and has the functional modules and beneficial effects corresponding to executing the tag viewing method.
It should be noted that, in the embodiment of the tag viewing apparatus, the included units and modules are only divided according to functional logic, but the division is not limited thereto as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present disclosure.
Referring now to fig. 26, a schematic diagram of an electronic device (e.g., a terminal device or server running a client) 2600 suitable for implementing an embodiment of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 26 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 26, the electronic device 2600 may include a processing device (e.g., a central processor, a graphics processor, etc.) 2601 that may perform various suitable actions and processes in accordance with a program stored in a read-only memory (ROM) 2602 or a program loaded from a storage device 2608 into a random access memory (RAM) 2603. The RAM 2603 also stores various programs and data necessary for the operation of the electronic device 2600. The processing device 2601, the ROM 2602, and the RAM 2603 are connected to each other through a bus 2604. An input/output (I/O) interface 2605 is also connected to the bus 2604.
Generally, the following devices may be connected to the I/O interface 2605: input devices 2606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, or the like; an output device 2607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, or the like; storage devices 2608 including, for example, magnetic tape, hard disk, and the like; and a communication device 2609. The communications apparatus 2609 may allow the electronic device 2600 to communicate wirelessly or by wire with other devices to exchange data. While fig. 26 illustrates an electronic device 2600 having various means, it is to be understood that it is not required that all illustrated means be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, the processes described above with reference to the flow diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 2609, or installed from the storage device 2608, or installed from the ROM 2602. The computer program, when executed by the processing device 2601, performs the above-described functions defined in the methods of embodiments of the present disclosure.
The electronic device provided by the embodiment of the present disclosure belongs to the same inventive concept as the method of information interaction and the tag viewing method provided by the foregoing embodiments; technical details not described in detail in this embodiment may be found in the foregoing embodiments, and this embodiment has the same beneficial effects as the foregoing embodiments.
The embodiment of the present disclosure provides a computer storage medium, on which a computer program is stored, and when the program is executed by a processor, the method for information interaction or the method for viewing a tag provided by the above embodiment is implemented.
It should be noted that the computer readable medium of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may be separate and not incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a label adding operation triggered by a user on a target object; and generating a corresponding relation according to the label adding operation. Or cause the electronic device to: receiving a label viewing operation triggered by a user on a label viewing control; and displaying at least one label according to the label viewing operation.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, an editable content display unit may also be described as an "editing unit".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [ example one ] there is provided a method of information interaction, the method comprising: receiving a label adding operation triggered by a user on a target object; and generating a corresponding relation according to the tag adding operation, wherein the corresponding relation comprises a corresponding relation between a target tag and the target object.
According to one or more embodiments of the present disclosure, [ example two ] there is provided a method of information interaction, the method further comprising: optionally, the target tag is created by a user other than the user.
According to one or more embodiments of the present disclosure [ example three ] there is provided a method of information interaction, the method further comprising: optionally, the receiving a tag addition operation triggered by a user on a target object includes: and receiving a label adding operation triggered by the label adding control of the target object by the user.
According to one or more embodiments of the present disclosure, [ example four ] there is provided a method of information interaction, the method further comprising: optionally, before receiving a label adding operation triggered by a user to a label adding control of a target object, the method further includes: receiving a selection operation of the target object triggered by the user; and displaying the label adding control according to the selected operation of the target object.
According to one or more embodiments of the present disclosure, [ example five ] there is provided a method of information interaction, the method further comprising: optionally, before receiving a label adding operation triggered by a user on a label adding control of a target object, the method further includes: receiving a display operation of the label adding control triggered by the user; and displaying the label adding control according to the display operation.
According to one or more embodiments of the present disclosure, [ example six ] there is provided a method of information interaction, the method further comprising: optionally, the obtaining the target tag determined by the user according to the tag adding operation includes: displaying at least one candidate label according to the label adding operation, wherein the at least one candidate label comprises the target label; receiving the selection operation of the user on the target label; and determining the target label according to the selected operation.
According to one or more embodiments of the present disclosure, [ example seven ] there is provided a method of information interaction, the method further comprising: optionally, the displaying at least one candidate tag according to the tag adding operation includes: displaying a tag input box according to the tag adding operation; receiving a keyword input by a user in the label input box; and displaying at least one candidate label matched with the keyword.
According to one or more embodiments of the present disclosure, [ example eight ] there is provided a method of information interaction, the method further comprising: optionally, the at least one candidate tag is arranged according to a preset rule.
According to one or more embodiments of the present disclosure, [ example nine ] there is provided a method of information interaction, the method further comprising: optionally, the preset rule includes one or more of the following: the label creating time, the label accessing time and the matching degree with the keywords.
According to one or more embodiments of the present disclosure, [ example ten ] there is provided a method of information interaction, the method further comprising: optionally, after the target tag determined by the user is obtained according to the tag adding operation, the method further includes: and displaying prompt information, wherein the prompt information is used for indicating that the target label is added to the target object.
According to one or more embodiments of the present disclosure, [ example eleven ] there is provided a method of information interaction, the method further comprising: optionally, after obtaining the target tag determined by the user according to the tag adding operation, the method further includes: receiving a label viewing operation of the target object triggered by the user; and displaying the target label according to the viewing operation.
According to one or more embodiments of the present disclosure, [ example twelve ] there is provided a method of information interaction, the method further comprising: optionally, after the target tag determined by the user is obtained according to the tag adding operation, the method further includes: displaying a content viewing control; receiving a viewing operation triggered by the content viewing control by the user; and displaying the object corresponding to the target label according to the viewing operation.
According to one or more embodiments of the present disclosure, [ example thirteen ] provides a method of information interaction, the method further comprising: optionally, the correspondence between the target tag and the target object includes: a correspondence between the target tag and an information block of the target object, the information block being generated based on the target object.
According to one or more embodiments of the present disclosure [ example fourteen ] there is provided a method of information interaction, the method further comprising: optionally, the method further comprises: jumping to an interface comprising the target object in response to the triggering operation of the information block; and/or, responding to the trigger operation of the information block, and displaying the target object.
According to one or more embodiments of the present disclosure, [ example fifteen ] there is provided a method of information interaction, the method further comprising: optionally, before receiving a tag adding operation triggered by a user on a target object, the method includes: receiving an adding operation triggered by the user on a target label, and displaying at least one candidate object according to the adding operation; determining a candidate object selected by the user from the at least one candidate object as the target object.
According to one or more embodiments of the present disclosure, [ example sixteen ] there is provided a method of information interaction, the method further comprising: optionally, the at least one candidate object is presented according to a label domain classification from which the candidate object originates.
According to one or more embodiments of the present disclosure, [ example seventeen ] there is provided a method of information interaction, the method further comprising: optionally, the method further comprises: and receiving the authority setting operation of the user on the target label, and determining the user with the operation authority for operating the target label according to the authority setting operation, wherein the operation authority comprises an editing authority and/or a viewing authority.
According to one or more embodiments of the present disclosure, [ example eighteen ] there is provided a method of information interaction, the method further comprising: optionally, the editing right and/or viewing right of the target tag includes:
all members within the organization may edit; or, added collaborators may edit and other members may view; or, only added collaborators may edit or view, wherein the collaborators are users collaboratively editing the information block of the target tag.
According to one or more embodiments of the present disclosure, [ example nineteen ] there is provided a method of information interaction, the method further comprising: optionally, in response to the user setting the permission of the target tag such that added collaborators may edit and other members may view, the method further includes:
acquiring information of the collaborators determined by the user.
In accordance with one or more embodiments of the present disclosure, [ example twenty ] there is provided a method of information interaction, the method further comprising: optionally, the method further comprises: receiving the deleting operation of the user on the target label; and deleting the target label according to the deleting operation.
In accordance with one or more embodiments of the present disclosure, [ example twenty-one ] there is provided a method of information interaction, the method further comprising: optionally, the correspondence includes a correspondence between the target tag, the information block of the target object, and the information of the user.
In accordance with one or more embodiments of the present disclosure, [ example twenty-two ] there is provided a method of information interaction, the method further comprising: optionally, the target object includes at least one of: cloud documents, schedules, tasks, instant messaging messages, and group chat IM messages.
According to one or more embodiments of the present disclosure, [ example twenty-three ] there is provided a method of information interaction, the method comprising: receiving a tag adding operation triggered by a user on a target cloud document; acquiring a target label determined by the user according to the label adding operation; and generating a corresponding relation between the target cloud document and the target label.
For relevant contents of this example, please refer to above.
In accordance with one or more embodiments of the present disclosure, [ example twenty-four ] there is provided a method of information interaction, the method comprising: receiving a label adding operation triggered by a user on a target schedule; acquiring a target label determined by the user according to the label adding operation; and generating a corresponding relation between the target label and the target schedule.
For relevant contents of this example, please refer to above.
According to one or more embodiments of the present disclosure [ example twenty five ] there is provided a method of information interaction, the method comprising: receiving a tag adding operation triggered by a target Instant Messaging (IM) message by a user; acquiring a target label determined by the user according to the label adding operation; and generating a corresponding relation between the target label and the target IM message.
For relevant contents of this example, please refer to above.
In accordance with one or more embodiments of the present disclosure, [ example twenty-six ] there is provided a method of information interaction, the method further comprising: optionally, the target IM message includes a single chat IM message or a group chat IM message.
According to one or more embodiments of the present disclosure, [ example twenty-seventh ] there is provided a method of information interaction, the method comprising: receiving a label adding operation triggered by a user to a target Instant Messaging (IM) group; acquiring a target label determined by the user according to the label adding operation; and generating a corresponding relation between the target label and the target IM group.
For relevant contents of this example, please refer to above.
In accordance with one or more embodiments of the present disclosure, [ example twenty-eight ] there is provided a method of information interaction, the method comprising: receiving a label adding operation triggered by a user on a target task; acquiring a target label determined by the user according to the label adding operation; and generating a corresponding relation between the target label and the target task.
For relevant contents of this example, please refer to above.
According to one or more embodiments of the present disclosure, [ example twenty-nine ] there is provided a tag viewing method, the method comprising: receiving a label viewing operation triggered by a user on a label viewing control; and displaying at least one label according to the label viewing operation.
According to one or more embodiments of the present disclosure, [ example thirty ] there is provided a tag viewing method, further comprising: optionally, the at least one tag comprises one or more of: a tag created by the user; a tag created by a user other than the user for which the user has viewing rights.
According to one or more embodiments of the present disclosure, [ example thirty-one ] provides a tag viewing method, further comprising: optionally, the method further comprises: receiving a screening triggering operation of the user on a screening control; and presenting a screening result corresponding to the screening operation triggered by the screening triggering operation.
According to one or more embodiments of the present disclosure, [ example thirty-two ] there is provided a tag viewing method, the method further comprising: optionally, the filtering operation is configured to filter the tag presented on the target interface according to the attribute value of the at least one attribute.
According to one or more embodiments of the present disclosure, [ example thirty-three ] there is provided a tag viewing method, the method further comprising: optionally, the at least one attribute comprises one or more of: belonging to the user, belonging to other users, and no restriction on belonging.
According to one or more embodiments of the present disclosure, [ example thirty-four ] there is provided a tag viewing method, the method further comprising: optionally, the receiving a label viewing operation triggered by the user on the label viewing control includes: receiving a label viewing operation triggered by a user on a label viewing control displayed in association with a target object; the displaying at least one tag according to the tag viewing operation includes: and displaying at least one label of the target object according to the label viewing operation.
According to one or more embodiments of the present disclosure [ example thirty-five ] there is provided a tag viewing method, the method further comprising: optionally, the at least one tag comprises a first tag, the method further comprising: and receiving the viewing operation of the first label triggered by the user, and displaying the information block of at least one object with the first label.
According to one or more embodiments of the present disclosure [ example thirty-six ] there is provided a tag viewing method, the method further comprising: optionally, the information block of the at least one object with the first label is presented according to the label domain classification from which the object is derived.
According to one or more embodiments of the present disclosure, [ example thirty-seven ] provides a label viewing method, further comprising: optionally, the tag domain comprises one or more of: tasks, schedules, cloud documents, instant messaging messages, and instant messaging groups.
According to one or more embodiments of the present disclosure, [ example thirty-eight ] provides a tag viewing method, the method further comprising: optionally, the information block of each of the at least one object is presented in the form of a list or a canvas card.
According to one or more embodiments of the present disclosure, [ example thirty-nine ] there is provided a tag viewing method, the method further comprising: optionally, the method further comprises: receiving the grouping operation of at least one object with the first label triggered by the user; grouping one or more of the at least one object according to the grouping operation.
According to one or more embodiments of the present disclosure, [ example forty ] there is provided a tag viewing method, the method further comprising: optionally, the method further comprises: and acquiring a label grouping name, wherein the label grouping name is used for generating a corresponding relation, and the corresponding relation is the corresponding relation among the first label, the grouped object and the label grouping name.
According to one or more embodiments of the present disclosure, [ example forty one ] provides a tag viewing method, further comprising: optionally, the at least one object comprises a first object, the information block of which is displayed in a first canvas card; the receiving the user-triggered grouping operation of the at least one object of the first tag comprises: receiving a drag operation of the user dragging the first canvas card to a first group in the canvas view.
According to one or more embodiments of the present disclosure [ example forty-two ] there is provided a tag viewing method, the method further comprising: optionally, the at least one object comprises a first object, the method further comprising: responding to a triggering operation of an information block of the first object, and jumping to an interface of the first object associated with the information block; and/or displaying the first object in response to a trigger operation on the information block of the first object.
According to one or more embodiments of the present disclosure, [ example forty-three ] provides a tag viewing method, the method further comprising: optionally, the at least one object of the first tag comprises a first object, the method further comprising: and in response to receiving the label deletion operation of the user on the first object, deleting the corresponding relation between the first object and the first label.
According to one or more embodiments of the present disclosure, [ example forty-four ] there is provided a tag viewing method, the method further comprising: optionally, the displaying at least one tag according to the tag viewing operation includes: and displaying the at least one label and creator information of each label in the at least one label according to the label viewing operation.
According to one or more embodiments of the present disclosure, [ example forty-five ] provides a label viewing method, the method further comprising: optionally, the at least one tag includes a second tag, the second tag being a tag that is created by a user other than the user and for which the user does not have viewing rights; the method further comprises: receiving a permission application operation triggered by the user on the second label; and sending a permission application request to a user corresponding to the second label according to the permission application operation, wherein the permission application request is used for applying for the viewing permission of the second label.
According to one or more embodiments of the present disclosure, [ example forty-six ] there is provided a tag viewing method, the method further comprising: optionally, the method further comprises: receiving a collection operation triggered by the user on a target tag in the at least one tag; and collecting the target tags according to the collection operation.
According to one or more embodiments of the present disclosure, [ example forty-seven ] provides a label viewing method, further comprising: optionally, the method further comprises: receiving a viewing operation triggered by the user on the collected label control; and displaying the collected tags in the at least one tag according to the viewing operation.
According to one or more embodiments of the present disclosure, [ example forty-eight ] provides a label viewing method, the method further comprising: optionally, the at least one tag is sorted according to a preset rule.
According to one or more embodiments of the present disclosure, [ example forty-nine ] there is provided a tag viewing method, the method further comprising: optionally, the preset rules include one or more of: tag access time, tag creation time, and priority of the tag.
According to one or more embodiments of the present disclosure, [ example fifty ] provides a tag viewing method, the method comprising: receiving a tag viewing operation triggered by a target user on a tag viewing control of a target cloud document; and displaying at least one tag of the target cloud document according to the tag viewing operation.
For the relevant content of this example, refer to the description above.
According to one or more embodiments of the present disclosure, [ example fifty-one ] provides a tag viewing method, the method comprising: receiving a tag viewing operation triggered by a target user on a tag viewing control of a target schedule; and displaying at least one tag of the target schedule according to the tag viewing operation.
For the relevant content of this example, refer to the description above.
According to one or more embodiments of the present disclosure, [ example fifty-two ] provides a tag viewing method, the method comprising: receiving a tag viewing operation triggered by a target user on a tag viewing control of a target task; and displaying at least one tag of the target task according to the tag viewing operation.
For the relevant content of this example, refer to the description above.
According to one or more embodiments of the present disclosure, [ example fifty-three ] provides a tag viewing method, the method comprising: receiving a tag viewing operation triggered by a target user on a tag viewing control of a target instant messaging (IM) message; and displaying at least one tag of the target IM message according to the tag viewing operation.
For the relevant content of this example, refer to the description above.
According to one or more embodiments of the present disclosure, [ example fifty-four ] provides a tag viewing method, the method comprising: receiving a tag viewing operation triggered by a target user on a tag viewing control of a target instant messaging (IM) group; and displaying at least one tag of the target IM group according to the tag viewing operation.
For the relevant content of this example, refer to the description above.
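Examples fifty through fifty-four differ only in the object the tag viewing control is attached to, so a single handler parameterized by object domain can cover cloud documents, schedules, tasks, IM messages, and IM groups. The enum, the key format, and the in-memory store below are illustrative assumptions, not the disclosed implementation.

```typescript
// Assumed object domains; they mirror the tag domains named in the disclosure,
// but the enum and store below are illustrative only.
type TagDomain = "cloudDocument" | "schedule" | "task" | "imMessage" | "imGroup";

interface TagViewRequest {
  domain: TagDomain; // what kind of target object the tag viewing control is on
  objectId: string;  // the target cloud document / schedule / task / IM message / IM group
  userId: string;    // the target user who triggered the viewing operation
}

// Hypothetical store of correspondences keyed by "<domain>:<objectId>".
const correspondences = new Map<string, string[]>([
  ["cloudDocument:doc-1", ["project-alpha", "q3-review"]],
  ["imGroup:group-7", ["project-alpha"]],
]);

// Handler for the tag viewing operation of examples fifty through fifty-four:
// look up and return the tags of the target object so they can be displayed.
function onTagViewTriggered(req: TagViewRequest): string[] {
  return correspondences.get(`${req.domain}:${req.objectId}`) ?? [];
}
```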
According to one or more embodiments of the present disclosure, [ example fifty-five ] provides an apparatus for information interaction, the apparatus comprising: a receiving module, configured to receive a tag adding operation triggered by a user on a target object; and a generating module, configured to generate a corresponding relation according to the tag adding operation, wherein the corresponding relation comprises a corresponding relation between a target tag and the target object.
According to one or more embodiments of the present disclosure, [ example fifty-six ] provides a tag viewing apparatus, the apparatus comprising: a receiving module, configured to receive a tag viewing operation triggered by a user on a tag viewing control; and a display module, configured to display at least one tag according to the tag viewing operation.
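Examples fifty-five and fifty-six describe the two apparatuses in terms of receiving, generating, and display modules; the following is a hedged object-oriented sketch of that decomposition, in which the class and method names are assumptions (the tag viewing apparatus would reuse a receiving module analogous to the one shown).

```typescript
// Illustrative module decomposition only; the disclosure does not tie the
// apparatuses to any particular class structure.
interface Correspondence {
  tagId: string;
  objectId: string;
}

// Information interaction apparatus (example fifty-five).
class ReceivingModule {
  // Receives the tag adding operation triggered by a user on a target object.
  receiveTagAdd(objectId: string, tagId: string): Correspondence {
    return { objectId, tagId };
  }
}

class GeneratingModule {
  private correspondences: Correspondence[] = [];

  // Generates and stores the corresponding relation between tag and object.
  generate(op: Correspondence): Correspondence {
    this.correspondences.push(op);
    return op;
  }
}

// Tag viewing apparatus (example fifty-six).
class DisplayModule {
  // Displays at least one tag according to the tag viewing operation.
  display(tags: string[]): void {
    for (const t of tags) console.log(`tag: ${t}`);
  }
}
```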
According to one or more embodiments of the present disclosure, [ example fifty-seven ] provides an electronic device, comprising: one or more processors; and a memory for storing one or more programs, wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the information interaction method or the tag viewing method according to any embodiment of the present application.
According to one or more embodiments of the present disclosure, [ example fifty-eight ] there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of information interaction or a method of tag viewing as described in any of the embodiments of the present application.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combination of features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (58)

1. A method of information interaction, the method comprising:
receiving a label adding operation triggered by a user on a target object;
and generating a corresponding relation according to the tag adding operation, wherein the corresponding relation comprises a corresponding relation between a target tag and the target object.
2. The method of claim 1, wherein the target tag is created by a user other than the user.
3. The method of claim 1, wherein receiving a user-triggered tag add operation to a target object comprises:
and receiving a label adding operation triggered by the user on the label adding control of the target object.
4. The method of claim 3, wherein prior to receiving a user-triggered tag add operation to a tag add control of a target object, the method further comprises:
receiving a selection operation of the target object triggered by the user;
and displaying the label adding control according to the selected operation of the target object.
5. The method of claim 3, wherein prior to receiving a user-triggered tag add operation to a tag add control of a target object, the method further comprises:
receiving a display operation of the label adding control triggered by the user;
and displaying the label adding control according to the display operation.
6. The method of claim 1, wherein the obtaining the target tag determined by the user according to the tag adding operation comprises:
displaying at least one candidate tag according to the tag adding operation, wherein the at least one candidate tag comprises the target tag;
receiving the selection operation of the user on the target label;
and determining the target label according to the selected operation.
7. The method of claim 6, wherein displaying at least one candidate tag according to the tag add operation comprises:
displaying a tag input box according to the tag adding operation;
receiving a keyword input by a user in the label input box;
and displaying at least one candidate label matched with the keyword.
8. The method of claim 7, wherein the at least one candidate tag is arranged according to a preset rule.
9. The method of claim 8, wherein the preset rules comprise one or more of:
the label creating time, the label accessing time and the matching degree with the keywords.
10. The method of claim 1, wherein after obtaining the user-determined target tag according to the tag add operation, the method further comprises:
and displaying prompt information, wherein the prompt information is used for indicating that the target label is added to the target object.
11. The method of claim 1, wherein after obtaining the user-determined target tag according to the tag add operation, the method further comprises:
receiving a label viewing operation of the target object triggered by the user;
and displaying the target label according to the viewing operation.
12. The method of claim 1, wherein after obtaining the user-determined target tag according to the tag add operation, the method further comprises:
displaying a content viewing control;
receiving a viewing operation triggered by the user on the content viewing control;
and displaying the object corresponding to the target label according to the viewing operation.
13. The method of claim 1, wherein the correspondence between the target tag and the target object comprises:
a correspondence between the target tag and an information block of the target object, the information block being generated based on the target object.
14. The method of claim 13, further comprising:
jumping to an interface comprising the target object in response to a triggering operation on the information block; and/or,
displaying the target object in response to the triggering operation on the information block.
15. The method of claim 1, prior to receiving a tag add operation triggered by a user on a target object, comprising:
receiving an adding operation triggered by the user on a target label, and displaying at least one candidate object according to the adding operation;
determining a candidate object selected by the user from the at least one candidate object as the target object.
16. The method of claim 15, wherein the at least one candidate object is presented in a category according to a tag domain from which the candidate object originated.
17. The method of claim 1, further comprising:
and receiving the authority setting operation of the user on the target label, and determining the user with the operation authority for operating the target label according to the authority setting operation.
18. The method of claim 17, wherein the operational rights of the target tag comprise:
all members within an organization can edit; or,
added collaborators can edit and other members can view; or,
only added collaborators can edit or view, wherein the collaborators are users who collaboratively edit the information block of the target tag.
19. The method of claim 18, wherein, in response to the user setting the permission of the target tag such that added collaborators can edit and other members can view, the method further comprises:
and acquiring the information of the collaborators determined by the user.
20. The method of claim 1, further comprising:
receiving the deleting operation of the user on the target label;
and deleting the target label according to the deleting operation.
21. The method of claim 1, wherein the correspondence comprises a correspondence between the target tag, an information block of the target object, and information of the user.
22. The method according to any one of claims 1-21, wherein the target object comprises at least one of:
cloud documents, schedules, tasks, instant messaging messages, and group chat IM messages.
23. A method of information interaction, the method comprising:
receiving a tag adding operation triggered by a user on a target cloud document;
acquiring a target label determined by the user according to the label adding operation;
and generating a corresponding relation between the target cloud document and the target label.
24. A method of information interaction, the method comprising:
receiving a label adding operation triggered by a user on a target schedule;
acquiring a target label determined by the user according to the label adding operation;
and generating a corresponding relation between the target label and the target schedule.
25. A method of information interaction, the method comprising:
receiving a tag adding operation triggered by a user on a target Instant Messaging (IM) message;
acquiring a target label determined by the user according to the label adding operation;
and generating a corresponding relation between the target label and the target IM message.
26. The method of claim 25, wherein the target IM message comprises a single chat IM message or a group chat IM message.
27. A method of information interaction, the method comprising:
receiving a label adding operation triggered by a user on a target Instant Messaging (IM) group;
acquiring a target label determined by the user according to the label adding operation;
and generating a corresponding relation between the target label and the target IM group.
28. A method of information interaction, the method comprising:
receiving a label adding operation triggered by a user on a target task;
acquiring a target label determined by the user according to the label adding operation;
and generating a corresponding relation between the target label and the target task.
29. A method of tag viewing, the method comprising:
receiving a label viewing operation triggered by a user on a label viewing control;
and displaying at least one label according to the label viewing operation.
30. The method of claim 29, wherein the at least one tag comprises one or more of:
a label created by the user;
a tag created by a user other than the user and the user has viewing rights.
31. The method of claim 29, further comprising:
receiving a screening triggering operation of the user on a screening control;
and presenting a screening result corresponding to the screening operation triggered by the screening triggering operation.
32. The method of claim 31, wherein the filtering operation is configured to filter the tags presented on the target interface according to the attribute value of the at least one attribute.
33. The method of claim 32, wherein the at least one attribute comprises one or more of:
belonging to the user, belonging to other users, and unrestricted belonging.
34. The method of claim 29, wherein receiving a user-triggered label viewing operation for a label viewing control comprises:
receiving a label viewing operation triggered by a user on a label viewing control displayed in association with a target object;
the displaying at least one tag according to the tag viewing operation includes:
and displaying at least one label of the target object according to the label viewing operation.
35. The method of claim 29, wherein the at least one tag comprises a first tag, the method further comprising:
and receiving the viewing operation of the first label triggered by the user, and displaying the information block of at least one object with the first label.
36. The method of claim 35, wherein the information blocks of the at least one object having the first label are presented according to a label domain classification from which the object originates.
37. The method of claim 36, wherein the tag field comprises one or more of:
tasks, schedules, cloud documents, instant messaging messages, and instant messaging groups.
38. The method of claim 34 or 36, wherein the information block of each of the at least one object is presented in the form of a list or a canvas card.
39. The method of claim 35 or 36, further comprising:
receiving the grouping operation of at least one object with the first label triggered by the user;
grouping one or more of the at least one object according to the grouping operation.
40. The method of claim 39, further comprising:
and acquiring a label grouping name, wherein the label grouping name is used for generating a corresponding relation, and the corresponding relation is the corresponding relation among the first label, the grouped object and the label grouping name.
41. The method of claim 39, wherein the at least one object comprises a first object, an information block of which is displayed in a first canvas card;
the receiving the user-triggered grouping operation of the at least one object of the first tag comprises:
receiving a drag operation of the user dragging the first canvas card to a first group in the canvas view.
42. The method of any one of claims 35-37, wherein the at least one object comprises a first object, the method further comprising:
responding to a trigger operation on an information block of the first object, and jumping to an interface of the first object associated with the information block; and/or,
and displaying the first object in response to the triggering operation of the information block of the first object.
43. The method of any one of claims 35-37, wherein the at least one object of the first tag comprises a first object, the method further comprising:
and in response to receiving the label deletion operation of the user on the first object, deleting the corresponding relation between the first object and the first label.
44. The method of claim 30, wherein displaying at least one tab in accordance with the tab viewing operation comprises:
and displaying the at least one label and creator information of each label in the at least one label according to the label viewing operation.
45. The method of claim 30, wherein the at least one tag comprises a second tag, the second tag being a tag created by a user other than the user and the user does not have viewing rights;
the method further comprises the following steps:
receiving a permission application operation triggered by the user on the second label;
and sending an authority application request to a user corresponding to the second label according to the authority application operation, wherein the authority application request is used for applying for the viewing authority of the second label.
46. The method of claim 29, further comprising:
receiving a collection operation triggered by the user on a target label in the at least one label;
and collecting the target tags according to the collection operation.
47. The method of claim 29, further comprising:
receiving a viewing operation triggered by the user on the collected label control;
and displaying the collected tags in the at least one tag according to the viewing operation.
48. The method of claim 29, wherein the at least one tag is ordered according to a predetermined rule.
49. The method of claim 48, wherein the preset rules include one or more of:
tag access time, tag creation time, and priority of the tag.
50. A method of tag viewing, the method comprising:
receiving a label viewing operation triggered by a target user on a label viewing control of a target cloud document;
and displaying at least one label of the target cloud document according to the label viewing operation.
51. A method of tag viewing, the method comprising:
receiving a label viewing operation triggered by a target user on a label viewing control of a target schedule;
and displaying at least one label of the target schedule according to the label viewing operation.
52. A method of tag viewing, the method comprising:
receiving a label viewing operation triggered by a target user on a label viewing control of a target task;
and displaying at least one label of the target task according to the label viewing operation.
53. A method of tag viewing, the method comprising:
receiving a label viewing operation triggered by a target user on a label viewing control of a target Instant Messaging (IM) message;
and displaying at least one tag of the target IM message according to the tag viewing operation.
54. A method for viewing a label, the method comprising:
receiving a label viewing operation triggered by a target user on a label viewing control of a target instant messaging IM group;
and displaying at least one tag of the target IM group according to the tag viewing operation.
55. An apparatus for information interaction, comprising:
the receiving module is used for receiving a label adding operation triggered by a user on a target object;
and the generating module is used for generating a corresponding relation according to the label adding operation, wherein the corresponding relation comprises a corresponding relation between a target label and the target object.
56. A label viewing device, comprising:
the receiving module is used for receiving the label viewing operation triggered by the user on the label viewing control;
and the display module is used for displaying at least one label according to the label viewing operation.
57. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a method of information interaction as claimed in any one of claims 1-28, or a method of tag viewing as claimed in any one of claims 29-54.
58. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out a method of information interaction according to any one of claims 1 to 28, or a method of tag viewing according to any one of claims 29 to 54.
CN202110217727.5A 2021-02-26 2021-02-26 Information interaction method, label viewing method and device Pending CN114967992A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202110217727.5A CN114967992A (en) 2021-02-26 2021-02-26 Information interaction method, label viewing method and device
JP2023552165A JP2024515424A (en) 2021-02-26 2022-02-25 Information processing, information interaction, label check, information display method and device
PCT/CN2022/077874 WO2022179598A1 (en) 2021-02-26 2022-02-25 Information processing, information interaction, tag viewing and information display method and apparatus
US18/456,062 US20240061959A1 (en) 2021-02-26 2023-08-25 Information processing, information interaction, tag viewing and information display method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110217727.5A CN114967992A (en) 2021-02-26 2021-02-26 Information interaction method, label viewing method and device

Publications (1)

Publication Number Publication Date
CN114967992A true CN114967992A (en) 2022-08-30

Family

ID=82973132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110217727.5A Pending CN114967992A (en) 2021-02-26 2021-02-26 Information interaction method, label viewing method and device

Country Status (1)

Country Link
CN (1) CN114967992A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130191387A1 (en) * 2012-01-20 2013-07-25 Canon Kabushiki Kaisha Information processing apparatus, method for controlling the same, and storage medium storing program for displaying a tag added to a content file
CN104317933A (en) * 2014-10-31 2015-01-28 北京思特奇信息技术股份有限公司 Authority control tag display method and system
CN105227560A (en) * 2015-10-14 2016-01-06 浪潮集团有限公司 A kind of method of control of authority and device
CN105808782A (en) * 2016-03-31 2016-07-27 广东小天才科技有限公司 Picture label adding method and device
CN109690520A (en) * 2016-08-31 2019-04-26 微软技术许可有限责任公司 Pass through logical tab shared document
CN106600223A (en) * 2016-12-09 2017-04-26 奇酷互联网络科技(深圳)有限公司 Schedule creation method and device
KR20180090415A (en) * 2017-02-02 2018-08-13 주식회사 에이치나인 Device and method for generating or viewing design guide file
CN109040329A (en) * 2018-06-11 2018-12-18 平安科技(深圳)有限公司 Determination method, terminal device and the medium of the contact tag
WO2019237541A1 (en) * 2018-06-11 2019-12-19 平安科技(深圳)有限公司 Method and apparatus for determining contact label, and terminal device and medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116894133A (en) * 2023-08-28 2023-10-17 深圳有咖互动科技有限公司 Project service page display method, device, equipment and computer readable medium
CN116894133B (en) * 2023-08-28 2023-11-24 深圳有咖互动科技有限公司 Project service page display method, device, equipment and computer readable medium

Similar Documents

Publication Publication Date Title
US20190286462A1 (en) Systems, methods, and media for presenting interactive checklists
US10324591B2 (en) System for creating and retrieving contextual links between user interface objects
US10671245B2 (en) Collection and control of user activity set data and activity set user interface
EP4372603A1 (en) Managing comments in a cloud-based environment
US9614880B1 (en) Methods for real-time notifications in an activity stream
US20140281850A1 (en) System and method of content stream utilization
CN111512328A (en) Collaborative document access recording and management
US20210350303A1 (en) Task list for tasks created at a third-party source
US20240061959A1 (en) Information processing, information interaction, tag viewing and information display method and apparatus
CN110622187B (en) Task related classification, application discovery and unified bookmarking for application manager
US20230055241A1 (en) Digital processing systems and methods for external events trigger automatic text-based document alterations in collaborative work systems
US20230297768A1 (en) System, method, and apparatus for snapshot sharding
WO2022153122A1 (en) Systems, methods, and devices for enhanced collaborative work documents
CN113574555A (en) Intelligent summarization based on context analysis of auto-learning and user input
WO2020023064A1 (en) Intelligent home screen of cloud-based content management platform
US20230205905A1 (en) Referencing a document in a virtual space
Chang et al. Tabs. do: Task-centric browser tab management
US20180234374A1 (en) Sharing of bundled content
CN114967992A (en) Information interaction method, label viewing method and device
US20240195772A1 (en) Information interaction/processing method, tag deletion method, and schedule creation method and device
US11888631B2 (en) Document management in a communication platform
CN115733812A (en) Information interaction method, device, equipment and medium
CN114969302A (en) Label searching method and device
CN115061601A (en) Electronic document processing method and device, terminal and storage medium
CN115129405A (en) Information interaction method, label deletion method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination