CN110704647A - Content processing method and device - Google Patents

Content processing method and device

Info

Publication number
CN110704647A
Authority
CN
China
Prior art keywords
text
multimedia file
user
option
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810654760.2A
Other languages
Chinese (zh)
Other versions
CN110704647B (en)
Inventor
刘叶舟
Current Assignee
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd
Priority to CN201810654760.2A
Priority to PCT/CN2018/123798 (WO2019242274A1)
Publication of CN110704647A
Application granted
Publication of CN110704647B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/725 Cordless telephones

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a content processing method and a content processing device. The content processing method comprises the following steps: in response to a selection operation on a text, displaying a function option control, wherein the function option control comprises at least one operation option; in response to the triggering of any operation option, generating a multimedia file comprising the text; and, according to the triggered operation option, sending the multimedia file to a target object and/or storing the multimedia file. In this way, when a user cannot copy, share, or save a text, the text to be shared or saved is first converted into a multimedia file, and the multimedia file is then shared to the target object and/or saved in the local terminal, so that the text can still be shared or saved.

Description

Content processing method and device
Technical Field
The present application relates to the field of internet technologies, and in particular, to a content processing method and apparatus.
Background
With the increasing popularization of the internet, and especially the mobile internet, more and more users browse information using various application programs. In some scenarios, a user also wants to share browsed information with other users or to store it, for example by copying a browsed passage of text and then sharing or saving the copy.
However, some applications restrict the user from copying text. In a dictionary application, for example, copying may be forbidden because of copyright restrictions, so the user cannot copy the text in order to share or save it.
Disclosure of Invention
In view of this, embodiments of the present application provide a content processing method and apparatus to solve the technical problem that a user cannot share or store content by copying text in the conventional manner.
In order to solve the above problem, the technical solution provided by the embodiment of the present application is as follows:
a method of content processing, the method comprising:
in response to a selection operation on a text, displaying a function option control, wherein the function option control comprises at least one operation option;
generating a multimedia file comprising the text in response to the triggering of any one of the operation options;
and sending the multimedia file to a target object according to the triggered operation option, and/or storing the multimedia file.
In one possible implementation, the generating a multimedia file including the text includes:
identifying the content of the text according to the selection operation;
and generating a multimedia file comprising the text from the recognized content of the text in a preset display style.
In a possible implementation manner, the generating a multimedia file comprising the text from the recognized content of the text in a preset display style includes:
editing the identified text content according to a preset static display style to generate a static image comprising the text;
or, alternatively,
editing the identified text content according to a preset dynamic display style to generate a dynamic image or video comprising the text.
In one possible implementation, the generating a multimedia file including the text includes:
taking a screenshot of an original display interface comprising the text, and generating an interface screenshot;
and processing the interface screenshot to generate a multimedia file comprising the text.
In a possible implementation manner, the processing the interface screenshot to generate a multimedia file including the text includes:
and responding to the cutting operation of the interface screenshot, cutting the interface screenshot, and generating a static image comprising the text.
In a possible implementation manner, the processing the interface screenshot to generate a multimedia file including the text includes:
and identifying the display position of the text on the interface screenshot, and cutting the interface screenshot according to the display position to generate a static image comprising the text.
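The position-based cutting described in this implementation can be sketched roughly as follows. This is an illustrative sketch only: the pixel-grid representation of the screenshot and the `(left, top, right, bottom)` bounding-box format are assumptions for the example, not the patent's implementation, and a real terminal would operate on a platform image object.

```python
def crop_screenshot(pixels, box):
    """Crop a screenshot (row-major 2D list of pixels) to box =
    (left, top, right, bottom), with right/bottom exclusive as in
    common image APIs."""
    left, top, right, bottom = box
    return [row[left:right] for row in pixels[top:bottom]]

# A 4x4 "screenshot" where the selected text occupies the 2x2 centre region.
shot = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]

# Cutting according to the identified display position keeps only the text.
text_region = crop_screenshot(shot, (1, 1, 3, 3))  # [[1, 1], [1, 1]]
```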
In a possible implementation manner, when the triggered operation option is a sharing option, the method further includes:
and triggering and displaying a target object selection interface, and acquiring the selected target object.
A content processing apparatus, the apparatus comprising:
the display unit is used for displaying a function option control in response to a selection operation on a text, and the function option control comprises at least one operation option;
the generating unit is used for responding to the trigger of any one of the operation options and generating a multimedia file comprising the text;
and the processing unit is used for sending the multimedia file to a target object according to the triggered operation option and/or storing the multimedia file.
In a possible implementation manner, the generating unit specifically includes:
the identification subunit is used for identifying the content of the text according to the selected operation;
and the first generation subunit is used for generating the multimedia file comprising the text according to the recognized content of the text in a preset display style.
In a possible implementation manner, the first generating subunit is specifically configured to edit the identified content of the text according to a preset static display style, and generate a static image including the text; or editing the identified text content according to a preset dynamic display style to generate a dynamic image or video comprising the text.
In a possible implementation manner, the generating unit specifically includes:
the screenshot subunit is used for screenshot the original display interface comprising the text and generating an interface screenshot;
and the second generation subunit is used for processing the interface screenshot and generating a multimedia file comprising the text.
In a possible implementation manner, the second generating subunit is specifically configured to perform, in response to a clipping operation on the interface screenshot, clipping processing on the interface screenshot, and generate a static image including the text.
In a possible implementation manner, the second generating subunit is specifically configured to identify a display position of the text in the interface screenshot, perform clipping processing on the interface screenshot according to the display position, and generate a static image including the text.
In a possible implementation manner, when the triggered operation option is a sharing option, the apparatus further includes:
and the acquisition unit is used for triggering and displaying the target object selection interface and acquiring the selected target object.
An apparatus for content processing, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs include instructions for:
in response to a selection operation on a text, displaying a function option control, wherein the function option control comprises at least one operation option;
generating a multimedia file comprising the text in response to the triggering of any one of the operation options;
and sending the multimedia file to a target object according to the triggered operation option, and/or storing the multimedia file.
In one possible implementation, when the one or more programs are executed by the one or more processors, the apparatus is caused to perform the content processing method described above.
Therefore, the embodiment of the application has the following beneficial effects:
In the embodiments of the present application, when a user wants to share and/or save a text, the terminal device displays a function option control in response to the user's selection operation on the text, generates a multimedia file comprising the text according to the user's triggering of an operation option in the control, and, according to the triggered operation option, sends the multimedia file to a target object and/or saves it. Thus, when the user cannot copy, share, or save the text directly, the text to be shared or saved is converted into a multimedia file, and the multimedia file is then shared to the target object and/or saved in the local terminal, so that the text is still shared or saved.
Drawings
Fig. 1 is a schematic diagram of a framework of an exemplary application scenario provided in an embodiment of the present application;
fig. 2 is a flowchart of a content processing method according to an embodiment of the present application;
fig. 3 is a flowchart of a static multimedia file generation method according to an embodiment of the present application;
fig. 4 is an image effect diagram generated by using english characters according to an embodiment of the present disclosure;
fig. 5 is a flowchart of a dynamic multimedia file generation method according to an embodiment of the present application;
fig. 6 is a flowchart of a method for generating a static image by a terminal according to a user clipping operation according to an embodiment of the present application;
fig. 7 is a flowchart of a method for generating a static image by automatic cropping at a terminal according to an embodiment of the present application;
fig. 8 is a flowchart of a method for sharing text content according to an embodiment of the present application;
fig. 9 is a flowchart of a method for saving text content according to an embodiment of the present application;
fig. 10 is a block diagram of a content processing apparatus according to an embodiment of the present application;
fig. 11 is a block diagram of another content processing apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a server device according to an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanying the drawings are described in detail below.
The inventor finds that, in traditional text processing, when a user wants to share and/or save a certain text, the user can copy the text of interest using the copy function of the terminal system and then share the copied text with friends or save it in the local terminal. However, some application programs restrict the user from copying text; in a dictionary application, for example, copying is not allowed because of copyright limitations, so the user cannot copy the corresponding text and therefore cannot share or save it.
Based on this, the embodiments of the present application provide a content processing method and device which respond to the user's selection operation on a text not by copying the selected text, but by converting it into a multimedia file according to an operation option selected by the user, and then sharing the multimedia file with a target object and/or storing it locally in the terminal, thereby realizing the sharing or storage of the text.
Referring to fig. 1, the figure is a schematic diagram of a framework of an exemplary application scenario provided in an embodiment of the present application. The content processing method provided by the embodiments of the present application may be applied to the terminal 10. The server 20 may receive, from the terminal 10, the multimedia file and the identifier of the target object (such as the terminal 30), so as to share the multimedia file with the terminal 30.
In practical application, a user may select a text to be shared or saved using the terminal 10. The terminal 10 displays the function option control in response to the user's selection operation, and the user then triggers an operation option in the control, for example "share". The terminal 10 generates a multimedia file including the selected text in response to this trigger and sends it to the server 20 according to the target object selected by the user, and the server 20 forwards the multimedia file to the terminal 30, thereby completing the sharing of the text content.
Those skilled in the art will appreciate that the block diagram shown in fig. 1 is only one example in which embodiments of the present application may be implemented. The scope of applicability of the embodiments of the present application is not limited in any way by this framework.
It should be noted that the terminal 10 or the terminal 30 in the embodiments of the present application may be any user equipment, whether existing or developed in the future, capable of interacting with the server 20 through any form of wired and/or wireless connection (e.g., Wi-Fi, LAN, cellular, coaxial cable, etc.), including but not limited to: smartphones, non-smartphones, tablets, laptop personal computers, desktop personal computers, minicomputers, midrange computers, mainframe computers, and the like. It should also be noted that the server 20 in the embodiments of the present application may be any existing or future device capable of providing an application service of information recommendation to a user. The embodiments of the present application are not limited in any way in this respect.
The content processing method provided by the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 2, which is a flowchart of a content processing method provided in an embodiment of the present application, the method may include the following steps:
s201: and responding to the selected operation of the text, and displaying a function option control.
When a user needs to share or store some text in an application program, the user first triggers a selection operation on the text, for example by long-pressing the text on the terminal touch screen, and the terminal displays the corresponding function option control on the touch screen in response to the user's selection operation.
The function option control may include at least one operation option, for example, "share", "save", "share and save", "share to circle of friends", "share to microblog", or the like. The operation options can be set in two ways. In the first setting, an operation option includes only a sharing and/or saving action instruction: when the function option control includes "share", the terminal performs the sharing operation upon the user's triggering of that option; when it includes "save", the terminal performs the saving operation; and when it includes "share and save", the terminal saves the file locally while sharing. In the second setting, an operation option includes not only the sharing and/or saving action instruction but also the share target, for example "share to circle of friends" or "share to microblog"; when the function option control includes "share to circle of friends", the terminal performs the sharing operation in response to the user's trigger, with "circle of friends" as the target object. The operation options in the function option control may be displayed to the user in a pop-up bubble or in an operation menu.
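The two option settings described above can be sketched as a small dispatcher. This is a minimal illustration with assumed English option labels (the patent does not prescribe any label format): a bare option carries only the share/save actions, while a composite "share to ..." option also embeds the target object, so no separate target-selection step is needed.

```python
def parse_option(option):
    """Return (share, save, target) for a triggered operation option.

    A "share to X" option embeds its target object X (second setting);
    other options carry only action instructions (first setting), and the
    target, if sharing, is chosen later via a selection interface.
    """
    if option.startswith("share to "):
        return True, False, option[len("share to "):]
    share = "share" in option
    save = "save" in option
    return share, save, None

# First setting: action only; target selected afterwards.
parse_option("share and save")              # (True, True, None)
# Second setting: action plus embedded target.
parse_option("share to circle of friends")  # (True, False, "circle of friends")
```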
In this embodiment, the user's selection operation on the text can fall into two cases. In the first case, the user can select a specific text. For example, the user looks up "harmony" in the Oxford dictionary using the terminal; the word "harmony" and example sentences using the word are displayed on the terminal's display interface, and the user can select an example sentence. In specific implementation, when the user wants to select an example sentence, the user can long-press the interface displaying the sentence. The terminal generates a selection cursor in the current interface according to this trigger; the user moves the cursor to the start position of the sentence and drags it to the end position, the terminal selects the sentence accordingly, and the function option control is then displayed in the interface for the user to choose from.
In the second case, the user cannot select a specific text. The terminal may treat a long-press operation of the user on the interface containing the text as the selection operation, and then display the function option control in the interface for the user to choose from. For example, the user long-presses the interface displaying the word "harmony" and its example sentences; the terminal interprets the long press as a selection operation and then displays at least one operation option in the interface, for example "share". When the user triggers the share option, the terminal executes S202 according to the user's trigger operation.
S202: in response to a trigger for any of the operation options, a multimedia file including the text is generated.
In this embodiment, after acquiring the user's trigger operation on a certain operation option, the terminal may generate a multimedia file including the text. The multimedia file may take the form of an image, a video, or another medium.
As can be seen from step S201, the selection operation may include two different cases, and for different selection operations, when the terminal generates the multimedia file, the specific generation manner may be different, and different forms of generating the multimedia file including the text will be described in detail in the following embodiments.
In this embodiment, when the operation option includes only the action instruction and not the target object, and the triggered operation option is the share option, the method may further include: triggering the display of a target object selection interface and acquiring the selected target object, so that the terminal executes S203 according to the selected target object.
When the displayed function option control includes at least a share option, the terminal displays a target object selection interface to the user upon the user triggering the share option. The interface may include options such as "share to circle of friends" and "share to microblog", and the terminal acquires the selected target object according to the user's trigger operation on the selection interface. For example, if the user triggers "share to microblog", the terminal determines that the target object selected by the user is "microblog".
In addition, depending on how the operation options are set, the time at which the terminal generates the multimedia file may also differ. For example, when the operation option includes a sharing action instruction, the terminal may generate the multimedia file including the text upon the user's triggering of the share option, then display the target object selection interface to the user, acquire the selected target object, and execute S203; alternatively, the terminal may first display the target object selection interface upon the trigger, acquire the selected target object, then generate the multimedia file including the text, and execute S203.
For example, suppose the user wants to share a selected text with WeChat friend A. The terminal device converts the text content into a multimedia file according to the share option triggered by the user, and then jumps to a social media platform selection interface containing options such as "share to circle of friends", "share to WeChat friend", and "share to QQ friend". Based on the user selecting "share to WeChat friend", the terminal jumps to the WeChat friend selection interface, which here serves as the target object selection interface. The user selects friend A, i.e. the target object is A; the terminal jumps to the conversation window between the user and A and sends the generated multimedia file to A. Alternatively, the terminal device jumps to the social media platform selection interface according to the triggered share option, the user selects "share to WeChat friend", the terminal jumps to the WeChat friend selection interface accordingly, and, based on the user's operation of selecting friend A, jumps to the conversation window between the user and A, generates the multimedia file from the selected text, and then sends it to A.
It can be understood that, when the user selects "share to friend circle" or "share to microblog" on the social media platform selection interface, the social media platform selection interface is a target object selection interface, and "friend circle" or "microblog" is a target object.
S203: and sending the multimedia file to the target object according to the triggered operation option, and/or storing the multimedia file.
In this embodiment, the terminal shares and/or stores the multimedia file according to a trigger operation of the user on a certain operation option. And when the user triggers the sharing option, the terminal sends the multimedia file to the target object according to the target object selected by the user. When the user triggers the "save" option, the terminal saves the multimedia file in the local terminal. When the user triggers the option of sharing and saving, the terminal sends the multimedia file to the target object according to the target object selected by the user, and meanwhile, the multimedia file is saved in the local terminal.
In specific implementation, for example, the terminal sends the generated multimedia file and the selected friend identifier to the server according to the "share to WeChat friend" option triggered by the user, and the server sends the multimedia file to the terminal corresponding to the selected friend according to the friend identifier. The friend identifier uniquely represents the corresponding friend and may be, for example, the friend's user name or nickname.
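The server-side relay just described can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the registry of terminals keyed by friend identifier and the inbox representation are assumptions made for the example.

```python
class RelayServer:
    """Toy model of the server that forwards a multimedia file to the
    terminal registered under a friend identifier (user name, nickname)."""

    def __init__(self):
        self.terminals = {}  # friend identifier -> that terminal's inbox

    def register(self, friend_id):
        self.terminals[friend_id] = []

    def share(self, media_file, friend_id):
        """Deliver media_file to the terminal identified by friend_id.
        Returns False when the identifier is unknown and nothing is sent."""
        inbox = self.terminals.get(friend_id)
        if inbox is None:
            return False
        inbox.append(media_file)
        return True

server = RelayServer()
server.register("friend_A")
# The sharing terminal uploads the file together with the friend identifier.
server.share({"type": "image", "data": b"..."}, "friend_A")
```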
In addition, for a save option triggered by the user, the terminal can save the generated multimedia file to the local system album, so that the user can view the text information, saved as pictures or videos, through the system album. The terminal can also store the multimedia file in the cloud to prevent it from being lost; when the user wants to view it, the multimedia file can be downloaded from the cloud through the terminal.
According to the content processing method provided by this embodiment, when a user wants to share and/or store a text, the terminal device displays the function option control in response to the user's selection operation on the text, generates a multimedia file comprising the text according to the user's triggering of an operation option in the control, and sends the multimedia file to the target object and/or stores it according to the triggered operation option.
As noted in the above embodiments, there are two cases of the user selecting text: selecting a specific text, and not being able to select a specific text. For these two cases, the terminal may perform different operations when generating the multimedia file; the specific operations are described in detail below.
In a first manner of generating a multimedia file, the terminal identifies the content of the text according to the selection operation, and generates a multimedia file including the text from the identified content in a preset display style.
In this embodiment, when the user can select a specific text content, the terminal identifies the text content selected by the user to obtain information such as the number of characters and the type of the characters of the text content, determines a preset display style according to the identification result, and generates a multimedia file including a text according to the preset display style.
The preset display style includes a display format, a display position, a fill color, and the like. The display format may include the font, color, font style (italic, bold), font size, and so on of the text. The display position may be centered, left-aligned, right-aligned, or justified, and the fill color may be the background color of the generated multimedia file, beautifying the file and improving the user's reading or viewing experience.
In this embodiment, identifying the content of the text may specifically include identifying the type of the content (for example, the language: Chinese, English, or the like) and the number of characters it contains. Because Chinese and English each have their own characteristics, the multimedia file needs to be generated according to these inherent characteristics. If Chinese and English are not distinguished and the multimedia file is generated in a uniform manner, for example rendering an English text such as "harmony" and a Chinese text uniformly in the Song (SimSun) font, the display effect may be unattractive. Therefore, before generating the multimedia file, the terminal identifies the content of the text, so that it can generate multimedia files with different effects according to information such as the type and the number of characters of the text content, improving the visual effect for the user.
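The recognition step described above can be sketched roughly as follows. The CJK-codepoint heuristic for distinguishing Chinese from English is an assumption made for illustration, not the patent's method; a real terminal could use any language-detection technique.

```python
def analyze_text(text):
    """Identify the number of characters and the language type of the
    selected text, so a suitable display style can be chosen.

    Heuristic (illustrative only): if most characters fall in the basic
    CJK Unified Ideographs block, treat the text as Chinese."""
    cjk = sum(1 for ch in text if "\u4e00" <= ch <= "\u9fff")
    lang = "chinese" if cjk > len(text) / 2 else "english"
    return {"chars": len(text), "language": lang}

analyze_text("harmony")  # {'chars': 7, 'language': 'english'}
analyze_text("和谐")      # {'chars': 2, 'language': 'chinese'}
```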
As can be seen from the above embodiments, the multimedia file may be any of various media files such as sound, image, or video. It may be a static multimedia file, such as a still image in a format like JPEG or PNG, or a dynamic multimedia file, such as a moving image in GIF format or a video in a format like MP4 or RMVB. It can be understood that the specific display style may differ depending on whether the terminal generates a static or a dynamic multimedia file; accordingly, the generation of static multimedia files and the generation of dynamic multimedia files are described separately below.
Referring to fig. 3, which is a flowchart of a static multimedia file generation method provided in an embodiment of the present application, the method may include:
s301: and identifying the content of the text according to the selected operation.
In this embodiment, the terminal identifies the text content selected by the user to determine information such as the character type, the number of characters, and the font style of the selected content, so as to determine the static display style from this identification information.
S302: editing the content of the recognized text according to a preset static display style to generate a static image comprising the text.
In this embodiment, the terminal may edit the selected text content according to a preset static display style, which may include a display format, a display position, a color, and the like. For example, the font size, font style, display position, and size of the static image are edited to generate the preset static image. Regarding the display position, the terminal may determine it from the number of selected characters and a preset number of characters per line: if the selected text content includes 50 characters and 10 characters are displayed per line, the generated static image contains the selected text content in 5 lines. Of course, the terminal may also display the selected text content in its original layout; for example, if the selected text content comprises 50 characters displayed in 2 lines, the generated static image still displays the text content in 2 lines.
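The line-layout step just described (50 characters at 10 per line giving 5 lines) can be sketched in a few lines. This is a minimal illustration only; centring, fonts, and rendering to an actual image are omitted.

```python
def layout_lines(text, chars_per_line):
    """Split the selected characters into display lines using a preset
    number of characters per line."""
    return [text[i:i + chars_per_line]
            for i in range(0, len(text), chars_per_line)]

# 50 characters with 10 characters per preset line -> 5 lines.
lines = layout_lines("x" * 50, 10)
len(lines)  # 5
```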
It can be understood that the character types include Chinese characters and English characters, and the terminal may perform different edits according to the type of the selected characters.
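A minimal sketch of distinguishing Chinese from English characters, assuming a Unicode-range test (the patent does not specify how character types are identified, so the ranges and names below are illustrative):

```python
def character_type(ch):
    """Classify a single character as 'chinese', 'english', or 'other'
    using Unicode ranges (a simplification for illustration only)."""
    if '\u4e00' <= ch <= '\u9fff':      # CJK Unified Ideographs block
        return 'chinese'
    if 'a' <= ch <= 'z' or 'A' <= ch <= 'Z':
        return 'english'
    return 'other'
```

The terminal could then count characters of each type in the selected text and pick a display style accordingly.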
To facilitate understanding of how the terminal edits according to the preset static display style, take an example: if the selected text content is "a book of selected spots", the terminal may set the font of the selected text content to "Times New Roman", the font style to bold italic, and the font size to four; since the number of characters (counting spaces) is 24, a static image is then generated with the text centered and displayed in two lines, as shown in the effect diagram of fig. 4.
For another example, if the selected text content is "this poetry set", the terminal may set its font to regular script, the font style to italic, and the font size to small four; since the number of characters (including punctuation) is 15, a static image is generated with the text justified at both ends and displayed in two lines. The above embodiments describe the generation of static multimedia files; the generation of dynamic multimedia files will now be explained with reference to the drawings.
Referring to fig. 5, which is a flowchart of a dynamic multimedia file generation method provided in an embodiment of the present application, the method may include:
S501: And identifying the content of the text according to the selected operation.
In this embodiment, the terminal identifies the text content selected by the user to identify information such as a character type, a character number, a font style, and the like of the selected text content, so as to determine a dynamic display style according to the identification information.
S502: and editing the content of the recognized text according to a preset dynamic display style to generate a dynamic image or video comprising the text.
In this embodiment, the terminal determines the dynamic display style according to the recognition result and then edits the text content according to the preset dynamic display style. The dynamic display style may include basic styles such as a display format and a display position, and may further include an animation effect; for example, the selected text content may be displayed in a fly-in manner so that the characters appear one by one as if being typed, or displayed with rotating, floating, and other effects, so as to improve the visual experience for the user.
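The typing-style reveal described above can be approximated by generating one frame per character; this is a hypothetical sketch of the frame sequence, not the patented animation pipeline:

```python
def typing_frames(text):
    """Generate successive frames that reveal the text one character
    at a time, approximating the fly-in/typing effect; each frame
    would then be rendered into one image of the dynamic file."""
    return [text[:i] for i in range(1, len(text) + 1)]

frames = typing_frames("abc")
# frames[0] shows "a", frames[1] shows "ab", frames[2] shows "abc"
```

Encoding such frames as a GIF or video would be handled by the terminal's media library.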
In this embodiment, the terminal edits the display format, the display position, and the animation effect of the text according to the preset dynamic display style, and generates a dynamic image or video according to the editing result.
The above embodiments describe the operation of generating a multimedia file when the user can select a specific text. Another operation will be described below, in which the user cannot select a specific text and can only trigger the terminal display interface.
In a second mode of generating the multimedia file, the terminal takes a screenshot of the original display interface including the text to generate an interface screenshot, and processes the interface screenshot to generate a multimedia file comprising the text.
In specific implementation, the terminal may take a screenshot of the original display interface including the text according to a long-press operation of the user on the original display interface to obtain the interface screenshot, and may then process the interface screenshot according to preset conditions to obtain a multimedia file including the text.
The original display interface refers to the display interface comprising the text. For example, when a user triggers the terminal screen, a first layer is used for displaying the text and a second layer is used for displaying the function option control; when the terminal takes a screenshot according to the user's triggering operation, the screenshot object should be the first layer including the text, from which the interface screenshot is generated.
In this embodiment, the generated interface screenshot is a screenshot of the complete original display interface and may include some unnecessary information. To ensure that the result includes only the text the user wants to share or save, the interface screenshot is processed before being sent to the target object and/or saved, so as to generate a multimedia file including only the text.
In specific implementation, the processing operations on the interface screenshot may include clipping, filtering, adjusting brightness, and the like. The clipping of the interface screenshot may be completed automatically by the terminal, or the terminal may complete the corresponding clipping in response to a clipping operation triggered by the user. Based on this, the processes in which the terminal clips the interface screenshot automatically and manually are introduced separately below.
Referring to fig. 6, which is a flowchart of a method for generating a multimedia file by a terminal according to a user clipping operation according to an embodiment of the present application, the method may include:
S601: And carrying out screenshot on the original display interface comprising the text to generate an interface screenshot.
S602: and responding to the clipping operation of the interface screenshot, and performing clipping processing on the interface screenshot to generate a static image comprising text.
In this embodiment, the user triggers a clipping operation on the interface screenshot through the terminal and clips the screenshot to the portion the user wants, and the terminal device generates a static image including the text in response to the completed clipping operation. For example, the user triggers the interface screenshot, the terminal displays a clipping option according to the user's triggering operation, and the user triggers the clipping option to clip the interface screenshot. In specific applications, the user may clip the interface screenshot in the vertical direction, the horizontal direction, and so on to obtain the required screenshot, and the terminal generates a static image including the text according to the user's clipping result.
The above-described embodiment describes the generation of a still image by the terminal according to the user's clipping; the generation of a still image by automatic clipping of the terminal will be described below with reference to the drawings.
Referring to fig. 7, this figure shows an embodiment of a method for generating a static image by a screenshot of a terminal automatic cropping interface according to an embodiment of the present application, where the method may include:
S701: And carrying out screenshot on the original display interface comprising the text to generate an interface screenshot.
S702: and identifying the display position of the text on the interface screenshot, and cutting the interface screenshot according to the display position to generate a static image comprising the text.
In this embodiment, the terminal may automatically identify the display position of the text in the interface screenshot, for example by identifying the coordinate information of the text within the screenshot. It then automatically clips the interface screenshot according to the coordinate information, removing the non-text content and retaining only the text content, and generates a static image that retains only the text.
In addition, when the user selects a specific text, the terminal may also store the display position of the text on the display interface, and when clipping automatically, the terminal may clip the interface screenshot according to the stored historical display position. For example, when the user selects a specific text, the start coordinate of the selected text is (x1, y1) and the end coordinate is (x2, y2); the terminal saves these coordinates, and when the terminal device automatically clips the interface screenshot, it may clip according to this coordinate information to obtain a static image including the text.
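Clipping by saved start/end coordinates can be sketched as follows; the row-major pixel grid and the inclusive coordinate convention are assumptions for illustration, since the patent does not fix a representation:

```python
def crop_by_saved_selection(pixels, start, end):
    """Clip a screenshot (a row-major 2D list of pixel values) to the
    rectangle spanned by the saved selection coordinates (x1, y1) and
    (x2, y2), inclusive. A pure-Python stand-in for an image library."""
    (x1, y1), (x2, y2) = start, end
    left, right = sorted((x1, x2))    # tolerate selections made in
    top, bottom = sorted((y1, y2))    # either direction
    return [row[left:right + 1] for row in pixels[top:bottom + 1]]
```

A real implementation would operate on an image object (e.g. a bitmap) rather than nested lists, but the coordinate arithmetic is the same.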
Through the methods provided by the above embodiments, when the user can select a specific text, the terminal can identify the text content according to the operation option selected by the user and generate a multimedia file including the text according to the preset display style; when the user can only trigger the display interface where the text is located, the terminal can take a screenshot of the original display interface including the text according to the triggering operation, obtain the interface screenshot, process it, and generate a multimedia file including the text.
In order to fully understand the technical solutions provided by the present application, the following detailed descriptions will be provided for the content processing method provided by the embodiments of the present application with reference to specific application scenarios.
Referring to fig. 8, this figure is a flowchart of a method for sharing text content according to an embodiment of the present application.
For ease of understanding, the following description takes as an example a user who selects a specific text and shares it with a friend in the form of a picture, where the method may include:
S801: And displaying the picture sharing option to the user according to the selected operation of the user on the text.
In this embodiment, after the user selects the text to be shared, the operation option for sharing in the form of a picture can be triggered and displayed.
S802: and displaying a social media selection interface in a pop-up window form according to the triggering of the picture sharing option by the user for the user to select.
When the user triggers the picture sharing option, a social media selection interface can be further displayed, in which icons of a plurality of social media applications are shown to the user; the user triggers an icon according to his or her own needs, thereby selecting the social media through which to share.
S803: and receiving the social media selected by the user, and triggering and displaying a target friend selection interface.
The target friend selection interface can be understood as a target object display interface, in which information such as the avatars and nicknames of all friends associated with the user in the social media can be displayed; the user can trigger the avatar of a friend to determine the target friend.
S804: and receiving the target friend selected by the user, and jumping to an input window of the user and the target friend.
After the user selects the target object, i.e., the target friend, the terminal can further jump to an input window between the user and the target friend in the social media, which can be a chat window.
S805: and generating a picture comprising the text according to a preset display style.
The preset display style may include information such as a display position and a display format, and the specific manner of generating the picture may be referred to in the above embodiments.
S806: and displaying the picture in the input window and sending the picture to the target friend.
It should be noted that, for other possible implementation manners of each step in this embodiment, reference may be made to the above method embodiment, and details are not described herein again.
In the embodiment, the text selected by the user is converted into the picture and sent to the target friend through the social media, so that the text is shared.
Referring to fig. 9, this figure is a flowchart of a method for saving text content according to an embodiment of the present application.
For the convenience of understanding, the following description will take the example of saving the selected text in the form of a picture, and the method may include:
S901: And displaying the picture storage option to the user according to the selected operation of the user on the text.
In this embodiment, after the user selects the text to be saved, the operation option for saving in the form of a picture may be triggered and displayed.
S902: and responding to the triggering operation of the user on the picture storage option, and generating a picture comprising a text according to a preset picture display style.
In this embodiment, the display style of the picture may be preset and stored in the terminal. When the user triggers the picture storage option, the terminal generates a picture including the text according to the stored picture display style; for the specific manner of generating the picture, reference may be made to the above embodiments.
S903: the pictures are saved to a system album.
It can be understood that the picture can also be stored in the cloud, and when the user needs to view the picture including the text, the user can read the picture stored in the cloud through the terminal.
It should be noted that, for other possible implementation manners of each step in this embodiment, reference may be made to the above method embodiment, and details are not described herein again.
In the embodiment, the text selected by the user is converted into the picture to be stored, so that the text is stored.
Based on the above method embodiment, the present application further provides a content processing apparatus, which will be described below with reference to the accompanying drawings.
Referring to fig. 10, a content processing apparatus provided in this application may include:
a display unit 1001, configured to display a function option control in response to a selection operation on a text, where the function option control includes at least one operation option;
a generating unit 1002, configured to generate a multimedia file including the text in response to a trigger to any of the operation options;
the processing unit 1003 is configured to send the multimedia file to a target object according to the triggered operation option, and/or store the multimedia file.
In some embodiments, the generating unit specifically includes:
the identification subunit is used for identifying the content of the text according to the selected operation;
and the first generation subunit is used for generating the multimedia file comprising the text according to the recognized content of the text in a preset display style.
In some embodiments, the first generating subunit is specifically configured to edit the identified content of the text according to a preset static display style, and generate a static image including the text; or editing the identified text content according to a preset dynamic display style to generate a dynamic image or video comprising the text.
In some embodiments, the generating unit specifically includes:
the screenshot subunit is used for screenshot the original display interface comprising the text and generating an interface screenshot;
and the second generation subunit is used for processing the interface screenshot and generating a multimedia file comprising the text.
In some embodiments, the second generating subunit is specifically configured to, in response to a clipping operation on the interface screenshot, perform clipping processing on the interface screenshot, and generate a static image including the text.
In some embodiments, the second generating subunit is specifically configured to identify a display position of the text in the interface screenshot, perform clipping processing on the interface screenshot according to the display position, and generate a static image including the text.
In some embodiments, when the triggered operation option is a sharing option, the apparatus further includes:
and the acquisition unit is used for triggering and displaying the target object selection interface and acquiring the selected target object.
In the embodiment of the application, when the user wants to share and/or save a text, the terminal device displays the function option control in response to the user's selection of the text, generates a multimedia file including the text according to the user's trigger on an operation option in the option control, and sends the multimedia file to a target object and/or saves it according to the triggered operation option. In this way, even when the user cannot copy, share, or save the text directly, the text to be shared or saved is converted into a multimedia file, which is then shared to the target object and/or saved in the local terminal, thereby achieving the sharing or saving of the text.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 11 shows a block diagram of a content processing apparatus 1100. For example, the apparatus 1100 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 11, apparatus 1100 may include one or more of the following components: processing component 1102, memory 1104, power component 1106, multimedia component 1108, audio component 1110, input/output (I/O) interface 1112, sensor component 1114, and communications component 1116.
The processing component 1102 generally controls the overall operation of the device 1100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 1102 may include one or more processors 1120 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1102 may include one or more modules that facilitate interaction between the processing component 1102 and other components. For example, the processing component 1102 may include a multimedia module to facilitate interaction between the multimedia component 1108 and the processing component 1102.
The memory 1104 is configured to store various types of data to support operation at the device 1100. Examples of such data include instructions for any application or method operating on device 1100, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1104 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A power component 1106 provides power to the various components of the device 1100. The power components 1106 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 1100.
The multimedia component 1108 includes a screen that provides an output interface between the device 1100 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1108 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 1100 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1110 is configured to output and/or input audio signals. For example, the audio component 1110 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 1100 is in operating modes, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1104 or transmitted via the communication component 1116. In some embodiments, the audio assembly 1110 further includes a speaker for outputting audio signals.
The I/O interface 1112 provides an interface between the processing component 1102 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1114 includes one or more sensors for providing various aspects of state assessment for the apparatus 1100. For example, the sensor assembly 1114 may detect the open/closed state of the device 1100 and the relative positioning of components, such as the display and keypad of the apparatus 1100; it may also detect a change in position of the apparatus 1100 or one of its components, the presence or absence of user contact with the apparatus 1100, the orientation or acceleration/deceleration of the apparatus 1100, and a change in its temperature. The sensor assembly 1114 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1114 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1116 is configured to facilitate wired or wireless communication between the apparatus 1100 and other devices. The apparatus 1100 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1116 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1116 also includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components, for performing the following method:
responding to the selected operation of the text, and displaying a function option control, wherein the function option control comprises at least one operation option;
generating a multimedia file comprising the text in response to the triggering of any one of the operation options;
and sending the multimedia file to a target object according to the triggered operation option, and/or storing the multimedia file.
Optionally, the generating a multimedia file including the text includes:
identifying the content of the text according to the selected operation;
and generating a multimedia file comprising the text according to the content of the recognized text in a preset display style.
Optionally, the generating a multimedia file including the text according to a preset display style from the content of the recognized text includes:
editing the identified text content according to a preset static display style to generate a static image comprising the text;
or the like, or, alternatively,
editing the identified text content according to a preset dynamic display style to generate a dynamic image or video comprising the text.
Optionally, the generating a multimedia file including the text includes:
screenshot is conducted on an original display interface comprising the text, and an interface screenshot is generated;
and processing the interface screenshot to generate a multimedia file comprising the text.
Optionally, the processing the interface screenshot to generate a multimedia file including the text includes:
and responding to the cutting operation of the interface screenshot, cutting the interface screenshot, and generating a static image comprising the text.
Optionally, the processing the interface screenshot to generate a multimedia file including the text includes:
and identifying the display position of the text on the interface screenshot, and cutting the interface screenshot according to the display position to generate a static image comprising the text.
Optionally, when the triggered operation option is a sharing option, the method further includes:
and triggering and displaying a target object selection interface, and acquiring the selected target object.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 1104 comprising instructions, executable by the processor 1120 of the apparatus 1100 to perform the method described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium having instructions therein, which when executed by a processor of a mobile terminal, enable the mobile terminal to perform a content processing method, the method comprising:
responding to the selected operation of the text, and displaying a function option control, wherein the function option control comprises at least one operation option;
generating a multimedia file comprising the text in response to the triggering of any one of the operation options;
and sending the multimedia file to a target object according to the triggered operation option, and/or storing the multimedia file.
Optionally, the generating a multimedia file including the text includes:
identifying the content of the text according to the selected operation;
and generating a multimedia file comprising the text according to the content of the recognized text in a preset display style.
Optionally, the generating a multimedia file including the text according to a preset display style from the content of the recognized text includes:
editing the identified content of the text according to a preset static display style to generate a static image comprising the text;
or the like, or, alternatively,
editing the identified text content according to a preset dynamic display style to generate a dynamic image or video comprising the text.
Optionally, the generating a multimedia file including the text includes:
screenshot is conducted on an original display interface comprising the text, and an interface screenshot is generated;
and processing the interface screenshot to generate a multimedia file comprising the text.
Optionally, the processing the interface screenshot to generate a multimedia file including the text includes:
and responding to the cutting operation of the interface screenshot, cutting the interface screenshot, and generating a static image comprising the text.
Optionally, the processing the interface screenshot to generate a multimedia file including the text includes:
and identifying the display position of the text on the interface screenshot, and cutting the interface screenshot according to the display position to generate a static image comprising the text.
Optionally, when the triggered operation option is a sharing option, the method further includes:
and triggering and displaying a target object selection interface, and acquiring the selected target object.
Fig. 12 is a schematic structural diagram of a server in an embodiment of the present invention. The server 1200 may vary widely in configuration or performance and may include one or more central processing units (CPUs) 1222 (e.g., one or more processors), a memory 1232, and one or more storage media 1230 (e.g., one or more mass storage devices) storing applications 1242 or data 1244. The memory 1232 and the storage medium 1230 may be transient storage or persistent storage. The program stored in the storage medium 1230 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Still further, the central processor 1222 may be configured to communicate with the storage medium 1230 and execute, on the server 1200, the series of instruction operations in the storage medium 1230.
The server 1200 may also include one or more power supplies 1226, one or more wired or wireless network interfaces 1250, one or more input/output interfaces 1258, one or more keyboards 1256, and/or one or more operating systems 1241, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the system or the device disclosed by the embodiment, the description is simple because the system or the device corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
It should be understood that in the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: only A, only B, or both A and B, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following" or similar expressions refers to any combination of these items, including any combination of a single item or plural items. For example, "at least one of a, b, or c" may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may be singular or plural.
It is further noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for content processing, the method comprising:
in response to a selection operation on text, displaying a function option control, wherein the function option control comprises at least one operation option;
in response to triggering of any one of the operation options, generating a multimedia file comprising the text;
and, according to the triggered operation option, sending the multimedia file to a target object and/or storing the multimedia file.
2. The method of claim 1, wherein generating the multimedia file including the text comprises:
identifying content of the text according to the selection operation;
and generating, in a preset display style, a multimedia file comprising the text according to the identified content of the text.
3. The method according to claim 2, wherein generating, in a preset display style, a multimedia file comprising the text according to the identified content of the text comprises:
editing the identified text content according to a preset static display style to generate a static image comprising the text;
or,
editing the identified text content according to a preset dynamic display style to generate a dynamic image or video comprising the text.
4. The method of claim 1, wherein generating the multimedia file including the text comprises:
capturing a screenshot of an original display interface comprising the text to generate an interface screenshot;
and processing the interface screenshot to generate a multimedia file comprising the text.
5. The method of claim 4, wherein processing the interface screenshot to generate a multimedia file comprising the text comprises:
and, in response to a cropping operation on the interface screenshot, cropping the interface screenshot to generate a static image comprising the text.
6. The method of claim 4, wherein processing the interface screenshot to generate a multimedia file comprising the text comprises:
and identifying a display position of the text in the interface screenshot, and cropping the interface screenshot according to the display position to generate a static image comprising the text.
7. The method of claim 1, wherein when the triggered operation option is a share option, the method further comprises:
and triggering display of a target object selection interface, and acquiring a selected target object.
8. A content processing apparatus, characterized in that the apparatus comprises:
a display unit, configured to display a function option control in response to a selection operation on text, wherein the function option control comprises at least one operation option;
a generating unit, configured to generate a multimedia file comprising the text in response to triggering of any one of the operation options;
and a processing unit, configured to send the multimedia file to a target object and/or store the multimedia file according to the triggered operation option.
9. An apparatus for content processing, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs comprising instructions for:
in response to a selection operation on text, displaying a function option control, wherein the function option control comprises at least one operation option;
in response to triggering of any one of the operation options, generating a multimedia file comprising the text;
and, according to the triggered operation option, sending the multimedia file to a target object and/or storing the multimedia file.
10. A computer-readable medium having instructions stored thereon which, when executed by one or more processors, cause an apparatus to perform the content processing method according to any one of claims 1 to 7.
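Claims 1 to 3 describe a flow of selecting text, displaying an option control, rendering the text into a multimedia file in a preset style, and then sharing or storing the result. As a purely illustrative sketch of that flow (none of the class or function names below appear in the patent, and the UI display, image rendering, and sharing back-ends are stubbed out):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FunctionOptionControl:
    """Model of the pop-up control shown after a text selection (claim 1)."""
    options: list


def on_text_selected(selected_text: str) -> FunctionOptionControl:
    # Claim 1, step 1: in response to a selection operation on text,
    # display a function option control with at least one operation option.
    # (Actual on-screen display is out of scope; only the model is returned.)
    return FunctionOptionControl(options=["share", "save"])


def generate_multimedia_file(text: str, style: str = "static") -> dict:
    # Claims 2-3: render the identified text content in a preset display
    # style. A real implementation would draw a static or dynamic image;
    # here the "file" is just a dict describing what would be generated.
    return {"type": "image" if style == "static" else "video", "text": text}


def handle_option(option: str, media: dict, target: Optional[str] = None) -> str:
    # Claim 1, last step: according to the triggered option, send the
    # multimedia file to a target object and/or store it.
    if option == "share":
        return f"sent {media['type']} containing {media['text']!r} to {target}"
    return f"stored {media['type']} containing {media['text']!r}"


if __name__ == "__main__":
    control = on_text_selected("hello world")
    media = generate_multimedia_file("hello world")
    print(handle_option("share", media, target="contact-1"))
```

The screenshot-based variant of claims 4 to 6 would replace `generate_multimedia_file` with a capture-then-crop step, either driven by a user cropping operation or by locating the text's display position automatically.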
CN201810654760.2A 2018-06-22 2018-06-22 Content processing method and device Active CN110704647B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810654760.2A CN110704647B (en) 2018-06-22 2018-06-22 Content processing method and device
PCT/CN2018/123798 WO2019242274A1 (en) 2018-06-22 2018-12-26 Content processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810654760.2A CN110704647B (en) 2018-06-22 2018-06-22 Content processing method and device

Publications (2)

Publication Number Publication Date
CN110704647A true CN110704647A (en) 2020-01-17
CN110704647B CN110704647B (en) 2024-04-16

Family

ID=68983480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810654760.2A Active CN110704647B (en) 2018-06-22 2018-06-22 Content processing method and device

Country Status (2)

Country Link
CN (1) CN110704647B (en)
WO (1) WO2019242274A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507352A (en) * 2020-04-16 2020-08-07 腾讯科技(深圳)有限公司 Image processing method and device, computer equipment and storage medium
WO2023093809A1 (en) * 2021-11-24 2023-06-01 维沃移动通信有限公司 File editing processing method and apparatus, and electronic device
WO2024046029A1 (en) * 2022-09-02 2024-03-07 腾讯科技(深圳)有限公司 Method and apparatus for creating media content, and device and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111506310B (en) * 2020-03-24 2024-04-05 深圳赛安特技术服务有限公司 Method, device, equipment and storage medium for generating multi-platform style
CN113347479B (en) * 2021-05-31 2023-05-26 网易(杭州)网络有限公司 Editing method, device, equipment and storage medium of multimedia material

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080294981A1 (en) * 2007-05-21 2008-11-27 Advancis.Com, Inc. Page clipping tool for digital publications
US20100318743A1 (en) * 2009-06-10 2010-12-16 Microsoft Corporation Dynamic screentip language translation
CN102779008A (en) * 2012-06-26 2012-11-14 奇智软件(北京)有限公司 Screen screenshot method and system
CN102855059A (en) * 2012-08-21 2013-01-02 东莞宇龙通信科技有限公司 Terminal and information sharing method
CN104123383A (en) * 2014-08-04 2014-10-29 网易(杭州)网络有限公司 Method and device used in media application
CN105739821A (en) * 2016-01-26 2016-07-06 三星电子(中国)研发中心 Operation processing method and apparatus for mobile terminal
CN105912610A (en) * 2016-04-06 2016-08-31 乐视控股(北京)有限公司 Method and device for guiding share based on character information
CN106469094A (en) * 2016-09-05 2017-03-01 维沃移动通信有限公司 A kind of Word message sharing method and mobile terminal
US20170277625A1 (en) * 2016-03-28 2017-09-28 Alexander Shtuchkin Generating annotated screenshots based on automated tests

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101625696A (en) * 2009-08-03 2010-01-13 孟智平 Method and system for constructing and generating video elements in webpage
CN102193905A (en) * 2011-05-26 2011-09-21 广东威创视讯科技股份有限公司 Virtual text editing method and device based on GDI (graphics device interface)/GDI+
CN104660797B (en) * 2013-11-25 2019-06-18 中兴通讯股份有限公司 Operation processing method and device
CN103634700A (en) * 2013-12-23 2014-03-12 乐视致新电子科技(天津)有限公司 Method and device of pushing multimedia files to smart television by mobile communication terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PATIL, NEHA 等: "Enhanced UI Automator Viewer with improved Android Accessibility Evaluation Features", 2016 INTERNATIONAL CONFERENCE ON AUTOMATIC CONTROL AND DYNAMIC OPTIMIZATION TECHNIQUES (ICACDOT), 16 March 2017 (2017-03-16), pages 977 - 983 *
快乐奥运 [Kuaile Aoyun]: "挥一挥衣袖 歌曲歌词全带走" [Wave a sleeve and take all the song lyrics with you], 数字世界 [Digital World], no. 05, 15 May 2008 (2008-05-15), pages 172 - 173 *

Also Published As

Publication number Publication date
CN110704647B (en) 2024-04-16
WO2019242274A1 (en) 2019-12-26

Similar Documents

Publication Publication Date Title
EP3454192B1 (en) Method and device for displaying page
CN110704647B (en) Content processing method and device
US10296201B2 (en) Method and apparatus for text selection
CN107908351B (en) Application interface display method and device and storage medium
CN107948708B (en) Bullet screen display method and device
US20200007944A1 (en) Method and apparatus for displaying interactive attributes during multimedia playback
CN107566892B (en) Video file processing method and device and computer readable storage medium
CN106775202B (en) Information transmission method and device
US9959487B2 (en) Method and device for adding font
CN107820131B (en) Comment information sharing method and device
CN108495168B (en) Bullet screen information display method and device
EP3147802B1 (en) Method and apparatus for processing information
CN107423386B (en) Method and device for generating electronic card
US11836342B2 (en) Method for acquiring historical information, storage medium, and system
EP3828682A1 (en) Method, apparatus for adding shortcut plug-in, and intelligent device
KR20150068509A (en) Method for communicating using image in messenger, apparatus and system for the same
CN112584222A (en) Video processing method and device for video processing
CN109947506B (en) Interface switching method and device and electronic equipment
CN106998493B (en) Video previewing method and device
CN109756783B (en) Poster generation method and device
CN113986574A (en) Comment content generation method and device, electronic equipment and storage medium
CN106447747B (en) Image processing method and device
CN105204718B (en) Information processing method and electronic equipment
EP3866482A1 (en) Method and device for processing information
US9740524B2 (en) Method and terminal device for executing application chain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant