CN110850996A - Picture/video processing method and device applied to input method - Google Patents


Info

Publication number
CN110850996A
CN110850996A (application CN201910934769.3A)
Authority
CN
China
Prior art keywords
text, picture, area, video, region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910934769.3A
Other languages
Chinese (zh)
Inventor
施明 (Shi Ming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Mengjia Network Technology Co Ltd
Original Assignee
Shanghai Mengjia Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Mengjia Network Technology Co Ltd filed Critical Shanghai Mengjia Network Technology Co Ltd
Priority to CN201910934769.3A priority Critical patent/CN110850996A/en
Publication of CN110850996A publication Critical patent/CN110850996A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/02: Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023: Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F 3/0233: Character input methods
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V 10/225: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The invention relates to a picture/video processing method and device applied to an input method. The method comprises: identifying a region of interest in a picture/video, the region of interest being the subject region or focus region of the picture/video; determining the location of a text region that holds one or more pieces of text; and determining the attributes of the text contained in the text region. The device comprises a region of interest identification module configured to identify a region of interest in a picture/video; a text region determination module configured to determine the location of a text region holding one or more pieces of text; and an attribute determination module configured to determine at least the attributes of the text held by the text region. By applying a series of processing steps to a retrieved picture/video, the invention makes the picture/video more engaging, so that the synthesized picture/video expresses the user's intention well and highlights its subject, making the content entered through the input method richer, more interesting, and more entertaining.

Description

Picture/video processing method and device applied to input method
Technical Field
The present invention relates to the field of application software, and in particular to a method and an apparatus for processing pictures/videos for use with an input method.
Background
An input method is one of the most frequently used applications in daily life, on both PC and mobile devices. Existing input methods are developing along two clear trends. The first is usability: input that is more convenient, more accurate, and more efficient. The application of artificial intelligence to candidate matching and the rise of speech-recognition-based input methods are representative of this direction. The second is entertainment: input content that is richer, more diverse, and more intuitive. The steady addition of input features such as text, emoji, and emoticons reflects this direction. However, as users' expressive needs keep growing, existing input features can no longer satisfy them.
Disclosure of Invention
To address these problems in the prior art, this application provides a picture/video processing method and device applied to an input method, used to generate candidate pictures/videos for the input method.
According to one aspect, the present invention provides a picture/video processing method applied to an input method, comprising: identifying a region of interest in a picture/video, the region of interest being the subject region or focus region of the picture/video; determining the location of a text region that holds one or more pieces of text; and determining the attributes of the text contained in the text region.
Optionally, in the method, the text region is located in a non-region-of-interest; or, when the text region intersects the region of interest, the text region is located in the area where the intersection is smallest.
Optionally, the method further comprises: identifying the location of a specific part of a person/animal in the region of interest; and determining the area in which the text region is placed according to a distance threshold between the text region and the specific part.
Optionally, in the method, the specific part is the head, a hand, or a finger of a person in the region of interest; or the specific part is the head, an upper limb, or a lower limb of an animal in the region of interest.
Optionally, the method further comprises one or more of: adding a region prompt to the text region, setting the ratio of the text region to the whole picture, and setting the shape of the text region.
Optionally, in the method, the region prompt comprises an edge prompt, a sample-text prompt, and/or a text region background color.
Optionally, the method further comprises: identifying a background region in the picture/video; and modifying attributes of the background region.
Optionally, in the method, modifying the attributes of the background region further comprises: enlarging or reducing the background region in the candidate picture/video; or changing the color of the background region based on the color of the text contained in the text region; or changing one or more of the background, luminance, chrominance, and contrast of the candidate picture.
Optionally, in the method, the text attributes comprise one or more of the number of characters, text height, font, color, font style, and text effects.
Optionally, the method further comprises one or more of: adding filters to the picture/video, applying beautification, adding props, and adding frames.
Optionally, the method further comprises: changing one or more of the style, representation technique, and presentation mode of the picture/video.
Optionally, the method further comprises: creating a motion picture composed of a plurality of sub-pictures.
Optionally, in the method, the sub-pictures share the same text region but differ in image; or the sub-pictures differ in text region but share the same image; or the sub-pictures differ in both text region and image while their image subjects are related; or the text in each sub-picture's text region is switched with a special effect.
According to another aspect, the present invention provides a picture/video processing apparatus applied to an input method, comprising: a region of interest identification module configured to identify a region of interest in a picture/video, the region of interest being the subject region or focus region of the picture/video; a text region determination module configured to determine the location of a text region containing one or more pieces of text; and an attribute determination module configured to determine at least the attributes of the text held by the text region.
Optionally, the text region determination module in the processing device further comprises: a target image recognition unit configured to identify the location of a specific part of a person/animal in the region of interest; and a text region determination unit configured to place the text region in an area whose distance from the specific part is smaller than a distance threshold, according to the distance threshold between the text region and the specific part.
Optionally, the attribute determination module in the processing apparatus comprises: a picture/video attribute determination unit configured to determine one or more of the style, representation technique, presentation mode, and background region attributes of the picture/video; a text region attribute determination unit configured to set one or more of a text region prompt, the ratio of the text region to the whole picture, and the shape of the text region; and a text attribute determination unit configured to set one or more of the number of characters, text height, font, color, font style, and text effects.
Optionally, the processing apparatus further comprises a motion picture production module configured to generate a motion picture from a plurality of sub-pictures according to preset production conditions.
Optionally, the processing apparatus further comprises a retouching module configured to do one or more of: add filters to the picture/video, apply beautification, add props, and add frames.
In some embodiments of the invention, applying a series of processing steps to the retrieved picture/video makes it more engaging, so that the synthesized picture/video better expresses the user's intention and highlights its subject, and the content entered through the input method becomes richer, more interesting, and more entertaining.
Drawings
Preferred embodiments of the present invention will now be described in further detail with reference to the accompanying drawings, in which:
FIG. 1 is a schematic interaction diagram of an input method system according to one embodiment of the invention;
FIG. 2 is a schematic interaction diagram of an input method system according to another embodiment of the invention;
FIG. 3 is a schematic interaction diagram of an input method system according to another embodiment of the invention;
FIG. 4 is a flowchart of a picture/video processing method for an input method according to one embodiment of the present invention;
FIG. 5 is a schematic block diagram of a picture/video processing apparatus applied to an input method according to one embodiment of the present invention;
FIG. 6 is a schematic block diagram of a picture/video processing apparatus applied to an input method according to another embodiment of the present invention;
FIG. 7 is a schematic block diagram of a picture/video processing apparatus applied to an input method according to another embodiment of the present invention; and
FIG. 8 is a schematic diagram of an input interface of an input method client according to one embodiment of the invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof and in which is shown by way of illustration specific embodiments of the application. In the drawings, like numerals describe substantially similar components throughout the different views. Various specific embodiments of the present application are described in sufficient detail below to enable those skilled in the art to practice the teachings of the present application. It is to be understood that other embodiments may be utilized and structural, logical or electrical changes may be made to the embodiments of the present application.
Some prior-art input method features, such as emoticon and sticker functions, already let the input method enter pictures. However, the user must download a sticker pack in advance, the pictures that can be entered are limited to those provided in the pack, and, in particular, the text in a sticker picture cannot be modified. This greatly limits what the user can do.
Some embodiments of the invention provide a more entertaining input method: based on the text entered by the user and a picture/video, a composite picture/video containing that text is generated. The technical solution of the present invention is explained in detail through the embodiments in the drawings below. Those skilled in the art will appreciate that the inventive arrangements can be applied in a similar manner to video, for example short videos of less than 5, 10, or 15 seconds.
FIG. 1 is a schematic interaction diagram of an input method system according to one embodiment of the invention. The input method system comprises an input method client 100 and a server 200. The input method client 100 receives input from a user, generates on-screen text from the characters or character strings entered, and sends a picture/video request to the server 200. The server 200 receives the request from the input method client 100, searches for pictures/videos matching the user input carried in the request (such as the entered characters or the on-screen text), processes the retrieved pictures/videos to generate one or more candidate pictures/videos, and sends the candidates to the input method client 100. After receiving a candidate picture, the input method client 100 composites the on-screen text into it, generating a text-and-image composite picture, and outputs the composite picture selected by the user.
In this embodiment, the candidate pictures/videos are searched for and processed by the server 200; alternatively, they may be searched for and processed by the input method client, as shown in FIG. 2, or the server may search for and process pictures/videos through a third-party search engine 300, as shown in FIG. 3.
FIG. 4 is a flowchart of a picture/video processing method for obtaining candidate pictures/videos for an input method according to an embodiment of the present invention. The processing flow is executed by the server in FIGS. 1 and 3, or by the input method client in FIG. 2. The specific process is as follows:
Step S410: identify a region of interest (ROI) in the picture/video, where the ROI is the subject region or focus region of the picture/video and the remaining regions are non-ROIs. A non-ROI may be the background of the picture or any region outside the focus image. For example, consider a picture of a small flower against the sky and a picture of a girl in a room. In the first, the flower is the focus image, so the region where the flower is located is the ROI, while the sky is the background and its region is a non-ROI. In the second, the region where the girl is located is the ROI, the indoor objects other than the girl are non-subject images, and the region they occupy is a non-ROI. The non-ROIs (the sky and the indoor objects) can therefore be used for adding text. Any prior-art method can be used to identify the background, focus, and non-focus images of a picture, so a detailed description is omitted here.
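The patent does not fix a particular ROI detector. As a minimal illustration (not the patented method), the sketch below treats pixels that deviate strongly from the picture's mean colour as the subject and returns their bounding box; everything outside the box counts as non-ROI. The function name and the deviation rule are assumptions for illustration only.

```python
import numpy as np

def find_roi(image, k=2.0):
    """Locate a rough region of interest as the bounding box of pixels
    that deviate strongly from the picture's mean colour.

    A crude stand-in for the unspecified detector of step S410: the
    'subject' is assumed to pop out from a fairly uniform background.
    """
    mean = image.reshape(-1, 3).mean(axis=0)
    # Per-pixel colour distance from the picture's mean colour.
    diff = np.linalg.norm(image.astype(float) - mean, axis=-1)
    mask = diff > k * diff.std()        # salient pixels
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                     # no clear subject found
    # Bounding box (left, top, right, bottom); everything else is non-ROI.
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

# A 100x100 sky-blue picture with a red "flower" patch near the corner.
img = np.full((100, 100, 3), (120, 180, 230), dtype=np.uint8)
img[10:30, 10:30] = (200, 40, 40)
roi = find_roi(img)
print(roi)  # bounding box around the red patch
```

A real implementation would use a trained saliency or segmentation model instead of this colour heuristic, but the ROI/non-ROI split it produces is the same kind of output the later steps consume.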
Step S420: determine the location of a text region that holds one or more pieces of text, where the text is the on-screen text entered by the user. In one embodiment, all non-ROIs may serve as the text region; in other embodiments, one area is chosen from the non-ROIs as the text region. For example, text may be added at a particular location relative to the ROI, such as near the head of a person/animal, to suggest that the added text is what the person/animal in the picture is thinking or saying; or near the fingers or hands of a person (or the upper limbs, if the subject is an animal), to suggest that the added text is what the figure is pointing out. For such a picture, the position of a specific part of the person/animal in the picture/video, such as the head, hand, or fingers, is identified first; an area is then determined from a distance threshold between the text region and the specific part, and the text region is placed within the area whose distance from the specific part is smaller than that threshold. When positioning the text region, the principle is that it should not overlap the ROI, or should overlap it as little as possible: the text region should be placed in a non-ROI, and if the non-ROI is not large enough and the text region must intersect the ROI, it should be placed where the intersection is smallest. For example, when the text region is to be set near a person's head as described above, and the threshold yields several areas in which text could be placed, the non-ROI among them is selected, or the area with the smallest intersection with the ROI.
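Step S420's placement rule (stay within a distance threshold of the specific part, prefer zero or minimal intersection with the ROI) can be sketched with plain rectangle geometry. All names, the candidate positions, and the numbers below are illustrative assumptions, not the patent's implementation.

```python
def overlap_area(a, b):
    """Intersection area of two (left, top, right, bottom) boxes."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def place_text_box(head, roi, canvas, box_w, box_h, dist=20):
    """Try text-box positions near a 'specific part' (here, a head) and
    keep the one that intersects the ROI the least, as step S420 requires.
    `head` is (x, y); `roi` and `canvas` are (l, t, r, b) boxes.
    A sketch: a real placement would search many more candidates.
    """
    hx, hy = head
    candidates = [
        (hx + dist, hy - box_h, hx + dist + box_w, hy),                # right of head
        (hx - dist - box_w, hy - box_h, hx - dist, hy),                # left of head
        (hx - box_w // 2, hy - dist - box_h,
         hx + box_w - box_w // 2, hy - dist),                          # above head
    ]
    # Keep candidates that fit on the canvas, then minimise ROI overlap.
    inside = [c for c in candidates
              if c[0] >= canvas[0] and c[1] >= canvas[1]
              and c[2] <= canvas[2] and c[3] <= canvas[3]]
    return min(inside, key=lambda c: overlap_area(c, roi))

canvas = (0, 0, 400, 400)
roi = (150, 100, 260, 380)        # the person
head = (205, 120)                 # top of the person's head
box = place_text_box(head, roi, canvas, box_w=120, box_h=60)
print(box, overlap_area(box, roi))
```

With these numbers the box above the head wins, since it clears the ROI entirely while the left and right candidates both clip the person.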
After the location of the text region is determined, some attributes of the text region are set. These attributes include: the shape of the text region, such as a square, rectangle, or cloud; a region prompt, which lets the user see the text region as distinct from the rest of the image, such as an edge prompt (a colored line outlining the text region), a sample-text prompt (placeholder text), and/or a text region background color (a fill color for the text region); and the ratio of the text region to the whole picture, for example 1/5, 1/3, or 1/2.
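The ratios mentioned above (1/5, 1/3, 1/2) can be turned into concrete text-region dimensions once an aspect ratio is chosen for the region. The helper below is a hypothetical sketch; the aspect value is an assumption, not something the patent specifies.

```python
def text_region_size(pic_w, pic_h, ratio=1/5, aspect=3.0):
    """Size a rectangular text region occupying `ratio` of the whole
    picture's area (the patent suggests 1/5, 1/3 or 1/2), with a wide
    aspect ratio suited to a line of text. Illustrative numbers only.
    """
    area = pic_w * pic_h * ratio          # target area of the text region
    h = (area / aspect) ** 0.5            # height from area = aspect * h * h
    return round(aspect * h), round(h)    # (width, height) in pixels

print(text_region_size(400, 400, ratio=1/5))
```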
Step S430: determine the attributes of the text contained in the text region. The text attributes include one or more of the number of characters, text height, font, color, font style, and text effects. For example, the number of characters is set according to the size of the text region, possibly with a maximum, such as 8-12 characters; the text height may be about 1/10-1/5 of the overall picture height; the font may be, for example, running script or clerical script; the font style may be bold, italic, and so on; the text effects may be various artistic fonts and shapes; and the color of the text and the text region is determined based on the color of the background region.
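One concrete way to derive a text colour from the background colour, as step S430 suggests, is a luminance test. The Rec. 601 luma weights and the 128 cut-off below are illustrative assumptions; the patent only states that colour is determined from the background region's colour.

```python
def text_color_for(background_rgb):
    """Pick a legible text colour: white text on a dark background,
    black text on a light one, using the Rec. 601 luma weights.
    A sketch of the colour rule hinted at in step S430.
    """
    r, g, b = background_rgb
    luma = 0.299 * r + 0.587 * g + 0.114 * b   # perceived brightness
    return (0, 0, 0) if luma > 128 else (255, 255, 255)

print(text_color_for((30, 30, 60)))     # dark night sky: white text
print(text_color_for((230, 230, 200)))  # light wall: black text
```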
In addition to the above steps, the following optional steps may be included:
step S440, modify the picture/video attribute. Such as modifying one or more of background region attributes, style of picture/video, presentation technique, and presentation style. For example, increase or decrease the background area in the picture; changing one or more of background, luminance, chrominance, and contrast of the candidate picture; the color of the background area is changed based on the color of the text accommodated by the text area. And modifying the style of the current picture into the style of the Sanskrit sky according to a preset style, such as the preset style of the Sanskrit sky picture. Alternatively, the expression technique of the current picture is changed, such as enlarging the current focus image, adding a stroke to the focus image, changing the overall color, and the like. Or, changing the expression mode of the current picture, such as changing the color picture into a black-and-white picture; changing the picture into a carved board picture, and the like.
Step S450: retouch the picture/video. For example, add a filter, apply beautification, add props, or add a frame, to obtain a more interesting picture.
Step S460: generate a motion picture. A group of sub-pictures is selected, and motion pictures with different effects are generated through different settings. For example, a picture is selected and stored as multiple copies; through steps S410-S430 above, a text region and its attributes are set for each copy, and an association is set among the copies' text regions, for example each text region displays only part of the on-screen text, and the text regions of all copies combined make up the full on-screen text to be added. For instance, if a candidate motion picture comprises 3 sub-pictures and the text to be added is "I love you", the text regions of the 3 sub-pictures receive "I", "love", and "you" respectively, so that the candidate dynamically presents the added text "I love you" to the user. In other embodiments, each sub-picture uses a different image while the text added to their text regions is the same; although the sub-pictures alternate to form the motion picture, the text presented to the user remains constant. In still other embodiments, the sub-pictures differ in both text region and image, but the image subjects of the selected sub-pictures are related, so that the text in the sub-pictures' text regions combines into the added on-screen text; richer and more detailed content can then be expressed through the different text combinations and the subject-related images. In some embodiments, the switching of the added text across the sub-pictures of a candidate motion picture may carry special effects, including but not limited to: fade-in and fade-out; growing from small to large, or shrinking from large to small and then disappearing; moving from left to right, or from right to left and then disappearing; moving from top to bottom, or from bottom to top and then disappearing; and so on.
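The "I love you" example above amounts to slicing the on-screen text across the text regions of the sub-pictures. The sketch below deals characters out as evenly as possible; the splitting rule is an assumption, since the patent does not specify one.

```python
def split_text_over_frames(text, n_frames):
    """Distribute the on-screen text over the text regions of the
    sub-pictures, as in the example of step S460: frame k shows only its
    own slice, and the frames together spell the full text.
    Characters are dealt out as evenly as possible.
    """
    base, extra = divmod(len(text), n_frames)
    pieces, start = [], 0
    for k in range(n_frames):
        size = base + (1 if k < extra else 0)  # early frames absorb the remainder
        pieces.append(text[start:start + size])
        start += size
    return pieces

frames = split_text_over_frames("I love you", 3)
print(frames)   # one slice per sub-picture; the slices concatenate to the full text
```

Rendering each slice into its copy's text region and saving the copies as an animated image would complete the motion-picture candidate.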
Those skilled in the art will appreciate that candidate videos can be processed in a similar manner. In some examples, the candidate video displays the on-screen text as it plays.
FIG. 5 is a schematic block diagram of a picture/video processing apparatus applied to an input method according to an embodiment of the present invention. The apparatus 100 comprises: a region of interest identification module 102, a text region determination module 104, and an attribute determination module 106. The region of interest identification module 102 is configured to identify a region of interest (ROI) in the picture/video, the ROI being the subject region or focus region of the picture/video. Identification may use traditional feature-based methods or a convolutional neural network model. Those skilled in the art will appreciate that these identification methods apply both to images in pictures and to images in videos.
The text region determination module 104 is configured to determine the location of a text region that holds one or more pieces of text. The text region holds the on-screen text added by the user; combining the on-screen text with the images in the picture is meant to make the result more interesting and its subject more prominent, so the text region should not coincide with the ROI, or should coincide with it as little as possible. Following this principle, the text region can be placed in a non-ROI, or, when it must intersect the ROI, in the area where the intersection is smallest. In one embodiment, the text region determination module 104 comprises a target image recognition unit 1042 and a text region determination unit 1044. The target image recognition unit 1042 identifies the location of a specific part of a person/animal in the region of interest, such as a person's head or hand. The text region determination unit 1044 determines an area from a distance threshold between the text region and the specific part; for a 400 × 400 picture, the threshold may be set to 20 pixels, placing the text region so that its minimum distance from the specific part equals the threshold, i.e., 20 pixels from the nearest edge of the person's head. If the resulting area falls inside the ROI of the image, the position where the text region and the ROI intersect the least is selected.
The attribute determination module 106 is configured to determine at least the attributes of the text held by the text region. In a specific embodiment, the attribute determination module 106 comprises: a picture/video attribute determination unit 1062, a text region attribute determination unit 1064, and a text attribute determination unit 1066. The picture/video attribute determination unit 1062 is configured to determine one or more of the style, representation technique, presentation mode, and background region attributes of the picture/video; the text region attribute determination unit 1064 is configured to set one or more of a text region prompt, the ratio of the text region to the whole picture, and the shape of the text region; and the text attribute determination unit 1066 is configured to set one or more of the number of characters, text height, font, color, font style, and text effects. For details, refer to the description of the method above, which is not repeated here.
FIG. 6 is a schematic block diagram of another picture/video processing apparatus applied to an input method according to an embodiment of the present invention. Unlike the embodiment shown in FIG. 5, the processing apparatus 200 in this embodiment includes a motion picture production module 108 for generating a motion picture from a plurality of sub-pictures according to preset production conditions. The production conditions include, for example, the preset number of sub-pictures and the text to be added to the text region of each sub-picture. For details, refer to the description of the method above, which is not repeated here.
FIG. 7 is a schematic block diagram of another picture/video processing apparatus applied to an input method according to an embodiment of the present invention. Unlike the embodiment shown in FIG. 5, the processing apparatus 300 in this embodiment includes a retouching module 110 for doing one or more of: adding filters to the picture/video, applying beautification, adding props, and adding frames.
After the processing of the above method and apparatus, candidate pictures/videos are generated. Once the input method client obtains the on-screen text and the candidate pictures/videos, it generates text-and-image composite pictures/videos and displays them in the candidate picture area 204 of the input interface, as shown in FIG. 8. By applying a series of processing steps to the retrieved picture/video, the invention makes the picture/video more engaging, so that the synthesized picture/video better expresses the user's intention and highlights its subject, and the content entered through the input method becomes richer and more interesting.
The above embodiments are provided only to illustrate the present invention, not to limit it. Those skilled in the art can make various changes and modifications without departing from the scope of the present invention; accordingly, all equivalent technical solutions shall fall within the scope of the present invention.

Claims (18)

1. A picture/video processing method applied to an input method, comprising:
identifying a region of interest in a picture/video, wherein the region of interest is the subject region or focus region of the picture/video;
determining the location of a text region containing one or more pieces of text; and
determining the text attributes contained in the text region.
2. The method of claim 1, wherein the text region is located in a non-region-of-interest; or, when the text region intersects the region of interest, the text region is located in the area where the intersection is smallest.
3. The method of claim 1 or 2, further comprising:
identifying the location of a specific part of a person/animal in the region of interest; and
determining the area in which the text region is located according to a distance threshold between the text region and the specific part.
4. The method of claim 3, wherein the specific part is the head, a hand, or a finger of a person in the region of interest; or the specific part is the head, an upper limb, or a lower limb of an animal in the region of interest.
5. The method of claim 1, further comprising one or more of: adding a region prompt to the text region, setting the ratio of the text region to the whole picture, and setting the shape of the text region.
6. The method of claim 5, wherein the region prompt comprises an edge prompt, a sample-text prompt, and/or a text region background color.
7. The method of claim 1, further comprising: identifying a background region in the picture/video; and modifying attributes of the background region.
8. The method of claim 7, wherein modifying the attributes of the background region further comprises: enlarging or reducing the background region in the candidate picture/video; or changing the color of the background region based on the color of the text contained in the text region; or changing one or more of the background, luminance, chrominance, and contrast of the candidate picture.
9. The method of claim 1, wherein the text attributes comprise one or more of a number of texts, a text height, a text font, a text color, a text style, and a text effect.
10. The method of claim 1, further comprising one or more of: adding a filter to the picture/video, adding beautification processing, adding a prop, and adding a frame.
11. The method of claim 1, further comprising: changing one or more of a style, a representation technique, and an expression manner of the picture/video.
12. The method of claim 1, further comprising: a motion picture composed of a plurality of sub-pictures is created.
13. The method of claim 12, wherein the plurality of sub-pictures have the same text region but different images; or the sub-pictures have different text regions but the same image; or the sub-pictures have different text regions and different images, with the image subjects of the sub-pictures being associated; or the text in the text region of each sub-picture has a switching special effect.
14. A picture/video processing apparatus applied to an input method, comprising:
a region-of-interest identification module configured to identify a region of interest in a picture/video, the region of interest being a subject region or a focus region of the picture/video;
a text region determination module configured to determine a location of a text region containing one or more texts; and
an attribute determination module configured to determine at least an attribute of the text contained in the text region.
15. The processing apparatus of claim 14, wherein the text region determination module further comprises:
a target image recognition unit configured to recognize a location of a specific part of a person/animal in the region of interest; and
a text region determination unit configured to place the text region, according to a distance threshold between the text region and the specific part, in a region whose distance from the specific part is less than the distance threshold.
16. The processing apparatus of claim 14, wherein the attribute determination module comprises:
a picture/video attribute determination unit configured to determine one or more of a style, a representation technique, an expression manner, and a background region attribute of the picture/video;
a text region attribute determination unit configured to set one or more of a region prompt for the text region, a ratio of the text region to the entire picture, and a shape of the text region; and
a text attribute determination unit configured to set one or more of a number of texts, a text height, a text font, a text color, a text style, and a text effect.
17. The processing apparatus of claim 14, further comprising: a motion picture production module configured to generate a motion picture from a plurality of sub-pictures according to a preset production condition.
18. The processing apparatus of claim 14, further comprising: a retouching module configured to perform one or more of: adding a filter to the picture/video, adding beautification processing, adding a prop, and adding a frame.
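The placement logic of claims 1-2 — find a location for a fixed-size text region that avoids the region of interest, or failing that minimizes the intersection — can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the rectangle representation, the corner-anchored candidate positions, and all function names are assumptions introduced here.

```python
# Rectangles are (x, y, w, h) tuples in pixel coordinates.

def intersection_area(a, b):
    """Area of overlap between two rectangles; 0 if they do not intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(0, w) * max(0, h)

def place_text_region(picture_size, roi, text_size):
    """Choose a position for a text region of text_size inside the picture.

    Candidates here are the four picture corners (an assumption for the
    sketch). A candidate fully outside the region of interest has
    intersection 0 and wins; otherwise the candidate with the smallest
    intersection is chosen, mirroring claim 2.
    """
    pw, ph = picture_size
    tw, th = text_size
    candidates = [
        (0, 0, tw, th),              # top-left
        (pw - tw, 0, tw, th),        # top-right
        (0, ph - th, tw, th),        # bottom-left
        (pw - tw, ph - th, tw, th),  # bottom-right
    ]
    return min(candidates, key=lambda c: intersection_area(c, roi))
```

With a centered region of interest in a 100x100 picture, any corner placement has zero overlap, so the text region lands entirely in a non-interest region; only when the region of interest covers the whole picture does the minimum-intersection fallback apply.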
CN201910934769.3A 2019-09-29 2019-09-29 Picture/video processing method and device applied to input method Pending CN110850996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910934769.3A CN110850996A (en) 2019-09-29 2019-09-29 Picture/video processing method and device applied to input method


Publications (1)

Publication Number Publication Date
CN110850996A true CN110850996A (en) 2020-02-28

Family

ID=69596162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910934769.3A Pending CN110850996A (en) 2019-09-29 2019-09-29 Picture/video processing method and device applied to input method

Country Status (1)

Country Link
CN (1) CN110850996A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112558785A (en) * 2020-12-24 2021-03-26 上海二三四五网络科技有限公司 Control method and device for adjusting character display color

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105872438A (en) * 2015-12-15 2016-08-17 乐视致新电子科技(天津)有限公司 Video call method and device, and terminal
CN106341722A (en) * 2016-09-21 2017-01-18 努比亚技术有限公司 Video editing method and device
CN107369196A (en) * 2017-06-30 2017-11-21 广东欧珀移动通信有限公司 Expression, which packs, makees method, apparatus, storage medium and electronic equipment
CN109741423A (en) * 2018-12-28 2019-05-10 北京奇艺世纪科技有限公司 Expression packet generation method and system
CN109960549A (en) * 2017-12-22 2019-07-02 北京奇虎科技有限公司 A kind of generation method and device of GIF picture
CN109978972A (en) * 2019-03-20 2019-07-05 珠海天燕科技有限公司 A kind of method and device of copy editor in picture
CN110147805A (en) * 2018-07-23 2019-08-20 腾讯科技(深圳)有限公司 Image processing method, device, terminal and storage medium


Similar Documents

Publication Publication Date Title
CN109729426B (en) Method and device for generating video cover image
US11321385B2 (en) Visualization of image themes based on image content
US20190289359A1 (en) Intelligent video interaction method
US20200314482A1 (en) Control method and apparatus
CN111930994A (en) Video editing processing method and device, electronic equipment and storage medium
US20170017844A1 (en) Image content providing apparatus and image content providing method
CN106203286A (en) The content acquisition method of a kind of augmented reality, device and mobile terminal
CN106909548B (en) Picture loading method and device based on server
CN111241340A (en) Video tag determination method, device, terminal and storage medium
CN111757175A (en) Video processing method and device
CN112235520A (en) Image processing method and device, electronic equipment and storage medium
CN111586466B (en) Video data processing method and device and storage medium
CN111309200B (en) Method, device, equipment and storage medium for determining extended reading content
CN111638784A (en) Facial expression interaction method, interaction device and computer storage medium
CN108334886A (en) Image prediction method, terminal device and readable storage medium storing program for executing
CN112866577B (en) Image processing method and device, computer readable medium and electronic equipment
CN110850996A (en) Picture/video processing method and device applied to input method
US20230119313A1 (en) Voice packet recommendation method and apparatus, device and storage medium
CN106909547B (en) Picture loading method and device based on browser
CN112083863A (en) Image processing method and device, electronic equipment and readable storage medium
CN112235516B (en) Video generation method, device, server and storage medium
CN113676734A (en) Image compression method and image compression device
CN110908525A (en) Input method, client side thereof and method for providing candidate pictures/videos
CN110837307A (en) Input method and system thereof
CN112862558A (en) Method and system for generating product detail page and data processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200228