CN106658079B - Method and device for the custom generation of facial expression images - Google Patents
- Publication number
- CN106658079B CN106658079B CN201710007418.9A CN201710007418A CN106658079B CN 106658079 B CN106658079 B CN 106658079B CN 201710007418 A CN201710007418 A CN 201710007418A CN 106658079 B CN106658079 B CN 106658079B
- Authority
- CN
- China
- Prior art keywords
- image
- video
- user
- social application
- processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- All classifications fall under H—ELECTRICITY › H04—ELECTRIC COMMUNICATION TECHNIQUE › H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION › H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]:
- H04N21/42224—Touch pad or touch panel provided on the remote control
- H04N21/2541—Rights management (management at an additional data server)
- H04N21/4318—Generation of visual interfaces for content selection or interaction by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
- H04N21/4333—Processing operations in response to a pause request
- H04N21/4884—Data services, e.g. news ticker, for displaying subtitles
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Security & Cryptography (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
- Telephonic Communication Services (AREA)
Abstract
The present invention relates to a method and device for the custom generation of facial expression images. The method for the custom generation of facial expression images includes: obtaining a video screenshot instruction generated in a video application; capturing an image from the video playing in the video application according to the video screenshot instruction, to obtain an image to be processed; calling an image processing plug-in embedded in the video application to perform image processing on the image to be processed, generating a facial expression image; and pushing the facial expression image to a social application server according to the social application identifier of the user, the social application identifier of the user having a corresponding relationship with the video application. The method and device for the custom generation of facial expression images provided by the present invention can improve the generation efficiency of facial expression images.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to a method and device for the custom generation of facial expression images.
Background art
With the continuous development of Internet technology, the information transmitted in social applications is no longer limited to traditional text. Emoticons, for example, are also a kind of information transmitted in social applications; this kind of emoticon conveys the corresponding information through the expression portrayed in an image.
Currently, a social application usually provides emoticons through an emoticon pack containing various facial expression images. By selecting a facial expression image from the pack, the user can express his or her mood of the moment and thereby convey certain information to the other party. However, the number of facial expression images in an emoticon pack is often limited and their content is fixed, which is insufficient to meet users' individual needs; users may therefore wish to generate facial expression images in a customized way, so as to make emoticon packs of their own.
One method provided by the prior art is for the user to first obtain a piece of video, capture the actually required image from it, then open a third-party image processing tool (such as Photoshop) to process the captured image, and finally forward the custom-generated facial expression image to the social application by saving and uploading it.
It follows that although the above prior art enables the user to generate facial expression images in a customized way, the operation is overly cumbersome, and the problem of low generation efficiency of facial expression images remains.
Summary of the invention
Embodiments of the present invention provide a method and apparatus for the custom generation of facial expression images, which can improve the generation efficiency of facial expression images.
A method for the custom generation of facial expression images comprises: obtaining a video screenshot instruction generated in a video application; capturing an image from the video playing in the video application according to the video screenshot instruction, to obtain an image to be processed; calling an image processing plug-in embedded in the video application to perform image processing on the image to be processed, generating a facial expression image; and pushing the facial expression image to a social application server according to the social application identifier of the user, the social application identifier of the user having a corresponding relationship with the video application.
An apparatus for the custom generation of facial expression images comprises: an instruction acquisition module, configured to obtain a video screenshot instruction generated in a video application; a video screenshot module, configured to capture an image from the video playing in the video application according to the video screenshot instruction, to obtain an image to be processed; an image processing module, configured to call an image processing plug-in embedded in the video application to perform image processing on the image to be processed, generating a facial expression image; and an image pushing module, configured to push the facial expression image to a social application server according to the social application identifier of the user, the social application identifier of the user having a corresponding relationship with the video application.
Compared with the prior art, the present invention has the following advantages:
An image is captured from the playing video according to the video screenshot instruction obtained in the video application, the image processing plug-in embedded in the video application is called to perform image processing on the captured image to be processed, and a facial expression image is generated; the facial expression image is then pushed to the social application server according to the social application identifier of the user that has a corresponding relationship with the video application, so that the corresponding facial expression image can subsequently be obtained from the social application server through that identifier.
The user neither needs to exit the video application nor to download an additional third-party image processing tool; the whole sequence of operations can be completed simply and quickly, and the custom-generated facial expression image can be pushed directly to the social application server through the user's social application identifier, thereby improving the generation efficiency of facial expression images.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the specification, serve to explain the principles of the embodiments of the present invention.
Fig. 1 is a schematic diagram of an implementation environment according to an embodiment of the present invention;
Fig. 2 is a block diagram of a terminal according to an exemplary embodiment;
Fig. 3 is a flowchart of a method for the custom generation of facial expression images according to an exemplary embodiment;
Fig. 4 is a flowchart of one embodiment of the step, in the embodiment corresponding to Fig. 3, of obtaining the video screenshot instruction generated in the video application;
Fig. 5 is a flowchart of one embodiment of the step, in the embodiment corresponding to Fig. 3, of capturing images from the video playing in the video application according to the video screenshot instruction to obtain the image to be processed;
Fig. 6 is a flowchart of another method for the custom generation of facial expression images according to an exemplary embodiment;
Fig. 7 is a flowchart of yet another method for the custom generation of facial expression images according to an exemplary embodiment;
Fig. 8 is a schematic diagram of a specific implementation of the method for the custom generation of facial expression images in an application scenario;
Fig. 9 is a block diagram of an apparatus for the custom generation of facial expression images according to an exemplary embodiment;
Fig. 10 is a block diagram of one embodiment of the instruction acquisition module in the embodiment corresponding to Fig. 9;
Fig. 11 is a block diagram of one embodiment of the video screenshot module in the embodiment corresponding to Fig. 9;
Fig. 12 is a block diagram of another apparatus for the custom generation of facial expression images according to an exemplary embodiment;
Fig. 13 is a block diagram of yet another apparatus for the custom generation of facial expression images according to an exemplary embodiment.
The drawings above show specific embodiments of the present invention, which are described in more detail hereinafter. These drawings and the accompanying text are not intended to limit the scope of the inventive concept in any manner, but rather to illustrate the concept of the invention to those skilled in the art by reference to specific embodiments.
Detailed description of the embodiments
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; on the contrary, they are merely examples of devices and methods, consistent with some aspects of the invention as detailed in the appended claims.
As mentioned above, the method provided by the prior art requires the captured image to be saved first and then imported into a third-party image processing tool for processing, so both its timeliness and its convenience are poor. Moreover, third-party image processing tools are generally professional tools with complicated functions that are relatively difficult for users to master, which harms the user's customization experience.
In addition, after image processing is finished, the custom-generated facial expression image cannot be pushed directly to the social application server but must first be saved and uploaded before being forwarded, which inevitably makes the transmission of facial expression images inefficient.
It follows that the method provided by the prior art still suffers from low generation efficiency of facial expression images.
Therefore, in order to improve the generation efficiency of facial expression images, a method for the custom generation of facial expression images is specifically proposed.
Fig. 1 shows the implementation environment involved in this method for the custom generation of facial expression images. The implementation environment includes a terminal 100 and a social server 200.
The terminal 100 may be a smartphone, a smart television, a tablet computer, a palmtop computer, a laptop, or any other electronic device on which a social application can run. The social server 200 is the server corresponding to the social application running on the terminal 100.
In a specific implementation, the terminal 100 performs the custom generation of facial expression images in the video application and can push the custom-generated facial expression image directly to the social application server 200 through the social application identifier of the user, so that the corresponding facial expression image can subsequently be obtained from the social application server 200 through that identifier.
Referring to Fig. 2, Fig. 2 is a block diagram of a terminal according to an exemplary embodiment. It should be noted that the terminal 100 is merely an example adapted to the present invention and must not be taken as imposing any limitation on the scope of use of the invention; nor should the terminal 100 be construed as needing to rely on, or needing to include, one or more of the components of the illustrative terminal 100 shown in Fig. 2.
As shown in Fig. 2, the terminal 100 includes a memory 101, a storage controller 103, one or more processors 105 (only one is shown in the figure), a peripheral interface 107, a radio-frequency module 109, a positioning module 111, a camera module 113, an audio module 115, a touch screen 117, and a key module 119. These components communicate with one another via one or more communication buses/signal lines 121.
It will be appreciated that the structure shown in Fig. 2 is merely illustrative; the terminal 100 may include more or fewer components than shown in Fig. 2, or components different from those shown in Fig. 2. Each component shown in Fig. 2 may be implemented in hardware, software, or a combination thereof.
The memory 101 may be used to store software programs and modules, such as the program instructions and modules corresponding to the method and device for the custom generation of facial expression images in the exemplary embodiments of the present invention. The processor 105 performs various functions and data processing by running the program instructions stored in the memory 101, thereby implementing the above method for the custom generation of facial expression images.
As the carrier of resource storage, the memory 101 may be a random storage medium such as high-speed random access memory, or a non-volatile memory such as one or more magnetic storage devices, flash memory, or other solid-state memory. Storage may be transient or permanent.
The peripheral interface 107 may include at least one wired or wireless network interface, at least one serial-parallel conversion interface, at least one input/output interface, at least one USB interface, and so on, for coupling various external input/output devices to the memory 101 and the processor 105, so as to realize communication with them.
The radio-frequency module 109 is used to transmit and receive electromagnetic waves, realizing mutual conversion between electromagnetic waves and electrical signals, so as to communicate with other devices through a communication network. The communication network includes cellular telephone networks and wireless local or metropolitan area networks, and may employ various communication standards, protocols, and technologies.
The positioning module 111 is used to obtain the current geographical location of the terminal 100. Examples of the positioning module 111 include, but are not limited to, GPS (Global Positioning System) and positioning technologies based on wireless local area networks or mobile communication networks.
The camera module 113, which belongs to the camera, is used to take pictures or videos. The pictures or videos taken may be stored in the memory 101 or sent to a host computer through the radio-frequency module 109.
The audio module 115 provides the user with an audio interface, which may include one or more microphone interfaces, one or more speaker interfaces, and one or more earphone interfaces for exchanging audio data with other devices. Audio data may also be stored in the memory 101 or sent through the radio-frequency module 109.
The touch screen 117 provides an input/output interface between the terminal 100 and the user. Specifically, the user may perform input through the touch screen 117 with gesture operations such as clicking, touching, and sliding, to which the electronic device responds. The terminal 100 then displays and outputs content, formed of any one or combination of text, pictures, and video, to the user through the touch screen 117.
The key module 119 includes at least one key, providing an interface for the user to input to the terminal 100; the user can make the terminal 100 perform different functions by pressing different keys. For example, a volume key allows the user to adjust the volume of the sound played by the terminal 100.
Referring to Fig. 3, in an exemplary embodiment, a method for the custom generation of facial expression images is applicable to the terminal 100 of the implementation environment shown in Fig. 1. This method may be executed by the terminal 100 and may include the following steps:
Step 310: obtain the video screenshot instruction generated in the video application.
In order to enable the user to capture an admired image at will while watching a film or television series, in this embodiment the video screenshot instruction is generated in the video application used for video playback. The video screenshot instruction at least indicates to the terminal whether the user needs to perform an image capture operation on the video.
For example, an image capture entry may be added to the video application. When the user needs to perform an image capture operation on the video, the relevant operation can be triggered at that entry, so that the terminal obtains the video screenshot instruction generated in response to the operation and thereby learns that the user needs to capture an image from the video.
The image capture entry may be a preset shortcut-key command, or a virtual screenshot button preset on the playback interface of the video application, and so on. Correspondingly, the operation triggered by the user at the image capture entry may be tapping the shortcut key corresponding to the shortcut-key command on the keyboard, or clicking the virtual screenshot button with the mouse or the touch screen, and so on.
Of course, in different application scenarios the terminal may also need to know, for instance, the number of frames to capture; the video screenshot instruction can then also be generated according to the number of frames selected by the user, in which case the instruction also indicates to the terminal the number of frames to capture from the video.
After the video screenshot instruction is generated, the terminal obtains it and then prepares to perform the image capture operation on the video according to it.
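The screenshot-entry mechanism described above can be sketched as follows. This is a minimal illustration only; the class and function names are hypothetical and not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ScreenshotInstruction:
    # Hypothetical form of the video screenshot instruction of step 310.
    capture: bool                 # whether an image capture is requested
    frame_count: int = 1          # number of frames to capture (1 = still image)
    with_subtitles: bool = True   # whether the captured frames keep subtitles

def on_screenshot_entry_triggered(frame_count: int = 1,
                                  with_subtitles: bool = True) -> ScreenshotInstruction:
    # The video application generates the instruction in response to the user's
    # trigger at the image capture entry (shortcut key or virtual screenshot button).
    return ScreenshotInstruction(capture=True,
                                 frame_count=frame_count,
                                 with_subtitles=with_subtitles)

instr = on_screenshot_entry_triggered(frame_count=8, with_subtitles=False)
print(instr.capture, instr.frame_count, instr.with_subtitles)  # True 8 False
```

The instruction object is what the terminal later reads in step 330 to decide how many frames to capture.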
Step 330: capture images from the video playing in the video application according to the video screenshot instruction, to obtain the image to be processed.
In order to make the various facial expression images of an emoticon pack in the social application, a video must also be acquired, and the image the user actually needs is captured from the acquired video.
The video may be obtained from a video file pre-stored in the local storage space of the terminal, by downloading a piece of video from a film-and-television resource server over the Internet, or by real-time shooting with a camera carried by the terminal.
In this embodiment, the video is acquired within the video application. For example, when the user watches a film or television series through the video application, the series being watched can be regarded as the video from which images are to be captured.
After obtaining the video from which images are to be captured, the terminal can perform the image capture operation on it according to the obtained video screenshot instruction.
Further, the video screenshot instruction can indicate to the terminal not only whether an image capture of the video is needed, but also the number of frames to capture and whether the captured image needs to carry subtitles; that is, the video screenshot instruction reflects the user's actual screenshot requirements.
Correspondingly, the image to be processed captured according to the video screenshot instruction meets the user's actual screenshot requirements, which is conducive to subsequently producing a facial expression image that meets the user's individual needs.
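The frame-capture step can be illustrated with a short sketch. Here the decoder output of the video application is stood in for by a plain list; the function name is an assumption for illustration, not the patent's implementation:

```python
def select_frames(decoded_frames: list, start_index: int, frame_count: int) -> list:
    # Pick the frames requested by the screenshot instruction: a still image
    # corresponds to frame_count == 1, a dynamic image to frame_count > 1.
    end = min(start_index + frame_count, len(decoded_frames))
    return decoded_frames[start_index:end]

# Capture three frames starting at frame 42 of a 100-frame "video".
frames = select_frames(list(range(100)), start_index=42, frame_count=3)
print(frames)  # [42, 43, 44]
```

Clamping the end index to the video length keeps the capture valid even when the instruction asks for more frames than remain.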
Step 350: call the image processing plug-in embedded in the video application to perform image processing on the image to be processed, generating the facial expression image.
In order to avoid the use of third-party image processing tools, in this embodiment an image processing plug-in is embedded in the video application so that the user can produce customized facial expression images, which in turn helps to improve the generation efficiency of facial expression images.
Further, the image processing plug-in may be presented in the video application at all times in the form of a toolbar, or may pop up as a toolbar only when the user chooses to perform image processing.
Preferably, the image processing plug-in pops up as a toolbar when the user chooses to perform image processing, for example presented maximized at the front of the interface, so as to improve the user's customization experience. Meanwhile, the playback interface of the video is shrunk, for example minimized to the lower-left corner of the interface, and restored after the facial expression image has been pushed, so as to improve the user's viewing experience.
The image processing performed on the image to be processed through the image processing plug-in includes, but is not limited to: adding preset text, compositing with a preset picture, preset face replacement, and so on. Preset face replacement uses face recognition technology: the face in the image to be processed is first recognized, and the recognized face is then replaced with the preset face.
Further, since the image to be processed may be a still image (such as a single picture) or a dynamic image (such as a piece of video), image processing is performed accordingly on each kind.
Specifically, when the image is a still image, i.e., a single frame, image processing is performed only on that frame. When it is a dynamic image, i.e., multiple frames, image processing is performed on the frames successively according to their frame positions.
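The "add preset text" operation listed above can be sketched with Pillow (assumed available); applying it frame by frame covers both the still-image and the dynamic-image case. The function names are illustrative assumptions, not the plug-in's actual API:

```python
from PIL import Image, ImageDraw  # Pillow, assumed available

def add_caption(frame: Image.Image, text: str) -> Image.Image:
    # Adds preset text near the bottom edge of one captured frame.
    out = frame.copy()
    ImageDraw.Draw(out).text((10, out.height - 20), text, fill="white")
    return out

def process_frames(frames: list, text: str) -> list:
    # Still image: a single frame is processed; dynamic image: each frame
    # is processed in turn according to its frame position.
    return [add_caption(f, text) for f in frames]

frames = [Image.new("RGB", (160, 120), "black") for _ in range(3)]
result = process_frames(frames, "hello")
print(len(result), result[0].size)  # 3 (160, 120)
```

Working on copies leaves the captured frames untouched, so the user can redo the processing with different preset text.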
Step 370: push the facial expression image to the social application server according to the social application identifier of the user, the social application identifier of the user having a corresponding relationship with the video application.
In this embodiment, the facial expression image can be pushed directly, without first saving and uploading it and then forwarding it.
Specifically, the social server is the server corresponding to the social application running on the terminal, and it stores the social application identifiers of a large number of users. The social server binds the video application and the user's social application identifier in advance, and through this binding establishes on the terminal the corresponding relationship between the video application and the user's social application identifier.
Through the corresponding relationship between the user's social application identifier and the video application, the facial expression image custom-generated in the video application can be pushed directly to the social server, so that the corresponding facial expression image can subsequently be obtained from the social application server through the user's social application identifier.
Further, the terminal may guide the user to obtain a corresponding video application identifier by way of registration and login. Accordingly, the corresponding relationship that exists on the terminal is between the user's video application identifier and the user's social application identifier, which makes it possible to push facial expression images according to the different social application identifiers that correspond, on the terminal, to different video application identifiers.
For example, suppose the video application has the video application identifier A1 of user A and the video application identifier B1 of user B; the identifier corresponding to A1 is user A's social application identifier A2, and the identifier corresponding to B1 is user B's social application identifier B2. A facial expression image custom-generated by user A in the video application can then only be pushed to the social server according to A2, the social application identifier that corresponds to the video application identifier A1.
It should be noted that the corresponding relationship, whether between the user's social application identifier and the video application or between the user's social application identifier and the video application identifier, is stored in the configuration file of the video application, to be extracted when the facial expression image is pushed.
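The identifier lookup and direct push of step 370 can be sketched as follows. The configuration content mirrors the A1/A2, B1/B2 example above, and `send` stands in for the network call to the social application server; all names are hypothetical:

```python
import json

# Hypothetical configuration-file content of the video application, storing the
# video-application-identifier -> social-application-identifier correspondence.
CONFIG = json.loads('{"A1": "A2", "B1": "B2"}')

def push_expression_image(video_app_id: str, image_bytes: bytes, send) -> str:
    # Extract the corresponding social application identifier from the
    # configuration and push the custom-generated facial expression image
    # directly to the social server, with no save-and-upload detour.
    social_id = CONFIG[video_app_id]
    send(social_id, image_bytes)
    return social_id

sent = []
pushed_to = push_expression_image("A1", b"<image bytes>",
                                  lambda sid, img: sent.append(sid))
print(pushed_to, sent)  # A2 ['A2']
```

User A's image is routed only via A2, matching the constraint described in the example above.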
Through the process described above, the custom generation of facial expression images within the video application is realized: the user neither needs to exit the video application nor to download an additional third-party image processing tool, and can complete the whole series of operations for custom-generating facial expression images simply and conveniently, which improves both the user's customization experience and the user's viewing experience.
In addition, a bridge between the video application and the social application is built for the user: the video application can push the custom-generated facial expression image directly to the social application server, greatly improving timeliness and convenience and effectively raising the transmission efficiency of facial expression images, thereby further improving their generation efficiency.
Referring to Fig. 4, in an exemplary embodiment, step 310 may include the following steps:
Step 311: pause the video playing in the video application in response to a trigger operation by the user.
For example, a screenshot icon is provided on the playback interface of the video application. When the user needs to perform an image capture operation on the video playing in the playback interface, the user can click the screenshot icon with the mouse or the touch screen; this click is regarded as the operation triggered by the user.
Correspondingly, the terminal pauses the playing video in response to the user's trigger operation, in preparation for the image capture operation on the video.
Step 313: generate image content selection information, which prompts the user to select the content of the image to be processed.
After the playing video is paused in the playback interface of the video application, the terminal further asks the user for information related to the image capture, for example the number of frames to capture or whether the captured image needs to carry subtitles.
Specifically, the terminal generates image content selection information, for example prompting the user by popping up a selection dialog box to select the content of the image to be processed; the selectable content includes, but is not limited to, a single frame, multiple frames, carrying subtitles, not carrying subtitles, and so on.
Step 315: generate a video-capture instruction according to the user's selection.
After the user completes the selection according to the image-content selection information, the video-capture instruction is generated according to that selection; that is, the video-capture instruction contains the image content to be processed as selected by the user.
For example, if the image content selected by the user is multiple frames, the video-capture instruction instructs the terminal that the image to be captured is a dynamic image (such as a video segment); if the image content selected by the user is without subtitles, the video-capture instruction instructs the terminal that the image to be captured is an image with the subtitles masked.
Through the process described above, the generated video-capture instruction reflects the user's actual screenshot needs, which facilitates the subsequent generation of an expression image that meets the user's personalized requirements.
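As an illustration only, the mapping from the user's selection to a capture instruction could be sketched as follows; the function and field names are assumptions made for illustration and do not appear in the patent.

```python
def build_capture_instruction(frame_count: int, mask_subtitles: bool) -> dict:
    """Encode the user's content selection into a video-capture instruction.

    A frame_count of 1 requests a single still frame; a larger count
    requests a dynamic (multi-frame) capture, as described in step 315.
    All names here are illustrative, not from the patent.
    """
    if frame_count < 1:
        raise ValueError("frame_count must be at least 1")
    return {
        "frame_count": frame_count,
        "animated": frame_count > 1,       # multiple frames -> dynamic image
        "mask_subtitles": mask_subtitles,  # capture with subtitles masked
    }

# A selection of "multiple frames, without subtitles" yields an animated capture:
instruction = build_capture_instruction(frame_count=8, mask_subtitles=True)
```

A single-frame selection would instead produce `{"frame_count": 1, "animated": False, ...}`, so the downstream capture step can branch on the `animated` field.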
Referring to Fig. 5, in one exemplary embodiment, step 330 may comprise the following steps:
Step 331: take the current frame position of the video playing in the Video Application as the starting frame position, and determine the ending frame position according to the frame count indicated in the video-capture instruction.
It should be appreciated that, after the video playing in the playback interface of the Video Application is paused, the frame position corresponding to the video image remaining in the playback interface is regarded as the current frame position of the video.
As mentioned above, the video-capture instruction can not only instruct the terminal whether image capture needs to be performed on the video; it can also instruct the terminal how many frames to capture, and whether the capture should carry subtitles. In other words, the video-capture instruction reflects the user's actual screenshot needs.
Based on this, in the present embodiment, the starting frame position and the ending frame position can be determined from the current frame position according to the frame count indicated in the video-capture instruction.
Specifically, the current frame position is the starting frame position, and the ending frame position is the sum of the starting frame position and the frame count.
Step 333: capture the image to be processed from the video according to the starting frame position and the ending frame position.
It is worth noting that, as mentioned above, the video-capture instruction can also indicate whether the capture needs to carry subtitles. Based on this, when performing the image capture according to the starting and ending frame positions, the terminal also performs the subtitle-masking operation as indicated by the video-capture instruction, so that the finally captured image to be processed is consistent with the user's actual screenshot needs.
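The frame-position arithmetic described above (ending position = starting position + frame count) might be sketched as follows; the frame list, function name, and subtitle-masking placeholder are illustrative assumptions, not the patent's implementation.

```python
def capture_frames(frames, current_frame, frame_count, mask_subtitles=False):
    """Slice frames from the paused position, per steps 331 and 333.

    The paused (current) frame position is the starting position; the
    ending position is starting position plus the requested frame count.
    """
    start = current_frame
    end = start + frame_count
    selected = frames[start:end]
    if mask_subtitles:
        # Stand-in for rendering the video layer without the subtitle
        # track; here each captured frame is merely tagged.
        selected = [f + "-nosub" for f in selected]
    return selected

video = ["frame%d" % i for i in range(100)]
clip = capture_frames(video, current_frame=42, frame_count=3)
# clip -> ['frame42', 'frame43', 'frame44']
```

A real implementation would operate on decoded video frames rather than strings, but the index arithmetic is the same.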
Referring to Fig. 6, in one exemplary embodiment, before step 350, the method described above may further comprise the following steps:
Step 410: generate image-processing selection information, which prompts the user to select whether to perform image processing immediately.
It can be appreciated that, after the terminal completes the image-capture operation on the video, the user may, according to actual needs, either have the captured image processed immediately or defer its processing.
Based on this, before performing image processing on the image to be processed, the terminal further queries the user about whether image processing should be performed immediately.
Specifically, the terminal generates the image-processing selection information, for example by popping up a selection dialog box, to prompt the user to choose whether to perform image processing immediately. If the user chooses immediate processing, the method proceeds to step 350 and the image to be processed is processed immediately.
Otherwise, if the user does not choose immediate processing, the method proceeds to step 430 and processing of the image is deferred.
Step 430: when the user does not choose immediate image processing, continue playing the video in the Video Application, and save the image to be processed to a default storage space, so that the image in the default storage space can be processed later.
Through the process described above, the adaptability of custom expression-image generation is improved and its application scenarios are extended: the method is applicable not only to immediate processing of the image to be processed, but equally to deferred processing.
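The immediate-versus-deferred branch described above might be sketched as follows; the storage dictionary and all names are illustrative assumptions rather than the patent's implementation.

```python
# Stand-in for the default storage space holding deferred images.
pending_store = {}

def handle_capture(image_id, image, process_now, process_fn):
    """Branch between step 350 (immediate) and step 430 (deferred)."""
    if process_now:
        return process_fn(image)       # step 350: process immediately
    pending_store[image_id] = image    # step 430: save, playback resumes
    return None

# The user declines immediate processing: the image is parked for later.
result = handle_capture("cap1", "raw-bytes", process_now=False,
                        process_fn=lambda img: img.upper())
```

Here `result` is `None` and the raw image waits in `pending_store` until an extraction instruction arrives.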
Further, in one exemplary embodiment, the method described above may further comprise the following step:
When an image-extraction instruction triggered in the default storage space is detected, extract the image to be processed from the default storage space, so as to perform image processing on the extracted image.
The image-extraction instruction indicates that the terminal user needs to perform image processing on an image whose processing was deferred.
Therefore, when an image-extraction instruction is triggered in the default storage space, the terminal knows that the user needs to process the deferred image. Accordingly, the terminal extracts the image to be processed from the default storage space and performs image processing on it.
For example, while saving the image to be processed into the default storage space, the terminal links the default storage space to a folder and links the image to a file in that folder. When the user clicks a file in the folder via a mouse or touch screen, an image-extraction instruction is triggered, and the image to be processed corresponding to that file is then extracted from the default storage space according to the instruction.
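The extraction of a deferred image from the default storage space might be sketched as follows; the store and all names are illustrative assumptions.

```python
# Stand-in for the default storage space with two deferred captures.
store = {"cap1": "image-1", "cap2": "image-2"}

def on_extract_instruction(store, image_id, process_fn):
    """Handle an image-extraction instruction: pull the matching deferred
    image out of the default storage space and hand it to processing."""
    image = store.pop(image_id)   # remove from the default storage space
    return process_fn(image)      # perform image processing on it

# Clicking the linked file for "cap1" triggers the extraction instruction:
expression = on_extract_instruction(store, "cap1", lambda i: i + ":styled")
```

After the call, only `cap2` remains deferred; `cap1` has been processed into an expression image.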
Referring to Fig. 7, in one exemplary embodiment, before step 370, the method described above may further comprise the following steps:
Step 510: search for a social application identifier of the user that has a correspondence with the Video Application; if no such identifier is found, initiate a social-application binding request to the social application server.
As mentioned above, in order to push the expression image directly to the social application server, the Video Application and the user's social application identifier must first be bound via the social application server; through this binding, the correspondence between the Video Application and the user's social application identifier is established in the terminal.
Based on this, before pushing the expression image, the terminal searches for a social application identifier of the user that has a correspondence with the Video Application, thereby determining whether such a correspondence has been established in the terminal, and in turn whether the expression image can be pushed directly.
If a social application identifier of the user having a correspondence with the Video Application is found, the method proceeds to step 370, and the expression image is pushed directly to the social application server.
Otherwise, if no such identifier is found, the user is guided to bind the Video Application with the user's social application identifier; that is, a social-application binding request is initiated to the social application server using the user's social application identifier. The binding request carries at least the user's social application identifier.
Step 530: the social application server responds to the binding request and establishes the correspondence between the Video Application and the user's social application identifier.
After receiving the binding request, the social application server extracts the user's social application identifier from it and verifies it, confirming whether a matching social application identifier exists among the many user identifiers it stores.
If its existence is confirmed, the binding request is accepted, so that the correspondence between the Video Application and the user's social application identifier is established in the terminal, enabling the direct push of expression images.
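The binding check before pushing could be sketched as follows; the identifier registry and function names are illustrative assumptions, and in the patent the verification occurs on the social application server rather than locally.

```python
# Stand-in for the server-side registry of known social application IDs.
known_social_ids = {"alice@social", "bob@social"}
# Stand-in for the terminal-side correspondence table (step 530's result).
bindings = {}

def ensure_binding(video_app_id, social_id):
    """Step 510/530 sketch: reuse an existing binding, or request one.

    Returns True when a correspondence exists (or is newly established),
    so the expression image can be pushed directly.
    """
    if video_app_id in bindings:           # correspondence already exists
        return True
    if social_id in known_social_ids:      # server verifies the identifier
        bindings[video_app_id] = social_id # establish the correspondence
        return True
    return False                           # verification failed; no push

ok = ensure_binding("videoapp-1", "alice@social")
```

An unknown identifier such as `"eve@social"` would fail verification and leave the binding table unchanged.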
Fig. 8 is a schematic diagram of a specific implementation of the method for custom generation of expression images in an application scenario. The method involved in the exemplary embodiments of the present invention is now described with reference to the specific application scenario shown in Fig. 8.
The user clicks the screenshot icon provided in the Video Application by performing step 601, whereupon the terminal responds to the user's trigger operation by performing step 602 and pauses the video playing in the Video Application. By further performing steps 602 to 603, the terminal obtains the user's Video Application identifier, which is subsequently used to establish in the terminal the correspondence between the user's Video Application identifier and the social application identifier.
After obtaining the user's Video Application identifier, the terminal performs image capture on the video according to the video-capture instruction generated in the Video Application and, by performing step 604, stores the captured image to be processed together with the user's Video Application identifier.
After the storage is completed, the terminal queries the user by performing step 605 about whether to process the captured image immediately. If not, the terminal continues playing the paused video in the Video Application by performing step 606; otherwise, by performing step 607, it calls the image-processing plug-in embedded in the Video Application to process the image to be processed and generate an expression image.
After the expression image is generated, the terminal queries the user by performing step 608 about whether to push the expression image to the social application server. If not, the terminal continues playing the paused video by performing step 606; otherwise, by performing steps 609 to 610, it establishes the correspondence between the user's Video Application identifier and the social application identifier, and once the correspondence exists, it pushes the expression image to the social application server according to the user's social application identifier by performing step 611.
After the social application server receives the expression image, it stores the expression image into the expression pack corresponding to the user's social application identifier. When the user subsequently uses the social application, the expression pack corresponding to the user's social application identifier can be retrieved from the social application server, and an expression image can be selected from the pack to express the user's current mood, thereby conveying information to the other party through the expression image.
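The end-to-end flow of Fig. 8 (capture, plug-in processing, conditional push) might be condensed into the following sketch; every name here is an illustrative assumption, not the patent's implementation.

```python
def generate_expression(frames, current, count, process, push, bound):
    """Condensed Fig. 8 flow sketch.

    Capture frames from the paused position (steps 602-604), process
    them into an expression image (step 607), and push the result to
    the social application server only if a binding exists (step 611).
    """
    image = frames[current:current + count]   # capture at paused position
    expression = process(image)               # embedded plug-in processing
    if bound:
        return push(expression)               # direct push to the server
    return expression                         # kept locally until bound

sent = []                                     # stand-in for the server inbox
out = generate_expression(list(range(10)), 2, 2,
                          process=lambda img: ("expr", tuple(img)),
                          push=lambda e: sent.append(e) or e,
                          bound=True)
```

With `bound=False`, the expression image would simply be returned to the terminal and nothing appended to `sent`.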
In the various embodiments of the present invention, efficient custom generation of expression images is thus achieved.
The following are apparatus embodiments of the present invention, which can be used to execute the method for custom generation of expression images according to the present invention. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the custom expression-image generation method according to the present invention.
Referring to Fig. 9, in one exemplary embodiment, an apparatus 700 for custom generation of expression images includes, but is not limited to: an instruction acquisition module 710, a video capture module 730, an image processing module 750, and an image pushing module 770.
The instruction acquisition module 710 is configured to obtain a video-capture instruction generated in the Video Application.
The video capture module 730 is configured to perform image capture on the video playing in the Video Application according to the video-capture instruction, obtaining an image to be processed.
The image processing module 750 is configured to call the image-processing plug-in embedded in the Video Application to perform image processing on the image to be processed, generating an expression image.
The image pushing module 770 is configured to push the expression image to the social application server according to the user's social application identifier, the user's social application identifier having a correspondence with the Video Application.
Referring to Fig. 10, in one exemplary embodiment, the instruction acquisition module 710 includes, but is not limited to: an operation response unit 711, an information generating unit 713, and an instruction generation unit 715.
The operation response unit 711 is configured to respond to the user's trigger operation by pausing the video playing in the Video Application.
The information generating unit 713 is configured to generate image-content selection information, which prompts the user to select the content of the image to be processed.
The instruction generation unit 715 is configured to generate the video-capture instruction according to the user's selection.
Referring to Fig. 11, in one exemplary embodiment, the video capture module 730 includes, but is not limited to: a frame-position determination unit 731 and an image capture unit 733.
The frame-position determination unit 731 is configured to take the current frame position of the video playing in the Video Application as the starting frame position, and determine the ending frame position according to the frame count indicated in the video-capture instruction.
The image capture unit 733 is configured to capture the image to be processed from the video according to the starting frame position and the ending frame position.
Referring to Fig. 12, in one exemplary embodiment, the apparatus 700 described above further includes, but is not limited to: an information generating module 810 and an image storage module 830.
The information generating module 810 is configured to generate image-processing selection information, which prompts the user to select whether to perform image processing immediately.
The image storage module 830 is configured to, when the user does not choose immediate image processing, continue playing the video in the Video Application and save the image to be processed to the default storage space, so that the image in the default storage space can be processed later.
Further, in one exemplary embodiment, the apparatus 700 described above further includes, but is not limited to: an image extraction module.
The image extraction module is configured to, when an image-extraction instruction triggered in the default storage space is detected, extract the image to be processed from the default storage space, so as to perform image processing on the extracted image.
Referring to Fig. 13, in one exemplary embodiment, the apparatus 700 described above further includes, but is not limited to: a search module 910 and a binding module 930.
The search module 910 is configured to search for a social application identifier of the user that has a correspondence with the Video Application, and, if no such identifier is found, initiate a social-application binding request to the social application server.
The binding module 930 is configured to respond, via the social application server, to the social-application binding request, establishing the correspondence between the Video Application and the user's social application identifier.
It should be noted that, when the apparatus for custom generation of expression images provided by the above embodiments generates an expression image, the division into the above functional modules is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above.
In addition, the apparatus embodiments provided above and the method embodiments of the custom expression-image generation method belong to the same concept; the specific manner in which each module performs its operations has been described in detail in the method embodiments and will not be repeated here.
The foregoing is merely a preferred exemplary embodiment of the present invention and is not intended to limit the embodiments of the present invention. A person of ordinary skill in the art can easily make corresponding adaptations or modifications according to the main concept and spirit of the present invention; therefore, the protection scope of the present invention shall be subject to the scope claimed in the claims.
Claims (12)
1. A method for custom generation of expression images, comprising:
obtaining a video-capture instruction generated in a Video Application;
performing image capture on a video playing in the Video Application according to the video-capture instruction, obtaining an image to be processed;
displaying an image-processing plug-in in the form of a toolbar in the Video Application, and performing image processing on the image to be processed via the displayed image-processing plug-in, generating an expression image;
pushing the expression image to a social application server according to a social application identifier of a user, the social application identifier of the user having a correspondence with the Video Application.
2. The method of claim 1, wherein the step of obtaining the video-capture instruction generated in the Video Application comprises:
responding to a trigger operation of the user by pausing the video playing in the Video Application;
generating image-content selection information, and prompting the user via the image-content selection information to select the content of the image to be processed;
generating the video-capture instruction according to the selection of the user.
3. The method of claim 1, wherein the step of performing image capture on the video playing in the Video Application according to the video-capture instruction, obtaining the image to be processed, comprises:
taking a current frame position of the video playing in the Video Application as a starting frame position, and determining an ending frame position according to a frame count indicated in the video-capture instruction;
capturing the image to be processed from the video according to the starting frame position and the ending frame position.
4. The method of claim 1, wherein before the step of displaying the image-processing plug-in in the form of a toolbar in the Video Application and performing image processing on the image to be processed via the displayed image-processing plug-in, generating the expression image, the method further comprises:
generating image-processing selection information, and prompting the user via the image-processing selection information to select whether to perform image processing immediately;
when the user does not choose immediate image processing, continuing to play the video in the Video Application, and saving the image to be processed to a default storage space, so that the image to be processed in the default storage space is processed later.
5. The method of claim 4, further comprising:
when an image-extraction instruction triggered in the default storage space is detected, extracting the image to be processed from the default storage space, so as to perform image processing on the extracted image to be processed.
6. The method of any one of claims 1 to 5, wherein before the step of pushing the expression image to the social application server according to the social application identifier of the user, the method further comprises:
searching for a social application identifier of the user having a correspondence with the Video Application, and, if no social application identifier of the user having a correspondence with the Video Application is found, initiating a social-application binding request to the social application server;
responding, by the social application server, to the social-application binding request, establishing the correspondence between the Video Application and the social application identifier of the user.
7. An apparatus for custom generation of expression images, comprising:
an instruction acquisition module, configured to obtain a video-capture instruction generated in a Video Application;
a video capture module, configured to perform image capture on a video playing in the Video Application according to the video-capture instruction, obtaining an image to be processed;
an image processing module, configured to display an image-processing plug-in in the form of a toolbar in the Video Application, and perform image processing on the image to be processed via the displayed image-processing plug-in, generating an expression image;
an image pushing module, configured to push the expression image to a social application server according to a social application identifier of a user, the social application identifier of the user having a correspondence with the Video Application.
8. The apparatus of claim 7, wherein the instruction acquisition module comprises:
an operation response unit, configured to respond to a trigger operation of the user by pausing the video playing in the Video Application;
an information generating unit, configured to generate image-content selection information, and prompt the user via the image-content selection information to select the content of the image to be processed;
an instruction generation unit, configured to generate the video-capture instruction according to the selection of the user.
9. The apparatus of claim 7, wherein the video capture module comprises:
a frame-position determination unit, configured to take a current frame position of the video playing in the Video Application as a starting frame position, and determine an ending frame position according to a frame count indicated in the video-capture instruction;
an image capture unit, configured to capture the image to be processed from the video according to the starting frame position and the ending frame position.
10. The apparatus of claim 7, further comprising:
an information generating module, configured to generate image-processing selection information, and prompt the user via the image-processing selection information to select whether to perform image processing immediately;
an image storage module, configured to, when the user does not choose immediate image processing, continue playing the video in the Video Application, and save the image to be processed to a default storage space, so that the image to be processed in the default storage space is processed later.
11. The apparatus of claim 10, further comprising:
an image extraction module, configured to, when an image-extraction instruction triggered in the default storage space is detected, extract the image to be processed from the default storage space, so as to perform image processing on the extracted image to be processed.
12. The apparatus of any one of claims 7 to 11, further comprising:
a search module, configured to search for a social application identifier of the user having a correspondence with the Video Application, and, if no social application identifier of the user having a correspondence with the Video Application is found, initiate a social-application binding request to the social application server;
a binding module, configured to respond, by the social application server, to the social-application binding request, establishing the correspondence between the Video Application and the social application identifier of the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710007418.9A CN106658079B (en) | 2017-01-05 | 2017-01-05 | The customized method and device for generating facial expression image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106658079A CN106658079A (en) | 2017-05-10 |
CN106658079B true CN106658079B (en) | 2019-04-30 |
Family
ID=58843254
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710007418.9A Active CN106658079B (en) | 2017-01-05 | 2017-01-05 | The customized method and device for generating facial expression image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106658079B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109472849B (en) * | 2017-09-07 | 2023-04-07 | 腾讯科技(深圳)有限公司 | Method, device, terminal equipment and storage medium for processing image in application |
CN108200463B (en) * | 2018-01-19 | 2020-11-03 | 上海哔哩哔哩科技有限公司 | Bullet screen expression package generation method, server and bullet screen expression package generation system |
CN108596114A (en) * | 2018-04-27 | 2018-09-28 | 佛山市日日圣科技有限公司 | A kind of expression generation method and device |
CN108712323B (en) * | 2018-05-02 | 2021-05-28 | 广州市百果园网络科技有限公司 | Voice transmission method, system, computer storage medium and computer device |
CN110163932A (en) * | 2018-07-12 | 2019-08-23 | 腾讯数码(天津)有限公司 | Image processing method, device, computer-readable medium and electronic equipment |
CN109120866B (en) * | 2018-09-27 | 2020-04-03 | 腾讯科技(深圳)有限公司 | Dynamic expression generation method and device, computer readable storage medium and computer equipment |
CN111507143B (en) * | 2019-01-31 | 2023-06-02 | 北京字节跳动网络技术有限公司 | Expression image effect generation method and device and electronic equipment |
CN110149549B (en) * | 2019-02-26 | 2022-09-13 | 腾讯科技(深圳)有限公司 | Information display method and device |
CN110049377B (en) * | 2019-03-12 | 2021-06-22 | 北京奇艺世纪科技有限公司 | Expression package generation method and device, electronic equipment and computer readable storage medium |
CN110162670B (en) * | 2019-05-27 | 2020-05-08 | 北京字节跳动网络技术有限公司 | Method and device for generating expression package |
CN113032339B (en) * | 2019-12-09 | 2023-10-20 | 腾讯科技(深圳)有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN111984173B (en) * | 2020-07-17 | 2022-03-25 | 维沃移动通信有限公司 | Expression package generation method and device |
CN113345054A (en) * | 2021-05-28 | 2021-09-03 | 上海哔哩哔哩科技有限公司 | Virtual image decorating method, detection method and device |
CN113568551A (en) * | 2021-07-26 | 2021-10-29 | 北京达佳互联信息技术有限公司 | Picture saving method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101527690A (en) * | 2009-04-13 | 2009-09-09 | 腾讯科技(北京)有限公司 | Method for intercepting dynamic image, system and device thereof |
CN105828167A (en) * | 2016-03-04 | 2016-08-03 | 乐视网信息技术(北京)股份有限公司 | Screen-shot sharing method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB0712879D0 (en) * | 2007-07-03 | 2007-08-08 | Skype Ltd | Video communication system and method |
2017-01-05: CN CN201710007418.9A patent/CN106658079B/en — status: Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101527690A (en) * | 2009-04-13 | 2009-09-09 | 腾讯科技(北京)有限公司 | Method for intercepting dynamic image, system and device thereof |
CN105828167A (en) * | 2016-03-04 | 2016-08-03 | 乐视网信息技术(北京)股份有限公司 | Screen-shot sharing method and device |
Also Published As
Publication number | Publication date |
---|---|
CN106658079A (en) | 2017-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106658079B (en) | The customized method and device for generating facial expression image | |
CN104618780B (en) | Electrical equipment control method and system | |
CN103648048B (en) | Intelligent television video resource searching method and system | |
US20130262687A1 (en) | Connecting a mobile device as a remote control | |
CN106570100A (en) | Information search method and device | |
CN103166941A (en) | Data sharing method and device | |
CN106062763A (en) | Method and apparatus for displaying application and picture, and electronic device | |
CN110490808A (en) | Picture joining method, device, terminal and storage medium | |
CN107786427B (en) | Information interaction method, terminal and computer readable storage medium | |
CN110389697B (en) | Data interaction method and device, storage medium and electronic device | |
CN107659850A (en) | Media information processing method and device | |
CN107547934A (en) | Information transferring method and device based on video | |
CN105892822A (en) | Mobile terminal and rapid setting method and device thereof | |
CN106126377B (en) | The method and device of system starting | |
CN104023248A (en) | Video screen capturing method and device | |
CN106095132B (en) | Playback equipment keypress function setting method and device | |
WO2016026108A1 (en) | Application program switch method, apparatus and electronic terminal | |
CN114691277A (en) | Application program processing method, intelligent terminal and storage medium | |
CN109842820A (en) | Barrage data inputting method and device, mobile terminal and readable storage medium storing program for executing | |
CN113162844B (en) | Instant messaging method, instant messaging device, electronic equipment and storage medium | |
CN106658138B (en) | Smart television and its signal source switch method, device | |
CN109559274A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN104536571B (en) | The method of controlling operation thereof and device of earphone | |
CN108933958B (en) | Method, storage medium, equipment and system for realizing microphone connection preview at user side | |
CN106406838A (en) | Screen shot sharing method, apparatus, and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||