CN108875670A - Information processing method, device and storage medium - Google Patents
- Publication number
- CN108875670A CN108875670A CN201810688665.4A CN201810688665A CN108875670A CN 108875670 A CN108875670 A CN 108875670A CN 201810688665 A CN201810688665 A CN 201810688665A CN 108875670 A CN108875670 A CN 108875670A
- Authority
- CN
- China
- Prior art keywords
- data
- cover
- image data
- video data
- frame image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The invention discloses an information processing method, including: obtaining first video data; in a first mode, determining at least one frame of image data from the multiple frames of image data included in the first video data; generating cover data according to the at least one frame of image data, and merging the cover data with the first video data to generate second video data. The invention also discloses an information processing device and a computer-readable storage medium.
Description
Technical field
The present invention relates to information processing technology, and in particular to an information processing method, device, and computer-readable storage medium.
Background technique
Existing short-video creation application programs (APPs) provide short-video shooting and editing functions, and some also provide a video cover selection function. Existing video cover selection methods generally comprise the following steps: displaying a series of picture frames on a cover selection page; the user drags a cursor over the picture frames to select a cover starting point; the selected cover is set according to a preset cover specification (which may include frame rate, size, etc.), and the currently selected cover effect is previewed. As shown in Fig. 1, several existing APPs provide such cover selection pages. However, the above video cover selection methods have the following problem: covers can only be selected according to predefined cover selection rules; users are not supported in customizing covers or freely extracting frames to compose a dynamic cover.
Summary of the invention
In view of this, the main purpose of the present invention is to provide an information processing method, device, and computer-readable storage medium.
To achieve the above objective, the technical solution of the invention is realized as follows:
An embodiment of the invention provides an information processing method, the method including:
obtaining first video data;
in a first mode, determining at least one frame of image data from the multiple frames of image data included in the first video data;
generating cover data according to the at least one frame of image data, and merging the cover data with the first video data to generate second video data.
In the above scheme, determining at least one frame of image data from the multiple frames of image data included in the first video data includes:
obtaining a first input operation, and determining at least one frame of image data from the multiple frames of image data included in the first video data based on the first input operation; or,
analyzing the first video data to obtain a target object in the first video data;
obtaining, from the multiple frames of image data included in the first video data, at least part of the frames of image data containing the target object, and generating an alternative cover data set;
determining at least one frame of image data from the alternative cover data set.
In the above scheme, the method further includes: obtaining a face image of a user in real time, and determining an expression feature of the user according to the face image; determining, according to a saved correspondence between expression features and image styles, the image style corresponding to the expression feature;
performing image processing on the at least one frame of image data according to the image style;
generating cover data according to the at least one frame of image data includes: generating cover data according to the at least one frame of image data after the image processing.
In the above scheme, generating cover data according to the at least one frame of image data includes:
generating initial cover data according to the at least one frame of image data;
in a second mode, performing image processing on the initial cover data to generate the cover data; the image processing includes at least one of: adding material data to the initial cover data, and changing a display attribute parameter of the initial cover data.
In the above scheme, determining at least one frame of image data from the multiple frames of image data included in the first video data includes:
determining multiple groups of frame image data from the multiple frames of image data included in the first video data, each group of frame image data including at least one frame of image data;
generating cover data according to the at least one frame of image data includes:
generating cover data according to each group of frame image data in the multiple groups of frame image data, obtaining multiple pieces of cover data.
In the above scheme, merging the cover data with the first video data to generate the second video data includes:
determining a sending object of the second video data, and selecting target cover data from the multiple pieces of cover data according to an attribute of the sending object;
merging the target cover data with the first video data to generate the second video data.
An embodiment of the invention also provides an information processing device, the device including: a first processing module, a second processing module, and a third processing module; wherein,
the first processing module is configured to obtain first video data;
the second processing module is configured to, in a first mode, determine at least one frame of image data from the multiple frames of image data included in the first video data obtained by the first processing module;
the third processing module is configured to generate cover data according to the at least one frame of image data determined by the second processing module, and merge the cover data with the first video data to generate second video data.
In the above scheme, the second processing module is specifically configured to obtain a first input operation and, based on the first input operation, determine at least one frame of image data from the multiple frames of image data included in the first video data; or,
analyze the first video data to obtain a target object in the first video data;
obtain, from the multiple frames of image data included in the first video data, at least part of the frames of image data containing the target object, and generate an alternative cover data set;
determine at least one frame of image data from the alternative cover data set.
In the above scheme, the first processing module is also configured to obtain a face image of the user in real time and determine an expression feature of the user according to the face image; determine, according to a saved correspondence between expression features and image styles, the image style corresponding to the expression feature; and perform image processing on the at least one frame of image data according to the image style;
the third processing module is configured to generate cover data according to the at least one frame of image data after the image processing by the first processing module.
In the above scheme, the third processing module is specifically configured to generate initial cover data according to the at least one frame of image data; in a second mode, perform image processing on the initial cover data to generate the cover data; the image processing includes at least one of: adding material data to the initial cover data, and changing a display attribute parameter of the initial cover data.
In the above scheme, the second processing module is specifically configured to determine multiple groups of frame image data from the multiple frames of image data included in the first video data, each group of frame image data including at least one frame of image data;
the third processing module is configured to generate cover data according to each group of frame image data in the multiple groups of frame image data, obtaining multiple pieces of cover data.
In the above scheme, the third processing module is specifically configured to determine a sending object of the second video data, select target cover data from the multiple pieces of cover data according to an attribute of the sending object, and merge the target cover data with the first video data to generate the second video data.
An embodiment of the invention also provides an information processing device, the device including: a processor and a memory for storing a computer program capable of running on the processor; wherein,
the processor is configured to execute the steps of any one of the above information processing methods when running the computer program.
An embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, implements the steps of any one of the above information processing methods.
The information processing method, device, and computer-readable storage medium provided by embodiments of the present invention obtain first video data; in a first mode, determine at least one frame of image data from the multiple frames of image data included in the first video data; generate cover data according to the at least one frame of image data, and merge the cover data with the first video data to generate second video data. By supporting, in the first mode, the selection of at least one frame of image data from multiple frames of image data to generate cover data, embodiments of the present invention realize customized production of video covers and greatly improve the user's operating experience.
Detailed description of the invention
Fig. 1(a) to Fig. 1(d) are schematic diagrams of cover selection pages of existing APPs, provided in an embodiment of the present invention;
Fig. 2 is a flow diagram of an information processing method provided in an embodiment of the present invention;
Fig. 3 is a flow diagram of another information processing method provided in an embodiment of the present invention;
Fig. 4(a) and Fig. 4(b) are schematic diagrams of image data selection pages provided in an embodiment of the present invention;
Fig. 5 is a schematic diagram of a sticker and filter addition page provided in an embodiment of the present invention;
Fig. 6 is a structural schematic diagram of an information processing device provided in an embodiment of the present invention;
Fig. 7 is a structural schematic diagram of another information processing device provided in an embodiment of the present invention.
Specific embodiment
In various embodiments of the present invention, first video data is obtained; in a first mode, at least one frame of image data is determined from the multiple frames of image data included in the first video data; cover data is generated according to the at least one frame of image data, and the cover data is merged with the first video data to generate second video data.
The present invention is described in further detail below with reference to embodiments.
Fig. 2 is a flow diagram of an information processing method provided in an embodiment of the present invention. As shown in Fig. 2, the information processing method can be applied to a mobile terminal, which may be a smart device such as a mobile phone, tablet, or computer; the method includes:
Step 101: obtain first video data.
Specifically, obtaining the first video data may include:
receiving a shooting operation input by a user and recording a video according to the shooting operation, the video data obtained after recording serving as the first video data; or,
receiving a read operation input by the user and selecting, according to the read operation, any video data from at least one video locally stored on the mobile terminal, as the first video data.
Step 102: in a first mode, determine at least one frame of image data from the multiple frames of image data included in the first video data.
Here, the first mode refers to a cover data creation mode. In one implementation, the mobile terminal can enter the first mode after obtaining the first video data. In another implementation, the first mode is entered upon detecting a cover data creation instruction, where the cover data creation instruction can be obtained by detecting an operation on a specific function key, by detecting a gesture operation containing the instruction, or by detecting voice data containing the instruction. In the first mode, the multiple frames of image data included in the first video data are obtained and displayed in a specific display manner.
Displaying the multiple frames of image data in the specific display manner includes: arranging the multiple frames of image data in chronological order and displaying them as thumbnails. A thumbnail may be a reduced frame of image data, or a partial frame of image data containing only a key feature of the frame, such as a face image.
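The chronological thumbnail display described above can be sketched as follows. This is an illustrative Python sketch only; the patent specifies no implementation, and names such as `frames_as_thumbnails` are hypothetical. A real app would also downscale the pixel data.

```python
def frames_as_thumbnails(frames):
    """Sort extracted frames chronologically and reduce each to a
    thumbnail entry (here just a marker; no real downscaling)."""
    ordered = sorted(frames, key=lambda f: f["timestamp"])
    return [{"timestamp": f["timestamp"], "thumbnail": True} for f in ordered]

frames = [{"timestamp": 2.0}, {"timestamp": 0.5}, {"timestamp": 1.0}]
thumbs = frames_as_thumbnails(frames)
print([t["timestamp"] for t in thumbs])  # → [0.5, 1.0, 2.0]
```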
Specifically, determining at least one frame of image data from the multiple frames of image data included in the first video data includes:
obtaining a first input operation, and determining at least one frame of image data from the multiple frames of image data included in the first video data based on the first input operation.
Here, the first input operation may be an operation input by the user to the mobile terminal for selecting at least one frame of image data.
After the mobile terminal displays, via a human-computer interaction interface, the multiple frames of image data included in the first video data, the user can select at least one frame of image data from them.
Alternatively, determining at least one frame of image data from the multiple frames of image data included in the first video data includes:
analyzing the first video data to obtain a target object in the first video data;
obtaining, from the multiple frames of image data included in the first video data, at least part of the frames of image data containing the target object, and generating an alternative cover data set;
determining at least one frame of image data from the alternative cover data set.
Specifically, the mobile terminal performs frame extraction on the first video data to obtain the multiple frames of image data it contains; analyzes the multiple frames of image data (e.g. performs image recognition) to obtain the target object in the first video data (the target object may be a certain person, a certain object, etc.); obtains, from the multiple frames of image data included in the first video data, at least part of the frames containing the target object (e.g. the frames containing a certain person), generating an alternative cover data set; and determines at least one frame of image data from the alternative cover data set.
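The filtering step that builds the alternative cover data set can be sketched as below. This is a hypothetical Python sketch; `detect_objects` stands in for a real recognizer (e.g. a face detector), which the patent does not specify.

```python
def build_alternative_cover_set(frames, target_object, detect_objects):
    """frames: list of (index, frame_data) pairs. Keep only the frames
    whose detected objects include the chosen target."""
    return [(i, f) for i, f in frames if target_object in detect_objects(f)]

# Toy detector: objects are annotated directly on each frame dict.
frames = [(0, {"objects": {"person_A"}}),
          (1, {"objects": {"tree"}}),
          (2, {"objects": {"person_A", "dog"}})]
candidates = build_alternative_cover_set(frames, "person_A",
                                         lambda f: f["objects"])
print([i for i, _ in candidates])  # → [0, 2]
```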
It should be noted that when one frame of image data is determined, a static image is generated as the cover data; when multiple frames of image data are determined, a dynamic image, such as a Graphics Interchange Format (GIF) image, is obtained according to the multiple frames of image data and serves as the cover data.
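The static-versus-dynamic rule above can be sketched as a small branch. An illustrative sketch only; the cover representation here is a plain dict, not the patent's actual data format.

```python
def generate_cover(selected_frames):
    """One selected frame yields a static cover; several yield a
    dynamic (GIF-style) cover, mirroring the rule stated above."""
    if len(selected_frames) == 1:
        return {"type": "static", "frames": list(selected_frames)}
    return {"type": "dynamic", "format": "GIF", "frames": list(selected_frames)}

print(generate_cover(["f0"])["type"])        # → static
print(generate_cover(["f0", "f1"])["type"])  # → dynamic
```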
In this embodiment, the user is supported in selecting multiple frames of image data to customize a dynamic image, breaking the existing cover rule that only a static image can serve as a cover; this increases the user's degree of freedom and can also stimulate the user's desire to create.
Step 103: generate cover data according to the at least one frame of image data, and merge the cover data with the first video data to generate second video data.
Specifically, the method further includes:
obtaining a face image of the user in real time, and determining an expression feature of the user according to the face image; determining, according to a saved correspondence between expression features and image styles, the image style corresponding to the expression feature; performing image processing on the at least one frame of image data according to the image style;
generating cover data according to the at least one frame of image data includes: generating cover data according to the at least one frame of image data after the image processing.
Here, the correspondence between expression features and image styles can be preset and saved. The expression features may include smiling, dejection, etc., and the corresponding image styles may include warm tones, cool tones, etc.
Each image style is defined by specific image parameters, which may include brightness, cool tone, warm tone, saturation, contrast, and the like. After the expression feature is determined, the corresponding image style is determined, and the image parameters corresponding to the image style are applied to the image data, completing the image processing of the at least one frame of image data according to the image style.
Here, the face image of the user may be collected in real time by an image collection assembly (such as a camera) of the mobile terminal, or it may be collected by an image capture device that has established a communication connection with the mobile terminal and then sent to the mobile terminal; either way, the mobile terminal obtains the user's face image.
Specifically, generating cover data according to the at least one frame of image data includes:
generating initial cover data according to the at least one frame of image data;
in a second mode, performing image processing on the initial cover data to generate the cover data; the image processing includes at least one of: adding material data to the initial cover data, and changing a display attribute parameter of the initial cover data.
Here, the second mode refers to a secondary editing mode; in this mode, the initial cover data can be edited.
In one implementation, the second mode is entered after an edit instruction is detected. The edit instruction can be obtained by detecting an operation on a specific function key (for example, the human-computer interaction interface of the mobile terminal can show an edit button; when the user clicks the edit button, the mobile terminal receives the corresponding edit instruction and enters the second mode according to the edit instruction); it can also be obtained by detecting a gesture operation containing the edit instruction, or by detecting voice data containing the edit instruction.
Here, the mobile terminal can store different material data, such as stickers and emoji packs.
Adding material data to the initial cover data may include:
receiving an addition operation input by the user, determining, according to the addition operation, the material data to be added and the position at which to add it, and executing the addition operation to add the material data to the corresponding position.
Here, changing the display attribute parameter of the initial cover data may include:
receiving an adjustment operation input by the user, determining, according to the adjustment operation, an adjustment parameter for the size and/or filter of the initial cover data, and changing the display attribute parameter of the initial cover data according to the adjustment parameter.
It should be noted that the mobile terminal can provide different filters, each corresponding to different parameter values of brightness, cool tone, warm tone, saturation, and the like; adjusting the filter applies these parameter values to the initial cover data.
By performing secondary editing on the cover data, such as adding stickers and filters, the user's secondary creation is merged in, a video cover that better meets the user's requirements is obtained, and the user's operating experience is improved.
In this embodiment, determining at least one frame of image data from the multiple frames of image data included in the first video data in step 102 may further include:
determining multiple groups of frame image data from the multiple frames of image data included in the first video data, each group of frame image data including at least one frame of image data;
generating cover data according to the at least one frame of image data includes:
generating cover data according to each group of frame image data in the multiple groups of frame image data, obtaining multiple pieces of cover data.
Here, multiple pieces of cover data can be provided through the above steps, and the multiple pieces of cover data can target different application scenarios.
Correspondingly, merging the cover data with the first video data to generate the second video data includes:
determining a sending object of the second video data, and selecting target cover data from the multiple pieces of cover data according to an attribute of the sending object;
merging the target cover data with the first video data to generate the second video data.
Specifically, the attribute of the sending object may include the platform to be sent to (such as WeChat or Weibo), the user to be sent to, and so on.
When multiple pieces of cover data are provided, different cover data correspond to attributes of different sending objects. After the object to be sent to is determined, the corresponding group of cover data is determined according to the attribute of that object, and the determined group of cover data is merged with the first video data to generate the second video data.
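The attribute-based cover selection can be sketched as a lookup. A hypothetical sketch; the platform names and the `"default"` fallback key are illustrative assumptions, not from the patent.

```python
def select_target_cover(covers_by_attribute, sending_object,
                        default_key="default"):
    """covers_by_attribute maps an attribute value (e.g. a platform name)
    to a piece of cover data; unknown attributes fall back to a default."""
    key = sending_object.get("platform", default_key)
    return covers_by_attribute.get(key, covers_by_attribute[default_key])

covers = {"WeChat": "cover_A", "Weibo": "cover_B", "default": "cover_C"}
print(select_target_cover(covers, {"platform": "Weibo"}))  # → cover_B
```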
Here, multiple pieces of cover data can be associated with the same first video data and applied in different scenarios, forming a "thousand faces for a thousand people" effect, so that the video published by the user has more variability and attractiveness, which in turn can also stimulate the user's desire to create.
Specifically, the method further includes: saving the cover data into a cover data set, and/or sending the cover data to a server.
Merging the cover data with the first video data includes:
obtaining the cover data associated with the first video data from the stored cover data set, and merging the cover data with the first video data; or,
obtaining the cover data associated with the first video data from the server, and merging the cover data with the first video data.
Here, the first video data has an association with certain cover data in the cover data set, and likewise with certain cover data on the server. When the mobile terminal cannot read the cover data in the local cover data set, it can send a request to the server to obtain the cover data; on receiving the cover data fed back by the server, it merges that cover data with the first video data to generate the second video data.
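The local-first, server-fallback lookup described above can be sketched as below. A hypothetical sketch; `request_server` stands in for the actual network request, which the patent does not detail.

```python
def fetch_cover(video_id, local_cover_set, request_server):
    """Try the local cover data set first; fall back to a server
    request when the cover cannot be read locally."""
    cover = local_cover_set.get(video_id)
    if cover is None:
        cover = request_server(video_id)  # stand-in for the network call
    return cover

local = {"video_1": "cover_local"}
print(fetch_cover("video_1", local, lambda v: "cover_remote"))  # → cover_local
print(fetch_cover("video_2", local, lambda v: "cover_remote"))  # → cover_remote
```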
Specifically, the method further includes:
when the cover data associated with the first video data is not stored in the cover data set and the server cannot be connected, generating prompt information for prompting the user to check the network and then request the server again to obtain the cover data associated with the first video data.
Specifically, after the user has replaced the mobile terminal, the replacement terminal may be unable to connect to the network; it may then occur that the cover data associated with the first video data is not stored in the cover data set and the mobile terminal cannot connect to the server. At this point, the mobile terminal can generate prompt information to inform the user that the server cannot be connected at the moment, and to prompt the user to check the network and then request the server again to obtain the cover data associated with the first video data.
Through the above method, at least one frame of image data can be selected from a video to generate a static cover or a dynamic cover; in addition, the selected cover data can be edited a second time, and multiple sets of covers can also be set up for different scenarios.
Fig. 3 is a flow diagram of another information processing method provided in an embodiment of the present invention; as shown in Fig. 3, the method may include:
Step 201: record a video to obtain first video data.
Here, the user records a video using the mobile terminal, or using application software applying the information processing method, obtaining the first video data.
Step 202: in a first mode, determine at least one frame of image data from the multiple frames of image data included in the first video data.
Here, the human-computer interaction interface of the mobile terminal shows a cover selection button; after the user clicks the cover selection button, the mobile terminal receives a cover data creation instruction and can enter the first mode.
Step 203: generate initial cover data according to the at least one frame of image data.
Here, when the user selects one frame of image data, a static image is generated as the initial cover data; when the user selects multiple frames of image data, a dynamic image (such as a GIF image) can be obtained according to the multiple frames of image data, as the initial cover data.
Specifically, in the first mode, the user can choose a certain frame of image data to make static initial cover data, or click the button for making dynamic cover data to make dynamic initial cover data.
As shown in Fig. 4(a), after the user clicks the button for making dynamic cover data, the mobile terminal receives a dynamic cover data creation instruction and enters the dynamic cover data creation page. The user can select at least one frame of image data (e.g. 5 frames) from the multiple frames of image data extracted from the first video data and laid out on the page (which can be shown as a group of thumbnails); a selected picture frame can be highlighted to indicate selection, and, as shown in Fig. 4(b), tapping a selected picture frame again cancels the selection. After the user finishes choosing and clicks the generate button, the mobile terminal merges the determined multiple frames of image data to generate dynamic initial cover data, such as a GIF image.
Step 204: in a second mode, perform image processing on the initial cover data to generate cover data.
Here, in the second mode, the user can perform secondary editing on the initial cover data, including superimposing stickers, filters, etc.; the secondary editing page can be as shown in Fig. 5.
Specifically, after the initial cover data is generated, the second mode can be entered. When the user clicks a certain sticker, the sticker is shown by default at the center of the preview area, and the user can drag, scale, and rotate it; and/or, when the user clicks a certain filter, the mobile terminal receives the click operation input by the user, determines the filter identifier of the selected filter, and superimposes on the initial cover data the image processing effect (such as black-and-white or European countryside) corresponding to that filter identifier.
Here, when the user performs any of the above operations via a corresponding button or gesture, the mobile terminal can obtain the corresponding instruction; according to the obtained instruction, the mobile terminal completes the editing of the initial cover data.
Step 205: merge the cover data with the first video data to generate second video data.
Specifically, step 205 can also include:
storing the determined cover data in a local cover data set, and/or sending the cover data to a server.
Merging the cover data with the first video data includes:
obtaining the cover data associated with the first video data from the stored cover data set, and merging the cover data with the first video data; or,
obtaining the cover data associated with the first video data from the server, and merging the cover data with the first video data.
In this embodiment, the method can also include:
each time the user chooses and edits initial cover data, the mobile terminal stores the edited cover data into the cover data set corresponding to the first video data;
configuring the multiple pieces of cover data as a carousel cover for the first video data. When a browsing user performs a refresh operation, or when the second video data is shared multiple times, the cover data can rotate randomly, presenting the "thousand faces for a thousand people" effect.
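The random carousel rotation can be sketched in a few lines. An illustrative sketch only; the patent does not specify the rotation policy beyond "random", and the injectable `rng` parameter is an assumption added for testability.

```python
import random

def pick_carousel_cover(covers, rng=None):
    """On each refresh or repeated share, pick one of the configured
    carousel covers at random."""
    return (rng or random).choice(covers)

covers = ["cover_A", "cover_B", "cover_C"]
print(pick_carousel_cover(covers, random.Random(0)) in covers)  # → True
```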
Fig. 6 is a structural schematic diagram of an information processing device provided in an embodiment of the present invention; as shown in Fig. 6, the device includes: a first processing module 301, a second processing module 302, and a third processing module 303; wherein,
the first processing module 301 is configured to obtain first video data;
the second processing module 302 is configured to, in a first mode, determine at least one frame of image data from the multiple frames of image data included in the first video data obtained by the first processing module 301;
the third processing module 303 is configured to generate cover data according to the at least one frame of image data determined by the second processing module 302, and merge the cover data with the first video data to generate second video data.
Specifically, the second processing module 302 is specifically configured to obtain a first input operation and, based on the first input operation, determine at least one frame of image data from the multiple frames of image data included in the first video data; or,
analyze the first video data to obtain a target object in the first video data;
obtain, from the multiple frames of image data included in the first video data, at least part of the frames of image data containing the target object, and generate an alternative cover data set;
determine at least one frame of image data from the alternative cover data set.
Specifically, the first processing module 301 is further configured to obtain a face image of the user in real time and determine the user's expression features from the face image; determine, according to a saved correspondence between expression features and image styles, the image style corresponding to the expression features; and perform image processing on the at least one frame of image data according to the image style.
The third processing module 303 is configured to generate the cover data according to the at least one frame of image data after the image processing performed by the first processing module 301.
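The saved correspondence between expression features and image styles can be modelled as a lookup table. The specific expressions, style names, and the placeholder `apply_style` step below are all hypothetical; the patent does not enumerate them.

```python
# Hypothetical saved correspondence between expression features and styles.
EXPRESSION_TO_STYLE = {
    "happy": "bright",
    "sad": "monochrome",
    "neutral": "natural",
}

def style_for_expression(expression, default="natural"):
    """Look up the image style corresponding to a detected expression."""
    return EXPRESSION_TO_STYLE.get(expression, default)

def apply_style(frame, style):
    """Placeholder for the real per-frame image-processing step."""
    return {**frame, "style": style}

styled = apply_style({"id": 7}, style_for_expression("happy"))
```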
Specifically, the third processing module 303 is configured to generate initial cover data according to the at least one frame of image data and, in a second mode, to perform image processing on the initial cover data to generate the cover data. The image processing includes at least one of: adding material data to the initial cover data, and changing display property parameters of the initial cover data.
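The two second-mode edits (adding material data, changing display property parameters) can be sketched on a dict-based cover model. The field names (`materials`, `brightness`) are illustrative assumptions, not part of the patent.

```python
def edit_cover(initial_cover, materials=(), display_properties=None):
    """Second-mode editing: add material data to the initial cover and/or
    change its display property parameters (e.g. brightness)."""
    cover = dict(initial_cover)
    # Add material data (stickers, text overlays, ...) to the cover.
    cover["materials"] = list(cover.get("materials", [])) + list(materials)
    # Change display property parameters.
    if display_properties:
        cover.update(display_properties)
    return cover

edited = edit_cover({"frame": "f0", "brightness": 1.0},
                    materials=["sticker.png"],
                    display_properties={"brightness": 1.2})
```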
Specifically, the second processing module 302 is configured to determine multiple groups of frame image data from the multiple frames of image data included in the first video data, each group including at least one frame of image data.
The third processing module 303 is configured to generate cover data according to each group of frame image data in the multiple groups, obtaining multiple cover data.
Specifically, the third processing module 303 is configured to determine a sending object of the second video data, select target cover data from the multiple cover data according to an attribute of the sending object, and merge the target cover data with the first video data to generate the second video data.
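Generating one cover per frame group and then selecting a target cover by recipient attribute can be sketched as below. The preference table and the `sports_fan` attribute are hypothetical examples of how "the attribute of the sending object" might drive the choice.

```python
def generate_covers(frame_groups):
    """Generate one cover per group of frames, yielding multiple cover data."""
    return [{"cover_id": i, "frames": list(g)} for i, g in enumerate(frame_groups)]

def select_target_cover(covers, recipient_attribute, preference):
    """Select the cover preferred for the recipient's attribute,
    falling back to the first cover when no preference matches."""
    wanted = preference.get(recipient_attribute)
    for cover in covers:
        if cover["cover_id"] == wanted:
            return cover
    return covers[0]

covers = generate_covers([["f0"], ["f1", "f2"]])
# Hypothetical preference table keyed on a recipient attribute.
target = select_target_cover(covers, "sports_fan", {"sports_fan": 1})
```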
Specifically, the apparatus may further include a communication module.
The third processing module 303 is further configured to save the cover data into a cover data set, and to obtain the cover data associated with the first video data from the stored cover data set and merge that cover data with the first video data.
The communication module is configured to send the cover data to a server, and to obtain the cover data associated with the first video data from the server; the third processing module 303 is further configured to merge the obtained cover data with the first video data.
Specifically, the apparatus may further include a reminding module. The reminding module is configured to generate prompt information when no cover data associated with the first video data is stored in the cover data set and the communication module cannot connect to the server; the prompt information prompts the user to check the network and then request the server again to obtain the cover data associated with the first video data.
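The lookup order just described (local cover data set, then server, then a network prompt) can be sketched as a small fallback chain. The dict-based storage and the `ConnectionError` signal are modelling assumptions.

```python
def fetch_cover(video_id, local_cover_set, fetch_from_server):
    """Look up the cover associated with a video in the local cover data set;
    fall back to the server; if the server is unreachable, return a prompt
    asking the user to check the network and request the server again."""
    if video_id in local_cover_set:
        return local_cover_set[video_id]
    try:
        return fetch_from_server(video_id)
    except ConnectionError:
        return {"prompt": "Check the network, then request the server again."}

local = {"v1": {"cover": "c1"}}

def unreachable_server(_video_id):
    raise ConnectionError("server not reachable")

cached = fetch_cover("v1", local, unreachable_server)    # local hit
prompted = fetch_cover("v2", local, unreachable_server)  # both lookups fail
```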
It should be noted that when the information processing apparatus provided by the above embodiment performs information processing, the division into the above program modules is merely illustrative. In practical applications, the above processing may be assigned to different program modules as needed; that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the information processing apparatus provided by the above embodiment belongs to the same concept as the information processing method embodiments; for the specific implementation process, refer to the method embodiments, which will not be repeated here.
Fig. 7 is a structural schematic diagram of another information processing apparatus provided by an embodiment of the present invention. The information processing apparatus can be applied to a mobile terminal. As shown in Fig. 7, the apparatus 40 includes a processor 401 and a memory 402 for storing a computer program executable on the processor. When running the computer program, the processor 401 performs: obtaining first video data; in a first mode, determining at least one frame of image data from the multiple frames of image data included in the first video data; generating cover data according to the at least one frame of image data, and merging the cover data with the first video data to generate second video data.
In an embodiment, when running the computer program, the processor 401 further performs: obtaining a first input operation, and determining at least one frame of image data from the multiple frames of image data included in the first video data based on the first input operation; or alternatively, analyzing the first video data to obtain a target object in the first video data; obtaining, from the multiple frames of image data included in the first video data, at least some frames of image data containing the target object to generate an alternative cover data set; and determining the at least one frame of image data from the alternative cover data set.
In an embodiment, when running the computer program, the processor 401 further performs: obtaining a face image of the user in real time, and determining expression features of the user according to the face image; determining, according to a saved correspondence between expression features and image styles, the image style corresponding to the expression features; and performing image processing on the at least one frame of image data according to the image style. Generating cover data according to the at least one frame of image data then includes: generating the cover data according to the at least one frame of image data after the image processing.
In an embodiment, when running the computer program, the processor 401 further performs: generating initial cover data according to the at least one frame of image data; and, in a second mode, performing image processing on the initial cover data to generate the cover data, the image processing including at least one of: adding material data to the initial cover data, and changing display property parameters of the initial cover data.
In an embodiment, when running the computer program, the processor 401 further performs: determining multiple groups of frame image data from the multiple frames of image data included in the first video data, each group including at least one frame of image data; and generating cover data according to each group of frame image data in the multiple groups, obtaining multiple cover data.
In an embodiment, when running the computer program, the processor 401 further performs: determining a sending object of the second video data, selecting target cover data from the multiple cover data according to an attribute of the sending object, and merging the target cover data with the first video data to generate the second video data.
In an embodiment, when running the computer program, the processor 401 further performs: saving the cover data into a cover data set, and/or sending the cover data to a server; obtaining the cover data associated with the first video data from the stored cover data set and merging it with the first video data; or obtaining the cover data associated with the first video data from the server and merging it with the first video data.
It should be noted that the information processing apparatus provided by the above embodiment belongs to the same concept as the information processing method embodiments; for the specific implementation process, refer to the method embodiments, which will not be repeated here.
In practical applications, the apparatus 40 may further include at least one network interface 403. The various components of the information processing apparatus 40 are coupled by a bus system 404. It will be understood that the bus system 404 implements the connections and communication between these components. In addition to a data bus, the bus system 404 also includes a power bus, a control bus, and a status signal bus; for clarity of explanation, however, all buses are labeled as the bus system 404 in Fig. 7.
The number of processors 401 may be at least one.
The network interface 403 is used for wired or wireless communication between the information processing apparatus 40 and other devices.
The memory 402 in the embodiment of the present invention stores various types of data to support the operation of the information processing apparatus 40.
The methods disclosed in the embodiments of the present invention may be applied to, or implemented by, the processor 401. The processor 401 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above methods may be completed by an integrated logic circuit of hardware in the processor 401 or by instructions in the form of software. The processor 401 may be a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. The processor 401 can implement or execute the methods, steps, and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the methods disclosed in the embodiments of the present invention may be embodied directly as being executed and completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. A software module may be located in a storage medium; the storage medium is located in the memory 402, and the processor 401 reads the information in the memory 402 and completes the steps of the foregoing methods in combination with its hardware.
In an exemplary embodiment, the information processing apparatus 40 may be implemented by one or more application-specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field-programmable gate arrays (FPGA, Field-Programmable Gate Array), general-purpose processors, controllers, microcontrollers (MCU, Micro Controller Unit), microprocessors (Microprocessor), or other electronic components, for executing the foregoing methods.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored. When the computer program is run by a processor, it performs: obtaining first video data; in a first mode, determining at least one frame of image data from the multiple frames of image data included in the first video data; generating cover data according to the at least one frame of image data, and merging the cover data with the first video data to generate second video data.
In an embodiment, when the computer program is run by a processor, it further performs: obtaining a first input operation, and determining at least one frame of image data from the multiple frames of image data included in the first video data based on the first input operation; or alternatively, analyzing the first video data to obtain a target object in the first video data; obtaining, from the multiple frames of image data included in the first video data, at least some frames of image data containing the target object to generate an alternative cover data set; and determining the at least one frame of image data from the alternative cover data set.
In an embodiment, when the computer program is run by a processor, it further performs: obtaining a face image of the user in real time, and determining expression features of the user according to the face image; determining, according to a saved correspondence between expression features and image styles, the image style corresponding to the expression features; and performing image processing on the at least one frame of image data according to the image style. Generating cover data according to the at least one frame of image data then includes: generating the cover data according to the at least one frame of image data after the image processing.
In an embodiment, when the computer program is run by a processor, it further performs: generating initial cover data according to the at least one frame of image data; and, in a second mode, performing image processing on the initial cover data to generate the cover data, the image processing including at least one of: adding material data to the initial cover data, and changing display property parameters of the initial cover data.
In an embodiment, when the computer program is run by a processor, it further performs: determining multiple groups of frame image data from the multiple frames of image data included in the first video data, each group including at least one frame of image data; and generating cover data according to each group of frame image data in the multiple groups, obtaining multiple cover data.
In an embodiment, when the computer program is run by a processor, it further performs: determining a sending object of the second video data, selecting target cover data from the multiple cover data according to an attribute of the sending object, and merging the target cover data with the first video data to generate the second video data.
In an embodiment, when the computer program is run by a processor, it further performs: saving the cover data into a cover data set, and/or sending the cover data to a server; obtaining the cover data associated with the first video data from the stored cover data set and merging it with the first video data; or obtaining the cover data associated with the first video data from the server and merging it with the first video data.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of units is only a logical functional division, and there may be other division manners in actual implementation, such as: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated in one processing unit, each unit may serve separately as one unit, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods of the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the protection scope of the present invention. Any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (14)
1. An information processing method, characterized in that the method comprises:
obtaining first video data;
in a first mode, determining at least one frame of image data from multiple frames of image data included in the first video data; and
generating cover data according to the at least one frame of image data, and merging the cover data with the first video data to generate second video data.
2. The method according to claim 1, characterized in that determining the at least one frame of image data from the multiple frames of image data included in the first video data comprises:
obtaining a first input operation, and determining the at least one frame of image data from the multiple frames of image data included in the first video data based on the first input operation; or
analyzing the first video data to obtain a target object in the first video data;
obtaining, from the multiple frames of image data included in the first video data, at least some frames of image data containing the target object to generate an alternative cover data set; and
determining the at least one frame of image data from the alternative cover data set.
3. The method according to claim 1, characterized in that the method further comprises: obtaining a face image of a user in real time, and determining expression features of the user according to the face image; determining, according to a saved correspondence between expression features and image styles, the image style corresponding to the expression features; and performing image processing on the at least one frame of image data according to the image style;
wherein generating the cover data according to the at least one frame of image data comprises: generating the cover data according to the at least one frame of image data after the image processing.
4. The method according to any one of claims 1 to 3, characterized in that generating the cover data according to the at least one frame of image data comprises:
generating initial cover data according to the at least one frame of image data; and
in a second mode, performing image processing on the initial cover data to generate the cover data, the image processing comprising at least one of: adding material data to the initial cover data, and changing display property parameters of the initial cover data.
5. The method according to any one of claims 1 to 3, characterized in that determining the at least one frame of image data from the multiple frames of image data included in the first video data comprises:
determining multiple groups of frame image data from the multiple frames of image data included in the first video data, each group of frame image data including at least one frame of image data;
and generating the cover data according to the at least one frame of image data comprises:
generating cover data according to each group of frame image data in the multiple groups of frame image data, obtaining multiple cover data.
6. The method according to claim 5, characterized in that merging the cover data with the first video data to generate the second video data comprises:
determining a sending object of the second video data, and selecting target cover data from the multiple cover data according to an attribute of the sending object; and
merging the target cover data with the first video data to generate the second video data.
7. An information processing apparatus, characterized in that the apparatus comprises: a first processing module, a second processing module, and a third processing module; wherein
the first processing module is configured to obtain first video data;
the second processing module is configured to, in a first mode, determine at least one frame of image data from multiple frames of image data included in the first video data obtained by the first processing module; and
the third processing module is configured to generate cover data according to the at least one frame of image data determined by the second processing module, and to merge the cover data with the first video data to generate second video data.
8. The apparatus according to claim 7, characterized in that the second processing module is configured to obtain a first input operation and, based on the first input operation, determine the at least one frame of image data from the multiple frames of image data included in the first video data; or alternatively, to:
analyze the first video data to obtain a target object in the first video data;
obtain, from the multiple frames of image data included in the first video data, at least some frames of image data containing the target object to generate an alternative cover data set; and
determine the at least one frame of image data from the alternative cover data set.
9. The apparatus according to claim 7, characterized in that the first processing module is further configured to obtain a face image of a user in real time, determine expression features of the user according to the face image, determine, according to a saved correspondence between expression features and image styles, the image style corresponding to the expression features, and perform image processing on the at least one frame of image data according to the image style;
and the third processing module is configured to generate the cover data according to the at least one frame of image data after the image processing performed by the first processing module.
10. The apparatus according to any one of claims 7 to 9, characterized in that the third processing module is configured to generate initial cover data according to the at least one frame of image data and, in a second mode, to perform image processing on the initial cover data to generate the cover data; the image processing comprises at least one of: adding material data to the initial cover data, and changing display property parameters of the initial cover data.
11. The apparatus according to any one of claims 7 to 9, characterized in that the second processing module is configured to determine multiple groups of frame image data from the multiple frames of image data included in the first video data, each group of frame image data including at least one frame of image data;
and the third processing module is configured to generate cover data according to each group of frame image data in the multiple groups of frame image data, obtaining multiple cover data.
12. The apparatus according to claim 11, characterized in that the third processing module is configured to determine a sending object of the second video data, select target cover data from the multiple cover data according to an attribute of the sending object, and merge the target cover data with the first video data to generate the second video data.
13. An information processing apparatus, characterized in that the apparatus comprises: a processor and a memory for storing a computer program executable on the processor; wherein
the processor is configured to perform the steps of the method according to any one of claims 1 to 6 when running the computer program.
14. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810688665.4A CN108875670A (en) | 2018-06-28 | 2018-06-28 | Information processing method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108875670A true CN108875670A (en) | 2018-11-23 |
Family
ID=64296573
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810688665.4A Pending CN108875670A (en) | 2018-06-28 | 2018-06-28 | Information processing method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108875670A (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106210608A (en) * | 2015-04-29 | 2016-12-07 | 中国电信股份有限公司 | The methods, devices and systems of the dynamic front cover in position, control point are realized based on mobile detection |
CN105159639A (en) * | 2015-08-21 | 2015-12-16 | 小米科技有限责任公司 | Audio cover displaying method and apparatus |
CN105931178A (en) * | 2016-04-15 | 2016-09-07 | 乐视控股(北京)有限公司 | Image processing method and device |
CN106658141A (en) * | 2016-11-29 | 2017-05-10 | 维沃移动通信有限公司 | Video processing method and mobile terminal |
CN106686402A (en) * | 2016-11-29 | 2017-05-17 | 维沃移动通信有限公司 | Video processing method and mobile terminal |
CN106599208A (en) * | 2016-12-15 | 2017-04-26 | 腾讯科技(深圳)有限公司 | Content sharing method and user client |
CN106998477A (en) * | 2017-04-05 | 2017-08-01 | 腾讯科技(深圳)有限公司 | The front cover display methods and device of live video |
CN107577768A (en) * | 2017-09-05 | 2018-01-12 | 广州阿里巴巴文学信息技术有限公司 | Import processing method, device and the intelligent terminal of cover document |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110045958A (en) * | 2019-04-17 | 2019-07-23 | 腾讯科技(深圳)有限公司 | Data texturing generation method, device, storage medium and equipment |
CN111918131A (en) * | 2020-08-18 | 2020-11-10 | 北京达佳互联信息技术有限公司 | Video generation method and device |
WO2022037348A1 (en) * | 2020-08-18 | 2022-02-24 | 北京达佳互联信息技术有限公司 | Video generation method and apparatus |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181123 |