CN106101579A - Video splicing method and mobile terminal - Google Patents
Video splicing method and mobile terminal Download PDF Info
- Publication number
- CN106101579A (application CN201610614655.7A / CN201610614655A)
- Authority
- CN
- China
- Prior art keywords
- video
- image frame
- target
- main body
- background
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/268—Signal distribution or switching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The invention provides a video splicing method and a mobile terminal. The method includes: obtaining at least two target video files; decoding each of the at least two target video files and restoring them to initial image frames; for each target video file of the at least two target video files, determining the video subject and the video background in the initial image frames; and performing video splicing based on the video subjects and video backgrounds in at least two of the initial image frames to generate a target video, where the at least two initial image frames are taken from different target video files. The scheme allows subjects on different timelines to appear in the same scene at the same time, which greatly increases the entertainment value of a video and satisfies the diverse demands of users.
Description
Technical field
The present invention relates to the field of communication technology, and in particular to a video splicing method and a mobile terminal.
Background art
With the development of electronic devices, more and more users shoot video with devices such as mobile phones and tablet computers. In many cases, a user wants to splice several captured videos into a single video. Current video splicing, however, simply concatenates several short videos into one long video, increasing only the video's length. For example, a video Video1 with playing duration t1 and a video Video2 with playing duration t2 are spliced into a new video with playing duration t1+t2. This existing method only splices videos end to end by duration and does not splice any other parts of the video files; the resulting splicing effect is monotonous and cannot satisfy the growing user demand for video sharing and entertainment.
Summary of the invention
Embodiments of the present invention provide a video splicing method and a mobile terminal, to solve the problem that the existing method does not splice parts of the video files other than their durations, so that the resulting splicing effect is monotonous and cannot satisfy user demands.
In one aspect, an embodiment of the present invention provides a video splicing method, including:
obtaining at least two target video files;
decoding each of the at least two target video files and restoring them to initial image frames;
for each target video file of the at least two target video files, determining the video subject and the video background in the initial image frames;
performing video splicing based on the video subjects and video backgrounds in at least two of the initial image frames to generate a target video;
where the at least two initial image frames are taken from different target video files.
In another aspect, an embodiment of the present invention further provides a mobile terminal, including:
an acquisition module, configured to obtain at least two target video files;
a decoding and restoring module, configured to decode each of the at least two target video files obtained by the acquisition module and restore them to initial image frames;
a first determining module, configured to determine, for each target video file of the at least two target video files, the video subject and the video background in the initial image frames restored by the decoding and restoring module;
a splicing module, configured to perform video splicing based on the video subjects and video backgrounds in at least two of the initial image frames determined by the first determining module, to generate a target video, where the at least two initial image frames are taken from different target video files.
In this way, each of the at least two obtained target video files is decoded and restored to its corresponding initial image frames; the video subject and video background in the initial image frames of each target video file are determined; and, based on the video subjects and video backgrounds in at least two initial image frames, the video subjects and video backgrounds identified in different video files are spliced to finally form the target video. This process can change the environment and background in which a video subject appears, and merge the video subjects identified in different video files into the same background. By fusing the video subjects of different videos, subjects on different timelines can appear in the same scene at the same time, achieving effects similar to virtual reality, such as a dialogue with oneself, which greatly increases the entertainment value of a video and satisfies the diverse demands of users.
Brief description of the drawings
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flowchart of the video splicing method in the first embodiment of the present invention;
Fig. 2 is a flowchart of the video splicing method in the second embodiment of the present invention;
Fig. 3 is a flowchart, in the second embodiment of the present invention, of synthesizing the first initial image frame and the second initial image frame into a target image frame;
Fig. 4 is a first structural block diagram of the mobile terminal in the third embodiment of the present invention;
Fig. 5 is a second structural block diagram of the mobile terminal in the third embodiment of the present invention;
Fig. 6 is a structural block diagram of the mobile terminal in the fourth embodiment of the present invention;
Fig. 7 is a structural block diagram of the mobile terminal in the fifth embodiment of the present invention.
Detailed description of the invention
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the accompanying drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided so that the present disclosure will be thoroughly understood, and so that the scope of the present disclosure can be fully conveyed to those skilled in the art.
First embodiment
A video splicing method disclosed in this embodiment, as shown in Fig. 1, includes:
Step 101: obtain at least two target video files.
In this step, the at least two selected target video files define the objects to be spliced in the subsequent video splicing process.
Step 102: decode each of the at least two target video files and restore them to initial image frames.
Specifically, a video file is composed of images played one after another in succession; each image corresponds to one image frame in the video file, and one video file contains multiple image frames. In this step, each of the at least two video files is decoded and restored to the initial image frames that constitute it, so that the contents of the multiple video files can be spliced.
Step 103: for each target video file of the at least two target video files, determine the video subject and the video background in the initial image frames.
For a piece of video, the video subject is the main expressive object in the video picture. In this step, each target video file of the at least two target video files is parsed and analyzed, and the video subject and the video background in the initial image frames of each target video file are determined. The video subject and video background in an initial image frame may be determined by performing contour recognition on the objects in each frame image, by judging how much of the picture each element occupies, or by identifying the moving subject in the video file with a frame-difference method, thereby dividing the picture into subject and background.
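The frame-difference method mentioned above can be illustrated with a minimal NumPy sketch (an assumption for illustration, not the patent's exact algorithm): pixels that change between consecutive frames are taken to belong to the moving subject, and the rest to the static background.

```python
import numpy as np

def subject_mask(prev_frame, frame, threshold=25):
    """Rough subject/background split between two consecutive grayscale
    frames: pixels whose intensity changed by more than the threshold are
    marked as the moving subject; the rest are the static background."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold  # True where the moving subject is

# Toy example: a bright "subject" square moves one pixel to the right.
prev_frame = np.zeros((8, 8), dtype=np.uint8)
prev_frame[2:5, 2:5] = 200
frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:5, 3:6] = 200

mask = subject_mask(prev_frame, frame)
```

The mask is True along the square's trailing and leading edges (where the picture changed) and False in the unchanged background, which is enough to localize the moving subject across frames.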
Step 104: perform video splicing based on the video subjects and video backgrounds in at least two initial image frames to generate a target video.
Here, the at least two initial image frames are taken from different target video files.
In this step, the objects being spliced are the video subjects and video backgrounds in the at least two initial image frames. The video splicing process bonds the spliced video subjects and video backgrounds together into one video, and the target video is generated once the splicing is complete. Each initial image frame corresponds to one target video file, so that video subjects and backgrounds from different video files are merged. This process lets subjects on different timelines appear in the same scene at the same time: for example, you at age twenty and your baby of today, two "children", appear in Disneyland playing with water at the same moment. Alternatively, by choosing a different fusion background, the environment and background of a video can be replaced, which increases the entertainment value of the video and satisfies the diverse demands of users.
Further, before the step of performing video splicing based on the video subjects and video backgrounds in at least two initial image frames to generate the target video, the method also includes:
receiving a user's selection operation on the video subjects; and, based on the selection operation, determining the video subjects selected by the user as the video subjects in the at least two initial image frames.
Specifically, the video subjects selected by the user may be chosen from the previously determined video subjects in the initial image frames. Before splicing, the user may be allowed to choose among the video subjects in the at least two initial image frames; by obtaining the user's selection instruction, the selected video subjects are taken as the video subjects in the at least two initial image frames, and the splicing is then carried out. This is applicable when an initial image frame of a target video file contains multiple video subjects: the video subjects to be merged during splicing are determined according to the user's selection, which is more flexible and satisfies the diverse demands of users.
In the video splicing method of this embodiment of the present invention, each of the at least two obtained target video files is decoded and restored to its corresponding initial image frames; the video subject and video background in the initial image frames of each target video file are determined; and, based on the video subjects and video backgrounds in at least two initial image frames, the video subjects and video backgrounds identified in different video files are spliced to finally form the target video. This process can change the environment and background in which a video subject appears and merge the subjects identified in different video files into the same background. By fusing the video subjects of different videos, subjects on different timelines can appear in the same scene at the same time, achieving effects similar to virtual reality, such as a dialogue with oneself, which greatly increases the entertainment value of a video.
Second embodiment
A video splicing method disclosed in this embodiment, as shown in Fig. 2, includes:
Step 201: obtain at least two target video files.
In this step, the at least two selected target video files define the objects to be spliced in the subsequent video splicing process.
Step 202: decode each of the at least two target video files and restore them to initial image frames.
Specifically, a video file is composed of images played one after another in succession; each image corresponds to one image frame in the video file, and one video file contains multiple image frames. In this step, each of the at least two video files is decoded and restored to the initial image frames that constitute it, so that the contents of the multiple video files can be spliced.
Step 203: obtain the first initial image frame of the first target video file and the second initial image frame of the second target video file, respectively.
Here, the first initial image frame and the second initial image frame are taken from different target video files, so that the different video files can subsequently be spliced and merged.
Step 204: perform image content recognition on the first initial image frame and the second initial image frame respectively, and determine the first video subject and the first video background in the first initial image frame, and the second video subject and the second video background in the second initial image frame.
After the first initial image frame and the second initial image frame are obtained, image content recognition is performed on both: the first video subject and first video background in the first initial image frame of the first target video file are determined, and the second video subject and second video background in the second initial image frame of the second target video file are determined.
Step 205: compare the first video background with the second video background to obtain a comparison result.
For the at least two video files to be spliced, the video backgrounds identified from the initial image frames of different video files may be identical or different. Within one piece of video, the background of a given video file ordinarily belongs to a single background theme; for example, a video shot by the sea has an ocean-and-beach background theme. When comparing and matching video backgrounds, the color composition, contour composition, and so on of the video backgrounds can be analyzed to see whether the backgrounds identified in different video files are the same. For example, if the subjects in different videos are all by the sea, the color composition of each background is predominantly blue, with locally distributed white for the clouds and yellow for the beach; such color compositions can be fuzzily matched to judge whether the video backgrounds of the different video files are the same.
Specifically, when the videos differ in frame count or duration, the comparison may take a chosen video frame in each video file as the start frame and then, in a set order, for example the playing order of the video frames in each video, compare and match the backgrounds of the images corresponding to the video frames of the different video files.
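The fuzzy matching of color compositions described above could be sketched as follows (a hypothetical illustration: the patent does not prescribe a particular matching formula). Here each background is summarized by a coarse color histogram, and two backgrounds are considered the same when their histogram intersection is high enough.

```python
import numpy as np

def backgrounds_match(bg_a, bg_b, bins=8, threshold=0.9):
    """Fuzzy comparison of two background images by color composition:
    build a coarse color histogram of each and compare them by
    histogram intersection."""
    def hist(img):
        h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                              range=((0, 256),) * 3)
        return h.ravel() / h.sum()
    ha, hb = hist(bg_a), hist(bg_b)
    return np.minimum(ha, hb).sum() >= threshold

# Two mostly-blue "seaside" backgrounds should fuzzily match...
sea_a = np.full((16, 16, 3), (200, 120, 30), dtype=np.uint8)
sea_b = np.full((16, 16, 3), (205, 122, 28), dtype=np.uint8)
# ...while a green "forest" background should not.
forest = np.full((16, 16, 3), (30, 160, 40), dtype=np.uint8)
```

The coarse binning is what makes the match "fuzzy": small color differences fall into the same histogram bin, so only genuinely different background themes fail the comparison.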
Step 206: determine the target video background based on the comparison result.
After comparison, the video backgrounds identified from the different video files may turn out to be the same or different, and the target video background must be decided according to the comparison result. According to the determined target fusion background, the video subjects identified in the different video files are spliced and merged with the target video background, so that subjects identified in different video files are merged into the same video background.
Step 207: based on the first video subject, the second video subject and the target video background, synthesize the first initial image frame and the second initial image frame into a target image frame.
Before the target video is generated, the target image frames corresponding to it must first be obtained, since the target video is composed of these target image frames. The first video subject is identified from the first initial image frame, and the second video subject from the second initial image frame; the target image frame is determined according to the comparison result of the first video background of the first initial image frame and the second video background of the second initial image frame. From these contents, image synthesis of the first initial image frame and the second initial image frame is carried out to generate the new target image frame.
Step 208: perform video encoding on the target image frames to generate the target video.
After the picture subjects identified from the images corresponding to the image frames of the different video files have been spliced with the determined fusion background, forming new images and the new image frames corresponding to them, the new image frames must be re-encoded to form the final new video file. The target video includes the first video subject and the second video subject, and the video background of the target video is the target video background.
Further, preferred implementations of the related steps of the above method are described below.
The step of determining the target video background based on the comparison result includes:
if the comparison result is that the first video background and the second video background are the same, determining the first video background or the second video background as the target video background.
Specifically, the comparison of whether the video backgrounds are the same can be implemented by fuzzy matching. When the comparison result is that the video backgrounds identified in the different video files are the same, one of these identical video backgrounds is directly determined as the target video background; with the background unchanged, the video subjects in the different video files are cloned, and the subjects identified in the different videos to be merged are placed into this common target video background, so that multiple video subjects appear in one video. One can thus play or converse with oneself, or achieve amusing anachronistic effects such as Guan Gong fighting Qin Qiong.
Correspondingly, the step of determining the target video background based on the comparison result includes:
if the comparison result is that the first video background and the second video background are different, receiving a selection operation on the video backgrounds, and determining the video background corresponding to the selection operation as the target video background.
If the comparison finds that the video backgrounds identified in the two video frames are different, the target video background must be determined so that the subsequent merging can proceed. Specifically, a video background can be obtained as the target video background through the user's selection operation, and the video subjects cloned from the other video files are implanted into it; for example, a person swimming in a community pool can be cloned onto a beautiful beach in the Maldives.
Optionally, the video background corresponding to the selection operation includes the first video background, the second video background, or a preset video background. That is, the selected background may be the first video background, the second video background, or a video background provided by some other chosen video file, giving diversity of implementation and flexibility of selection.
Specifically, the step of synthesizing the first initial image frame and the second initial image frame into a target image frame based on the first video subject, the second video subject and the target video background, as shown in Fig. 3, includes:
Step 301: determine, among the first initial image frame and the second initial image frame, the target initial image frame in which the target video background is located.
After image content recognition has been performed on the first initial image frame and the second initial image frame, and one of the identified video backgrounds has been determined as the target video background, the initial image frame to which the target video background belongs must first be determined before image synthesis is performed to generate the target image frame; that initial image frame is determined as the target initial image frame.
Step 302: determine, among the first video subject and the second video subject, the target video subject that does not belong to the target initial image frame.
After the target initial image frame has been determined, it is necessary to judge which of the first video subject and the second video subject does not belong to the target initial image frame; that video subject is determined as the target video subject. The target video subject is the video subject that does not belong to the same initial image frame as the target video background.
Step 303: for each frame image in the target initial image frames, replace the pixels at the target position in the image with the pixels of the target video subject.
During the concrete video splicing, the video subjects from the different video files need to be fused into the same video background. This fusion can be realized by pixel replacement: in the target initial image frames in which the target video background is located, the pixels at the target position of each frame image are replaced with the pixels that make up the target video subject, so that the subjects identified in the different video files are spliced into the same video background and the fusion of video subjects is achieved.
Step 304: when all the images in the target initial image frames have completed pixel replacement, generate the target image frames that include the first video subject and the second video subject and take the target video background as the video background.
Once every frame image in the target initial image frames has completed the pixel replacement process of step 303, target image frames into which the other video subject has been merged are obtained on the basis of the target initial image frames. The target video composed of these target image frames completes the video splicing process, increasing the entertainment value of the video and satisfying user demands.
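The pixel replacement of steps 303-304 can be sketched with NumPy masking (an illustrative sketch; the function name, mask representation, and toy data are assumptions, not the patent's specification):

```python
import numpy as np

def implant_subject(background_frame, subject_pixels, subject_mask, position):
    """Pixel-replacement compositing: copy the target video subject's pixels
    into the target initial image frame at the target position, leaving all
    other pixels of the background frame untouched."""
    out = background_frame.copy()
    top, left = position
    h, w = subject_mask.shape
    region = out[top:top + h, left:left + w]
    region[subject_mask] = subject_pixels[subject_mask]  # replace masked pixels
    return out

# Toy frames: a 6x6 gray background and a 2x2 bright subject with an L-shaped mask.
background = np.full((6, 6, 3), 100, dtype=np.uint8)
subject = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[True, True], [True, False]])

frame = implant_subject(background, subject, mask, position=(2, 2))
```

Repeating this replacement for every frame image in the target initial image frames yields the target image frames of step 304, which are then encoded into the target video.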
In the video splicing method of this embodiment of the present invention, the multiple video files to be spliced are decoded and restored to their corresponding image frames; the picture subjects and backgrounds in the images corresponding to the image frames are identified; the image frames of different video files are combined, and the backgrounds identified within each combination are compared and matched; and the picture subjects of the images corresponding to the image frames of the different video files in each combination are spliced with the determined fusion background, so that the picture subjects identified in the different video files are merged into the same background, forming new images and the new image frames corresponding to them. The new image frames are then encoded to finally form the new video file. This process merges the subjects of multiple videos into one video and can replace the environment and background in a video; by fusing the picture subjects of different videos, subjects on different timelines can appear in the same scene at the same time, achieving effects similar to virtual reality, such as a dialogue with oneself, which greatly increases the entertainment value of the video and satisfies the diverse demands of users.
Third embodiment
A mobile terminal disclosed in this embodiment, as shown in Fig. 4 and Fig. 5, includes: an acquisition module 401, a decoding and restoring module 402, a first determining module 403 and a splicing module 404.
The acquisition module 401 is configured to obtain at least two target video files.
The decoding and restoring module 402 is configured to decode each of the at least two target video files obtained by the acquisition module 401 and restore them to initial image frames.
The first determining module 403 is configured to determine, for each target video file of the at least two target video files, the video subject and the video background in the initial image frames restored by the decoding and restoring module 402.
The splicing module 404 is configured to perform video splicing based on the video subjects and video backgrounds in at least two of the initial image frames determined by the first determining module 403, to generate a target video, where the at least two initial image frames are taken from different target video files.
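The chaining of the modules of Figs. 4-5 could be sketched as follows (a hypothetical structural illustration: the class and parameter names are invented for clarity, and each module is stubbed with a dummy callable rather than real video processing):

```python
class VideoSplicer:
    """Chains the four claimed modules: acquire -> decode -> determine -> splice."""
    def __init__(self, acquire, decode, determine, splice):
        self.acquire = acquire      # acquisition module 401
        self.decode = decode        # decoding and restoring module 402
        self.determine = determine  # first determining module 403
        self.splice = splice        # splicing module 404

    def run(self):
        files = self.acquire()
        frames = {f: self.decode(f) for f in files}
        parsed = {f: self.determine(frames[f]) for f in files}
        return self.splice(parsed)

# Stub usage with dummy callables standing in for the real processing.
splicer = VideoSplicer(
    acquire=lambda: ["a.mp4", "b.mp4"],
    decode=lambda f: [f + ":frame0"],
    determine=lambda frames: {"subject": frames, "background": "sea"},
    splice=lambda parsed: sorted(parsed),
)
result = splicer.run()
```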
The first determining module 403 includes: an acquisition submodule 4031 and a first determining submodule 4032.
The acquisition submodule 4031 is configured to obtain a first initial image frame of a first target video file and a second initial image frame of a second target video file, respectively.
The first determining submodule 4032 is configured to perform image content recognition on the first initial image frame and the second initial image frame obtained by the acquisition submodule 4031, determining a first video subject and a first video background in the first initial image frame, and a second video subject and a second video background in the second initial image frame.
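The patent does not specify how the image content recognition is implemented; in practice it could be any foreground segmentation method. Purely as a toy illustration (all names and the background-difference heuristic are assumptions, not part of the disclosure), a frame could be split into subject and background by comparing it against an estimated static background:

```python
import numpy as np

def split_subject_and_background(frame, background_estimate, threshold=30):
    """Label pixels that deviate strongly from an estimated static
    background as the video subject; all remaining pixels form the
    video background. Returns (subject_mask, background_mask)."""
    diff = np.abs(frame.astype(np.int16) - background_estimate.astype(np.int16))
    subject_mask = diff.max(axis=-1) > threshold
    return subject_mask, ~subject_mask

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = [200, 200, 200]                 # one bright "subject" pixel
bg_est = np.zeros((2, 2, 3), dtype=np.uint8)  # estimated empty background
subj, bg = split_subject_and_background(frame, bg_est)
```

A production implementation would more likely use a learned segmentation model or a statistical background subtractor, but the output contract (a subject mask and a background mask per frame) is the same.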
The splicing module 404 includes: a comparison submodule 4041, a second determining submodule 4042, a synthesis submodule 4043, and an encoding submodule 4044.
The comparison submodule 4041 is configured to compare the first video background with the second video background to obtain a comparison result.
The second determining submodule 4042 is configured to determine a target video background based on the comparison result obtained by the comparison submodule 4041.
The synthesis submodule 4043 is configured to perform image synthesis on the first initial image frame and the second initial image frame based on the first video subject, the second video subject, and the target video background determined by the second determining submodule 4042, generating a target image frame.
The encoding submodule 4044 is configured to perform video encoding on the target image frame obtained by the synthesis submodule 4043, generating the target video; wherein the target video includes the first video subject and the second video subject, and the video background of the target video is the target video background.
The second determining submodule 4042 includes: a first determining unit 40421.
The first determining unit 40421 is configured to, if the comparison result is that the first video background and the second video background are identical, determine the first video background or the second video background as the target video background.
The second determining submodule 4042 further includes: a receiving unit 40422 and a second determining unit 40423.
The receiving unit 40422 is configured to receive a selection operation on a video background if the comparison result is that the first video background and the second video background are different.
The second determining unit 40423 is configured to determine the video background corresponding to the selection operation received by the receiving unit 40422 as the target video background.
The video background corresponding to the selection operation includes the first video background, the second video background, or a preset video background.
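The branch logic of units 40421 through 40423 can be sketched as follows; this is an illustrative reading of the patent text, and the `user_selection` parameter is a hypothetical stand-in for the received selection operation:

```python
import numpy as np

def choose_target_background(first_bg, second_bg, user_selection=None):
    """Return the target video background.

    If the two backgrounds compare as identical, either one may serve
    as the target background; otherwise the background named by the
    user's selection operation (the first background, the second
    background, or a preset background) is used instead.
    """
    if np.array_equal(first_bg, second_bg):
        return first_bg
    if user_selection is None:
        raise ValueError("backgrounds differ: a selection operation is required")
    return user_selection

bg_a = np.zeros((2, 2, 3), dtype=np.uint8)
bg_b = np.zeros((2, 2, 3), dtype=np.uint8)
same = choose_target_background(bg_a, bg_b)              # identical case
preset = np.full((2, 2, 3), 9, dtype=np.uint8)
chosen = choose_target_background(bg_a, preset + 1, user_selection=preset)
```

A real terminal would compare backgrounds with some tolerance (scene similarity rather than exact pixel equality); strict `array_equal` is used here only to keep the sketch short.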
The synthesis submodule 4043 includes: a third determining unit 40431, a fourth determining unit 40432, a replacement unit 40433, and a generation unit 40434.
The third determining unit 40431 is configured to determine, of the first initial image frame and the second initial image frame, the target initial image frame in which the target video background is located.
The fourth determining unit 40432 is configured to determine, of the first video subject and the second video subject, the target video subject that does not belong to the target initial image frame determined by the third determining unit 40431.
The replacement unit 40433 is configured to, for each frame of image in the target initial image frames, replace the pixels at the target position in the image with the pixels of the target video subject determined by the fourth determining unit 40432.
The generation unit 40434 is configured to, when all images in the target initial image frames have completed pixel replacement, generate the target image frames that contain the first video subject and the second video subject and take the target video background as their video background.
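Taken together, units 40431 through 40434 amount to a per-frame pixel-replacement loop. A minimal sketch over already-segmented frame sequences (all names are illustrative; the masks are assumed to come from the earlier recognition step):

```python
import numpy as np

def generate_target_frames(target_frames, subject_frames, subject_masks):
    """For every frame of the target initial image frames, replace the
    pixels at the subject's target position with the target video
    subject's pixels, yielding the sequence of target image frames."""
    result = []
    for frame, subj, mask in zip(target_frames, subject_frames, subject_masks):
        out = frame.copy()
        out[mask] = subj[mask]   # pixel replacement at the target position
        result.append(out)
    return result

frames = [np.zeros((2, 2, 3), dtype=np.uint8) for _ in range(2)]
subjects = [np.full((2, 2, 3), 7, dtype=np.uint8) for _ in range(2)]
masks = [np.array([[True, False], [False, False]])] * 2
target = generate_target_frames(frames, subjects, masks)
```

Encoding the resulting list of frames would then produce the target video described by the encoding submodule 4044.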
The mobile terminal further includes: a receiving module 405 and a second determining module 406.
The receiving module 405 is configured to receive a user's selection operation on a video subject.
The second determining module 406 is configured to determine, based on the selection operation received by the receiving module 405, the video subject selected by the user as the video subject in the at least two initial image frames.
The mobile terminal of this embodiment of the present invention performs video decoding on each of the at least two obtained target video files, restoring them to corresponding initial image frames; determines the video subject and video background in the initial image frames of each target video file; and, based on the video subjects and video backgrounds in at least two initial image frames, splices the video subjects and video backgrounds identified in the different video files, ultimately forming a target video. This process can change the environment and background in which the video subject of a video file appears, integrating the video subjects identified in different video files into the same background. Through the fusion of the video subjects of different videos, subjects recorded on different time axes can appear in the same scene at the same time, achieving effects similar to virtual reality, such as a dialogue between oneself and oneself, which greatly increases the interest of the video.
Fourth Embodiment
As shown in Fig. 6, the mobile terminal 600 includes: at least one processor 601, a memory 602, at least one network interface 604, and a user interface 603. The components of the mobile terminal 600 are coupled together by a bus system 605. It can be understood that the bus system 605 is used to implement connection and communication between these components. In addition to a data bus, the bus system 605 includes a power bus, a control bus, and a status signal bus. For clarity of description, however, the various buses are all labeled as the bus system 605 in Fig. 6.
The user interface 603 may include a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch-sensitive pad, or a touch screen).
It can be understood that the memory 602 in this embodiment of the present invention may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which serves as an external cache. By way of example but not limitation, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (SynchLink DRAM, SLDRAM), and direct Rambus random access memory (Direct Rambus RAM, DRRAM). The memory 602 of the systems and methods described in this embodiment of the present invention is intended to include, but is not limited to, these and any other suitable types of memory.
In some implementations, the memory 602 stores the following elements: executable modules or data structures, or a subset or superset thereof, namely an operating system 6021 and an application program 6022.
The operating system 6021 contains various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks. The application program 6022 contains various application programs, such as a media player (Media Player) and a browser (Browser), for implementing various application services. A program implementing the method of this embodiment of the present invention may be included in the application program 6022.
In this embodiment of the present invention, by calling a program or instructions stored in the memory 602 (specifically, a program or instructions stored in the application program 6022), the processor 601 is configured to: obtain at least two target video files; perform video decoding on each of the at least two target video files, restoring them to initial image frames; determine, for each target video file of the at least two target video files, the video subject and the video background in the initial image frames; and perform video splicing based on the video subjects and video backgrounds in at least two of the initial image frames, generating a target video; wherein the at least two initial image frames are selected from different target video files.
The methods disclosed in the foregoing embodiments of the present invention may be applied to the processor 601, or implemented by the processor 601. The processor 601 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the foregoing methods may be completed by integrated logic circuits in hardware in the processor 601 or by instructions in the form of software. The processor 601 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, capable of implementing or executing the methods, steps, and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed with reference to the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 602; the processor 601 reads the information in the memory 602 and completes the steps of the foregoing methods in combination with its hardware.
It can be understood that the embodiments described herein may be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof. For hardware implementation, the processing units may be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof. For software implementation, the techniques described in the embodiments of the present invention may be implemented by modules (for example, procedures and functions) that perform the functions described in the embodiments of the present invention. The software code may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, the processor 601 is further configured to: obtain a first initial image frame of a first target video file and a second initial image frame of a second target video file, respectively; and perform image content recognition on the first initial image frame and the second initial image frame respectively, determining a first video subject and a first video background in the first initial image frame, and a second video subject and a second video background in the second initial image frame.
As another embodiment, the processor 601 is further configured to: compare the first video background with the second video background to obtain a comparison result; determine a target video background based on the comparison result; perform image synthesis on the first initial image frame and the second initial image frame based on the first video subject, the second video subject, and the target video background, generating a target image frame; and perform video encoding on the target image frame, generating the target video; wherein the target video includes the first video subject and the second video subject, and the video background of the target video is the target video background.
Optionally, as another embodiment, the processor 601 is further configured to: if the comparison result is that the first video background and the second video background are identical, determine the first video background or the second video background as the target video background.
Optionally, as another embodiment, the processor 601 is further configured to: if the comparison result is that the first video background and the second video background are different, receive a selection operation on a video background, and determine the video background corresponding to the selection operation as the target video background.
Optionally, as another embodiment, the video background corresponding to the selection operation includes the first video background, the second video background, or a preset video background.
Optionally, as another embodiment, the processor 601 is further configured to: determine, of the first initial image frame and the second initial image frame, the target initial image frame in which the target video background is located; determine, of the first video subject and the second video subject, the target video subject that does not belong to the target initial image frame; for each frame of image in the target initial image frames, replace the pixels at the target position in the image with the pixels of the target video subject; and when all images in the target initial image frames have completed pixel replacement, generate the target image frames that contain the first video subject and the second video subject and take the target video background as their video background.
Optionally, as another embodiment, the processor 601 is further configured to: receive a user's selection operation on a video subject; and, based on the selection operation, determine the video subject selected by the user as the video subject in the at least two initial image frames.
This mobile terminal is capable of implementing each process implemented by the terminal in the foregoing embodiments; to avoid repetition, details are not described here again.
The mobile terminal of this embodiment of the present invention performs video decoding on each of the at least two obtained target video files, restoring them to corresponding initial image frames; determines the video subject and video background in the initial image frames of each target video file; and, based on the video subjects and video backgrounds in at least two initial image frames, splices the video subjects and video backgrounds identified in the different video files, ultimately forming a target video. This process can change the environment and background in which the video subject of a video file appears, integrating the video subjects identified in different video files into the same background. Through the fusion of the video subjects of different videos, subjects recorded on different time axes can appear in the same scene at the same time, achieving effects similar to virtual reality, such as a dialogue between oneself and oneself, which greatly increases the interest of the video and meets the diverse demands of users.
Fifth Embodiment
As shown in Fig. 7, the mobile terminal 700 may be a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), an in-vehicle computer, or the like.
The mobile terminal 700 in Fig. 7 includes a radio frequency (Radio Frequency, RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a processor 760, an audio circuit 770, a WiFi (Wireless Fidelity) module 780, and a power supply 790.
The input unit 730 may be used to receive numeric or character information input by the user, and to generate signal inputs related to the user settings and function control of the mobile terminal 700. Specifically, in this embodiment of the present invention, the input unit 730 may include a touch panel 731. The touch panel 731, also referred to as a touch screen, can collect touch operations by the user on or near it (for example, operations performed by the user on the touch panel 731 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch panel 731 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 760, and can receive and execute commands sent by the processor 760. In addition, the touch panel 731 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 731, the input unit 730 may also include other input devices 732, which may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The display unit 740 may be used to display information input by the user, information provided to the user, and the various menu interfaces of the mobile terminal 700. The display unit 740 may include a display panel 741; optionally, the display panel 741 may be configured in the form of an LCD or an organic light-emitting diode (Organic Light-Emitting Diode, OLED).
It should be noted that the touch panel 731 may cover the display panel 741 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 760 to determine the type of the touch event, and the processor 760 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application program interface display area and a common control display area. The arrangement of these two display areas is not limited: they may be arranged one above the other, side by side, or in any other arrangement that distinguishes the two display areas. The application program interface display area may be used to display the interface of an application program. Each interface may contain interface elements such as the icon of at least one application program and/or widget desktop controls. The application program interface display area may also be an empty interface containing no content. The common control display area is used to display controls with a high usage rate, for example, application icons such as a settings button, interface numbers, a scroll bar, and a phone book icon.
The processor 760 is the control center of the mobile terminal 700. It uses various interfaces and lines to connect the various parts of the whole mobile phone, and executes the various functions of the mobile terminal 700 and processes data by running or executing software programs and/or modules stored in a first memory 721 and calling data stored in a second memory 722, thereby performing overall monitoring of the mobile terminal 700. Optionally, the processor 760 may include one or more processing units.
In this embodiment of the present invention, by calling the software programs and/or modules stored in the first memory 721 and/or the data stored in the second memory 722, the processor 760 is configured to: obtain at least two target video files; perform video decoding on each of the at least two target video files, restoring them to initial image frames; determine, for each target video file of the at least two target video files, the video subject and the video background in the initial image frames; and perform video splicing based on the video subjects and video backgrounds in at least two of the initial image frames, generating a target video; wherein the at least two initial image frames are selected from different target video files.
Optionally, as another embodiment, the processor 760 is further configured to obtain a first initial image frame of a first target video file and a second initial image frame of a second target video file, respectively; and to perform image content recognition on the first initial image frame and the second initial image frame respectively, determining a first video subject and a first video background in the first initial image frame, and a second video subject and a second video background in the second initial image frame.
As another embodiment, the processor 760 is further configured to compare the first video background with the second video background to obtain a comparison result; determine a target video background based on the comparison result; perform image synthesis on the first initial image frame and the second initial image frame based on the first video subject, the second video subject, and the target video background, generating a target image frame; and perform video encoding on the target image frame, generating the target video; wherein the target video includes the first video subject and the second video subject, and the video background of the target video is the target video background.
Optionally, as another embodiment, the processor 760 is further configured to: if the comparison result is that the first video background and the second video background are identical, determine the first video background or the second video background as the target video background.
Optionally, as another embodiment, the processor 760 is further configured to: if the comparison result is that the first video background and the second video background are different, receive a selection operation on a video background, and determine the video background corresponding to the selection operation as the target video background.
Optionally, as another embodiment, the video background corresponding to the selection operation includes the first video background, the second video background, or a preset video background.
Optionally, as another embodiment, the processor 760 is further configured to: determine, of the first initial image frame and the second initial image frame, the target initial image frame in which the target video background is located; determine, of the first video subject and the second video subject, the target video subject that does not belong to the target initial image frame; for each frame of image in the target initial image frames, replace the pixels at the target position in the image with the pixels of the target video subject; and when all images in the target initial image frames have completed pixel replacement, generate the target image frames that contain the first video subject and the second video subject and take the target video background as their video background.
Optionally, as another embodiment, the processor 760 is further configured to receive a user's selection operation on a video subject and, based on the selection operation, determine the video subject selected by the user as the video subject in the at least two initial image frames.
The mobile terminal of this embodiment of the present invention performs video decoding on each of the at least two obtained target video files, restoring them to corresponding initial image frames; determines the video subject and video background in the initial image frames of each target video file; and, based on the video subjects and video backgrounds in at least two initial image frames, splices the video subjects and video backgrounds identified in the different video files, ultimately forming a target video. This process can change the environment and background in which the video subject of a video file appears, integrating the video subjects identified in different video files into the same background. Through the fusion of the video subjects of different videos, subjects recorded on different time axes can appear in the same scene at the same time, achieving effects similar to virtual reality, such as a dialogue between oneself and oneself, which greatly increases the interest of the video and meets the diverse demands of users.
A person of ordinary skill in the art may appreciate that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present invention.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described here again.
In the embodiments provided in this application, it should be understood that the disclosed apparatuses and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely schematic: the division of the units is merely a logical function division, and there may be other division manners in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The above are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Each embodiment in this specification is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts between the embodiments, reference may be made to one another.
Although preferred embodiments of the embodiments of the present invention have been described, a person skilled in the art, once aware of the basic creative concept, may make other changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should also be noted that, in the embodiments of the present invention, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to the process, method, article, or terminal device. In the absence of further limitation, an element limited by the statement "including a ..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes the element.
The above are preferred embodiments of the present invention. It should be pointed out that, for a person of ordinary skill in the art, several improvements and refinements may also be made without departing from the principles of the present invention, and these improvements and refinements also fall within the protection scope of the present invention.
Claims (16)
1. A method of video splicing, characterized in that it includes:
obtaining at least two target video files;
performing video decoding on each of the at least two target video files, restoring them to initial image frames;
for each target video file of the at least two target video files, determining the video subject and the video background in the initial image frames;
based on the video subjects and the video backgrounds in at least two of the initial image frames, performing video splicing to generate a target video;
wherein the at least two initial image frames are selected from different target video files.
2. The method according to claim 1, wherein the step of determining, for each of the at least two target video files, the video subject and the video background in the initial image frames comprises:
obtaining a first initial image frame of a first target video file and a second initial image frame of a second target video file, respectively; and
performing image content recognition on the first initial image frame and the second initial image frame respectively, to determine a first video subject and a first video background in the first initial image frame, and a second video subject and a second video background in the second initial image frame.
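The "image content recognition" of claim 2 is left unspecified. As a stand-in, the toy below separates subject from background by a brightness threshold; the threshold rule and the numpy array representation are assumptions for illustration, not the patent's method.

```python
import numpy as np

def segment_frame(frame, threshold=128):
    """Split a grayscale frame into subject and background layers.
    Pixels brighter than `threshold` are treated as the video subject;
    a real system would use content recognition instead of this rule."""
    mask = frame > threshold
    subject = np.where(mask, frame, 0)      # subject pixels kept, rest zeroed
    background = np.where(mask, 0, frame)   # background pixels kept, rest zeroed
    return subject, background, mask
```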
3. The method according to claim 2, wherein the step of performing video splicing based on the video subjects and video backgrounds in the at least two initial image frames to generate the target video comprises:
comparing the first video background with the second video background to obtain a comparison result;
determining a target video background based on the comparison result;
synthesizing the first initial image frame and the second initial image frame into a target image frame, based on the first video subject, the second video subject, and the target video background; and
performing video encoding on the target image frame to generate the target video;
wherein the target video includes the first video subject and the second video subject, and the video background of the target video is the target video background.
4. The method according to claim 3, wherein the step of determining the target video background based on the comparison result comprises:
if the comparison result indicates that the first video background and the second video background are identical, determining the first video background or the second video background as the target video background.
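Claims 4 and 5 branch on whether the two backgrounds are "identical". One simple way to implement that comparison, offered purely as an assumed heuristic, is a mean absolute pixel difference against a tolerance:

```python
import numpy as np

def backgrounds_match(bg_a, bg_b, tol=10.0):
    """Return True when two background frames are close enough to be
    treated as identical (claim 4); otherwise a caller would fall back
    to the user's selection operation (claim 5). `tol` is an assumed
    value, not one taken from the patent."""
    diff = np.mean(np.abs(bg_a.astype(np.int16) - bg_b.astype(np.int16)))
    return bool(diff <= tol)
```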
5. The method according to claim 3, wherein the step of determining the target video background based on the comparison result comprises:
if the comparison result indicates that the first video background and the second video background are different, receiving a selection operation on a video background; and
determining the video background corresponding to the selection operation as the target video background.
6. The method according to claim 5, wherein the video background corresponding to the selection operation includes the first video background, the second video background, or a preset video background.
7. The method according to claim 3, wherein the step of synthesizing the first initial image frame and the second initial image frame into the target image frame, based on the first video subject, the second video subject, and the target video background, comprises:
determining, of the first initial image frame and the second initial image frame, the target initial image frame in which the target video background is located;
determining, of the first video subject and the second video subject, the target video subject that does not belong to the target initial image frame;
for each frame of the target initial image frame, replacing the pixels at the target location in the image with the pixels of the target video subject; and
when all images of the target initial image frame have completed pixel replacement, generating the target image frame that contains the first video subject and the second video subject and uses the target video background as its video background.
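The pixel-replacement step of claim 7 — overwriting the target location in the frame holding the chosen background with the other subject's pixels — can be sketched with a boolean mask; the array representation is an assumption for illustration.

```python
import numpy as np

def replace_pixels(target_frame, subject_frame, subject_mask):
    """Per claim 7: copy the target video subject's pixels into the
    target initial image frame at the subject's location, leaving the
    chosen background untouched everywhere else."""
    out = target_frame.copy()                      # keep the source frame intact
    out[subject_mask] = subject_frame[subject_mask]
    return out
```

Applied frame by frame over the target initial image frames, this yields the target image frames that are then video-encoded into the target video.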
8. The method according to claim 1, wherein, before the step of performing video splicing based on the video subjects and video backgrounds in the at least two initial image frames to generate the target video, the method further comprises:
receiving a user's selection operation on a video subject; and
determining, based on the selection operation, the video subject selected by the user as the video subject in the at least two initial image frames.
9. A mobile terminal, comprising:
an acquisition module, configured to obtain at least two target video files;
a decoding and restoration module, configured to perform video decoding on each of the at least two target video files obtained by the acquisition module, restoring them to initial image frames;
a first determination module, configured to determine, for each of the at least two target video files, a video subject and a video background in the initial image frames restored by the decoding and restoration module; and
a splicing module, configured to perform video splicing based on the video subjects and video backgrounds in at least two of the initial image frames determined by the first determination module, to generate a target video, wherein the at least two initial image frames come from different target video files.
10. The mobile terminal according to claim 9, wherein the first determination module comprises:
an acquisition submodule, configured to obtain a first initial image frame of a first target video file and a second initial image frame of a second target video file, respectively; and
a first determination submodule, configured to perform image content recognition on the first initial image frame and the second initial image frame obtained by the acquisition submodule, to determine a first video subject and a first video background in the first initial image frame, and a second video subject and a second video background in the second initial image frame.
11. The mobile terminal according to claim 10, wherein the splicing module comprises:
a comparison submodule, configured to compare the first video background with the second video background to obtain a comparison result;
a second determination submodule, configured to determine a target video background based on the comparison result obtained by the comparison submodule;
a synthesis submodule, configured to synthesize the first initial image frame and the second initial image frame into a target image frame, based on the first video subject, the second video subject, and the target video background determined by the second determination submodule; and
an encoding submodule, configured to perform video encoding on the target image frame obtained by the synthesis submodule, to generate the target video;
wherein the target video includes the first video subject and the second video subject, and the video background of the target video is the target video background.
12. The mobile terminal according to claim 11, wherein the second determination submodule comprises:
a first determination unit, configured to determine, if the comparison result indicates that the first video background and the second video background are identical, the first video background or the second video background as the target video background.
13. The mobile terminal according to claim 11, wherein the second determination submodule comprises:
a receiving unit, configured to receive, if the comparison result indicates that the first video background and the second video background are different, a selection operation on a video background; and
a second determination unit, configured to determine the video background corresponding to the selection operation received by the receiving unit as the target video background.
14. The mobile terminal according to claim 13, wherein the video background corresponding to the selection operation includes the first video background, the second video background, or a preset video background.
15. The mobile terminal according to claim 11, wherein the synthesis submodule comprises:
a third determination unit, configured to determine, of the first initial image frame and the second initial image frame, the target initial image frame in which the target video background is located;
a fourth determination unit, configured to determine, of the first video subject and the second video subject, the target video subject that does not belong to the target initial image frame determined by the third determination unit;
a replacement unit, configured to replace, for each frame of the target initial image frame, the pixels at the target location in the image with the pixels of the target video subject determined by the fourth determination unit; and
a generation unit, configured to generate, when all images of the target initial image frame have completed pixel replacement, the target image frame that contains the first video subject and the second video subject and uses the target video background as its video background.
16. The mobile terminal according to claim 9, wherein the mobile terminal further comprises:
a receiving module, configured to receive a user's selection operation on a video subject; and
a second determination module, configured to determine, based on the selection operation received by the receiving module, the video subject selected by the user as the video subject in the at least two initial image frames.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610614655.7A CN106101579B (en) | 2016-07-29 | 2016-07-29 | A kind of method and mobile terminal of video-splicing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610614655.7A CN106101579B (en) | 2016-07-29 | 2016-07-29 | A kind of method and mobile terminal of video-splicing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106101579A true CN106101579A (en) | 2016-11-09 |
CN106101579B CN106101579B (en) | 2019-04-12 |
Family
ID=57478692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610614655.7A Active CN106101579B (en) | 2016-07-29 | 2016-07-29 | A kind of method and mobile terminal of video-splicing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106101579B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1425171A (en) * | 1999-11-24 | 2003-06-18 | 伊摩信科技有限公司 | Method and system for coordination and combination of video sequences with spatial and temporal normalization |
CN101946500A (en) * | 2007-12-17 | 2011-01-12 | 斯坦·考塞瑞德 | Real time video inclusion system |
CN104902189A (en) * | 2015-06-24 | 2015-09-09 | 小米科技有限责任公司 | Picture processing method and picture processing device |
CN105046699A (en) * | 2015-07-09 | 2015-11-11 | 硅革科技(北京)有限公司 | Motion video superposition contrast method |
CN105472271A (en) * | 2014-09-10 | 2016-04-06 | 易珉 | Video interaction method, device and system |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106776831A (en) * | 2016-11-24 | 2017-05-31 | 维沃移动通信有限公司 | A kind of edit methods and mobile terminal of Multimedia Combination data |
CN106713942A (en) * | 2016-12-27 | 2017-05-24 | 广州华多网络科技有限公司 | Video processing method and video processing device |
CN106713942B (en) * | 2016-12-27 | 2020-06-09 | 广州华多网络科技有限公司 | Video processing method and device |
CN112055258A (en) * | 2019-06-06 | 2020-12-08 | 腾讯科技(深圳)有限公司 | Time delay testing method and device for loading live broadcast picture and electronic equipment |
CN112055258B (en) * | 2019-06-06 | 2023-01-31 | 腾讯科技(深圳)有限公司 | Time delay testing method and device for loading live broadcast picture, electronic equipment and storage medium |
CN113596574A (en) * | 2021-07-30 | 2021-11-02 | 维沃移动通信有限公司 | Video processing method, video processing apparatus, electronic device, and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN106101579B (en) | 2019-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106161967A (en) | A kind of backlight scene panorama shooting method and mobile terminal | |
CN106101579A (en) | A kind of method of video-splicing and mobile terminal | |
CN106060386A (en) | Preview image generation method and mobile terminal | |
CN105933538A (en) | Video finding method for mobile terminal and mobile terminal | |
CN106937055A (en) | A kind of image processing method and mobile terminal | |
CN106210526A (en) | A kind of image pickup method and mobile terminal | |
CN106101544A (en) | A kind of image processing method and mobile terminal | |
CN105847674A (en) | Preview image processing method based on mobile terminal, and mobile terminal therein | |
CN106231187A (en) | A kind of method shooting image and mobile terminal | |
CN106101767A (en) | A kind of screen recording method and mobile terminal | |
CN106658141A (en) | Video processing method and mobile terminal | |
CN105898495A (en) | Method for pushing mobile terminal recommended information and mobile terminal | |
CN105389780A (en) | Image processing method and mobile terminal | |
CN106126108A (en) | A kind of breviary map generalization method and mobile terminal | |
CN106488133A (en) | A kind of detection method of Moving Objects and mobile terminal | |
CN106341608A (en) | Emotion based shooting method and mobile terminal | |
CN106357961A (en) | Photographing method and mobile terminal | |
CN105959564A (en) | Photographing method and mobile terminal | |
CN106454085A (en) | Image processing method and mobile terminal | |
CN107592568B (en) | A kind of video broadcasting method and terminal device | |
CN106855744B (en) | A kind of screen display method and mobile terminal | |
CN106101666A (en) | The method of a kind of image color reservation and mobile terminal | |
CN106156313A (en) | The inspection method of a kind of album picture and mobile terminal | |
CN106850940A (en) | The changing method and mobile terminal of a kind of state | |
CN106101597A (en) | The image pickup method of a kind of video that fixes and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |