CN106303292B - Method and terminal for generating video data - Google Patents
Method and terminal for generating video data
- Publication number
- CN106303292B CN106303292B CN201610879193.1A CN201610879193A CN106303292B CN 106303292 B CN106303292 B CN 106303292B CN 201610879193 A CN201610879193 A CN 201610879193A CN 106303292 B CN106303292 B CN 106303292B
- Authority
- CN
- China
- Prior art keywords
- pictures
- picture
- sub
- target area
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2624—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2625—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Mobile Radio Communication Systems (AREA)
- Telephone Function (AREA)
Abstract
The embodiments of the present invention disclose a method for generating video data. While the camera of a terminal is running, a target area of the shooting interface is obtained and a target object is determined from the objects in the target area. After a photographing instruction is received, pictures are obtained at a preset step length over a window that takes the time a preset interval before the photographing moment as its start time and the time a preset interval after the photographing moment as its end time. From each picture, a region that contains the target object and has the same size as the target area is cropped out as a sub-picture. The target area is then matched against each sub-picture: a sub-picture is retained when the match succeeds and discarded when the match fails. Video data is generated from the retained sub-pictures, and the video data is saved together with the photo taken at the photographing moment. The embodiments of the present invention also disclose a terminal.
Description
Technical field
The present invention relates to the field of terminal applications, and in particular to a method and terminal for generating video data.
Background art
Currently, users generally take photos with the camera of a mobile device, but the best moment is often missed during shooting. In the prior art, to preserve such moments, the video of a time period surrounding the photographing moment is saved when the photo is taken, and the audio data of that period may also be saved, so as to make up for the sights and sounds that the photo itself misses. During shooting, a user usually intends to capture a specific object, such as a person or a thing; however, the resulting video data often contains people or objects that stray into the frame, which degrades the viewing experience. In other words, in the prior art, the video of the surrounding time period saved at photographing time has poor ornamental value, which harms the user experience.
Summary of the invention
In view of this, the main object of the present invention is to provide a method and terminal for generating video data, so as to improve the ornamental value of the video of the time period surrounding the photographing moment and thereby improve the user experience.
To achieve the above objective, the technical solution of the present invention is realized as follows:
In a first aspect, an embodiment of the present invention provides a method for generating video data, comprising: when the camera of a terminal is running, obtaining a target area of the shooting interface and determining a target object from the objects in the target area; after a photographing instruction is received, taking the time a preset interval before the photographing moment as a start time and the time a preset interval after the photographing moment as an end time, and obtaining pictures at a preset step length over that window; cropping, from each picture, a region that contains the target object and has the same size as the target area as a sub-picture; matching the target area against each sub-picture, retaining a sub-picture when the match succeeds and discarding it when the match fails; and generating video data from the retained sub-pictures and saving the video data together with the photo taken at the photographing moment.
Further, cropping from each picture a region that contains the target object and has the same size as the target area as a sub-picture comprises: detecting each picture to obtain the objects of each picture; matching the objects of each picture against the target object, retaining the picture when the match succeeds and discarding it when the match fails; determining the position information of the target object in each retained picture; and, according to the position information, cropping from each retained picture a region that contains the target object and has the same size as the target area, and taking the cropped region as the corresponding sub-picture.
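The detect-then-locate step above can be sketched as follows. This is a minimal illustration, assuming a detector that returns label-to-bounding-box mappings; the `detect` stub and the box format `(left, top, width, height)` are assumptions, not the patent's actual representation.

```python
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (left, top, width, height)

def detect(picture: Dict[str, Box]) -> Dict[str, Box]:
    """Stand-in detector: each toy 'picture' is already a label -> box mapping."""
    return picture

def locate_target(pictures: List[Dict[str, Box]],
                  target: str) -> List[Tuple[Dict[str, Box], Box]]:
    """Retain pictures whose detected objects include the target object, and
    record the target's position information in each retained picture."""
    retained = []
    for pic in pictures:
        objects = detect(pic)
        if target in objects:                  # match succeeded: retain
            retained.append((pic, objects[target]))
        # match failed: picture discarded
    return retained

pictures = [
    {"person": (40, 30, 20, 50), "tree": (0, 0, 15, 60)},
    {"tree": (0, 0, 15, 60)},                  # target left the frame: discarded
    {"person": (45, 28, 20, 50)},
]
kept = locate_target(pictures, "person")
print(len(kept))  # 2 pictures retained, each with the target's box recorded
```

The recorded position information then drives the crop of the sub-picture from each retained picture.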
Further, cropping from each retained picture, according to the position information, a region that contains the target object and has the same size as the target area comprises: determining the center position of the target object according to the position information; and cropping from each picture, centered on the center position of the target object, a region of the same size as the target area.
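A centered crop of this kind is straightforward with array slicing. The sketch below is a minimal illustration; clamping the window to the picture borders is my own addition for when the target sits near an edge, not something the text specifies.

```python
import numpy as np

def crop_centered(picture: np.ndarray, center: tuple, size: tuple) -> np.ndarray:
    """Crop an (h, w) region centered on the target object's center position,
    clamped so the window stays inside the picture borders."""
    h, w = size
    cy, cx = center
    top = min(max(cy - h // 2, 0), picture.shape[0] - h)
    left = min(max(cx - w // 2, 0), picture.shape[1] - w)
    return picture[top:top + h, left:left + w]

picture = np.arange(100).reshape(10, 10)       # a toy 10x10 "picture"
sub = crop_centered(picture, center=(4, 5), size=(4, 4))
print(sub.shape)  # (4, 4): same size as the target area
```

Because every sub-picture has the target area's size, the retained sub-pictures can later be assembled into frames of a uniform-resolution video.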
Further, matching the target area against each sub-picture, retaining a sub-picture when the match succeeds and discarding it when the match fails, comprises: determining the objects of each sub-picture, and taking the objects of each sub-picture other than the target object as a first feature object set; taking the objects of the target area other than the target object as a second feature object set; and matching the second feature object set against the first feature object set, retaining the sub-picture when the match succeeds and discarding it when the match fails.
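The feature-object-set comparison described above reduces to set equality once objects are represented as labels. The sketch below makes that explicit; representing feature objects as string labels is an assumption made for illustration.

```python
def feature_set(objects: set, target: str) -> set:
    """The objects of a region other than the target object."""
    return objects - {target}

def keep_sub_picture(sub_objects: set, area_objects: set, target: str) -> bool:
    """Retain a sub-picture only when the second feature object set (from the
    target area) matches the first feature object set (from the sub-picture)."""
    first = feature_set(sub_objects, target)    # from the sub-picture
    second = feature_set(area_objects, target)  # from the target area
    return first == second

area = {"person", "tree", "bench"}
print(keep_sub_picture({"person", "tree", "bench"}, area, "person"))  # True
print(keep_sub_picture({"person", "tree", "dog"}, area, "person"))    # False: stray object
```

This is what keeps stray people or objects out of the generated local video: any sub-picture whose surroundings differ from the originally framed target area is discarded.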
Further, the target area is a closed region.
In a second aspect, an embodiment of the present invention provides a terminal, comprising: an acquisition module, configured to obtain, when the camera of the terminal is running, a target area of the shooting interface and determine a target object from the objects in the target area, and, after a photographing instruction is received, to take the time a preset interval before the photographing moment as a start time and the time a preset interval after the photographing moment as an end time and obtain pictures at a preset step length; an interception module, configured to crop from each picture, as a sub-picture, a region that contains the target object and has the same size as the target area; a matching module, configured to match the target area against each sub-picture, retaining a sub-picture when the match succeeds and discarding it when the match fails; and a preserving module, configured to generate video data from the retained sub-pictures and save the video data together with the photo taken at the photographing moment.
Further, the interception module is specifically configured to detect each picture to obtain the objects of each picture; match the objects of each picture against the target object, retaining the picture when the match succeeds and discarding it when the match fails; determine the position information of the target object in each retained picture; and crop from each retained picture, according to the position information, a region that contains the target object and has the same size as the target area, taking the cropped region as the corresponding sub-picture.
Further, the interception module is specifically configured to determine the center position of the target object according to the position information, and to crop from each picture, centered on the center position of the target object, a region of the same size as the target area.
Further, the matching module is specifically configured to determine the objects of each sub-picture, take the objects of each sub-picture other than the target object as a first feature object set, take the objects of the target area other than the target object as a second feature object set, and match the second feature object set against the first feature object set, retaining the sub-picture when the match succeeds and discarding it when the match fails.
Further, the target area is a closed region.
With the method and terminal for generating video data provided by the embodiments of the present invention, when the camera of the terminal is running, a target area of the shooting interface is first obtained and a target object is determined from the objects in the target area. Then, after a photographing instruction is received, pictures are obtained at a preset step length over a window that takes the time a preset interval before the photographing moment as its start time and the time a preset interval after the photographing moment as its end time. Next, a sub-picture is cropped from each picture; each sub-picture contains the target object and has the same size as the target area, so that a local picture is extracted by generating sub-pictures. Finally, the target area is matched against each sub-picture to obtain the retained sub-pictures, video data is generated from the retained sub-pictures, and the video data is saved together with the photo taken at the photographing moment. In other words, in the embodiments of the present invention, sub-pictures are cropped out and the target area is matched against each of them to obtain the retained sub-pictures, from which local video data is generated; the matching ensures that no stray people or objects appear in the local video data, and the local video data is saved together with the photo, so that the user can watch a short local video while viewing the photo. This improves the ornamental value of the video of the time period surrounding the photographing moment and improves the user experience.
Brief description of the drawings
Fig. 1 is a hardware structural diagram of an optional terminal for implementing the embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a flow diagram of the method for generating video data in Embodiment 1 of the present invention;
Fig. 4 is a schematic diagram of the arrangement of the target area in Embodiment 1 of the present invention;
Fig. 5 is a timing diagram of the method for generating video data;
Fig. 6 is an optional flow diagram of the method for generating video data in Embodiment 2 of the present invention;
Fig. 7-1 is an optional schematic arrangement of a picture in Embodiment 2 of the present invention;
Fig. 7-2 is another optional schematic arrangement of a picture in Embodiment 2 of the present invention;
Fig. 7-3 is yet another optional schematic arrangement of a picture in Embodiment 2 of the present invention;
Fig. 7-4 is an optional schematic arrangement of a sub-picture in Embodiment 2 of the present invention;
Fig. 7-5 is another optional schematic arrangement of a sub-picture in Embodiment 2 of the present invention;
Fig. 7-6 is yet another optional schematic arrangement of a sub-picture in Embodiment 2 of the present invention;
Fig. 8 is a schematic diagram of the composition of the terminal in Embodiment 3 of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention.
A terminal for implementing the embodiments of the present invention will now be described with reference to the drawings. In the following description, suffixes such as "module", "component" or "unit" used to denote elements are given only to facilitate the description of the invention and have no specific meaning in themselves. Therefore, "module" and "component" may be used interchangeably.
Terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (PAD), portable multimedia players (PMP) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the construction according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Fig. 1 is a schematic diagram of the hardware configuration of an optional terminal for implementing the embodiments of the present invention.
The terminal 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. Fig. 1 shows a terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the terminal will be described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114 and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and sends broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and sends them to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like; moreover, it may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-related information may also be provided via a mobile communication network, in which case it can be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of the electronic program guide (EPG) of digital multimedia broadcasting (DMB), the electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H), and so on. The broadcast receiving module 111 can receive signals broadcast by various types of broadcast systems. In particular, the broadcast receiving module 111 can receive digital broadcasts from digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the data broadcasting system of media forward link only (MediaFLO), integrated services digital broadcasting-terrestrial (ISDB-T), and so on. The broadcast receiving module 111 may be constructed to suit the various broadcast systems that provide broadcast signals as well as the above-mentioned digital broadcast systems. Broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or other types of storage medium).
The mobile communication module 112 sends radio signals to and/or receives radio signals from at least one of a base station (for example, an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access of the terminal. The module may be internally or externally coupled to the terminal. The wireless Internet access technologies involved in the module may include WLAN (wireless LAN, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and so on.
The short-range communication module 114 is a module that supports short-range communication. Some examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), the Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, and so on.
The location information module 115 is a module for checking or obtaining the location information of the terminal. A typical example of the location information module 115 is GPS. According to current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information, and applies triangulation to the calculated information, thereby accurately computing three-dimensional current location information according to longitude, latitude and altitude. Currently, the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time information by using one additional satellite. In addition, the GPS module 115 can calculate speed information by continuously computing current location information in real time.
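As an aside, the speed-from-positions computation mentioned here can be illustrated with a standard great-circle distance calculation. This is a generic sketch of the idea, not the GPS module's actual algorithm; the haversine formula and the sample coordinates are illustrative choices.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) fixes."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def speed_mps(fix_a, fix_b):
    """Speed from two timestamped fixes, each ((lat, lon), t_seconds)."""
    (lat1, lon1), t1 = fix_a
    (lat2, lon2), t2 = fix_b
    return haversine_m(lat1, lon1, lat2, lon2) / (t2 - t1)

# Two fixes one second apart, about 15.6 m apart on the ground.
v = speed_mps(((48.8566, 2.3522), 0.0), ((48.85674, 2.3522), 1.0))
print(round(v, 1))  # ≈ 15.6 m/s
```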
The A/V input unit 120 is used to receive audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes the image data of still pictures or video obtained by an image capture apparatus in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or sent via the wireless communication unit 110; two or more cameras 121 may be provided depending on the construction of the terminal. The microphone 122 can receive sound (audio data) in operation modes such as a phone call mode, a recording mode and a voice recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the mobile communication module 112 for output. The microphone 122 may implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) the noise or interference generated in the course of sending and receiving audio signals.
The user input unit 130 can generate key input data according to commands input by the user, to control various operations of the terminal. The user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (for example, a touch-sensitive component that detects changes in resistance, pressure, capacitance and so on caused by contact), a jog wheel, a jog switch, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the terminal 100 (for example, the open or closed state of the terminal 100), the position of the terminal 100, the presence or absence of contact by the user with the terminal 100 (that is, touch input), the orientation of the terminal 100, the acceleration or deceleration and direction of movement of the terminal 100, and so on, and generates commands or signals for controlling the operation of the terminal 100. For example, when the terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power and whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 141, which will be described below in connection with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can connect with the terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The identification module may store various information for authenticating the user of the terminal 100, and may include a user identification module (UIM), a subscriber identification module (SIM), a universal subscriber identification module (USIM), and so on. In addition, the device having the identification module (hereinafter referred to as the "identification device") may take the form of a smart card; therefore, the identification device can be connected with the terminal 100 via a port or other connection device. The interface unit 170 can be used to receive input (for example, data information, electric power, and so on) from an external device and transfer the received input to one or more elements in the terminal 100, or can be used to transfer data between the terminal and the external device.
In addition, when the terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the terminal 100, or as a path through which various command signals input from the cradle are transferred to the terminal. The various command signals or power input from the cradle may serve as signals for recognizing whether the terminal is accurately mounted on the cradle. The output unit 150 is configured to provide output signals in a visual, audio and/or tactile manner (for example, audio signals, video signals, alarm signals, vibration signals, and so on). The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and so on.
The display unit 151 may display information processed in the terminal 100. For example, when the terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (for example, text messaging, multimedia file downloading, and so on). When the terminal 100 is in a video call mode or an image capture mode, the display unit 151 can display captured and/or received images, and a UI or GUI showing the video or image and related functions, and so on.
Meanwhile, when the display unit 151 and the touch pad are superimposed on one another in the form of a layer to form a touch screen, the display unit 151 may serve both as an input device and as an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be constructed to be transparent to allow viewing from the outside; these may be called transparent displays. A typical transparent display may be, for example, a TOLED (transparent organic light-emitting diode) display. According to a specific desired embodiment, the terminal 100 may include two or more display units (or other display devices); for example, the terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
When the terminal is in modes such as a call signal reception mode, a call mode, a recording mode, a voice recognition mode and a broadcast reception mode, the audio output module 152 can convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 can provide audio output related to a specific function executed by the terminal 100 (for example, a call signal reception sound, a message reception sound, and so on). The audio output module 152 may include a speaker, a buzzer, and so on.
The memory 160 can store software programs for the processing and control operations executed by the controller 180, or temporarily store data that has been output or will be output (for example, a phone book, messages, still images, video, and so on). Moreover, the memory 160 can store data on the vibration and audio signals of various modes output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, and so on. Moreover, the terminal 100 can cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.
The controller 180 typically controls the overall operation of the terminal. For example, the controller 180 executes control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be constructed within the controller 180, or may be constructed separately from the controller 180. The controller 180 can execute pattern recognition processing to recognize handwriting input or drawing input executed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power under the control of the controller 180, and provides the appropriate power required to operate each element and component.
The various embodiments described herein can be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein can be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to execute the functions described herein; in some cases, such an embodiment can be implemented in the controller 180. For a software implementation, an embodiment such as a process or a function can be implemented with a separate software module that allows at least one function or operation to be executed. The software code can be implemented by a software application (or program) written in any appropriate programming language; the software code can be stored in the memory 160 and executed by the controller 180.
So far, the terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type terminal among the various types of terminals, such as folder-type, bar-type, swing-type and slide-type terminals, will be described as an example. Therefore, the present invention can be applied to any type of terminal, and is not limited to slide-type terminals.
The terminal 100 as shown in Fig. 1 may be constructed to operate with communication systems that send data via frames or packets, such as wired and wireless communication systems as well as satellite-based communication systems.
A communication system in which the mobile terminal according to the present invention can operate will now be described with reference to Fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interfaces used by the communication systems include, for example, frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), the universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), the global system for mobile communications (GSM), and so on. As a non-limiting example, the following description relates to a CDMA communication system, but such teaching applies equally to other types of systems.
With reference to Fig. 2, cdma wireless communication system may include multiple mobile terminals 100, multiple base stations (BS) 270, base station
Controller (BSC) 275 and mobile switching centre (MSC) 280.MSC280 is configured to and Public Switched Telephony Network (PSTN)
290 form interface.MSC280 is also structured to form interface with the BSC275 that can be couple to base station 270 via back haul link.
Back haul link can be constructed according to any in several known interfaces, and the interface includes such as E1/T1, ATM, IP,
PPP, frame relay, HDSL, ADSL or xDSL.It will be appreciated that system may include multiple BSC275 as shown in Figure 2.
Each BS 270 may serve one or more sectors (or regions), each sector covered by an omnidirectional antenna or an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support a plurality of frequency assignments, each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BSs 270 may also be referred to as base transceiver subsystems (BTSs) or by other equivalent terms. In such a case, the term "base station" may be used to refer collectively to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site". Alternatively, the individual sectors of a particular BS 270 may be referred to as a plurality of cell sites.
As shown in Fig. 2, a broadcasting transmitter (BT) 295 transmits a broadcast signal to the mobile terminals 100 operating within the system. The broadcast receiving module 111 shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signal transmitted by the BT 295. Fig. 2 also shows several global positioning system (GPS) satellites 300. The satellites 300 help locate at least one of the plurality of mobile terminals 100.
Although a plurality of satellites 300 are depicted in Fig. 2, it is understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 shown in Fig. 1 is typically configured to cooperate with the satellites 300 to obtain the desired positioning information. Instead of, or in addition to, GPS tracking techniques, other technologies capable of tracking the location of the mobile terminal may be used. In addition, at least one of the GPS satellites 300 may selectively or additionally handle satellite DMB transmissions.
In a typical operation of the wireless communication system, the BSs 270 receive reverse-link signals from various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging, and other types of communication. Each reverse-link signal received by a particular base station 270 is processed within that BS 270, and the resulting data is forwarded to an associated BSC 275. The BSC 275 provides call resource allocation and mobility management functionality, including the coordination of soft handoff procedures between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC 280 interfaces with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward-link signals to the mobile terminals 100.
Based on the above-described terminal hardware structure and communication system, the embodiments of the method of the present invention are proposed below. The technical solution of the present invention is further elaborated below with reference to the accompanying drawings and specific embodiments.
Embodiment one
Based on the foregoing embodiments, an embodiment of the present invention provides a method for generating video data. The method is applied to a terminal, and the functions implemented by the method may be realized by a processor in the terminal calling program code; the program code may, of course, be stored in a computer storage medium. It can thus be seen that the terminal includes at least a processor and a storage medium.
This embodiment provides a method for generating video data. Fig. 3 is a schematic flowchart of the method for generating video data in Embodiment One of the present invention. As shown in Fig. 3, the method includes:
S301: when the camera of the terminal is running, acquiring a target area of the shooting interface and determining a target object from the objects in the target area; after a photographing instruction is received, acquiring pictures at a preset step, taking the moment a preset time before the photographing moment as the start time and the moment the preset time after the photographing moment as the end time;
Here, the user taps the camera icon on the terminal, the camera of the terminal runs, and the display interface of the terminal becomes the shooting interface of the camera. The picture to be captured is displayed on the shooting interface, and the user then circles a target area on the shooting interface. Fig. 4 is a schematic layout diagram of the target area in Embodiment One of the present invention; as shown in Fig. 4, the rectangle is the target area circled by the user. It should be noted that the shape of the target area may be a rectangle, a circle, a triangle, or the like, which is not specifically limited in the present invention;
By detecting the target area, the objects in the target area can be learned; in Fig. 4 the objects in the target area include object 1, object 2 and object 3, and the user selects object 1 as the target object;
Fig. 5 is a timing diagram of the method for generating video data. As shown in Fig. 5, the user first taps the shutter at the bottom of the shooting interface, and the moment at which the terminal receives the photographing instruction is the start moment t1. The moment a preset time before the photographing moment t3 is taken as the start time t2, the moment the preset time after the photographing moment t3 is taken as the end time t4, and pictures are acquired at the preset step. For example, the preset time may be 2 s and the preset step may be 2 ms: the first picture is acquired 2 s before the photographing moment t3, a further picture is acquired every 2 ms, and acquisition ends 2 s after the photographing moment t3. In this way, the pictures are obtained.
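The sampling window described above reduces to simple arithmetic over the photographing moment. A minimal sketch follows; the function name and the use of millisecond timestamps are assumptions for illustration, not part of the disclosure:

```python
def capture_timestamps(shot_ms, preset_ms, step_ms):
    """Timestamps (ms) at which pictures are acquired: the window runs
    from preset_ms before the photographing moment (start time t2) to
    preset_ms after it (end time t4), sampled every step_ms."""
    return list(range(shot_ms - preset_ms, shot_ms + preset_ms + 1, step_ms))

# Example values from the text: a 2 s window on each side of the
# photographing moment t3, one picture every 2 ms.
ts = capture_timestamps(shot_ms=10_000, preset_ms=2_000, step_ms=2)
print(len(ts), ts[0], ts[-1])  # 2001 pictures, from 8000 ms to 12000 ms
```

In practice the terminal would buffer camera frames continuously and keep only those whose timestamps fall in this window once the shutter is tapped.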
S302: cropping out of each of the pictures, as a sub-picture, a region that includes the target object and has the same size as the target area;
Specifically, after the pictures are acquired, each picture needs to be cropped to obtain the sub-pictures; each sub-picture includes the target object chosen by the user, and the size of each sub-picture is the same as the size of the target area chosen by the user;
For example, assume that object 1 is a person, object 2 is a red balloon, and object 3 is a blue balloon, and that the user wishes to use the terminal to capture the most wonderful moment at which object 2 and object 3 take off with object 1 as the background. Then, after the user circles the target area and the target object in S301, each time the terminal acquires a picture it crops out, as a sub-picture, a region that includes object 1 and has the same size as the rectangle in Fig. 4. In this way, local pictures including the target object are obtained and the pictures that do not include the target object are deleted, so that the video the user watches consists of the pictures the user expects to see, which improves the user experience.
Here, it should be noted that there may be one or more target objects in the embodiment of the present invention, which is not specifically limited herein.
S303: matching the target area with each of the sub-pictures; when the matching succeeds, retaining the sub-picture, and when the matching fails, discarding the sub-picture;
After the sub-pictures are obtained in S302, the target area is matched with each sub-picture to obtain a matching result; the sub-pictures whose matching result is success are retained, and the sub-pictures whose matching result is failure are discarded;
Here, it should be noted that matching the target area with each sub-picture means matching the objects in the target area one by one with the objects in each sub-picture. For example, one matching manner is: when the target object of the target area matches the target object of the sub-picture successfully, the objects in the sub-picture other than the target object are deleted and the sub-picture is then retained. Another matching manner is: the objects of the target area are matched one by one with the objects of the sub-picture, and after the matching succeeds, the objects in the sub-picture other than the objects of the target area are deleted and the sub-picture is then retained. Of course, the embodiments of the present invention are not limited to the above two matching manners.
For example, when an obtained sub-picture includes object 1, object 2, object 3 and object 4: in the first matching manner, when object 1 of the target area matches object 1 of the sub-picture successfully, object 2, object 3 and object 4 (the objects other than object 1) are deleted from the sub-picture, which is then retained; in the other matching manner, object 1, object 2 and object 3 of the target area are matched one by one with the objects of the sub-picture, and after the matching succeeds, object 4 (the object other than object 1, object 2 and object 3) is deleted from the sub-picture, which is then retained. Here, the matching manner in the embodiment of the present invention is not limited thereto;
In this way, the objects the user does not wish to appear in the video are deleted, so that the retained sub-pictures are all pictures desired by the user, which improves the viewing value of the pictures of the time period around the photographing moment and improves the user experience.
S304: generating video data based on the retained sub-pictures, and saving the video data together with the photo taken at the photographing moment.
After the sub-pictures are retained in S303, video data is generated based on the sub-pictures. The format of the video data may be video/avc, video/3gpp, video/mp4v-es, etc., and the format of the photo may be bmp, jpg, tiff, gif, pcx, tga, exif, fpx, svg, psd, cdr, pcd, dxf, ufo, eps, ai, raw, etc.; the formats of the video data and the photo are not specifically limited in the embodiment of the present invention.
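Whatever container format is chosen, the size of the generated clip follows from the capture parameters of S301. A small sketch of that arithmetic; the playback frame rate is an assumption for illustration, since the embodiment does not fix one:

```python
def clip_parameters(preset_ms, step_ms, playback_fps=30):
    """Upper bound on the number of frames in the generated clip (before
    the matching of S303 discards any sub-pictures), and the playback
    duration if the retained frames are written out at playback_fps.
    The playback rate is an assumed illustrative value."""
    max_frames = 2 * preset_ms // step_ms + 1
    duration_s = max_frames / playback_fps
    return max_frames, duration_s

# With the example values (2 s window each side, one picture every 2 ms):
frames, seconds = clip_parameters(preset_ms=2_000, step_ms=2)
print(frames)  # at most 2001 frames
```

A real implementation would likely subsample these frames before encoding, since 2 ms sampling yields far more frames than a 30 fps clip needs.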
According to the method for generating video data and the terminal provided by the embodiments of the present invention, first, when the camera of the terminal is running, the target area of the shooting interface is acquired and the target object is determined from the objects in the target area; then, after a photographing instruction is received, pictures are acquired at a preset step, taking the moment a preset time before the photographing moment as the start time and the moment the preset time after the photographing moment as the end time; next, sub-pictures are cropped out of the pictures, each sub-picture including the target object and having the same size as the target area, so that local pictures are obtained by cropping the sub-pictures; finally, the retained sub-pictures are obtained by matching the target area with each sub-picture, video data is generated based on the retained sub-pictures, and the video data is saved together with the photo taken at the photographing moment. That is, in the embodiments of the present invention, sub-pictures are cropped out and the target area is matched with each sub-picture to obtain the retained sub-pictures, so that local video data is generated; through the matching, no unwanted people or objects enter the local video data, and the local video data is saved together with the photo, so that the user can watch the local short video while viewing the photo, which improves the viewing value of the video of the time period around the photographing moment and improves the user experience.
Embodiment two
Based on the foregoing embodiments, this embodiment provides a method for generating video data. The method is applied to a terminal, and the functions implemented by the method may be realized by a processor in the terminal calling program code; the program code may, of course, be stored in a computer storage medium. It can thus be seen that the terminal includes at least a processor and a storage medium.
Fig. 6 is an optional schematic flowchart of the method for generating video data in Embodiment Two of the present invention. As shown in Fig. 6, on the basis of Embodiment One above, S302 includes:
S601: detecting each of the pictures to obtain the objects of the picture;
S602: matching the objects of each picture with the target object; when the matching succeeds, retaining the picture, and when the matching fails, discarding the picture;
S603: determining the location information of the target object from each retained picture;
S604: cropping, according to the location information, from each retained picture a region that includes the target object and has the same size as the target area, and taking the cropped region as the corresponding sub-picture.
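The four steps above can be sketched as a single per-picture loop. In this minimal sketch, `detect` (returning a mapping from object labels to positions) and `crop` stand in for the terminal's object detector and cropping routine; both are assumptions for illustration, not part of the disclosure:

```python
def make_sub_pictures(pictures, target, area_size, detect, crop):
    """S601-S604: detect each picture's objects, keep only the pictures
    in which the target object appears, locate the target object, and
    crop a target-area-sized region around it as the sub-picture."""
    sub_pictures = []
    for picture in pictures:
        objects = detect(picture)                # S601: detect the picture
        if target not in objects:                # S602: matching fails,
            continue                             #       the picture is dropped
        location = objects[target]               # S603: location information
        sub_pictures.append(crop(picture, location, area_size))  # S604
    return sub_pictures

# Toy run: each picture is represented by its detected objects directly.
pics = [{"object 1": (40, 60)}, {"object 2": (5, 5)}]
subs = make_sub_pictures(pics, "object 1", (100, 80),
                         detect=lambda p: p,
                         crop=lambda p, loc, size: (loc, size))
print(subs)  # [((40, 60), (100, 80))] - the second picture was discarded
```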
For example, Fig. 7-1 is an optional schematic layout diagram of a picture in Embodiment Two of the present invention. For the acquired picture shown in Fig. 7-1, the picture is first detected to obtain its objects, which include object 1, object 2, object 3, object 4, object 5 and object 6; these objects are then matched with the target object, a successful match indicating that the target object exists in the picture. Still taking object 1 as the target object, the matching shows that object 1 exists in the picture, so the picture is retained;
Fig. 7-2 is another optional schematic layout diagram of a picture in Embodiment Two of the present invention. For the acquired picture shown in Fig. 7-2, the picture is first detected to obtain its objects, which include object 1, object 2, object 3, object 4, object 5 and object 6; these objects are then matched with the target object. Still taking object 1 as the target object, the matching shows that object 1 exists in the picture, so the picture is retained;
Fig. 7-3 is a further optional schematic layout diagram of a picture in Embodiment Two of the present invention. For the acquired picture shown in Fig. 7-3, the picture is first detected to obtain its objects, which include object 2, object 3, object 4, object 5 and object 6; these objects are then matched with the target object. Still taking object 1 as the target object, the matching shows that object 1 does not exist in the picture, so the picture is discarded;
The pictures retained through the above detection and matching are used to obtain the local pictures of the target object, and thereby the local short video of the target object. In a specific implementation process, the location information of the target object is determined from each retained picture; the location information here is the coordinate information of the target object on the shooting interface. After the location information of the target object is determined, a sub-picture including the target object is cropped from the picture according to the location information, the size of the sub-picture being the same as the size of the target area. In this way, the local pictures are obtained.
In order to crop the retained pictures to obtain the sub-pictures, in an optional embodiment, S604 may include: setting the center position of the target object according to the location information; and cropping, from each picture, a region of the same size as the target area centered on the center position of the target object.
After determining the location information of the target object, the terminal sets the center position of the target object. Here, the center position of the target object may be preset in the terminal by the user; for example, when a person is being shot, the head of the person may be set as the center position. A region of the same size as the target area is then cropped from the picture centered on the head of the person;
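The center-based cropping amounts to index arithmetic with a border clamp. A minimal sketch on a picture stored as a 2-D list of pixel values; the shift-inward behavior at the picture border is an assumption, since the embodiment does not specify what happens when the window would overrun the edge:

```python
def crop_centered(picture, center, area_size):
    """Crop a region of area_size (width, height) from picture (a list
    of pixel rows), centered on center (x, y); the window is shifted
    inward when it would run past the picture border, so the cropped
    region always has exactly the size of the target area."""
    img_h, img_w = len(picture), len(picture[0])
    w, h = area_size
    cx, cy = center
    x0 = min(max(cx - w // 2, 0), img_w - w)   # clamp left edge
    y0 = min(max(cy - h // 2, 0), img_h - h)   # clamp top edge
    return [row[x0:x0 + w] for row in picture[y0:y0 + h]]

# A 6x6 picture whose pixel at (x, y) holds 10*y + x.
pic = [[10 * y + x for x in range(6)] for y in range(6)]
print(crop_centered(pic, center=(5, 5), area_size=(3, 3))[0])  # [33, 34, 35]
```

Centering on a corner pixel thus still yields a full target-area-sized sub-picture, which keeps every sub-picture the same size for the video generation of S304.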
For example, Fig. 7-4 is an optional schematic layout diagram of a sub-picture in Embodiment Two of the present invention; the sub-picture of Fig. 7-4 is cropped from the picture of Fig. 7-1. As shown in Fig. 7-4, after object 1 is obtained, the location information of object 1 is determined and the center position of object 1 is then set; the sub-picture in Fig. 7-4 is cropped out centered on the center position of object 1 and includes object 1, object 2 and object 3;
Fig. 7-5 is another optional schematic layout diagram of a sub-picture in Embodiment Two of the present invention; the sub-picture of Fig. 7-5 is cropped from the picture of Fig. 7-2. As shown in Fig. 7-5, after object 1 is obtained, the location information of object 1 is determined and the center position of object 1 is then set; the sub-picture in Fig. 7-5 is cropped out centered on the center position of object 1 and includes object 1, object 2, object 3 and object 4;
In this way, by processing each picture in the same manner, the sub-pictures are obtained;
After the sub-pictures are determined, in order to meet the needs of the user, in an optional embodiment and on the basis of Embodiment One above, S303 may include: determining the objects of each sub-picture, and determining the objects other than the target object among the objects of each sub-picture as a first feature object set; determining the objects other than the target object among the objects of the target area as a second feature object set; and matching the second feature object set with the first feature object set, retaining the sub-picture when the matching succeeds and discarding the sub-picture when the matching fails.
Here, the terminal first determines the objects of each sub-picture and the objects of the target area by means of detection. For example, still taking the target area of Fig. 4 as an example, the objects of the target area are object 1 (a person), object 2 (a red balloon) and object 3 (a blue balloon), so the second feature object set consists of the red balloon and the blue balloon;
The objects of the sub-picture in Fig. 7-4 are the person, the red balloon and the blue balloon, so the first feature object set consists of the red balloon and the blue balloon; the red balloon and blue balloon of Fig. 4 are matched with the red balloon and blue balloon of Fig. 7-4, the matching succeeds, and the sub-picture is retained;
The objects of the sub-picture in Fig. 7-5 are the person, the red balloon, the blue balloon and object 4 (a bird), so the first feature object set consists of the red balloon, the blue balloon and the bird; the red balloon and blue balloon of Fig. 4 are matched with the red balloon, blue balloon and bird of Fig. 7-5, and the matching succeeds; since the bird in Fig. 7-5 finds no matching object, the bird is deleted and the sub-picture is retained;
Fig. 7-6 is a further optional schematic layout diagram of a sub-picture in Embodiment Two of the present invention. As shown in Fig. 7-6, the objects of the sub-picture in Fig. 7-6 are the person and the blue balloon, so the first feature object set consists of the blue balloon; the red balloon and blue balloon of Fig. 4 are matched with the blue balloon of Fig. 7-6, and since the red balloon of Fig. 4 finds no matching object, the matching fails and the sub-picture is discarded;
Through the above, the local pictures used for making the local short video are obtained.
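The feature-object-set match worked through in Figs. 7-4 to 7-6 can be sketched with plain set operations. Object labels stand in for the detected objects here; this is an illustration of the matching rule, not the disclosed detector:

```python
def match_feature_sets(sub_objects, area_objects, target):
    """Return (retain, kept_objects). The first feature object set is
    the sub-picture's objects other than the target object; the second
    is the target area's objects other than the target object. The
    sub-picture is retained only when every second-feature object finds
    a match in the first feature object set; sub-picture objects with
    no match in the target area (e.g. the bird of Fig. 7-5) are deleted."""
    first = set(sub_objects) - {target}
    second = set(area_objects) - {target}
    if not second <= first:           # some target-area object unmatched
        return False, set()           # the matching fails: discard
    return True, {target} | second    # unmatched extras are deleted

area = {"person", "red balloon", "blue balloon"}   # the Fig. 4 target area
# Fig. 7-5: the bird finds no match and is deleted; the picture is kept.
print(match_feature_sets({"person", "red balloon", "blue balloon", "bird"},
                         area, "person"))
# Fig. 7-6: the red balloon finds no match; the sub-picture is discarded.
print(match_feature_sets({"person", "blue balloon"}, area, "person"))
```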
Here, it should be noted that the above target area is an enclosed region.
Embodiment three
Based on the foregoing method embodiments, this embodiment provides a terminal. The modules included in the terminal can be implemented by a processor in the terminal, and can of course also be implemented by specific logic circuits. In a specific embodiment, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like.
Fig. 8 is a schematic diagram of the composition structure of the terminal in Embodiment Three of the present invention. As shown in Fig. 8, the terminal provided by this embodiment includes an obtaining module 81, an interception module 82, a matching module 83 and a preserving module 84, wherein: the obtaining module 81 is configured to, when the camera of the terminal is running, obtain the target area of the shooting interface and determine the target object from the objects in the target area, and, after a photographing instruction is received, obtain pictures at a preset step, taking the moment a preset time before the photographing moment as the start time and the moment the preset time after the photographing moment as the end time; the interception module 82 is configured to crop out of each picture, as a sub-picture, a region that includes the target object and has the same size as the target area; the matching module 83 is configured to match the target area with each sub-picture, retain the sub-picture when the matching succeeds, and discard the sub-picture when the matching fails; and the preserving module 84 is configured to generate video data based on the retained sub-pictures and save the video data together with the photo taken at the photographing moment.
Here, the above target area is an enclosed region.
In order to crop out the sub-pictures, in an optional embodiment, the interception module 82 is specifically configured to: detect each picture to obtain the objects of the picture; match the objects of each picture with the target object, retaining the picture when the matching succeeds and discarding the picture when the matching fails; determine the location information of the target object from each retained picture; and crop, according to the location information, from each retained picture a region that includes the target object and has the same size as the target area, taking the cropped region as the corresponding sub-picture.
In order to crop out the sub-pictures accurately, in an optional embodiment, the interception module 82 is specifically configured to set the center position of the target object according to the location information, and to crop, from each picture, a region of the same size as the target area centered on the center position of the target object.
In order to meet the needs of the user and obtain the local pictures the user needs, in an optional embodiment, the matching module 83 is specifically configured to: determine the objects of each sub-picture; determine the objects other than the target object among the objects of each sub-picture as a first feature object set; determine the objects other than the target object among the objects of the target area as a second feature object set; and match the second feature object set with the first feature object set, retaining the sub-picture when the matching succeeds and discarding the sub-picture when the matching fails.
It should be noted that the above description of the terminal embodiment is similar to the above description of the method embodiments and shares the same beneficial effects as the method embodiments, and is therefore not repeated here. For technical details not disclosed in the terminal embodiment of the present invention, those skilled in the art may refer to the description of the method embodiments of the present invention; to save space, details are not repeated here.
It should be understood that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention. The sequence numbers of the embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
It should also be noted that, in this document, the terms "include" and "comprise" and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other manners. The device embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation, for example: multiple units or components may be combined or may be integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions; the aforementioned program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk or an optical disk.
Alternatively, if the above integrated unit of the present invention is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence, or the part thereof contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods of the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk or an optical disk.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any person familiar with the art can easily conceive of changes or replacements within the technical scope disclosed by the present invention, and all such changes or replacements shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A method for generating video data, characterized by comprising:
when a camera of a terminal is running, obtaining a target area of a shooting interface, and determining a target object from objects of the target area; after a photographing instruction is received, obtaining pictures at a preset step, taking a moment a preset time before a photographing moment as a start time and a moment the preset time after the photographing moment as an end time;
cropping out of each of the pictures, as a sub-picture, a region that includes the target object and has the same size as the target area;
matching the target area with each of the sub-pictures; when the matching succeeds, retaining the sub-picture, and when the matching fails, discarding the sub-picture;
generating video data based on the retained sub-pictures, and saving the video data together with a photo taken at the photographing moment;
wherein the matching the target area with each of the sub-pictures comprises: matching the objects of the target area respectively with objects of each of the pictures.
2. The method according to claim 1, wherein the cropping out of each of the pictures, as a sub-picture, a region that includes the target object and has the same size as the target area comprises:
detecting each of the pictures to obtain objects of the picture;
matching the objects of each of the pictures with the target object; when the matching succeeds, retaining the picture, and when the matching fails, discarding the picture;
determining location information of the target object from each retained picture;
cropping, according to the location information, from each retained picture a region that includes the target object and has the same size as the target area, and taking the cropped region as the corresponding sub-picture.
3. The method according to claim 2, wherein the cropping, according to the location information, from each retained picture a region that includes the target object and has the same size as the target area comprises:
setting a center position of the target object according to the location information;
cropping, from each of the pictures, a region of the same size as the target area centered on the center position of the target object.
4. The method according to claim 1, wherein the matching the target area with each of the sub-pictures, retaining the sub-picture when the matching succeeds, and discarding the sub-picture when the matching fails comprises:
determining objects of each of the sub-pictures, and determining objects other than the target object among the objects of each of the sub-pictures as a first feature object set;
determining objects other than the target object among the objects of the target area as a second feature object set;
matching the second feature object set with the first feature object set; when the matching succeeds, retaining the sub-picture, and when the matching fails, discarding the sub-picture.
5. The method according to claim 1, wherein the target area is an enclosed region.
6. A terminal, characterized in that the terminal comprises:
an obtaining module, configured to, when a camera of the terminal is running, obtain a target area of a shooting interface and determine a target object from objects of the target area, and, after a photographing instruction is received, obtain pictures at a preset step, taking a moment a preset time before a photographing moment as a start time and a moment the preset time after the photographing moment as an end time;
an interception module, configured to crop out of each of the pictures, as a sub-picture, a region that includes the target object and has the same size as the target area;
a matching module, configured to match the target area with each of the sub-pictures, retain the sub-picture when the matching succeeds, and discard the sub-picture when the matching fails;
a preserving module, configured to generate video data based on the retained sub-pictures, and save the video data together with a photo taken at the photographing moment;
wherein the matching module is specifically configured to match the objects of the target area respectively with objects of each of the pictures.
7. The terminal according to claim 6, wherein the cropping module is specifically configured to: detect each picture to obtain the objects of each picture; match the objects of each picture with the target object, retaining the picture when matching succeeds and discarding the picture when matching fails; determine, from each retained picture, the location information of the target object; and crop, from each retained picture according to the location information, a region that contains the target object and is identical in size to the target area, taking the cropped region as the corresponding sub-picture.
8. The terminal according to claim 7, wherein the cropping module is specifically configured to set the center of the target object according to the location information, and crop, from each picture, a region identical in size to the target area, centered on the center of the target object.
9. The terminal according to claim 6, wherein the matching module is specifically configured to: determine the objects of each sub-picture, determining the objects other than the target object among the objects of each sub-picture as a first feature object set; determine the objects other than the target object among the objects of the target area as a second feature object set; and match the second feature object set with the first feature object set, retaining the sub-picture when matching succeeds and discarding the sub-picture when matching fails.
10. The terminal according to claim 6, wherein the target area is a closed region.
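The overall flow claimed above (claim 6) can be sketched end to end: sample pictures at a preset step over a window around the photographing moment, crop a sub-picture from each, keep the ones that match the target area, and turn the survivors into video data. Frame capture, cropping, matching, and encoding are passed in as stand-ins here; every name is an illustrative assumption, not the patent's implementation.

```python
# Sketch of the claimed pipeline. sample_times realizes claim 6's window:
# start = photographing moment - preset time, end = photographing moment +
# preset time, sampled at a preset step.

def sample_times(shot_time, preset_time, step):
    """Moments from preset_time before to preset_time after the shot."""
    t = shot_time - preset_time              # start time
    end = shot_time + preset_time            # end time
    times = []
    while t <= end:
        times.append(round(t, 6))            # round away float drift
        t += step
    return times

def build_video(shot_time, preset_time, step, grab, crop, matches, encode):
    frames = [grab(t) for t in sample_times(shot_time, preset_time, step)]
    subs = [crop(f) for f in frames]         # one sub-picture per picture
    kept = [s for s in subs if matches(s)]   # retain on match, discard otherwise
    return encode(kept)                      # video data from retained sub-pictures

# A 2-second window around t=10.0 s sampled every 0.5 s.
print(sample_times(10.0, 1.0, 0.5))
```

In a real terminal, `grab` would pull buffered camera frames, `crop` would apply the claim 3 centered crop, `matches` the claim 4 set match, and `encode` a video encoder.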
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610879193.1A CN106303292B (en) | 2016-09-30 | 2016-09-30 | A kind of generation method and terminal of video data |
PCT/CN2017/101621 WO2018059237A1 (en) | 2016-09-30 | 2017-09-13 | Video data generation method, terminal and computer-readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106303292A CN106303292A (en) | 2017-01-04 |
CN106303292B true CN106303292B (en) | 2019-05-03 |
Family
ID=57716848
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610879193.1A Active CN106303292B (en) | 2016-09-30 | 2016-09-30 | A kind of generation method and terminal of video data |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106303292B (en) |
WO (1) | WO2018059237A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106303292B (en) * | 2016-09-30 | 2019-05-03 | 努比亚技术有限公司 | A kind of generation method and terminal of video data |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1717702A (en) * | 2003-04-17 | 2006-01-04 | 精工爱普生株式会社 | Generation of still image from a plurality of frame images |
CN102547091A (en) * | 2010-12-06 | 2012-07-04 | 奥林巴斯映像株式会社 | Camera, display device and display method |
CN104618572A (en) * | 2014-12-19 | 2015-05-13 | 广东欧珀移动通信有限公司 | Photographing method and device for terminal |
CN105245777A (en) * | 2015-09-28 | 2016-01-13 | 努比亚技术有限公司 | Method and device for generating video image |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102109902A (en) * | 2009-12-28 | 2011-06-29 | 鸿富锦精密工业(深圳)有限公司 | Input device based on gesture recognition |
CN103971391A (en) * | 2013-02-01 | 2014-08-06 | 腾讯科技(深圳)有限公司 | Animation method and device |
CN104796594B (en) * | 2014-01-16 | 2020-01-14 | 中兴通讯股份有限公司 | Method for instantly presenting special effect of preview interface and terminal equipment |
CN105338284A (en) * | 2014-07-08 | 2016-02-17 | 华为技术有限公司 | Method, device and system used for carrying out multi-point video communication |
CN106303292B (en) * | 2016-09-30 | 2019-05-03 | 努比亚技术有限公司 | A kind of generation method and terminal of video data |
2016
- 2016-09-30 CN CN201610879193.1A patent/CN106303292B/en active Active

2017
- 2017-09-13 WO PCT/CN2017/101621 patent/WO2018059237A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN106303292A (en) | 2017-01-04 |
WO2018059237A1 (en) | 2018-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105404484B (en) | Terminal split screen device and method | |
CN104954689A (en) | Method and shooting device for acquiring photo through double cameras | |
CN105227837A (en) | A kind of image combining method and device | |
CN105263049B (en) | A kind of video Scissoring device, method and mobile terminal based on frame coordinate | |
CN106097284B (en) | A kind of processing method and mobile terminal of night scene image | |
CN104660912A (en) | Photographing method and photographing device | |
CN106155695A (en) | The removing control device and method of background application | |
CN106412243B (en) | A kind of method and terminal of monitoring distance inductor exception | |
CN105956999A (en) | Thumbnail generating device and method | |
CN105897564A (en) | Photo sharing apparatus and method | |
CN105959551A (en) | Shooting device, shooting method and mobile terminal | |
CN106302086A (en) | A kind of different mobile terminal carries out the method for content synchronization, Apparatus and system | |
CN105681582A (en) | Control color adjusting method and terminal | |
CN105227865A (en) | A kind of image processing method and terminal | |
CN106302651A (en) | The social sharing method of picture and there is the terminal of picture social activity share system | |
CN107071329A (en) | The method and device of automatic switchover camera in video call process | |
CN104917965A (en) | Shooting method and device | |
CN106303229A (en) | A kind of photographic method and device | |
CN105187709A (en) | Remote photography implementing method and terminal | |
CN105245792A (en) | Mobile terminal and image shooting method | |
CN106373110A (en) | Method and device for image fusion | |
CN106331482A (en) | Photo processing device and method | |
CN106412328B (en) | A kind of method and apparatus obtaining field feedback | |
CN104935822A (en) | Method and device for processing images | |
CN105242483A (en) | Focusing realization method and device and shooting realization method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |