CN108109186B - Video file processing method and device and mobile terminal - Google Patents


Info

Publication number
CN108109186B
CN108109186B (application CN201711241910.9A)
Authority
CN
China
Prior art keywords
reference image
target pixel
pixel area
video file
pixel region
Prior art date
Legal status
Active
Application number
CN201711241910.9A
Other languages
Chinese (zh)
Other versions
CN108109186A (en)
Inventor
张贾宁
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201711241910.9A priority Critical patent/CN108109186B/en
Publication of CN108109186A publication Critical patent/CN108109186A/en
Application granted granted Critical
Publication of CN108109186B publication Critical patent/CN108109186B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0483 Interaction with page-structured environments, e.g. book metaphor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Telephone Function (AREA)

Abstract

An embodiment of the invention provides a video file processing method, a video file processing apparatus, and a mobile terminal. The method comprises the following steps: acquiring position information of a target pixel region in a reference image of a video file; adjusting, according to the position information, the position of the target pixel region in each non-reference image of the video file (that is, each image other than the reference image); and adjusting the position of the associated pixel region, i.e., the part of the non-reference image outside the target pixel region, according to the positional relationship between the target pixel region and the associated pixel region. By adjusting the positions of the target pixel region and the associated pixel region, the target pixel region stays relatively stationary throughout the video file, which improves the clarity of the video during playback and lets the user watch the target pixel region clearly.

Description

Video file processing method and device and mobile terminal
Technical Field
The present invention relates to the field of mobile terminal technologies, and in particular, to a video processing method and apparatus, and a mobile terminal.
Background
With the continuous improvement of camera resolution in mobile terminals, photographs taken with a mobile terminal have largely replaced those from traditional cameras, and recording video through a mobile terminal's camera has become a frequent activity; users often record their lives on video.
The recorded subjects are usually in motion. For example, a user records a video of friend A jumping using the mobile terminal's video recording function. In the recorded video, as shown in fig. 1, the grass is still while person A is moving. While watching the video, the user may only want to see person A's face clearly and may not care about the other scenery (such as the grass). However, because the person keeps jumping, the user's gaze has to jump up and down to follow the face, so the user cannot focus steadily on person A, which degrades the viewing experience.
Disclosure of Invention
Embodiments of the invention provide a video processing method, a video processing apparatus, and a mobile terminal, to solve the problem that the prior art degrades the user's viewing experience.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, a video file processing method is provided. The method includes: acquiring position information of a target pixel region in a reference image of a video file; adjusting, according to the position information, the position of the target pixel region in each non-reference image of the video file other than the reference image; and adjusting the position of the associated pixel region (the part of the non-reference image outside the target pixel region) according to the positional relationship between the target pixel region and the associated pixel region.
In a second aspect, a video file processing apparatus is provided, comprising: an acquisition module, configured to acquire position information of a target pixel region in a reference image of a video file; and an adjustment module, configured to adjust, according to the position information, the position of the target pixel region in each non-reference image of the video file other than the reference image, and to adjust the position of the associated pixel region in the non-reference image according to the positional relationship between the target pixel region and the associated pixel region outside the target pixel region.
In a third aspect, a mobile terminal is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of any of the video file processing methods described in the embodiments of the invention.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, implements the steps of any one of the video file processing methods described in the embodiments of the present invention.
Compared with the prior art, the invention has the following advantages:
in the embodiment of the invention, the positions of the target pixel region and the associated pixel region are adjusted so that the target pixel region stays relatively stationary in the video file, which improves the clarity of the video during playback and lets the user watch the target pixel region clearly.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic diagram of the motion state of a person in two adjacent images in a video;
FIG. 2 is a flowchart illustrating steps of a video file processing method according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating steps of a video file processing method according to a second embodiment of the present invention;
FIG. 4 is a schematic view of a reference image;
FIG. 5 is a schematic diagram of a selection of a target pixel region in a reference image;
FIG. 6 is a schematic diagram of a state of a target pixel region in two adjacent frames of images in a video;
fig. 7 is a block diagram of a video file processing apparatus according to a third embodiment of the present invention;
fig. 8 is a block diagram of a mobile terminal according to a fourth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 2, a flowchart illustrating steps of a video file processing method according to a first embodiment of the present invention is shown.
The video file processing method of the embodiment of the invention comprises the following steps:
step 101: and acquiring the position information of the target pixel region in a reference image of the video file.
A video file is composed of many frames of images, and the user can manually select any one of them as the reference image. Alternatively, the mobile terminal can automatically determine a frame and recommend it to the user as the reference image. The user can hand-draw the target pixel region in the reference image. The target pixel region may be the contour region of the object to be kept stationary, or a region that contains that object.
Step 102: and adjusting the position of the target pixel region in a non-reference image except the reference image of the video file according to the position information, and adjusting the position of the associated pixel region in the non-reference image according to the position relation between the target pixel region and the associated pixel region except the target pixel region in the non-reference image.
The target pixel region in every frame is adjusted to the same position coordinate, so that the target pixel region in the picture displayed to the user stays stationary while the video file plays, making it easy for the user to observe the features of the object inside the target pixel region.
For example: the target pixel area is an image of a person a, and the image of the person a is located at the center of the image. And aiming at each frame of image in the video file, searching the image of the person A from each frame of image as a target pixel adjustment area, and adjusting the image of the person A in each frame of image to the position of the center of the image.
In this way, a target pixel region that originally moved in the video file becomes stationary in the target video file, while a region that was originally stationary appears to move. The video file processing method of the embodiment of the invention thus completes a conversion of the relative motion relationship of the photographed objects.
It should be noted that, in a specific implementation, the work can also be offloaded to a server. After the user selects the target pixel region in the reference image, the reference image with the target pixel region marked is sent to the server. The server determines the target pixel region to be kept stationary and its position information in the reference image, identifies the target pixel region in each frame of the video file to be processed, adjusts it to the position of the target pixel region in the reference image, combines the adjusted frames into a target video file, and sends the target video file to the mobile terminal for playback.
In the video processing method provided by the embodiment of the invention, the positions of the target pixel region and the associated pixel region are adjusted so that the target pixel region stays relatively stationary in the video file, which improves the clarity of the video during playback and lets the user watch the target pixel region clearly.
Example two
Referring to fig. 3, a flowchart illustrating steps of a video file processing method according to a second embodiment of the present invention is shown.
The video file processing method of the embodiment of the invention specifically comprises the following steps:
step 201: and displaying the thumbnails of all the frame images in the video file to be processed in sequence.
Showing a thumbnail of each frame makes it easy for the user to select the reference image.
The video file processing method of the embodiment of the invention can be started manually by the user clicking a corresponding button, or started automatically by the mobile terminal. A preferred way for the user to start the flow manually is as follows: before displaying the thumbnails of the frames of the file to be processed in sequence, the mobile terminal receives a trigger instruction from the user on a motion-relationship conversion button; in response to the trigger instruction, the motion-relationship conversion process, i.e., the video processing flow of the embodiment of the invention, is started.
Step 202: and receiving a selection instruction of the user on the thumbnail, and determining the image corresponding to the selected thumbnail as a reference image.
Specifically, the user can drag a progress bar below the displayed thumbnails to select one of them, and the mobile terminal determines the image corresponding to the selected thumbnail as the reference image and displays it.
The following description takes a video of a person jumping on grass as the video to be processed. Fig. 4 shows the reference image selected by the user from the video file to be processed.
Step 203: and determining the first position coordinates of the target pixel area in the reference image according to the selection operation of the user on the target pixel area in the reference image.
The user can hand-draw the target pixel region in the reference image; alternatively, the user can circle the target pixel region with a selection frame. Fig. 5 is a schematic diagram of selecting a target pixel region in the reference image; the region enclosed by the dotted line in fig. 5 is the target pixel region.
The first position coordinate can be the coordinate of the central pixel point of the target pixel region in the image, or the coordinates of the first and last pixel points of the target pixel region in the image.
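The two coordinate conventions above can be sketched as follows; the helper names and the bounding-box representation `(x0, y0, x1, y1)` are illustrative assumptions, not defined by the patent:

```python
def center_coordinate(x0, y0, x1, y1):
    """Center-pixel convention: the midpoint of the target region's
    bounding box, given as (x0, y0) top-left and (x1, y1) bottom-right."""
    return ((x0 + x1) // 2, (y0 + y1) // 2)

def corner_coordinates(x0, y0, x1, y1):
    """First/last-pixel convention: the first (top-left) and last
    (bottom-right) pixel coordinates of the target region."""
    return (x0, y0), (x1, y1)
```

Either convention works as the "first position coordinate", as long as the same convention is applied consistently in every frame.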
Step 204: for each frame of non-reference image in the video file, a target pixel region in the non-reference image is identified.
The object to be kept stationary, as indicated by the target pixel region, is determined first. That object is then identified in each frame of non-reference image, and the pixel region it occupies in each non-reference frame is taken as that frame's target pixel region.
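The patent does not specify how the target is recognized in each frame. One simple possibility, shown purely as a sketch, is a brute-force sum-of-absolute-differences (SAD) search using the reference image's target region as a template; a practical implementation would use a proper object tracker or detector instead.

```python
import numpy as np

def find_region(frame, template):
    """Locate `template` in `frame` by exhaustive SAD search over all
    candidate positions; returns the (row, col) of the best-matching
    top-left corner. Illustrative only: O(H*W*h*w) and assumes
    grayscale arrays with no scale or rotation change."""
    fh, fw = frame.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            sad = np.abs(frame[r:r+th, c:c+tw].astype(int)
                         - template.astype(int)).sum()
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos
```

The returned position serves as the "second position coordinate" of the target pixel region in that non-reference frame.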
Step 205: and adjusting the target pixel area to the first position coordinate of the non-reference image, and adjusting the position of the associated pixel area in the non-reference image according to the position relation between the target pixel area and the associated pixel area except the target pixel area in the non-reference image.
Specifically, when the position of the associated pixel region in the non-reference image is adjusted according to the positional relationship between the target pixel region and the associated pixel region outside it, all pixels of the image are shifted as a whole according to the determined adjustment vector.
One preferred way to adjust the target pixel area to the first position coordinates of the non-reference image is as follows:
firstly, determining a second position coordinate of a target pixel area in a non-reference image;
secondly, determining an offset vector of the second position coordinate and the first position coordinate;
finally, the position of each pixel point in the non-reference image is adjusted according to the offset vector, so that the target pixel area is adjusted to the first position coordinate of the non-reference image.
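The three steps above can be sketched in a few lines. This is a minimal illustration, assuming grayscale NumPy frames and using wrap-around shifting (`np.roll`) for brevity; the patent does not say how exposed borders are handled, and a production version would pad or crop them instead.

```python
import numpy as np

def recenter_frame(frame, second_pos, first_pos):
    """Shift every pixel of `frame` so the target region moves from
    `second_pos` (its position in this frame) to `first_pos` (its
    position in the reference image). The same offset vector is
    applied to the target region and the associated (background)
    region alike, preserving their positional relationship."""
    dy = first_pos[0] - second_pos[0]   # offset vector, row component
    dx = first_pos[1] - second_pos[1]   # offset vector, column component
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))
```

Applied frame by frame, this pins the target region at the first position coordinate while the background drifts by the opposite motion.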
Processing the non-reference images of the video file frame by frame in this preferred manner keeps the moving target pixel region fixed in a specific area of the video, while the other background pixels are shifted to match that area, realizing the motion-relationship conversion between moving and stationary pixels.
Fig. 6 is a schematic diagram of the state of the photographed objects in two adjacent frames of the video. As shown in fig. 6, the person who was originally moving in the image to be processed is now stationary, while the lawn that was originally stationary is now in motion.
The adjusted frames are combined into a target video file, in which the relative motion relationship of the photographed objects has been converted.
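Assembling the adjusted frames could be sketched as below; stacking raw frames into one array is only a stand-in for what a real implementation would do, namely re-encode the frames through a video codec.

```python
import numpy as np

def assemble_target_video(adjusted_frames):
    """Combine per-frame arrays into a single (num_frames, H, W)
    volume representing the target video. Illustrative only: a real
    implementation would pass the frames to a video encoder rather
    than hold raw frames in memory."""
    return np.stack(adjusted_frames, axis=0)
```

The resulting array (or, in practice, the encoded file) is what gets sent back to the mobile terminal for playback.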
The embodiment above describes the mobile terminal converting the relative motion relationship of the photographed objects in the video file. In a specific implementation, a server may also perform the conversion, turn the video file into the target video, and send it to the mobile terminal. Specifically, after the user selects the target pixel region in the reference image, the mobile terminal sends the reference image with the target pixel region marked to the server; the server determines the first position coordinate of the target pixel region in the reference image, identifies the target pixel region in each frame of the video file, adjusts it to the first position coordinate, combines the adjusted frames into a target video file, and sends the target video file to the mobile terminal for playback.
The scheme of performing the relative motion relation conversion of the shooting object by the server can reduce the processing load of the mobile terminal.
In the video processing method provided by the embodiment of the invention, a reference image is selected from the video file, the target pixel region is determined in the reference image, and pixel positions in every frame are adjusted so that the target pixel region sits at the same position in each frame. The target pixel region therefore stays relatively stationary in the video, the user can clearly see the features of the objects inside it, and the viewing experience improves. In addition, the method converts the relative motion relationship of the photographed objects, which makes the video more interesting and can increase user engagement.
Example three
Referring to fig. 7, a block diagram of a video file processing apparatus according to a third embodiment of the present invention is shown.
The video file processing device of the embodiment of the invention comprises:
an obtaining module 301, configured to obtain position information of a target pixel region in a reference image of a video file;
an adjusting module 302, configured to adjust, according to the location information, a location of the target pixel region in a non-reference image of the video file, except for the reference image, and adjust, according to a location relationship between the target pixel region and an associated pixel region in the non-reference image, except for the target pixel region, a location of the associated pixel region in the non-reference image.
Preferably, the obtaining module 301 includes:
the display sub-module 3011 is configured to sequentially display thumbnails of frames of images in the video file to be processed;
the receiving sub-module 3012 is configured to receive a selection instruction of a user for a thumbnail, and determine an image corresponding to the selected thumbnail as a reference image;
the determining sub-module 3013 is configured to receive a selection operation of a user on a target pixel region in the reference image, and determine a first position coordinate of the target pixel region in the reference image.
Preferably, the adjusting module 302 includes:
an identifying submodule 3021 configured to identify, for each frame of non-reference image in the video file, the target pixel region in the non-reference image;
an adjusting submodule 3022, configured to adjust the target pixel region to the first position coordinate of the non-reference image, and adjust the position of the associated pixel region in the non-reference image according to a position relationship between the target pixel region and an associated pixel region in the non-reference image except the target pixel region.
Preferably, the adjusting submodule 3022 includes:
a coordinate determination unit 30221 configured to determine a second position coordinate of the target pixel region in the non-reference image;
a vector determination unit 30222 configured to determine an offset vector of the second position coordinate and the first position coordinate;
an adjusting unit 30223, configured to adjust each pixel point position in the non-reference image according to the offset vector, so as to adjust the target pixel area to the first position coordinate of the non-reference image.
Preferably, the apparatus further comprises:
the instruction receiving module 303 is configured to receive a trigger instruction of a user to the motion relation conversion button before the display submodule sequentially displays thumbnails of frames of images in the video file to be processed;
and the starting module 304 is configured to start a motion relationship conversion process in response to the trigger instruction.
The video file processing apparatus provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 6, and is not described herein again to avoid repetition.
According to the video file processing device provided by the embodiment of the invention, the positions of the target pixel area and the associated pixel area are adjusted, so that the position of the target pixel area in the video file is kept relatively static, the definition of the video file during playing can be improved, and a user can clearly watch the target pixel area.
Example four
Fig. 8 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention.
the mobile terminal 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and a power supply 811. Those skilled in the art will appreciate that the mobile terminal architecture illustrated in fig. 8 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 810, configured to obtain position information of a target pixel region in a reference image of a video file; and adjusting the position of the target pixel region in a non-reference image of the video file except the reference image according to the position information, and adjusting the position of the associated pixel region in the non-reference image according to the position relation between the target pixel region and the associated pixel region in the non-reference image except the target pixel region.
In the mobile terminal provided by the embodiment of the invention, a reference image is selected from the video file, the target pixel region is determined in the reference image, and pixel positions in every frame are adjusted so that the target pixel region sits at the same position in each frame. The target pixel region therefore stays relatively stationary, the user can clearly see the features of the object to be watched, and the viewing experience improves.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 801 may be used to receive and send signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and sends the received downlink data to the processor 810 for processing, and it also transmits uplink data to the base station. In general, the radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 801 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 802, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802 or stored in the memory 809 into an audio signal and output as sound. Also, the audio output unit 803 may also provide audio output related to a specific function performed by the mobile terminal 800 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
The input unit 804 is used for receiving an audio or video signal. The input unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042. The graphics processor 8041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 806. The image frames processed by the graphics processor 8041 may be stored in the memory 809 (or another storage medium) or transmitted via the radio frequency unit 801 or the network module 802. The microphone 8042 can receive sound and can process such sound into audio data. In the case of a phone-call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 801.
The mobile terminal 800 also includes at least one sensor 805, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 8061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 8061 and/or the backlight when the mobile terminal 800 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 805 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 806 is used to display information input by the user or information provided to the user. The Display unit 806 may include a Display panel 8061, and the Display panel 8061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 807 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 8071 (e.g., operations by a user on or near the touch panel 8071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 8071 may include two portions: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the touch point coordinates to the processor 810, receives a command from the processor 810, and executes the command. In addition, the touch panel 8071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 8071, the user input unit 807 can include other input devices 8072. In particular, the other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 8071 can be overlaid on the display panel 8061. When the touch panel 8071 detects a touch operation on or near it, the operation is transmitted to the processor 810 to determine the type of the touch event, and the processor 810 then provides a corresponding visual output on the display panel 8061 according to that type. Although in Fig. 8 the touch panel 8071 and the display panel 8061 are shown as two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 8071 and the display panel 8061 may be integrated to implement those functions, which is not limited herein.
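The division of labor described above (touch detection device → touch controller → processor, with a command returned for the display) can be sketched as a small event pipeline. All class names, the resolution constants, and the raw-signal format below are illustrative assumptions for the sketch, not part of the patent:

```python
from dataclasses import dataclass


@dataclass
class TouchPoint:
    """Touch-point coordinates produced by the touch controller."""
    x: int
    y: int


class Processor:
    """Determines the touch-event response; the returned string stands in
    for the visual output the processor would render on panel 8061."""

    def handle_touch(self, point: TouchPoint) -> str:
        return f"draw_highlight@({point.x},{point.y})"


class TouchController:
    """Converts a raw detection signal into touch-point coordinates and
    forwards them to the processor (the role described for panel 8071)."""

    # Hypothetical panel resolution used to scale normalized positions.
    WIDTH, HEIGHT = 1080, 1920

    def __init__(self, processor: Processor):
        self.processor = processor

    def on_raw_signal(self, raw: dict) -> str:
        # Assume the detection device reports normalized (0..1) positions.
        point = TouchPoint(x=int(raw["nx"] * self.WIDTH),
                           y=int(raw["ny"] * self.HEIGHT))
        return self.processor.handle_touch(point)
```

For example, a raw signal at the panel center would be converted to pixel coordinates and dispatched: `TouchController(Processor()).on_raw_signal({"nx": 0.5, "ny": 0.5})`.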
The interface unit 808 is an interface through which external devices are connected to the mobile terminal 800. For example, the external devices may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 808 may be used to receive input (for example, data or power) from an external device and transmit the received input to one or more elements within the mobile terminal 800, or may be used to transmit data between the mobile terminal 800 and an external device.
The memory 809 may be used to store software programs as well as various data. The memory 809 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile terminal (such as audio data and a phonebook), and the like. Further, the memory 809 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 810 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 809 and calling the data stored in the memory 809, thereby monitoring the mobile terminal as a whole. The processor 810 may include one or more processing units; preferably, the processor 810 may integrate an application processor, which mainly handles the operating system, user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 810.
The mobile terminal 800 may also include a power supply 811 (e.g., a battery) for powering the various components, and the power supply 811 may be logically coupled to the processor 810 via a power management system that may be used to manage charging, discharging, and power consumption.
In addition, the mobile terminal 800 includes some functional modules that are not shown, and thus, are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 810, a memory 809, and a computer program stored in the memory 809 and executable on the processor 810. When executed by the processor 810, the computer program implements each process of the above video file processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above video file processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk) and includes instructions for causing a terminal (such as a mobile phone, computer, server, air conditioner, or network device) to execute the methods of the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, it is not limited to those embodiments, which are illustrative rather than restrictive. It will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (6)

1. A method for processing a video file, the method comprising:
acquiring position information of a target pixel region in a reference image of a video file;
adjusting, according to the position information, the position of the target pixel region in a non-reference image of the video file other than the reference image, and adjusting the position of an associated pixel region in the non-reference image according to the positional relationship between the target pixel region and the associated pixel region, the associated pixel region being the part of the non-reference image other than the target pixel region;
wherein the acquiring of the position information of the target pixel region in the reference image of the video file comprises:
sequentially displaying thumbnails of all frames of the video file to be processed;
receiving a user's selection instruction on a thumbnail, and determining the image corresponding to the selected thumbnail as the reference image;
receiving a user's selection operation on a target pixel region in the reference image, and determining a first position coordinate of the target pixel region in the reference image;
wherein the adjusting, according to the position information, of the position of the target pixel region in the non-reference image of the video file other than the reference image, and the adjusting of the position of the associated pixel region in the non-reference image according to the positional relationship between the target pixel region and the associated pixel region other than the target pixel region, comprise:
for each frame of non-reference image in the video file, identifying the target pixel region in the non-reference image;
adjusting the target pixel region to the first position coordinate in the non-reference image, and adjusting the position of the associated pixel region in the non-reference image according to the positional relationship between the target pixel region and the associated pixel region other than the target pixel region in the non-reference image;
wherein the target pixel region is a contour region of an object to be kept stationary, or a region containing the object to be kept stationary.
2. The method according to claim 1, wherein the adjusting of the target pixel region to the first position coordinate in the non-reference image, and the adjusting of the position of the associated pixel region in the non-reference image according to the positional relationship between the target pixel region and the associated pixel region other than the target pixel region, comprise:
determining a second position coordinate of the target pixel region in the non-reference image;
determining an offset vector between the second position coordinate and the first position coordinate;
adjusting the position of each pixel in the non-reference image according to the offset vector, so as to move the target pixel region to the first position coordinate in the non-reference image.
3. A video file processing apparatus, comprising:
an acquisition module, configured to acquire position information of a target pixel region in a reference image of a video file;
an adjusting module, configured to adjust, according to the position information, the position of the target pixel region in a non-reference image of the video file other than the reference image, and to adjust the position of an associated pixel region in the non-reference image according to the positional relationship between the target pixel region and the associated pixel region, the associated pixel region being the part of the non-reference image other than the target pixel region;
the acquisition module includes:
a display submodule, configured to sequentially display thumbnails of all frames of the video file to be processed;
a receiving submodule, configured to receive a user's selection instruction on a thumbnail and determine the image corresponding to the selected thumbnail as the reference image;
a determining submodule, configured to receive a user's selection operation on a target pixel region in the reference image and determine a first position coordinate of the target pixel region in the reference image;
wherein the adjusting module comprises:
an identifying submodule, configured to identify the target pixel region in each frame of non-reference image in the video file;
an adjusting submodule, configured to adjust the target pixel region to the first position coordinate in the non-reference image, and to adjust the position of the associated pixel region in the non-reference image according to the positional relationship between the target pixel region and the associated pixel region other than the target pixel region;
wherein the target pixel region is a contour region of an object to be kept stationary, or a region containing the object to be kept stationary.
4. The apparatus of claim 3, wherein the adjustment submodule comprises:
a coordinate determination unit, configured to determine a second position coordinate of the target pixel region in the non-reference image;
a vector determination unit, configured to determine an offset vector between the second position coordinate and the first position coordinate;
an adjusting unit, configured to adjust the position of each pixel in the non-reference image according to the offset vector, so as to move the target pixel region to the first position coordinate in the non-reference image.
5. A mobile terminal, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the video file processing method according to any one of claims 1 to 2.
6. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the video file processing method according to any one of claims 1 to 2.
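Taken together, claims 1 and 2 describe an object-anchoring procedure: pick a reference frame, record the first position coordinate of the target pixel region there, then, for each other frame, find the region's second position coordinate, compute the offset vector between the two, and shift every pixel of that frame by the offset so the selected object appears stationary. A minimal sketch of that offset-vector adjustment, assuming grayscale NumPy frames, a naive sum-of-absolute-differences matcher to identify the target region, and a wrap-around shift (all function names here are illustrative assumptions, not the patented implementation):

```python
import numpy as np


def find_region(frame, template):
    """Locate `template` in `frame` by exhaustive sum-of-absolute-differences
    search; returns the (row, col) of the best-matching top-left corner.
    A practical implementation would use a faster matcher."""
    th, tw = template.shape
    fh, fw = frame.shape
    best, best_pos = None, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            sad = np.abs(frame[r:r + th, c:c + tw] - template).sum()
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos


def stabilize(frames, ref_idx, region):
    """Shift every non-reference frame so the target region stays at its
    reference-frame coordinates (the claim-2 offset-vector adjustment).
    `region` is (row, col, height, width) in the reference frame."""
    r0, c0, h, w = region
    template = frames[ref_idx][r0:r0 + h, c0:c0 + w]  # target pixel region
    out = []
    for i, frame in enumerate(frames):
        if i == ref_idx:
            out.append(frame.copy())
            continue
        r1, c1 = find_region(frame, template)  # second position coordinate
        dr, dc = r0 - r1, c0 - c1              # offset vector
        # Every pixel (target and associated regions alike) moves by the
        # same offset; np.roll wraps pixels around the border, whereas a
        # real implementation would pad or crop instead.
        out.append(np.roll(frame, shift=(dr, dc), axis=(0, 1)))
    return out
```

The brute-force matcher is O(H·W·h·w) per frame; in practice one would substitute a library matcher (for example OpenCV's template matching) and handle border pixels explicitly rather than wrapping them.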
CN201711241910.9A 2017-11-30 2017-11-30 Video file processing method and device and mobile terminal Active CN108109186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711241910.9A CN108109186B (en) 2017-11-30 2017-11-30 Video file processing method and device and mobile terminal


Publications (2)

Publication Number Publication Date
CN108109186A CN108109186A (en) 2018-06-01
CN108109186B true CN108109186B (en) 2021-06-11

Family

ID=62207933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711241910.9A Active CN108109186B (en) 2017-11-30 2017-11-30 Video file processing method and device and mobile terminal

Country Status (1)

Country Link
CN (1) CN108109186B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110784682B (en) * 2019-09-27 2021-11-09 深圳市海雀科技有限公司 Video processing method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763182A (en) * 2008-12-25 2010-06-30 索尼株式会社 Input apparatus, control apparatus, control system, electronic apparatus, and control method
CN101853145A (en) * 2009-03-31 2010-10-06 佳能株式会社 Image editing apparatus and image edit method
CN103020248A (en) * 2012-12-19 2013-04-03 青岛海信传媒网络技术有限公司 Video file thumbnail generating method and generating device
CN106303225A (en) * 2016-07-29 2017-01-04 努比亚技术有限公司 A kind of image processing method and electronic equipment
CN106339070A (en) * 2015-07-09 2017-01-18 腾讯科技(深圳)有限公司 Display control method and mobile terminal
CN106407027A (en) * 2016-09-23 2017-02-15 维沃移动通信有限公司 An information display method for a mobile terminal and a mobile terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6850571B2 (en) * 2001-04-23 2005-02-01 Webtv Networks, Inc. Systems and methods for MPEG subsample decoding
WO2010009152A1 (en) * 2008-07-15 2010-01-21 Immersion Corporation Systems and methods for shifting haptic feedback function between passive and active modes


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Smart video surveillance: exploring the concept of multiscale spatiotemporal tracking; A. Hampapur et al.; IEEE Signal Processing Magazine; 31 March 2005; Vol. 22, No. 2; pp. 38-51 *
Target detection technology based on Gaussian background modeling; Kang Xiaojing et al.; Chinese Journal of Liquid Crystals and Displays; 30 June 2010; Vol. 25, No. 3; pp. 454-459 *

Also Published As

Publication number Publication date
CN108109186A (en) 2018-06-01

Similar Documents

Publication Publication Date Title
CN107977144B (en) Screen capture processing method and mobile terminal
CN108989672B (en) Shooting method and mobile terminal
CN109361867B (en) Filter processing method and mobile terminal
CN108038825B (en) Image processing method and mobile terminal
CN107846583B (en) Image shadow compensation method and mobile terminal
CN108307106B (en) Image processing method and device and mobile terminal
US11165950B2 (en) Method and apparatus for shooting video, and storage medium
CN107730460B (en) Image processing method and mobile terminal
CN110062273B (en) Screenshot method and mobile terminal
CN110602386B (en) Video recording method and electronic equipment
CN109618218B (en) Video processing method and mobile terminal
CN107153500B (en) Method and equipment for realizing image display
CN110099218B (en) Interactive control method and device in shooting process and computer readable storage medium
CN109922294B (en) Video processing method and mobile terminal
CN110719527A (en) Video processing method, electronic equipment and mobile terminal
CN108718389B (en) Shooting mode selection method and mobile terminal
CN108174109B (en) Photographing method and mobile terminal
CN110650367A (en) Video processing method, electronic device, and medium
CN110650294A (en) Video shooting method, mobile terminal and readable storage medium
CN111182211B (en) Shooting method, image processing method and electronic equipment
CN110636225B (en) Photographing method and electronic equipment
CN110321449B (en) Picture display method and terminal
CN109639981B (en) Image shooting method and mobile terminal
CN109491964B (en) File sharing method and terminal
CN111050214A (en) Video playing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant