CN116193198A - Video processing method, device, electronic equipment, storage medium and product - Google Patents


Info

Publication number
CN116193198A
CN116193198A
Authority
CN
China
Prior art keywords
video
video frame
input
processed
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310196007.4A
Other languages
Chinese (zh)
Inventor
黄文强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202310196007.4A
Publication of CN116193198A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/454 Content or additional data filtering, e.g. blocking advertisements
    • H04N21/4545 Input to filtering algorithms, e.g. filtering a region of the image
    • H04N21/45455 Input to filtering algorithms, e.g. filtering a region of the image applied to a region of the image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Picture Signal Circuits (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a video processing method, a video processing apparatus, an electronic device, a storage medium and a product, and belongs to the technical field of video processing. The video processing method comprises the following steps: receiving a first input on a target object in a video to be processed; in response to the first input, determining that the target object is in a locked state; determining a video frame to be processed from the video to be processed, the video frame to be processed being a video frame in which the target object is in the locked state; and processing the target object in the video frame to be processed to obtain a target video.

Description

Video processing method, device, electronic equipment, storage medium and product
Technical Field
The application belongs to the technical field of video processing, and particularly relates to a video processing method, a video processing device, electronic equipment, a storage medium and a product.
Background
In a video playback scenario, it may be necessary to process a specified object in the video in order to meet certain requirements, for example, to hide the specified object or to cover it with a mosaic.
Currently, specified objects in a video are processed manually in post-production. For example, when a specified object A in the video needs to be covered with a mosaic, each video frame has to be searched manually, and the mosaic is applied whenever a video frame contains the specified object A. This results in low video processing efficiency.
Disclosure of Invention
The embodiment of the application aims to provide a video processing method, a device, electronic equipment, a storage medium and a product, which can effectively solve the problem of low efficiency when processing a specified object in a video.
In a first aspect, an embodiment of the present application provides a video processing method, including:
receiving a first input of a target object in a video to be processed;
in response to the first input, determining that the target object is in a locked state;
determining a video frame to be processed from the video to be processed, the video frame to be processed being a video frame in which the target object is in the locked state;
and processing the target object in the video frame to be processed to obtain a target video.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including: the device comprises a receiving module, a determining module and a processing module;
the receiving module is configured to receive a first input on a target object in a video to be processed;
the determining module is configured to determine, in response to the first input, that the target object is in a locked state;
the determining module is further configured to determine a video frame to be processed from the video to be processed, the video frame to be processed being a video frame in which the target object is in the locked state;
and the processing module is configured to process the target object in the video frame to be processed to obtain a target video.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video processing method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the video processing method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, instructions in which, when executed by a processor of an electronic device, cause the electronic device to perform the steps of the video processing method according to the first aspect.
In the embodiments of the present application, in response to a first input on a target object in a video to be processed, the target object is determined to be in a locked state; a video frame to be processed, namely a video frame in which the target object is in the locked state, is determined from the video to be processed; and the target object in the video frame to be processed is processed to obtain a target video. That is, in the embodiments of the present application, the target object only needs to be set to the locked state through the first input, after which the video frames to be processed can be determined from the video to be processed and the target object in them processed. The user does not need to search each video frame of the video to be processed one by one, which improves video processing efficiency.
Drawings
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of locking a target object according to an embodiment of the present application;
Fig. 3 is a schematic diagram of an outline of a target object according to an embodiment of the present application;
Fig. 4 is a schematic diagram of video frame identifiers provided in an embodiment of the present application;
Fig. 5 is a schematic diagram of separately processing a video frame identifier according to an embodiment of the present application;
Fig. 6 is a schematic diagram of batch processing video frame identifiers according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a processing effect provided in an embodiment of the present application;
Fig. 8 is a schematic diagram of an encryption manner according to an embodiment of the present application;
Fig. 9 is a schematic diagram of displaying a first password according to an embodiment of the present application;
Fig. 10 is a schematic diagram of displaying a target object before decryption according to an embodiment of the present application;
Fig. 11 is a schematic diagram of a target object after decryption according to an embodiment of the present application;
Fig. 12 is a schematic display diagram of a video directory according to an embodiment of the present application;
Fig. 13 is a schematic display diagram of target objects that are partly unlocked and partly still locked according to an embodiment of the present application;
Fig. 14 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
Fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 16 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. The objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
In video scenarios, special processing is sometimes required for certain people or objects in a video, for example, hiding them or covering them with a mosaic.
At present, this is mainly done manually: the user has to search each video frame one by one and, after finding a person or object that requires special processing, process it frame by frame. This is cumbersome and inefficient.
Therefore, the embodiment of the application provides a video processing method, a device, electronic equipment, a storage medium and a product, which can effectively solve the problem of low efficiency when processing a specified object in a video.
The video processing method provided in the embodiment of the present application is described in detail below with reference to specific embodiments. Fig. 1 is a flowchart of a video processing method provided in an embodiment of the present application, where the method may be applied to an electronic device, and the electronic device may be a device with functions of data processing and display, for example, may include a mobile phone, a notebook, a desktop, and the like.
As shown in fig. 1, the video processing method may include the steps of:
s110, receiving a first input of a target object in a video to be processed.
S120, in response to the first input, determining that the target object is in a locked state.
S130, determining a video frame to be processed from the video to be processed.
The video frame to be processed is a video frame in which the target object is in the locked state.
S140, processing the target object in the video frame to be processed to obtain a target video.
In response to a first input on a target object in a video to be processed, the target object is determined to be in a locked state; a video frame to be processed, namely a video frame in which the target object is in the locked state, is determined from the video to be processed; and the target object in the video frame to be processed is processed to obtain a target video. That is, in the embodiments of the present application, the target object only needs to be set to the locked state through the first input, after which the video frames to be processed can be determined from the video to be processed and the target object in them processed. The user does not need to search each video frame of the video to be processed one by one, which improves video processing efficiency.
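The flow of S110 to S140 can be sketched in a few lines of Python. This is an illustrative model only (a frame is a set of object labels, and "processing" just tags the target's label); none of the names below come from the patent.

```python
# Illustrative model of S110-S140: a frame is a set of object labels, and
# "processing" replaces the locked target's label with a tagged version.
# All names here are hypothetical, not from the patent.

def process_video(frames, target, effect):
    """Return processed copies of frames; only frames containing target change."""
    # S110/S120: the first input determines the target object as locked.
    locked = target
    # S130: frames in which the locked target appears are the frames to process.
    to_process = [i for i, frame in enumerate(frames) if locked in frame]
    result = [set(frame) for frame in frames]
    # S140: process the target object only in those frames.
    for i in to_process:
        result[i].discard(locked)
        result[i].add(f"{locked}[{effect}]")
    return result

frames = [{"person"}, {"butterfly"}, {"person", "butterfly"}]
out = process_video(frames, "person", "mosaic")
# Frames 0 and 2 are processed; frame 1 is left untouched.
```

Note that the user supplies only the lock input; the scan over frames is automatic, which is the efficiency gain the paragraph above describes.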
The following describes the above steps in detail, as follows:
in S110, the video to be processed may be a video in a recording scenario, that is, a video being recorded, or a video in a playing scenario. In the playing scenario, the video may be, for example, a pre-recorded video, a locally cached video, or a video acquired from the Internet.
The first input may be an input for determining the target object, and the form of the first input is not limited in this application, and may be, for example, a click, a double click, a drag lock mark, or the like.
The target object is an object contained in the video to be processed, and may comprise one or more objects; that is, one or more objects in the video to be processed may be determined as target objects.
Taking the video to be processed as a video in the playing scenario as an example, illustratively, before S110, the video processing method may further include the following step:
displaying a video playing interface of the video to be processed;
accordingly, the step S110 may include the steps of:
a first input is received to a target object in a video playback interface.
The video playing interface is an interface for playing the video to be processed. For example, when the user clicks a play button of the video to be processed, the video playing interface of the video to be processed may be displayed and the video to be processed may be played.
In the case of playing a video to be processed, a first input may be performed to a target object in a video playing interface.
Taking the video to be processed as a video in the recording scenario as an example, illustratively, before S110, the video processing method may further include the following step:
displaying a video shooting preview interface for shooting the video to be processed;
accordingly, the step S110 may include the steps of:
a first input is received of a target object in a video capture preview interface.
The video shooting preview interface is a recording interface for recording the video to be processed, and in the case of recording the video, the first input can be performed on the target object in the video shooting preview interface.
The video processing method provided by the embodiments of the present application can thus be applied to scenarios such as video playing or video recording, meeting the video processing requirements of different scenarios.
In S120, determining that the target object is in a locked state means locking the target object. In the embodiments of the present application, when the first input on the target object is received, the target object can be set to the locked state, which facilitates subsequent video processing.
Taking the first input as a drag input to the lock mark as an example, the lock mark may be dragged to an area associated with the target object for the purpose of locking the target object. The region associated with the target object may be, for example, a region on the target object, or a region located outside the target object in the video frame to be processed and associated with the target object.
Illustratively, referring to FIG. 2, when a drag input (i.e., a first input) on the lock tab 210 is received, the lock tab 210 may be dragged onto the person shown in FIG. 2 in response to the drag input, placing the person in a locked state.
For example, in the case where there are a plurality of target objects, each target object may be locked one by one, and the locking manner of each target object may be the same or different.
In some embodiments, after the target object is locked, the contour points of the target object may be identified by a recognition technique, and the contour line of the target object may be drawn based on the identified contour points, so that, when the target object is subsequently processed, only the area within the contour line is processed. Taking the target object shown in fig. 2 as an example, the drawn contour of the target object may be as shown in fig. 3.
The embodiments of the present application do not limit the specific recognition technique; any technique capable of recognizing the contours of people and objects can be applied to the embodiments of the present application.
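A minimal sketch of processing only the area within the recognized contour, assuming the recognition step has already produced a binary mask for the target (the mask here is hand-written; in practice it would come from whatever contour-recognition technique the embodiment leaves unspecified):

```python
# Hide a target by blanking exactly the pixels inside its mask. The mask is
# hand-written here; a real implementation would derive it from recognized
# contour points.

def hide_masked(frame, mask, fill=0):
    """frame, mask: equal-shape 2D lists; returns a processed copy of frame."""
    return [
        [fill if mask[y][x] else frame[y][x] for x in range(len(frame[y]))]
        for y in range(len(frame))
    ]

frame = [[1, 2], [3, 4]]
mask = [[0, 1], [1, 0]]  # the target occupies two pixels
assert hide_masked(frame, mask) == [[1, 0], [0, 4]]
```

The same mask-driven loop works for any of the later processing modes (mosaic, map, and so on) by replacing the fill value with the corresponding per-pixel effect.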
In S130, the video frame to be processed is a video frame of the video to be processed in which the target object is in the locked state. Once the target object is determined to be in the locked state, each video frame in which the target object is in the locked state can be determined as a video frame to be processed, which facilitates subsequent processing.
In S140, the processing manner of the target object is not limited, and may include, but is not limited to, hiding, replacing, mosaicing, mapping, and the like.
The processing modes of different target objects may be the same or different. The same target object may also correspond to different processing modes in different time periods; for example, a mapping mode may be adopted in time period 1 and a mosaic mode in time period 2, so that personalized requirements of users can be met.
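The per-time-period selection of a processing mode can be sketched as a lookup over (start, end, strategy) spans; the span format and function name are illustrative assumptions, not from the patent:

```python
# Look up which processing strategy applies to a frame at timestamp t.
# Spans are (start, end, strategy) with end exclusive; format assumed.

def strategy_at(spans, t):
    for start, end, strategy in spans:
        if start <= t < end:
            return strategy
    return None  # outside every span: the frame is left unprocessed

# Time period 1 uses mapping, time period 2 uses mosaic (per the text above).
spans = [(0, 10, "map"), (10, 20, "mosaic")]
```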
In the embodiments of the present application, the target object only needs to be set to the locked state through the first input; the video frames to be processed can then be determined from the video to be processed and processed, without searching each video frame one by one, which simplifies the user's operation and improves video processing efficiency.
For convenience of user operation, the video processing method may further include, illustratively, the steps of:
displaying video frame identifiers corresponding to at least one target video frame respectively; at least one target video frame is determined from the video frames to be processed, and each target video frame is associated with at least one video frame to be processed;
A second input of a first video frame identification of the at least one video frame identification is received.
Accordingly, the step S140 may include the steps of:
in response to the second input, processing a target object in a first associated video frame to obtain the target video; the first associated video frame includes the target video frame corresponding to the first video frame identifier and the to-be-processed video frames associated with that target video frame.
The target video frame is at least one video frame in the video frames with the target object in the locked state, and the video frame identifier is used for identifying each target video frame, and illustratively, the target object in the target video frame can be used as a video frame identifier to identify the target video frame.
For example, referring to fig. 4, take two target objects ("person" and "butterfly") as an example: for the target object "person", four video frame identifiers are displayed, corresponding to four target video frames respectively; the target object "butterfly" is handled similarly.
Illustratively, after S130, a video frame identifier corresponding to each of the at least one target video frame may be displayed.
The first video frame identifier may be one or more of the at least one video frame identifier and the second input may be a single click, double click, long press, or touch operation on the first video frame identifier.
It should be appreciated that a video frame identifier shown in fig. 4 may be associated with a plurality of the video frames to be processed. For example, the first "person" of fig. 4 may appear in several video frames to be processed; therefore, when a second input on the first video frame identifier is received, both the target video frame corresponding to the first video frame identifier and the other to-be-processed video frames containing the "person" may be processed.
For example, for the first "person" of fig. 4, the target video frame corresponding to the "person" is the fifth frame of the video frame to be processed, where the sixth frame and the seventh frame also include the "person", and then, according to the above embodiment, the target object in the fifth frame, the sixth frame, and the seventh frame may be processed.
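The fifth/sixth/seventh-frame example above can be sketched as follows; the contiguous-run model of association (the target frame plus the following frames that still contain the object) is an assumption for illustration:

```python
# Given a target video frame (labelled by an identifier) at key_index,
# collect it plus the contiguous following frames that still contain the
# target object; these together form the "first associated video frame".

def associated_frames(frames, target, key_index):
    out = [key_index]
    i = key_index + 1
    while i < len(frames) and target in frames[i]:
        out.append(i)
        i += 1
    return out

# Frames 5-7 (indices 4-6) contain the "person", matching the example above.
frames = [{"sky"}] * 4 + [{"person"}] * 3 + [{"sky"}]
```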
According to the embodiment of the application, based on the video frame identification of the displayed partial video frames, all video frames corresponding to and associated with the video frame identification in the video frames to be processed can be processed, so that batch processing is realized, one-by-one searching and operation are not needed, and the processing efficiency is improved.
In some embodiments, after "displaying the video frame identifications corresponding to the at least one target video frame, respectively", the video processing method may further include the steps of:
receiving a third input for a second video frame identification of the at least one video frame identification;
in response to the third input, the display of the second video frame identification is canceled.
The second video frame identification may be the same as the first video frame identification or may be different from the first video frame identification. The third input is for canceling display of the second video frame identification, and may be, for example, a single click, double click, long press, or touch operation on the second video frame identification.
For example, cancelling the display of the second video frame identifier may be deleting the second video frame identifier; that is, when a third input on a second video frame identifier among the at least one video frame identifier is received, the second video frame identifier corresponding to the third input may be deleted.
For example, when the second video frame identifier is determined, an adjacent video frame identifier adjacent to it among the at least one video frame identifier may also be determined. When the second video frame identifier is deleted, the video frames between the second video frame and the adjacent video frame are kept unchanged, that is, they are not processed and remain original video frames. Here, the second video frame is the video frame corresponding to the second video frame identifier, and the adjacent video frame is the video frame corresponding to the adjacent video frame identifier.
Therefore, the processing operation of each video frame can be canceled in batches, the original video frames are obtained, and the cancellation is not needed one by one, so that the processing efficiency of the video is improved.
For example, referring to fig. 4, the "person" corresponds to four video frame identifiers. Assuming that the second video frame identifier is the second one from left to right, when it is deleted, the processing operations on the video frames between the second video frame (the video frame corresponding to the second video frame identifier) and the adjacent video frame (the video frame corresponding to the third video frame identifier) may be cancelled in batch.
According to the embodiments of the present application, video frame identifiers can be deleted individually as required, and the video frames between the deleted video frame and the video frame corresponding to the adjacent identifier remain original video frames.
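Batch cancellation can be sketched as restoring the original frames over the index range between the deleted identifier's frame and the adjacent identifier's frame; index-based frame handling is an illustrative assumption:

```python
# Deleting a video frame identifier restores the original frames between
# the frame it labels and the frame labelled by the adjacent identifier.

def cancel_between(processed, originals, start, end):
    """Restore originals[start:end] (end exclusive) into a copy of processed."""
    out = list(processed)
    out[start:end] = originals[start:end]
    return out

originals = ["f0", "f1", "f2", "f3"]
processed = ["f0*", "f1*", "f2*", "f3*"]  # "*" marks a processed frame
```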
In some embodiments, the processing the target object in the first associated video frame to obtain the target video in response to the second input may include the following steps:
in response to the second input, displaying video processing options;
receiving a fourth input to the first one of the video processing options;
and responding to the fourth input, and processing the target object in the first associated video frame according to the video processing strategy corresponding to the first option to obtain the target video.
The video processing options may include, but are not limited to, option one, option two, option three, option four, and option five, and the different options may correspond to different video processing strategies, for example, the video processing strategy corresponding to option one is "mosaic", the video processing strategy corresponding to option two is "map", the video processing strategy corresponding to option three is "skin change", the video processing strategy corresponding to option four is "hide", and the video processing strategy corresponding to option five is "other".
The first option may be any of the video processing options described above. The fourth input may be a click or touch operation of the first option.
And when receiving a fourth input of the first option, processing a target object in the first associated video frame according to a video processing strategy corresponding to the first option to obtain a target video.
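The mapping from displayed options to video processing policies can be modeled as a simple dispatch table; the five option and policy names follow the text above, while `apply_option` is a hypothetical helper:

```python
# Dispatch table from displayed options to video processing policies.
STRATEGIES = {
    "option one": "mosaic",
    "option two": "map",
    "option three": "skin change",
    "option four": "hide",
    "option five": "other",
}

def apply_option(option, target):
    """Return a label describing which policy was applied to the target."""
    strategy = STRATEGIES[option]  # the fourth input selects the option
    return f"{target}:{strategy}"
```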
With reference to fig. 5, in an exemplary case where a second input on the second "butterfly" video frame identifier is received, the five options corresponding to the video processing policies mosaic, map, skin change, hide, and other may be displayed.
For different video frame identifications, different video processing strategies can be adopted, so that the personalized requirements of users can be met.
In some embodiments, when the video frame identifiers corresponding to the at least one target video frame are displayed, processing controls may also be displayed. Taking the video frame identifiers shown in fig. 4 as an example and referring to fig. 6, a processing control 610 may be displayed for the video frame identifiers "person", and a processing control 620 may be displayed for the video frame identifiers "butterfly".
Each processing control may correspond to a plurality of processing options, and each processing option may correspond to a different video processing policy. For example, when an input on the processing control 620 is received, the five processing options corresponding to the policies mosaic, map, skin change, hide, and other may be displayed; when an input on one of the processing options is received, the target object in the video frames associated with each video frame identifier corresponding to that processing control may be processed in batch.
For example, in the case of clicking on the corresponding map of the process control 620, batch mapping may be performed on "butterflies" in each of the video frames associated with the four video frame identifications "butterflies".
In some embodiments, after the video frames corresponding to the video frame identifiers are processed, the processing effect may be displayed, as shown in fig. 7. For example, when the map option corresponding to the processing control 610 is clicked, a map is displayed on each "person" in the four video frame identifiers, and the processing control 610 also displays the corresponding video processing policy "map"; when the mosaic option corresponding to the processing control 620 is clicked, the "butterfly" in each of the four video frame identifiers is covered with a mosaic, and the processing control 620 also displays the corresponding video processing policy "mosaic".
According to the embodiment of the application, batch processing can be realized on the video frames through simple control input, video searching and processing one by one are not needed, so that manpower is saved, and the processing efficiency of the video is improved.
In some embodiments, the processed video frame may be encrypted, so that some private information is better protected.
Based on this, after "displaying the video frame identifications corresponding to the at least one target video frame, respectively", the video processing method may further include the steps of:
receiving a fifth input of a third video frame identification of the at least one video frame identification;
Adding encryption information for the second associated video frame in response to the fifth input; the second associated video frame includes a target video frame corresponding to the third video frame identification and a pending video frame associated with the target video frame corresponding to the third video frame identification.
The fifth input is used for adding encryption information to the second associated video frame. The fifth input may be, for example, a click or touch operation on the third video frame identifier, where the third video frame identifier is one or more of the at least one video frame identifier, so that the user's personalized encryption requirements can be met.
In the case of receiving a fifth input of the third video frame identification, encryption information may be added for the target video frame corresponding to the third video frame identification and the to-be-processed video frame associated with the target video frame.
The encryption manner is not limited. For example, a traditional password encryption manner may be adopted; encryption may also be performed based on the number of video frame identifiers, for example, using that number as the password; or encryption may be based on the video processing policies adopted for each video frame identifier.
Taking encryption based on the video processing policies adopted for each video frame identifier as an example: in the case where the fifth input to the third video frame identifier is received, a first keyword may be obtained based on the serial numbers of the video processing policies adopted for the video frame identifiers and the number of times each policy is used, and the first keyword is used as a password to encrypt the target video frame corresponding to the third video frame identifier and the video frames to be processed associated with that target video frame.
By way of example, assume that three video processing policies are currently involved: (1) mosaic, (2) map, and (3) skin change; the third video frame identifier is "butterfly", and the four "butterflies" adopt two video processing policies in total, namely "mosaic" and "skin change". The value of the first keyword X can then be determined as: 1 (serial number of policy (1)) × 2 (number of uses of policy (1)) + 3 (serial number of policy (3)) × 1 (number of uses of policy (3)) = 5.
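The first-keyword computation above can be sketched as a sum over the policies actually used of (policy serial number × usage count). The function name and the mapping representation are illustrative assumptions.

```python
# Hypothetical sketch of the first-keyword computation described above.
def first_keyword(policy_usage):
    """policy_usage maps a policy serial number to its usage count;
    the keyword is the sum of (serial number x usage count)."""
    return sum(number * count for number, count in policy_usage.items())

# Example from the text: policy (1) used twice, policy (3) used once.
x = first_keyword({1: 2, 3: 1})  # 1*2 + 3*1 = 5
```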
Different video frame identifications can adopt the same encryption mode or different encryption modes.
When the video frame identifiers use the same encryption manner, to simplify the user operation, a confirmation control 410 may also be displayed while the video frame identifiers corresponding to the at least one target video frame are displayed, as shown in fig. 4. In the case where an input to the confirmation control 410 is received, the encryption manners shown in fig. 8 may be displayed; fig. 8 includes four encryption manners as an example.
Taking encryption manner 1, which is encryption based on the video processing policies adopted for each video frame identifier, as an example: in the case where an input to encryption manner 1 is received, the first password shown in fig. 9 may be displayed. The first password includes the first keyword; for the determination of the first keyword, reference may be made to the above-described embodiments.
Each list may correspond to a first password, for example: list 1 - password: 1-X-10 (the number of video frame identifiers corresponding to the same target object); list 2 - password: 2-X-6 (the number of video frame identifiers corresponding to the same target object); batch encryption - password: all-X-16 (the total number of video frame identifiers corresponding to the respective target objects).
List 1 may represent encrypting the video frames corresponding to one video frame identifier, such as "person"; list 2 may represent encrypting the video frames corresponding to another video frame identifier, such as "butterfly"; and batch encryption represents encrypting the video frames corresponding to the respective video frame identifiers, such as "person" + "butterfly". Batch encryption is thus realized without repeating the operation for each video frame identifier, which improves the processing efficiency of the video.
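The list/keyword/count password format described above can be sketched as follows. The "-" separator, field order, and function name are assumptions made for illustration; the text only shows the pattern "list-X-count" with X as the first keyword.

```python
# Hypothetical sketch of the first-password format "list-X-count".
def build_password(list_id, keyword, identifier_count):
    """Compose a first password from a list id, the first keyword, and
    the number of video frame identifiers covered by the list."""
    return f"{list_id}-{keyword}-{identifier_count}"

# Using the example keyword value X = 5 from the text.
p1 = build_password("1", 5, 10)      # list 1
p2 = build_password("2", 5, 6)       # list 2
pall = build_password("all", 5, 16)  # batch encryption
```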
In the embodiment of the application, the value of the first keyword is determined based on the video processing policies adopted for each video frame identifier, the first password is obtained, and the corresponding video frames are encrypted based on the first password, so that privacy information leakage can be avoided during video transmission and the security of information transmission is improved.
In the case of encryption, the target video may be decrypted by decryption information that matches the encryption information, resulting in the original video, i.e., the video to be processed. Based on this, in some embodiments, the video processing method may further include the steps of:
Receiving a sixth input under the condition of playing the target video;
and playing the video to be processed under the condition that the decryption information corresponding to the sixth input is matched with the encryption information.
Taking the mosaic processing of "person" as an example: in the case where it has not been decrypted, the "person" may be displayed in the form shown in fig. 10, with an encryption flag 101 displayed in the lower right corner to indicate that the "person" is in an encrypted state.
The sixth input may be an operation of inputting decryption information. Illustratively, by performing the sixth input on the encryption flag 101, corresponding decryption information may be input. In the case where the decryption information matches the encryption information, as shown in fig. 11, the video to be processed may be played, and the encryption flag 101 is in a decrypted state. In the case where the decryption information does not match the encryption information, the user may be prompted that decryption has failed, and the target video, i.e., the processed video, is displayed.
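The playback selection on decryption can be sketched as follows; the function name and the string return values are illustrative assumptions, standing in for "play the original video" and "keep showing the processed target video".

```python
# Hypothetical sketch of playback selection on decryption: if the entered
# decryption information matches the stored encryption information, the
# original (to-be-processed) video is played; otherwise the processed
# target video stays on screen and the user is told decryption failed.
def select_playback(entered, encryption_info):
    if entered == encryption_info:
        return "original"  # decryption succeeded: play the video to be processed
    return "target"        # decryption failed: display the processed target video
```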
In the case where some video frames are encrypted, when the target video is played, decryption can be realized by clicking the target object corresponding to an encrypted video frame within a preset time period before or after that video frame, in sequence. The length of the preset time period can be set according to actual needs, for example, to 1 s.
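The timed-click check above can be sketched as a simple window test; the function name and default window value are assumptions (the text only gives 1 s as an example).

```python
# Hypothetical sketch of the timed-click decryption: a click on the target
# object counts only if it falls within a preset window (e.g. 1 s) before
# or after the encrypted frame's timestamp.
def click_unlocks(click_time, frame_time, window=1.0):
    """All times in seconds; True if the click lands inside the window."""
    return abs(click_time - frame_time) <= window
```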
In some embodiments, if the target object locking function is enabled, a lock mark may be displayed in the video directory for locked videos. As shown in fig. 12, if video 1 has some objects locked, a lock mark may be displayed in the area associated with video 1, so that the user can conveniently see which videos are locked and which are not.
It should be appreciated that for a video that has not been played, a video frame is typically displayed as the video cover before the play button is clicked. In an embodiment of the present application, a video frame identifier may be selected from the displayed video frame identifiers, and the video frame corresponding to that identifier used as the video cover of the target video. Based on this, in some embodiments, after "displaying the video frame identifiers corresponding to the at least one target video frame, respectively", the video processing method may further include the following steps:
receiving a seventh input for a fourth video frame identification of the at least one video frame identification;
in response to the seventh input, the fourth video frame identifies the corresponding target video frame as a video cover of the target video.
The seventh input may be an operation such as clicking or touching the fourth video frame identifier, where the fourth video frame identifier may be any one of the video frame identifiers shown in fig. 4. In general, the video frame identifier of the frontmost video frame in the target video may be used as the fourth video frame identifier, and the target video frame corresponding to the fourth video frame identifier used as the video cover of the target video. Of course, the video frame to be processed corresponding to the video frame identifier may also be used as the video cover of the target video. That is, the cover may be either a processed video frame or a video frame to be processed.
Compared with the traditional scheme of searching each video frame one by one to process the target object, the target video frame can be quickly selected from the displayed video frame identifiers to serve as the video cover. This is simple and convenient, does not require searching the video frames one by one, and can greatly save the user's time, particularly when the video cover needs to be replaced.
In an exemplary embodiment, when the fourth video frame identifier is selected, the target video frame corresponding to it may be indicated as the video cover by adding a border, color filling, or the like, so that the user can easily identify it.
For the video cover, the video processing procedure described above may be performed, or the original image may be maintained, i.e., no processing is performed. The method of selecting the processed target video frame as the video cover is not limited in the embodiment of the present application.
Illustratively, after the fourth video frame identifier is selected, the processed target video frame may be indicated as the video cover by clicking or long pressing, etc., and the selected fourth video frame identifier may be smeared to indicate that the original image is used as the video cover.
The video cover of the target video may also be set by a cover setting policy corresponding to the cover setting option, for example. Based on this, in some embodiments, the "responding to the seventh input, regarding the fourth video frame identifier corresponding to the target video frame as the video cover of the target video" may include the following steps:
Displaying a cover setting option in response to the seventh input;
receiving an eighth input to a second one of the cover setting options;
and responding to the eighth input, and taking the target video frame corresponding to the fourth video frame identification as the video cover of the target video according to the cover setting strategy corresponding to the second option.
For example, referring to fig. 4, in the case where the seventh input to the fourth video frame identifier is received, a cover setting option 420 may be displayed. The cover setting option 420 may include a first option and a second option, and different options may correspond to different cover setting policies. For example, the cover setting policy corresponding to the first option may be "yes", that is, the original video frame corresponding to the fourth video frame identifier (the video frame to be processed) is used as the video cover of the target video; the cover setting policy corresponding to the second option may be "no", that is, the processed video frame corresponding to the fourth video frame identifier is used as the video cover of the target video.
The eighth input may be an operation such as clicking or touching the second option, and in the case that the eighth input to the second option is received, the processed video frame corresponding to the fourth video frame identifier may be used as the video cover of the target video.
Illustratively, if no input to the cover setting option 420 is received within a set time, the unprocessed video frame, i.e., the original image, is used by default as the video cover of the target video.
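The cover choice just described can be sketched as follows; the function name, the option values, and the file-name strings are hypothetical, and the timeout is modelled simply as "no option was selected".

```python
# Hypothetical sketch of the cover-setting decision: if the user selects
# no option within the set time (option is None), the original frame is
# the cover by default; otherwise the chosen policy decides.
def choose_cover(original, processed, option=None):
    """option: None (timed out), "original", or "processed"."""
    if option == "processed":
        return processed
    return original  # default case and the "original" option

cover = choose_cover("frame_raw.png", "frame_mosaic.png")
```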
Compared with the traditional scheme of searching each video frame one by one to process the target object, the embodiment of the application can flexibly handle the video cover through the cover setting options: the video cover may be processed or left unprocessed, achieving reversibility and meeting the user's personalized requirements.
In some embodiments, the target object in the video frame to be processed may be locked as needed to facilitate subsequent processing of the locked target object, or the locked state of the target object may be cancelled as needed. Based on this, in some embodiments, after S120, the video processing method may further include the following steps:
receiving a ninth input of a target object in the video to be processed;
in response to the ninth input, the locked state of the target object is canceled.
The ninth input is used to cancel the locked state of the target object; specifically, the ninth input may be a single click, double click, long press, or touch operation on the target object. In the case where the ninth input to the target object is received, the locked state of the target object may be cancelled, and the processing of the target object may further be cancelled, thus realizing reversibility of the operation.
For example, in the case where the target object is locked via a lock mark, the ninth input may also act on the lock mark corresponding to the target object. That is, in the case where the ninth input to the lock mark is received, the locked state of the target object can be cancelled, and the lock mark may continue to be displayed so that the user can conveniently relock the target object.
It should be understood that the same video frame to be processed may contain multiple target objects, and in practical application, the locking state of some target objects may be cancelled, while the locking state of another target object is unchanged.
For example, referring to fig. 13, the video frame contains both "person" and "butterfly". Assuming that both target objects are in the locked state, when the lock mark of "person" is clicked, the locked state of "person" may be cancelled while "butterfly" remains locked.
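The per-object lock state can be sketched with a simple mapping; the dictionary representation and `cancel_lock` helper are illustrative assumptions.

```python
# Hypothetical sketch of per-object lock state: cancelling one object's
# lock leaves the others unchanged, and its mark stays visible so the
# object can be relocked later.
locks = {"person": True, "butterfly": True}

def cancel_lock(locks, target):
    locks[target] = False  # unlocked, but the lock mark is still displayed
    return locks

cancel_lock(locks, "person")  # "butterfly" remains locked
```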
After unlocking, the lock mark is in the unlocked state and continues to be displayed, so that the user can conveniently lock the target object again.
It should be noted that, in the video processing method provided in the embodiment of the present application, the execution subject may be a video processing apparatus, or a processing module in the video processing apparatus for executing the video processing method. In the embodiment of the present application, a video processing device is taken as an example to execute a video processing method by using the video processing device, and the video processing device provided in the embodiment of the present application is described.
Fig. 14 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application.
As shown in fig. 14, the video processing apparatus 1400 may include: a receiving module 1401, a determining module 1402 and a processing module 1403;
a receiving module 1401 for receiving a first input of a target object in a video to be processed;
a determining module 1402 for determining a target object as a locked state in response to a first input;
a determining module 1402, further configured to determine a video frame to be processed from the video to be processed; the video frame to be processed is a video frame in which the target object is in a locking state;
a processing module 1403 is configured to process a target object in the video frame to be processed to obtain a target video.
In the embodiment of the application, in response to a first input to a target object in a video to be processed, the target object is determined to be in a locked state; a video frame to be processed, i.e., a video frame in which the target object is in the locked state, is determined from the video to be processed; and the target object in the video frame to be processed is processed to obtain the target video. That is, the target object only needs to be determined to be in the locked state through the first input; the video frames to be processed can then be determined from the video to be processed and the target object in them processed, so the user does not need to search each video frame of the video to be processed one by one, and the video processing efficiency is improved.
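The determine-then-process flow of the apparatus can be sketched end to end as follows; the frame representation (a dict with an `"objects"` list) and the function names are hypothetical, standing in for the determining module and processing module.

```python
# Hypothetical end-to-end sketch: after the target object is locked, the
# frames in which it appears are the frames to be processed; the target
# object in each of them is processed to obtain the target video.
def process_video(frames, target, policy):
    to_process = [f for f in frames if target in f["objects"]]    # determining step
    return [policy(f) if f in to_process else f for f in frames]  # processing step

frames = [{"objects": ["person"]}, {"objects": ["sky"]}, {"objects": ["person", "tree"]}]
target_video = process_video(frames, "person", lambda f: {**f, "processed": True})
```

Only the frames containing the locked object are touched; the remaining frames pass through unchanged, so no manual frame-by-frame search is needed.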
In some possible implementations of embodiments of the present application, the video processing apparatus 1400 may further include:
the display module is used for displaying the video frame identifiers corresponding to at least one target video frame respectively before the processing module 1403 processes the target object in the video frame to be processed to obtain the target video; at least one target video frame is determined from the video frames to be processed, and each target video frame is associated with at least one video frame to be processed;
a receiving module 1401, further configured to receive a second input of a first video frame identifier of the at least one video frame identifier;
the processing module 1403 is specifically configured to process, in response to the second input, the target object in the first associated video frame to obtain a target video; the first associated video frame includes a target video frame corresponding to the first video frame identification and a pending video frame associated with the target video frame corresponding to the first video frame identification.
In some possible implementations of embodiments of the present application, the receiving module 1401 is further configured to receive a third input of a second video frame identifier of the at least one video frame identifier;
the processing module 1403 is further configured to cancel displaying the second video frame identifier in response to the third input.
In some possible implementations of embodiments of the present application, the processing module 1403 is specifically configured to display video processing options in response to the second input;
receiving a fourth input to the first one of the video processing options;
and responding to the fourth input, and processing the target object in the first associated video frame according to the video processing strategy corresponding to the first option to obtain the target video.
In some possible implementations of embodiments of the present application, the receiving module 1401 is further configured to receive a fifth input of a third video frame identifier of the at least one video frame identifier;
a processing module 1403, further configured to add encryption information to the second associated video frame in response to the fifth input; the second associated video frame includes a target video frame corresponding to the third video frame identification and a pending video frame associated with the target video frame corresponding to the third video frame identification.
In some possible implementations of the embodiments of the present application, the receiving module 1401 is further configured to receive a sixth input in a case where the target video is played;
the video processing apparatus 1400 may further include:
and the playing module plays the video to be processed under the condition that the decryption information corresponding to the sixth input is matched with the encryption information.
In some possible implementations of embodiments of the present application, the receiving module 1401 is further configured to receive a seventh input for a fourth video frame identifier of the at least one video frame identifier;
the determining module 1402 is further configured to, in response to the seventh input, use the target video frame corresponding to the fourth video frame identifier as the video cover of the target video.
In some possible implementations of embodiments of the present application, the determining module is specifically configured to display a cover setting option in response to a seventh input;
receiving an eighth input to a second one of the cover setting options;
and responding to the eighth input, and taking the target video frame corresponding to the fourth video frame identification as the video cover of the target video according to the cover setting strategy corresponding to the second option.
In some possible implementations of embodiments of the present application, the receiving module 1401 is further configured to receive a ninth input of the target object in the video to be processed;
the processing module 1403 is further configured to cancel the locked state of the target object in response to the ninth input.
In some possible implementations of the embodiments of the present application, the display module is further configured to display a video playing interface of the video to be processed before the step of receiving, by the receiving module 1401, the first input of the target object in the video to be processed;
The receiving module 1401 is specifically configured to receive a first input of a target object in the video playing interface.
In some possible implementations of the embodiments of the present application, the display module is further configured to display, before the step of receiving, by the receiving module 1401, a first input of a target object in the video to be processed, a video capturing preview interface for capturing the video to be processed;
the receiving module 1401 is specifically configured to receive a first input of a target object in a video capturing preview interface.
The video processing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in an electronic device. The electronic device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The electronic device in the embodiment of the application may be an electronic device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The video processing device provided in the embodiment of the present application can implement each process in the embodiments of the video processing method in fig. 1 to 13, and in order to avoid repetition, a detailed description is omitted here.
As shown in fig. 15, the embodiment of the present application further provides an electronic device 1500, which includes a processor 1501, a memory 1502, and a program or instruction stored in the memory 1502 and executable on the processor 1501. When executed by the processor 1501, the program or instruction implements each process of the embodiments of the video processing method described above and can achieve the same technical effects; to avoid repetition, details are omitted here.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
In some possible implementations of embodiments of the present application, the processor 1501 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured to implement one or more integrated circuits of embodiments of the present application.
In some possible implementations of embodiments of the present application, memory 1502 may include Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk storage media devices, optical storage media devices, flash Memory devices, electrical, optical, or other physical/tangible Memory storage devices. Thus, in general, the memory 1502 includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors) it is operable to perform the operations described with reference to video processing methods in accordance with embodiments of the present application.
Fig. 16 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
The electronic device 1600 includes, but is not limited to: radio frequency unit 1601, network module 1602, audio output unit 1603, input unit 1604, sensor 1605, display unit 1606, user input unit 1607, interface unit 1608, memory 1609, and processor 1610.
Those skilled in the art will appreciate that the electronic device 1600 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1610 by a power management system that performs the functions of managing charge, discharge, and power consumption. The electronic device structure shown in fig. 16 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown in the drawings, or may combine some components, or may be arranged in different components, which will not be described in detail herein.
Wherein the user input unit 1607 is for: receiving a first input of a target object in a video to be processed;
the processor 1610 is configured to: responsive to the first input, determining the target object as a locked state;
determining a video frame to be processed from the video to be processed; the video frame to be processed is a video frame in which the target object is in a locking state;
and processing the target object in the video frame to be processed to obtain the target video.
In the embodiment of the application, in response to a first input to a target object in a video to be processed, the target object is determined to be in a locked state; a video frame to be processed, i.e., a video frame in which the target object is in the locked state, is determined from the video to be processed; and the target object in the video frame to be processed is processed to obtain the target video. That is, the target object only needs to be determined to be in the locked state through the first input; the video frames to be processed can then be determined from the video to be processed and the target object in them processed, so the user does not need to search each video frame of the video to be processed one by one, and the video processing efficiency is improved.
In some possible implementations of the embodiments of the present application, the display unit 1606 is configured to: before the step of processing, by the processor 1610, the target object in the video frame to be processed to obtain the target video, displaying the video frame identifiers corresponding to at least one target video frame respectively; at least one target video frame is determined from the video frames to be processed, and each target video frame is associated with at least one video frame to be processed;
The user input unit 1607 is for: receiving a second input of a first video frame identification of the at least one video frame identification;
processor 1610, in particular, is configured to: responding to the second input, and processing a target object in the first associated video frame to obtain a target video; the first associated video frame includes a target video frame corresponding to the first video frame identification and a pending video frame associated with the target video frame corresponding to the first video frame identification.
In some possible implementations of embodiments of the present application, the user input unit 1607 is further to: receiving a third input for a second video frame identification of the at least one video frame identification;
processor 1610 is further configured to: in response to the third input, the display of the second video frame identification is canceled.
In some possible implementations of embodiments of the present application, processor 1610 is specifically configured to: in response to the second input, displaying video processing options;
receiving a fourth input to the first one of the video processing options;
and responding to the fourth input, and processing the target object in the first associated video frame according to the video processing strategy corresponding to the first option to obtain the target video.
In some possible implementations of embodiments of the present application, the user input unit 1607 is further to: receiving a fifth input of a third video frame identification of the at least one video frame identification;
Processor 1610 is also configured to: adding encryption information for the second associated video frame in response to the fifth input; the second associated video frame includes a target video frame corresponding to the third video frame identification and a pending video frame associated with the target video frame corresponding to the third video frame identification.
In some possible implementations of embodiments of the present application, the user input unit 1607 is further to: receiving a sixth input under the condition of playing the target video;
the processor 1610 is specifically configured to: and playing the video to be processed under the condition that the decryption information corresponding to the sixth input is matched with the encryption information.
In some possible implementations of embodiments of the present application, the user input unit 1607 is further to: receiving a seventh input for a fourth video frame identification of the at least one video frame identification;
processor 1610 is also configured to: in response to the seventh input, the fourth video frame identifies the corresponding target video frame as a video cover of the target video.
In some possible implementations of embodiments of the present application, processor 1610 is specifically configured to:
displaying a cover setting option in response to the seventh input;
receiving an eighth input to a second one of the cover setting options;
and responding to the eighth input, and taking the target video frame corresponding to the fourth video frame identification as the video cover of the target video according to the cover setting strategy corresponding to the second option.
In some possible implementations of embodiments of the present application, the user input unit 1607 is further to: receiving a ninth input of a target object in the video to be processed;
the processor 1610 is specifically configured to: in response to the ninth input, the locked state of the target object is canceled.
In some possible implementations of the embodiments of the present application, the display unit 1606 is further configured to: before the user input unit 1607 receives a first input of a target object in a video to be processed, a video play interface of the video to be processed is displayed;
a user input unit 1607, specifically for: a first input is received to a target object in a video playback interface.
In some possible implementations of the embodiments of the present application, the display unit 1606 is further configured to: display a video shooting preview interface for shooting the video to be processed before the user input unit 1607 receives a first input to a target object in the video to be processed;
the user input unit 1607 is specifically configured to: receive the first input to the target object in the video shooting preview interface.
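Taken together, the flow the above units implement — lock a target object chosen by the first input, determine the video frames to be processed (those in which the locked object appears), and process the object in those frames to obtain the target video — might be sketched as follows. The object detection and the processing effect (here a crude blanking of the object's region, standing in for a mosaic or blur) are simplified placeholder assumptions.

```python
# Minimal end-to-end sketch of the claimed method: given frames annotated
# with where the locked target object appears, obscure that region in every
# frame containing the object and leave the other frames untouched.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Region = Tuple[int, int, int, int]  # (x, y, width, height), assumed layout

@dataclass
class VideoFrame:
    pixels: List[List[int]]
    object_region: Optional[Region]  # None if the locked object is absent

def obscure_object(frame: VideoFrame) -> VideoFrame:
    """Placeholder for mosaicking: blank out the locked object's region."""
    x, y, w, h = frame.object_region
    out = [row[:] for row in frame.pixels]
    for r in range(y, y + h):
        for c in range(x, x + w):
            out[r][c] = 0  # a real implementation would pixelate or blur here
    return VideoFrame(out, frame.object_region)

def process_video(frames: List[VideoFrame]) -> List[VideoFrame]:
    """Process only the video frames to be processed (object present)."""
    return [obscure_object(f) if f.object_region is not None else f
            for f in frames]
```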
It should be appreciated that in embodiments of the present application, the input unit 1604 may include a graphics processor (Graphics Processing Unit, GPU) 16041 and a microphone 16042, the graphics processor 16041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1606 may include a display panel 16061, and the display panel 16061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1607 includes a touch panel 16071 and other input devices 16072. The touch panel 16071 is also referred to as a touch screen, and may include two parts: a touch detection device and a touch controller. Other input devices 16072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1609 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1610 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, and the like, with a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1610.
An embodiment of the present application further provides a readable storage medium storing a program or instructions which, when executed by a processor, implement the processes of the above video processing method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device of the above embodiments. The readable storage medium includes a computer-readable storage medium; examples include non-transitory computer-readable storage media such as a ROM, a RAM, a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run a program or instructions to implement the processes of the above video processing method embodiments applied to the electronic device, with the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. Furthermore, the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or alternatively by hardware, although in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the related art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Many other forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims; these also fall within the protection of the present application.

Claims (15)

1. A video processing method, comprising:
receiving a first input of a target object in a video to be processed;
in response to the first input, determining the target object to be in a locked state;
determining a video frame to be processed from the video to be processed; wherein the video frame to be processed is a video frame in which the target object is in the locked state;
and processing the target object in the video frame to be processed to obtain a target video.
2. The method of claim 1, wherein prior to the step of processing the target object in the video frame to be processed to obtain the target video, the method further comprises:
displaying video frame identifiers corresponding to at least one target video frame respectively; the at least one target video frame is determined from the video frames to be processed, and each target video frame is associated with at least one video frame to be processed;
receiving a second input to a first video frame identifier of the at least one video frame identifier;
wherein the processing the target object in the video frame to be processed to obtain a target video comprises:
in response to the second input, processing the target object in a first associated video frame to obtain the target video; wherein the first associated video frame comprises a target video frame corresponding to the first video frame identifier and a video frame to be processed associated with the target video frame corresponding to the first video frame identifier.
3. The method according to claim 2, wherein the method further comprises:
receiving a third input to a second video frame identifier of the at least one video frame identifier;
canceling display of the second video frame identifier in response to the third input.
4. The method of claim 2, wherein processing the target object in the first associated video frame in response to the second input to obtain the target video comprises:
in response to the second input, displaying video processing options;
receiving a fourth input to a first option of the video processing options;
in response to the fourth input, processing the target object in the first associated video frame according to a video processing strategy corresponding to the first option, to obtain the target video.
5. The method according to claim 2, wherein the method further comprises:
receiving a fifth input to a third video frame identifier of the at least one video frame identifier;
adding encryption information to a second associated video frame in response to the fifth input; wherein the second associated video frame comprises a target video frame corresponding to the third video frame identifier and a video frame to be processed associated with the target video frame corresponding to the third video frame identifier.
6. The method of claim 5, wherein the method further comprises:
receiving a sixth input while the target video is being played;
playing the video to be processed in a case where decryption information corresponding to the sixth input matches the encryption information.
7. The method according to claim 2, wherein the method further comprises:
receiving a seventh input to a fourth video frame identifier of the at least one video frame identifier;
in response to the seventh input, taking the target video frame corresponding to the fourth video frame identifier as a video cover of the target video.
8. The method of claim 7, wherein the taking, in response to the seventh input, the target video frame corresponding to the fourth video frame identifier as the video cover of the target video comprises:
displaying cover setting options in response to the seventh input;
receiving an eighth input to a second option of the cover setting options;
in response to the eighth input, taking the target video frame corresponding to the fourth video frame identifier as the video cover of the target video according to a cover setting strategy corresponding to the second option.
9. The method according to claim 1, wherein the method further comprises:
receiving a ninth input to the target object in the video to be processed;
canceling the locked state of the target object in response to the ninth input.
10. The method of claim 1, wherein prior to the step of receiving a first input of a target object in the video to be processed, the method further comprises:
displaying a video playing interface of the video to be processed;
wherein the receiving a first input of a target object in a video to be processed comprises:
receiving the first input to the target object in the video playing interface.
11. The method of claim 1, wherein prior to the step of receiving a first input of a target object in the video to be processed, the method further comprises:
displaying a video shooting preview interface for shooting the video to be processed;
wherein the receiving a first input of a target object in a video to be processed comprises:
receiving the first input to the target object in the video shooting preview interface.
12. A video processing apparatus, comprising: the device comprises a receiving module, a determining module and a processing module;
the receiving module is configured to receive a first input of a target object in a video to be processed;
the determining module is configured to determine, in response to the first input, that the target object is in a locked state;
the determining module is further configured to determine a video frame to be processed from the video to be processed; wherein the video frame to be processed is a video frame in which the target object is in the locked state;
and the processing module is configured to process the target object in the video frame to be processed to obtain a target video.
13. An electronic device, comprising: a processor, a memory and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the video processing method of any one of claims 1 to 11.
14. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the video processing method of any of claims 1 to 11.
15. A computer program product, characterized in that instructions in the computer program product, when executed by a processor of an electronic device, cause the electronic device to perform the steps of the video processing method according to any of claims 1 to 11.
CN202310196007.4A 2023-03-02 2023-03-02 Video processing method, device, electronic equipment, storage medium and product Pending CN116193198A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310196007.4A CN116193198A (en) 2023-03-02 2023-03-02 Video processing method, device, electronic equipment, storage medium and product


Publications (1)

Publication Number Publication Date
CN116193198A 2023-05-30

Family

ID=86438176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310196007.4A Pending CN116193198A (en) 2023-03-02 2023-03-02 Video processing method, device, electronic equipment, storage medium and product

Country Status (1)

Country Link
CN (1) CN116193198A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101257558A (en) * 2007-02-27 2008-09-03 华晶科技股份有限公司 Mosaic process for digital camera as well as method for reducing mosaic process
CN107784232A (en) * 2017-10-18 2018-03-09 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN113067983A (en) * 2021-03-29 2021-07-02 维沃移动通信(杭州)有限公司 Video processing method and device, electronic equipment and storage medium
CN113613067A (en) * 2021-08-03 2021-11-05 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination