CN106303723B - Video processing method and device - Google Patents

Video processing method and device

Info

Publication number
CN106303723B
Authority
CN
China
Prior art keywords
video
user
instruction
note
image
Prior art date
Legal status
Active
Application number
CN201610659380.9A
Other languages
Chinese (zh)
Other versions
CN106303723A (en)
Inventor
马志强
Current Assignee
Netease Youdao Information Technology Hangzhou Co Ltd
Original Assignee
Netease Youdao Information Technology Hangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Youdao Information Technology Hangzhou Co Ltd
Priority to CN201610659380.9A
Publication of CN106303723A
Application granted
Publication of CN106303723B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N 21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47202 End-user interface for requesting content on demand, e.g. video on demand
    • H04N 21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N 21/4758 End-user interface for inputting end-user data for providing answers, e.g. voting
    • H04N 21/485 End-user interface for client configuration
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments

Abstract

The embodiment of the invention provides a video processing method and device. The method comprises the following steps: receiving an instruction to intercept video content while a video is playing; intercepting, from the video, the video content corresponding to the current playing time, the video content being a frame of image or a video clip; displaying an editing interface for the intercepted video content; and processing the user-selected image according to the user's operation on it in the editing interface. The method associates each comment with the picture it refers to, so that the comment is displayed where the user's reading habits expect it; when viewing a comment, the user no longer has to remember or guess which picture the comment is directed at. This noticeably reduces the user's cognitive burden, makes the display position of comments more reasonable, and gives the user a better experience.

Description

Video processing method and device
Technical Field
The embodiment of the invention relates to the field of multimedia information processing, in particular to a video processing method and device.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
Currently, a user can comment on a video while browsing it. For example, the user may poke fun at the performance of each player in a game video, and may also mark videos of interest and come back to watch them later.
Specifically, a user may post comments while watching a video through a "barrage" (bullet comment) function provided on the video page, or on a dedicated comment page that is separate from the video page.
However, with either way of posting comments, the posted comment is detached from the picture being commented on. For example, when a user watching a video is interested in the current picture and clicks the barrage button to comment, the comment is actually directed at that picture, yet it is displayed over pictures that come after it. If the comment is posted on the comment page, it is separated from the commented picture entirely.
Disclosure of Invention
Because a comment posted by a user is separated from the picture it refers to, in the prior art a user viewing the comment has to recall or guess which picture the comment is directed at; even after dwelling on it for a long time, the user may still not know which picture the comment concerns. This increases the user's cognitive burden, makes the display position of comments unreasonable, and degrades the user's application experience. In addition, in the prior art the user can, apart from playback control, only comment on the video being watched, so the user's control over the video is limited and the experience is poor.
Thus, in the prior art, because comments are separated from the pictures they comment on, it is hard for a user to grasp what a comment means, a user reading a comment cannot share the thoughts of the user who posted it, and the user's control over the video is limited, all of which makes for a very frustrating experience.
Therefore, an improved video processing method and apparatus are needed that allow a user to operate on a video conveniently and quickly, and that keep an operation result (e.g., a comment) associated with the picture that was operated on, so as to improve the user's application experience.
In this context, embodiments of the present invention are intended to provide a video processing method and apparatus.
In a first aspect of embodiments of the present invention, there is provided a video processing method, including:
receiving an instruction of intercepting video content in the process of playing a video;
intercepting video content corresponding to the current playing time from the video, wherein the video content is a frame of image or video clip;
displaying an editing interface for the intercepted video content;
and correspondingly processing the image selected by the user according to the operation of the user on the image selected by the user in the editing interface.
In a second aspect of embodiments of the present invention, there is provided a video processing apparatus comprising:
the instruction receiving module is used for receiving an instruction of intercepting video content in the process of playing the video;
the video content acquisition module is used for intercepting video content corresponding to the current playing time from the video, wherein the video content is a frame of image or a video clip;
the editing interface display module is used for displaying an editing interface aiming at the intercepted video content;
and the processing module is used for correspondingly processing the image selected by the user according to the operation of the user on the image selected by the user in the editing interface.
In a third aspect of embodiments of the present invention, there is provided a video processing device, which may include a memory and a processor, for example, wherein the processor may be configured to read a program in the memory and execute the following processes:
receiving an instruction of intercepting video content in the process of playing a video;
intercepting video content corresponding to the current playing time from the video, wherein the video content is a frame of image or video clip;
displaying an editing interface for the intercepted video content;
and correspondingly processing the image selected by the user according to the operation of the user on the image selected by the user in the editing interface.
In a fourth aspect of embodiments of the present invention, there is provided a program product comprising program code for performing, when the program product is run, the following:
receiving an instruction of intercepting video content in the process of playing a video;
intercepting video content corresponding to the current playing time from the video, wherein the video content is a frame of image or video clip;
displaying an editing interface for the intercepted video content;
and correspondingly processing the image selected by the user according to the operation of the user on the image selected by the user in the editing interface.
According to the video processing method and device provided by the embodiments of the invention, after the video content to be edited (such as a video clip or a frame of image) is intercepted, the user-selected image in that video content is processed according to the user's operations on it in an editing interface. The user can therefore comment on the selected image through the editing interface, and the comment is associated with that picture: it is displayed where the user's reading habits expect it, and when viewing comments the user no longer has to remember or guess which picture a comment is directed at. This noticeably reduces the user's cognitive burden, makes the display position of comments more reasonable, and gives the user a better experience. In addition, the functions of the editing interface can be extended according to actual needs, so that besides posting comments the user can perform other operations on the selected image, which extends the user's control over the video and further improves the application experience.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 schematically illustrates an application scenario according to an embodiment of the present invention;
fig. 2 schematically shows a flow diagram of a video processing method according to another embodiment of the invention;
FIG. 3 schematically shows a schematic diagram of intercepting video content according to a further embodiment of the invention;
FIG. 4 schematically illustrates a schematic diagram of an editing interface according to yet another embodiment of the invention;
FIG. 5 schematically illustrates an edit effect diagram according to yet another embodiment of the invention;
fig. 6 schematically shows an effect view when a preset mark is displayed according to still another embodiment of the present invention;
FIG. 7 schematically illustrates an effect diagram when displaying comments according to still another embodiment of the present invention;
fig. 8 schematically shows a schematic configuration diagram of a video processing apparatus according to still another embodiment of the present invention;
fig. 9 schematically shows a schematic configuration of a video processing apparatus according to still another embodiment of the present invention;
fig. 10 schematically shows a schematic structural diagram of a program product of video processing according to an embodiment of the present invention;
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to an embodiment of the invention, a video processing method and device are provided.
In this context, the following terms are to be understood as follows:
1. Editing interface: an interactive interface that provides image editing functionality.
2. Video content: a frame of image or a video segment in a video.
Moreover, any number of elements in the drawings are by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Summary of the Invention
The inventor has found that, in the prior art, a comment posted by a user is separated from the picture it refers to, and a user viewing the comment has to recall or guess which picture the comment is directed at; even after dwelling on it for a long time, the user may still not know which picture the comment concerns. This increases the user's cognitive burden, makes the display position of comments unreasonable, and degrades the user's application experience. In addition, in the prior art the user can, apart from playback control, only comment on the video being watched, so the user's control over the video is limited and the experience is poor.
In the embodiment of the invention, after the video content to be edited (such as a video clip or a frame of image) is intercepted, the user-selected image in that video content is processed according to the user's operations on it in an editing interface. The user can therefore comment on the selected image through the editing interface, and the comment is associated with that picture: it is displayed where the user's reading habits expect it, and when viewing comments the user no longer has to remember or guess which picture a comment is directed at. This noticeably reduces the user's cognitive burden, makes the display position of comments more reasonable, and gives the user a better experience. In addition, the functions of the editing interface can be extended according to actual needs, so that besides posting comments the user can perform other operations on the selected image, which extends the user's control over the video and further improves the application experience.
Having described the general principles of the invention, various non-limiting embodiments of the invention are described in detail below.
Application scene overview
Fig. 1 is a schematic view of an application scenario of a video processing method according to an embodiment of the present invention. The scenario may comprise, for example, a user 10, a user terminal 11 and a server 12. Various clients, such as a news client or a video client, may be installed on the user terminal 11. Through a client on the user terminal 11, the user 10 may issue, to the client or the server 12, an instruction to intercept video content from the video being displayed; the client on the user terminal 11 or the server 12 intercepts, from the video, the video content corresponding to the current playing time according to the instruction, and the client displays an editing interface for the intercepted video content. The user may then continue to operate on the selected image through the editing interface; the client receives the corresponding operation instruction and/or forwards it to the server 12, and the client on the user terminal 11 or the server 12 performs the corresponding processing on the user-selected image.
That is to say, the video processing method provided in the embodiment of the present invention may be implemented by a server on the network side, by a client installed on a user terminal, or by the user terminal itself; this is not limited in any way.
The user terminal 11 and the server 12 may be communicatively connected through a communication network, which may be a local area network, a wide area network, or the like. The user terminal 11 may be a mobile phone, a tablet computer, a notebook computer, a personal computer, etc., and the server 12 may be any server device capable of supporting corresponding video processing.
Exemplary method
A method according to an exemplary embodiment of the invention is described below with reference to figs. 2 to 10 in connection with the application scenario of fig. 1. It should be noted that the above application scenario is shown merely for the convenience of understanding the spirit and principles of the present invention, and the embodiments of the present invention are not limited in this respect. Rather, embodiments of the present invention may be applied to any applicable scenario.
Fig. 2 is a schematic flow chart of a video processing method according to an embodiment of the present invention. In this embodiment, the execution subject of the video processing method may be the server 12, the client or the user terminal 11 described in the application scenario, which is not limited in this respect. Specifically, as shown in fig. 2, the video processing method according to this embodiment may include the following steps:
step 201: and receiving an instruction of intercepting video contents in the process of playing the video.
Optionally, the instruction for capturing the video content may be received at any time during the playing of the video. For example, the video may be received immediately after the video starts to be played, or may be received after the video starts to be played, which is not limited to this.
Optionally, a button for facilitating the user to issue an instruction to capture the video content may be displayed in the video frame, as shown in fig. 3, taking a mobile phone as an example, the button may be similar to a pair of scissors. Other labels may be used in embodiments.
Optionally, the user may also issue an instruction for capturing the video content through a preset gesture operation for capturing the video content. The specific gesture can be set according to actual needs, which is not limited in the present invention.
Step 202: intercepting, from the video, the video content corresponding to the current playing time, the video content being a frame of image or a video clip.
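By way of illustration only, the following browser-side sketch shows one possible way of intercepting the frame shown at the current playing time, and of recording the time range of a video clip, assuming an HTML5 video player; the function names and the clip-length parameter are assumptions of this illustration and do not limit the embodiments.

```typescript
// Illustrative sketch only: intercepting the video content of step 202 in a browser,
// assuming an HTML5 <video> element. Names and parameter values are assumptions.
function captureCurrentFrame(video: HTMLVideoElement): string {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas context unavailable");
  // Draw the frame being shown at the current playing time onto the canvas.
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  // Hand the intercepted frame to the editing interface as a data URL.
  return canvas.toDataURL("image/png");
}

function captureClipRange(video: HTMLVideoElement, clipSeconds = 5): { start: number; end: number } {
  // For a video clip, only the time range around the current playing time is
  // recorded here; the segment itself can be cut on the client or the server.
  const end = video.currentTime;
  const start = Math.max(0, end - clipSeconds);
  return { start, end };
}
```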
Step 203: displaying an editing interface for the intercepted video content.
For example, taking a mobile phone as an example, the editing interface may be as shown in fig. 4 and include various editing functions such as "add comment", "cutout", "advanced image processing", and the like. "Advanced image processing" covers, for example, changing the color of an image or changing its storage format, such as converting a continuous-tone grayscale image into a bitmap. In a specific implementation this can be set according to actual requirements, and the present invention is not limited in this respect.
Step 204: processing the user-selected image according to the user's operation on it in the editing interface.
In one embodiment, the user-selected image may contain rich content, such as people, flowers, birds, and the characters' spoken lines. When the user comments on the selected image, the comment may concern only the birds in it. Or, if the video being played is a video for learning mathematics, the user-selected image may contain several mathematical formulas, and the user may want to annotate only one of them. Therefore, in this embodiment of the present application, for convenience of user operation, step 204 may include steps A1-A2:
Step A1: receiving an information input instruction, where the information input instruction includes user input information and a display position, and the user input information is characters and/or graphics.
Optionally, the function of the user input information may be a comment on an image selected by the user, or may be a prompt message used for watching a video next time, or may be an annotation of image content, which is not limited in the present invention.
Step A2: displaying the user input information at the display location of the user selected image.
For example, as shown in fig. 5, a mathematical test question may be labeled so that attention is drawn to it the next time it is viewed.
In this way, the user can set the display position of the input information according to actual needs: for example, a comment on a bird can be displayed beside the bird, and a label for a mathematical formula beside that formula. The user input information is then displayed near the object it refers to the next time the video is played, which makes it easy to understand, relieves the user's cognitive burden, and improves the application experience.
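By way of illustration only, the following sketch shows one possible way of representing user input information together with its display position and rendering it over the user-selected image; the type and field names are assumptions of this illustration.

```typescript
// Illustrative sketch only: user input information (text and/or graphics) stored
// with a display position relative to the user-selected image. Names are assumptions.
interface Annotation {
  text?: string;                 // a comment, reminder, or label
  shape?: "arrow" | "box";       // an optional graphic mark
  x: number;                     // horizontal display position, as a fraction of image width (0..1)
  y: number;                     // vertical display position, as a fraction of image height (0..1)
}

function renderAnnotation(imageContainer: HTMLElement, a: Annotation): void {
  const el = document.createElement("div");
  el.textContent = a.text ?? "";
  el.style.position = "absolute";
  // Relative coordinates keep the note beside the commented object
  // (a bird, a formula, ...) at any display size.
  el.style.left = `${a.x * 100}%`;
  el.style.top = `${a.y * 100}%`;
  imageContainer.appendChild(el);
}
```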
In one embodiment, to facilitate user operation, the user may further edit the user input information that has been entered; the method may include steps B1-B2:
step B1: receiving an editing instruction issued by a user aiming at selected user input information, wherein the editing instruction comprises any one of the following items: delete instructions, move position instructions, modify instructions.
Step B2: and executing corresponding operation on the selected user input information according to the editing instruction.
In this way, the user can edit not only the user input information that has just been entered but also user input information already present in a previously edited user-selected image. The user can thus handle the entered information according to actual needs, which further improves the application experience.
In an embodiment of the present invention, in order to help the user manage video content that has been edited, after the corresponding processing is performed on the user-selected image the method may further include:
step C1: and generating a video note according to the processing result and the video content.
Wherein, in one embodiment, the processing result may be a user-selected image after processing. Optionally, in order to facilitate the separate management of the original image content of the user-selected image and the user-edited content, the user-selected image may be regarded as a background layer, and the user-edited content may be placed on other layers. In this way, only the original user-selected image may be displayed, or layers with user-edited content may be displayed together. The user-selected image after processing may therefore be an image having multiple layers.
Step C2: adding the video note to a specified video note list.
In the form of video notes, the video content edited by the user can thus be stored and managed in a unified way. In a specific implementation, the user can classify the video notes according to actual needs, for example into "math notes", "english notes", and the like, and then study from them. The embodiment of the application therefore gives the user further control over the video and further improves the application experience.
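By way of illustration only, the following sketch shows one possible representation of the layered processing result described above and of a video note list organized by category; the type and field names are assumptions of this illustration.

```typescript
// Illustrative sketch only: the intercepted frame as a background layer with
// user-edited content on separate layers, and a note list keyed by category.
interface EditLayer {
  visible: boolean;
  content: string;               // e.g. serialized annotations drawn by the user
}

interface EditedImage {
  background: string;            // URL or data URL of the original intercepted frame
  layers: EditLayer[];           // user-edited content kept apart from the original image
}

interface VideoNote {
  videoContent: string | { start: number; end: number }; // a frame, or a clip's time range
  result: EditedImage;           // the processing result
  videoLink?: string;            // optional link back to the original video
}

type VideoNoteList = Map<string, VideoNote[]>;            // category name -> notes

function addNote(list: VideoNoteList, category: string, note: VideoNote): void {
  // Add the note to the category ("math notes", "english notes", ...) chosen by the user.
  const notes = list.get(category) ?? [];
  notes.push(note);
  list.set(category, notes);
}
```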
In an embodiment of the present invention, in order to facilitate communication between users, the method may further include: sharing the video note so that users other than the user can view and edit it through an editing interface. For example, other users may comment on the user-selected image through the editing interface, or comment on existing comments on it. In this way, user interaction is achieved around the same user-selected image.
In an embodiment of the present invention, in order to facilitate viewing a video note, the method may further include: receiving a display instruction for displaying the video note; and displaying the video note, where the processing result of the user-selected image is displayed when that image is shown. The processing result here may include the results of operations by multiple users, or only those of the user playing the video note; this is not limited. In other words, whenever the video note is played the corresponding processing result is displayed, so the user learns the processing result of the video note in time.
Of course, buttons for turning the processing results on and off may also be provided in the playback screen of the video note, so that the user can display the processing results only when needed.
In one embodiment, the image selected by the user is equivalent to a user focus point, and in order to facilitate the user and other users to know the content of the focus point in time, in the embodiment of the present invention, if the video content in the video note is a video clip, the method may further include: pausing playback of the video clip while the user-selected image is displayed. In this way, the user can see the details in the user-selected image through a static screen.
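By way of illustration only, the following sketch pauses playback of the clip when it reaches the time of the user-selected image, assuming an HTML5 video element; the tolerance value is an assumption of this illustration.

```typescript
// Illustrative sketch only: pause the video clip when playback reaches the noted
// time, so the user-selected image is shown as a static screen.
function pauseAtNotedImage(video: HTMLVideoElement, notedTime: number, tolerance = 0.05): void {
  const onTimeUpdate = () => {
    if (Math.abs(video.currentTime - notedTime) <= tolerance) {
      video.pause();                                        // freeze on the noted frame
      video.removeEventListener("timeupdate", onTimeUpdate);
    }
  };
  video.addEventListener("timeupdate", onTimeUpdate);
}
```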
In one embodiment, in order to make it easy for the user to revisit the original video that a video note corresponds to, in the embodiment of the present invention the video note may further include a video link to the video. Accordingly, a play instruction for playing the video may be received while the video note is displayed, and the video is obtained and played according to the video link. The user thus does not have to search blindly from memory for the original video, and can conveniently relate the video note back to the original video while operating on it.
In one embodiment, the video and its video clips typically have a certain playing duration and may contain many frames of images, while the number of user-selected images that were edited may be small. Therefore, in order to help the user find the position of a video note quickly and accurately and learn its content, the embodiment of the invention may further comprise: when the video note or the video is displayed, displaying a preset mark at the position in the playing progress bar that corresponds to the user-selected image, where the preset mark indicates that the image at that position has a video note.
For example, as shown in fig. 6, a five-pointed-star graphic may be used as the preset mark, so that the user knows that the position has a video note. If it is detected that the user drags the progress bar to the star graphic, the processing result of the image at that position can be displayed.
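By way of illustration only, the following sketch places a preset mark on the playing progress bar at the position corresponding to each noted image; the element class and names are assumptions of this illustration.

```typescript
// Illustrative sketch only: place a mark (e.g. styled as a five-pointed star) on the
// progress bar at the position proportional to each noted image's playing time.
function placeNoteMarkers(progressBar: HTMLElement, noteTimes: number[], durationSeconds: number): void {
  for (const t of noteTimes) {
    const marker = document.createElement("span");
    marker.className = "note-marker";                       // styled elsewhere, e.g. as a star icon
    marker.style.position = "absolute";
    marker.style.left = `${(t / durationSeconds) * 100}%`;  // position along the bar
    marker.title = "This position has a video note";
    progressBar.appendChild(marker);
  }
}
```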
In one embodiment, rather than being gathered into a comment list in a dedicated comment area, the comments of other users can be displayed at the positions they refer to. In this embodiment of the present invention, when another user's editing result for the video note includes a comment on the user-selected image and/or the user input information together with a display position for that comment, displaying the video note may further include: selecting a specified number of comments from the other users' comments, and displaying them at the display positions on the user-selected image that correspond to those comments. Fig. 7, for example, shows the effect when a specified number of comments are displayed. Whatever comments are made, they can thus be displayed at the corresponding positions, which makes them easy to understand and reduces the user's cognitive burden.
Optionally, the specified number may be set by the user who shares the video note at the time of sharing. In addition, other users' comments may be liked, in which case the specified number of comments may be the most-liked comments. In a specific implementation this can be set according to actual needs and is not limited.
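By way of illustration only, the following sketch selects the specified number of comments, taking the most-liked comments first as described above; the type and field names are assumptions of this illustration.

```typescript
// Illustrative sketch only: pick the specified number of other users' comments to
// display on the user-selected image, most-liked first.
interface SharedComment {
  text: string;
  likes: number;
  x: number;      // display position on the user-selected image (fraction of width)
  y: number;      // display position on the user-selected image (fraction of height)
}

function selectCommentsToShow(all: SharedComment[], specifiedNumber: number): SharedComment[] {
  return [...all]
    .sort((a, b) => b.likes - a.likes)   // most-liked first
    .slice(0, specifiedNumber);
}
```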
In one embodiment, the comments need not always be displayed. So that the user can watch the video note as desired, in the embodiment of the present invention the method may, while the comments are displayed, receive an instruction to close the displayed comments of the other users and close them. In this way, the user can focus attention on the video content and on the user's own video notes.
Further, to help the user look more closely into a comment of interest, in the embodiment of the present invention the method further includes: receiving an instruction to view information about the user who made a comment; and displaying that user's information. In this way, it is easy for the user to see which user posted the comment of interest. In a specific implementation, when a long-press on the comment of interest is detected, the information of the user who posted it can be displayed. Of course, other trigger conditions for displaying the user information may also be used, and the present invention is not limited in this respect.
As can be seen from the foregoing, in the embodiment of the present invention, after the video content to be edited (for example, a video clip or a frame of image) is intercepted, the user-selected image in that video content is processed according to the user's operations on it in the editing interface. The user can therefore comment on the selected image through the editing interface, and the comment is associated with that picture: it is displayed where the user's reading habits expect it, and when viewing comments the user no longer has to remember or guess which picture a comment is directed at. This noticeably reduces the user's cognitive burden, makes the display position of comments more reasonable, and gives the user a better experience. In addition, the functions of the editing interface can be extended according to actual needs, so that besides posting comments the user can perform other operations on the selected image, which extends the user's control over the video and further improves the application experience.
Exemplary device
Having described the method of an exemplary embodiment of the present invention, a video processing apparatus of an exemplary embodiment of the present invention is next described with reference to fig. 8.
As shown in fig. 8, a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention includes:
an instruction receiving module 801, configured to receive an instruction for capturing video content during a video playing process;
a video content obtaining module 802, configured to intercept video content corresponding to a current playing time from the video, where the video content is a frame of image or a video segment;
an editing interface display module 803, configured to display an editing interface for the intercepted video content;
and the processing module 804 is configured to perform corresponding processing on the image selected by the user according to the operation of the user on the image selected by the user in the editing interface.
Optionally, the processing module may specifically include:
the device comprises an input instruction receiving unit, a display unit and a display unit, wherein the input instruction receiving unit is used for receiving an information input instruction, and the information input instruction comprises user input information and a display position; wherein, the user input information is characters and/or graphs;
a processing unit to display the user input information at the display position of the user selected image.
Optionally, the apparatus further comprises:
an editing instruction receiving module, configured to receive an editing instruction issued by a user for selected user input information, where the editing instruction includes any one of the following: deleting an instruction, moving a position instruction and modifying an instruction;
and the editing instruction execution module is used for executing corresponding operation on the selected user input information according to the editing instruction.
Optionally, the apparatus further comprises:
the video note generation module is used for generating a video note according to a processing result and the video content after the processing module correspondingly processes the image selected by the user;
an adding module for adding the video note to a specified video note list.
Optionally, the apparatus further comprises:
the sharing module is used for sharing the video note so that other users except the user can view and edit the video note through an editing interface.
Optionally, the apparatus further comprises:
the display instruction receiving module is used for receiving a display instruction for displaying the video note;
and the display module is used for displaying the video note and displaying the processing result of the image selected by the user when the image selected by the user is displayed.
Optionally, if the video content in the video note is a video clip, the apparatus further includes:
a pause module for pausing the playing of the video clip while the user selected image is displayed.
Optionally, the video note further includes a video link of the video; the device further comprises:
the playing instruction receiving module is used for receiving a playing instruction for playing the video when the video note is displayed;
and the playing module is used for acquiring the video according to the video link and playing the video.
Optionally, the apparatus further comprises:
and the preset mark display module is used for displaying a preset mark at a position corresponding to the image selected by the user in the playing progress bar when the video note or the video is displayed, wherein the preset mark is used for indicating that the image at the corresponding position has the video note.
Optionally, the editing result of the video note by the other user includes a comment for the image selected by the user and/or the user input information and a display position of the comment, and the apparatus further includes:
and the comment display module is used for selecting a specified number of comments from the comments of other users when the display module displays the video notes, and displaying the specified number of comments at the display position of the image selected by the user, which corresponds to the comments.
Optionally, the apparatus further comprises:
a closing instruction receiving module, configured to receive an instruction to close the displayed comments of the other users;
and the closing module is used for closing the displayed comments of the other users.
Optionally, the apparatus further comprises:
the user information viewing module is used for receiving an instruction for viewing the information of the user who makes the comment;
and the user information display module is used for displaying the information of the user.
Having described the method and apparatus of an exemplary embodiment of the present invention, a video processing apparatus according to another exemplary embodiment of the present invention is described next.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module," or "system."
In some possible embodiments, the video processing apparatus of the present invention may comprise at least one processing unit, and at least one storage unit. Wherein the storage unit stores program code that, when executed by the processing unit, causes the processing unit to perform various steps in the video processing method according to various exemplary embodiments of the present invention described in the above section "exemplary method" of the present specification. For example, the processing unit may execute step 201 shown in fig. 2, and receive an instruction to intercept video content during playing video; step 202, intercepting video content corresponding to the current playing time from the video, wherein the video content is a frame of image or a video clip; step 203, displaying an editing interface aiming at the intercepted video content; and step 204, correspondingly processing the image selected by the user according to the operation of the user on the image selected by the user in the editing interface.
The video processing device 90 according to this embodiment of the present invention is described below with reference to fig. 9. The video processing device 90 shown in fig. 9 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in fig. 9, the video processing device 90 may take the form of a general-purpose computing device, which may be a server device, for example. The components of the video processing device 90 may include, but are not limited to: at least one processing unit 91, at least one storage unit 92, and a bus 93 connecting the various system components (including the storage unit 92 and the processing unit 91).
Bus 93 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The storage unit 92 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)921 and/or cache memory 922, and may further include Read Only Memory (ROM) 923.
Storage unit 92 may also include programs/utilities 925 having a set (at least one) of program modules 924, such program modules 924 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The video processing device 90 may also communicate with one or more external devices 94 (e.g., a keyboard, a pointing device, etc.), with one or more devices that enable a user to interact with the video processing device 90, and/or with any device (e.g., a router, a modem, etc.) that enables the video processing device 90 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 95. The video processing device 90 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 96. As shown, the network adapter 96 communicates with the other modules of the video processing device 90 over the bus 93. It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with the video processing device 90, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Exemplary program product
In some possible embodiments, aspects of the present invention may also be implemented in the form of a program product comprising program code which, when the program product is run on a server device, causes the server device to perform the steps of the method according to the various exemplary embodiments of the present invention described in the "Exemplary method" section above. For example, the server device may perform step 201 shown in fig. 2, receiving an instruction to intercept video content while the video is playing; step 202, intercepting, from the video, the video content corresponding to the current playing time, the video content being a frame of image or a video clip; step 203, displaying an editing interface for the intercepted video content; and step 204, processing the user-selected image according to the user's operation on it in the editing interface.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As shown in fig. 10, a program product 100 for video processing according to an embodiment of the present invention may employ a portable compact disc read-only memory (CD-ROM), include program code, and be run on a server device. However, the program product of the present invention is not limited to this, and in this document a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device over any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., over the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the units described above may be embodied in one unit, according to embodiments of the invention. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, nor does the division into aspects imply that features in those aspects cannot be combined to advantage; this division is merely for convenience of presentation. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (18)

1. A video processing method, comprising:
receiving an instruction of intercepting video content in the process of playing a video;
intercepting video content corresponding to the current playing time from the video, wherein the video content is a frame of image or video clip;
displaying an editing interface for the intercepted video content;
according to the operation of the user on the image selected by the user in the editing interface, the image selected by the user is correspondingly processed,
the method specifically comprises the following steps of correspondingly processing the image selected by the user according to the operation of the user on the image selected by the user in the editing interface:
receiving an information input instruction, wherein the information input instruction comprises user input information and a display position; wherein, the user input information is characters and/or graphs;
displaying the user input information at the display location of the user selected image;
generating a video note according to a processing result and the video content; and
adding the video note to a category corresponding to the video note in a specified video note list;
wherein the method further comprises:
and when the video note or the video is displayed, displaying a preset mark at a position corresponding to the image selected by the user in the playing progress bar, wherein the preset mark is used for indicating that the image at the corresponding position has the video note.
2. The method of claim 1, further comprising:
receiving an editing instruction issued by a user aiming at selected user input information, wherein the editing instruction comprises any one of the following items: deleting an instruction, moving a position instruction and modifying an instruction;
and executing corresponding operation on the selected user input information according to the editing instruction.
3. The method of claim 1, further comprising:
and sharing the video note so that other users except the user can view and edit the video note through an editing interface.
4. The method of claim 3, further comprising:
receiving a display instruction for displaying the video note;
and displaying the video note, and displaying a processing result of the image selected by the user when the image selected by the user is displayed.
5. The method of claim 4, wherein if the video content in the video note is a video clip, the method further comprises:
pausing playback of the video clip while the user-selected image is displayed.
6. The method of claim 4, the video note further comprising a video link to the video; the method further comprises the following steps:
when the video note is displayed, receiving a playing instruction for playing the video;
and acquiring and playing the video according to the video link.
7. The method of claim 4, wherein the results of the editing of the video note by other users include comments and display locations of comments for the user-selected image and/or the user-input information, and wherein the method further comprises:
selecting a specified number of comments from the comments of other users, and displaying the specified number of comments at a display position of the image selected by the user corresponding to the comments.
8. The method of claim 7, further comprising:
receiving an instruction to close the displayed comments of the other users;
closing the displayed comments of the other users.
9. The method of claim 7, further comprising:
receiving an instruction to view information of a user making a comment;
and displaying the information of the user.
10. A video processing apparatus comprising:
the instruction receiving module is used for receiving an instruction of intercepting video content in the process of playing the video;
the video content acquisition module is used for intercepting video content corresponding to the current playing time from the video, wherein the video content is a frame of image or a video clip;
the editing interface display module is used for displaying an editing interface aiming at the intercepted video content;
the processing module is used for correspondingly processing the image selected by the user according to the operation of the user on the image selected by the user in the editing interface,
wherein, the processing module specifically comprises:
the device comprises an input instruction receiving unit, a display unit and a display unit, wherein the input instruction receiving unit is used for receiving an information input instruction, and the information input instruction comprises user input information and a display position; wherein, the user input information is characters and/or graphs;
a processing unit for displaying the user input information at the display position of the user selected image;
the video note generation module is used for generating a video note according to a processing result and the video content after the processing module correspondingly processes the image selected by the user;
the adding module is used for adding the video notes into the classification corresponding to the video notes in the appointed video note list;
wherein the apparatus further comprises:
and the preset mark display module is used for displaying a preset mark at a position corresponding to the image selected by the user in the playing progress bar when the video note or the video is displayed, wherein the preset mark is used for indicating that the image at the corresponding position has the video note.
11. The apparatus of claim 10, the apparatus further comprising:
an editing instruction receiving module, configured to receive an editing instruction issued by a user for selected user input information, where the editing instruction includes any one of the following: deleting an instruction, moving a position instruction and modifying an instruction;
and the editing instruction execution module is used for executing corresponding operation on the selected user input information according to the editing instruction.
12. The apparatus of claim 10, the apparatus further comprising:
the sharing module is used for sharing the video note so that other users except the user can view and edit the video note through an editing interface.
13. The apparatus of claim 12, the apparatus further comprising:
the display instruction receiving module is used for receiving a display instruction for displaying the video note;
and the display module is used for displaying the video note and displaying the processing result of the image selected by the user when the image selected by the user is displayed.
14. The apparatus of claim 13, wherein, if the video content in the video note is a video clip, the apparatus further comprises:
a pause module, configured to pause playback of the video clip while the user-selected image is displayed.
15. The apparatus of claim 13, wherein the video note further comprises a video link to the video, the apparatus further comprising:
a playback instruction receiving module, configured to receive, while the video note is displayed, a playback instruction for playing the video;
and a playback module, configured to acquire the video via the video link and play the video.
16. The apparatus of claim 13, wherein the edits made to the video note by other users comprise comments on the user-selected image and/or the user input information, together with display positions of the comments, the apparatus further comprising:
a comment display module, configured to, when the video note is displayed, select a specified number of comments from the comments of the other users and display the specified number of comments at the display positions, corresponding to the comments, on the user-selected image.
17. The apparatus of claim 16, further comprising:
a closing instruction receiving module, configured to receive an instruction to close the displayed comments of the other users;
and a closing module, configured to close the displayed comments of the other users.
18. The apparatus of claim 16, further comprising:
a user information viewing module, configured to receive an instruction to view the information of the user who made the comment;
and a user information display module, configured to display the information of the user.
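
To make the claimed structure concrete, the sketch below models the video note recited in claims 10 and 15 in TypeScript. All identifiers (VideoNote, Annotation, generateVideoNote, addToNoteList) and field choices are illustrative assumptions, not terms from the patent; this is one plausible shape for a note that records the intercepted content, the user's annotations, a category, and a link back to the source video.

// Hypothetical data model for a video note; names and fields are assumptions.
interface Annotation {
  kind: "text" | "graphic";            // user input information: text and/or graphics
  content: string;                     // the text, or a serialized drawing
  position: { x: number; y: number };  // display position on the user-selected image
}

interface VideoNote {
  videoLink: string;   // link back to the source video (claim 15)
  capturedAt: number;  // playback time (seconds) at which the content was intercepted
  content:
    | { type: "frame"; imageData: string }                   // a single frame of image
    | { type: "clip"; startTime: number; endTime: number };  // or a video clip
  annotations: Annotation[];  // processing result for the user-selected image
  category: string;           // classification inside the designated note list
}

// Assemble a note once the user has finished editing the intercepted content.
function generateVideoNote(
  videoLink: string,
  currentTime: number,
  content: VideoNote["content"],
  annotations: Annotation[],
  category: string
): VideoNote {
  return { videoLink, capturedAt: currentTime, content, annotations, category };
}

// Add the note to the matching category of a designated video note list.
function addToNoteList(list: Map<string, VideoNote[]>, note: VideoNote): void {
  const bucket = list.get(note.category) ?? [];
  bucket.push(note);
  list.set(note.category, bucket);
}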
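Claim 10 also recites a preset mark display module that marks the playback progress bar wherever a note exists. The sketch below shows one straightforward way to place such markers as a percentage of the video duration; the DOM structure, class names, and clamping rule are assumptions, and it reuses the VideoNote type from the previous sketch.

// Convert a note's timestamp into a horizontal offset on the progress bar.
function markerOffsetPercent(noteTime: number, videoDuration: number): number {
  // clamp to [0, 100] so a note at or past the end still lands on the bar
  return Math.min(100, Math.max(0, (noteTime / videoDuration) * 100));
}

// Draw one preset mark per note on the progress bar element.
function renderNoteMarkers(
  progressBar: HTMLElement,
  notes: VideoNote[],
  videoDuration: number
): void {
  for (const note of notes) {
    const marker = document.createElement("span");
    marker.className = "note-marker";  // styled as the preset mark
    marker.style.left = `${markerOffsetPercent(note.capturedAt, videoDuration)}%`;
    marker.title = "This frame has a video note";  // indicates a note exists here
    progressBar.appendChild(marker);
  }
}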
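Claims 7 and 16 through 18 describe showing a specified number of other users' comments at the positions they attached to the user-selected image, and closing them on request. The sketch below is a hedged illustration of that flow; the Comment shape, the newest-first selection rule, and the overlay-based rendering are assumptions rather than anything prescribed by the patent.

// Hypothetical comment record attached to a shared video note.
interface Comment {
  author: { id: string; name: string };  // viewable per claims 9 and 17-18
  text: string;
  position: { x: number; y: number };    // display position on the user-selected image
  createdAt: number;
}

// Select a specified number of comments; here, newest first (an assumed rule).
function pickComments(all: Comment[], specifiedCount: number): Comment[] {
  return [...all].sort((a, b) => b.createdAt - a.createdAt).slice(0, specifiedCount);
}

// Render the selected comments at their display positions on an overlay element.
// Calling it with an empty array also serves as "closing" the comments (claims 8 and 17).
function displayComments(overlay: HTMLElement, comments: Comment[]): void {
  overlay.replaceChildren();  // clear any comments currently shown
  for (const c of comments) {
    const bubble = document.createElement("div");
    bubble.className = "note-comment";
    bubble.textContent = `${c.author.name}: ${c.text}`;
    bubble.style.left = `${c.position.x}px`;
    bubble.style.top = `${c.position.y}px`;
    overlay.appendChild(bubble);
  }
}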
CN201610659380.9A 2016-08-11 2016-08-11 Video processing method and device Active CN106303723B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610659380.9A CN106303723B (en) 2016-08-11 2016-08-11 Video processing method and device

Publications (2)

Publication Number Publication Date
CN106303723A CN106303723A (en) 2017-01-04
CN106303723B true CN106303723B (en) 2020-10-16

Family

ID=57668572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610659380.9A Active CN106303723B (en) 2016-08-11 2016-08-11 Video processing method and device

Country Status (1)

Country Link
CN (1) CN106303723B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107635153B (en) * 2017-09-11 2020-07-31 北京奇艺世纪科技有限公司 Interaction method and system based on image data
CN107682744B (en) * 2017-09-29 2021-01-08 惠州Tcl移动通信有限公司 Video clip output method, storage medium and mobile terminal
CN109947981B (en) * 2017-10-30 2022-03-22 阿里巴巴(中国)有限公司 Video sharing method and device
CN108279833A (en) * 2018-01-08 2018-07-13 维沃移动通信有限公司 A kind of reading interactive approach and mobile terminal
CN110381382B (en) * 2019-07-23 2021-02-09 腾讯科技(深圳)有限公司 Video note generation method and device, storage medium and computer equipment
CN110798727A (en) * 2019-10-28 2020-02-14 维沃移动通信有限公司 Video processing method and electronic equipment
CN110933509A (en) * 2019-12-09 2020-03-27 北京字节跳动网络技术有限公司 Information publishing method and device, electronic equipment and storage medium
CN113163230B (en) 2020-01-22 2023-09-15 腾讯科技(深圳)有限公司 Video message generation method and device, electronic equipment and storage medium
CN111314792B (en) * 2020-02-27 2022-04-08 北京奇艺世纪科技有限公司 Note generation method, electronic device and storage medium
CN111447489A (en) * 2020-04-02 2020-07-24 北京字节跳动网络技术有限公司 Video processing method and device, readable medium and electronic equipment
CN113645482A (en) * 2020-04-27 2021-11-12 阿里巴巴集团控股有限公司 Video processing method and device, electronic equipment and storage medium
CN111556371A (en) * 2020-05-20 2020-08-18 维沃移动通信有限公司 Note recording method and electronic equipment
CN111726685A (en) * 2020-06-28 2020-09-29 百度在线网络技术(北京)有限公司 Video processing method, video processing device, electronic equipment and medium
CN112087657B (en) * 2020-09-21 2024-02-09 腾讯科技(深圳)有限公司 Data processing method and device
CN113010698B (en) * 2020-11-18 2023-03-10 北京字跳网络技术有限公司 Multimedia interaction method, information interaction method, device, equipment and medium
CN113015009B (en) * 2020-11-18 2022-09-09 北京字跳网络技术有限公司 Video interaction method, device, equipment and medium
CN112380365A (en) * 2020-11-18 2021-02-19 北京字跳网络技术有限公司 Multimedia subtitle interaction method, device, equipment and medium
CN113139090A (en) * 2021-04-16 2021-07-20 北京字节跳动网络技术有限公司 Interaction method, interaction device, electronic equipment and computer-readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102307156A (en) * 2011-05-16 2012-01-04 北京奇艺世纪科技有限公司 Method and device for sharing video picture and returning to playing
CN104159151A (en) * 2014-08-06 2014-11-19 哈尔滨工业大学深圳研究生院 Device and method for intercepting and processing of videos on OTT box
CN104427352A (en) * 2013-09-09 2015-03-18 北京下周科技有限公司 Method and system for recording and playing television video by mobile terminals to realize user interaction and sharing
CN104796795A (en) * 2014-01-17 2015-07-22 乐视网信息技术(北京)股份有限公司 Video content publishing method and device
CN105681820A (en) * 2016-01-08 2016-06-15 天脉聚源(北京)科技有限公司 Video barrage recording method and device

Also Published As

Publication number Publication date
CN106303723A (en) 2017-01-04

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190819

Address after: Room 309, Building No. 599, Network Business Road, Changhe Street, Binjiang District, Hangzhou City, Zhejiang Province, 310052

Applicant after: Netease Youdao Information Technology (Hangzhou) Co., Ltd.

Address before: Floor 7, Building 4, No. 599 Network Business Road, Changhe Street, Binjiang District, Hangzhou City, Zhejiang Province, 310052

Applicant before: NetEase (Hangzhou) Network Co., Ltd.

GR01 Patent grant