CN115250377B - Video processing method, processing platform, electronic device and storage medium

Publication number: CN115250377B
Authority: CN (China)
Prior art keywords: video, video segment, label
Legal status: Active (granted)
Application number: CN202110458994.1A
Other languages: Chinese (zh)
Other versions: CN115250377A
Inventors: 张民, 吕德政, 崔刚, 张彤, 张艳
Assignee (original and current): Shenzhen Frame Color Film And Television Technology Co ltd
Application CN202110458994.1A filed by Shenzhen Frame Color Film And Television Technology Co ltd; published as CN115250377A and, upon grant, as CN115250377B

Classifications

All under H04N21/00 (selective content distribution, e.g. interactive television or video on demand [VOD]):

    • H04N21/44008: processing of video elementary streams, involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/84: generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8456: structuring of content by decomposing the content in the time domain, e.g. in time segments

Abstract

The application provides a video processing method, a processing platform, an electronic device, and a storage medium. Target video information to be processed is acquired and then divided to obtain a plurality of first video segments. The first video segments and an annotation file are respectively issued to a set of clients, where the annotation file includes a plurality of preset video characteristic tags. Each client returns an annotation information set for its first video segment, containing at least one of the plurality of video characteristic tags. Each first video segment is then processed according to its corresponding annotation information set to obtain the processed video information. By dividing the target video information into video segments, this method allows multiple people to process the same video simultaneously, which increases the video processing speed.

Description

Video processing method, processing platform, electronic device and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video processing method, a processing platform, an electronic device, and a storage medium.
Background
Currently, when post-processing a video, a post-production editor generally has to browse the video in chronological order and process it based on personal experience, for example processing the video images to improve brightness or contrast.
However, because video post-processing places high demands on the processing capability of the video processing platform, equipment-resource constraints keep the number of post-production staff small. As video durations keep growing, the workload of post-production personnel keeps increasing, so the post-processing speed of videos falls. The present application therefore provides a new video processing method to increase the speed of video post-processing.
Disclosure of Invention
The application provides a video processing method, a processing platform, electronic equipment and a storage medium, which are used for solving the problem of low video processing speed in the prior art.
A first aspect of the present application provides a video processing method, the method comprising:
acquiring target video information to be processed;
dividing the target video information to obtain a plurality of first video segments;
respectively issuing a plurality of first video segments and annotation files to each client, wherein the annotation files comprise a plurality of preset video characteristic labels;
receiving a plurality of annotation information sets corresponding to first video segments returned by each client, wherein the annotation information sets corresponding to the first video segments comprise at least one video characteristic tag in the plurality of video characteristic tags;
and processing the first video segments according to the corresponding annotation information sets of each first video segment to obtain processed video information.
In a possible implementation manner, the dividing the target video information to obtain a plurality of first video segments includes:
dividing the target video information by shot to obtain a plurality of second video segments;
and distributing each second video segment to the first video segments to obtain a plurality of first video segments, wherein each first video segment comprises at least one second video segment.
In a possible implementation manner, the allocating each second video segment to the first video segments to obtain a plurality of first video segments includes:
determining the number of the plurality of first video segments according to the number of clients currently online;
detecting the duration of each second video segment;
and distributing each second video segment to the first video segments according to the number of the first video segments and the duration of each second video segment to obtain a plurality of first video segments, wherein the duration of the first video segments is equal.
In a possible implementation manner, the assigning each second video segment to the first video segments to obtain a plurality of first video segments includes:
identifying each second video segment, and determining the content type of each second video segment;
and distributing each second video segment to the first video segment according to the content type of each second video segment to obtain a plurality of first video segments, wherein the content types of the second video segments corresponding to each first video segment are the same.
In a possible implementation manner, the processing, according to the set of annotation information corresponding to each first video segment, the first video segment to obtain processed video information includes:
determining a video processing program corresponding to the video characteristic tag in the labeling information set according to the labeling information set corresponding to each first video segment;
and calling a video processing program corresponding to the video characteristic label in the labeling information set to process the first video segment.
In a possible implementation manner, the annotation file further includes: tag grade sets corresponding to the plurality of video characteristic tags, wherein each tag grade set includes a plurality of tag grade parameters; and the annotation information set corresponding to the first video segment further includes: the tag grade parameters corresponding to the video characteristic tags in the annotation information set;
the calling the video processing program corresponding to the video characteristic tag in the annotation information set to process the first video segment includes the following steps: calling the video processing program corresponding to the video characteristic tag in the annotation information set, and processing the first video segment using the tag grade parameter corresponding to that video characteristic tag in the annotation information set.
In a possible implementation manner, the video characteristic tag in the annotation file includes a plurality of the following: contrast labels, brightness labels, motion labels, face labels.
In a second aspect, the present application provides a video processing platform, the platform comprising:
the acquisition unit is used for acquiring target video information to be processed;
the dividing unit is used for dividing the target video information to obtain a plurality of first video segments;
the sending unit is used for respectively sending a plurality of first video segments and annotation files to each client, wherein the annotation files comprise a plurality of preset video characteristic labels;
the receiving unit is used for receiving a plurality of labeling information sets corresponding to the first video segments returned by each client, wherein the labeling information sets corresponding to the first video segments comprise at least one video characteristic label in the plurality of video characteristic labels;
and the processing unit is used for processing the first video segments according to the marking information set corresponding to each first video segment to obtain processed video information.
In a possible implementation manner, the dividing unit includes:
the shot detection module is used for dividing the target video information by shot to obtain a plurality of second video segments;
the dividing module is used for distributing each second video segment to the first video segments to obtain a plurality of first video segments, wherein each first video segment comprises at least one second video segment.
In a possible implementation manner, the dividing module includes:
the time detection module is used for determining the number of the plurality of first video segments according to the number of clients currently online;
the time detection module is further used for detecting the duration of each second video segment;
the time detection module is further configured to allocate each second video segment to a first video segment according to the number of the plurality of first video segments and the duration of each second video segment, so as to obtain a plurality of first video segments, where the durations of the plurality of first video segments are equal.
In a possible implementation manner, the dividing module includes:
the identification module is used for identifying each second video segment and determining the content type of each second video segment;
the identification module is further configured to allocate each second video segment to a first video segment according to the content type of each second video segment, so as to obtain a plurality of first video segments, where the content types of the second video segments corresponding to each first video segment are the same.
In a possible implementation manner, the processing unit includes: the analysis module and the calling module;
the analysis module is used for determining a video processing program corresponding to the video characteristic tag in the labeling information set according to the labeling information set corresponding to each first video segment;
and the calling module is used for calling a video processing program corresponding to the video characteristic tag in the labeling information set to process the first video segment.
In a possible implementation manner, the annotation file further includes: tag grade sets corresponding to the plurality of video characteristic tags, wherein each tag grade set includes a plurality of tag grade parameters; and the annotation information set corresponding to the first video segment further includes: the tag grade parameters corresponding to the video characteristic tags in the annotation information set;
the calling module is specifically configured to call the video processing program corresponding to the video characteristic tag in the annotation information set, and to process the first video segment using the tag grade parameter corresponding to that video characteristic tag in the annotation information set.
In a possible implementation manner, the video characteristic tag in the annotation file includes a plurality of the following: contrast labels, brightness labels, motion labels, face labels.
In a third aspect, the present application provides an electronic device, comprising: a memory and a processor;
the memory is configured to store instructions executable by the processor;
wherein the processor is configured to perform the method according to any one of the first aspects by executing the executable instructions.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein computer-executable instructions for implementing the method according to any one of the first aspects when executed by a processor.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the method according to any of the first aspects.
According to the video processing method, processing platform, electronic device, and storage medium provided herein, target video information to be processed is acquired and then divided to obtain a plurality of first video segments; the first video segments and an annotation file containing a plurality of preset video characteristic tags are respectively issued to the clients; each client returns an annotation information set for its first video segment, containing at least one of the video characteristic tags; and each first video segment is processed according to its corresponding annotation information set to obtain the processed video information. By dividing the target video information into video segments, multiple people can process the same video simultaneously, which increases the video processing speed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic illustration of an application scenario for video processing;
fig. 2 is a schematic flow chart of a video processing method according to an embodiment of the present application;
fig. 3 is a schematic view of an application scenario of video processing provided in the present application;
fig. 4 is a flow chart of a video information dividing method according to an embodiment of the present application;
fig. 5 is a flow chart of a method for processing video information provided in the present application;
fig. 6 is a schematic structural diagram of a video processing platform provided in the present application;
FIG. 7 is a schematic structural diagram of yet another video processing platform provided in the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
Fig. 1 is a schematic view of an application scenario of video processing. A shooting device captures an object to be shot, producing original video information. The original video information is then sent to a video processing platform for processing (such as clipping, adding subtitles, image rendering, and the like); the video processing platform may be, for example, a cloud server, and is not limited here. The processed video is sent to a playback device (such as a television, a mobile phone, or cinema playback equipment), which plays it upon receipt. The processed video gives the audience a better viewing experience.
Currently, when post-processing a video, a post-production editor typically browses the video on a video processing platform in chronological order and processes it based on personal experience. Because video post-processing places high demands on the platform's processing capability, equipment constraints keep the number of post-production staff small. As video durations increase, however, the workload per person grows, which reduces the post-processing speed of the video.
The following describes the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a flow chart of a video processing method according to an embodiment of the present application. As shown in fig. 2, the method includes:
s101, acquiring target video information to be processed;
s102, dividing target video information to obtain a plurality of first video segments.
An application scenario of the video processing method provided by the embodiment of the present application is shown in fig. 3. Fig. 3 is a schematic view of an application scenario of video processing provided in the present application; the scenario includes a client 1, a client 2, a client 3, and a video processing platform 4. The video processing platform may be a remote server. A client may be an application installed on a device, and it places no high processing-capability demands on that device.
After the video processing platform receives the target video information, it splits the received target video information into a plurality of first video segments. In one possible implementation, the target video information may be split evenly or randomly by duration, directly according to the number of clients connected to the video processing platform.
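As an illustrative sketch (not part of the patent), the even split by duration might look like the following; the function name and the use of plain (start, end) time ranges are assumptions:

```python
def split_evenly(total_duration_s: float, num_clients: int) -> list[tuple[float, float]]:
    """Split a video's timeline into num_clients equal (start, end) ranges.

    A minimal sketch of the 'divide evenly by duration' strategy; a real
    platform would cut on frame or keyframe boundaries instead.
    """
    if num_clients <= 0:
        raise ValueError("need at least one client")
    step = total_duration_s / num_clients
    return [(i * step, (i + 1) * step) for i in range(num_clients)]

# Three connected clients, a 90-second video: three equal 30-second ranges.
segments = split_evenly(90.0, 3)
```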
S103, respectively issuing a plurality of first video segments and annotation files to each client, wherein the annotation files comprise a plurality of preset video characteristic labels.
After obtaining a plurality of first video segments, a preset annotation file and the first video segments are issued to each client. For example, the obtained first video segments are respectively: first video segment 1, first video segment 2, and first video segment 3. Thereafter, the first video segment 1 is sent to the client 1, the first video segment 2 is sent to the client 2, and the first video segment 3 is sent to the client 3.
In addition to a first video segment, an annotation file is also sent to each client; the annotation file includes a plurality of preset video characteristic tags. For example, the preset video characteristic tags may include several of the following: contrast tags, brightness tags, motion tags, and face tags. It should be noted that the video characteristic tags in the present application include, but are not limited to, the tags listed above. A face tag may be used to mark key facial details in a video segment that need emphasis and whose resolution should be improved; for example, when details such as the facial expression of a leading character in a movie need to be highlighted, a face tag can be added to them. Conversely, when it is known in advance that the acquired target video information documents an animal's growth, with no faces appearing, the preset video characteristic tags may include only contrast tags, brightness tags, and motion tags.
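A minimal sketch of what the annotation file and a client's returned annotation information set could look like; all field names here are illustrative assumptions, since the patent specifies the concepts but not a concrete format:

```python
# Hypothetical annotation file: the tag names mirror those listed in the
# text (contrast, brightness, motion, face); the layout is assumed.
annotation_file = {
    "tags": ["contrast", "brightness", "motion", "face"],
    # Optional per-tag grade parameters, as in the tag-grade claims.
    "grades": {"contrast": [1, 2, 3], "brightness": [1, 2, 3]},
}

def build_annotation_set(segment_id: str, chosen_tags: list[str]) -> dict:
    """Validate a client's tag selection against the annotation file and
    build the annotation information set to upload to the platform."""
    unknown = [t for t in chosen_tags if t not in annotation_file["tags"]]
    if unknown:
        raise ValueError(f"unknown tags: {unknown}")
    return {"segment_id": segment_id, "tags": chosen_tags}

info = build_annotation_set("seg-1", ["contrast", "face"])
```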
S104, receiving a plurality of labeling information sets corresponding to the first video segments returned by each client, wherein the labeling information sets corresponding to the first video segments comprise at least one video characteristic label in a plurality of video characteristic labels.
In an exemplary embodiment, after receiving a first video segment, the user selects video characteristic tags from the received annotation file to generate the annotation information set corresponding to that first video segment, and uploads it to the video processing platform. That is, at the client, the user adds video characteristic tags to the video segment, each tag indicating that one or more kinds of video processing are required for the corresponding segment; for example, selecting a contrast tag for a first video segment indicates that the contrast of that segment needs to be adjusted. The generated annotation information set is then sent to the video processing platform.
Optionally, in order to correspond the first video segment to the labeling information set, start time information and end time information of the first video segment and a total frame number corresponding to the first video segment may be set in the labeling information set.
S105, processing the first video segments according to the corresponding annotation information sets of the first video segments to obtain processed video information.
After receiving the annotation information sets, the video processing platform processes each first video segment according to the video characteristic tags included in its corresponding annotation information set. Specifically, each time the platform receives an annotation information set, it looks up the corresponding first video segment on the platform and processes it; once all first video segments have been processed, the processed segments are recombined according to the original time order of the video to obtain the processed video.
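The receive, process, and merge flow described above can be sketched as follows; the dictionary layout and the tag-to-program `processors` mapping are assumptions for illustration:

```python
def process_and_merge(segments, annotation_sets, processors):
    """Apply each segment's tagged processing programs, then merge the
    results in original timeline order.

    segments: {segment_id: {"start": start_time, "frames": [...]}}
    annotation_sets: {segment_id: [tag, ...]} as returned by clients
    processors: {tag: callable(frames) -> frames}
    """
    processed = {}
    for seg_id, seg in segments.items():
        frames = seg["frames"]
        for tag in annotation_sets.get(seg_id, []):
            # Call the video processing program mapped to this tag.
            frames = processors[tag](frames)
        processed[seg_id] = {"start": seg["start"], "frames": frames}
    # Recombine the processed segments in the video's original order.
    ordered = sorted(processed.values(), key=lambda s: s["start"])
    return [f for s in ordered for f in s["frames"]]

# Toy example: frames are numbers, and "brightness" doubles them.
video = {"a": {"start": 10.0, "frames": [2, 2]},
         "b": {"start": 0.0, "frames": [1]}}
result = process_and_merge(video, {"a": ["brightness"]},
                           {"brightness": lambda fs: [f * 2 for f in fs]})
# result: [1, 4, 4] (segment "b" first, then the processed "a")
```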
It should be noted that the video processing platform and the clients may interact over various communication channels (for example, a client uploads an annotation information set to the platform, and the platform issues video segments and the annotation file to the clients), including but not limited to fourth-generation mobile communication technology (4G), fifth-generation mobile communication technology (5G), and the like.
In this embodiment, post-processing programs demand high processing capability from the device that runs them, so the video processing platform may be a remote server or another device with strong processing capability. The platform divides the target video information and distributes the pieces to client devices with weaker processing capability, so that multiple clients add tags to the video segments, generate annotation information sets, and upload them to the platform; the platform can then process each first video segment according to its corresponding annotation set. In this way, multiple people can work on all video segments of the same video at the same time, which increases the video processing speed, places low demands on the clients' processing capability, and is easy to implement.
In practical applications, the steps shown in fig. 4 may be implemented when dividing the target video information (i.e., when performing step S102). Fig. 4 is a flow chart of a video information dividing method provided in an embodiment of the present application, as shown in fig. 4, including the following steps:
s201, dividing target video information according to the lens to obtain a plurality of second video segments.
For example, the target video information may be divided using an existing shot-detection technique to obtain a plurality of second video segments, where each second video segment corresponds to one shot.
Existing shot-detection techniques generally compare adjacent frames of the target video, for example the contrast and the difference between brightness and motion vectors; if the difference is large, the two adjacent frames belong to two different shots.
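A simplified stand-in for such shot detection, using only the L1 distance between normalised grey-level histograms of adjacent frames (real detectors also use motion vectors and adaptive thresholds, so this is a sketch, not the patent's method):

```python
def shot_boundaries(frame_histograms, threshold=0.5):
    """Detect cut points by comparing adjacent frame histograms.

    frame_histograms: list of equal-length, normalised histograms (e.g.
    grey-level distributions, one per frame).  Returns the indices at
    which a new shot is judged to start.
    """
    cuts = []
    for i in range(1, len(frame_histograms)):
        prev, cur = frame_histograms[i - 1], frame_histograms[i]
        # L1 distance between normalised histograms lies in [0, 2].
        diff = sum(abs(a - b) for a, b in zip(prev, cur))
        if diff > threshold:
            cuts.append(i)
    return cuts

# Three dark frames followed by two bright ones: one cut, at index 3.
cuts = shot_boundaries([[1.0, 0.0]] * 3 + [[0.0, 1.0]] * 2)
```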
S202, each second video segment is distributed to the first video segments to obtain a plurality of first video segments, wherein each first video segment comprises at least one second video segment.
In one example, step S202 may be performed by:
Step one, determining the number of the plurality of first video segments according to the number of clients currently online;
Step two, detecting the duration of each second video segment;
Step three, allocating each second video segment to a first video segment according to the number of first video segments and the duration of each second video segment, to obtain a plurality of first video segments of equal duration.
For example, after obtaining the plurality of second video segments, the video processing platform may detect the number of clients currently online and take that number as the number of first video segments. Alternatively, the platform may determine the number of first video segments according to a number of clients specified by the user.
Then, according to the duration of each second video segment and the number of first video segments, a plurality of first video segments of equal duration are obtained, where the duration of a first video segment is the sum of the durations of the second video segments it includes.
In another case, the durations of the first video segments may differ, as long as each divided first video segment falls within a preset duration range. For example, if the total duration of the target video information is 100 min and there are 5 clients, the average duration per client is 20 min, and the preset range may be 18 min to 22 min. When the first video segments are then formed from the second video segments' durations and the number of first video segments, the integrity of each second video segment is preserved, that is, each second video segment belongs to exactly one first video segment, and every resulting first video segment lasts between 18 min and 22 min, which makes the post-processing more accurate.
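One plausible way to keep whole second video segments while holding the first video segments' durations close to equal is a greedy longest-first assignment; the patent does not prescribe a specific balancing algorithm, so the heuristic below is an assumption:

```python
import heapq

def assign_by_duration(shot_durations, num_parts):
    """Greedily balance shots across num_parts groups by total duration.

    Each shot (second video segment) goes whole into exactly one group
    (first video segment): take shots longest-first, always adding the
    next shot to the currently shortest group.  This keeps group
    durations close to equal while preserving shot integrity.
    """
    # Min-heap of (total_duration, group_index); groups hold shot indices.
    heap = [(0.0, g) for g in range(num_parts)]
    heapq.heapify(heap)
    groups = [[] for _ in range(num_parts)]
    order = sorted(range(len(shot_durations)),
                   key=lambda i: shot_durations[i], reverse=True)
    for i in order:
        total, g = heapq.heappop(heap)
        groups[g].append(i)
        heapq.heappush(heap, (total + shot_durations[i], g))
    return groups

# Five shots, two clients: totals come out as 8 and 10.
groups = assign_by_duration([5, 4, 3, 3, 3], 2)
```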
In another example, step S202 (assigning each second video segment to a first video segment to obtain a plurality of first video segments) may instead be performed as follows:
Step one, identifying each second video segment and determining its content type;
Step two, allocating each second video segment to a first video segment according to its content type, to obtain a plurality of first video segments, where the second video segments within each first video segment have the same content type.
For example, after obtaining the plurality of second video segments, they may be distributed into first video segments as follows. First, the video processing platform identifies the frame images in each second video segment and thereby determines the content types in that segment. For example, the content types of second video segment 1 include sky, mountain, grass, and tree; those of second video segment 2 include sky, river, and grass; those of second video segment 3 include sky, building, and pedestrian; and those of second video segment 4 include sky, river, and grass. Among these, second video segment 2 and second video segment 4 contain the same content types, so second video segment 1 and second video segment 3 may be assigned to first video segment 1 and first video segment 2 respectively, while second video segment 2 and second video segment 4 are assigned to the same first video segment 3, that is, issued to the same client. The same user then selects the tag characteristics for them, so subjective human factors do not create large post-processing differences between segments with the same content.
In addition, in another embodiment, since the content types of some second video segments are highly similar rather than identical, a similarity threshold may be set, and second video segments whose content-type similarity exceeds the threshold are allocated to the same first video segment. In the above example, the second video segment 1, the second video segment 2, and the second video segment 4 have high mutual similarity and may be allocated to the first video segment 1, while the second video segment 3 has low similarity with the remaining second video segments and may be divided into the first video segment 2.
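One way to realize the similarity-threshold variant is to compare content-type sets with a Jaccard score and greedily merge segments against the first sufficiently similar group. The patent does not specify a similarity measure, so the Jaccard choice, the threshold value, and the clustering strategy are all assumptions for illustration.

```python
def jaccard(a, b):
    """Jaccard similarity between two content-type sets."""
    return len(a & b) / len(a | b)

def group_by_similarity(shot_categories, threshold=0.3):
    """Greedy single-pass clustering: each shot joins the first existing
    group whose representative set is similar enough, else starts a group."""
    groups = []  # list of (representative category set, [shot ids])
    for shot_id, cats in shot_categories.items():
        for rep, members in groups:
            if jaccard(rep, cats) > threshold:
                members.append(shot_id)
                break
        else:
            groups.append((set(cats), [shot_id]))
    return [members for _, members in groups]

shots = {
    1: {"sky", "mountain", "grass", "tree"},
    2: {"sky", "river", "grass"},
    3: {"sky", "building", "pedestrian"},
    4: {"sky", "river", "grass"},
}
clusters = group_by_similarity(shots)
# segments 1, 2 and 4 exceed the threshold against group 1; segment 3 stands alone
```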
Alternatively, in this example, when the second video segments are allocated according to content type, the allocation may also be combined with the duration of each second video segment, that is, the method of this example is combined with the method of the previous example. When the client determines the annotation information set corresponding to a first video segment, that set may comprise the annotation information sets corresponding to the plurality of second video segments included in the first video segment; that is, the annotation information sets of the second video segments in the first video segment form the annotation information set of the first video segment. In another possible case, the annotation information set corresponding to the first video segment is composed of the annotation information sets corresponding to each frame image in the first video segment.
In this embodiment, when dividing the video segments, the target video information may first be divided into a plurality of second video segments according to shots, and the plurality of second video segments is then allocated to obtain a plurality of first video segments. Specifically, the allocation may be performed according to the duration of each second video segment and the number of first video segments, so that the durations of the first video segments are equal, the workload of each client is the same, and the video processing speed is improved. Alternatively, the allocation may be performed according to the content types of the second video segments, so that the content types of the second video segments in each first video segment are the same or highly similar; large processing differences between the same content caused by human factors are thereby avoided in the post-processing, improving both the processing speed and the processing accuracy.
In practical application, when the video processing platform processes the plurality of first video segments after receiving the annotation information set sent by the client (i.e., step S105 includes the following steps), as shown in fig. 5, fig. 5 is a flow chart of a video information processing method provided in the present application:
s1051, determining a video processing program corresponding to a video characteristic label in a labeling information set according to the labeling information set corresponding to each first video segment;
s1052, calling a video processing program corresponding to the video characteristic label in the labeling information set to process the first video segment.
After the video processing platform obtains each labeling information set, for each first video segment it searches the labeling information set corresponding to that first video segment for the video characteristic labels, and then determines the video processing program corresponding to each video characteristic label in the labeling information set. For example, if the labeling information set corresponding to a certain first video segment includes a brightness characteristic label, a brightness-adjustment processing program corresponding to that label is determined, where the correspondence between video characteristic labels and video processing programs may be stored in the video processing platform in advance.
And the video processing platform calls the video processing program to process the first video segment after determining the video processing program corresponding to the first video segment.
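The label-to-program lookup can be pictured as a pre-stored dispatch table. The handler names, the label keys, and the string-based "segment" stand-in below are all hypothetical; the patent only requires that a correspondence between labels and processing programs be stored in advance.

```python
def adjust_brightness(segment):
    # placeholder for a real brightness-adjustment routine
    return f"brightness-adjusted({segment})"

def adjust_contrast(segment):
    # placeholder for a real contrast-adjustment routine
    return f"contrast-adjusted({segment})"

# pre-stored correspondence: video characteristic label -> processing program
LABEL_PROGRAMS = {
    "brightness": adjust_brightness,
    "contrast": adjust_contrast,
}

def process_segment(segment, labeling_info_set):
    """Call the processing program for each label found in the set."""
    for label in labeling_info_set:
        program = LABEL_PROGRAMS.get(label)
        if program is not None:
            segment = program(segment)
    return segment

result = process_segment("segment-1", {"brightness"})
```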
In one example, the annotation file further comprises: label grade sets corresponding to the plurality of video characteristic labels, wherein each label grade set comprises a plurality of label grade parameters; the labeling information set corresponding to the first video segment then further comprises: the label grade parameter corresponding to each video characteristic label in the labeling information set. That is, when the video processing platform issues the first video segment and the markup file to the client, the markup file may further include a label grade set corresponding to each of the pre-specified video characteristic labels, each label grade set comprising a plurality of label grade parameters. For example, when the markup file includes the video characteristic labels brightness characteristic label A, contrast characteristic label B, and motion characteristic label C, the brightness characteristic label A may be divided into five label grade parameters A1, A2, A3, A4 and A5, where, for the same video segment, a larger label grade parameter yields a higher brightness in the adjusted video segment; that is, an adjustment marked with grade parameter A2 is brighter than one under grade parameter A1 and darker than one under grade parameter A3.
At this time, step S1052 specifically includes: and calling a video processing program corresponding to the video characteristic label in the labeling information set, and processing the first video segment by adopting a label grade parameter corresponding to the video characteristic label in the labeling information set.
In this embodiment, when the video processing platform processes a first video segment, it calls the processing program corresponding to each video characteristic label in the labeling information set that the client returned for that first video segment. Further, the annotation file may also include a plurality of label grade parameters corresponding to each video characteristic label. After receiving the annotation file, the client selects the video characteristic labels required by the video segment and the label grade parameters corresponding to those labels. When the video processing platform then processes the first video segment, it calls the video processing program according to the video characteristic label corresponding to the first video segment, selects the processing parameters according to the corresponding label grade parameter, and executes the video processing program on the first video segment, so that the effect of the processed video information is more accurate and the user obtains a better viewing experience.
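The grade parameters A1–A5 can be illustrated as a mapping from grade to brightness gain, where a larger grade yields a brighter result. The gain values below are invented solely for illustration; the patent does not define numeric mappings.

```python
# hypothetical gains for brightness label A's five grade parameters
BRIGHTNESS_GAIN = {"A1": 0.8, "A2": 0.9, "A3": 1.0, "A4": 1.1, "A5": 1.2}

def apply_brightness(pixel_values, grade):
    """Scale pixel intensities by the gain for the given grade parameter,
    clamped to the 8-bit range."""
    gain = BRIGHTNESS_GAIN[grade]
    return [min(255, round(v * gain)) for v in pixel_values]

# A2 yields a darker result than A3 on the same frame, as the text notes
low = apply_brightness([100, 200], "A2")
mid = apply_brightness([100, 200], "A3")
```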
Fig. 6 is a schematic structural diagram of a video processing platform provided in the present application, where, as shown in fig. 6, the platform includes:
an acquisition unit 61 for acquiring target video information to be processed;
a dividing unit 62, configured to divide the target video information to obtain a plurality of first video segments;
a sending unit 63, configured to send a plurality of first video segments and a markup file to each client, where the markup file includes a predetermined plurality of video characteristic tags;
a receiving unit 64, configured to receive a set of annotation information corresponding to a plurality of first video segments returned by each client, where the set of annotation information corresponding to the first video segments includes at least one video characteristic tag of a plurality of video characteristic tags;
the processing unit 65 is configured to process each first video segment according to the set of annotation information corresponding to the first video segment, so as to obtain processed video information.
The video processing platform provided in this embodiment is configured to implement the technical solution provided by the foregoing method; the implementation principle and technical effect are similar and are not repeated.
Fig. 7 is a schematic structural diagram of still another video processing platform provided in the present application, as shown in fig. 7, on the basis of the structure shown in fig. 6, the dividing unit 62 includes:
a lens detection module 621, configured to divide the target video information according to the lens, so as to obtain a plurality of second video segments;
the dividing module 622 is configured to allocate each second video segment to a first video segment, so as to obtain a plurality of first video segments, where each first video segment includes at least one second video segment.
In one possible implementation, the partitioning module 622 includes:
the time detection module is used for determining the number of the plurality of first video segments according to the number of the clients currently on line;
the time detection module is also used for detecting the duration of each second video segment;
and the time detection module is also used for distributing each second video segment to the first video segment according to the number of the plurality of first video segments and the duration of each second video segment to obtain a plurality of first video segments, wherein the duration of the plurality of first video segments is equal.
In one possible implementation, the partitioning module 622 includes:
the identification module is used for identifying each second video segment and determining the content type of each second video segment;
the identification module is further configured to allocate each second video segment to the first video segment according to the content type of each second video segment, so as to obtain a plurality of first video segments, where the content types of the second video segments corresponding to each first video segment are the same.
In a possible implementation, the processing unit 65 includes: a parsing module 651 and a calling module 652;
the parsing module 651 is configured to determine, according to the set of annotation information corresponding to each first video segment, a video processing program corresponding to a video characteristic tag in the set of annotation information;
and the calling module 652 is configured to call a video processing program corresponding to the video characteristic tag in the labeling information set, and process the first video segment.
In one possible implementation, the markup document further includes: label grade sets corresponding to the plurality of video characteristic labels, wherein each label grade set comprises a plurality of label grade parameters; the annotation information set corresponding to the first video segment further comprises: tag class parameters corresponding to the video characteristic tags in the tag information set;
the calling module 652 is specifically configured to call a video processing program corresponding to a video characteristic tag in the labeling information set, and process the first video segment by using a tag class parameter corresponding to the video characteristic tag in the labeling information set.
In one possible implementation, the video property tags in the annotation file include a plurality of: contrast labels, brightness labels, motion labels, face labels.
The device provided in this embodiment is configured to implement the technical solution provided by the method; the implementation principle and technical effect are similar and are not repeated.
Fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, as shown in fig. 8, where the electronic device includes:
a processor 291; the electronic device further comprises a memory 292, and may also include a communication interface (Communication Interface) 293 and a bus 294. The processor 291, the memory 292, and the communication interface 293 may communicate with each other via the bus 294. The communication interface 293 may be used for information transfer. The processor 291 may call logic instructions in the memory 292 to perform the methods of the above embodiments.
Further, the logic instructions in memory 292 described above may be implemented in the form of software functional units and stored in a computer-readable storage medium when sold or used as a stand-alone product.
The memory 292 is a computer readable storage medium, and may be used to store a software program, a computer executable program, and program instructions/modules corresponding to the methods in the embodiments of the present application. The processor 291 executes functional applications and data processing by running software programs, instructions and modules stored in the memory 292, i.e., implements the methods of the method embodiments described above.
Memory 292 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of the terminal device, etc. Further, memory 292 may include high-speed random access memory, and may also include non-volatile memory.
The embodiment of the application provides a computer readable storage medium, wherein computer executable instructions are stored in the computer readable storage medium, and the computer executable instructions are used for realizing the method provided by the embodiment when being executed by a processor.
The embodiment provides a computer program product comprising a computer program which, when executed by a processor, implements the method provided by the above embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (6)

1. A method of video processing, the method comprising:
acquiring target video information to be processed;
dividing the target video information according to a lens to obtain a plurality of second video segments;
identifying each second video segment, and determining the content type of each second video segment;
classifying each second video segment according to the content type of each second video segment to obtain a plurality of first video segments, wherein the content type of the second video segment corresponding to each first video segment is the same or the content type similarity of the second video segment corresponding to each first video segment is greater than a preset similarity threshold; wherein each first video segment comprises at least one second video segment;
respectively issuing a plurality of first video segments and a labeling file to each client, wherein the labeling file comprises a plurality of preset video characteristic labels and label grade sets corresponding to the video characteristic labels, and each label grade set comprises a plurality of label grade parameters;
receiving a plurality of first video segment corresponding annotation information sets returned by each client, wherein the first video segment corresponding annotation information sets comprise at least one video characteristic label in the plurality of video characteristic labels and label grade parameters corresponding to the video characteristic labels in the label information sets;
determining a video processing program corresponding to the video characteristic tag in the labeling information set according to the labeling information set corresponding to each first video segment;
and calling a video processing program corresponding to the video characteristic label in the labeling information set, and processing the first video segment by adopting a label grade parameter corresponding to the video characteristic label in the labeling information set.
2. The method of claim 1, wherein classifying each of the second video segments results in a plurality of first video segments, further comprising:
determining the number of the plurality of first video segments according to the number of the clients currently on line;
detecting the duration of each second video segment;
and classifying each second video segment according to the number of the first video segments and the duration of each second video segment to obtain a plurality of first video segments, wherein the duration of the first video segments is equal.
3. The method of claim 1, wherein the video property tags in the annotation file comprise a plurality of: contrast labels, brightness labels, motion labels, face labels.
4. A video processing platform, the platform comprising:
the acquisition unit is used for acquiring target video information to be processed;
the dividing unit is used for dividing the target video information to obtain a plurality of first video segments;
the system comprises a sending unit, a client and a label file, wherein the sending unit is used for respectively sending a plurality of first video segments and the label file to each client, the label file comprises a plurality of preset video characteristic labels and label grade sets corresponding to the video characteristic labels, and each label grade set comprises a plurality of label grade parameters;
the receiving unit is used for receiving label information sets corresponding to a plurality of first video segments returned by each client, wherein the label information sets corresponding to the first video segments comprise at least one video characteristic label in the plurality of video characteristic labels and label grade parameters corresponding to the video characteristic labels in the label information sets;
the processing unit is used for processing the first video segments according to the marking information set corresponding to each first video segment to obtain processed video information;
the dividing unit includes:
the lens detection module is used for dividing the target video information according to lenses to obtain a plurality of second video segments;
the dividing module is used for classifying each second video segment to obtain a plurality of first video segments, wherein each first video segment comprises at least one second video segment;
the dividing module comprises:
the identification module is used for identifying each second video segment and determining the content type of each second video segment; classifying each second video segment according to the content type of each second video segment to obtain a plurality of first video segments, wherein the content types of the second video segments corresponding to each first video segment are the same;
the processing unit includes:
the analysis module is used for determining a video processing program corresponding to the video characteristic tag in the labeling information set according to the labeling information set corresponding to each first video segment;
the calling module is used for calling a video processing program corresponding to the video characteristic tag in the labeling information set and processing the first video segment;
the calling module is specifically configured to call a video processing program corresponding to a video characteristic tag in the labeling information set, and process the first video segment by adopting a tag class parameter corresponding to the video characteristic tag in the labeling information set.
5. An electronic device, comprising: a memory, a processor;
a memory; a memory for storing the processor-executable instructions;
wherein the processor is configured to perform the method of any of claims 1-3 according to the executable instructions.
6. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out the method of any of claims 1-3.
CN202110458994.1A 2021-04-27 2021-04-27 Video processing method, processing platform, electronic device and storage medium Active CN115250377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110458994.1A CN115250377B (en) 2021-04-27 2021-04-27 Video processing method, processing platform, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110458994.1A CN115250377B (en) 2021-04-27 2021-04-27 Video processing method, processing platform, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN115250377A CN115250377A (en) 2022-10-28
CN115250377B true CN115250377B (en) 2024-04-02

Family

ID=83697510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110458994.1A Active CN115250377B (en) 2021-04-27 2021-04-27 Video processing method, processing platform, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN115250377B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006074742A (en) * 2004-08-04 2006-03-16 Noritsu Koki Co Ltd Photographing scene correcting method, program, and photographing scene correction processing system implementing the method
CN106162323A (en) * 2015-03-26 2016-11-23 无锡天脉聚源传媒科技有限公司 A kind of video data handling procedure and device
CN109525901A (en) * 2018-11-27 2019-03-26 Oppo广东移动通信有限公司 Method for processing video frequency, device, electronic equipment and computer-readable medium
CN109614517A (en) * 2018-12-04 2019-04-12 广州市百果园信息技术有限公司 Classification method, device, equipment and the storage medium of video
CN111416950A (en) * 2020-03-26 2020-07-14 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic equipment
CN111901535A (en) * 2020-07-23 2020-11-06 北京达佳互联信息技术有限公司 Video editing method, device, electronic equipment, system and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107948732B (en) * 2017-12-04 2020-12-01 京东方科技集团股份有限公司 Video playing method, video playing device and video playing system


Also Published As

Publication number Publication date
CN115250377A (en) 2022-10-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant