CN111954075B - Video processing model state adjusting method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111954075B
Authority
CN
China
Prior art keywords
video
processing model
video processing
target
state
Prior art date
Legal status
Active
Application number
CN202010845003.0A
Other languages
Chinese (zh)
Other versions
CN111954075A (en)
Inventor
蒋政胜
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010845003.0A
Publication of CN111954075A
Application granted
Publication of CN111954075B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a video processing model state adjustment method and apparatus, an electronic device, and a storage medium, where the method includes: acquiring a target video to be analyzed; detecting video frames in the target video through a video processing model, and determining the number of pixel points identified by the video processing model; determining the number of target pixel points in a video frame of the target video; determining the state of the video processing model based on the ratio of the number of pixel points identified by the video processing model to the number of target pixel points; and adjusting model parameters of the video processing model based on the state of the video processing model, thereby adjusting the state of the video processing model. In this way, the usage effect of the video processing model can be monitored dynamically, a video processing model unsuited to its usage scenario can be discovered in time, and its model parameters can be adjusted promptly to adapt to different usage environments.

Description

Video processing model state adjusting method and device, electronic equipment and storage medium
Technical Field
The present invention relates to video processing model state adjustment technologies, and in particular, to a video processing model state adjustment method and apparatus, an electronic device, and a storage medium.
Background
In live video usage scenarios, viewers can present various types of AI gifts to an anchor. The live streaming service server analyzes the live stream data in real time, dynamically and intelligently identifies the body part of the anchor that matches a received AI gift, and displays the gift to different viewers. In this process, artificial intelligence technology provides a solution: training a suitable video processing model to support the application. However, as the types of AI gifts increase, the usage environments of the video processing models corresponding to different anchors also diversify, and the processing effect of a video processing model needs to be monitored in real time so that its model parameters can be adjusted promptly and the model achieves a good usage effect.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video processing model state adjustment method and apparatus, an electronic device, and a storage medium. The technical solutions of the embodiments of the present invention are implemented as follows:
the invention provides a video processing model state adjusting method, which comprises the following steps:
acquiring a target video to be analyzed, wherein the target video to be analyzed comprises a video frame image carrying a special effect;
triggering a scoring evaluation process, detecting video frames in the target video through a video processing model, and determining the number of pixel points identified by the video processing model;
determining the number of target pixel points in a video frame of the target video;
determining the state of the video processing model based on the ratio of the number of the pixels identified by the video processing model to the number of target pixels;
and adjusting the model parameters of the video processing model based on the state of the video processing model to realize the adjustment of the state of the video processing model.
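Purely as an illustration of the claimed steps (the stand-in model, frame list, target pixel-point count, and threshold value below are all hypothetical, not the patented implementation), the pipeline can be sketched as:

```python
def adjust_model_state(frames, model, target_count, threshold):
    """Sketch of the claimed method: detect, count, take the ratio, decide the state."""
    # Detect video frames through the (stand-in) video processing model and
    # average the number of pixel points it identifies per frame.
    identified = sum(model(frame) for frame in frames) / len(frames)
    # The model state follows from the ratio of identified to target pixel points.
    ratio = identified / target_count
    state = "stable" if ratio > threshold else "to be adjusted"
    # Parameter adjustment is triggered only when the state calls for it.
    return state, state == "to be adjusted"

# Toy stand-in model: every frame yields 600 identified pixel points.
state, needs_adjustment = adjust_model_state(
    frames=["frame1", "frame2", "frame3"],
    model=lambda frame: 600,
    target_count=1000,   # hypothetical target pixel-point count
    threshold=0.8,       # hypothetical state threshold
)
print(state, needs_adjustment)  # to be adjusted True
```

With an identified/target ratio of 0.6 against a threshold of 0.8, the sketch reports the model as needing adjustment, which mirrors the final step of the claim.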
The embodiment of the invention also provides a video processing model state adjusting device, which comprises:
the information transmission module is used for acquiring a target video to be analyzed, where the target video to be analyzed includes video frame images carrying a special effect;
the information processing module is used for triggering a scoring evaluation process, detecting video frames in the target video through a video processing model and determining the number of pixel points identified by the video processing model;
the information processing module is used for determining the number of target pixel points in a video frame of the target video;
the information processing module is used for determining the state of the video processing model based on the ratio of the number of the pixel points identified by the video processing model to the number of the target pixel points;
and the information processing module is used for adjusting the model parameters of the video processing model based on the state of the video processing model so as to adjust the state of the video processing model.
In the above scheme,
the information transmission module is used for responding to a video uploading instruction and acquiring a streaming media address matched with a target video;
the information transmission module is used for determining target user information of a live video and acquiring a streaming media address matched with a target user based on the target user information of the live video;
the information transmission module is used for detecting the correctness of the streaming media address;
and the information transmission module is used for sending prompt information to prompt filling of a new streaming media address when the target video to be analyzed corresponding to the streaming media address cannot be played.
In the above scheme,
the information transmission module is used for configuring a video clearing process matched with the target video to be analyzed;
and the information transmission module is used for clearing the target video matched with the streaming media address through the video clearing process after the state of the video processing model is determined.
In the above scheme,
the information processing module is used for determining animation special effect information matched with the target video through the scoring evaluation process;
the information processing module is used for responding to the determined animation special effect information, and determining a video frame set in the target video within a unit time interval through a corresponding video transcoding instruction, wherein the video frame set comprises different continuous video frames;
the information processing module is used for detecting the video frame set through the video processing model and determining the number of pixel points in different video frames in the video frame set;
the information processing module is used for determining the number of the pixels identified by the video processing model based on the average value of the number of the pixels in different video frames in the video frame set.
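The averaging step above can be pictured in a few lines; the five per-frame counts are invented example values for a unit-time frame set, not data from the patent:

```python
def identified_pixel_points(per_frame_counts):
    """Average the per-frame identified pixel-point counts over a unit-time frame set."""
    # The scheme takes the mean over the consecutive frames sampled in one interval.
    return round(sum(per_frame_counts) / len(per_frame_counts))

# Hypothetical counts for five consecutive frames in one sampling interval.
print(identified_pixel_points([880, 910, 905, 895, 900]))  # 898
```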
In the above scheme,
the information processing module is used for determining the playing environment of the target video and the matched animation special effect information;
and the information processing module is used for determining the number of target pixel points in the video frame of the target video based on the playing environment of the target video and the matched animation special effect information.
In the above scheme,
the information processing module is used for determining a state threshold value of the video processing model based on the animation special effect information matched with the target video;
the information processing module is used for determining that the state of the video processing model is stable when the ratio of the number of the pixels identified by the video processing model to the number of the target pixels is greater than the state threshold of the video processing model; or
The information processing module is used for determining the state of the video processing model as a state to be adjusted when the ratio of the number of the pixels identified by the video processing model to the number of the target pixels is less than or equal to the state threshold of the video processing model.
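The state determination above can be sketched as follows; the effect names and per-effect thresholds are hypothetical illustration values, since the scheme derives the real threshold from the animation special-effect information matched with the target video:

```python
# Hypothetical per-effect state thresholds; in the scheme above, the threshold
# comes from the animation special-effect information of the target video.
STATE_THRESHOLDS = {"wings": 0.85, "halo": 0.75}

def determine_state(effect, identified, target):
    """Stable only when the identified/target ratio strictly exceeds the threshold."""
    ratio = identified / target
    return "stable" if ratio > STATE_THRESHOLDS[effect] else "to be adjusted"

print(determine_state("wings", 900, 1000))  # stable: 0.9 > 0.85
print(determine_state("wings", 850, 1000))  # to be adjusted: 0.85 is not > 0.85
```

Note the boundary case: a ratio exactly equal to the threshold yields the to-be-adjusted state, matching the "less than or equal to" wording of the scheme.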
In the above scheme,
the information processing module is used for iteratively adjusting the neural network parameters of the video processing model to form a new video processing model when the state of the video processing model is a state to be adjusted;
the information processing module is used for determining the ratio of the number of the pixels identified by the new video processing model to the number of the target pixels, and determining the neural network parameters of the video processing model until the state of the video processing model is determined to be stable.
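The iterative adjustment can be pictured as the loop below; the evaluation and adjustment callbacks are toy stand-ins for the scoring evaluation process and the neural network parameter update, and the 0.1 improvement per round is invented for illustration:

```python
def tune_until_stable(evaluate, adjust, threshold, max_rounds=10):
    """Re-adjust parameters until the identified/target ratio clears the threshold."""
    for rounds in range(max_rounds):
        if evaluate() > threshold:   # ratio from the scoring evaluation process
            return rounds            # the model state is now stable
        adjust()                     # form a new model by re-tuning its parameters
    return max_rounds                # still in the to-be-adjusted state; gave up

# Toy stand-ins: each adjustment round improves the ratio by 0.1.
ratio = [0.6]
rounds_needed = tune_until_stable(
    evaluate=lambda: ratio[0],
    adjust=lambda: ratio.__setitem__(0, round(ratio[0] + 0.1, 10)),
    threshold=0.8,
)
print(rounds_needed)  # 3
```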
In the above scheme, the apparatus further comprises:
the display module is used for displaying a user interface, where the user interface includes a first-person perspective view through which a training operator uses the video processing model and observes the state adjustment environment of the video processing model, and the user interface further includes a task control component and an information display component;
the display module is used for triggering a video display process through the information display component, and acquiring and displaying a target video to be analyzed corresponding to the streaming media address;
the display module is used for triggering the scoring evaluation process through the task control component and determining the state of the video processing model;
and the display module is used for presenting the state of the video processing model in the user interface through the information display component.
In the above scheme,
the display module is used for triggering a video display process through the information display component and determining animation special effect information matched with the target video;
and the display module is used for capturing the state image of the video processing model presented in the user interface through the task control component to form a state screenshot of the video processing model.
In the above scheme,
the display module is used for presenting a sharing function item for sharing the state screenshot of the video processing model in the user interface;
and the display module is used for sharing the state screenshot of the video processing model with the corresponding user in response to a trigger operation on the sharing function item for the state screenshot of the video processing model.
An electronic device according to an embodiment of the present invention includes:
a memory for storing executable instructions;
and the processor is used for implementing the foregoing video processing model state adjustment method when executing the executable instructions stored in the memory.
The embodiment of the invention also provides a computer-readable storage medium storing executable instructions that, when executed by a processor, implement the foregoing video processing model state adjustment method.
The invention has the following beneficial effects:
the method comprises the steps of obtaining a target video to be analyzed, wherein the target video to be analyzed comprises a video frame image carrying a special effect; triggering a scoring evaluation process, detecting video frames in the target video through a video processing model, and determining the number of pixel points identified by the video processing model; determining the number of target pixel points in a video frame of the target video; determining the state of the video processing model based on the ratio of the number of the pixels identified by the video processing model to the number of target pixels; based on the state of the video processing model, adjusting model parameters of the video processing model to realize the adjustment of the state of the video processing model; therefore, dynamic monitoring of the using effect of the video processing model can be achieved, the video processing model which is not suitable in the using scene can be found in time, and model parameters of the video processing model can be adjusted in time to adapt to different using environments.
Drawings
Fig. 1 is a schematic view of a usage scenario of a video processing model state adjustment method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a display effect of a video processing model state adjustment method according to an embodiment of the present invention;
fig. 4 is a schematic flow chart illustrating an alternative method for adjusting a state of a video processing model according to an embodiment of the present invention;
fig. 5 is a schematic flow chart illustrating an alternative method for adjusting a state of a video processing model according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an optional display effect of a video processing model state adjustment method according to an embodiment of the present invention;
fig. 7 is an optional flowchart of a video processing model state adjustment method according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Before the embodiments of the present invention are described in further detail, the terms used in the embodiments of the present invention are explained; the following explanations apply to these terms.
1) In response to: indicates the condition or state on which a performed operation depends. When the condition or state on which it depends is satisfied, the one or more operations performed may be in real time or may have a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
2) Terminal: includes, but is not limited to, a common terminal and a dedicated terminal, where the common terminal maintains a long connection and/or a short connection with a sending channel, and the dedicated terminal maintains a long connection with the sending channel.
3) Client: a carrier in a terminal that implements a specific function; for example, a mobile client (APP) is the carrier of a specific function in a mobile terminal, such as the function of producing a report or the function of displaying a report.
4) Component: a functional module of an applet view, also called a front-end component; buttons, titles, tables, sidebars, content areas, and footers in a page are all components, which contain modular code to facilitate reuse across different pages of the applet.
5) Mini program (applet): a program developed in a front-end language (e.g., JavaScript) that implements services within HyperText Markup Language (HTML) pages; it is software that a client (e.g., a browser, or any client with an embedded browser core) downloads over a network (e.g., the Internet) and interprets and executes in the client's browser environment, saving an installation step on the client. For example, applets implementing services such as ticket purchase, task processing and production, and data presentation can be downloaded and run in a social network client.
6) Based on: indicates the condition or state on which an operation to be performed depends. When the condition or state on which it depends is satisfied, the one or more operations performed may be in real time or may have a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
7) Model training: multi-class learning performed on an image data set. The model can be built with deep learning frameworks such as TensorFlow or PyTorch, combining multiple neural network layers such as CNN layers into a multi-class model. The input of the model is a three-channel or original-channel matrix obtained by reading an image with a tool such as OpenCV; the output of the model is multi-class probabilities, from which text information is finally produced through an algorithm such as softmax. During training, an objective function such as cross entropy drives the model toward correct behavior.
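The softmax output and cross-entropy objective named in this definition can be illustrated without any framework; the following is a generic, framework-free sketch of the two functions with invented logit values, not the patented training procedure:

```python
import math

def softmax(logits):
    """Turn raw class scores into a probability distribution."""
    m = max(logits)                           # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, true_class):
    """Objective cited above: penalize low probability on the true class."""
    return -math.log(probs[true_class])

probs = softmax([2.0, 1.0, 0.1])
print(round(sum(probs), 6))                   # 1.0: a valid distribution
# The loss is smaller when the model favors the correct class.
print(cross_entropy(probs, 0) < cross_entropy(probs, 2))  # True
```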
8) Neural Network (NN): an Artificial Neural Network (ANN), referred to as a neural network for short, is a mathematical or computational model in machine learning and cognitive science that imitates the structure and function of biological neural networks (the central nervous system of animals, especially the brain) and is used for estimating or approximating functions.
Fig. 1 is a schematic diagram of a usage scenario of the video processing model state adjustment method according to an embodiment of the present invention. Referring to fig. 1, the terminals (including the terminal 10-1 and the terminal 10-2) are provided with clients capable of displaying corresponding videos, such as clients or plug-ins for live streaming or video playback, through which a user can obtain and display different videos (e.g., live video streams) or play back the live video of any anchor. The terminals are connected to the server 200 through the network 300, which may be a wide area network, a local area network, or a combination of the two, and which uses wireless links for data transmission.
The video processing model state adjustment method provided by the present invention can be applied not only to live video playback in a live streaming client but also to a live-video applet in a WeChat applet; the processing result of the video processing model is finally presented on a User Interface (UI) to improve the user's interactive experience. The method provided by the embodiments of this application can be implemented based on Artificial Intelligence (AI). AI is a theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that reacts in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines so that the machines have the capabilities of perception, reasoning, and decision-making.
The artificial intelligence technology is a comprehensive discipline involving a wide range of fields, covering both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
In the embodiments of this application, the artificial intelligence software technologies mainly involved include the above-mentioned speech processing technology and machine learning, among other directions. For example, the Automatic Speech Recognition (ASR) technology in speech technology may be involved, which includes speech signal preprocessing, speech signal frequency-domain analysis, speech signal feature extraction, speech signal feature matching/recognition, speech training, and the like.
For example, Machine Learning (ML) may be involved. Machine learning is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and more; it specializes in studying how a computer simulates or realizes human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning generally includes techniques such as deep learning, which includes artificial neural networks such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Deep Neural Networks (DNN).
As will be described in detail below, the electronic device according to the embodiment of the present invention may be implemented in various forms, such as a dedicated terminal with a model training function or a server with a model training function, for example the server 200 in the preceding fig. 1. Fig. 2 is a schematic diagram of the composition structure of an electronic device according to an embodiment of the present invention. It should be understood that fig. 2 shows only an exemplary structure of the electronic device rather than the whole structure, and part or all of the structure shown in fig. 2 may be implemented as needed.
The electronic equipment provided by the embodiment of the invention comprises: at least one processor 201, memory 202, user interface 203, and at least one network interface 204. The various components in the electronic device are coupled together by a bus system 205. It will be appreciated that the bus system 205 is used to enable communications among the components. The bus system 205 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 205 in fig. 2.
The user interface 203 may include, among other things, a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, or a touch screen.
It will be appreciated that the memory 202 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The memory 202 in embodiments of the present invention is capable of storing data to support operation of the terminal (e.g., 10-1). Examples of such data include: any computer program, such as an operating system and application programs, for operating on a terminal (e.g., 10-1). The operating system includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application program may include various application programs.
In some embodiments, the electronic device provided in the embodiments of the present invention may be implemented by a combination of hardware and software, for example as a processor in the form of a hardware decoding processor programmed to execute the video processing model state adjustment method provided in the embodiments of the present invention. Such a processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
As an example of implementation combining software and hardware, the electronic device provided by the embodiment of the present invention may be directly embodied as a combination of software modules executed by the processor 201. The software modules may be located in a storage medium within the memory 202; the processor 201 reads the executable instructions included in the software modules in the memory 202 and, in combination with necessary hardware (for example, the processor 201 and other components connected to the bus 205), completes the video processing model state adjustment method provided by the embodiment of the present invention.
By way of example, the processor 201 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, discrete gate or transistor logic, or discrete hardware components; the general-purpose processor may be a microprocessor or any conventional processor.
As an example of hardware implementation, the apparatus provided by the embodiment of the present invention may directly employ the processor 201 in the form of a hardware decoding processor, for example one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components, to implement the video processing model state adjustment method provided by the embodiment of the present invention.
The memory 202 in the embodiments of the present invention is used to store various types of data to support the operation of the electronic device. Examples of such data include any executable instructions for operating on the electronic device, such as a program implementing the video processing model state adjustment method of an embodiment of the invention.
In other embodiments, the apparatus provided by the embodiments of the present invention may be implemented in software. Fig. 2 illustrates a video processing model state adjustment apparatus stored in the memory 202, which may be software in the form of programs and plug-ins and comprises a series of modules; as an example of the program stored in the memory 202, the video processing model state adjustment apparatus includes the following software modules: an information transmission module 2081 and an information processing module 2082. When the software modules in the apparatus are read into RAM by the processor 201 and executed, the video processing model state adjustment method according to the embodiment of the present invention is implemented; the functions of the respective software modules are described in turn in the following embodiments.
It should be noted that fig. 2 shows all the modules at once for convenience of expression, but this should not be taken to exclude an implementation of the video processing model state adjustment apparatus that includes only the information transmission module 2081 and the information processing module 2082; the functions of the respective modules are described below.
In the related art, as an example, referring to fig. 3, fig. 3 is a schematic diagram of the display effect of the video processing model state adjustment method according to the present invention. The server 200 of the preceding fig. 1 is used to deploy a trained video processing model to achieve the animation effect of an AI gift. Specifically, in the live video use scene, live audiences can present various types of AI gifts to the anchor; the live broadcast service server analyzes the live stream data in real time, dynamically and intelligently identifies the body part of the anchor matching the received AI gift, and presents the received AI gift to different live viewers (for example, the wing effect shown in fig. 3, i.e., adding wings at the relevant position of the anchor's body through the video processing model). Artificial intelligence techniques provide a solution for training an appropriate video processing model to support the above application. However, as the types of AI gifts increase, the use environments of the video processing models corresponding to different anchors also diversify, and the processing effect of the video processing model needs to be monitored in real time so that its model parameters can be adjusted in time and a better use effect obtained.
In this process, taking the wing special effect of the AI gift as an example, before deployment of the video processing model, a corresponding model needs to be trained on a large amount of live-video anchor body material, singing segments, and dancing segments in the training samples to determine the model parameters. When the deployment test is performed after model training, whether the wing special effect generated at the corresponding part of the anchor's body is correctly attached can only be judged by the naked eyes of testers; and for a scene in which the video processing model identifies the live user's environment as singing and dancing, the accuracy of the model's scene identification can only be verified by watching the corresponding singing-and-dancing label in the live broadcast room. Further, when the video processing model is fully deployed in the corresponding service server, the use effect of the AI gift can only be observed online manually by an operator; if a certain scene is found to have a poor recognition effect, no effective visual data can be provided for the model improvement stage. In the optimization process of the video processing model, the accuracy of the model after regression is verified by collecting a large amount of anchor video data, and whether the use effect of the optimized video processing model is clearly improved can be supported only by such phenomenon data.
In order to overcome the above-mentioned drawbacks, referring to fig. 4, fig. 4 is an optional flowchart of a video processing model state adjustment method according to an embodiment of the present invention, and it can be understood that the steps shown in fig. 4 can be executed by various electronic devices operating the video processing model state adjustment apparatus, for example, various game devices with the video processing model state adjustment apparatus, wherein a dedicated terminal with the video processing model state adjustment apparatus can be packaged in the terminal 10-1 shown in fig. 1 to execute corresponding software modules in the video processing model state adjustment apparatus shown in the previous fig. 2. The following is a description of the steps shown in fig. 4.
Step 401: and the video processing model state adjusting device acquires a target video to be analyzed.
The target video to be analyzed comprises a video frame image carrying a special effect.
In some embodiments of the present invention, obtaining a target video to be analyzed may be implemented by:
responding to a video uploading instruction, acquiring a streaming media address matched with the target video; or determining target user information of a live video and acquiring a streaming media address matched with the target user based on that information; detecting the correctness of the streaming media address; and sending prompt information to prompt the filling of a new streaming media address when the target video to be analyzed corresponding to the streaming media address cannot be played. Because the use environments of the video processing model are various and the video information in a live video may not be recorded, the streaming media address matched with the target user can be obtained from the target user information of the live video, so that the state of the video processing model used in the live video can be monitored in real time and distortion of the model's processing effect caused by a change of its use environment can be avoided, without affecting the user. In combination with the display effect shown in the preceding fig. 3, when a user gives a certain AI gift to the anchor during a live broadcast, the service server analyzes the live stream data in real time through the deployed video processing model, dynamically identifies each part of the anchor's body, such as the back position, and displays the wing special effect on the anchor according to the back-position identification result; if the parameters of the video processing model do not match the current use scene, the wing special effect will be distorted, affecting the user's experience.
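The correctness detection of the streaming media address described above can be sketched as follows. This is a minimal illustration only: the function name is hypothetical, and a real deployment would additionally probe playability (for example with a player or ffprobe) and send the prompt information when the stream cannot be played, rather than merely parsing the URL.

```python
from urllib.parse import urlparse

def validate_stream_address(url: str) -> bool:
    """Coarse syntactic check that an address looks like a playable stream URL.

    A deployment would follow this with an actual pull of the stream and
    prompt the user to fill in a new address on failure, as described above.
    """
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https", "rtmp") and bool(parsed.netloc)
```
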
When a user selects a state evaluation scoring item corresponding to the use environment of an AI gift, the scoring platform corresponding to the video processing model state adjustment method provided by the application supports two modes of providing the target video. In the local-video uploading mode, the local video is uploaded to a file server and the file server returns a stream address link; specifically, a node.js file uploading service process is established in the file server, and after a user finishes one scoring session, the target video data on the file server can be removed.
In some embodiments of the present invention, a video clearing process matched with the target video to be analyzed may also be configured, and after the state of the video processing model is determined, the target video matched with the streaming media address is cleared through the video clearing process. The file server may also be provided with a timed file-clearing script, preventing excessive memory occupation from affecting the file uploading service. After the file is stored on the server, the stream address returned by the server is obtained through the ajax callback function and assigned to the link input box.
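The timed clearing described above can be sketched as a pure helper that decides which uploads are due for removal; the names and the (filename, mtime) data shape are illustrative, and a deployment would wire this to the upload directory on a timer and unlink the returned files.

```python
import time

def stale_uploads(files, max_age_seconds, now=None):
    """Given (filename, mtime) pairs, return the names due for timed clearing.

    Mirrors the timed file-clearing script described above: uploads older
    than max_age_seconds are selected so the file server's storage does not
    grow without bound.
    """
    now = time.time() if now is None else now
    return [name for name, mtime in files if now - mtime > max_age_seconds]
```
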
Step 402: and triggering a scoring evaluation process by the video processing model state adjusting device, detecting video frames in the target video through the video processing model, and determining the number of pixel points identified by the video processing model.
In some embodiments of the present invention, a scoring evaluation process is triggered, a video frame in the target video is detected through a video processing model, and the number of pixels identified by the video processing model is determined, which can be implemented in the following manner:
determining animation special effect information matched with the target video through the scoring evaluation process; in response to the determined animation special effect information, determining, through corresponding video transcoding instructions, a set of video frames within a unit time interval in the target video, wherein the set comprises different consecutive video frames; detecting the video frame set through the video processing model and determining the number of pixel points in the different video frames of the set; and determining the number of pixel points identified by the video processing model based on the average of the numbers of pixel points in the different video frames of the set. Monitoring of the state of the video processing model can be realized by scoring its recognition effect: the score of a model is calculated through a corresponding algorithm rule and used to measure the model's recognition result. The higher the score, the better the recognition effect, the more closely the special effect display fits the corresponding part of the anchor, and the higher the consumer's evaluation of the product. Each video frame is composed of different pixel points; the more pixel points the video processing model identifies under the corresponding playing environment and matched animation special effect information, the clearer and more accurate the display effect of the animation special effect (the animated expression of the AI gift).
Specifically, after a developer clicks the function item to start scoring, the scoring evaluation process first sends the stream address to a scoring server through an ajax request; the scoring server returns a flag bit to the front end after receiving the stream address, and if the processing is successful, the corresponding score acquisition step is executed. In some embodiments of the present invention, a timer process may further be configured to poll the background scoring server for score data at corresponding time intervals (e.g., every three seconds); if a corresponding score is returned to the front-end display interface, the scoring value can be plotted on the front-end page. Further, the scoring server first pulls the video stream resource according to the acquired stream address and then extracts the video frames corresponding to each second using an ffmpeg command; in the use scene of the invention, a video with a length of 1 s comprises 20-30 video frames, and each video frame is composed of different unit feature points, called pixel points. Finally, the average over every 10 consecutive frames is taken as one scoring result. Continuing with the wing AI gift identification of the preceding embodiment as an example, by performing target marking processing on the video frames, the scoring server can determine which pixel points in a video frame belong to the anchor's head, which to the upper body, and which to the arms.
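The 10-frame averaging step above can be sketched as follows. Frame extraction itself (the ffmpeg step) and the model inference are omitted; the function name and data shape are illustrative, assuming per-frame counts of identified pixel points have already been produced.

```python
def score_windows(per_frame_identified, target_count, window=10):
    """Collapse per-frame identified-pixel counts into one score per window.

    Each score is the windowed average count expressed as a percentage of the
    number of target pixel points, mirroring the averaging over every 10
    consecutive frames described above.
    """
    scores = []
    for i in range(0, len(per_frame_identified) - window + 1, window):
        avg = sum(per_frame_identified[i:i + window]) / window
        scores.append(100.0 * avg / target_count)
    return scores
```
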
Step 403: and the video processing model state adjusting device determines the number of target pixel points in the video frame of the target video.
In some embodiments of the present invention, determining the number of target pixel points in a video frame of the target video may be implemented by:
determining the playing environment of the target video and the matched animation special effect information; and determining the number of target pixel points in the video frame of the target video based on the playing environment of the target video and the matched animation special effect information. In some embodiments of the present invention, the anchor end of a live video receives a presentation instruction of a special-effect gift, where the instruction includes the identification of the target anchor to whom the special-effect gift is given and the ID of the animation special effect information. The anchor end acquires the original live video corresponding to the target anchor ID according to the received presentation instruction, and determines the animation special effect information and its corresponding feature region according to the animation special effect information ID. For example, the animation special effect information ID 0001 corresponds to the animation special effect information "angel wing", whose feature region is the back; the animation special effect information ID 0002 corresponds to "angel kiss", whose feature region is the lips (the animation special effect shows an angel circling the anchor's avatar and then kissing the anchor's lips).
Further, in this process, the number of target pixel points determined for the "angel wing" animation special effect information may be greater than that determined for the "angel kiss" animation special effect information. Specifically, the number of target pixel points is related to the playing environment of the target video: when the animation special effect information is "angel wing", the number of target pixel points is 100, and the video processing model can present the "angel wing" animation special effect at the corresponding pixel points in the display interface only after identifying 60 or more target pixel points; similarly, when the animation special effect information is "angel kiss", the number of target pixel points is 70, and the video processing model can present the "angel kiss" animation special effect at the corresponding pixel points only after identifying 35 or more target pixel points.
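The worked numbers above (100 target points with a minimum of 60 for "angel wing"; 70 with a minimum of 35 for "angel kiss") can be captured in a small lookup. The dictionary layout and names are illustrative only, not a structure defined by the embodiment.

```python
# Per-effect requirements taken from the worked example above (illustrative).
EFFECT_SPECS = {
    "0001": {"name": "angel wing", "target_points": 100, "min_identified": 60},
    "0002": {"name": "angel kiss", "target_points": 70, "min_identified": 35},
}

def can_render_effect(effect_id: str, identified_points: int) -> bool:
    """True when enough target pixel points were identified to present the effect."""
    return identified_points >= EFFECT_SPECS[effect_id]["min_identified"]
```
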
And the video processing model state adjustment server extracts a video frame image from the original live video, processes the video frame image according to the characteristic region corresponding to the animation special effect information and determines the display position information of the characteristic region in the video frame image. For example, the feature area of the animation special effect information is the back, and after the video frame image is processed, the display position information of the "back" of the video frame image in the original live video is determined, or the animation special effect is displayed on the "lip" of the anchor.
Step 404: the video processing model state adjusting device determines the state of the video processing model based on the ratio of the number of the pixels identified by the video processing model to the number of the target pixels.
In some embodiments of the present invention, determining the state of the video processing model based on the ratio of the number of pixels identified by the video processing model to the number of target pixels may be implemented by:
determining a state threshold of the video processing model based on the animation special effect information matched with the target video; when the ratio of the number of pixel points identified by the video processing model to the number of target pixel points is greater than the state threshold, determining that the state of the video processing model is stable; or, when the ratio is less than or equal to the state threshold, determining that the state of the video processing model is a state to be adjusted. In the process of adding the "angel wing" to the anchor's back, according to the anchor's back contour information, the foreground layer in which the anchor figure is located is set over the special effect layer in which the "angel wing" is located, shielding a set area of the "angel wing" and achieving the effect of adding the "angel wing" to the back of the anchor figure; a part of the feathers is added to the anchor's arms, shielding the corresponding arm areas, and the other part of the feathers is attached to the anchor's shoulders. The video processing model is therefore required to identify pixel points at different positions in the video frame, and the more pixel points it identifies, the clearer the display effect of the animation special effect. Similarly, the number of target pixel points required for the "angel kiss" animation special effect information is smaller than that required for the "angel wing" animation special effect information, so the state thresholds of the video processing model corresponding to different animation special effect information matched with the target video in different use environments are different: the more pixel points that need to be identified, the larger the state threshold of the corresponding video processing model.
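The threshold comparison in this step can be sketched as a single function; the state names and signature below are illustrative.

```python
def model_state(identified_points: int, target_points: int, state_threshold: float) -> str:
    """Classify the video processing model from the identified/target ratio.

    Stable when the ratio strictly exceeds the state threshold; otherwise the
    model is in a state to be adjusted, as described above.
    """
    ratio = identified_points / target_points
    return "stable" if ratio > state_threshold else "to_be_adjusted"
```
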
If 100 target pixel points are marked in a video frame and the frame is then identified through the video processing model, and 80 of the points finally identified by the model lie within the allowed distance range, then the model's identification result for those 80 pixel points is reliable and the score is 80; the calculation is shown as formula 1:
Score = (number of pixel points reliably identified by the video processing model / number of target pixel points) × 100 (formula 1)
step 405: and the video processing model state adjusting device adjusts the model parameters of the video processing model based on the state of the video processing model so as to realize the adjustment of the state of the video processing model.
In some embodiments of the present invention, based on the state of the video processing model, adjusting the model parameters of the video processing model to adjust the state of the video processing model may be implemented by:
when the state of the video processing model is a state to be adjusted, iteratively adjusting the neural network parameters of the video processing model to form a new video processing model; and determining the ratio of the number of pixel points identified by the new video processing model to the number of target pixel points, repeating until the state of the video processing model is determined to be stable, thereby determining the neural network parameters of the video processing model. When the state of the video processing model is analyzed, the score of the corresponding video processing model is returned through the scoring server while the video is synchronously played in the corresponding user interface; with the video playing and the scoring in the same interface, the user can clearly tell which use scenes score high and which score low, and can then capture images of the low-scoring scenes for subsequent analysis.
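The iterate-until-stable loop of step 405 can be sketched as below. The detect and retrain callables are stand-ins for the actual neural network inference and parameter update, which the embodiment does not specify; everything here is illustrative.

```python
def adjust_until_stable(detect, retrain, params, target_points, state_threshold, max_rounds=50):
    """Retrain until the identified/target ratio exceeds the state threshold.

    detect(params)  -> number of target pixel points identified with these parameters
    retrain(params) -> new parameters after one round of iterative adjustment
    Returns the final parameters and the resulting model state.
    """
    for _ in range(max_rounds):
        if detect(params) / target_points > state_threshold:
            return params, "stable"
        params = retrain(params)
    return params, "to_be_adjusted"
```
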
With continuing reference to fig. 5, fig. 5 is an alternative flow chart of the video processing model state adjustment method according to the embodiment of the present invention, and it can be understood that the steps shown in fig. 5 can be executed by various electronic devices operating the video processing model state adjustment apparatus, such as various game devices with the video processing model state adjustment apparatus, wherein a dedicated terminal with the video processing model state adjustment apparatus can be packaged in the terminal 10-1 shown in fig. 1 to execute the corresponding software modules in the video processing model state adjustment apparatus shown in the previous fig. 2. The following is a description of the steps shown in fig. 5.
Step 501: a user interface is displayed.
The user interface comprises a first-person perspective picture for a trainer who uses the video processing model and observes the video processing model state adjustment environment, and further comprises a task control component and an information display component.
Fig. 6 is a schematic diagram of an optional display effect of the video processing model state adjustment method provided in the embodiment of the present invention. As shown in fig. 6, a user may select a local video and start uploading; the server storing the video returns a stream server address that is automatically filled into position "2". Alternatively, the user may manually copy the stream address of a target video from the video live broadcast software and paste it into position "2". The AI gift type to monitor is selected through position "1", and the detection effect of the corresponding video processing model is displayed through position "4".
Further, whether the stream address is correct can be verified through a video player: if the video plays normally, the stream address is correct and the corresponding scoring evaluation process is triggered; if the video plays abnormally, for example a black screen appears, the stream address link and the uploaded local video need to be checked so that the stream plays normally and the subsequent scoring evaluation process is not affected.
After the user adds a legal stream url address, scoring is started through a corresponding trigger instruction and the data is transmitted to the scoring server corresponding to the video processing model; the "start scoring" item shown at "3" in the display interface of fig. 6 changes to "end scoring". When the user clicks to end scoring, the user is prompted to refresh the interface to score again, and the scoring change of the video processing model can be clearly read at position "6".
Step 502: triggering a video display process through the information display component, and acquiring and displaying a target video to be analyzed corresponding to the streaming media address;
step 503: triggering the scoring evaluation process through the task control assembly to determine the state of the video processing model;
step 504: presenting, by the information presentation component, a state of the video processing model in the user interface.
In some embodiments of the present invention, a video display process may be triggered by the information display component, and animation special effect information matched with the target video is determined; and the state image of the video processing model presented in the user interface is intercepted through the task control component to form a state screenshot of the video processing model. Further, a sharing function item for sharing the state screenshot of the video processing model is presented in the user interface; and in response to a triggering operation on the state screenshot sharing function item of the video processing model, the state screenshot of the video processing model is shared to the corresponding user.
Continuing with the exemplary structure of the video processing model state adjustment apparatus provided by the embodiments of the present invention as implemented as a software module, in some embodiments, as shown in fig. 2, the software module in the video processing model state adjustment apparatus stored in the memory may include: an information transmission module 2081 and an information processing module 2082.
The information transmission module 2081, configured to obtain a target video to be analyzed, where the target video to be analyzed includes a video frame image carrying a special effect; the information processing module 2082 is used for triggering a scoring evaluation process, detecting video frames in the target video through a video processing model, and determining the number of pixel points identified by the video processing model; the information processing module 2082 is used for determining the number of target pixel points in the video frame of the target video; the information processing module 2082 is configured to determine a state of the video processing model based on a ratio of the number of pixels identified by the video processing model to the number of target pixels; the information processing module 2082 is configured to adjust the model parameters of the video processing model based on the state of the video processing model, so as to adjust the state of the video processing model.
In some embodiments of the present invention, the information transmission module 2081 is configured to, in response to the video upload instruction, obtain a streaming media address matching the target video; the information transmission module 2081 is used for determining target user information of a live video and acquiring a streaming media address matched with a target user based on the target user information of the live video; the information transmission module 2081 is used for detecting the correctness of the streaming media address; the information transmission module 2081 is configured to send a prompt message to prompt to fill a new streaming media address when the target video to be analyzed corresponding to the streaming media address cannot be played.
In some embodiments of the present invention, the information transmission module 2081 is configured to configure a video clearing process matched with the target video to be analyzed; the information transmission module 2081, configured to clear, through the video clearing process, the target video matched with the streaming media address after determining the state of the video processing model.
In some embodiments of the present invention, the information processing module 2082 is configured to determine, through the scoring evaluation process, animation special effect information matched with the target video; the information processing module 2082 is used for responding to the determined animation special effect information, and determining a video frame set in a unit time interval in the target video through a corresponding video transcoding instruction, wherein the video frame set comprises different continuous video frames; the information processing module 2082 is configured to detect the video frame set through the video processing model, and determine the number of pixel points in different video frames in the video frame set; the information processing module 2082 is configured to determine the number of pixels identified by the video processing model based on an average value of the number of pixels in different video frames in the video frame set.
In some embodiments of the present invention, the information processing module 2082 is configured to determine the playing environment of the target video and the matched animation special effect information; and the information processing module 2082 is configured to determine the number of target pixel points in the video frame of the target video based on the playing environment of the target video and the matched animation special effect information.
In some embodiments of the present invention, the information processing module 2082 is configured to determine a state threshold of the video processing model based on the animation special effect information matched with the target video; the information processing module 2082 is configured to determine that the state of the video processing model is stable when the ratio of the number of pixels identified by the video processing model to the number of target pixels is greater than the state threshold of the video processing model; the information processing module 2082 is configured to determine that the state of the video processing model is the state to be adjusted when the ratio of the number of the pixels identified by the video processing model to the number of the target pixels is less than or equal to the state threshold of the video processing model.
In some embodiments of the present invention, the information processing module 2082 is configured to iteratively adjust a neural network parameter of the video processing model to form a new video processing model when the state of the video processing model is a state to be adjusted; the information processing module 2082 is configured to determine a ratio of the number of pixels identified by the new video processing model to the number of target pixels, and determine a neural network parameter of the video processing model until it is determined that the state of the video processing model is stable.
In some embodiments of the invention, the apparatus further comprises:
the display module 2083 is configured to display a user interface, where the user interface includes a person-weighed view angle picture obtained by training a person with a video processing model and observing a state adjustment environment of the video processing model, and the user interface further includes a task control component and an information display component; the display module 2083 is configured to trigger a video display process through the information display component, and acquire and display a target video to be analyzed corresponding to the streaming media address; the display module 2083 is used for triggering the scoring and evaluating process through the task control assembly and determining the state of the video processing model; a display module 2083, configured to present, through the information presentation component, a state of the video processing model in the user interface.
In some embodiments of the present invention, the display module 2083 is configured to trigger a video display process through the information display component, and determine animation special effect information matched with the target video; the display module 2083 is configured to intercept, through the task control component, the state image of the video processing model presented in the user interface to form a state screenshot of the video processing model.
In some embodiments of the present invention, a display module, configured to present, in the user interface, a sharing function item for sharing a state screenshot of the video processing model; and the display module is used for responding to the triggering operation of the state screenshot sharing function item aiming at the video processing model and sharing the state screenshot of the video processing model to the corresponding user.
According to the electronic device shown in fig. 2, in one aspect of the present application, the present application also provides a computer program product or a computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations of the video processing model state adjustment method described above.
With reference to fig. 7 in conjunction with the foregoing fig. 1, fig. 7 is an optional flowchart of a video processing model state adjustment method according to an embodiment of the present invention. It can be understood that the steps shown in fig. 7 may be executed by various electronic devices running the video processing model state adjustment apparatus, for example to identify an AI gift through a video processing model in a live video environment, where the animation special effect information ID received by the terminal 10-1 is 0001 and the corresponding animation special effect is "angel wing", the animation special effect information ID received by the terminal 10-2 is 0002 and the corresponding animation special effect is "angel kiss", and the neural network parameters of the two video processing models are the same. The trained video processing model may be run on a variety of devices equipped with the video processing model state adjustment apparatus, where dedicated terminals with the video processing model state adjustment apparatus may be packaged as the terminals 10-1 and 10-2 shown in fig. 1, so as to execute the corresponding software modules of the video processing model state adjustment apparatus shown in the preceding fig. 2. The steps shown in fig. 7 are described below.
Step 701: acquire target video 1 and target video 2 to be analyzed from different live streaming media information.
Step 702: trigger a scoring evaluation process through a scoring evaluation server, and detect video frames in target video 1 and target video 2 through the video processing model.
Step 703: determine, for the video frames of target video 1 and target video 2, the number of target pixel points and the number of pixel points identified by the video processing model.
Step 704: determine the scores of the video processing model in the different usage scenarios based on the ratio of the number of pixel points identified by the video processing model to the number of target pixel points.
Step 705: adjust the parameters of the video processing model according to its scores, so as to optimize the video processing model.
Based on the scores, the video processing model in the terminal 10-1 is no longer applicable and requires parameter adjustment or retraining, while the video processing model in the terminal 10-2 is in a stable state and can remain deployed and in use.
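Steps 701-705 can be sketched as follows. This is an illustrative, non-authoritative sketch: the threshold value, function names, and the sample pixel counts for the two terminals are assumptions for the sake of the example, since the embodiment does not fix them.

```python
# Hypothetical sketch of steps 701-705: score a video processing model by the
# ratio of pixel points it identifies in special-effect video frames to the
# number of target pixel points expected for that effect, then flag scenes
# whose score is too low for retraining.

STATE_THRESHOLD = 0.8  # assumed score threshold; the embodiment leaves it unspecified

def score_model(identified_pixels: list[int], target_pixels: int) -> float:
    """Average the identified-pixel counts over the frame set, divide by the target count."""
    if not identified_pixels or target_pixels == 0:
        return 0.0
    avg_identified = sum(identified_pixels) / len(identified_pixels)
    return avg_identified / target_pixels

def evaluate_scenes(scenes: dict[str, tuple[list[int], int]]) -> dict[str, str]:
    """Map each usage scene to 'stable' or 'to-be-adjusted' based on its score."""
    states = {}
    for scene, (identified, target) in scenes.items():
        score = score_model(identified, target)
        states[scene] = "stable" if score > STATE_THRESHOLD else "to-be-adjusted"
    return states

# Terminal 10-1 ("angel wing") vs. terminal 10-2 ("angel kiss"): same neural
# network parameters, different effects, hence different per-scene scores.
states = evaluate_scenes({
    "terminal 10-1": ([300, 320, 310], 500),  # low ratio -> parameter adjustment or retraining
    "terminal 10-2": ([480, 490, 500], 500),  # high ratio -> keep deployed
})
print(states)
```

The per-frame averaging mirrors the claim language ("based on the average value of the number of pixel points in different video frames in the video frame set"), while the threshold comparison mirrors the stable / to-be-adjusted split of claim 5.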
Therefore, dynamic monitoring of the usage effect of the video processing model can be achieved, usage scenarios with low scores can be found in time, and the model parameters of the video processing model can be adjusted in time to adapt to different usage environments. Meanwhile, the adjusted and optimized video processing model can be visually tested in the video processing model effect monitoring platform, which makes it easier to judge the optimization effect, improves the training efficiency of the video processing model, and also improves the viewing experience of live video users.
The beneficial technical effects are as follows:
The video processing model state adjustment method provided by the embodiment of the invention acquires a target video to be analyzed, wherein the target video to be analyzed comprises a video frame image carrying a special effect; triggers a scoring evaluation process, detects video frames in the target video through a video processing model, and determines the number of pixel points identified by the video processing model; determines the number of target pixel points in a video frame of the target video; determines the state of the video processing model based on the ratio of the number of pixel points identified by the video processing model to the number of target pixel points; and adjusts the model parameters of the video processing model based on the state of the video processing model, so as to adjust the state of the video processing model. Therefore, dynamic monitoring of the usage effect of the video processing model can be achieved, a video processing model that is not suitable for its usage scenario can be found in time, and the model parameters of the video processing model can be adjusted in time to adapt to different usage environments.
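The iterative adjustment described above (keep tuning the neural network parameters while the model is in the to-be-adjusted state, stop once it is stable) can be sketched as follows. The threshold, iteration cap, and callback names are assumptions for illustration; the actual parameter-tuning step is model-specific and is stubbed out here.

```python
# Illustrative sketch (not the patented implementation) of the adjustment loop:
# while the identified/target pixel ratio stays at or below the state
# threshold, the model is in the to-be-adjusted state and one more round of
# parameter tuning is performed; the loop stops once the state is stable.

def adjust_until_stable(evaluate, adjust, threshold: float, max_iters: int = 50) -> bool:
    """evaluate() returns the current identified/target pixel ratio;
    adjust() performs one round of neural network parameter tuning.
    Returns True if the model reached the stable state within max_iters rounds."""
    for _ in range(max_iters):
        if evaluate() > threshold:
            return True  # stable state: keep the current parameters
        adjust()         # to-be-adjusted state: tune the parameters again
    return evaluate() > threshold

# Toy usage: pretend each adjustment round improves the ratio by 0.1.
ratio = {"value": 0.5}
stable = adjust_until_stable(
    evaluate=lambda: ratio["value"],
    adjust=lambda: ratio.__setitem__("value", ratio["value"] + 0.1),
    threshold=0.8,
)
print(stable)
```

In a real deployment the `adjust` callback would rerun training or fine-tuning, and `evaluate` would rerun the scoring evaluation process of steps 702-704 on fresh frames.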
The above description is only exemplary of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (14)

1. A method for adjusting a state of a video processing model, the method comprising:
acquiring a target video to be analyzed, wherein the target video to be analyzed comprises a video frame image carrying a special effect;
triggering a scoring evaluation process, and determining animation special effect information matched with the target video through the scoring evaluation process;
determining, through a corresponding video transcoding instruction, a set of video frames within a unit time interval in the target video in response to the determined animation special effect information, wherein the set of video frames comprises different consecutive video frames;
detecting the video frame set through the video processing model, and determining the number of pixel points in different video frames in the video frame set;
determining the number of pixels identified by the video processing model based on the average value of the number of pixels in different video frames in the video frame set, wherein the number of the identified pixels is the number of pixels in each fixed position in the animation special effect information;
determining the number of target pixel points in a video frame of the target video, wherein the number of the target pixel points in the video frame is matched with the playing environment of the target video and the animation special effect information, and the target pixel points are pixel points in a characteristic region of the animation special effect information;
determining the state of the video processing model based on the ratio of the number of the pixels identified by the video processing model to the number of target pixels;
and adjusting the model parameters of the video processing model based on the state of the video processing model to realize the adjustment of the state of the video processing model.
2. The method of claim 1, wherein the obtaining a target video to be analyzed comprises:
responding to a video uploading instruction, and acquiring a streaming media address matched with a target video; or
Determining target user information of a live video, and acquiring a streaming media address matched with a target user based on the target user information of the live video;
detecting the correctness of the streaming media address;
and sending prompt information to prompt filling of a new streaming media address when the target video to be analyzed corresponding to the streaming media address cannot be played.
3. The method of claim 2, further comprising:
configuring a video clearing process matched with the target video to be analyzed;
and after the state of the video processing model is determined, clearing the target video matched with the streaming media address through the video clearing process.
4. The method of claim 1, wherein determining the number of target pixels in a video frame of the target video comprises:
determining the playing environment of the target video and the matched animation special effect information;
and determining the number of target pixel points in the video frame of the target video based on the playing environment of the target video and the matched animation special effect information.
5. The method of claim 1, wherein determining the state of the video processing model based on the ratio of the number of pixels identified by the video processing model to the number of target pixels comprises:
determining a state threshold of the video processing model based on the animation special effect information matched with the target video;
when the ratio of the number of the pixels identified by the video processing model to the number of the target pixels is larger than the state threshold value of the video processing model, determining that the state of the video processing model is stable; or
And when the ratio of the number of the pixels identified by the video processing model to the number of the target pixels is less than or equal to the state threshold value of the video processing model, determining that the state of the video processing model is a state to be adjusted.
6. The method of claim 1, wherein adjusting model parameters of the video processing model based on the state of the video processing model to achieve the adjustment of the state of the video processing model comprises:
when the state of the video processing model is a state to be adjusted, iteratively adjusting the neural network parameters of the video processing model to form a new video processing model;
and determining the ratio of the number of the pixels identified by the new video processing model to the number of the target pixels, and determining the neural network parameters of the video processing model until the state of the video processing model is determined to be stable.
7. The method of claim 1, further comprising:
displaying a user interface, wherein the user interface comprises a first-person perspective picture obtained by observing a state adjustment environment of the video processing model, and the user interface further comprises a task control component and an information display component;
triggering a video display process through the information display component, and acquiring and displaying a target video to be analyzed corresponding to the streaming media address;
triggering the scoring evaluation process through the task control assembly to determine the state of the video processing model;
presenting, by the information presentation component, a state of the video processing model in the user interface.
8. The method of claim 7, further comprising:
triggering a video display process through the information display component, and determining animation special effect information matched with the target video;
and intercepting the state image of the video processing model presented in the user interface through the task control component to form a state screenshot of the video processing model.
9. The method of claim 7, further comprising:
presenting, in the user interface, a sharing function for sharing a state screenshot of the video processing model;
and responding to the triggering operation of the state screenshot sharing function item aiming at the video processing model, and sharing the state screenshot of the video processing model to a corresponding user.
10. An apparatus for adjusting a state of a video processing model, the apparatus comprising:
the system comprises an information transmission module, a video analysis module and a video analysis module, wherein the information transmission module is used for acquiring a target video to be analyzed, and the target video to be analyzed comprises a video frame image carrying a special effect;
the information processing module is used for triggering a scoring evaluation process and determining animation special effect information matched with the target video through the scoring evaluation process;
an information processing module, configured to determine, in response to the determined animation special effect information, a set of video frames within a unit time interval in the target video through a corresponding video transcoding instruction, where the set of video frames includes different consecutive video frames;
the information processing module is used for detecting the video frame set through the video processing model and determining the number of pixel points in different video frames in the video frame set;
the information processing module is used for determining the number of pixel points identified by the video processing model based on the average value of the number of pixel points in different video frames in the video frame set, wherein the number of the identified pixel points is the number of pixel points in each fixed position in the animation special effect information;
the information processing module is used for determining the number of target pixel points in a video frame of the target video, wherein the number of the target pixel points in the video frame is matched with the playing environment of the target video and the animation special effect information, and the target pixel points are pixel points in a characteristic region of the animation special effect information;
the information processing module is used for determining the state of the video processing model based on the ratio of the number of the pixel points identified by the video processing model to the number of the target pixel points;
and the information processing module is used for adjusting the model parameters of the video processing model based on the state of the video processing model so as to adjust the state of the video processing model.
11. The apparatus of claim 10,
the information transmission module is used for responding to a video uploading instruction and acquiring a streaming media address matched with a target video;
the information transmission module is used for determining target user information of a live video and acquiring a streaming media address matched with a target user based on the target user information of the live video;
the information transmission module is used for detecting the correctness of the streaming media address;
and the information transmission module is used for sending prompt information to prompt filling of a new streaming media address when the target video to be analyzed corresponding to the streaming media address cannot be played.
12. The apparatus of claim 10, further comprising:
a display module for displaying a user interface, wherein the user interface comprises a first-person perspective picture obtained by observing a state adjustment environment of the video processing model, and the user interface further comprises a task control component and an information display component;
the display module is used for triggering a video display process through the information display component, and acquiring and displaying a target video to be analyzed corresponding to the streaming media address;
the display module is used for triggering the scoring evaluation process through the task control assembly and determining the state of the video processing model;
and the display module is used for presenting the state of the video processing model in the user interface through the information display component.
13. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing the video processing model state adjustment method of any one of claims 1 to 9 when executing the executable instructions stored by the memory.
14. A computer-readable storage medium storing executable instructions, wherein the executable instructions, when executed by a processor, implement the video processing model state adjustment method of any one of claims 1 to 9.
CN202010845003.0A 2020-08-20 2020-08-20 Video processing model state adjusting method and device, electronic equipment and storage medium Active CN111954075B (en)






