CN107801093B - Video rendering method and device, computer equipment and readable storage medium - Google Patents

Video rendering method and device, computer equipment and readable storage medium Download PDF

Info

Publication number
CN107801093B
CN107801093B (Application No. CN201711020523.2A)
Authority
CN
China
Prior art keywords
video
area
static
region
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711020523.2A
Other languages
Chinese (zh)
Other versions
CN107801093A (en)
Inventor
韩庆龙
张聪
黄之燊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Talos Innovation Co.,Ltd.
Original Assignee
Shenzhen Quantum Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Quantum Vision Technology Co Ltd filed Critical Shenzhen Quantum Vision Technology Co Ltd
Priority to CN201711020523.2A
Publication of CN107801093A
Application granted
Publication of CN107801093B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012: Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention relates to a video rendering method and apparatus, a computer device, and a readable storage medium. The video rendering method comprises: receiving a video and the area identifiers of the video sent by a server; identifying the dynamic areas and static areas in the video according to the area identifiers; rendering each frame of the current dynamic area and only the first frame of the current static area according to the playing order of the video frames; and, when the current dynamic area and current static area change, rendering each frame of the changed dynamic area and the first frame of the changed static area in the same playing order, until the video rendering is complete. The video rendering method, apparatus, computer device, and readable storage medium save computing resources, improve rendering efficiency, and improve the rendering effect for high-resolution video.

Description

Video rendering method and device, computer equipment and readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a video rendering method and apparatus, a computer device, and a readable storage medium.
Background
With advances in science and technology, video has become an ever more widely used way to present visual information, and the demand for higher-resolution video keeps growing. Panoramic and three-dimensional videos are widely used because their display effect better matches real-life 3D scenes; to obtain a better display effect for panoramic or three-dimensional video, the video data generally needs to be rendered.
In the traditional rendering procedure, the content of every frame is rendered in the time order of the video file. The rendering capability of current mainstream hardware and software is limited, so when high-resolution video data is rendered, the rendering effect may be poor: for example, the rendered video may stutter, drop frames, or show mosaic artifacts.
Disclosure of Invention
Based on this, it is necessary to provide a video rendering method, apparatus, computer device and readable storage medium for solving the problem of poor rendering effect when rendering high-resolution video.
A video rendering method, comprising:
receiving a video and the area identifier of the video sent by a server;
identifying a dynamic area and a static area in the video according to the area identification;
rendering each frame of the current dynamic area and the first frame of the current static area according to the playing sequence of the video frames in the video;
when the current dynamic area and the current static area are changed, each frame of the changed dynamic area and the first frame of the changed static area are rendered according to the playing sequence of the video frames in the video until the video rendering is completed.
In one embodiment, the step of changing the current dynamic area and the current static area comprises:
receiving the duration of the static area sent by the server;
selecting the duration corresponding to the current static area from the received durations of the static areas;
when the display time of the first frame of the current static area is equal to the selected duration, the current dynamic area and the current static area are changed.
A video processing method, comprising:
acquiring a video to be processed and a processing algorithm corresponding to the video to be processed;
dividing a video to be processed into a dynamic area and a static area through a processing algorithm;
adding area identifications to the dynamic area and the static area;
and when a video acquisition request of the client is received, sending the video to be processed and the area identification of the video to be processed to the client.
In one embodiment, the step of dividing the video to be processed into a dynamic region and a static region by the processing algorithm further comprises the following steps:
counting the duration of the static area;
the step of sending the video to be processed and the area identifier of the video to be processed to the client comprises the following steps:
and sending the video to be processed, the area identification of the video to be processed and the duration of the static area to the client.
In one embodiment, the step of dividing the video to be processed into a dynamic region and a static region by a processing algorithm includes:
calculating the difference value of corresponding pixels in adjacent frames of the video to be processed through a processing algorithm;
when the difference value does not exceed the preset value, the area corresponding to the pixel is a static area;
and when the difference value exceeds a preset value, the area corresponding to the pixel is a dynamic area.
In one embodiment, the step of dividing the video to be processed into a dynamic region and a static region by a processing algorithm includes:
when the processing algorithm is a deep learning algorithm, acquiring a matching model corresponding to the video to be processed;
and inputting the video to be processed into the matching model to obtain the dynamic region and the static region.
A video rendering apparatus comprising:
the receiving module is used for receiving the video sent by the server and the area identification of the video;
the identification module is used for identifying a dynamic area and a static area in the video according to the area identification;
the rendering module is used for rendering each frame of the current dynamic area and the first frame of the current static area according to the playing sequence of the video frames in the video;
and the judging module is used for rendering each frame of the changed dynamic area and the first frame of the changed static area according to the playing sequence of the video frames in the video when the current dynamic area and the current static area are changed until the video rendering is finished.
A video processing apparatus comprising:
the algorithm module is used for acquiring a video to be processed and a processing algorithm corresponding to the video to be processed;
the segmentation module is used for segmenting the video to be processed into a dynamic region and a static region through a processing algorithm;
the identification module is used for adding area identifications to the dynamic area and the static area;
and the sending module is used for sending the video to be processed and the area identification of the video to be processed to the client when receiving the video acquisition request of the client.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the program.
A readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the above-mentioned method.
According to the video rendering method, the video rendering device, the computer equipment and the storage medium, all video contents do not need to be rendered in real time, the video to be rendered is divided into the dynamic area and the static area, when the rendering operation is executed, only the first frame of the current static area is rendered, the rendered frame is continuously used, and the divided dynamic area and the divided static area need to be updated in real time according to the actual situation of the video to be rendered; and only the first frame of the static area is rendered and continuously used, so that the computing resources are saved, the rendering efficiency is improved, the video data with higher resolution can be rendered by the rendering equipment under the same rendering efficiency, and the rendering effect of the high-resolution video is improved.
Drawings
FIG. 1 is a diagram illustrating an exemplary video rendering method;
FIG. 2 is a flow diagram of a method for video rendering in one embodiment;
FIG. 3 is a flowchart illustrating steps in a video rendering method according to an embodiment in which a current dynamic region and a current static region are changed;
FIG. 4 is a flow diagram of a video processing method in one embodiment;
FIG. 5 is a block diagram of a video rendering device according to an embodiment;
FIG. 6 is a block diagram of a video processing device according to an embodiment;
FIG. 7 is a block diagram of a computer device that performs video rendering in one embodiment;
FIG. 8 is a block diagram of a computer device that performs video processing in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Before describing in detail embodiments that are in accordance with the present invention, it should be observed that the embodiments reside primarily in combinations of steps and system components related to video rendering methods, apparatus, computer devices, and readable storage media. Accordingly, the system components and method steps have been represented where appropriate by conventional symbols in the drawings, showing only those details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
In this document, relational terms such as left and right, top and bottom, front and back, first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 provides an application scene diagram of a video rendering method in an embodiment. The scene includes a video rendering device and a video server, which may be connected via a network. The video rendering device may be a 3D video playing device, such as 3D glasses or a computer device. The video server runs a video processing algorithm that divides a video to be processed into dynamic regions and static regions, marks those regions with identifiers, and then sends the identified video to the video rendering device. The video rendering device receives the video and its region identifiers, identifies the dynamic and static regions from the identifiers, renders the dynamic regions in real time and keeps their content updated, renders only the first frame of each static region and reuses it, and, when a static region changes, renders the changed regions in the same manner until the whole video is rendered. In this way the video rendering device performs different rendering operations on the dynamic and static regions divided by the video server's algorithm.
Referring to fig. 2, a flowchart of a video rendering method is provided, and this embodiment illustrates that the method is applied to the video rendering device in fig. 1, where a video rendering program is run on the video rendering device, and a video to be rendered sent by a video server is rendered by the video rendering program. The method comprises the following steps:
s202: and receiving the video sent by the server and the area identification of the video.
The area identifier is a mark, added by the server after analyzing the video to be rendered, that distinguishes a dynamic area from a static area. The identifier may be, for example, a tag or a flag: a specific bit in the video data may be reserved as a flag bit, set to 0 for a dynamic area and 1 for a static area; alternatively, a specific data segment may be placed in the last designated digits of the video file, recording the pixel-coordinate intervals corresponding to the dynamic and static areas of the video to be rendered.
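The flag-bit variant above can be sketched as follows. This is a minimal illustration only; the region names and the in-memory representation of the flags are assumptions, not the patent's actual data layout.

```python
# Illustrative sketch of the flag-bit encoding described in the text:
# 0 marks a dynamic area, 1 marks a static area. The region names and
# the dict representation are assumptions, not the patent's file format.

def split_regions(region_flags):
    """Partition region names into (dynamic, static) lists by flag bit."""
    dynamic = [name for name, flag in region_flags.items() if flag == 0]
    static = [name for name, flag in region_flags.items() if flag == 1]
    return dynamic, static

# A frame with one moving region and two unchanging ones.
flags = {"player": 0, "background": 1, "scoreboard": 1}
print(split_regions(flags))  # → (['player'], ['background', 'scoreboard'])
```

A renderer could then iterate the dynamic list every frame while touching the static list only when a region's duration expires.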
Specifically, after the server has divided the video to be rendered into dynamic and static areas according to the corresponding algorithm, it sends the identified video to the video rendering device on the device's request. In addition, for each static area, the video sent by the server may include only the static area of the first frame within that area's duration, reducing the amount of data transmitted.
S204: and identifying a dynamic area and a static area in the video according to the area identification.
Specifically, after receiving the identified video to be rendered from the server, the video rendering device first reads the area identifiers of the video before rendering, and obtains the dynamic and static areas of the video from those identifiers. If the video to be rendered was encoded or compressed on the server side, the video rendering device first performs the corresponding decoding and decompression, and then identifies the dynamic and static areas indicated by the area identifiers.
S206: and rendering each frame of the current dynamic area and the first frame of the current static area according to the playing sequence of the video frames in the video.
Specifically, after identifying the dynamic and static areas marked by the server, the video rendering device renders all content of the dynamic area in real time as required, that is, it renders the image of every frame of the dynamic area in the playing order of the video frames. The video content of a static area is essentially unchanged over a period of time and does not need real-time updating, so when an area is identified as static, only the first frame image within the time that the area is identified as static needs to be rendered, and that rendered first frame is displayed throughout the duration of the static area.
For example, suppose a video file includes 20 frames, in each of which a first area is dynamic and a second area is static. During rendering, the dynamic first area is rendered frame by frame in sequence, while the static second area is rendered only for the first frame; the rendered second area of the first frame is displayed continuously across all 20 frames, and the rendered first area of each frame is displayed in the video playing order, until the rendering of all 20 frames is complete.
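The selective rendering policy of step S206 can be sketched as below. This is an illustrative reading only: the frame representation, the area names, and the `render` stand-in are assumptions, not the patent's implementation.

```python
# Sketch of S206: every frame of the dynamic area is rendered, while the
# static area is rendered once and the cached result is reused. render() is
# a stand-in for the real (expensive) rendering call; the frame layout is
# an assumption for illustration.

def render(area_pixels):
    """Stand-in for the actual rendering operation."""
    return tuple(area_pixels)

def play(frames, static_area):
    """frames: list of {area_name: pixels}; static_area: name of the static area.
    Returns how many render() calls were actually made."""
    cached_static = None
    rendered_calls = 0
    for frame in frames:
        for name, pixels in frame.items():
            if name == static_area:
                if cached_static is None:      # render the first frame only
                    cached_static = render(pixels)
                    rendered_calls += 1
                # later frames reuse cached_static unchanged
            else:
                render(pixels)                 # dynamic area: every frame
                rendered_calls += 1
    return rendered_calls

frames = [{"sky": [1, 1], "car": [i, i + 1]} for i in range(20)]
print(play(frames, "sky"))  # → 21 renders instead of 40
```

Twenty dynamic renders plus one static render replace the forty renders a naive per-frame loop would perform, which is the computing-resource saving the method claims.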
S208: when the current dynamic area and the current static area are changed, each frame of the changed dynamic area and the first frame of the changed static area are rendered according to the playing sequence of the video frames in the video until the video rendering is completed.
Specifically, if the current dynamic and static areas change, that is, the first frame of the current static area can no longer represent the corresponding area in the next frame, then the static area of the next frame image is obtained from the area identifier of that frame; only the first frame of that new static area is rendered within its duration, while the dynamic area of each subsequent frame continues to be rendered.
For example, consider a 20-frame video file in which the first area of each of the first 10 frames is dynamic and the second area is static, while in frames 11 to 20 the first area is static and the second area is dynamic. During rendering, the first area of each of the first 10 frames is rendered in sequence, while the second area is rendered only for frame 1; the second area of frame 1 is displayed throughout the first 10 frames, and the first area of each frame is displayed in the playing order of those 10 frames. Then the second areas of frames 11 to 20 are rendered in sequence, while the first area is rendered only for frame 11; in frames 11 to 20, the first area of frame 11 is displayed throughout, and the second area of each of those 10 frames is displayed in sequence until frame 20 has been displayed.
With this video rendering method, the video rendering device does not need to render all the content of the video to be rendered: only the first frame of each identified static area is rendered and displayed throughout that area's duration, and only the images of the dynamic areas need updating. This saves the computing resources that would be spent rendering every frame of the static areas and improves the rendering efficiency of the device, so that current video rendering devices can support rendering higher-resolution video files. Problems such as stutter, dropped frames, and mosaic artifacts caused by insufficient computing capacity at high resolutions are avoided, and the rendering effect for high-resolution video is improved.
Referring to fig. 3, in an embodiment, the step of changing the current dynamic area and the current static area in step S208 in the above method may include:
s302: the duration of the static area sent by the server is received.
Specifically, when the server performs dynamic analysis on a video and finds that an area identified as static remains static in subsequent consecutive frames, it counts the duration of that static area; the position of the static area is fixed within the duration, and when the duration is reached, the positions of the dynamic and static areas change. Along with the video to be rendered, the rendering device also receives the durations of the corresponding static areas marked by the server.
S304: and selecting the duration corresponding to the current static area from the received durations of the static areas.
Specifically, the video rendering device selects, from the received durations, the duration corresponding to the current static area. For example, since the device renders the video in playing order, the durations of the static areas may also be ordered by playing sequence: when the device starts rendering the current static area, it reads that area's duration directly; when the duration is reached, it continues by rendering the static area of the next image frame and reads the next duration in sequence.
S306: when the display time of the first frame of the current static area is equal to the selected duration, the current dynamic area and the current static area are changed.
Specifically, when the display time of the first frame of a static area equals that area's duration, the dynamic and static areas of the next frame of the video to be rendered are considered different from the previous frame, and rendering must continue according to the newly identified dynamic and static areas; that is, the static and dynamic areas of the next image frame are identified from the area identifiers, and rendering proceeds as described above.
If several dynamic areas and several static areas exist in a video file at the same time, the duration of each static area must be counted separately. When the duration of a first static area is reached while the display times of the other static areas have not yet reached their durations, the displayed image of the first static area changes in the next frame, while the other static areas continue to display their already-rendered first frames for their remaining durations. For example, if a video file contains a first static area with a duration of 1 second and a second static area with a duration of 2 seconds, then when the first static area becomes dynamic after 1 second, the video rendering device can still keep displaying the first frame of the second static area for its full duration, without re-rendering that area after 1 second.
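The independent per-area duration bookkeeping described above can be sketched as follows; the function name, time units, and area names are illustrative assumptions, not details from the patent.

```python
# Sketch of independent duration tracking for multiple static areas
# (steps S302-S306): each static area keeps its own countdown, so one area
# expiring does not force the others to re-render. Names and time units
# are illustrative assumptions.

def areas_to_rerender(durations, first_frame_time, now):
    """Return the static areas whose display time has reached their duration."""
    return [area for area, duration in durations.items()
            if now - first_frame_time[area] >= duration]

durations = {"area1": 1.0, "area2": 2.0}          # seconds, from the server
first_frame_time = {"area1": 0.0, "area2": 0.0}   # when each first frame was shown
print(areas_to_rerender(durations, first_frame_time, 1.0))  # → ['area1']
```

At t = 1.0 s only `area1` needs a newly rendered frame; `area2` keeps displaying its cached first frame until its own 2-second duration is reached.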
When a complex video is rendered, a single area identifier cannot accurately reflect how its images change. The server therefore identifies the corresponding areas of the complex video and counts the duration of each identified static area; with the method above, the video rendering device renders and displays the next frame's content for a static area only when that area's own duration is reached, unaffected by changes in other static areas. This preserves the rendering effect for complex video while saving the rendering device's computing resources to the greatest extent.
In this embodiment, the video rendering device judges when the static and dynamic areas change from the duration associated with each static area. With this method the server can identify the changes in the video to be rendered more accurately, and the video rendering device can achieve a more accurate rendering result from the server's identification.
Referring to fig. 4, fig. 4 provides a flowchart of a video processing method in an embodiment, and this embodiment is exemplified by applying the method to the server in fig. 1, where a video processing program runs on the server, and a video to be processed is processed by the video processing program. The method comprises the following steps:
s402: and acquiring a video to be processed and a processing algorithm corresponding to the video to be processed.
Specifically, the server acquires the video to be processed and selects a corresponding processing algorithm according to parameters such as the complexity of the video and the processing requirements, in order to perform dynamic analysis on it. For example, each server may be configured with a processing algorithm as needed, so that when a video is transmitted to the server it is processed directly with that server's algorithm. Alternatively, each video to be processed may carry a complexity level and a processing-requirement level, and the server obtains the corresponding algorithm from those parameters. In this case the server may preset weights m and n for the complexity level M and the processing-requirement level N respectively, compute the algorithm level as M × m + N × n, and select the algorithm corresponding to that level. The processing algorithm may be an inter-frame difference method, a deep learning algorithm, an edge detection method, or the like.
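The weighted selection M × m + N × n can be sketched as below. The weights, level thresholds, and the mapping of levels to algorithms are illustrative assumptions, not values given in the patent.

```python
# Sketch of the weighted algorithm selection described above:
# level = M*m + N*n, where M is the complexity level, N the
# processing-requirement level, and m, n preset weights. The weight
# values and the level-to-algorithm thresholds are assumptions.

def select_algorithm(M, N, m=0.6, n=0.4):
    """Pick a processing algorithm from the weighted level."""
    level = M * m + N * n
    if level < 2:
        return "inter-frame difference"
    elif level < 4:
        return "edge detection"
    return "deep learning"

print(select_algorithm(M=5, N=4))  # → deep learning
```

A more demanding video (higher M and N) lands on the heavier deep-learning path, while simple videos stay on the cheap inter-frame difference method.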
S404: and dividing the video to be processed into a dynamic area and a static area through a processing algorithm.
Specifically, the processing algorithm performs dynamic analysis on the video to be processed, dividing it into a dynamic area and a static area, so that the client can perform the corresponding rendering operation on each: the dynamic area is rendered in real time, while only the first frame of the static area is rendered and continuously reused; when the static area changes, the changed area is rendered in the same manner until the whole video is rendered.
S406: adding region identification to the dynamic region and the static region.
The area identifier is a mark, added by the server after analyzing the video to be rendered, that distinguishes a dynamic area from a static area. The identifier may be, for example, a tag or a flag: a specific bit in the video data may be reserved as a flag bit, set to 0 for a dynamic area and 1 for a static area; alternatively, a specific data segment may be placed in the last designated digits of the video file, recording the pixel-coordinate intervals corresponding to the dynamic and static areas of the video to be rendered.
S408: and when a video acquisition request of the client is received, sending the video to be processed and the area identification of the video to be processed to the client.
The video obtaining request is a request for obtaining a video to be rendered, which is sent by a client (namely, video rendering equipment) to a server, and the video to be rendered is a video processed by the server and comprises the video to be processed and an area identifier.
Specifically, since video files generally occupy large bandwidth, the identified video to be rendered may be encoded and compressed before transmission. In addition, for each static area, the video sent by the server to the video rendering device may include only the static area of the first frame within that area's duration, reducing the amount of data transmitted. After receiving the video to be processed and its area identifiers from the server, the client renders the video according to the video rendering method above.
In this video processing method, the server divides the video to be processed into dynamic and static areas according to the corresponding algorithm, marks the divided areas with area identifiers, and sends the video together with the identifiers to the client. By dividing the video into dynamic and static areas, the video rendering device (the client) can apply a rendering operation adapted to how each area changes: it does not need to render all the content of the video, renders only the first frame of each identified static area, displays that first frame throughout the static area's duration, and updates only the images of the dynamic areas. This saves the computing resources of rendering every frame of the static areas and improves the device's rendering efficiency, so that current video rendering devices can support higher-resolution video files; problems such as stutter, dropped frames, and mosaic artifacts caused by insufficient computing capacity at high resolutions are avoided, and the rendering effect for high-resolution video is improved.
In one embodiment, when the selected algorithm is the inter-frame difference method, step S404 shown in fig. 4, namely the step of dividing the video to be processed into the dynamic region and the static region through the processing algorithm, may include: calculating the difference values of corresponding pixels in adjacent frames of the video to be processed through the processing algorithm; when a difference value does not exceed a preset value, the area corresponding to the pixel is a static area; and when the difference value exceeds the preset value, the area corresponding to the pixel is a dynamic area.
The inter-frame difference method subtracts corresponding pixel points of adjacent frames: when, over several consecutive adjacent frames, the absolute values of the differences obtained for all pixel points in a region are smaller than the preset value, the region is a static region; otherwise, it is a dynamic region. The difference value may be the difference in brightness between pixel points corresponding to adjacent frames; it may also be computed on the gray values of the images, in which case the difference value is the difference in gray value between pixel points corresponding to adjacent frames.
Specifically, the server subtracts the brightness values or gray values of pixel points corresponding to adjacent frames of the video to be processed. When the difference obtained for a pixel point is smaller than the preset value over several consecutive adjacent frames, that pixel is a static pixel; when the number of adjacent static pixels exceeds the static-pixel threshold, those adjacent pixel points are called a static area.
For example, suppose a video to be processed includes 20 frames, each frame includes 10 pixels, and the static-pixel threshold preset by the server is 3. The server extracts the brightness value or gray value of each frame of image and, starting from the first frame, subtracts each pixel point in each frame from the corresponding pixel point in the next frame to obtain a difference value. If the absolute values of the difference values of the first 5 pixel points are smaller than the preset value across all 20 frames, the first 5 pixel points are called static pixels. Since these 5 static pixels are adjacent and their number exceeds the threshold of 3, they satisfy the condition for forming a static area, and the first 5 pixels of each frame of the video are called the static area.
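The worked example above can be sketched as follows. The parameter names `diff_threshold` and `min_adjacent` are illustrative, not terms from the patent text, and the sketch operates on a 1-D row of pixels per frame, as in the 10-pixel example.

```python
import numpy as np

def split_static_dynamic(frames, diff_threshold=3, min_adjacent=3):
    """Inter-frame difference sketch: a pixel is static when the absolute
    luminance difference between every pair of adjacent frames stays below
    diff_threshold; runs of at least min_adjacent adjacent static pixels
    form static areas, returned as (start, end) index pairs."""
    frames = np.asarray(frames, dtype=np.int16)        # avoid uint8 wraparound
    diffs = np.abs(np.diff(frames, axis=0))            # |frame[i+1] - frame[i]|
    static_pixels = (diffs < diff_threshold).all(axis=0)

    # Group adjacent static pixels into static areas of sufficient length.
    areas, run_start = [], None
    for i, is_static in enumerate(static_pixels):
        if is_static and run_start is None:
            run_start = i
        elif not is_static and run_start is not None:
            if i - run_start >= min_adjacent:
                areas.append((run_start, i))
            run_start = None
    if run_start is not None and len(static_pixels) - run_start >= min_adjacent:
        areas.append((run_start, len(static_pixels)))
    return areas
```

Run on the example data (20 frames, 10 pixels, the first 5 unchanging), this yields a single static area covering pixels 0 through 4.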
In the above embodiment, the inter-frame difference method is used to perform dynamic analysis on the video to be processed. The change of a corresponding region in the video can be analyzed through the difference values of corresponding pixels in adjacent frames: by setting a threshold, the change condition of each pixel is obtained, the parts with insignificant change are divided into static regions, and the parts with significant change are divided into dynamic regions. Region identifiers are added to the divided dynamic and static regions so that the client can perform the corresponding rendering operations on different regions.
In one embodiment, when the selected algorithm is a deep learning algorithm, step S404 shown in fig. 4, namely the step of dividing the video to be processed into the dynamic region and the static region through the processing algorithm, may include: acquiring a matching model corresponding to the video to be processed through the processing algorithm; and inputting the video to be processed into the matching model to obtain the dynamic region and the static region.
Specifically, with the deep learning algorithm the server may analyze the video to be processed, identify the main subject of the video, establish a deep learning model according to the subject, and then match each frame of the video to be processed against the deep learning model.

For example, in one video processing operation, the server analyzes the first 10 frames of the video to be processed, identifies the subject of the video (such as a person) in those 10 frames, establishes a deep learning model according to the subject, and then matches each frame of the video against the model. The pixel region in each frame that matches the deep learning model is a dynamic region, and the other regions are static regions; that is, within the 10 frames of video, the motion track of the identified person subject and the region in which the person's actions change are dynamic regions.
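The model-matching split above can be sketched as follows. The trained deep learning model is represented here by a stand-in callable that maps a frame to a per-pixel boolean mask of the subject; that callable interface, and the function name, are assumptions for illustration rather than anything specified in the patent.

```python
import numpy as np

def classify_regions(frames, subject_model):
    """Sketch of the subject-matching split: for each frame, the pixels
    the model matches (e.g. the identified person) form that frame's
    dynamic region, and all remaining pixels form the static region.

    subject_model -- any callable mapping an HxW frame to an HxW bool
                     mask, True where the pixel belongs to the subject
    """
    dynamic_masks = [np.asarray(subject_model(f), dtype=bool) for f in frames]
    static_masks = [~m for m in dynamic_masks]          # complement per frame
    return dynamic_masks, static_masks
```

In practice the stand-in callable would be replaced by inference with the model built from the first frames, e.g. a segmentation network producing a subject mask per frame.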
In this embodiment, a deep learning algorithm is adopted to perform dynamic analysis on the video to be processed. By matching the subject of the video, the server can divide the subject part corresponding to the matching model into dynamic areas and the other areas into static areas, and add area identifiers to the divided dynamic and static areas, so that the client can perform the corresponding rendering operation on different areas. The client can render and update the areas in real time according to the motion track of the video subject, making the rendering operation more intelligent and more targeted.
In one embodiment, the video processing method may further include the following step: counting the duration of the static area. Correspondingly, step S408, namely sending the video to be processed and its area identifiers to the client when a video acquisition request is received from the client, may include: when the video acquisition request of the client is received, sending the video to be processed, the area identifiers of the video to be processed, and the duration of the static area to the client.
Specifically, the server acquires the video to be processed, selects a processing algorithm corresponding to it, performs dynamic analysis on the video according to the processing algorithm, divides the video into a dynamic area and a static area, and adds area identifiers to the divided areas. When a certain area is identified as a static area within a certain time and continues to be identified as static in the subsequent consecutive frames, the server counts the duration of that static area; that is, the position of the corresponding static area is fixed within the duration, and after the duration is reached, the positions of the dynamic and static areas change. When a video acquisition request of the client is received, the duration of the static area is sent to the client together with the video to be processed and its area identifiers, so that the client can judge, according to the duration of the static area, whether the positions of the static area and the dynamic area have changed.
Further, when a static area continues to be identified as static in consecutive frames (the number of consecutive frames is configurable), the playing time of those consecutive frames is counted as the duration of the static area. After the server finishes processing the video to be processed, it sends the video, the area identifiers, and the durations of the static areas to the client.
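The server-side duration count can be sketched as follows. A per-frame flag recording whether the region was identified as static stands in for the analysis result, and `min_frames` is the configurable consecutive-frame threshold mentioned above; all names are illustrative.

```python
def static_durations(static_flags, fps, min_frames=2):
    """Scan the per-frame 'region is static' flags and, whenever the
    region stays static for at least min_frames consecutive frames,
    record that run's playing time in seconds (run length / frame rate)."""
    durations, run = [], 0
    for flag in list(static_flags) + [False]:   # sentinel flushes the last run
        if flag:
            run += 1
        else:
            if run >= min_frames:
                durations.append(run / fps)
            run = 0
    return durations
```

The resulting list, ordered by playing time, is what the server would send alongside the video and the area identifiers.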
The video rendering device receives the video to be processed, the area identifiers, and the durations of the static areas, and selects from the received durations the one corresponding to the static area currently being rendered. For example, when the video rendering device renders a video, it renders according to the playing order of the video, and the durations of the static areas may likewise be ordered by playing order. When the device begins rendering the current static area, it directly reads the duration of that static area; when the duration of the current static area is reached, it continues by rendering the static area in the next image frame, and reads the duration of the next static area, that is, of the next image frame, in sequence.
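The client-side read-in-sequence rule above can be sketched as a lookup: given the elapsed playback time and the ordered list of static-span durations, decide which span's first frame should currently be on screen. The function name is an assumption for the example.

```python
def frozen_frame_index(t, durations):
    """Return the index of the static span whose first frame should be
    displayed at elapsed playback time t (seconds), given the durations
    ordered by playing order; past the last span, stay on the last one."""
    elapsed = 0.0
    for i, d in enumerate(durations):
        elapsed += d
        if t < elapsed:       # still inside span i: keep its first frame
            return i
    return len(durations) - 1
```

A renderer would call this once per display refresh and re-render the static area only when the returned index advances, which is exactly the "read the duration, switch when it is reached" behavior described above.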
In the above embodiment, the video rendering device judges the change of the static and dynamic areas according to the duration corresponding to each static area. Through this method the server can identify the changes in the video to be rendered more accurately, and the video rendering device can achieve a more accurate rendering effect according to the server's identification results.
In one embodiment, referring to fig. 5, a schematic structural diagram of a video rendering apparatus 500 is provided, where the video rendering apparatus 500 includes:
a receiving module 502, configured to receive a video sent by a server and an area identifier of the video.
And the identifying module 504 is configured to identify a dynamic region and a static region in the video according to the region identifier.
A rendering module 506, configured to render each frame of the current dynamic area and the first frame of the current static area according to a playing order of the video frames in the video.
The determining module 508 is configured to, when the current dynamic area and the current static area change, render each frame of the changed dynamic area and the first frame of the changed static area according to a playing sequence of video frames in the video until the video rendering is completed.
In one embodiment, the determining module 508 of the video rendering apparatus 500 includes:
and the receiving unit is used for receiving the duration of the static area sent by the server.
And the selecting unit is used for selecting the duration corresponding to the current static area from the received durations of the static areas.
And the judging unit is used for changing the current dynamic area and the current static area when the display time of the first frame of the current static area is equal to the selected duration.
For the above specific limitations on the video rendering apparatus, reference may be made to the above limitations on the video rendering method, which is not described herein again.
In one embodiment, referring to fig. 6, a schematic diagram of a video processing apparatus is provided, and the video processing apparatus 600 includes:
the algorithm module 602 is configured to obtain a video to be processed and a processing algorithm corresponding to the video to be processed.
And a segmentation module 604, configured to segment the video to be processed into a dynamic region and a static region through a processing algorithm.
An identification module 606 is configured to add a region identification to the dynamic region and the static region.
The sending module 608 is configured to send the video to be processed and the area identifier of the video to be processed to the client when receiving a video acquisition request of the client.
In one embodiment, the video processing apparatus may further include:
and the timing module is used for counting the duration of the static area.
The sending module is further configured to send the duration of the static area to the client.
In one embodiment, the segmentation module 604 in the video processing apparatus 600 may include:
and the calculating unit is used for calculating the difference value of the corresponding pixel in the adjacent frame of the video to be processed through a processing algorithm.
And the first segmentation unit is used for determining that the area corresponding to the pixel is a static area when the difference value does not exceed the preset value, and that the area corresponding to the pixel is a dynamic area when the difference value exceeds the preset value.
In one embodiment, the segmentation module 604 in the video processing apparatus 600 may include:
and the obtaining unit is used for obtaining a matching model corresponding to the video to be processed when the processing algorithm is a deep learning algorithm.
And the second segmentation unit is used for outputting the video to be processed to the matching model to obtain a dynamic region and a static region.
For the above specific limitations on the video processing apparatus, reference may be made to the above limitations on the video processing method, which is not described herein again.
In one embodiment, please refer to fig. 7, which provides a schematic structural diagram of a computer device for performing video rendering. The computer device may be a video rendering device, a conventional server, or any other suitable computer device, and includes a memory, a processor, an operating system, a database, and a video rendering program stored in the memory and executable on the processor, wherein the memory may include an internal memory. The processor performs the following steps when executing the video rendering program: receiving a video and the area identifiers of the video sent by a server; identifying the dynamic area and the static area in the video according to the area identifiers; rendering each frame of the current dynamic area and the first frame of the current static area according to the playing order of the video frames in the video; and when the current dynamic area and the current static area change, rendering each frame of the changed dynamic area and the first frame of the changed static area according to the playing order of the video frames in the video until the video rendering is completed.
In one embodiment, when the processor executes the program, the step of changing the current dynamic area and the current static area may include: receiving the durations of the static areas sent by the server, selecting the duration corresponding to the current static area from the received durations, and changing the current dynamic area and the current static area when the display time of the first frame of the current static area equals the selected duration.
For the above specific limitations on the computer device, reference may be made to the above limitations on the video rendering method, which is not described herein again.
In one embodiment, with continuing reference to FIG. 7, a computer storage medium is provided having a computer program stored thereon which, when executed by a processor, performs the steps of: receiving a video and a video area identifier sent by a server; identifying a dynamic area and a static area in the video according to the area identification; rendering each frame of the current dynamic area and the first frame of the current static area according to the playing sequence of the video frames in the video; when the current dynamic area and the current static area are changed, each frame of the changed dynamic area and the first frame of the changed static area are rendered according to the playing sequence of the video frames in the video until the video rendering is completed.
In one embodiment, when the program is executed by the processor, the step of changing the current dynamic area and the current static area may include: receiving the durations of the static areas sent by the server, selecting the duration corresponding to the current static area from the received durations, and changing the current dynamic area and the current static area when the display time of the first frame of the current static area equals the selected duration.
For the above specific limitations on the computer storage medium, reference may be made to the above limitations on the video rendering method, which is not described herein again.
In one embodiment, please refer to fig. 8, which provides a schematic structural diagram of a computer device for performing video processing. The computer device may be a video processing device, a conventional server, or any other suitable computer device, and includes a memory, a processor, an operating system, a database, and a video processing program stored in the memory and executable on the processor, wherein the memory may include an internal memory. The processor performs the following steps when executing the video processing program: acquiring a video to be processed and a processing algorithm corresponding to the video to be processed; dividing the video to be processed into a dynamic area and a static area through the processing algorithm; adding area identifiers to the dynamic area and the static area; and when a video acquisition request of the client is received, sending the video to be processed and the area identifiers of the video to be processed to the client.
In one embodiment, when the processor executes the program, the method may further include: counting the duration of the static area, and sending the duration of the static area to the client.
In one embodiment, the step of dividing the video to be processed into the dynamic area and the static area by the processing algorithm, which is implemented when the processor executes the program, may further include: and calculating the difference value of corresponding pixels in adjacent frames of the video to be processed through a processing algorithm, wherein when the difference value does not exceed a preset value, the area corresponding to the pixels is a static area, and when the difference value exceeds the preset value, the area corresponding to the pixels is a dynamic area.
In one embodiment, the step of dividing the video to be processed into the dynamic area and the static area through the processing algorithm, which is implemented when the processor executes the program, may further include: and when the processing algorithm is a deep learning algorithm, acquiring a matching model corresponding to the video to be processed, and outputting the video to be processed to the matching model to obtain a dynamic region and a static region.
For the above specific limitations on the computer device, reference may be made to the above limitations on the video processing method, which is not described herein again.
In one embodiment, with continuing reference to FIG. 8, a computer storage medium is provided having a computer program stored thereon which, when executed by a processor, performs the steps of: acquiring a video to be processed and a processing algorithm corresponding to the video to be processed; dividing a video to be processed into a dynamic area and a static area through a processing algorithm; adding area identifications to the dynamic area and the static area; and when a video acquisition request of the client is received, sending the video to be processed and the area identification of the video to be processed to the client.
In one embodiment, when the program is executed by the processor, the method may further include: counting the duration of the static area, and sending the duration of the static area to the client.
In one embodiment, the step of dividing the video to be processed into the dynamic region and the static region by the processing algorithm, which is implemented when the program is executed by the processor, may further include: and calculating the difference value of corresponding pixels in adjacent frames of the video to be processed through a processing algorithm, wherein when the difference value does not exceed a preset value, the area corresponding to the pixels is a static area, and when the difference value exceeds the preset value, the area corresponding to the pixels is a dynamic area.
In one embodiment, the step of dividing the video to be processed into a dynamic region and a static region by the processing algorithm when the program is executed by the processor may further include: and when the processing algorithm is a deep learning algorithm, acquiring a matching model corresponding to the video to be processed, and outputting the video to be processed to the matching model to obtain a dynamic region and a static region.
For the above specific limitations on the computer storage medium, reference may be made to the above limitations on the video processing method, which is not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above may be implemented by a computer program instructing relevant hardware, and the computer program may be stored in a non-volatile computer-readable storage medium. The computer-readable storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and these fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (9)

1. A method of video rendering, comprising:
receiving a video sent by a server and an area identifier of the video;
identifying a dynamic region and a static region in the video according to the region identifier, wherein a pixel region matched with the deep learning model is the dynamic region, and other regions are the static regions;
rendering each frame of the current dynamic area and the first frame of the current static area according to the playing sequence of the video frames in the video;
reading the duration of the current static area, if the duration of the current static area is reached, changing the positions of the current dynamic area and the current static area, and when the positions of the current dynamic area and the current static area are changed, rendering each frame of the changed dynamic area and the first frame of the changed static area according to the playing sequence of the video frames in the video until the video rendering is completed.
2. The method of claim 1, wherein the step of reading the duration of the current static area comprises:
receiving the duration of the static area sent by the server;
selecting the duration corresponding to the current static area from the received durations of the static areas;
the current dynamic region and the current static region are changed, including:
and when the display time of the first frame of the current static area is equal to the selected duration, changing the current dynamic area and the current static area.
3. A video processing method, comprising:
acquiring a video to be processed and a processing algorithm corresponding to the video to be processed;
dividing the video to be processed into a dynamic area and a static area through the processing algorithm, wherein the method comprises the following steps: when the processing algorithm is a deep learning algorithm, identifying a main body of a video to be processed, establishing a deep learning model according to the main body, and matching each frame in the video to be processed with the deep learning model according to the deep learning model, wherein a pixel region matched with the deep learning model in each frame is the dynamic region, and other regions are the static regions;
adding region identifiers to the dynamic region and the static region, and counting the duration of the static region;
and when a video acquisition request of a client is received, sending the video to be processed, the duration of the static area and the area identifier of the video to be processed to the client.
4. The method of claim 3, wherein the step of counting the duration of the static region comprises:
when a certain region is identified as the static region within a certain time and the region is identified as the static region in the following continuous frames, counting the duration of the static region.
5. The method according to claim 3, wherein the step of partitioning the video to be processed into a dynamic region and a static region by the processing algorithm comprises:
when the selected processing algorithm is an inter-frame difference method, calculating the difference value of corresponding pixels in adjacent frames of the video to be processed through the processing algorithm;
when the difference value does not exceed a preset value, the area corresponding to the pixel is a static area;
and when the difference value exceeds a preset value, the area corresponding to the pixel is a dynamic area.
6. A video rendering apparatus, comprising:
the receiving module is used for receiving a video sent by a server and an area identifier of the video;
the identification module is used for identifying a dynamic region and a static region in the video according to the region identifier, wherein a pixel region matched with the deep learning model is the dynamic region, and other regions are static regions;
the rendering module is used for rendering each frame of the current dynamic area and the first frame of the current static area according to the playing sequence of the video frames in the video;
and the judging module is used for reading the duration of the current static area, changing the positions of the current dynamic area and the current static area if the duration of the current static area reaches, and rendering each frame of the changed dynamic area and the first frame of the changed static area according to the playing sequence of the video frames in the video when the current dynamic area and the current static area are changed until the video rendering is finished.
7. A video processing apparatus, comprising:
the algorithm module is used for acquiring a video to be processed and a processing algorithm corresponding to the video to be processed;
the segmentation module is used for segmenting the video to be processed into a dynamic region and a static region through the processing algorithm, and comprises: when the processing algorithm is a deep learning algorithm, identifying a main body of a video to be processed, establishing a deep learning model according to the main body, and matching each frame in the video to be processed with the deep learning model according to the deep learning model, wherein a pixel region matched with the deep learning model in each frame is the dynamic region, and other regions are the static regions;
the identification module is used for adding area identifications to the dynamic area and the static area and counting the duration of the static area;
and the sending module is used for sending the video to be processed, the duration of the static area and the area identifier of the video to be processed to the client when a video acquisition request of the client is received.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of claim 1 or 2 are implemented when the processor executes the program.
9. A readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of claim 1 or 2.
CN201711020523.2A 2017-10-26 2017-10-26 Video rendering method and device, computer equipment and readable storage medium Active CN107801093B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711020523.2A CN107801093B (en) 2017-10-26 2017-10-26 Video rendering method and device, computer equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711020523.2A CN107801093B (en) 2017-10-26 2017-10-26 Video rendering method and device, computer equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN107801093A CN107801093A (en) 2018-03-13
CN107801093B true CN107801093B (en) 2020-01-07

Family

ID=61548153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711020523.2A Active CN107801093B (en) 2017-10-26 2017-10-26 Video rendering method and device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN107801093B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110087096A (en) * 2019-04-15 2019-08-02 北京奇艺世纪科技有限公司 Method for processing video frequency, device and computer readable storage medium
CN110582021B (en) * 2019-09-26 2021-11-05 深圳市商汤科技有限公司 Information processing method and device, electronic equipment and storage medium
CN112333516B (en) * 2020-08-20 2024-04-30 深圳Tcl新技术有限公司 Dynamic display method, device, equipment and computer readable storage medium
CN111931678B (en) * 2020-08-21 2021-09-07 腾讯科技(深圳)有限公司 Video information processing method and device, electronic equipment and storage medium
CN113596561B (en) * 2021-07-29 2023-06-27 北京达佳互联信息技术有限公司 Video stream playing method, device, electronic equipment and computer readable storage medium
CN116761018B (en) * 2023-08-18 2023-10-17 湖南马栏山视频先进技术研究院有限公司 Real-time rendering system based on cloud platform

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1646220A1 (en) * 2004-10-05 2006-04-12 Magix AG System and method for creating a photo movie
CN1851709A (en) * 2006-05-25 2006-10-25 浙江大学 Embedded multimedia content-based inquiry and search realizing method
CN106156747A (en) * 2016-07-21 2016-11-23 四川师范大学 The method of the monitor video extracting semantic objects of Behavior-based control feature
CN106408646A (en) * 2015-07-27 2017-02-15 常州市武进区半导体照明应用技术研究院 Instant light color rendering system and method used for play scenes
CN106454348A (en) * 2015-08-05 2017-02-22 ***通信集团公司 Video coding method, video decoding method, video coding device, and video decoding device


Also Published As

Publication number Publication date
CN107801093A (en) 2018-03-13

Similar Documents

Publication Publication Date Title
CN107801093B (en) Video rendering method and device, computer equipment and readable storage medium
US10977809B2 (en) Detecting motion dragging artifacts for dynamic adjustment of frame rate conversion settings
US20200250798A1 (en) Three-dimensional model encoding device, three-dimensional model decoding device, three-dimensional model encoding method, and three-dimensional model decoding method
CN108933935B (en) Detection method and device of video communication system, storage medium and computer equipment
CN110971929B (en) Cloud game video processing method, electronic equipment and storage medium
CN110139104B (en) Video decoding method, video decoding device, computer equipment and storage medium
CN107622504B (en) Method and device for processing pictures
CN112182299B (en) Method, device, equipment and medium for acquiring highlight in video
CN108694719B (en) Image output method and device
WO2019184822A1 (en) Multi-media file processing method and device, storage medium and electronic device
CN111314702B (en) Vehicle real-time monitoring system, method and equipment based on image recognition
CN109089126B (en) Video analysis method, device, equipment and medium
CN109120995B (en) Video cache analysis method, device, equipment and medium
CN111954032A (en) Video processing method and device, electronic equipment and storage medium
CN107113464B (en) Content providing apparatus, display apparatus, and control method thereof
JP5950605B2 (en) Image processing system and image processing method
CN111954034B (en) Video coding method and system based on terminal equipment parameters
CN111654747B (en) Bullet screen display method and device
CN112565887A (en) Video processing method, device, terminal and storage medium
CN113452996A (en) Video coding and decoding method and device
CN105635715A (en) Video format identification method and device
WO2016161899A1 (en) Multimedia information processing method, device and computer storage medium
CN110599525A (en) Image compensation method and apparatus, storage medium, and electronic apparatus
CN116980604A (en) Video encoding method, video decoding method and related equipment
CN113542864B (en) Video splash screen area detection method, device and equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201214

Address after: 518000, Room 1902, Phase I, International Student Venture Building, No. 29 Gaoxin South Ring Road, High-tech Zone Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen Talos Innovation Co.,Ltd.

Address before: Room 902, Building 5, Dachong International Center, No. 39 Tonggu Road, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN DKVISION TECHNOLOGY Co.,Ltd.