CN113114946B - Video processing method and device, electronic equipment and storage medium


Info

Publication number
CN113114946B
Authority
CN
China
Prior art keywords
video
information
adjacent frames
weighting coefficient
video information
Prior art date
Legal status
Active
Application number
CN202110419872.1A
Other languages
Chinese (zh)
Other versions
CN113114946A
Inventor
张民
吕德政
崔刚
张彤
张艳
Current Assignee
Shenzhen Frame Color Film And Television Technology Co ltd
Original Assignee
Shenzhen Frame Color Film And Television Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Frame Color Film And Television Technology Co ltd
Priority to CN202110419872.1A
Publication of CN113114946A
Application granted
Publication of CN113114946B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281 Processing of video elementary streams involving reformatting operations by altering the temporal resolution, e.g. by frame skipping
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/485 End-user interface for client configuration
    • H04N21/4854 End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Television Systems (AREA)

Abstract

The application provides a video processing method and apparatus, an electronic device, and a storage medium. High dynamic range image processing is performed on original video information according to preset video brightness information to obtain processed first video information; the frame rate of the first video information is adjusted to obtain second video information, where the frame rate of the second video information is greater than that of the first video information; and judder processing is performed on the second video information according to the judder degree corresponding to the video brightness information to obtain the processed video information. Because the frame rate is raised after the video brightness is adjusted, judder in the video is reduced and its clarity is improved. At the same time, because a perfectly clear, judder-free picture can feel less real, a controlled amount of judder is added back to the frame-rate-adjusted video, which restores its sense of realism.

Description

Video processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
Currently, to reduce shooting costs, video is usually captured with conventional equipment at a low frame rate, for example 24 frames per second.
However, in low-frame-rate video, when the photographed object moves quickly, the captured footage judders noticeably, which degrades the viewing experience. Moreover, the brighter the video, the more noticeable the judder appears to the human eye.
To avoid the influence of judder in conventional low-frame-rate video, the present application provides a video processing method, an apparatus, an electronic device, and a storage medium to solve the above problems.
Disclosure of Invention
The application provides a video processing method and apparatus, an electronic device, and a storage medium, which address the prior-art problem that moving objects in video often judder and degrade the viewing experience.
A first aspect of the present application provides a method for processing a video, where the method includes:
according to preset video brightness information, carrying out high dynamic range image processing on original video information to obtain processed first video information;
adjusting the frame rate of the first video information to obtain second video information, wherein the frame rate of the second video information is greater than that of the first video information;
and performing judder processing on the second video information according to the judder degree corresponding to the video brightness information to obtain the processed video information, wherein the video brightness information and the judder degree satisfy the relationship that the greater the video brightness information, the smaller the judder degree.
In a possible implementation manner, the performing frame rate adjustment on the first video information to obtain second video information includes:
determining the number of frames to be inserted between every two adjacent frames of image information according to the frame rate of the second video information and the frame rate of the first video information;
determining, for every two adjacent frames of image information in the first video information, a motion vector between the two adjacent frames, and obtaining the frames to be inserted between them according to the motion vector and the number of frames to be inserted;
and performing frame interpolation processing on the first video information according to the frames to be inserted between every two adjacent frames of image information, to obtain the second video information.
In one possible implementation manner, the performing judder processing on the second video information according to the judder degree corresponding to the video brightness information includes:
determining, according to the second video information, a video segment that needs judder processing;
determining a weighting coefficient set corresponding to the judder degree according to the judder degree corresponding to the video brightness information, wherein the weighting coefficient set includes at least one first weighting coefficient, each first weighting coefficient corresponds to at least one group of adjacent frames in the video segment, and each group of adjacent frames corresponds to one first weighting coefficient; the value of each first weighting coefficient is greater than or equal to 0 and less than or equal to 1;
performing weighted summation on the motion vectors between each group of adjacent frames according to the first weighting coefficient corresponding to each group of adjacent frames in the video segment, to obtain a first vector to be distributed;
and calculating the average vector obtained by distributing the first vector to be distributed over the groups of adjacent frames according to the number of groups of adjacent frames in the video segment, and updating the motion vector between each group of adjacent frames in the video segment according to the average vector.
In another possible implementation manner, the performing judder processing on the second video information according to the judder degree corresponding to the video brightness information includes:
determining, according to the second video information, a video segment that needs judder processing;
determining a second weighting coefficient corresponding to the judder degree according to the judder degree corresponding to the video brightness information, wherein the value of the second weighting coefficient is greater than or equal to 0 and less than or equal to 1, and the judder degree and the second weighting coefficient satisfy the relationship that the smaller the judder degree, the larger the second weighting coefficient;
calculating the product of the second weighting coefficient and the sum of the motion vectors between each group of adjacent frames, to obtain a second vector to be distributed;
and calculating the average vector obtained by distributing the second vector to be distributed over the groups of adjacent frames according to the number of groups of adjacent frames in the video segment, and updating the motion vector between each group of adjacent frames in the video segment according to the average vector.
In a second aspect, the present application provides a video processing apparatus, the apparatus comprising:
a first processing unit, configured to perform high dynamic range image processing on original video information according to preset video brightness information to obtain processed first video information;
an adjusting unit, configured to adjust the frame rate of the first video information to obtain second video information, where the frame rate of the second video information is greater than the frame rate of the first video information;
and a second processing unit, configured to perform judder processing on the second video information according to the judder degree corresponding to the video brightness information to obtain processed video information, where the video brightness information and the judder degree satisfy the relationship that the greater the video brightness information, the smaller the judder degree.
In a possible implementation manner, the adjusting unit includes:
a first determining module, configured to determine the number of frames to be inserted between every two adjacent frames of image information according to the frame rate of the second video information and the frame rate of the first video information;
a second determining module, configured to determine, for every two adjacent frames of image information in the first video information, a motion vector between the two adjacent frames, and to obtain the frames to be inserted between them according to the motion vector and the number of frames to be inserted;
and an acquisition module, configured to perform frame interpolation processing on the first video information according to the frames to be inserted between every two adjacent frames of image information, to obtain the second video information.
In one possible implementation, the second processing unit includes:
a fourth determining module, configured to determine, according to the second video information, a video segment that needs judder processing;
a fifth determining module, configured to determine a weighting coefficient set corresponding to the judder degree according to the judder degree corresponding to the video brightness information, where the weighting coefficient set includes at least one first weighting coefficient, each first weighting coefficient corresponds to at least one group of adjacent frames in the video segment, and each group of adjacent frames corresponds to one first weighting coefficient; the value of each first weighting coefficient is greater than or equal to 0 and less than or equal to 1;
a sixth determining module, configured to perform weighted summation on the motion vectors between each group of adjacent frames according to the first weighting coefficient corresponding to each group of adjacent frames in the video segment, to obtain a first vector to be distributed;
and an adjusting module, configured to calculate the average vector obtained by distributing the first vector to be distributed over the groups of adjacent frames according to the number of groups of adjacent frames in the video segment, and to update the motion vector between each group of adjacent frames in the video segment according to the average vector.
In another possible implementation, the second processing unit includes:
a seventh determining module, configured to determine, according to the second video information, a video segment that needs judder processing;
an eighth determining module, configured to determine a second weighting coefficient corresponding to the judder degree according to the judder degree corresponding to the video brightness information, where the value of the second weighting coefficient is greater than or equal to 0 and less than or equal to 1, and the judder degree and the second weighting coefficient satisfy the relationship that the smaller the judder degree, the larger the second weighting coefficient;
a calculating module, configured to calculate the product of the second weighting coefficient and the sum of the motion vectors between each group of adjacent frames, to obtain a second vector to be distributed;
and an updating module, configured to calculate the average vector obtained by distributing the second vector to be distributed over the groups of adjacent frames according to the number of groups of adjacent frames in the video segment, and to update the motion vector between each group of adjacent frames in the video segment according to the average vector.
In a third aspect, the present application provides an electronic device, comprising: a memory and a processor;
the memory is configured to store instructions executable by the processor;
and the processor is configured to perform the method according to any one of the first aspect by executing the executable instructions.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon computer-executable instructions for implementing the method according to any one of the first aspect when executed by a processor.
In a fifth aspect, the present application provides a computer program product comprising a computer program that, when executed by a processor, implements the method according to any one of the first aspect.
According to the video processing method and apparatus, the electronic device, and the storage medium provided by the application, high dynamic range image processing is performed on original video information according to preset video brightness information to obtain processed first video information; the frame rate of the first video information is adjusted to obtain second video information, whose frame rate is greater than that of the first video information; and judder processing is performed on the second video information according to the judder degree corresponding to the video brightness information to obtain the processed video information. Because the frame rate is raised after the video brightness is adjusted, judder in the video is reduced and its clarity is improved. At the same time, because a perfectly clear, judder-free picture can feel less real, the frame-rate-adjusted video undergoes judder processing, which restores its sense of realism.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic view of an application scenario provided in the present application;
fig. 2 is a schematic flowchart of a video processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a video frame rate adjustment method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another video processing method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another video processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another video processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
The above figures show specific embodiments of the present application, which are described in more detail below. The drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the concepts of the application to those skilled in the art with reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Fig. 1 is a schematic view of an application scenario provided in the present application. A shooting device captures the object to be photographed and produces original video information, which is then sent to a video processing platform for processing (for example, clipping, adding subtitles, and rendering images). The video processing platform may be a cloud server, which is not limited here. The processed video is sent to a playing device (such as a television, a mobile phone, or cinema playback equipment), which plays it upon receipt. The processed video thus gives the audience a better viewing experience.
Currently, to reduce shooting costs, video is usually captured with conventional equipment at a low frame rate, for example 24 frames per second. However, in low-frame-rate video, when the photographed object moves quickly (for example, in a chase or fight scene), the captured footage judders noticeably, which degrades the viewing experience. In addition, because of how the human eye perceives motion, the brighter a low-frame-rate video is, the more obvious its judder appears, and when display brightness is raised substantially, the judder becomes uncomfortable to watch.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a schematic flowchart of a video processing method according to an embodiment of the present application. As shown in fig. 2, the method includes:
s101, according to preset video brightness information, high dynamic range image processing is carried out on original video information to obtain processed first video information.
Illustratively, in the real world the brightness of objects observed by the human eye can reach tens of thousands of nits, but when video is played, especially movie video in a cinema, most theaters support a peak playback brightness of only about 48 nits. The brightness of a film watched in a theater is therefore typically compressed to 48 nits. Within this range, the film's brightness range is compressed relative to the brightness of objects in the real world, so the light-dark contrast in the film is reduced and the scene feels less real. Therefore, to improve the viewing experience and make a movie feel more realistic, HDR (High Dynamic Range) processing is applied to the original video to increase its dynamic range, so that the brightness of the resulting first video information can be raised to the preset brightness information. Besides raising brightness, HDR processing also increases the contrast of the original video image, making the light-dark contrast more pronounced and the picture more realistic and detailed. For example, for original video of a fire scene at night, after HDR processing the flames contrast clearly with the areas covered by smoke, bringing the picture closer to the real-world scene.
Specifically, the HDR processing of the original video is similar to prior-art methods and is not detailed here. The preset brightness information in this step may be set according to human experience, or according to the playing requirements of the playing device.
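As an illustration only (the patent does not prescribe a particular HDR algorithm), the following Python sketch expands a normalized SDR frame toward a preset peak luminance with a simple power curve; the function name hdr_expand and the parameters peak_dst and gamma are hypothetical stand-ins for whatever HDR method is actually used.

```python
import numpy as np

def hdr_expand(frame, peak_dst=1000.0, gamma=1.2):
    # frame: display-referred SDR luminance, float values in [0, 1].
    # Returns absolute HDR luminance in nits with peak brightness peak_dst.
    # gamma > 1 deepens shadows, so the expanded highlights gain contrast
    # rather than a uniform brightness lift.
    frame = np.clip(np.asarray(frame, dtype=np.float32), 0.0, 1.0)
    return (frame ** gamma) * peak_dst

# A mid-grey value of 0.5 maps to roughly 435 nits at a 1000-nit peak,
# while full white reaches the preset peak itself.
print(hdr_expand(np.array([0.0, 0.5, 1.0])))
```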
S102, adjusting the frame rate of the first video information to obtain second video information, wherein the frame rate of the second video information is greater than that of the first video information.
For example, to avoid the degraded viewing experience caused by strong judder in low-frame-rate video, the frame rate of the obtained first video information may be raised so that the frame rate of the adjusted second video information is higher than the frame rate of the first video information; that is, the first video information undergoes frame-rate up-conversion.
In one example, the frame rate of the first video information may be adjusted with a pre-trained model: the frames immediately before and after each insertion point are input into the model together with the number of frames to be inserted, and the model outputs the image information of the frames to be inserted. The resulting frames are then inserted between the corresponding previous and next frames to obtain the second video information.
S103, performing judder processing on the second video information according to the judder degree corresponding to the video brightness information to obtain the processed video information, where the video brightness information and the judder degree satisfy the relationship that the greater the video brightness information, the smaller the judder degree.
Illustratively, after the frame-rate processing of step S102, the change between any two adjacent frames in the second video (including the motion vectors of objects) is reduced, so the judder visible to the human eye is largely eliminated and the video looks clearer. However, the resulting second video information can be so clear and judder-free that it conflicts with viewers' long-standing habits for motion scenes. A modest judder effect can therefore be added back to the second video information, preserving the familiar viewing feel without harming the viewing experience. That is, judder processing is applied to the second video information to add a judder effect that matches user expectations.
Specifically, human perception of judder varies with brightness: for videos of the same frame rate, the larger the brightness information value, the more obvious the judder observed by the human eye. Therefore the judder must be added according to the judder degree corresponding to the current brightness information of the second video, where the video brightness information and the judder degree satisfy the relationship that the greater the video brightness information, the smaller the judder degree. In other words, the brighter the video, the smaller the added judder effect.
The relationship between the video brightness information and the judder degree may be obtained through prior experiments, or determined from the experience of relevant professionals, which is not limited here.
In a possible implementation manner, once the judder degree is obtained from the video brightness information, the corresponding change (e.g., a motion vector) between two adjacent frames in the second video information is determined from the judder degree; the later of the two adjacent frames is modified accordingly, and each subsequent frame is then modified in turn following the time order of the video, yielding the processed video information.
In this embodiment, high dynamic range image processing is performed on original video information according to preset video brightness information to obtain processed first video information; the frame rate of the first video information is adjusted to obtain second video information, whose frame rate is greater than that of the first video information; and judder processing is performed on the second video information according to the judder degree corresponding to the video brightness information to obtain the processed video information. Because the frame rate is raised after the video brightness is adjusted, judder in the video is reduced and its clarity is improved. At the same time, because a perfectly clear, judder-free picture can feel less real, judder is added back to the frame-rate-adjusted video, restoring a sense of realism and better matching viewers' habits.
Specifically, the frame rate adjustment of the video, that is, step S102, can be performed through the following steps. Fig. 3 is a schematic flowchart of a video frame rate adjustment method according to an embodiment of the present application; as shown in fig. 3, the method includes:
and S1021, determining the number of frames to be inserted between every two adjacent frames of image information according to the frame rate of the second video information and the frame rate of the first video information.
For example, when the frame rate of the first video information is adjusted, the number of frames to be inserted between each two adjacent frames in the first video information needs to be determined according to the frame rate of the first video information and the frame rate of the second video information.
For example, if the frame rate of the first video information is 24 frames and the frame rate of the second video information is 48 frames, it may be determined that one frame of image needs to be inserted between every two adjacent frames in the first video information. Wherein, the frame rate of the video is 24 frames, which means that each second of the video comprises 24 frames of images.
S1022, for every two adjacent frames of image information in the first video information, determining the motion vector between the two adjacent frames, and obtaining the frames to be inserted between them according to the motion vector and the number of frames to be inserted.
Illustratively, for every two adjacent frames of image information in the first video information, the motion vector between the two frames is determined first, and the frames to be inserted between them are then obtained from this motion vector and the number of frames to be inserted.
In one example, the motion vectors of all image elements in the two adjacent frames can be obtained, where an image element is an element making up the frame, such as the sky, the ground, or a person in an image. After the motion vector of each image element is obtained, it is divided evenly according to the number of frames to be inserted; the position of each image element is then adjusted step by step, starting from the first of the two adjacent frames, by the evenly divided motion vector, which yields the frames to be inserted between the two adjacent frames.
S1023, performing frame interpolation processing on the first video information according to the frames to be inserted between every two adjacent frames of image information, to obtain the second video information.
Illustratively, after the frames to be inserted between two adjacent frames of image information are obtained, they are inserted between the corresponding two adjacent frames, producing the second video information.
This example provides a method for processing the video frame rate: the number of frames to be inserted is determined from the frame rates of the video information before and after processing, and the image for each inserted frame is determined from the motion vector between each pair of adjacent frames in the first video information and the number of frames to be inserted between them. With this method, the motion vector between two adjacent frames of the up-converted video is reduced, so image judder in low-frame-rate video no longer disturbs the viewer.
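Purely as an illustration of steps S1021 to S1023 (the patent does not fix an implementation), the following Python code divides each pair's motion vector evenly over the frames to be inserted. Treating the motion vector as a single global (dy, dx) translation per pair is a simplifying assumption; in the described method each image element carries its own vector, and all function names here are hypothetical.

```python
import numpy as np

def frames_to_insert(fps_src, fps_dst):
    # S1021: 24 -> 48 fps means one new frame between each adjacent pair.
    assert fps_dst % fps_src == 0, "sketch assumes an integer rate multiple"
    return fps_dst // fps_src - 1

def interpolate_pair(prev_frame, motion_vec, n_insert):
    # S1022: divide the motion vector evenly and shift the earlier frame
    # by the accumulated fraction for each synthesized frame.
    inserted = []
    for k in range(1, n_insert + 1):
        frac = k / (n_insert + 1)
        dy, dx = (round(c * frac) for c in motion_vec)
        inserted.append(np.roll(prev_frame, shift=(dy, dx), axis=(0, 1)))
    return inserted

def upconvert(frames, motion_vecs, fps_src=24, fps_dst=48):
    # S1023: splice the synthesized frames back between their source pairs;
    # motion_vecs holds one (dy, dx) vector per adjacent pair.
    n = frames_to_insert(fps_src, fps_dst)
    out = []
    for i in range(len(frames) - 1):
        out.append(frames[i])
        out.extend(interpolate_pair(frames[i], motion_vecs[i], n))
    out.append(frames[-1])
    return out
```

For 24 to 48 frames per second, frames_to_insert returns 1 and each pair receives a single midpoint frame shifted by half the motion vector.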
Fig. 4 is a schematic flowchart of another video processing method according to an embodiment of the present application, and as shown in fig. 4, the method includes:
s401, according to preset video brightness information, high dynamic range image processing is carried out on the original video information, and processed first video information is obtained.
This step corresponds to step S101 shown in fig. 2 and is not described again here.
S402, adjusting the frame rate of the first video information to obtain second video information, wherein the frame rate of the second video information is greater than that of the first video information.
This step corresponds to step S102 shown in fig. 2, or may be implemented through the steps shown in fig. 3, and is not described again here.
S403, determining, according to the second video information, a video segment that needs judder processing.
For example, when judder processing is to be performed on the second video information, the video segments that need it can first be identified within the second video information. That is, not every segment of the acquired second video information needs judder processing; only segments with a large degree of motion do, so these segments must be identified before the judder processing is performed. In a possible implementation manner, scenes with large motion vectors are, in terms of shot types, usually close-range or panoramic shots, so the close-range segments can be screened out of the current video and their motion-vector changes analyzed; the segments formed by image frames with larger motion vectors are then taken as the segments that need judder processing.
S404, determining a weighting coefficient set corresponding to the judder degree according to the judder degree corresponding to the video brightness information, where the weighting coefficient set includes at least one first weighting coefficient, each first weighting coefficient corresponds to at least one group of adjacent frames in the video segment, and each group of adjacent frames corresponds to one first weighting coefficient; the value of each first weighting coefficient is greater than or equal to 0 and less than or equal to 1.
For example, a first weighting coefficient corresponding to the motion vector between each group of adjacent frames in the video segment may be determined according to the judder degree, yielding the weighting coefficient set corresponding to that judder degree, where each first weighting coefficient lies between 0 and 1 inclusive and corresponds to at least one group of adjacent frames in the video segment.
The first weighting coefficients of different groups of adjacent frames may all differ from one another, or only some of them may differ.
In one possible embodiment, when the weighting coefficient set corresponding to the judder degree is determined, a base weighting coefficient is first derived from the relationship that the smaller the judder degree, the larger the corresponding first weighting coefficient; the set is then chosen so that the average of its members equals this base coefficient.
In another possible implementation manner, each judder degree corresponds to its own range of weighting coefficient values, and the ranges for different judder degrees do not intersect; the first weighting coefficients are then selected from the range corresponding to the judder degree at hand.
S405, performing weighted summation on the motion vectors between each group of adjacent frames according to the first weighting coefficient corresponding to each group of adjacent frames in the video segment, to obtain a first vector to be distributed.
Illustratively, after the weighting coefficient set is determined, the motion vector of each group of adjacent frames is weighted by its first weighting coefficient and the results are summed, which gives the first vector to be distributed for the video segment.
S406, calculating the average vector obtained by distributing the first vector to be distributed over the groups of adjacent frames according to the number of groups of adjacent frames in the video segment, and updating the motion vector between each group of adjacent frames in the video segment according to the average vector.
Illustratively, after the first vector to be distributed is determined, it is divided equally by the number of adjacent-frame groups in the video segment to obtain the average vector for each group; the video frames in the segment are then updated according to this average vector, starting from the first frame of the segment.
In this embodiment, before the judder processing is performed on the video, the segments that require it are determined first. When these segments are screened, the video can be divided by shot type and searched among close-range or panoramic shots, which speeds up the processing. Moreover, there is a correspondence between the judder degree and the motion vectors: once the judder degree is determined from the brightness information, the weighted motion-vector sum of the video segment, that is, the first vector to be distributed, can be computed using the weighting coefficient set corresponding to the judder degree, and this vector is then allocated equally. With this method the processed video better matches viewing habits, and because the weighting coefficient set can assign different first weighting coefficients to different groups of adjacent frames, the images the segment focuses on stand out more and the added judder effect is more accurate.
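A minimal sketch of steps S404 to S406 follows, under the assumption that "updating according to the average vector" means replacing each pair's motion vector with the average (the text leaves the exact update open); the function name and array layout are hypothetical.

```python
import numpy as np

def judder_with_weight_set(motion_vecs, weights):
    # motion_vecs: (n_pairs, 2) array, one (dy, dx) per adjacent-frame pair
    # in the segment; weights: the first weighting coefficients, one per
    # pair, each in [0, 1], chosen from the set matching the judder degree.
    motion_vecs = np.asarray(motion_vecs, dtype=np.float32)
    w = np.asarray(weights, dtype=np.float32)[:, None]
    first_vector = (w * motion_vecs).sum(axis=0)  # first vector to be distributed (S405)
    average = first_vector / len(motion_vecs)     # equal allocation over the pairs (S406)
    return np.broadcast_to(average, motion_vecs.shape).copy()
```

Because every pair ends up with the same small vector, a uniform residual shake spreads over the segment, weighted toward the pairs whose coefficients were largest.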
In another example, step S103 can also be implemented through the following steps:
First, a video segment that needs judder processing is determined according to the second video information.
Second, a second weighting coefficient corresponding to the judder degree is determined according to the judder degree corresponding to the video brightness information, where the value of the second weighting coefficient is greater than or equal to 0 and less than or equal to 1, and the judder degree and the second weighting coefficient satisfy the relationship that the smaller the judder degree, the larger the second weighting coefficient.
Third, the product of the second weighting coefficient and the sum of the motion vectors between each group of adjacent frames is calculated to obtain a second vector to be distributed.
Fourth, the average vector obtained by distributing the second vector to be distributed over the groups of adjacent frames is calculated according to the number of groups of adjacent frames in the video segment, and the motion vector between each group of adjacent frames in the video segment is updated according to the average vector.
That is, compared with the embodiment shown in fig. 4, after the video segment requiring judder processing is determined, this example directly determines a single second weighting coefficient from the judder degree, computes the sum of the motion vectors between adjacent frames in the segment, and multiplies it by the second weighting coefficient to obtain the second vector to be distributed; the average vector for each group of adjacent frames is then obtained from the number of groups, and each video frame in the segment is updated starting from the first frame of the segment.
Compared with the embodiment shown in fig. 4, this embodiment requires less computation and processes the video faster when the judder effect is added, as the sketch below illustrates.
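A corresponding sketch of the single-coefficient variant, under the same assumptions as above (hypothetical names; the update rule is again taken to replace each pair's vector with the average):

```python
import numpy as np

def judder_with_single_weight(motion_vecs, w2):
    # w2: second weighting coefficient in [0, 1]; the smaller the judder
    # degree, the larger w2. One scalar multiply replaces the per-pair weighting.
    motion_vecs = np.asarray(motion_vecs, dtype=np.float32)
    second_vector = w2 * motion_vecs.sum(axis=0)  # second vector to be distributed
    average = second_vector / len(motion_vecs)    # equal allocation over the pairs
    return np.broadcast_to(average, motion_vecs.shape).copy()
```

Compared with judder_with_weight_set, only the scalar w2 has to be chosen, which is where the saving in computation comes from.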
Fig. 5 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application. As shown in fig. 5, the apparatus includes:
a first processing unit 61, configured to perform high dynamic range image processing on original video information according to preset video brightness information to obtain processed first video information;
an adjusting unit 62, configured to perform frame rate adjustment on the first video information to obtain second video information, where the frame rate of the second video information is greater than the frame rate of the first video information;
and a second processing unit 63, configured to perform judder processing on the second video information according to the judder degree corresponding to the video brightness information to obtain processed video information, where the video brightness information and the judder degree satisfy the relationship that the greater the video brightness information, the smaller the judder degree.
The apparatus provided in this embodiment implements the technical solution of the foregoing method; its implementation principle and technical effects are similar and are not repeated here.
Fig. 6 is a schematic structural diagram of another video processing apparatus according to an embodiment of the present application. As shown in fig. 6, on the basis of the apparatus shown in fig. 5, the adjusting unit 62 includes:
a first determining module 621, configured to determine, according to the frame rate of the second video information and the frame rate of the first video information, the number of frames to be inserted between every two adjacent frames of image information;
a second determining module 622, configured to determine, for every two adjacent frames of image information in the first video information, the motion vector between the two adjacent frames, and to obtain the frames to be inserted between them according to the motion vector and the number of frames to be inserted;
and an obtaining module 623, configured to perform frame interpolation processing on the first video information according to the frames to be inserted between every two adjacent frames of image information, to obtain the second video information.
In a possible implementation manner, the second processing unit 63 includes:
a fourth determining module 631, configured to determine, according to the second video information, a video segment that needs judder processing;
a fifth determining module 632, configured to determine a weighting coefficient set corresponding to the judder degree according to the judder degree corresponding to the video brightness information, where the weighting coefficient set includes at least one first weighting coefficient, each first weighting coefficient corresponds to at least one group of adjacent frames in the video segment, and each group of adjacent frames corresponds to one first weighting coefficient; the value of each first weighting coefficient is greater than or equal to 0 and less than or equal to 1;
a sixth determining module 633, configured to perform weighted summation on the motion vectors between each group of adjacent frames according to the first weighting coefficient corresponding to each group of adjacent frames in the video segment, to obtain a first vector to be distributed;
and an adjusting module 634, configured to calculate the average vector obtained by distributing the first vector to be distributed over the groups of adjacent frames according to the number of groups of adjacent frames in the video segment, and to update the motion vector between each group of adjacent frames in the video segment according to the average vector.
Fig. 7 is a schematic structural diagram of another video processing apparatus according to an embodiment of the present application. On the basis of the apparatus shown in fig. 5, the second processing unit 63 includes:
a seventh determining module 635, configured to determine, according to the second video information, a video segment that needs judder processing;
an eighth determining module 636, configured to determine a second weighting coefficient corresponding to the judder degree according to the judder degree corresponding to the video brightness information, where the value of the second weighting coefficient is greater than or equal to 0 and less than or equal to 1, and the judder degree and the second weighting coefficient satisfy the relationship that the smaller the judder degree, the larger the second weighting coefficient;
a calculating module 637, configured to calculate the product of the second weighting coefficient and the sum of the motion vectors between each group of adjacent frames, to obtain a second vector to be distributed;
and an updating module 638, configured to calculate the average vector obtained by distributing the second vector to be distributed over the groups of adjacent frames according to the number of groups of adjacent frames in the video segment, and to update the motion vector between each group of adjacent frames in the video segment according to the average vector.
The apparatus provided in this embodiment implements the technical solution of the foregoing method; its implementation principle and technical effects are similar and are not repeated here.
Fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, and as shown in fig. 8, the electronic device includes:
a processor 291; the electronic device further includes a memory 292, and may also include a communication interface 293 and a bus 294. The processor 291, the memory 292, and the communication interface 293 may communicate with one another through the bus 294. The communication interface 293 may be used for information transmission. The processor 291 may call logic instructions in the memory 292 to perform the methods of the embodiments described above.
Furthermore, the logic instructions in the memory 292 may be implemented as software functional units and, when sold or used as a stand-alone product, stored in a computer-readable storage medium.
The memory 292 is a computer-readable storage medium that stores software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present application. By running the software programs, instructions, and modules stored in the memory 292, the processor 291 executes functional applications and performs data processing, thereby implementing the methods in the above method embodiments.
The memory 292 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 292 may include high-speed random access memory and may also include non-volatile memory.
An embodiment of the present application provides a computer-readable storage medium in which computer-executable instructions are stored; when executed by a processor, the computer-executable instructions implement the method provided by the above embodiments.
An embodiment of the present application provides a computer program product, including a computer program that, when executed by a processor, implements the method provided by the above embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention that follow, in general, the principles of the application and include such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the application is limited only by the appended claims.

Claims (6)

1. A method for processing video, the method comprising:
according to preset video brightness information, carrying out high dynamic range image processing on original video information to obtain processed first video information;
adjusting the frame rate of the first video information to obtain second video information, wherein the frame rate of the second video information is greater than that of the first video information;
performing judder processing on the second video information according to the judder degree corresponding to the video brightness information to obtain processed video information, wherein the video brightness information and the judder degree satisfy the relationship that the greater the video brightness information, the smaller the judder degree;
wherein the performing judder processing on the second video information according to the judder degree corresponding to the video brightness information comprises:
determining, according to the second video information, a video segment that needs judder processing;
determining a weighting coefficient set corresponding to the judder degree according to the judder degree corresponding to the video brightness information, wherein the weighting coefficient set comprises at least one first weighting coefficient, each first weighting coefficient corresponds to at least one group of adjacent frames in the video segment, and each group of adjacent frames corresponds to one first weighting coefficient; the value of each first weighting coefficient is greater than or equal to 0 and less than or equal to 1;
performing weighted summation on the motion vectors between each group of adjacent frames according to the first weighting coefficient corresponding to each group of adjacent frames in the video segment, to obtain a first vector to be distributed;
and calculating the average vector obtained by distributing the first vector to be distributed over the groups of adjacent frames according to the number of groups of adjacent frames in the video segment, and updating the motion vector between each group of adjacent frames in the video segment according to the average vector;
or,
determining, according to the second video information, a video segment that needs judder processing;
determining a second weighting coefficient corresponding to the judder degree according to the judder degree corresponding to the video brightness information, wherein the value of the second weighting coefficient is greater than or equal to 0 and less than or equal to 1, and the judder degree and the second weighting coefficient satisfy the relationship that the smaller the judder degree, the larger the second weighting coefficient;
calculating the product of the second weighting coefficient and the sum of the motion vectors between each group of adjacent frames in the video segment, to obtain a second vector to be distributed;
and calculating the average vector obtained by distributing the second vector to be distributed over the groups of adjacent frames according to the number of groups of adjacent frames in the video segment, and updating the motion vector between each group of adjacent frames in the video segment according to the average vector.
2. The method of claim 1, wherein the adjusting the frame rate of the first video information to obtain second video information comprises:
determining the number of frames to be inserted between every two adjacent frames of image information according to the frame rate of the second video information and the frame rate of the first video information;
determining, for every two adjacent frames of image information in the first video information, a motion vector between the two adjacent frames, and obtaining the frames to be inserted between them according to the motion vector and the number of frames to be inserted;
and performing frame interpolation processing on the first video information according to the frames to be inserted between every two adjacent frames of image information, to obtain the second video information.
3. An apparatus for processing video, the apparatus comprising:
a first processing unit configured to perform high dynamic range image processing on original video information according to preset video brightness information, to obtain processed first video information;
an adjusting unit configured to adjust a frame rate of the first video information to obtain second video information, wherein the frame rate of the second video information is greater than the frame rate of the first video information;
and a second processing unit configured to perform jitter processing on the second video information according to a jitter degree corresponding to the video brightness information, to obtain processed video information, wherein the jitter degree and the video brightness information satisfy the relationship that the larger the video brightness information, the smaller the jitter degree;
the second processing unit comprising:
a fourth determining module configured to determine, according to the second video information, a video segment that requires jitter processing;
a fifth determining module configured to determine, according to the jitter degree corresponding to the video brightness information, a weighting coefficient set corresponding to the jitter degree, wherein the weighting coefficient set comprises at least one first weighting coefficient, each first weighting coefficient corresponds to at least one group of adjacent frames in the video segment, and each group of adjacent frames corresponds to one first weighting coefficient; each first weighting coefficient takes a value greater than or equal to 0 and less than or equal to 1;
a sixth determining module configured to perform a weighted summation of the motion vectors between the groups of adjacent frames according to the first weighting coefficient corresponding to each group of adjacent frames in the video segment, to obtain a first vector to be distributed;
and an adjusting module configured to calculate, according to the number of groups of adjacent frames in the video segment, the average vector obtained by distributing the first vector to be distributed among the groups of adjacent frames, and to update the motion vector between each group of adjacent frames in the video segment according to the average vector;
or, alternatively,
the second processing unit comprising:
a seventh determining module configured to determine, according to the second video information, a video segment that requires jitter processing;
an eighth determining module configured to determine, according to the jitter degree corresponding to the video brightness information, a second weighting coefficient corresponding to the jitter degree, wherein the second weighting coefficient takes a value greater than or equal to 0 and less than or equal to 1, and the jitter degree and the second weighting coefficient satisfy the relationship that the smaller the jitter degree, the larger the second weighting coefficient;
a calculating module configured to calculate the product of the second weighting coefficient and the sum of the motion vectors between the groups of adjacent frames in the video segment, to obtain a second vector to be distributed;
and an updating module configured to calculate, according to the number of groups of adjacent frames in the video segment, the average vector obtained by distributing the second vector to be distributed among the groups of adjacent frames, and to update the motion vector between each group of adjacent frames in the video segment according to the average vector.
4. The apparatus of claim 3, wherein the adjusting unit comprises:
a first determining module configured to determine the number of frames to be inserted between every two adjacent frames of image information according to the frame rate of the second video information and the frame rate of the first video information;
a second determining module configured to determine, for every two adjacent frames of image information in the first video information, a motion vector between the two adjacent frames, and to obtain the frames to be inserted between the two adjacent frames according to the number of frames to be inserted between them and the motion vector;
and an obtaining module configured to perform frame insertion processing on the first video information according to the frames to be inserted between every two adjacent frames of image information, to obtain the second video information.
5. An electronic device, comprising: a memory and a processor;
the memory being configured to store instructions executable by the processor;
wherein the processor is configured to execute the instructions to perform the method of any one of claims 1 to 2.
6. A computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement the method of any one of claims 1 to 2.
CN202110419872.1A 2021-04-19 2021-04-19 Video processing method and device, electronic equipment and storage medium Active CN113114946B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110419872.1A CN113114946B (en) 2021-04-19 2021-04-19 Video processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113114946A (en) 2021-07-13
CN113114946B (en) 2023-04-18

Family

ID=76718773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110419872.1A Active CN113114946B (en) 2021-04-19 2021-04-19 Video processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113114946B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025202B (en) * 2021-11-03 2024-05-07 抖音视界有限公司 Video processing method, device and storage medium
CN114268703A (en) * 2021-12-27 2022-04-01 安徽淘云科技股份有限公司 Imaging adjusting method and device during screen scanning, storage medium and equipment
CN116193257B (en) * 2023-04-21 2023-09-22 成都华域天府数字科技有限公司 Method for eliminating image jitter of surgical video image
CN117115155A (en) * 2023-10-23 2023-11-24 江西拓世智能科技股份有限公司 Image analysis method and system based on AI live broadcast

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003069961A (en) * 2001-08-27 2003-03-07 Seiko Epson Corp Frame rate conversion
US8659701B2 (en) * 2011-12-19 2014-02-25 Sony Corporation Usage of dither on interpolated frames
JP2014187690A (en) * 2013-02-25 2014-10-02 Jvc Kenwood Corp Video signal processing device and method
US10944938B2 (en) * 2014-10-02 2021-03-09 Dolby Laboratories Licensing Corporation Dual-ended metadata for judder visibility control

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant