CN111953980B - Video processing method and device - Google Patents

Video processing method and device

Info

Publication number
CN111953980B
CN111953980B (application CN202010848606.6A)
Authority
CN
China
Prior art keywords
data
image
encoded
area image
terminal
Prior art date
Legal status
Active
Application number
CN202010848606.6A
Other languages
Chinese (zh)
Other versions
CN111953980A (en)
Inventor
李建华
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010848606.6A
Publication of CN111953980A
Application granted
Publication of CN111953980B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/156Availability of hardware or computational resources, e.g. encoding based on power-saving criteria

Landscapes

  • Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The disclosure relates to a video processing method and device. One of the methods includes the following steps: acquiring video data collected by a terminal, the video data including at least one image frame; for an image frame to be encoded, determining a first area image and a second area image in the image frame to be encoded; encoding the first area image based on a first encoding algorithm to obtain first encoded data of the first area image; and sending the first encoded data and the data of the second area image to a server, where the compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image. A smaller compression ratio requires less computation, so the computation corresponding to the second area image is reduced relative to encoding the whole image frame, and the computation for the whole image frame is therefore reduced. This lowers the occupation of processor resources and the processor load, alleviates the heating problem of the terminal, and improves the user experience.

Description

Video processing method and device
Technical Field
The disclosure relates to the technical field of internet, and in particular relates to a video processing method and device.
Background
In application scenarios such as live video streaming, video conferencing, and video calls, a terminal captures video data through its camera and sends the video data to a server, which then forwards it to other terminals.
The video data includes a plurality of consecutive image frames. To reduce bandwidth usage, the terminal video-encodes the entire image of each image frame and transmits the encoded data to the server, and the server decodes each piece of encoded data to recover the entire image of each image frame, thereby obtaining the video data.
In the related art, the amount of video-encoding computation on the terminal is large. Encoding occupies a substantial share of processor resources and drives the processor load too high, so the terminal heats up after processing video for a period of time, which degrades the user experience.
Disclosure of Invention
The disclosure provides a video processing method and device, which at least solve the problem in the related art that the terminal performs a large amount of computation during video encoding. The technical scheme of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a video processing method, including:
acquiring video data acquired by a terminal, wherein the video data comprises at least one image frame;
For an image frame to be encoded, determining a first area image and a second area image in the image frame to be encoded;
encoding the first region image based on a first encoding algorithm to obtain first encoded data of the first region image,
transmitting the first encoded data and the data of the second area image to a server; the compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image.
Wherein the sending the first encoded data and the data of the second area image to the server includes:
and using the data of the second area image that has not been image-encoded as the data of the second area image, and sending the first encoded data and the data of the second area image to a server.
Before sending the first encoded data and the data of the second area image to a server, the method further comprises:
encoding the second region image based on a second encoding algorithm to obtain second encoded data of the second region image,
and taking the second encoded data as the data of the second area image.
Wherein, for the image frame to be encoded, determining the first area image and the second area image in the image frame to be encoded includes:
Detecting terminal information of the terminal;
and if the terminal information meets the preset condition for representing the excessive power consumption of the terminal, dividing the image frame to be encoded to obtain the first area image and the second area image.
Wherein, for the image frame to be encoded, determining the first area image and the second area image in the image frame to be encoded includes:
detecting terminal information of the terminal and network bandwidth of a network environment where the terminal is located;
and if the terminal information meets the preset condition for representing the excessive power consumption of the terminal and the network bandwidth is larger than the preset bandwidth, dividing the image frame to be encoded to obtain the first area image and the second area image.
Wherein the method further includes:
detecting terminal information of the terminal again after a preset time;
and if the terminal information meets the preset condition for representing that the power consumption of the terminal is too high, reducing the proportion of the first area image when dividing the image frame to be encoded, and obtaining the first area image and the second area image.
Wherein the sending the first encoded data and the data of the second area image to the server includes:
Respectively performing a labeling operation on the first encoded data and the data of the second region image, wherein the first encoded data and the data of the second region image have the same label;
and sending the first encoded data and the data of the second area image to the server.
According to a second aspect of embodiments of the present disclosure, there is provided a video processing method, including:
receiving first coded data corresponding to a first region image and data of a second region image in an image frame to be coded, which are sent by a terminal; the compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image;
performing a decoding operation on the first encoded data based on a first decoding algorithm to obtain first decoded data;
and splicing the first decoded data and the data of the second area image to obtain the image frame to be encoded.
Wherein the splicing of the first decoded data and the data of the second area image to obtain the image frame to be encoded includes:
splicing the first decoded data and the data of the second area image to obtain the image frame to be encoded, wherein the data of the second area image is data that has not been image-encoded.
Before splicing the first decoded data and the data of the second area image to obtain the image frame to be encoded, the method further includes:
performing a decoding operation on the data of the second area image based on a second decoding algorithm to obtain second decoded data;
and splicing the first decoded data and the second decoded data to obtain the image frame to be encoded.
According to a third aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including:
an acquisition unit configured to perform acquisition of video data acquired by a terminal, the video data including at least one image frame;
a determining unit configured to perform, for an image frame to be encoded, determining a first region image and a second region image in the image frame to be encoded;
a first encoding unit configured to perform encoding of the first region image based on a first encoding algorithm to obtain first encoded data of the first region image,
a transmitting unit configured to perform transmission of the first encoded data and the data of the second area image to a server; the compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image.
The sending unit is specifically configured to send the first encoded data and the data of the second area image to a server by using the data of the second area image, which is not subjected to image encoding, as the data of the second area image.
Wherein the apparatus further comprises:
and a second encoding unit configured to perform encoding of the second region image based on a second encoding algorithm to obtain second encoded data of the second region image, the second encoded data being used as data of the second region image.
The compression ratio corresponding to the second coding algorithm is smaller than the compression ratio corresponding to the first coding algorithm.
Wherein the determining unit includes:
a detection unit configured to perform detection of terminal information of the terminal;
and the dividing unit is configured to divide the image frame to be encoded to obtain the first area image and the second area image if the terminal information meets a preset condition for representing that the power consumption of the terminal is too high.
Wherein the determining unit includes:
a detection unit configured to perform detection of terminal information of the terminal and a network bandwidth of a network environment in which the terminal is located;
And the dividing unit is configured to divide the image frame to be encoded to obtain the first area image and the second area image if the terminal information meets a preset condition for representing that the power consumption of the terminal is too high and the network bandwidth is larger than the preset bandwidth.
Wherein the detection unit is further configured to perform detecting the terminal information of the terminal again after a preset time;
the dividing unit is further configured to perform, if the terminal information meets a preset condition for representing that power consumption of the terminal is too high, reducing the proportion of the first area image when dividing the image frame to be encoded, and obtaining the first area image and the second area image.
Wherein the transmitting unit includes:
a labeling unit configured to perform a labeling operation on the first encoded data and the data of the second area image, respectively, and the first encoded data and the data of the second area image have the same label;
and a transmission data unit configured to perform transmission of the first encoded data and the data of the second area image to the server.
According to a fourth aspect of embodiments of the present disclosure, there is provided a video processing apparatus including:
The receiving unit is configured to receive first encoded data corresponding to a first area image and data of a second area image in an image frame to be encoded, which are sent by a terminal; the compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image;
a first decoding unit configured to perform a decoding operation on the first encoded data based on a first decoding algorithm to obtain first decoded data;
and a splicing unit configured to perform splicing of the first decoded data and the data of the second area image to obtain the image frame to be encoded.
The splicing unit is specifically configured to perform splicing of the first decoded data and the data of the second area image to obtain the image frame to be encoded, and the data of the second area image is data which is not subjected to image encoding.
Wherein the apparatus further includes, before the splicing unit:
a second decoding unit configured to perform a decoding operation on data of the second region image based on a second decoding algorithm to obtain second decoded data; the compression ratio corresponding to the second coding algorithm is smaller than the compression ratio corresponding to the first coding algorithm;
And the splicing unit is specifically configured to splice the first decoding data and the second decoding data to obtain the image frame to be encoded.
According to a fifth aspect of the embodiments of the present disclosure, there is provided an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video processing method of the first aspect.
According to a sixth aspect of embodiments of the present disclosure, there is provided a storage medium storing instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the video processing method of the first aspect.
According to a seventh aspect of embodiments of the present disclosure, there is provided a server comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video processing method of the second aspect.
According to an eighth aspect of embodiments of the present disclosure, there is provided a storage medium storing instructions that, when executed by a processor of a server, enable the server to perform the video processing method of the second aspect. According to a ninth aspect of embodiments of the present disclosure, there is provided a computer program product for performing the video processing method of the first aspect;
According to a tenth aspect of embodiments of the present disclosure, there is provided a computer program product for performing the video processing method of the second aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the present disclosure provides a video processing method that determines an image frame to be encoded in video data and, instead of performing the same image encoding operation on the entire image frame, divides the image frame to be encoded into a first area image and a second area image.
The image encoding operation is performed on the first region image based on the first encoding algorithm, while the second region image is either not encoded or is encoded using a second encoding algorithm, so that the compression ratio corresponding to the second region image is smaller than the compression ratio corresponding to the first region image.
It will be appreciated that a smaller compression ratio requires less computation and a larger compression ratio requires more, so the computation corresponding to the second area image is smaller than the computation corresponding to the first area image. The computation corresponding to the second region image is reduced relative to encoding the entire image frame, so the computation for the entire image frame is reduced.
This addresses the problem of the terminal's large computation load during video encoding: once the computation for each image frame to be encoded is reduced, the occupation of processor resources decreases, the processor load drops, the heating problem of the terminal is alleviated, and the user experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a flowchart illustrating a first embodiment of a video processing method according to an exemplary embodiment;
FIG. 2a is a block diagram illustrating a preset 2:1 ratio of a first area image to a second area image according to an exemplary embodiment;
FIG. 2b is a block diagram illustrating a preset 1:1 ratio of a first area image to a second area image according to an exemplary embodiment;
FIG. 3 is a flowchart illustrating a second embodiment of a video processing method according to an exemplary embodiment;
FIG. 4 is a flowchart illustrating a third embodiment of a video processing method according to an exemplary embodiment;
FIG. 5 is a flowchart illustrating a fourth embodiment of a video processing method according to an exemplary embodiment;
FIG. 6 is a flowchart illustrating a fifth embodiment of a video processing method according to an exemplary embodiment;
FIG. 7 is a flowchart illustrating a sixth embodiment of a video processing method according to an exemplary embodiment;
FIG. 8 is a block diagram of a video processing device according to an exemplary embodiment;
FIG. 9 is a block diagram of a video processing device according to an exemplary embodiment;
FIG. 10 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The video processing method can be applied to application scenarios such as live video streaming, video conferencing, and video calls. Since the terminal processes each piece of video data in the same way, the video processing method is described in detail below by taking one piece of video data as an example.
Fig. 1 is a flowchart of a first embodiment of a video processing method according to an exemplary embodiment. The video processing method is used in a terminal and includes the following steps.
In step S101, video data acquired by a terminal is acquired, the video data including at least one image frame.
Taking a live streaming scenario as an example, the terminal can shoot video with its camera during the live stream, thereby obtaining the collected video data, which includes a plurality of consecutive image frames.
The terminal determines an image frame in the video data, where each image frame corresponds to one image. For convenience, the image frame being processed is referred to below as the image frame to be encoded.
In step S102, for an image frame to be encoded, a first region image and a second region image in the image frame to be encoded are determined.
The terminal may perform a division operation on the image frame to be encoded, dividing it into two parts: a first region image and a second region image. In this embodiment, the first region image will be image-encoded, while the second region image will not.
Regarding the division operation, the terminal may randomly generate a ratio between the first area image and the second area image and divide the image frame to be encoded according to that ratio. For example, a randomly generated ratio of 1:1 means that the first area image and the second area image each occupy half of the image frame to be encoded.
Because a randomly generated ratio is unpredictable, the ratio of the first area image to the second area image may instead be preset. The terminal divides the image frame to be encoded according to the preset ratio, thereby determining the first area image to be image-encoded and the second area image not to be image-encoded.
If the ratio of the first area image to the second area image is preset to 2:1, the image frame to be encoded is divided at 2:1, yielding a first area image that occupies 2/3 of the image frame to be encoded and a second area image that occupies 1/3.
Referring to fig. 2a, the first area image and the second area image are preset at a ratio of 2:1. Referring to fig. 2b, they are preset at a ratio of 1:1.
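As an illustration only, the following Python sketch shows one way such a division by a preset ratio could be carried out; the frame is modeled as a list of pixel rows, and the function name split_frame, the row-wise split direction, and the data layout are assumptions made for this sketch rather than anything required by the disclosure.

from typing import List, Tuple

def split_frame(frame_rows: List[bytes], ratio: Tuple[int, int]) -> Tuple[List[bytes], List[bytes]]:
    """Divide an image frame (a list of pixel rows) into a first region image
    and a second region image according to a preset ratio such as (2, 1)."""
    first_part, second_part = ratio
    # Number of rows assigned to the first region image.
    boundary = len(frame_rows) * first_part // (first_part + second_part)
    return frame_rows[:boundary], frame_rows[boundary:]

# Example: a frame of 9 rows split 2:1 -> 6 rows for the first region, 3 for the second.
frame = [bytes([i]) * 16 for i in range(9)]
first_region, second_region = split_frame(frame, (2, 1))
assert len(first_region) == 6 and len(second_region) == 3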
In step S103, the first area image is encoded based on a first encoding algorithm to obtain first encoded data of the first area image.
The terminal image-encodes the first region image based on a first encoding algorithm to obtain the first encoded data of the first region image.
The first encoding algorithm may be an encoding technique such as H.264 or H.265; that is, the first region image is encoded normally according to the first encoding algorithm. Encoding the first region image of the image frame to be encoded in the normal way keeps the data amount of the first encoded data small.
In step S104, the first encoded data and the data of the second area image are sent to a server, where the compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image.
In order to reduce the amount of computation in the encoding operation, the image encoding operation is not performed on the second region image in the present embodiment; instead, the un-encoded data of the second region image may be sent directly as the data of the second region image. Because the second region image is not image-encoded, essentially no computation is required for it, which greatly reduces the amount of computation in the encoding operation.
Because the second region image is not image-encoded, the compression ratio corresponding to the second region image is smaller than the compression ratio corresponding to the first region image.
Because the first encoded data and the data of the second area image are two separate pieces of data, in order to associate them as belonging to one image frame to be encoded, a labeling operation may be performed on each: the first encoded data and the data of the second area image are given the same label, and the labels of different image frames to be encoded are unique. The first encoded data and the data of the second area image are then sent to the server.
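A minimal sketch of this labeling step, assuming a per-frame counter as the label and a simple two-message structure; the function name package_frame, the field names, and the JSON packaging are illustrative assumptions, not part of the disclosure.

import itertools
import json

# Hypothetical per-frame label source; the disclosure only requires that the two
# pieces of data belonging to one image frame share a unique label.
_frame_labels = itertools.count()

def package_frame(first_encoded: bytes, second_region_raw: bytes) -> list:
    label = next(_frame_labels)
    return [
        {"label": label, "part": "first", "encoded": True, "payload": first_encoded.hex()},
        {"label": label, "part": "second", "encoded": False, "payload": second_region_raw.hex()},
    ]

messages = package_frame(b"\x00\x01h264-bitstream", b"raw-pixels")
print(json.dumps(messages, indent=2))  # both messages carry the same label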
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the present disclosure provides a video processing method in which an image frame to be encoded in video data is determined; instead of performing the same image encoding operation on the entire image frame, the image frame to be encoded is divided into a first area image and a second area image.
The image encoding operation is performed on the first region image based on the first encoding algorithm, while no encoding operation is performed on the second region image, so the second region image requires essentially no computation. The computation corresponding to the second region image is reduced relative to encoding the entire image frame, so the computation for the entire image frame is reduced.
This addresses the problem of the terminal's large computation load during video encoding: once the computation for each image frame to be encoded is reduced, the occupation of processor resources decreases, the processor load drops, the heating problem of the terminal is alleviated, and the user experience is improved.
On the basis of the first embodiment, the present disclosure provides an execution procedure of the server. Since the processing procedures of different image frames to be encoded are consistent, this embodiment will be described in detail by taking one image frame to be encoded as an example.
Fig. 3 is a flowchart of a second embodiment of a video processing method according to an exemplary embodiment, applied to a server, as shown in fig. 3, including the following steps.
In step S301, first encoded data corresponding to a first area image and data of a second area image in an image frame to be encoded sent by a terminal are received. The compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image.
Because the server may continuously receive encoded data and second-area-image data for a plurality of image frames to be encoded, the server can search the received data for the first encoded data and the data of the second area image that carry the same label: data belonging to the same image frame to be encoded share one label, and the labels of different image frames to be encoded are different.
In the first embodiment, the data of the second area image that has not been video-encoded is used as the data of the second area image, so the server receives the first encoded data together with the un-encoded data of the second area image.
In step S302, a decoding operation is performed on the first encoded data based on a first decoding algorithm to obtain first decoded data.
The first encoded data has undergone the image encoding operation while the data of the second region image has not, so their data formats differ; the server can therefore distinguish, by data format, the first encoded data that was image-encoded from the data of the second region image that was not.
Then, an image decoding operation is performed on the first encoded data corresponding to the first region image to obtain first decoded data. It will be appreciated that the image encoding operation in the first embodiment and the image decoding operation in the second embodiment are matched.
In step S303, the image frame to be encoded is obtained by concatenating the first decoded data and the data of the second area image.
After decoding, the first decoded data and the data of the second area image are in the same format, so the server can splice the first decoded data and the data of the second area image to obtain the image frame to be encoded.
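A possible server-side counterpart, shown only as a sketch: the two messages sharing a label are grouped, the first-region payload is decoded, the second-region payload is used as-is, and the two are spliced. The identity decoder below stands in for a real H.264/H.265 decoder, and the message fields follow the hypothetical structure of the previous sketch.

from typing import Callable, Dict, List

def reassemble_frame(messages: List[dict], decode_first: Callable[[bytes], bytes]) -> bytes:
    """Group the two messages of one frame by part, decode the first-region
    data, keep the second-region data as-is, and splice them row-wise."""
    by_part: Dict[str, dict] = {m["part"]: m for m in messages}
    first_rows = decode_first(bytes.fromhex(by_part["first"]["payload"]))
    second_rows = bytes.fromhex(by_part["second"]["payload"])  # not image-encoded
    return first_rows + second_rows

# Identity "decoder" used purely for illustration.
restored = reassemble_frame(
    [{"label": 0, "part": "first", "payload": b"top-rows".hex()},
     {"label": 0, "part": "second", "payload": b"bottom-rows".hex()}],
    decode_first=lambda data: data,
)
assert restored == b"top-rowsbottom-rows"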
The present disclosure provides this second embodiment of the video processing method, which corresponds to the first embodiment: given the first encoded data and the data of the second area image transmitted in the first embodiment, the server can identify the first encoded data and the data of the second area image belonging to the same image frame to be encoded by means of their label.
Because the data of the second area image in the first embodiment has not undergone a video encoding operation, the image decoding operation is performed only on the first encoded data, and finally the first decoded data and the data of the second area image are spliced to restore the image frame to be encoded.
Fig. 4 is a flowchart illustrating a third embodiment of a video processing method according to an exemplary embodiment, the video processing method being used in a terminal, and including the following steps.
In step S401, video data acquired by a terminal is acquired, the video data including at least one image frame.
In step S402, for an image frame to be encoded, a first region image and a second region image in the image frame to be encoded are determined.
In step S403, the first area image is encoded based on a first encoding algorithm to obtain first encoded data of the first area image.
Steps S401 to S403 may refer to steps S101 to S103 and are not described again here.
In step S404, performing image encoding on the second area image based on a second encoding algorithm to obtain second encoded data of the second area image; the compression ratio corresponding to the second coding algorithm is smaller than the compression ratio corresponding to the first coding algorithm.
In the first embodiment, the un-encoded data of the second area image is used as the data of the second area image; this greatly reduces the amount of computation, but the data of the second area image is larger and therefore occupies more network bandwidth.
This embodiment therefore encodes the second region image based on a second encoding algorithm to obtain second encoded data, where the compression ratio corresponding to the second encoding algorithm is smaller than the compression ratio corresponding to the first encoding algorithm. The second encoded data is used as the data of the second area image in this embodiment.
It will be appreciated that a smaller compression ratio requires less computation and a larger compression ratio requires more, so the computation corresponding to the second area image is smaller than that corresponding to the first area image. The computation corresponding to the second region image is reduced relative to encoding the entire image frame, so the computation for the entire image frame is reduced.
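To illustrate the idea of two encoding algorithms with different compression ratios and different computation costs, the sketch below uses zlib compression levels as stand-ins for the first and second coding algorithms; this is only an analogy for the trade-off, not the video codecs the disclosure refers to, and the function name is an assumption.

import zlib
from typing import Tuple

def encode_frame_two_algorithms(first_region: bytes, second_region: bytes) -> Tuple[bytes, bytes]:
    """Encode the first region with a higher-compression (more expensive)
    setting and the second region with a lower-compression (cheaper) setting."""
    first_encoded = zlib.compress(first_region, 9)   # stand-in for the first algorithm
    second_encoded = zlib.compress(second_region, 1)  # stand-in for the second algorithm
    return first_encoded, second_encoded

first_data, second_data = encode_frame_two_algorithms(b"A" * 4096, b"B" * 2048)
print(len(first_data), len(second_data))  # the lightly compressed region costs less work but more bytes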
In step S405, the first encoded data and the data of the second area image are transmitted to a server.
This step may refer to step S104 and is not described again here.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the present disclosure provides a video processing method that determines an image frame to be encoded in video data and, instead of performing the same image encoding operation on the entire image frame, divides the image frame to be encoded into a first area image and a second area image.
The image encoding operation is performed on the first region image based on the first encoding algorithm, and the second region image is encoded using the second encoding algorithm, so that the compression ratio corresponding to the second region image is smaller than the compression ratio corresponding to the first region image.
It will be appreciated that a smaller compression ratio requires less computation and a larger compression ratio requires more, so the computation corresponding to the second area image is smaller than that corresponding to the first area image. The computation corresponding to the second region image is reduced relative to encoding the entire image frame, so the computation for the entire image frame is reduced.
This addresses the problem of the terminal's large computation load during video encoding: once the computation for each image frame to be encoded is reduced, the occupation of processor resources decreases, the processor load drops, the heating problem of the terminal is alleviated, and the user experience is improved.
On the basis of the third embodiment, the present disclosure provides an execution procedure of the server. Since the processing procedures of different image frames to be encoded are consistent, this embodiment will be described in detail by taking one image frame to be encoded as an example.
Fig. 5 is a flowchart of a fourth embodiment of a video processing method according to an exemplary embodiment, applied to a server, as shown in fig. 5, including the following steps.
In step S501, first encoded data corresponding to a first area image and data of a second area image in an image frame to be encoded sent by a terminal are received. The compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image.
In step S502, a decoding operation is performed on the first encoded data based on a first decoding algorithm to obtain first decoded data.
In step S503, a decoding operation is performed on the data of the second region image based on a second decoding algorithm to obtain second decoded data.
In step S504, the first decoded data and the second decoded data are spliced to obtain the image frame to be encoded.
Step S501, step S502, and step S504 may refer to step S301, step S302, and step S303 in fig. 3, respectively.
The present disclosure provides this fourth embodiment of the video processing method, which corresponds to the third embodiment: given the first encoded data and the data of the second area image transmitted in the third embodiment, the server can identify the first encoded data and the data of the second area image belonging to the same image frame to be encoded by means of their label.
Because the data of the second area image in this embodiment has undergone an image encoding operation using the second encoding algorithm, the image decoding operation is performed on the first encoded data according to the first decoding algorithm and on the data of the second area image according to the second decoding algorithm, and finally the first decoded data and the second decoded data are spliced to restore the image frame to be encoded.
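Mirroring the earlier terminal-side sketch, a server-side sketch under the same zlib stand-in assumption: each region is decoded with the decoder that matches its encoder, and the results are spliced.

import zlib

def reassemble_frame_two_decoders(first_encoded: bytes, second_encoded: bytes) -> bytes:
    """Decode each region with the decoder matching its (stand-in) encoder,
    then splice the decoded regions into the full frame."""
    first_decoded = zlib.decompress(first_encoded)    # matches the first (higher-compression) algorithm
    second_decoded = zlib.decompress(second_encoded)  # matches the second (lower-compression) algorithm
    return first_decoded + second_decoded

frame = reassemble_frame_two_decoders(zlib.compress(b"A" * 4096, 9), zlib.compress(b"B" * 2048, 1))
assert frame == b"A" * 4096 + b"B" * 2048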
Fig. 6 is a flowchart of a fifth embodiment of a video processing method according to an exemplary embodiment, where the video processing method is used in a terminal, and includes the following steps.
In step S601, an image frame to be encoded is determined during the video processing of the terminal, an image encoding operation is performed on the entire image frame to be encoded to obtain its overall encoded data, and the overall encoded data is sent to the server.
When the terminal just starts to execute the video processing operation, the working state of the terminal is good. In order to reduce the occupation of network bandwidth, the terminal performs image encoding on the whole image of each image frame to be encoded and obtains overall encoded data, and then transmits the overall encoded data to the server.
In step S602, terminal information of the terminal is detected during video processing of the terminal, and whether the terminal information meets a preset condition of excessive power consumption of the terminal is determined. If not, the process proceeds to step S601, and if yes, the process proceeds to step S603. Step S601 and step S602 are two processes executed in parallel.
During video processing, the terminal can continuously monitor whether the terminal information meets the preset condition, which is a condition set in advance to indicate that the power consumption of the terminal is too high.
The terminal information may include a processor load or a body temperature of the terminal. Wherein the preset conditions may include: the processor load of the terminal is greater than a preset load, or the body temperature of the terminal is greater than a preset temperature.
If the terminal does not meet the preset condition, that is, the processor load of the terminal is not greater than the preset load, or the body temperature of the terminal is not greater than the preset temperature, the power consumption of the terminal is low, the working state is good, and the step S601 can be entered to continue to execute the image encoding operation on the whole image.
If the terminal meets the preset condition, that is, the processor load of the terminal is greater than the preset load, or the body temperature of the terminal is greater than the preset temperature, the power consumption of the terminal is too high and the working state is not good, and step S603 can be performed to adjust the working state of the terminal.
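The preset condition can be read as a simple predicate over the detected terminal information, as in the sketch below; the threshold values PRESET_LOAD and PRESET_TEMPERATURE are invented for illustration, since the disclosure only speaks of a preset load and a preset temperature.

from dataclasses import dataclass

@dataclass
class TerminalInfo:
    processor_load: float     # e.g. 0.0 .. 1.0
    body_temperature: float   # degrees Celsius

# Illustrative thresholds only.
PRESET_LOAD = 0.85
PRESET_TEMPERATURE = 42.0

def power_consumption_too_high(info: TerminalInfo) -> bool:
    """Preset condition: processor load exceeds the preset load OR the body
    temperature exceeds the preset temperature."""
    return info.processor_load > PRESET_LOAD or info.body_temperature > PRESET_TEMPERATURE

assert power_consumption_too_high(TerminalInfo(0.90, 38.0))
assert not power_consumption_too_high(TerminalInfo(0.40, 38.0))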
In step S603, the image encoding operation is repeatedly performed for a preset time.
The image encoding operation may refer to steps S101 to S104 of the first embodiment, or steps S401 to S405 of the third embodiment, and is not described again here.
The image encoding operation is repeatedly executed within the preset time, and the amount of computation for each image frame to be encoded is reduced, so the terminal's total computation within the preset time drops considerably, the processor load is lowered, and the terminal's resource occupation is reduced.
During the preset time, the terminal's heat dissipation device keeps dissipating heat; because the amount of computation has been greatly reduced, the terminal generates less heat, and under the action of the heat dissipation device the terminal gradually cools down.
Step S604: and the server repeatedly executes the image decoding operation within a preset time.
The image decoding operation may refer to steps S301 to S303 of the second embodiment, or steps S501 to S504 of the fourth embodiment, and is not described again here. Note that the image encoding operation in step S603 corresponds to the image decoding operation in step S604.
Fig. 7 is a flowchart of a sixth embodiment of a video processing method according to an exemplary embodiment, and as shown in fig. 7, the video processing method is used in a terminal, and includes the following steps.
In this embodiment, the terminal stores a plurality of preset ratios of the first area image to the second area image, and the proportion of the first area image decreases across these preset ratios. For example, the preset ratios stored by the terminal may include 2:1, 1:1, and 1:2. The purpose of the decreasing proportion of the first region image is to gradually reduce the computation of the terminal's video encoding.
In step S701, an initial preset ratio of the first area image to the second area image is determined.
By default, the first of the plurality of preset ratios of the first area image to the second area image is taken as the initial ratio.
In step S702, an image frame to be encoded is determined during the video processing of the terminal, an image encoding operation is performed on the entire image frame to be encoded to obtain its overall encoded data, and the overall encoded data is sent to the server.
In step S703, terminal information of the terminal is detected during video processing of the terminal, and it is determined whether the terminal information satisfies the preset condition of excessive terminal power consumption. If not, the process proceeds to step S702; if yes, the process proceeds to step S704. Step S702 and step S703 are two processes executed in parallel.
See step S602, which is not described in detail herein.
In step S704, the image encoding operation is repeatedly performed for a preset time.
See step S603, which is not described in detail herein.
Step S705: and the server repeatedly executes the image decoding operation within a preset time.
See step S604, which is not described in detail herein.
Step S706: judging whether the terminal information meets preset conditions. If yes, the process proceeds to step S707, and if not, the process proceeds to step S702.
After the preset time, the terminal acquires the terminal information again and judges whether it meets the preset condition, in order to verify whether the terminal has returned to a normal state after the preset time.
If the terminal information does not meet the preset condition, the terminal has returned to a normal state after the preset time. To reduce the occupation of network bandwidth, the process may return to step S702 to perform the image encoding operation on the entire image.
In step S707, if the terminal information still satisfies the preset condition, the next preset ratio of the first area image to the second area image is selected from the plurality of preset ratios, and the process proceeds to step S704.
If the terminal information still meets the preset condition indicating excessive power consumption, that is, the processor load of the terminal is greater than the preset load or the body temperature of the terminal is greater than the preset temperature, the encoding computation of the processor is still too large and the terminal is still in a poor state.
For this reason, the next preset ratio of the first area image to the second area image is selected; because the proportion of the first area image decreases across the preset ratios, the first area image corresponding to the next ratio is smaller.
Since the computation for the first area image is larger than that for the second area image, reducing the first area image reduces the computation for the image frame to be encoded further, and the encoding computation of the processor is therefore reduced further as well.
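The decreasing ratio schedule of this embodiment can be sketched as stepping through a preset list. Staying at the last ratio once it is reached is an assumption of this sketch, since the disclosure does not state what happens after the final preset ratio.

PRESET_RATIOS = [(2, 1), (1, 1), (1, 2)]  # proportion of the first region decreases

def next_ratio(current_index: int) -> tuple:
    """After the preset time, if power consumption is still too high, move to
    the next preset ratio so the first (fully encoded) region shrinks and the
    encoding workload drops further."""
    return PRESET_RATIOS[min(current_index + 1, len(PRESET_RATIOS) - 1)]

assert next_ratio(0) == (1, 1)
assert next_ratio(2) == (1, 2)  # stays at the last preset ratio (assumed behavior)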
According to another embodiment of the present application, the technical core of the present disclosure is to image-encode only part of each image frame, while the remaining part is either not encoded or is encoded with a smaller compression ratio. Because the part that is not encoded or only lightly compressed occupies more network bandwidth, the present disclosure is better suited to a terminal whose network environment offers ample bandwidth. To this end, the terminal may perform the following steps:
S21: monitoring the network bandwidth of the network environment where the terminal is located;
S22: if the network bandwidth is greater than the preset bandwidth, the terminal executes the terminal-side steps of the first embodiment or the third embodiment.
That is, when the network bandwidth is large, the scheme of encoding only part of the image may be executed, thereby relieving the processor load, reducing the occupation of processor resources, and improving the user experience.
S23: if the network bandwidth is not greater than the preset bandwidth, the image frame to be encoded is obtained during the terminal's video processing, and image encoding is performed on the entire image frame to be encoded to obtain its overall encoded data.
If the network bandwidth is poor, the scheme of encoding only part of the image is not executed; the scheme of encoding the entire image is still executed, so that the overall encoded data can be transmitted normally as the first priority.
By monitoring the network bandwidth of the terminal's network environment, the terminals suitable for this scheme can be determined more conveniently: normal transmission of the terminal's encoded video data is guaranteed first, and on that basis the goals of lowering the processor load, reducing the computation of image encoding, lowering the body temperature of the terminal, and improving the user experience can be achieved.
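Steps S21 to S23 amount to a bandwidth-gated choice between the two schemes, sketched below; the 10 Mbps preset bandwidth and the mode names are illustrative assumptions, not values from the disclosure.

def choose_encoding_mode(network_bandwidth_mbps: float,
                         preset_bandwidth_mbps: float = 10.0) -> str:
    """Only switch to partial-image encoding when the measured bandwidth exceeds
    the preset bandwidth, since the un-encoded (or lightly compressed) second
    region costs extra bandwidth."""
    if network_bandwidth_mbps > preset_bandwidth_mbps:
        return "partial-image encoding"   # first or third embodiment
    return "whole-image encoding"         # normal full-frame encoding

assert choose_encoding_mode(25.0) == "partial-image encoding"
assert choose_encoding_mode(5.0) == "whole-image encoding"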
Fig. 8 is a block diagram illustrating a video processing apparatus 800 according to an exemplary embodiment. Referring to fig. 8, the apparatus includes an acquisition unit 81, a determination unit 82, a first encoding unit 83, and a transmission unit 84.
An acquisition unit 81 configured to perform acquisition of video data acquired by a terminal, the video data including at least one image frame;
a determining unit 82 configured to perform determination of a first area image and a second area image in an image frame to be encoded for the image frame to be encoded;
a first encoding unit 83 configured to perform encoding of the first region image based on a first encoding algorithm to obtain first encoded data of the first region image,
a transmitting unit 84 configured to perform transmission of the first encoded data and the data of the second area image to a server; the compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image.
The transmitting unit 84 is specifically configured to transmit the first encoded data and the data of the second area image to a server, using the data of the second area image, which has not been image-encoded, as the data of the second area image.
Wherein the apparatus further comprises:
a second encoding unit 85 configured to perform encoding of the second region image based on a second encoding algorithm to obtain second encoded data of the second region image, the second encoded data being used as data of the second region image; the compression ratio corresponding to the second coding algorithm is smaller than the compression ratio corresponding to the first coding algorithm.
The determining unit 82 includes:
a detection unit 821 configured to perform detection of terminal information of the terminal;
and a dividing unit 822 configured to divide the image frame to be encoded to obtain the first area image and the second area image if the terminal information satisfies a preset condition for indicating that the terminal power consumption is too high.
Wherein the determining unit 82 includes:
a detection unit 821 configured to perform detection of terminal information of the terminal and a network bandwidth of a network environment in which the terminal is located;
and a dividing unit 822 configured to divide the image frame to be encoded to obtain the first area image and the second area image if the terminal information satisfies a preset condition for indicating that the power consumption of the terminal is too high and the network bandwidth is greater than a preset bandwidth.
The detecting unit 821 is further configured to perform detecting of terminal information of the terminal again after a preset time;
the dividing unit 822 is further configured to perform, if the terminal information meets the preset condition for indicating that the power consumption of the terminal is too high, reducing the proportion of the first area image when dividing the image frame to be encoded, and obtaining the first area image and the second area image.
The transmitting unit 84 includes:
a labeling unit 841 configured to perform a labeling operation on the first encoded data and the data of the second area image, respectively, and the first encoded data and the data of the second area image have the same label;
a transmission data unit 842 configured to perform transmission of the first encoded data and the data of the second area image to the server.
Fig. 9 is a block diagram illustrating a video processing apparatus 900 according to an exemplary embodiment. Referring to fig. 9, the apparatus includes a receiving unit 91, a first decoding unit 92, and a splicing unit 93.
A receiving unit 91 configured to receive the first encoded data corresponding to the first area image and the data of the second area image in the image frame to be encoded, which are sent by the terminal; the compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image;
A first decoding unit 92 configured to perform a decoding operation on the first encoded data based on a first decoding algorithm to obtain first decoded data;
and a splicing unit 93 configured to splice the first decoded data and the data of the second area image to obtain the image frame to be encoded.
The splicing unit 93 is specifically configured to splice the first decoded data and the data of the second area image to obtain the image frame to be encoded, where the data of the second area image is data that has not been image-encoded.
The apparatus further includes, before the splicing unit 93:
a second decoding unit 94 configured to perform a decoding operation on the data of the second area image based on a second decoding algorithm to obtain second decoded data; the compression ratio corresponding to the second coding algorithm is smaller than the compression ratio corresponding to the first coding algorithm;
and the splicing unit 93 is specifically configured to splice the first decoded data and the second decoded data to obtain the image frame to be encoded.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
In an exemplary embodiment, a storage medium including instructions, such as a memory, is also provided; the instructions are executable by a processor of the apparatus 800 to perform the video processing method performed by the terminal. Optionally, the storage medium may be a non-transitory computer-readable storage medium, for example a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a storage medium including instructions, such as a memory, is also provided; the instructions are executable by a processor of the apparatus 900 to perform the video processing method performed by the server. Optionally, the storage medium may be a non-transitory computer-readable storage medium, for example a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Fig. 10 is a block diagram of an electronic device 1000 according to an exemplary embodiment. For example, the electronic device 1000 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 10, an electronic device 1000 may include one or more of the following components: a processing component 1002, a memory 1004, a power component 1006, a multimedia component 1008, an audio component 1010, an input/output (I/O) interface 1012, a sensor component 1014, and a communication component 1016.
The processing component 1002 generally controls overall operation of the electronic device 1000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1002 can include one or more processors 1020 to execute instructions to perform all or part of the steps of the methods described above.
Further, the processing component 1002 can include one or more modules that facilitate interaction between the processing component 1002 and other components. For example, the processing component 1002 can include a multimedia module to facilitate interaction between the multimedia component 1008 and the processing component 1002.
The memory 1004 is configured to store various types of data to support operations at the electronic device 1000. Examples of such data include instructions for any application or method operating on the electronic device 1000, contact data, phonebook data, messages, pictures, videos, and so forth.
The memory 1004 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 1006 provides power to the various components of the electronic device 1000. The power components 1006 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 1000.
The multimedia component 1008 includes a screen between the electronic device 1000 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP).
If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation.
In some embodiments, the multimedia assembly 1008 includes a front-facing camera and/or a rear-facing camera. When the electronic device 1000 is in an operational mode, such as a shooting mode or a video mode, the front-facing camera and/or the rear-facing camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 1010 is configured to output and/or input audio signals. For example, the audio component 1010 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 1000 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in memory 1004 or transmitted via communication component 1016. In some embodiments, the audio component 1010 further comprises a speaker for outputting audio signals.
The I/O interface 1012 provides an interface between the processing component 1002 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1014 includes one or more sensors for providing status assessments of various aspects of the electronic device 1000. For example, the sensor assembly 1014 may detect an on/off state of the electronic device 1000 and the relative positioning of components, such as the display and keypad of the electronic device 1000. The sensor assembly 1014 may also detect a change in position of the electronic device 1000 or of a component of the electronic device 1000, the presence or absence of user contact with the electronic device 1000, the orientation or acceleration/deceleration of the electronic device 1000, and a change in temperature of the electronic device 1000.
The sensor assembly 1014 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 1014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1014 can also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1016 is configured to facilitate wired or wireless communication between the electronic device 1000 and other devices. The electronic device 1000 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof.
In one exemplary embodiment, the communication component 1016 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1016 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1000 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
The application also provides a server, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement a video processing method performed by the server.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (22)

1. A video processing method, comprising:
acquiring video data acquired by a terminal, wherein the video data comprises at least one image frame;
for an image frame to be encoded, determining a first area image and a second area image in the image frame to be encoded;
encoding the first region image based on a first encoding algorithm to obtain first encoded data of the first region image;
transmitting the first encoded data and the data of the second area image to a server; the compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image;
wherein the determining, for an image frame to be encoded, a first area image and a second area image in the image frame to be encoded includes:
detecting terminal information of the terminal;
and if the terminal information meets a preset condition indicating that the power consumption of the terminal is too high, dividing the image frame to be encoded to obtain the first area image and the second area image.
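By way of a non-authoritative illustration of claim 1, the terminal-side flow might look like the short Python sketch below. Every name in it (power_consumption_too_high, split_frame, encode_for_upload), the thresholds, and the top/bottom division strategy are assumptions, and zlib at a high compression level merely stands in for the first encoding algorithm; the claim does not tie the method to any particular codec or division strategy.

    import zlib
    import numpy as np

    def power_consumption_too_high(temperature_c, cpu_load):
        # Hypothetical stand-in for the preset condition indicating that the
        # power consumption of the terminal is too high.
        return temperature_c > 42.0 or cpu_load > 0.85

    def split_frame(frame, first_area_ratio=0.5):
        # Divide the image frame to be encoded into a first area image (top
        # rows) and a second area image (remaining rows).
        rows = int(frame.shape[0] * first_area_ratio)
        return frame[:rows], frame[rows:]

    def encode_for_upload(frame, temperature_c, cpu_load):
        if not power_consumption_too_high(temperature_c, cpu_load):
            # Power consumption is normal: encode the whole frame as usual.
            return zlib.compress(frame.tobytes(), level=9), b""
        first_area, second_area = split_frame(frame)
        # First encoding algorithm: higher compression ratio, more computation.
        first_encoded = zlib.compress(first_area.tobytes(), level=9)
        # Here the second area image is left without image encoding, so the
        # computation that would have been spent on it is saved.
        return first_encoded, second_area.tobytes()

    # One 8-bit luma plane standing in for an image frame of the video data.
    frame = np.zeros((720, 1280), dtype=np.uint8)
    first_encoded, second_data = encode_for_upload(frame, temperature_c=45.0, cpu_load=0.9)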
2. The video processing method according to claim 1, wherein the transmitting the first encoded data and the data of the second area image to the server includes:
taking the second area image that has not been subjected to image encoding as the data of the second area image, and sending the first encoded data and the data of the second area image to the server.
3. The video processing method according to claim 1, further comprising, before transmitting the first encoded data and the data of the second area image to the server, the steps of:
performing image encoding on the second area image based on a second encoding algorithm to obtain second encoded data of the second area image, wherein the compression ratio corresponding to the second encoding algorithm is smaller than the compression ratio corresponding to the first encoding algorithm;
and taking the second encoded data as the data of the second area image.
4. The video processing method according to claim 1, wherein the determining, for an image frame to be encoded, a first area image and a second area image in the image frame to be encoded includes:
detecting terminal information of the terminal and the network bandwidth of the network environment in which the terminal is located;
and if the terminal information meets a preset condition indicating that the power consumption of the terminal is too high and the network bandwidth is greater than a preset bandwidth, dividing the image frame to be encoded to obtain the first area image and the second area image.
5. The video processing method according to claim 1 or 4, characterized by further comprising:
detecting the terminal information of the terminal again after a preset time;
and if the terminal information meets the preset condition indicating that the power consumption of the terminal is too high, increasing the proportion of the first area image when dividing the image frame to be encoded, to obtain the first area image and the second area image.
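Claims 4 and 5 can be read together as a small piece of control logic: divide only when the power consumption is too high and the bandwidth allows it, and grow the share of the first area image if a later check still finds the power consumption too high. The sketch below is only one possible reading; the function name, the step size, the ceiling, the preset bandwidth value, and the fall-back of not dividing the frame at all are assumptions, not limitations recited in the claims.

    def update_first_area_ratio(current_ratio, power_too_high, bandwidth_bps,
                                preset_bandwidth_bps=2_000_000,
                                step=0.1, max_ratio=0.9):
        # Divide the frame only when the power consumption of the terminal is
        # too high AND the network bandwidth is greater than the preset
        # bandwidth (the condition of claim 4).
        if not power_too_high or bandwidth_bps <= preset_bandwidth_bps:
            return 0.0  # assumed fall-back: encode the whole frame as usual
        # If the re-detection after the preset time still finds the power
        # consumption too high, increase the proportion of the first area
        # image used when dividing the frame (claim 5).
        return min(current_ratio + step, max_ratio)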
6. The video processing method according to claim 1, wherein the transmitting the first encoded data and the data of the second area image to the server includes:
respectively performing a labeling operation on the first encoded data and the data of the second area image, wherein the first encoded data and the data of the second area image have the same label;
and sending the first encoded data and the data of the second area image to the server.
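A minimal sketch of the labeling operation of claim 6 is given below; the label format, the JSON packaging, and the hex encoding of the payloads are assumptions chosen only to keep the example self-contained, not anything prescribed by the claim.

    import json
    import uuid

    def label_parts(first_encoded, second_data, frame_index):
        # Both pieces of data carry the same label so that the server can
        # associate them with the same image frame before splicing.
        label = "%s-%d" % (uuid.uuid4().hex, frame_index)  # hypothetical label format
        first_part = {"label": label, "part": "first_area",
                      "payload": first_encoded.hex()}
        second_part = {"label": label, "part": "second_area",
                       "payload": second_data.hex()}
        return json.dumps(first_part), json.dumps(second_part)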
7. A video processing method, comprising:
receiving first encoded data corresponding to a first area image and data of a second area image in an image frame to be encoded, which are sent by a terminal, wherein the compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image, and the first area image and the second area image are obtained by dividing the image frame to be encoded when terminal information detected by the terminal meets a preset condition indicating that the power consumption of the terminal is too high;
performing a decoding operation on the first encoded data based on a first decoding algorithm to obtain first decoded data;
and splicing the first decoded data and the data of the second area image to obtain the image frame to be encoded.
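On the server side, the decoding and splicing of claim 7 could be sketched as follows. As before, zlib stands in for the first decoding algorithm, the frame dimensions and the row split are assumed, and the sketch only covers the case in which the data of the second area image was sent without image encoding.

    import zlib
    import numpy as np

    def rebuild_frame(first_encoded, second_data,
                      frame_shape=(720, 1280), first_area_rows=360):
        # First decoding algorithm: the inverse of the first encoding
        # algorithm (zlib is only a stand-in here).
        first_area = np.frombuffer(zlib.decompress(first_encoded),
                                   dtype=np.uint8).reshape(first_area_rows,
                                                           frame_shape[1])
        # The data of the second area image was not image-encoded, so it can
        # be used directly.
        second_area = np.frombuffer(second_data, dtype=np.uint8).reshape(
            frame_shape[0] - first_area_rows, frame_shape[1])
        # Splice the two areas back together to obtain the image frame.
        return np.vstack([first_area, second_area])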
8. The video processing method according to claim 7, wherein the splicing the first decoded data and the data of the second area image to obtain the image frame to be encoded comprises:
splicing the first decoded data and the data of the second area image to obtain the image frame to be encoded, wherein the data of the second area image is data that has not been subjected to image encoding.
9. The video processing method according to claim 7, further comprising, before splicing the first decoded data and the data of the second area image to obtain the image frame to be encoded:
performing a decoding operation on the data of the second area image based on a second decoding algorithm to obtain second decoded data, wherein the first encoded data is obtained by encoding the first area image with a first encoding algorithm, the data of the second area image is obtained by encoding with a second encoding algorithm, and the compression ratio corresponding to the second encoding algorithm is smaller than the compression ratio corresponding to the first encoding algorithm;
and splicing the first decoded data and the second decoded data to obtain the image frame to be encoded.
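For the variant of claim 9, the data of the second area image would itself be passed through a second decoding algorithm before splicing; a one-function sketch, again with zlib merely standing in for that algorithm, is shown below.

    import zlib

    def decode_second_area(second_encoded):
        # Second decoding algorithm: inverse of the second encoding algorithm,
        # whose compression ratio is smaller than that of the first one.
        return zlib.decompress(second_encoded)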
10. A video processing apparatus, comprising:
an acquisition unit configured to perform acquisition of video data acquired by a terminal, the video data including at least one image frame;
a determining unit configured to perform, for an image frame to be encoded, determining a first region image and a second region image in the image frame to be encoded;
a first encoding unit configured to perform encoding of the first region image based on a first encoding algorithm to obtain first encoded data of the first region image;
a transmitting unit configured to perform transmission of the first encoded data and the data of the second area image to a server; the compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image;
wherein the determining unit includes:
a detection unit configured to perform detection of terminal information of the terminal;
and a dividing unit configured to divide the image frame to be encoded to obtain the first area image and the second area image if the terminal information meets a preset condition indicating that the power consumption of the terminal is too high.
11. The video processing apparatus according to claim 10, wherein the transmitting unit is specifically configured to take the second area image that has not been image-encoded as the data of the second area image, and transmit the first encoded data and the data of the second area image to the server.
12. The video processing apparatus of claim 10, wherein the apparatus further comprises:
a second encoding unit configured to perform encoding of the second region image based on a second encoding algorithm to obtain second encoded data of the second region image, the second encoded data being used as data of the second region image;
the compression ratio corresponding to the second coding algorithm is smaller than the compression ratio corresponding to the first coding algorithm.
13. The video processing apparatus according to claim 10, wherein the determining unit includes:
a detection unit configured to perform detection of terminal information of the terminal and a network bandwidth of a network environment in which the terminal is located;
and a dividing unit configured to divide the image frame to be encoded to obtain the first area image and the second area image if the terminal information meets a preset condition indicating that the power consumption of the terminal is too high and the network bandwidth is greater than a preset bandwidth.
14. The video processing apparatus according to claim 10 or 13, wherein,
the detection unit is further configured to detect the terminal information of the terminal again after a preset time;
and the dividing unit is further configured to, if the terminal information meets the preset condition indicating that the power consumption of the terminal is too high, increase the proportion of the first area image when dividing the image frame to be encoded, to obtain the first area image and the second area image.
15. The video processing apparatus according to claim 11, wherein the transmitting unit includes:
a labeling unit configured to perform a labeling operation on the first encoded data and the data of the second area image, respectively, wherein the first encoded data and the data of the second area image have the same label;
and a data transmitting unit configured to transmit the first encoded data and the data of the second area image to the server.
16. A video processing apparatus, comprising:
a receiving unit configured to receive first encoded data corresponding to a first area image and data of a second area image in an image frame to be encoded, which are sent by a terminal, wherein the compression ratio corresponding to the second area image is smaller than the compression ratio corresponding to the first area image, and the first area image and the second area image are obtained by dividing the image frame to be encoded when terminal information detected by the terminal meets a preset condition indicating that the power consumption of the terminal is too high;
a first decoding unit configured to perform a decoding operation on the first encoded data based on a first decoding algorithm to obtain first decoded data;
and a splicing unit configured to perform splicing of the first decoded data and the data of the second area image to obtain the image frame to be encoded.
17. The video processing apparatus according to claim 16, wherein the splicing unit is specifically configured to splice the first decoded data and the data of the second area image to obtain the image frame to be encoded, wherein the data of the second area image is data that has not been subjected to image encoding.
18. The video processing apparatus of claim 16, further comprising:
a second decoding unit configured to perform, before the splicing unit performs splicing, a decoding operation on the data of the second area image based on a second decoding algorithm to obtain second decoded data, wherein the first encoded data is obtained by encoding the first area image with a first encoding algorithm, the data of the second area image is obtained by encoding with a second encoding algorithm, and the compression ratio corresponding to the second encoding algorithm is smaller than the compression ratio corresponding to the first encoding algorithm;
and the splicing unit is specifically configured to splice the first decoded data and the second decoded data to obtain the image frame to be encoded.
19. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video processing method of any one of claims 1 to 6.
20. A storage medium having stored thereon instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the video processing method of any one of claims 1 to 6.
21. A server, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video processing method of any of claims 7 to 9.
22. A storage medium having stored thereon instructions which, when executed by a processor of a server, enable the server to perform the video processing method of any one of claims 7 to 9.
CN202010848606.6A 2020-08-21 2020-08-21 Video processing method and device Active CN111953980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010848606.6A CN111953980B (en) 2020-08-21 2020-08-21 Video processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010848606.6A CN111953980B (en) 2020-08-21 2020-08-21 Video processing method and device

Publications (2)

Publication Number Publication Date
CN111953980A CN111953980A (en) 2020-11-17
CN111953980B true CN111953980B (en) 2023-11-21

Family

ID=73358933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010848606.6A Active CN111953980B (en) 2020-08-21 2020-08-21 Video processing method and device

Country Status (1)

Country Link
CN (1) CN111953980B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596450B (en) * 2021-06-28 2022-11-11 展讯通信(上海)有限公司 Video image compression method, decompression method, processing method, device and equipment
CN115543083A (en) * 2022-09-29 2022-12-30 歌尔科技有限公司 Image display method and device and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106060544A (en) * 2016-06-29 2016-10-26 华为技术有限公司 Image encoding method and relevant equipment and system
CN110784745A (en) * 2019-11-26 2020-02-11 科大讯飞股份有限公司 Video transmission method, device, system, equipment and storage medium
CN111010582A (en) * 2019-12-18 2020-04-14 深信服科技股份有限公司 Cloud desktop image processing method, device and equipment and readable storage medium
CN111464812A (en) * 2020-04-17 2020-07-28 西安万像电子科技有限公司 Method, system, device, storage medium and processor for encoding and decoding

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3468187A1 (en) * 2017-10-03 2019-04-10 Axis AB Method and system for encoding video streams

Also Published As

Publication number Publication date
CN111953980A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
US11057853B2 (en) Methods and apparatus for indicating and determining synchronization block, and base station and user equipment
CN109076558B (en) Method and device for identifying downlink transmission
CN108702281B (en) Method and device for determining size of downlink control information format
CN114554571B (en) Network slice using method and device
CN109714415B (en) Data processing method and device
CN112114765A (en) Screen projection method and device and storage medium
CN110536168B (en) Video uploading method and device, electronic equipment and storage medium
EP1887816A1 (en) Method for performing communication after SIM card withdrawal
CN111953980B (en) Video processing method and device
EP2986020A1 (en) Method and apparatus for adjusting video quality based on network environment
CN111259289B (en) Picture loading method and device, electronic equipment and storage medium
CN111294850A (en) Measurement reporting method and device, and terminal equipment information acquisition method and device
CN110337825B (en) Service switching method and device
CN111654354A (en) Detection method, device and storage medium of Maximum Transmission Unit (MTU)
CN107733556B (en) Message checking method and device
CN109120929B (en) Video encoding method, video decoding method, video encoding device, video decoding device, electronic equipment and video encoding system
CN111726516A (en) Image processing method and device
US10085050B2 (en) Method and apparatus for adjusting video quality based on network environment
CN106709027B (en) Picture recommendation method and device
CN110784721A (en) Picture data compression method and device, electronic equipment and storage medium
CN111724398A (en) Image display method and device
CN113424471B (en) Method, device and storage medium for determining resources
CN113544995B (en) Method, device, equipment and storage medium for sending and receiving downlink transmission
CN114500819B (en) Shooting method, shooting device and computer readable storage medium
CN112544043B (en) Receiving state feedback method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant