CN113014958A - Video transmission processing method and device, computer equipment and storage medium

Video transmission processing method and device, computer equipment and storage medium

Info

Publication number
CN113014958A
Authority
CN
China
Prior art keywords
image
current
video
video image
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110261843.7A
Other languages
Chinese (zh)
Inventor
王健宗
李佳琳
瞿晓阳
郭俊雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202110261843.7A
Publication of CN113014958A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194 Transmission of image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a video transmission processing method and device, computer equipment and a storage medium. The method comprises the following steps: acquiring a current video image and extracting a current RGB value; determining, based on the current video image, whether a previous video image belonging to the same video scene as the current video image exists; if the previous video image exists, performing vector conversion on the current RGB value to obtain a current feature vector, obtaining an incremental feature vector based on the current feature vector and the previous feature vector corresponding to the previous video image, and determining the incremental feature vector as image characteristic information; if no previous video image exists, determining the current RGB value as the image characteristic information; and sending the image characteristic information to a video receiving end, so that the video receiving end performs image synthesis based on the image characteristic information to obtain a target synthesized image. The method occupies less network transmission bandwidth and fewer resources during video transmission while keeping the restored video complete.

Description

Video transmission processing method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of video transmission technologies, and in particular, to a video transmission processing method and apparatus, a computer device, and a storage medium.
Background
In the video transmission process, in order to ensure the integrity of the video signal, the video signal is generally transmitted directly. However, directly transmitting the video signal occupies a large amount of network transmission bandwidth and resources and yields low transmission efficiency. To reduce the network transmission bandwidth and resources occupied in the video transmission process, the prior art locates important regions containing important information in the video signal, extracts feature vectors of these important regions, and achieves video transmission by transmitting only those feature vectors. Because the information outside the important regions is discarded, however, the video receiving end cannot completely restore the original video. Therefore, how to ensure that less network transmission bandwidth and fewer resources are occupied during video transmission while the video receiving end can still effectively and completely restore the video has become a technical problem to be urgently solved in the field of video transmission.
Disclosure of Invention
Embodiments of the present invention provide a video transmission processing method and apparatus, a computer device, and a storage medium, so as to solve the problem that, in the video transmission process, low occupation of network transmission bandwidth and resources and good integrity of the restored video cannot be achieved at the same time.
The invention provides a video transmission processing method, which comprises the following steps executed by a video sending end:
acquiring a current video image, and extracting a current RGB value corresponding to the current video image;
determining whether there is a previous video image belonging to the same video scene as the current video image based on the current video image;
if the previous video image exists, performing vector conversion on a current RGB value corresponding to the current video image to obtain a current feature vector, acquiring an incremental feature vector corresponding to the current video image based on the current feature vector and the previous feature vector corresponding to the previous video image, and determining the incremental feature vector as image feature information corresponding to the current video image;
if the prior video image does not exist, determining the current RGB value corresponding to the current video image as the image characteristic information corresponding to the current video image;
and sending the image characteristic information corresponding to the current video image to a video receiving end so that the video receiving end carries out image synthesis based on the image characteristic information to obtain a target synthesized image.
The invention provides a video transmission processing device, comprising:
the current image extraction module is used for acquiring a current video image and extracting a current RGB value corresponding to the current video image;
a previous image detection module for determining whether a previous video image belonging to the same video scene as the current video image exists based on the current video image;
a first feature information obtaining module, configured to perform vector conversion on a current RGB value corresponding to the current video image to obtain a current feature vector if the previous video image exists, obtain an incremental feature vector corresponding to the current video image based on the current feature vector and a previous feature vector corresponding to the previous video image, and determine the incremental feature vector as image feature information corresponding to the current video image;
a second characteristic information obtaining module, configured to determine, if the previous video image does not exist, a current RGB value corresponding to the current video image as image characteristic information corresponding to the current video image;
and the image characteristic information sending module is used for sending the image characteristic information corresponding to the current video image to a video receiving end so that the video receiving end carries out image synthesis based on the image characteristic information to obtain a target synthesized image.
The invention provides a video transmission processing method, which comprises the following steps executed by a video receiving end:
receiving image characteristic information of a current video image sent by a video sending end;
analyzing the image characteristic information to obtain a current RGB value or an incremental characteristic vector;
if the image characteristic information contains the current RGB value, image synthesis is carried out based on the current RGB value, and a target synthetic image is obtained;
and if the image feature information contains the incremental feature vector, acquiring a prior synthesized image, and performing image synthesis based on the prior synthesized image and the incremental feature vector to acquire a target synthesized image.
The invention provides a video transmission processing device, comprising:
the image characteristic information receiving module is used for receiving the image characteristic information of the current video image sent by the video sending end;
the image characteristic information analysis module is used for analyzing the image characteristic information to obtain a current RGB value or an incremental characteristic vector;
the first image synthesis processing module is used for carrying out image synthesis based on the current RGB value if the image characteristic information contains the current RGB value so as to obtain a target synthesis image;
and the second image synthesis processing module is used for acquiring a prior synthesized image if the image characteristic information contains the incremental characteristic vector, and performing image synthesis based on the prior synthesized image and the incremental characteristic vector to acquire a target synthesized image.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the above video transmission processing method when executing the computer program.
A computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described video transmission processing method.
According to the video transmission processing method, the video transmission processing device, the computer equipment and the storage medium, when the current video image has the prior video image belonging to the same video scene, the incremental feature vector corresponding to the current video image is determined as the image feature information, so that the network transmission bandwidth and the resources occupied in the transmission process are reduced, and the transmission efficiency is improved; when the current video image does not have a previous video image belonging to the same video scene, determining the current RGB value of the current video image as image characteristic information, so that the image characteristic information contains all information of the current video image, and the integrity of the restoration of a subsequent video image is favorably ensured.
According to the video transmission processing method and apparatus, the computer device and the storage medium, the image characteristic information of the current video image received by the video receiving end may be either the current RGB value or the incremental feature vector, which ensures that little network transmission bandwidth and few resources are occupied during transmission; when the image characteristic information is the current RGB value, image synthesis is performed directly based on the current RGB value, so that the obtained target synthesized image contains complete image information and its image effect is guaranteed; when the image characteristic information is the incremental feature vector, image synthesis can be performed based on the incremental feature vector and the previously synthesized image, which guarantees the information integrity of all target synthesized images.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a schematic diagram of an application environment of a video transmission processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a video transmission processing method according to an embodiment of the present invention;
FIG. 3 is another flow chart of a video transmission processing method according to an embodiment of the invention;
FIG. 4 is another flow chart of a video transmission processing method according to an embodiment of the invention;
FIG. 5 is another flow chart of a video transmission processing method according to an embodiment of the invention;
FIG. 6 is another flow chart of a video transmission processing method according to an embodiment of the invention;
FIG. 7 is a diagram of a video transmission processing apparatus according to an embodiment of the present invention;
FIG. 8 is another schematic diagram of a video transmission processing apparatus according to an embodiment of the invention;
FIG. 9 is a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The video transmission processing method provided by the embodiment of the invention can be applied to the application environment shown in fig. 1. Specifically, the video transmission processing method is applied to a video transmission processing system, the video transmission processing system comprises a video sending end and a video receiving end as shown in fig. 1, the video sending end and the video receiving end communicate through a network, so that video signals can be transmitted between the video sending end and the video receiving end, network transmission bandwidth and resources occupied in the video transmission process are reduced, the video receiving end can be guaranteed to effectively restore videos, and integrity of the restored video signals is guaranteed.
In an embodiment, as shown in fig. 2, a video transmission processing method is provided, which is described by taking an example that the method is applied to a video sending end in fig. 1, and includes the following steps:
s201: and acquiring a current video image, and extracting a current RGB value corresponding to the current video image.
S202: based on the current video image, it is determined whether there is a previous video image that belongs to the same video scene as the current video image.
S203: if the prior video image exists, vector conversion is carried out on a current RGB value corresponding to the current video image to obtain a current feature vector, an incremental feature vector corresponding to the current video image is obtained based on the current feature vector and the prior feature vector corresponding to the prior video image, and the incremental feature vector is determined as image feature information corresponding to the current video image.
S204: and if no prior video image exists, determining the current RGB value corresponding to the current video image as the image characteristic information corresponding to the current video image.
S205: and sending the image characteristic information corresponding to the current video image to a video receiving end so that the video receiving end carries out image synthesis based on the image characteristic information to obtain a target synthesized image.
The current video image refers to a video image which needs to be transmitted currently. Generally, a target video to be transmitted includes multiple frames of video images ordered according to a time sequence, and a current video image refers to a video image to be transmitted at a current time. The current RGB values refer to RGB values of all pixel points extracted from the current video image, and the RGB values are gray values of R, G and B channels. In this example, the current RGB value is three-dimensional data or three two-dimensional data.
As an example, in step S201, after receiving a video transmission request triggered by a user and determining a target video to be transmitted, a video sending end may sequentially determine a current video image from the target video and extract a current RGB value corresponding to the current video image, so as to determine image feature information to be transmitted through a network based on the current RGB value, so as to send the image feature information corresponding to the current video image to a video receiving end frame by frame, so that the video receiving end performs video restoration based on the received image feature information. Understandably, each frame of the current video image carries a generation timestamp, which is the generation time of the current video image.
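The following is a minimal Python sketch of how the current RGB values described above might be held in memory; the numpy array layout and the helper names are assumptions made for illustration, not part of the disclosed method.

```python
import numpy as np

def extract_current_rgb(frame: np.ndarray) -> np.ndarray:
    # The current RGB value as "three-dimensional data": an H x W x 3 array
    # whose last axis holds the gray values of the R, G and B channels.
    if frame.ndim != 3 or frame.shape[2] != 3:
        raise ValueError("expected an H x W x 3 RGB frame")
    return frame.astype(np.uint8)

def split_channels(frame: np.ndarray):
    # The same information viewed as "three two-dimensional data":
    # one H x W gray-value map per channel.
    return frame[:, :, 0], frame[:, :, 1], frame[:, :, 2]
```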
The video scene refers to a scene corresponding to a current video image. A previous video image refers to a video image that has been transmitted prior to processing of the current video image. In this example, the previous video image is specifically an image that belongs to the same video scene as the current video image and whose generation timestamp is closest to the generation timestamp of the current video image. The current video image and the previous video image belong to the same video scene, and it can be understood that the previous video image and the current video image belong to two frames of images before and after the same target video.
As an example, in step S202, when determining a current video image that needs to be transmitted, a video transmitting end needs to determine whether there is a previous video image that belongs to the same video scene as the current video image, that is, whether the current video image is an initial frame video image in a target video that needs to be transmitted, so as to extract different image characteristic information according to a determination result, so as to ensure that network transmission bandwidth and resources occupied by the transmitted image characteristic information are less, and integrity of a video restored based on the image characteristic information is better. The image characteristic information refers to information which needs to be transmitted and corresponds to the current video image.
The current feature vector is one-dimensional feature information formed by vector conversion of the current RGB value, for example, a one-dimensional feature vector [0, 233, 255 ]. Understandably, compared with the current RGB value, the dimensionality of the current feature vector is low, and the transmission process occupies less network transmission bandwidth and resources. The prior feature vector refers to a feature vector obtained by vector converting prior RGB values extracted from a prior video image before the current time of the system.
The incremental feature vector is the part of the current feature vector that differs from the previous feature vector, i.e. the feature vector formed by the components in which the two differ.
As an example, in step S203, when determining that there is a previous video image belonging to the same video scene as the current video image, the video sending end determines that the current video image is not an initial frame video image of the video scene to which the current video image belongs, at this time, vector conversion may be performed on a current RGB value corresponding to the current video image to convert the current RGB value in a three-dimensional or two-dimensional form into a current feature vector in a one-dimensional form. Then, the video sending end compares and judges the current characteristic vector of the current video image and the prior characteristic vector of the prior video image, and determines the incremental characteristic vector of the current video image compared with the prior video image, wherein the incremental characteristic vector is the characteristic vector of the current video image which is different from the prior video image, so that the incremental characteristic vector has less data quantity compared with the current characteristic vector. And finally, the video sending end determines the incremental characteristic vector as the image characteristic information corresponding to the current video image, so that the data volume of the image characteristic information required to be sent by the current video image is less, and the network transmission bandwidth and resources occupied in the transmission process are further reduced.
As an example, in step S204, when determining that there is no previous video image belonging to the same video scene as the current video image, the video sending end determines that the current video image is an initial frame video image of the video scene to which the current video image belongs, that is, the current video image is a first frame video image in the target video to which the current video image belongs, and at this time, may determine a current RGB value corresponding to the current video image as image feature information corresponding to the current video image. In this example, when the current video image is an initial frame video image in the target video, the current RGB values corresponding to all pixel points in the current video image may be used to determine the image feature information corresponding to the current video image, so that the image feature information includes all image information in the current video image, so that the initial frame video image may be completely restored based on the image feature information, which is helpful to ensure the integrity of video image transmission.
Further, after determining the current RGB value corresponding to the current video image as the image feature information corresponding to the current video image, the video sending end may further perform vector conversion on the current RGB value corresponding to the current video image to obtain the current feature vector corresponding to the current video image, and store the current feature vector and the current video image in a first database of the video sending end in an associated manner. Understandably, when no previous video image exists in the current video image, that is, the current video image is an initial frame video image in the target video, the current feature vector obtained by vector conversion of the current RGB value is stored in the first database so as to serve as a previous feature vector corresponding to a next frame video image of the current video image, which is beneficial to improving the transmission efficiency of the subsequent video image.
As an example, in step S205, the video sending end sends image feature information corresponding to the current video image to the video receiving end, so that the video receiving end performs image synthesis based on the image feature information to obtain a target synthesized image. The image characteristic information can be the current RGB value of the initial frame video image of the same video scene, so that the image characteristic information contains all information of the current video image, and the information integrity of the target composite image is favorably ensured; because the image characteristic information can also be the increment characteristic vector of other current video images except the initial frame video image in the same video scene, the method is beneficial to ensuring that the network transmission bandwidth and resources occupied in the transmission process are less.
Further, the video sending end sends the image characteristic information corresponding to the current video image and the scene identification corresponding to the video scene to the video receiving end together, so that the video receiving end carries out image synthesis based on the image characteristic information and the scene identification, and the efficiency of image synthesis processing is improved.
In the video transmission processing method provided by the embodiment, when a previous video image belonging to the same video scene exists in a current video image, an incremental feature vector corresponding to the current video image is determined as image feature information, which is beneficial to ensuring that less network transmission bandwidth and resources are occupied in the transmission process, and improving the transmission efficiency; when the current video image does not have a previous video image belonging to the same video scene, the current RGB value of the current video image is determined as the image characteristic information, so that the image characteristic information contains all information of the current video image, and the completeness of the restoration of the subsequent video image is favorably ensured.
In one embodiment, as shown in fig. 3, the step S202 of determining whether there is a previous video image belonging to the same video scene as the current video image based on the current video image includes:
s301: and inquiring a first database based on the current video image to acquire the existing video image.
S302: and carrying out similarity calculation on the current video image and the existing video image to obtain the image similarity.
S303: and if the image similarity is greater than the first similarity threshold, determining the existing video image as a previous video image belonging to the same video scene with the current video image.
The first database is a database that is provided on, or connected to, the video sending end and is used for storing images. The existing video image refers to a video image that has been transmitted before the current video image is transmitted. The first similarity threshold is a pre-configured threshold for determining whether two images are similar enough to be regarded as belonging to the same video scene.
As an example, in step S301, the video sending end may query the first database based on the generation timestamp of the current video image, and determine whether there is an existing video image with the generation timestamp before the generation timestamp of the current video image, so as to determine whether there is a previous video image based on the existing video image, thereby ensuring the transmission sequence of all video images in the target video, and being beneficial to ensuring the integrity and effectiveness of video image synthesis. For example, if the generation timestamp of the current video image is T1 and the generation timestamp of the existing video image is T0, T0< T1.
Further, in order to improve processing efficiency and ensure the timeliness of video transmission processing, the video sending end may also query the first database based on the generation timestamp of the current video image and determine whether there is an existing video image whose generation timestamp falls within a preset acquisition period before the generation timestamp of the current video image, so as to determine whether a previous video image exists based on that existing video image, thereby ensuring the transmission order of all video images in the target video and helping guarantee the integrity and effectiveness of video image synthesis. For example, if the generation timestamp of the current video image is T1, the preset acquisition period is ΔT and the generation timestamp of the existing video image is T0, then T1 - ΔT < T0 < T1. Limiting the query to this period bounds the number of existing video images that are acquired, which helps save processing time in the subsequent image similarity calculation.
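A minimal sketch of this timestamp-based query is given below; representing the first database as a plain list of (generation_timestamp, image) pairs and the function name are assumptions made for illustration.

```python
def query_existing_images(first_db, t1, delta_t=None):
    """Look up candidate existing video images by generation timestamp.

    `first_db` is assumed to be a list of (generation_timestamp, image) pairs.
    Without `delta_t`, every image generated before t1 is a candidate; with
    `delta_t`, only images whose timestamp t0 satisfies t1 - delta_t < t0 < t1
    are returned, matching the preset acquisition period described above.
    """
    if delta_t is None:
        return [(t0, img) for t0, img in first_db if t0 < t1]
    return [(t0, img) for t0, img in first_db if t1 - delta_t < t0 < t1]
```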
Understandably, when the video sending end queries the first database based on the current video image and cannot acquire any existing video image, that is, when there is no existing video image whose generation timestamp is earlier than the generation timestamp of the current video image, it is directly determined that no previous video image belonging to the same video scene as the current video image exists.
As an example, in step S302, the video sending end may use a similarity algorithm, for example, a cosine similarity algorithm or other similarity algorithms, to perform similarity calculation on the current video image and the existing video image, so as to obtain an image similarity, so as to determine whether the current video image and the existing video image belong to the same video scene according to the image similarity, that is, determine whether the current video image and the existing video image belong to two frames of video images before and after the same video image.
In an embodiment, in step S302, performing similarity calculation on the current video image and the existing video image to obtain the image similarity includes: (1) comparing, for each pixel point at the same position in the current video image and the existing video image, the current RGB value with the existing RGB value of the existing video image; (2) counting the number of pixel points whose current RGB value is the same as the existing RGB value, and determining this number as the first same number; (3) counting the number of all pixel points in the current video image, and determining this number as the first total number; (4) obtaining the image similarity according to the first same number and the first total number. The existing RGB values refer to the RGB values corresponding to each pixel point in the existing video image. In this example, the image similarity may be determined as the ratio of the first same number to the first total number, so that the image similarity can be obtained quickly by simply comparing and counting the RGB values of the pixel points.
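The pixel-counting similarity above can be sketched as follows, assuming both images are numpy arrays of the same H x W x 3 shape and that the similarity is taken as the ratio of the first same number to the first total number.

```python
import numpy as np

def image_similarity(current_rgb: np.ndarray, existing_rgb: np.ndarray) -> float:
    """Ratio of pixel points whose RGB values are identical in both images."""
    same = np.all(current_rgb == existing_rgb, axis=2)  # True where R, G and B all match
    first_same_number = int(same.sum())
    first_total_number = same.size                      # all pixel points in the image
    return first_same_number / first_total_number
```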
As an example, in step S303, the video sending end compares the calculated image similarity with a first preset similarity threshold, and if the image similarity is greater than the first similarity threshold, the existing video image is identified as a previous video image belonging to the same video scene as the current video image, that is, the existing video image and the current video image are identified as two previous and next frames of video images in the target video.
In one embodiment, as shown in fig. 4, the step S202 of determining whether there is a previous video image belonging to the same video scene as the current video image based on the current video image includes:
s401: and inquiring a first database based on the current video image to acquire the existing video image.
S402: and performing target detection on the current video image and the existing video image to obtain a current detection target corresponding to the current video image and an existing detection target corresponding to the existing video image.
S403: and carrying out similarity calculation on the current detection target and the existing detection target to obtain the target similarity.
S404: and if the target similarity is greater than the second similarity threshold, determining the existing video image as a previous video image belonging to the same video scene as the current video image.
The second similarity threshold is a pre-configured threshold for determining whether the degree of similarity between specific regions of the two video images reaches the standard for regarding them as belonging to the same video scene, and the second similarity threshold is larger than the first similarity threshold.
In this example, the processing procedure of step S401 is the same as the processing procedure of step S301, and is not repeated here to avoid repetition.
As an example, in step S402, the video sending end may perform target detection on the current video image by using a target detection algorithm, and obtain a current detection target corresponding to the current video image; and carrying out target detection on the existing video image by adopting a target detection algorithm to obtain an existing detection target corresponding to the existing video image. For example, the current video image and the existing video image can be detected by adopting, but not limited to, target detection algorithms such as Fast R-CNN, R-FCN, YOLO, SSD, RetinaNet and the like, so as to extract a current detection target corresponding to the current video image and an existing detection target corresponding to the existing video image, and the current detection target and the existing detection target can reflect important information of the video image.
As an example, in step S403, the video sending end may use a similarity algorithm, for example, a cosine similarity algorithm or other similarity algorithms, to perform similarity calculation on the current detection target corresponding to the current video image and the existing detection target corresponding to the existing video image, and obtain a target similarity between the current detection target and the existing detection target, so as to determine whether the current video image and the existing video image belong to the same video scene according to the target similarity, that is, determine whether the current video image and the existing video image belong to two frames of video images before and after the same video image.
Performing similarity calculation on the current detection target and the existing detection target to obtain the target similarity includes: (1) comparing, for each pixel point at the same position in the current detection target and the existing detection target, the current RGB value with the existing RGB value of the existing detection target; (2) counting the number of pixel points whose current RGB value is the same as the existing RGB value, and determining this number as the second same number; (3) counting the number of all pixel points in the current detection target, and determining this number as the second total number; (4) obtaining the target similarity according to the second same number and the second total number. The existing RGB values refer to the RGB values corresponding to each pixel point in the existing detection target. In this example, the target similarity may be determined as the ratio of the second same number to the second total number, so that the target similarity can be obtained quickly by simply comparing and counting the RGB values of the pixel points.
As an example, in step S404, the video sending end compares the calculated target similarity with a second similarity threshold configured in advance, and if the target similarity is greater than the second similarity threshold, the existing video image is identified as a previous video image belonging to the same video scene as the current video image, that is, the existing video image and the current video image are identified as two previous and next frames of video images in the target video. Understandably, the similarity calculation is performed by using the current detection target extracted from the current video image and the existing detection target extracted from the existing video image, and compared with the similarity calculation performed by using the current video image and the existing video image, the similarity detection range can be effectively reduced, so that the processing efficiency is improved.
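A sketch of this region-restricted comparison is shown below; the (x1, y1, x2, y2) bounding-box format and the assumption that both detected regions have the same size are illustrative choices, not requirements stated in the text.

```python
import numpy as np

def target_similarity(current_rgb, existing_rgb, current_box, existing_box):
    """Pixel-ratio similarity computed only over the detected target regions."""
    cx1, cy1, cx2, cy2 = current_box
    ex1, ey1, ex2, ey2 = existing_box
    cur = current_rgb[cy1:cy2, cx1:cx2]     # current detection target
    old = existing_rgb[ey1:ey2, ex1:ex2]    # existing detection target (same size assumed)
    same = np.all(cur == old, axis=2)       # second same number: identical R, G and B
    return float(same.sum()) / same.size    # divided by the second total number
```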
In an embodiment, as shown in fig. 5, in step S203, acquiring an incremental feature vector corresponding to the current video image based on the current feature vector and a previous feature vector corresponding to the previous video image includes:
s501: and comparing the current characteristic vector and the prior characteristic vector corresponding to the same pixel point in the current video image and the prior video image.
S502: and if the current characteristic vector and the prior characteristic vector corresponding to the same pixel point are different, determining the pixel point as a target pixel point.
S503: and determining the current characteristic vectors corresponding to all target pixel points in the current video image as the incremental characteristic vectors corresponding to the current video image.
As an example, in step S501, since the current video image and the previous video image belong to the same video scene, that is, belong to two frames of video images before and after the same target video, the current video image and the previous video image contain the same number of pixel points, and the current feature vector and the previous feature vector corresponding to the same pixel point need to be compared to determine whether there is a difference between the current feature vector and the previous feature vector corresponding to the same pixel point.
For example, the current video image S contains N sequentially ordered pixel points s1, s2, ..., sn, and the previous video image P contains N sequentially ordered pixel points p1, p2, ..., pn. The current feature vector corresponding to pixel point s1 in the current video image S is compared with the previous feature vector corresponding to pixel point p1 in the previous video image P, the current feature vector corresponding to pixel point s2 is compared with the previous feature vector corresponding to pixel point p2, and so on, until the current feature vector corresponding to pixel point sn is compared with the previous feature vector corresponding to pixel point pn.
As an example, in step S502, when the video sending end compares the current feature vector and the previous feature vector corresponding to the same pixel point and finds that they are different, the video sending end determines that pixel point as a target pixel point, which can be understood as a pixel point whose image feature information needs to be transmitted. For example, if the current feature vector corresponding to pixel point s1 is different from the previous feature vector corresponding to pixel point p1 in the previous video image P, then pixel points s1 and p1 are target pixel points.
As an example, in step S503, the video sending end may determine current feature vectors corresponding to all target pixel points in the current video image as incremental feature vectors corresponding to the current video image, where the incremental feature vectors only include the current feature vectors corresponding to the target pixel points, so that the data volume included in the incremental feature vectors is small, and the incremental feature vectors are determined as image feature information corresponding to the current video image, which is beneficial to reducing network transmission bandwidth and resources occupied in the transmission process.
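Steps S501 to S503 can be sketched as follows, assuming the per-pixel feature vectors of both images are available as equal-length sequences; the dictionary keyed by pixel index is an illustrative way of recording which target pixel points the incremental feature vector covers.

```python
def incremental_feature_vector(current_vectors, previous_vectors):
    """Keep only the current feature vectors of the target pixel points (S501-S503)."""
    increment = {}
    for i, (cur, prev) in enumerate(zip(current_vectors, previous_vectors)):
        if cur != prev:            # S502: a difference marks a target pixel point
            increment[i] = cur     # S503: only the current vector is transmitted
    return increment

# Example: only pixel point 1 differs, so only its current vector is kept.
# incremental_feature_vector([(0, 233, 255), (10, 10, 10)],
#                            [(0, 233, 255), (12, 10, 10)])  ->  {1: (10, 10, 10)}
```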
In an embodiment, as shown in fig. 6, a video transmission processing method is provided, which is described by taking the application of the method to the video receiving end in fig. 1 as an example, and includes the following steps:
s601: and receiving the image characteristic information of the current video image sent by the video sending end.
S602: and analyzing the image characteristic information to obtain the current RGB value or the incremental characteristic vector.
S603: and if the image characteristic information contains the current RGB value, performing image synthesis based on the current RGB value to obtain a target synthesized image.
S604: and if the image characteristic information contains the increment characteristic vector, acquiring a prior synthesized image, and performing image synthesis based on the prior synthesized image and the increment characteristic vector to acquire a target synthesized image.
As an example, in step S601, the video receiving end may receive, in real time, image feature information corresponding to a current video image sent by the video sending end frame by frame, where each image feature information may be a current RGB value or an incremental feature vector corresponding to the current video image. The current RGB values are the RGB values of all the pixel points extracted from the current video image. The incremental feature vector is a feature vector which is distinguished when the current feature vector obtained by converting the current RGB value is compared with the prior feature vector of the prior video image.
As an example, in step S602, after receiving each image feature information, the video receiving end needs to parse the image feature information to extract the current RGB value or the incremental feature vector from the image feature information. Specifically, if the image characteristic information is an initial frame video image in the transmitted target video, the current RGB value can be obtained, which is helpful for ensuring the integrity of the image information; if the image characteristic information is not the initial frame video image in the transmitted target video, namely the prior video image belonging to the same video scene exists, the incremental characteristic vector can be obtained, which is beneficial to ensuring that the network transmission bandwidth and resources occupied in the image characteristic information transmission process are less.
As an example, in step S603, when the video receiving end analyzes the image feature information and obtains the current RGB value, the video receiving end may directly perform image synthesis based on the current RGB value to obtain a target synthesis image corresponding to the image feature information. When the received image characteristic information is the current RGB value of the initial frame video image of the target video, the image synthesis can be directly carried out based on the current RGB value, so that the obtained target synthetic image contains complete image information, and the image effect of the target synthetic image is guaranteed.
Further, the video receiving end may receive the image feature information and the scene identifier, and after performing image synthesis based on the current RGB value to obtain the target synthesized image, may further store the target synthesized image in the second database corresponding to the scene identifier, so as to perform image synthesis management using the target synthesized image as a previous synthesized image corresponding to subsequently received image feature information.
As an example, in step S604, when the video receiving end parses the image feature information and obtains the incremental feature vector, it may be determined that the video receiving end has received the image feature information of the previous video image belonging to the same video scene as the image feature information before receiving the image feature information, and performs image synthesis based on the image feature information of the previous video image, and obtains and stores the previous synthesized image. Therefore, the video receiving end may acquire the previous synthesized image, for example, may query the second database based on the scene identifier corresponding to the image feature information, acquire the previous synthesized image from the second database, and the previous synthesized image may be one of the synthesized images in the second database that is closest to the received image feature information. And finally, the video receiving end can perform image synthesis based on the prior synthesized image and the incremental characteristic vector to obtain a target synthesized image corresponding to the image characteristic information.
For example, the video sending end sends image characteristic information corresponding to M frames of current video images to the video receiving end, and the video receiving end receives the M pieces of image characteristic information in sequence and parses each of them. The image characteristic information corresponding to the 1st frame is the current RGB value and is identified as corresponding to the initial frame video image of the video scene; image synthesis is performed directly based on this current RGB value to obtain the 1st frame target synthesized image. The image characteristic information corresponding to frames 2 to M is an incremental feature vector; for the i-th frame, where 2 ≤ i ≤ M, image synthesis is performed on the incremental feature vector of the i-th frame and the previously synthesized image of the (i-1)-th frame to obtain the i-th frame target synthesized image.
In this example, because the incremental feature vector corresponding to the current video image consists of the current feature vectors corresponding to all target pixel points, image synthesis based on the previously synthesized image and the incremental feature vector may specifically replace, in the previously synthesized image, the previous feature vectors corresponding to the target pixel points with these current feature vectors, thereby achieving the image synthesis effect and ensuring the information integrity of the target synthesized image. Understandably, the 1st frame target synthesized image of the target video is synthesized from the current RGB values, so its information integrity is effectively guaranteed; the 2nd frame target synthesized image is synthesized from the 2nd frame image characteristic information and the 1st frame target synthesized image, so its information integrity is also effectively guaranteed; and by analogy, the information integrity of all target synthesized images can be guaranteed.
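A receiver-side sketch of steps S602 to S604 follows; the message format with a "type" field and the representation of the previously synthesized image by its per-pixel entries are assumptions of this sketch rather than the disclosed wire format.

```python
def synthesize_frame(feature_info, previous_vectors=None):
    # feature_info is assumed to be either
    #   {"type": "rgb", "data": [per_pixel_value, ...]}          (initial frame, S603) or
    #   {"type": "increment", "data": {pixel_index: vector}}     (later frame, S604).
    if feature_info["type"] == "rgb":
        # S603: the full current RGB values are available; in this simplified
        # representation they serve directly as the per-pixel entries of the
        # target synthesized image.
        return list(feature_info["data"])
    # S604: start from the previously synthesized image and overwrite only the
    # feature vectors of the target pixel points with the incremental vectors.
    target = list(previous_vectors)
    for pixel_index, vector in feature_info["data"].items():
        target[pixel_index] = vector
    return target
```

Applied frame by frame, the first call handles the RGB-based message and each later call receives an incremental message together with the result of the previous call, matching the M-frame example above.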
In the video transmission processing method provided by this embodiment, the image feature information of the current video image received by the video receiving end may be a current RGB value or an incremental feature vector, which can ensure that the network transmission bandwidth and resources occupied during the transmission process are less; when the image characteristic information is the current RGB value, image synthesis is directly carried out based on the current RGB value, so that the obtained target synthetic image contains complete image information, and the image effect of the target synthetic image is guaranteed; when the image feature information is the incremental feature vector, image synthesis can be performed on the basis of the incremental feature vector and the previously synthesized image, which is helpful for ensuring the information integrity of all target synthesized images.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In an embodiment, a video transmission processing apparatus is provided, and the video transmission processing apparatus corresponds to the video transmission processing methods in the above embodiments one to one. As shown in fig. 7, the video transmission processing apparatus includes a current image extraction module 701, a previous image detection module 702, a first feature information acquisition module 703, a second feature information acquisition module 704, and an image feature information transmission module 705. The functional modules are explained in detail as follows:
the current image extraction module 701 is configured to obtain a current video image and extract a current RGB value corresponding to the current video image.
A previous image detection module 702 configured to determine whether there is a previous video image belonging to the same video scene as the current video image based on the current video image.
The first feature information obtaining module 703 is configured to, if there is a previous video image, perform vector conversion on a current RGB value corresponding to the current video image to obtain a current feature vector, obtain an incremental feature vector corresponding to the current video image based on the current feature vector and the previous feature vector corresponding to the previous video image, and determine the incremental feature vector as image feature information corresponding to the current video image.
The second characteristic information obtaining module 704 is configured to determine, if there is no previous video image, a current RGB value corresponding to the current video image as image characteristic information corresponding to the current video image.
The image characteristic information sending module 705 is configured to send image characteristic information corresponding to the current video image to the video receiving end, so that the video receiving end performs image synthesis based on the image characteristic information to obtain a target synthesized image.
Preferably, the prior image detection module 702 comprises:
and the existing image acquisition unit is used for inquiring the first database based on the current video image to acquire the existing video image.
And the image similarity obtaining unit is used for carrying out similarity calculation on the current video image and the existing video image to obtain the image similarity.
And the first previous image determining unit is used for determining the existing video image as a previous video image belonging to the same video scene with the current video image if the image similarity is greater than the first similarity threshold.
Preferably, the image similarity obtaining unit includes:
and the RGB value comparison subunit is used for comparing the current RGB value corresponding to the same pixel point in the current video image and the existing video image with the existing RGB value of the existing video image.
And the first same-quantity counting subunit is used for counting the quantity of the pixel points with the same current RGB value and the existing RGB value and determining the pixel points as the first same quantity.
And the first total number counting subunit is used for counting the number of all pixel points in the current video image and determining the number as the first total number.
And the image similarity obtaining subunit is used for obtaining the image similarity according to the first same quantity and the first total quantity.
Preferably, the prior image detection module 702 comprises:
and the existing image acquisition unit is used for inquiring the first database based on the current video image to acquire the existing video image.
And the target detection processing unit is used for carrying out target detection on the current video image and the existing video image and acquiring a current detection target corresponding to the current video image and an existing detection target corresponding to the existing video image.
And the target similarity acquiring unit is used for calculating the similarity between the current detection target and the existing detection target to acquire the target similarity.
And the second previous image determining unit is used for determining the existing video image as the previous video image belonging to the same video scene as the current video image if the target similarity is greater than the second similarity threshold.
Preferably, the first characteristic information obtaining module 703 includes:
and the feature vector comparison unit is used for comparing the current feature vector and the previous feature vector corresponding to the same pixel point in the current video image and the previous video image.
And the target pixel point determining unit is used for determining the pixel point as the target pixel point if the current characteristic vector corresponding to the same pixel point is different from the previous characteristic vector.
And the increment characteristic vector determining unit is used for determining the current characteristic vectors corresponding to all the target pixel points in the current video image as the increment characteristic vectors corresponding to the current video image.
In an embodiment, a video transmission processing apparatus is provided, and the video transmission processing apparatus corresponds to the video transmission processing methods in the above embodiments one to one. As shown in fig. 8, the video transmission processing apparatus includes an image feature information receiving module 801, an image feature information analyzing module 802, a first image composition processing module 803, and a second image composition processing module 804. The functional modules are explained in detail as follows:
an image characteristic information receiving module 801, configured to receive image characteristic information of a current video image sent by a video sending end.
And an image characteristic information analyzing module 802, configured to analyze the image characteristic information to obtain a current RGB value or an incremental characteristic vector.
The first image synthesis processing module 803 is configured to, if the image feature information includes the current RGB value, perform image synthesis based on the current RGB value, and acquire a target synthesis image.
And a second image synthesis processing module 804, configured to, if the image feature information includes an incremental feature vector, obtain a previous synthesized image, perform image synthesis based on the previous synthesized image and the incremental feature vector, and obtain a target synthesized image.
For specific limitations of the video transmission processing apparatus, reference may be made to the above limitations of the video transmission processing method, which are not described herein again. The respective modules in the video transmission processing apparatus can be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data used or generated during execution of the video transmission processing method. The network interface of the computer device is used to communicate with an external terminal through a network connection. When executed by the processor, the computer program implements the video transmission processing method.
In an embodiment, a computer device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the video transmission processing method in the foregoing embodiments is implemented, for example S201 to S205 shown in fig. 2, or the steps shown in fig. 3 to fig. 6; details are not repeated here. Alternatively, when the processor executes the computer program, the functions of the modules/units in the embodiments of the video transmission processing apparatus are implemented, for example the functions of the current image extraction module 701, the previous image detection module 702, the first feature information obtaining module 703, the second feature information obtaining module 704, and the image feature information sending module 705 shown in fig. 7, or the functions of the image feature information receiving module 801, the image feature information analyzing module 802, the first image synthesis processing module 803, and the second image synthesis processing module 804 shown in fig. 8; details are not repeated here.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the video transmission processing method in the foregoing embodiments is implemented, for example S201 to S205 shown in fig. 2, or the steps shown in fig. 3 to fig. 6; details are not repeated here. Alternatively, when executed by a processor, the computer program implements the functions of the modules/units in the embodiments of the video transmission processing apparatus, for example the functions of the current image extraction module 701, the previous image detection module 702, the first feature information obtaining module 703, the second feature information obtaining module 704, and the image feature information sending module 705 shown in fig. 7, or the functions of the image feature information receiving module 801, the image feature information analyzing module 802, the first image synthesis processing module 803, and the second image synthesis processing module 804 shown in fig. 8; details are not repeated here.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated as an example; in practical applications, the above functions may be allocated to different functional units and modules as required, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and replacements do not cause the corresponding technical solutions to depart substantially from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
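To make the sender-side flow concrete, a non-normative sketch is given below. find_previous_image, to_feature_vectors, and send are hypothetical helpers standing in for the same-scene lookup, vector conversion, and network transmission respectively, and incremental_feature_vector is the function sketched earlier; this is an illustration of the branch between sending full RGB values and sending an incremental feature vector, not a definitive implementation.

```python
def process_frame(current_image, send, find_previous_image, to_feature_vectors):
    """One possible sender-side decision flow (illustrative only)."""
    # Extract the current RGB values of the frame.
    current_rgb = current_image
    # Look for a previous video image belonging to the same video scene.
    previous_image = find_previous_image(current_image)
    if previous_image is None:
        # No same-scene predecessor: send the full RGB values.
        send({"rgb": current_rgb})
        return
    # Same-scene predecessor found: send only the incremental feature vector.
    current_features = to_feature_vectors(current_rgb)
    previous_features = to_feature_vectors(previous_image)
    target_pixels, changed_vectors = incremental_feature_vector(
        current_features, previous_features)
    send({"increment": (target_pixels, changed_vectors)})
```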

Claims (10)

1. A video transmission processing method is characterized by comprising the following steps executed by a video sending end:
acquiring a current video image, and extracting a current RGB value corresponding to the current video image;
determining whether there is a previous video image belonging to the same video scene as the current video image based on the current video image;
if the previous video image exists, performing vector conversion on a current RGB value corresponding to the current video image to obtain a current feature vector, acquiring an incremental feature vector corresponding to the current video image based on the current feature vector and the previous feature vector corresponding to the previous video image, and determining the incremental feature vector as image feature information corresponding to the current video image;
if the previous video image does not exist, determining the current RGB value corresponding to the current video image as the image feature information corresponding to the current video image;
and sending the image feature information corresponding to the current video image to a video receiving end, so that the video receiving end performs image synthesis based on the image feature information to obtain a target synthesized image.
2. The video transmission processing method according to claim 1, wherein said determining whether there is a previous video image belonging to the same video scene as the current video image based on the current video image comprises:
querying a first database based on the current video image to obtain an existing video image;
performing similarity calculation on the current video image and the existing video image to obtain an image similarity;
and if the image similarity is greater than a first similarity threshold, determining the existing video image as a previous video image belonging to the same video scene as the current video image.
3. The video transmission processing method according to claim 2, wherein said performing similarity calculation between the current video image and the existing video image to obtain image similarity comprises:
comparing, for the same pixel point in the current video image and the existing video image, the current RGB value with the existing RGB value of the existing video image;
counting the number of pixel points whose current RGB value is the same as the existing RGB value, and determining this number as a first same number;
counting the number of all pixel points in the current video image, and determining this number as a first total number;
and obtaining the image similarity according to the first same number and the first total number.
4. The video transmission processing method according to claim 1, wherein said determining whether there is a previous video image belonging to the same video scene as the current video image based on the current video image comprises:
querying a first database based on the current video image to obtain an existing video image;
performing target detection on the current video image and the existing video image to obtain a current detection target corresponding to the current video image and an existing detection target corresponding to the existing video image;
performing similarity calculation on the current detection target and the existing detection target to obtain a target similarity;
and if the target similarity is greater than a second similarity threshold, determining the existing video image as a previous video image belonging to the same video scene as the current video image.
5. The video transmission processing method of claim 1, wherein obtaining the incremental feature vector corresponding to the current video image based on the current feature vector and a previous feature vector corresponding to the previous video image comprises:
comparing the current feature vector and the previous feature vector corresponding to the same pixel point in the current video image and the previous video image;
if the current feature vector and the previous feature vector corresponding to the same pixel point are different, determining the pixel point as a target pixel point;
and determining the current feature vectors corresponding to all the target pixel points in the current video image as the incremental feature vector corresponding to the current video image.
6. A video transmission processing method is characterized by comprising the following steps executed by a video receiving end:
receiving image feature information of a current video image sent by a video sending end;
analyzing the image feature information to obtain a current RGB value or an incremental feature vector;
if the image feature information contains the current RGB value, performing image synthesis based on the current RGB value to obtain a target synthesized image;
and if the image feature information contains the incremental feature vector, acquiring a previous synthesized image, and performing image synthesis based on the previous synthesized image and the incremental feature vector to obtain the target synthesized image.
7. A video transmission processing apparatus, comprising:
a current image extraction module, configured to acquire a current video image and extract a current RGB value corresponding to the current video image;
a previous image detection module, configured to determine, based on the current video image, whether there is a previous video image belonging to the same video scene as the current video image;
a first feature information obtaining module, configured to, if the previous video image exists, perform vector conversion on the current RGB value corresponding to the current video image to obtain a current feature vector, obtain an incremental feature vector corresponding to the current video image based on the current feature vector and a previous feature vector corresponding to the previous video image, and determine the incremental feature vector as image feature information corresponding to the current video image;
a second feature information obtaining module, configured to, if the previous video image does not exist, determine the current RGB value corresponding to the current video image as the image feature information corresponding to the current video image;
and an image feature information sending module, configured to send the image feature information corresponding to the current video image to a video receiving end, so that the video receiving end performs image synthesis based on the image feature information to obtain a target synthesized image.
8. A video transmission processing apparatus, comprising:
an image feature information receiving module, configured to receive image feature information of a current video image sent by a video sending end;
an image feature information analyzing module, configured to analyze the image feature information to obtain a current RGB value or an incremental feature vector;
a first image synthesis processing module, configured to, if the image feature information contains the current RGB value, perform image synthesis based on the current RGB value to obtain a target synthesized image;
and a second image synthesis processing module, configured to, if the image feature information contains the incremental feature vector, acquire a previous synthesized image and perform image synthesis based on the previous synthesized image and the incremental feature vector to obtain the target synthesized image.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the video transmission processing method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the video transmission processing method according to any one of claims 1 to 6.
CN202110261843.7A 2021-03-10 2021-03-10 Video transmission processing method and device, computer equipment and storage medium Pending CN113014958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110261843.7A CN113014958A (en) 2021-03-10 2021-03-10 Video transmission processing method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110261843.7A CN113014958A (en) 2021-03-10 2021-03-10 Video transmission processing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113014958A true CN113014958A (en) 2021-06-22

Family

ID=76404495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110261843.7A Pending CN113014958A (en) 2021-03-10 2021-03-10 Video transmission processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113014958A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104867077A (en) * 2014-02-25 2015-08-26 华为技术有限公司 Method for storing medical image, method for exchanging information and device thereof
CN107920276A (en) * 2016-10-09 2018-04-17 中国电信股份有限公司 A kind of O&M operation On line inspection method, apparatus and auditing system
CN108282674A (en) * 2018-02-05 2018-07-13 天地融科技股份有限公司 A kind of video transmission method, terminal and system
CN108985162A (en) * 2018-06-11 2018-12-11 平安科技(深圳)有限公司 Object real-time tracking method, apparatus, computer equipment and storage medium
CN112053313A (en) * 2020-08-31 2020-12-08 西安工业大学 Night vision anti-halation video processing method for heterogeneous image fusion
CN112333347A (en) * 2021-01-05 2021-02-05 同方威视技术股份有限公司 Image data transmission method, device, equipment and computer storage medium

Similar Documents

Publication Publication Date Title
CN108985162B (en) Target real-time tracking method and device, computer equipment and storage medium
CN110740103A (en) Service request processing method and device, computer equipment and storage medium
CN110719332B (en) Data transmission method, device, system, computer equipment and storage medium
CN110490594B (en) Service data processing method and device, computer equipment and storage medium
CN107015942B (en) Method and device for multi-core CPU (Central processing Unit) packet sending
CN110399367B (en) Business data processing method and device, computer equipment and storage medium
CN112434039A (en) Data storage method, device, storage medium and electronic device
CN112689007B (en) Resource allocation method, device, computer equipment and storage medium
CN111241938A (en) Face recognition method and device based on image verification and computer equipment
CN112035531B (en) Sensitive data processing method, device, equipment and medium
CN111445487A (en) Image segmentation method and device, computer equipment and storage medium
CN111224939A (en) Task request intercepting method and device, computer equipment and storage medium
CN114494744A (en) Method and device for obtaining object track similarity, electronic equipment and storage medium
CN111263113B (en) Data packet sending method and device and data packet processing method and device
CN112541102B (en) Abnormal data filtering method, device, equipment and storage medium
CN113014958A (en) Video transmission processing method and device, computer equipment and storage medium
CN110049350B (en) Video transcoding processing method and device, computer equipment and storage medium
CN116886770A (en) Engineering truck data transmission method based on Internet of things and engineering truck
CN109474386B (en) Signaling tracking method, system, network element equipment and storage medium
CN109857344B (en) Heartbeat state judgment method and device based on shared memory and computer equipment
US20230082766A1 (en) Image synchronization method and apparatus, and device and computer storage medium
CN111191612B (en) Video image matching method, device, terminal equipment and readable storage medium
CN110557374B (en) Power data acquisition method and device, computer equipment and storage medium
CN108965426B (en) Data processing method and device for audio system, computer equipment and storage medium
CN110365449B (en) Cyclic redundancy check acceleration method and device and access network equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination