CN111629264B - Web-based separate front-end image rendering method - Google Patents


Info

Publication number
CN111629264B
CN111629264B (application CN202010485289.6A)
Authority
CN
China
Prior art keywords
rendering
algorithm
video data
server
client
Prior art date
Legal status
Active
Application number
CN202010485289.6A
Other languages
Chinese (zh)
Other versions
CN111629264A (en)
Inventor
刘天弼
冯瑞
Current Assignee
Fudan University
Original Assignee
Fudan University
Priority date
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN202010485289.6A priority Critical patent/CN111629264B/en
Publication of CN111629264A publication Critical patent/CN111629264A/en
Application granted granted Critical
Publication of CN111629264B publication Critical patent/CN111629264B/en
Legal status: Active

Classifications

    • H — Electricity
    • H04 — Electric communication technique
    • H04N — Pictorial communication, e.g. television
    • H04N21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/44012 — Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N21/4402 — Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/6437 — Real-time Transport Protocol [RTP]
    • H04N21/8547 — Content authoring involving timestamps for synchronizing content
    • H04N7/181 — Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science
  • Multimedia
  • Signal Processing
  • Computer Security & Cryptography
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like

Abstract

The invention provides a web-based separated front-end image rendering method in which the client renders and composites the front-end image to be displayed from two sources: rendering elements output by a web server and video data acquired directly from a camera. The rendering elements are generated by an algorithm server from the video data and sent to the web server. The method comprises the following steps: step S1, standardize the rendering elements output by the web server; step S2, calculate the arrival delay of the video data on the algorithm side; step S3, calculate the algorithm execution time, the algorithm-result transmission time, the server processing time, and the rendering-element transmission time; step S4, calculate the arrival delay of the video data on the client side; and step S5, the client parses the video data transmitted by the camera, computes the time difference between the rendering elements and the video data, locates the video frame corresponding to each rendering element from that time difference, and renders and displays the result frame by frame on the display interface.

Description

Web-based separate front-end image rendering method
Technical Field
The invention belongs to the fields of digital image processing and web applications, relates to a video image rendering method, and in particular relates to a web-based front-end image rendering method in which the video image source is separated from the overlaid material.
Background
Intelligent video analysis plays an important role in modern society; monitoring services built on video analysis algorithms are widely used across industries. To deliver notifications, alarms, and similar effects, intelligent video analysis typically superimposes shapes, text, icons, and other information onto the analyzed picture, and the rendered picture is then sent to a large monitoring screen, a streaming media server, a user's PC display, and so on. In general, intelligent video analysis displays its results in one of the following ways:
1. A smart camera or edge-computing device performs intelligent analysis and computation directly at the shooting location and uploads the analysis-result data and the rendered picture to a server;
2. The video is transmitted to a server, an analysis algorithm is started on the server, and the rendered picture and alarm information produced by the analysis are displayed directly;
3. The video is transmitted to a dedicated algorithm server for analysis, and the analysis results are forwarded to the server that runs the monitoring service.
Each of these methods has drawbacks:
First, a smart camera or edge-computing device cannot perform large-scale, complex computation because of its limited performance, which caps the accuracy, speed, and sophistication of the video analysis algorithm. Second, performing video analysis at the server places excessive computational load on it; constrained by its computing resources and the monitoring system's architecture, it cannot analyze a large number of video channels. Third, a dedicated video-analysis algorithm server is preferable in terms of how fully the analysis algorithms can be exploited: dedicated computing resources give the algorithms ample computing power, and the number of channels is not limited. However, when this third method must display on a large screen at the console, it faces an image-transmission problem: transmitting a stream of rendered pictures over the network requires large bandwidth; compressing and encoding the pictures into a video stream introduces excessive delay; and too many video channels overload the server.
Lightweight analysis of video data suits a smart camera or edge computing; for a small number of video channels, the analysis algorithm can run on the server. The mainstream approach to analyzing many video channels today is to use servers dedicated to video-image-analysis algorithms and supply the analysis results to the business layer. If the console must display, in real time on a large screen, the algorithm-rendered images of arbitrarily many video channels, a custom client or a dedicated streaming-media service usually has to be built. Such methods are complex to implement and inefficient; the extra encoding, decoding, and forwarding steps further increase latency; and the techniques are not general.
Disclosure of Invention
To solve these problems, the invention provides a method in which the client obtains the video image source and the algorithm analysis results separately and fuses and renders them in real time, avoiding the load that large volumes of images would place on the server and the network during transmission. The invention adopts the following technical scheme:
the invention provides a web-based separate front-end image rendering method, which is characterized in that the method is used for rendering and combining a client end into a front-end image to be displayed by the client end according to rendering elements output by a web server and video data acquired by a camera, wherein the rendering elements are generated by an algorithm server based on video data processing and are sent to the web server, and the web-based separate front-end image rendering method comprises the following steps: step S1, standardizing the rendering elements output by the web server; step S2, the algorithm server sends a request to the camera and records the time interval of the request and the response of the camera as the algorithm side response interval, and the algorithm side video data arrival time delay t is calculated and obtained based on the algorithm side response interval1(ii) a Step S3, recording the processing of the video data through the algorithm server and the web server and sending the video data to the client for each step of execution time stamp, further respectively calculating the algorithm execution time t of the algorithm server based on the time stamp2Algorithm result transmission time t between algorithm server and web server3Server processing time t of web server4And rendering transmission time t between web server and client5(ii) a Step S4, the client sends a request to the camera and records the time interval between the request and the response of the camera as the client-side response interval, and further calculates the client-side video data arrival time delay t based on the client-side response interval6(ii) a Step S5, the client analyzes the video data transmitted by the camera and according to t1、t2、t3、t4、t5And t6Calculating the time difference between the rendering element and the video data, further obtaining the video data of the frame corresponding to the rendering element based on the time difference, 
and performing frame-by-frame rendering display on the display interface, wherein the step S3 includes the following sub-steps: step S3-1, recording the time stamp of the video data reaching the algorithm server, and recording the time stamp when the algorithm is executed and the algorithm server sends the algorithm result, thereby calculating the algorithm execution time t2(ii) a Step S3-2, recording web serviceThe time stamp of the algorithm result is received by the device, so that the transmission time t of the algorithm result is calculated3(ii) a Step S3-3, recording the time stamp of the rendering element sent by the web server to the client, thereby calculating the server processing time t4(ii) a Step S3-4, recording the time stamp of the client receiving the rendering element, thereby calculating the rendering transmission time t5
The web-based separated front-end image rendering method provided by the invention may further have the technical feature that step S1 comprises the following sub-steps:
Step S1-1: define a corresponding data structure for every element that needs to be drawn on the display interface.
Step S1-2: normalize each element's coordinate position, dividing the x coordinate by the image width of the video data to obtain a normalized x coordinate and the y coordinate by the image height to obtain a normalized y coordinate.
Step S1-3: normalize each element's size to obtain the rendering element; during size normalization, all width- and height-related values are expressed as proportions of the image width and height of the video data.
The web-based separated front-end image rendering method provided by the invention may further have the technical feature that the time difference is computed as Δt = t1 + t2 + t3 + t4 + t5 − t6. The algorithm server and the client each send multiple requests to the camera; the algorithm-side video-data arrival delay t1 and the client-side video-data arrival delay t6 are average delays over these requests, while the algorithm execution time t2, the algorithm-result transmission time t3, the server processing time t4, and the rendering-element transmission time t5 are relatively stable corrected values obtained by running step S3 multiple times.
The web-based separated front-end image rendering method provided by the invention may further have the technical feature that the frame-by-frame rendering is non-precisely-aligned rendering: when the client locates the video frame corresponding to a rendering element from the time difference, it converts the time difference into an approximate frame-number difference using the video's frame rate, and uses that difference to fetch the frame corresponding to the rendering element.
Action and Effect of the invention
According to the web-based separated front-end image rendering method, video data captured by the camera is analyzed by the algorithm server and processed by the web server into rendering elements; the client then obtains the rendering elements and the video data separately and renders them together frame by frame, so the web server and the algorithm server never have to encode or transmit the video itself, greatly reducing server load. Furthermore, the arrival delays and the transmission and processing times among the camera, algorithm server, web server, and client are measured, and the client computes the time difference between the video data and the rendering elements; during frame-by-frame rendering the two are matched using this time difference, which prevents the rendering elements from being misaligned with the video data. Because the rendering elements are drawn onto each decoded frame at the client before the frame is displayed, the existing computation path is fully exploited, the repeated encoding and decoding of video images for analysis and transmission is eliminated, and the load that large volumes of images would place on the server and the network is avoided.
Drawings
FIG. 1 is a block diagram of the architecture of a web platform in an embodiment of the invention;
FIG. 2 is a flow chart of the web-based separated front-end image rendering method in an embodiment of the present invention;
FIG. 3 is a schematic illustration of the time difference in an embodiment of the present invention; and
FIG. 4 is a schematic diagram comparing the method of the present invention with a conventional method.
Detailed Description
To make the technical means, inventive features, objectives, and effects of the invention easy to understand, the web-based separated front-end image rendering method is described below with reference to an embodiment and the accompanying drawings.
< example >
Fig. 1 is a block diagram of the structure of a web platform in the embodiment of the present invention.
In this embodiment, the web-based separate front-end image rendering method is implemented in the web platform 100 adopting the B/S architecture. As shown in fig. 1, the web platform 100 includes a plurality of cameras 101, a plurality of algorithm servers 102, a web server 103, a plurality of clients 104, and a communication network 105.
Each algorithm server 102 is communicatively connected to the web server through communication network 105a, and to the multiple cameras 101 it is responsible for through communication network 105b; each client 104 is communicatively connected to the web server 103 through communication network 105c and can establish connections to the cameras 101 through the web server 103 and carry out data communication. Communication network 105a is a local area network; communication networks 105b and 105c are both Ethernet networks.
In this embodiment, the example considered is that the video data shot by the cameras 101 is pedestrian-monitoring video and the client 104 displays the monitoring picture in which pedestrians are captured:
Forty-eight cameras 101 are deployed in the area to be monitored; every camera 101 provides standard RTSP streaming video data and supports at least 16 video requests.
The operating system of algorithm server 102 is Ubuntu 16.04; the deep-learning framework is PyTorch 1.01 with CUDA 9.0, and the image-acceleration computing unit is an NVIDIA 1080Ti GPU. A conventional pedestrian-detection and pedestrian-ReID algorithm is deployed on the algorithm server 102.
The operating system of the web server 103 is Ubuntu 16.04, the web framework is Django 2.0, and the database is MySQL.
OpenCV.js is deployed in the client 104 for parsing and rendering the RTSP streams.
The processing flow of the pedestrian-monitoring video in this embodiment is as follows: video data shot by a camera 101 is transmitted to an algorithm server 102 for pedestrian detection and pedestrian ReID, and the identified pedestrian identities and position-box data are transmitted to the web server 103. According to each pedestrian's identity and position, the web server 103 sends alarm information and the pedestrian's identity and position to the client 104. After the client 104 opens the web page, the user can track a pedestrian's position on a map in real time and see the monitoring picture that captures the pedestrian in the client 104's display interface, where the pedestrian is framed by a red box with the pedestrian's name labeled beside it.
FIG. 2 is a flow chart of the web-based separated front-end image rendering method in an embodiment of the present invention.
As shown in fig. 2, the web-based separate front-end image rendering method includes the steps of:
In step S1, the algorithm server 102 sends the rendering elements to the web server 103, performing a standardization operation on them before they are transmitted to the client 104 (i.e., the front end). Step S1 specifically comprises the following sub-steps:
Step S1-1: define all elements to be drawn on the display interface of client 104 as data structures as required, so that the elements are easy to serialize and transmit over the network;
Step S1-2: normalize each element's coordinate position, dividing the x coordinate by the image width to obtain the normalized x coordinate and the y coordinate by the image height to obtain the normalized y coordinate;
Step S1-3: normalize each element's size to obtain the rendering element; during size normalization, all width- and height-related values are expressed as proportions of the image width and height of the video data.
In this embodiment, when the pedestrian-monitoring video undergoes the standardization of step S1, the following data is structured in step S1-1: camera ID, pedestrian name, and pedestrian position (x coordinate, y coordinate, width, and height, all in pixels). Next, in step S1-2, the pedestrian's x coordinate is divided by the image width and the y coordinate by the image height to obtain the normalized coordinates. Finally, in step S1-3, the pedestrian's width is divided by the image width and the height by the image height to obtain the normalized size, completing the standardization of the rendering elements of the pedestrian-monitoring video.
In this embodiment, the algorithm server 102 performs this standardization through step S1 after each frame of image is analyzed; the rendering elements subsequently transmitted to the web server 103 and forwarded by it to the client 104 are the standardized results.
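As a concrete illustration of the step-S1 standardization, the sketch below structures and normalizes one pedestrian detection in Python; the `RenderElement` field names, the camera ID, and the pedestrian name are illustrative assumptions, not the patent's actual data format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class RenderElement:
    """Hypothetical structure for one pedestrian rendering element (step S1-1)."""
    camera_id: str
    name: str
    x: float  # normalized: pixel x / image width
    y: float  # normalized: pixel y / image height
    w: float  # normalized: pixel width / image width
    h: float  # normalized: pixel height / image height

def normalize(camera_id, name, x_px, y_px, w_px, h_px, img_w, img_h):
    """Steps S1-2 and S1-3: divide coordinates and sizes by the image dimensions."""
    return RenderElement(camera_id, name,
                         x_px / img_w, y_px / img_h,
                         w_px / img_w, h_px / img_h)

# A 1920x1080 frame with a pedestrian box at (960, 540), 192x540 pixels wide/high:
elem = normalize("cam-01", "Zhang San", 960, 540, 192, 540, 1920, 1080)
payload = json.dumps(asdict(elem))  # serialized for transmission over the network
```

Because the values are proportions, the client can scale them back to whatever resolution its display canvas uses.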
Step S2: each algorithm server 102 sends requests to the cameras 101 it is responsible for analyzing, records the interval between each request and the corresponding camera 101's response as the algorithm-side response interval, and from these intervals calculates the algorithm-side video-data arrival delay t1.
In this embodiment, the algorithm server 102 repeats the request and response-interval measurement 5 times for each camera 101 and takes the average interval as the algorithm-side video-data arrival delay t1. If an individual delay deviates too far from the others, it is discarded, the request is re-sent, and the average is recomputed.
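The delay measurement of steps S2 and S4 can be sketched as follows; the `probe` callable, the 5-round default, and the exact outlier criterion (discard samples more than a few times the mean) are assumptions standing in for details the patent leaves unspecified.

```python
import time
from statistics import mean

def measure_arrival_delay(probe, rounds=5, outlier_factor=3.0):
    """Average the request/response interval over several rounds, discarding
    samples that deviate too far and re-measuring them.  `probe` is a
    hypothetical callable that sends one request to the camera and blocks
    until the response arrives."""
    samples = []
    for _ in range(rounds):
        start = time.monotonic()
        probe()
        samples.append(time.monotonic() - start)
    avg = mean(samples)
    # Keep samples within the (assumed) tolerance, then re-measure the rest.
    kept = [s for s in samples if s <= outlier_factor * avg]
    while len(kept) < rounds:
        start = time.monotonic()
        probe()
        kept.append(time.monotonic() - start)
    return mean(kept)
```

The same routine would serve for t1 at algorithm-server initialization and t6 at client initialization.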
Step S3: record a timestamp for each stage of the pipeline (i.e., the whole analysis flow of the pedestrian-monitoring video) in which the video data is processed by the algorithm server 102 and the web server 103 and sent to the client 104, and from these timestamps compute the time consumed by each link. Specifically, step S3 comprises the following sub-steps:
Step S3-1: record the timestamp a1 at which the video data reaches the algorithm server 102 and the timestamp a2 at which the algorithm finishes and the algorithm server 102 sends its result; compute the algorithm execution time t2 from a1 and a2.
Step S3-2: record the timestamp a3 at which the web server 103 receives the algorithm-result data; compute the algorithm-result transmission time t3 from a2 and a3.
Step S3-3: record the timestamp a4 at which the web server 103 sends the data to the user; compute the server processing time t4 from a3 and a4.
Step S3-4: record the timestamp a5 at which the client 104 receives the web server 103's data; compute the rendering-element transmission time t5 from a4 and a5.
In step S3 of this embodiment, the last 9 records of each measured time (t2, t3, t4, t5) are retained; after each algorithm execution, the newly measured time is averaged with the retained records to give the currently effective value for that link.
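The 9-record smoothing described above might look like this in Python; the class name and the sample values are illustrative.

```python
from collections import deque
from statistics import mean

class RollingTimeEstimate:
    """Keep the last 9 records of one link time (t2..t5) and average each
    new measurement with them to obtain the currently effective value."""
    def __init__(self, history=9):
        self.records = deque(maxlen=history)

    def update(self, new_measurement):
        self.records.append(new_measurement)
        return mean(self.records)  # currently effective time for this link

t4_estimate = RollingTimeEstimate()
for sample in [0.020, 0.024, 0.022]:  # seconds, illustrative values
    effective_t4 = t4_estimate.update(sample)
```

The bounded deque means a single anomalous measurement is diluted by up to nine recent ones, which matches the patent's goal of keeping t2 through t5 relatively stable.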
Step S4: according to the page configuration in use, the client 104 sends a request to each camera 101 whose stream needs to be rendered and records the interval between the request and the camera 101's response as the client-side response interval. This is repeated 5 times and the average is taken as the video-data arrival delay t6.
In this embodiment, the page configuration is a configuration web page that the client 104 obtains through the web server 103; it lists the IP address and port of every camera 101 so that the user can select which camera 101's surveillance video to view. The client 104 then sends its request to the corresponding camera 101 using that IP address and port.
In this embodiment, the client 104 likewise repeats the request and interval measurement 5 times and averages the intervals to obtain the video-data arrival delay t6. If an individual delay deviates too far from the others, it is discarded, the request is re-sent, and the average is recomputed.
In addition, in this embodiment one algorithm server 102 may analyze the video data of several cameras 101; the timestamps recorded and the delays computed in steps S2, S3, and S4 each correspond to a specific camera. When the front end performs the separated image rendering, each rendering element is matched to its video ID.
In step S5, the client 104 parses the video data transmitted by the camera 101 and, from t1, t2, t3, t4, t5 and t6, computes the arrival-time difference between the rendering elements and the video data; it then locates the video frame corresponding to each rendering element from that difference and renders and displays the result frame by frame on the display interface.
In this embodiment, the client 104 receives the video data and the rendering elements separately and composes the video image shown in the display interface by rendering frame by frame. The base image (the video data) comes directly from the camera 101, untouched by either the algorithm or the web server 103; on top of it, the image elements of the monitoring service (the rendering elements) are drawn from the structured data provided by the web server 103.
The frame-by-frame rendering performed by client 104 is non-precisely-aligned rendering. Ideally, a frame would be sent both to the algorithm server for analysis and to the user front end, and the analysis result would be aligned exactly with that frame via the computed time difference for precise rendering. However, in the video-stream formats commonly transmitted over networks today, individual frames carry no timestamps, so the time difference cannot be computed exactly. Because consecutive video frames are strongly correlated, a small timing error does not affect the visual result of front-end rendering, and the client 104 achieves a visually error-free rendering effect through an approximate calculation of the timing error.
Therefore, before rendering and displaying, the client 104 computes, from the delays of all links, the time difference between the two separately rendered sources.
Fig. 3 is a schematic diagram of the time difference in the embodiment of the present invention.
As shown in fig. 3, the time difference is calculated by:
Δt = t1 + t2 + t3 + t4 + t5 − t6
The parameters t1 through t6 in the formula are the timing data computed in steps S2 to S4. Here t1 is the average delay obtained from multiple tests when the algorithm server 102 initializes; t6 is the average delay obtained from multiple tests when the client 104 initializes; and t2 through t5 are continuously corrected during each execution of the analysis algorithm until they reach relatively stable values. Since the rendering elements are not precisely aligned with the frames (video data), average delays are used to stabilize the rendering effect so that random delay jitter does not disrupt the rendering.
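The time-difference formula is then a direct sum of the measured link times; the numeric values below are illustrative only, not measurements from the patent.

```python
def render_time_difference(t1, t2, t3, t4, t5, t6):
    """Delta-t = t1 + t2 + t3 + t4 + t5 - t6, all values in seconds:
    camera-to-algorithm-server delay, algorithm execution, result
    transmission, web-server processing, and element transmission,
    minus the camera-to-client delay."""
    return t1 + t2 + t3 + t4 + t5 - t6

# Illustrative values only (seconds):
dt = render_time_difference(0.040, 0.120, 0.010, 0.005, 0.015, 0.030)
```

For 25 frame/s video, a Δt of 0.16 s would correspond to a 4-frame offset in the client's buffer.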
Next, the client 104 parses the video data transmitted by the camera 101 and the rendering element transmitted by the web server 103.
Typically, the client 104 receives the video (RTSP stream data) first, and the rendering elements arrive with some delay. The client 104 parses the RTSP stream with OpenCV.js and arranges the frames into a frame buffer queue.
When a rendering element arrives at the client 104 from the web server 103, the client 104 converts the time difference into a frame-number difference; for example, for 25 frame/s video, the time difference is divided by 40 ms to obtain the frame-number difference. The corresponding frame is found in the frame buffer queue, and OpenCV.js draws a rectangle framing the pedestrian on the frame image and the pedestrian's name next to the rectangle.
Through the above processing, the client 104 completes the frame-by-frame rendering of the video data and the rendering elements and smoothly displays the rendered pedestrian-monitoring video in the display interface.
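The frame-matching step can be sketched as follows, assuming a deque-backed frame buffer indexed from the newest frame and 25 frame/s video; in the patent this logic runs in the browser via OpenCV.js, and the buffer semantics here are an illustrative guess.

```python
from collections import deque

def frame_offset(delta_t, fps=25):
    """Convert the time difference into a frame-number difference;
    at 25 frames/s each frame lasts 40 ms."""
    return round(delta_t * fps)

class FrameBufferQueue:
    """Frames parsed from the RTSP stream are appended in arrival order;
    a rendering element is matched to the frame `offset` positions
    behind the newest frame."""
    def __init__(self, maxlen=250):          # roughly 10 s of 25 fps video
        self.frames = deque(maxlen=maxlen)

    def push(self, frame):
        self.frames.append(frame)

    def match(self, delta_t, fps=25):
        offset = frame_offset(delta_t, fps)
        index = len(self.frames) - 1 - offset
        return self.frames[max(index, 0)]    # clamp: non-precise alignment
```

Clamping at the oldest buffered frame reflects the non-precisely-aligned design: a slightly wrong frame is acceptable because adjacent frames are strongly correlated.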
Effects of the Embodiment
According to the web-based separated front-end image rendering method of this embodiment, video data captured by the camera is analyzed by the algorithm server and processed by the web server into rendering elements; the client then obtains the rendering elements and the video data separately and renders them together frame by frame, so the web server and the algorithm server never have to encode or transmit the video itself, greatly reducing server load. Furthermore, the arrival delays and the transmission and processing times among the camera, algorithm server, web server, and client are measured, and the client computes the time difference between the video data and the rendering elements; during frame-by-frame rendering the two are matched using this time difference, preventing the rendering elements from being misaligned with the video data.
FIG. 4 is a schematic diagram comparing the method of the present invention with a conventional method.
As shown in fig. 4, in the conventional method images are rendered on the server and pushed to the client; the data volume of the rendered images, even compressed, is large, which stresses both the network and the server's processing capacity. If the rendered images are re-encoded into a video stream, not only is the computational burden of the server increased, but the delay of the monitoring picture also grows significantly.
Compared with the prior art, as shown in fig. 4, the present method only requires the web server and the algorithm server to transmit rendering elements (shapes, colors, text, and the like, whose data volume is small) and renders the images in real time at the client. This makes full use of the available computing links, eliminates the repeated encoding and decoding of video images for analysis and transmission, and avoids the pressure that large volumes of image data would place on the server and the network.
In this embodiment, 48 cameras are deployed and several algorithm servers perform intelligent video analysis; the analysis results can be collected and organized on the monitoring platform (web platform) without affecting the monitoring service. When clients need to watch the monitoring video, and especially when multiple users view the monitoring results of multiple video channels at their terminals, the load on the monitoring platform server remains unaffected.
In this embodiment, since the video data requested by the client is the video stream already encoded by the camera, the data volume is small and the delay is short. In the conventional method, images are rendered on the server and pushed to the client; even as compressed images, the data volume is large and stresses the network and the server's processing capacity, and re-encoding the rendered images into a video stream further increases both the server's computational burden and the delay of the monitoring picture. The present method performs no effect rendering on the server: the rendering elements are drawn on the frame images after the video stream is decoded at the client and before it is displayed. This makes full use of the available computing links, reduces the amount of computation, and improves efficiency.
In this embodiment, because the video data and the rendering elements are separated, the rendering elements must be drawn accurately on the base image. The rendering element data is therefore normalized through a standardization operation, so that regardless of whether the camera sends the main stream or the sub-stream, whether the resolution is large or small, and whether the image is stretched or scaled along the way, the drawing effect of the rendering elements on the picture is unaffected. This provides a freer means of displaying the monitoring platform's effect picture.
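The standardization operation described above can be sketched as dividing pixel coordinates and sizes by the analyzed image's dimensions, then rescaling them to whatever resolution the client actually displays. An illustrative sketch; the `Box` structure and function names are assumptions:

```typescript
interface Box { x: number; y: number; w: number; h: number }

// Normalize a pixel-space box against the analyzed image's width/height,
// making the rendering element resolution-independent.
function normalizeBox(b: Box, imgW: number, imgH: number): Box {
  return { x: b.x / imgW, y: b.y / imgH, w: b.w / imgW, h: b.h / imgH };
}

// Map a normalized box back to pixels at the client's display resolution.
function denormalizeBox(n: Box, dispW: number, dispH: number): Box {
  return { x: n.x * dispW, y: n.y * dispH, w: n.w * dispW, h: n.h * dispH };
}
```

For example, a box at (960, 540) of size 192 x 108 detected on a 1920 x 1080 main stream normalizes to (0.5, 0.5) with size (0.1, 0.1), and draws at (320, 180) with size 64 x 36 on a 640 x 360 sub-stream view, so the same rendering element fits either stream.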
The above-described embodiments are merely illustrative of specific embodiments of the present invention, and the present invention is not limited to the description of the above-described embodiments.
In the above embodiment, the communication network 105a is a local area network, and the communication networks 105b and 105c are Ethernet networks. In other embodiments of the present invention, the communication networks 105a, 105b, and 105c may instead use other communication methods, such as Ethernet, a local area network, or a wired connection.

Claims (4)

1. A web-based separate front-end image rendering method, wherein a client renders and combines, into a front-end image to be displayed by the client, rendering elements output by a web server and video data collected by a camera, the rendering elements being generated by an algorithm server from the video data and sent to the web server, the method comprising the following steps:
step S1, the algorithm server standardizes the rendering element before sending the rendering element;
step S2, the algorithm server sends a request to the camera and records the time interval between the request and the camera's response as the algorithm-side response interval, and an algorithm-side video data arrival delay t1 is calculated based on the algorithm-side response interval;
step S3, recording a timestamp for each step through which the video data is processed by the algorithm server and the web server and sent to the client, and calculating, based on the timestamps, an algorithm execution time t2 of the algorithm server, an algorithm-result transmission time t3 between the algorithm server and the web server, a server processing time t4 of the web server, and a rendering transmission time t5 between the web server and the client;
step S4, the client sends a request to the camera and records the time interval between the request and the camera's response as the client-side response interval, and calculates a client-side video data arrival delay t6 based on the client-side response interval;
step S5, the client parses the video data transmitted by the camera, calculates the time difference between the rendering elements and the video data according to t1, t2, t3, t4, t5, and t6, acquires the video frame corresponding to each rendering element based on the time difference, and performs frame-by-frame rendering on a display interface,
wherein the step S3 includes the following substeps:
step S3-1, recording the time stamp of the video data reaching the algorithm server, and recording the time stamp of the algorithm server when the algorithm is executed and the algorithm server sends the algorithm result, thereby calculating the algorithm execution time t2
step S3-2, recording the timestamp at which the web server receives the algorithm result, thereby calculating the algorithm-result transmission time t3;
step S3-3, recording the timestamp at which the web server sends the rendering elements to the client, thereby calculating the server processing time t4;
step S3-4, recording the timestamp at which the client receives the rendering elements, thereby calculating the rendering transmission time t5.
2. The web-based separate front-end image rendering method of claim 1, wherein:
wherein the step S1 includes the following substeps:
step S1-1, defining a corresponding data structure for each element to be drawn on the display interface;
step S1-2, normalizing the coordinate position of the element: the x coordinate is divided by the image width of the video data to give the normalized x coordinate, and the y coordinate is divided by the image height of the video data to give the normalized y coordinate;
step S1-3, normalizing the size of the element to obtain the rendering element, wherein during size normalization all width- and height-related values are expressed as proportional values relative to the image width and height of the video data.
3. The web-based separate front-end image rendering method of claim 1, wherein:
the time difference is calculated as: Δt = t1 + t2 + t3 + t4 + t5 - t6,
the request sent by the algorithm server to the camera and the request sent by the client to the camera are each issued multiple times, and the algorithm-side video data arrival delay t1 and the client-side video data arrival delay t6 are average delays obtained from these repeated requests,
the algorithm execution time t2The algorithm result transmission time t3The server processes the time t4And the rendering transmission time t5The corrected relatively stable value is calculated by the step S3 a plurality of times.
4. The web-based separate front-end image rendering method of claim 1, wherein:
the frame-by-frame rendering is non-precise alignment type rendering, and when the client acquires the video data of the frame corresponding to the rendering element based on the time difference, the client obtains the corresponding frame number difference through approximate accurate calculation according to the time difference and the frame number of the video data, so as to acquire the video data of the frame corresponding to the rendering element.
CN202010485289.6A 2020-06-01 2020-06-01 Web-based separate front-end image rendering method Active CN111629264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010485289.6A CN111629264B (en) 2020-06-01 2020-06-01 Web-based separate front-end image rendering method


Publications (2)

Publication Number Publication Date
CN111629264A (en) 2020-09-04
CN111629264B (en) 2021-07-27



Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930592A (en) * 2012-11-16 2013-02-13 李金地 Cloud computation rendering method based on uniform resource locator analysis
CN106375793A (en) * 2016-08-29 2017-02-01 东方网力科技股份有限公司 Superposition method and superposition system of video structured information, and user terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6697103B1 (en) * 1998-03-19 2004-02-24 Dennis Sunga Fernandez Integrated network for monitoring remote objects


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of Key Technologies for Remote Rendering Based on Video Streams; Rong Yubo; China Master's Theses Full-Text Database, Information Science and Technology; 2020-01-15; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant