CN112312145A - Access server, burst traffic caching method, system, computer device and readable storage medium - Google Patents


Info

Publication number: CN112312145A
Application number: CN201910702151.4A
Authority: CN (China)
Prior art keywords: service, service data, client, data, live broadcast
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN112312145B (en)
Inventor: 赵海林
Current Assignee: Shanghai Hode Information Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Shanghai Hode Information Technology Co Ltd
Application filed by: Shanghai Hode Information Technology Co Ltd
Priority: CN201910702151.4A
Publications: CN112312145A (application), CN112312145B (grant)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; operations thereof
    • H04N 21/21: Server components or server architectures
    • H04N 21/218: Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187: Live feed
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04N 21/23: Processing of content or additional data; elementary server operations; server middleware
    • H04N 21/231: Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23106: Content storage operation involving caching operations
    • H04N 21/239: Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N 21/2393: Interfacing the upstream path involving handling client requests
    • H04N 21/24: Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N 21/2408: Monitoring of the upstream path of the transmission network, e.g. client requests
    • H04N 21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices
    • H04N 21/254: Management at additional data server, e.g. shopping server, rights management server
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/50: Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses an access server, a burst traffic caching method and system, a computer device, and a readable storage medium, belonging to the technical field of live broadcast applications. By monitoring changes in the popularity data of a live broadcast room, the service layer can mark the service data associated with the room once the popularity data exceeds a preset popularity threshold, so that changes in the number of viewers in the room are monitored in real time. When a client sends a service request, the request is identified and the marked service data in the cache unit corresponding to the request can be sent directly to the client. This reduces the access pressure on the service layer, improves the efficiency of handling burst traffic, and preserves the user's live viewing experience.

Description

Access server, burst traffic caching method, system, computer device and readable storage medium
Technical Field
The invention relates to the technical field of live broadcast applications, and in particular to an access server, a burst traffic caching method and system, a computer device, and a readable storage medium.
Background
With the rapid development of the live broadcast industry, sudden events of all kinds in live broadcast rooms are increasing day by day. Because Internet traffic is unpredictable, instantaneous burst traffic is usually handled by rate limiting and service degradation. However, rate limiting and degradation can easily prevent some users from accessing the live broadcast room, harming their live viewing experience.
Disclosure of Invention
To address the existing problem of instantaneous burst traffic, an access server, a burst traffic caching method and system, a computer device, and a readable storage medium are provided that cache service data and avoid rate limiting or degradation.
A method for caching burst traffic comprises the following steps:
presetting a cache unit for storing marked service data;
monitoring the popularity data of the live broadcast room;
if the popularity data of the live broadcast room is larger than a preset popularity threshold value, marking the business data related to the address of the live broadcast room;
and identifying a service request sent by a client, and sending the marked service data corresponding to the service request in the cache unit to the client.
Preferably, the step of monitoring the popularity data of the live broadcast room comprises:
monitoring the popularity data of each live broadcast room accessed through the CDN node, acquiring the popularity data of each live broadcast room, and judging whether the popularity data of the live broadcast room is greater than a preset popularity threshold.
Preferably, if the popularity data of the live broadcast room is greater than a preset popularity threshold, the step of marking the service data associated with the address of the live broadcast room includes:
and if the popularity data of the live broadcast room is larger than a preset popularity threshold value, acquiring the address of the live broadcast room, and sending the address of the live broadcast room to each service server in a service layer, wherein the service server marks the service data related to the address of the live broadcast room.
Preferably, the step of identifying a service request sent by a client and sending the marked service data in the cache unit corresponding to the service request to the client includes:
acquiring the service request sent by the client, identifying whether the cache unit comprises service data corresponding to the service request, and if so, sending the service data corresponding to the service request to the client; if not, sending the service request to the service layer to acquire service data corresponding to the service request; and judging whether the service data carries a mark, if so, storing the service data in the cache unit and sending the service data to the client, and if not, sending the service data to the client.
Preferably, the service layer includes a service server selected from at least one of the following: user server, room server, prop server, wallet server and activity server.
The invention also provides a caching method of the access server, which comprises the following steps:
presetting a cache unit for storing marked service data;
and identifying a service request sent by a client, and sending the marked service data corresponding to the service request in a cache unit to the client.
Preferably, the step of identifying a service request sent by a client and sending service data of a tag corresponding to the service request in a cache unit to the client includes:
acquiring the service request sent by the client, identifying whether the cache unit comprises service data corresponding to the service request, and if so, sending the service data corresponding to the service request to the client; if not, sending the service request to the service layer to acquire service data corresponding to the service request; and judging whether the service data carries a mark, if so, storing the service data in the cache unit and sending the service data to the client, and if not, sending the service data to the client.
The invention also provides a buffer system of burst flow, comprising:
the service layer is used for storing service data;
the monitoring unit is used for monitoring the popularity data of the live broadcast room, and controlling the business layer to mark the business data related to the address of the live broadcast room when the popularity data of the live broadcast room is larger than a preset popularity threshold value;
the buffer unit is used for storing the marked service data;
the access layer is used for acquiring the service request sent by the client, identifying whether the cache unit comprises service data corresponding to the service request, and if so, sending the service data corresponding to the service request to the client; if not, sending the service request to the service layer to acquire service data corresponding to the service request;
the access layer is further used for judging whether the service data carries a mark, if so, the service data is stored in the cache unit and is sent to the client, and if not, the service data is sent to the client.
The invention also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method when executing the computer program.
The invention also provides a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
The beneficial effects of the above technical scheme are that:
in this technical scheme, by monitoring changes in the popularity data of the live broadcast room, the service layer can mark the service data associated with the room once the popularity data exceeds a preset popularity threshold, so that changes in the number of viewers in the room are monitored in real time. When the client sends a service request, the request is identified and the marked service data in the cache unit corresponding to the request can be sent directly to the client, reducing the access pressure on the service layer, improving the efficiency of handling burst traffic, and preserving the user's live viewing experience.
Drawings
FIG. 1 is a diagram illustrating an architecture of an embodiment of a burst traffic caching system according to the present invention;
fig. 2 is a flowchart of an embodiment of a burst traffic caching method according to the present invention;
fig. 3 is a flowchart of another embodiment of a burst traffic caching method according to the present invention;
fig. 4 is a flowchart of an embodiment of a caching method of an access server according to the present invention;
fig. 5 is a block diagram of an embodiment of a burst traffic caching system according to the present invention;
FIG. 6 is a block diagram of an embodiment of an access server according to the present invention;
FIG. 7 is a diagram of the hardware architecture of one embodiment of the computer apparatus of the present invention;
FIG. 8 is a block diagram illustrating an embodiment of a burst traffic caching system according to the present invention;
fig. 9 is a block diagram of a burst traffic caching system according to another embodiment of the present invention.
Detailed Description
The advantages of the invention are further illustrated in the following description of specific embodiments in conjunction with the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
In the description of the present invention, it should be understood that the numerical references before the steps do not identify the order of performing the steps, but merely serve to facilitate the description of the present invention and to distinguish each step, and thus should not be construed as limiting the present invention.
The video of the embodiments of the application may be presented on clients such as large-scale video playing devices, game consoles, desktop computers, smartphones, tablet computers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, e-book readers, and other display terminals.
The burst traffic caching method of the embodiments of the application can be applied to live scenarios, for example to interactive video between audiences and an anchor or video publisher, and to the playing of interactive games (for example, "black mirror" or "invisible guardian" in the industry). The embodiments of the application take live video as an example, but are not limited to it.
In the embodiments of the application, the access server checks whether its cache unit holds the service data corresponding to a service request sent by the client. If so, it sends the corresponding cached service data directly to the client to reduce the access pressure on the service layer. If not, it requests the marked service data corresponding to the service request from the service layer, stores the returned marked service data in the cache unit, and then sends it to the client, so that subsequent accesses to that service data can be served directly from the cache unit, relieving the access pressure on the service layer. Referring to fig. 1, fig. 1 illustrates the architecture of a burst traffic caching system according to an embodiment of the present disclosure. As shown in fig. 1, a user A accesses a live broadcast room and sends a service request to an access server W over a wireless network while sending a video request to a CDN node; the CDN node obtains the video data from the corresponding video server according to the received video request and feeds it back to user A.
A monitoring terminal G monitors in real time the popularity data of each live broadcast room accessed through the CDN node to obtain the current access state of each room; if the popularity data is greater than the preset popularity threshold, the monitoring terminal G has the service server Q mark the service data associated with that live broadcast room. The access server W checks whether its cache unit holds the service data corresponding to a service request; if so, it sends the cached service data directly to user A to reduce the access pressure on the service server Q; if not, it requests the marked service data corresponding to the request from the service server Q, stores the returned marked data in the cache unit, and then sends it to user A, so that subsequent accesses can be served directly from the cache unit. Only one access server W and one user A are shown here; the application scenario may also include multiple mutually independent access servers W and multiple users A. The access server W may be a cloud server or a local server. The device of user A is not limited to the illustrated mobile device; any intelligent terminal capable of uploading video is applicable.
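The architecture just described can be sketched in miniature. The following Python stand-ins are illustrative assumptions, not the patent's implementation: they show the dual request path, with video requests going to the CDN node (which also tracks viewers per room) and service requests going to access server W, which caches data the service layer has marked.

```python
class CDNNode:
    """Stand-in for a CDN node: serves video and counts viewers per room."""

    def __init__(self, video_store):
        self.video_store = video_store  # room address -> video data
        self.viewers = {}               # room address -> set of user ids

    def handle_video_request(self, room_addr, user):
        # Track viewers per pull address so a monitor can read popularity.
        self.viewers.setdefault(room_addr, set()).add(user)
        return self.video_store.get(room_addr)


class AccessServerW:
    """Stand-in for access server W: caches marked service data."""

    def __init__(self, service_server):
        self.cache = {}                 # request key -> marked service data
        self.service_server = service_server

    def handle_service_request(self, key):
        if key in self.cache:           # cache hit: service layer is skipped
            return self.cache[key]
        data, marked = self.service_server.fetch(key)
        if marked:                      # only hot-room (marked) data is cached
            self.cache[key] = data
        return data
```

Under burst traffic, repeated requests for a hot room's data hit `AccessServerW.cache` and never reach the service server Q, which is the pressure-relief effect the paragraph above describes.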
To overcome the rate limiting or degradation caused by instantaneous burst traffic, the invention provides a burst traffic caching method that avoids both. Referring to fig. 2, a flowchart of a burst traffic caching method according to a preferred embodiment of the present invention, the method mainly includes the following steps:
S01, presetting a cache unit for storing marked service data;
in practical application, the burst traffic caching method can be applied to an access server of an access stratum. In this step, a cache unit may be preset in the access server.
S02, monitoring the popularity data of the live broadcast room;
the popularity data refers to data which can represent the number of live viewers watched currently in the live broadcast room. Preferably, the popularity data is the current online number of people in the live broadcast room. The number of the online people can be obtained through the stream pulling address, namely the number of the users who download and watch the video through the live broadcast stream pulling address corresponding to the live broadcast room at present is used as a data source of the number of the online people.
Further, the step S02 of monitoring the popularity data of the live broadcast room includes:
monitoring the popularity data of each live broadcast room accessed through a CDN (Content Delivery Network) node, acquiring the popularity data of each live broadcast room, and judging whether the popularity data of the live broadcast room is greater than a preset popularity threshold.
In practical application, a monitoring unit may be used to monitor the popularity data of the live broadcast rooms. The monitoring unit may be embedded in the CDN node, or it may be deployed on a monitoring terminal; one monitoring terminal may monitor a single CDN node or several CDN nodes simultaneously.
The CDN node is mainly used for video distribution; while distributing video, it may also count the users downloading and watching video through the live pull address corresponding to each live broadcast room, for example the number of people online in each room.
In this embodiment, the monitoring unit monitors the CDN node to obtain the popularity data of each live broadcast room accessed through it, so as to check in real time whether the popularity data of each room reaches the preset popularity threshold; if so, step S03 is executed, and if not, execution returns to step S02.
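One polling pass of the monitoring unit over the S02/S03 loop might look like the following sketch; the dict of room popularity and the callback are illustrative assumptions rather than the patent's interfaces.

```python
def monitor_once(cdn_rooms, popularity_threshold, on_hot_room):
    """One polling pass over popularity data reported by a CDN node.

    cdn_rooms maps each live room address to its current popularity data;
    on_hot_room is invoked for every room above the threshold (step S03).
    Rooms at or below the threshold are simply re-checked on the next
    pass (step S02).
    """
    hot_rooms = [addr for addr, popularity in cdn_rooms.items()
                 if popularity > popularity_threshold]
    for addr in hot_rooms:
        on_hot_room(addr)
    return hot_rooms
```

In a deployment this pass would run on a timer or on push notifications from the CDN node; the loop structure is the same either way.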
S03, if the popularity data of the live broadcast room is larger than a preset popularity threshold value, marking the business data related to the address of the live broadcast room;
in practical application, when the popularity data of a live broadcast room is greater than the preset popularity threshold, the monitoring unit can broadcast the address of the room to each service server in the service layer through a message queue; after receiving the broadcast, a service server marks the service data associated with that address, so that whether a piece of service data belongs to a hot live broadcast room can later be identified from the mark. The preset popularity threshold can be set periodically or dynamically according to historical or live data on the number of people watching live broadcasts on the site; the threshold can also serve as the criterion for a hot live broadcast room, or be set directly from the figures of hot rooms, a hot room being a live broadcast room whose popularity is comparatively high.
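A minimal sketch of this broadcast-and-mark step follows, using Python's `queue.SimpleQueue` as a stand-in for the message queue; the per-room record layout and all names are hypothetical.

```python
import queue


class ServiceServer:
    """Illustrative service server holding records keyed by room address."""

    def __init__(self, data_by_room):
        self.data_by_room = data_by_room  # room address -> list of records
        self.marked = set()

    def on_address_broadcast(self, room_addr):
        # Mark every piece of service data associated with the hot room.
        for record in self.data_by_room.get(room_addr, []):
            self.marked.add(record)


def broadcast_hot_room(room_addr, service_servers):
    # Stand-in for the message queue: enqueue the address, then deliver
    # it to every service server in the service layer.
    mq = queue.SimpleQueue()
    mq.put(room_addr)
    addr = mq.get()
    for server in service_servers:
        server.on_address_broadcast(addr)
```

In production the queue would be a real broker with one consumer per service server; the essential point is that marking is driven by the room address alone, so each server decides locally which of its data is affected.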
It should be noted that: the service layer may be composed of a plurality of service servers, and the service servers in the service layer may be selected from at least one of the following: user server, room server, prop server, wallet server, activity server, and the like.
In this embodiment, the prop server can be used to distribute props (e.g., gift props, lottery props, rhythm storm props, full-screen animation props, etc.); the user server can be used to publish anchor user information; the wallet server may be used for purchasing and exchanging virtual currency; the activity server may be used to publish activity information; and the room server may be used to publish follow information, live broadcast room information, leaderboard information, point information, and the like.
In this embodiment, step S03 may specifically include:
and if the popularity data of the live broadcast room is larger than a preset popularity threshold value, acquiring the address of the live broadcast room, and sending the address of the live broadcast room to each service server in a service layer, wherein the service server marks the service data related to the address of the live broadcast room.
And S04, identifying a service request sent by a client, and sending the marked service data corresponding to the service request in the cache unit to the client.
In practical application, the client may send a service request to the access server while sending a video request to the CDN node. The CDN node obtains the video data from the corresponding video server according to the received video request and feeds it back to the client so that the client can watch the live video. When the access server receives a service request, it checks whether its cache unit stores the service data corresponding to the request; if so, it sends the cached service data directly to the client to reduce the access pressure on the service layer; if not, it requests the marked service data corresponding to the request from the service layer, stores the returned marked data in the cache unit, and then sends it to the client, so that subsequent accesses to that service data can be served directly from the cache unit, improving the efficiency of handling burst traffic.
As shown in fig. 3, further, step S04 may specifically include:
S041, acquiring the service request sent by the client, and identifying whether the cache unit comprises service data corresponding to the service request; if so, executing step S042; if not, executing step S043;
S042, sending the service data corresponding to the service request to the client;
S043, sending the service request to the service layer to obtain the service data corresponding to the service request, and executing step S044;
in practical application, the access server forwards the service request to the corresponding service server in the service layer according to the service server address carried in the request. That service server parses the request; if the request only involves service data held by the current server, it sends the corresponding data directly to the access server. If part of the service data corresponding to the request is not on this server, it extracts the corresponding data from other service servers, converges all the obtained service data, and sends the converged result to the access server.
S044, judging whether the service data carries a mark, if so, executing the step S045; if not, executing step S046;
S045, storing the service data in the cache unit and sending the service data to the client;
S046, sending the service data to the client.
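Steps S041 through S046 can be condensed into a single decision flow. The sketch below uses a plain dict for the cache unit and a caller-supplied fetch function standing in for the service layer, both illustrative assumptions.

```python
def handle_service_request(request, cache, fetch_from_service_layer):
    """Steps S041-S046 as one decision flow (names are illustrative).

    cache is a dict standing in for the access server's cache unit;
    fetch_from_service_layer returns (service_data, carries_mark).
    """
    if request in cache:                              # S041: cache hit
        return cache[request]                         # S042: send cached data
    data, marked = fetch_from_service_layer(request)  # S043: ask service layer
    if marked:                                        # S044: data carries mark
        cache[request] = data                         # S045: store, then send
    return data                                       # S046: send as-is
```

Note the asymmetry: only marked (hot-room) data ever enters the cache, so data for quiet rooms keeps flowing through the service layer unchanged.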
In this embodiment, the monitoring unit monitors changes in the popularity data of each live broadcast room; when the popularity data is greater than the preset popularity threshold (for example, when it reaches the capacity limit), the service layer marks the service data associated with the room, so that changes in the number of viewers in the room are monitored in real time. When the client sends a service request, the access server checks whether the cache unit holds the corresponding service data; if so, the cached data can be sent directly to the client to reduce the access pressure on the service layer; if not, the marked service data corresponding to the request can be requested from the service layer, stored in the cache unit, and then sent to the client, so that subsequent accesses can be served directly from the cache unit, relieving the access pressure on the service layer and preserving the user's live viewing experience.
The invention also provides a caching method of the access server, which comprises the following steps:
presetting a cache unit for storing marked service data;
and identifying a service request sent by a client, and sending the marked service data corresponding to the service request in the cache unit to the client.
By way of example and not limitation, the service data stored in the cache unit may be cleared periodically according to a preset clearing mechanism. Each type of service data can have its own clearing mechanism according to the service logic of that type, and the service data in the cache unit can also be cleared on a fixed period according to a clearing mechanism preset in the cache unit.
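One possible clearing mechanism, a per-type time-to-live with a periodic sweep, is sketched below. The patent does not prescribe TTLs or this layout, so the policy, default values, and names here are assumptions for illustration.

```python
import time


class ExpiringCache:
    """Hypothetical cache unit: each type of service data gets its own
    time-to-live, and stale entries are purged lazily or by a sweep."""

    def __init__(self, ttl_by_type, clock=time.monotonic):
        self.ttl_by_type = ttl_by_type  # service data type -> TTL in seconds
        self.clock = clock
        self._store = {}                # key -> (data_type, value, stored_at)

    def put(self, key, data_type, value):
        self._store[key] = (data_type, value, self.clock())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        data_type, value, stored_at = entry
        # Assumed default TTL of 60 s for types without an explicit rule.
        if self.clock() - stored_at > self.ttl_by_type.get(data_type, 60.0):
            del self._store[key]        # lazily clear the stale entry
            return None
        return value

    def sweep(self):
        # Periodic clearing pass over the whole cache unit.
        for key in list(self._store):
            self.get(key)
```

Injecting the clock makes the clearing mechanism testable without real waiting, which is why it is a constructor parameter here.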
In practical application, the client may send a service request to the access server while sending a video request to the CDN node. The CDN node obtains the video data from the corresponding video server according to the received video request and sends it to the client so that the client can watch the live video. When the access server receives a service request, it checks whether its cache unit stores the service data corresponding to the request; if so, it sends the cached service data directly to the client to reduce the access pressure on the service layer; if not, it requests the marked service data corresponding to the request from the service layer, stores the returned marked data in the cache unit, and then sends it to the client, so that subsequent accesses to that service data can be served directly from the cache unit, improving the efficiency of handling burst traffic.
As shown in fig. 4, further, the above steps may include:
S11, acquiring the service request sent by the client, and identifying whether the cache unit includes service data corresponding to the service request; if so, executing step S12; if not, executing step S13;
S12, sending the service data corresponding to the service request to the client;
S13, sending the service request to a service layer to acquire service data corresponding to the service request, and executing step S14;
In practical application, the access server sends the service request to the corresponding service server in the service layer according to the service server address carried in the request. The service server parses the request; if the request involves only service data held by that server, the corresponding service data is sent directly back to the access server. If part of the requested service data resides on other service servers, the corresponding data is extracted from those servers, all of the obtained service data is aggregated, and the aggregated data is sent to the access server.
S14, judging whether the service data carries a mark; if so, executing step S15; if not, executing step S16;
S15, storing the service data in the cache unit and sending the service data to the client;
S16, sending the service data to the client.
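The steps S11 to S16 above can be sketched as a single lookup function. The dict cache and the callable service layer are illustrative stand-ins for the real components.

```python
def serve_request(cache, service_layer, request):
    # S11: check the cache first.
    if request in cache:
        return cache[request]          # S12: cache hit, answer directly
    # S13: forward the request to the service layer.
    data, is_marked = service_layer(request)
    # S14/S15: marked (hot-room) data is cached before being returned.
    if is_marked:
        cache[request] = data
    # S16: unmarked data is returned without caching.
    return data
```

A second request for the same marked data is then served from the cache without touching the service layer, which is the stated purpose of the scheme.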
In this embodiment, when the client sends a service request, the access server queries whether service data corresponding to the request exists in the cache unit; if so, the cached service data is sent directly to the client to reduce the access pressure on the service layer; if not, the marked service data corresponding to the request is fetched from the service layer, stored in the cache unit, and then sent to the client, so that subsequent accesses can be served directly from the cache unit. This relieves the access pressure on the service layer and preserves the user's live-viewing experience.
As shown in fig. 5, the present invention further provides a burst traffic caching system 2, including: a cache unit 24, a monitoring unit 23, an access layer 21 and a service layer 22; wherein:
a service layer 22 for storing service data;
a cache unit 24, configured to store the marked service data;
in this embodiment, the cache unit 24 may be embedded in the access stratum 21.
The monitoring unit 23 is configured to monitor popularity data of a live broadcast room, and when the popularity data of the live broadcast room is greater than a preset popularity threshold, control the service layer 22 to mark service data associated with an address of the live broadcast room;
the popularity data refers to data which can represent the number of live viewers watched currently in the live broadcast room. Preferably, the popularity data is the current online number of people in the live broadcast room. The number of the online people can be obtained through the stream pulling address, namely the number of the users who download and watch the video through the live broadcast stream pulling address corresponding to the live broadcast room at present is used as a data source of the number of the online people.
By way of example and not limitation, the monitoring unit 23 may be deployed in a monitoring server.
In this embodiment, the monitoring unit 23 obtains the popularity data of each live broadcast room by monitoring the rooms accessed through each CDN node 4, and determines whether a room's popularity data exceeds the preset popularity threshold, thereby monitoring in real time whether each room has reached the threshold.
It should be noted that one monitoring unit 23 may correspond to one CDN node 4, or to a plurality of CDN nodes 4. The CDN node 4 is mainly used for video distribution; while distributing video, it can also count the number of users downloading and watching video through the live stream-pulling address corresponding to each live broadcast room, for example, the number of users online in the room.
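The monitoring unit's threshold check can be sketched as follows, assuming the CDN nodes expose per-pull-address viewer counts. The threshold value, function name, and data shapes are all assumptions.

```python
# Assumed threshold; the patent leaves the concrete value unspecified.
POPULARITY_THRESHOLD = 10000

def find_hot_rooms(viewers_by_pull_address, threshold=POPULARITY_THRESHOLD):
    # viewers_by_pull_address: stream-pulling address -> current online count,
    # as counted by the CDN nodes while distributing video.
    hot = []
    for pull_address, online_count in viewers_by_pull_address.items():
        if online_count > threshold:
            hot.append(pull_address)
    return hot
```

Rooms returned by this check are the ones whose associated service data the service layer would then mark.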
In practical application, when the popularity data of a live broadcast room exceeds the preset popularity threshold, the monitoring unit 23 may broadcast the address of the room to each service server 221 in the service layer 22 through a message queue; after receiving the broadcast, the service server 221 marks the service data associated with that address, so that the mark identifies whether a piece of service data belongs to a hot live broadcast room.
When the popularity data of the live broadcast room is greater than a preset popularity threshold value, the monitoring unit 23 acquires the address of the live broadcast room and sends the address of the live broadcast room to each service server 221 in the service layer 22, so that the service server 221 marks the service data associated with the address of the live broadcast room.
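The broadcast-and-mark step described above can be sketched with an in-process queue standing in for the message-queue middleware; all names and data shapes are illustrative assumptions.

```python
import queue

def broadcast_hot_room(room_address, server_queues):
    # The monitoring unit pushes the hot room's address to every
    # service server's queue.
    for q in server_queues:
        q.put(room_address)

def mark_associated_data(server_queue, service_data):
    # Each service server drains its queue and marks the service data
    # associated with the broadcast address.
    # service_data: room_address -> {"payload": ..., "marked": bool}
    while not server_queue.empty():
        room_address = server_queue.get()
        if room_address in service_data:
            service_data[room_address]["marked"] = True
```

In a real deployment the queues would be a message-queue middleware shared across processes; the marking logic per server stays the same.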
By way of example and not limitation, the service layer 22 includes at least one service server 221 selected from: a user server, a room server, a prop server, a wallet server, an activity server, and the like.
In this embodiment, the prop server can be used to distribute props (e.g., gift props, lottery props, rhythm-storm props, full-screen animation props, etc.); the user server can be used to publish anchor-user information; the wallet server can be used for the purchase and exchange of virtual currency; the activity server can be used to publish activity information; and the room server can be used to publish attention information, live broadcast room information, leaderboard information, point information, and the like.
The access layer 21 is configured to acquire the service request sent by the client 3, identify whether the cache unit 24 includes service data corresponding to the service request, and send the service data corresponding to the service request to the client 3 if the cache unit includes the service data corresponding to the service request; if not, sending the service request to the service layer 22 to obtain service data corresponding to the service request;
the access layer 21 is further configured to determine whether the service data carries a flag, store the service data in the cache unit 24 and send the service data to the client 3 if the service data carries the flag, and send the service data to the client 3 if the service data does not carry the flag.
In this embodiment, the access layer 21 may be an access server, and the service layer 22 may be composed of a plurality of service servers. The cache unit may be a cache server, or a storage space inside the access layer (e.g., inside an access server). The burst traffic caching system provided by this embodiment may include a plurality of access servers, each with its own cache unit, and each access server is communicatively connected to the service layer.
In practical application, the client may send a service request to the access server and a video request to the CDN node simultaneously, or send only a service request to the access server. When both are sent, the CDN node obtains video data from the corresponding video server according to the received video request and feeds it back to the client so that the client can watch the live video. When the access server receives a service request, it queries whether its cache unit stores service data corresponding to the request; if so, the cached service data is sent directly to the client to reduce the access pressure on the service layer; if not, the marked service data corresponding to the request is fetched from the service layer, stored in the cache unit, and then sent to the client, so that subsequent accesses can be served directly from the cache unit, improving the capacity to handle burst traffic.
In this embodiment, the monitoring unit 23 monitors changes in the popularity data of each live broadcast room, and when the popularity data exceeds the preset popularity threshold, the service layer 22 marks the service data associated with that room, thereby monitoring changes in the audience size of each room in real time. When the client 3 sends a service request to the access layer 21, the access layer 21 queries whether the cache unit 24 holds service data corresponding to the request; if so, the cached data is sent directly to the client 3, reducing the access pressure on the service layer 22. If not, the access layer 21 requests the marked service data corresponding to the request from the service layer 22, stores the obtained marked data in the cache unit, and then sends it to the client 3, so that subsequent accesses can be served directly from the cache unit. This relieves the access pressure on the service layer 22 and preserves the user's live-viewing experience.
The invention also provides an access server which comprises a cache unit used for storing the marked service data.
The access server acquires a service request, sent by a client, that corresponds to marked service data, and either sends the marked service data corresponding to the request from its cache unit to the client, or requests the marked service data corresponding to the request from a service layer, stores the acquired marked service data in the cache unit, and sends it to the client.
In practical application, the client may send a service request to the access server and a video request to the CDN node simultaneously, or send only a service request to the access server. When both are sent, the CDN node obtains video data from the corresponding video server according to the received video request and sends it to the client so that the client can watch the live video. When the access server receives a service request, it queries whether its cache unit stores service data corresponding to the request; if so, the cached service data is sent directly to the client to reduce the access pressure on the service layer; if not, the marked service data corresponding to the request is fetched from the service layer, stored in the cache unit, and then sent to the client, so that subsequent accesses can be served directly from the cache unit, improving the capacity to handle burst traffic.
As shown in fig. 6, the access server 21 includes: a cache unit 24, a receiving unit 213, an identifying unit 215, a sending unit 211, a communication unit 214, and a processing unit 212, wherein:
a cache unit 24, configured to store the marked service data;
a receiving unit 213, configured to receive a service request sent by the client 3;
an identifying unit 215, configured to identify whether the caching unit 24 includes service data corresponding to the service request;
a sending unit 211, configured to send service data to the client 3;
a communication unit 214, configured to send the service request to the service layer 22, so as to obtain service data corresponding to the service request;
In practical application, the access server sends the service request to the corresponding service server in the service layer according to the service server address carried in the request. The service server parses the request; if the request involves only service data held by that server, the corresponding service data is sent directly back to the access server. If part of the requested service data resides on other service servers, the corresponding data is extracted from those servers, all of the obtained service data is aggregated, and the aggregated data is sent to the access server.
The processing unit 212 is configured to determine whether the service data carries a flag, store the service data in the cache unit if the service data carries the flag, and send the service data to the client 3 through the sending unit 211; if not, the service data is sent to the client 3 through the sending unit 211;
In this embodiment, when the client 3 sends a service request, the access server queries whether the cache unit holds service data corresponding to the request; if so, the cached data is sent directly to the client 3 to reduce the access pressure on the service layer 22. If not, the marked service data corresponding to the request is requested from the service layer 22, stored in the cache unit, and then sent to the client 3, so that subsequent accesses can be served directly from the cache unit, relieving the access pressure on the service layer 22 and preserving the user's live-viewing experience.
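The aggregation behaviour of the service layer described above — a service server answering from its own data and pulling missing fields from sibling servers before replying to the access server — can be sketched as follows. Server and field names are illustrative assumptions.

```python
def handle_service_request(local_server, sibling_servers, requested_fields):
    # local_server / sibling_servers: dicts standing in for each service
    # server's own data store.
    response = {}
    missing = []
    for field in requested_fields:
        if field in local_server:
            response[field] = local_server[field]  # answer locally where possible
        else:
            missing.append(field)
    # Extract the remaining fields from the other service servers and merge.
    for field in missing:
        for sibling in sibling_servers:
            if field in sibling:
                response[field] = sibling[field]
                break
    return response  # aggregated data returned to the access server
```

This keeps the access server's view simple: it addresses one service server and receives one merged response.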
As shown in fig. 7, the present invention further provides a computer device 5, the computer device 5 comprising:
a memory 51 for storing executable program code; and
a processor 52 for calling the executable program code in the memory 51, the executed steps including the caching method of the access server described above.
One processor 52 is illustrated in fig. 7.
The memory 51, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the caching method of the access server in the embodiments of the present application (for example, the cache unit 24, the monitoring unit 23, the access layer 21, and the service layer 22 shown in fig. 5). By running the non-volatile software programs, instructions, and modules stored in the memory 51, the processor 52 executes the various functional applications and data processing of the computer device 5, that is, implements the caching method of the access server in the above method embodiments.
The memory 51 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store playback information of the user on the computer device 5. Further, the memory 51 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 51 may optionally include memory located remotely from the processor 52; such remote memories may be connected to the burst traffic caching system 2 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 51 and, when executed by the one or more processors 52, perform the caching method of the access server in any of the above method embodiments, for example, the method steps S01 to S04 in fig. 2 and S041 to S046 in fig. 3 described above, and implement the functions of the cache unit 24, the monitoring unit 23, the access layer 21, and the service layer 22 shown in fig. 5.
The above product can execute the method provided by the embodiments of the present application, and possesses the functional modules and beneficial effects corresponding to that method. For technical details not described in detail in this embodiment, reference may be made to the methods provided in the embodiments of the present application.
The computer device 5 of the embodiments of the present application exists in a variety of forms, including but not limited to:
(1) A mobile communication device: such devices are characterized by mobile communication capabilities and are primarily aimed at providing voice and data communications. Such terminals include: smart phones (e.g., iphones), multimedia phones, feature phones, and low-end phones, among others.
(2) Ultra mobile personal computer device: the equipment belongs to the category of personal computers, has calculation and processing functions and generally has the characteristic of mobile internet access. Such terminals include: PDA, MID, and UMPC devices, etc., such as ipads.
(3) A portable entertainment device: such devices can display and play multimedia content. This type of device comprises: audio, video players (e.g., ipods), handheld game consoles, electronic books, and smart toys and portable car navigation devices.
(4) A server: a device that provides computing services. A server comprises a processor, a hard disk, memory, a system bus, and the like; its architecture is similar to that of a general-purpose computer, but it has higher requirements on processing capability, stability, reliability, security, scalability, and manageability because it must provide highly reliable services.
(5) And other electronic devices with data interaction functions.
Embodiments of the present application provide a non-transitory computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, which are executed by one or more processors, such as one processor 52 in fig. 7, so that the one or more processors 52 may perform the caching method of the access server in any of the above method embodiments, for example, perform the above-described method steps S01 to S04 in fig. 2, and the method steps S041 to S046 in fig. 3, and implement the functions of the caching unit 24, the monitoring unit 23, the access layer 21, and the service layer 22 shown in fig. 5.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on at least two network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly also by hardware. Those skilled in the art will understand that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
Embodiment 1
Referring to fig. 8, a user A sends a prop service request to an access server W1 through a wireless network, and a user B sends a wallet service request to an access server W2 through a wired network. The access server W1, according to the prop service request sent by user A, queries whether its cache unit holds service data corresponding to the request; since the cache unit holds such data, it sends the corresponding prop service data in the cache unit directly to user A. The access server W2, according to the wallet service request sent by user B, queries whether its cache unit holds service data corresponding to the request; since the cache unit holds no such data, the access server W2 requests the service data corresponding to the wallet service request from the service layer P. The service layer P queries the corresponding service server according to the wallet service request to obtain the wallet service data and sends it to the access server W2. The access server W2 examines the received wallet service data and determines whether it carries a mark: if marked, the wallet service data is stored in the cache unit and sent to user B; if not marked, it is sent directly to user B.
Embodiment 2
Referring to fig. 9, a user A sends a prop service request to an access server W1 through a wireless network, and a user B sends a wallet service request to the same access server W1 through a wired network. The access server W1, according to the prop service request sent by user A, queries whether its cache unit holds service data corresponding to the request; since the cache unit holds such data, it sends the corresponding prop service data in the cache unit directly to user A. The access server W1, according to the wallet service request sent by user B, queries whether its cache unit holds service data corresponding to the request; since the cache unit holds no such data, the access server W1 requests the service data corresponding to the wallet service request from the service layer P. The service layer P queries the corresponding service server according to the wallet service request to obtain the wallet service data and sends it to the access server W1. The access server W1 examines the received wallet service data; since the wallet service data is marked, the access server W1 stores it in the cache unit and sends it to user B.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. A method for buffering burst traffic is characterized by comprising the following steps:
presetting a cache unit for storing marked service data;
monitoring the popularity data of the live broadcast room;
if the popularity data of the live broadcast room is larger than a preset popularity threshold value, marking the business data related to the address of the live broadcast room;
and identifying a service request sent by a client, and sending the marked service data corresponding to the service request in the cache unit to the client.
2. The burst traffic buffering method according to claim 1, wherein the step of monitoring the popularity data of the live broadcast room comprises:
and monitoring the popularity data of each live broadcast room accessed by the CDN node, and judging whether the popularity data of the live broadcast room is greater than a preset popularity threshold value.
3. The burst traffic caching method according to claim 1, wherein if the popularity data of the live broadcast room is greater than a preset popularity threshold, the step of marking the service data associated with the address of the live broadcast room comprises:
and if the popularity data of the live broadcast room is larger than a preset popularity threshold value, acquiring the address of the live broadcast room, and sending the address of the live broadcast room to each service server in a service layer, wherein the service server marks the service data related to the address of the live broadcast room.
4. The burst traffic caching method according to claim 3, wherein the step of identifying a service request sent by a client and sending the marked service data corresponding to the service request in the caching unit to the client comprises:
acquiring the service request sent by the client, identifying whether the cache unit comprises service data corresponding to the service request, and if so, sending the service data corresponding to the service request to the client; if not, sending the service request to the service layer to acquire service data corresponding to the service request; and judging whether the service data carries a mark, if so, storing the service data in the cache unit and sending the service data to the client, and if not, sending the service data to the client.
5. The method for buffering burst traffic as claimed in claim 3 or 4, wherein the service layer comprises a service server selected from at least one of the following: user server, room server, prop server, wallet server and activity server.
6. A caching method for an access server is characterized by comprising the following steps:
presetting a cache unit for storing marked service data;
and identifying a service request sent by a client, and sending the marked service data corresponding to the service request in the cache unit to the client.
7. The caching method for the access server according to claim 6, wherein the step of identifying a service request sent by a client and sending the marked service data corresponding to the service request in the caching unit to the client comprises:
acquiring the service request sent by the client, identifying whether the cache unit comprises service data corresponding to the service request, and if so, sending the service data corresponding to the service request to the client; if not, sending the service request to a service layer to acquire service data corresponding to the service request; and judging whether the service data carries a mark, if so, storing the service data in the cache unit and sending the service data to the client, and if not, sending the service data to the client.
8. A system for buffering burst traffic, comprising:
the service layer is used for storing service data;
the monitoring unit is used for monitoring the popularity data of the live broadcast room, and controlling the business layer to mark the business data related to the address of the live broadcast room when the popularity data of the live broadcast room is larger than a preset popularity threshold value;
the buffer unit is used for storing the marked service data;
the access layer is used for acquiring the service request sent by the client, identifying whether the cache unit comprises service data corresponding to the service request or not, and if so, sending the service data corresponding to the service request to the client; if not, sending the service request to the service layer to acquire service data corresponding to the service request;
the access layer is further used for judging whether the service data carries a mark, if so, the service data is stored in the cache unit and is sent to the client, and if not, the service data is sent to the client.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 5.
CN201910702151.4A 2019-07-31 2019-07-31 Access server, burst traffic caching method, system, computer device and readable storage medium Active CN112312145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910702151.4A CN112312145B (en) 2019-07-31 2019-07-31 Access server, burst traffic caching method, system, computer device and readable storage medium

Publications (2)

Publication Number Publication Date
CN112312145A true CN112312145A (en) 2021-02-02
CN112312145B CN112312145B (en) 2023-04-18


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114003179A (en) * 2021-11-09 2022-02-01 中国建设银行股份有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN115314718A (en) * 2021-05-07 2022-11-08 北京字节跳动网络技术有限公司 Live broadcast data processing method, device, equipment and medium
CN115379013A (en) * 2022-06-29 2022-11-22 广州博冠信息科技有限公司 Data processing method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150222681A1 (en) * 2014-01-31 2015-08-06 Fastly, Inc. Caching and streaming of digital media content subsets
CN105187521A (en) * 2015-08-25 2015-12-23 努比亚技术有限公司 Service processing device and method
CN106937136A (en) * 2017-03-29 2017-07-07 武汉斗鱼网络科技有限公司 Data delay method and system based on statistical information between network direct broadcasting
CN107249140A (en) * 2017-07-12 2017-10-13 北京潘达互娱科技有限公司 List information acquisition method and its device
CN109815716A (en) * 2019-01-08 2019-05-28 平安科技(深圳)有限公司 Access request processing method, device, storage medium and server

Also Published As

Publication number Publication date
CN112312145B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN112311684B (en) Burst traffic processing method, computer device and readable storage medium
CN110536146B (en) Live broadcast method and device based on cloud game and storage medium
US11962858B2 (en) Video playback method, video playback terminal, and non-volatile computer-readable storage medium
CN112312145B (en) Access server, burst traffic caching method, system, computer device and readable storage medium
EP3203748B1 (en) Cloud streaming service system, cloud streaming service method using optimal gpu, and apparatus for same
US20170171585A1 (en) Method and Electronic Device for Recording Live Streaming Media
CN111770355B (en) Media server determination method, device, server and storage medium
CN110381346B (en) Advertisement display method and equipment
WO2022111027A1 (en) Video acquisition method, electronic device, and storage medium
CN104144351A (en) Video playing method and device applying virtualization platform
CN104219286A (en) Method and device for processing stream media, client, CDN (content delivery network) node server and terminal
US11540028B2 (en) Information presenting method, terminal device, server and system
US20230285854A1 (en) Live video-based interaction method and apparatus, device and storage medium
CN112019905A (en) Live broadcast playback method, computer equipment and readable storage medium
CN111541555A (en) Group chat optimization method and related product
CN113286157A (en) Video playing method and device, electronic equipment and storage medium
CN112492347A (en) Method for processing information flow and displaying bullet screen information and information flow processing system
CN112492324A (en) Data processing method and system
CN110248211B (en) Live broadcast room message current limiting method and device, electronic equipment and storage medium
US20170171339A1 Advertisement data transmission method, electronic device and system
US20170155739A1 (en) Advertisement data processing method and router
US20170155727A1 (en) Method and electronic device for information pushing in smart television
CN110837573B (en) Distributed audio file storage and reading method and system
CN111954041A (en) Video loading method, computer equipment and readable storage medium
CN114025184A (en) Video live broadcast method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant